I'm running code within a sandbox that disallows pretty much everything (reflection, classloading, etc.). I can still run Rhino JavaScript, since it (for the most part) doesn't do any of these things. However, everything I could find about JRuby/Jython points towards their JIT or AOT compilers.
Do these projects have a dumb "I'll interpret the AST as I go along" mode? I'd be happy to take the order-of-magnitude performance hit (it's nothing intensive) to let it run within the restricted environment.
Taking a look at the JRuby GitHub wiki page, we have
# Set compilation mode. JIT = at runtime; FORCE = before execution.
# Options: [JIT, FORCE, OFF, OFFIR], Default: JIT.
compile.mode=JIT
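Given the OFF option listed above, one way to keep JRuby purely interpreted when embedding it from Java is to set the corresponding system property before the engine is created. A minimal sketch, assuming the JRuby jar is on the classpath and registers a JSR-223 engine under the name "jruby"; the exact property name (jruby.compile.mode) is an assumption worth verifying against the wiki page:

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class InterpretedRubyDemo {
    public static void main(String[] args) throws ScriptException {
        // Assumption: the compile.mode setting above maps to the
        // "jruby.compile.mode" system property; OFF disables compilation.
        System.setProperty("jruby.compile.mode", "OFF");

        // "jruby" is the engine name JRuby registers via JSR-223.
        ScriptEngine ruby = new ScriptEngineManager().getEngineByName("jruby");
        ruby.eval("puts 'running with the compiler switched off'");
    }
}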
Jython does not have a pure interpreted mode at this point. There is the beginning of one based on Python bytecodes (which could run purely in memory and avoid all Java bytecode issues), but it is not yet usable.
Related
I recently started reading about Java compilers. So far my understanding is that optimization comes from techniques like tiered compilation or code profiling. Now I read that Java 9 (and, on Windows, Java 10) provides the option of AOT compilation. So I wonder: what use case would justify the use of AOT compilation?
Better startup performance, for example in a simple desktop app: it would be annoying for the user to wait for it to load, and it would still be pretty slow until the JIT kicks in. With AOT you can provide already-optimized code; it might not be as good as JIT-compiled code, but it will be much faster at startup.
Also, some applications run for only a few seconds or even less, so the JIT never gets a chance to kick in, for example a simple command-line app that just sends a single request and exits. Each function will probably be executed only once, so there is no reason to use a JIT at all.
AOT compilation might also help decrease binary size or allow the creation of a very simple and small standalone binary. The same goes for memory usage, since the JIT needs some memory to work.
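For a rough idea of what the JDK's experimental AOT route looked like in practice, here is a minimal sketch; jaotc shipped with JDK 9 (and later also on Windows) and has since been removed from recent JDKs, and the exact commands below are illustrative rather than definitive:

// Illustrative AOT workflow (the commands are a sketch, not a guarantee):
//   javac Hello.java
//   jaotc --output libHello.so Hello.class
//   java -XX:AOTLibrary=./libHello.so Hello
public class Hello {
    public static void main(String[] args) {
        // With the AOT library loaded, main() starts as native code,
        // so there is no JIT warm-up phase for this tiny program.
        System.out.println("Hello from AOT-compiled code");
    }
}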
I always come across articles which claim that Java is interpreted. I know that Oracle's HotSpot JRE provides just-in-time compilation, but is this the case for the majority of desktop users? For example, if I download Java via http://www.java.com/en/download, will it include a JIT compiler?
Yes, absolutely. Articles claiming Java is interpreted are typically written by people who either don't understand how Java works or don't understand what interpreted means.
Having said that, HotSpot will interpret code sometimes - and that's a good thing. There are definitely portions of any application (around startup, usually) which are only executed once. If you can interpret that faster than you can JIT compile it, why bother with the overhead? On the other hand, my experience of "Java is interpreted" articles is that this isn't what they mean :)
EDIT: To take T. J. Crowder's point in: yes, the JVM downloaded from java.com will be HotSpot. There are two different JITs for HotSpot, however - server and desktop. To sum up the differences in a single sentence, the desktop JIT is designed to start apps quickly, whereas the server JIT is more focused on high performance over time: server apps typically run for a very long time, so time spent optimising them really heavily pays off in the long run.
There is nothing in the JVM specification that mandates any particular execution strategy. Some JVMs only interpret; they don't even have a compiler. Some JVMs only JIT compile; they don't even have an interpreter. Some JVMs have both an interpreter and a compiler (or even multiple compilers) and statically choose between the two on startup. Some have both and dynamically switch back and forth during runtime. Some aren't even virtual machines in the usual sense of the word at all; they just statically compile JVM bytecode into native machine code ahead of time.
The particular JVM that you are asking about, Oracle's HotSpot JVM, has one interpreter and two compilers, called the C1 and C2 compiler, also colloquially known as the client and server compilers, after their corresponding commandline options. HotSpot dynamically switches back and forth between the interpreter and one of the compilers at runtime (but it will not switch between the two compilers, you have to specify one of them on the commandline and then only that one will be used for the entire runtime of the JVM).
As per the documentation: "Starting with some of the later Java SE 7 releases, a new feature called tiered compilation became available. This feature uses the C1 compiler mode at the start to provide better startup performance. Once the application is properly warmed up, the C2 compiler mode takes over to provide more-aggressive optimizations and, usually, better performance."
The C1 compiler is an optimizing compiler which is pretty fast and doesn't use a lot of memory. The C2 compiler is much more aggressively optimizing, but is also slower and uses more memory.
You select between the two by specifying the -client or -server commandline option (-client is the default if you don't specify one), which also sets a couple of other JVM parameters like the default JIT threshold (in -client mode, methods will be compiled after they have been interpreted 1500 times, in -server mode after 10000 times; this can be changed with the -XX:CompileThreshold commandline argument).
Whether or not "the majority of desktop users" actually will run in compiled or interpreted mode depends largely on what code they are running. My guess is that the vast majority of desktop users run the HotSpot JVM from Oracle's JRE/JDK or one of its forks (e.g. SoyLatte on OSX, IcedTea or OpenJDK on Unix/BSD/Linux) and they don't fiddle with the commandline options, so they will probably get the C1 compiler with the default 1500 JIT threshold. (But applications such as IntelliJ, Eclipse or NetBeans have their own launcher scripts that usually supply different commandline arguments.)
In my case, for example, I often run small scripts which never actually reach the JIT threshold, so they are never compiled. (Nor should they be.)
Some of these links about the Hotspot JVM (what you are downloading in the java.com download link above) might help:
Java SE HotSpot at a Glance
The Java HotSpot Performance Engine Architecture
Frequently Asked Questions About the Java HotSpot VM
Neither of the (otherwise-excellent) answers so far seems to have actually answered your last question, so: Yes, the Java runtime you downloaded from www.java.com is Oracle's (Sun's) Hotspot JVM, and so yes, it will do JIT compilation. HotSpot isn't just for servers or anything like that, it runs on desktops and takes full advantage of its (very mature) optimizing JIT compiler.
The JVM spec never dictates how to execute the Java bytecode; however, you can specify a JIT compiler if you use the HotSpot VM. JIT is just a technique to optimize bytecode execution.
How come code written in Java needs to be compiled into bytecode that is interpreted by the JVM, but code written in a language like JavaScript does not need to be compiled and can run directly in a browser?
Is there an easy way to understand this?
What is the fundamental difference between the way these two languages are written, that may help to understand this behavior?
I am not a CS student, so please excuse the naivete of the question.
Historically, JavaScript was an interpreted language, which means an interpreter accepts the source code and executes it all in one step. The advantage here is simplicity and flexibility, but interpreters are very slow. Compilers convert the high-level language into a lower-level language that either the native processor or a VM (in this case, the Java VM) can execute directly. This is much faster.
JavaScript in modern browsers is now compiled on the fly. So when the script is loaded, the first thing the JavaScript engine does is compile it into a bytecode and then execute it. The reason the entire compilation step is missing from the end user's perspective is because browser developers have (thankfully) maintained the requirement that JavaScript is not explicitly compiled.
Java was, from the get-go, a language that always had an explicit compile step. But in many cases that's not true anymore: IDEs like IntelliJ or Eclipse can compile Java on the fly and in many cases remove the explicit compilation step.
JavaScript and Java are not the same thing. They might share a similar name, but I refer you to the JS guru - Douglas Crockford to help clear up the fact that they are really not related at all.
The reality is that there is nothing stopping Java being an interpreted language, and there's equally nothing stopping JavaScript being a compiled language (Chrome's javascript engine does do compilation to improve speed, and does a very good job of it).
In the context of the browser, Java runs in the same way as Flash or Silverlight - a plugin is required and the browser acts as a host to that plugin; which hosts a Java runtime environment.
Javascript was designed to be a scripting language for the browser, and that's why the browser can understand it natively. How the browser actually achieves the running of that code, however, is entirely up to the browser. That is - it can operate purely at a script level, assuming zero-knowledge of the next line of code and running a purely software-based stack; or it can perform some JIT to get the code closer to the hardware and (hopefully) improve speed.
Any language can be compiled and interpreted. In both cases, a piece of software has to read the source code, split it up, parse it, etc. to check certain requirements and then assign a meaning to every part of the program. The only difference is that the compiler then proceeds to generate code with (almost) the same meaning in another language (JVM bytecode, or JavaScript, or machine code, or something entirely else) while the interpreter carries out the meaning of the program immediately.
Now, in practice it's both simpler and more complicated. It's simpler in that many languages lend themselves better to one of the two: Java is statically typed and there is relatively little that is dynamic about the meaning of a program, so you can compile it and thus do some work which would otherwise need to be done at runtime. JavaScript is dynamically typed and you can't decide a lot of things (such as whether + is addition or concatenation) until runtime, so compilation does not afford you much performance. However, a mix of compiler and interpreter (compile to a simplified intermediate representation, then interpret and/or compile that) is increasingly popular among dynamic language implementations. And then there's the fact that modern JavaScript implementations do compile, and in fact V8 never interprets anything.
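To make the static-versus-dynamic point concrete, here is a trivial Java sketch: because the operand types are declared, javac decides at compile time whether + means integer addition or string concatenation, a decision a JavaScript engine can only make at runtime once it sees the actual values:

public class PlusDemo {
    public static void main(String[] args) {
        int a = 1, b = 2;
        System.out.println(a + b); // resolved at compile time as integer addition: 3
        String s = "1";
        System.out.println(s + b); // resolved at compile time as concatenation: "12"
    }
}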
Because of the difference in compilation level between Java and JavaScript, there are some limitations on the JavaScript side. Since bytecode is executed on the JVM, a platform written for a specific OS and hardware, bytecode execution has more opportunities to access system resources. Even native C code can be called from Java as well. On the other hand, since JavaScript only runs in the browser, there is less it can do.
There are two main parts to the Java platform: the Java programming language and the JVM. This lets each part focus only on its own area, which is why the JVM does not deal with Java language syntax. It is similar to how, when building C code, the linker does not deal with C source but with assembly/object code.
Bytecode on the JVM platform plays a role similar to assembly in the C world.
Eventually all of these representations are converted into a binary representation and then, somehow, into electrical signals. That is why we need these layers of programming.
I read the following articles:
http://searchcio-midmarket.techtarget.com/definition/just-in-time-compiler
http://javarevisited.blogspot.in/2011/12/jre-jvm-jdk-jit-in-java-programming.html
I am now really interested in knowing what will happen when I run a class. The JIT compiles the bytecode again, and then ???
Will this compiled code be converted into an .exe by the JVM?
Like the others said: JIT does not mean the code is compiled to a binary executable (.exe). However, an interesting application that you may consider is Excelsior JET.
I haven't read too much about it and haven't used it, so I don't know exactly how it works... yet. But according to its webpage, it's an AOT (Ahead-Of-Time) compiler. This means that it will compile your .class files to a system-dependent binary file.
You should give it a try and see how it performs. According to the website, you get a free license if your project is non-commercial in nature.
The Java compiler compiles plain-text Java source code into JVM bytecode. http://en.wikipedia.org/wiki/Java_bytecode
The JVM has a HotSpot optimizer that evaluates the code for "hot spots" (basically, the code that will be executed the most) and pays special attention to those spots, for example with respect to the CPU cache. It may also flag those spots for the JVM to recompile to native machine code (like assembly), and this is called JIT compilation.
JVM is essentially a virtual machine that runs a JVM bytecode interpreter.
There is never a direct .exe. It is a Windows/C/C++ thing, mostly.
No, the code is NOT "compiled" into an "exe"
The program is stored in memory as bytecode, but the code segment currently running is compiled to physical machine code beforehand in order to run faster.
I'll go out on a limb and say that JIT is a type of interpreter, designed to improve the speed of commonly used branches of code (at least, that was my interpretation 10 years ago).
JIT compilers represent a hybrid approach, with translation occurring continuously, as with interpreters, but with caching of translated code to minimize performance degradation. It also offers other advantages over statically compiled code at development time, such as handling of late-bound data types and the ability to enforce security guarantees.
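As a toy illustration of that hybrid idea (deliberately simplified, and not how HotSpot is actually implemented), here is a sketch in Java: a snippet is interpreted on every run until it has been executed often enough, at which point a translated form is cached and reused:

import java.util.HashMap;
import java.util.Map;
import java.util.function.IntUnaryOperator;

public class ToyJit {
    private static final int COMPILE_THRESHOLD = 3;
    private final Map<String, Integer> counts = new HashMap<>();
    private final Map<String, IntUnaryOperator> cache = new HashMap<>();

    // "Source" is a tiny made-up language: "+5" means "add 5 to the input".
    int run(String source, int input) {
        IntUnaryOperator compiled = cache.get(source);
        if (compiled != null) {
            return compiled.applyAsInt(input); // fast path: cached translation
        }
        int n = Integer.parseInt(source.substring(1)); // interpret from scratch
        if (counts.merge(source, 1, Integer::sum) >= COMPILE_THRESHOLD) {
            cache.put(source, x -> x + n); // "compile" the hot snippet and cache it
        }
        return input + n;
    }

    public static void main(String[] args) {
        ToyJit jit = new ToyJit();
        for (int i = 0; i < 5; i++) {
            // The first few runs are interpreted; later runs hit the cache.
            System.out.println(jit.run("+5", i));
        }
    }
}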
I would like to know how to check if the JIT compiler is turned off. I have the following code, which is meant to turn the JIT compiler off. The problem is, I am not sure if it is actually doing that, so I was wondering if there is a way of checking whether the JIT is off.
I looked at the Compiler class, but there isn't any method like isDisabled()/isEnabled().
Code:
Compiler.disable(); // java.lang.Compiler; many JVMs treat this as a hint and may ignore it
Any help or direction will be highly appreciated.
(Not a direct answer to your question, since it seems you were trying to turn off the JIT compiler programmatically, but based on your comment, this might be of interest.)
If you want to turn off the JIT compiler on a Sun/Oracle JVM, you should try the -Xint option:
-Xint
Operate in interpreted-only mode. Compilation to native code is disabled, and all bytecodes are executed by the interpreter. The performance benefits offered by the Java HotSpot Client VM's adaptive compiler will not be present in this mode.
I don't believe you can turn the JIT off at runtime.
If you want to seriously benchmark a Java program, you should definitely be ignoring the first few runs. Getting reliable benchmarks in Java is an extremely tricky business, best left to people much smarter than you or I.
I recommend using Caliper, which is used internally at Google for microbenchmarking and is plenty smart about warming up the JIT and stuff. In particular, look at the example here, which shows how to measure the efficiency of an algorithm for different input sizes.
From the article Performance Features and Tools:
The JIT compiler was first made available as a performance update in the Java Development Kit (JDK) 1.1.6 software release and is now a standard tool invoked whenever you use the java interpreter command in the Java 2 platform release.
You can disable the JIT compiler using the -Djava.compiler=NONE option to the Java VM.
So, you can deduce that when the variable is not set, or set to something other than NONE, then the JIT is enabled.
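If you just want to check that flag programmatically, a small sketch along these lines may help; note that on modern HotSpot JVMs the java.compiler property is often unset even though the JIT is active, so treat the result as a hint rather than proof:

public class JitPropertyCheck {
    public static void main(String[] args) {
        // -Djava.compiler=NONE shows up here; null or any other value suggests
        // the property was not used to disable the JIT.
        String compiler = System.getProperty("java.compiler");
        if ("NONE".equalsIgnoreCase(compiler)) {
            System.out.println("JIT disabled via -Djava.compiler=NONE");
        } else {
            System.out.println("java.compiler=" + compiler + " (JIT presumably still enabled)");
        }
    }
}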
IBM JVMs definitely support the Java interface java/lang/Compiler.disable() and .enable() which was introduced in Java 5, I believe. That includes WebSphere Real Time (which is a JVM designed to provide more predictable performance) as well as our "standard" JVMs. If you call disable(), it will prevent JIT compilations until you call enable().
I work for IBM on the JIT compiler team. We don't usually recommend that people use this interface, because interfering with JIT compilation heuristics is generally not a good idea, but there are reasonable real-time scenarios where you would use it.
You can print out methods when they get compiled with -XX:+PrintCompilation. If your method isn't printed out, or it suddenly gets faster after it is printed out, you can see the likely cause.
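For example, here is a minimal sketch of a method that should cross the default compile threshold; running it with java -XX:+PrintCompilation JitWatchDemo ought to show hotLoop appearing in the compilation output (the class and method names are just illustrative):

public class JitWatchDemo {
    static long hotLoop(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        long total = 0;
        // Call the method enough times that its invocation counter passes the
        // JIT threshold, so HotSpot compiles it to native code.
        for (int i = 0; i < 20_000; i++) {
            total += hotLoop(1_000);
        }
        System.out.println(total);
    }
}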