Are there any JVM simulators / bytecode execution visualizers? - jvm

I'm looking for a visualization tool that could help explain how Java bytecodes interact with the operand stack and local variables.
Are there any?
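For a sense of what such a tool would animate, here is a tiny example of the stack/locals interplay that you can already inspect statically with javap (the class and method below are purely illustrative; the exact bytecode may vary slightly by compiler):

    // Add.java -- compile with "javac Add.java", then disassemble with "javap -c Add".
    // For the method below, javac emits roughly: iload_0, iload_1, iadd, ireturn --
    // i.e. push locals 0 and 1 onto the operand stack, add the top two values,
    // and return the new top of the stack.
    public class Add {
        static int add(int a, int b) {
            return a + b;
        }
    }

A visualizer would essentially replay this sequence step by step, showing the operand stack and the local variable slots changing as each instruction executes.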

Related

Which JVM languages are stackless?

Which Turing-complete language implementations on the JVM do not use the JVM stack as a call stack?
(I ask because I want to implement coroutines between Scala and another language in the same thread.)
Morpho
SISC (Second Interpreter of Scheme Code)
Implementations which once did not use the JVM stack, but in their latest versions, do:
LuaJ, prior to version 2
JRuby (in its experimental YARV bytecode interpreter, since removed)

Control execution speed

I am thinking of making a "programming game", i.e. one where each player writes a program to control their "bot", and the programs are then pitted against each other to see who wins (by some definition of "win").
To make this fair, each bot program should execute at the same speed, so using native pre-compiled C/C++ code seems out of the question.
I can think of 3 options, but am unsure about 2:
Use a language that runs in a VM - This would mean that bots are written in Java and compiled to JVM bytecode. Then every bot gets a JVM and I would need to control the JVM "clock" or whatever it has to control the execution speed.
Problem: Can the JVM "clock" be controlled, telling it to run X clock cycles worth of code?
Use a scripting language - Bots would be written in JS or Python or whatever.
Problem: Same as above - can the speed be controlled?
Use my own simplified language -
Problems: I am writing a game, not a compiler. It will mean anyone playing has to learn yet another language, which means no one will play.
So basically, I guess the question is can I control the execution speed of the JVM or some language interpreter (not in theory - in practice)? Or is there another option I didn't think of?
The JVM isn't real-time, nor, I suspect, is your OS. Relying on the JVM and/or process interactions isn't going to work, since you're at the mercy of OS scheduling, JVM thread scheduling, etc.
If you want to coordinate multiple threads, then you should look at the JVM thread model and in particular how to use locks to coordinate 2 threads.
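For instance, here is a minimal sketch of coordinating two "bot" threads so they strictly alternate, using semaphores from java.util.concurrent (the bot logic and names are made up for illustration):

    import java.util.concurrent.Semaphore;

    // Two hypothetical "bot" threads taking strict turns: each runs one step,
    // then hands control to the other via a pair of semaphores.
    public class TurnTaking {
        public static void main(String[] args) {
            Semaphore turnA = new Semaphore(1);  // bot A moves first
            Semaphore turnB = new Semaphore(0);

            Runnable botA = () -> {
                for (int step = 0; step < 3; step++) {
                    try {
                        turnA.acquire();                 // wait for A's turn
                        System.out.println("bot A, step " + step);
                        turnB.release();                 // hand over to B
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            };
            Runnable botB = () -> {
                for (int step = 0; step < 3; step++) {
                    try {
                        turnB.acquire();                 // wait for B's turn
                        System.out.println("bot B, step " + step);
                        turnA.release();                 // hand back to A
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            };
            new Thread(botA).start();
            new Thread(botB).start();
        }
    }

Note that this only guarantees alternation of turns, not equal amounts of work per turn, which is exactly why the next answer suggests counting bytecode instructions instead.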
One option would be to write your own JVM that you instrument to run only a fixed number of bytecode instructions from each program. Bytecode is a lot easier to digest than human-readable source code, so you could get away with relatively little implementation work, while your users would get to program in any language that can produce Java bytecode.
It gets easier if you institute some restrictions like "no threads" and "no try/catch". You'll need to implement a few core language features from java.lang.* plus some domain-specific I/O features, but for most of the rest of the JRE (for example java.util.*) you should be able to get away with executing bytecode from an existing JRE implementation (modulo legal constraints if you distribute the game engine).
Expect a slowdown of between 10x and 100x (depending on your implementation technology) compared to running on an off-the-shelf optimized JVM.
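To make the "fixed number of bytecode instructions" idea concrete, a bare-bones dispatch loop for such an instrumented interpreter might look roughly like the sketch below. Everything here (the class name, the handful of opcodes, the missing frame and locals handling) is illustrative only, not taken from any real JVM:

    // Skeleton of a bytecode interpreter that yields after a fixed instruction
    // budget per turn. Only three opcodes are shown; frames, locals and the
    // rest of the instruction set are omitted.
    class BudgetedInterpreter {
        static final int ICONST_1 = 0x04, IADD = 0x60, RETURN = 0xB1;

        private final byte[] code;   // the bot's method bytecode
        private final int[] stack = new int[64];
        private int pc = 0;          // program counter
        private int sp = 0;          // operand stack pointer

        BudgetedInterpreter(byte[] code) { this.code = code; }

        /** Runs at most `budget` instructions; returns false when the program finishes. */
        boolean step(int budget) {
            for (int used = 0; used < budget; used++) {
                int opcode = code[pc++] & 0xFF;
                switch (opcode) {
                    case ICONST_1 -> stack[sp++] = 1;
                    case IADD     -> { int b = stack[--sp]; int a = stack[--sp]; stack[sp++] = a + b; }
                    case RETURN   -> { return false; }   // the bot's turn handler returned
                    default       -> throw new IllegalStateException("unsupported opcode " + opcode);
                }
            }
            return true;  // budget exhausted, bot is still running
        }
    }

The game loop would then call step(budget) on each bot in turn, which is what makes the per-turn execution budget exact and fair regardless of OS or thread scheduling.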
Alternatively, run an existing JVM in debug mode, single-step through the contestant programs with your game pretending to be a debugger. Whether this is easier or harder than writing a bare-bones JVM yourself I'm not sure.

Difference between Dynamic Binary Instrumentation and Analysis

I am reading about automated bug finding techniques, and the Valgrind paper describes Valgrind as a dynamic binary instrumentation framework for building dynamic binary analysis tools. This may be a bit of a silly question, but I am confused about the naming here. What exactly is the difference between instrumentation and analysis? (I know that they are different words, but what is the difference in practice?)
Instrumentation is collecting data. Analysis is, well, analyzing it. The reason Valgrind says "dynamic" is that there are also static analysis tools, which analyze code without running the program, whereas Valgrind analyzes binary code while the binary is running.
See also:
http://en.wikipedia.org/wiki/Instrumentation_%28computer_programming%29
http://en.wikipedia.org/wiki/Static_program_analysis
The implementation details of this automated bug-finding tool should answer your question:
You use dynamic binary instrumentation tools to instrument the program (at the binary level, in Valgrind's case) for further analysis.
You use different algorithms or techniques to analyze the code, such as statistical debugging introduced in the article.
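As a toy, hand-written illustration of the split (Valgrind injects the recording code into the binary for you; here it is written by hand, and all names are made up for the example): the recording half is the instrumentation, the summarizing half is the analysis.

    import java.util.ArrayList;
    import java.util.List;

    // "Instrumentation" is the extra code that records events while the program
    // runs; "analysis" interprets those events afterwards.
    public class InstrumentVsAnalyze {
        static final List<Integer> allocationSizes = new ArrayList<>();

        // Instrumented allocation: performs the original work AND records data.
        static byte[] allocate(int size) {
            allocationSizes.add(size);   // instrumentation: collect at runtime
            return new byte[size];       // original behaviour
        }

        public static void main(String[] args) {
            allocate(16);
            allocate(1024);
            allocate(16);

            // Analysis: draw conclusions from the collected data.
            int total = allocationSizes.stream().mapToInt(Integer::intValue).sum();
            System.out.println("allocations: " + allocationSizes.size()
                    + ", total bytes: " + total);
        }
    }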

Is it possible to compile Java Bytecode to Native Code using pypy?

PyPy currently translates RPython to native code using a pluggable JIT and GC, and it currently has a Python frontend. I am wondering if it is possible to write a Java bytecode frontend for PyPy, making an alternative JVM (written in (R)Python)?
An RPython interpreter for Java bytecode wouldn't be a compiler for Java bytecode to native code. The RPython code is compiled to native code, not the code the interpreter is interpreting.
At runtime (some-of) the interpreted code would be JIT-compiled to native code, but that's completely different, and the HotSpot VM already does this. Given that HotSpot has been developed over a long period of time with serious resources behind it, and specifically tuned for Java, I doubt you could get anything even approaching as good as it out of PyPy.
PyPy's strength is the idea that you can write things like garbage collectors and JIT compilers as a framework that works independently of the languages you're interpreting. Then lots of people can write lots of interpreters for lots of languages, and write them in a fairly high-level easy-to-code way, but they still all get high quality GCs, JIT compilers, etc without having to specifically implement them for each language. PyPy is unlikely to be a reasonable alternative to an existing project that has already sunk huge amounts of resources into developing highly optimised GCs and JIT compilers that are specifically tuned for their language.

Do JVMs on Desktops Use JIT Compilation?

I always come across articles which claim that Java is interpreted. I know that Oracle's HotSpot JRE provides just-in-time compilation, however is this the case for a majority of desktop users? For example, if I download Java via: http://www.java.com/en/download, will this include a JIT Compiler?
Yes, absolutely. Articles claiming Java is interpreted are typically written by people who either don't understand how Java works or don't understand what interpreted means.
Having said that, HotSpot will interpret code sometimes - and that's a good thing. There are definitely portions of any application (around startup, usually) which are only executed once. If you can interpret that faster than you can JIT compile it, why bother with the overhead? On the other hand, my experience of "Java is interpreted" articles is that this isn't what they mean :)
EDIT: To take T. J. Crowder's point in: yes, the JVM downloaded from java.com will be HotSpot. There are two different JITs for HotSpot, however - server and desktop. To sum up the differences in a single sentence, the desktop JIT is designed to start apps quickly, whereas the server JIT is more focused on high performance over time: server apps typically run for a very long time, so time spent optimising them really heavily pays off in the long run.
There is nothing in the JVM specification that mandates any particular execution strategy. Some JVMs only interpret; they don't even have a compiler. Some JVMs only JIT compile; they don't even have an interpreter. Some JVMs have both an interpreter and a compiler (or even multiple compilers) and statically choose between the two on startup. Some have both and dynamically switch back and forth during runtime. Some aren't even virtual machines in the usual sense of the word at all: they just statically compile JVM bytecode into native machine code ahead-of-time.
The particular JVM that you are asking about, Oracle's HotSpot JVM, has one interpreter and two compilers, called the C1 and C2 compiler, also colloquially known as the client and server compilers, after their corresponding commandline options. HotSpot dynamically switches back and forth between the interpreter and one of the compilers at runtime (but it will not switch between the two compilers, you have to specify one of them on the commandline and then only that one will be used for the entire runtime of the JVM).
As per the documentation: starting with some of the later Java SE 7 releases, a new feature called tiered compilation became available. This feature uses the C1 compiler mode at the start to provide better startup performance. Once the application is properly warmed up, the C2 compiler mode takes over to provide more aggressive optimizations and, usually, better performance.
The C1 compiler is an optimizing compiler which is pretty fast and doesn't use a lot of memory. The C2 compiler is much more aggressively optimizing, but is also slower and uses more memory.
You select between the two by specifying the -client or -server commandline option (-client is the default if you don't specify one), which also sets a couple of other JVM parameters such as the default JIT threshold (in -client mode, methods are compiled after they have been interpreted 1500 times; in -server mode, after 10000 times; this can be overridden with the -XX:CompileThreshold commandline argument).
Whether or not "the majority of desktop users" actually will run in compiled or interpreted mode depends largely on what code they are running. My guess is that the vast majority of desktop users run the HotSpot JVM from Oracle's JRE/JDK or one of its forks (e.g. SoyLatte on OSX, IcedTea or OpenJDK on Unix/BSD/Linux) and they don't fiddle with the commandline options, so they will probably get the C1 compiler with the default 1500 JIT threshold. (But applications such as IntelliJ, Eclipse or NetBeans have their own launcher scripts that usually supply different commandline arguments.)
In my case, for example, I often run small scripts which never actually reach the JIT threshold, so they are never compiled. (Nor should they be.)
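If you want to observe the interpreter-to-JIT handoff on your own machine, one simple experiment (assuming a HotSpot-based JVM, where -XX:+PrintCompilation is supported) is to run a hot loop and watch for compilation log lines once it crosses the threshold; the program below is just a throwaway example for that purpose:

    // HotLoop.java -- run with:  java -XX:+PrintCompilation HotLoop
    // On a HotSpot-based JVM you should see compilation entries for
    // HotLoop::work appear once the loop has run enough iterations to cross
    // the JIT threshold; a script that exits immediately stays interpreted.
    public class HotLoop {
        static long work(long n) {
            long sum = 0;
            for (long i = 0; i < n; i++) {
                sum += i * 31 + (i ^ sum);
            }
            return sum;
        }

        public static void main(String[] args) {
            long result = 0;
            for (int round = 0; round < 1_000; round++) {
                result += work(100_000);
            }
            System.out.println(result);   // print so the loop isn't dead code
        }
    }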
Some of these links about the Hotspot JVM (what you are downloading in the java.com download link above) might help:
Java SE HotSpot at a Glance
The Java HotSpot Performance Engine Architecture
Frequently Asked Questions About the Java HotSpot VM
Neither of the (otherwise-excellent) answers so far seems to have actually answered your last question, so: Yes, the Java runtime you downloaded from www.java.com is Oracle's (Sun's) Hotspot JVM, and so yes, it will do JIT compilation. HotSpot isn't just for servers or anything like that, it runs on desktops and takes full advantage of its (very mature) optimizing JIT compiler.
The JVM spec never dictates how Java bytecode must be executed; however, with the HotSpot VM you can choose which JIT compiler to use. JIT compilation is just a technique for optimizing bytecode execution.