Is Terracotta a kind of reliable alternative to JVMs?

I came across Terracotta described as a stable and robust distributed JVM, but as my understanding of Terracotta is shallow, I am not sure whether Terracotta is actually an alternative to the JVM.
It looks promising.

Terracotta is not a JVM, it's a shared memory backplane. It does not replace the JVM in any way. Terracotta now owns and makes several projects which can leverage this backplane.
They include
Ehcache
Sessions
Quartz
Amongst others. Their product BigMemory also helps overcome some of the JVM heap issues with very large allocations.
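To give a feel for what programming against that backplane looks like, here is a minimal, illustrative Ehcache sketch using the Ehcache 2.x API; the cache name and configuration are made up. With a Terracotta server array configured in ehcache.xml, the very same put/get calls operate against a clustered cache rather than a purely local one.

```java
// Illustrative only: minimal Ehcache 2.x usage (net.sf.ehcache).
// "userCache" is an assumed cache name declared in ehcache.xml; when the file
// also points at a Terracotta server array, this cache becomes distributed.
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class EhcacheSketch {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.create();   // reads ehcache.xml from the classpath
        Cache cache = manager.getCache("userCache");     // cache assumed to be declared there

        cache.put(new Element("user:42", "Alice"));      // write through the cache API
        Element hit = cache.get("user:42");              // read back (possibly from another node when clustered)
        System.out.println(hit != null ? hit.getObjectValue() : "miss");

        manager.shutdown();
    }
}
```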


What are the disadvantages of the ECS (Entity-Component-System) architectural pattern, compared to OOP (or other paradigms)?

Because of Unity ECS, I've been reading a lot about ECS lately.
There are many obvious advantages to an ECS architecture:
ECS is data-oriented: Data tends to be stored linearly, which is the most efficient way for the system to access it. In decent ECS implementations, data is stored and processed sequentially, with few or no interruptions for any given system processing its set of components.
ECS is very compartmentalized: It naturally separates data from behavior, enforces 'composition over inheritance' (google it), etc.
ECS is very friendly to parallel-processing and multi-threading: Because of the way things are structured, many entities and components can avoid conflicts and be processed in parallel to other systems.
However, disadvantages to ECS (compared to OOP, or Entity-Component [without systems], as is common in game-engines including Unity up until recently) are rarely, if ever, talked about. Do they exist? And if they do, what are they?
Here's a few points I gathered from my research:
Systems are very dependent on their ordering. Introducing new systems in between already existing systems can be a challenge.
You also need to plan your data ahead as much as possible, since it will potentially be used by a LOT of systems. Changing the contents of a component could potentially break quite a few systems (see the toy sketch after these points).
Although it's easy to debug the flow of a single system, it's harder to debug individual component changes and to keep a global view of what happened to an entity across all its components. I'm not sure if Unity introduced new debug features for this.
If you're planning to use ECS in your team, introducing a new paradigm to devs that are not familiar with it could be a challenge. The onboarding time could be longer with more overhead.
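To make the ordering and shared-data points concrete, here is a toy ECS sketch in Java. The names (PositionComponents, MovementSystem, RenderSystem) are invented for illustration and are not Unity API: several systems read the same linearly stored component data, so both their run order and the layout of each component become shared contracts that are easy to break.

```java
// Toy ECS sketch (not Unity code): component data in structure-of-arrays form,
// with systems as plain loops over it. Changing a component's fields or the
// order the systems run in affects every system that touches that data.
class PositionComponents {          // data laid out linearly per field
    float[] x = new float[1024];
    float[] y = new float[1024];
}

class VelocityComponents {
    float[] dx = new float[1024];
    float[] dy = new float[1024];
}

class MovementSystem {              // must run before RenderSystem each frame
    void update(PositionComponents pos, VelocityComponents vel, int count, float dt) {
        for (int e = 0; e < count; e++) {   // tight sequential loop over component arrays
            pos.x[e] += vel.dx[e] * dt;
            pos.y[e] += vel.dy[e] * dt;
        }
    }
}

class RenderSystem {                // also depends on the layout of PositionComponents
    void update(PositionComponents pos, int count) {
        for (int e = 0; e < count; e++) {
            // draw entity e at (pos.x[e], pos.y[e]) ...
        }
    }
}
```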
Hope this gives you a good starting point.
When it comes to Unity3D, one disadvantage which comes to my mind is that the ECS there is quite restricted to the Unity classes (e.g. MonoBehaviour) and lifecycle. That means the components are not easy to share with other C# code, whereas a well-designed OOP class is reusable on platforms other than Unity.
Another point which comes to my mind is that using interfaces with components is sometimes not easy in Unity, because only the newest versions support serialization of interfaces. Without serialization, they don't appear inside the inspector.

Hardware implementations of WebAssembly

I have been poking around some websites, and discovered WebAssembly, and was intrigued by the fact that, to be implemented, a virtual machine is created, along with instruction sets.
Is it theoretically possible to make a WebAssembly implementation in hardware? Does the VM lack any capabilities that could not be provided by external functions?
Theoretically yes, and someone started developing an initial FPGA implementation called WASM Metal, but I believe it has since been abandoned. Notably, folks like Brendan Eich are skeptical of its utility.
Wasm was designed for just-in-time compilation, so there are some minor complications that make direct execution slightly more involved (e.g., the way branch targets are addressed). Some future extensions, such as garbage collection support, might also be less straightforward, though an implementation will be allowed to not provide those.
But yes, in principle it should be possible (and useful!) to implement Wasm in hardware. I am aware of some people/projects looking into this idea, but none of them have announced anything publicly yet.

Does JVM or CLR use registers for running JIT'ed code?

I understand that the JVM and CLR were designed as stack-based virtual machines. When the JIT compiles bytecode into native code, does it also translate stack primitives (load/store) to registers on the x86 platform?
If yes, it looks like whether bytecode is stack-based or register-based doesn't really matter. JIT matters.
I think that you are confusing two different concepts.
At least for Java, the JVM acts as a virtual machine - it's an idealized computing machine with a comparatively high-level assembly language (the bytecode) that is based on a call stack with stack frames. When compiling Java into bytecode, the Java program is turned into (essentially) an assembly program for controlling this machine.
When actually running Java on a given system, the job of the JVM implementation is to faithfully simulate the execution of this stack-based machine using whatever hardware is actually available. This typically means that a huge number of stack operations would be implemented using registers when possible, and perhaps using other specialized hardware that isn't present in the description of the Java virtual machine. The actual details of how this is done are implementation-specific - some implementations might compile it down to machine code that does almost everything in registers, while a simpler implementation might just compile down to in-memory operations. I worked for a few months on a JavaScript implementation of the JVM, in which case we "compiled" the code down to JS functions, which were in turn handed off to the browser's JS implementation.
The reason for this distinction is that Java was designed to be easily downloaded and embedded (think applets). In this case, security and portability are important concerns. The bytecode had to have some way to be inspected automatically to rule out certain types of malicious code (buffer overruns, for example). Similarly, whatever format was used had to be sufficiently high-level that it could be run on a variety of different platforms (handheld devices, supercomputers, PCs, etc.) The choice of the stack-based JVM made both of these concerns possible to satisfy simultaneously. It's high-level enough that it's possible to inspect the bytecode to rule out many type errors or reads/writes of uninitialized memory, while sufficiently low-level that a JVM can use tricks like compiling down to code using registers.
If you are curious what your particular JVM will do to a specific piece of code, you should take a look at the documentation. Most JVMs have some way of giving you information about how they're executing the code. If your question is "why not just have bytecode do register-based manipulation," the reason is twofold:
There is an analog of registers in bytecode - each stack frame has some dedicated space (the numbered local variable slots) for temporary values to be stored (see the bytecode sketch after these two points), and
There isn't as robust support for registers as is present in x86 or MIPS because the JVM code had to be easy to execute on multiple pieces of hardware, and hardcoding in a number of registers might complicate things.
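As a concrete illustration of the first point, here is a tiny method and, roughly, the bytecode that javap -c prints for it; the "register-like" slots are the numbered local variables of the stack frame, which a JIT compiler is then free to map onto real machine registers.

```java
// Source: a trivial static method.
class Adder {
    static int add(int a, int b) {
        return a + b;
    }
}

// Roughly what `javap -c Adder` shows for add(int, int):
//   iload_0   // push local variable slot 0 (a) onto the operand stack
//   iload_1   // push local variable slot 1 (b)
//   iadd      // pop two ints, push their sum
//   ireturn   // return the top of the operand stack
//
// The numbered slots (0, 1, ...) are the per-frame "registers" of the JVM;
// a JIT compiler typically maps them to real machine registers when it
// compiles this method to native code.
```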
Hope this helps!
It is impossible to not use registers on an x86 core. The processor doesn't have an instruction to, say, add two local variables. One of them has to be loaded in a register. Then you can add the value in the register to the value in a variable. And store the result back to a stack variable.
The optimization opportunities are obvious from this sequence. Like not storing it back but keeping the result in a register and using it later, saving both the store and the load. That's the job of the optimizer, it looks for ways to make the best use of the available registers.
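To illustrate that sequence, here is a small Java fragment with schematic x86-style comments (hand-written for illustration, not actual HotSpot output) contrasting spilling every intermediate back to the stack with keeping it in a register.

```java
// Schematic illustration only; the x86 in the comments is not real JIT output.
class RegisterSketch {
    static int compute(int a, int b) {
        int c = a + b;   // naive:     mov eax, [a]   ; load a into a register
                         //            add eax, [b]   ; add b
                         //            mov [c], eax   ; spill c back to the stack
        int d = c * 3;   // naive:     mov eax, [c]   ; reload c
                         //            imul eax, 3
                         //            mov [d], eax
        return d;        // optimized: keep c in eax between the two statements,
                         //            so the spill and the reload of [c] disappear
    }
}
```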
The only way to know for sure would be to examine the JIT-compiled output, but it's quite safe to say that using registers is one of the JIT compiler's most basic optimizations. I believe most programmers would be hard pressed to write faster code than the JIT compiler does.
The JIT compiler is capable of a lot, and probably uses registers as much as is appropriate. Things like method inlining encourage the use of registers, and a lot of imperative program code can be expressed more simply on a register-based architecture, so it only makes sense for the JIT compiler to use registers.

What are alternatives to the Java VM?

As Oracle sues Google over the Dalvik VM, it becomes clear that you cannot implement a Java VM without a license from Oracle (EDIT: Matthew Flaschen points out that Oracle's claims may not be valid. In any case, we currently have a situation where Oracle threatens VM implementations.). That may become the death of open-source implementations of Java (like Apache Harmony).
I don't want to discuss the impact or the legitimacy of this lawsuit, but as a Java programmer I want to take a deeper look at the alternatives, to be prepared for every case. As I see the creation of a compiler as a minor problem, my main interest is alternative VM implementations that serve a similar purpose as the JVM.
The VM I'm looking for, should meet some conditions:
free of patent-issues
an Open-Source-implementation exists
potential for optimizations/good performance
platform independent (the VM can be ported to different platforms without major hurdles)
Please add some recommendations for me.
LLVM is a really good optimizing, low-level virtual machine. It can support languages like C and C++, and does not have built-in support for high-level features like garbage collection.
VMKit is an implementation of the Java and CLI virtual machines on top of LLVM. Since it uses Java bytecode, this probably wouldn't help with the patent issues.
HLVM is another interesting high level virtual machine built on top of LLVM. It is probably different enough to avoid most well known patents, but it is mainly targeted at numerical computing and functional programming.
On the dynamically typed side, there is Parrot.
I am actually working on a compiler and VM for a language of my own design, but don't count on it ever being finished. ;-)
Keep in mind that any large piece of software will infringe on numerous patents, the important thing is how well known they are (and how much the patents' owners actively seek out infringers). Of course, the whole patent system is absurd, and we would be much better off getting rid of it.
I don't think there is any significant piece of software that is free from patent issues.
If you are an independent developer or working for a smaller company you probably won't get hit directly by the problems though. It's unlikely that big companies holding patents will go after lots of small claims - it's an expensive process and causes a lot of resentment. SCO tried something like that and it didn't work out too well for them.
I would concentrate on finding the best tool for the job without worrying too much about the patent issues, otherwise you will never get anything done.
GraalVM is a research project developed by Oracle Labs and already in production at Twitter. I'm surprised no one has mentioned it yet. GraalVM is a promising extension of the Java virtual machine that supports additional languages and execution modes, for running applications such as JavaScript, Python, Ruby, R, JVM-based languages, and LLVM-based languages such as C and C++. The GraalVM project includes a new high-performance Java compiler, itself called Graal, which can be used in a just-in-time configuration on the HotSpot VM, or in an ahead-of-time configuration on SubstrateVM. The main goal of the project is to bring the performance of JVM-based languages in line with that of native languages. Let's sum up the novel features this project offers, with a brief explanation, according to the docs, of why you should adopt it.
Polyglot: All languages (even LLVM-based ones) share the same VM and its capabilities. Zero-overhead interoperability between programming languages allows you to write polyglot applications and select the best language for your task.
Native: Native images compiled with GraalVM ahead of time improve the startup time and reduce the memory footprint of JVM-based applications.
Embeddable: GraalVM can be embedded in both managed and native applications. There are existing integrations into OpenJDK, Node.js, Oracle Database, and MySQL. GraalVM removes the isolation between programming languages and enables interoperability in a shared runtime; it can run either standalone or in the context of those hosts.
Performance: Graal benchmark reports show great performance improvements in almost all of its implementations, thanks to the way GraalVM performs object allocations.
If you're not convinced by now that it's a good choice and a really awesome project, you can watch this talk by Christian Thalinger on why Graal is a good fit for Twitter.
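For a taste of the polyglot side, here is a minimal sketch using GraalVM's polyglot API (org.graalvm.polyglot). It assumes you compile and run it on a GraalVM distribution that has the JavaScript language installed; the class name and the evaluated snippet are made up for illustration.

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class PolyglotSketch {
    public static void main(String[] args) {
        // Create a polyglot context; requires a GraalVM runtime with the JS language available.
        try (Context context = Context.create()) {
            // Evaluate a JavaScript expression from Java and use the result as a Java int.
            Value result = context.eval("js", "21 * 2");
            System.out.println("JS says: " + result.asInt());   // prints: JS says: 42
        }
    }
}
```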

When implementing an interpreter, is it a good or bad to piggyback off the host language's garbage collector?

Let's say you are implementing an interpreter for a GCed language in a language that is GCed.
It seems to me you'd get garbage collection for free as long as you are reasonably careful about your design.
Is this generally how it's done? Are there good reasons not to do this?
Language and runtime are two different things. They are not really related IMHO.
Therefore, if your existing runtime already offers a GC, there must be a good reason to extend the runtime with another GC. In the good old days, when memory allocations in the OS were slow and expensive, apps brought their own heap managers, which were more efficient at dealing with small chunks of data. That was one reason for adding another memory manager on top of an existing runtime (or OS). But if you're talking Java, .NET or so - those should be good and efficient enough for most tasks at hand.
However, you may want to create a proper interface/API for memory and object management tasks (and others), so that your language's ("guest") runtime could be implemented on top of another host runtime later on.
For an interpreter, there should be no problem with using the host GC, IMHO, particularly at first. As always, the goal should be to get something working, then make it work right, then make it fast. This is particularly true for Domain Specific Languages (DSLs), where the goal is a small language. For these, implementing a full GC would be overkill.
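As a minimal sketch of the piggyback approach (all class names below are invented for illustration): if every guest-language value is represented by an ordinary host object, then a guest object that becomes unreachable from the interpreter's data structures is automatically reclaimed by the host GC, with no collector of your own.

```java
import java.util.HashMap;
import java.util.Map;

// Guest-language values are plain Java objects, so the JVM's GC manages them.
abstract class GuestValue {}

class GuestNumber extends GuestValue {
    final double value;
    GuestNumber(double value) { this.value = value; }
}

class GuestObject extends GuestValue {
    // Guest object fields reference other guest values; reachability in the
    // guest heap is exactly reachability in the Java heap.
    final Map<String, GuestValue> fields = new HashMap<>();
}

class Environment {
    private final Map<String, GuestValue> bindings = new HashMap<>();

    void define(String name, GuestValue value) { bindings.put(name, value); }

    GuestValue lookup(String name) { return bindings.get(name); }

    // "Freeing" a guest value is just dropping the reference; the host GC
    // reclaims it once nothing in the interpreter points to it any more.
    void undefine(String name) { bindings.remove(name); }
}
```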