Why is the JVM a stack-based virtual machine?

Why is the JVM a stack based virtual machine? What exactly does that mean and what are the advantages over register based virtual machines? Are there any other major design / implementation choices for virtual machine builders?

A stack-based virtual machine is very simple, both as a concept and to implement. Just about anyone with a CS background can implement a simple, fully functional VM in a few hundred lines of code.
You can think of the stack as an arbitrarily large number of registers if the need arises. Adding registers from the start would be premature optimization.
A simple concept also makes it easier to build real magic like HotSpot on top of the model. That part is not simple, but you can choose the level of complexity based on your ability, from a straight interpreter, to a simple JIT, to HotSpot.
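To make the "few hundred lines" claim concrete, here is a minimal sketch of a stack-based interpreter in C. The opcodes and encoding are invented for illustration; they are not JVM bytecodes.

```c
#include <stdio.h>

/* Hypothetical opcodes -- not JVM bytecode, just an illustration. */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

static void run(const int *code) {
    int stack[256];
    int sp = 0;                 /* next free slot on the operand stack */
    for (int pc = 0; ; ) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++];           break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp];   break;
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp];   break;
        case OP_PRINT: printf("%d\n", stack[--sp]);        break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    /* (2 + 3) * 4 -> prints 20 */
    int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                      OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}
```

Note how the operands are implicit: every instruction works on the top of the stack, so no register numbers need to be encoded.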


About embedded firmware development [closed]

In the past few days I have come to see how important the RTOS layer on top of the embedded hardware is.
My question is: is there any real distinction between a device driver written in C and burned directly onto a microcontroller, and a Linux device driver?
This question is a little broad, but an answer, a little broad itself, can be given.
The broadness comes from the fact that "embedded hardware" is not a precise term. That hardware ranges from 4-bit or 8-pin microcontrollers up to big CPUs that have much in common with the processors typically used in Linux machines (desktops and servers). Linux itself can be tailored to the point that it no longer resembles a normal operating system.
Anyway, a few generally acceptable points can be made. Linux, in its "plain" version, is not a real-time operating system - with the term RTOS, the "real-time" part is implied. So this can be one distinction. But the most important thing, I think, is that embedded firmware tries to address the hardware and the task to be done without anything else added. A Linux OS, instead, is general purpose - it offers a lot of services and functionality that, in many cases, are not needed and only add cost, lower performance, and more complexity.
Often, in a small or medium embedded system, there is not even a "driver": the hardware and the application talk directly to each other. Of course, when the hardware is (more or less) standard (like a USB port, an Ethernet controller, a serial port), the programming framework can provide ready-to-use software that is sometimes called a "driver" - but very often it is not a driver, just a library with a set of functions to initialize the device and exchange data. The application uses those library routines to manage the device directly. The OS layer is not present or, if the programmer wants to use an RTOS, he must check that there are no conflicts.
A Linux driver is not targeted at the application, but at the kernel. And the application seldom talks to the driver directly - it instead uses a uniform language (typically the "file system idiom") to talk to the kernel, which in turn calls the driver on behalf of the application.
A simple example I know very well is a serial port. Under Linux you open a file (maybe /dev/ttyS0), use some ioctl calls and the like to set it up, and then start to read from and write to the file. You don't even care that there is a driver in the middle, and the driver was written without knowledge of the application - the driver only interacts with the kernel.
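For concreteness, here is a minimal sketch of that Linux flow using the POSIX termios API (cfmakeraw is a Linux/BSD extension); error handling is omitted and the device path is just an example.

```c
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int main(void) {
    /* The driver sits behind the file: the application never touches hardware. */
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);

    struct termios tio;
    tcgetattr(fd, &tio);
    cfsetispeed(&tio, B9600);        /* 9600 baud */
    cfsetospeed(&tio, B9600);
    cfmakeraw(&tio);                 /* raw mode: no line editing, no echo */
    tcsetattr(fd, TCSANOW, &tio);

    write(fd, "hello\r\n", 7);       /* the kernel calls the driver for us */
    char buf[64];
    ssize_t n = read(fd, buf, sizeof buf);
    (void)n;

    close(fd);
    return 0;
}
```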
In many embedded cases, instead, you set up the serial port by writing directly to the hardware registers; you then write two interrupt routines which read from and write to the serial port, getting and putting data from/into RAM buffers. The application reads and writes data directly to those buffers. Special events (or not so special ones) can be signaled directly from the interrupt handlers to the application. Sometimes I implement the serial protocol (checksums, packets, sequences) directly in the interrupt routine. It is faster and simpler, and uses fewer resources. But clearly this piece of software is no longer a "driver" in the common sense.
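And a sketch of the bare-metal counterpart. The register addresses, register layout, and ISR name here are entirely made up; on a real microcontroller they come from the vendor header and the interrupt must be registered in the vector table.

```c
#include <stdint.h>

/* Hypothetical memory-mapped UART registers -- addresses are invented. */
#define UART_DATA   (*(volatile uint8_t *)0x40001000u)
#define UART_STATUS (*(volatile uint8_t *)0x40001004u)
#define RX_READY    0x01u

#define BUF_SIZE 128u
static volatile uint8_t rx_buf[BUF_SIZE];
static volatile uint8_t rx_head, rx_tail;

/* Interrupt handler: move the byte from the hardware into a RAM ring buffer. */
void uart_rx_isr(void) {
    if (UART_STATUS & RX_READY) {
        rx_buf[rx_head] = UART_DATA;
        rx_head = (uint8_t)((rx_head + 1u) % BUF_SIZE);
    }
}

/* The application reads directly from the buffer -- no OS, no driver layer. */
int uart_getc(void) {
    if (rx_head == rx_tail)
        return -1;                       /* nothing received yet */
    int c = rx_buf[rx_tail];
    rx_tail = (uint8_t)((rx_tail + 1u) % BUF_SIZE);
    return c;
}
```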
Hope this answer explains at least a part of the whole picture, which is very large.

Better way to implement I/O in a virtual machine?

I'm writing a virtual machine - not an emulator of an existing architecture like VirtualBox, but rather something like the JVM or BEAM - with its own instruction set, memory model, etc. Eventually I'm planning to implement a very small and simple (but Turing-complete) high-level language that would compile into its bytecode, just for fun.
Of course, the machine must have some support for I/O, but I do not want to limit it to manipulating stdin/stdout. I imagine something like modular "virtual devices", which can be implemented as shared libraries so that the VM can load them at runtime and communicate with them through a standard interface. This way, for example, we can have "virtual devices" for standard input/output, graphics (imagine a virtual device that lets your VM program draw stuff inside an SDL window) or maybe even networking.
The question is: how should programs written for the VM communicate with the virtual devices? I decided to mimic the techniques employed by actual hardware and learned about port-based I/O and memory-mapped I/O. However, I'm not sure which one of them is more suitable for my goals. Can you suggest which one is better, or maybe even point out a totally different technique for dealing with input/output?
Thanks in advance.
Both memory-mapped and port-based I/O are inappropriate for most I/O.
A DMA request with block copy is usually what you want.
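One way to sketch the "virtual devices loaded as shared libraries" idea in C: each device exposes a small table of callbacks, and the VM routes loads/stores in a reserved address range to it (memory-mapped style). The struct layout, the exported symbol name, and the dispatch convention are illustrative assumptions, not an existing standard.

```c
#include <stdint.h>
#include <dlfcn.h>

/* A virtual device is a base address plus load/store callbacks. */
typedef struct {
    uint32_t base, size;
    uint32_t (*load)(uint32_t offset);
    void     (*store)(uint32_t offset, uint32_t value);
} vdev;

#define MAX_DEVS 8
static vdev devs[MAX_DEVS];
static int ndevs;

/* Called by the VM for every guest store; falls back to plain RAM. */
void vm_store(uint8_t *ram, uint32_t addr, uint32_t value) {
    for (int i = 0; i < ndevs; i++) {
        if (addr >= devs[i].base && addr < devs[i].base + devs[i].size) {
            devs[i].store(addr - devs[i].base, value);
            return;
        }
    }
    *(uint32_t *)(ram + addr) = value;
}

/* Devices live in shared libraries and are registered at startup. */
int vm_load_device(const char *path) {
    void *h = dlopen(path, RTLD_NOW);
    if (!h) return -1;
    /* Convention (invented here): the library exports "device_descriptor". */
    vdev *d = (vdev *)dlsym(h, "device_descriptor");
    if (!d || ndevs == MAX_DEVS) return -1;
    devs[ndevs++] = *d;
    return 0;
}
```

A symmetric vm_load would dispatch guest reads the same way; the appeal of this memory-mapped style is that the instruction set needs no dedicated I/O opcodes.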

Why would I consider using an RTOS for my embedded project?

First the background; the specifics of my question will follow:
At the company where I work, the platform we currently target is the Microchip PIC32 family, using the MPLAB IDE as our development environment. Previously we've also written firmware for the Microchip dsPIC and TI MSP families for this same application.
The firmware is pretty straightforward in that the code is split into three main modules: device control, data sampling, and user communication (usually with a user's PC). Device control is achieved via some combination of GPIO bus lines and at least one part needing SPI or I2C control. Data sampling is interrupt driven, using a Timer module to maintain the sample frequency and more SPI/I2C and GPIO bus lines to control the sampling hardware (i.e. the ADC). User communication is currently implemented via USB using the Microchip App Framework.
So now the question: given what I've described above, at what point would I consider employing an RTOS for my project? Currently I'm thinking of these possible trigger points as reasons to use an RTOS:
Code complexity? The code base architecture/organization is still small enough that I can keep all the details in my head.
Multitasking/Threading? Time-slicing the module execution via interrupts suffices for now for multitasking.
Testing? Currently we don't do much formal testing or verification past the HW smoke test (something I hope to rectify in the near future).
Communication? We currently use a custom packet format and a protocol that pretty much only does START, STOP, SEND DATA commands, with the data being a binary blob (a sketch of such a framing follows this list).
Project scope? There is a possibility in the near future that we'll be getting a project to integrate our device into a larger system with the goal of taking that system to mass production. Currently all our projects have been experimental prototypes with quick turn-around of about a month, producing one or two units at a time.
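As referenced in the "Communication" point above, the kind of framed packet being described might look roughly like this in C; the field layout, sizes, marker value, and checksum choice are illustrative assumptions, not the actual protocol.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative only -- not the actual protocol from the question. */
enum cmd { CMD_START = 0x01, CMD_STOP = 0x02, CMD_SEND_DATA = 0x03 };

#define PKT_SOF 0xA5u                 /* start-of-frame marker */

typedef struct {
    uint8_t  sof;                     /* PKT_SOF */
    uint8_t  cmd;                     /* one of enum cmd */
    uint16_t len;                     /* payload length in bytes */
    /* followed by len payload bytes and a 1-byte checksum */
} pkt_header;

/* Simple additive checksum over cmd, len and payload. */
static uint8_t pkt_checksum(const uint8_t *bytes, size_t n) {
    uint8_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum = (uint8_t)(sum + bytes[i]);
    return (uint8_t)(~sum + 1u);      /* two's complement, so the frame sums to 0 */
}
```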
What other points do you think I should consider? In your experience, what convinced (or forced) you to consider using an RTOS vs just running your code on the base runtime? Pointers to additional resources about designing/programming for an RTOS are also much appreciated.
There are many many reasons you might want to use an RTOS. They are varied & the degree to which they apply to your situation is hard to say. (Note: I tend to think this way: RTOS implies hard real time which implies preemptive kernel...)
Rate Monotonic Analysis (RMA) - if you want to use Rate Monotonic Analysis to ensure your timing deadlines will be met, you must use a pre-emptive scheduler
Meet real-time deadlines - even without using RMA, with a priority-based pre-emptive RTOS, your scheduler can help ensure deadlines are met. Paradoxically, an RTOS will typically increase interrupt latency due to critical sections in the kernel where interrupts are usually masked
Manage complexity -- definitely, an RTOS (or most OS flavors) can help with this. By allowing the project to be decomposed into independent threads or processes, and using OS services such as message queues, mutexes, semaphores, event flags, etc. to communicate & synchronize, your project (in my experience & opinion) becomes more manageable. I tend to work on larger projects, where most people understand the concept of protecting shared resources, so a lot of the rookie mistakes don't happen. But beware: once you go to a multi-threaded approach, things can become more complex until you wrap your head around the issues. (A minimal task-plus-queue sketch follows this list.)
Use of 3rd-party packages - many RTOSs offer other software components, such as protocol stacks, file systems, device drivers, GUI packages, bootloaders, and other middleware that help you build an application faster by becoming almost more of an "integrator" than a DIY shop.
Testing - yes, definitely, you can think of each thread of control as a testable component with a well-defined interface, especially if a consistent approach is used (such as always blocking in a single place on a message queue). Of course, this is not a substitute for unit, integration, system, etc. testing.
Robustness / fault tolerance - an RTOS may also provide support for the processor's MMU (in your PIC case, I don't think that applies). This allows each thread (or process) to run in its own protected space; threads / processes cannot "dip into" each others' memory and stomp on it. Even device regions (MMIO) might be off limits to some (or all) threads. Strictly speaking, you don't need an RTOS to exploit a processor's MMU (or MPU), but the 2 work very well hand-in-hand.
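As referenced under "Manage complexity" above, here is a minimal sketch of the thread-plus-message-queue style, written against the FreeRTOS API. Treat it as a shape rather than a drop-in: the stack sizes, priorities, rates and the sample type are invented for illustration.

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

static QueueHandle_t sample_q;

/* Producer: stands in for the ADC sampling module. */
static void sampling_task(void *arg) {
    (void)arg;
    for (;;) {
        uint16_t sample = 0;                      /* read the hardware here */
        xQueueSend(sample_q, &sample, portMAX_DELAY);
        vTaskDelay(pdMS_TO_TICKS(10));            /* 100 Hz, say */
    }
}

/* Consumer: blocks in exactly one place, on the queue. */
static void comms_task(void *arg) {
    (void)arg;
    uint16_t sample;
    for (;;) {
        if (xQueueReceive(sample_q, &sample, portMAX_DELAY) == pdTRUE) {
            /* format and send the sample to the host */
        }
    }
}

int main(void) {
    sample_q = xQueueCreate(32, sizeof(uint16_t));
    xTaskCreate(sampling_task, "sample", 256, NULL, 2, NULL);
    xTaskCreate(comms_task,    "comms",  256, NULL, 1, NULL);
    vTaskStartScheduler();                        /* never returns */
    for (;;) {}
}
```

Each task has a well-defined interface (the queue), which is also what makes the "testable component" point above workable.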
Generally, when I can develop with an RTOS (or some type of preemptive multi-tasker), the result tends to be cleaner, more modular, more well-behaved and more maintainable. When I have the option, I use one.
Be aware that multi-threaded development has a bit of a learning curve. If you're new to RTOS/multithreaded development, you might be interested in some articles on Choosing an RTOS, The Perils of Preemption and An Introduction to Preemptive Multitasking.
Lastly, even though you didn't ask for recommendations... In addition to the numerous commercial RTOSs, there are free offerings (FreeRTOS being one of the most popular), and the Quantum Platform is an event-driven framework based on the concept of active objects which includes a preemptive kernel. There are plenty of choices, but I've found that having the source code (even if the RTOS isn't free) is advantageous, esp. when debugging.
An RTOS, first and foremost, permits you to organize your parallel flows into a set of tasks with well-defined synchronization between them.
IMO, a non-RTOS design is suitable only for a single-flow architecture where the whole program is one big endless loop. If you need multiple flows - a number of tasks running in parallel - you're better off with an RTOS. Without one you'll be forced to implement this functionality in-house, reinventing the wheel.
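For contrast, the "one big endless loop" single-flow design being described looks roughly like this sketch (the module and flag names are invented):

```c
#include <stdbool.h>

/* The single-flow, no-RTOS architecture: one endless loop polls everything,
 * and interrupts only set flags or fill buffers. Names are invented. */
static volatile bool tick_elapsed;   /* set by a timer ISR */

static void read_sensors(void)   {}  /* stubs standing in for real modules */
static void update_outputs(void) {}
static void service_comms(void)  {}

int main(void) {
    for (;;) {
        if (tick_elapsed) {          /* coarse time-slicing by hand */
            tick_elapsed = false;
            read_sensors();
            update_outputs();
        }
        service_comms();             /* everything else runs when there is time */
    }
}
```

This works until one module's work grows long enough to starve the others; that is the point where hand-rolled scheduling starts to reinvent what an RTOS already provides.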
Code re-use -- if you code drivers/protocol-handlers using an RTOS API, they may plug into future projects more easily
Debugging -- some IDEs (such as IAR Embedded Workbench) have plugins that show nice live data about your running process such as task CPU utilization and stack utilization
Usually you want to use an RTOS if you have any real-time constraints. If you don't have real-time constraints, a regular OS might suffice. RTOSes/OSes provide a run-time infrastructure like message queues and tasking. If you are just looking for code that can reduce complexity, provide low-level support and help with testing, some of the following libraries might do:
The standard C/C++ libraries
Boost libraries
Libraries available through the manufacturer of the chip that can provide hardware specific support
Commercial libraries
Open source libraries
In addition to the points mentioned before, using an RTOS may also be useful if you need support for
standard storage devices (SD, Compact Flash, disk drives, ...)
standard communication hardware (Ethernet, USB, FireWire, RS-232, I2C, SPI, ...)
standard communication protocols (TCP/IP, ...)
Most RTOSes provide these features or can be extended to support them.

Why is the JVM stack-based and the Dalvik VM register-based?

I'm curious, why did Sun decide to make the JVM stack-based and Google decide to make the DalvikVM register-based?
I suppose the JVM can't really assume that a certain number of registers are available on the target platform, since it is supposed to be platform independent. Therefore it just postpones the register allocation, etc., to the JIT compiler. (Correct me if I'm wrong.)
So the Android guys thought, "hey, that's inefficient, let's go for a register-based VM right away..."? But wait, there are many different Android devices; what number of registers did Dalvik target? Are the Dalvik opcodes hardcoded for a certain number of registers?
Do all current Android devices on the market have about the same number of registers? Or, is there a register re-allocation performed during dex-loading? How does all this fit together?
There are a few attributes of a stack-based VM that fit in well with Java's design goals:
A stack-based design makes very few assumptions about the target hardware (registers, CPU features), so it's easy to implement a VM on a wide variety of hardware.
Since the operands for instructions are largely implicit, the object code will tend to be smaller. This is important if you're going to be downloading the code over a slow network link.
Going with a register-based scheme probably means that Dalvik's code generator doesn't have to work as hard to produce performant code. Running on an extremely register-rich or register-poor architecture would probably handicap Dalvik, but that's not the usual target - ARM is a very middle-of-the-road architecture.
I had also forgotten that the initial version of Dalvik didn't include a JIT at all. If you're going to interpret the instructions directly, then a register-based scheme is probably a winner for interpretation performance.
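To make the stack-versus-register contrast concrete, here is a toy interpreter for each style in C, both evaluating "c = a + b". The opcodes and encodings are invented; they are not JVM or Dalvik bytecode, but they show why the register form uses fewer (though wider) instructions.

```c
#include <stdio.h>

/* Stack machine: operands are implicit, so four small instructions are needed. */
enum { S_LOAD, S_ADD, S_STORE, S_HALT };
static const int stack_code[] = { S_LOAD, 0, S_LOAD, 1, S_ADD, S_STORE, 2, S_HALT };

/* Register machine: operands are explicit, so one wider instruction suffices. */
enum { R_ADD, R_HALT };
static const int reg_code[] = { R_ADD, 2, 0, 1, R_HALT };   /* add v2, v0, v1 */

static int run_stack(const int *c, int *v) {
    int st[8], sp = 0;
    for (int pc = 0;;) {
        switch (c[pc++]) {
        case S_LOAD:  st[sp++] = v[c[pc++]];        break;
        case S_ADD:   sp--; st[sp - 1] += st[sp];   break;
        case S_STORE: v[c[pc++]] = st[--sp];        break;
        case S_HALT:  return v[2];
        }
    }
}

static int run_reg(const int *c, int *v) {
    for (int pc = 0;;) {
        switch (c[pc++]) {
        case R_ADD: { int d = c[pc++], a = c[pc++], b = c[pc++];
                      v[d] = v[a] + v[b]; break; }
        case R_HALT:  return v[2];
        }
    }
}

int main(void) {
    int v1[3] = { 2, 3, 0 }, v2[3] = { 2, 3, 0 };
    printf("stack: %d, register: %d\n",
           run_stack(stack_code, v1), run_reg(reg_code, v2));
    return 0;
}
```

The stack version dispatches four times and touches the operand stack on every step; the register version dispatches once, which is roughly the "avoid instruction dispatch / avoid unnecessary memory access" argument quoted below.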
I can't find a reference, but I think Sun decided on the stack-based bytecode approach because it makes it easy to run the JVM on an architecture with few registers (e.g. IA32).
In Dalvik VM Internals from Google I/O 2008, the Dalvik creator Dan Bornstein gives the following arguments for choosing a register-based VM on slide 35 of the presentation slides:
Register Machine
Why?
avoid instruction dispatch
avoid unnecessary memory access
consume instruction stream efficiently (higher semantic density per instruction)
and on slide 36:
Register Machine
The stats
30% fewer instructions
35% fewer code units
35% more bytes in the instruction stream
but we get to consume two at a time
According to Bornstein this is "a general expectation what you could find when you convert a set of class files to dex files".
The relevant part of the presentation video starts at 25:00.
There is also an insightful paper titled "Virtual Machine Showdown: Stack Versus Registers" by Shi et al. (2005), which explores the differences between stack- and register-based virtual machines.
I don't know why Sun decided to make the JVM stack-based. Erlang's virtual machine, BEAM, is register-based for performance reasons. And Dalvik also seems to be register-based for performance reasons.
From Pro Android 2:
Dalvik uses registers as the primary units of data storage instead of the stack. Google is hoping to accomplish 30 percent fewer instructions as a result.
And regarding the code size:
The Dalvik VM takes the generated Java class files and combines them into one or more Dalvik Executables (.dex) files. It reuses duplicate information from multiple class files, effectively reducing the space requirement (uncompressed) by half from traditional .jar file. For example, the .dex file of the web browser app in Android is about 200k, whereas the equivalent uncompressed .jar version is about 500k. The .dex file of the alarm clock is about 50k, and roughly twice that size in its .jar version.
And as I remember, Computer Architecture: A Quantitative Approach also concludes that a register machine performs better than a stack-based machine.

Hardware-specific questions

I'm good at programming yet I feel like I don't know enough about the architecture of the hardware I'm working on.
What does the Northbridge on the mainboard do?
What does the L2 cache of my processor do?
Can Windows XP use multiple processors? Not in terms of explicit multithreading within every program, but can it use the capacity of all cores when needed, instead of always only one core?
How can my processor/mainboard interact with multiple kinds of graphics/sound cards?
The northbridge usually controls memory: http://en.wikipedia.org/wiki/Northbridge_(computing)
L2 cache info: http://www.tomshardware.com/reviews/cache-size-matter,1709.html
etc. etc.
These questions can be answered via Google in about 15 minutes :)