I'm writing a virtual machine - not an emulator of an existing architecture like VirtualBox, but rather something like the JVM or BEAM - with its own instruction set, memory model, etc. Eventually I'm planning to implement a very small and simple (but Turing-complete) high-level language that would compile to its bytecode, just for fun.
Of course, the machine must have some support for I/O, but I do not want to limit it to manipulating stdin/stdout. I imagine something like modular "virtual devices", which can be implemented as shared libraries so that the VM can load them at runtime and communicate with them through a standard interface. This way, for example, we can have "virtual devices" for standard input/output, graphics (imagine a virtual device that lets your VM program draw stuff inside an SDL window), or maybe even networking.
The question is: how should programs written for the VM communicate with the virtual devices? I decided to mimic the techniques employed with actual hardware and learned about port-based I/O and memory-mapped I/O. However, I'm not sure which of them is more suitable for my goals. Can you suggest which one is better, or maybe even point out a totally different technique for dealing with input/output?
Thanks in advance.
Both memory-mapped and port-based I/O are inappropriate for most I/O. A DMA request with a block copy is usually what you want.
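To make that concrete, here is a rough sketch (all names are invented for illustration, not taken from the question) of what a pluggable "virtual device" interface could look like in C: the VM dlopen()s a shared library, resolves a factory function, and guest programs do I/O by asking a device to block-copy data to or from guest memory, DMA-style, instead of poking individual ports or mapped registers.

/* vdevice.h -- hypothetical plugin interface between the VM and its
 * "virtual devices"; every name below is illustrative only. */
#include <stddef.h>
#include <stdint.h>

typedef struct vdevice {
    const char *name;
    /* Copy `len` bytes between the device and guest memory starting at
     * guest address `guest_addr`; `to_device` selects the direction.
     * Returns 0 on success, nonzero on error. */
    int (*dma)(struct vdevice *dev, uint8_t *guest_mem,
               uint32_t guest_addr, size_t len, int to_device);
    void *impl;                       /* device-private state */
} vdevice;

/* Each shared library exports a factory with this signature, which the
 * VM resolves by name after dlopen(). */
typedef vdevice *(*vdevice_create_fn)(void);

/* loader.c -- how the VM could pull a device plugin in at runtime
 * (link with -ldl on older glibc). */
#include <dlfcn.h>
#include <stdio.h>

vdevice *load_vdevice(const char *path) {
    void *handle = dlopen(path, RTLD_NOW);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return NULL; }
    vdevice_create_fn create =
        (vdevice_create_fn)dlsym(handle, "vdevice_create");
    return create ? create() : NULL;
}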
Related
How does the operating system manage program permissions? If you write a low-level program that doesn't use any system calls and controls the processor directly, how can the operating system impose any restrictions on it?
Edit
It seems that my question was not very clear; I apologize, I cannot speak English well and I am using a translator. Anyway, what I wonder is how an operating system manages the permissions of programs (for example, the root user, etc.). If a program is written at a really low level without making system calls, can it have full access to the processor? If it can do everything it wants, then the various users/permissions of the operating system would not matter much. However, from the first answer I received I read that you cannot make useful programs that work without system calls, so a program cannot interact directly with the hardware (I mean the way the BIOS interacts with the hardware, for example)?
Depends on the OS. Something like MS-DOS, which is barely an OS and doesn't stop a program from taking over the whole machine, essentially lets programs run with kernel privilege.
On any OS with memory protection that tries to keep separate processes from stepping on each other, the kernel doesn't allow user-space processes to talk directly to I/O hardware.
A privileged user-space process might be allowed to memory-map video RAM and/or I/O registers of a device into its own address space, and basically act like a device driver. (e.g. an X server under Linux.)
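As a rough illustration of that last point: on Linux, a sufficiently privileged process can ask the kernel to map a physical address range (device registers, a framebuffer) into its own address space via /dev/mem. A minimal sketch, assuming root and a kernel that permits it (STRICT_DEVMEM may refuse); the physical address is made up for the example:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    /* Map one page of physical address space (address is illustrative). */
    volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0x3F200000);
    if (regs == MAP_FAILED) { perror("mmap"); return 1; }

    printf("first register word: 0x%08x\n", (unsigned)regs[0]);
    munmap((void *)regs, 4096);
    close(fd);
    return 0;
}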
1) It is impossible to have a program that does not make any system calls.
2) Instructions that control the operation of the processor must be executed in kernel mode.
3) The only way to get into kernel mode is through an exception (including system calls). By controlling how exceptions are handled, the operating system prevents malicious access.
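A tiny sketch of points 2 and 3 in action (x86 Linux and GCC inline assembly assumed): cli, which disables interrupts, is a privileged instruction, so executing it in user mode raises an exception; the kernel handles that exception by killing the process rather than letting it take over the processor.

int main(void) {
    /* Privileged instruction: faults in user mode, the kernel delivers
     * SIGSEGV and the process dies here. */
    __asm__ volatile ("cli");
    return 0;   /* never reached */
}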
If a program is written to really low-level without making system calls, then can it have full access to the processor?
On a modern system this is impossible. A system call is going to be made in the background whether you like it or not.
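For example (x86-64 Linux assumed, built with gcc -nostdlib -static), even a program that does nothing at all cannot avoid the kernel: there is no caller to return to from _start, so the only way to terminate is to make the exit system call.

/* "Do nothing" without the C library: the exit system call (number 60
 * on x86-64 Linux) still has to happen. */
void _start(void) {
    __asm__ volatile ("syscall" : : "a"(60), "D"(0));  /* exit(0) */
}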
I am a grad student, and I am considering setting up my dream home workstation/art tool/entertainment device/all-purpose everything. I'm wondering if what I want to do is possible (and practical), and if so, I want to get some suggestions and warnings from people who know more about virtualization and hypervisors than I do:
Aim: Set up a 2-4 headed computing station that is optimized for using different OSs for different tasks I do. I want to keep my work/play streams separated, and have control over the resources that each one is allowed. For example, one head would be Windows 10 for audiovisual work, media playing, and maybe some gaming. Another head would use Linux and be used mainly for data science (mostly R and Python), and some hosting for purely local use (such as running an instance of the Galaxy bioinformatics server, which I only plan to access locally). Finally, I want a VM that is purely devoted to web browsing, probably some lightweight Linux distro.
I want each OS to have its own keyboard and monitor(s), but ideally I want to copy-paste between OSs. The idea is to just swivel my chair to move between operating systems, or even to have one person using each.
What I think I need:
A hypervisor with PCI, USB, and network controller pass-through.
Two video cards, one each for my Windows and Linux workstations (with the web-browsing VM using the on-chip CPU graphics). Obviously, a mobo and CPU that support full virtualization.
A USB card with multiple separate controllers, so that I can use a different controller for each OS. Something similar for network interface cards.
Separate SSDs for each OS and its apps.
Some sort of storage pool (probably ZFS-based) to hold the bulk of my files, shared so I can access them from either guest. Ideally, I'd like it to be in a separate enclosure, but I don't trust eSATA cables (they seem to fail frequently) and I care about the speed of database access, so I'll probably put the drives inside the main case, even though that will make future migration more annoying.
Something like SPICE for KVM, so that I can copy and paste freely between OSs.
Is there anything I am overlooking?
What hypervisor or similar solution is best for what I want to do? I am leaning towards KVM, but am far from committed. I will consider paid solutions if there is a compelling reason to use them.
What are some pitfalls I should be wary of?
KVM will work ideally here; there are a lot of tutorials, and plenty of Intel-based configurations work like a charm.
ZFS by itself can't share your data between guests; you need an NFS or Samba share on the host machine.
Synergy is the software for you (it shares a keyboard, mouse, and clipboard across machines).
I want to read the global variables via the JTAG port, live, when a program is running on the microcontroller. Is it possible?
JTAG defines only a physical interface; it does not describe the on-chip debug capabilities of a particular processor, which may or may not support access during execution.
Moreover, whether it can be done in VB is not really the issue; the important issues are what hardware device and/or I/O port you are using for the JTAG interface, and whether a driver and API to access it via .Net are available. That said, VB.Net is not the first language I'd choose for this in any case.
A good place to start perhaps is OpenOCD, though it is not .Net specific.
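As a rough illustration of how this can be scripted: assuming OpenOCD is already attached to the target and listening on its default telnet port 4444, and that you have looked up the variable's address in your map file or ELF symbol table (0x20000000 below is just a placeholder), you can ask it to read a 32-bit word with the mdw command over a plain TCP socket:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(4444);            /* OpenOCD's telnet port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect"); return 1;
    }

    const char *cmd = "mdw 0x20000000\n";     /* read one word at the address */
    write(fd, cmd, strlen(cmd));

    /* Crude: a real tool would read until the prompt; the first chunk may
     * just be OpenOCD's greeting. */
    char buf[512];
    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n > 0) { buf[n] = '\0'; printf("%s", buf); }

    close(fd);
    return 0;
}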
"Almost-Live" is possibly doable, depending on the JTAG implementation. Often JTAG activity which reads memory does so by stealing cycles from the micro (or sometimes even inserting instructions into the pipeline). I'm not sure there's a micro which allows completely transparent access to memory over JTAG.
"All you need to do" is understand the JTAG implementation, know where the variable is located and issue a "memory read" command by wiggling the JTAG pins in the appropriate fashion. This is not a small task, which is why professional engineers are willing to pay (sometimes large amounts of) money for tools which perform this task.
Often the free (limited) toolchains the vendors provide can perform this also.
Yes, I suppose it is possible. But you'll need to drive the JTAG port (that sounds painful!) and know exactly where the data is stored on the chip, and what the formatting is.
In my project I need to work with device drivers, but I have a hard time understanding the naming, scope and function of the abstraction layers. As I see it, the main layer is the HAL - the "hardware abstraction layer".
What are the clients of the HAL? Whom is the HAL interfacing with?
Are you talking about a specific HAL in Windows or Linux or something, or in general?
When accessing registers from a device driver (code that drives a device; it doesn't have to be a kernel thing or have an operating system at all), I generally recommend creating functions like PUT32(address, data) and data = GET32(address). Or writel and readl, whatever you fancy. The point is to avoid creating a pointer with the address and using that pointer directly. There is a performance gain with the pointer-type solution, and a performance hit with the abstract PUT32(). Why I use it, though, is that if the code is clean enough, that driver can be used as part of a kernel driver for this OS, a kernel driver for that OS, run standalone embedded, connect to an HDL simulation of the logic, run on a processor on the same chip, or run on a host computer that reaches into the chip via PCI or JTAG, etc. One chunk of code reused from the birth of the logic (HDL sim) to the end-user kernel driver.
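A minimal sketch of that idea; this is the bare-metal/standalone variant, and a host-side build could swap in a version that forwards the accesses over PCI or JTAG without the driver code changing:

/* putget.c -- the only file that knows how register accesses happen. */
#include <stdint.h>

void PUT32(uint32_t address, uint32_t data) {
    *(volatile uint32_t *)(uintptr_t)address = data;
}

uint32_t GET32(uint32_t address) {
    return *(volatile uint32_t *)(uintptr_t)address;
}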
Perhaps more to your question, though: think about a UART. You want to send some bytes and receive some bytes, right? Create a uart_send() function and a uart_recv() function; everything above the abstraction layer uses these two functions. When you target this code to a specific platform, you implement those functions for the specific UART in that specific hardware. Later on you can replace that UART with something else; as long as the new UART can send and receive, the code above the abstraction layer does not have to change. Even though you have created an abstraction layer with the functions above, I personally would still use PUT8() and GET8() functions in the implementation of uart_send() and uart_recv() for the specific UART, and implement PUT8() and GET8() in a separate file.
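For example, a sketch of that UART layer; the register offsets and status bits are invented for illustration, so substitute the ones from the part's datasheet:

#include <stdint.h>

/* Implemented elsewhere (bare-metal pointer access, PCI, JTAG, ...). */
extern void    PUT8(uint32_t address, uint8_t data);
extern uint8_t GET8(uint32_t address);

#define UART_BASE   0x40001000u          /* hypothetical base address    */
#define UART_DATA   (UART_BASE + 0x00u)  /* hypothetical data register   */
#define UART_STATUS (UART_BASE + 0x04u)  /* hypothetical status register */
#define TX_READY    0x01u                /* hypothetical status bits     */
#define RX_READY    0x02u

void uart_send(uint8_t byte) {
    while ((GET8(UART_STATUS) & TX_READY) == 0)
        ;                                /* wait for the transmitter */
    PUT8(UART_DATA, byte);
}

uint8_t uart_recv(void) {
    while ((GET8(UART_STATUS) & RX_READY) == 0)
        ;                                /* wait for a received byte */
    return GET8(UART_DATA);
}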
How many layers of abstraction sit between the driver and the actual hardware, and how and where they are placed, are often specific to the task and the hardware.
In computers, a hardware abstraction layer (HAL) is a layer of programming that allows a computer operating system to interact with a hardware device at a general or abstract level rather than at a detailed hardware level. Windows 2000 is one of several operating systems that include a hardware abstraction layer. The hardware abstraction layer can be called from either the operating system's kernel or from a device driver. In either case, the calling program can interact with the device in a more general way than it would otherwise.
As a kind of opposite to the question "Is low-level embedded systems programming hard for software developers?", I would like to ask for advice on moving from low-level embedded systems to programming for more advanced systems with an OS, especially embedded Linux.
I have mostly worked with small microcontroller hardware and software, but now doing software only. My education also consists of hardware and embedded things mainly. I haven't had many programming courses and don't know much about software design or OO coding.
Now I have a big project in my hands that is going to be done in embedded Linux. I have major problems with designing things and keeping them manageable, because I haven't really needed to do that before. Also, making use of multitasking and blocking calls instead of running "parallel" tasks from the main function is like another world.
What kind of experiences do you have on moving from low-level programming to bigger systems with OS (Linux)? What was hard and how did you solve it? What kind of mindset is needed?
Would it be worthwhile to learn C++ from zero or continue using plain C?
The main problem with using the Linux kernel to replace microcontroller systems is driving the devices you are interfacing with. For this you may have to write drivers. I would say stick with C as the language, because you are going to want to keep user space as clean as possible. Look into the uClibc library for a leaner C standard library.
http://www.uclibc.org/
You may also find busybox useful. This provides many userspace utilities as a single binary.
http://www.busybox.net/
Then it is simply a matter of booting from some storage into a live system and running some controlling logic through init that interfaces with your hardware. If need be, you can access the live system and run the busybox utilities. Really, the only difference is that the userspace is much leaner than in a normal distribution, and you will be working 'closer' to the kernel in terms of objectives.
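To show the change of mindset from a bare-metal main loop: the "controlling logic" in user space typically just opens a device node created by the driver and sits in a blocking read(), letting the kernel put it to sleep until data arrives. A minimal sketch, with /dev/my_sensor being a made-up node name:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/my_sensor", O_RDONLY);   /* node name is made up */
    if (fd < 0) { perror("open"); return 1; }

    unsigned char buf[64];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof buf);   /* blocks; no busy-wait */
        if (n <= 0) break;
        printf("got %zd bytes\n", n);
        /* ...act on the data here... */
    }
    close(fd);
    return 0;
}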
Also look into real-time Linux.
http://www.realtimelinuxfoundation.org/
That is worth looking at if you need some formal promise of task completion. I suspect the hardest bit will be booting/persistent storage, and interfacing with your hardware if it is exotic. If you are unfamiliar with Linux booting, then
http://www.cromwell-intl.com/unix/linux-boot.html
Might help.
In short, if you have not developed at a deep level for Linux, built your own distro, or have kernel experience then you might find the programming hard-going.
http://www.linuxdevices.com/ might also help.
Good Luck
In order to work with Unix/Linux you should get into the Unix philosophy: http://www.faqs.org/docs/artu/ch01s06.html
I consider the whole book a quite interesting read: http://www.faqs.org/docs/artu/index.html
Here you can find a free Linux distro for embedded targets plus bootloader to get you started: http://www.denx.de/wiki/DULG/WebHome
I was in a very similar predicament not too long ago. I bought and read Embedded Linux Primer, and it was a very helpful way to make the mental transition to a high-level OS (from a microcontroller perspective).
If you have the "time to 'take your time'," you could obviously make the transition. But if you need to get up to speed quickly, you may want to strongly consider getting a technical mentor to help guide you.
You also may find it useful to work your way into Linux by starting out with uClinux. It's basically Linux on a microcontroller. You could get a feel for the kernel, without the virtual memory aspect of it, as a transition. See if uClinux supports a microcontroller that you are already familiar with, and see how the kernel interacts with that architecture.
I agree that the Embedded Linux Primer book is great for getting your brain wrapped around embedded Linux. You're better off sticking with C for now. C++ can wait, and it's more useful for applications, not driver code.
When you're comfortable with how uClinux operates, then you could start out with a normal Linux kernel on a microprocessor architecture such as ARM that has an MMU and virtual memory.
Just my two cents!