Multiseat setup for fun and profit: hypervisors and other choices - hardware

I am a grad student, and I am considering setting up my dream home workstation/art tool/entertainment device/all-purpose everything. I'm wondering whether what I want to do is possible (and practical), and if so, I'd like some suggestions and warnings from people who know more about virtualization and hypervisors than I do:
Aim: Set up a 2-4 headed computing station optimized for using different OSes for the different tasks I do. I want to keep my work/play streams separated and have control over the resources each one is allowed. For example, one head would be Windows 10 for audiovisual work, media playing, and maybe some gaming. Another head would use Linux and be used mainly for data science (mostly R and Python), plus some hosting for purely local use (such as running an instance of the Galaxy bioinformatics server, which I only plan to access locally). Finally, I want a VM that is purely devoted to web browsing, probably some lightweight Linux distro.
I want each OS to have its own keyboard and monitor(s), but ideally I want to copy and paste between OSes. The idea is to just swivel my chair to move between operating systems, or even to have one person using each head.
What I think I need:
A hypervisor with PCI, USB, and network controller pass-through.
Two video cards, one each for my Windows and Linux workstations (with the web-browsing VM using the CPU's integrated graphics). Obviously, a motherboard and CPU that support full virtualization.
A USB card with multiple separate controllers, so that I can use a different controller for each OS. Something similar for network interface cards.
Separate SSDs for each OS and its apps.
Some sort of storage pool (probably ZFS-based) to hold the bulk of my files, shared so I can access them from any guest. Ideally, I'd like it to be in a separate enclosure, but I don't trust eSATA cables (they seem to fail frequently) and I care about the speed of database access, so I'll probably put the drives inside the main case, even though that will make future migration more annoying.
Something like SPICE for KVM, so that I can copy and paste freely between OSes.
Is there anything I am overlooking?
What hypervisor or similar solution is best for what I want to do? I am leaning towards KVM, but am far from committed. I will consider paid solutions if there is a compelling reason to use them.
What are some pitfalls I should be wary of?

KVM will work ideally here; there are a lot of tutorials around, and a lot of Intel-based configurations working like a charm.
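If you do go KVM, one pitfall to check early is IOMMU grouping: GPU and USB-controller passthrough only works cleanly when each device sits in its own IOMMU group. A rough sketch of that check on a Linux host (assumes intel_iommu=on or amd_iommu=on on the kernel command line; illustrative, not a polished script):

    # List each IOMMU group and the PCI devices inside it; the GPUs and
    # USB controllers you plan to pass through should be in groups of
    # their own (or grouped only with their own functions).
    for dev in /sys/kernel/iommu_groups/*/devices/*; do
        group=$(echo "$dev" | cut -d/ -f5)
        echo "IOMMU group $group: $(lspci -nns "${dev##*/}")"
    done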
ZFS by itself can't share your data between guests; you need an NFS or Samba share on the host machine.
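For what it's worth, exporting a ZFS dataset over NFS is only a couple of commands on the host; a minimal sketch (pool/dataset names and the guest-side address are made up):

    # On the host: create the dataset and let ZFS manage the NFS export.
    zfs create tank/shared
    zfs set sharenfs=on tank/shared
    # On a Linux guest: mount it across the virtual network
    # (192.168.122.1 is just an example host address).
    mount -t nfs 192.168.122.1:/tank/shared /mnt/shared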
Synergy is the software for you: it shares one keyboard, mouse, and clipboard across several machines.
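To give a flavor of the Synergy setup, here is a hypothetical two-head configuration written as a shell snippet (screen names are placeholders; synergys is the server binary):

    # Describe the screen layout, then start the Synergy server on the
    # machine that owns the physical keyboard and mouse.
    cat > synergy.conf <<'EOF'
    section: screens
        windows-head:
        linux-head:
    end
    section: links
        windows-head:
            right = linux-head
        linux-head:
            left = windows-head
    end
    EOF
    synergys -c synergy.conf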

Related

Basic virtualization questions

Excuse me for my lack of knowledge but I am really new to the Virtual world and have a few questions.
I work for a small charity that specialises in providing basic IT training. We have recently acquired a few Dell PowerEdge 2650 servers and Dell desktops, and we wish to offer XP, Windows 7, Mac and Ubuntu training. I am looking at setting up a virtual environment so that we can have a standard image for each OS (I currently use image files, but it takes approximately 25 minutes to build each machine, and multi-boot is not an option as the new machines have 20 GB disks).
The servers are all dual processor, and we can purchase more memory (I need to justify the cost).
What are the memory requirements for the host?
How many VMs can I run per server?
Can I run multiple instances of the same VM?
Thanks in advance for your knowledge.
Darryn
You might be able to get away with a multi-boot option on those 20 gig disks; each OS will probably take no more than ten gigs for a minimal install, and two OSes per machine isn't terrible. (Incidentally, look around for a group like FreeGeek in your area -- hard drives in smallish sizes like 120-500 gigs ought to be cheap.)
That said, virtualization might be just what you need, if you have a handful of pretty powerful machines.
I think between one and two gigabytes of host memory for every guest VM that you want to run would be very useful. At least in my experience, an Ubuntu image I gave 1024 megabytes to ran very quickly, but I didn't press it very far. Running Firefox or OpenOffice inside the VM would probably dictate more memory very quickly. Chrome seemed snappy.
So, if you've got 12 gigabytes of RAM, you might be able to get between four and twenty virtual machines hosted on the machine simultaneously, depending upon what your guests are doing.
As for disk space, if you use QEMU's -snapshot option, you ought to be able to save disk space. Each user could boot the same underlying disk image, but their own modifications would go into the 'snapshot' file. (I have no experience trying to do long-term system maintenance with this option, so it could be that all twenty of your users need to store service pack 2 contents when they upgrade in the future; I'd be scared of trying to modify the shared disk image once you've got snapshots of it running. Perhaps having everyone store 'personal documents' and the like in CIFS shares would make a ton of sense.)
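As a rough sketch of that copy-on-write idea (image names are examples, and flags vary a bit between QEMU versions):

    # Give each user a small overlay backed by the shared base image;
    # all of their writes land in the overlay file.
    qemu-img create -f qcow2 -b ubuntu-base.img user1.qcow2
    qemu -hda user1.qcow2 -m 1024
    # Or boot the base image with -snapshot, so writes go to a
    # temporary file that is discarded when the VM exits.
    qemu -hda ubuntu-base.img -m 1024 -snapshot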
The biggest hurdle will probably be Mac: because Apple's license agreement forbids running OS X on non-Apple hardware, you'll have to have some Apple machines around to run VirtualBox.

How to find an embedded platform?

I am new to the hardware-sourcing side of embedded programming, so after being completely overwhelmed by all the choices out there (PC104, custom boards, a zillion options for each board, volume discounts, devel kits, ahhh!!) I am asking here for some direction.
Basically, I must find a new motherboard and (most likely) re-implement the program logic. Rewriting this in C/C++/Java/C#/Pascal/BASIC is not a problem for me, so my real problem is finding the hardware. This motherboard will have several other devices attached to it. Here is a summary of what I need to do:
Required:
2 RS232 serial ports (one used all the time for primary UI, the second one not continuous)
1 modem (9600+ baud ok) [Modem will be in simultaneous use with only one of the serial port devices, so interrupt sharing with one serial port is OK, but not both]
Minimum permanent/long term storage: Whatever O/S requires + 1 MB (executable) + 512 KB (Data files)
RAM: Minimal, whatever the O/S requires plus maybe 1MB for executable.
Nice to have:
USB port(s)
Ethernet network port
Wireless network
Implementation languages (any O/S I will adapt to):
First choice Java/C# (Mono ok)
Second choice is C/Pascal
Third is BASIC
OK, given all this, I am having a lot of trouble finding hardware that supports it at low cost. Every manufacturer site I visit has a lot of options, and it's difficult to see if their offering will even satisfy my must-have requirements (for example, they sometimes list three "serial ports," but it appears that only one of the three is RS232, and they don't mention what the other two are). The #1 constraint is cost, #2 is size.
Can anyone help me with this? This little task has left me thinking I should have gone for EE and not CS :-).
EDIT: A bit of background: This is a system currently in production, but the original programmer passed away, and the current hardware manufacturer cannot find hardware to run the (currently) DOS system, so I need to reimplement this in a modern platform. I can only change the programming and the motherboard hardware.
I suggest buying a cheap Atom Mini-ITX board, some of which come with multiple (4+) RS232 ports.
But with serial-to-USB converters, this isn't really an issue. Just get an Atom. And if you have code, port your software to Linux.
Here is a link to a Jetway Mini-ITX board, and a link to a 4-port RS232 expansion module for it: ~$170 total, plus some extra for memory, a disk, a case, and a PSU. $250-$300 total.
Now here is an Intel Atom Board at $69 to which you could add flash storage instead of drives, and USB-serial converters for any data collection you need to do.
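If you end up on an Atom board under Linux with USB-serial converters, smoke-testing a port is quick; a hypothetical example (the device name and modem responses will vary):

    # Configure the adapter for 9600 baud, 8 data bits, no parity,
    # 1 stop bit, then poke the attached Hayes-style modem.
    stty -F /dev/ttyUSB0 9600 cs8 -cstopb -parenb
    printf 'AT\r' > /dev/ttyUSB0
    cat /dev/ttyUSB0    # expect "OK" back if the modem is listening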
PC104 has a lot of value in maximizing the space used in 19" or 23" rackmount configurations - if you're not in that space, PC104 is a waste of your time and money, IMHO.
The BeagleBoard should have everything you need for $200 or so - it can run Linux so use whatever programming language you like.
A 'modern' system will run DOS so long as it is x86. I suggest that you look at an industrial PC board from a supplier such as Advantech; your existing system may well run unchanged if it adheres to PC/DOS/BIOS standards.
That said, if your original system runs on DOS, the chances are that you do not need the horsepower of a modern x86 system, and you can save money by using a microcontroller board based on something fairly ubiquitous such as an ARM. Also, if DOS was the OS, then you most likely do not need an OS at all and could develop the system "bare-metal". The resources necessary just to support Linux are probably far greater than your existing application and OS together, and for little or no benefit unless you intend to extend the capability of the system considerably.
There are a number of resources available (free and commercial) for implementing a file system and USB on a bare-metal system, or on a system using a simple real-time kernel such as FreeRTOS or eCos, which have far smaller footprints than Linux.
The Windows embedded site ( http://www.microsoft.com/windowsembedded/en-us/default.mspx )
has a lot of resources and links to hardware partners, distributors and development kits. There's even a "Spark" incubation project ( http://www.microsoft.com/windowsembedded/en-us/community/spark/default.mspx )
What's also really nice about using Windows CE is that it now supports Silverlight as a development environment.
I've used the Jetway boards and daughter cards that Chris mentioned with success for various projects: embedded control, my home router, my HTPC front end.
You didn't mention what the actual application was, but if you need something more industrial due to temperature or moisture constraints, I've found http://www.logicsupply.com/ to be a good resource for mini-ITX systems that can take a beating.
A tip for these boards: given your minimal storage requirements, don't use a hard drive. Use an IDE adapter for a CompactFlash card as the system storage, or an SD card. No moving parts is usually a big plus in these applications. They also usually offer models with DC power input, so you can use a laptop-style or wall-wart external supply, which minimizes the final size.
The fit-PC ( http://www.fit-pc.com/web/ ) is another option in the very-small Atom PC market; you'd likely need some USB converters to get to your desired connectivity.
The BeagleBoard Paul mentioned is also a good choice; there are daughter cards for that as well that will add whatever ports you need, and it has an on-board SD card reader for whatever storage you need. It is also a substantially lower-power option than the Atom systems.
There are a ton of single board computers that would fit your needs. When searching you'll normally find that they don't keep many interface connectors on the processor board itself but rather you need to look at the stackable daughter cards they offer which would provide whatever connections you need (RS-232, etc.). This is often why you see just "serial port" in the description as the final physical layer for the serial port will be defined on the daughter card.
There are a ton of ARM-based development boards you could also use, too many to list, similar to the BeagleBoard. Googling for "system on module" is a good way to find many options. These are usually a module with the processor/RAM/flash on one card, plus various carrier boards the module plugs into that provide the forms of connectivity you need.
In terms of development, the Atom boards will likely be the easiest if you're more familiar with x86 development. ARM is strongly supported under Linux, though, so there is little difficulty in getting those up and running.
Personally, I would avoid Windows for a headless design like you're discussing; I rarely see a Windows-based embedded device that isn't just bad.
Take a look at one of the boards in the Arduino line, in particular the Arduino Mega. They are very flexible boards at a low cost, and the Mega has enough I/O ports to do what you need. There is no on-chip modem, but you can connect to something like a Philips PCD3312C over the I2C connector, or you can find an Arduino add-on board (called a "shield") to give you modem functionality (or Bluetooth, Ethernet, etc.). Also, these are very easy to connect to an external memory device (like a flash drive or an SD card), so you should have plenty of storage space.
For something more PC-like, look for an existing device powered by a VIA EPIA board. There are a lot of devices out there that use these (set-top boxes, edge routers, network security devices, etc.) that you can buy and re-program. For example, I found a device that was supposed to be a network security appliance. It came with the EPIA board, RAM, a hard drive, and a power supply. All I had to do was format the hard drive and install Linux (Debian had all the necessary drivers already included), and I had a complete mini-computer ready to go. It only cost me around $45, too (bought brand new, unopened, on eBay).
Update: The particular device I found was an EdgeSecure i10 from Ingrian Networks.

Where do you draw the line between what is "embedded" and what is not?

ASIDE: Yes, this can be considered a subjective question, but I hope to draw conclusions from the statistics of the responses.
There is a broad spectrum of computing devices. They range in physical sizes, computational power and electrical power. I would like to know what embedded developers think is the determining factor(s) that makes a system "embedded." I have my own determination that I will withhold for a week so as to not influence the responses.
I would say "embedded" is any device on which the end user doesn't normally install custom software of their choice. So PCs, laptops and smartphones are out, while XM radios, robot controllers, alarm clocks, pacemakers, hearing aids, the doohickey in your engine that regulates fuel injection etc. are in.
You might just start with wikipedia for a definition
http://en.wikipedia.org/wiki/Embedded_system
"An embedded system is a computer system designed to perform one or a few dedicated functions, often with real-time computing constraints. It is embedded as part of a complete device often including hardware and mechanical parts. "
Coming up with a concrete set of rules for what an embedded system is, is to a large degree pointless. It's a term that means different things to different people - maybe even different things to the same people at different times.
There are some things that are pretty much never considered an embedded system, for example a Windows desktop machine. However, there are companies that put their software on a Windows box - even a bog-standard PC (maybe a laptop) - and set things up so their application loads automatically and hides the desktop. They sell that as a single-purpose machine that many people would call an embedded system (but many people wouldn't). Microsoft even sells a set of tools called Windows Embedded that helps enable these kinds of applications, though it's targeted more at OEMs who will customize the system at least somewhat rather than just putting it on a standard PC. Windows Embedded is used for things like ATMs and many other devices. I think that most people would consider an ATM an embedded system.
But go into a 7-11 with an ATM that has a keyboard (I honestly don't know what the keyboard is for), press the right Shift key five times, and you'll get a nice Windows "StickyKeys" message box (I wonder if there's an exploit there - I sure hope not). So there's a Windows system there, just hidden and with some functionality removed - maybe not as much as the manufacturer would like. If you could somehow convince it to open notepad.exe, does the ATM suddenly stop being an embedded system?
Many, many people consider something like the iPhone or the iTouch an embedded system, but they have nearly as much functionality as a desktop system in many ways.
I think most people's definition of an embedded system might be similar to Justice Potter Stewart's definition of hard-core pornography:
I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it...
I consider an embedded system one where the software is rarely developed directly on the target system. This definition includes sophisticated embedded systems like the iPhone, and excludes primitive desktop systems like the Commodore 64. Not having the development tools on the target means you have to add 'reprogram device' to the edit-compile-run cycle. Debugging is also made more complicated. This encompasses most of the embedded "feel."
Software implemented in a device not intended as a general purpose computing device is an "embedded system".
Typically the system is intended for a single purpose, and the software is static.
Often the system interacts with non-human environmental inputs (sensors) and mechanical actuators, or communication with other non-human systems.
That's off the top of my head. Other views can be read at this embedded.com article
Main factors:
Installed in a fixed place somewhere (you can't carry the device itself around, only the thing it's built into)
They run a long time (often years) with little maintenance
They don't get patched often
They are small, use little power
Small or no display
+1 for a great question.
Like many things there is a spectrum.
At the "totally embedded" end you have devices designed for a single purpose. Alarm clocks, radios, cameras. You can't load new software and make it do something else. THere is no support for changing the hardware,
At the "totally non-embedded" end you have your classic PCs where everything, both HW and SW, can be replaced.
There's still a lot in between those extremes. Laptops and netbooks, for example, have minimally expandable HW; typically only the memory and hard disk can be upgraded. But the SW can be whatever you want.
My education was as a computer engineer, so my definition of embedded is hardware oriented. I draw the line at the MMU (memory management unit). If a chip has an MMU, it usually has off-chip RAM and runs an OS. If a chip does NOT have an MMU, it usually has on-board RAM and runs an RTOS, microkernel or custom executive.
This means I usually dismiss anything running linux, which is shortsighted. I admit my answer is biased towards where I tend to work: microcontroller firmware. So I am glad I asked this question and got a full spectrum of responses.
Quoting a paragraph I've written before:
An embedded system for our purposes is a computer system that has a specific and deterministic functionality \cite{LamieReal}. Typically, processors for embedded systems contain elements such as onboard RAM and special-purpose processing elements such as a digital signal processor, analog-to-digital and digital-to-analog converters. Since the processors have more flexibility than a straightforward CPU, a common term is microcontroller.

Virtual desktop environment for development

Our network team is thinking of setting up a virtual desktop environment (via Windows 2008 virtual host) for each developer.
So we are going to have dumb terminals/laptops and should be using the virtual desktops for all of our work.
Ours is a Microsoft shop, and we work with all versions of the .NET Framework. Not having the development environments on the laptops is making the team uncomfortable.
Are there any potential problems with that kind of setup? Is there any reason to be worried about this setup?
Unless there's a very good development-oriented reason for doing this, I'd say don't.
Your developers are going to work best in an environment they want to work in. Unless your developers are the ones suggesting it and pushing for it, you shouldn't be instituting radical changes in their work environments without very good reasons.
I personally am not at all a fan of remote virtualized instances for development work, either. They're often slower, you have to deal with network issues and latency, you often don't have as much control as you would on your own machine. The list goes on and on, and little things add up to create major annoyances.
What happens when the network goes down? Are your devs just supposed to sit on their hands? Or maybe they could bring cards and play real solitaire...
Seriously, though: unless you have virtually 100% network uptime and your devs never work off-site (say, from home), I'm on the "this is a Bad Idea" side.
One option is to get rid of your network team.
Seriously though, I have worked with this same type of setup through VMWare and it wasn't much fun. The only reason why I did it was because my boss thought it might be worth a try. Since I was newly hired, I didn't object. However, after several months of programming this way, I told him that I preferred to have my development studio on my machine and he agreed.
First, the graphical interface isn't really crisp on a virtual workstation, since the server is sending rendered images over the network rather than having your video card's driver draw them locally. Constantly viewing this gave me a headache.
Second, any install of components or tools required the network administrator's help, which meant I had to hurry up and wait.
Third, your own computer will process one application faster than a server can process many apps, and on top of that it has to send the rendered image over the network. It doesn't sound like it would slow you down, but it does. Again, hurry up and wait.
Fourth, this may be specific to VMware, but the virtual disk size was fixed at 4 GB, which my network guy seemed to think was enough. It filled up rather quickly. To expand the drive, I had to wait for the network admin to run Partition Magic on it, which screwed it up, and I had to have him rebuild my installation.
There are several more reasons, but I would strongly encourage you to protest if you can. Your company is probably trying to implement this because it's a new fad and it can be a way for them to save money. However, your wasted productivity needs to be counted as a cost too.
Bad Idea. You're taking the most critical tool in your developers' arsenal and making it run much, much, much slower than it needs to, and introducing several critical dependencies along the way.
It's good if you ever have to develop on-site: you can move your dev environment to a laptop and hit the road.
I could see it being required for some highly confidential multi-client work: it gives you proof that you didn't leak any test data or debug files from one customer to another.
Down sides:
Few VMs support multiple monitors, and without multiple monitors you can't be a productive developer.
Only VirtualBox 3 comes close to letting you develop for OpenGL/ActiveX in a VM.
In my experience, virtual environments are ideal as test environments (for testing deployments), not as development environments. They are great as a blank slate / clean sheet for testing. I think the risk of alienating your developers is high if you pursue this route. Developers should have the best tools at their disposal, i.e. a high-spec laptop or desktop; this keeps morale and productivity high.
Going down this route precludes any home working, which may or may not be an issue. Virtual environments are by their nature slower than dedicated environments, and you may also have issues with multiple-monitor setups on a VM.
If you go that route, make sure you bench the system aggressively before any serious commitment.
My experience of remote desktops is that it's ok for occasional use, but seldom sufficient for intensive computations and compilation typical of development work, especially at crunch time when everyone needs resources at the same time.
Not sure if this will affect you, but both VMware and Virtual PC run very slowly when viewed via Remote Desktop. For some reason Radmin ( http://www.radmin.com/ ) does a much better job.
I regularly work with remote development environments and it is OK (although it takes some time to get used to keeping track of which system you're working in at any moment ;) ) - but most of the time I'm alone on the system.

Carrying and Working on an Entire Development Box from a USB Stick. Feasible?

Lately I have been thinking about investing in a worthy USB pen drive (something along the lines of this), and installing operating systems on virtual machines to start developing on them.
What I have in mind is to be able to carry my development boxes around: a Windows installation for .NET development and a Linux distribution for stuff like RoR, Perl and whatnot, so that I can use them wherever need be... be it work, school, different computers at home, etc...
I am thinking of doing this for backup purposes too, i.e. backing up my almost-single VM file to an external HD instead of doing routine backups of my normal Windows box. I am also thinking about maybe even committing the VM images to source control (is that even feasible?).
So, am I on the right track with this? Would you suggest I try to implement it?
How feasible is it to have your development box on a virtual machine that runs from a USB pen drive?
I absolutely agree with where you are heading. I wish to do this myself.
But if you don't already know: it's not just about drive size. Believe it or not, USB flash drives can be much slower than your spinning disk drives!
This can be a big problem if you plan to actually run the VMs directly from the USB drive.
I've tried running a 4 GB Windows XP VM on a 32 GB Corsair Survivor, and the VM was virtually unusable! Copying my 4 GB VM off and back onto the drive was also quite slow - about 10 minutes to copy it onto the drive.
If you have an eSATA port, I'd highly recommend looking at high-speed eSATA options like this Kanguru 32 GB eSATA/USB flash drive or this 32 GB one by OCZ.
The read and write speeds of these drives are much higher over eSATA than over USB. And you can still use them over USB if you don't have an eSATA port. If you don't, you can buy PCI-to-eSATA cards online, and even eSATA ExpressCards for your laptop.
EDIT: A side note: you'll find that USB flash drives ship with FAT instead of NTFS. You don't want to use NTFS because it makes a lot more reads and writes on the disk, and your drive only has a limited number of write cycles before it dies. But by using FAT you'll be limited to a maximum file size of 4 GB (2 GB on FAT16), which might be a problem for your VM. If so, you can split your VM disks into 2 GB chunks. Also make sure you back up your VM daily in case your drive does reach its maximum number of writes. :)
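As a hedged sketch of the splitting step: qemu-img can rewrite a monolithic disk image as VMware-style 2 GB sparse extents (the file names here are examples):

    # Convert one big VMDK into a set of <2 GB extents so every file
    # fits under the FAT file-size cap.
    qemu-img convert -O vmdk -o subformat=twoGbMaxExtentSparse \
        dev-box.vmdk dev-box-split.vmdk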
This article on USB thumbdrives states,
Never run disk-intensive applications
directly against files stored on the
thumb drive.
USB thumbdrives utilize flash memory and these have a maximum number of writes before going bad and corruption occurs. The author of the previously linked article found it to be in the range of 10,000 - 100,000 writes but if you are using a disk intensive application this could be an issue.
So if you do this, have an aggressive backup policy for your work. Similarly, when you run your development suite, if it could write to the local hard drive as a temporary workspace, that would be ideal.
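A minimal sketch of both ideas, assuming a Unix-ish setup where the stick is mounted at /media/stick and the local disk has a /scratch area (all paths are made up):

    # Keep build temp files off the flash drive to limit write wear.
    export TMPDIR=/scratch/tmp
    mkdir -p "$TMPDIR"
    # Aggressive backup: mirror the work tree from the stick to the
    # local disk (run this often, e.g. from cron).
    rsync -a --delete /media/stick/work/ /scratch/backup/work/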
Hopefully you are talking about interpreted-language projects. I couldn't imagine compiling a C/C++ project of any size on a VM, let alone a VM running off a USB drive.
I do it quite frequently with Xen, but I also include a bare-metal bootable kernel on the drive. This is particularly useful when working on something that a live CD will be based on.
The bad side is the bloat on the VM image needed to keep it bootable across many machines: where you would normally build a very lean and mean paravirtualized kernel only, you also have to include one that has everything including the kitchen sink (up to whatever you want, i.e. do you need audio, or Token Ring, etc.?).
I usually carry two sticks, one has Xen + a patched Linux 2.6.26, the other has my various guest images which are ready to boot either way. A debootstrapped copy of Debian or Ubuntu makes a great starting point to create the former.
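For the curious, that debootstrap starting point is essentially a one-liner; a rough sketch (the suite, target directory, and mirror are examples):

    # Install a minimal Debian system into a directory, which can then
    # be packed into a guest image and pointed at by a Xen config.
    mkdir -p /mnt/guest
    debootstrap --arch i386 lenny /mnt/guest http://ftp.debian.org/debian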
If nothing else, it's fun to tinker with. Sorry to be a bit GNU/Linux-centric, but that's what I use exclusively :) I started messing around with this when I had to find an odd path to upgrading my distro, which was two years behind the current one. So, I strapped a guest, installed what I wanted, and pointed GRUB at the new LV for my root file system. Inside, I just mounted my old /home LV and away I went.
Check out MojoPac:
http://www.mojopac.com/
Hard-core gamers use it to take World of Warcraft with them on the go; it should work fine for your development needs, at least on Windows. Use Cygwin with it for your Unix dev needs.
I used to do this, and found that compiling was so deathly slow, it wasn't worth it.
Keep in mind that USB flash drives are extremely slow (maybe 10 to 100 times slower) compared to hard drives at random write performance (writing lots of small files to a partition which already has lots of files).
A typical build process using GNU tools will create lots of small files - a simple configure script creates thousands of small files and deletes them again just to test the environment before you even start compiling. You could be waiting a long time.
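If you want to see this for yourself, counting the file operations a configure run makes is easy on Linux; a hypothetical sketch:

    # Log every open/creat/unlink the configure script (and its
    # children) performs, then count them.
    strace -f -o /tmp/configure.trace -e trace=open,creat,unlink ./configure
    wc -l /tmp/configure.trace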