What is the difference in physical behavior of hardware on bare metal vs VM with IOMMU passthrough?

I have some dated equipment used to run an experimental apparatus. Unfortunately, that equipment will only run on WinXP using FireWire/IEEE 1394, which is becoming more and more of a pain for us to maintain hardware-wise. We also don't have the money to replace this equipment. We discussed perhaps trying to virtualize the XP environment on a newer OS. I'd been reading about VFIO/IOMMU and figured maybe I could pass the FireWire PCI cards through and just do it that way.
Plus side - I got it to work. I installed XP under a QEMU-KVM hypervisor, got it set up, passed the FireWire cards through, and everything was recognized in the VM, including when I attached the equipment to the FireWire cards. XP's Device Manager saw that it was all there.
Unfortunately, I've found that the actual interaction with the hardware seems to be touchy at best. Things misbehave in weird, unexplainable ways. Some of those made me think that the guest OS wasn't communicating with the passed-through cards correctly. This was surprising, as I was under the impression that passed-through cards were used directly by the guest OS without host OS intervention.
My question is basically: if I'm virtualizing an older system and passing through the various ports/cards needed, should it behave as if it were bare metal? Or are there circumstances where what the guest OS tries to do is not the same as it would be on bare metal (i.e., the host OS changes something when the instruction leaves the VM)? As I said, I was under the impression that the guest OS was interacting with the hardware directly, but experience has made me question whether this is actually the case.
Part of the reason I want to know this is that there is other equipment that would be more dangerous, or could lead to damage of the equipment, if it behaved unexpectedly (e.g., lasers whose power is computer-controlled) that uses other hardware interfaces. So, if there is a risk of what the guest OS thinks it's doing being disconnected from the actual reality, that's a safety risk I want to understand before going forward.

Related

How do you access the secret area of any USB device?

We know interfaces based on the "vtable" principle. Once you have a pointer to an object, you can narrow-cast it to an interface, and the new object is the same object but is limited to that interface. I always thought hardware firmware was somewhat similar. For example, for block devices (HDDs or SSDs), this interface would be something like read, write, status, and similar. So a driver is a user of such a device interface.
As it turns out, any storage device has firmware, and a special area of its storage marked as internal where the firmware is saved. Manufacturers release programs that allow you to "flash" their specific devices, e.g. by writing a new program to this internal space, hidden from the OS.
My question is: on a software level, how do they perform these read/write operations to the "hidden" area of a drive? How are dead "COM ports" related?
If HDDs work across all OSes, why is firmware upgrade software only released for Windows? In the open-source world of Linux, what do I need to read to understand "debugging firmware" better?
We know interfaces based on the "vtable" principle. Once you have a pointer to an object, you can narrow-cast it to an interface, and the new object is the same object but is limited to that interface. I always thought hardware firmware was somewhat similar. For example, for block devices (HDDs or SSDs), this interface would be something like read, write, status, and similar. So a driver is a user of such a device interface.
No, not really. Object-oriented programming is unrelated to PC hardware, and the impression that virtual calls are comparable to device drivers is misguided. They're completely unrelated.
As it turns out, any storage device has firmware, and a special area of its storage marked as internal where the firmware is saved. Manufacturers release programs that allow you to "flash" their specific devices, e.g. by writing a new program to this internal space, hidden from the OS.
This is not true. Not all storage devices have firmware - and whatever firmware they have (if any) is not necessarily stored on rewritable flash-storage. ROM chips exist, for example, which are not rewritable.
My question is: on a software level, how do they perform these read/write operations to the "hidden" area of a drive? How are dead "COM ports" related?
If you're referring to firmware updates of modern (post-2004) SATA and NVMe storage devices, then those devices' firmware can be updated using SATA and NVMe's built-in commands.
This is documented in places like the T13.org ATA/ATAPI Command Set - 4 specification.
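To make "built-in commands" concrete, here is a minimal, read-only sketch - not any vendor's updater - assuming a Linux host and a hypothetical controller node at /dev/nvme0. It issues the standard Identify Controller admin command and prints the controller's firmware revision field, just to show that firmware is reachable through the same admin command set that Firmware Image Download (opcode 0x11) and Firmware Commit (0x10) use:

    #include <fcntl.h>
    #include <linux/nvme_ioctl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical device node; the admin ioctl goes to the controller
           character device (nvme0), not a namespace block device (nvme0n1). */
        int fd = open("/dev/nvme0", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        unsigned char id[4096];              /* Identify Controller data buffer */
        struct nvme_admin_cmd cmd = {0};
        cmd.opcode   = 0x06;                 /* Identify (admin opcode)          */
        cmd.cdw10    = 1;                    /* CNS=1: Identify Controller       */
        cmd.addr     = (uint64_t)(uintptr_t)id;
        cmd.data_len = sizeof(id);

        if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) {
            perror("NVME_IOCTL_ADMIN_CMD");
            return 1;
        }

        /* The firmware revision ("FR") field occupies bytes 64-71 of the
           Identify Controller structure; firmware download/commit travel
           down this same admin command path. */
        printf("firmware revision: %.8s\n", (char *)id + 64);
        close(fd);
        return 0;
    }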
If HDDs work across all OSes, why is firmware upgrade software only released for Windows? In the open-source world of Linux, what do I need to read to understand "debugging firmware" better?
why is firmware upgrade software only released for Windows
Because Windows is the predominant operating system used by owners of those kinds of hardware. While the firmware can be updated using raw SATA/NVMe commands, you still need a host operating system to run the program that will issue those commands. Suppose it costs $100k to build a firmware updater for an SSD on Windows and another $100k for Linux ($200k for both) - but 90% of your Linux users also run Windows. Why spend $200k for 100% coverage when you can spend $100k for 90% coverage, spend the other $100k on a Ferrari or a Tesla Model X P100D for yourself, and blame the remaining users for not booting from a Windows USB stick to upgrade their firmware? (Side note: I chose the latter, and yes, I really do love my Tesla Model X.)
You cannot have a program that just magically runs on any computer platform (Windows, BSD, Linux, macOS, QNX, etc.) and updates peripheral device firmware: it always needs to be a program that can be executed by a host OS. (You can argue that UEFI/EFI is a platform-agnostic approach, but in reality UEFI/EFI is still its own platform.)
In the open-source world of Linux, what do I need to read to understand "debugging firmware" better?
200mg of Adderall and a pirated copy of IDA Pro.
...or 500mg of Dexedrine and NSA Ghidra.
It depends on the exact type of block device and how it is interfaced to the PC. A very common interface is SATA, which can be used directly with a SATA controller in a home PC - or it can be reached through a USB-SATA bridge.
If we take SATA as an example, there exists a special command in the SATA protocol known as "Download Microcode" (command ID 0x92) - which exists solely for the purpose of transferring new firmware to the drive controller.
The firmware is typically not stored on a "hidden area of the drive" itself, as you indicate - it is typically stored in flash memory or similar on the drive controller PCB, or within the drive controller IC.
There are no "dead COM ports" involved in this.
The reason why hard drive vendors sometimes release firmware update tools only for Windows is probably simply that most of their customers use Windows, and it is cheaper for them to support just that one platform.
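For the SATA side, a similarly hedged, read-only sketch, assuming a Linux host and a hypothetical disk at /dev/sda: instead of sending the destructive "Download Microcode" command, it issues IDENTIFY DEVICE (0xEC) through the SCSI/ATA Translation pass-through CDB (0x85) and prints the firmware revision string from IDENTIFY words 23-26. A real flashing tool would push the new image down this same SG_IO path, using command 0x92 with a PIO Data-Out protocol and the appropriate subcommand/block-count fields.

    #include <fcntl.h>
    #include <scsi/sg.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/sda", O_RDONLY);   /* hypothetical device node, needs root */
        if (fd < 0) { perror("open"); return 1; }

        unsigned char cdb[16] = {0};
        unsigned char data[512];
        unsigned char sense[32];

        cdb[0]  = 0x85;      /* ATA PASS-THROUGH (16)                          */
        cdb[1]  = 4 << 1;    /* protocol 4: PIO Data-In                        */
        cdb[2]  = 0x0E;      /* T_DIR=from device, BYT_BLOK=1, T_LENGTH=count  */
        cdb[6]  = 1;         /* sector count: one 512-byte block               */
        cdb[14] = 0xEC;      /* ATA command: IDENTIFY DEVICE                   */

        struct sg_io_hdr io = {0};
        io.interface_id    = 'S';
        io.cmdp            = cdb;
        io.cmd_len         = sizeof(cdb);
        io.dxferp          = data;
        io.dxfer_len       = sizeof(data);
        io.dxfer_direction = SG_DXFER_FROM_DEV;
        io.sbp             = sense;
        io.mx_sb_len       = sizeof(sense);
        io.timeout         = 5000;             /* milliseconds */

        if (ioctl(fd, SG_IO, &io) < 0) { perror("SG_IO"); return 1; }

        /* Firmware revision: IDENTIFY words 23-26 (bytes 46-53), two ASCII
           characters per word with the bytes swapped within each word. */
        char fw[9];
        for (int i = 0; i < 8; i += 2) {
            fw[i]     = data[46 + i + 1];
            fw[i + 1] = data[46 + i];
        }
        fw[8] = '\0';
        printf("firmware revision: %s\n", fw);
        close(fd);
        return 0;
    }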

Multiseat setup for fun and profit: hypervisors and other choices

I am a grad student, and I am considering setting up my dream home workstation/art tool/entertainment device/all-purpose everything. I'm wondering if what I want to do is possible (and practical), and if so, get some suggestions and warnings from people who know more about virtualization and hypervisors than I do:
Aim: Set up a 2-4 headed computing station that is optimized for using different OSs for the different tasks I do. I want to keep my work/play streams separated, and have control over the resources that each one is allowed. For example, one head would be Windows 10 for audiovisual work, media playing, and maybe some gaming. Another head would use Linux and be used mainly for data science (mostly R and Python), and some hosting for purely local use (such as running an instance of the Galaxy bioinformatics server, which I only plan to access locally). Finally, I want a VM that is purely devoted to web browsing, probably some lightweight Linux distro.
I want each OS to have its own keyboard and monitor(s), but ideally I want to copy-paste between OSs. The idea is to just swivel my chair to move between operating systems, or even to have one person using each.
What I think I need:
A hypervisor with PCI, USB, and network controller pass-through.
Two video cards, one each for my Windows and Linux workstations (with the web-browsing VM using the on-chip CPU graphics). Obviously, a mobo and CPU that support full virtualization.
A USB card with multiple separate controllers, so that I can use a different controller for each OS. Something similar for network interface cards.
Separate SSDs for each OS and its apps.
Some sort of storage pool (probably ZFS based) to hold the bulk of my files, shared so I can access them from either guest. Ideally, I'd like it to be in a separate enclosure, but I don't trust eSATA cables (they seem to fail frequently) and care about speed of database access, so I'll probably put the drives inside the main case, even though that will make future migration more annoying.
Something like SPICE for KVM, so that I can copy and paste freely between OS's.
Is there anything I am overlooking?
What hypervisor or similar solution is best for what I want to do? I am leaning towards KVM, but am far from committed. I will consider paid solutions if there is a compelling reason to use them.
What are some pitfalls I should be wary of?
KVM will work well here; there are a lot of tutorials, and a lot of Intel-based configurations work like a charm.
ZFS by itself can't share your data between guests; you need an NFS or Samba share on the host machine.
Synergy is the software for you (it shares a keyboard, mouse, and clipboard across machines).

How to find an embedded platform?

I am new to the hardware-sourcing side of embedded programming, so after being completely overwhelmed by all the choices out there (PC/104, custom boards, a zillion options for each board, volume discounts, devel kits, ahhh!!), I am asking here for some direction.
Basically, I must find a new motherboard and (most likely) re-implement the program logic. Rewriting this in C/C++/Java/C#/Pascal/BASIC is not a problem for me, so my real problem is finding the hardware. This motherboard will have several other devices attached to it. Here is a summary of what I need to do:
Required:
2 RS232 serial ports (one used all the time for primary UI, the second one not continuous)
1 modem (9600+ baud ok) [Modem will be in simultaneous use with only one of the serial port devices, so interrupt sharing with one serial port is OK, but not both]
Minimum permanent/long-term storage: whatever the OS requires + 1 MB (executable) + 512 KB (data files)
RAM: minimal; whatever the OS requires plus maybe 1 MB for the executable.
Nice to have:
USB port(s)
Ethernet network port
Wireless network
Implementation languages (any O/S I will adapt to):
First choice Java/C# (Mono ok)
Second choice is C/Pascal
Third is BASIC
OK, given all this, I am having a lot of trouble finding hardware that supports this and is low in cost. Every manufacturer site I visit has a lot of options, and it's difficult to see if their offering will even satisfy my must-have requirements (for example, they sometimes list 3 "serial ports", but it appears that only one of the three is RS232, and they don't mention what the other two are). The #1 constraint is cost, #2 is size.
Can anyone help me with this? This little task has left me thinking I should have gone for EE and not CS :-).
EDIT: A bit of background: This is a system currently in production, but the original programmer passed away, and the current hardware manufacturer cannot find hardware to run the (currently) DOS system, so I need to reimplement this in a modern platform. I can only change the programming and the motherboard hardware.
I suggest buying a cheap Atom Mini-ITX board, some of which come with multiple (4+) RS232 ports.
But with Serial->USB converters, this isn't really an issue. Just get an Atom. And if you have code, port your software to Linux.
Here is a link to a Jetway Mini-ITX board, and a link to a 4-port RS232 expansion module for it. ~$170 total, plus some extra for memory, a disk, a case, and a PSU. $250-$300 total.
Now here is an Intel Atom Board at $69 to which you could add flash storage instead of drives, and USB-serial converters for any data collection you need to do.
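One reassuring detail about the USB-serial route: if you do port the software to Linux, a native RS232 header and a USB-serial adapter look identical to user space (just /dev/ttyS* versus /dev/ttyUSB*), so the choice of board doesn't dictate the serial code. A minimal sketch, with a hypothetical port name and the 9600 baud figure from the requirements:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);  /* hypothetical port name */
        if (fd < 0) { perror("open"); return 1; }

        struct termios tio;
        memset(&tio, 0, sizeof(tio));          /* raw mode: no echo, no translation   */
        tio.c_cflag = CS8 | CLOCAL | CREAD;    /* 8N1, ignore modem lines, enable rx  */
        tio.c_cc[VMIN]  = 1;                   /* block until at least one byte shows */
        tio.c_cc[VTIME] = 0;
        cfsetispeed(&tio, B9600);              /* 9600 baud, matching the old gear    */
        cfsetospeed(&tio, B9600);
        tcsetattr(fd, TCSANOW, &tio);

        const char msg[] = "READY\r\n";        /* prompt for the primary UI port */
        write(fd, msg, sizeof(msg) - 1);

        char buf[64];
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0) printf("received %zd bytes\n", n);

        close(fd);
        return 0;
    }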
PC104 has a lot of value in maximizing the space used in 19" or 23" rackmount configurations - if you're not in that space, PC104 is a waste of your time and money, IMHO.
The BeagleBoard should have everything you need for $200 or so - it can run Linux so use whatever programming language you like.
A 'modern' system will run DOS as long as it is x86. I suggest that you look at an industrial PC board from a supplier such as Advantech; your existing system may well run unchanged if it adheres to PC/DOS/BIOS standards.
That said if your original system runs on DOS, the chances are that you do not need the horsepower of a modern x86 system, and can save money by using a microcontroller board using something fairly ubiquitous such as an ARM. Also if DOS was the OS, then you most likely do not need an OS at all, and could develop the system "bare-metal". The resources necessary just to support Linux are probably far greater than your existing application and OS together, and for little or no benefit unless you intend on extending the capability of the system considerably.
There are a number of resources available (free and commercial) for implementing a file system and USB on a bare-metal system or a system using a simple real-time kernel such as FreeRTOS or eCOS which have far smaller footprints than Linux.
The Windows Embedded site (http://www.microsoft.com/windowsembedded/en-us/default.mspx) has a lot of resources and links to hardware partners, distributors, and development kits. There's even a "Spark" incubation project (http://www.microsoft.com/windowsembedded/en-us/community/spark/default.mspx).
What's also really nice about using Windows CE is that it now supports Silverlight as a development environment.
I've used the Jetway boards and daughter cards that Chris mentioned with success for various projects: embedded control, my home router, my HTPC front end.
You didn't mention what the actual application is, but if you need something more industrial due to temperature or moisture constraints, I've found http://www.logicsupply.com/ to be a good resource for mini-ITX systems that can take a beating.
A tip for these boards: given your minimal storage requirements, don't use a hard drive. Use an IDE adapter for a CompactFlash card as the system storage, or an SD card. No moving parts is usually a big plus in these applications. They also usually offer models with DC power input, so you can use a laptop-style or wall-wart external supply, which minimizes the final size.
The fit-PC (http://www.fit-pc.com/web/) is another option in the very-small Atom PC market; you'd likely need to use some USB converters to get your desired connectivity.
The BeagleBoard Paul mentioned is also a good choice; there are daughter cards for it as well that will add whatever ports you need, and it has an on-board SD card reader for whatever storage you need. It is also a substantially lower-power option than the Atom systems.
There are a ton of single-board computers that would fit your needs. When searching, you'll normally find that they don't keep many interface connectors on the processor board itself; rather, you need to look at the stackable daughter cards they offer, which provide whatever connections you need (RS-232, etc.). This is often why you see just "serial port" in the description, as the final physical layer for the serial port is defined on the daughter card.
There are a ton of ARM-based development boards you could also use, too many to list; these are similar to the BeagleBoard. Googling for "system on module" is a good way to find many options. These are usually a module with the processor/RAM/flash on one card, plus various carrier boards that the module plugs into, which provide the forms of connectivity you need.
In terms of development, the Atom boards will likely be the easiest if you're more familiar with x86 development. ARM is strongly supported under Linux, though, so there is little difficulty in getting these up and running.
Personally, I would avoid Windows for a headless design like you're describing; I rarely see a Windows-based embedded device that isn't just bad.
Take a look at one of the boards in the Arduino line, in particular the Arduino Mega. Very flexible boards at a low cost, and the Mega has enough I/O ports to do what you need it to do. There is no on-chip modem, but you can connect to something like a Phillips PCD3312C over the I2C connector, or you can find an Arduino add-on board (called a "shield") to give you modem functionality (or Bluetooth, Ethernet, etc.). Also, these are very easy to connect to an external memory device (like a flash drive or an SD card), so you should have plenty of storage space.
For something more PC-like, look for an existing device that is powered by a VIA EPIA board. There are a lot of devices out there that use these (set-top boxes, edge routers, network security devices, etc.) that you can buy and re-program. For example, I found a device that was supposed to be a network security device. It came with the EPIA board, RAM, a hard drive, and a power supply. All I had to do was format the hard drive, install Linux (Debian had all the necessary drivers already included), and I had a complete mini-computer ready to go. It only cost me around $45 too (bought brand new, unopened, on eBay).
Update: The particular device I found was an EdgeSecure i10 from Ingrian Networks.

How is the BIOS used by a modern OS?

What's the function of the BIOS in a modern OS? Is it still used after booting? And is there some kind of BIOS API?
The BIOS is still the first thing that runs on the just-started CPU, and it is responsible for getting the motherboard hardware turned on, setting basic chipset modes and registers, initializing some hardware, and running the code that loads the kernel.
The BIOS is usually not used once the kernel is loaded, and depends on a 16-bit execution environment as opposed to the 32- or 64-bit protected mode environment that a modern kernel operates in.
The boot loader normally does require BIOS I/O calls to get the kernel into memory. The BIOS is being replaced even in this role by newer boot-time software such as Coreboot, to provide faster boot times. EFI will one day replace the traditional BIOS, and hopefully the boot loader, passing control directly to the kernel after loading it from storage.
The discovered hardware configuration, memory range settings, and ACPI metadata tables are probably the only BIOS-based data used by the OS after the kernel is loaded. Any runnable ACPI code is encoded as ACPI Machine Language and is interpreted by the OS.
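If you want to see exactly which firmware-provided data a running Linux kernel kept around, the ACPI tables it consumed at boot are re-exported under /sys/firmware/acpi/tables/. A small sketch that simply lists them (reading the table contents usually requires root); the sysfs path is standard on Linux, everything else here is illustrative:

    #include <dirent.h>
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/firmware/acpi/tables";
        DIR *d = opendir(path);
        if (!d) { perror(path); return 1; }

        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.')
                continue;                    /* skip . and .. */
            printf("%s\n", e->d_name);       /* e.g. DSDT, FACP, APIC, SSDT1 */
        }
        closedir(d);
        return 0;
    }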
Any good traditional book on MS-DOS assembly programming will include information on the BIOS programming interface. Check out The Art of Assembly Language Programming.
I wrote BIOS for notebook computers for several years. The BIOS does a lot of things while the OS is running.
A major task is to inform the OS when many events happen so the OS can look smart (as if it somehow figured these things out on its own). For example, the BIOS tells the OS when: the power button is pressed, batteries are inserted or removed, AC power comes or goes, the system connects to or disconnects from a docking station, and hard drives and/or certain types of optical drives are inserted or removed.
Most portable computers have features that you can access/control through Fn keys and through OS-level applications provided by the manufacturers. The BIOS responds to these hotkeys and has code to interface with the OS-level apps. Features like controlling screen brightness (which certain OSes want to appear to control) or controlling bling LEDs fall into this category.
Perhaps the most important task of the BIOS is to shut down the system when the power button is held down for more than 4 seconds (to recover from OS hangs!).
The biggest benefit to having the OS in control rather than the BIOS nowadays is control over hardware-level variables such as fan speed, temperature readings, etc.

How can I program a wireless adapter?

Is it possible to program a wireless adapter attached to a computer?
I need to modify how they work, not just use them to perform a task such as scanning or connecting.
I have already tried the Native Wifi API, but that library is too high level. I cannot modify how exactly the wireless adapter works.
Any solution in any programming language on any operating system is very welcome. (Sounds so desperate, lol)
You need an open-source operating system then. Hardware varies in how programmable it is, but for example, Atheros wireless cards do not have an on-board processor, and therefore they do the absolute minimum of the 802.11 protocol in hardware, leaving everything else to the device driver. More info in these places: http://linuxwireless.org/ http://git.kernel.org/?p=linux/kernel/git/linville/wireless-testing.git;a=summary;
If you really need to go further than what commodity hardware can do, look into GNU Radio and the USRP/USRP2: http://gnuradio.org/redmine/wiki/gnuradio
And yes, you do have to be careful about the legal implications of this stuff, but the wireless stack's regulatory framework helps with that, as long as you don't turn it off.
Generally speaking, the manufacturer will attempt to prevent you from doing this. Since what you're working with is really a radio transceiver, its operation is regulated. In the US, for example, such things fall under the purview of the FCC. Depending on the country, changing how it operates (and then operating it) is likely to be illegal.
If you have an Atheros chipset on your WLAN card, then load up Linux and install ath5k/ath9k or MadWifi, and you can do some interesting things with the driver.