Make a virtual machine appear as a real machine to applications

I am using VMware Workstation 14, and when I install an operating system (any of them), some programs and apps are able to identify that I am using a virtual machine.
I have seen that the VM uses virtualized devices that are literally named as virtual, for example the VMware Network Card. Is there any way to install fake, real-looking hardware drivers on these virtual machines? Could that simple change make an app see this VM as a real machine?
How can I make this virtual machine appear as a real machine to applications?
Is there really any way?

This was asked as a yes-or-no question, so my answer is:
Yes... probably. But it's a lot of work.
There's a 2006 presentation by Tom Liston and Ed Skoudis that talks about this: https://handlers.sans.org/tliston/ThwartingVMDetection_Liston_Skoudis.pdf
It focuses on VMware, but some of it would also apply to other types of Virtual Machine Environments (VMEs).
In summary, they identify as many things as they can find that would allow VM detection, each of which would have to be addressed, and they also mention some VMware-specific mitigations:
VME artifacts in processes, the file system, and/or the Windows registry. These would include the VMware Tools service, "over 50 different references in the file system to 'VMware' and vmx", and "over 300 references in the Registry to 'VMware'", all of which would have to be deleted or changed (see the registry-scanning sketch after this list).
VME artifacts in memory. Specific regions of memory tend to be different in guests (VMs) than in hosts, namely the Interrupt Descriptor Table (IDT), Global Descriptor Table (GDT), and Local Descriptor Table (LDT). The method by which the VM is built may allow these to appear the same in guests as they do in hosts.
VME-specific virtual hardware. This would include the drivers you mention, like the VMware Network Card. The drivers would have to be removed or replaced with drivers that do not match the names or code signatures of any virtual drivers. This is probably easiest to do on an open-source system, simply by modifying the driver source code and rebuilding.
VME-specific processor instructions and capabilities. Some VMEs add non-standard machine-language instructions or modify the behaviour of existing instructions. These can be changed or removed by editing the VME source code, at the cost of convenient host-guest interaction.
VME differences in behaviour. A VM might respond differently on the network, or fail at time synchronization. This could be mitigated with additional source-code changes (on both host and guest) to make the network traffic look closer to normal, and providing sufficient CPU cores to the VM would help make sure it does not run more slowly than wall-clock time.
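To make the first point concrete, a minimal sketch of auditing the Windows registry for leftover "VMware" strings might look like the following. This is Python using the standard winreg module; the hives searched and the recursion depth are illustrative assumptions, not an exhaustive list of what the paper enumerates.

```python
# Minimal sketch: recursively search a couple of registry hives for
# value names or data containing "vmware". Hives and depth limit are
# illustrative assumptions only.
import winreg

ROOTS = [
    (winreg.HKEY_LOCAL_MACHINE, r"SYSTEM\CurrentControlSet"),
    (winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE"),
]

def scan(root, path, needle="vmware", depth=0, max_depth=6):
    try:
        key = winreg.OpenKey(root, path)
    except OSError:
        return
    with key:
        i = 0
        while True:                      # enumerate values under this key
            try:
                name, data, _ = winreg.EnumValue(key, i)
            except OSError:
                break
            if needle in str(name).lower() or needle in str(data).lower():
                print(f"{path}\\{name} = {data!r}")
            i += 1
        if depth >= max_depth:
            return
        j = 0
        while True:                      # recurse into subkeys
            try:
                sub = winreg.EnumKey(key, j)
            except OSError:
                break
            scan(root, f"{path}\\{sub}", needle, depth + 1, max_depth)
            j += 1

if __name__ == "__main__":
    for root, path in ROOTS:
        scan(root, path)
```

Deciding what to do with each hit (delete, rename, or leave alone) is the hard part; the script only shows how numerous the artifacts are.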
Again, this is from 2006, so if anyone has a more up-to-date reference, I'd love to see their answer.

Related

Isn't a virtual machine just a type of process?

I'm trying to understand the basic concepts of Docker, and lots of docs say that "Docker is not a virtual machine, but a process". To me, this sentence looks quite awkward, since as far as I know, a virtual machine itself also runs on the host OS, which makes it a 'process' as well.
Is there any big difference between the way a virtual machine works and the way other normal applications/processes do?
Docker is a brand name of a container management software system.
TL;DR:
Containers are a packaging concept.
VMs are a compatibility concept.
VMs are a security concept.
A container is not a process; it is an isolation of a collection of processes within a single-system-image. What is isolated? First and foremost, the path name space. Processes within a given container share a path name space, so they agree that /usr/bin/env is the same thing. Two processes in different containers, or perhaps in the non-containered environment, would not necessarily see the same file for /usr/bin/env. This functionality has been a feature of UNIX-derived systems for at least 40 years, provided by the chroot() system call.
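As a rough illustration of that path-name-space isolation, here is a minimal sketch in Python, assuming root privileges and a scratch directory /tmp/jail (both assumptions made for the example): after chroot(), the child process resolves /etc inside the jail, while the parent keeps seeing the host's /etc.

```python
# Minimal sketch of path-name-space isolation via chroot().
# Must run as root on a UNIX-like system; /tmp/jail is an assumed
# scratch directory created just for this demonstration.
import os

jail = "/tmp/jail"
os.makedirs(os.path.join(jail, "etc"), exist_ok=True)
with open(os.path.join(jail, "etc", "banner"), "w") as f:
    f.write("hello from inside the jail\n")

pid = os.fork()
if pid == 0:
    # Child: after chroot, "/" refers to /tmp/jail, so /etc is the
    # jail's directory, not the host's.
    os.chroot(jail)
    os.chdir("/")
    print("child sees /etc:", os.listdir("/etc"))
    os._exit(0)

os.waitpid(pid, 0)
print("parent still sees the host /etc, entry count:", len(os.listdir("/etc")))
```

Modern container runtimes use mount namespaces rather than bare chroot(), but the effect on path resolution is the same idea.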
More recently, containers have taken to isolating things that are not in the path name space, like process ids, user ids, and network interfaces. In older chroot-based systems, running ps in a container would show processes that were not in that container, although special handling was hacked in to prevent a chrooted root user from gaining root access on the underlying system.
In these modern systems, not only is the pid space partitioned, but also user ids, so that root in a container does not correspond to root on the overall system.
All this is accomplished by controlling many features of the kernel within a single-system-image. The software that controls these features is Docker, amongst others.
A Virtual Machine is not part of a single-system-image. Each VM is its own logical computer, running its own kernel, shell, etc. With some careful configuration, you can make various files appear within many of the VMs, but that is no different than mounting file systems exported by a network file system.
Why choose one over the other? Containers share my OS, and are handy for escaping the .so versionitis hell caused by conflicting software systems; I can package my software in a container, and it is isolated from whatever the running system is. I cannot, however, package the kernel I need; so if my software requires Ubuntu 14.04 and I am running 18.04, containers will not save me. Containers are a packaging concept.
VMs are handy to support multiple versions or types of operating systems on a single computer. Since each VM runs its own system software, I can run my 14.04 app on my 18.04 system and no one is the wiser. VMs are a compatibility concept.
VMs are also handy as a security layer. Imagine that a web page has a JS bomb that can corrupt my kernel (I know, quite a stretch). If I run my browser in a container, I have corrupted my kernel. If I run it in a VM, I have corrupted that VM's kernel; I merely have to delete it, or rewind it, and the corruption is gone. VMs are a security concept.

Temporary local execution of VM image

I work with a number of different specialized and configured OS environments but I generally only use one at a time. I have a processor-beefy laptop but storage is always an issue. It would also be good to have a running backup of each environment so I can work from other hardware.
Ideally, I could run some kind of VM library server that maintained canonical copies of each environment, from which I could download local execution copies to my machine to work with, and then stream changes back to the server image as I did my work.
In my research it seems like a number of the virtual machine providers used to have services like this (Citrix Player, VMware Mirage), but they have all been EOL'd.
Is there a way to set something like this up today? I'd love a FOSS solution based on KVM, but I'd be willing to take a free proprietary solution.
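For what it's worth, one way to approximate the "canonical image plus local working copy" workflow with plain QEMU/KVM is a local qcow2 overlay backed by a master image kept on (or fetched from) the server. This is only a hedged sketch of that idea; the paths, the share mount point, and the VM sizing below are assumptions.

```python
# Minimal sketch: create a local qcow2 overlay on top of a canonical
# base image, then boot the overlay with QEMU/KVM. Writes land in the
# local overlay only; the base image stays pristine on the server.
# All paths and VM sizing here are assumptions for illustration.
import subprocess

BASE = "/mnt/vmlibrary/dev-env.qcow2"                # canonical image (server share)
OVERLAY = "/home/me/scratch/dev-env-overlay.qcow2"   # local working copy

subprocess.run(
    ["qemu-img", "create", "-f", "qcow2",
     "-b", BASE, "-F", "qcow2", OVERLAY],
    check=True,
)

subprocess.run(
    ["qemu-system-x86_64", "-enable-kvm", "-m", "4096", "-smp", "2",
     "-drive", f"file={OVERLAY},format=qcow2"],
    check=True,
)

# Later, `qemu-img commit <overlay>` can fold the overlay's changes back
# into the base image, which is roughly the "stream changes back" step.
```

Whether this counts as a full answer depends on how automatic you need the syncing to be; tools built on libvirt can wrap the same mechanism.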

Need to make USB drive skip itself in boot order without changing BIOS

I know how to change the boot order through BIOS settings, but I have a unique situation where doing it programmatically would be better. The company I work for sells and supports software remotely to thousands of non-tech-savvy customers. We can't touch their hardware settings; we are only the software vendor.
Recently we rolled out an option for their PCI compliance that requires a separate removable drive to store a private encryption key. Customers that use this option have to leave a USB drive with a .dat file containing the RSA key plugged in at all times. Currently this presents an issue when customers reboot. Sometimes we can walk them through changing their BIOS settings over the phone to skip the USB drive, but in many circumstances we cannot, because the caller on the other end of the phone is not tech-savvy enough to change BIOS settings, and different PCs have different BIOS setups.
So my question is: is there any kind of ini file I can create, or boot record on the disk itself, that can be added or changed to cause the system to see that there is no OS on the USB drive and keep going down the list of boot drives? Instead, with no OS, many PCs hang on a "Missing OS" screen until we have the customer remove the drive, reboot, and plug it back in after Windows starts to load. All PCs run Windows, all XP or newer.
You're talking about manipulating BIOS Setup data. Unfortunately there is no industry standard for computers to manipulate Setup fields, like the boot sequence, so any solution is likely to be vendor-specific.
An example: Dell Inc. provides customers with OpenManage Client Instrumentation (OMCI), which allows admins to remotely change settings, like the boot sequence, via standard interfaces like CIM/WMI. See this whitepaper:
http://www.dell.com/downloads/global/solutions/omci_info.pdf
Especially:
OMCI is the Dell instrumentation package that enables OptiPlex™, Dell Precision™, and Latitude™ systems to be managed remotely. OMCI contains the underlying driver set that collects system information from a number of different sources on the client computer, including the BIOS, CMOS, System Management BIOS (SMBIOS), System Management Interface (SMI), operating system, APIs, DLLs, and registry settings. OMCI exposes that information through the CIMOM interface of the WMI stack. Thus, OMCI enables IT administrators to remotely collect asset information, modify CMOS settings, ...
OMCI is specific to the Dell BIOS, so it won't work with other vendors' machines. Other enterprise hardware vendors (e.g. HP, IBM) provide similar software. If you can live with a vendor-specific solution, this may work for you.
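To give a rough idea of what "exposed through WMI" looks like from a script, here is a minimal sketch using Python's third-party wmi package. The vendor-neutral Win32_BIOS query is standard; the Dell-specific namespace and class names are placeholders only, and the real ones must come from Dell's OMCI documentation.

```python
# Minimal sketch: read BIOS information over WMI, then attempt a
# vendor-specific (OMCI-style) namespace. Requires "pip install wmi"
# on Windows. The Dell namespace/class names below are placeholders,
# NOT the documented OMCI API.
import wmi

# Vendor-neutral information lives in root\cimv2.
std = wmi.WMI(namespace=r"root\cimv2")
for bios in std.Win32_BIOS():
    print("BIOS vendor:", bios.Manufacturer, "version:", bios.SMBIOSBIOSVersion)

# Vendor instrumentation (e.g. OMCI) adds classes in its own namespace.
try:
    dell = wmi.WMI(namespace=r"root\dellomci")              # placeholder namespace
    for setting in dell.query("SELECT * FROM BootOrder"):   # placeholder class
        print(setting)
except Exception as exc:
    print("Vendor instrumentation not available here:", exc)
```

If the customer machines are a mix of vendors, you would need the equivalent package and class names for each one, which is exactly the vendor-specific pain described above.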
May I ask whether your USB drive is actually non-bootable?
How did you format it? FAT32, NTFS, etc.?
Why can't Windows bypass this USB drive when booting normally, when (I assume) it is not bootable? Normally my system boots to the Windows OS even if there is a non-bootable USB drive plugged in.
BTW, have you tried keeping the .dat file on the USB drive as hidden and read-only? It's worth a try.

Setting up a development environment INSIDE a virtual machine

Here's the problem. I use around three different machines for development. My partner is using two. We have to go through the same freaking setup procedure on all five machines to get to work.
We're working with a PHP project here, so:
Install and configure PDT, a PHP debugger, and some version of XAMPP.
Then possibly install an SVN client, and any other tools.
Again, on each of the five machines.
What if, instead, we did all of this once, in a virtual machine that is set up with the same stack, same versions, as the production server? Then each of us could grab a copy of the VM image, run that image on each of the five machines, and do all of our development in that VM. Put Eclipse, Apache, MySQL, the works, all in that VM.
The only negative of this approach, and please correct me on the "only" part, is performance. Is it really that big of an issue, though? The slowest machine out of the five is a Samsung NC10 powered by a 1.6 GHz Intel Atom processor.
Do you think this is possible and practically usable? Or am I crazy?
I use a VM for development (running on my laptop) and have never had performance problems. Another approach that you could take would be to image the drive in the state that you want. Use Acronis or Ghost to re-image each machine when you need to. It only takes about 5-10 minutes to restore an image on any modern PC.
I use a VM for all my "work" as it keeps it away from my "play". This setup allows me to use the office VPN without exposing my whole machine to the office environment (which I trust about as much as the internet ;-) ). Also, I don't have to worry about messing up my development environment by trying games or other software. My work VM is currently running inside VirtualBox, but I have used VMware in the past. I have only noticed performance issues when using graphics-intensive programs like WebEx or the Terminal Server Client.
It can certainly be done. What turns me off is the size of the VM image, which would normally be several GBs. Having it on a network share means it can take longer to transfer than your current setup process takes. I guess an external hard drive would be the easiest way to move it around.
Performance wouldn't be an issue with any web development.
I have to ask: why do your current machines need to be "re-imaged" each time you sit down to work?
If you're using Windows you'll probably want to use SYSPREP on the master image so that the 'mini-setup' runs when you boot up the virtual machines for the first time.
Otherwise, from Windows' point of view, the machines have the exact same SID, hostname, and other identifiers; running multiple machines with the same SID on the same network can cause tons of headaches, even more so if you want them to communicate with each other.
I've run WebSphere for zSeries on a VMware virtual machine with no problem, and WebSphere is more resource-intensive than any PHP stack. I find that having a multi-core machine, or at least hyper-threading, makes it run a lot faster.
With VMware, disk operations are slower. For PHP development I doubt it would be a problem, but you'd definitely notice it if you were compiling a large C++ project. There is also Sun's VirtualBox, which is free, and the latest version is rather nice (but I haven't looked at how slow disk operations are yet).
I am using that idea in practice. Virtual machines are generally great for development.
You can run multiple operating systems and multiple separate development environments.
You can preserve older development environments for later support.
They can easily be backed up; when a hard drive crashes there is no need to start from the beginning.
They can be copied from one developer to another, so not everyone has to do tedious installations and configurations.
Downsides are:
Virtual machines are slower; you need more powerful computers than you would otherwise. I would recommend at least 4 GB of RAM, but preferably more like 16, plus fast multi-core processors and fast hard drives.
When copying Windows virtual machines, each copy in use should have its own product key. When you make a copy, it needs to be activated with a new product key.
Have you thought about a software configuration manager like Ansible, Chef, or Puppet? With such software, automating these tasks is very easy! It can even create a fresh VM and then configure it.

What are the benefits of a Hypervisor VM?

I'm looking into using virtual machines to host multiple OSes, and I'm looking at the free solutions, of which there are a lot. I'm confused about what a hypervisor is and why hypervisors are different from, or better than, a "standard" virtual machine. By standard I mean the benchmark virtual machine, VMware Server 2.0.
For a dual-core system with 4 GB of RAM, that would be capable of running a max of 3 VMs. Which is the best choice, hypervisor or non-hypervisor, and why? I've already read the Wikipedia article, but the technical details are over my head. I need a basic answer about what these different VM flavors can do for me.
My main question relates to how I would do testing on multiple environments. I am concerned about the isolation of OSes, so I can test applications on multiple OSes at the same time. Also, which flavor gives a closer experience of how a real machine operates?
I'm considering the following:
(hypervisor)
Xen
Hyper-V
(non-hypervisor)
VirtualBox
VMWare Server 2.0
Virtual PC 2007
*The classifications of the VMs I've listed may be incorrect.
The main difference is that Hyper-V doesn't run on top of the OS; instead, both it and the host system run on top of a thin layer called the hypervisor. A hypervisor is hardware-platform virtualization software that allows multiple operating systems to run on a host computer concurrently.
Many other virtualization solutions use other techniques, like emulation. For more details, see Wikipedia.
Disclaimer: everything below is (broadly) my opinion.
It's helpful to think of a virtual machine monitor (a hypervisor) as a very small microkernel. It has very few jobs beyond accessing the underlying hardware, such as monitoring event channels and granting guest domains access to specific resources, while enforcing some kind of scheduler.
All guest machines are completely oblivious of the others; the isolation is real. Guests do not share memory with the privileged guest (or with each other). So, in this instance, you could (roughly) think of each guest (even the privileged one) as a process, as far as the VMM is concerned. Typically, the first guest gets extra privileges so that it can manage the rest. This is the ideal technology to use when virtual machines are put into production and exposed to the world.
Additionally, some guests can be patched to become aware of the hypervisor, significantly increasing their performance.
On the other hand, we have things like VMware and QEMU, which rely on the host kernel to give them access to bare metal and enough memory to exist. They assume that all guests need to be presented with a complete machine; the limits put on the process presenting that machine (more or less) become the limits of the virtual machine. I say more or less because device-mapper QoS is not commonly implemented. This is the ideal solution for trying code on some other OS, or some other architecture. A lot of people will call QEMU, Simics, or even sometimes VMware (depending on the product) a 'simulator'.
For production rollouts I use Xen; for testing something I just cross-compiled, I use QEMU, Simics, or VirtualBox.
If you are just testing or rolling new code on various operating systems and architectures, I highly recommend #2. If your need is introspection (i.e. watching guest memory change as bad programs run in a guest), I'd need more explanation before answering.
Benefits of a hypervisor:
A hypervisor separates virtual machines logically, assigning each its own slice of the underlying computing power, memory, and storage, thus preventing the virtual machines from interfering with each other.