Hypervisor presence can be detected via WMI like this:
select * from Win32_ComputerSystem
From that, read the HypervisorPresent property; if it is true, a hypervisor is present.
Equivalent WMIC command:
wmic ComputerSystem get HypervisorPresent
This gives "true" for systems running under VMware, VirtualBox, and Hyper-V.
The problem is that it also gives "true" when run on a physical machine with Hyper-V installed, because the host OS itself then runs on top of the hypervisor (as the root partition), i.e. outside of a virtualized system.
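For reference, a minimal Python sketch of this check, assuming the third-party wmi package (pip install wmi) on Windows:

# Sketch of the WMI check above; requires the third-party "wmi" package.
# The HypervisorPresent property exists on Windows 8 / Server 2012 and later.
import wmi

def hypervisor_present() -> bool:
    # Equivalent to: select HypervisorPresent from Win32_ComputerSystem
    cs = wmi.WMI().Win32_ComputerSystem()[0]
    return bool(cs.HypervisorPresent)

print(hypervisor_present())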
So, my question is this:
Is it somehow possible to detect if the system is an actual physical machine even when Hyper-V is installed?
I had an idea of also checking if the Hyper-V service/role is installed, but that isn't enough since you can do nested virtualization with Hyper-V.
Check the registry key HKLM\SOFTWARE\Microsoft\Virtual Machine\Auto; inside a Hyper-V guest, most of the host details are stored under this key.
Using HKLM\SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters you get more information, for example the name of the physical host.
Note that this only works for Hyper-V, so you need to validate twice: once for other hypervisors and a second time for Hyper-V.
There is also a class for sharing VM and host details; check this.
You can check this for more helpful information.
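For completeness, a standard-library Python sketch of that registry check; the Guest\Parameters key only exists inside a Hyper-V guest (with the Data Exchange integration service enabled), and value names such as PhysicalHostName are examples of what it typically contains:

# Sketch: enumerate Hyper-V guest parameters from the registry (Windows only).
import winreg

def hyperv_guest_parameters() -> dict:
    params = {}
    try:
        key = winreg.OpenKey(
            winreg.HKEY_LOCAL_MACHINE,
            r"SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters",
        )
    except FileNotFoundError:
        return params  # key absent: not a Hyper-V guest
    with key:
        i = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, i)
            except OSError:
                break  # no more values
            params[name] = value
            i += 1
    return params

print(hyperv_guest_parameters())  # e.g. may contain 'PhysicalHostName'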
I'm new to WSL2 and wondering if it's possible to run the same WSL2 Ubuntu instance on both my desktop and laptop.
Now I am able to use the wsl --export and wsl --import methods to save and load the system to/from my portable hard drive, but these methods take a long time.
I notice that wsl --import loads a file named ext4.vhdx. Is there a way to load directly from this file?
Update v2.0:
I was able to get a workaround and it works great.
Thanks to "Booting from vhdx" here, I was able to load directly from the vhdx file on my portable hard disk. Windows tracks its subsystems in the registry, so we can write our own entry (p.s.: make sure to get BasePath right; it starts with "\\\\?", or you will not be able to access the subsystem's filesystem on your host system):
Windows Registry Editor Version 5.00
[HKEY_USERS\<your SID here>\SOFTWARE\Microsoft\Windows\CurrentVersion\Lxss\{<UUID here>}]
"State"=dword:00000001
"DistributionName"="distribution name"
"Version"=dword:00000002
"BasePath"="vhdx folder path" 【 e.g. "\\\\?\\E:\\S061\\WSL\\ubuntu-20"】
"Flags"=dword:0000000f
"DefaultUid"=dword:000003e8
I suppose the best way to do this would be to store the ext4.vhdx on a network storage device accessible to both devices.
I have previously mentioned how to move ext4.vhdx; you can check that out here.
Basically, you need to export from one machine and import it on the other, while making sure the vhdx file is configured for WSL to access it from the network storage.
Since this is *officially* not supported, expect some performance hits.
Another way would be to run WSL on one computer and ssh/remote-desktop into it from another device on the network.
I'm of the strong belief that sharing the same ext4 vhdx between two VMs simultaneously would be a bad idea. See this and this on Unix & Linux Stack Exchange, including the part about ...
note that sharing LVs/partitions on a single disk between the servers at the same time is NOT very safe. You should only access whole disks from any of the servers at one time.
However, as dopewind's answer mentioned, you can access the WSL instance on one computer (probably the desktop) from another (e.g. the laptop). There are several techniques you can use:
If you have Windows 10 Professional or Enterprise on one of the computers, you can enable Remote Desktop, which allows you to access pretty much everything on one computer from another. RDP ("Remote Desktop Protocol") even works from other devices such as an iPad or Android tablet (or even a phone, although that's a bit of a small screen for a "desktop"). That said, there are better, more idiomatic solutions for WSL ...
You could enable the SSH server on the Windows 10 computer with the WSL instance (instructions). This may sound counterintuitive to some people, since Linux itself running in the WSL instance also includes an SSH server (by default). But by SSH'ing from (for example) your laptop into your desktop's Windows 10, you can then launch any WSL instance you have installed (if you choose to install more than one) via wsl -d <distroName>. You also avoid a lot of the network unpleasantness in the next option ...
You could, as mentioned above, enable SSH on the WSL instance (usually something like sudo service ssh start) and then ssh directly into it. However, note that WSL2 instances are NAT'd, so there's a whole lot more hackery that you have to do to get access to the network interface. There's a whole huge thread about it on the WSL GitHub. Personally, I'd recommend the "Windows SSH Server" option mentioned above to start out with; then you can worry about direct SSH access later if you need it.
Side note: Even though I have SSH enabled on my WSL instances, I still use Windows SSH to proxy to them, to avoid these networking issues.
I am using VMware Workstation 14, and when I install an operating system (any of them), some programs and apps are able to identify that I am using a virtual machine.
I have seen that the VM uses virtualized devices whose names literally say they're virtual, for example "VMware Network Card". Is there any way to install fake but real-looking hardware drivers on these virtual machines? Would this simple change make an app see the VM as a real machine?
How to make this virtual machine appear as a real machine to applications?
Is there really any way?
This was asked as a yes-or-no question, so my answer is:
Yes... probably. But it's a lot of work.
There's a 2006 presentation by Tom Liston and Ed Skoudis that talks about this: https://handlers.sans.org/tliston/ThwartingVMDetection_Liston_Skoudis.pdf
It focuses on VMware, but some of it would also apply to other types of Virtual Machine Environments (VMEs).
In summary, they identify as many things as they can find that would allow VM detection, which would each have to be addressed, and they also mention some VMware-specific mitigations for them.
VME artifacts in processes, file system, and/or Windows registry. These would include the VMtools service and "over 50 different references in the file system to 'VMware' and vmx" and "over 300 references in the Registry to 'VMware'", all of which would have to be deleted or changed.
VME artifacts in memory. Specific regions of memory tend to be different in guests (VMs) than hosts, namely the Interrupt Descriptor Table (IDT), Global
Descriptor Table (GDT), and Local Descriptor Table (LDT). The method by which the VM is built may allow these to appear the same in guests as they do in hosts.
VME-specific virtual hardware. This would include the drivers you mention like VmWare Network Card. The drivers would have to be removed or replaced with drivers that do not match the names or code signatures of any virtual drivers. Probably easiest to do on an open-source system, simply by modifying the driver source code and build.
VME-specific processor instructions and capabilities. Some VMEs add non-standard machine language instructions, or modify the behaviour of existing instructions. These can be changed or removed by editing the VME source code, at the cost of convenient host-guest interaction.
VME differences in behaviour. A VM might respond differently on the network, or fail at time synchronization. This could be mitigated with additional source code changes (on both host and guest) to make the network traffic look closer to normal, and providing sufficient CPU cores to the VM would help make sure it does not run more slowly than wall-clock time.
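To make this concrete, here is a hypothetical Python sketch of the kind of artifact scan a detector might run inside a Windows guest; the registry path and MAC prefixes are illustrative examples, not an exhaustive list, and each such check is something the VM builder would have to defeat:

# Hypothetical guest-side artifact scan; paths and prefixes are examples only.
import subprocess
import winreg

def vmware_tools_key_present() -> bool:
    # VME artifacts in the Windows registry (VMware Tools leaves keys behind)
    try:
        winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                       r"SOFTWARE\VMware, Inc.\VMware Tools")
        return True
    except FileNotFoundError:
        return False

def vmware_mac_prefix() -> bool:
    # VME-specific virtual hardware: VMware-assigned MAC address OUIs
    out = subprocess.run(["getmac"], capture_output=True, text=True).stdout
    return any(p in out for p in ("00-05-69", "00-0C-29", "00-50-56"))

print("VM suspected:", vmware_tools_key_present() or vmware_mac_prefix())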
Again this is from 2006, so if anyone has a more up-to-date reference, I'd love to see their answer.
I am trying to deploy a CentOS 7 VM on a vCenter using the pyvmomi Python library, and before powering on the VM I am trying to set up a static IP and DNS for it.
VM creation goes fine, but guest customization fails, giving the following error:
Customization of the guest operating system 'rhel6_64Guest' is not supported in this configuration. Microsoft Vista (TM) and Linux guests with Logical Volume Manager are supported only for recent ESX host and VMware Tools versions. Refer to vCenter documentation for supported configurations.
faultCause =
faultMessage = (vmodl.LocalizableMessage) []
uncustomizableGuestOS = 'rhel6_64Guest'
Now, this customization problem goes away if the VM is rebooted once; after that we can do the guest customization.
But this reboot takes around 30 seconds, and in our case we need to get VMs up and running faster than that.
If anybody has faced a similar problem and has some context on it, that would be very helpful.
Also, I don't understand how rebooting the VM solves this problem.
Please share your thoughts even if you don't have an exact solution.
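For reference, the kind of customization being attempted looks roughly like this; a minimal pyvmomi sketch where the addresses, names, and the already-retrieved vm object are placeholders:

# Sketch: Linux guest customization spec (static IP + DNS) with pyvmomi.
# Assumes "vm" is a vim.VirtualMachine obtained from the vCenter inventory.
from pyVmomi import vim

adapter = vim.vm.customization.AdapterMapping()
adapter.adapter = vim.vm.customization.IPSettings(
    ip=vim.vm.customization.FixedIp(ipAddress="192.168.1.50"),
    subnetMask="255.255.255.0",
    gateway=["192.168.1.1"],
)

spec = vim.vm.customization.Specification(
    nicSettingMap=[adapter],
    globalIPSettings=vim.vm.customization.GlobalIPSettings(
        dnsServerList=["192.168.1.2"]),
    identity=vim.vm.customization.LinuxPrep(
        hostName=vim.vm.customization.FixedName(name="centos7-vm"),
        domain="example.com"),
)

task = vm.CustomizeVM_Task(spec=spec)  # this is the call that raises the error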
On further investigation I found that open-vm-tools does not work until the VM has been powered on at least once.
When the machine is powered on, the host system detects the open-vm-tools running on the guest OS, and from then on open-vm-tools works.
So open-vm-tools cannot be used for initial provisioning, as it simply will not work at first start-up.
Cloud-init is the alternative that should be used for initial provisioning.
Does "provisioning" differ from "setting up" a machine?
I can't really tell.
It seems to, if I read the Vagrant documentation, but there must be something else.
It says:
On the first vagrant up that creates the environment, provisioning is
run. If the environment was already created and the up is just
resuming a machine or booting it up, they will not run unless the
--provision flag is explicitly provided.
So some up invocations need to do the "provisioning" and some do not.
Provisioning generally refers to the distribution and installation of software. In the context of a virtual machine, it refers to configuring what software and capabilities that each instance of a virtual machine will contain. Think of it here as a virtual machine template, where each new VM instance that is spun up will contain the same software that you've asked to provision.
"Setting up" is a more generic term that appears to be used in the Vagrant documentation as referring to the creation and destruction of each virtual machine instance, e.g. "setting up" vs. "tearing down", as per the "up" and "destroy" commands. "setting up" here has nothing to do with what's actually configured in the VM instance itself, that's the provisioning part.
Put another way, when you set up a new virtual machine instance using the "up" command, it creates a basic virtual machine instance, then triggers the provisioning system to actually install the software you want into that instance. Here's the part of the documentation that highlights this:
Provisioners in Vagrant allow you to automatically install software,
alter configurations, and more on the machine as part of the vagrant
up process.
This is useful since boxes typically are not built perfectly for your
use case. Of course, if you want to just use vagrant ssh and install
the software by hand, that works. But by using the provisioning
systems built-in to Vagrant, it automates the process so that it is
repeatable.
What's the difference between a process virtual machine and a system virtual machine?
My guess is that a process VM does not provide an entire operating system for the whole application, but rather provides an environment for some specific application.
And a system VM provides an environment for an OS to be installed, just like VirtualBox does.
Am I getting this right?
Another question is the difference between the two different implementations of a system VM: hosted vs. stand-alone.
I'm a beginner studying OS, so an easy and understandable answer would be greatly appreciated :)
A process virtual machine, sometimes called an application virtual machine, runs as a normal application inside a host OS and supports a single process. It is created when that process is started and destroyed when it exits. Its purpose is to provide a platform-independent programming environment that abstracts away details of the underlying hardware or operating system, and allows a program to execute in the same way on any platform. The Java Virtual Machine is the classic example.
A system virtual machine provides a complete system platform which supports the execution of a complete operating system (OS); just like you said, VirtualBox is one example.
A host virtual machine is the server component of a virtual machine, which provides computing resources in the underlying hardware to support a guest virtual machine (guest VM).
The following is from http://airccse.org/journal/jcsit/5113ijcsit11.pdf :
System Virtual Machines
A System Virtual Machine gives a complete virtual hardware platform with support for execution
of a complete operating system (OS).
The advantages of using a system VM are:
Multiple Operating System environments can run in parallel on the same piece of
hardware in strong isolation from each other.
The VM can provide an instruction set architecture (ISA) that is slightly different from
that of the real machine
The main drawbacks are:
Since the VM indirectly accesses the same hardware, the efficiency is compromised.
Multiple VMs running in parallel on the same physical machine may result in varied
performance depending on the workload imposed on the system. Implementing proper
isolation techniques may address this drawback.