Ensuring that MAC addresses of all virtual machines run from one image are unique

We are trying to distribute a virtualized version of our product (a tiny computer) by setting up a virtual machine, installing all of our software on it, exporting it as an appliance, and distributing this appliance to users. The issue is that we rely on the MAC address of each of our physical devices being unique for registration purposes, and every VM created from the image of our original VM has the same MAC address by default.
So the question is: is there some way to distribute VM appliance images such that each one generates a unique MAC address on boot? If not, is there any identifier (unique across all copies of the same image) that we could use in place of a MAC address for this?

If you are using VirtualBox, the only way to ensure a MAC address different from the original source is to make a new clone, making sure to check "Reinitialize MAC Address", and then export the appliance from the new clone. Since you can also do this from vboxmanage, and the same utility lets you change the MAC address of a machine, you can (in a script) first set up your clone with the specific MAC for your customer "X", then create the appliance from it:
vboxmanage modifyvm NameOrUUIDOfYourClonedVM --macaddress<1-N> THENEWMAC
Then export the appliance with the same utility:
vboxmanage export NameOrUUIDOfYourClonedVM
Remember, you can script everything!
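As a minimal end-to-end sketch (the VM name BaseVM, the clone name Customer-X, the MAC value, and the output file are all hypothetical; note that clonevm reinitializes MAC addresses by default unless you pass --options keepallmacs):
# Clone the master VM; MAC addresses are reinitialized by default.
VBoxManage clonevm BaseVM --name Customer-X --register
# Optionally pin a customer-specific MAC on the first adapter.
VBoxManage modifyvm Customer-X --macaddress1 080027AABB01
# Export the clone as an appliance.
VBoxManage export Customer-X -o customer-x.ova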

Related

How to import a virtual machine?

I was asked to create a virtual machine using VMware, but I only received two files:
one with extension .vmsd
another one with extension .disk1
I don't see any option to import such files and get the VM running. Any suggestion is highly appreciated.
On the VMware home page:
Click on "Open a Virtual Machine".
Browse to the folder containing the VMware files.
Click on the configuration file that will be visible, most likely a *.vmx file, to open it.
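If you prefer the command line, VMware Workstation and Fusion also ship a vmrun utility; a hedged sketch (the .vmx path is hypothetical, and vmrun must be on your PATH):
# Start the VM described by its .vmx configuration file.
vmrun start "C:\VMs\MyImportedVM\MyImportedVM.vmx"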
VMs can be created in two ways.
You can use the OS images provided by the VMware software, i.e., the list of OSes available in the dropdown. This is the regular process.
The other way is to use your own custom image file (an .iso file). This has to be done while creating the VM itself, not after creation. Your ISO file also has to pass the pre-checks done by the software.
The .vmsd and .disk1 files you mentioned are for maintaining metadata and disk data. You can attach them during the creation of the VM in the "Attach external disks" section, if I'm not wrong.

Way to pass parameters or share a directory/file to a qemu-kvm launched VM on Centos 7.0

I need to be able to pass some parameters to my virtual machine during its boot so it sets itself up properly. To do that, I either have to bake the info into the image or somehow pass it as parameters to my qemu-kvm command. These parameters are only a few, and if this were VMware we would just pass them as OVA params, and when the VM launched we would query the OVF environment to get them. But launching from qemu-kvm I have no such option. I did some homework and found that I could use the virtio-9p driver for sharing files between host and guest. Unfortunately, RHEL/CentOS has decided not to support 9p.
With no option of rebuilding my RHEL kernel with the 9p options enabled, how do I solve this problem? Either solution would work: pass/share some kind of JSON file to the VM (pre-populated on the host), which it will read to do its setup, or set some kind of "environment variables" that I can query from within the VM to get these params and continue with setup. Any pointers would help.
If your version of QEMU supports it, you could use its -fw_cfg option to pass information to the guest. If that guest is running a Linux kernel with CONFIG_FW_CFG_SYSFS enabled, you will be able to read out the information from sysfs. An example:
If you launch your VM like so:
qemu-system-x86_64 <OPTIONS> -fw_cfg name=opt/com.example.test,string=qwerty
From inside the guest, you can then get the value back from sysfs:
cat /sys/firmware/qemu_fw_cfg/by_name/opt/com.example.test/raw
There appears to be some driver for Windows as well, but I've never used it.
When you boot your guest with -kernel and -initrd you should be able to pass environment variables with -append.
The downside is that you have to keep track of your current kernel and initrd outside of your disk image.
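A hedged sketch of that approach (all paths and the MYVAR setting are hypothetical):
# Boot an external kernel/initrd and pass settings on the kernel command line.
qemu-system-x86_64 -kernel /path/to/vmlinuz -initrd /path/to/initrd.img -append "console=ttyS0 MYVAR=somevalue" -drive file=disk.qcow2,format=qcow2
# Inside the guest, the parameters are visible on the kernel command line:
cat /proc/cmdline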
Other possibilities could be a small prepared disk image (as you said) or via network/dhcp or a serial link into your guest or ... this really depends on your environment.
I was just searching to see if this situation had improved and came across this question. Apparently it has not improved.
What I do is output my variable data to a temp file (e.g. /tmp/xxFoo). Usually I write text or a tar archive straight to that file, then extend or truncate it to a 512-byte multiple with a reasonable minimum size, like 64K, because otherwise the disk controller won't configure it. Then the VM starts with that file attached as a raw drive. After the VM has started, the temp file is deleted. From within the guest you can read/cat the raw block device and get the variable data (on BSD, use the c partition as the raw drive).
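A hedged sketch of that flow on the host and guest (the file name, the 64K size, and the guest device /dev/vdb are assumptions):
# Host: pack the variable data, then pad the file to a 512-byte multiple
# (64K here; this assumes the archive fits within that size).
tar -cf /tmp/xxFoo config-data/
truncate -s 64K /tmp/xxFoo
qemu-system-x86_64 <OPTIONS> -drive file=/tmp/xxFoo,format=raw,if=virtio
# Guest: the data shows up as a raw block device; unpack straight from it.
tar -xf /dev/vdb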
In Windows guests it's tricky to get at the data. In theory you can read \\.\PhysicalDriveN, but I have never been able to get that to work. Cygwin can do it, and there it works like Linux. The other option is to make your temp file a partitioned and formatted image, but that's a pain to create and update.
As far as sharing a folder goes, I use Samba, which works with just about anything. I usually run several instances of smbd with different configurations.
One option is to create an ISO file and pass it as a parameter. This works for both Windows and Ubuntu hosts, and Windows and Ubuntu guests. You can read the mounted CD-ROM inside the guest OS:
qemu-system-x86_64 -drive file=c:/qemuiso/winlive1.qcow2,format=qcow2 -m 8G -drive file=c:\qemuiso\sample.iso,index=1,media=cdrom
On a Linux guest, mount the CD-ROM (Ubuntu):
blkid                           # check that the media is there
sudo mkdir /mnt/cdrom
sudo mount /dev/sr0 /mnt/cdrom  # this step can also be put in crontab
cd /mnt/cdrom
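For completeness, a hedged sketch of building such an ISO on the host (assuming genisoimage is installed and a directory config/ holds the files to pass in; on some systems the tool is called mkisofs):
# Build a small ISO with Rock Ridge and Joliet extensions from a directory.
genisoimage -o sample.iso -V CONFIG -r -J config/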

Accessing external hard drive after logging into a remote machine using ssh command

I am doing a compute-intensive project with a very old C program. The program requires a library called Sun Performance Library, which is commercial software. Instead of purchasing the library myself, I am running the program by logging onto a Solaris machine in our computer lab with the ssh command, while the working directory that stores the output data is still on my local Mac.
Now a problem has occurred: the program uses a large amount of disk space to save intermediate results, and the space on my local Mac is quickly filled (50 GB per user, prescribed by the administrator). These results are necessary for the next stage of computing, and I cannot delete any of them before the program finally produces the output data. Therefore, I have to move the working directory to an external hard drive in order to continue. Obviously,
cd /Volumes/VOLNAME
is not the correct way to do it because the remote machine will give me a prompt saying
/Volumes/VOLNAME: No such file or directory.
So, what is the correct way to do it?
sshfs recently added support for "slave mode", which allows you to do this. Assuming you have sshfs on Solaris (I'm not sure about this), the following command (run from your Mac) will do what you want:
dpipe /usr/lib/openssh/sftp-server = ssh SOLARISHOSTNAME sshfs MACHOSTNAME:/Volumes/VOLNAME MOUNTPOINT -o slave
This will result in the MOUNTPOINT directory on the server being mounted from your local external drive. Note that I'm not sure whether macOS has dpipe. If it doesn't, you can replace it with one of the equivalent solutions at "How to make bidirectional pipe between two programs?". Also, if your SFTP server binary is somewhere else, substitute its path.
The common way to mount a remote volume on Solaris is via NFS, but that usually requires root permissions.
Another approach would be to make your application read its data from stdin and write its results to stdout, without using the file system directly. Then you could just redirect the data from/to your local machine through ssh. For instance:
ssh user@host </Volumes/VOLNAME/input.data >/Volumes/VOLNAME/output.data
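As a slightly more concrete hedged sketch (the remote program name ./compute is hypothetical; it is assumed to read stdin and write stdout):
# Stream input from the external drive to the remote program and
# stream its output straight back to the external drive.
ssh user@host ./compute </Volumes/VOLNAME/input.data >/Volumes/VOLNAME/output.data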

Drive Letter Change - Reregister VDI for VirtualBox VM

Today, I was trying to get on my Kali Linux virtual machine to do a basic vulnerability check on a VPS I own. I have my Kali Linux Virtual Disk Image (VDI) saved on a USB external drive, so I plugged that in and fired up VirtualBox, but I got an error when I went to start the VM. It would appear that the drive letter for this drive has changed from F: to E:, so VirtualBox could not retrieve the VDI from F:\Kali Linux VM\.
Trying to troubleshoot this on my own, I decided to open the VM settings, remove the SATA controller VDI that was registered on the F: drive, and then add the VDI from the E: drive (the same VDI, just a different drive letter). That, however, did not go as smoothly as planned. I was able to remove the incorrect VDI path without any problems, but when I tried to add the VDI at the proper path, I got the following error:
Cannot register the hard disk 'E:\Kali Linux VM\Kali Linux.vdi' {6b214e73-ae38-427b-90f8-995c7dd4211c} because a hard disk 'F:\Kali Linux VM\Kali Linux.vdi' with UUID {6b214e73-ae38-427b-90f8-995c7dd4211c} already exists.
Result Code: E_INVALIDARG (0x80070057)
Component: VirtualBoxWrap
Interface: IVirtualBox {0169423f-46b4-cde9-91af-1e9d5b6cd945}
Callee RC: VBOX_E_OBJECT_NOT_FOUND (0x80BB0001)
It looks like I cannot add the VDI back to the VM because it is identical to the VDI I removed.
Has anyone else encountered a problem like this? And does anyone have a fix for this so I don't lose all the data on that VM?
Thank you all in advance.
Note: I know this isn't a programming question, so this may be the wrong Stack Exchange. Please let me know if this would be better suited under a different Stack Exchange site.
Open the Oracle VM VirtualBox Manager, then go to
File > Virtual Media Manager
Under Hard disks, select Kali Linux.vdi, right-click, and remove it.
NOTE: If Remove is disabled, click Release first, then right-click and remove.
Now add the VDI Kali Linux.vdi to the VM again.
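The same cleanup can be scripted; a hedged sketch with VBoxManage (the VM name Kali and the controller name SATA are assumptions; check yours with VBoxManage showvminfo):
# Drop the stale registration that still points at the old drive letter.
VBoxManage closemedium disk "F:\Kali Linux VM\Kali Linux.vdi"
# Attach the same VDI from its new path.
VBoxManage storageattach Kali --storagectl SATA --port 0 --device 0 --type hdd --medium "E:\Kali Linux VM\Kali Linux.vdi"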

Move SOME Vagrant boxes to external drive

I have a number of projects that use Vagrant and associated VMs. Some are in current development; others are either in maintenance or archived, and I do not need the latter on my laptop. 256 GB fills up fast.
What I'd like to do is move the archived and maintenance virtual machines to an external SSD, but still have them accessible via the Vagrant command line.
All Google queries and searches have only turned up a permanent move to another drive, and nothing about some here and some there.
Is this possible?
You may try using environment variables to change the location settings; I think that may help. Use them this way:
VAR="C:\some_location" vagrant up
The relevant variables are:
VAGRANT_DOTFILE_PATH
VAGRANT_HOME
VAGRANT_DOTFILE_PATH can be set to change the directory where Vagrant stores VM-specific state, such as the VirtualBox VM UUID. By default, this is set to .vagrant. If you keep your Vagrantfile in a Dropbox folder in order to share it between your desktop and laptop (for example), Vagrant will overwrite the files in this directory with the details of the VM on the most recently used host. To avoid this, you could set VAGRANT_DOTFILE_PATH to .vagrant-laptop and .vagrant-desktop on the respective machines. (Remember to update your .gitignore!)
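A hedged sketch of applying this to the archived projects (the mount point /Volumes/Archive is an assumption; VAGRANT_HOME relocates the box store, which defaults to ~/.vagrant.d):
# Run an archived project with its Vagrant data kept on the external SSD.
VAGRANT_HOME=/Volumes/Archive/vagrant.d VAGRANT_DOTFILE_PATH=.vagrant-archive vagrant up
# Projects in active development keep using the defaults:
vagrant up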