Deploy a new VM using an existing VMDK file via the VMware Perl SDK

I want to deploy a new VM from my VMDK file in a vCenter environment from the CLI, so SSH to the ESX server is not an option. Is there any way I can do this? I know there is a VMware Perl SDK, but I could not find exactly what I need to get this working. The same operation is possible from the GUI, but I need to automate it and scale it up, so the GUI is not an option for me.

Can you please be more specific? Is this for a Linux or Windows CLI?
The simplest way to do this is via PowerCLI: https://my.vmware.com/web/vmware/details?downloadGroup=PCLI550&productId=352
To clone from an existing VM, the command would be:
$vm2 = New-VM -Name VM2 -VM VM1 -Datastore $datastore -VMHost $vmHost
Also, when you say you want to create from VMDK, is the VMDK already on the target datastore? Do you need to import the VMDK first?
Normally, when you create a VM, you can either create a new "blank" VM or you can clone from an existing VM. If you have a VMDK you want to use, you would create an empty VM and then attach the VMDK. This assumes that you already have the VMDK in question loaded into a datastore that the host can access.
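If the VMDK is already on a datastore the host can see, a minimal PowerCLI sketch of the "empty VM plus existing disk" route could look like the following. The server, host, and disk paths are placeholders you would adapt; New-VM's -DiskPath attaches an existing VMDK instead of creating a blank disk, and New-HardDisk -DiskPath does the same for a VM that already exists:

Connect-VIServer -Server vcenter.example.com
$vmHost = Get-VMHost -Name "esx01.example.com"
# Create the VM and attach the existing VMDK in one step
$vm = New-VM -Name "MyVM" -VMHost $vmHost -MemoryGB 4 -NumCpu 2 -DiskPath "[datastore1] MyVM/existing-disk.vmdk"
# Or attach a further existing VMDK to a VM that was already created
New-HardDisk -VM $vm -DiskPath "[datastore1] MyVM/second-disk.vmdk"

The vSphere SDK for Perl drives the same vCenter API if you need to stay in Perl, but PowerCLI is usually the shortest path to an automated, repeatable script.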

Related

How to change Docker's default Image storage location in WSL2?

How can I change the default location for storing Docker images in Windows? I currently have Docker installed on my C: drive, and the images are stored in the following location:
C:\Users\xxxxx\AppData\Local\Docker\wsl\data.
I want to change the default location to my D: drive. I am using WSL2 as the backend for Docker, and I have read that I can use the .wslconfig file to configure Docker. However, I am not sure how to set up the .wslconfig file to change the default image location. My WSL2 installation is located on my D: drive, which I installed from the Microsoft Store.
I'm using Docker version 20.10.21, and these are the WSL specs:
WSL version: 1.0.3.0
Kernel version: 5.15.79.1
WSLg version: 1.0.47
MSRDC version: 1.2.3575
Direct3D version: 1.606.4
DXCore version: 10.0.25131.1002-220531-1700.rs-onecore-base2-hyp
Windows version: 10.0.22000.1335
I'm using Ubuntu distro in WSL, and Docker Desktop v.4.15.0
I tried making some changes in .wslconfig, but I couldn't find any option related to storage.
Caveats/Preface:
I've tried this and it works, but I cannot guarantee that long-term it will continue to work. There's the potential that something will break when Docker Desktop upgrades in the future.
In general I don't recommend registry hacks, but I'm not aware of another way to do this. Other than the previous caveat, this seems fairly safe.
No, there's no .wslconfig option for changing the location of a distribution.
With that in mind, here's what I did to move docker-desktop-data to the D: drive:
Create the directory. I'll use D:\wsl\docker-desktop-data as an example.
Stop Docker Desktop by right-clicking its status bar icon and choosing Quit Docker Desktop.
From PowerShell:
wsl --shutdown
Confirm the location (BasePath) and registry key (PSChildName) of the docker-desktop-data via:
Get-ChildItem HKCU:\Software\Microsoft\Windows\CurrentVersion\Lxss\ |
    ForEach-Object {
        Get-ItemProperty $_.PSPath
    } | Where-Object {
        $_.DistributionName -eq "docker-desktop-data"
    }
Move ext4.vhdx from the BasePath directory identified above to the D:\wsl\docker-desktop-data directory.
In regedit, navigate to:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Lxss
Find the subkey matching the PSChildName from above.
Modify the BasePath to point to \\?\D:\wsl\docker-desktop-data
Restart Docker Desktop
Test that your existing images are still available by running one of them.
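If you prefer to script the regedit part, a rough PowerShell equivalent is below. The subkey name is a placeholder; substitute the PSChildName found above, and run this only after wsl --shutdown and after moving ext4.vhdx:

# Placeholder: use the PSChildName of docker-desktop-data found earlier
$key = "{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}"
Set-ItemProperty -Path "HKCU:\Software\Microsoft\Windows\CurrentVersion\Lxss\$key" -Name BasePath -Value '\\?\D:\wsl\docker-desktop-data'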

Packer VMware ISO local and on ESXi

I am curious to find out why Packer is failing to get SSH access on an ESXi server. The build works just fine for vmware_fusion locally.
As JSON does not seem to display nicely here on SF, here is a link to a gist with the builder configuration: https://gist.github.com/geoHeil/5acf06cb0f3afadfa347d437c2695a7c
When running
packer build -var-file variables.json -only=vmwarevmwareRemote template.json
the kickstart file is loaded, configured, and installed. However, with ESXi as the builder, the build seems to be stuck waiting for SSH to become available.
I noticed in the logs that:
/var/log/auth.log
2017-02-08T17:33:20Z sshd[94210]: User 'root' running command 'esxcli --formatter csv network vm list\n'
2017-02-08T17:33:25Z sshd[94210]: User 'root' running command 'esxcli --formatter csv network vm list\n'
displays a lot of the same commands.
Executing this command manually shows
esxcli --formatter csv network vm list
Name,Networks,NumPorts,WorldID,
ubunu-test,"VM Network,",1,87986,
someOther,"VM Network,",1,84833,
What could be wrong here?
edit
packer version is latest 0.12.2, esxi 6.5
edit2
When applying the suggestion of setting a network, the same problem persists. But now I see two commands in the logs:
[root@vm-bd-dev:/var/log] tail -f auth.log
2017-02-09T09:05:56Z sshd[111376]: User 'root' running command 'esxcli --formatter csv network vm list\n'
2017-02-09T09:05:56Z sshd[111376]: User 'root' running command 'esxcli --formatter csv network vm port list -w 111433\n'
The second (new) one has the following output:
ActiveFilters,DVPortID,IPAddress,MACAddress,PortID,Portgroup,TeamUplink,UplinkPortID,vSwitch,
,,0.0.0.0,00:0c:29:47:d5:3d,33554450,VM Network,vmnic2,33554437,vSwitch0,
You probably need some more vmx_data settings for the network, something like:
"vmx_data": {
"ethernet0.networkName": "VM Network",
"ethernet0.present": "true",
"ethernet0.virtualDev": "vmxnet3",
"ethernet0.startConnected": "true",
"ethernet0.addressType": "generated"
}
Switching the network interface in the kickstart file to something not hard-coded, like
network --bootproto=dhcp --ipv6=auto --activate
solved the problem for me.
Apparently different interface names (no eth0) were available on ESXi, so the hard-coded interface never came up, the guest never got an IP address (the 0.0.0.0 in the port list above), and Packer kept waiting for SSH.

"virsh list" command not showing VM created by "qemu-system-x86_64" command

I created a VM using the "qemu-system-x86_64" command. The VM is up and running; I can access it and list it with "ps -ef | grep qemu-system-x86_64".
But if I try to list the VM using the "virsh list" command, I do not see it there. Could you please point out what the reason could be?
Why is the "virsh list" command not able to list VMs created by the "qemu-system" command? I thought virsh was an application that uses libvirt to access KVM/Linux's virtualization capabilities, so even if a VM is created by some other method, virsh should still be able to query KVM for the VMs already running on the host.
qemu-system-x86_64 is the backend that virsh/libvirt uses to start a VM, but it does not itself depend on libvirt. When you launch it directly, the running instance is never registered in virsh/libvirtd metadata, so virsh has no record of it; virsh only shows domains that were defined or started through libvirt.
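A quick way to see this, and to bring an existing disk image under libvirt management so virsh can list it (the domain name and image path are placeholders; virt-install comes from the virt-install package):

# Only libvirt-managed domains appear here, whether running or shut off
virsh list --all

# Import an existing disk image as a libvirt domain so virsh can see and manage it
virt-install --name myvm --memory 2048 --vcpus 2 \
    --disk path=/var/lib/libvirt/images/myvm.qcow2 \
    --import --os-variant generic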

Backup and restore with VirtualBox

A VirtualBox newbie here. I created a snapshot of a VM using this command line:
VBoxManage snapshot VMName take BackupName
Then I used this to check:
VBoxManage showvminfo VMName
and I saw that a snapshot had been created with the name BackupName. However, I couldn't find an "actual" snapshot file named BackupName; I only found a file filename.sav in the Snapshots folder.
My intention was to create a snapshot of this VM, copy it to another host machine, and restore it there. Since I couldn't find the "snapshot" BackupName, I copied the .sav file to the new host and used this command:
VBoxManage adoptsate filename.sav
But it didn't work. Can anyone help me copy that "snapshot" and restore it on a new host? Thanks a lot.
First, get a list of the Virtual Machines installed on your host at the command line:
vboxmanage list vms
Sample Output
"UbuntuVM" {77743eca-e338-471c-b824-60c5c5c22b6f}
"Windows XP SP3" {3818afc4-189d-4441-8f35-07284c930a4b}
"Windows XP SP3 Clone" {79b40316-225a-43a1-9ddf-22a51c280d4e}
Find the one you want to export to a different host, and export to a file called Ubuntu.ova like this:
vboxmanage export UbuntuVM -o Ubuntu.ova
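Then copy Ubuntu.ova to the other machine and import it there. A quick sketch (the VM keeps its original name unless you override it with --vsys 0 --vmname; note that an OVA export captures the VM's disks and settings, not its snapshots or saved state):

# On the new host, after copying the .ova over
vboxmanage import Ubuntu.ova
vboxmanage startvm UbuntuVM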

Is there a way I can have a VM gain access to my computer?

I would like to have a VM to look at how applications appear and to develop OS-specific applications. However, I want to keep all my code on my Windows machine, so if I decide to nuke a VM or anything like that, it's all still there.
If it matters, I'm using VirtualBox.
This is usually handled with network shares. Share your code folder from your host machine and access it from the VMs.
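As a sketch of that route, assuming the host shares the code folder as \\HOSTNAME\code and the guest is a Linux VM with cifs-utils installed (both names are placeholders):

# Inside the guest
sudo mkdir -p /mnt/code
sudo mount -t cifs //HOSTNAME/code /mnt/code -o username=youruser,uid=$(id -u),gid=$(id -g)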
Aside from network shares, another tool to use for this is a version-control system.
You should always be able to make a normal network connection between the VM and the hosting OS, as though it were another computer on the same network. Which, in some sense, it is.
I do this all the time.
I have a directory on a Windows drive that I mount in my host Ubuntu 12.04.
I run Ubuntu 13.04 as a VirtualBox guest.
I want the guest to mount the Windows directory with full non-root permissions.
I do almost all my work from a bash shell, so this method is natural for me.
When searching for methods to automatically mount VirtualBox shared folders,
reliable and correct methods are hard to distinguish from those that fail.
Failures include problems getting and setting permissions, as well as other issues.
Methods that fail include:
modifying /etc/fstab
modifying /etc/rc.local
I am fairly certain that rc.local can be used,
but no methods I have tried worked.
I welcome improvements on these guidelines.
On VirtualBox 4.2.14, using a bash terminal on an Ubuntu 13.04 guest,
below is a working method to mount Common (the share name)
on /home/$USER/Desktop/Common (the mount point) with full permissions.
(Note the '\' command continuation character in the find command.)
First time only: create your mount point, add the find command below to your .bashrc file, and run it.
Respond with your password when requested.
These are the four command-lines needed:
mkdir $HOME/Desktop/Common
echo "$USER ALL=(ALL) NOPASSWD:ALL" | sudo tee -a /etc/sudoers
find $HOME/Desktop/Common -maxdepth 0 -type d -empty -exec sudo \
mount -t vboxsf -o \
uid=`id -u $USER`,gid=`id -g $USER` Common $HOME/Desktop/Common \;
source ~/.bashrc # Needed if you want to mount Common in this bash.
All other times: simply launch a bash shell.
The find command mounts the shared directory if the mountpoint directory is empty.
If the mountpoint directory is not empty, it does not run the mount command.
I hope this is error-free and sufficiently general.
Please let me know of corrections and improvements.
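For reference, the Common share used above has to exist on the host side first. A sketch run on the host, with a placeholder VM name and host path:

VBoxManage sharedfolder add "ubuntu-guest" --name Common --hostpath "/path/to/Common"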