I am trying to create a new virtual machine with Oracle VirtualBox, using an already-existing hard disk. When I try to select the existing hard disk file, a .vhd file, it displays an error saying the virtual hard disk cannot be used because the UUID already exists.
So I tried the following command to change its UUID.
VBoxManage internalcommands sethduuid /home/user/VirtualBox VMs/drupal/drupal.vhd
I get this error.
Syntax error: Invalid UUID parameter
How can I resolve this?
The correct command is the following:
VBoxManage internalcommands sethduuid "/home/user/VirtualBox VMs/drupal/drupal.vhd"
The path to the virtual disk contains a space, so it must be enclosed in double quotes to prevent it from being parsed as two separate parameters.
The following worked for me:
run VBoxManage internalcommands sethduuid "VDI/VMDK file" twice (the first run is just a convenient way to generate a UUID; you could use any other UUID generation method instead)
open the .vbox file in a text editor
replace the UUID found in Machine uuid="{...}" with the UUID you got when you ran sethduuid the first time
replace the UUID found in HardDisk uuid="{...}" and in Image uuid="{...}" (towards the end) with the UUID you got when you ran sethduuid the second time
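A sketch of what step 1 might look like for the .vhd from the question (the UUIDs shown are only example values; yours will differ):
VBoxManage internalcommands sethduuid "/home/user/VirtualBox VMs/drupal/drupal.vhd"
UUID changed to: 11111111-aaaa-bbbb-cccc-111111111111
VBoxManage internalcommands sethduuid "/home/user/VirtualBox VMs/drupal/drupal.vhd"
UUID changed to: 22222222-dddd-eeee-ffff-222222222222
The first UUID then goes into Machine uuid="{...}", and the second into both HardDisk uuid="{...}" and Image uuid="{...}" in the .vbox file.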
If you've copied a disk (VMDK file) from one machine to another and need to change the disk's UUID in the copy, you don't need to change the Machine UUID as suggested by another answer.
All you need to do is to assign a new UUID to the disk image:
VBoxManage internalcommands sethduuid your-box-disk2.vmdk
UUID changed to: 5d34479f-5597-4b78-a1fa-94e200d16bbb
and then replace the old UUID with the newly generated one in two places in your *.vbox file
<MediaRegistry>
<HardDisks>
<HardDisk uuid="{5d34479f-5597-4b78-a1fa-94e200d16bbb}" location="box-disk2.vmdk" format="VMDK" type="Normal"/>
</HardDisks>
and in
<AttachedDevice type="HardDisk" hotpluggable="false" port="0" device="0">
<Image uuid="{5d34479f-5597-4b78-a1fa-94e200d16bbb}"/>
</AttachedDevice>
This worked for me with VirtualBox 5.1.8 running on Mac OS X El Capitan.
I searched the web for an answer regarding macOS, so... the solution is:
cd /Applications/VirtualBox.app/Contents/Resources/VirtualBoxVM.app/Contents/MacOS/
./VBoxManage internalcommands sethduuid "full/path/to/vdi"
Though you have already solved the problem, I will post the reason here for others with a similar problem.
The reason is that there is a space in your path (the directory name VirtualBox VMs), which splits the command argument in two. That is why the error appears.
The command fails because there is a space in one of the folder names,
i.e. 'VirtualBox VMs'.
VBoxManage internalcommands sethduuid /home/user/VirtualBox VMs/drupal/drupal.vhd
If there is no space in the folder or file names, the command works even without quotes,
e.g. after changing 'VirtualBox VMs' to 'VBoxVMs':
VBoxManage internalcommands sethduuid /home/user/VBoxVMs/drupal/drupal.vhd
Same solution as #Al3x for Windows x64, in cmd.exe:
cd %programfiles%\Oracle\VirtualBox
VBoxManage internalcommands sethduuid "full/path/to/.vdi"
This assigns a new random UUID to the disk. Pro tip: Shift+right-click the .vdi file and select "Copy as path" to obtain "full/path/to/.vdi", then enable QuickEdit in cmd.exe and right-click to paste.
Even though this question is old, note that changing the UUID of a virtual HDD holding a Windows system will make Windows treat it as a not-activated machine (it notices the disk change) and ask for reactivation!
Another alternative to your original solution would be to use the escape character \ before the space:
VBoxManage internalcommands sethduuid /home/user/VirtualBox\ VMs/drupal/drupal.vhd
I'm trying to format my 250 GB HDD in QNX Neutrino OS running in Oracle VM on Windows 10. I mounted my HDD and everything looks fine. I run the command:
fdisk /dev/hd1
and create 4 partitions with the following sizes:
1325, 124684, 17265, and 47496 MB.
After that I run the command:
mount -e /dev/hd1
When I try to format the created partitions with the commands:
mkqnx6fs -q -b4096 /dev/hd1t177
mkqnx6fs -q -b4096 /dev/hd1t178
mkqnx6fs -q -b4096 /dev/hd1t179
mkqnx6fs -q -b4096 /dev/hd1t180
the process starts and never ends. The terminal just freezes. What can I do to fix it? Or is there another way to format my partitions on this system?
I had the same issue when using Oracle VM after partitioning my HDD. Then I tried VMware and the issue was resolved.
VMware didn't suit me, though, because it counted cylinders incorrectly.
I found the solution. The problem was due to poor contact between the USB cable and my hard drive. I changed the USB cable and USB port, and it worked fine.
VirtualBox (Version 5.2.24 r128163 (Qt5.6.2)) user with xubuntu guest (Ubuntu 18.04.2 LTS) and Windows 10 host here.
I recently tried to resize my vdi from ~100 GB to 200 GB. In Windows I used the command:
./VBoxManage modifyhd "D:\xub2\xub2.vdi" --resize 200000
That went fine. Then I created a VM with a GParted live CD, attached the vdi, and resized the partitions:
[screenshot: GParted GUI]
All looks good. If I then use the 'fdisk -l' command while in the GParted VM, the increased partition sizes are visible as expected.
[screenshot: fdisk -l results for the vdi attached to the GParted VM]
If I try and resize the file system for one of the newly resized logical drives with 'resize2fs /dev/sda5' I am told it is already 46265856 blocks long and there is nothing to do.
However....
If I then re-attach this vdi to an Ubuntu VM and boot up with it, the 'fdisk -l' command gives different results, basically telling me that the drive is still 100 GB in size.
[screenshot: fdisk -l results for the same vdi attached to the Ubuntu VM]
The 'df' command confirms that it is not resized.
[screenshot: df command output with the same vdi attached to the Ubuntu VM]
If I try the command 'resize2fs /dev/sda5' I get the result:
The filesystem is already 22003712 (4k) blocks long. Nothing to do!
How can I fix this and make the Ubuntu VM see that the disk and partitions have increased in size?
OK, I will answer my own question (thank you for the negative vote, anonymous internet).
This issue occurs when the drive you are trying to expand has existing snapshots associated with its VirtualBox VM.
I found this described on the VirtualBox forums:
https://forums.virtualbox.org/viewtopic.php?f=35&t=50661
One suggested solution is to delete the snapshots, however I got an error message when I attempted that.
The solution that worked for me was to clone my VM. The cloned VM (which did not have any snapshots associated with it), behaved as expected and showed the correct size for the resized disk.
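If you prefer the command line over the GUI, the clone can also be done with VBoxManage; a minimal sketch, assuming the VM is named xub2 (that name is hypothetical, taken from my path above):
VBoxManage clonevm "xub2" --mode all --name "xub2-clone" --register
Here --mode all also clones the snapshots, and --register adds the clone to the VirtualBox Manager.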
To be clear: the situation I described above is 100% true.
Hope that helps someone.
I've been using PuPHPet to create virtual development environments.
Yesterday I generated a config file for a new box. When I try to spin it up using the vagrant up command, I get the following error message:
C:\xx>vagrant up
Bringing machine 'default' up with 'virtualbox'
provider... There are errors in the configuration of this machine.
Please fix the following errors and try again:
SSH:
* private_key_path file must exist: P://.vagrant.d/insecure_private_key
I came across this question and moved the insecure_private_key from puphpet\files\dot\ssh to the same directory as the Vagrantfile. However, this gives the same error.
I'm also confused by the directory given in the error message:
P://.vagrant.d/insecure_private_key
Why is the 'P' drive mentioned?
My Vagrantfile can be found here.
Appreciate any advice on solving this error.
I fixed the problem by hard-coding the path to the insecure_private_key file.
So it went from:
config.ssh.private_key_path = [
customKey,
"#{ENV['HOME']}/.vagrant.d/insecure_private_key"
]
To:
config.ssh.private_key_path = [
customKey,
"C:/Users/My.User/.vagrant.d/insecure_private_key"
]
It looks like it's because you may have performed a vagrant destroy which deleted the insecure_private_key.
But the Vagrantfile looks up the puphpet\files\dot\ssh files; if they are there, it looks for the insecure_private_key.
Delete (or rename) the id_rsa files in puphpet\files\dot\ssh.
This fixed it for me!
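On a Windows host, a sketch of that rename, run from the project folder (the .bak names are just my own choice, and I'm assuming the folder contains the usual id_rsa/id_rsa.pub pair):
ren puphpet\files\dot\ssh\id_rsa id_rsa.bak
ren puphpet\files\dot\ssh\id_rsa.pub id_rsa.pub.bak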
When you are sharing your PuPHPet configuration with your teammates, hard-coding the private_key_path as in the accepted answer is not advisable.
My host computer runs Windows, so I added a new environment variable VAGRANT_HOME with the value %USERPROFILE%, since this is where my .vagrant.d folder resides. When you add this variable, make sure you close any open command prompts so the variable is applied.
Hope this helps
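For example, from a command prompt (a sketch; setx stores the variable for future sessions, so open a new prompt afterwards):
setx VAGRANT_HOME "%USERPROFILE%"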
You can also just delete all the files in the puphpet SSH folder (rm -rf puphpet/files/dot/ssh/*), and the VM should regenerate them when you run vagrant provision.
I'm not sure what's wrong with your Vagrant installation, but this line:
vagrant_home = (ENV['VAGRANT_HOME'].to_s.split.join.length > 0) ? ENV['VAGRANT_HOME'] : "#{ENV['HOME']}/.vagrant.d"
is what sets up the variable that is later on used here:
config.ssh.private_key_path = [
customKey,
"#{vagrant_home}/insecure_private_key"
]
The reason this is happening is that as of Vagrant 1.7, it generates a unique private key for each VM you have. There's what I consider to be a bug: Vagrant completely ignores the user-defined private_key_path if it detects that it previously generated a unique key.
What PuPHPet is doing here is letting Vagrant generate its unique SSH key, then once the VM boots up and has SSH access, it goes in and generates another key to replace it.
The reason we're replacing it is because this new Vagrant feature only works on OSX/Linux hosts, due to Windows not having the required tools.
My way works across all OS because it does the SSH key generation within the VM itself.
All this is semi-related to your question, but the answer is that something's wrong with your Vagrant installation if those environment variables have not been defined.
Adding to PunctuationMark's answer: you can also set the VAGRANT_HOME environment variable in your Vagrantfile: ENV['VAGRANT_HOME'] = ENV['USERPROFILE']
Editing this following line in Vagrantfile worked for me.
PRIVATE_KEY_SOURCE = '~/.vagrant.d/insecure_private_key'
My Windows 8.1 just crashed. Now I have some corrupted files on my disk. This includes my Vagrant machine index (not sure if the naming is right, but I know it is this file -> C:\Users\USERNAME\.vagrant.d/data/machine-index/index).
There is a lot of binary or hexadecimal stuff in there (again, not sure, because I don't usually deal with this stuff, so correct me if I'm wrong!), and Vagrant spits out the following message when I try to start everything after boot.
vagrant up returns this:
The machine index which stores all required information about
running Vagrant environments has become corrupt. This is usually
caused by external tampering of the Vagrant data folder.
Vagrant cannot manage any Vagrant environments if the index is
corrupt. Please attempt to manually correct it. If you are unable
to manually correct it, then remove the data file at the path below.
This will leave all existing Vagrant environments "orphaned" and
they'll have to be destroyed manually.
Path: C:/Users/Username/.vagrant.d/data/machine-index/index
Same thing happened to me. So I just deleted the index file and the .lock file from the machine-index folder to get Vagrant working again.
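On a Windows host, the two deletions could look like this from a command prompt (assuming the default Vagrant home under your user profile):
del "%USERPROFILE%\.vagrant.d\data\machine-index\index"
del "%USERPROFILE%\.vagrant.d\data\machine-index\index.lock"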
When using Vagrant 2.2.5 in Windows 10, I had to navigate to /Users/{yourname}/.vagrant.d/data/machine-index and remove both index and index.lock, so rm index then rm index.lock.
Finally, I navigated back to the Homestead folder and ran vagrant up.
When my laptop accidentally crashed, I had the same Vagrant issue (index) on my first attempt to run vagrant up.
The machine index which stores all required information about
running Vagrant environments has become corrupt. This is usually
caused by external tampering of the Vagrant data folder.
Vagrant cannot manage any Vagrant environments if the index is
corrupt. Please attempt to manually correct it. If you are unable
to manually correct it, then remove the data file at the path below.
This will leave all existing Vagrant environments "orphaned" and
they'll have to be destroyed manually.
Path: C:/Users/{user}/.vagrant.d/data/machine-index/index
Unfortunately, my issue was not solved by deleting the index and index.lock files as the most upvoted answer suggested. I rebooted my VM using the VirtualBox GUI (used as the VM provider) and the following message showed up.
Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.
I realised that the crash had produced errors on the VM's filesystem. So after some searching and investigation, I overcame the issue by executing the command below.
xfs_repair -v -L /dev/dm-0
Environment info: Windows 10 host, VirtualBox 6.1, Vagrant 2.2.7, and CentOS 7 as the VM OS.
I am using Oracle VirtualBox version 4.2.16 r86992. Everything was fine until yesterday's shutdown.
Today, it shows inaccessible and throws this error:
Runtime error opening C:\Users\xxxxxx\VirtualBox VMs\vBoxxxxXubuntu_Beta\vBoxxxxXubuntu_Beta.vbox for reading: -102 (File not found.).
D:\tinderbox\win-4.2\src\VBox\Main\src-server\MachineImpl.cpp[725] (long __cdecl Machine::registeredInit(void)).
It would be good to restore this to a working state; it would save a lot of time and recover my configuration settings and data. Thanks for your support.
This normally happens if the host OS crashes or you pull the plug on it, leaving the .vbox file unsaved.
In the location:
C:\Users\xxxxxxx\VirtualBox VMs\vBoxxxxXubuntu_Beta\
you should find two files:
vBoxxxxXubuntu_Beta.vbox-prev
vBoxxxxXubuntu_Beta.vbox-tmp
Copy vBoxxxxXubuntu_Beta.vbox-prev to vBoxxxxXubuntu_Beta.vbox.
Select vBoxxxxXubuntu_Beta.vbox in the VirtualBox Manager, right-click, and then click Refresh.
Observe that it now shows Powered Off.
Now you are good to go.
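From a command prompt, the copy step could look roughly like this (folder and file names are taken from the error message above; adjust them to your own VM):
cd "C:\Users\xxxxxx\VirtualBox VMs\vBoxxxxXubuntu_Beta"
copy vBoxxxxXubuntu_Beta.vbox-prev vBoxxxxXubuntu_Beta.vbox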
Based on my experience: I was on Windows 7, running Ubuntu 14.04 as the guest OS in a virtual machine.
Go to your Virtualbox folder (in my case):
C:\Users\Dev12\VirtualBox VMs\Ubuntu
You'll see files with extensions: Ubuntu.vbox-tmp or Ubuntu.vbox-prev
Remove -tmp from the file name Ubuntu.vbox-tmp so that it reads Ubuntu.vbox.
Exit from the virtual machine and start it again.
You should now see that the error has gone away.
The VirtualBox files with the .vbox extension contain metadata that the VirtualBox hypervisor requires to resolve the guest OS's configuration.
If the main .vbox file is corrupted (i.e. it appears to be empty), then use the backup .vbox-prev file to recover the contents of the original file.
Do this by renaming the empty .vbox file to a temporary name (e.g. rename originalVM.vbox to originalVM-empty.vbox).
Then make a copy of the backup file originalVM.vbox-prev, where the copy has the same name as the original but with the word "copy" appended to it (i.e. the copy is named originalVM (copy).vbox-prev).
It is important to retain the original backup .vbox-prev file; it should not be altered or renamed.
Now rename the newly created copy originalVM (copy).vbox-prev to the original name of the empty .vbox file, and be mindful to also change its extension from .vbox-prev back to just .vbox.
That is, rename originalVM (copy).vbox-prev to originalVM.vbox. Now that this is done, you may add the .vbox file (the guest OS) back into the VirtualBox hypervisor. This will recover the state and snapshots of the "inaccessible" guest VM. Now delete the original empty .vbox file.
I faced the same issue using CentOS 6.8 on VirtualBox 5.1 installed on Windows 7, and AjayKumarBasuthkar's solution worked perfectly for me:
I went to C:\Users\\VirtualBox VMs\CentOS6.8
Made a copy of the file CentOS6.8.vbox-prev and gave it the name of CentOS6.8.vbox
Went to the VirtualBox GUI, right-clicked the VM instance and hit refresh
The CentOS instance went from the state Inaccessible to Powered Off.
VirtualBox 4.3 has been released; could it be that you updated, or that there were some issues while updating?
In any case, if you are not able to bring up VirtualBox, remember to back up the VirtualBox VMs folder; going for a fresh install should be the best way forward.
I faced the same problem and resolved it by doing the following in Oracle VirtualBox 4.3.28 with Ubuntu 14.04 LTS, while the VirtualBox VM was closed.
Moved ubuntu.vbox to another folder outside the VirtualBox folder
Removed -prev from the file ubuntu.vbox-prev
Started Oracle VirtualBox; it worked perfectly.
On a Windows 7 host, I found that the Daemon Tools service had a hold on the file.
The solution was to uninstall Daemon Tools, but I suspect that if you stop the service and remove the file association, you would be sorted.
The other issue might be that, if your virtual machine was on an external hard drive, the drive letter has changed. If so, go to Computer Management, select the hard drive, and right-click to change the drive letter and save (note that this is for Windows).
This is going to sound stupid but try to reinstall VB. It may work.
I am adding one critical and important comment to the previous great answers. Make sure that the original .vbox file really is corrupted and empty before you copy the content from the .vbox-prev file. If that is not the case and you find it with readable content, do not replace the content of the .vbox file.
Changes made to the VM directly before it became inaccessible might not be reflected in the .vbox-prev backup file; the backup may not have been synced with those changes before the OS upgrade or system change that led to the inaccessibility issue.
If you find your VM inaccessible after an OS upgrade or system change, first check whether the .vbox file is still readable in a text editor and has content. If so, you just need to remove the VM from the VirtualBox Manager list (only remove the appliance from the list; do not delete the files). Then reopen the .vbox file and it should work perfectly.
If the original .vbox file is corrupted or empty when you open it with a text editor, then and only then should you copy the content from the .vbox-prev file and follow the instructions highlighted above.
This was my experience, and I wanted to share it so you can avoid losing last-minute changes made before an OS upgrade or crash.