Drive Letter Change - Reregister VDI for VirtualBox VM

Today, I was trying to get on my Kali Linux virtual machine to do a basic vulnerability check on a VPS I own. I have my Kali Linux Virtual Disk Image (VDI) saved on a USB external drive, so I plugged that in and fired up VirtualBox, but I got an error when I went to start the VM. It would appear that the drive letter for this drive has changed from F: to E:, so VirtualBox could not retrieve the VDI from F:\Kali Linux VM\.
Trying to troubleshoot this on my own, I decided to open the VM settings, remove the SATA controller VDI that was registered on the F: drive, and then add the VDI from the E: drive (the same VDI, just a different drive letter). That, however, did not go as smoothly as planned. I was able to remove the incorrect VDI path without any problems, but when I tried to add the VDI on the proper path, I got the following error:
Cannot register the hard disk 'E:\Kali Linux VM\Kali Linux.vdi' {6b214e73-ae38-427b-90f8-995c7dd4211c} because a hard disk 'F:\Kali Linux VM\Kali Linux.vdi' with UUID {6b214e73-ae38-427b-90f8-995c7dd4211c} already exists.
Result Code:
E_INVALIDARG (0x80070057)
Component:
VirtualBoxWrap
Interface:
IVirtualBox {0169423f-46b4-cde9-91af-1e9d5b6cd945}
Callee RC:
VBOX_E_OBJECT_NOT_FOUND (0x80BB0001)
It looks like I cannot add the VDI back to the VM because it has the same UUID as the VDI I removed.
Has anyone else encountered a problem like this? And does anyone have a fix for this so I don't lose all the data on that VM?
Thank you all in advance.
Note: I know this isn't a programming question, so this may be the wrong Stack Exchange. Please let me know if this would be better suited under a different Stack Exchange site.

Open the Oracle VM VirtualBox Manager. Now go to
File > Virtual Media Manager
Under Hard disks, select Kali Linux.vdi, right-click, and remove it.
NOTE: If Remove is disabled, click Release first, then right-click and remove.
Now add the VDI Kali Linux.vdi again.
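If you prefer the command line, the same fix can be sketched with VBoxManage; the VM name ("Kali Linux") and controller name ("SATA") below are assumptions that may differ in your setup:
# Deregister the stale medium (the UUID comes from the error message)
VBoxManage closemedium disk {6b214e73-ae38-427b-90f8-995c7dd4211c}
# Attach the same VDI from its new drive letter
VBoxManage storageattach "Kali Linux" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "E:\Kali Linux VM\Kali Linux.vdi"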

Related

How do I extract specific files from Cuckoo Sandbox VM?

I am studying ransomware behavior with Cuckoo Sandbox. I need to retrieve the encrypted files and the ransom note created by the ransomware, but they exist on my Cuckoo Sandbox VM. How do I extract specific files from that VM?
My environment:
Cuckoo Sandbox 2.06
Host OS: Ubuntu 18.04
Guest OS: Windows 7 SP1 x86 (without Guest Additions)
VM software: VirtualBox 5.2
You can get the time required to handle your manipulations by:
1. Specifying a high number of seconds, either on the command line (e.g. --timeout 300) or in the GUI.
2. Forcing the analysis to wait for the timeout (add --enforce-timeout on the command line, or use the GUI).
3. Copying the required files into the shared folder.
4. Taking a screenshot from the host machine to capture the message displayed by the ransomware.
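For example, a submission applying points 1 and 2 might look like this (the sample path is hypothetical):
# Give the analysis 300 seconds and force it to run for the full window
cuckoo submit --timeout 300 --enforce-timeout /path/to/sample.exe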

Way to pass parameters or share a directory/file to a qemu-kvm launched VM on CentOS 7.0

I need to be able to pass some parameters to my virtual machine during its bootup so it sets itself up properly. To do that, I either have to bake the info into the image or somehow pass it as parameters to my qemu-kvm command. There are just a few of these parameters, and if this were VMware, we would pass them as OVA params and, when the VM launches, call the OVA environment to get them. But launching from qemu-kvm, I have no such option. I did some homework and found that I could use the virtio-9p driver for sharing files between host and guest. Unfortunately, RHEL/CentOS has decided not to support 9p.
With no option of rebuilding my RHEL kernel with the 9p options enabled, how do I solve this problem? Either solution would work: pass/share some kind of JSON file to the VM (pre-populated on the host), which it will read to do its setup, or set some kind of "environment variables" which I can query from within the VM to get these params and continue with setup. Any pointers would help.
If your version of QEMU supports it, you could use its -fw_cfg option to pass information to the guest. If that guest is running a Linux kernel with CONFIG_FW_CFG_SYSFS enabled, you will be able to read out the information from sysfs. An example:
If you launch your VM like so:
qemu-system-x86_64 <OPTIONS> -fw_cfg name=opt/com.example.test,string=qwerty
From inside the guest, you can then get the value back from sysfs:
cat /sys/firmware/qemu_fw_cfg/by_name/opt/com.example.test/raw
There appears to be some driver for Windows as well, but I've never used it.
When you boot your guest with -kernel and -initrd you should be able to pass environment variables with -append.
The downside is that you have to keep track of your current kernel and initrd outside of your disk image.
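For instance (the kernel, initrd, and disk file names below are placeholders):
# Boot an external kernel and pass settings on the kernel command line
qemu-system-x86_64 -kernel vmlinuz -initrd initrd.img -append "console=ttyS0 app_config=/etc/app.json" -hda disk.img
Inside the guest, the appended string can be read back from /proc/cmdline.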
Other possibilities could be a small prepared disk image (as you said) or via network/dhcp or a serial link into your guest or ... this really depends on your environment.
I was just searching to see if this situation had improved and came across this question. Apparently it has not improved.
What I do is output my variable data to a temp file (e.g. /tmp/xxFoo). Usually I write text or a tar archive straight to that file, then truncate it to a minimum size that is a 512-byte multiple, like 64K, otherwise the disk controller won't configure it. Then the VM starts with that file as a raw drive. After the VM has started, the temp file is deleted. From within the guest you can read/cat the raw block device to get the variable data (on BSD, use the c partition as the raw drive).
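A minimal sketch of that flow on a Linux host and guest (the device name /dev/vdb and the sample data are assumptions; your guest may name the drive differently):
# Host: write the data and pad the file to a 512-byte multiple (64K here)
echo "app_config=production" > /tmp/xxFoo
truncate -s 64K /tmp/xxFoo
# Host: start the VM with the temp file attached as a raw drive
qemu-system-x86_64 -hda disk.img -drive file=/tmp/xxFoo,format=raw,if=virtio
# Guest: read the raw block device and strip the zero padding
head -c 64K /dev/vdb | tr -d '\0'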
In Windows guests it's tricky to get at the data. In theory you can read \\.\PhysicalDriveN, but I have never been able to get that to work. Cygwin can do it, and it works like Linux. The other option is to make your temp file a partitioned and formatted image, but that's a pain to create and update.
As far as sharing a folder goes, I use Samba, which works with just about anything. I usually run several instances of smbd with different configurations.
One option is to create an ISO file and pass it as a parameter. This works with both Windows and Ubuntu as host and guest. You can read the mounted CD-ROM inside the guest OS:
qemu-system-x86_64 -drive file=c:/qemuiso/winlive1.qcow2,format=qcow2 -m 8G -drive file=c:/qemuiso/sample.iso,index=1,media=cdrom
On a Linux guest, mount the CD-ROM (Ubuntu shown here):
blkid                          # check that the media is there
sudo mkdir /mnt/cdrom
sudo mount /dev/sr0 /mnt/cdrom # this step can also be put in crontab
cd /mnt/cdrom
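To build such an ISO in the first place, one possibility on Linux is genisoimage (the directory and volume name below are made up):
# Pack a directory of parameter files into an ISO image
genisoimage -o sample.iso -V PARAMS -r /path/to/params/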

Running Hortonworks Sandbox on Oracle VirtualBox

I have imported the Hortonworks Sandbox (HDP 2.6.1) into my Oracle VirtualBox (version 5.0.24 r108355). When I select this Sandbox and press Start, I get the error below:
The configured driver wasn't found. Either the necessary driver modules wasn't loaded, the name was misspelled, or it was a misconfiguration. (VERR_PDM_DRIVER_NOT_FOUND).
I've also tried changing the audio settings, but I am unable to change them.
Any solution would be helpful.
Thanks
The easiest solution will be to ditch VirtualBox and use VMware or Docker instead:
https://hortonworks.com/tutorial/sandbox-deployment-and-install-guide/section/3/
Thank you Sergey Kovalev for your comments.
Instead of the Import Appliance option, I tried the following:
1. Create a new virtual machine.
2. Select "Use an existing virtual hard disk file".
3. Follow the steps for storage.
4. Disable the audio settings.
After these steps I was able to start and work with the Hortonworks Sandbox on Oracle VirtualBox.
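For reference, roughly the same steps can be scripted with VBoxManage. This is only a sketch; the VM name, OS type, controller name, and disk path are assumptions:
# Create and register a new VM, attach the existing sandbox disk, and disable audio
VBoxManage createvm --name "HDP-Sandbox" --ostype RedHat_64 --register
VBoxManage storagectl "HDP-Sandbox" --name "SATA" --add sata
VBoxManage storageattach "HDP-Sandbox" --storagectl "SATA" --port 0 --device 0 --type hdd --medium /path/to/HDP_2.6.1.vmdk
VBoxManage modifyvm "HDP-Sandbox" --audio none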

Accessing external hard drive after logging into a remote machine using ssh command

I am doing an intensive computing project with a very old C program. The program requires a library called the Sun Performance Library, which is commercial software. Instead of purchasing the library myself, I am running the program by logging onto a Solaris machine in our computer lab with the ssh command, while the working directory that stores the output data is still on my local Mac.
Now a problem has occurred: the program uses a large amount of disk space to save some intermediate results, and the space on my local Mac is quickly filled (50 GB for each user, as prescribed by the administrator). These results are necessary for the next stage of computing, and I cannot delete any of them before the program finally produces the output data. Therefore, I have to move the working directory to an external hard drive in order to continue. Obviously,
cd /Volumes/VOLNAME
is not the correct way to do it because the remote machine will give me a prompt saying
/Volumes/VOLNAME: No such file or directory.
So, what is the correct way to do it?
sshfs recently added support for "slave mode", which allows you to do this. Assuming you have sshfs on Solaris (I'm not sure about that), the following command (run from your Mac) will do what you want:
dpipe /usr/lib/openssh/sftp-server = ssh SOLARISHOSTNAME sshfs MACHOSTNAME:/Volumes/VOLNAME MOUNTPOINT -o slave
This will result in the MOUNTPOINT directory on the server being mounted from your local external drive. Note that I'm not sure whether macOS has dpipe. If it doesn't, you can replace it with one of the equivalent solutions at How to make bidirectional pipe between two programs?. Also, if your SFTP server binary is somewhere else, substitute its path.
The common way to mount a remote volume in Solaris is via NFS, but that usually requires root permissions.
Another approach would be to make your application read its data from stdin and output its results to stdout, without using the file system directly. Then you could just redirect the data from/to your local machine through ssh. For instance:
ssh user@host remote_program </Volumes/VOLNAME/input.data >/Volumes/VOLNAME/output.data
(remote_program is a placeholder for whatever command you run on the Solaris machine)

Cannot connect to Compute Engine CentOS Virtual Machine

I am new to Virtual Machines and CLI so please bear with me.
I have CentOS 6.5 running on Compute Engine.
I ran yum update (without creating a snapshot of the previous disk - yes, I am an idiot) and now I cannot connect to the machine using the IP address.
I tried the following steps.
Tried to connect through FileZilla - didn't work.
Tried through PuTTY - didn't work.
Tried through the browser option given by the CE console - didn't work.
I even tried creating a snapshot and starting up another VM with the snapshot - didn't work.
If anyone knows how I can get the files and folders out from the previous disk, I can start up a new VM and transfer everything again.
I do not have the latest database and this is important.
Please help!
Thanks
Warren
The way to recover is to delete your VM without deleting the disk, then create another VM with its own boot disk, attach and mount the original disk, and recover any data that you need from it.
First things first: on the VM instances page, click on the instance name that is currently running with that disk, and uncheck the box "Delete boot disk when instance is deleted". Then delete the instance.
Now, create a new instance with its own boot disk. To tell this new disk apart from the original boot disk:
1. Use a different OS (or version of the OS) for the new disk, e.g., if using Ubuntu, try a different version or use Debian; if using RHEL, try CentOS, or vice versa.
2. Check which disk is mounted at / (it should be the new disk).
Mount the original disk as read-only and recover any information you need. Once you have a backup of your data, you can remount it with read-write access and try to fix it (but back up the data first!).
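A rough sketch of the same flow with the gcloud CLI (the instance and disk names are invented, and the exact flags may vary by gcloud version):
# Keep the broken boot disk around when its instance is deleted
gcloud compute instances set-disk-auto-delete broken-vm --disk broken-disk --no-auto-delete
gcloud compute instances delete broken-vm
# Create a rescue VM and attach the old disk read-only
gcloud compute instances create rescue-vm --image-family debian-9 --image-project debian-cloud
gcloud compute instances attach-disk rescue-vm --disk broken-disk --mode ro
# Inside rescue-vm: mount the old disk and copy out the data
sudo mkdir -p /mnt/old
sudo mount -o ro /dev/sdb1 /mnt/old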
I finally solved this problem thanks to Misha for sending me in the right direction.
The steps are below for anyone who has the same issue.
Problem:
After updating the CentOS server using yum update, I was unable to connect back to the server.
I tried all possible combinations but had no luck. This seems to be a known issue, as there was some material on the Compute Engine site regarding it.
Solution:
I followed the steps Misha suggested: I started up another VM with its own boot disk and then attached the original disk with read-write access.
Note: I was unable to mount the disk as just read only.
The commands were
mkdir /mnt/sdb1
mount /dev/sdb1 /mnt/sdb1
Once I mounted the disk, I copied the files from the html folder on the sdb1 disk to the html folder on sda1 (the new boot disk).
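That copy might look like this (assuming a standard Apache document root; the exact paths on your disks may differ):
# Copy the web root from the old disk onto the new boot disk
sudo cp -a /mnt/sdb1/var/www/html/. /var/www/html/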
The database was a bit more challenging.
I tried quite a few times, but copying the files from /mnt/sdb1/var/lib/mysql into the new disk's mysql folder was not working.
I found some tutorials but nothing helped.
Finally, I downloaded the files from /mnt/sdb1/var/lib/mysql and put them in my local Windows MySQL installation, inside the data folder.
Remember, you have to download everything, which includes ib_logfile0, ib_logfile1, and ibdata1, as well as the folder containing the *.frm files.
Then I opened localhost/phpmyadmin and voila... the files were there.
The rest was pretty simple... Exporting and uploading the SQL scripts back to the server.
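That final step could look roughly like this (the database name and server address are placeholders):
# Export from the local MySQL instance, then load the dump onto the server
mysqldump -u root -p mydatabase > mydatabase.sql
mysql -u root -p -h SERVER_IP mydatabase < mydatabase.sql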
This took me about 12 hours to figure out.
Thanks again Misha.