How do I extract specific files from Cuckoo Sandbox VM? - malware

I'm studying ransomware behavior with Cuckoo Sandbox. I need to retrieve the encrypted files and the ransom note that the ransomware created, but they only exist on my Cuckoo Sandbox VM. How do I extract specific files from the VM?
My environment:
Cuckoo Sandbox 2.0.6
Host OS: Ubuntu 18.04
Guest OS: Windows 7 SP1 x86 (without guest additions)
VM software: VirtualBox 5.2

You can get the time required to handle your manipulations by:
1. Specifying a high number of seconds, either on the command line (e.g. --timeout 300; see the example after this list) or in the GUI.
2. Forcing Cuckoo to wait out the full timeout (add --enforce-timeout on the command line) or using the GUI.
3. Copying the required files into the shared folder.
4. Taking a screenshot from the host machine to capture the message displayed by the ransomware.
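A minimal submission sketch, assuming the Cuckoo 2.x command-line interface (the sample path is a placeholder):
# submit a sample with a 300-second analysis window and keep the
# analysis running for the full timeout before collecting results
cuckoo submit --timeout 300 --enforce-timeout /path/to/sample.exe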

Related

Way to pass parameters or share a directory/file to a qemu-kvm launched VM on CentOS 7.0

I need to be able to pass some parameters to my virtual machine during its boot-up so it sets itself up properly. To do that, I either have to bake the info into the image or somehow pass it as parameters to my qemu-kvm command. These parameters are just a few, and if it were VMware, we would pass them as OVA params, and when the VM launched we would call the OVA environment to get them. But launching from qemu-kvm I have no such option. I did some homework and found that I could use the virtio-9p driver for sharing files between host and guest. Unfortunately, RHEL/CentOS has decided not to support 9p.
With no option of rebuilding my RHEL kernel with the 9p options enabled, how do I solve this problem? Either solution would work: pass/share some kind of JSON file to the VM (pre-populated on the host), which it will read and use for its setup, OR set some kind of "environment variables" which I can query from within the VM to get these params and continue with setup. Any pointers would help.
If your version of QEMU supports it, you could use its -fw_cfg option to pass information to the guest. If that guest is running a Linux kernel with CONFIG_FW_CFG_SYSFS enabled, you will be able to read out the information from sysfs. An example:
If you launch your VM like so:
qemu-system-x86_64 <OPTIONS> -fw_cfg name=opt/com.example.test,string=qwerty
From inside the guest, you can then get the value back from sysfs:
cat /sys/firmware/qemu_fw_cfg/by_name/opt/com.example.test/raw
There appears to be some driver for Windows as well, but I've never used it.
When you boot your guest with -kernel and -initrd you should be able to pass environment variables with -append.
The downside is that you have to keep track of your current kernel and initrd outside of your disk image.
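A sketch of this approach (the kernel/initrd file names and the guest_role variable are placeholders):
qemu-system-x86_64 <OPTIONS> -kernel vmlinuz -initrd initrd.img -append "console=ttyS0 guest_role=webserver"
Inside the guest, the appended string shows up in /proc/cmdline, and name=value pairs the kernel doesn't consume are handed to init as environment variables:
cat /proc/cmdline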
Other possibilities could be a small prepared disk image (as you said) or via network/dhcp or a serial link into your guest or ... this really depends on your environment.
I was just searching to see if this situation had improved and came across this question. Apparently it has not improved.
What I do is output my variable data to a temp file (e.g. /tmp/xxFoo). Usually I write text or a tar archive straight to that file, then pad it to a minimum size that is a 512-byte multiple, like 64K, otherwise the disk controller won't configure it. Then the VM starts with that file attached as a raw drive; the sketch below shows the flow. After the VM has started, the temp file is deleted. From within the guest you can read/cat the raw block device and get the variable data (on BSD, use the c partition as the raw drive).
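A sketch of that flow with a Linux host and guest (paths, sizes, and the virtio device name are illustrative):
# pack the variable data and pad the file to a 512-byte multiple
tar cf /tmp/xxFoo ./guest-config
truncate -s 64K /tmp/xxFoo
# attach it to the VM as a raw disk
qemu-system-x86_64 <OPTIONS> -drive file=/tmp/xxFoo,format=raw,if=virtio
# inside the guest, read the archive straight off the raw block device
tar xf /dev/vdb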
In Windows guests it's tricky to get at the data. In theory you can read \\.\PhysicalDriveN, but I have never been able to get that to work. Cygwin can do it, and it works like Linux. The other option is to make your temp file a partitioned and formatted image, but that's a pain to create and update.
As far as sharing a folder goes, I use Samba, which works in just about anything. I usually run several instances of smbd with different configurations.
One option is to create an ISO file and pass it as a parameter. This works with both Windows and Ubuntu as host and guest. You can read the mounted CD-ROM inside the guest OS:
>>qemu-system-x86_64 -drive file=c:/qemuiso/winlive1.qcow2,format=qcow2 -m 8G -drive file=c:\qemuiso\sample.iso,index=1,media=cdrom
On a Linux guest, mount the CD-ROM (Ubuntu):
>>blkid    # check that the medium is there
>>sudo mkdir /mnt/cdrom
>>sudo mount /dev/sr0 /mnt/cdrom    # this step can also be put in crontab
>>cd /mnt/cdrom
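To build the ISO in the first place, one common tool on Linux is genisoimage (mkisofs is an alternative; oscdimg on Windows); the payload directory is a placeholder:
genisoimage -r -J -o sample.iso /path/to/payload/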

Apache server isn't starting on XAMPP Portable

It may seem like a repeated question, but my problem is that I can't start the Apache server on XAMPP Portable. I'm aware of the issues that some applications can cause, since they may use the ports where Apache is supposed to run, so I decided to change the Apache ports in the httpd.conf and httpd-ssl.conf files to listen on 8080 and 8001 respectively. Here are some screenshots of the changes:
http-conf1, http-conf2, httpssl-conf1, httpssl-conf2
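In text form, the edits amount to something like this (ports as described above; the surrounding lines in the real files will differ):
# apache\conf\httpd.conf
Listen 8080
# apache\conf\extra\httpd-ssl.conf
Listen 8001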
In addition to that, I also changed the configuration under "Service & Port Settings" in the XAMPP Control Panel, as shown in the following image:
xampp-ports
Although I made all these changes, I still can't get the Apache instance running, and I keep getting the following error: xampp-error. It's important to note that I'm trying to start the Apache service on a company workstation where I don't have any admin rights, but I read that ports above 1024 don't need admin rights to run services on them, so I don't know what to do at this point. Any suggestion from you guys would be really appreciated.
Thanks.
There are lots of answers to this problem here; in particular, I think this is the answer you are looking for:
Have you executed "setup_xampp.bat" script?
It's inside XAMPP folder and it must be executed every time you change XAMPP folder.
(Bold is mine)
While not explicitly stated in any immediate and easily visible warning or message, this is also stated in the readme_en.txt file inside the XAMPP Portable main folder:
Step 1: Unpack the package into a directory of your choice. Please start the
"setup_xampp.bat" and beginning the installation.
Note: XAMPP makes no entries in the windows registry and no settings for the system variables.
I'd also say there is a not-so-clear note section right above this step:
[NOTE: Unpack the package to your USB stick or a partition of your choice.
There it must be on the highest level like E:\ or W:. It will
build E:\xampp or W:\xampp or something like this. Please do not use the "setup_xampp.bat" for an USB stick installation!]
I've installed it in a random folder (not root) and after running the setup_xampp.bat script everything ran correctly.
If you are using XAMPP on a USB drive and it has issues on a different Windows PC/laptop, assign the USB drive the letter it had on the PC where XAMPP was installed.
Suppose you install XAMPP on a USB drive on computer Alpha, and Alpha assigns the drive the letter F. Now you are at another PC/laptop, Bravo, which assigns it the letter W by default; change that letter back to F using Bravo's Disk Management.
Problem: XAMPP Portable won't start, fails, or just doesn't work!
Error: Apache shutdown unexpectedly.
[Apache] This may be due to a blocked port, missing dependencies,
[Apache] improper privileges, a crash, or a shutdown by another method.
[Apache] Press the Logs button to view error logs and check... ...
Solution:
Option 1
Step 1: Open Apache "httpd.conf" from the XAMPP Control Panel. The file will open in Notepad.
image-xampp-config
Step 2: Scroll down or search for "ServerRoot". If the result is ServerRoot "/xampp/apache", follow the next step. If not, follow Option 2.
image-xammp-ServerRoot
If your 'httpd.conf' does not look like this image, follow Option 2.
Step 3: For the portable version of XAMPP, don't use any other folder name like 'xampp56'. Use only "xampp", and put it in the root of the drive, with no sub-folder/directory.
Step 4: Open your USB drive and go to the xampp folder, then start the XAMPP Control Panel in 'run as administrator' mode.
Done.
Option 2
Step 1: Open 'My Computer' or 'This PC' to confirm your USB drive letter on the current PC: E:, F:, G:, …
Step 2: Open Apache "httpd.conf" from the XAMPP Control Panel. The file will open in Notepad. Now scroll down or search for "ServerRoot".
image-changing-file-httpd
Step 3: There is a file path after ServerRoot; change it to your current USB drive's path if it does not match this PC.
image-notepade-replace-function
Step 4: Repeat the same process for Apache 'httpd-ssl.conf', 'httpd-xampp.conf', and
'php.ini' (note that php.ini uses backslashes "\" instead of forward slashes "/")
image-php-ini-config-update
and the files under
{…your usb…}\xampp\apache\conf\extra\
File names: 'httpd-autoindex.conf', 'httpd-multilang-errordoc.conf'.
Note: If your XAMPP Portable Apache "httpd.conf" looks like the one in Option 2, you have to repeat this process every time, so I recommend downloading a new version of 'xampp-portable-win32-... .zip'. Otherwise, a total of 7 files need to be updated every time you change PC: Apache ('httpd.conf', 'httpd-ssl.conf', 'httpd-xampp.conf'), 'php.ini', 'my.ini', 'httpd-autoindex.conf', 'httpd-multilang-errordoc.conf'.
Step 5: Open your USB drive and go to the xampp folder, then start the XAMPP Control Panel in 'run as administrator' mode.
Done.
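For reference, the path-bearing lines being edited look roughly like this (the drive letter F: is illustrative; yours may differ):
# apache\conf\httpd.conf
ServerRoot "F:/xampp/apache"
DocumentRoot "F:/xampp/htdocs"
; php.ini (backslashes, per the note above)
extension_dir = "F:\xampp\php\ext"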

Accessing external hard drive after logging into a remote machine using ssh command

I am doing an intensive computing project with a very old C program. The program requires a library called Sun Performance Library, which is a commercial product. Instead of purchasing the library myself, I am running the program by logging onto a Solaris machine in our computer lab with the ssh command, while the working directory that stores the output data is still on my local Mac.
Now a problem has occurred: the program uses a large amount of disk space to save intermediate results, and the space on my local Mac is quickly filled (50 GB per user, as prescribed by the administrator). These results are necessary for the next stage of computing and I cannot delete any of them before the program finally produces its output. Therefore, I have to move the working directory to an external hard drive in order to continue. Obviously,
cd /Volumes/VOLNAME
is not the correct way to do it because the remote machine will give me a prompt saying
/Volumes/VOLNAME: No such file or directory.
So, what is the correct way to do it?
sshfs recently added support for "slave mode", which allows you to do this. Assuming you have sshfs on Solaris (I'm not sure about this), the following command (run from your Mac) will do what you want:
dpipe /usr/lib/openssh/sftp-server = ssh SOLARISHOSTNAME sshfs MACHOSTNAME:/Volumes/VOLNAME MOUNTPOINT -o slave
This will result in the MOUNTPOINT directory on the server being backed by your local external drive. Note that I'm not sure whether macOS has dpipe; if it doesn't, you can replace it with one of the equivalent solutions at How to make bidirectional pipe between two programs?. Also, if your SFTP server binary is somewhere else, substitute its path.
The common way to mount a remote volume in Solaris is via NFS, but that usually requires root permissions.
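For completeness, if you did have root on the Solaris box, an NFS mount of the Mac volume would look roughly like this (the hostname is a placeholder; the export line goes into /etc/exports on the Mac):
# on the Mac: export the volume, then restart the NFS daemon
echo "/Volumes/VOLNAME" | sudo tee -a /etc/exports
sudo nfsd restart
# on Solaris, as root:
mount -F nfs MACHOSTNAME:/Volumes/VOLNAME /mnt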
Another approach would be to make your application read its data from stdin and output its results to stdout, without using the file system directly. Then you could just redirect the data from/to your local machine through ssh. For instance:
ssh user@host ./your_program </Volumes/VOLNAME/input.data >/Volumes/VOLNAME/output.data
(here ./your_program is a placeholder for whatever command you run on the Solaris machine; ssh redirects its stdin and stdout to files on your local Mac)

Using HGFS within VMware to run web server

I have VMware running Ubuntu 14.04 with a Windows 8 host. I've enabled shared folders and installed VMware Tools. Now what I want is to run the web server from /mnt/hgfs/ProjectName.
At this point I can access the shared folder from within Ubuntu. I do not have to use sudo to create new folders or files or to edit existing ones. The folder is not mounted as read-only and is not treated as read-only; however, when I try to change the read-only attribute from within Windows, it reverts back afterwards. Is there any clue as to why the web server cannot read the folder? Even if it were mounted read-only, the web server should still be able to read the files.
It turns out the best way to run this is to name the project folder html within VMware and then mount it at /var/www. Now edits are made without problems and the server runs great, with access from both the host and the guest OS.
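A sketch of that mount using open-vm-tools' FUSE client (the share name html matches the setup above; options may vary by version):
# mount the hgfs share named "html" at /var/www
sudo vmhgfs-fuse .host:/html /var/www -o allow_other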

What is the fastest way to upload big files to the server

I have a dedicated server and a file of about 4 GB to upload to it. What is the fastest and safest way to upload that file to the server?
FTP may create issues if the connection breaks.
SFTP will have the same issue as well.
Is your own computer reachable through a public internet IP as well?
In that case you could set up a simple HTTP server (if you have Windows, just set up IIS) and then use a download manager on the dedicated server (depending on its OS) to download the file over HTTP (it can use multiple streams for that), or do it through a torrent; see the sketch after this answer.
There are trackers, like http://openbittorrent.com/, which will allow you to keep the file on your computer and then use a torrent client to transfer it to the dedicated server.
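A minimal sketch of the HTTP variant, using Python's built-in server in place of IIS (the port, public IP, and file name are placeholders):
# on your computer, serve the directory containing the file
python3 -m http.server 8000
# on the dedicated server, download with resume support
wget -c http://YOUR_PUBLIC_IP:8000/bigfile.bin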
I'm not sure what OS your remote server is running, but I would use wget; it has a --continue option. From the man page:
--continue
Continue getting a partially-downloaded file. This is useful when
you want to finish up a download started by a previous instance of
Wget, or by another program. For instance:
wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
If there is a file named ls-lR.Z in the current directory, Wget
will assume that it is the first portion of the remote file, and
will ask the server to continue the retrieval from an offset equal
to the length of the local file.
wget binaries are available for GNU/Linux / Windows / Mac OS X / DOS:
http://wget.addictivecode.org/FrequentlyAskedQuestions?action=show&redirect=Faq#download