Open PDF found with Volatility

My task is to analyze a memory dump. I've found the location of a PDF file and I want to analyze it with VirusTotal, but I can't figure out how to "download" it from the memory dump.
I've already tried it with this command:
python vol.py -f img.vmem dumpfiles -r pdf$ -i --name -D dumpfiles/
But in my dumpfiles directory there is just a .vacb file, which is not a valid PDF.

I think you may have missed a command line argument from your command:
python vol.py -f img.vmem dumpfiles -r pdf$ -i --name -D dumpfiles/
If you are not getting a .dat file in your output folder, you can add -u:
-u, --unsafe Relax safety constraints for more data
I can't test this without access to the dump, but you should be able to rename the .dat file created to .pdf.
So it should look something like this:
python vol.py -f img.vmem dumpfiles -r pdf$ -i --name -D dumpfiles/ -u
You can check out the documentation on the commands here
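If the -u run does give you a file, a quick way to get it ready for VirusTotal is to confirm its type, rename it, and take a hash you can search for instead of uploading. A rough sketch, where the .dat name is only a placeholder for whatever dumpfiles actually writes out:
cd dumpfiles/
file file.None.0xfa8002b4c9e0.dat    # placeholder name; should report "PDF document" if the carve worked
mv file.None.0xfa8002b4c9e0.dat suspect.pdf
sha256sum suspect.pdf                # look this hash up on VirusTotal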

VACB stands for "Virtual Address Control Block". Your output type seems to be wrong.
Try something like:
$ python vol.py -f img.vmem dumpfiles --output=pdf --output-file=bla.pdf --profile=[your profile] -D dumpfiles/
or check out the cheat sheet: here
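Another route that often works with Volatility 2.x (plugin names differ in Volatility 3) is to locate the FILE_OBJECT with filescan first and then dump it by physical offset. The offset below is only an example; use whatever filescan reports for your PDF:
python vol.py -f img.vmem --profile=[your profile] filescan | grep -i '\.pdf'
python vol.py -f img.vmem --profile=[your profile] dumpfiles -Q 0x000000007d6b3a10 -D dumpfiles/ -u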


Google Colab Blender render Error: cannot read file

I am trying to render a single frame, following the script "Blender_script_for_Google_Colab_using_the_GPU.ipynb" by https://github.com/donmahallem.
Successfully mounted GDrive and installed Blender.
Executed all the cells from top to bottom, one by one.
This is the output of the final cell:
found bundled python: /content/blender2.83.12/2.83/python
Error: Cannot read file '/content/{/content/drive/MyDrive/Blender/donut.blend}': No such file or directory
<bpy_struct, CyclesPreferences at 0x7f6366c38ba8>
Device found CUDA
Activating <bpy_struct, CyclesDeviceSettings("Tesla T4")>
Activating <bpy_struct, CyclesDeviceSettings("Tesla T4")>
Blender quit
ANSWER
It should be like this:
!/content/blender2.83.12/blender -P './setgpu.py' -b -noaudio '/content/drive/MyDrive/Blender/donut.blend' -E CYCLES -o '/content/drive/MyDrive/Blender/test_mixed_####.png' -f 1 |& tee '/content/drive/MyDrive/Blender/log.txt'
NOT like this:
!/content/blender2.83.12/blender -P './setgpu.py' -b -noaudio '{/content/drive/MyDrive/Blender/donut.blend}' -E CYCLES -o '{/content/drive/MyDrive/Blender/test_mixed_####.png}' -f 1 |& tee '/content/drive/MyDrive/Blender/log.txt'
In short, I forgot to remove the curly brackets {} from "Blend_file_path" and "Output_path".
I believe you should use 'My Drive' rather than 'MyDrive' in your directory path.
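If the path still looks correct but Blender cannot read it, a quick check from a Colab cell will show whether the file is actually visible at that location (the path is the one from the question; adjust 'MyDrive' vs 'My Drive' to match how your Drive is mounted):
!ls -l '/content/drive/MyDrive/Blender/donut.blend'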

tcpdump with -w -C -G and -z options

I'm trying to take continuous traces which are written to files that are limited by both duration (-G option) and size (-C option). The files are automatically named with the -w option, and finally the files are compressed with the -z gzip option. Altogether what I have is:
tcpdump -i eth0 -w /home/me/pcaps/MyTrace_%Y-%m-%d_%H%M%S.pcap -s 0 -C 100 -G 3600 -Z root -z gzip &
The problem is that with the -C option, the current file count is appended onto the name, so I wind up with files ending in: .pcap2.gz .pcap3.gz .pcap4.gz, etc. I would much prefer to have them end as: _2.pcap.gz _3.pcap.gz _4.pcap.gz, etc.
But if I remove .pcap from the -w option, I wind up with 2.gz 3.gz 4.gz
This could work if I could include options in the -z command, like -z "gzip -S .pcap.gz", so that gzip itself appends the .pcap part, or if I could use an alias like pcap_gzip="gzip -S .pcap.gz" and then -z pcap_gzip. Neither approach seems to work, though; the latter produces this error: compress_savefile:execlp(gzip -S pcap.gz, /home/me/pcaps/MyTrace_2018-08-07_105308_27): No such file or directory
I encountered the same problem today, on CentOS 6. I found your question, but the existing answer did not work for me.
In fact, it only needs a slight adjustment: write the absolute path for both the saved file name and the script to be executed, for example:
tcpdump -i em1 ... -s 0 -G 10 -w '/home/Svr01_std_%Y%m%d_%H%M%S.pcap' -Z root -z /home/pcapup2arcive.sh
I found out that although the alias doesn't work, I was able to put the same commands in a script and invoke the script via tcpdump -z.
pcap_gzip.sh:
#!/bin/bash
gzip -S .pcap.gz "$@"
Then:
tcpdump -i eth0 -w /home/me/pcaps/MyTrace_%Y-%m-%d_%H%M%S -s 0 -C 100 -G 3600 -Z root -z pcap_gzip.sh &
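Two details to watch for: the script must be executable, and tcpdump runs the post-rotate command with the finished savefile as its only argument, so it is safest to pass the script by absolute path as in the other answer (the path below is just an example):
chmod +x pcap_gzip.sh
tcpdump -i eth0 -w /home/me/pcaps/MyTrace_%Y-%m-%d_%H%M%S -s 0 -C 100 -G 3600 -Z root -z /home/me/pcap_gzip.sh &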

CommandException: Caught non-retryable exception - aborting rsync

After using gsutil for more than a year, I suddenly got this error:
.....
At destination listing 8350000...
At destination listing 8360000...
CommandException: Caught non-retryable exception - aborting rsync
.....
I tried to locate the files with this sync problem, but I am not able to do so. Is there a "skip error" option, or is there a way I can make gsutil more verbose?
My command line is like this:
gsutil -V -m rsync -d -r -U -P -C -e -x 'Download/*' /opt/ gs://mybucket1/kraanloos/
I have created a script to split up the problem. This gives me more info for finding a solution:
#!/bin/bash
array=(
3ware
AirTime
Amsterdam
BigBag
Download
guide
home
Install
Holding
Multimedia
newsite
Overig
Trak-r
)
for i in "${array[@]}"
do
echo Processing : $i
PROCESS="/usr/bin/gsutil -m rsync -d -r -U -P -C -e -x 'Backup/*' /opt/$i/ gs://mybucket1/kraanloos/$i/"
echo $PROCESS
$PROCESS
echo ""
echo ""
done
I've been struggling with the same problem the last few days. One way to make it super verbose is to put the -D flag before the rsync argument, as in:
gsutil -D rsync ...
By doing that, I found that my problem is due to having # characters in filenames, as in this question.
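If you suspect the same cause, a quick check before re-running rsync is to list local filenames that contain a # (using the source tree from the question):
find /opt/ -name '*#*'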
In my case, it was because of a broken symbolic link to a directory.
As blambert said, use the -D option to see exactly what file causes the problem.
I struggled with this problem as well and have figured it out now.
You need to re-authenticate your Google Cloud SDK shell and set a target project again.
It seems that rsync will not show the correct error message.
Try cp instead; it will guide you through authenticating and setting the correct primary project:
gsutil cp OBJECT_LOCATION gs://DESTINATION_BUCKET_NAME/
After that, your gsutil rsync should run fine.
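For reference, re-authenticating and setting the project with the Cloud SDK usually looks roughly like this (the project ID is a placeholder):
gcloud auth login
gcloud config set project MY_PROJECT_ID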

Run the RapSearch program with Torque PBS and qsub

My problem is that I have a cluster server with Torque PBS, and I want to use it to run a sequence comparison with the program rapsearch.
The normal RapSearch command is:
./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32
Now I want to run it with 2 nodes on the cluster server.
I've tried echo "./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32" | qsub -l nodes=2, but nothing happened.
Do you have any suggestions? Where am I going wrong? Please help.
Standard output (and error output) files are placed in your home directory by default; take a look. You are looking for a file named STDIN.e[numbers]; it will contain the error message.
However, I see that you're using ./rapsearch but are not really being explicit about what directory you're in. Your problem is therefore probably a matter of changing directory into the directory that you submitted from. When your terminal is in the directory of the rapsearch executable, try echo "cd \$PBS_O_WORKDIR && ./rapsearch [arguments]" | qsub [arguments] to submit your job to the cluster.
Other tips:
You could add rapsearch to your path if you use it often. Then you can use it like a regular command anywhere. It's a matter of adding the line export PATH=/full/path/to/rapsearch/bin:$PATH to your .bashrc file.
Create a submission script for use with qsub. Here is a good example, and a minimal sketch follows below.
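As a starting point, a minimal submission script reusing the arguments from the question might look roughly like this. The ppn value is an assumption to adjust for your cluster, and as far as I know rapsearch is multithreaded rather than MPI-based, so its -z threads stay on a single node:
#!/bin/bash
#PBS -N rapsearch
#PBS -l nodes=1:ppn=32
#PBS -j oe
# run from the directory the job was submitted from
cd "$PBS_O_WORKDIR"
./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32
Save it as, say, rapsearch.pbs and submit it with qsub rapsearch.pbs.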

Chaining terminal commands in a script on Mac OS X

I am trying to chain some terminal commands together so that I can wget a file, unzip it, and then sync it directly to Amazon S3. Here is what I have so far; I have the s3cmd tool installed properly and working. This works for me:
mkdir extract; wget http://wordpress.org/latest.tar.gz; mv latest.tar.gz extract/; cd extract; tar -xvf latest.tar.gz; cd ..; s3cmd -P sync extract s3://suys.media/
How do I then go about creating a simple script so I can just use variables?
You will probably want to look at bash scripting.
This guide can help you a lot: http://bash.cyberciti.biz/guide/Main_Page
For your question:
Create a file called mysync:
#!/bin/bash
mkdir extract && cd extract
wget "$1"
# extract every downloaded archive
for f in *.tar.gz
do
tar -xvf "$f"
done
# sync the extract directory to the given S3 address
cd ..
s3cmd -P sync extract "$2"
$1 and $2 are the parameters that you call your script with. You can look here for more information about how to use command-line parameters: http://bash.cyberciti.biz/guide/How_to_use_positional_parameters
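As a tiny illustration of how positional parameters arrive in a script (the file name is just an example; save it as showargs and run ./showargs hello world):
#!/bin/bash
echo "first argument:  $1"
echo "second argument: $2"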
PS: #!/bin/bash is a necessity; you need to tell your script where bash is stored. It's /bin/bash on most Unix systems, but I'm not sure whether it is the same on Mac OS X; you can find out by calling the which command in the terminal:
→ which bash
/bin/bash
You need to give your script executable privileges to run it:
chmod +x mysync
Then you can call it from the command line:
mysync url_to_download s3_address
PS2: I haven't tested the code above, but that is the idea. Hope this helps.
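For example, with the URL and bucket from the question (assuming the script sits in the current directory and is executable):
./mysync http://wordpress.org/latest.tar.gz s3://suys.media/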