Chaining a terminal script on Mac OS X - amazon-s3

I am trying to chain some terminal commands together so that I can wget a file, unzip it, and then sync it directly to Amazon S3. Here is what I have so far; I have the s3cmd tool installed properly and working. This works for me:
mkdir extract; wget http://wordpress.org/latest.tar.gz; mv latest.tar.gz extract/; cd extract; tar -xvf latest.tar.gz; cd ..; s3cmd -P sync extract s3://suys.media/
How do I then go about creating a simple script where I can just use variables?

You will probably want to look at bash scripting.
This guide can help you a lot: http://bash.cyberciti.biz/guide/Main_Page
For your question:
Create a file called mysync:
#!/bin/bash
mkdir extract && cd extract
wget "$1"                     # download the archive given as the first argument
for f in *.tar.gz             # extract every downloaded tarball
do
    tar -xvf "$f"
done
s3cmd -P sync "$(pwd)" "$2"   # sync the extracted directory to the given S3 address
$1 and $2 are the parameters that you call your script with. You can look here for more information about how to use command line parameters: http://bash.cyberciti.biz/guide/How_to_use_positional_parameters
PS: the #!/bin/bash line is a necessity; you need to tell your script where bash is stored. It's /bin/bash on most Unix systems, but if you're not sure whether it is the same on Mac OS X, you can find out by calling the which command in the terminal:
$ which bash
/bin/bash
You need to give your script executable permissions to run it:
chmod +x mysync
Then you can call it from the command line:
mysync url_to_download s3_address
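For the example from the question, that call would be:
mysync http://wordpress.org/latest.tar.gz s3://suys.media/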
PS2: I haven't tested the code above, but that is the idea. Hope this helps.

Related

How to automate commands on Cygwin

Hi, I am looking to automate file transfers to my jailbroken iPhone over USB with a bash file, which will launch the relay and then do the file transfers.
With this here I installed and successfully transferred files to my iPhone with Cygwin, but now I want to automate the file transfer.
First I need to start the relay with Cygwin, and these commands are required:
cd pyusbmux/python-client/
chmod +x *
./tclrelay.py -t 22:2222
So I created a .sh file that does this, but when I launch it, Cygwin gives me errors.
(Screenshots showed what should happen on the left and the result of the script on the right.)
How can I make Cygwin open with those commands?
In addition, to be sure that tcpON.sh has proper line termination, run d2u from the dos2unix package:
d2u tcpON.sh
You should add a proper shebang on the first line of your script:
https://linuxize.com/post/bash-shebang/
#!/bin/bash
cd /cygdrive/e/Grez/Desktop
cd pyusbmux/python-client/
chmod +x *
./tclrelay.py -t 22:2222
You can use Cygwin.bat as a base and make a tcpON.bat batch file like:
C:
chdir c:\cygwin64\bin
bash --login /cygdrive/e/Grez/Desktop/tcpON.sh
Verify the cd commands to be sure that you are always in the expected directory.
It is not the only way, but it is probably the most flexible (IMHO).

Shell script to copy data from remote server to Google Cloud Storage using Cron

I want to sync my server data to Google Cloud Storage so it is copied automatically, using a shell script. I don't know how to write the script. Every time I need to use:
gsutil -m rsync -d -r [Source] gs://[Bucket-name]
If anyone knows the answer please help me!
To automate the sync process, use a cron job:
Create a script to run with cron: $ nano backup.sh
Paste your gsutil command in the script (see the full script sketch below): gsutil -m rsync -d -r [Source_PATH] gs://bucket-name
Make the script executable: $ chmod +x backup.sh
Based on your use case, put the shell script (backup.sh) in one of the folders below: a) /etc/cron.daily b) /etc/cron.hourly c) /etc/cron.monthly d) /etc/cron.weekly
If you want to run this script at a specific time, then go to the terminal and type: $ crontab -e
Then simply call the script from cron as often as you want, for example, at midnight: 00 00 * * * /path/to/your/backup.sh
In case you are using Windows on your local server, the commands will be the same as above, but make sure to use Windows paths instead.
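Putting the pieces together, a minimal backup.sh might look like this (the source path and bucket name are placeholders, not from the original question):
#!/bin/bash
# Sync the local data directory to the bucket.
# /var/www/data and gs://my-backup-bucket are placeholder values - replace with your own.
gsutil -m rsync -d -r /var/www/data gs://my-backup-bucket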

Open PDF found with Volatility

My task is to analyze a memory dump. I've found the location of a PDF file and I want to analyze it with VirusTotal, but I can't figure out how to "download" it from the memory dump.
I've already tried it with this command:
python vol.py -f img.vmem dumpfiles -r pdf$ -i --name -D dumpfiles/
But in my dumpfiles directory there is just a .vacb file, which is not a valid PDF.
I think you may have missed a command line argument from your command:
python vol.py -f img.vmem dumpfiles -r pdf$ -i --name -D dumpfiles/
If you are not getting a .dat file in your output folder you can add -u:
-u, --unsafe Relax safety constraints for more data
I can't test this without access to the dump, but you should be able to rename the .dat file created to .pdf.
So it should look something like this:
python vol.py -f img.vmem dumpfiles -r pdf$ -i --name -D dumpfiles/ -u
You can check out the documentation on the commands here
VACB stands for "virtual address control block". Your output type seems to be wrong.
Try something like:
$ python vol.py -f img.vmem dumpfiles --output=pdf --output-file=bla.pdf --profile=[your profile] -D dumpfiles/
or check out the cheat sheet: here

Trying to download a dataset from a website

I am trying to download a dataset from a website, but I can't download the whole folder; I have to download each file separately, which will take a lot of time. I am wondering if there is any way to download the whole folder at once?
The website link: http://www.physionet.org/pn4/eegmmidb/
Use wget with the -r switch to turn on mirroring.
This command will do what you want:
wget --no-parent -r http://www.physionet.org/pn4/eegmmidb/
It'll produce a mirror copy of everything from that directory on down.
These two for loops run in bash should do it:
for S in S{001..109}; do
    mkdir ${S}
    cd ${S}
    for R in R{01..14}; do
        file="http://www.physionet.org/pn4/eegmmidb/${S}/${S}${R}.edf"
        wget "$file"
        wget "${file}.event"
    done
    cd ..
done

Scripts installed by the deb package have wrong prefix

Building our own deb packages, we've run into the issue of having to manually patch some scripts so they get the proper prefix.
In particular:
We're building Mono.
We're using official tarballs.
The scripts that end up with the wrong prefix are: mcs, xbuild, nunit-console4, etc.
An example of a wrong script:
#!/bin/sh
exec /root/7digital-mono/mono/bin/mono \
--debug $MONO_OPTIONS \
/root/7digital-mono/mono/lib/mono/2.0/nunit-console.exe "$@"
What should be the correct end result:
#!/bin/sh
exec /usr/bin/mono \
--debug $MONO_OPTIONS \
/usr/lib/mono/2.0/nunit-console.exe "$@"
The workaround we're using in our build-package script before calling dpkg-buildpackage:
sed -i s,`pwd`/mono,/usr,g $TARGET_DIR/bin/mcs
sed -i s,`pwd`/mono,/usr,g $TARGET_DIR/bin/xbuild
sed -i s,`pwd`/mono,/usr,g $TARGET_DIR/bin/nunit-console
sed -i s,`pwd`/mono,/usr,g $TARGET_DIR/bin/nunit-console2
sed -i s,`pwd`/mono,/usr,g $TARGET_DIR/bin/nunit-console4
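The same workaround could also be written as a single loop (an untested sketch using the same paths as the commands above):
for script in mcs xbuild nunit-console nunit-console2 nunit-console4
do
    # rewrite the baked-in build prefix to /usr in each installed script
    sed -i "s,$(pwd)/mono,/usr,g" "$TARGET_DIR/bin/$script"
done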
Now, what is the CORRECT way to fix this? Full debian package creation scripts here.
Disclaimer: I know there are preview packages of Mono 3 here! But those don't work for Squeeze.
The proper way is to not call ./configure --prefix=$TARGET_DIR.
This tells all the binaries/scripts/... that the installed files will end up in ${TARGET_DIR}, whereas they really should end up in /usr.
You can use the DESTDIR variable (as in make install DESTDIR=${TARGET_DIR}) to change (prefix) the installation target at install time: files will end up in ${TARGET_DIR}/${prefix}, but will only have ${prefix} "built-in".
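A minimal sketch of that flow (the staging directory is illustrative):
./configure --prefix=/usr            # /usr gets baked into the generated scripts
make
make install DESTDIR=${TARGET_DIR}   # files are staged under ${TARGET_DIR}/usr for packaging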