EMR: How to join files into one? - amazon-s3

I've split a big binary file into 2 GB chunks and uploaded them to Amazon S3.
Now I want to join it back into one file and process it with my custom
I've tried to run
elastic-mapreduce -j $JOBID -ssh \
"hadoop dfs -cat s3n://bucket/dir/in/* > s3n://bucket/dir/outfile"
but it failed because -cat writes its output to my local terminal - it does not work remotely...
How can I do this?
P.S. I've tried to run cat as a streaming MR job:
den#aws:~$ elastic-mapreduce --create --stream --input s3n://bucket/dir/in \
--output s3n://bucket/dir/out --mapper /bin/cat --reducer NONE
This job finished successfully. But: I had 3 file parts in /dir/in - now I have 6 parts in /dir/out:
part-0000
part-0001
part-0002
part-0003
part-0004
part-0005
And of course a _SUCCESS file, which is not part of my output...
So, how do I join the file that was split before?

So, I've found a solution. Maybe not the best one - but it works.
I've created an EMR job flow with the bootstrap action
--bootstrap-action joinfiles.sh
In that joinfiles.sh I download my file pieces from S3 using wget and join them using a regular cat a b c > abc.
After that I added an s3distcp step which copied the result back to S3 (a sample can be found at: https://stackoverflow.com/a/12302277/658346).
That is all.
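For reference, a minimal sketch of what such a joinfiles.sh could look like; the bucket URL, chunk names and work directory below are placeholders, not the ones from my job:

#!/bin/bash
# joinfiles.sh - hypothetical bootstrap action: fetch the chunks and concatenate them.
set -e

BUCKET_URL="https://bucket.s3.amazonaws.com/dir/in"   # placeholder; assumes the chunks are readable over HTTP
WORKDIR=/mnt/joinfiles                                # EMR instances have scratch space under /mnt
mkdir -p "$WORKDIR" && cd "$WORKDIR"

# download each chunk (chunk names are assumed here)
for part in part-aa part-ab part-ac; do
    wget -q "$BUCKET_URL/$part"
done

# join the pieces, in order, into a single file
cat part-aa part-ab part-ac > joined.bin

The s3distcp step mentioned above then copies the result back to S3.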

Related

Recursively go through folders and load the csv files in each folder into BigQuery

So I have a Google Cloud Storage bucket which follows this directory style:
gs://mybucket/{year}/{month}/{day}/{a csv file here}
The CSV files all follow the same schema, so that shouldn't be an issue. I was wondering if there was an easier method of loading all the files into one table in BigQuery with one command, or even a Cloud Function. I've been using bq load to accomplish this for now, but seeing that I have to do this about every week, I'd like to get some automation for it.
Inspired by this answer
You can recursively load your files with the following command:
gsutil ls gs://mybucket/**.csv | \
xargs -I{} echo {} | \
awk -F'[/.]' '{print "yourdataset."$7"_"$4"_"$5"_"$6" "$0}' | \
xargs -I{} sh -c 'bq --location=YOUR_LOCATION load --replace=false --autodetect --source_format=CSV {}'
This loads your CSV files into independent tables in your target dataset, with the naming convention "filename_year_month_day".
The "recursively" part is ensured by the double wildcard (**).
That covers the manual part.
For the automation part, you have the choice between different options:
the easiest one is probably to use a Cloud Function that you trigger with Cloud Scheduler. There is no bash runtime available there, so you would, for instance, have to Python your way through. Here is what a quick Google search gave me.
it is possible to do it with an orchestrator (Cloud Composer) if you already have the infrastructure (if you don't, it is not worth setting up just for this)
another solution is to use Cloud Run, triggered either by Cloud Scheduler (on a regular schedule then), or through Eventarc triggers when your CSV files are uploaded to GCS (see the sketch below).
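For the Cloud Run or plain-cron routes, a minimal sketch of the load step wrapped in a script might look like this; the bucket, dataset and location are assumptions mirroring the question, not verified values:

#!/bin/bash
# load_csvs.sh - hypothetical wrapper around the one-liner above
set -euo pipefail

BUCKET="gs://mybucket"      # assumed layout: {year}/{month}/{day}/<file>.csv, as in the question
DATASET="yourdataset"       # placeholder target dataset
LOCATION="YOUR_LOCATION"    # e.g. EU or US

gsutil ls "${BUCKET}/**.csv" | \
awk -F'[/.]' -v ds="$DATASET" '{print ds"."$7"_"$4"_"$5"_"$6" "$0}' | \
while read -r table uri; do
    bq --location="$LOCATION" load --replace=false --autodetect \
        --source_format=CSV "$table" "$uri"
done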

Amazon S3 console: download multiple files at once

When I log in to my S3 console I am unable to download multiple selected files (the web UI allows downloads only when one file is selected):
https://console.aws.amazon.com/s3
Is this something that can be changed in the user policy or is it a limitation of Amazon?
It is not possible through the AWS Console web user interface.
But it's a very simple task if you install AWS CLI.
You can check the installation and configuration steps at Installing the AWS Command Line Interface.
After that you go to the command line:
aws s3 cp --recursive s3://<bucket>/<folder> <local_folder>
This will copy all the files from the given S3 path to the given local path.
Selecting a bunch of files and clicking Actions->Open opened each in a browser tab, and they immediately started to download (6 at a time).
If you use the AWS CLI, you can use the --exclude flag along with --include and --recursive to accomplish this:
aws s3 cp s3://path/to/bucket/ . --recursive --exclude "*" --include "things_you_want"
E.g.
--exclude "*" --include "*.txt"
will download all files with the .txt extension. More details: https://docs.aws.amazon.com/cli/latest/reference/s3/
I believe it is a limitation of the AWS console web interface, having tried (and failed) to do this myself.
Alternatively, perhaps use a 3rd party S3 browser client such as http://s3browser.com/
If you have Visual Studio with the AWS Explorer extension installed, you can also browse to Amazon S3 (step 1), select your bucket (step 2), select all the files you want to download (step 3) and right-click to download them all (step 4).
The S3 service has no meaningful limits on simultaneous downloads (easily several hundred downloads at a time are possible) and there is no policy setting related to this... but the S3 console only allows you to select one file for downloading at a time.
Once the download starts, you can start another and another, as many as your browser will let you attempt simultaneously.
In case someone is still looking for an S3 browser and downloader, I have just tried FileZilla Pro (it's a paid version). It worked great.
I created a connection to S3 with Access key and secret key set up via IAM. Connection was instant and downloading of all folders and files was fast.
Using the AWS CLI, I ran all the downloads in the background using "&" and then waited for all the pids to complete. It was amazingly fast. Apparently "aws s3 cp" knows to limit the number of concurrent connections, because it only ran 100 at a time.
aws --profile $awsProfile s3 cp "$s3path" "$tofile" &
pids[${npids}]=$! ## save the spawned pid
let "npids=npids+1"
followed by
echo "waiting on $npids downloads"
for pid in ${pids[*]}; do
echo $pid
wait $pid
done
I downloaded 1500+ files (72,000 bytes) in about a minute
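Put together, a self-contained version of that approach might look like the following; the profile name and the downloads.txt list file are placeholders, not part of the original snippet:

#!/bin/bash
# parallel-download.sh - hypothetical end-to-end version of the fragments above.
# downloads.txt contains one "s3://bucket/key local-file" pair per line.
awsProfile="default"        # placeholder profile
npids=0
declare -a pids

while read -r s3path tofile; do
    aws --profile "$awsProfile" s3 cp "$s3path" "$tofile" &
    pids[npids]=$!          # save the spawned pid
    npids=$((npids+1))
done < downloads.txt

echo "waiting on $npids downloads"
for pid in "${pids[@]}"; do
    wait "$pid"
done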
I wrote a simple shell script to download not just all files but also all versions of every file from a specific folder in an AWS S3 bucket. Here it is; you may find it useful.
# Script generates the version info file for all the
# content under a particular bucket and then parses
# the file to grab the versionId for each of the versions
# and finally generates a fully qualified http url for
# the different versioned files and uses that to download
# the content.
s3region="s3.ap-south-1.amazonaws.com"
bucket="your_bucket_name"
# note the location has no forward slash at beginning or at end
location="data/that/you/want/to/download"
# file names were like ABB-quarterly-results.csv, AVANTIFEED--quarterly-results.csv
fileNamePattern="-quarterly-results.csv"
# prefix of the file whose versions you want (placeholder; set it to your own)
fileId="ABB"
# AWS CLI command to get version info
content="$(aws s3api list-object-versions --bucket $bucket --prefix "$location/")"
# save the file locally, if you want
echo "$content" >> version-info.json
versions=$(echo "$content" | grep -i VersionId | awk -F ":" '{gsub(/"/, "", $3);gsub(/,/, "", $3);gsub(/ /, "", $3);print $3 }')
for version in $versions
do
    echo "############### $fileId ###################"
    #echo $version
    url="https://$s3region/$bucket/$location/$fileId$fileNamePattern?versionId=$version"
    echo "$url"
    content="$(curl -s "$url")"
    echo "$content" >> "$fileId$fileNamePattern-$version.csv"
    echo "############### $version done ###################"
done
Also, you could use --include "filename" multiple times in a single command, each time naming a different file within the double quotes, e.g.
aws s3 mycommand --include "file1" --include "file2"
It will save you time compared with repeating the command to download one file at a time.
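For example, with a hypothetical bucket and two specific reports you want:

aws s3 cp s3://my-bucket/reports/ . --recursive \
    --exclude "*" --include "2020-01.csv" --include "2020-02.csv"

The --exclude "*" drops everything first, and the later --include filters add back only the named files, since later filters take precedence.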
Also, if you are running Windows(tm), WinSCP now allows drag and drop of a selection of multiple files, including sub-folders.
Many enterprise workstations will have WinSCP installed for editing files on servers by means of SSH.
I am not affiliated, I simply think this was really worth doing.
In my case Aur's answer didn't work, and if you're looking for a quick solution to download all the files in a folder using just the browser, you can try entering this snippet in your dev console:
(function() {
  const rows = Array.from(document.querySelectorAll('.fix-width-table tbody tr'));
  const downloadButton = document.querySelector('[data-e2e-id="button-download"]');
  const timeBetweenClicks = 500;

  function downloadFiles(remaining) {
    if (!remaining.length) {
      return;
    }
    const row = remaining[0];
    row.click();
    downloadButton.click();
    setTimeout(() => {
      downloadFiles(remaining.slice(1));
    }, timeBetweenClicks);
  }

  downloadFiles(rows);
}())
I have done it by creating a shell script using the AWS CLI (e.g. example.sh):
#!/bin/bash
aws s3 cp s3://s3-bucket-path/example1.pdf LocalPath/Download/example1.pdf
aws s3 cp s3://s3-bucket-path/example2.pdf LocalPath/Download/example2.pdf
give executable rights to example.sh (e.g. sudo chmod 777 example.sh)
then run your shell script: ./example.sh
I think the simplest way to download or upload files is to use the aws s3 sync command. You can also use it to sync two S3 buckets at the same time.
aws s3 sync <LocalPath> <S3Uri> or <S3Uri> <LocalPath> or <S3Uri> <S3Uri>
# Download file(s)
aws s3 sync s3://<bucket_name>/<file_or_directory_path> .
# Upload file(s)
aws s3 sync . s3://<bucket_name>/<file_or_directory_path>
# Sync two buckets
aws s3 sync s3://<1st_s3_path> s3://<2nd_s3_path>
What I usually do is mount the S3 bucket (with s3fs) on a Linux machine, zip the files I need into one archive, and then just download that file from any PC/browser.
# mount bucket in file system
/usr/bin/s3fs s3-bucket -o use_cache=/tmp -o allow_other -o uid=1000 -o mp_umask=002 -o multireq_max=5 /mnt/local-s3-bucket-mount
# zip files into one
cd /mnt/local-s3-bucket-mount
zip all-processed-files.zip *.jpg
Another option is a small boto3 script that downloads every object in a bucket into the current directory:
import os
import boto3

s3 = boto3.resource('s3', aws_access_key_id="AKIAxxxxxxxxxxxxJWB",
                    aws_secret_access_key="LV0+vsaxxxxxxxxxxxxxxxxxxxxxry0/LjxZkN")
my_bucket = s3.Bucket('s3testing')

# download files into the current directory
for s3_object in my_bucket.objects.all():
    # Need to split s3_object.key into path and file name,
    # else it will give a "file not found" error.
    path, filename = os.path.split(s3_object.key)
    my_bucket.download_file(s3_object.key, filename)

How to compress and upload to s3 on the fly with s3cmd

I just found my box has 5% of hard drive space left, and I have almost 250GB of MySQL bin files that I want to send to S3. We have moved from MySQL to NoSQL and are not currently using MySQL. However, I would love to preserve the old data from before the migration.
The problem is I can't just tar the files in a loop before sending them there. So I was thinking I could gzip on the fly before sending, so the compressed file is never stored on the HDD.
for i in * ; do cat i | gzip -9c | s3cmd put - s3://mybudcket/mybackups/$i.gz; done
To test this command, I ran it without the loop and it didn't send anything, but it didn't complain about anything either. Is there any way of achieving this?
OS is ubuntu 12.04
s3cmd version is 1.0.0
Thank you for your suggestions.
Alternatively, you can use https://github.com/minio/mc. Minio Client, aka mc, is written in Golang and released under Apache License Version 2.
It implements the mc pipe command for users to stream data directly to Amazon S3. mc pipe can also pipe to multiple destinations in parallel. Internally, mc pipe streams the output and does a multipart upload in parallel.
$ mc pipe
NAME:
mc pipe - Write contents of stdin to files. Pipe is the opposite of cat command.
USAGE:
mc pipe TARGET [TARGET...]
Example
#!/bin/bash
for i in *; do
    mc cat "$i" | gzip -9c | mc pipe "https://s3.amazonaws.com/mybudcket/mybackups/$i.gz"
done
As you can see, mc also implements an mc cat command :-).
The feature that allows streaming stdin to S3 was added to the master branch in February 2014, so I guess make sure your version is newer than that. Version 1.0.0 is from 2011 or earlier; the current one (at the time of this writing) is 1.5.2. It's likely you need to update your version of s3cmd.
Other than that, according to https://github.com/s3tools/s3cmd/issues/270 this should work, except that your "do cat i" is missing the $ sign to mark it as a variable.
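With an updated s3cmd, the corrected loop from the question would look roughly like this (the bucket path is kept exactly as in the question):

for i in * ; do
    cat "$i" | gzip -9c | s3cmd put - "s3://mybudcket/mybackups/$i.gz"
done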

How can I use boto or boto-rsync a full backup of 1000+ files to an S3-compatible cloud?

I'm trying to back up my entire collection of over 1000 work files, mainly text but also pictures and a few large (0.5-1 GB) audio recordings, to an S3-compatible cloud (DreamHost DreamObjects). I have tried to use boto-rsync to perform the first full 'put' with this:
$ boto-rsync --endpoint objects.dreamhost.com /media/Storage/Work/ \
> s3:/work.personalsite.net/ > output.txt
where '/media/Storage/Work/' is on a local hard disk, 's3:/work.personalsite.net/' is a bucket named after my personal web site for uniqueness, and output.txt is where I wanted a list of the files uploaded and error messages to go.
Boto-rsync grinds its way through the whole directory tree, but the constantly refreshing progress output for each file doesn't look so good when it's printed to a file. Still, while the upload is running, I 'tail output.txt' and see that most files are uploaded, but some reach less than 100%, and some are skipped altogether. My questions are:
Is there any way to confirm that a transfer is 100% complete and correct?
Is there a good way to log the results and errors of a transfer?
Is there a good way to transfer a large number of files in a big directory hierarchy to one or more buckets for the first time, as opposed to an incremental backup?
I am on Ubuntu 12.04 running Python 2.7.3. Thank you for your help.
You can encapsulate the command in a script and start it with nohup:
nohup script.sh
nohup automatically generates a nohup.out file where all the output of the script/command is captured.
To choose the log file yourself, you can do:
nohup script.sh > /path/to/log
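For instance, a minimal script.sh around the command from the question could be (the endpoint and paths echo the question; treat them as placeholders):

#!/bin/bash
# script.sh - wraps the boto-rsync call so nohup captures progress and errors together
boto-rsync --endpoint objects.dreamhost.com /media/Storage/Work/ \
    s3://work.personalsite.net/ 2>&1

Started as nohup ./script.sh > /path/to/backup.log 2>&1 &, every progress line and error ends up in backup.log, which you can then grep for incomplete or skipped files.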

Hadoop put command doing nothing!

I am running Cloudera's distribution of Hadoop and everything is working perfectly. HDFS contains a large number of .seq files. I need to merge the contents of all the .seq files into one large .seq file. However, the getmerge command did nothing for me. I then used cat and piped the data of some .seq files into a local file. When I want to "put" this file into HDFS, it does nothing. No error message shows up, and no file is created.
I am able to "touchz" files in HDFS, and user permissions are not a problem here. The put command simply does not work. What am I doing wrong?
Write a job that merges all the sequence files into a single one. It's just the standard mapper and reducer with only one reduce task.
if the "hadoop" commands fails silently you should have a look at it.
Just type: 'which hadoop', this will give you the location of the "hadoop" executable. It is a shell script, just edit it and add logging to see what's going on.
If the hadoop bash script fails at the beginning it is no surprise that the hadoop dfs -put command does not work.
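A less invasive way to get the same information, if you'd rather not edit the launcher, is to run it under bash -x; the file names below are made up:

# find the launcher script
HADOOP_BIN=$(which hadoop)
echo "hadoop launcher: $HADOOP_BIN"

# trace every command the launcher executes while doing the put
bash -x "$HADOOP_BIN" dfs -put merged.seq /user/me/merged.seq 2> hadoop-trace.log

# the trace (and any error) goes to stderr, so check the end of the log
tail -n 20 hadoop-trace.log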