I have a 27GB file that I am trying to move from an AWS Linux EC2 instance to S3. I've tried both the 'S3put' command and the 'S3cmd put' command. Both work with a test file. Neither works with the large file. No errors are given; the command returns immediately, but nothing happens.
s3cmd put bigfile.tsv s3://bucket/bigfile.tsv
Though you can upload objects to S3 with sizes up to 5TB, S3 has a size limit of 5GB for an individual PUT operation.
To upload files larger than 5GB (or even files larger than 100MB), you will want to use the multipart upload feature of S3.
http://docs.amazonwebservices.com/AmazonS3/latest/dev/UploadingObjects.html
http://aws.typepad.com/aws/2010/11/amazon-s3-multipart-upload.html
(Ignore the outdated description of a 5GB object limit in the above blog post. The current limit is 5TB.)
The boto library for Python supports multipart upload, and the latest boto software includes an "s3multiput" command line tool that takes care of the complexities for you and even parallelizes part uploads.
https://github.com/boto/boto
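For reference, here is a minimal sketch of what a multipart upload looks like with the boto library (the bucket and file names are placeholders, and error handling and parallel part uploads are omitted):

import io
import os

import boto

BUCKET_NAME = "bucket"          # placeholder
SOURCE_FILE = "bigfile.tsv"     # placeholder
CHUNK_SIZE = 50 * 1024 * 1024   # 50MB parts; every part except the last must be at least 5MB

conn = boto.connect_s3()
bucket = conn.get_bucket(BUCKET_NAME)

# Start the multipart upload, push each chunk as a numbered part,
# then ask S3 to assemble the parts into a single object.
mp = bucket.initiate_multipart_upload(os.path.basename(SOURCE_FILE))
with open(SOURCE_FILE, "rb") as fp:
    part_num = 0
    while True:
        chunk = fp.read(CHUNK_SIZE)
        if not chunk:
            break
        part_num += 1
        mp.upload_part_from_file(io.BytesIO(chunk), part_num)
mp.complete_upload()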
The file did not exist, doh. I realised this after running the s3 commands in verbose mode by adding the -v flag:
s3cmd put -v bigfile.tsv s3://bucket/bigfile.tsv
s3cmd version 1.1.0 supports multipart upload as part of the "put" command, but it's still in beta (currently).
I'm building in Cloud9 to deploy to Lambda. My function works fine in Cloud9 but when I go to deploy I get the error
Unzipped size must be smaller than 262144000 bytes
Running du -h | sort -h shows that my biggest offenders are:
/debug at 291M
/numpy at 79M
/pandas at 47M
/botocore at 41M
My function is extremely simple: it calls a service, uses pandas to format the response, and sends it on.
What is in debug and how do I slim it down/eliminate it from the deploy package?
How do others use libraries at all if they eat up most of the memory limit?
A brief background to understand the root cause of the problem
The problem is not with your function but with the size of the deployment package. As per the AWS documentation, the Lambda console editor can only handle packages up to 3MB, and a zipped package uploaded directly is limited to 50MB. Anything larger, which happens inevitably since a library can have many dependencies, has to be uploaded to an AWS S3 bucket first and referenced from there. However it is uploaded, the unzipped deployment package (your code plus all its libraries and layers) must stay under 250MB (262144000 bytes). The error message that you have posted, Unzipped size must be smaller than 262144000 bytes, is referring to the size of the unzipped deployment package, i.e. the libraries.
Now, understand some facts about working with AWS:
AWS Lambda containers start out empty.
AWS Lambda containers run on a Linux kernel.
AWS Cloud9 is only an IDE, like RStudio or PyCharm, and it uses an S3 bucket to save the installed packages.
This means you'll need to know the following:
the package and its related dependencies
how to extract the Linux-compiled packages from Cloud9 and save them in a folder structure like python/lib/python3.6/site-packages/ (a packaging sketch follows this list)
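As a rough illustration of that layout: rather than copying packages out of Cloud9, you could also pip-install them straight into it on any Linux machine and zip the result. This is only a sketch, and the package names are examples:

import shutil
import subprocess

# Example only: install the dependencies into the folder layout that
# Lambda expects for a Python layer.
target = "python/lib/python3.6/site-packages"
subprocess.check_call(["pip", "install", "pandas", "numpy", "--target", target])

# Zip the top-level "python" folder; the resulting layer.zip is what gets
# uploaded to S3 and attached as a layer.
shutil.make_archive("layer", "zip", root_dir=".", base_dir="python")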
Possible/Workable solution to overcome this problem
Overcome this problem by reducing the package size. See below.
Reducing the deployment package size
Manual method: delete the files and folders within each library folder that are named like *.dist-info and __pycache__. You'll need to manually look into each library folder for the above names and delete them.
Automatic method: I have yet to figure out the command; work in progress (a rough sketch of one possible approach follows).
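The sketch below is only an assumption of what such a cleanup could look like, using the folder names from the manual method above; adjust the path and patterns to your own package layout:

import os
import shutil

# Assumed layer layout and folder patterns to prune.
site_packages = "python/lib/python3.6/site-packages"
prune_suffixes = (".dist-info", ".egg-info")

for root, dirs, files in os.walk(site_packages):
    for d in list(dirs):
        if d == "__pycache__" or d.endswith(prune_suffixes):
            # Delete metadata and bytecode-cache folders to shrink the package.
            shutil.rmtree(os.path.join(root, d), ignore_errors=True)
            dirs.remove(d)  # do not descend into the deleted folder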
Use Layers
In AWS go to Lambda and create a layer
Attach the S3 link of the zipped package containing the python folder. Ensure the Lambda function's IAM role has permission to access the S3 bucket.
Make sure the unzipped folder is smaller than 250MB (262144000 bytes); if it is larger, it cannot be attached as a layer and you'll get the error Failed to create layer version: Unzipped size must be smaller than 262144000 bytes
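For reference, a minimal boto3 sketch of publishing such a package as a layer from S3 (the bucket, key, and layer names below are placeholders, and the function's role still needs S3 access as noted above):

import boto3

lambda_client = boto3.client("lambda")

# Placeholder bucket/key pointing at the layer.zip built from the python/ folder.
response = lambda_client.publish_layer_version(
    LayerName="pandas-numpy-layer",
    Description="pandas and numpy packaged for Lambda",
    Content={"S3Bucket": "my-layer-bucket", "S3Key": "layer.zip"},
    CompatibleRuntimes=["python3.6"],
)
print(response["LayerVersionArn"])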
I get this error when trying to load a table in Google BQ:
Input CSV files are not splittable and at least one of the files is larger than the maximum allowed size. Size is: 56659381010. Max allowed size is: 4294967296.
Is there a way to split the file using gsutil or something like that without having to upload everything again?
The largest compressed CSV file you can load into BigQuery is 4 gigabytes. GCS unfortunately does not provide a way to decompress a compressed file, nor does it provide a way to split a compressed file. GZip'd files can't be arbitrarily split up and reassembled the way a tar file can.
I imagine your best bet would likely be to spin up a GCE instance in the same region as your GCS bucket, download your object to that instance (which should be pretty fast, given that it's only a few dozen gigabytes), decompress the object (which will be slower), break that CSV file into a bunch of smaller ones (the linux split command is useful for this), and then upload the objects back up to GCS.
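If you would rather do the splitting step in Python than with the split command, here is a rough sketch; the filenames and the number of lines per chunk are arbitrary, and each chunk repeats the header row so it can be loaded independently:

# Split a large decompressed CSV into smaller files of roughly equal line counts.
LINES_PER_CHUNK = 10 * 1000 * 1000  # arbitrary; tune so each piece stays well under the limit

chunk_index = 0
out = None
with open("big.csv") as source:
    header = source.readline()
    for line_number, line in enumerate(source):
        if line_number % LINES_PER_CHUNK == 0:
            if out:
                out.close()
            out = open("big_part%03d.csv" % chunk_index, "w")
            out.write(header)
            chunk_index += 1
        out.write(line)
if out:
    out.close()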
I ran into the same issue and this is how I dealt with it:
First, spin up a Google Compute Engine VM instance.
https://console.cloud.google.com/compute/instances
Then install the gsutil commands and then go through the authentication process.
https://cloud.google.com/storage/docs/gsutil_install
Once you have verified that the gcloud, gsutil, and bq commands are working then save a snapshot of the disk as snapshot-1 and then delete this VM.
On your local machine, run this command to create a new disk. This disk is used for the VM so that you have enough space to download and unzip the large file.
gcloud compute disks create disk-2017-11-30 --source-snapshot snapshot-1 --size=100GB
Again on your local machine, run this command to create a new VM instance that uses this disk. I use the --preemptible flag to save some cost.
gcloud compute instances create loader-2017-11-30 --disk name=disk-2017-11-30,boot=yes --preemptible
Now you can SSH into your instance and then run these commands on the remote machine.
First, copy the file from cloud storage to the VM
gsutil cp gs://my-bucket/2017/11/20171130.gz .
Then unzip the file. In my case, for a ~4GB file, it took about 17 minutes to complete this step:
gunzip 20171130.gz
Once unzipped, you could run the bq load command from the VM to load it into BigQuery, but I found that for my file size (~70 GB unzipped) that operation would take about 4 hours. Instead, I uploaded the unzipped file back to Cloud Storage:
gsutil cp 20171130 gs://am-alphahat-regional/unzipped/20171130.csv
Now that the file is back on cloud storage, you can run this command to delete the VM.
gcloud compute instances delete loader-2017-11-30
Theoretically, the associated disk should also have been deleted, but I found that the disk was still there and I needed to delete it with an additional command
gcloud compute disks delete disk-2017-11-30
Finally, you should be able to run the bq load command, or you can load the data from the console.
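For reference, a minimal sketch of that final load using the BigQuery Python client instead of the bq command line; the project, dataset, and table names below are placeholders:

from google.cloud import bigquery

client = bigquery.Client()

# Placeholder destination table; the URI points at the unzipped CSV uploaded above.
table_id = "my_project.my_dataset.my_table"
uri = "gs://am-alphahat-regional/unzipped/20171130.csv"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # or supply an explicit schema
)

load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
load_job.result()  # wait for the load to finish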
I'm trying to back up my entire collection of over 1000 work files, mainly text but also pictures and a few large (0.5-1G) audio recordings, to an S3-compatible cloud (DreamHost DreamObjects). I have tried to use boto-rsync to perform the first full 'put' with this:
$ boto-rsync --endpoint objects.dreamhost.com /media/Storage/Work/ \
> s3:/work.personalsite.net/ > output.txt
where '/media/Storage/Work/' is on a local hard disk, 's3:/work.personalsite.net/' is a bucket named after my personal web site for uniqueness, and output.txt is where I wanted a list of the files uploaded and error messages to go.
Boto-rsync grinds its way through the whole directory tree, but the refreshing output about each file's progress doesn't look so good when it's printed to a file. Still, as the upload is going, I 'tail output.txt' and see that most files are uploaded, but some reach less than 100% and some are skipped altogether. My questions are:
Is there any way to confirm that a transfer is 100% complete and correct?
Is there a good way to log the results and errors of a transfer?
Is there a good way to transfer a large number of files in a big directory hierarchy to one or more buckets for the first time, as opposed to an incremental backup?
I am on Ubuntu 12.04 running Python 2.7.3. Thank you for your help.
You can encapsulate the command in a script and start it over nohup:
nohup script.sh
nohup automatically generates a nohup.out file where all the output of the script/command is captured.
To point the output at a specific log file, you can do:
nohup script.sh > /path/to/log
Best regards,
Eddi
I want to use LZO compression on my Elastic MapReduce job's output that is being stored on S3, but it is not clear whether the files are automatically indexed so that future jobs run on this data will split the files into multiple tasks.
For example, if my output is a bunch of lines of TSV data, in a 1GB LZO file, will a future map job only create 1 task, or something like (1GB/blockSize) tasks (i.e. the behavior of when files were not compressed, or if there was a LZO index file in the directory)?
Edit: If this is not done automatically, what is recommended for getting my output to be LZO-indexed? Do the indexing before uploading the file to S3?
Short answer to my first question: AWS does not do automatic indexing. I've confirmed this with my own job, and also read the same from Andrew#AWS on their forum.
Here's how you can do the indexing:
To index the LZO files, you'll need to use my own Jar built from the Twitter hadoop-lzo project. You'll need to build the Jar somewhere and then upload it to Amazon S3 if you want to index directly with EMR.
On a side note, Cloudera has good instructions on all the steps for setting this up on your own cluster. I did this on my local cluster, which allowed me to build the Jar and upload it to S3. You can probably find a pre-built Jar on the net if you don't want to build it yourself.
When outputting your data from your Hadoop job, make sure you use the LzopCodec and not the LzoCodec, otherwise the files are not indexable (at least based on my experience). Example Java code (the same idea carries over to the Streaming API):
import com.hadoop.compression.lzo.LzopCodec;
TextOutputFormat.setCompressOutput(job, true);
TextOutputFormat.setOutputCompressorClass(job, LzopCodec.class);
Once your hadoop-lzo Jar is on S3 and your Hadoop job has output .lzo files, run the indexer on the output directory (the instructions below assume you've got an EMR job/cluster running):
elastic-mapreduce -j <existingJobId> \
--jar s3n://<yourBucketName>/hadoop-lzo-0.4.17-SNAPSHOT.jar \
--args com.hadoop.compression.lzo.DistributedLzoIndexer \
--args s3://<yourBucketName>/output/myLzoJobResults \
--step-name "Lzo file indexer Jar"
Then when you're using the data in a future job, be sure to specify that the input is in LZO format, otherwise the splitting won't occur. Example Java code:
import com.hadoop.mapreduce.LzoTextInputFormat;
job.setInputFormatClass(LzoTextInputFormat.class);
I am using this
https://github.com/nuxusr/Node.js---Amazon-S3
for uploading files to S3:
In test-s3-upload.js I had commented out most of the tests because they were giving errors; since my goal is to upload a file to S3, I kept only the testUploadFileToBucket() test, and running node test.js gives ok.
But when I check in S3Fox, the uploaded file is not shown.
Why is the file not uploaded?
Use knox instead. https://github.com/learnboost/knox
Have a look at this project and especially the bin/amazon-s3-upload.js file so you can see how we're doing it using AwsSum:
https://github.com/appsattic/node-awssum-scripts/
https://github.com/appsattic/node-awssum/
It takes a bucket name and a filename and will stream the file up to S3 for you:
$ ./amazon-s3-upload.js -b your-bucket -f the-file.txt
Hope that helps. :)