aws s3 cp error "variable 'current_index' referenced before assignment" - amazon-s3

I am trying to download a large file (about 2 TB) from S3 to my local server, like this:
aws s3 cp s3://outputs/star_output.tar.gz ./ --profile abcd --endpoint-url=https://abc.edu
The download seemed to finish, producing a temporary file like star_output.tar.gz.9AB04cEd, but it ended with a failure:
download failed: s3://outputs/star_output.tar.gz to ./ local variable 'current_index' referenced before assignment
And the file star_output.tar.gz.9AB04cEd was also automatically deleted.
I tried a small text file and it downloaded with no issue. Is this related to the size of the file (too big)?
Does anyone know the possible reason?

Related

AWS CLI create lambda function cannot see my zip file that is in S3 "No such file or directory"

My first attempt was using the console, and it worked. I have a new zip file that was successfully uploaded to my bucket in S3. I can list the bucket and see both files there, but when I try to use the CLI to create the Lambda function it returns "Error parsing parameter '--zip-file': Unable to load paramfile ... No such file or directory".
From the documentation I expect that "fileb://path/to/file.zip" implies the bucket name should be included, but I am unsure whether the region URL is needed. I tried it with and without the region URL, with the same results.
Again, I am able to use these files if I create the Lambda function using the console, but not the CLI. What am I missing?
[royce#localhost ~]$ aws s3 ls s3://uploads.lai
2017-08-18 10:27:48 60383836 userpermission-1.zip
2017-08-31 07:43:50 60389082 userpermission-4.zip
2017-08-18 14:15:43 1171 userpermission.db
[royce#localhost ~]$ aws lambda create-function --function-name awstest01 --zip-file "fileb://uploads.lai/userpermission-4.zip" --runtime java8 --role execution-role-arn --handler app.handler
Error parsing parameter '--zip-file': Unable to load paramfile fileb://uploads.lai/userpermission-4.zip: [Errno 2] No such file or directory: 'uploads.lai/userpermission-4.zip'
The --zip-file flag is for uploading your function from a local zip file.
If you are using S3, the CLI command should be something along the lines of aws lambda create-function --code "S3Bucket=string,S3Key=string,S3ObjectVersion=string".
You may check the reference here:
http://docs.aws.amazon.com/cli/latest/reference/lambda/create-function.html
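For example, a rough sketch of the S3-based call using the bucket and key from the listing above (the role ARN and handler are the same placeholders as in the question, so substitute your own):
# Create the function from the zip already stored in S3 instead of a local paramfile
aws lambda create-function --function-name awstest01 \
  --code "S3Bucket=uploads.lai,S3Key=userpermission-4.zip" \
  --runtime java8 --role execution-role-arn --handler app.handler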

How to delete a file with an empty name from S3

Somehow, using the AWS Java API, we managed to upload a file to S3 without a name.
The file is shown if we run s3cmd ls s3://myBucket/MyFolder, but is not shown in the S3 GUI.
Running s3cmd del s3://myBucket/MyFolder/ gives the following error:
ERROR: Parameter problem: Expecting S3 URI with a filename or --recursive: s3://myBucket/MyFolder/
Running the same command without the trailing slash does nothing.
How can the file be deleted?
As far as I know, it can't be done using s3cmd.
It can be done using the aws cli, by running:
aws s3 rm s3://myBucket/MyFolder/
Make sure you don't use the --recursive flag, or it will remove the entire directory.
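If that still does not catch it, and assuming the nameless object's key is literally the prefix with an empty filename (i.e. "MyFolder/"), a lower-level sketch with the s3api command can target the key explicitly:
# Delete the object whose key is the folder prefix followed by an empty name
# (assumes the empty-named object's key is exactly "MyFolder/")
aws s3api delete-object --bucket myBucket --key "MyFolder/"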

What is the path for a bootstrapped file for a Pig job running in Amazon EMR

I bootstrap a data file in my EMR job. The bootstrap action succeeds and the file is copied to the /home/hadoop/contents/ folder with the right permissions.
However, when I try to access it in the Pig script like below:
userdidstopick = load '/home/hadoop/contents/UserIdsToPick.txt' AS (uid:chararray);
I get an error that the input path does not exist:
hdfs://10.183.166.176:9000/home/hadoop/contents/UserIdsToPick.txt
When running Ruby jobs, the bootstrapped file was always accessible under the /home/hadoop/contents/ folder and everything worked for me.
Is it different for Pig?
By default, Pig on EMR is configured to read from HDFS rather than the local filesystem, which is why the error shows an HDFS location.
There are two ways to solve this:
Either copy the file to S3 and load it directly from S3:
userdidstopick = load 's3_bucket_location/UserIdsToPick.txt' AS (uid:chararray);
Or first copy the file into HDFS (instead of the local filesystem) and then use the same path you are using today, as sketched below.
I would prefer the first option.
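A minimal sketch of the second option, assuming the bootstrap action has already placed the file at /home/hadoop/contents/UserIdsToPick.txt (the HDFS target directory /contents is an arbitrary choice here):
# Copy the bootstrapped file from the local filesystem into HDFS
hadoop fs -mkdir /contents
hadoop fs -put /home/hadoop/contents/UserIdsToPick.txt /contents/UserIdsToPick.txt
The load statement can then point at /contents/UserIdsToPick.txt, which resolves against HDFS.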

Can I move an object into a 'folder' inside an S3 bucket using the s3cmd mv command?

I have the s3cmd command line tool for linux installed. It works fine to put files in a bucket. However, I want to move a file into a 'folder'. I know that folders aren't natively supported by S3, but my Cyberduck GUI tool converts them nicely for me to view my backups.
For instance, I have a file in the root of the bucket, called 'test.mov' that I want to move to the 'idea' folder. I am trying this:
s3cmd mv s3://mybucket/test.mov s3://mybucket/idea/test.mov
but I get strange errors like:
WARNING: Retrying failed request: /idea/test.mov (timed out)
WARNING: Waiting 3 sec...
I also tried quotes, but that didn't help either:
s3cmd mv 's3://mybucket/test.mov' 's3://mybucket/idea/test.mov'
Neither did just the folder name:
s3cmd mv 's3://mybucket/test.mov' 's3://mybucket/idea/'
Is there a way to do this without having to delete and re-put this 3GB file?
Update: Just FYI, I can put new files directly into a folder like this:
s3cmd put test2.mov s3://mybucket/idea/test2.mov
But I still don't know how to move them around.
To move/copy from one bucket to another, or within the same bucket, I use the s3cmd tool and it works fine. For instance:
s3cmd cp --recursive s3://bucket1/directory1 s3://bucket2/directory1
s3cmd mv --recursive s3://bucket1/directory1 s3://bucket2/directory1
Your file is probably quite big; try increasing the socket_timeout setting in your s3cmd configuration (see the example below the link):
http://sumanrs.wordpress.com/2013/03/19/s3cmd-timeout-problems-moving-large-files-on-s3-250mb/
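A minimal sketch of the change, assuming the default config location of ~/.s3cfg (the timeout value is just an example, in seconds):
# In ~/.s3cfg, raise the socket timeout so large objects have time to copy
socket_timeout = 1800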
Remove the ' signs. Your command should be:
s3cmd mv s3://mybucket/test.mov s3://mybucket/idea/test.mov
Also check the permissions on your bucket (for example with s3cmd info, as sketched below): your user should have full permissions.
You can also try connecting CloudFront to your bucket. I know it doesn't make much sense, but I had a similar problem with a bucket that did not have a CloudFront distribution connected to it.
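A quick way to inspect the bucket from the same tool (the bucket name is the one used in the question):
# Show bucket details, including the ACL, to verify your user has full permissions
s3cmd info s3://mybucket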

Why is my file not uploading to S3 using Node.js?

I am using this
https://github.com/nuxusr/Node.js---Amazon-S3
for uploading files to S3.
In test-s3-upload.js I had commented out most of the tests because they were giving errors; since my goal is to upload a file to S3, I kept only the testUploadFileToBucket() test, and running node test.js reports ok.
But when I check with S3Fox, the uploaded file is not shown.
Why is the file not uploaded?
Use knox instead. https://github.com/learnboost/knox
Have a look at this project, and especially the bin/amazon-s3-upload.js file, to see how we're doing it using AwsSum:
https://github.com/appsattic/node-awssum-scripts/
https://github.com/appsattic/node-awssum/
It takes a bucket name and a filename and will stream the file up to S3 for you:
$ ./amazon-s3-upload.js -b your-bucket -f the-file.txt
Hope that helps. :)