s3cmd put files from tar stdin: [Errno 32] Broken pipe

I am trying to upload files from stdin with s3cmd, using the command below:
tar cfz - folder | s3cmd put - s3://backups/abcd2.tar
but sometimes I get this error:
ERROR: Cannot retrieve any response status before encountering an EPIPE or ECONNRESET exception
WARNING: Upload failed: /abcd2.tar?partNumber=21&uploadId=... ([Errno 32] Broken pipe)
WARNING: Waiting 3 sec...
I think s3cmd waits too long for tar, and Amazon S3 closes the connection (RequestTimeout).
How can I solve this problem? Maybe by adding a buffer between tar and s3cmd, but how?
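One way to add such a buffer (a sketch, untested against this exact failure; it assumes the mbuffer tool is installed, and the 1 GiB buffer size is an arbitrary choice) is to place a buffering program between tar and s3cmd, so the upload never stalls long enough for S3 to time out the request:

tar cfz - folder | mbuffer -m 1G | s3cmd put - s3://backups/abcd2.tar

pv with its -B buffer-size option can play a similar role if mbuffer is not available.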

Related

Download/Copy tar.gz File from S3 to EC2

When I download a tar.gz file from AWS S3 and then try to untar it, I get the following error:
tar -xzvf filename_backup_jan212021_01.tar.gz
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
When I check what type of file it is, I get this:
file filename_backup_jan212021_01.tar.gz
filename_backup_jan212021_01.tar.gz: ASCII text
This is the command I am using to copy the file from S3 to my EC2:
aws s3 cp s3://bucket_name/filename_backup_jan212021_01.tar.gz .
Please help me find a solution for extracting a tar.gz file after downloading it from AWS S3.
tar -xzvf filename_backup_jan212021_01.tar.gz
gzip: stdin: not in gzip format
file filename_backup_jan212021_01.tar.gz
filename_backup_jan212021_01.tar.gz: ASCII text
cat filename_backup_jan212021_01.tar.gz
/home/ec2-user/file_delete_01.txt
/home/ec2-user/file_jan2021.txt
/home/ec2-user/filename_backup_jan1.tar.gz
/home/ec2-user/filename_backup_jan1.txt
/home/ec2-user/filename_backup_jan2.tar.gz
/home/ec2-user/filename_backup_jan3.tar.gz
All of these indicate that the file uploaded to S3 is itself not a gzipped tar file, but rather a plain text file uploaded with a .tar.gz filename. While filenames and extensions are used to indicate content type to humans, computers think otherwise :)
You can create the archive and upload it to S3 with
tar cvzf <archive name> </path/to/files/to/be/tarred> && aws s3 cp <archive name> <bucket path>
(note that aws s3 cp takes the local source first and the S3 destination second), then use the commands you mention in the question to download it. Of course, replace the placeholders with the proper names and such.
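For example, with hypothetical names (the directory and bucket below are placeholders, not taken from the question):

# create the archive, then upload it; the local source comes before the S3 destination
tar cvzf backup_jan2021.tar.gz /home/ec2-user/data/
aws s3 cp backup_jan2021.tar.gz s3://bucket_name/backup_jan2021.tar.gz

# later, on the EC2 instance: download and extract
aws s3 cp s3://bucket_name/backup_jan2021.tar.gz .
tar -xzvf backup_jan2021.tar.gz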

What are the correct flags for rsync to a locally mounted S3 bucket?

I have an S3 bucket mounted locally at /mnt/s3 using s3fs.
I can manually cp -r /my-dir/. /mnt/s3, and the file testfile.txt in /mnt/s3 will be overwritten as expected, without error.
However, when using rsync to do this, I get errors about unlinking and copying if the file already exists in the bucket. (If a file of the same name does not exist in the bucket, it's copied properly, without any errors.)
$ rsync -vr --temp-dir=/tmp/rsync /my-dir/. /mnt/s3
sending incremental file list
testfile.txt
rsync: unlink "/mnt/s3/testfile.txt": Operation not permitted (1)
rsync: copy "/tmp/rsync/testfile.txt.Kkyy5n" -> "testfile.txt": Operation not permitted (1)
sent 274 bytes received 428 bytes 1,404.00 bytes/sec
total size is 95 speedup is 0.14
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1178) [sender=3.1.2]
I'm using --temp-dir because otherwise rsync was copying temporary files into /mnt/s3 and trying to rename them to their permanent names. However, rsync failed to rename them, and also failed to delete the temporary files, resulting in improperly copied files and lots of clutter in the S3 bucket.
You may want to try the rsync --inplace flag (instead of the --temp-dir workaround), as per the writeup here:
https://baldnerd.com/preventing-rsync-from-doubling-or-even-tripling-your-s3-fees/
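A minimal sketch of that approach, reusing the paths from the question:

$ rsync -vr --inplace /my-dir/. /mnt/s3

With --inplace, rsync writes updated data directly into the destination file instead of writing a temporary file and renaming it over the original, which sidesteps the unlink and rename operations that s3fs cannot perform.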

tar -xvf file.tar.gz failed with: gzip: stdin: not in gzip format

I tarred a set of files with the command tar -czvf file.tar.gz file/ and copied the archive to a USB drive (ext4 format); at that point I verified that I could untar it. After I reinstalled the system, mounting the USB drive produced errors, so I ran fsck /dev/sdc1, after which I could mount it and copy the file to my PC. But when I untar it with tar -xvf file.tar.gz, the error happens again:
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
I have no idea how to rescue the data.
Any help is appreciated. Thanks.
@sitexa, I got this error when the file was not fully downloaded (transferred).
tar xvfz apache-tomcat-8.5.12.tar.gz
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
This error occurs because the file was not completely downloaded.
Reference: https://ubuntuforums.org/showthread.php?t=1319801
Check the file with the command
file <file name>
If output like
<filename>: HTML document, ASCII text, with very long lines
comes back, the file may be corrupted (for example, an HTML error page was saved in place of the real archive).
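A quick way to verify an archive before extracting it (a sketch; the Tomcat filename is reused from the example above):

file apache-tomcat-8.5.12.tar.gz     # should report "gzip compressed data"
gzip -t apache-tomcat-8.5.12.tar.gz  # tests integrity without extracting; silent on success

If gzip -t reports an error, re-download the file and compare its checksum against the one published by the source.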

403 (Forbidden) when doing `s3cmd get` but `s3cmd ls` works

I've set up s3cmd and can successfully put an encrypted file on S3 using it by doing this:
$ s3cmd put --encrypt --config=/home/phil/.s3cfg my-filename s3://my-bucket-name/my-dir/my-filename
The file is put there OK.
But when I try to get the file, this happens:
$ s3cmd get --config=/home/phil/.s3cfg --verbose s3://my-bucket-name/my-dir/my-filename
INFO: Applying --exclude/--include
INFO: Summary: 1 remote files to download
s3://my-bucket-name/my-dir/my-filename -> ./my-filename [1 of 1]
ERROR: S3 error: 403 (Forbidden):
The file my-filename is created locally, but with 0 bytes.
I can do s3cmd ls and get a directory listing, so I can access the bucket and directory.
Why can I not get the file? I must have missed something...
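One diagnostic worth trying (a sketch, not from the original question; s3cmd info is a standard subcommand) is to compare the object-level ACL with the bucket-level one, since s3cmd ls only needs list permission on the bucket, while s3cmd get needs read permission on the object itself:

$ s3cmd info --config=/home/phil/.s3cfg s3://my-bucket-name/my-dir/my-filename
$ s3cmd info --config=/home/phil/.s3cfg s3://my-bucket-name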

ansible - unarchive - input file not found

I'm getting this error while Ansible (1.9.2) is trying to unpack the file.
19:06:38 TASK: [jmeter | unpack jmeter] ************************************************
19:06:38 fatal: [jmeter01.veryfast.server.jenkins] => input file not found at /tmp/apache-jmeter-2.13.tgz or /tmp/apache-jmeter-2.13.tgz
19:06:38
19:06:38 FATAL: all hosts have already failed -- aborting
19:06:38
I checked on the target server: the file /tmp/apache-jmeter-2.13.tgz exists and has valid permissions (for testing I even set 777, though that shouldn't be required, but I still got the above error message).
I also checked the md5sum of this file (compared it with what's on the Apache JMeter site) -- it matches!
# md5sum apache-jmeter-2.13.tgz|grep 53dc44a6379b7b4a57976936f3a65e03
53dc44a6379b7b4a57976936f3a65e03 apache-jmeter-2.13.tgz
When I use tar -xvzf on this file, tar is able to show/extract its contents from the .tgz file.
What could I be missing? At this point, I'm wondering whether the unarchive module in Ansible has a bug.
My last resort (if I can't get unarchive in Ansible to work) would be to fall back to command: "tar -xzvf /tmp/....." but I don't want that as my first preference.
The default behavior of unarchive is to find the file on your local system, copy it to the remote host, and unpack it there. If you're getting a file-not-found error, I suspect you need to specify copy=no in your task.
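A minimal sketch of that fix as an ad-hoc invocation (dest=/opt is a hypothetical destination; adjust the host pattern to your inventory):

ansible jmeter01.veryfast.server.jenkins -m unarchive -a "src=/tmp/apache-jmeter-2.13.tgz dest=/opt copy=no"

With copy=no, unarchive treats src as a path that already exists on the remote host instead of looking for it on the control machine and copying it over.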