Can I move an object into a 'folder' inside an S3 bucket using the s3cmd mv command?

I have the s3cmd command line tool for Linux installed. It works fine for putting files in a bucket. However, I want to move a file into a 'folder'. I know that folders aren't natively supported by S3, but my Cyberduck GUI tool displays them nicely so I can browse my backups.
For instance, I have a file in the root of the bucket, called 'test.mov' that I want to move to the 'idea' folder. I am trying this:
s3cmd mv s3://mybucket/test.mov s3://mybucket/idea/test.mov
but I get strange errors like:
WARNING: Retrying failed request: /idea/test.mov (timed out)
WARNING: Waiting 3 sec...
I also tried quotes, but that didn't help either:
s3cmd mv 's3://mybucket/test.mov' 's3://mybucket/idea/test.mov'
Neither did just the folder name
s3cmd mv 's3://mybucket/test.mov' 's3://mybucket/idea/'
Is there a way to do this without having to delete and re-put this 3GB file?
Update: Just FYI, I can put new files directly into a folder like this:
s3cmd put test2.mov s3://mybucket/idea/test2.mov
But I still don't know how to move them around...

To move/copy files from one bucket to another, or within the same bucket, I use the s3cmd tool and it works fine. For instance:
s3cmd cp --recursive s3://bucket1/directory1 s3://bucket2/directory1
s3cmd mv --recursive s3://bucket1/directory1 s3://bucket2/directory1

Your file is probably quite big; try increasing the socket_timeout setting in your s3cmd configuration:
http://sumanrs.wordpress.com/2013/03/19/s3cmd-timeout-problems-moving-large-files-on-s3-250mb/
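For reference, a sketch of the relevant line in s3cmd's configuration file (typically ~/.s3cfg); the 600-second value is only an example, not a recommendation:
socket_timeout = 600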

Remove the quote marks. Your command should be:
s3cmd mv s3://mybucket/test.mov s3://mybucket/idea/test.mov
Also check the permissions on your bucket; your username should have all the permissions.
Also try connecting CloudFront to your bucket. I know it doesn't make sense, but I had a similar problem with a bucket that did not have a CloudFront distribution connected to it.

Related

gsutil cp / download file to windows server

I'm very new at this and need some help; I'm sure I'm not doing something right. I have a Synology NAS that has a cool option to sync files to Google Cloud Storage. This is a great way to get my backups off-site.
I have my backups syncing to a Coldline storage bucket. Now that my files are syncing, I'm looking to document the process in case I need to retrieve them.
I want to download a whole folder and all of the files inside it to a Windows server. I installed gsutil and am trying to run this command:
gsutil -m cp -R dir gs://bhp_backup_sync/backup/foldername
but after I run this I get the following exception.
CommandException: No URLs matched: dir
CommandException: 1 file/object could not be transferred.
NOOB here what am I missing?
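For what it's worth, a download reverses the source and destination; a minimal sketch using the bucket path from the question (the local target folder, here C:\restore, is an assumption and should already exist):
gsutil -m cp -R gs://bhp_backup_sync/backup/foldername C:\restore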

How to delete a file with an empty name from S3

Somehow, using the AWS Java API, we managed to upload a file to S3 without a name.
The file is shown if we run s3cmd ls s3://myBucket/MyFolder, but is not shown in the S3 GUI.
Running s3cmd del s3://myBucket/MyFolder/ gives the following error:
ERROR: Parameter problem: Expecting S3 URI with a filename or --recursive: s3://myBucket/MyFolder/
Running the same command without the trailing slash does nothing.
How can the file be deleted?
As far as I know, it can't be done using s3cmd.
It can be done using the aws cli, by running:
aws s3 rm s3://myBucket/MyFolder/
Make sure you don't use the --recursive flag, or it will remove the entire directory.
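If you want to see exactly which key that command will remove before running it, you can list the raw key names under the prefix first (a sketch with the aws cli, reusing the bucket and folder names from the question):
aws s3api list-objects-v2 --bucket myBucket --prefix MyFolder/ --query 'Contents[].Key'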

How to copy a file from an Amazon server to an S3 bucket

I am working with an S3 bucket. I need to copy an image from my Amazon server to the S3 bucket. Any idea how I can do it? I saw some sample code but I don't know how to use it.
if (S3::copyObject($sourceBucket, $sourceFile, $destinationBucket, $destinationFile, S3::ACL_PRIVATE)) {
    echo "Copied file";
} else {
    echo "Failed to copy file";
}
It seems that this code only copies from one bucket to another, not from the server to a bucket?
Thanks for the help.
Copy between S3 Buckets
AWS released a command line interface for copying between buckets.
http://aws.amazon.com/cli/
$ aws s3 sync s3://mybucket-src s3://mybucket-target --exclude "*.tmp"
..
This will copy from one source bucket to another bucket.
I have not tested this, but I believe it operates in series, downloading the files to your system and then uploading them to the target bucket.
See the documentation here: S3 CLI Documentation
I've used s3cmd for several years, and it's been very reliable. If you're using Ubuntu it's available with:
apt-get install s3cmd
You can also use one of the SDKs to develop your own tool.
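For the original question of copying an image that already lives on your server into a bucket, either tool above can upload it directly; a minimal sketch, with hypothetical local and bucket paths:
aws s3 cp /var/www/images/photo.jpg s3://mybucket/images/photo.jpg
s3cmd put /var/www/images/photo.jpg s3://mybucket/images/photo.jpg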

"s3cmd get" rewrites local files

I'm trying to download an S3 directory to a local machine using s3cmd. I'm using the command:
s3cmd sync --skip-existing s3://bucket_name/remote_dir ~/local_dir
But if I restart the download after an interruption, s3cmd doesn't skip the existing local files downloaded earlier and overwrites them. What is wrong with the command?
I had the same problem and found the solution in comment #38 from William Denniss at http://s3tools.org/s3cmd-sync
If you have:
s3cmd sync --verbose s3://mybucket myfolder
Change it to:
s3cmd sync --verbose s3://mybucket/ myfolder/ # note the trailing slashes
Then, the MD5 hashes are compared and everything works correctly! --skip-existing works as well.
To recap, both --skip-existing and the MD5 checks won't happen if you use the first command, and both work if you use the second (I made a mistake in my previous post, as I was testing with 2 different directories).
Use boto-rsync instead. https://github.com/seedifferently/boto_rsync
It correctly syncs only new/changed files from s3 to the local directory.
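A sketch of the basic invocation, following the project's documented source/destination form and reusing the paths from the question:
boto-rsync s3://bucket_name/remote_dir/ ~/local_dir/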

S3: make a public folder private again?

How do you make an AWS S3 public folder private again?
I was testing out some staging data, so I made the entire folder public within a bucket. I'd like to restrict its access again. So how do I make the folder private again?
The accepted answer works well - seems to set ACLs recursively on a given s3 path too. However, this can also be done more easily by a third-party tool called s3cmd - we use it heavily at my company and it seems to be fairly popular within the AWS community.
For example, suppose you had this kind of s3 bucket and dir structure: s3://mybucket.com/topleveldir/scripts/bootstrap/tmp/. Now suppose you had marked the entire scripts "directory" as public using the Amazon S3 console.
Now to make the entire scripts "directory-tree" recursively (i.e. including subdirectories and their files) private again:
s3cmd setacl --acl-private --recursive s3://mybucket.com/topleveldir/scripts/
It's also easy to make the scripts "directory-tree" recursively public again if you want:
s3cmd setacl --acl-public --recursive s3://mybucket.com/topleveldir/scripts/
You can also choose to set the permission/ACL only on a given s3 "directory" (i.e. non-recursively) by simply omitting --recursive in the above commands.
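For example, to apply the private ACL at that level only (non-recursively), as described above:
s3cmd setacl --acl-private s3://mybucket.com/topleveldir/scripts/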
For s3cmd to work, you first have to provide your AWS access and secret keys to s3cmd via s3cmd --configure (see http://s3tools.org/s3cmd for more details).
From what I understand, the 'Make public' option in the management console recursively adds a public grant for every object 'in' the directory.
You can see this by right-clicking on one file, then clicking on 'Properties'. You then need to click on 'Permissions', and there should be a line:
Grantee: Everyone [x] open/download [] view permissions [] edit permission.
If you upload a new file within this directory, it won't have this public access set and will therefore be private.
You need to remove public read permission one by one, either manually if you only have a few keys or by using a script.
I wrote a small script in Python with the boto3 module to recursively remove the 'public read' attribute of all keys in an S3 folder:
#!/usr/bin/env python
#remove public read right for all keys within a directory
#usage: remove_public.py bucketName folderName
import sys
import boto3
BUCKET = sys.argv[1]
PATH = sys.argv[2]
s3client = boto3.client("s3")
paginator = s3client.get_paginator('list_objects_v2')
page_iterator = paginator.paginate(Bucket=BUCKET, Prefix=PATH)
for page in page_iterator:
    # pages with no matching objects have no 'Contents' key
    keys = page.get('Contents', [])
    for k in keys:
        s3client.put_object_acl(
            ACL='private',
            Bucket=BUCKET,
            Key=k['Key']
        )
I tested it in a folder with (only) 2 objects and it worked. If you have lots of keys it may take some time to complete and a parallel approach might be necessary.
For the AWS CLI, it is fairly straightforward.
If the object is: s3://<bucket-name>/file.txt
For single object:
aws s3api put-object-acl --acl private --bucket <bucket-name> --key file.txt
For all objects in the bucket (bash one-liner):
aws s3 ls --recursive s3://<bucket-name> | cut -d' ' -f5- | awk '{print $NF}' | while read line; do
echo "$line"
aws s3api put-object-acl --acl private --bucket <bucket-name> --key "$line"
done
From the AWS S3 bucket listing (the AWS S3 UI), you can modify an individual file's permissions after making either one file public manually or after making the whole folder content public (to clarify, I'm referring to a folder inside a bucket). To revert the public attribute back to private, you click on the file, then go to Permissions and click the radio button under the "EVERYONE" heading. A second floating window appears where you can uncheck the "Read object" attribute. Don't forget to save the change. If you then try to access the link, you should get the typical "Access Denied" message. I have attached two screenshots: the first one shows the folder listing; clicking the file and following the aforementioned procedure should show you the second screenshot, which shows the 4 steps. Note that to modify multiple files, you would need to use the scripts proposed in previous posts. -Kf
I actually used Amazon's UI following this guide http://aws.amazon.com/articles/5050/
While @Varun Chandak's answer works great, it's worth mentioning that, due to the awk part, the script only accounts for the last part of the ls results. If the filename has spaces in it, awk will get only the last segment of the filename split by spaces, not the entire filename.
Example: A file with a path like folder1/subfolder1/this is my file.txt would result in an entry called just file.txt.
In order to prevent that while still using his script, you'd have to replace $NF in awk '{print $NF}' with a sequence of positional placeholders that accounts for the number of segments the 'split by space' operation could produce. Since filenames might have quite a large number of spaces, I've gone with an exaggeration, but to be honest, I think a completely new approach would probably be better to deal with these cases. Here's the updated code:
#!/bin/sh
aws s3 ls --recursive s3://my-bucket-name | awk '{print $4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,$25}' | while read line; do
echo "$line"
aws s3api put-object-acl --acl private --bucket my-bucket-name --key "$line"
done
I should also mention that using cut didn't produce any results for me, so I removed it. Credits still go to @Varun Chandak, since he built the script.
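Since a completely new approach is hinted at above, here is a minimal sketch that avoids parsing aws s3 ls output altogether by asking the API for the raw key names (bucket name reused from the snippet above; keys containing tab characters would still break this):
aws s3api list-objects-v2 --bucket my-bucket-name --query 'Contents[].Key' --output text | tr '\t' '\n' | while IFS= read -r key; do
  echo "$key"
  aws s3api put-object-acl --acl private --bucket my-bucket-name --key "$key"
done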
As of now, according to the boto docs, you can do it this way:
#!/usr/bin/env python
#remove public read right for all keys within a directory
#usage: remove_public.py bucketName folderName
import sys
import boto
bucketname = sys.argv[1]
dirname = sys.argv[2]
s3 = boto.connect_s3()
bucket = s3.get_bucket(bucketname)
keys = bucket.list(dirname)
for k in keys:
    # options are 'private', 'public-read',
    # 'public-read-write', 'authenticated-read'
    k.set_acl('private')
Also, you may consider removing any bucket policies under the Permissions tab of the S3 bucket.
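If you prefer the command line for that step, the equivalent call should be (bucket name is a placeholder):
aws s3api delete-bucket-policy --bucket <bucket-name>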
I did this today. My situation was that I had certain top-level directories whose files needed to be made private. I did have some folders that needed to be left public.
I decided to use the s3cmd like many other people have already shown. But given the massive number of files, I wanted to run parallel s3cmd jobs for each directory. And since it was going to take a day or so, I wanted to run them as background processes on an EC2 machine.
I set up an Ubuntu machine using the t2.xlarge instance type. I chose the xlarge after s3cmd failed with out-of-memory messages on a micro instance. An xlarge is probably overkill, but this server will only be up for a day.
After logging into the server, I installed and configured s3cmd:
sudo apt-get install python-setuptools
wget https://sourceforge.net/projects/s3tools/files/s3cmd/2.0.2/s3cmd-2.0.2.tar.gz/download
mv download s3cmd.tar.gz
tar xvfz s3cmd.tar.gz
cd s3cmd-2.0.2/
python setup.py install
sudo python setup.py install
cd ~
s3cmd --configure
I originally tried using screen but had some problems, mainly that processes were dropping from screen -r despite running the proper screen command, like screen -S directory_1 -d -m s3cmd setacl --acl-private --recursive --verbose s3://my_bucket/directory_1. So I did some searching and found the nohup command. Here's what I ended up with:
nohup s3cmd setacl --acl-private --recursive --verbose s3://my_bucket/directory_1 > directory_1.out &
nohup s3cmd setacl --acl-private --recursive --verbose s3://my_bucket/directory_2 > directory_2.out &
nohup s3cmd setacl --acl-private --recursive --verbose s3://my_bucket/directory_3 > directory_3.out &
With a multi-cursor editor this becomes pretty easy (I used aws s3 ls s3://my_bucket to list the directories).
That way you can log out whenever you want, log back in, and tail any of your logs. You can tail multiple files like:
tail -f directory_1.out -f directory_2.out -f directory_3.out
So set up s3cmd then use nohup as I demonstrated and you're good to go. Have fun!
It looks like this is now addressed by Amazon:
Selecting the following checkbox makes the bucket and its contents private again:
Block public and cross-account access if bucket has public policies
https://aws.amazon.com/blogs/aws/amazon-s3-block-public-access-another-layer-of-protection-for-your-accounts-and-buckets/
UPDATE: The linked post was updated in August 2019. The options shown above no longer exist; see the updated post for the current Block Public Access options.
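The same Block Public Access settings can also be applied from the command line; a sketch with the aws cli (bucket name is a placeholder):
aws s3api put-public-access-block --bucket <bucket-name> --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true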
If you use S3 Browser, it has an option to make a folder public or private.
If you want a delightfully simple one-liner, you can use the AWS PowerShell Tools. The reference for the AWS PowerShell Tools can be found here. We'll be using the Get-S3Object and Set-S3ACL cmdlets.
$TargetS3Bucket = "myPrivateBucket"
$TargetDirectory = "accidentallyPublicDir"
$TargetRegion = "us-west-2"
Set-DefaultAWSRegion $TargetRegion
Get-S3Object -BucketName $TargetS3Bucket -KeyPrefix $TargetDirectory | Set-S3ACL -CannedACLName private
There are two ways to manage this:
Block the whole bucket (simpler, but it does not apply to all use cases, such as an S3 bucket with a static website and a subfolder used for a CDN) - https://aws.amazon.com/blogs/aws/amazon-s3-block-public-access-another-layer-of-protection-for-your-accounts-and-buckets/
Block access to the specific directory of the S3 bucket that was granted the 'Make public' option, by running the script from ascobol (I just rewrote it with boto3):
#!/usr/bin/env python
#remove public read right for all keys within a directory
#usage: remove_public.py bucketName folderName
import sys
import boto3
BUCKET = sys.argv[1]
PATH = sys.argv[2]
s3client = boto3.client("s3")
paginator = s3client.get_paginator('list_objects_v2')
page_iterator = paginator.paginate(Bucket=BUCKET, Prefix=PATH)
for page in page_iterator:
    # pages with no matching objects have no 'Contents' key
    keys = page.get('Contents', [])
    for k in keys:
        s3client.put_object_acl(
            ACL='private',
            Bucket=BUCKET,
            Key=k['Key']
        )
cheers
Use @ascobol's script, above. Tested with ~2300 items in 1250 subfolders, and it appears to have worked (lifesaver, thanks!).
I'll provide some additional steps for less experienced folks, but if anyone with more reputation would like to delete this answer and comment on his post stating that it works with 2000+ folders, that'd be fine with me.
Install AWS CLI
Install Python 3 if not present (on Mac/Linux, check with python3 --version)
Install the boto3 package for Python 3 with pip install boto3
Create a text file named remove_public.py, and paste in the contents of @ascobol's script
Run python3 remove_public.py bucketName folderName
Script contents from ascobol's answer, above
#!/usr/bin/env python
#remove public read right for all keys within a directory
#usage: remove_public.py bucketName folderName
import sys
import boto3
BUCKET = sys.argv[1]
PATH = sys.argv[2]
s3client = boto3.client("s3")
paginator = s3client.get_paginator('list_objects_v2')
page_iterator = paginator.paginate(Bucket=BUCKET, Prefix=PATH)
for page in page_iterator:
    # pages with no matching objects have no 'Contents' key
    keys = page.get('Contents', [])
    for k in keys:
        s3client.put_object_acl(
            ACL='private',
            Bucket=BUCKET,
            Key=k['Key']
        )