I am trying to simulate a directory listing for my bucket on AWS S3. Currently I am creating "index.html" locally as follows:
import os
from datetime import datetime

for root, dirs, files in os.walk(job_dir):
    objects = []
    for obj in dirs + files:
        full_path = os.path.join(root, obj)
        stat = os.stat(full_path)
        objects.append({'name': obj,
                        'mtime': datetime.fromtimestamp(stat.st_mtime).strftime('%c'),
                        'size': stat.st_size,
                        'type': 'dir' if os.path.isdir(full_path) else 'file'})
    generate_index(objects, dest_path)
And then I pass it, together with the destination path (the bucket URL), to a function which creates "index.html" from a Jinja template.
Is there a better way to do this? I would like to avoid JavaScript, though. I did some googling, but so far have not found an elegant solution.
What would be the easiest alternative to "os.walk" using the boto3 Python client?
I found some snippets e.g. here:
How do I list directory contents of an S3 bucket using Python and Boto3?
But isn't there a simpler solution?
Thanks...
I'd recommend using the list_objects_v2 method in boto3.
import boto3

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')
response_iterator = paginator.paginate(
    Bucket='MyBucket'
)

objects = []
for response in response_iterator:
    for r in response['Contents']:
        print("File is called {}".format(r['Key']))
While iterating through the objects in the bucket, you can build the list that you pass to a Jinja template to create the index.html page.
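For example, a minimal sketch of that idea (untested; generate_index, dest_path, and the '%c' time format are carried over from the question, while 'Key', 'Size', and 'LastModified' are standard fields of the list_objects_v2 response):

import boto3

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')

objects = []
for response in paginator.paginate(Bucket='MyBucket'):
    for r in response.get('Contents', []):  # 'Contents' is absent for an empty bucket
        objects.append({'name': r['Key'],
                        'mtime': r['LastModified'].strftime('%c'),
                        'size': r['Size'],
                        'type': 'file'})  # S3 keys are flat; there are no real directories
generate_index(objects, dest_path)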
Related
I have a list of .zip files on S3 which is dynamically created and passed to the Flask view /download. My problem doesn't seem to be in looping through this list, but rather in returning a response so that all files from the list are downloaded to the user's computer. If I have a view return a response, it only downloads the first file, as return closes the connection.
I have tried a number of things and looked at similar issues (like this one: Boto3 to download all files from a S3 Bucket), but so far I have had no luck in resolving this. I have also looked at streaming (as in here: http://flask.pocoo.org/docs/1.0/patterns/streaming/) and tried creating a subfunction which is a generator, but the same issue persists, as I still have to pass a return value to the view function. Here is that last code example:
@app.route('/download', methods=['POST'])
def download():
    download = request.form.getlist('checked')

    def generate(result):
        s3_resource = boto3.resource('s3')
        my_bucket = s3_resource.Bucket(S3_BUCKET)
        d_object = my_bucket.Object(result).get()
        yield d_object['Body'].read()

    for filename in download:
        return Response(generate(filename), mimetype='application/zip',
                        headers={'Content-Disposition': 'attachment;filename=' + filename})
What would be the best way of doing this so that all the files are downloaded?
Is there an easier way to pass a list to boto3, or any other Flask module, to download those files?
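For what it's worth, since a single HTTP response can only carry one attachment, one common workaround is to bundle the selected objects into a single zip. A sketch (untested; assumes S3_BUCKET is configured and the checked keys are small enough to buffer in memory):

import io
import zipfile

import boto3
from flask import Flask, Response, request

app = Flask(__name__)
S3_BUCKET = 'my-bucket'  # placeholder

@app.route('/download', methods=['POST'])
def download():
    keys = request.form.getlist('checked')
    bucket = boto3.resource('s3').Bucket(S3_BUCKET)

    # Write every selected object into one in-memory zip archive
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, 'w', zipfile.ZIP_DEFLATED) as archive:
        for key in keys:
            archive.writestr(key, bucket.Object(key).get()['Body'].read())
    buffer.seek(0)

    # A single Response then delivers all files at once
    return Response(buffer.getvalue(), mimetype='application/zip',
                    headers={'Content-Disposition': 'attachment;filename=all_files.zip'})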
I have developed a Tornado API which gets me the AWS S3 bucket contents. Below is a code snippet which runs perfectly with Boto; however, it doesn't work for buckets in some other locations.
The method returns a list (resp) which consists of filename, size, and file type.
I want to achieve the same using Boto3. I have tried a lot, but the Boto3 methods return all the contents of the S3 bucket with full paths.
def post(self):
    try:
        resp = []
        path = self.get_argument('path')
        bucket_name = self.get_argument('bucket_name')
        path_len = len(path)
        conn = S3Connection()
        bucket = conn.get_bucket(bucket_name)
        folders = bucket.list(path, "/")
        for folder in folders:
            if folder.name == path:
                continue
            if str(folder.name).endswith("/"):
                file_type = 'd'
                file_name = str(folder.name)[path_len:-1]
            else:
                _file_size = self.filesize(folder.size)
                file_type = 'f'
                file_name = str(folder.name)[path_len:]
            resp.append({"bucket": bucket_name, "path": path, "name": file_name, "type": file_type,
                         "size": _file_size if file_type == 'f' else ""})
        self.write(json.dumps(resp))
    except Exception as e:
        # surface errors from the S3 call to the caller
        self.write(json.dumps({"error": str(e)}))
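For the boto3 side, a sketch of the equivalent listing (untested; it reuses path and bucket_name from the handler above, and Delimiter='/' makes list_objects_v2 group the keys below path into CommonPrefixes, roughly what boto's bucket.list(path, "/") did):

import boto3

s3 = boto3.client('s3')
resp = []
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=bucket_name, Prefix=path, Delimiter='/'):
    # "Subdirectories" come back as CommonPrefixes...
    for prefix in page.get('CommonPrefixes', []):
        resp.append({"bucket": bucket_name, "path": path, "type": "d",
                     "name": prefix['Prefix'][len(path):-1], "size": ""})
    # ...and files directly under the prefix as Contents
    for obj in page.get('Contents', []):
        if obj['Key'] == path:
            continue
        resp.append({"bucket": bucket_name, "path": path, "type": "f",
                     "name": obj['Key'][len(path):], "size": obj['Size']})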
Razvan Tudorica built a small replacement for Boto3's upload and delete methods which uses Tornado’s AsyncHTTPClient; he published a blog post here concerning the work and posted his code on GitHub.
Since the original question highlights that the supplied code snippet "doesn't work for the buckets in some different location", of specific interest here is Razvan's note that "the main idea around [his] replacement is to use botocore to build the request (AWS wants the requests to be signed using different algorithms based on AWS zones and request data) and only to use the AsyncHTTPClient for the actual asynchronous call."
I hope Razvan's work still proves useful to you or, minimally, to others researching similar efforts (as I was recently).
I have about 1000 objects in S3 which are named like
abcyearmonthday1
abcyearmonthday2
abcyearmonthday3
...
I want to rename them to
abc/year/month/day/1
abc/year/month/day/2
abc/year/month/day/3
How could I do it through boto3? Is there an easier way of doing this?
As explained in Boto3/S3: Renaming an object using copy_object,
you cannot rename an object in S3; you have to copy the object with a new name and then delete the old object:
import boto3

s3 = boto3.resource('s3')
s3.Object('my_bucket', 'my_file_new').copy_from(CopySource='my_bucket/my_file_old')
s3.Object('my_bucket', 'my_file_old').delete()
There is no direct way to rename an S3 object.
Two steps need to be performed (sketched below):
Copy the S3 object to the same location with the new name.
Then delete the older object.
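A minimal sketch of those two steps with the low-level client (the bucket and key names are placeholders):

import boto3

s3 = boto3.client('s3')
# Step 1: copy the object under the new key
s3.copy_object(Bucket='my_bucket', Key='my_file_new',
               CopySource={'Bucket': 'my_bucket', 'Key': 'my_file_old'})
# Step 2: delete the old key
s3.delete_object(Bucket='my_bucket', Key='my_file_old')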
I had the same problem (in my case I wanted to rename files generated in S3 using the Redshift UNLOAD command). I solved it by creating a boto3 session and then copy-deleting file by file.
Like this:
import boto3

s3_session = boto3.session.Session(
    aws_access_key_id=my_access_key_id,
    aws_secret_access_key=my_secret_access_key).resource('s3')

# Save in a list the tuples of filenames (with prefix):
# [(old_s3_file_path, new_s3_file_path), ..., ()]
# e.g. of tuple: ('prefix/old_filename.csv000', 'prefix/new_filename.csv')
s3_files_to_rename = []
s3_files_to_rename.append((old_file, new_file))

for pair in s3_files_to_rename:
    old_file = pair[0]
    new_file = pair[1]
    s3_session.Object(s3_bucket_name, new_file).copy_from(
        CopySource=s3_bucket_name + '/' + old_file)
    s3_session.Object(s3_bucket_name, old_file).delete()
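To build that list for the roughly 1000 objects in the question, a sketch like this could generate the rename pairs from a prefix listing (untested; make_new_key is a hypothetical helper that turns a name like 'abcyearmonthday1' into 'abc/year/month/day/1' for your actual naming scheme):

import boto3

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')

s3_files_to_rename = []
for page in paginator.paginate(Bucket=s3_bucket_name, Prefix='abc'):
    for obj in page.get('Contents', []):
        old_key = obj['Key']
        s3_files_to_rename.append((old_key, make_new_key(old_key)))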
I want to add a folder to my Amazon S3 bucket programmatically.
Can you please suggest how to achieve this?
There are no folders in Amazon S3. It's just that most of the S3 browser tools available show the part of the key name separated by slashes as a folder.
If you really need that, you can create an empty object with a slash at the end, e.g. "folder/". It will look like a folder if you open it with a GUI tool or the AWS Console.
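In boto3, for instance, that could look like this (a sketch; the bucket and folder names are placeholders):

import boto3

s3 = boto3.client('s3')
s3.put_object(Bucket='my-bucket', Key='folder/')  # zero-byte object whose key ends in "/"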
As everyone has told you, in AWS S3 there aren't any "folders"; you're thinking of them incorrectly. AWS S3 has "objects", and these objects can look like folders, but they aren't really folders in the fullest sense of the word. If you search for how to create folders in Amazon S3, you won't find a lot of good results.
There is a way to create "folders" in the sense that you can create a simulated folder structure in S3, but again, wrap your head around the fact that you are creating objects in S3, not folders. Going along with that, you will need the "put-object" command to create this simulated folder structure. To use this command, you need the AWS CLI tools installed; go here, AWS CLI Installation, for instructions to get them installed.
The command is this:
aws s3api put-object --bucket your-bucket-name --key path/to/file/yourfile.txt --body yourfile.txt
Now, the fun part about this command is that you do not need to have all of the "folders" (objects) created before you run it. What this means is that you can have a "folder" (object) to contain things, and then use this command to create the simulated folder structure within that "folder" (object), as I discussed earlier. For example, I have a "folder" (object) named "importer" within my S3 bucket; let's say I want to insert sample.txt within a "folder" (object) structure of the year, month, and then a sample "folder" (object) within all of that.
If I only have the "importer" object within my bucket, I do not need to go in beforehand to create the year, month, and sample objects ("folders") before running this command. I can run this command like so:
aws s3api put-object --bucket my-bucket-here --key importer/2016/01/sample/sample.txt --body sample.txt
The put-object command will then go in and create the path that I have specified in the --key flag. Here's a bit of a jewel: even if you don't have a file to upload to S3, you can still create objects ("folders") within the S3 bucket. For example, I created a shell script to "create folders" within the bucket by leaving off the --body flag, not specifying a file name, and leaving a slash at the end of the path provided in the --key flag; the system creates the desired simulated folder structure within the S3 bucket without inserting a file in the process.
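For instance, a folder-only version of the earlier command might look like this (a sketch following the note above; no file is uploaded):

aws s3api put-object --bucket my-bucket-here --key importer/2016/01/sample/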
Hopefully this helps you understand the system a little better.
Note: once you have a "folder" structure created, you can use S3's "sync" command to synchronize the descendant "folder" with a folder on your local machine, or even with another S3 bucket.
Java with AWS SDK:
There are no folders in S3, only key/value pairs. The key can contain slashes (/), and that will make it appear as a folder in the management console, but programmatically it's not a folder; it is a String value.
If you are trying to structure your s3 bucket, then your naming conventions (the keys you give your files) can simply follow normal directory patterns, i.e. folder/subfolder/file.txt.
When searching (depending on language you are using), you can search via prefix with a delimiter. In Java, it would be a listObjects(String storageBucket, String prefix, String delimiter) method call.
The storageBucket is the name of your bucket, the prefix is the key you want to search, and the delimiter is used to filter your search based off the prefix.
The AWS::S3 Rails gem does this by itself:
AWS::S3::S3Object.store("teaser/images/troll.png", file, AWS_BUCKET)
Will automatically create the teaser and images "folders" if they don't already exist.
With the AWS SDK for .NET it works perfectly; just add "/" at the end of the folder name:
var folderKey = folderName + "/"; //end the folder name with "/"
AmazonS3 client = Amazon.AWSClientFactory.CreateAmazonS3Client(AWSAccessKey, AWSSecretKey);
var request = new PutObjectRequest();
request.WithBucketName(AWSBucket);
request.WithKey(folderKey);
request.WithContentBody(string.Empty);
S3Response response = client.PutObject(request);
Then refresh your AWS console, and you will see the folder
With aws cli, it is possible to copy an entire folder to a bucket.
aws s3 cp /path/to/folder s3://bucket/path/to/folder --recursive
There is also the option to sync a folder using aws s3 sync
This is a divisive topic, so here is a 2019 screenshot of the AWS S3 console for adding folders, along with its note:
"When you create a folder, S3 console creates an object with the above name appended by suffix "/" and that object is displayed as a folder in the S3 console."
Then 'using coding' you can simply adjust the object name by prepending a valid folder name string and a forward slash.
For Swift I created a method where you pass in a String for the folder name.
Swift 3:
import AWSS3

func createFolderWith(Name: String!) {
    let folderRequest: AWSS3PutObjectRequest = AWSS3PutObjectRequest()
    folderRequest.key = Name + "/"
    folderRequest.bucket = bucket
    AWSS3.default().putObject(folderRequest).continue({ (task) -> Any? in
        if task.error != nil {
            assertionFailure("* * * error: \(task.error?.localizedDescription)")
        } else {
            print("created \(Name) folder")
        }
        return nil
    })
}
Then just call
createFolderWith(Name:"newFolder")
In iOS (Objective-C), I did it the following way:
You can add the below code to create a folder inside your Amazon S3 bucket programmatically. This is a working code snippet. Any suggestions welcome.
- (void)createFolder {
    AWSS3PutObjectRequest *awsS3PutObjectRequest = [AWSS3PutObjectRequest new];
    awsS3PutObjectRequest.key = [NSString stringWithFormat:@"%@/", @"FolderName"];
    awsS3PutObjectRequest.bucket = @"Bucket_Name";
    AWSS3 *awsS3 = [AWSS3 defaultS3];
    [awsS3 putObject:awsS3PutObjectRequest completionHandler:^(AWSS3PutObjectOutput * _Nullable response, NSError * _Nullable error) {
        if (error) {
            NSLog(@"Error creating folder");
        } else {
            NSLog(@"Folder created successfully");
        }
    }];
}
Here's how you can achieve what you're looking for (from code/cli):
--create/select the file (locally) which you want to move to the folder:
~/Desktop> touch file_to_move
--move the file to s3 folder by executing:
~/Desktop> aws s3 cp file_to_move s3://<path_to_your_bucket>/<new_folder_name>/
A new folder will be created in your s3 bucket, and you'll now be able to execute cp, mv, rm ... statements, i.e. manage the folder as usual.
If the new file created above is not required, simply delete it. You now have an s3 folder created.
You can select the language of your choice from the available AWS SDKs.
Alternatively, you can try the Minio client libraries, available in Python, Go, .NET, Java, and JavaScript, for your application development environment; the project has an examples directory with all the basic operations listed.
Disclaimer: I work for Minio
In Swift 2.2 you can create a folder using:
func createFolderWith(Name: String!) {
    let folderRequest: AWSS3PutObjectRequest = AWSS3PutObjectRequest()
    folderRequest.key = Name + "/"
    folderRequest.bucket = "Your Bucket Name"
    AWSS3.defaultS3().putObject(folderRequest).continueWithBlock({ (task) -> AnyObject? in
        if task.error != nil {
            assertionFailure("* * * error: \(task.error?.localizedDescription)")
        } else {
            print("created \(Name) folder")
        }
        return nil
    })
}
The below creates an empty directory called "mydir1".
It is Node.js code; it should be similar for other languages.
The trick is to have a slash (/) at the end of the object's name, as in "mydir1/"; otherwise a file with the name "mydir1" will be created.
let AWS = require('aws-sdk');
AWS.config.loadFromPath(__dirname + '\\my-aws-config.json');
let s3 = new AWS.S3();

var params = {
    Bucket: "mybucket1",
    Key: "mydir1/",
    ServerSideEncryption: "AES256"
};

s3.putObject(params, function (err, data) {
    if (err) {
        console.log(err, err.stack); // an error occurred
        return;
    } else {
        console.log(data); // successful response
        return;
        /*
        data = {
            ETag: "\"6805f2cfc46c0f04559748bb039d69ae\"",
            ServerSideEncryption: "AES256",
            VersionId: "Ri.vC6qVlA4dEnjgRV4ZHsHoFIjqEMNt"
        }
        */
    }
});
Source: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property
Creating a directory inside an S3 bucket and copying contents into it is pretty simple.
An S3 command can be used:
aws s3 cp abc/def.txt s3://mybucket/abc/
Note: the trailing / is a must; that is what makes the directory. Otherwise it will become a file in S3.
I guess your query is simply about creating a folder inside a folder (a subfolder).
So while copying any directory's data into a bucket subfolder, use a command like this:
aws s3 cp mudit s3://mudit-bucket/Projects-folder/mudit-subfolder --recursive
It will create the subfolder and put your directory's contents in it. Also, once your subfolder's data is gone, the subfolder will automatically get deleted.
You can use the copy command to create a folder while copying a file:
aws s3 cp test.xml s3://mybucket/myfolder/test.xml
How can I create a folder under a bucket using the boto library for Amazon S3?
I followed the manual and created keys with permissions, metadata, etc., but nowhere in boto's documentation does it describe how to create folders under a bucket, or how to create a folder under folders in a bucket.
There is no concept of folders or directories in S3. You can create file names like "abc/xys/uvw/123.jpg", which many S3 access tools like S3Fox show as a directory structure, but it's actually just a single file in a bucket.
Assume you want to create the folder abc/123/ in your bucket; it's a piece of cake with boto:
k = bucket.new_key('abc/123/')
k.set_contents_from_string('')
Or use the console
Use this:
import boto3
s3 = boto3.client('s3')
bucket_name = "YOUR-BUCKET-NAME"
directory_name = "DIRECTORY/THAT/YOU/WANT/TO/CREATE"  # the name of your folders
s3.put_object(Bucket=bucket_name, Key=(directory_name+'/'))
I tried many of the methods above, and adding a forward slash / to the end of the key name to create a directory didn't work for me:
client.put_object(Bucket="foo-bucket", Key="test-folder/")
You have to supply the Body parameter in order to create the directory:
client.put_object(Bucket='foo-bucket',Body='', Key='test-folder/')
Source: ryantuck in boto3 issue
Append "_$folder$" to your folder name and call put.
String extension = "_$folder$";
s3.putObject("MyBucket", "MyFolder"+ extension, new ByteArrayInputStream(new byte[0]), null);
see:
http://www.snowgiraffe.com/tech/147/creating-folders-programmatically-with-amazon-s3s-api-putting-babies-in-buckets/
Update for 2019: if you want to create a folder with the path bucket_name/folder1/folder2, you can use this code:
from boto3 import client, resource

class S3Helper:
    def __init__(self):
        self.client = client("s3")
        self.s3 = resource('s3')

    def create_folder(self, path):
        path_arr = path.rstrip("/").split("/")
        if len(path_arr) == 1:
            return self.client.create_bucket(Bucket=path_arr[0])
        parent = path_arr[0]
        bucket = self.s3.Bucket(parent)
        status = bucket.put_object(Key="/".join(path_arr[1:]) + "/")
        return status

s3 = S3Helper()
s3.create_folder("bucket_name/folder1/folder2")
It's really easy to create folders. Actually, it's just creating keys.
You can see in my code below that I was creating a folder with utc_time as its name.
Do remember to end the key with '/', as below; this is what makes it display as a folder:
Key='folder1/' + utc_time + '/'
import time
import datetime

import boto3

client = boto3.client('s3')
utc_timestamp = time.time()

def lambda_handler(event, context):
    UTC_FORMAT = '%Y%m%d'
    utc_time = datetime.datetime.utcfromtimestamp(utc_timestamp)
    utc_time = utc_time.strftime(UTC_FORMAT)

    print('start to create folder for => ' + utc_time)

    putResponse = client.put_object(Bucket='mybucketName',
                                    Key='folder1/' + utc_time + '/')
    print(putResponse)
Although you can create a folder by appending "/" to your folder name, under the hood S3 maintains a flat structure, unlike your regular filesystem:
var params = {
    Bucket: bucketName,
    Key: folderName + "/"
};
s3.putObject(params, function (err, data) {});
S3 doesn't have a folder structure, but there is something called keys.
We can create keys like /2013/11/xyz.xls, and they will be shown as folders in the console. But the storage part of S3 will take that as the file name.
Even when retrieving, we can see the files in a particular folder (or under particular keys) by using the ListObjects method with the Prefix parameter.
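In boto3, for example, that retrieval could look like this (a sketch; the bucket name and prefix are placeholders):

import boto3

s3 = boto3.client('s3')
# List only the keys that live "inside" the 2013/11/ folder
response = s3.list_objects_v2(Bucket='my-bucket', Prefix='2013/11/')
for obj in response.get('Contents', []):
    print(obj['Key'])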
Apparently you can now create folders in S3. I'm not sure since when, but I have a bucket in the "Standard" zone and can choose Create Folder from the Actions dropdown.
This question is more relevant to the future, so I'm adding this update.
I am using the upload_file method as shown below.

fold = '/my/system/filePath/tabmcq/Tables/auto/18.tsv'

s3_client.upload_file(
    Filename=fold,
    Bucket="tab-mcq-de",
    Key=f"{fold.split('/')[-3]}/{fold.split('/')[-2]}/{fold.split('/')[-1]}"
)

The idea is that the "Filename" parameter requires the absolute path of the file on your system.
The "Key" parameter is the relative path of the file from the source directory where it is located.
In the case of this example, the Key parameter has to contain the value "Tables/auto/18.tsv" for the client to create the folders.
Hope this helps.
The following works using Python boto3:
import boto3

s3 = boto3.client("s3")
s3.put_object(Bucket="dest_bucket", Key='folder_name/')