AWS structure of S3 trigger

I am building a Python Lambda function in AWS and want to add an S3 trigger to it. Following these instructions, I saw how to get the bucket and key of the triggering object using:
import urllib.parse

def func(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
There is an example of such an object at that link, but I wasn't able to find a description of the entire event object anywhere in AWS' documentation.
Is there a documentation for this object's structure? Where might I find it?

You can find documentation about the whole object in the S3 documentation:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/notification-content-structure.html
I would also advise iterating over the records, because there can be multiple in a single event:
for record in event['Records']:
    bucket = record['s3']['bucket']['name']
    key = record['s3']['object']['key']
    [...]
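For reference, here is an abbreviated sketch of the event shape as a Python dict (fields trimmed and values illustrative; see the linked docs page for the full structure):

# Abbreviated shape of an S3 event notification; values are illustrative.
event = {
    "Records": [
        {
            "eventVersion": "2.1",
            "eventSource": "aws:s3",
            "awsRegion": "us-east-1",
            "eventName": "ObjectCreated:Put",
            "s3": {
                "bucket": {"name": "my-bucket", "arn": "arn:aws:s3:::my-bucket"},
                "object": {"key": "my/key.txt", "size": 1024},
            },
        }
    ]
}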

Related

Updating existing metadata of my S3 object

I am trying to update the existing metadata of my S3 object, but instead of updating it, my code creates a new metadata set. I followed the documented approach, but I don't know why it is not able to update it:
k = s3.head_object(Bucket='test-bucket', Key='test.json')
s3.copy_object(
    Bucket='test-bucket',
    Key='test.json',
    CopySource='test-bucket' + '/' + 'test.json',
    Metadata={'Content-Type': 'text/plain'},
    MetadataDirective='REPLACE',
)
I was able to update it using the copy_from method:
import boto3

s3 = boto3.resource('s3')
obj = s3.Object(bucketName, uploadedKey)
obj.copy_from(
    CopySource={'Bucket': bucketName, 'Key': uploadedKey},
    MetadataDirective='REPLACE',
    ContentType=value,  # the new Content-Type to set
)
S3 metadata is read-only, so updating only the metadata of an S3 object is not possible. The only way to update the metadata is to recreate (copy) the object. Check the first paragraph of the official docs:
You can set object metadata at the time you upload it. After you upload the object, you cannot modify object metadata. The only way to modify object metadata is to make a copy of the object and set the metadata.
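Also note that Content-Type is a system header, not user metadata, so it has its own parameter rather than going inside the Metadata dict. A minimal sketch of a self-copy that replaces both (bucket, key, and the values are placeholders):

import boto3

s3 = boto3.client('s3')

bucket, key = 'test-bucket', 'test.json'  # placeholders

# Copy the object onto itself, replacing user metadata (stored as
# x-amz-meta-* headers) and the system Content-Type header in one call.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={'Bucket': bucket, 'Key': key},
    Metadata={'project': 'demo'},  # user metadata
    ContentType='text/plain',      # system header, set separately
    MetadataDirective='REPLACE',
)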

How to update metadata on an S3 object larger than 5GB?

I am using the boto3 API to update the S3 metadata on an object.
I am making use of How to update metadata of an existing object in AWS S3 using python boto3?
My code looks like this:
import boto3

s3 = boto3.resource('s3')
s3_object = s3.Object(bucket, key)
new_metadata = {'foo': 'bar'}
s3_object.metadata.update(new_metadata)
s3_object.copy_from(
    CopySource={'Bucket': bucket, 'Key': key},
    Metadata=s3_object.metadata,
    MetadataDirective='REPLACE',
)
This code fails when the object is larger than 5GB. I get this error:
botocore.exceptions.ClientError: An error occurred (InvalidRequest) when calling the CopyObject operation: The specified copy source is larger than the maximum allowable size for a copy source: 5368709120
How does one update the metadata on an object larger than 5GB?
Due to the size of your object, you need a multipart copy: create a multipart upload and copy the source in parts with UploadPartCopy (exposed in boto3 as MultipartUploadPart.copy_from). See the boto3 docs for more information:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.MultipartUploadPart.copy_from
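A minimal sketch of that approach using the low-level client (upload_part_copy, the client counterpart of MultipartUploadPart.copy_from), copying the object onto itself with new metadata; bucket, key, and metadata values are placeholders:

import boto3

s3 = boto3.client('s3')

bucket, key = 'my-bucket', 'big-object.bin'  # placeholders
new_metadata = {'foo': 'bar'}
part_size = 1024 ** 3  # 1 GiB; copied parts must be 5 MiB-5 GiB (except the last)

size = s3.head_object(Bucket=bucket, Key=key)['ContentLength']

# Metadata for the new copy is set when the multipart upload is created.
mpu = s3.create_multipart_upload(Bucket=bucket, Key=key, Metadata=new_metadata)

parts = []
for i, start in enumerate(range(0, size, part_size), start=1):
    end = min(start + part_size, size) - 1
    resp = s3.upload_part_copy(
        Bucket=bucket, Key=key, UploadId=mpu['UploadId'], PartNumber=i,
        CopySource={'Bucket': bucket, 'Key': key},
        CopySourceRange=f'bytes={start}-{end}',
    )
    parts.append({'ETag': resp['CopyPartResult']['ETag'], 'PartNumber': i})

s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=mpu['UploadId'],
    MultipartUpload={'Parts': parts},
)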
Apparently, you can't just update the metadata in place: you have to re-copy the object (S3-to-S3 copies work), which is annoying for objects in the 100-500GB range.

Unable to figure out why my code prints "None" even though it should return values

I am trying to access an S3 object, and when I print the data objects, it prints "None". I am unable to move beyond that point since my "if" condition is failing. Could anyone please assist?
if s3_resource.Bucket(s3_bucket).creation_date:
    print("UTKARSH")
The piece of code you have specified is correct. Check whether you are able to access the bucket first, and then go for the if condition:
s3_object = boto3.resource('s3').Bucket("Your Bucket Name")
Or you can also get the bucket info using the S3 client:
bucket_list = boto3.client('s3').list_buckets()
Try to get the details first; then you can filter them according to your need.
creation_date is one of the properties of an S3 bucket; it stores the date that particular bucket was created.
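For example, a minimal sketch that prints each bucket's name and creation date:

import boto3

s3_client = boto3.client('s3')

# list_buckets returns the name and creation date of every bucket you own.
for bucket in s3_client.list_buckets()['Buckets']:
    print(bucket['Name'], bucket['CreationDate'])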
What is your s3_bucket? Is it a string? Check my code:
import boto3

session = boto3.session.Session(region_name='ap-northeast-2')
s3 = session.resource('s3')
if s3.Bucket('seoul-dev-datalake').creation_date:
    print(s3.Bucket('my-bucket-name').name, 'Done')
It gives the correct result.
my-bucket-name Done

How to get information of S3 bucket?

Say for example I have the following bucket set up:
bucketone
  folderone/
    text1.txt
    text2.txt
  foldertwo/
    file1.json
  folderthree/
    folderthreesub/
      file2.json
      file3.json
But my listing only goes down one level.
What’s the proper way of retrieving information under a bucket?
Will be sure to accept/upvote answer.
What's wrong with just doing this from the CLI?
aws s3 cp s3://bucketone . --recursive
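If you only want to list the contents rather than download them, ls with --recursive does the same walk:

aws s3 ls s3://bucketone --recursive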
Contrary to what you might expect, rsplit() returns the split parts in left-to-right order, even though the splitting is applied from the right.
Therefore, you actually want the last element of the split:
filename = obj['Key'].rsplit('/', 1)[-1]
See: Python rsplit() documentation
Also, be careful of 'pretend directories' that might be created via the console. They are actually zero-length files that make the folder appear in the UI. Therefore, skip keys with no name after the final slash.
Make those fixes and it works as desired:
import boto3
import os

s3client = boto3.client('s3')

for obj in s3client.list_objects_v2(Bucket='my-bucket')['Contents']:
    filename = obj['Key'].rsplit('/', 1)[-1]
    localfiledir = os.path.join('/tmp', filename)
    if filename != '':  # skip zero-length 'folder' placeholder objects
        s3client.download_file('my-bucket', obj['Key'], localfiledir)
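Note that list_objects_v2 returns at most 1000 keys per call; for larger buckets, use a paginator. A short sketch (the bucket name is a placeholder):

import boto3

s3client = boto3.client('s3')

# Walk every object in the bucket, 1000 keys per page.
paginator = s3client.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='my-bucket'):
    for obj in page.get('Contents', []):
        print(obj['Key'], obj['Size'])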

How to rename objects in S3 with boto3?

I have about 1000 objects in S3 which are named like
abcyearmonthday1
abcyearmonthday2
abcyearmonthday3
...
and I want to rename them to
abc/year/month/day/1
abc/year/month/day/2
abc/year/month/day/3
How could I do it through boto3? Is there an easier way of doing this?
As explained in Boto3/S3: Renaming an object using copy_object, you cannot rename an object in S3; you have to copy the object under a new name and then delete the old object:
import boto3

s3 = boto3.resource('s3')
s3.Object('my_bucket', 'my_file_new').copy_from(CopySource='my_bucket/my_file_old')
s3.Object('my_bucket', 'my_file_old').delete()
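Applied to the naming scheme in the question, a sketch of a batch rename; the bucket name, prefix, and the fixed-width key format assumed by new_key are all hypothetical, so adjust them to match your actual names:

import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('my_bucket')  # hypothetical bucket name

def new_key(old_key):
    # Hypothetical transform: assumes fixed-width fields, e.g.
    # 'abc202401051' -> 'abc/2024/01/05/1'.
    return '/'.join([old_key[:3], old_key[3:7], old_key[7:9],
                     old_key[9:11], old_key[11:]])

for obj in bucket.objects.filter(Prefix='abc'):
    bucket.Object(new_key(obj.key)).copy_from(
        CopySource={'Bucket': bucket.name, 'Key': obj.key})
    obj.delete()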
There is no direct way to rename an S3 object.
Two steps need to be performed:
Copy the S3 object to the same location with the new name.
Then delete the older object.
I had the same problem (in my case I wanted to rename files generated in S3 using the Redshift UNLOAD command). I solved it by creating a boto3 session and then copying and deleting the files one by one, like:
import boto3

s3_session = boto3.session.Session(
    aws_access_key_id=my_access_key_id,
    aws_secret_access_key=my_secret_access_key,
).resource('s3')

# Save the filenames (with prefix) as a list of tuples:
# [(old_s3_file_path, new_s3_file_path), ...]
# e.g. ('prefix/old_filename.csv000', 'prefix/new_filename.csv')
s3_files_to_rename = []
s3_files_to_rename.append((old_file, new_file))

for old_file, new_file in s3_files_to_rename:
    s3_session.Object(s3_bucket_name, new_file).copy_from(CopySource=s3_bucket_name + '/' + old_file)
    s3_session.Object(s3_bucket_name, old_file).delete()