How can I tell if an object is overwritten using the Amazon S3 SDK?

If I use PutObject in the Amazon S3 SDK, is there a way to tell whether the object in the bucket pre-existed (i.e., was overwritten) without fetching it first? There is the x-amz-version-id response header, but that's always the latest version, while I want the previous version instead.
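No answer is recorded here, but one common workaround, assuming versioning is enabled on the bucket (x-amz-version-id is only returned in that case), is to list the key's versions right after the put and check whether an older version exists. A minimal boto3 sketch; the bucket and key names are hypothetical:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-bucket"    # hypothetical bucket (versioning enabled)
KEY = "photos/cat.jpg"  # hypothetical key

# The response's VersionId is the *new* version, which is exactly why
# x-amz-version-id alone can't tell you whether a predecessor existed.
resp = s3.put_object(Bucket=BUCKET, Key=KEY, Body=b"new contents")
new_version = resp["VersionId"]

# If the key has any version other than the one we just wrote,
# the put overwrote (superseded) an existing object.
listing = s3.list_object_versions(Bucket=BUCKET, Prefix=KEY)
prior = [v for v in listing.get("Versions", [])
         if v["Key"] == KEY and v["VersionId"] != new_version]
if prior:
    print("Overwrote; previous version was", prior[0]["VersionId"])
else:
    print("The key did not exist before this put")
```

Note this is not atomic: another writer could slip a version in between the two calls.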

Related

Will objects with the same name uploaded to AWS S3 be overwritten?

I'm uploading images/videos to S3 using their API and putObject.
When I use the upload method of com.amazonaws.services.s3.transfer to post the same PutObjectRequest twice, will the object be overwritten by the latest one, or will AWS save the object twice with different version IDs?
I didn't find the answer in the official AWS documentation. I've checked Stack Overflow, but the existing question is quite old and I don't know how the current version behaves.
Yes, the object will be overwritten by the latest upload, because versioning on S3 buckets is disabled by default. With versioning off, a second put to the same key replaces the object; if you enable versioning on the bucket, S3 keeps both uploads under different version IDs instead.
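A minimal boto3 sketch of the default (unversioned) behavior; the bucket name is hypothetical:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-bucket"  # hypothetical bucket, versioning NOT enabled

# Two puts to the same key: the second silently replaces the first.
s3.put_object(Bucket=BUCKET, Key="video.mp4", Body=b"upload 1")
s3.put_object(Bucket=BUCKET, Key="video.mp4", Body=b"upload 2")

# Only one object remains, holding the latest body.
body = s3.get_object(Bucket=BUCKET, Key="video.mp4")["Body"].read()
print(body)  # b'upload 2'
```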

How to dynamically change the "S3 object key" in AWS CodePipeline when source is S3 bucket

I am trying to use an S3 bucket as the source for CodePipeline. We want to save a source code version like "1.0.1" or "1.0.2" in the S3 bucket each time we trigger the Jenkins pipeline, with the version generated dynamically. But since the "S3 object key" is not dynamic, we aren't able to build artifacts based on the version numbers Jenkins generates. Is there a way to make the "S3 object key" dynamic and take its value from the Jenkins pipeline when CodePipeline is triggered?
Not possible natively, but you can do it by writing your own Lambda function. It requires Lambda because CodePipeline restricts you to a fixed object key name when setting up the pipeline.
So, let's say you have 2 pipelines, CircleCI (CCI) and CodePipeline (CP). CCI generates some files and pushes them to your S3 bucket (S3-A). Now, you want CP to pick up the latest zip file as its source. But since the latest zip file will have a different name each time (1.0.1 or 1.0.2), you can't do that directly.
So, on that S3 bucket (S3-A), you can enable an S3 event notification trigger wired to your custom Lambda function. Whenever a new object is uploaded to S3-A, your Lambda function is triggered: it fetches the newly uploaded object, zips/unzips it as needed, and pushes it to another S3 bucket (S3-B) under a fixed name like file.zip, which is the name you configure CP with as its source. As soon as there's a new object at file.zip in S3-B, your CP is triggered automatically.
PS: You'll have to write the Lambda function yourself so that it performs all the operations above: zipping/unzipping the newly uploaded object in S3-A, uploading it to S3-B, etc.
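A minimal Python sketch of such a Lambda handler, assuming the uploaded artifact is already a zip (so it simply copies and skips the zip/unzip step); the bucket and key names are hypothetical:

```python
import urllib.parse

import boto3

s3 = boto3.client("s3")
DEST_BUCKET = "s3-b"   # hypothetical fixed bucket that CodePipeline watches
DEST_KEY = "file.zip"  # fixed key configured as the pipeline's source

def handler(event, context):
    """Triggered by an S3 event notification on bucket S3-A.

    Copies whichever object was just uploaded over to the fixed
    bucket/key that CodePipeline is configured against.
    """
    for record in event["Records"]:
        src_bucket = record["s3"]["bucket"]["name"]
        # Object keys in S3 event payloads are URL-encoded.
        src_key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        s3.copy_object(
            CopySource={"Bucket": src_bucket, "Key": src_key},
            Bucket=DEST_BUCKET,
            Key=DEST_KEY,
        )
```

The Lambda's execution role needs s3:GetObject on S3-A and s3:PutObject on S3-B.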

Difference between boto and boto3 in aws python aws, related to S3

AWS released a note about the S3 path-style deprecation: https://aws.amazon.com/it/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/ . In the documentation they assure that the AWS SDKs will keep working (apart from some problems with certain bucket names), provided the SDK is the latest version. Now, the problem is that AWS has two Python SDKs, boto and boto3. I'm sure that boto3 will have no problems related to the bucket path, but for boto I haven't found anything about it. Is boto updated together with boto3?
From the GitHub page of boto:
Going forward, API updates and all new feature work will be focused on Boto3.
So boto is no longer getting API updates or new features. If you check the linked GitHub page, the last commit was over a year ago, so it's likely that the S3 path changes won't be reflected in boto.
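If you want to be explicit rather than rely on the SDK default, boto3 (via botocore) lets you pin the S3 addressing style to the virtual-hosted form that the deprecation plan standardizes on. A minimal sketch:

```python
import boto3
from botocore.config import Config

# Force virtual-hosted-style URLs (bucket.s3.region.amazonaws.com)
# instead of the deprecated path-style (s3.region.amazonaws.com/bucket).
client = boto3.client(
    "s3",
    config=Config(s3={"addressing_style": "virtual"}),
)
```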

Upload multiple files to AWS S3 bucket without overwriting existing objects

I am very new to AWS technology.
I want to add some files to an existing S3 bucket without overwriting existing objects. I am using Spring Boot technology for my project.
Can anyone please suggest how we can add/upload multiple files without overwriting existing objects?
Amazon S3 supports object versioning at the bucket level: when you upload a file whose key already exists, S3 keeps both objects under different version IDs rather than overwriting the existing one.
Versioning can be enabled using the AWS Console or the CLI. You may want to refer to this link for more info.
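The question mentions Spring Boot, but the operation is the same in every SDK; here is a minimal boto3 sketch (the bucket name is hypothetical):

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-existing-bucket"  # hypothetical bucket name

# Enable versioning so repeated uploads to the same key are kept as
# separate versions instead of overwriting each other.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Confirm the bucket's versioning status.
print(s3.get_bucket_versioning(Bucket=BUCKET).get("Status"))  # Enabled
```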
You probably already found an answer to this, but if you're using the CDK or the CLI you can specify a destinationKeyPrefix. If you want multiple folders in an S3 bucket, which was my case, the folder name will be your destinationKeyPrefix.
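For illustration, a minimal CDK (v2, Python) sketch of a bucket deployment using that prefix; the stack name, construct IDs, bucket name, and local path are all hypothetical:

```python
from aws_cdk import App, Stack, aws_s3 as s3, aws_s3_deployment as s3deploy
from constructs import Construct

class UploadStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        bucket = s3.Bucket.from_bucket_name(
            self, "Target", "my-existing-bucket"  # hypothetical bucket
        )

        # destination_key_prefix acts as the "folder" inside the bucket,
        # so separate deployments land under separate prefixes instead
        # of clobbering each other's keys.
        s3deploy.BucketDeployment(
            self, "DeployImages",
            sources=[s3deploy.Source.asset("./images")],  # hypothetical path
            destination_bucket=bucket,
            destination_key_prefix="images/",
        )

app = App()
UploadStack(app, "UploadStack")
app.synth()
```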

Does S3 copy keep versioning

According to Amazon:
Versioning allows you to preserve, retrieve, and restore every version of every object in an Amazon S3 bucket. Once you enable Versioning for a bucket, Amazon S3 preserves existing objects anytime you perform a PUT, POST, COPY, or DELETE operation on them.
Am I correct in assuming that if I copy the content from one bucket in region X to another bucket in region Y, the version history will be preserved?
If versioning is not kept through a copy request, how would I be able to transfer the versioning over to the new bucket? I would like to use boto for this but will accept any language.
Thanks
Unfortunately: no
The version history is saved for each file within a bucket which has versioning enabled.
If you modify the file it will preserve the old version and create a new revision for your current version.
If you copy a file to another bucket, even within the same region, the target file will be revision 1.
I've tested this using Cloudberry S3 Explorer Pro
EDIT:
You can actually access each version of the file directly.
So what you could do is copy version by version, replaying the whole process on the new bucket. This will indirectly copy the file, including its version history.
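A minimal boto3 sketch of that replay, assuming versioning is already enabled on the destination bucket (the destination records the same sequence of bodies, but under new version IDs of its own); bucket and key names are hypothetical, and pagination is not handled:

```python
import boto3

s3 = boto3.client("s3")
SRC = "source-bucket"  # hypothetical, versioned
DST = "dest-bucket"    # hypothetical, versioning already enabled
KEY = "some/key"       # hypothetical object key

# list_object_versions returns newest first; replay oldest first so the
# destination rebuilds the versions in the original order.
listing = s3.list_object_versions(Bucket=SRC, Prefix=KEY)
versions = [v for v in listing.get("Versions", []) if v["Key"] == KEY]
for v in sorted(versions, key=lambda v: v["LastModified"]):
    s3.copy_object(
        Bucket=DST,
        Key=KEY,
        CopySource={"Bucket": SRC, "Key": KEY, "VersionId": v["VersionId"]},
    )
```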