S3 Java SDK - set expiry on an object

I am trying to upload a file to S3 and set an expiration date for it using the Java SDK.
This is the code I have:
Instant expiration = Instant.now().plus(3, ChronoUnit.DAYS);
ObjectMetadata metadata = new ObjectMetadata();
metadata.setExpirationTime(Date.from(expiration));
metadata.setHeader("Expires", Date.from(expiration));
s3Client.putObject(bucketName, keyName, new FileInputStream(file), metadata);
The object has no expiration date on it in the S3 console.
What can I do?
Regards,
Ido

These are two unrelated things. The expiration time shown in the console is x-amz-expiration, which is populated by the system from the bucket's lifecycle policies. It is read-only.
x-amz-expiration
Amazon S3 will return this header if an Expiration action is configured for the object as part of the bucket's lifecycle configuration. The header value includes an "expiry-date" component and a URL-encoded "rule-id" component.
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
Expires is a header which, when set on an object, is returned in the response when the object is downloaded.
Expires
The date and time at which the object is no longer able to be cached. For more information, go to http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.21.
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
It isn't possible to tell S3 when to expire (delete) a specific object -- this is only done as part of bucket lifecycle policies, as described in the User Guide under Object Lifecycle Management.
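If a lifecycle rule is in fact what you need, you can create one from the same Java SDK. A minimal sketch, assuming the v1 SDK (com.amazonaws) and reusing the question's s3Client and bucketName; the temp/ prefix and the 3-day age are made-up examples:
import java.util.Arrays;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
import com.amazonaws.services.s3.model.lifecycle.LifecycleFilter;
import com.amazonaws.services.s3.model.lifecycle.LifecyclePrefixPredicate;

// Expire (delete) every object under "temp/" three days after creation.
BucketLifecycleConfiguration.Rule rule = new BucketLifecycleConfiguration.Rule()
        .withId("expire-temp-after-3-days")
        .withFilter(new LifecycleFilter(new LifecyclePrefixPredicate("temp/")))
        .withExpirationInDays(3)
        .withStatus(BucketLifecycleConfiguration.ENABLED);

// Caution: this call replaces the bucket's entire lifecycle configuration,
// so include any existing rules you want to keep.
s3Client.setBucketLifecycleConfiguration(bucketName,
        new BucketLifecycleConfiguration().withRules(Arrays.asList(rule)));
Once the rule is in place, matching objects will show the x-amz-expiration value in the console and in HEAD responses.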

According to the documentation, the method setExpirationTime() is for internal use only and does not set the expiration time of the uploaded object:
public void setExpirationTime(Date expirationTime)
For internal use only. This will not set the object's expiration
time, and is only used to set the value in the object after receiving
the value in a response from S3.
So you can't directly set an expiration date for a particular object. To solve this problem you can:
Define a lifecycle rule at the bucket level that removes all objects in the bucket after a number of days
Define a lifecycle rule at the bucket level that removes objects with a specific tag or prefix after a number of days (see the sketch after the link below)
To define those rules, use the documentation:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-set-lifecycle-configuration-intro.html
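A sketch of the second option with the v1 Java SDK: tag objects at upload time so a tag-based rule matches them. The expire=true tag and the 3-day age are invented for illustration, and s3Client, bucketName, keyName, and file are assumed from the question:
import java.util.Arrays;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
import com.amazonaws.services.s3.model.ObjectTagging;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.Tag;
import com.amazonaws.services.s3.model.lifecycle.LifecycleFilter;
import com.amazonaws.services.s3.model.lifecycle.LifecycleTagPredicate;

// One-time setup: expire objects tagged expire=true three days after creation.
BucketLifecycleConfiguration.Rule tagRule = new BucketLifecycleConfiguration.Rule()
        .withId("expire-tagged-after-3-days")
        .withFilter(new LifecycleFilter(
                new LifecycleTagPredicate(new Tag("expire", "true"))))
        .withExpirationInDays(3)
        .withStatus(BucketLifecycleConfiguration.ENABLED);
s3Client.setBucketLifecycleConfiguration(bucketName,
        new BucketLifecycleConfiguration().withRules(Arrays.asList(tagRule)));

// Per upload: attach the matching tag so the rule applies to this object.
PutObjectRequest request = new PutObjectRequest(bucketName, keyName, file)
        .withTagging(new ObjectTagging(Arrays.asList(new Tag("expire", "true"))));
s3Client.putObject(request);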

Related

Elixir Arc: Extend S3 Header Expiry Time

I'm using the Arc attachment library for Elixir (https://github.com/stavro/arc), and I want to increase the expiry time of the generated signed URLs.
The default expiry time for S3 headers is set here:
https://github.com/stavro/arc/blob/3d1754b3e65e0f43b87c38c8ba696eadaeeeae27/lib/arc/storage/s3.ex#L3
That produces the following in the signed request to S3:
...&X-Amz-Date=20180125T203430Z&X-Amz-Expires=300&X-Amz-SignedHeaders=host&X-Amz-Signature=...
The readme says that you can extend the S3 header expiry by adding an s3_object_headers method to your uploader.
Presuming that this is what I needed to do, here's what I added:
def s3_object_headers(version, {file, scope}) do
  [expires: 600]
end
But I still get the same Amz-Expires value (300). I also tried using :expires_in and :expires_at as the code seemed to reference those values, but got the same result.
What have I done wrong or failed to understand about how this works?
expires_in needs to be passed as the last argument to your module's url/3 function, not put in s3_object_headers/2:
YourModule.url(..., ..., expires_in: 600)
Reading the signing code, I think the readme might be wrong: it's :expires_in (or :expire_in) that you need to define in s3_object_headers.

Is there any API to reset the adapter's successStateExpirationSec?

I defined a security check adapter and configured the property shown below.
<securityCheckDefinition name="MySecurityTest" class="com.sample.MyTest">
  <property name="successStateExpirationSec" defaultValue="30" description="How long is a successful state valid for (seconds)"/>
</securityCheckDefinition>
The configuration means that when I pass the security check, I can access the protected resources under that scope for 30 seconds. After 30 seconds, the server will force the client to log out.
However, no user wants their app to repeatedly revalidate at high frequency. We know we can increase the value of successStateExpirationSec; unfortunately, that does not meet our requirement.
How can I extend the property successStateExpirationSec before the time expires, and without revalidation?
It is not recommended to update successStateExpirationSec after setting it and before it expires. I think the logical approach for your use case is to determine the proper value for successStateExpirationSec and set the property to that value.
Instead of updating it in the securityCheckDefinition in adapter.xml, you can also set it programmatically by extending CredentialValidationSecurityCheck.
Refer to the sample here; this allows you to set the default property values.
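For reference, a minimal sketch of the programmatic route. It assumes the MobileFirst Foundation 8.0 security-checks base classes; the package names, method signatures, and the 300-second value are best-effort assumptions, not verified API:
package com.sample;

import java.util.Map;
import java.util.Properties;

import com.ibm.mfp.security.checks.base.CredentialValidationSecurityCheck;
import com.ibm.mfp.security.checks.base.CredentialValidationSecurityCheckConfig;
import com.ibm.mfp.server.security.external.checks.SecurityCheckConfiguration;

public class MyTest extends CredentialValidationSecurityCheck {

    @Override
    public SecurityCheckConfiguration createConfiguration(Properties properties) {
        // Assumption: the config object reads successStateExpirationSec from
        // the adapter.xml properties, so overriding it here changes the default.
        properties.setProperty("successStateExpirationSec", "300");
        return new CredentialValidationSecurityCheckConfig(properties);
    }

    @Override
    protected boolean validateCredentials(Map<String, Object> credentials) {
        // Placeholder: put your real credential validation here.
        return true;
    }
}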

boto3 S3: update `expiry-date` on object

My object has the attribute 'Expiration': 'expiry-date="Sun, 16 Jul 2017 00:00:00 GMT"', which defines when this object will be deleted; this date is set by S3 from a lifecycle rule. Is there any way to update this date from boto3, so the object is auto-deleted later? By the way, I found the same datetime in the x-amz-expiration attribute.
While your object is probably already gone, there is already an answered question on that specific topic: s3 per object expiry
tl;dr: Expiration is configured per S3 bucket, but by emulating touch you can extend the expiry date of individual objects.
As you asked for a boto3 solution, and one isn't noted in the linked question, here is one:
#!/usr/bin/env python3
import boto3

client = boto3.client('s3')

# Upload the object initially.
client.put_object(Body='file content',
                  Bucket='your-bucket',
                  Key='testfile')

# Replace the object with itself to "reset" the expiry timer. S3 only
# allows a self-copy in combination with changing the metadata, storage
# class, website redirect location or encryption attributes, so simply
# add some metadata.
client.copy_object(CopySource='your-bucket/testfile',
                   Bucket='your-bucket',
                   Key='testfile',
                   Metadata={'foo': 'bar'},
                   MetadataDirective='REPLACE')

Accessing FlowFile content in NIFI PutS3Object Processor

I am new to NiFi and want to push data from Kafka to an S3 bucket. I am using the PutS3Object processor and can push data to S3 if I hard-code the Bucket value as mphdf/orderEvent, but I want to choose the bucket based on a field in the content of the FlowFile, which is JSON. So, if the JSON content is {"menu": {"type": "file","value": "File"}}, can I set the value of the Bucket property to mphdf/$.menu.type? I have tried this and get the error below. I want to know if there is a way to access the FlowFile content in the PutS3Object processor and make bucket names configurable, or whether I will have to build my own processor.
ERROR [Timer-Driven Process Thread-10]
o.a.nifi.processors.aws.s3.PutS3Object
com.amazonaws.services.s3.model.AmazonS3Exception: The XML you
provided was not well-formed or did not validate against our
published schema (Service: Amazon S3; Status Code: 400; Error Code:
MalformedXML; Request ID: 77DF07828CBA0E5F)
I believe what you want to do is use an EvaluateJsonPath processor, which evaluates arbitrary JSONPath expressions against the JSON content and extracts the results to flowfile attributes. You can then reference the flowfile attribute using NiFi Expression Language in the PutS3Object configuration (see your first property, Object Key, which references ${filename}). In this way, you would evaluate $.menu.type and store it in an attribute menuType in the EvaluateJsonPath processor; then in PutS3Object you would set Bucket to mphdf/${menuType}.
You might have to play around with it a bit but off the top of my head I think that should work.
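As a concrete sketch of that flow (the Destination, Bucket, and Object Key property names come from the stock EvaluateJsonPath and PutS3Object processors; menuType is an attribute name invented for the example):
EvaluateJsonPath
  Destination    flowfile-attribute
  menuType       $.menu.type        (dynamic property: attribute name -> JSONPath)

PutS3Object
  Bucket         mphdf/${menuType}
  Object Key     ${filename}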

Deleting logs file in Amazon s3 bucket according to created date

How do I delete log files in Amazon S3 according to date? I have log files in a logs folder inside my bucket.
string sdate = datetime.ToString("yyyy-MM-dd");
string key = "logs/" + sdate + "*";

AmazonS3 s3Client = AWSClientFactory.CreateAmazonS3Client();
DeleteObjectRequest delRequest = new DeleteObjectRequest()
    .WithBucketName(S3_Bucket_Name)
    .WithKey(key);
DeleteObjectResponse res = s3Client.DeleteObject(delRequest);
I tried this, but it doesn't seem to work. I can delete individual files if I put the whole name in the key, but I want to delete all the log files created for a particular date.
You can use S3's Object Lifecycle feature, specifically Object Expiration, to delete all objects under a given prefix once they exceed a given age. It's not instantaneous, but it beats having to make myriad individual requests. To delete everything, just make the age small.
http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
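Shown in Java for consistency with the sketches earlier on this page (the question's code is C#, but the rule is the same); assumes a v1 AmazonS3 client named s3Client and a bucket name variable bucketName, and the one-day age is an assumption:
import java.util.Arrays;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
import com.amazonaws.services.s3.model.lifecycle.LifecycleFilter;
import com.amazonaws.services.s3.model.lifecycle.LifecyclePrefixPredicate;

// Expire everything under "logs/" one day after creation, so each day's
// log files are deleted automatically shortly after they age out.
BucketLifecycleConfiguration.Rule logsRule = new BucketLifecycleConfiguration.Rule()
        .withId("expire-logs-after-1-day")
        .withFilter(new LifecycleFilter(new LifecyclePrefixPredicate("logs/")))
        .withExpirationInDays(1)
        .withStatus(BucketLifecycleConfiguration.ENABLED);
s3Client.setBucketLifecycleConfiguration(bucketName,
        new BucketLifecycleConfiguration().withRules(Arrays.asList(logsRule)));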