Elixir Arc: Extend S3 Header Expiry Time

I'm using the arc attachment library for Elixir: https://github.com/stavro/arc, and I want to increase the expiry time of the generated signed URLs.
The default expiry time for S3 headers is set here:
https://github.com/stavro/arc/blob/3d1754b3e65e0f43b87c38c8ba696eadaeeeae27/lib/arc/storage/s3.ex#L3
Which produces the following in the link request to S3:
...&X-Amz-Date=20180125T203430Z&X-Amz-Expires=300&X-Amz-SignedHeaders=host&X-Amz-Signature=...
The readme says that you can extend the S3 header expiry by adding an s3_object_headers callback to your uploader.
Presuming that this is what I needed to do, here's what I added:
def s3_object_headers(version, {file, scope}) do
  [expires: 600]
end
But I still get the same X-Amz-Expires value (300). I also tried using :expires_in and :expires_at, as the code seemed to reference those values, but got the same result.
What have I done wrong or failed to understand about how this works?

expires_in needs to be passed in the options (the last argument) to your module's url/3 function, not put in s3_object_headers/2:
YourModule.url(..., ..., expires_in: 600)
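For example, with a hypothetical Avatar uploader and user struct (the names are placeholders, not from the question):

Avatar.url({user.avatar, user}, :thumb, signed: true, expires_in: 600)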

Reading the signing code, I think the readme might be wrong: it's :expires_in (or :expire_in) that you need to define in s3_object_headers/2.
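If that's the case, it would look something like this in the uploader (untested sketch):

def s3_object_headers(version, {file, scope}) do
  [expires_in: 600]
end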

Related

S3 Java SDK - set expiry on an object

I am trying to upload a file to S3 and set an expiration date for it using the Java SDK.
This is the code I have:
Instant expiration = Instant.now().plus(3, ChronoUnit.DAYS);
ObjectMetadata metadata = new ObjectMetadata();
metadata.setExpirationTime(Date.from(expiration));
metadata.setHeader("Expires", Date.from(expiration));
s3Client.putObject(bucketName, keyName, new FileInputStream(file), metadata);
The object shows no expiration date in the S3 console.
What can I do?
Regards,
Ido
These are two unrelated things. The expiration time shown in the console is x-amz-expiration, which is populated by S3 itself based on lifecycle policies. It is read-only.
x-amz-expiration
Amazon S3 will return this header if an Expiration action is configured for the object as part of the bucket's lifecycle configuration. The header value includes an "expiry-date" component and a URL-encoded "rule-id" component.
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
Expires is a header which, when set on an object, is returned in the response when the object is downloaded.
Expires
The date and time at which the object is no longer able to be cached. For more information, go to http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.21.
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
It isn't possible to tell S3 when to expire (delete) a specific object -- this is only done as part of bucket lifecycle policies, as described in the User Guide under Object Lifecycle Management.
According to the documentation, the setExpirationTime() method is for internal use only and does not define an expiration time for the uploaded object:
public void setExpirationTime(Date expirationTime)
For internal use only. This will not set the object's expiration
time, and is only used to set the value in the object after receiving
the value in a response from S3.
So you can't directly set an expiration date on a particular object. To solve this problem you can:
Define a lifecycle rule for the whole bucket (remove all objects after a number of days)
Define a bucket-level lifecycle rule that removes objects with a specific tag or prefix after a number of days
To define those rules, see the documentation:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-set-lifecycle-configuration-intro.html
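For illustration, a rough sketch of a prefix-based rule with the AWS SDK for Java v1 (the prefix, rule ID, and expiry are placeholders; s3Client and bucketName are reused from the question):

BucketLifecycleConfiguration.Rule rule = new BucketLifecycleConfiguration.Rule()
        .withId("expire-temp-uploads")
        .withFilter(new LifecycleFilter(new LifecyclePrefixPredicate("temp/")))
        .withExpirationInDays(3)
        .withStatus(BucketLifecycleConfiguration.ENABLED);

// Objects under temp/ are deleted by S3 roughly 3 days after creation;
// x-amz-expiration will then be reported for those objects.
s3Client.setBucketLifecycleConfiguration(bucketName,
        new BucketLifecycleConfiguration().withRules(rule));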

CSRF failure in custom mongoose pre-hook (Keystone.js)

I'm using Keystone's LocalFile type to handle image uploads. Similar to the Cloudinary autoCleanup option, I want to be able to delete the uploaded file itself, in addition to the corresponding Mongo entry, when deleting entries through the admin UI.
In this case, I want to delete an "Album" and its corresponding album cover.
Album.schema.pre('remove', function (next) {
  var path = this._original.album_cover.path + "/" + this._original.album_cover.filename;
  fs.unlink(path, function () {
    console.log('deleted');
    next(); // continue the remove once the file is gone
  });
});
I get "CSRF failure" when using the fs module. I thought all CSRF protection was handled internally with Keystone.
Anyone know of a better solution to this?
I took a 10-minute break, came back, and it seems to be working now. I also found this, which seems to be the explanation:
"Moreover double check your session timeout. In my dev settings the session duration is set to 3 minutes. So, if I end up editing something for more than that time, Keystone will return a CSRF error on save because the new session (generate in the meantime) invalidates the old token."
https://github.com/keystonejs/keystone/issues/1330

Setting metadata on S3 multipart upload

I'd like to upload a file to S3 in parts and set some metadata on the file. I'm using boto to interact with S3. I'm able to set metadata with single-operation uploads.
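For reference, a single-PUT upload with metadata in boto 2 looks roughly like this (a sketch; the bucket object and names are placeholders):

# Metadata is attached to the key before the single-operation upload.
key = bucket.new_key('reports/result.csv')
key.set_metadata('some-key', 'value')
key.set_contents_from_filename('result.csv')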
Is there a way to set metadata with a multipart upload? I've tried this method of copying the key to change the metadata, but it fails with the error: InvalidRequest: The specified copy source is larger than the maximum allowable size for a copy source: <size>
I've also tried doing the following:
key = bucket.create_key(key_name)
key.set_metadata('some-key', 'value')
<multipart upload>
...but the multipart upload overwrites the metadata.
I'm using code similar to this to do the multipart upload.
Sorry, I just found the answer:
Per the docs:
If you want to provide any metadata describing the object being uploaded, you must provide it in the request to initiate multipart upload.
So in boto, the metadata can be set in the initiate_multipart_upload call (see the boto docs for Bucket.initiate_multipart_upload).
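A rough boto 2 sketch (the key name, metadata, and part file are placeholders):

# Metadata has to be supplied when the multipart upload is initiated;
# it cannot be added to individual parts or after completion.
mp = bucket.initiate_multipart_upload('reports/result.csv',
                                      metadata={'some-key': 'value'})
with open('result.csv', 'rb') as part_file:
    mp.upload_part_from_file(part_file, part_num=1)
mp.complete_upload()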
I faced this issue earlier today and discovered that there is very little information on how to do it right.
A code example of how we solved it is provided below.
$uploader = new MultipartUploader($client, $source, [
    'bucket' => $bucketName,
    'key' => $filename,
    'before_initiate' => function (\Aws\Command $command) {
        $command['ContentType'] = 'application/octet-stream';
        $command['ContentDisposition'] = 'attachment';
    },
]);
Unfortunately, the documentation (https://docs.aws.amazon.com/aws-sdk-php/v3/guide/service/s3-multipart-upload.html#customizing-a-multipart-upload) doesn't make it clear that if you want to provide custom metadata with a multipart upload, this is the way to do it.
I hope that helps.

Expiring API request caches

I've implemented API caching based on http://robots.thoughtbot.com/caching-api-requests. I'm using memory as the storage. How can I reset the cache manually without restarting the server?
I've tried using Rails.cache.clear, but it doesn't seem to work. The data is still getting pulled from the cache. I checked it by observing the server log for my puts message (as shown below).
Caching code:
module Meh
  class Api
    include HTTParty
    # ...

    # (method name is illustrative; the original snippet omitted the definition)
    def fetch(options)
      cache_name = options[:path] + "/" + options[:params].values.join(",")
      response = nil
      APICache.get(cache_name, cache: 3600) do
        response = self.class.get(options[:path], query: options[:params])
        # For future debugging
        puts "[API] Request: #{response.request.last_uri.to_s}"
        # Just return nil if there's an error with the request, for now
        if response.code == 200
          response.reverse!
        else
          response = nil
        end
      end
    end
  end
end
Have you tried 'rake tmp:cache:clear' or deleting the contents of tmp/cache/ manually?
Are you trying to delete the contents of the cache from within the code?
Reading through the api_cache gem, it looks like this is a memory cache, not a file cache, which would be consistent with what you're seeing. It also looks like there is a .delete method on the APICache API, so APICache.delete(cache_name) may be what you are looking for.
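For example (a sketch, reusing the cache_name construction from the question):

# Build the same cache key the caching code used, then evict it;
# the next APICache.get call with that name will hit the API again.
cache_name = options[:path] + "/" + options[:params].values.join(",")
APICache.delete(cache_name)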

Custom Request Matcher for SEOMOZ + VCR

I am trying to integrate SEOMOZ API with VCR.
As the API request to SEOMOZ contains parameters that change for the same request over time, I need to implement a custom matcher.
Here is what my API call looks like:
http://lsapi.seomoz.com/linkscape/url-metrics/#{self.url}?Cols=#{cols}&AccessID=#{moz_id}&Expires=#{expires}&Signature=#{url_safe_signature}
I also make calls to other endpoints such as Twitter, Facebook, etc., for which the default matcher does the job well.
How can I override the matcher behavior just for SEOMOZ? Also, on what parameters should I best match the request in this case?
You'll want to match on all parameters except Signature and Expires.
Another option you might consider (we use it internally when using VCR with this sort of API) is to record the time of the test in a file with the cassettes, and use Timecop or something equivalent to ensure you're re-running the recorded test at the "same time" every time you run it.
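A rough sketch of that approach (the file path and cassette name are assumptions):

require 'time'

# Freeze the clock at the moment the cassette was recorded so the
# Expires/Signature parameters computed by the client match the recording.
recorded_at = Time.parse(File.read('spec/cassettes/seomoz_recorded_at.txt'))
Timecop.freeze(recorded_at) do
  VCR.use_cassette('seomoz_url_metrics') do
    # ... run the code that calls the SEOMOZ API ...
  end
end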
You can use the VCR.request_matchers.uri_without_params custom matcher; see https://relishapp.com/vcr/vcr/v/2-5-0/docs/request-matching/uri-without-param-s
You would use it like this in your case:
VCR.configure do |c|
  # ...
  c.default_cassette_options = {
    # ...
    # the default is: match_requests_on: [:method, :uri]
    match_requests_on: [:method, VCR.request_matchers.uri_without_params(:Signature, :Expires)]
  }
end