I just started to use Amazon S3 storage for storing images uploaded from my app. I am able to access it via a URL:
https://s3.us-east-2.amazonaws.com/BUCKETNAME/.../image.png
Does this count as a GET request? How am I charged for referencing an image like this?
I am able to access it via a URL. Does this count as a GET request?
If you are pasting this URL into your browser and pressing Go, then yes, your browser will make a GET request for this resource.
How am I charged for referencing an image like this?
AWS charges based on storage and bandwidth. For storage, pricing is per GB per month. For bandwidth, they charge per 1,000 requests and per GB of data transferred. Their pricing charts can be found in their documentation:
https://aws.amazon.com/s3/pricing/
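To make that concrete, here is a back-of-the-envelope calculation. The rates below are illustrative assumptions only; always check the pricing page above for current numbers:

```js
// Rough monthly cost estimate for serving images from S3.
// All rates here are illustrative assumptions -- check the S3 pricing page.
const storageGb   = 50;        // average GB stored over the month
const getRequests = 1000000;   // GET requests per month
const transferGb  = 100;       // GB transferred out to the internet

const storageCost  = storageGb * 0.023;             // ~$0.023 per GB-month
const requestCost  = (getRequests / 1000) * 0.0004; // ~$0.0004 per 1,000 GETs
const transferCost = transferGb * 0.09;             // ~$0.09 per GB out

console.log(`~$${(storageCost + requestCost + transferCost).toFixed(2)} per month`);
// -> ~$10.55 per month with these assumed numbers
```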
You are right, it's a GET request.
You pay per 10,000 GET requests, for storage size, and of course for outbound traffic.
Take a look here:
https://blog.cloudability.com/aws-s3-understanding-cloud-storage-costs-to-save/
For future reference, if you want to access a file in Amazon S3, the URL needs to look something like:
bucketname.s3.region.amazonaws.com/foldername/image.png
Example: my-awesome-bucket.s3.eu-central-1.amazonaws.com/media/img/dog.png
Don't forget to set the object to public.
Inside the S3 console, if you click on the object you will see a field called Object URL. That's the object's web address.
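For example, uploading an object as public and printing its virtual-hosted-style URL might look like this (a minimal sketch using the AWS SDK for JavaScript v3; the bucket, region, and key are the hypothetical ones from the example above, and the bucket must allow public ACLs):

```js
import { readFileSync } from "fs";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const region = "eu-central-1";
const bucket = "my-awesome-bucket"; // hypothetical
const key = "media/img/dog.png";    // hypothetical

const client = new S3Client({ region });
await client.send(new PutObjectCommand({
  Bucket: bucket,
  Key: key,
  Body: readFileSync("dog.png"),
  ContentType: "image/png",
  ACL: "public-read", // the object must be public for the plain URL to work
}));

// Virtual-hosted-style URL, matching the pattern described above
console.log(`https://${bucket}.s3.${region}.amazonaws.com/${key}`);
```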
I am using AWS S3 for storing files and returning links to those files. It's very convenient, but I only know how to generate a pre-signed URL, which only lasts for a set amount of time.
Is there any way to generate a default URL which lasts permanently? I need the user to be able to access the photo from within their app. They take a photo, it's uploaded. They view it. Pretty standard.
You have to use handlers (DOI, PURL, PURLZ, persistent URIs).
All are free of charge, except for DOIs.
You create that handler, then you add it to your project.
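For contrast, the pre-signed approach the question describes looks roughly like this (a minimal sketch with the AWS SDK for JavaScript v3; bucket and key are hypothetical). SigV4 pre-signed URLs are capped at seven days, which is why they can never be permanent; a truly permanent link means making the object itself public:

```js
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const client = new S3Client({ region: "us-east-1" });
const command = new GetObjectCommand({
  Bucket: "my-photo-bucket",  // hypothetical
  Key: "users/42/photo.png",  // hypothetical
});

// expiresIn is in seconds; 604800 (7 days) is the maximum for SigV4
const url = await getSignedUrl(client, command, { expiresIn: 604800 });
console.log(url);
```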
I would like to send the last modified date of the uploaded file to the server. I have the JavaScript snippet to get it using the File API ($(this).fineUploaderS3('getFile', id).lastModifiedDate). I would like to send this information when the uploadSuccess endpoint is called, but I cannot find the right callback for this at Events | Fine Uploader documentation, and I cannot find a way to inject the data.
These are submitted as POST parameters to my server when the upload to S3 finishes: key, uuid, name, bucket. I would like to inject the lastModified date here somehow.
Option 2:
Asking the Amazon S3 service for the last modification date does not help directly, because the uploaded file has the current date, not the file's original date. It would be great if we could inject the information into the FineUploader->S3 communication in a way that makes S3 use it to set its own last modified date for the uploaded file.
Another perspective I considered:
If I use onSubmit and setParams, then the Amazon S3 server will take it as 'x-amz-meta-lastModified'. The problem is that when I upload larger files (which are uploaded in chunks via a different flow), I get a signing error. ...<Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>....
EDIT
The other perspective I considered works. The bottleneck was the name of the custom metadata which I used in setParams: it cannot contain capital letters, otherwise the signing fails. I did not find any reference documentation for this. For one, I checked Object Key and Metadata - Amazon Simple Storage Service. If someone could find me a reference, I would include it here.
The original question (when and how to send last modified date to the server component) remains.
(Server is PHP.)
EDIT2
Option 2 will not work; as far as my research went, the "Last Modified" entry cannot be manually altered in Amazon S3.
If the S3 API does not return the expected last modified date, you can check the value of lastModifiedDate on the File object associated with the upload (provided the browser supports the File API) and send that value as a parameter to the upload success endpoint. See the documentation for the setUploadSuccessParams API method for more details.
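A minimal sketch of that suggestion, using the jQuery wrapper syntax from the question (the lastModified parameter name is hypothetical; use whatever your PHP endpoint expects):

```js
$('#fine-uploader').fineUploaderS3({
  // ...request, signature, and uploadSuccess.endpoint config as before...
}).on('submitted', function (event, id, name) {
  // File API object for this upload (browser support permitting)
  var file = $(this).fineUploaderS3('getFile', id);
  if (file && file.lastModifiedDate) {
    // Attach the original timestamp to the POST sent to uploadSuccess.endpoint
    $(this).fineUploaderS3('setUploadSuccessParams',
      { lastModified: file.lastModifiedDate.getTime() }, id);
  }
});
```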
I have a small communications app in Ember and the server is polled every 10 seconds for new data.
Next to each thread, or reply under a thread, there is a user area that displays the user's email, avatar, and badges. The avatars are hosted on S3, and every time the site sends down JSON data, a GET request is sent off for each avatar on S3.
There are other images alongside the avatars, like badge icons, but those are hosted in the Rails public folder and do not trigger a GET request.
Why is this happening? Is there a way to cache or suppress the GET requests every time JSON is returned from the server?
Why is this happening?
Sounds like some of your views are re-rendering when new data arrives. It may be possible to prevent this by changing how your templates are structured, but that is probably overkill for this situation. If you want to see what's going on, try adding instrumentation to your app to console.log when views get re-rendered: How to profile the page render performance of ember?
Assuming the avatars and other images are both getting re-rendered, the S3 ones probably trigger a new GET request because the headers returned by S3 are not set to allow browser caching.
Is there a way to cache or suppress the GET requests every time JSON is returned from the server?
Yes, adjust the settings on your S3 images so that the browser will cache them:
In the S3 management console, go to Properties -> Metadata for each file you want to update.
Set an Expires header for your files.
Other headers to look at are Cache-Control and Last-Modified.
See How to use browser caching with Amazon S3?
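Updating the metadata of an object that already exists means copying the object over itself with replaced metadata. A minimal sketch with the AWS SDK for JavaScript v3 (bucket, key, and max-age are hypothetical):

```js
import { S3Client, CopyObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// Copy the object onto itself, replacing its metadata with cache headers
await client.send(new CopyObjectCommand({
  Bucket: "my-avatar-bucket",            // hypothetical
  Key: "avatars/user-123.png",           // hypothetical
  CopySource: "my-avatar-bucket/avatars/user-123.png",
  MetadataDirective: "REPLACE",          // required to change metadata on copy
  CacheControl: "public, max-age=86400", // let browsers cache for a day
  ContentType: "image/png",              // re-specify metadata you want to keep
}));
```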
Is there a way to set a maximum file (object) size using a bucket policy?
I found a similar question here, but there is no size limitation in the examples.
No, you can't do this with a bucket policy. Check the Element Descriptions page of the S3 documentation for an exhaustive list of the things you can do in a bucket policy.
However, you can specify a content-length-range restriction within a Browser Uploads policy document. This feature is commonly used for giving untrusted users write access to specific keys within an S3 bucket you control (e.g. user-facing media uploads), and it provides the appropriate tools for limiting the location, size, and data types that can be uploaded without needing to expose your S3 credentials.
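For completeness, here is roughly what generating such a policy server-side looks like (a minimal sketch using createPresignedPost from the AWS SDK for JavaScript v3; the bucket, key, and the 10 MB cap are hypothetical):

```js
import { S3Client } from "@aws-sdk/client-s3";
import { createPresignedPost } from "@aws-sdk/s3-presigned-post";

const client = new S3Client({ region: "us-east-1" });

const { url, fields } = await createPresignedPost(client, {
  Bucket: "my-upload-bucket", // hypothetical
  Key: "uploads/photo.png",   // hypothetical
  Conditions: [
    ["content-length-range", 0, 10 * 1024 * 1024], // reject anything over 10 MB
  ],
  Expires: 3600, // policy valid for one hour
});

// The browser then POSTs the file to `url` with `fields` as form data;
// S3 itself rejects uploads that violate content-length-range.
console.log(url, fields);
```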
I am syndicating out my multimedia content (MP4s and images) to several clients. So I create one S3 object for every MP4, say "my_content_that_pays_my_bills.mp4", and let the clients access the S3 URL for the objects and embed it wherever they want.
What I want is for client A to access this MP4 as "A_my_content_that_pays_my_bills.mp4"
and Client B to access this as "B_my_content_that_pays_my_bills.mp4" and so on.
I want to bill the clients by usage: so I could process access logs and count access to "B_my_content_that_pays_my_bills.mp4" and bill client B for usage.
I know that S3 allows only one key per object. So how do I get around this?
I don't know that you can alias file names in the way you'd like. Here are a couple of hacks I can think of for public files embedded freely by a customer:
1) Create one CloudFront distribution per client, each pointing at the same bucket. Each AWS account can have 100 distributions, so you could support only that many clients. Or,
2) Duplicate the files, using the client-specific names that you'd like. This is simpler, but your file storage costs scale with your clients (which may or may not be significant).
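As for the billing step, once S3 server access logging is enabled on the bucket, counting per-client downloads can be as simple as tallying REST.GET.OBJECT entries by key. A rough sketch (assumes the log files have already been downloaded and concatenated into access.log; the field layout follows the documented S3 access log format):

```js
import { readFileSync } from "fs";

// Count GET requests per object key from S3 server access logs.
const counts = {};
for (const line of readFileSync("access.log", "utf8").split("\n")) {
  // The REST.GET.OBJECT operation field is followed by the object key
  const m = line.match(/\sREST\.GET\.OBJECT\s(\S+)/);
  if (m) counts[m[1]] = (counts[m[1]] || 0) + 1;
}

// e.g. counts["B_my_content_that_pays_my_bills.mp4"] -> bill client B
console.log(counts);
```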