Difference between Data Transfer and GET request for Amazon S3

I was looking at my billing and noticed that Data Transfer made up almost 100% of my bill, so I want to be sure I understand exactly what Data Transfer entails versus a GET request.
Just for context: I host my website on a different server and have it hooked up to an S3 bucket to store user-generated files. These files are then made available for download. Does Data Transfer cover just the bandwidth used to download a file, or also displaying one of the files stored on my S3 bucket on my site? For example, if I store an MP3 file on S3 and present it on the site for playback (excluding downloading), is that just a GET request that's being sent to fetch and display the file? To me the definitions are a little ambiguous. Any help?

The GET per-request charge is the charge for handling the actual request for the file (checking whether it exists, checking permissions, fetching it from storage, and preparing to return it to the requester), each time it is downloaded.
The data transfer charge is for the actual transfer of the file's contents from S3 to the requester, over the Internet, each time it is downloaded.
If you include a link to a file on your site but the user doesn't download it, and the browser doesn't load it for autoplay or pre-load it, or something like that, S3 would not know anything about it, so you wouldn't be billed. That's also true if you are using pre-signed URLs -- those don't result in any billing unless they're actually used, because they're generated on your server.
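For illustration, here is a minimal sketch of minting a pre-signed URL with boto3; the bucket and key names are placeholders, and nothing is billed until the link is actually followed:

```python
import boto3

# Signing the URL is a purely local operation on your server;
# S3 bills a GET and data transfer only when the URL is actually used.
s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-example-bucket", "Key": "uploads/song.mp3"},  # placeholders
    ExpiresIn=3600,  # link is valid for one hour
)
print(url)
```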
If you include an image on a page, and the image is in S3, every time the page is viewed, you're billed for the request and the transfer, unless the browser has cached the image.
If you use CloudFront in front of S3, so that your image or download links point to CloudFront, you would pay only the request charge from S3, not the transfer charge, because CloudFront bills you for the transfer instead of S3 (plus a CloudFront per-request charge; but since CloudFront's data transfer rates are slightly cheaper than S3's in some regions, it's not necessarily a bad deal, by any means).
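As a rough back-of-the-envelope comparison -- all prices and the cache-miss rate below are assumptions, not current quotes; check the S3 and CloudFront pricing pages:

```python
# Illustrative monthly cost sketch: serving a 5 MB file 100,000 times,
# directly from S3 vs. through CloudFront. Every number here is an
# assumed ballpark figure, not a quoted price.
downloads = 100_000
size_gb = 5 / 1024  # 5 MB expressed in GB

s3_get_per_10k = 0.004      # assumed S3 GET price per 10,000 requests
s3_transfer_per_gb = 0.09   # assumed S3 -> internet transfer price per GB
cf_req_per_10k = 0.01       # assumed CloudFront request price per 10,000
cf_transfer_per_gb = 0.085  # assumed CloudFront -> internet price per GB

direct = (
    downloads / 10_000 * s3_get_per_10k
    + downloads * size_gb * s3_transfer_per_gb
)

# With CloudFront in front: S3 -> CloudFront transfer is free; S3 GETs are
# paid only on cache misses (assume 5% here), plus CloudFront's own charges.
miss_rate = 0.05
via_cf = (
    downloads * miss_rate / 10_000 * s3_get_per_10k
    + downloads / 10_000 * cf_req_per_10k
    + downloads * size_gb * cf_transfer_per_gb
)

print(f"direct from S3: ${direct:.2f}, via CloudFront: ${via_cf:.2f}")
```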

Related

What is Data Transfer Out in AWS S3?

I am planning to use Amazon S3 for storing images for my upcoming project, and I have one question about 'Data Transfer Out' pricing.
What does this mean? It's written that you pay data transfer charges if you transfer data from an S3 bucket to 'the public internet'. Does this mean that if I make my image public (so I can share its URL with everyone), it would count as Data Transfer Out?
Thanks!
Yes, you will be charged for the amount of data that is transferred out from the bucket to the internet.
If you look at the first question in the Billing section of https://aws.amazon.com/s3/faqs/, you can see an example of how you will be charged for the amount of data transferred.
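For a rough sense of the numbers, here is a back-of-the-envelope sketch; the $0.09/GB rate and the sizes are assumptions, not current quotes:

```python
# Rough sketch: what "Data Transfer Out" might cost for a public image.
# $0.09/GB is an assumed ballpark rate -- check the current pricing page.
image_mb = 2    # hypothetical image size
views = 10_000  # hypothetical uncached views in a month

gb_out = image_mb * views / 1024
print(f"~{gb_out:.1f} GB out -> ~${gb_out * 0.09:.2f} in data transfer")
```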
Well, there are two concepts, Data Transfer and Requests; I think here you are asking about Requests rather than Data Transfer. Requests are counted every time someone loads your webpage and the image is requested for display or download.
Look at the information below:
AWS S3 REQUESTS PRICING (the two price columns on the pricing page are S3 Standard and S3 Standard - Infrequent Access)
- PUT, COPY, POST, or LIST: $0.005 per 1,000 requests / $0.01 per 1,000 requests
- GET and all other requests: $0.004 per 10,000 requests / $0.01 per 10,000 requests
Look at what AWS says about the requests:
You pay for requests made against your S3 buckets and objects. S3 request costs are based on the request type, and are charged on the quantity of requests as listed in the table below. When you use the Amazon S3 console to browse your storage, you incur charges for GET, LIST, and other requests that are made to facilitate browsing. Charges are accrued at the same rate as requests that are made using the API/SDK. Reference the S3 developer guide for technical details on the following request types: PUT, COPY, POST, LIST, GET, SELECT, Lifecycle Transition, and Data Retrievals. DELETE and CANCEL requests are free.
LIST requests for any storage class are charged at the same rate as S3 Standard PUT, COPY, and POST requests.
You pay for retrieving objects that are stored in S3 Standard – Infrequent Access, S3 One Zone – Infrequent Access, S3 Glacier, and S3 Glacier Deep Archive storage. Reference the S3 developer guide for technical details on Data Retrievals.
Now look at the concept of TRANSFERS:
You pay for all bandwidth into and out of Amazon S3, except for the following:
- Data transferred in from the internet.
- Data transferred out to an Amazon Elastic Compute Cloud (Amazon EC2) instance, when the instance is in the same AWS Region as the S3 bucket.
- Data transferred out to Amazon CloudFront (CloudFront).
The pricing below is based on data transferred "in" and "out" of Amazon S3 (over the public internet). Transfers between S3 buckets or from Amazon S3 to any service(s) within the same AWS Region are free. You also pay a fee for any data transferred using Amazon S3 Transfer Acceleration. Learn more about AWS Direct Connect pricing.
If you want more info, go to the AWS Pricing page.

Using CloudFront, is it still useful to use different S3 regions?

Up to now, we have been using S3 to store our files, with buckets in different regions so as to be closest to whoever generates the data and whoever fetches it (far more GETs than POSTs, and the POSTer is typically closer to the GETer).
We are moving to CloudFront for many reasons. Data is now pushed to and fetched from the CloudFront endpoint closest to the user, which acts as a proxy to/from S3.
The question that now arises is whether there is still any reason to care which region a bucket is in:
- GETs will not be faster, as they are served from the CloudFront endpoint, except for the very first GET at an edge location after a "long" period without GETs
- POSTs will not be faster, as they are pushed to the CloudFront endpoint
- The cost of CloudFront does not seem to be affected by the region of the origin S3 bucket
As you said, region may not make any significant difference for GETs, since Amazon CloudFront distributions have a single endpoint: cloudfront.amazonaws.com.
However, if your users write (PUT) directly to S3, it might be better to keep the bucket in a closer region.
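If you do go that route, a minimal sketch (the bucket names and the region map below are hypothetical) is to keep one bucket per region and hand each user a pre-signed PUT URL for the nearest one:

```python
import boto3

# Hypothetical mapping of user geography to region-specific buckets;
# the bucket names and regions are placeholders, not a recommendation.
UPLOAD_BUCKETS = {
    "eu": ("eu-west-1", "example-uploads-eu"),
    "us": ("us-east-1", "example-uploads-us"),
    "ap": ("ap-southeast-1", "example-uploads-ap"),
}

def presigned_upload_url(user_area: str, key: str) -> str:
    """Return a pre-signed PUT URL against the bucket nearest the user."""
    region, bucket = UPLOAD_BUCKETS[user_area]
    s3 = boto3.client("s3", region_name=region)
    return s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=900,  # 15 minutes to start the upload
    )

print(presigned_upload_url("eu", "incoming/video.flv"))
```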

Unique challenge of S3 bucket policy for 'Grant/Restrict' access

I read the directions for posting, so I will be as specific as possible.
I have an S3 bucket with numerous FLV files that I will be allowing customers to stream on THEIR domains.
What I am trying to accomplish is:
1. Setting a bucket policy that GRANTS access to a specific list of domains, so they can stream my bucket's files from their domains.
2. A bucket policy that restricts each customer to one stream per domain. In other words, each domain listed in the above policy can only stream one file at a time on its site.
The premise is a video site where customers will be streaming videos specific to their niche. I host and deliver the videos, but need some control over their delivery.
All files are in ONE bucket. There aren't any weird things going on with the files. It's very straight forward.
I just need the bucket policy control that would Grant and also Restrict the ability of my customers to stream my content from their domains.
I PRAY I have been clear enough, but please don't hesitate to ask if I have confused you...
Thanks VERY much
A
I don't think you can achieve what you want by simply setting access permissions to the bucket.
I checked in AccessControlList and CannedAccessControlList.
Your best bet will be to write a webservice wrapper to access the bucket data.
You will have better control over the data you serve, and you might also explore serving a cached copy of the data for better performance.
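That said, the per-domain grant half can be roughly approximated in a bucket policy with an aws:Referer condition (the Referer header is trivially spoofed, so treat this as a deterrent, not security); the one-stream-at-a-time limit cannot be expressed in a bucket policy at all. A sketch with boto3, using hypothetical bucket and domain names:

```python
import json
import boto3

# Hypothetical bucket and customer domains -- placeholders only.
BUCKET = "example-flv-bucket"
ALLOWED_REFERERS = [
    "https://customer-one.example/*",
    "https://customer-two.example/*",
]

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowStreamingFromListedDomains",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        # Referer can be forged by any client, so this is a soft gate only.
        "Condition": {"StringLike": {"aws:Referer": ALLOWED_REFERERS}},
    }],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```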

How to retrieve Salesforce file attachment limit via API?

Salesforce attachment file size is officially limited to 5MB (doc), but if requested Salesforce can increase this limit on a case-by-case basis.
My question: can I retrieve this newly allowed file size limit using the API?
Context: Non-profits apply for grants via a web portal (.NET); all data is stored in Salesforce. They are asked to attach files. We read the size of the file they try to upload and show an error message if it exceeds 5MB, since Salesforce would reject it anyway. This avoids having them wait a few minutes for an upload only to be told the file is too large. We would like to update our code so that it allows files bigger than 5MB when Salesforce allows it. Can we retrieve this information via the API?
Thank you!
You can call the getUserInfo() function in the SOAP API; part of the returned data includes the field orgAttachmentFileSizeLimit (this appears to be missing from the docs, but is in the WSDL).
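A rough sketch of that call from Python using the zeep SOAP library against a downloaded partner WSDL; the credentials and the WSDL path are placeholders, and the exact header plumbing may differ in your SOAP stack:

```python
from zeep import Client

# Assumes you've downloaded the partner WSDL from Salesforce Setup.
client = Client("partner.wsdl")
login = client.service.login("user@example.com", "password" + "security_token")

# Re-bind to the server URL returned by login and attach the session header.
service = client.create_service(
    "{urn:partner.soap.sforce.com}SoapBinding", login.serverUrl
)
header = client.get_element("{urn:partner.soap.sforce.com}SessionHeader")(
    sessionId=login.sessionId
)

info = service.getUserInfo(_soapheaders=[header])
# The org-wide attachment ceiling, in bytes (undocumented but in the WSDL).
print(info.orgAttachmentFileSizeLimit)
```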
I'd recommend moving away from Salesforce for file storage, especially if you expect to hit limits; there is also an org-wide cap on storage space. A service like Amazon S3 would be very useful: you can attach the S3 URL to your record if needed, and the files will also be available to external applications without adding to your org's API consumption.

Amazon S3 download try limit

I want to restrict downloads from my Amazon S3 service to a certain number of tries.
I am using the library from http://undesigned.org.za/
Does anyone have an idea how I can restrict downloads to a certain number?
My understanding is that this restriction is impossible -- can't be done.
A number of other S3 users would like to limit the amount of traffic that their account can generate, in order to limit service costs. This is equivalent to restricting the number of downloads.
S3 budget control was a requested feature back in 2006.
Still no word from Amazon AWS on the thread below, which tracks this request for a feature where an account's access would be turned off once a budget is reached. The thread, which runs up to the present day, contains ideas for workarounds amidst the complaints:
See https://forums.aws.amazon.com/thread.jspa?threadID=10532&start=25&tstart=0
Several third party solutions are mentioned.
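A common workaround shape is to stop handing out S3 URLs directly and gate every download behind your own endpoint, which counts attempts and mints a short-lived pre-signed URL only while the count is under the limit. A minimal sketch with boto3 (the in-memory counter and the limit are placeholders; a real deployment would persist counts in a database):

```python
from collections import defaultdict
from typing import Optional

import boto3

MAX_TRIES = 3             # hypothetical per-file download limit
_tries = defaultdict(int)  # in-memory only; use a database in practice
s3 = boto3.client("s3")

def download_url(bucket: str, key: str) -> Optional[str]:
    """Return a short-lived pre-signed URL, or None once the limit is hit."""
    if _tries[(bucket, key)] >= MAX_TRIES:
        return None  # limit reached; refuse to mint another URL
    _tries[(bucket, key)] += 1
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=60,  # short expiry so links can't be hoarded
    )
```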