Block S3 access to IPs from Syria, Iran, Sudan etc - amazon-s3

Do you know any bucket policy that would allow me to block access to S3 files from specific countries?

Although you can block a specific IP range using an S3 bucket policy, it is not feasible to list all the IP ranges assigned to a specific country in such a policy, because AWS imposes a 20 KB limit on the size of the policy.
Bucket Policies are Limited to 20 Kilobytes in Size—If you have a large number of objects and users, your bucket policy could reach the 20K size limit.
(from http://docs.aws.amazon.com/AmazonS3/latest/dev/WhenToUseACLvsBucketPolicy.html)
A better solution to your problem would be to configure your S3 bucket to use signed requests (Query String Authentication) as described here:
http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html#RESTAuthenticationQueryStringAuth
and have a separate web service that checks the client's IP using a geolocation service and only issues these signed requests to clients that match your criteria.
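A minimal sketch of that idea, assuming boto3, a placeholder bucket name, and a hypothetical lookup_country() helper backed by whatever geolocation service you choose:

# Hedged sketch: issue a short-lived presigned S3 URL only when the caller's IP
# does not resolve to a blocked country. lookup_country() is a hypothetical
# wrapper around your geolocation service; bucket name and expiry are placeholders.
import boto3

BLOCKED_COUNTRIES = {'SY', 'IR', 'SD'}
s3 = boto3.client('s3')

def lookup_country(ip):
    # Hypothetical: call your geolocation provider here and return an ISO country code.
    raise NotImplementedError

def signed_url_for(client_ip, key):
    if lookup_country(client_ip) in BLOCKED_COUNTRIES:
        raise PermissionError('Access not permitted from this location')
    return s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': 'my-restricted-bucket', 'Key': key},
        ExpiresIn=300,   # URL stops working after 5 minutes
    )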

Amazon CloudFront has since added a Geo Restriction feature. Take a look at http://aws.amazon.com/about-aws/whats-new/2013/12/18/amazon-cloudfront-adds-geo-restriction-feature/
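If you go the CloudFront route, enabling geo restriction can also be scripted; a rough, untested sketch with boto3, where the distribution ID is a placeholder:

# Hedged sketch: fetch the current distribution config, add a country blacklist,
# and push the config back. The distribution ID is a placeholder.
import boto3

cf = boto3.client('cloudfront')
resp = cf.get_distribution_config(Id='EDFDVBD6EXAMPLE')
config, etag = resp['DistributionConfig'], resp['ETag']

config['Restrictions'] = {
    'GeoRestriction': {
        'RestrictionType': 'blacklist',   # block the listed countries
        'Quantity': 3,
        'Items': ['SY', 'IR', 'SD'],      # ISO 3166-1 alpha-2 country codes
    }
}
cf.update_distribution(DistributionConfig=config, Id='EDFDVBD6EXAMPLE', IfMatch=etag)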

Related

Google Cloud Storage: Alternative to signed URLs for folders

Our application data storage is backed by Google Cloud Storage (and S3 and Azure Blob Storage). We need to give access to this storage to arbitrary outside tools (uploads from local disk using CLI tools, unloads from analytical databases like Redshift, Snowflake and others). The specific use case is that users need to upload many large files (think of it much like m3u8 playlists for streaming video: an m3u8 playlist plus thousands of small video files). The tools and users MAY not be affiliated with Google in any way (they may not have a Google account). We also absolutely need the data transfer to go directly to the storage, bypassing our servers.
In S3 we use federation tokens to give access to a part of the S3 bucket.
So model scenario on AWS S3:
customer requests some data upload via our API
we give the customer S3 credentials scoped to s3://customer/project/uploadId, allowing upload of new files (see the sketch below)
client uses any tool to upload the data
client uploads s3://customer/project/uploadId/file.manifest, s3://customer/project/uploadId/file.00001, s3://customer/project/uploadId/file.00002, ...
other data (be it other uploadId or project) in the bucket is safe because the given credentials are scoped
In ABS we use STS token for the same purpose.
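For context, the S3 federation-token step described above looks roughly like this (a sketch assuming boto3; the bucket name, allowed actions and duration are placeholders):

# Sketch of scoped upload credentials: the inline policy only allows PutObject
# under the customer/project/uploadId prefix, so the rest of the bucket stays safe.
import json
import boto3

sts = boto3.client('sts')

def upload_credentials(customer, project, upload_id):
    prefix = f'{customer}/{project}/{upload_id}'
    policy = {
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Action': ['s3:PutObject'],
            'Resource': [f'arn:aws:s3:::customer-bucket/{prefix}/*'],
        }],
    }
    token = sts.get_federation_token(
        Name=f'upload-{upload_id}'[:32],   # federated user name, max 32 characters
        Policy=json.dumps(policy),
        DurationSeconds=3600,
    )
    return token['Credentials']            # AccessKeyId, SecretAccessKey, SessionToken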
GCS does not seem to have anything similar, except for Signed URLs. Signed URLs have a problem, though: each one refers to a single file. That would either require us to know in advance how many files will be uploaded (we don't) or the client would have to request a signed URL for each file separately (a strain on our API, and slow).
ACLs seemed like a solution, but they are tied only to Google-related identities, and those can't be created on demand and quickly. Service accounts are also an option, but their creation is slow and, IIUC, they are generally discouraged for this use case.
Is there a way to create short-lived credentials that are limited to a subset of the GCS bucket?
The ideal scenario would be for the service account we use in the app to generate a short-lived token that has access only to a subset of the bucket. But nothing like that seems to exist.
Unfortunately, no. For retrieving objects, signed URLs need to be for exact objects. You'd need to generate one per object.
Using the * wildcard specifies the subdirectory you are targeting and identifies all objects under it. For example, to access objects under Folder1 in your bucket you would use gs://Bucket/Folder1/*, but a command such as gsutil signurl -d 120s key.json gs://bucketname/folderName/** will create a signed URL for each of the files under that prefix, not a single URL for the entire folder/subdirectory.
Reason: subdirectories are just an illusion of folders in a bucket; they are really object names that contain a '/'. Every file in a subdirectory therefore gets its own signed URL, and there is no way to create a single signed URL for a specific subdirectory that makes all of its files temporarily available.
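As a practical workaround, you can list everything under the "folder" prefix and sign each object in a loop; a hedged sketch assuming the google-cloud-storage Python client, with placeholder bucket and prefix names:

# Sketch: one signed URL per object under a prefix, since a single URL for the
# whole "folder" is not possible. Bucket name and expiry are placeholders.
from datetime import timedelta
from google.cloud import storage

client = storage.Client()

def signed_urls_for_prefix(prefix):
    urls = {}
    for blob in client.list_blobs('bucketname', prefix=prefix):
        urls[blob.name] = blob.generate_signed_url(
            version='v4',
            expiration=timedelta(seconds=120),   # matches the -d 120s above
            method='GET',
        )
    return urls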
There is an ongoing feature request for this: https://issuetracker.google.com/112042863. Please raise your concern there and watch for further updates.
For now, one way to accomplish this would be to write a small App Engine app that clients download from instead of hitting GCS directly. It would check authentication according to whatever mechanism you're using and then, if the check passes, generate a signed URL for that resource and redirect the user (see the sketch below).
Reference : https://stackoverflow.com/a/40428142/15803365
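A very small sketch of that redirect approach, using Flask purely as a stand-in for the App Engine app; is_authorized() is a hypothetical hook into your own auth mechanism, and the bucket name is a placeholder:

# Sketch: authenticate the request, then hand back a short-lived signed URL
# via a redirect instead of exposing the bucket directly.
from datetime import timedelta
from flask import Flask, abort, redirect, request
from google.cloud import storage

app = Flask(__name__)
client = storage.Client()

def is_authorized(req, object_name):
    # Hypothetical: plug in whatever authentication mechanism you're using.
    raise NotImplementedError

@app.route('/download/<path:object_name>')
def download(object_name):
    if not is_authorized(request, object_name):
        abort(403)
    blob = client.bucket('bucketname').blob(object_name)
    url = blob.generate_signed_url(version='v4',
                                   expiration=timedelta(minutes=5),
                                   method='GET')
    return redirect(url)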

CloudFront - Editing Origin - Restrict Bucket Access

I have a situation that I cannot easily understand, and I am unable to find any documentation for it either.
I have done the following:
Created a S3 bucket
Given public access to it
Enabled it for static website hosting
Created a CloudFront distribution to it
Enabled HTTPS at cloudfront
Now I am trying to restrict access to the S3 bucket to CloudFront only.
I tried the steps presented at
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
Unfortunately, when I try to edit the origin I don't see all the options in the UI; in particular, Restrict Bucket Access is missing.
I only see options to edit Origin Domain Name, Origin Path, Origin ID (grayed out) and Origin Custom Headers; there is no option to enter an OAI or to set Restrict Bucket Access.
Is it because of enabling HTTPS?
S3 masters, please help!
Origin access identities are only applicable when using the S3 REST endpoint (e.g. example-bucket.s3.amazonaws.com) for the bucket -- not when you are using the website hosting endpoint (e.g. example-bucket.s3-website.us-east-2.amazonaws.com), because website hosting endpoints do not support authenticated requests -- they are only for public content... but OAI is an authentication mechanism.
When using the website endpoint, CloudFront does not treat the origin as an S3 Origin -- it is treated as a Custom Origin, and these options are not available, because if they were available, they wouldn't work anyway (for the reason mentioned above).
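For completeness, once the origin does point at the REST endpoint with an OAI attached, the bucket side is just a policy granting that OAI read access; a sketch with boto3, where the OAI ID and bucket name are placeholders:

# Sketch: allow only the CloudFront origin access identity to read objects.
import json
import boto3

policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Sid': 'AllowCloudFrontOAIReadOnly',
        'Effect': 'Allow',
        'Principal': {
            'AWS': 'arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E2EXAMPLEOAI'
        },
        'Action': 's3:GetObject',
        'Resource': 'arn:aws:s3:::example-bucket/*',
    }],
}
boto3.client('s3').put_bucket_policy(Bucket='example-bucket', Policy=json.dumps(policy))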
For those who have since changed their S3 settings so the bucket is no longer public and intend it to be retrievable only via CloudFront: the option is now there but hidden. Just cut the value from the Origin Domain Name field in the Origins tab and paste it back in (if it is the same bucket), and the UI will then render the Restrict Bucket Access input options.

Received S3 bucket security notification email for my AWS account?

Recently I got an email related to my AWS S3 bucket ACLs,
and the email says:
We’re writing to remind you that one or more of your Amazon S3 bucket access control lists (ACLs) or bucket policies are currently configured to allow read or write access from any user on the Internet. The list of buckets with this configuration is below.
By default, S3 bucket ACLs or policies allow only the account owner to read or write contents from the bucket; however, these ACLs or bucket policies can be configured to permit world access. While there are reasons to configure buckets with world access, including public websites or publicly downloadable content, recently, there have been public disclosures of S3 bucket contents that were inadvertently configured to allow world read or write access but were not intended to be publicly available.
We encourage you to promptly review your S3 buckets and their contents to ensure that you are not inadvertently making objects available to users that you don’t intend. Bucket ACLs and policies can be reviewed in the AWS Management Console (http://console.aws.amazon.com ), or using the AWS CLI tools. ACLs permitting access to either “All Users” or “Any Authenticated AWS User” (which includes any AWS account) are effectively granting world access to the related content.
So, my question is what should I do to overcome this?
As the first answer says, yes, these emails are reminders. What you should do is:
Spot the S3 buckets that need to be private
Check their bucket ACLs, looking at the public access and listing grants
After that, check the bucket policy. Remember that the bucket policy can grant access regardless of the ACL (for example, even if the ACL is restrictive, an ALLOW in the policy makes the objects public)
For best practices, please check this link (page 28 of 74):
https://d0.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf
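If you have many buckets, a small audit script helps; a hedged sketch with boto3 that flags buckets whose ACL grants to the AllUsers / AuthenticatedUsers groups or whose policy S3 reports as public:

# Sketch: print buckets that look world-accessible via ACL grants or bucket policy.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')
PUBLIC_GROUPS = (
    'http://acs.amazonaws.com/groups/global/AllUsers',
    'http://acs.amazonaws.com/groups/global/AuthenticatedUsers',
)

for bucket in s3.list_buckets()['Buckets']:
    name = bucket['Name']
    acl_public = any(
        grant.get('Grantee', {}).get('URI') in PUBLIC_GROUPS
        for grant in s3.get_bucket_acl(Bucket=name)['Grants']
    )
    try:
        policy_public = s3.get_bucket_policy_status(Bucket=name)['PolicyStatus']['IsPublic']
    except ClientError:        # bucket has no policy attached
        policy_public = False
    if acl_public or policy_public:
        print(f'{name}: review public access')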
This is a courtesy notice, letting you know that content in Amazon S3 is public. If this is how you want your S3 bucket(s) configured, then there is no need to take action.
If this is not how you wish your buckets to be configured, then you should remove those permissions. (See plenty of online information on how to do this.)
I suspect that many people just blindly copy instructions from various online tutorials and might not realise the impact of their configurations. This email is just letting AWS customers know about their current configuration.

How to serve HLS streams from S3 in secure way (authorized & authenticated)

Problem:
I am storing a number of HLS streams in S3 with the following file structure:
Video1
├── hls3
│   ├── hlsv3-master.m3u8
│   ├── media-1
│   ├── media-2
│   ├── media-3
│   ├── media-4
│   └── media-5
└── hls4
    ├── hlsv4-master.m3u8
    ├── media-1
    ├── media-2
    ├── media-3
    ├── media-4
    └── media-5
In my user API I know exactly which user has access to which video content,
but I also need to ensure that video links are not shareable and are accessible
only by users with the right permissions.
Solutions:
1) Use signed/temporary S3 URLs for private S3 content. Whenever the client wants to play a specific video, it
sends a request to my API. If the user has the right permissions, the API generates a signed URL
and returns it to the client, which passes it to the player.
The problem I see here is that the real video content is stored as dozens of segment files in the media-* directories,
and I do not really see how I can protect all of them - would I need to sign each segment file's URL separately?
2) S3 content is private. Video stream requests made by players go through my API or a separate reverse proxy.
So whenever the client decides to play a specific video, the API / reverse proxy gets the request, does authentication & authorization
and passes back the right content (master playlist files & segments).
In this case I still need to make the S3 content private and accessible only by my API / reverse proxy. What would be the recommended way here?
S3 REST authentication via tokens?
3) Use encryption with a protected key. In this case all of the video segments are encrypted and publicly available. The key is also stored in S3
but is not publicly available. Every key request made by the player is authenticated & authorized by my API / reverse proxy.
These are the 3 solutions I have in mind right now. I'm not fully convinced by any of them. I am looking for something simple and bulletproof secure. Any recommendations / suggestions?
Used technology:
ffmpeg for video encoding to different bitrates
bento4 for video segmentation
would I need to sign each of the segment file urls separately?
If the player is requesting directly from S3, then yes. So that's probably not going to be the ideal approach.
One option is CloudFront in front of the bucket. CloudFront can be configured with an Origin Access Identity, which allows it to sign requests and send them to S3 so that it can fetch private S3 objects on behalf of an authorized user, and CloudFront supports both signed URLs (using a different algorithm than S3, with two important differences that I will explain below) and signed cookies. Signed requests and cookies in CloudFront work very similarly to each other, with the important difference being that a cookie can be set once and then automatically used by the browser for each subsequent request, avoiding the need to sign individual URLs. (Aha.)
For both signed URLs and signed cookies in CloudFront, you get two additional features not easily done with S3 if you use a custom policy:
The policy associated with a CloudFront signature can allow a wildcard in the path, so you could authorize access to any file in, say /media/Video1/* until the time the signature expires. S3 signed URLs do not support wildcards in any form -- an S3 URL can only be valid for a single object.
As long as the CloudFront distribution is configured for IPv4 only, you can tie a signature to a specific client IP address, allowing only access with that signature from a single IP address (CloudFront now supports IPv6 as an optional feature, but it isn't currently compatible with this option). This is a bit aggressive and probably not desirable with a mobile user base, which will switch source addresses as they switch from provider network to Wi-Fi and back.
Signed URLs must still be generated for all of the content links, but you can generate and sign a URL only once and then reuse the signature, just string-rewriting the URL for each file, which makes that option computationally less expensive... but still cumbersome. Signed cookies, on the other hand, should "just work" for any matching object.
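To make that concrete, here is a hedged sketch using botocore's CloudFrontSigner with a wildcard custom policy; the domain, key-pair ID and key file are placeholders:

# Sketch: build one custom policy covering every segment under the video's prefix,
# then reuse it to sign each concrete URL the player will request.
import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    with open('cloudfront_private_key.pem', 'rb') as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())   # CloudFront uses RSA-SHA1

signer = CloudFrontSigner('APKAEXAMPLEKEYPAIRID', rsa_signer)
expires = datetime.datetime.utcnow() + datetime.timedelta(hours=1)

policy = signer.build_policy('https://dxxxxx.cloudfront.net/media/Video1/*',
                             date_less_than=expires)

url = signer.generate_presigned_url(
    'https://dxxxxx.cloudfront.net/media/Video1/hls3/hlsv3-master.m3u8',
    policy=policy)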
Of course, adding CloudFront should also improve performance through caching and Internet path shortening, since the request hops onto the managed AWS network closer to the browser than it typically will for requests direct to S3. When using CloudFront, requests from the browser are sent to whichever of 60+ global "edge locations" is assumed to be nearest the browser making the request. CloudFront can serve the same cached object to different users with different URLs or cookies, as long as the sigs or cookies are valid, of course.
To use CloudFront signed cookies, at least part of your application -- the part that sets the cookie -- needs to be "behind" the same CloudFront distribution that points to the bucket. This is done by declaring your application as an additional Origin for the distribution, and creating a Cache Behavior for a specific path pattern which, when requested, is forwarded by CloudFront to your application, which can then respond with the appropriate Set-Cookie: headers.
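A rough, self-contained sketch of that application origin issuing the three signed-cookie values CloudFront expects (Flask is used purely for illustration; the domain, key-pair ID and key path are placeholders, and the base64 substitutions follow CloudFront's documented + to -, = to _, / to ~ scheme):

# Sketch: set CloudFront-Policy, CloudFront-Signature and CloudFront-Key-Pair-Id
# cookies for a wildcard custom policy, so the player can fetch every segment.
import base64, json, time
from flask import Flask, make_response
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

app = Flask(__name__)
KEY_PAIR_ID = 'APKAEXAMPLEKEYPAIRID'

def cf_b64(data: bytes) -> str:
    return base64.b64encode(data).decode().replace('+', '-').replace('=', '_').replace('/', '~')

def sign(policy: bytes) -> bytes:
    with open('cloudfront_private_key.pem', 'rb') as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(policy, padding.PKCS1v15(), hashes.SHA1())

@app.route('/authorize/<video>')
def authorize(video):
    # Wildcard custom policy, valid for one hour.
    policy = json.dumps({'Statement': [{
        'Resource': f'https://dxxxxx.cloudfront.net/media/{video}/*',
        'Condition': {'DateLessThan': {'AWS:EpochTime': int(time.time()) + 3600}},
    }]}, separators=(',', ':')).encode()

    resp = make_response('authorized')
    for name, value in (('CloudFront-Policy', cf_b64(policy)),
                        ('CloudFront-Signature', cf_b64(sign(policy))),
                        ('CloudFront-Key-Pair-Id', KEY_PAIR_ID)):
        resp.set_cookie(name, value, domain='.example.com', secure=True, httponly=True)
    return resp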
I am not affiliated with AWS, so don't mistake the following as a "pitch" -- just anticipating your next question: CloudFront + S3 is priced such that the cost difference compared to using S3 alone is usually negligible -- S3 doesn't charge you for bandwidth when objects are requested through CloudFront, and CloudFront's bandwidth charges are in some cases slightly lower than the charge for using S3 directly. While this seems counterintuitive, it makes sense that AWS would structure pricing in such a way as to encourage distribution of requests across its network rather than to focus them all against a single S3 region.
Note that no mechanism, either the one above or the one below is completely immune to unauthorized "sharing," since the authentication information is necessarily available to the browser, and thus to the user, depending on the user's expertise... but both approaches seem more than sufficient to keep honest users honest, which is all you can ever hope to do. Since signatures on signed URLs and cookies have expiration times, the duration of the share-ability is limited, and you can identify such patterns through CloudFront log analysis, and react accordingly. No matter what approach you take, don't forget the importance of staying on top of your logs.
The reverse proxy is also a good idea, probably easily implemented, and should perform quite acceptably with no additional data transport charges or throughput issues, if the EC2 machines running the proxy are in the same AWS region as the bucket, and the proxy is based on solid, efficient code like that found in Nginx or HAProxy.
You don't need to sign anything in this environment, because the bucket can be configured to allow the reverse proxy to access the private objects on the strength of its fixed IP address.
In the bucket policy, you do this by granting "anonymous" users the s3:getObject privilege, only if their source IPv4 address matches the IP address of one of the proxies. The proxy requests objects anonymously (no signing needed) from S3 on behalf of authorized users. This requires that you not be using an S3 VPC endpoint, but instead give the proxy an Elastic IP address or put it behind a NAT Gateway or NAT instance and have S3 trust the source IP of the NAT device. If you do use an S3 VPC endpoint, it should be possible to allow S3 to trust the request simply because it traversed the S3 VPC Endpoint, though I have not tested this. (S3 VPC Endpoints are optional; if you didn't explicitly configure one, then you don't have one, and probably don't need one).
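The bucket-policy side of that setup is short; a sketch via boto3, where the bucket name and the proxy's Elastic IP are placeholders:

# Sketch: anonymous GetObject is allowed only when the request comes from the proxy's IP.
import json
import boto3

policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Sid': 'AllowGetFromReverseProxyOnly',
        'Effect': 'Allow',
        'Principal': '*',
        'Action': 's3:GetObject',
        'Resource': 'arn:aws:s3:::video-bucket/*',
        'Condition': {'IpAddress': {'aws:SourceIp': '203.0.113.10/32'}},
    }],
}
boto3.client('s3').put_bucket_policy(Bucket='video-bucket', Policy=json.dumps(policy))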
Your third option seems weakest, if I understand it correctly. An authorized but malicious user gets the key and can share it all day long.

What are Best practices for delivering user-generated content sharing service globally from Amazon S3?

What are the issues with using Amazon S3 to store user-uploaded photos and video and delivering them to users around the world? One user's uploads may be viewed by users in any location. Is this the use case for Amazon CloudFront?
We really want a global S3 bucket - why oh why has Amazon set up regions!!
cheers
You already have the answer. That's exactly what CloudFront is for.
It's pretty trivial to 'link' CloudFront to your bucket, which then means your content is served from multiple edge locations around the world.
Like S3, you can have public or private distributions, and you can now use the new Identity and Access Management (IAM) service to protect your content too.
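For illustration only, a rough boto3 sketch of linking a distribution to a bucket, using the legacy cache settings of that era; the bucket and origin names are placeholders:

# Sketch: create a CloudFront distribution whose single origin is an S3 bucket.
import time
import boto3

cf = boto3.client('cloudfront')
resp = cf.create_distribution(DistributionConfig={
    'CallerReference': str(time.time()),        # any unique string
    'Comment': 'User-generated content',
    'Enabled': True,
    'Origins': {'Quantity': 1, 'Items': [{
        'Id': 'ugc-bucket-origin',
        'DomainName': 'my-ugc-bucket.s3.amazonaws.com',
        'S3OriginConfig': {'OriginAccessIdentity': ''},   # empty = public bucket
    }]},
    'DefaultCacheBehavior': {
        'TargetOriginId': 'ugc-bucket-origin',
        'ViewerProtocolPolicy': 'redirect-to-https',
        'ForwardedValues': {'QueryString': False, 'Cookies': {'Forward': 'none'}},
        'TrustedSigners': {'Enabled': False, 'Quantity': 0},
        'MinTTL': 0,
    },
})
print(resp['Distribution']['DomainName'])   # the *.cloudfront.net domain to serve from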