I am considering using Amazon Location Service as a map tile provider for a web app I am working on. I have been able to get this working in a small proof-of-concept using Cognito unauthenticated access and MapLibre GL JS (but plan to move to OpenLayers if supported).
My concern with this is that anyone using the application could extract the identity pool id from the browser and use this to run up a large bill on my behalf! The web app is not public, with users authenticated against a proprietary database. I'd like to allow only these authenticated users to be able to retrieve map tiles.
Would using Cognito developer authenticated identities be suitable for this? Any other recommendations to achieve this?
Amazon Cognito authenticated identity pools could help, but they are intended to integrate with (or act as) your primary login system, which may complicate your design.
Using the aws:Referer IAM condition key with your domain name will prevent other browser-based apps from using your credentials for this purpose, and is the equivalent of the domain restrictions supported by other map tile providers. Here's an example:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RestrictedMapReadOnly",
            "Effect": "Allow",
            "Action": "geo:GetMap*",
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": [
                        "https://example.com/*",
                        "https://www.example.com/*"
                    ]
                }
            }
        }
    ]
}
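For reference, StringLike matching in IAM supports the * and ? wildcards. A rough sketch of how the two patterns above are evaluated, using Python's fnmatch as a stand-in for IAM's matcher (this is an approximation for illustration, not the real evaluation engine):

```python
from fnmatch import fnmatchcase

# The aws:Referer patterns from the policy above.
ALLOWED_REFERERS = ["https://example.com/*", "https://www.example.com/*"]

def referer_allowed(referer: str) -> bool:
    """Approximate IAM's StringLike matching (* and ? wildcards) with fnmatch."""
    return any(fnmatchcase(referer, pattern) for pattern in ALLOWED_REFERERS)
```

Keep in mind that the Referer header is supplied by the client, so this deters casual reuse of your credentials from other sites rather than a determined attacker scripting requests outside a browser.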
I'd like to implement the following: a specific Cognito (authenticated) user must have access to a single S3 bucket.
What is the best way to achieve the above?
I have tried the following:
Create Cognito User Pool with App integration
Create Cognito Identity Pool, which creates a dedicated IAM role for authenticated users
My idea was to update the policy of the Identity-Pool-IAM role to impose restrictions on S3 buckets to specific users only. I would of course have to extend this policy every time I add a new Cognito user (no problem with this).
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "mobileanalytics:PutEvents",
                "cognito-sync:*",
                "cognito-identity:*"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME"
            ],
            "Condition": {
                "StringLike": {
                    "cognito-identity.amazonaws.com:sub": "COGNITO_USER_SUB_UID"
                }
            }
        }
    ]
}
AWS doesn't like the way the S3-related policy above is written. It returns:
This policy defines some actions, resources, or conditions that do
not provide permissions. To grant access, policies must have an action
that has an applicable resource or condition.
Question: am I taking the right approach? If so, what am I doing wrong? If not, how should I solve my (supposedly simple) requirement?
For whatever reason, all the examples I have found seem to restrict access to an S3 folder in a bucket rather than the bucket itself (see here or here).
I have a set of users in a Cognito user pool under different groups. Example: 5 users in an admin group, 20 users in a supervisor group. I have created an app client in Cognito and enabled some built-in and custom OAuth2 scopes.
Now I have API Gateway APIs with a Cognito authorizer enabled for auth. These APIs are working fine.
Problem faced:
With the above method I'm able to access my APIs, but how do I control API access based on the user's scope?
All users in the pool have all scopes enabled. The only thing that differentiates a user is the cognito:group claim. In the API auth settings I'm only able to set scopes, and since all users have all scopes regardless of group, they all get authorized. How do I control the flow based on the type of user?
When you create a user group, you can have an IAM role attached to it.
This role is where you can control access to your API, by setting permissions on execute-api:
arn:aws:execute-api:{region}:{account}:{apiId}/{stage}/{method}/{resource}
where any {parameter} can be the wildcard * to match all values.
You can find the IAM role's ARN in the user group's description; from there, go to the IAM console and update that role's permissions.
So your policy can be something like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:eu-central-1:1564543246:badfg687e/*/*/petstore"
        },
        {
            "Effect": "Deny",
            "Action": "execute-api:Invoke",
            "NotResource": "arn:aws:execute-api:eu-central-1:1564543246:badfg687e/*/*/petstore"
        }
    ]
}
For example, this policy allows users in the group attached to this IAM role to invoke all methods on the petstore resource of API badfg687e, and explicitly denies everything else. Note the Deny statement uses NotResource rather than a Resource of */*/*: an explicit Deny always overrides an Allow, so denying on */*/* would have blocked access to petstore as well.
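To see how the ARN segments line up with the {stage}/{method}/{resource} format, here is a small sketch that builds an execute-api ARN and checks it against the Allow statement's pattern. The stage and method values are made up, and fnmatch only approximates IAM's wildcard matching:

```python
from fnmatch import fnmatchcase

def execute_api_arn(region: str, account: str, api_id: str,
                    stage: str, method: str, resource: str) -> str:
    """Build an execute-api ARN in the {stage}/{method}/{resource} format."""
    return f"arn:aws:execute-api:{region}:{account}:{api_id}/{stage}/{method}/{resource}"

# The resource pattern from the Allow statement above.
ALLOW_PATTERN = "arn:aws:execute-api:eu-central-1:1564543246:badfg687e/*/*/petstore"

def is_allowed(arn: str) -> bool:
    """Approximate IAM wildcard matching of the ARN against the pattern."""
    return fnmatchcase(arn, ALLOW_PATTERN)
```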
I'm actually going to sneak in two questions here:
1)
I'm trying to figure out if there is a way to let Cognito users manage access to their own folders. Let's say user Dave wants to share his protected file with user Anne. How would I go about doing this?
2) How can a group of users access the same restricted folders in a bucket? In my app users rarely work alone, and they upload files to the organization they belong to.
Below is the policy I've got so far, but it's not doing what I want. Is there a way to do this directly in S3, or do I have to build a Lambda/Dynamo/S3 setup?
Do I need a unique policy for every organization and user in my app to achieve this? Isn't that a tad overkill?
I will be grateful for any help I can get on this topic.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKETNAME/user/${cognito-identity.amazonaws.com:sub}",
                "arn:aws:s3:::BUCKETNAME/user/${cognito-identity.amazonaws.com:sub}/*"
            ]
        }
    ]
}
Your use-case is beyond what should be implemented via a Bucket Policy.
Trying to add exceptions and special cases to a bucket policy will make the rules complex to manage, and you will start hitting limits -- S3 supports bucket policies of up to 20 KB.
Once you get into specific rules about users, objects, paths, etc then you should really be managing this through your central app, which then grants access to objects via Pre-Signed URLs that are generated on-the-fly as necessary.
Amazon S3 is a highly scalable, reliable storage system -- but don't expect it to have every feature you require. That's where your app code is required.
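To illustrate the pre-signed URL idea generically: the server, which holds the credentials, signs a (key, expiry) pair and hands the result to the client, and the storage layer later verifies both the signature and the expiry. The sketch below uses a plain HMAC to show the shape of the scheme; real S3 pre-signed URLs use AWS Signature Version 4 and are normally produced by an SDK call (e.g. boto3's generate_presigned_url), not hand-rolled like this.

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # stands in for the server's private credentials

def presign(key, expires_in, now=None):
    """Return a signed, time-limited URL-style string for `key`."""
    expiry = int((now if now is not None else time.time()) + expires_in)
    payload = f"{key}:{expiry}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"/{key}?expires={expiry}&sig={sig}"

def verify(url, now=None):
    """Check the signature and the expiry, as the storage layer would."""
    path, _, query = url.partition("?")
    params = dict(p.split("=", 1) for p in query.split("&"))
    payload = f"{path.lstrip('/')}:{params['expires']}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    unexpired = (now if now is not None else time.time()) < int(params["expires"])
    return hmac.compare_digest(expected, params["sig"]) and unexpired
```

Because each URL is generated on demand by your app, the app can apply any access rule it likes (per user, per object, per group) before issuing one, which is exactly the flexibility a bucket policy lacks.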
I'm creating an app with a feature similar to Instagram's: users can upload images and view others'. They also need to be able to delete only their own.
I plan to store these images in S3. What's the safest way to allow users to upload, download, and delete their own? My current plan is to authenticate users through my own system, then exchange that login token for AWS Cognito credentials, which can upload and download to/from my S3 bucket.
Deleting I think will be more difficult. I imagine I will have clients send a request to a server that processes it, makes sure the requested deletion is allowed for that client, and then sends the request to S3 using admin credentials.
Is this a feasible way of managing all this, and how best can I disallow users from uploading random things to my bucket? I want them only to be able to upload images associated with their account and with my app, but presumably with the Cognito credentials they could upload anything.
Thanks, and let me know if I wasn't clear on anything.
When using Amazon Cognito, your mobile application users will assume an Identity and Access Management (IAM) role, which gives them permissions to access AWS resources.
A role could, for example, grant access to an Amazon S3 bucket to allow them to upload and download pictures. You can then limit their access to the S3 bucket such that they can only perform actions on objects within their own directory.
Here is an example policy that grants access to subdirectories based on the user's Cognito identity:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": ["s3:ListBucket"],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::mybucket"],
            "Condition": {
                "StringLike": {
                    "s3:prefix": ["${cognito-identity.amazonaws.com:sub}/*"]
                }
            }
        },
        {
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::mybucket/${cognito-identity.amazonaws.com:sub}/*"]
        }
    ]
}
This way, you can allow the mobile app to interface directly with S3 for uploads, downloads and deletes, rather than having to route every request through a back-end service. This lets your app scale without needing as many servers (so it's cheaper, too!)
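At evaluation time, IAM substitutes ${cognito-identity.amazonaws.com:sub} with the caller's identity ID, so each user effectively gets their own resource pattern. A rough sketch of that substitution and the resulting per-user check (the identity IDs are made up, and fnmatch only approximates IAM's matching):

```python
from fnmatch import fnmatchcase

# Resource pattern from the second statement in the policy above.
POLICY_RESOURCE = "arn:aws:s3:::mybucket/${cognito-identity.amazonaws.com:sub}/*"

def resolved_resource(identity_id: str) -> str:
    """Substitute the Cognito policy variable, as IAM does per request."""
    return POLICY_RESOURCE.replace("${cognito-identity.amazonaws.com:sub}", identity_id)

def can_access(identity_id: str, object_arn: str) -> bool:
    """Approximate check: does the caller's resolved pattern cover the object?"""
    return fnmatchcase(object_arn, resolved_resource(identity_id))
```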
For more details, see:
Understanding Amazon Cognito Authentication Part 3: Roles and Policies
I've been trying all possible options but with no results. My Bucket Policy works well with aws:Referer but it doesn't work at all with Source Ip as the condition.
My Server is hosted with EC2 and I am using the Public IP in this format xxx.xxx.xxx.xxx/32 (Public_Ip/32) as the Source Ip parameter.
Can anyone tell me what I am doing wrong?
Currently my Policy is the following
{
    "Version": "2008-10-17",
    "Id": "S3PolicyId1",
    "Statement": [
        {
            "Sid": "IPDeny",
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::my_bucket/*",
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": "xx.xx.xxx.xxx/32"
                }
            }
        }
    ]
}
I read all examples and case studies but it doesn't seem to allow access based on Source IP...
Thanks a lot!!!
While I won't disagree that IAM roles are better than IP addresses wherever possible, the accepted answer didn't actually achieve the original question's goal. I needed to do this (I needed access from a machine that wasn't in EC2, and thus couldn't use an instance role).
Here is a policy that only allows a certain IP (or multiple IPs) to access a bucket's objects. This assumes that there is no other policy allowing access to the bucket (by default, buckets grant no public access).
This policy also does not allow listing; it only works if you know the full URL of the object you need. If you need more permissions, just add them to the Action array.
{
    "Version": "2012-10-17",
    "Id": "Policy123456789",
    "Statement": [
        {
            "Sid": "IPAllow",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::mybucket/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "xx.xx.xx.xx/32"
                    ]
                }
            }
        }
    ]
}
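aws:SourceIp is evaluated as CIDR containment, which is why the /32 suffix matters: it pins the rule to exactly one address, while a wider prefix such as /24 covers a whole range. The stdlib ipaddress module performs the same containment test that the IpAddress and NotIpAddress operators do:

```python
from ipaddress import ip_address, ip_network

def source_ip_matches(source_ip: str, cidr: str) -> bool:
    """Mirror the IpAddress condition: is source_ip inside the CIDR block?"""
    return ip_address(source_ip) in ip_network(cidr)
```

One common gotcha when a SourceIp condition "doesn't work": if requests reach S3 through a VPC endpoint, they don't carry the instance's public IP, so a public /32 will never match; aws:VpcSourceIp is the condition key for that case.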
From the discussion on the comments on the question, it looks like your situation can be rephrased as follows:
How can I give a specific EC2 instance full access to an S3 bucket, and deny access from every other source?
Usually, the best approach is to create an IAM Role and launch your EC2 instance associated with that IAM Role. As I'm going to explain, it is usually much better to use IAM Roles to define your access policies than it is to specify source IP addresses.
IAM Roles
IAM, or Identity and Access Management, is a service that can be used to create users, groups and roles, manage access policies associated with those three kinds of entities, manage credentials, and more.
Once you have your IAM role created, you are able to launch an EC2 instance "within" that role. In simple terms, it means that the EC2 instance will inherit the access policy you associated with that role. (EC2 now also supports attaching or replacing the IAM role on a running instance.) You can modify the access policy associated with an IAM role whenever you want.
The IAM service is free, and you don't pay anything extra when you associate an EC2 instance with an IAM Role.
In your situation
In your situation, what you should do is create an IAM Role to use within EC2 and attach a policy that will give the permissions you need, i.e., that will "Allow" all the "s3:xxx" operations it will need to execute on that specific resource "arn:aws:s3:::my_bucket/*".
Then you launch a new instance with this role (on the current AWS Management Console, on the EC2 Launch Instance wizard, you do this on the 3rd step, right after choosing the Instance Type).
Temporary Credentials
When you associate an IAM Role with an EC2 instance, the instance is able to obtain a set of temporary AWS credentials (let's focus on the results and benefits, and not exactly on how this process works). If you are using the AWS CLI or any of the AWS SDKs, then you simply don't specify any credential at all and the CLI or SDK will figure out it has to look for those temporary credentials somewhere inside the instance.
This way, you don't have to hard code credentials, or inject the credentials into the instance somehow. The instance and the CLI or SDK will manage this for you. As an added benefit, you get increased security: the credentials are temporary and rotated automatically.
In your situation
If you are using the AWS CLI, you would simply run the commands without specifying any credentials. You'll be allowed to run the APIs that you specified in the IAM Role Access Policy. For example, you would be able to upload a file to that bucket:
aws s3 cp my_file.txt s3://my_bucket/
If you are using an SDK, say the Java SDK, you would be able to interact with S3 by creating the client objects without specifying any credentials:
AmazonS3 s3 = new AmazonS3Client(); // no credentials on the constructor!
s3.putObject("my_bucket", ........);
I hope this helps you solve your problem. If you have any further related questions, leave a comment and I will try to address them on this answer.