The Best Solution for an AWS Mobile App, DynamoDB, & S3 Scenario

I am planning a game app for mobile devices. Users will log into the game using their existing social media account to streamline data capture. Company B would like to directly save player data and scoring information from the mobile app to a DynamoDB table named Score Data. When a user saves their game, the progress data will be stored in the Game State S3 bucket.
What is the best approach for storing data to DynamoDB and S3?
Option 1: Use an EC2 instance that is launched with an EC2 role providing access to the Score Data DynamoDB table and the Game State S3 bucket, and that communicates with the mobile app via web services.
Option 2: Use temporary security credentials that assume a role providing access to the Score Data DynamoDB table and the Game State S3 bucket using web identity federation.
Many architects I talked to say Option 1 is the right one, but according to the AWS documentation it appears Option 2 can be valid too. Any input would be appreciated!

I would strongly consider Option #2 using Amazon Cognito to provide temporary credentials to your users that enable them to directly and specifically access DynamoDB and S3.
Generally speaking, you need to:
Create a new Cognito Identity Pool and set up 2 IAM roles -- one for authenticated users and one for unauthenticated users (optional). https://docs.aws.amazon.com/cognito/devguide/getting-started/?platform=ios
Authenticate a user via your own authentication provider or via external providers like Facebook, Twitter, etc., and then use Cognito to create temporary credentials for them. https://docs.aws.amazon.com/cognito/devguide/identity/external-providers/
Use the credentials to access DynamoDB and/or S3. Your AWS resources will be protected as long as you set up your IAM roles appropriately. For example, you can give fine grained access to your DynamoDB table so that users cannot access rows that don't belong to them. See the following link for more details: https://docs.aws.amazon.com/cognito/devguide/identity/concepts/iam-roles/
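As a rough illustration of that last point, a scoped policy on the identity pool's authenticated role might look like the sketch below (the table name, region, account id, and role name are placeholders, not taken from the question); it uses the dynamodb:LeadingKeys condition so each identity can only touch its own items:

    # Hedged sketch: attach a per-identity DynamoDB policy to the Cognito
    # "authenticated" role. All names/ids below are placeholders.
    import json
    import boto3

    iam = boto3.client("iam")

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/ScoreData",
            "Condition": {
                "ForAllValues:StringEquals": {
                    # Each player may only touch items whose partition key
                    # equals their Cognito identity id.
                    "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
                }
            }
        }]
    }

    iam.put_role_policy(
        RoleName="Cognito_GameAuth_Role",        # hypothetical role name
        PolicyName="ScoreDataPerPlayerAccess",
        PolicyDocument=json.dumps(policy),
    )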
The Cognito developer guide is here: https://docs.aws.amazon.com/cognito/devguide/.

Related

Authentication of API Gateway methods using Cognito?

I have created an API in API Gateway named “Test” which has 2 methods – add and delete.
Criteria:
Create 2 users in Cognito with 1st user having access to both methods and 2nd user having access to only “add”.
Can anyone help me with this? Thanks in advance.
Check out the docs on using groups with API Gateway:
You can use groups to create a collection of users in a user pool, which is often done to set the permissions for those users. For example, you can create separate groups for users who are readers, contributors, and editors of your website and app. Using the IAM role associated with a group, you can also set different permissions for those different groups so that only contributors can put content into Amazon S3 and only editors can publish content through an API in Amazon API Gateway.
You should be able to set permissions on the groups such that one has access to both API endpoints and the other just to the one.
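A minimal sketch of that setup, assuming a user pool and placeholder ARNs (none of the ids below come from the question): create two groups, each mapped to an IAM role, where the add-only role's policy allows execute-api:Invoke on just the add method.

    # Hedged sketch: two user pool groups tied to IAM roles with different
    # execute-api permissions. All ids/ARNs are placeholders.
    import boto3

    cognito = boto3.client("cognito-idp")

    # Group 1: may invoke both methods of the "Test" API.
    cognito.create_group(
        GroupName="full-access",
        UserPoolId="us-east-1_EXAMPLE",
        RoleArn="arn:aws:iam::123456789012:role/TestApiFullAccess",
    )

    # Group 2: may invoke only the "add" method.
    cognito.create_group(
        GroupName="add-only",
        UserPoolId="us-east-1_EXAMPLE",
        RoleArn="arn:aws:iam::123456789012:role/TestApiAddOnly",
    )

    # The add-only role would carry a policy along these lines
    # (assuming POST /add; adjust to the actual method/path):
    add_only_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-east-1:123456789012:abcdef1234/*/POST/add",
        }],
    }

    cognito.admin_add_user_to_group(
        UserPoolId="us-east-1_EXAMPLE", Username="user2", GroupName="add-only"
    )

Note that the group's IAM role takes effect when the methods use IAM authorization and callers get credentials through a Cognito identity pool configured to choose the role from the token; with a plain user pool authorizer you would instead check the cognito:groups claim in a Lambda authorizer or in the backend.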

aws cognito - allow group-chat participants access to s3 bucket

I'm implementing an iOS app with group-chat support, where users can add photos and other files.
Decided on AWS S3 as storage back end, using Cognito Federated Identities to authenticate upload/downloads - data pumped to/from S3, not via our servers.
So far
my implementation allows a user/identity to upload & download to their own folder on an S3 bucket (example policy resource: arn:aws:s3:::mybucket/users/${cognito-identity.amazonaws.com:sub}/*, the variable being the identityID/user_id).
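(For reference, the authenticated-role policy behind that looks roughly like the following sketch; the bucket name comes from the example ARN, everything else is illustrative.)

    # Rough sketch of the per-identity prefix policy described above.
    per_user_s3_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::mybucket/users/${cognito-identity.amazonaws.com:sub}/*",
        }],
    }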
However
I've not been able to find a secure way that allows only participants in a group-chat to upload/download from that group-chat's folder on S3.
Any ideas?
some thoughts on a possible flow:
first, user upload photo to own folder, [ I know how ]
then, the system copies over the photo into the group-chat's folder [ I know how ]
associate group-chat folder with the identities of participants [ not sure how - there could be thousands of groups & participants ]
EDIT 1: as @MyStackRunnethOver suggests, could use one IAM role/credential to manage all upload/download requests for users (that belong to said group) [big security concern if the credential is compromised].
EDIT 1: could use pre-signed URLs: files uploaded to the user's own folder, pre-signed URL stored on group-chat entries [max URL life is 7 days though]
client caching helps until participants join/leave a group frequently
requires server-side scheduled job to renew expired PreSigned URLs
Any comments/ideas appreciated
Your question boils down to:
"Given that users are part of groups, how can I give users access to group-specific subdirectories based on group membership?"
As I see it, you have two options:
Give each user the "key" to all directories they're a member of. This could mean adding a permission to that user for each group, or providing them with access to a new IAM role for each group. This is the "come up with a way to have fine-grained permissions for S3" strategy.
Don't distribute any directory-specific keys. Instead, when a user requests a certain directory, check whether they're in the group that directory belongs to. This is the "build a fine-grained data storage system around S3" strategy.
I recommend the latter approach, because instead of having an IAM role or a credential per user or per group, you give all your users one credential: the credential needed to make requests of your S3 wrapper. If you keep track of which groups your users are in, all your wrapper needs to do is check the user -> groups mapping to see if a request should be fulfilled. The front end can use the same mapping to prettify the UI: a user is only shown the option to upload / download from the groups they are a member of. In this case, I would envision the mapping as a Dynamo table that is updated whenever a user signs up, joins a group, leaves a group, or deletes their account. You can identify your users by their Cognito credentials, which include user-specific fields.
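A minimal sketch of that wrapper, assuming a hypothetical membership table and a groups/<group_id>/ prefix in the bucket (names are illustrative, not from the question): look up the user -> group mapping in DynamoDB and, only if it exists, hand back a short-lived pre-signed URL.

    # Hedged sketch of the "wrapper" approach: check group membership in a
    # DynamoDB table, then return a short-lived pre-signed URL for the
    # group's prefix. Table, bucket and key layout are assumptions.
    import boto3

    dynamodb = boto3.resource("dynamodb")
    s3 = boto3.client("s3")

    MEMBERSHIP_TABLE = "ChatGroupMembership"   # hypothetical table
    BUCKET = "mybucket"

    def get_download_url(user_id: str, group_id: str, file_name: str) -> str:
        """Return a pre-signed GET URL if user_id belongs to group_id."""
        item = dynamodb.Table(MEMBERSHIP_TABLE).get_item(
            Key={"user_id": user_id, "group_id": group_id}
        ).get("Item")
        if item is None:
            raise PermissionError("user is not a member of this group")

        return s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": BUCKET, "Key": f"groups/{group_id}/{file_name}"},
            ExpiresIn=3600,   # one hour; well under the 7-day pre-signed limit
        )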

Google Cloud equivalent of Amazon STS

Amazon STS offers the ability to take an IAM token and create a limited subset of the abilities of that token for other use. The subset of abilities can be by time (expiring in N hours) and by allowed operations (e.g. read one S3 bucket but not all the S3 buckets the original token can read).
Because this is done using the S3 ARN format, which supports wildcards in the S3 key name, it's possible to create a sub-token that can read part of an S3 bucket.
Looking through Google Cloud Storage's access control docs, I couldn't find the equivalent of this functionality in GCS.
To be more specific, I'd like to create a bucket with these four objects:
/folder1/file1
/folder1/file2
/folder2/file3
/folder2/file4
And given a token with permissions to access all files indefinitely, produce a limited subset of the token with permissions to view just the objects in /folder2/* (so /folder2/file3 and /folder2/file4) for N hours.
Is this possible in GCS like it is in S3/STS?
Currently, in GCP there are no tokens with a limited subset of the abilities of another token.
The most similar thing to what you are asking for is Signed URLs, since they allow time-limited access to Cloud Storage objects.
I don't know why you need them to have abilities that are a subset of another token's, but in your case you could just create Signed URLs with permissions to view the objects in /folder2/*.
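A minimal sketch of that, using the google-cloud-storage Python client (the bucket name is a placeholder; the object names follow the example layout above):

    # Hedged sketch: time-limited read access to the objects under folder2/ only.
    from datetime import timedelta
    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("my-bucket")   # placeholder bucket name

    urls = [
        bucket.blob(name).generate_signed_url(
            version="v4",
            expiration=timedelta(hours=4),   # "N hours"
            method="GET",
        )
        for name in ("folder2/file3", "folder2/file4")
    ]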

Service account -- limiting access to only big query

Is there a way to create a service account in the context of Google's cloud services that can only access BigQuery and not any other service (GCE, App Engine, &c)? Or is it necessary to create a new "project" and put the account in that project?
There are two ways to scope access:
ACLs and group membership allow control over what the service account has access to.
OAuth credentials can be scoped to individual services / apis.
Either option could work for you, depending on what your ultimate goal is.
How to use ACLs to limit access to only BigQuery
A service account is an identity, just like an email address is an identity.
Identity access is controlled through ACLs, either on the project or on the individual datasets you want to manage. BigQuery's access control is described here: https://cloud.google.com/bigquery/access-control. Other services and apis offer their own ACL controls. Together, these options give you fine grained control over access.
For example, if you put the service account in the project owners ACL, then that service account will have access to everything a project owner would have: BigQuery, Google Storage, etc.
Alternatively, if you put that service account only on a single BigQuery Dataset, then it would only have access to that dataset. (If you also want that service account to be able to run BigQuery jobs, then it would need to be a member of some project since jobs run in the context of a project. If you have a requirement that the project you run BigQuery jobs in cannot be the same project that you store Google Storage data in, then you will need multiple projects.)
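As a sketch of the dataset-level ACL option, using the google-cloud-bigquery Python client (the project, dataset, and service account address are placeholders):

    # Hedged sketch: grant a service account READER access to one dataset only.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")
    dataset = client.get_dataset("my-project.analytics")   # placeholder dataset

    entries = list(dataset.access_entries)
    entries.append(
        bigquery.AccessEntry(
            role="READER",
            entity_type="userByEmail",
            entity_id="reporting-sa@my-project.iam.gserviceaccount.com",
        )
    )
    dataset.access_entries = entries
    client.update_dataset(dataset, ["access_entries"])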
How to use OAuth Scopes to limit access to only BigQuery
When you create the OAuth credentials for your service account, you can specify the Scopes that the credentials are valid for. Each api documents the scopes required in order to call the api. BigQuery's scopes are documented here: https://cloud.google.com/bigquery/authorization.
For example, if you only provide BigQuery scopes, then your code will only be able to make BigQuery api calls. Attempting to call a Google Storage API with credentials bound to BigQuery won't work.
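A minimal sketch of scoping a service account's credentials to BigQuery only, using the Python client libraries (the key file path is a placeholder):

    # Hedged sketch: credentials restricted to the BigQuery OAuth scope.
    from google.oauth2 import service_account
    from google.cloud import bigquery

    credentials = service_account.Credentials.from_service_account_file(
        "service-account.json",   # placeholder key file
        scopes=["https://www.googleapis.com/auth/bigquery"],
    )

    # This client can call BigQuery; the same credentials will not authorize
    # Google Cloud Storage API calls.
    bq_client = bigquery.Client(credentials=credentials, project=credentials.project_id)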

How to restrict Amazon S3 API access?

Is there a way to create a different identity (access key / secret key) to access Amazon S3 buckets via the REST API where I can restrict access (read only, for example)?
The recommended way is to use IAM to create a new user, then apply a policy to that user.
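A minimal sketch of that with boto3 (the user name is a placeholder; AmazonS3ReadOnlyAccess is the AWS managed read-only policy):

    # Hedged sketch: create a read-only IAM user and issue an access key for it.
    import boto3

    iam = boto3.client("iam")

    iam.create_user(UserName="s3-readonly")
    iam.attach_user_policy(
        UserName="s3-readonly",
        PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    )
    keys = iam.create_access_key(UserName="s3-readonly")["AccessKey"]
    print(keys["AccessKeyId"])   # hand this key pair to the read-only client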
Yes, you can. The S3 API documentation describes the Authentication and Access Control services available to you. You can set up a bucket so that another Amazon S3 account can read but not modify items in the bucket.
Check out the details at http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/index.html?UsingAuthAccess.html (follow the link to "Using Query String Authentication") - this is a subdocument to the one Greg posted, and describes how to generate access URLs on the fly.
This uses a hashed form of the private key and allows expiration, so you can give brief access to files in a bucket without allowing unfettered access to the rest of the S3 store.
Constructing the REST URL is quite difficult; it took me about 3 hours of coding to get it right, but this is a very powerful access technique.
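(These days an SDK will do the query-string signing for you; a sketch with a placeholder bucket and key:)

    # Hedged sketch: the SDK builds the signed query-string URL for you.
    import boto3

    url = boto3.client("s3").generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-bucket", "Key": "reports/latest.csv"},
        ExpiresIn=900,   # the link stops working after 15 minutes
    )
    print(url)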