AWS Cognito IAM policy - How to limit access to S3 folder (.NET SDK ListObjectsV2) - amazon-s3

I am trying to limit access for a Cognito user to specific folders in a bucket. The final target is to reach what is described here but I've simplified it for debugging.
The structure is as follows
MYBUCKET/111/content_111_1.txt
MYBUCKET/111/content_111_2.txt
MYBUCKET/222/content_222_1.txt
MYBUCKET/333/
I am performing a simple "list" call via the SDK.
using (AmazonS3Client s3Client = new AmazonS3Client(cognitoAWSCredentials))
{
    ListObjectsV2Request listRequest = new()
    {
        BucketName = "MYBUCKET"
    };
    ListObjectsV2Response listResponse = await s3Client.ListObjectsV2Async(listRequest);
}
I am authenticating via Cognito so I am updating Cognito's IAM policy linked to the authenticated role.
The following policy returns an S3 exception "Access Denied":
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::MYBUCKET",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "111",
            "111/*"
          ]
        }
      }
    }
  ]
}
The following policy returns all results (as expected).
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::MYBUCKET"
    }
  ]
}
This is supposed to be super straightforward (see here). There are other similar questions (such as this and others) but with no final answer.
How should I write the IAM policy so that authenticated users can only access the contents of the folder "111"?
Best regards,
Andrej

I hope I now understand what I got wrong. "s3:prefix" is not some form of "filter that will only return the objects that match the prefix"; it is "a parameter that forces the caller to provide specific prefix information when executing the operation".
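To make that distinction concrete, here is a rough Python sketch of how IAM's StringLike comparison treats the s3:prefix condition key (my own simplification for illustration, not AWS code). When the ListObjectsV2 call sends no Prefix at all, the key resolves to an empty string and matches neither pattern, hence the deny:

```python
from fnmatch import fnmatchcase

def prefix_allowed(request_prefix, allowed_patterns):
    """Sketch of IAM StringLike matching for the s3:prefix condition key."""
    value = request_prefix or ""  # no Prefix in the request -> empty string
    return any(fnmatchcase(value, pattern) for pattern in allowed_patterns)

patterns = ["111", "111/*"]
prefix_allowed(None, patterns)       # False: a list call with no Prefix
prefix_allowed("111", patterns)      # True: Prefix = "111"
prefix_allowed("111/sub", patterns)  # True: matches "111/*"
prefix_allowed("222", patterns)      # False
```

In other words, the deny comes from the missing Prefix parameter in the request, not from which objects happen to be in the bucket.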
To answer my own question, starting from the IAM policy above
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::MYBUCKET",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "111",
            "111/*"
          ]
        }
      }
    }
  ]
}
If I call the SDK with the code below, I will indeed get an "Access Denied" because I have not specified a prefix that matches the IAM policy.
using (AmazonS3Client s3Client = new AmazonS3Client(cognitoAWSCredentials))
{
    ListObjectsV2Request listRequest = new()
    {
        BucketName = "MYBUCKET"
    };
    ListObjectsV2Response listResponse = await s3Client.ListObjectsV2Async(listRequest);
}
But if I do specify the prefix in my SDK call, S3 will return the expected results, i.e., only the objects whose keys start with "111".
using (AmazonS3Client s3Client = new AmazonS3Client(cognitoAWSCredentials))
{
    ListObjectsV2Request listRequest = new()
    {
        BucketName = "MYBUCKET",
        Prefix = "111"
    };
    ListObjectsV2Response listResponse = await s3Client.ListObjectsV2Async(listRequest);
}
In other words, my problem was not in the way I had written the IAM policy but in the way I was expecting the "s3:prefix" to work.

Related

unable to assume role with gitlab oidc and AWS

I have configured the IAM Role with the below definition. I am getting the AccessDenied error when I configure the condition below. Where am I going wrong?
Access Denied
"Condition": {
  "StringEquals": {
    "gitlab.com:sub": "https://gitlab.com/pradeepkumarl/configure-openid-connect-in-aws::ref_type:branch:ref:main"
  }
}
Total policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<account-id>:oidc-provider/gitlab.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "gitlab.com:sub": "https://gitlab.com/pradeepkumarl/configure-openid-connect-in-aws::ref_type:branch:ref:main"
        }
      }
    }
  ]
}
There is a mistake in the sub field under your condition. It should be of the form project_path:{group}/{project}:ref_type:branch:ref:{branch_name}. You don't need to include the GitLab URL.
Also keep in mind that you may need to change the condition from "StringEquals" to "StringLike" to accommodate wildcards, as mentioned in the troubleshooting section of the documentation: https://docs.gitlab.com/ee/ci/cloud_services/aws/index.html
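For illustration, a small Python helper (the function name is mine, purely hypothetical) that assembles the sub claim in the documented project_path:...:ref_type:...:ref:... layout:

```python
def gitlab_sub(project_path, ref_type, ref):
    """Build a GitLab OIDC sub claim value; note there is no
    'https://gitlab.com/' URL prefix anywhere in the claim."""
    return f"project_path:{project_path}:ref_type:{ref_type}:ref:{ref}"

claim = gitlab_sub("pradeepkumarl/configure-openid-connect-in-aws", "branch", "main")
# claim == "project_path:pradeepkumarl/configure-openid-connect-in-aws:ref_type:branch:ref:main"
```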

Creating S3 policy with terraform

I am trying to create an S3 bucket and apply a policy to it. The bucket creation steps are fine, but when I try to apply the policy below I am not able to find the bug in this tf file.
The terraform version is - Terraform v0.12.23
{
  "Sid": "DenyUnEncryptedConnection",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "*",
  "Resource": [
    "arn:aws:s3:::${var.s3_bucketName}",
    "arn:aws:s3:::${var.s3_bucketName}/*"
  ],
  "Condition": {
    "Bool": {
      "aws:SecureTransport": "false"
    }
  }
}
In my main.tf file this is what I am passing to the variables:
module "s3-bucket-policy" {
  source        = "../s3-policy/"
  s3_bucketName = "${aws_s3_bucket.s3_bucket.id}"
  bucket_arn    = "${aws_s3_bucket.s3_bucket.arn}"
  ....
The terraform plan command is giving me the policy as below (running it through a Jenkins job; copied out of the Jenkins log):
module.s3_bucket.module.s3-bucket-policy.aws_s3_bucket_policy.communication_policy[0] will be created
00:00:07.805   + resource "aws_s3_bucket_policy" "communication_policy" {
00:00:07.805       + bucket = (known after apply)
00:00:07.805       + id     = (known after apply)
00:00:07.805       + policy = (known after apply)
00:00:07.805     }
But when I try to apply the same, I get the below error and I am not sure how to proceed further.
00:01:13.117 Error: Error putting S3 policy: MalformedPolicy: Action does not apply to any resource(s) in statement
00:01:13.117 status code: 400
Any pointers on this will be very much appreciated
You need to supply a proper Action compatible with your bucket; change your policy to the following and it should work:
resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-bucket"
  acl    = "public-read"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnEncryptedConnection",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}
EOF
}
(I changed the resource to use bucket_arn but it should work with s3_bucketName the way you did it too.)
Note the "Action": "s3:*": this policy explicitly denies all actions on the bucket and its objects when the request meets the condition "aws:SecureTransport": "false" (i.e., it is not an HTTPS connection).
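As a sketch of that evaluation logic (a simplification of the real IAM engine, written here purely for illustration):

```python
from fnmatch import fnmatchcase

def denied_by_statement(action, uses_https):
    """Model of the Deny statement above: any action matching "s3:*"
    is denied when aws:SecureTransport resolves to "false"."""
    return fnmatchcase(action, "s3:*") and not uses_https

denied_by_statement("s3:GetObject", uses_https=False)  # True: plain HTTP is refused
denied_by_statement("s3:GetObject", uses_https=True)   # False: HTTPS passes through
```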

Access Denied while attempting to put an object into S3 bucket

I am trying to refactor some code to allow upload of large images.
Initially, the images were stored in S3 from a Lambda function, and it worked just fine in PROD. I have now extracted that part out of the function and am attempting to do it via the AWS SDK for Java.
This worked fine in the DEV environment because the bucket is public there. When I tested it with the PROD settings, I got an access denied error.
The bucket is private in PROD and the user has access to all S3 actions.
I could access the bucket using the AWS CLI but when I try it using the AWS Java SDK I get an 'Access Denied' error. This is the code in Java. I have explicitly set the region just to make sure it was getting the right one, although I know the region is the default region.
BasicAWSCredentials awsCreds = new BasicAWSCredentials(AWS_ACCESS_KEY, AWS_SECRET_KEY);
AmazonS3 s3client = AmazonS3ClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        .build();
String imageS3Url = null;
ObjectMetadata d = new ObjectMetadata();
try {
    s3client.putObject(new PutObjectRequest(S3_BUCKET_NAME, s3Key, stream, d));
    imageS3Url = "https://s3-" + S3_REGION_NAME + ".amazonaws.com/" + S3_BUCKET_NAME + "/" + s3Key;
} catch (Exception ex) {
    log.debug(ex.getMessage());
}
Am I missing any configuration to grant access to AWS java SDK to access the S3 bucket? The AWS Java SDK version is 1.11.411.
Here are the anonymized versions of the bucket and IAM User Policy:
{
  "Version": "2012-10-17",
  "Id": "PolicyABC",
  "Statement": [
    {
      "Sid": "Stmt123",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789:user/user-name"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucket-name"
    }
  ]
}
IAM user policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::bucket-name",
        "arn:aws:s3:::bucket-name/*"
      ]
    }
  ]
}

Connecting aspera on cloud with S3bucket

I used this policy on AWS to try connecting AoC with an S3 bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::880559705280:role/atp-aws-us-east-1-ts-atc-node"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringLike": {
          "sts:ExternalId": "crn:v1:bluemix:public:aspservice-service:global:a/2dd2425e9a424641a12855a1fd5e85ee:70740386-6ca4-4473-bf9b-69a1fd22be12:::c1893698-abfa-4934-a7ca-1a6d837df5e0"
        }
      }
    }
  ]
}
but when copied on Bucket Policy, I receive Error: Statement is missing required element.
What is wrong?
That is a trust policy, not a bucket policy, which is why S3 rejects it with "Statement is missing required element" (a bucket policy statement also requires a Resource). You need to paste this policy into the Trust relationships tab of the IAM role instead.
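To see why S3 complains, compare the statement against the elements a bucket policy requires (a rough check written for illustration; the long ExternalId is elided):

```python
# The posted statement, reduced to its top-level elements.
statement = {
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::880559705280:role/atp-aws-us-east-1-ts-atc-node"},
    "Action": "sts:AssumeRole",
    "Condition": {"StringLike": {"sts:ExternalId": "crn:v1:..."}},
}

# A bucket policy statement must also say WHICH bucket/objects it covers;
# a role trust policy omits Resource because the role itself is the target.
missing = [el for el in ("Effect", "Action", "Resource") if el not in statement]
# missing == ["Resource"], matching "Statement is missing required element"
```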

How to access AWS S3 bucket logging in with Google and Cognito User/Identity Pool

I'm using the AWS Cognito Enhanced (Simplified) Flow to get the Cognito Identity Credentials providing the idToken received after logging in with Google sign-in api:
export function getAWSCredentialWithGoogle(authResult) {
  if (authResult['idToken'] != null) {
    AWS.config.region = 'eu-central-1';
    AWS.config.credentials = new AWS.CognitoIdentityCredentials({
      IdentityPoolId: 'eu-central-1:xxxxxxxxxxxxxxxxxxxxxxxx',
      Logins: {
        'accounts.google.com': authResult['idToken']
      }
    })
    return AWS.config.credentials.getPromise()
      .then(
        function() {
          return getAWSCredentials(AWS.config.credentials);
        },
        function(err) {
          // Surface credential errors instead of swallowing them silently
          console.error(err);
        }
      )
  } else {
    console.log('no auth code found!');
  }
}
I get the:
accessKeyId: "ASIAXXXXXX",
secretAccessKey: "ta4eqkCcxxxxxxxxxxxxxxxxxxx",
sessionToken: "xxxxxxxxx...etc..."
Then I try to upload a picture to an S3 bucket passing the above received accessKeyId and secretAccessKey.
But I receive this error result:
InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records. (AWSAccessKeyId: ASIAXXXXXXXXXXXXXXXX)
This is how I set up the AWS S3 (managed) policy (the resource policy for the bucket is the default one) to access the bucket programmatically:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::xxxxxxxxxx",
        "arn:aws:s3:::xxxxxxxxxx/users"
      ],
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "${cognito-identity.amazonaws.com:sub}/*"
          ]
        }
      }
    },
    {
      "Action": [
        "s3:PutObject",
        "s3:GetObjectAcl",
        "s3:GetObject",
        "s3:PutObjectVersionAcl",
        "s3:DeleteObject",
        "s3:PutObjectAcl",
        "s3:GetObjectVersion"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::xxxxxxxxx/users/${cognito-identity.amazonaws.com:sub}",
        "arn:aws:s3:::xxxxxxxxx/users/${cognito-identity.amazonaws.com:sub}/*"
      ]
    }
  ]
}
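For intuition, IAM substitutes the ${cognito-identity.amazonaws.com:sub} policy variable with the caller's Cognito identity ID at request time; here is a Python sketch of that substitution (my simplification, using a made-up identity ID):

```python
def resolve(resource_template, identity_id):
    """Mimic IAM policy-variable substitution for the Cognito sub variable."""
    return resource_template.replace("${cognito-identity.amazonaws.com:sub}", identity_id)

arn = resolve("arn:aws:s3:::xxxxxxxxx/users/${cognito-identity.amazonaws.com:sub}/*",
              "eu-central-1:11111111-2222-3333-4444-555555555555")
# arn == "arn:aws:s3:::xxxxxxxxx/users/eu-central-1:11111111-2222-3333-4444-555555555555/*"
```

This is what scopes each authenticated identity to its own users/&lt;identity-id&gt;/ prefix.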
this policy has been attached to an IAM role with the following trust relationship:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "cognito-identity.amazonaws.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "cognito-identity.amazonaws.com:aud": "eu-central-1xxxxxxxxxxx"
        },
        "ForAnyValue:StringLike": {
          "cognito-identity.amazonaws.com:amr": "authenticated"
        }
      }
    }
  ]
}
I've properly configured the Federated Identity Pool to use this role, and added Google as an OpenID Connect provider.
I also configured my Cognito Identity Pool to accept users federated with my Cognito User Pool by supplying the User Pool ID and the App Client ID.
I would like to give any Google sign-in authenticated user access to the S3 bucket, with read/write permission to his own directory.