Need help understanding inconsistent behaviour and KMS error when deploying changes via CDK

We are using CDK to create S3 buckets in multiple regions and to manage a CodePipeline that deploys to these cross-region, cross-account buckets.
We have seen some inconsistent behavior lately when deploying changes to the pipeline (adding a permission to the CodeBuild role).
We were getting the following error:
Policy contains a statement with one or more invalid principals. (Service: AWSKMS; Status Code: 400; Error Code: MalformedPolicyDocumentException;
We do not explicitly create KMS keys; the keys and their policies are auto-generated during the CDK deploy.
To resolve the error, we destroyed the existing S3 buckets and the pipeline and re-deployed them under a different name. The first deploy created the S3 buckets successfully in all regions except ap-southeast-2, which failed with the same error. When we rinsed and repeated a second time, it deployed fine.
We are not sure what is causing this behavior. Is this a known bug, or is there something we need to change in our code to resolve the inconsistency?
The following is the KMS key policy auto-generated by cdk synth for ap-southeast-2 while it was failing to deploy:
"KeyPolicy": {
  "Statement": [
    {
      "Action": [
        <Truncated>
      ],
      "Effect": "Allow",
      "Principal": {
        "AWS": {
          "Fn::Join": [
            "",
            [
              "arn:",
              { "Ref": "AWS::Partition" },
              ":iam::<Bucket_ACCOUNT_ID>:root"
            ]
          ]
        }
      },
      "Resource": "*"
    },
    {
      "Action": [
        <Truncated>
      ],
      "Effect": "Allow",
      "Principal": {
        "AWS": {
          "Fn::Join": [
            "",
            [
              "arn:",
              { "Ref": "AWS::Partition" },
              ":iam::<Bucket_ACCOUNT_ID>:root"
            ]
          ]
        }
      },
      "Resource": "*"
    },
    {
      "Action": [
        <Truncated>
      ],
      "Effect": "Allow",
      "Principal": {
        "AWS": {
          "Fn::Join": [
            "",
            [
              "arn:",
              { "Ref": "AWS::Partition" },
              ":iam::<Pipeline_Account_ID>:role/<role_name>"
            ]
          ]
        }
      },
      "Resource": "*"
    },
    {
      "Action": [
        <Truncated>
      ],
      "Effect": "Allow",
      "Principal": {
        "AWS": {
          "Fn::Join": [
            "",
            [
              "arn:",
              { "Ref": "AWS::Partition" },
              ":iam::<Pipeline_Account_ID>:role/<role_name>"
            ]
          ]
        }
      },
      "Resource": "*"
    }
  ]
}
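For what it's worth, KMS typically reports "invalid principals" when an AWS principal in the key policy is an IAM role or user ARN that does not exist (or no longer exists) at the moment KMS validates the policy. Account-root principals always validate, but a role principal like the <Pipeline_Account_ID> one above only validates once that role actually exists. A minimal Python sketch of that distinction (the account IDs and role name below are made up for illustration):

```python
import re

# IAM principal ARNs a KMS key policy can carry: account root,
# or a specific role/user (which must exist to validate).
ARN_RE = re.compile(r"^arn:aws[\w-]*:iam::\d{12}:(root|role/.+|user/.+)$")

def principals_needing_existence_check(key_policy: dict) -> list:
    """Return AWS-principal ARNs that must already exist in IAM,
    i.e. everything except account-root principals."""
    suspects = []
    for stmt in key_policy.get("Statement", []):
        aws = stmt.get("Principal", {}).get("AWS")
        if aws is None:
            continue
        for arn in (aws if isinstance(aws, list) else [aws]):
            if not ARN_RE.match(arn):
                raise ValueError(f"not an IAM principal ARN: {arn}")
            if not arn.endswith(":root"):
                suspects.append(arn)
    return suspects

# hypothetical stand-in for the synthesized key policy above
policy = {
    "Statement": [
        {"Effect": "Allow",
         "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
         "Action": "kms:*", "Resource": "*"},
        {"Effect": "Allow",
         "Principal": {"AWS": "arn:aws:iam::444455556666:role/PipelineRole"},
         "Action": "kms:Decrypt", "Resource": "*"},
    ]
}
print(principals_needing_existence_check(policy))
```

If the pipeline role is created (or deleted and recreated) in the same rollout, a retry after the role exists would succeed, which would explain why rinse-and-repeat eventually deployed fine.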

Related

How do I allow Cognito users access to private content on my AWS static S3 bucket?

I have spent weeks going in circles over this. I have a static S3 website with several 'folders'. I would like to allow public access to the 'root' and 'public' (css, javascript, etc.) folders, but want to restrict access to a 'user' folder.
I set up a User Pool & Group in Cognito that works well for my users (JWTs for customer usernames), but I am having one heck of a time connecting the dots! I have tried using the IAM policy (below), but I know I'm doing something wrong. Would love any suggestions on where to go from here...
Anyone see errors on my policy language? Will a policy similar to this work despite having "Block All Public Access" enabled?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessToMyBucket",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket",
      "Condition": {
        "StringEquals": {
          "s3:prefix": [ "", "public/*" ],
          "s3:delimiter": [ "/" ]
        }
      }
    },
    {
      "Sid": "AllowAccessToUserFolder",
      "Effect": "Allow",
      "Action": [ "s3:GetObject" ],
      "Resource": [ "arn:aws:s3:::mybucket/user" ],
      "Condition": {
        "StringLike": {
          "s3:prefix": [ "${aws:username}/*" ],
          "s3:delimiter": [ "/" ]
        }
      }
    }
  ]
}
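One thing worth noting about the policy above: s3:prefix and s3:delimiter are condition keys evaluated for s3:ListBucket against the bucket ARN, while s3:GetObject is matched against object ARNs, so the per-user prefix belongs in the Resource itself. A hedged sketch of the conventional split (bucket name taken from the question; note that with Cognito identity-pool credentials the per-user policy variable is typically ${cognito-identity.amazonaws.com:sub} rather than ${aws:username}):

```python
# Sketch of a per-user S3 prefix policy, assuming the bucket name
# "mybucket" from the question. s3:prefix only takes effect with
# ListBucket on the *bucket* ARN; GetObject is scoped by putting
# the prefix in the *object* ARN instead.
def per_user_policy(bucket: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [
            {   # listing: prefix condition, bucket ARN
                "Sid": "ListOwnFolder",
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
                "Condition": {
                    "StringLike": {"s3:prefix": ["user/${aws:username}/*"]}
                },
            },
            {   # reading objects: prefix goes into the object ARN
                "Sid": "ReadOwnObjects",
                "Effect": "Allow",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/user/${{aws:username}}/*",
            },
        ],
    }

p = per_user_policy("mybucket")
print(p["Statement"][1]["Resource"])
```

On the Block All Public Access question: that feature blocks public bucket policies and ACLs; it does not block authenticated requests that are allowed by an IAM policy attached to the caller's role, so a setup like this can coexist with it.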

Can't putObject to S3 from an ECS container

I have set up an ECS task containing two containers. The containers respond to requests just fine, but when they try to put items into S3 I get an error: AccessDenied: Access Denied.
I have attached the following new policy to the ecsTaskExecutionRole role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets",
        "s3:ListBucket"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
I also set the following environment variable when building the Docker images: ECS_ENABLE_TASK_IAM_ROLE=true
What am I missing here that makes me keep getting the AccessDenied error?
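A common cause of exactly this symptom: the S3 policy is attached to the execution role, but the credentials your application code receives inside the container come from the task role (taskRoleArn). Also, ECS_ENABLE_TASK_IAM_ROLE is a container-agent setting on the EC2 host, not an environment variable baked into the image. A sketch of the distinction (the ARNs and names below are hypothetical):

```python
# Sketch of the relevant task-definition fields. The execution
# role is what ECS itself uses (pulling images, writing logs);
# the credentials your application code picks up via the SDK come
# from taskRoleArn, so S3 permissions belong on that role.
task_definition = {
    "family": "my-task",
    # used by ECS itself (image pull, CloudWatch Logs)
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    # used by code *inside* the containers (boto3, AWS SDK, CLI)
    "taskRoleArn": "arn:aws:iam::123456789012:role/my-app-task-role",
    "containerDefinitions": [{"name": "app", "image": "my-image:latest"}],
}

# the S3 policy from the question should be attached to this role:
print(task_definition["taskRoleArn"])
```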

S3 Policy not working when resource is specified

I have a Rails app set up to upload files to S3.
I have an IAM user with an inline policy attached to the user.
When I use the following policy everything works just fine:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1494133349000",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
Now when I try to specify the ARN of my bucket, I get an access denied error in my app.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1494133349000",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket"
      ]
    }
  ]
}
The ARN is copied directly from my bucket. I have no clue why the second policy doesn't work; it should, according to everything I've read.
This is your bucket:
  "Resource": [
    "arn:aws:s3:::my-bucket"
  ]
This is your bucket and the objects in your bucket:
  "Resource": [
    "arn:aws:s3:::my-bucket",
    "arn:aws:s3:::my-bucket/*"
  ]
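The bucket/object distinction above can be checked locally: IAM matches the requested resource ARN against each Resource pattern, and object-level actions like s3:GetObject are evaluated against object ARNs, which the bare bucket ARN never matches. A small Python sketch, using fnmatch as a stand-in for IAM's * wildcard:

```python
import fnmatch

# Why "s3:*" on the bucket ARN alone still denies object
# operations: the bucket ARN has no wildcard, so it can never
# match an object ARN like arn:aws:s3:::my-bucket/key.
bucket_arn = "arn:aws:s3:::my-bucket"
object_arn_pattern = "arn:aws:s3:::my-bucket/*"

def resource_matches(pattern: str, resource: str) -> bool:
    # fnmatch's "*" matches "/" too, like IAM's wildcard
    return fnmatch.fnmatchcase(resource, pattern)

# an object operation targets an object ARN:
target = "arn:aws:s3:::my-bucket/uploads/avatar.png"
print(resource_matches(bucket_arn, target))          # bucket ARN: no match
print(resource_matches(object_arn_pattern, target))  # object pattern: match
```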

CREATE_FAILED BucketPolicy - Unknown field Fn::Join

My CloudFormation stack fails and keeps getting rolled back because of the following S3 bucket policy. The referenced S3 bucket is a separate bucket meant for CloudTrail logs (I read that this is best practice when using CloudTrail). The bucket gets created along with the rest of the stack during the CloudFormation run: [stackname]-cloudtraillogs-[randomstring]
I tried not using any functions to specify the bucket, but that doesn't seem to work. My guess is that it then looks for a bucket named 'cloudtraillogs' and can't find any bucket with that name. Using Fn::Join with a reference might solve that(?), but then CloudFormation reports 'Unknown field Fn::Join' when evaluating the bucket policy.
Can anyone spot what I might be doing wrong here?
Bucket policy:
{
  "Resources": {
    "policycloudtraillogs": {
      "Type": "AWS::S3::BucketPolicy",
      "Properties": {
        "Bucket": {
          "Ref": "cloudtraillogs"
        },
        "PolicyDocument": {
          "Statement": [
            {
              "Sid": "AWSCloudTrailAclCheck20160224",
              "Effect": "Allow",
              "Principal": {
                "Service": "cloudtrail.amazonaws.com"
              },
              "Action": "s3:GetBucketAcl",
              "Resource": {
                "Fn::Join": [
                  "",
                  [
                    "arn:aws:s3:::",
                    {
                      "Ref": "cloudtraillogs"
                    },
                    "/*"
                  ]
                ]
              },
            {
              "Sid": "AWSCloudTrailWrite20160224",
              "Effect": "Allow",
              "Principal": {
                "Service": "cloudtrail.amazonaws.com"
              },
              "Action": "s3:PutObject",
              "Resource": {
                "Fn::Join": [
                  "",
                  [
                    "arn:aws:s3:::",
                    {
                      "Ref": "cloudtraillogs"
                    },
                    "/AWSLogs/myAccountID/*"
                  ]
                ]
              },
              "Condition": {
                "StringEquals": {
                  "s3:x-amz-acl": "bucket-owner-full-control"
                }
              }
            }
          ]
        }
      }
    }
  }
}
Your template is not valid JSON. Your first policy statement (AWSCloudTrailAclCheck20160224) is missing a closing brace }: after its Resource object, the statement itself is never closed.
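The 'Unknown field Fn::Join' message is likely a downstream symptom of that unbalanced brace confusing the parser. A plain JSON parse catches this class of mistake locally before any upload; a minimal sketch (the embedded template fragment is a made-up reduction of the bug, not the original):

```python
import json

def check_template(body: str) -> str:
    """Return 'ok' or the parser's complaint; a cheap local check
    before handing a JSON template to CloudFormation."""
    try:
        json.loads(body)
        return "ok"
    except json.JSONDecodeError as e:
        return f"line {e.lineno}, col {e.colno}: {e.msg}"

# minimal reproduction of the bug: the first statement's Resource
# value is closed, but the statement object itself never is
broken = """
{ "Statement": [
    { "Sid": "AclCheck",
      "Resource": { "Fn::Join": ["", ["arn:aws:s3:::", "bucket", "/*"]] },
    { "Sid": "Write" }
] }
"""
print(check_template(broken))
```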

s3 bucket policy to add exception

Hi, I am trying to write the permissions policy for access to my bucket.
I want to deny access to one particular user agent and allow access to all other user agents. With the policy below, access is denied to everyone.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1456658595000",
      "Effect": "Deny",
      "Action": [
        "s3:*"
      ],
      "Condition": {
        "StringLike": {
          "aws:UserAgent": "NSPlayer"
        }
      },
      "Resource": [
        "arn:aws:s3:::bucket/"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::bucket/*"
      ]
    }
  ]
}
Please let me know how I should write the policy so that everyone except that one user agent can access the bucket.
It has to be written this way!
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SID",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:UserAgent": "NSPlayer"
        }
      },
      "Resource": [
        "*"
      ]
    }
  ]
}
This solution works as long as the bucket objects are not public read/write. A related answer is here: Deny access to user agent to access a bucket in AWS S3
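The pattern above works because StringNotLike inverts the match inside a single Allow statement, instead of pairing a Deny with an unconditional Allow that the Deny then overrides. A small Python sketch of the condition semantics (fnmatch stands in for IAM's wildcard matching; note that aws:UserAgent is client-supplied and easily spoofed, so this should not be treated as a security boundary):

```python
import fnmatch

# Mimics StringNotLike on aws:UserAgent: the request is allowed
# only when the user agent does NOT match the pattern, so a
# single Allow statement suffices -- no Deny needed.
def allowed(user_agent: str, pattern: str = "NSPlayer") -> bool:
    return not fnmatch.fnmatchcase(user_agent, pattern)

print(allowed("Mozilla/5.0"))  # True  -> request allowed
print(allowed("NSPlayer"))     # False -> request not allowed
```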