AWS S3 Bucket as an internal website for documentation with automatic updates from CI

I want to set up an S3 bucket that receives internal documentation from Travis-CI whenever we build master; the upload itself already works fine. However, I want to limit the documentation to be visible only from our company IPs, which are static. This is the policy I am trying to use:
{
  "Version": "2012-10-17",
  "Id": "InternalDocs",
  "Statement": [
    {
      "Sid": "Docs User",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::000000000000:user/docs"
      },
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::docs",
        "arn:aws:s3:::docs/*"
      ]
    },
    {
      "Sid": "Whitelisted Read",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "*",
      "Resource": "arn:aws:s3:::docs/*",
      "Condition": {
        "StringNotLike": {
          "aws:userid": "000000000000"
        },
        "NotIpAddress": {
          "aws:SourceIp": [
            "0.0.0.0/32",
            "0.0.0.0/32"
          ]
        }
      }
    }
  ]
}
Please note that 0.0.0.0, 000000000000, arn:aws:s3:::docs and arn:aws:iam::000000000000:user/docs are placeholders, because I don't want to show our actual names, IDs or IPs.
The problem I end up with is that "Principal": "*" matches every principal, authenticated or not, so the Deny statement ends up blocking everything. The result is that anyone at the company's static IPs can access the bucket, but Travis-CI is blocked from uploading new versions of the docs.
How can I allow Travis-CI to upload to my S3 bucket while still allowing only specific IPs to view the documentation using s3:GetObject and blocking everyone else?
I use the deploy feature on Travis-CI with the AWS access key and access secret for the user arn:aws:iam::000000000000:user/docs.
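One approach that should work, as a sketch (untested against this exact setup): keep the Allow statement for the docs user as-is, but scope the Deny to s3:GetObject only and exclude the CI user by ARN using the global condition key aws:PrincipalArn, so uploads never hit the IP check. The ARN and IPs below reuse the question's placeholders; if you stay with aws:userid instead, the value needs to be the user's unique ID (the AIDA... value shown by aws iam get-user), not the account number.
{
  "Sid": "Whitelisted Read",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::docs/*",
  "Condition": {
    "StringNotLike": {
      "aws:PrincipalArn": "arn:aws:iam::000000000000:user/docs"
    },
    "NotIpAddress": {
      "aws:SourceIp": ["0.0.0.0/32"]
    }
  }
}
Both condition operators must match for the Deny to take effect, so the docs user escapes it from any IP, and requests from the whitelisted IPs escape it for any principal.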

Related

S3 + Cloudfront : files are allowed but folders are blocked

I have a problem where files are perfectly accessible from my bucket, but folders throw an access denied error. Does anyone know how I can solve this? I want my folders to be accessible too.
Here is my bucket policy:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "DenyS3PublicObjectACL",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObjectAcl",
      "Resource": [
        "arn:aws:s3:::xxxxx/*",
        "arn:aws:s3:::xxxxx"
      ],
      "Condition": {
        "StringEqualsIgnoreCaseIfExists": {
          "s3:x-amz-acl": [
            "authenticated-read",
            "public-read",
            "public-read-write"
          ]
        }
      }
    },
    {
      "Sid": "2",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXX"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::xxxxx/*"
    }
  ]
}
Amazon S3 does not generate directory listing pages; requesting a folder URL returns Access Denied rather than an index of its contents.
Reference: S3 allow public directory listing of parent folder?
If you want a directory-structure explorer, see one of these plugins:
https://github.com/awslabs/aws-js-s3-explorer - Recommended
https://github.com/rufuspollock/s3-bucket-listing

AWS S3 sync inconsistent failure when attempting to sync to another bucket in a different account with kms in the mix

Executive summary of the problem: I have a bucket, call it bucket A, that is set up with a default customer KMS key (call its ID 1111111) in one account, which we will call 123. In that bucket there are two objects under the same path, with the same KMS key ID and the same owner. When I attempt to sync these to a new bucket B in a different account, call it 456, one syncs over successfully but the other does not, and instead I get:
An error occurred (AccessDenied) when calling the CopyObject operation: Access Denied
Has anyone seen inconsistent behavior like this before? I say inconsistent because there is absolutely no difference in the access rights between these objects, yet one succeeds and the other doesn't. Note: my summary says two objects for simplicity, but in one of my real cases there are 30 objects where 2 copy over and the rest fail, and some other paths show different mixed results.
The following describes the setup, with some data obfuscated for security, but in a consistent manner:
Bucket A (com.mycompany.datalake.us-east-1) Bucket Policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::123:root",
          "arn:aws:iam::456:root"
        ]
      },
      "Action": [
        "s3:PutObjectTagging",
        "s3:PutObjectAcl",
        "s3:PutObject",
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::com.mycompany.datalake.us-east-1/security=0/*",
        "arn:aws:s3:::com.mycompany.datalake.us-east-1"
      ]
    },
    {
      "Sid": "DenyIfNotGrantingFullAccess",
      "Effect": "Deny",
      "Principal": {
        "AWS": [
          "arn:aws:iam::123:root",
          "arn:aws:iam::456:root"
        ]
      },
      "Action": "s3:PutObject",
      "Resource": [
        "arn:aws:s3:::com.mycompany.datalake.us-east-1/security=0/*",
        "arn:aws:s3:::com.mycompany.datalake.us-east-1"
      ],
      "Condition": {
        "StringNotLike": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    },
    {
      "Sid": "DenyIfNotUsingExpectedKmsKey",
      "Effect": "Deny",
      "Principal": {
        "AWS": [
          "arn:aws:iam::123:root",
          "arn:aws:iam::456:root"
        ]
      },
      "Action": "s3:PutObject",
      "Resource": [
        "arn:aws:s3:::com.mycompany.datalake.us-east-1/security=0/*",
        "arn:aws:s3:::com.mycompany.datalake.us-east-1"
      ],
      "Condition": {
        "StringNotLike": {
          "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:us-east-1:123:key/1111111"
        }
      }
    }
  ]
}
Also in the source account, I have created a role to be assumed, which I call datalake_full_access_role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::com.mycompany.datalake.us-east-1/security=0/*",
        "arn:aws:s3:::com.mycompany.datalake.us-east-1"
      ]
    }
  ]
}
This role has a trust relationship with account 456. Also worth mentioning: the policy for KMS key 1111111 is currently wide open:
{
  "Version": "2012-10-17",
  "Id": "key-default-1",
  "Statement": [
    {
      "Sid": "Enable IAM User Permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "kms:Encrypt*",
        "kms:Decrypt*",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:Describe*"
      ],
      "Resource": "*"
    }
  ]
}
Now for the target bucket B (mycompany-us-west-2-datalake) in account 456, the Bucket Policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AccountBasedAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::456:root"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::mycompany-us-west-2-datalake",
        "arn:aws:s3:::mycompany-us-west-2-datalake/*"
      ]
    }
  ]
}
To do the migration (the sync), I provision an EC2 instance in account 456 and attach an instance profile with the following policies:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::123:role/datalake_full_access_role"
    }
  ]
}

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:DescribeKey",
        "kms:ReEncrypt*",
        "kms:CreateGrant",
        "kms:Decrypt"
      ],
      "Resource": [
        "arn:aws:kms:us-east-1:123:key/1111111"
      ]
    }
  ]
}

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::com.mycompany.datalake.us-east-1",
        "arn:aws:s3:::com.mycompany.datalake.us-east-1/security=0/*"
      ]
    }
  ]
}

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::mycompany-us-west-2-datalake",
        "arn:aws:s3:::mycompany-us-west-2-datalake/*"
      ]
    }
  ]
}
On the EC2 instance I install the latest AWS CLI version:
$ aws --version
aws-cli/1.16.297 Python/3.5.2 Linux/4.4.0-1098-aws botocore/1.13.33
and then run my sync command:
aws s3 sync s3://com.mycompany.datalake.us-east-1 s3://mycompany-us-west-2-datalake --source-region us-east-1 --region us-west-2 --acl bucket-owner-full-control --exclude '*' --include '*/zone=raw/Event/*' --no-progress
I believe I've done my homework and this should all work; for several objects it does, but not for all, and I have nothing else up my sleeve to try at this point. Note that I have been 100% successful syncing to a local directory on the EC2 instance and then from the local directory to the new bucket, with the following two calls:
aws s3 sync s3://com.mycompany.datalake.us-east-1 datalake --source-region us-east-1 --exclude '*' --include '*/zone=raw/Event/*' --no-progress
aws s3 sync datalake s3://mycompany-us-west-2-datalake --region us-west-2 --acl bucket-owner-full-control --exclude '*' --include '*/zone=raw/Event/*' --no-progress
This makes no sense to me, as from an access point of view there is no difference. Below are the attributes of two objects in the source bucket, one that succeeds and one that fails:
Successful object:
Owner: Dev.Awsmaster
Last modified: Jan 12, 2019 10:11:48 AM GMT-0800
Etag: 12ab34
Storage class: Standard
Server-side encryption: AWS-KMS
KMS key ID: arn:aws:kms:us-east-1:123:key/1111111
Size: 9.2 MB
Key: security=0/zone=raw/Event/11_96152d009794494efeeae49ed10da653.avro
Failed object:
Owner: Dev.Awsmaster
Last modified: Jan 12, 2019 10:05:26 AM GMT-0800
Etag: 45cd67
Storage class: Standard
Server-side encryption: AWS-KMS
KMS key ID: arn:aws:kms:us-east-1:123:key/1111111
Size: 3.2 KB
Key: security=0/zone=raw/Event/05_6913583e47f457e9e25e9ea05cc9c7bb.avro
ADDENDUM: After looking through several cases I am starting to see a pattern; there may be an issue when the object is too small. In 10 out of 10 directories analyzed where some but not all objects synced successfully, every object that succeeded was 8 MB or larger and every object that failed was under 8 MB. Could this be a bug in aws s3 sync when KMS is in the mix? I wonder whether I can tweak ~/.aws/config to address this.
I found a solution, although I still think this is a bug in aws s3 sync. By setting the following in ~/.aws/config, all objects synced successfully:
[default]
output = json
s3 =
  signature_version = s3v4
  multipart_threshold = 1
I already had signature_version, but I am including it for completeness in case someone has a similar need. The new entry is multipart_threshold = 1, which means an object of any size will trigger a multipart upload. I didn't specify multipart_chunksize, which according to the documentation defaults to 5 MB.
Honestly, this requirement doesn't make sense: it shouldn't matter whether the object was originally uploaded to S3 using multipart or not, and I know it doesn't matter when KMS isn't involved, but apparently it does when it is.
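For what it's worth, the same settings can also be applied with aws configure set rather than editing the file by hand (assuming the default profile):
aws configure set default.s3.signature_version s3v4
aws configure set default.s3.multipart_threshold 1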

AWS S3 bucket policy to block source IP address not working

I know this question has been asked a few times and I have gone through some documents and examples on this, but I am still not able to get it working.
I want to block access to my S3 bucket from one particular IP address and allow all others. I do not want to block instances belonging to an IAM role, hence I am using the NotIpAddress condition for this. Below is the policy I applied on my bucket:
{
  "Version": "2012-10-17",
  "Id": "Policy1486984747194",
  "Statement": [
    {
      "Sid": "AllowAllExceptOneIP",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my-test-bucket",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "52.38.90.46"
        }
      }
    }
  ]
}
But this policy isn't working: I am still able to upload files to my bucket from this machine. (I am using s3-curl.pl to upload my files for testing.)
Can someone please help me find what is wrong here? Thanks.
To block all actions on an S3 bucket from a particular IP, the policy needs a separate Deny statement for that IP. An Allow statement whose condition excludes an IP does not block that IP; requests from it can still be allowed by other policies (for example, the instance's IAM role), and only an explicit Deny overrides those. Sample:
{
  "Version": "2012-10-17",
  "Id": "Policy1487062767078",
  "Statement": [
    {
      "Sid": "AllowAll",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-test-bucket",
        "arn:aws:s3:::my-test-bucket/*"
      ]
    },
    {
      "Sid": "DenyIP",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-test-bucket",
        "arn:aws:s3:::my-test-bucket/*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "52.38.90.46"
        }
      }
    }
  ]
}
Action and Resource can be changed based on what one needs to block.
Thanks a lot @SergeyKovalev for helping me with this solution.
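A quick way to verify the Deny from the blocked machine (52.38.90.46 in the sample above; the file name here is just an example) is to attempt an upload, which should now fail with AccessDenied while still succeeding from any other address:
aws s3 cp test.txt s3://my-test-bucket/test.txt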

S3 Invalid Resource in bucket policy

I'm trying to make my entire S3 bucket public, but when I try to add the policy:
{
  "Id": "Policy1454540872039",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1454540868094",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::sneakysnap/*",
      "Principal": {
        "AWS": [
          "985506495298"
        ]
      }
    }
  ]
}
It tells me that my "Resource is invalid", but that is definitely the right ARN and definitely the right bucket name. Does anyone know what's going on?
I had this "problem" when I was trying to set a policy on the wrong bucket. That is, the ARN in my policy read arn:aws:s3:::my-bucket-A/* but I was attempting to set it on my-bucket-B.
I solved the problem by making the Resource match the bucket's actual name:
arn:aws:s3:::your-bucket-name-here/*
If you are creating a policy for an access point, it appears that AWS will only accept the following format:
i) the account ID and region must be specified; and
ii) the literal string object must be included (object is not your bucket name):
arn:aws:s3:region:accountid:accesspoint/myaccesspointname/object/*
I found this answer here: https://forums.aws.amazon.com/thread.jspa?threadID=315596
I faced the same issue, and the following fixed the error for me; I hope it helps anyone facing the same. You need to specify the account ID that corresponds to the region of your load balancer and bucket:
"Principal": {
  "AWS": [
    "*********"
  ]
},
Update the placeholder account ID accordingly; this resolved the issue for me.
See also the Bucket Permissions section of Access Logs for Your Application Load Balancer.
I also had the same problem! I was using the wrong bucket name, so I corrected it and it worked for me. Best of luck!
I was getting this error as well. The following change fixed it, and I have no idea why.
This bucket threw the error: bleeblahblo-stuff
This worked: bleeblahblostuff
Maybe it was the dash, maybe the bucket length, or maybe a combination of the two? Both buckets had the same settings. Hmmm.
I was facing the same problem: I was not using the correct resource name. I changed the resource name to exactly match the bucket the policy was for, e.g.
"Resource": "arn:aws:s3:::abc/*"
to
"Resource": "arn:aws:s3:::abc12/*"
My problem was that when I created my S3 bucket, the following settings were true by default:
Manage public access control lists (ACLs):
Block new public ACLs and uploading public objects (Recommended): True
Remove public access granted through public ACLs (Recommended): True
Manage public bucket policies:
Block new public bucket policies (Recommended): True
Block public and cross-account access if bucket has public policies (Recommended): True
I had to set all of these to false in order to change my bucket policy.
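For reference, these settings map to the bucket's public access block configuration, so the same change can be made from the CLI; a sketch, with my-bucket as a placeholder name:
aws s3api put-public-access-block --bucket my-bucket \
  --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false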
If you're trying the AWS startup workshop, try closing the website-bucket-policy.json file and re-opening it. That worked for me; I guess the update to the JSON file is not saved automatically unless you close it.
Check whether the bucket name you are specifying in Resource actually exists. The answer above from Vitaly solved my issue.
The problem I realized I had was that my bucket name had a ".com" extension, which needs to be included in your ARN.
To add to iamsohel's answer: I had this same issue when trying to set an S3 policy for enabling Elastic Load Balancer access logs using Terraform.
Here's the policy I was trying to set, from Access logs for your Application Load Balancer:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::elb-account-id:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::bucket-name/prefix/AWSLogs/your-aws-account-id/*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::bucket-name/prefix/AWSLogs/your-aws-account-id/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::bucket-name"
    }
  ]
}
But I wanted to add some variables to the policy. My initial policy looked like this:
bucket_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::${var.bucket_name.2}"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::${var.bucket_name.2}",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::${var.bucket_name.2}"
    }
  ]
}
EOF
But this was throwing an error:
Error: Error putting S3 policy: MalformedPolicy: Policy has invalid resource
│ status code: 400, request id: 3HHH9QK9SKB1V9Z0, host id: 8mOrnGi/nsHIcz59kryeriVExU7v+XgGpTw64GHfhjgkwhA3WKSfG7eNbgkMgBMA8qYlyUTLYP8=
│
│ with module.s3_bucket_policy_1.aws_s3_bucket_policy.main,
│ on ../../../../modules/aws/s3-bucket-policy/main.tf line 1, in resource "aws_s3_bucket_policy" "main":
│ 1: resource "aws_s3_bucket_policy" "main" {
All I had to do was add /* to the end of the ARN for the bucket resource in the two s3:PutObject statements:
bucket_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::${var.bucket_name.2}/*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::${var.bucket_name.2}/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::${var.bucket_name.2}"
    }
  ]
}
EOF
In my case it was the missing GovCloud partition in the ARN, so the resource had to be
"arn:aws-us-gov:s3:::grcsimpletest"
rather than
"arn:aws:s3:::grcsimpletest"
Strangely, the policy that failed was from an AWS doc. That said, it kind of clicked when I edited the policy in the S3 console and it showed the bucket ARN on the edit screen.

Using two policies together in a single S3 bucket

I am new to Amazon S3 and just created my first bucket. I need two important policies implemented on the bucket, which are as follows:
First, a policy allowing downloads only from my own website (via the HTTP referrer).
Second, I want to make all objects in the bucket public.
I have two different policy documents for these needs, but I can't put them together to achieve both goals. Please help me join these two policies so I achieve what I want.
For allowing referrer downloads:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests originated from www.example.com and example.com",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "URL/*"
          ]
        }
      }
    }
  ]
}
For making objects public:
{
  "Sid": "...",
  "Action": [
    "s3:GetObject"
  ],
  "Effect": "Allow",
  "Resource": "arn:aws:s3:::bucket/*",
  "Principal": {
    "AWS": [ "*" ]
  }
}
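One way to combine them, as a sketch: note the two goals pull against each other, since a fully public s3:GetObject allow would make the referrer condition pointless (anyone could fetch objects directly). If the intent is public read gated by referrer, a single merged statement is enough; URL/* stays as the question's placeholder:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "PublicReadOnlyFromMySite",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": ["URL/*"]
        }
      }
    }
  ]
}
Keep in mind that aws:Referer is trivially spoofable, so this works as a deterrent rather than real access control.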