Copying data from S3 to Redshift - Access denied - amazon-s3

We are having trouble copying files from S3 to Redshift. The S3 bucket in question allows access only from a VPC in which we have a Redshift cluster. We have no problems copying from public S3 buckets. We tried both the key-based and the IAM-role-based approach, but the result is the same: we keep getting 403 Access Denied from S3. Any idea what we are missing? Thanks.
EDIT:
Queries we use:
1. Using the IAM role:
copy redshift_table from 's3://bucket/file.csv.gz' credentials 'aws_iam_role=arn:aws:iam::123456789:role/redshift-copyunload' delimiter '|' gzip;
2. Using access keys:
copy redshift_table from 's3://bucket/file.csv.gz' credentials 'aws_access_key_id=xxx;aws_secret_access_key=yyy' delimiter '|' gzip;
S3 policy for IAM Role (first query) and IAM user (second query) is:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt123456789",
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::bucket/*"
]
}
]
}
The bucket has a policy denying access from anywhere other than the VPC (the Redshift cluster is in this VPC):
{
"Version": "2012-10-17",
"Id": "VPCOnlyPolicy",
"Statement": [
{
"Sid": "Access-to-specific-VPC-only",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::bucket/*",
"arn:aws:s3:::bucket"
],
"Condition": {
"StringNotEquals": {
"aws:sourceVpc": "vpc-123456"
}
}
}
]
}
We have no problem loading from publicly accessible buckets, and if we remove this bucket policy we can copy the data with no problems.
The bucket is in the same region as the Redshift cluster.
When we run the IAM role (redshift-copyunload) through the policy simulator, it returns "permission allowed".

Enable "Enhanced VPC Routing" on your Redshift. Without the "Enhanced VPC Routing" your Redshift traffic will be coming via Internet and your S3 bucket policy will deny access. See here:
https://docs.aws.amazon.com/redshift/latest/mgmt/enhanced-vpc-enabling-cluster.html
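A minimal boto3 sketch of turning that setting on (the cluster identifier is a placeholder; note that changing it restarts the cluster, and Enhanced VPC Routing also expects an S3 gateway endpoint or a NAT in the cluster's VPC):
import boto3

# Placeholder cluster name; enabling Enhanced VPC Routing keeps COPY/UNLOAD traffic
# inside the VPC so the bucket policy's aws:sourceVpc condition can match.
redshift = boto3.client('redshift', region_name='us-east-1')
redshift.modify_cluster(
    ClusterIdentifier='my-redshift-cluster',
    EnhancedVpcRouting=True,
)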

1. Check the bucket's encryption. According to the docs (https://docs.aws.amazon.com/en_us/redshift/latest/dg/c_loading-encrypted-files.html), the COPY command automatically recognizes and loads files encrypted using SSE-S3 and SSE-KMS.
2. Check the kms: permissions on your key/role.
3. If the files come from EMR, check the EMR security configuration for S3.
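For point 1, a quick boto3 sketch (using the bucket name from the question) to see whether the bucket has default encryption, which tells you whether SSE-KMS, and therefore extra kms: permissions, are in play:
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')
try:
    enc = s3.get_bucket_encryption(Bucket='bucket')
    print(enc['ServerSideEncryptionConfiguration']['Rules'])  # look for aws:kms here
except ClientError as err:
    if err.response['Error']['Code'] == 'ServerSideEncryptionConfigurationNotFoundError':
        print('No default encryption configured on the bucket')
    else:
        raise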

Related

AWS create user only with S3 permissions

I know how to create a user through the AWS Console in IAM, but I wonder where or how I should set the permissions for that user so that they can only:
upload/delete files to a specific folder in a specific S3 bucket
I have this bucket:
So I wonder if I have to set up the permissions in that interface, or directly on the user in the IAM service.
I'm creating a Group there with this policy:
but under "Write" and "Read" there are a lot of actions; which ones do I need just to write/read files in a specific bucket?
Edit: Currently I have this policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:ListBucket",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::static.XXXXXX.com/images/carousel/*",
"Condition": {
"BoolIfExists": {
"aws:MultiFactorAuthPresent": "true"
}
}
}
]
}
I wonder if that is enough to:
log in to the AWS Console
go to S3 and delete/read objects in the folder of the bucket that I want
You can attach a custom policy to that user (Doc).
There you can choose the service, the actions you want to allow, and the resources that are whitelisted.
You can use either a resource-based policy attached to the S3 bucket or an identity-based policy attached to an IAM user, group or role.
Identity-based policies and resource-based policies
You can attach the identity policy below to the user to allow uploading/deleting files in a specific folder of a specific S3 bucket.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::SAMPLE-BUCKET-NAME/foldername"
}
]
}
For more details, refer to Grant Access to User-Specific Folders in an Amazon S3 Bucket.
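If you would rather attach that inline policy from code than from the console, a boto3 sketch along these lines should do it (the user name and policy name are placeholders; the second statement is only needed if the user should also be able to list the folder in the console):
import json
import boto3

iam = boto3.client('iam')

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::SAMPLE-BUCKET-NAME/foldername/*",
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::SAMPLE-BUCKET-NAME",
            "Condition": {"StringLike": {"s3:prefix": ["foldername/*"]}},
        },
    ],
}

# Attach the policy inline to the user (names are placeholders).
iam.put_user_policy(
    UserName='sample-user',
    PolicyName='FolderUploadDelete',
    PolicyDocument=json.dumps(policy),
)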

ECS task accessing S3 bucket website with Block Public Access enabled: "Access Denied"

I have an ECS task configured to run an nginx container that I want to use as a reverse proxy to an S3 bucket website.
For security purposes, Block Public Access is turned on for the bucket, so I am looking for a way to give read access only to the ECS task.
I want my ECS task running the nginx reverse proxy to have s3:GetObject access to my website bucket. The bucket cannot be public, so I want to restrict access to the ECS task, using the ECS task IAM role as the Principal.
IAM role:
The role arn:aws:iam:::role/ was configured with an attached policy that allows all S3 actions on the bucket and its objects:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Action": "S3:*",
"Resource": [
"arn:aws:s3:::<BUCKET>",
"arn:aws:s3:::<BUCKET>/*"
]
}
]
}
In Trusted Entities, I allowed the ECS tasks service (ecs-tasks.amazonaws.com) to assume the role:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
The issue is that the EC2 target group health check is always returning Access Denied to the bucket and its objects.
[08/Jun/2020:20:33:19 +0000] "GET / HTTP/1.1" 403 303 "-" "ELB-HealthChecker/2.0"
I also tried to grant permission by adding the bucket policy below, but I believe it is not needed since the IAM role already has access to the bucket…
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "allowNginxProxy",
"Effect": "Allow",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
},
"Action": "*",
"Resource": [
"arn:aws:s3:::<BUCKET>/*",
"arn:aws:s3:::<BUCKET>"
]
}
]
}
I have also tried using "AWS": "arn:aws:iam::<ACCOUNT_NUMBER>:role/<ECS_TASK_ROLE>" as Principal.
Any suggestions?
Another possibility here:
Check whether your S3 objects are encrypted. If they are, your ECS task role also needs permission to decrypt them (kms:Decrypt on the key that encrypts the bucket); otherwise you will get a permission denied error as well. One example can be found here.
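If encryption turns out to be the cause, a hedged boto3 sketch of adding decrypt permissions for the bucket's KMS key to the task role (role name, policy name and key ARN are placeholders):
import json
import boto3

iam = boto3.client('iam')

kms_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["kms:Decrypt", "kms:DescribeKey"],
            # ARN of the key that encrypts the bucket's objects (placeholder)
            "Resource": "arn:aws:kms:<REGION>:<ACCOUNT>:key/<KEY-ID>",
        }
    ],
}

iam.put_role_policy(
    RoleName='<ECS_TASK_ROLE>',
    PolicyName='AllowDecryptBucketObjects',
    PolicyDocument=json.dumps(kms_policy),
)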

amazon aws bucket policy to let my ec2 server get files programmatically

It seems like I should be able to make a rule to allow access from my ec2's elastic ip. Here is the code I have:
{
"Version": "2012-10-17",
"Id": "S3PolicyId1",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::big18v1/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": "12.123.12.123"
}
}
}
]
}
but it doesn't work; I get 'Access Denied'.
Any thoughts? I've read over and over about creating IAM roles and things like that, but I don't really want to manipulate the bucket files... I just want to use the bucket like a server and get image files from it.
Am I thinking about this right? How should I let only my ec2 instance have access to my s3 bucket?
You should create an IAM role with access to the bucket, then use an instance profile to make the role's credentials available to the instance:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html
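Once the role and instance profile are attached, the SDK on the instance picks up temporary credentials automatically; a minimal boto3 check run from the instance (the object key is made up) could look like this:
import boto3

# Should print the ARN of the assumed instance role, not an IAM user.
print(boto3.client('sts').get_caller_identity()['Arn'])

# No keys in code and no public bucket policy needed: the instance profile supplies credentials.
s3 = boto3.client('s3')
s3.download_file('big18v1', 'images/example.jpg', '/tmp/example.jpg')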

Getting Access Denied when calling the PutObject operation with bucket-level permission

I followed the example on http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_examples.html#iam-policy-example-s3 for how to grant a user access to just one bucket.
I then tested the config using the W3 Total Cache Wordpress plugin. The test failed.
I also tried reproducing the problem using
aws s3 cp --acl=public-read --cache-control='max-age=604800, public' ./test.txt s3://my-bucket/
and that failed with
upload failed: ./test.txt to s3://my-bucket/test.txt A client error (AccessDenied) occurred when calling the PutObject operation: Access Denied
Why can't I upload to my bucket?
To answer my own question:
The example policy granted PutObject access, but I also had to grant PutObjectAcl access.
I had to change
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
from the example to:
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:DeleteObject"
You also need to make sure your bucket is configured to let clients set a publicly accessible ACL, by unticking the two ACL-related boxes in the bucket's Block Public Access settings.
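For reference, a boto3 sketch of the equivalent upload (not the exact command from the question); it is the ACL argument that drags s3:PutObjectAcl into the request:
import boto3

s3 = boto3.client('s3')
# Setting an ACL at upload time requires s3:PutObjectAcl in addition to s3:PutObject.
s3.upload_file(
    './test.txt', 'my-bucket', 'test.txt',
    ExtraArgs={'ACL': 'public-read', 'CacheControl': 'max-age=604800, public'},
)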
I was having a similar problem. I was not using the ACL stuff, so I didn't need s3:PutObjectAcl.
In my case, I was doing (in Serverless Framework YML):
- Effect: Allow
  Action:
    - s3:PutObject
  Resource: "arn:aws:s3:::MyBucketName"
Instead of:
- Effect: Allow
  Action:
    - s3:PutObject
  Resource: "arn:aws:s3:::MyBucketName/*"
which adds /* to the end of the bucket ARN so the statement covers the objects, not just the bucket itself.
Hope this helps.
If you have enabled public access for the bucket and it is still not working, edit the bucket policy and paste the following:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::yourbucketnamehere",
"arn:aws:s3:::yourbucketnamehere/*"
],
"Effect": "Allow",
"Principal": "*"
}
]
}
Replace yourbucketnamehere in the code above with the name of your bucket.
In case this helps anyone else: in my case I was using a CMK (it worked fine using the default aws/s3 key).
I had to go into my encryption key definition in IAM and add the programmatic user that boto3 logs in as to the list of users that "can use this key to encrypt and decrypt data from within applications and when using AWS services integrated with KMS".
I was banging my head against a wall trying to get S3 uploads of large files to work. Initially my error was:
An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied
Then I tried copying a smaller file and got:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
I could list objects fine but I couldn't do anything else even though I had s3:* permissions in my Role policy. I ended up reworking the policy to this:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::my-bucket/*"
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucketMultipartUploads",
"s3:AbortMultipartUpload",
"s3:ListMultipartUploadParts"
],
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
]
},
{
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "*"
}
]
}
Now I'm able to upload any file. Replace my-bucket with your bucket name. I hope this helps somebody else who's going through this.
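To confirm the multipart permissions are actually exercised, a small boto3 sketch (file and bucket names are placeholders) that forces a multipart upload by lowering the threshold:
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client('s3')
# Anything larger than 8 MB now goes through CreateMultipartUpload/UploadPart.
config = TransferConfig(multipart_threshold=8 * 1024 * 1024)
s3.upload_file('big-file.bin', 'my-bucket', 'big-file.bin', Config=config)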
In my case the problem was that I was uploading the files with "--acl=public-read" in the command line.
However, that bucket has public access blocked and is accessed only through CloudFront.
I had a similar issue uploading to an S3 bucket protected with KMS encryption.
I have a minimal policy that allows the addition of objects under a specific s3 key.
I needed to add the following KMS permissions to my policy to allow the role to put objects in the bucket. (Might be slightly more than are strictly required)
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"kms:ListKeys",
"kms:GenerateRandom",
"kms:ListAliases",
"s3:PutAccountPublicAccessBlock",
"s3:GetAccountPublicAccessBlock",
"s3:ListAllMyBuckets",
"s3:HeadBucket"
],
"Resource": "*"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": [
"kms:ImportKeyMaterial",
"kms:ListKeyPolicies",
"kms:ListRetirableGrants",
"kms:GetKeyPolicy",
"kms:GenerateDataKeyWithoutPlaintext",
"kms:ListResourceTags",
"kms:ReEncryptFrom",
"kms:ListGrants",
"kms:GetParametersForImport",
"kms:TagResource",
"kms:Encrypt",
"kms:GetKeyRotationStatus",
"kms:GenerateDataKey",
"kms:ReEncryptTo",
"kms:DescribeKey"
],
"Resource": "arn:aws:kms:<MY-REGION>:<MY-ACCOUNT>:key/<MY-KEY-GUID>"
},
{
"Sid": "VisualEditor2",
"Effect": "Allow",
"Action": [
<The S3 actions>
],
"Resource": [
"arn:aws:s3:::<MY-BUCKET-NAME>",
"arn:aws:s3:::<MY-BUCKET-NAME>/<MY-BUCKET-KEY>/*"
]
}
]
}
I encountered the same issue. My bucket was private and had KMS encryption. I was able to resolve it by putting additional KMS permissions in the role. The following is the bare minimum set of permissions needed.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowAttachmentBucketWrite",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"kms:Decrypt",
"s3:AbortMultipartUpload",
"kms:Encrypt",
"kms:GenerateDataKey"
],
"Resource": [
"arn:aws:s3:::bucket-name/*",
"arn:aws:kms:kms-key-arn"
]
}
]
}
Reference: https://aws.amazon.com/premiumsupport/knowledge-center/s3-large-file-encryption-kms-key/
I was getting the same error message because of a mistake I made:
Make sure you use a correct S3 URI, such as s3://my-bucket-name/
(assuming my-bucket-name is at the root of your S3, obviously).
I insist on that because when copy-pasting the bucket address from your browser you get something like https://s3.console.aws.amazon.com/s3/buckets/my-bucket-name/?region=my-aws-region&tab=overview
Thus I made the mistake of using s3://buckets/my-bucket-name, which raises:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Error: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
I solved the issue by passing the ExtraArgs parameter, since PutObjectAcl is disabled by company policy.
s3_client.upload_file('./local_file.csv', 'bucket-name', 'path', ExtraArgs={'ServerSideEncryption': 'AES256'})
I got this error too: ERROR AccessDenied: Access Denied
I am working in a NodeJS app that was trying to use the s3.putObject method. I got clues from reading the many other answers above, so I went to the S3 bucket, clicked on the Permissions tab, then scrolled down to the Bucket policy section and noticed there was a condition required for access.
So I added a ServerSideEncryption attribute to the params for my putObject call.
This finally worked for me. No other changes, such as encrypting the payload myself, were required for putObject to work.
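The same fix expressed with boto3, in case that is easier to follow than the NodeJS version (bucket and key are made up; use 'aws:kms' instead of 'AES256' if the bucket policy demands a KMS key):
import boto3

s3 = boto3.client('s3')
# The bucket policy only allows PutObject requests that carry an
# x-amz-server-side-encryption header, so ask for SSE explicitly.
s3.put_object(
    Bucket='my-bucket',
    Key='uploads/report.json',
    Body=b'{"ok": true}',
    ServerSideEncryption='AES256',
)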
Similar to a post above (except I was using admin credentials), I was trying to get S3 uploads of a large 50 MB file to work.
Initially my error was:
An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied
I raised the multipart_threshold above 50 MB:
aws configure set default.s3.multipart_threshold 64MB
and I got:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
I checked the bucket's public access settings and everything was allowed.
Then I found that public access can also be blocked at the account level for all S3 buckets.
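A quick boto3 sketch to check the account-level setting (it raises NoSuchPublicAccessBlockConfiguration if nothing is configured at that level):
import boto3

account_id = boto3.client('sts').get_caller_identity()['Account']
s3control = boto3.client('s3control')
config = s3control.get_public_access_block(AccountId=account_id)
print(config['PublicAccessBlockConfiguration'])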
I also solved it by adding the following KMS permissions to my policy to allow the role to put objects in this bucket (and this bucket alone):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:Encrypt",
"kms:GenerateDataKey"
],
"Resource": "*"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
]
}
]
}
You can also test your policy configurations before applying them with the IAM Policy Simulator. This came in handy for me.
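The simulator can also be driven from code; a boto3 sketch (role and bucket ARNs are placeholders) that checks the exact actions that were failing here:
import boto3

iam = boto3.client('iam')
result = iam.simulate_principal_policy(
    PolicySourceArn='arn:aws:iam::123456789012:role/my-upload-role',
    ActionNames=['s3:PutObject', 's3:PutObjectAcl'],
    ResourceArns=['arn:aws:s3:::my-bucket/*'],
)
for evaluation in result['EvaluationResults']:
    print(evaluation['EvalActionName'], evaluation['EvalDecision'])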
In my case I had an ECS task with a role attached to it to access S3, but I tried to create a new user for my task to access SES as well. Once I did that, I guess I overwrote some permissions somehow.
Basically, when I gave SES access to the user, my ECS task lost access to S3.
My fix was to attach the SES policy to the ECS role together with the S3 policy and get rid of the new user.
What I learned is that ECS needs permissions at two different stages: when spinning up the task, and for the task's everyday needs. If you want to give the containers in the task access to other AWS resources, you need to make sure those permissions are attached to the ECS task role.
My fix in Terraform:
data "aws_iam_policy" "AmazonSESFullAccess" {
arn = "arn:aws:iam::aws:policy/AmazonSESFullAccess"
}
resource "aws_iam_role_policy_attachment" "ecs_ses_access" {
role = aws_iam_role.app_iam_role.name
policy_arn = data.aws_iam_policy.AmazonSESFullAccess.arn
}
For me I was using expired auth keys. Generated new ones and boom.
My problem was that my source (an EC2 instance) had an IAM role attached that didn't allow any write actions, so even though the bucket policy was correct, I couldn't write anything anywhere from it. I solved it by adding this policy to the IAM role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::destination-bucket/destination-path/*"
]
}
]
}
I was facing a similar issue, so I checked the Permissions tab of the bucket. Public access was blocked, which was causing the issue in my case, so I unchecked the option and it worked.
If you have specified your own customer managed KMS key for S3 encryption, you also need to provide the flag --server-side-encryption aws:kms, for example:
aws s3api put-object --bucket bucket --key objectKey --body /path/to/file --server-side-encryption aws:kms
If you do not add the --server-side-encryption aws:kms flag, the CLI displays an AccessDenied error.
I was able to solve the issue by granting full S3 access to the Lambda function. Make a new role for Lambda and attach a policy with full S3 access to it.
Hope this will help.
In addition, I set the permissions on the group to which the user belongs.

amazon s3 video files accessible only from my domain/server?

Now, I know that I cannot stop someone from downloading my videos and sharing, however I would prefer to have it to so that people do not copy paste links directly to my bucket. Thus, is there a way to make my bucket accessible only from my server/domain making the request?
If it helps, I'm using jwplayer, which loads from an XML playlist that has all the links. This playlist can definitely be opened and viewed from anywhere, and that is where I expect the easy copy and paste comes from.
I don't want to mask the urls because that means my bucket is readable to everyone. There is probably some chance that someone will find the url of my bucket and the name of the files and connect everything together...
This is possible using Bucket Policies, which allow you to define access rights for Amazon S3 resources. There are a couple of Example Cases for Amazon S3 Bucket Policies illustrating the functionality, and among these you'll find an example for Restricting Access to Specific IP Addresses as well:
This statement grants permissions to any user to perform any S3 action
on objects in the specified bucket. However, the request must
originate from the range of IP addresses specified in the condition.
Depending on the specifics of your use case, a bucket policy for this might look like so:
{
"Version": "2008-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::bucket/*",
"Condition" : {
"IpAddress" : {
"aws:SourceIp": "192.168.143.0/24"
},
"NotIpAddress" : {
"aws:SourceIp": "192.168.143.188/32"
}
}
}
]
}
As shown, the aws:SourceIp value for the IpAddress and NotIpAddress conditions is expressed in CIDR notation, which gives you the flexibility to compose the desired scope.
Finally, you might want to check out the recommended AWS Policy Generator, select type S3 Bucket Policy and explore the available Actions and Conditions to compose more targeted policies for your use case eventually - the documentation for Conditions explains this in detail.
The IP address approach helps only if your server is the one accessing your bucket. But JWPlayer runs on the client side, so the request goes directly from the player (the browser) to the S3 bucket URL, not via your server. In this case a referer-based bucket policy will help:
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "1",
"Effect": "Deny",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::yourbucketname/*",
"Condition": {
"StringNotLike": {
"aws:Referer": [
"http://yoursitename.com/*",
"http://*.yoursitename.com/*"
]
}
}
}
]
}
With this policy, S3 will allow the request only if it comes from your site.
You can keep your bucket protected, which is the default (meaning only you have access to the objects in it). Then you can request files from Amazon S3 from your website and give each link a time limit on how long the user can see it.
//set time so that users can see the file for 1 minute. then it is protected again.
$response = $s3->get_object_url('YOUR_S3_BUCKET', 'PATH/TO/FILE', '1 minute');
This returns a URL with signed parameters that is only accessible for 1 minute. You can use that as the source within your website, and after that minute the link can no longer be copied and pasted into the browser.
You can read more about this at the Amazon SDK for PHP
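If you are not on PHP, the same idea with boto3 (bucket and key are made up):
import boto3

s3 = boto3.client('s3')
# Signed link that expires after 60 seconds, mirroring the '1 minute' PHP example.
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-video-bucket', 'Key': 'videos/clip.mp4'},
    ExpiresIn=60,
)
print(url)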
Restricting Access to a Specific HTTP Referrer
Suppose you have a website with domain name (www.example.com or example.com) with links to photos and videos stored in your Amazon S3 bucket, examplebucket. By default, all the Amazon S3 resources are private, so only the AWS account that created the resources can access them. To allow read access to these objects from your website, you can add a bucket policy that allows s3:GetObject permission with a condition, using the aws:referer key, that the get request must originate from specific webpages. The following policy specifies the StringLike condition with the aws:Referer condition key.
http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
For everyone who is stumbling upon this now, please take note that Amazon has changed the JSON format for the bucket policies and now requires each allowed / denied IP or domain to be listed separately. See below for an example.
Either way, I strongly recommend using the AWS Policy Generator to make sure your formatting is correct.
AWS S3 Bucket Policy - Allow Access only from multiple IPs
{
"Id": "Policy1618636210012",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1618635877058",
"Action": [
"s3:GetObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucketname/folder/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": "333.444.555.666"
}
},
"Principal": "*"
},
{
"Sid": "Stmt1618636151833",
"Action": [
"s3:GetObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucketname/folder/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": "222.333.444.555"
}
},
"Principal": "*"
},
{
"Sid": "Stmt1618636203591",
"Action": [
"s3:GetObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucketname/folder/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": "111.222.333.444"
}
},
"Principal": "*"
}
]
}