403 Access Denied error for files uploaded to S3 using the AWS command line tool

I'm trying to upload images to S3 using the AWS command line tool, but I keep getting a 403 Access Denied error.
I think the --acl flag from http://docs.aws.amazon.com/cli/latest/reference/s3/cp.html should fix this, but none of the options I've tried have helped.
I have a Django app running that uploads to S3, and I can access those images fine.

Did you set any IAM or bucket policy to allow uploads to your bucket?
There are (at least) three ways to give a user access to a bucket:
1. Specify an ACL at the bucket level (go to the bucket page, select your bucket, click "Properties", and grant additional access there).
2. Attach a policy to the bucket itself, e.g. something like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn of user or role"
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::your bucket name"
    }
  ]
}
3. Attach a policy to your IAM user, e.g. to give admin rights: go to IAM > Users > your user > Attach policy > AmazonS3FullAccess.
If you want to build your own policy, you can use the AWS Policy Generator.
If you want more specific help, please provide more details (which users should have which permissions on your bucket, etc.).
Hope this helps.
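As a concrete sketch of the two sides involved (the bucket, file, and user names are placeholders, and bucket-owner-full-control is just one possible canned ACL), the upload command and the policy attachment from option 3 might look like this:
# Upload with an explicit canned ACL
aws s3 cp image.jpg s3://your-bucket-name/images/image.jpg --acl bucket-owner-full-control
# Attach the managed S3 policy to a (hypothetical) IAM user from the CLI
aws iam attach-user-policy --user-name your-user --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
If the 403 persists after the permissions are in place, check whether a bucket policy is explicitly denying the s3:PutObject call.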


Read only users - list all the buckets I have read rights to

We are using Ceph and have several buckets.
We are using one read-only user to make backups of these buckets.
If I knew the list, I could back up all my buckets.
I don't understand why, but I can't list all buckets.
Is it at all possible in Ceph radosgw? I suspect not.
The policy looks like this:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam:::user/read-only"]},
    "Action": [
      "s3:ListBucket",
      "s3:ListAllMyBuckets",
      "s3:GetObject"
    ],
    "Resource": [
      "arn:aws:s3:::bucket",
      "arn:aws:s3:::bucket/*"
    ]
  }]
}
And I don't have anything special at the user level.
But when I try to list, I get the following:
export AWS_SECRET_ACCESS_KEY=xx
export AWS_ACCESS_KEY_ID=
export MC_HOST_ceph=https://${AWS_ACCESS_KEY_ID}:${AWS_SECRET_ACCESS_KEY}@radosgwdns
mc ls ceph
mc ls ceph/
mc ls ceph/bucket
Only the last command lists anything.
This link says it is basically not possible:
https://help.switch.ch/engines/documentation/s3-like-object-storage/s3_policy/
"Only S3 bucket policy is available, S3 user policy is not implemented in Ceph S3."
This release page may also be relevant:
https://ceph.io/releases/v16-2-0-pacific-released/
"RGW: Improved configuration of S3 tenanted users."
Thanks for your help!
When you grant a user access to a bucket with a bucket policy, the bucket will not appear in that user's bucket listing. If you want it to, you can create a subuser with the none permission and grant it access with a bucket policy as well. When the subuser lists buckets it will see the bucket, and because of the none permission it only has access to the bucket you specified.
The principal for the subuser would look like this:
"Principal": {"AWS": ["arn:aws:iam:::user/MAIN_USER:SUBUSER"]},

AWS CodeBuild ECR CannotPullContainerError

My CodeBuild project fails at the Provisioning phase with the following error:
BUILD_CONTAINER_UNABLE_TO_PULL_IMAGE: Unable to pull customer's container image. CannotPullContainerError: Error response from daemon: pull access denied for <image-name>, repository does not exist or may require 'docker login': denied: User: arn:aws:sts::<id>
The issue was with the image pull credentials.
CodeBuild was using the default AWS CodeBuild credentials to pull the image, while the ECR access policy was attached to the project service role.
I fixed it by updating the image pull credentials to use the project service role.
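If you prefer to change this from the CLI instead of the console, a sketch with aws codebuild update-project might look like the following; the project name, image, and compute type are placeholders, and the whole environment block has to be restated because it is replaced as a unit:
aws codebuild update-project --name my-project --environment "type=LINUX_CONTAINER,computeType=BUILD_GENERAL1_SMALL,image=ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest,imagePullCredentialsType=SERVICE_ROLE"
The key part is imagePullCredentialsType=SERVICE_ROLE, which corresponds to the "project service role" setting in the console.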
To add some clarity (not enough reputation yet to comment on an existing answer): the CodeBuild project service role needs the following permissions if it is pulling from a private repository:
{
  "Action": [
    "ecr:BatchCheckLayerAvailability",
    "ecr:BatchGetImage",
    "ecr:GetDownloadUrlForLayer"
  ],
  "Effect": "Allow",
  "Resource": [
    "arn:aws:ecr:us-east-1:ACCOUNT_ID:repository/REPOSITORY_NAME*"
  ]
}
Also, the ECR repository policy should look something like this (scope down root if desired):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_ID:root"
      },
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ]
    }
  ]
}
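One way to apply a repository policy like this from the CLI (the repository name and the local policy file path are placeholders) is with set-repository-policy:
# Apply the repository policy from a local JSON file
aws ecr set-repository-policy --repository-name REPOSITORY_NAME --policy-text file://ecr-repo-policy.json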
FWIW, I stumbled across this issue when using Terraform to create my CodeBuild pipeline.
The setting to change was image_pull_credentials_type, which should be set to SERVICE_ROLE rather than CODEBUILD in the environment block of the "aws_codebuild_project" resource.
Thank you to Chaitanya for the accepted answer, which pointed me in this direction.

s3/gcs - Copy s3 subdirectory to gcs without access to root? [duplicate]

I'm looking to set up a transfer job to take files stored in an S3 bucket and load them into a GCS bucket. The credentials I have give me access to the folder that contains the files I need in S3, but not to the higher-level folders.
When I try to set up the transfer job with the S3 bucket name under 'Amazon S3 bucket' and the access key ID & secret access key filled in, access is denied, as you would expect given the limits of my credentials. However, access is still denied if I add the extra path information as a Prefix item (e.g. 'Production/FTP/CompanyName'), even though I do have access to that folder.
It seems I can't get past the fact that I do not have access to the root directory. Is there any way around this?
According to the documentation link:
The Storage Transfer Service uses the project-[$PROJECT_NUMBER]@storage-transfer-service.iam.gserviceaccount.com service account to move data from a Cloud Storage source bucket.
The service account must have the following permissions for the source bucket:
storage.buckets.get - Allows the service account to get the location of the bucket. Always required.
storage.objects.list - Allows the service account to list objects in the bucket. Always required.
storage.objects.get - Allows the service account to read objects in the bucket. Always required.
storage.objects.delete - Allows the service account to delete objects in the bucket. Required if you set deleteObjectsFromSourceAfterTransfer to true.
The roles/storage.objectViewer and roles/storage.legacyBucketReader roles together contain the permissions that are always required. The roles/storage.legacyBucketWriter role contains the storage.objects.delete permission. The service account used to perform the transfer must be assigned the desired roles.
You have to set these permissions on your AWS bucket.
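For the Cloud Storage side, granting those two always-required roles to the transfer service account might look like this sketch; the project number and bucket name are placeholders:
gsutil iam ch serviceAccount:project-PROJECT_NUMBER@storage-transfer-service.iam.gserviceaccount.com:roles/storage.objectViewer gs://my-gcs-bucket
gsutil iam ch serviceAccount:project-PROJECT_NUMBER@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader gs://my-gcs-bucket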
Paul,
Most likely your IAM role is missing the s3:ListBucket permission. Can you update your IAM role to include s3:ListBucket and s3:GetBucketLocation and try again?
On the AWS side, the permission policy should look like the one below if you want to give access to a subfolder:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>",
        "arn:aws:s3:::<bucketname>/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:List*",
        "s3:Get*"
      ],
      "Resource": "arn:aws:s3:::<bucketname>",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "<subfolder>/*"
          ]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:List*",
        "s3:Get*"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>/<subfolder>",
        "arn:aws:s3:::<bucketname>/<subfolder>/*"
      ],
      "Condition": {}
    }
  ]
}
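Before re-running the transfer job, a quick way to sanity-check the scoped credentials with the AWS CLI (the bucket, subfolder, and file names below are placeholders matching the policy) is:
# Listing the allowed prefix should succeed, while listing the bucket root should be denied
aws s3 ls s3://<bucketname>/<subfolder>/
# Fetching a single object under the allowed prefix should also succeed
aws s3 cp s3://<bucketname>/<subfolder>/somefile.csv .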

Cloudwatch monitoring script errors when attempting to send metrics for ASG

I am trying to use the CloudWatch monitoring scripts to send metrics for memory, disk, and swap utilization from an EC2 instance to CloudWatch. To run the script I need to provide it with AWS credentials or an IAM role. When attempting to use an IAM role, I get the error below:
[ec2-user@ip-x-x-x-x aws-scripts-mon]$ /home/ec2-user/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --auto-scaling=only --verbose --aws-iam-role=ACCT-CloudWatch-service-role
Using AWS credentials file </home/ec2-user/aws-scripts-mon/awscreds.conf>
WARNING: Failed to call EC2 to obtain Auto Scaling group name. HTTP Status Code: 0. Error Message: Failed to obtain credentials for IAM role ACCT-CloudWatch-service-role. Available roles: ACCT-service-role
WARNING: The Auto Scaling metrics will not be reported this time.
[ec2-user@ip-x-x-x-x aws-scripts-mon]$
This is what my IAM policy looks like:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"autoscaling:DescribeAutoScalingNotificationTypes",
"autoscaling:DescribeAutoScalingInstances",
"ec2:DescribeTags",
"autoscaling:DescribePolicies",
"logs:DescribeLogStreams",
"autoscaling:DescribeTags",
"autoscaling:DescribeLoadBalancers",
"autoscaling:*",
"ssm:GetParameter",
"logs:CreateLogGroup",
"logs:PutLogEvents",
"ssm:PutParameter",
"logs:CreateLogStream",
"cloudwatch:*",
"autoscaling:DescribeAutoScalingGroups",
"ec2:*",
"kms:*",
"autoscaling:DescribeLoadBalancerTargetGroups"
],
"Resource": "*"
}
]
}
What could I be missing?
The message states that the problem comes from the role it tries to use:
Failed to obtain credentials for IAM role ACCT-CloudWatch-service-role. Available roles: ACCT-service-role
Modify that part of your command to --aws-iam-role=ACCT-service-role (I am assuming that this role is the one configured correctly).
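Put together, the corrected invocation would look roughly like this (same flags as in the question, with ACCT-service-role being the role the instance actually has):
/home/ec2-user/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --auto-scaling=only --verbose --aws-iam-role=ACCT-service-role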

s3 put json file access denied

I am using the Linux s3cmd tool to upload files to AWS S3. I can upload a zip file successfully, and this has been working for months with no problems. I now need to also upload a JSON file. When I try to upload the JSON file to the same bucket, I get "S3 error: Access Denied". I can't figure out why; please help.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket/*"
      ]
    }
  ]
}
s3cmd --mime-type=application/zip put myfile.zip s3://mybucket
SUCCESS
s3cmd --mime-type=application/json put myfile.json s3://mybucket
ERROR: S3 error: Access Denied
These days, it is recommended to use the AWS Command-Line Interface (CLI) rather than s3cmd.
The aws s3 cp command will try to guess the MIME type automatically, so you might not need to specify it as in your example.
If your heart is set on figuring out why s3cmd doesn't work, try opening up permissions (e.g. allow s3:*) to see if that fixes things, then narrow down the list of permitted API calls to figure out which one s3cmd is calling.
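For reference, a CLI equivalent of the failing upload might look like this; the --content-type flag is optional and is shown only because the original commands set a MIME type explicitly:
aws s3 cp myfile.json s3://mybucket/ --content-type application/json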
Alternatively, you can use the MinIO client, aka mc. This can be done with the mc cp command:
$ mc cp myfile.json s3alias/mybucket
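Here s3alias refers to an alias configured beforehand; on recent mc releases, a sketch of setting one up (the endpoint and keys are placeholders; older releases used mc config host add instead) looks like this:
$ mc alias set s3alias https://s3.amazonaws.com ACCESS_KEY SECRET_KEY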
Hope it helps.
Disclaimer: I work for Minio
It turned out to be a bug in s3cmd; a simple update solved the problem.