I am using s3cmd on Linux to upload files to AWS S3. I can upload a zip file successfully, and this has been working for months with no problems. I now also need to upload a JSON file. When I try to upload the JSON file to the same bucket, I get S3 error: Access Denied. I can't figure out why; please help. Here is the policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket/*"
      ]
    }
  ]
}
s3cmd --mime-type=application/zip put myfile.zip s3://mybucket
SUCCESS
s3cmd --mime-type=application/json put myfile.json s3://mybucket
ERROR: S3 error: Access Denied
These days, it is recommended to use the AWS Command-Line Interface (CLI) rather than s3cmd.
The aws s3 cp command will try to automatically guess the mime type, so you might not need to specify it as in your example.
If your heart is set on figuring out why s3cmd doesn't work, try opening up permissions (e.g. allow s3:*) to see if this fixes things, then narrow down the list of permitted API calls to figure out which one s3cmd is calling.
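As a sketch, assuming your credentials are already configured for the AWS CLI and using the bucket from the question:
# upload with the AWS CLI; the Content-Type is guessed from the file extension
aws s3 cp myfile.json s3://mybucket/
# or run s3cmd in debug mode to see which API call is being rejected
s3cmd -d put myfile.json s3://mybucket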
Alternatively, you can use the MinIO Client (mc). This can be done with the mc cp command:
$ mc cp myfile.json s3alias/mybucket
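This assumes an alias named s3alias has already been configured, for example (with a recent mc; the endpoint, access key, and secret key are placeholders):
$ mc alias set s3alias https://s3.amazonaws.com ACCESS_KEY SECRET_KEY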
Hope it helps.
Disclaimer: I work for Minio
It was a bug in s3cmd; a simple update solved the problem.
Related
We are using ceph and have several buckets.
We are using one read-only user to make backups of these buckets.
If I know the list of buckets, I can back them all up.
I don't understand why, but I can't list all the buckets.
Is it at all possible in Ceph radosgw? I suspect not.
The policy looks like this:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam:::user/read-only"]},
    "Action": [
      "s3:ListBucket",
      "s3:ListAllMyBuckets",
      "s3:GetObject"
    ],
    "Resource": [
      "arn:aws:s3:::bucket",
      "arn:aws:s3:::bucket/*"
    ]
  }]
}
And I don't have anything special at the user level.
But when I try to list, I get the following:
export AWS_SECRET_ACCESS_KEY=xx
export AWS_ACCESS_KEY_ID=
export MC_HOST_ceph=https://${AWS_ACCESS_KEY_ID}:${AWS_SECRET_ACCESS_KEY}@radosgwdns
mc ls ceph
mc ls ceph/
mc ls ceph/bucket
Only the last command is listing things.
In this link it is said that it is basically not possible:
https://help.switch.ch/engines/documentation/s3-like-object-storage/s3_policy/
Only S3 bucket policy is available, S3 user policy is not implemented in Ceph S3.
On this release page, they might be talking about it:
https://ceph.io/releases/v16-2-0-pacific-released/
RGW: Improved configuration of S3 tenanted users.
Thanks for your help!
When you grant a user access to a bucket via a bucket policy, the bucket will not appear in that user's bucket listing. If you want it to appear, you can create a subuser with the "none" permission and then grant the subuser access with a bucket policy. Now when the subuser lists buckets it will see the bucket, and because of the "none" permission it only has access to the bucket you specified.
The principal for the subuser would be like this:
"Principal": {"AWS": ["arn:aws:iam:::user/MAIN_USER:SUBUSER"]},
CodeBuild project fails at the Provisioning phase due to the following error:
BUILD_CONTAINER_UNABLE_TO_PULL_IMAGE: Unable to pull customer's container image. CannotPullContainerError: Error response from daemon: pull access denied for <image-name>, repository does not exist or may require 'docker login': denied: User: arn:aws:sts::<id>
The issue was with the Image Pull credentials.
CodeBuild was using default AWS CodeBuild credentials for pulling the image while the ECRAccessPolicy was attached to the Project Service Role.
I fixed it by updating the image pull credentials to use project service role.
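For reference, the same change can be made from the CLI; this is a sketch only, since update-project replaces the whole environment block, so the project name and the type, image, and computeType values below are placeholders you would copy from your existing configuration:
# switch image pull credentials from the default CODEBUILD to the project service role
aws codebuild update-project --name my-project \
  --environment "type=LINUX_CONTAINER,image=ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/REPOSITORY_NAME:latest,computeType=BUILD_GENERAL1_SMALL,imagePullCredentialsType=SERVICE_ROLE"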
To add additional clarity (not enough reputation yet to comment on an existing answer), the CodeBuild project service role needs to have the following permissions if trying to pull from a private repository:
{
  "Action": [
    "ecr:BatchCheckLayerAvailability",
    "ecr:BatchGetImage",
    "ecr:GetDownloadUrlForLayer"
  ],
  "Effect": "Allow",
  "Resource": [
    "arn:aws:ecr:us-east-1:ACCOUNT_ID:repository/REPOSITORY_NAME*"
  ]
}
The ECR repository policy should also look something like this (scope down root if desired):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_ID:root"
      },
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ]
    }
  ]
}
FWIW, I stumbled across this issue when using Terraform to create my CodeBuild pipeline.
The setting to change for this was image_pull_credentials_type, which should be set to SERVICE_ROLE rather than CODEBUILD in the environment block of the resource "aws_codebuild_project".
Thank you to Chaitanya for the response which pointed me in this direction with the accepted answer.
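To verify the change afterwards, something like this should work (a sketch; my-project is a placeholder project name):
# print the image pull credentials type currently configured on the project
aws codebuild batch-get-projects --names my-project \
  --query "projects[0].environment.imagePullCredentialsType" --output text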
I'm looking to set up a transfer job to take files stored within an S3 bucket and load them to a GCS bucket. The credentials that I have give me access to the folder that contains the files that I need from S3 but not to the higher level folders.
When I try to set up the transfer job with the S3 bucket name under 'Amazon S3 bucket' and the access key ID & secret access key filled-in, the access is denied as you would expect given the limits of my credentials. However, access is still denied if I add the extra path information as a Prefix item (e.g. 'Production/FTP/CompanyName') and I do have access to this folder.
It seems as though I can't get past the fact that I do not have access to the root directory. Is there any way around this?
According to the documentation:
The Storage Transfer Service uses the project-[$PROJECT_NUMBER]@storage-transfer-service.iam.gserviceaccount.com service account to move data from a Cloud Storage source bucket.
The service account must have the following permissions for the source bucket:
storage.buckets.get: Allows the service account to get the location of the bucket. Always required.
storage.objects.list: Allows the service account to list objects in the bucket. Always required.
storage.objects.get: Allows the service account to read objects in the bucket. Always required.
storage.objects.delete: Allows the service account to delete objects in the bucket. Required if you set deleteObjectsFromSourceAfterTransfer to true.
The roles/storage.objectViewer and roles/storage.legacyBucketReader roles together contain the permissions that are always required. The roles/storage.legacyBucketWriter role contains the storage.objects.delete permission. The service account used to perform the transfer must be assigned the desired roles.
You have to set these permissions on your AWS bucket.
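For the Cloud Storage side, the always-required roles quoted above could be granted to the transfer service account roughly like this (a sketch; MY_GCS_BUCKET and PROJECT_NUMBER are placeholders):
# grant the roles that contain the always-required permissions to the Storage Transfer Service account
gsutil iam ch serviceAccount:project-PROJECT_NUMBER@storage-transfer-service.iam.gserviceaccount.com:roles/storage.objectViewer gs://MY_GCS_BUCKET
gsutil iam ch serviceAccount:project-PROJECT_NUMBER@storage-transfer-service.iam.gserviceaccount.com:roles/storage.legacyBucketReader gs://MY_GCS_BUCKET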
Paul,
Most likely your IAM role is missing the s3:ListBucket permission. Can you update your IAM role to have s3:ListBucket and s3:GetBucketLocation and try again?
On the AWS side, the permission policy should look like the one below, in case you want to give access to a subfolder.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>",
        "arn:aws:s3:::<bucketname>/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:List*",
        "s3:Get*"
      ],
      "Resource": "arn:aws:s3:::<bucketname>",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "<subfolder>/*"
          ]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:List*",
        "s3:Get*"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>/<subfolder>",
        "arn:aws:s3:::<bucketname>/<subfolder>/*"
      ],
      "Condition": {}
    }
  ]
}
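Once the policy is in place, a quick way to check it from the CLI (a sketch; <bucketname> and <subfolder> are the same placeholders as in the policy):
# both calls should succeed with the policy above
aws s3api get-bucket-location --bucket <bucketname>
aws s3 ls s3://<bucketname>/<subfolder>/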
I'm trying to upload images to s3 using the aws command line tool. I keep getting a 403 access denied error.
I think the --acl flag from http://docs.aws.amazon.com/cli/latest/reference/s3/cp.html should fix this, but none of the options I've tried have helped.
I have a Django app running which uploads to S3, and I can access those images fine.
Did you set any IAM or bucket policy to allow uploads to your bucket?
There are (at least) three ways to enable a user to access a bucket:
specify an ACL at the bucket level (go to the bucket page, select your bucket and click "Properties"; there you can grant more access)
attach a policy to the bucket itself, e.g. something like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn of user or role"
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::your bucket name"
    }
  ]
}
attach a policy to your IAM user, e.g. to give admin rights: go to IAM > Users > your user > Attach policy > AmazonS3FullAccess
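A CLI equivalent of that last option, as a sketch (the user name is a placeholder):
# attach the AWS-managed AmazonS3FullAccess policy to the IAM user
aws iam attach-user-policy \
  --user-name YOUR_USER_NAME \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess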
If you want to build your own policy, you can use the aws policy generator
If you want more specific help, please provide more details (which users should have which permissions on your bucket, etc)
hope this helps
My problem is this: I have files that are being added to my S3 bucket by a third party. If I try to download these files from the command line, they are corrupt or encrypted. But if I download them individually from the S3 console, they are fine. (I don't have encryption enabled either.)
So, my question is this:
Is it possible to download objects from an Amazon S3 bucket that have been uploaded by a third party?
I've read just about everything I can on this and can't find an answer as to why this is the case. Here is the bucket policy:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-name-here/*"
    }
  ]
}
Example of file contents when encrypted:
^�^H^H^#��P^B�doc.0.js^#�T[o�0^X�+^^�^FH�
pM�ֆj҆:mZ�=M�DNb.^S^Ad��ٴ�>^S.!YՆ^Gۆ^>�ㆌp䆌-册��^[�ΆVن^V,sZ
7JE^S��Z소sv�첕H^C^_Awʲֲ!HY��"� �^A$�$
<7�"�u{�l^OZ�ѧ)>�7Ч�.3ʇ^HۃQ
��?gTS?2J���S�l%z^?�9gB0nHh�^UI��B� �^]��^t�%�-KQ^KN�3^W�����[ہށ�Ӂ5
偌IV^X����偌^]���2�ȁ~>>>:�B,^\^S�|nہ^#x၌遌쁌�u��
�hE^[��]�=Ն��~��h�teԆzꆌ�#x�Gydž&Sw8^F]d}D�^^ z2��Q
A^Vk^E�f ^U%�����
+^D̊^U{�^\kꊌ�/�ꑁ�?푁^E6O!gUN�L3�o?�^�L�n�ё^[^Q3��בx�[py�\�^FR�^P�
���,�����>�t�V^Z���<��^?iLW^X^Y^E^#^#
If a third-party tool encrypted your file before uploading it, you have to decrypt it in the same way, with the same key.
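As a first diagnostic step, you could inspect the object's metadata to see how it was uploaded; a sketch, using the bucket from the policy above and a placeholder key:
# show ContentType, ContentEncoding, ServerSideEncryption and any user metadata for the object
aws s3api head-object --bucket bucket-name-here --key path/to/your-file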