Cannot set bucket policy of Amazon S3

I was simply following the "get started" tutorial here.
But I failed at "Step 4: Add a Bucket Policy to Allow Public Reads". It always complains "Access Denied" with a red error icon.
I am not able to set it via the command line either. Here is the command I use:
aws s3api put-bucket-policy --bucket bucket-name --policy file://bucket-policy.json
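For reference, bucket-policy.json holds the tutorial's public-read policy, roughly like this sketch (bucket-name is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}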
Here is the error I got:
An error occurred (AccessDenied) when calling the PutBucketPolicy operation: Access Denied

The issue was that you have to uncheck the boxes under Permissions -> Block public access settings. Amazon fails to mention this in the tutorial. Bad tutorial.
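The same thing can be done from the CLI; a sketch (bucket-name is a placeholder). BlockPublicPolicy and RestrictPublicBuckets are the two settings that matter for bucket policies:
aws s3api put-public-access-block --bucket bucket-name --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false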

Related

Invalid resource in copy source ARN while copying between the same access points

I am trying to copy a file from a source access point to a destination access point with the same URL except for different file names:
e.g.
aws s3 cp s3://arn:aws:s3:eu-west-2:1234567890:accesspoint:my-access-point-name/path/to/local-1234567890.inprogress s3://arn:aws:s3:eu-west-2:1234567890:accesspoint:my-access-point-name/path/to/local-1234567890
Giving:
copy failed: s3://arn:aws:s3:eu-west-2:1234567890:accesspoint:my-access-point-name/path/to/local-1234567890.inprogress to s3://arn:aws:s3:eu-west-2:1234567890:accesspoint:my-access-point-name/path/to/local-1234567890
An error occurred (InvalidArgument) when calling the CopyObject operation: Invalid resource in copy source ARN
Replacing : with / just before the access point name worked!
aws s3 cp s3://arn:aws:s3:eu-west-2:1234567890:accesspoint/my-access-point-name/path/to/local-1234567890.inprogress s3://arn:aws:s3:eu-west-2:1234567890:accesspoint/my-access-point-name/path/to/local-1234567890
Strangely, : is fine if only one of the source or destination is an access point ARN, but not both.

rclone failing with "AccessControlListNotSupported" on cross-account copy -- AWS CLI Works

Quick summary, now that I think I see the problem:
rclone seems to always send an ACL with a copy request, with a default value of "private". This fails against a (2022) default AWS bucket, which (correctly) assumes "no ACL". I need a way to suppress the ACL in rclone.
Detail
I assume an IAM role and attempt an rclone copy from a data-center Linux box to a default-options, private, no-ACL bucket in the same account as the role I assume. It succeeds.
I then configure a default-options, private, no-ACL bucket in a different account from the role I assume. I attach a bucket policy to the cross-account bucket that trusts the role I assume; a sketch of that policy is below. The role I assume has global permissions to write to S3 buckets anywhere.
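A minimal sketch of such a cross-account bucket policy (the account ID, role name, and bucket name are placeholders, not my actual values):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/my-assumed-role" },
      "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::SOMEBUCKET",
        "arn:aws:s3:::SOMEBUCKET/*"
      ]
    }
  ]
}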
I test the cross-account bucket policy by using the AWS CLI to copy the same Linux source file to the cross-account bucket. The copy works fine with the AWS CLI, suggesting that the connection and access permissions to the cross-account bucket are fine. DataSync (another AWS service) works fine too.
Problem: an rclone copy fails with the AccessControlListNotSupported error below.
2022/08/26 16:47:29 ERROR : bigmovie: Failed to copy: AccessControlListNotSupported: The bucket does not allow ACLs
status code: 400, request id: XXXX, host id: ZZZZ
And of course it is true that the bucket does not allow ACLs ... which is the desired best practice and the AWS default for new buckets. However, the bucket does have a bucket policy that trusts my assumed role, and that role and bucket-policy pair works just fine with the AWS CLI copy across accounts, but not with the rclone copy.
Given that the AWS CLI copies just fine cross-account to this bucket, am I missing one of rclone's numerous flags to get the same behaviour? Can anyone think of another possible cause?
Tested older, current, and beta rclone versions; all behave the same.
Version Info
os/version: centos 7.9.2009 (64 bit)
os/kernel: 3.10.0-1160.71.1.el7.x86_64 (x86_64)
os/type: linux
os/arch: amd64
go/version: go1.18.5
go/linking: static
go/tags: none
Failing Command
$ rclone copy bigmovie s3-standard:SOMEBUCKET/bigmovie -vv
Failing RClone Config
type = s3
provider = AWS
env_auth = true
region = us-east-1
endpoint = https://bucket.vpce-REDACTED.s3.us-east-1.vpce.amazonaws.com
#server_side_encryption = AES256
storage_class = STANDARD
#bucket_acl = private
#acl = private
Note that I've tested all permutations of the commented-out lines with similar results.
Note that I have tested with and without the private endpoint listed, with the same results for both the AWS CLI and rclone, i.e. the CLI works and rclone fails.
A log from the command with the -vv flag
2022/08/25 17:25:55 DEBUG : Using config file from "PERSONALSTUFF/rclone.conf"
2022/08/25 17:25:55 DEBUG : rclone: Version "v1.55.1" starting with parameters ["/usr/local/rclone/1.55/bin/rclone" "copy" "bigmovie" "s3-standard:SOMEBUCKET" "-vv"]
2022/08/25 17:25:55 DEBUG : Creating backend with remote "bigmovie"
2022/08/25 17:25:55 DEBUG : fs cache: adding new entry for parent of "bigmovie", "MYDIRECTORY/testbed"
2022/08/25 17:25:55 DEBUG : Creating backend with remote "s3-standard:SOMEBUCKET/bigmovie"
2022/08/25 17:25:55 DEBUG : bigmovie: Need to transfer - File not found at Destination
2022/08/25 17:25:55 ERROR : bigmovie: Failed to copy: s3 upload: 400 Bad Request: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessControlListNotSupported</Code><Message>The bucket does not allow ACLs</Message><RequestId>8DW1MQSHEN6A0CFA</RequestId><HostId>d3Rlnx/XezTB7OC79qr4QQuwjgR+h2VYj4LCZWLGTny9YAy985be5HsFgHcqX4azSDhDXefLE+U=</HostId></Error>
2022/08/25 17:25:55 ERROR : Attempt 1/3 failed with 1 errors and: s3 upload: 400 Bad Request: <?xml version="1.0" encoding="UTF-8"?>
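One possible fix, assuming a newer rclone than the v1.55.1 shown in the log: the s3 backend docs for recent releases state that if acl is set to an empty string, no X-Amz-Acl header is sent at all, which should avoid this error on ACL-disabled buckets. A sketch of the adjusted remote:
[s3-standard]
type = s3
provider = AWS
env_auth = true
region = us-east-1
storage_class = STANDARD
# empty acl = no X-Amz-Acl header is sent (documented for recent rclone versions only)
acl =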

An error occurred (403) when calling the HeadObject operation: Forbidden in Airflow (2.0.0+)

Error -
*** Failed to verify remote log exists s3://airflow_test/airflow-logs/demo/task1/2022-05-13T18:20:45.561269+00:00/1.log.
An error occurred (403) when calling the HeadObject operation: Forbidden
Dockerfile -
FROM apache/airflow:2.2.3
COPY /airflow/requirements.txt /requirements.txt
RUN pip install --no-cache-dir -r /requirements.txt
RUN pip install apache-airflow[crypto,postgres,ssh,s3,log]
USER root
# Update package lists
RUN apt-get update
# Install software
RUN apt-get install -y git
USER airflow
Under connection UI -
Connection Id * - aws_s3_log_storage
Connection Type * - S3
Host - <My company's internal link>. (ex - https://abcd.company.com)
Extra - {"aws_access_key_id": "key", "aws_secret_access_key": "key", "region_name": "us-east-1"}
Under values.yaml -
config:
  logging:
    remote_logging: 'True'
    remote_base_log_folder: 's3://airflow_test/airflow-logs'
    remote_log_conn_id: 'aws_s3_log_storage'
    logging_level: 'INFO'
    fab_logging_level: 'WARN'
    encrypt_s3_logs: 'False'
    host: '<My company's internal link>. (ex - https://abcd.company.com)'
    colored_console_log: 'False'
How did I create the bucket?
Installed the AWS CLI
Used the following commands -
1. aws configure
AWS Access Key ID: <access key>
AWS Secret Access Key: <secret key>
Default region name: us-east-1
Default output format:
2. aws s3 mb s3://airflow_test --endpoint-url=<My company's internal link>. (ex - https://abcd.company.com)
I have no clue how to resolve this error. I am actually very new to Airflow and Helm charts.
I had the same error message as you. Your account or key might not have enough permissions to access the S3 bucket.
Please check that your role has the permissions below.
"s3:PutObject*",
"s3:PutObjectAcl",
"s3:PutObjectVersionAcl",
"s3:GetObject*",
"s3:ListObject*",
"s3:ListBucket*",
"s3:PutBucket*",
"s3:GetBucket*",
"s3:DeleteObject

An error occurred (Throttling) when calling the DescribeDBClusters operation (reached max retries: 4): Rate exceeded

I am trying to copy multiple files to an S3 bucket using the following AWS CLI command:
aws s3 cp local_dir s3://s3_bucket/dir1/xmldir/ --recursive --exclude "*" --include "*.xml"
I have configured an AWS Lambda function to be triggered for each file upload;
after the Lambda is triggered, it processes the file that was copied to S3.
The problem is that out of 360 files only 200 are processed successfully; for the remaining files I get this error:
An error occurred (Throttling) when calling the DescribeDBClusters operation (reached max retries: 4): Rate exceeded
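Note that the throttled call is DescribeDBClusters on the RDS control plane, not an S3 call, so the fix belongs in the Lambda. A common mitigation (a sketch, not from the original setup; handler and the processing step are placeholders) is to give the boto3 client more retries with backoff:
import boto3
from botocore.config import Config

# 'adaptive' retry mode adds client-side rate limiting on top of the
# exponential backoff that 'standard' mode already provides
config = Config(retries={"max_attempts": 10, "mode": "adaptive"})
rds = boto3.client("rds", config=config)

def handler(event, context):
    # DescribeDBClusters is the call being throttled in the error message
    clusters = rds.describe_db_clusters()
    # ... process the uploaded S3 object using the cluster info ...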

AWS Boto S3 API read KMS-encrypted keys

I tried to read keys that are encrypted using AWS KMS, and I first hit the following error.
S3ResponseError: 400 Bad Request
<Error><Code>InvalidArgument</Code><Message>Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.</Message><ArgumentName>Authorization</ArgumentName><ArgumentValue>null</ArgumentValue><RequestId>1D584F77747CBB01</RequestId><HostId>LQIxPHmUGGDMnnI45xqWHtrB1+96tc7pDIEi6bVEE5i425SRypqeBXzvsH0CqPzwJe4xVv1UjhQ=</HostId></Error>
After setting os.environ['S3_USE_SIGV4'] = 'True', the above 400 error is gone, but now I hit a 403 error.
S3ResponseError: 403 Forbidden
May I ask if anyone has hit the same issue before?
This error was caused by a wrong S3 hostname: it was s3-ap-southeast-1.s3.amazonaws.com but it should be s3-ap-southeast-1.amazonaws.com.
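Putting it together, a minimal boto (2.x) sketch with SigV4 enabled and the corrected hostname (the bucket and key names are placeholders):
import os
import boto

# KMS-encrypted objects require Signature Version 4
os.environ['S3_USE_SIGV4'] = 'True'

# regional endpoint without the stray ".s3" component
conn = boto.connect_s3(host='s3-ap-southeast-1.amazonaws.com')
bucket = conn.get_bucket('my-bucket')
key = bucket.get_key('my-kms-encrypted-key')
data = key.get_contents_as_string()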