An error occurred (Throttling) when calling the DescribeDBClusters operation (reached max retries: 4): Rate exceeded

I am trying to copy multiple files to an S3 bucket using the following AWS CLI command:
aws s3 cp local_dir s3://s3_bucket/dir1/xmldir/ --recursive --exclude "*" --include "*.xml"
I have configured an AWS Lambda function so that it is triggered for each file uploaded; once triggered, the Lambda processes the file that was copied to S3.
The problem: out of 360 files, only 200 are processed successfully. The remaining invocations fail with the error:
An error occurred (Throttling) when calling the DescribeDBClusters operation (reached max retries: 4): Rate exceeded
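The throttling comes from hundreds of near-simultaneous Lambda invocations all calling the RDS DescribeDBClusters API at once. The usual remedy is exponential backoff with jitter around the throttled call (or raising boto3's built-in retry limits). A minimal pure-Python sketch of the backoff pattern, assuming the throttled call surfaces as an exception; the names here are illustrative, not from the original post:

```python
import random
import time

def with_backoff(call, max_attempts=5, base=0.5):
    """Retry `call` with exponential backoff plus full jitter.

    Sketch only: in a real Lambda, `call` would wrap the boto3
    rds.describe_db_clusters invocation, and the caught exception
    would be botocore's ClientError with a Throttling error code.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError:  # stand-in for a throttling error
            if attempt == max_attempts - 1:
                raise
            # sleep between 0 and base * 2^attempt seconds ("full jitter")
            time.sleep(base * (2 ** attempt) * random.random())
```

boto3 can also do this for you: passing `botocore.config.Config(retries={"max_attempts": 10, "mode": "adaptive"})` when creating the client enables client-side rate limiting and more retries than the default of 4 seen in the error.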


Setting up S3 compatible service for blob storage on Google Cloud Storage

PS: cross posted on drone forums here.
I'm trying to set up an S3-like service for Drone logs. I've verified that my AWS_* values are set correctly in the container, and running aws-cli from inside the container gives correct output for:
aws s3api list-objects --bucket drone-logs --endpoint-url=https://storage.googleapis.com
however, the Drone server itself is unable to upload logs to the bucket, failing with the following error:
{"error":"InvalidArgument: Invalid argument.\n\tstatus code: 400, request id: , host id: ","level":"warning","msg":"manager: cannot upload complete logs","step-id":7,"time":"2023-02-09T12:26:16Z"}
On startup, the Drone server shows that the S3-related configuration was picked up correctly:
rpc:
  server: ""
  secret: my-secret
  debug: false
  host: drone.XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  proto: https
s3:
  bucket: drone-logs
  prefix: ""
  endpoint: https://storage.googleapis.com
  pathstyle: true
The environment variables inside the Drone server container are:
# env | grep -E 'DRONE|AWS' | sort
AWS_ACCESS_KEY_ID=GOOGXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
AWS_DEFAULT_REGION=us-east-1
AWS_REGION=us-east-1
AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_COOKIE_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_DATABASE_DATASOURCE=postgres://drone:XXXXXXXXXXXXXXXXXXXXXXXXXXXXX@35.XXXXXX.XXXX:5432/drone?sslmode=disable
DRONE_DATABASE_DRIVER=postgres
DRONE_DATABASE_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_GITHUB_CLIENT_ID=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_GITHUB_CLIENT_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_JSONNET_ENABLED=true
DRONE_LOGS_DEBUG=true
DRONE_LOGS_TRACE=true
DRONE_RPC_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_S3_BUCKET=drone-logs
DRONE_S3_ENDPOINT=https://storage.googleapis.com
DRONE_S3_PATH_STYLE=true
DRONE_SERVER_HOST=drone.XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_SERVER_PROTO=https
DRONE_STARLARK_ENABLED=true
The .drone.yaml that is being used is available here, on GitHub.
The server is built with the nolimit tag:
go build -tags "nolimit" github.com/drone/drone/cmd/drone-server
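Since DRONE_S3_PATH_STYLE=true is set above, it may help to see exactly what that flag changes. A sketch of the two S3 addressing styles (an illustrative helper, not part of Drone's code):

```python
def object_url(endpoint_host, bucket, key, path_style=True):
    """Build an S3-compatible object URL.

    path_style=True  -> https://<endpoint>/<bucket>/<key>
    path_style=False -> https://<bucket>.<endpoint>/<key>  (virtual-hosted)
    """
    if path_style:
        return f"https://{endpoint_host}/{bucket}/{key}"
    return f"https://{bucket}.{endpoint_host}/{key}"

# Path-style, as Drone is configured here:
print(object_url("storage.googleapis.com", "drone-logs", "logs/7"))
# → https://storage.googleapis.com/drone-logs/logs/7
```

GCS accepts both styles on its S3-compatible XML API, so an InvalidArgument 400 on upload is more likely caused by a request feature the endpoint rejects (for example, a multipart/upload parameter or header GCS does not support) than by the addressing style itself.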

"Invalid resource in copy source ARN" while copying between the same access points

I am trying to copy a file from a source access point to a destination access point whose URI is identical except for the file name, e.g.:
aws s3 cp s3://arn:aws:s3:eu-west-2:1234567890:accesspoint:my-access-point-name/path/to/local-1234567890.inprogress s3://arn:aws:s3:eu-west-2:1234567890:accesspoint:my-access-point-name/path/to/local-1234567890
Giving:
copy failed: s3://arn:aws:s3:eu-west-2:1234567890:accesspoint:my-access-point-name/path/to/local-1234567890.inprogress to s3://arn:aws:s3:eu-west-2:1234567890:accesspoint:my-access-point-name/path/to/local-1234567890
An error occurred (InvalidArgument) when calling the CopyObject operation: Invalid resource in copy source ARN
Replacing : with / just before the access point name worked!
aws s3 cp s3://arn:aws:s3:eu-west-2:1234567890:accesspoint/my-access-point-name/path/to/local-1234567890.inprogress s3://arn:aws:s3:eu-west-2:1234567890:accesspoint/my-access-point-name/path/to/local-1234567890
Strangely, the : form is fine if only one of the source or destination is an access point ARN, but not if both are.
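The workaround can be captured in a small helper that rewrites the colon form of the access point ARN into the slash form that worked. A sketch (illustrative only, not an official AWS utility):

```python
def normalize_accesspoint_uri(uri):
    """Rewrite s3://arn:...:accesspoint:NAME/key to s3://arn:...:accesspoint/NAME/key.

    Only the first ':accesspoint:' marker is replaced, so the rest of the
    ARN (region, account id, key path) is left untouched.
    """
    return uri.replace(":accesspoint:", ":accesspoint/", 1)

src = "s3://arn:aws:s3:eu-west-2:1234567890:accesspoint:my-access-point-name/path/to/file"
print(normalize_accesspoint_uri(src))
# → s3://arn:aws:s3:eu-west-2:1234567890:accesspoint/my-access-point-name/path/to/file
```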

rclone failing with "AccessControlListNotSupported" on cross-account copy -- AWS CLI Works

Quick summary, now that I think I see the problem:
rclone seems to always send an ACL with a copy request, defaulting to "private". This fails on a (2022) default AWS bucket, which (correctly) has ACLs disabled. I need a way to suppress sending the ACL in rclone.
Detail
I assume an IAM role and attempt to do an rclone copy from a data center Linux box to a default options private no-ACL bucket in the same account as the role I assume. It succeeds.
I then configure a default options private no-ACL bucket in another account than the role I assume. I attach a bucket policy to the cross-account bucket that trusts the role I assume. The role I assume has global permissions to write S3 buckets anywhere.
I test the cross-account bucket policy by using the AWS CLI to copy the same linux box source file to the cross-account bucket. Copy works fine with AWS CLI, suggesting that the connection and access permissions to the cross account bucket are fine. DataSync (another AWS service) works fine too.
Problem: an rclone copy fails with the AccessControlListNotSupported error below.
status code: 400, request id: XXXX, host id: ZZZZ
2022/08/26 16:47:29 ERROR : bigmovie: Failed to copy: AccessControlListNotSupported: The bucket does not allow ACLs
status code: 400, request id: XXXX, host id: YYYY
And of course it is true that the bucket does not allow ACLs, which is the desired best practice and the AWS default for new buckets. However, the bucket does support a bucket policy that trusts my assumed role, and that role and bucket-policy pair works just fine with the AWS CLI copy across accounts, but not with the rclone copy.
Given that AWS CLI copies just fine cross account to this bucket, am I missing one of rclone's numerous flags to get the same behaviour? Anyone think of another possible cause?
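If the summary above is right, the fix amounts to not sending the x-amz-acl header at all when the bucket has ACLs disabled (AWS CLI omits it unless --acl is given, which is why it succeeds). A sketch of the difference (not rclone's actual code):

```python
def build_put_headers(acl=None):
    """Headers for an S3 PUT request.

    The x-amz-acl header is added only when an ACL is explicitly
    requested. On a bucket with ACLs disabled (BucketOwnerEnforced),
    sending it at all triggers AccessControlListNotSupported.
    """
    headers = {"content-type": "application/octet-stream"}
    if acl is not None:
        headers["x-amz-acl"] = acl
    return headers

print(build_put_headers())            # no ACL header: accepted
print(build_put_headers("private"))   # ACL header present: 400 on ACL-disabled buckets
```

In rclone's S3 backend configuration, setting the `acl` option to an empty string is documented to suppress the X-Amz-Acl header entirely in recent versions, which matches the behaviour needed here (worth confirming against the rclone version in use; v1.55.1 from the log below predates some of these changes).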
I tested older, current, and beta rclone versions; all behave the same.
Version Info
os/version: centos 7.9.2009 (64 bit)
os/kernel: 3.10.0-1160.71.1.el7.x86_64 (x86_64)
os/type: linux
os/arch: amd64
go/version: go1.18.5
go/linking: static
go/tags: none
Failing Command
$ rclone copy bigmovie s3-standard:SOMEBUCKET/bigmovie -vv
Failing RClone Config
type = s3
provider = AWS
env_auth = true
region = us-east-1
endpoint = https://bucket.vpce-REDACTED.s3.us-east-1.vpce.amazonaws.com
#server_side_encryption = AES256
storage_class = STANDARD
#bucket_acl = private
#acl = private
Note that I've tested all permutations of the commented-out lines with similar results.
Note that I have tested with and without the private endpoint listed, with the same results for both the AWS CLI and rclone: the CLI works, rclone fails.
A log from the command with the -vv flag
2022/08/25 17:25:55 DEBUG : Using config file from "PERSONALSTUFF/rclone.conf"
2022/08/25 17:25:55 DEBUG : rclone: Version "v1.55.1" starting with parameters ["/usr/local/rclone/1.55/bin/rclone" "copy" "bigmovie" "s3-standard:SOMEBUCKET" "-vv"]
2022/08/25 17:25:55 DEBUG : Creating backend with remote "bigmovie"
2022/08/25 17:25:55 DEBUG : fs cache: adding new entry for parent of "bigmovie", "MYDIRECTORY/testbed"
2022/08/25 17:25:55 DEBUG : Creating backend with remote "s3-standard:SOMEBUCKET/bigmovie"
2022/08/25 17:25:55 DEBUG : bigmovie: Need to transfer - File not found at Destination
2022/08/25 17:25:55 ERROR : bigmovie: Failed to copy: s3 upload: 400 Bad Request: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessControlListNotSupported</Code><Message>The bucket does not allow ACLs</Message><RequestId>8DW1MQSHEN6A0CFA</RequestId><HostId>d3Rlnx/XezTB7OC79qr4QQuwjgR+h2VYj4LCZWLGTny9YAy985be5HsFgHcqX4azSDhDXefLE+U=</HostId></Error>
2022/08/25 17:25:55 ERROR : Attempt 1/3 failed with 1 errors and: s3 upload: 400 Bad Request: <?xml version="1.0" encoding="UTF-8"?>

When listing objects using the IBM Cloud Object Storage CLI, and also in my Java program, I get a TLS handshake error

I can list my buckets in the COS CLI:
ibmcloud cos buckets
OK
2 buckets found in your account:
Name                                       Date Created
cloud-object-storage-kc-cos-standard-8e7   May 20, 2020 at 14:40:37
cloud-object-storage-kc-cos-standard-nw6   Dec 14, 2020 at 16:35:48
But if I try to list the objects in the second bucket I get the following:
ibmcloud cos objects -bucket cloud-object-storage-kc-cos-standard-nw6 -region us-east
FAILED
RequestError: send request failed
caused by: Get https://cloud-object-storage-kc-cos-standard-nw6.s3.us-east.cloud-object-storage.appdomain.cloud/: tls: first record does not look like a TLS handshake
I do not know why I would get a TLS handshake error on such a call. If I try any other region, I get: "The specified bucket was not found in your IBM Cloud account. This may be because you provided the wrong region. Provide the bucket's correct region and try again."
My Cloud Object Storage configuration is (X's are redacted data):
Last Updated            Tuesday, December 15 2020 at 11:16:46
Default Region          us-geo
Download Location       /Users/xxxxxx@us.ibm.com/Downloads
CRN                     b6cc5f87-5867-4736-XXXX-cf70c34a1fb7
AccessKeyID
SecretAccessKey
Authentication Method   IAM
URL Style               VHost
Service Endpoint
To find the exact location of your COS bucket, you can try running the command below:
ibmcloud cos buckets-extended
buckets-extended: List all the extended buckets with pagination support.
Then pass the bucket's Location Constraint in the command below:
ibmcloud cos objects --bucket vmac-code-engine-bucket --region us-standard
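The bucket's endpoint is derived from its region, which is why a wrong region can fail at the connection level rather than with a clean "not found" error. A sketch of how the virtual-hosted (VHost URL-style) public endpoint is composed, assuming the public appdomain.cloud endpoints visible in the error above:

```python
def cos_endpoint(bucket, region):
    """Compose the virtual-hosted public endpoint for an IBM COS bucket.

    Matches the host seen in the failing request:
    <bucket>.s3.<region>.cloud-object-storage.appdomain.cloud
    """
    return f"https://{bucket}.s3.{region}.cloud-object-storage.appdomain.cloud"

print(cos_endpoint("cloud-object-storage-kc-cos-standard-nw6", "us-east"))
# → https://cloud-object-storage-kc-cos-standard-nw6.s3.us-east.cloud-object-storage.appdomain.cloud
```

A "first record does not look like a TLS handshake" error means the server answered with plain HTTP (or something else entirely) on the HTTPS connection; a corporate proxy or an HTTP-only endpoint in the path is a common cause, independent of the bucket region.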

Cannot set bucket policy of amazon s3

I was simply following the "get started" tutorial here.
But I failed at "Step 4: Add a Bucket Policy to Allow Public Reads". It always complains "access denied" with a red error icon.
I am not able to set it via the command line either. Here is the command I use:
aws s3api put-bucket-policy --bucket bucket-name --policy file://bucket-policy.json
Here is the error I got:
An error occurred (AccessDenied) when calling the PutBucketPolicy
operation: Access Denied
The issue was that you have to uncheck the boxes under Permissions -> Public access settings (the "Block public access" options) before the bucket will accept a public policy. Amazon fails to mention this in the tutorial. Bad tutorial.
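For reference, the bucket-policy.json that this tutorial step expects is the standard public-read policy. A sketch that writes it out (bucket-name is a placeholder for your actual bucket):

```python
import json

# Standard public-read policy from the S3 "get started" tutorial;
# "bucket-name" must be replaced with your real bucket name.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket-name/*",
        }
    ],
}

with open("bucket-policy.json", "w") as f:
    json.dump(policy, f, indent=2)
```

Even with a syntactically valid policy, PutBucketPolicy is denied while Block Public Access is enabled, which is exactly the AccessDenied behaviour described above.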