rclone failing with "AccessControlListNotSupported" on cross-account copy -- AWS CLI Works - amazon-s3

Quick Summary, now that I think I see the problem:
rclone appears to always send an ACL with a copy request, with a default value of "private". This fails against a (2022) default AWS bucket, which (correctly) has ACLs disabled. I need a way to suppress sending the ACL in rclone.
Detail
I assume an IAM role and attempt an rclone copy from a data-center Linux box to a default-options, private, ACLs-disabled bucket in the same account as the role I assume. It succeeds.
I then configure a default-options, private, ACLs-disabled bucket in a different account from the role I assume. I attach a bucket policy to the cross-account bucket that trusts the role I assume. The role I assume has global permissions to write to S3 buckets anywhere.
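For reference, a hedged sketch of the kind of cross-account bucket policy described above -- the account ID, role name, actions and bucket name are placeholders, not the actual values used:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountWrite",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:role/ASSUMED-ROLE" },
      "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::SOMEBUCKET", "arn:aws:s3:::SOMEBUCKET/*"]
    }
  ]
}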
I test the cross-account bucket policy by using the AWS CLI to copy the same Linux box source file to the cross-account bucket. The copy works fine with the AWS CLI, suggesting that the connection and access permissions to the cross-account bucket are fine. DataSync (another AWS service) works fine too.
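The CLI test was along these lines (a sketch with the redacted bucket name, run with the assumed-role credentials exported in the environment):
$ aws s3 cp bigmovie s3://SOMEBUCKET/bigmovie --region us-east-1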
Problem: an rclone copy fails with the AccessControlListNotSupported error below.
2022/08/26 16:47:29 ERROR : bigmovie: Failed to copy: AccessControlListNotSupported: The bucket does not allow ACLs
    status code: 400, request id: XXXX, host id: YYYY
And of course it is true that the bucket does not support ACLs ... which is the desired best practice and the AWS default for new buckets. However, the bucket does have a bucket policy that trusts my assumed role, and that role/bucket-policy pair works just fine with the cross-account AWS CLI copy, but not with the rclone copy.
Given that the AWS CLI copies cross-account to this bucket just fine, am I missing one of rclone's numerous flags needed to get the same behaviour? Can anyone think of another possible cause?
I tested older, current, and beta rclone versions; all behave the same.
Version Info
os/version: centos 7.9.2009 (64 bit)
os/kernel: 3.10.0-1160.71.1.el7.x86_64 (x86_64)
os/type: linux
os/arch: amd64
go/version: go1.18.5
go/linking: static
go/tags: none
Failing Command
$ rclone copy bigmovie s3-standard:SOMEBUCKET/bigmovie -vv
Failing RClone Config
type = s3
provider = AWS
env_auth = true
region = us-east-1
endpoint = https://bucket.vpce-REDACTED.s3.us-east-1.vpce.amazonaws.com
#server_side_encryption = AES256
storage_class = STANDARD
#bucket_acl = private
#acl = private
Note that I've tested all permutations of the commented-out lines with similar results.
Note that I have tested with and without the private endpoint listed, with the same results for both the AWS CLI and rclone, i.e. the CLI works and rclone fails.
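One more permutation worth noting (a sketch, not a confirmed fix): recent rclone s3 backend docs suggest that an explicitly empty acl stops the X-Amz-Acl header from being sent at all, which would sidestep the ACLs-disabled bucket. Something like:
[s3-standard]
type = s3
provider = AWS
env_auth = true
region = us-east-1
endpoint = https://bucket.vpce-REDACTED.s3.us-east-1.vpce.amazonaws.com
storage_class = STANDARD
# deliberately empty so no canned ACL header is added (recent rclone versions only)
acl =
Whether the older versions tested here honour an empty acl the same way is an open question.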
A log from the command with the -vv flag
2022/08/25 17:25:55 DEBUG : Using config file from "PERSONALSTUFF/rclone.conf"
2022/08/25 17:25:55 DEBUG : rclone: Version "v1.55.1" starting with parameters ["/usr/local/rclone/1.55/bin/rclone" "copy" "bigmovie" "s3-standard:SOMEBUCKET" "-vv"]
2022/08/25 17:25:55 DEBUG : Creating backend with remote "bigmovie"
2022/08/25 17:25:55 DEBUG : fs cache: adding new entry for parent of "bigmovie", "MYDIRECTORY/testbed"
2022/08/25 17:25:55 DEBUG : Creating backend with remote "s3-standard:SOMEBUCKET/bigmovie"
2022/08/25 17:25:55 DEBUG : bigmovie: Need to transfer - File not found at Destination
2022/08/25 17:25:55 ERROR : bigmovie: Failed to copy: s3 upload: 400 Bad Request: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessControlListNotSupported</Code><Message>The bucket does not allow ACLs</Message><RequestId>8DW1MQSHEN6A0CFA</RequestId><HostId>d3Rlnx/XezTB7OC79qr4QQuwjgR+h2VYj4LCZWLGTny9YAy985be5HsFgHcqX4azSDhDXefLE+U=</HostId></Error>
2022/08/25 17:25:55 ERROR : Attempt 1/3 failed with 1 errors and: s3 upload: 400 Bad Request: <?xml version="1.0" encoding="UTF-8"?>

Related

Setting up S3 compatible service for blob storage on Google Cloud Storage

PS: cross-posted on the drone forums here.
I'm trying to set up an S3-like service for drone logs. I've tested that my AWS_* values are set correctly in the container, and using the aws-cli from inside the container gives correct output for:
aws s3api list-objects --bucket drone-logs --endpoint-url=https://storage.googleapis.com
However, the drone server itself is unable to upload logs to the bucket (with the following error):
{"error":"InvalidArgument: Invalid argument.\n\tstatus code: 400, request id: , host id: ","level":"warning","msg":"manager: cannot upload complete logs","step-id":7,"time":"2023-02-09T12:26:16Z"}
The drone server on startup shows that the S3-related configuration was picked up correctly:
rpc:
  server: ""
  secret: my-secret
  debug: false
  host: drone.XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  proto: https
s3:
  bucket: drone-logs
  prefix: ""
  endpoint: https://storage.googleapis.com
  pathstyle: true
The environment variables inside the drone server container are:
# env | grep -E 'DRONE|AWS' | sort
AWS_ACCESS_KEY_ID=GOOGXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
AWS_DEFAULT_REGION=us-east-1
AWS_REGION=us-east-1
AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_COOKIE_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_DATABASE_DATASOURCE=postgres://drone:XXXXXXXXXXXXXXXXXXXXXXXXXXXXX@35.XXXXXX.XXXX:5432/drone?sslmode=disable
DRONE_DATABASE_DRIVER=postgres
DRONE_DATABASE_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_GITHUB_CLIENT_ID=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_GITHUB_CLIENT_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_JSONNET_ENABLED=true
DRONE_LOGS_DEBUG=true
DRONE_LOGS_TRACE=true
DRONE_RPC_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_S3_BUCKET=drone-logs
DRONE_S3_ENDPOINT=https://storage.googleapis.com
DRONE_S3_PATH_STYLE=true
DRONE_SERVER_HOST=drone.XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_SERVER_PROTO=https
DRONE_STARLARK_ENABLED=true
The .drone.yaml being used is available here, on GitHub.
The server is built and run with the nolimit tag:
go build -tags "nolimit" github.com/drone/drone/cmd/drone-server

RKE2 Authorized endpoint configuration help required

I have a Rancher 2.6.67 server and an RKE2 downstream cluster. The cluster was created without an authorized cluster endpoint. The article "How to add an authorised cluster endpoint to a RKE2 cluster created by Rancher" describes how to add it to an existing cluster; although the answer looks promising, I must still be missing some detail, because it does not work for me.
Here is what I did:
Created /var/lib/rancher/rke2/kube-api-authn-webhook.yaml file with contents:
apiVersion: v1
kind: Config
clusters:
- name: Default
  cluster:
    insecure-skip-tls-verify: true
    server: http://127.0.0.1:6440/v1/authenticate
users:
- name: Default
  user:
    insecure-skip-tls-verify: true
current-context: webhook
contexts:
- name: webhook
  context:
    user: Default
    cluster: Default
and added
"kube-apiserver-arg": [
"authentication-token-webhook-config-file=/var/lib/rancher/rke2/kube-api-authn-webhook.yaml"
to the /etc/rancher/rke2/config.yaml.d/50-rancher.yaml file.
After restarting rke2-server I found the network configuration tab in Rancher and was able to enable authorized endpoint. Here is where my success ends.
I tried creating a ServiceAccount and getting its secret to use token authentication, but it failed when connecting directly to the API endpoint on the master.
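For reference, a sketch of the kind of ServiceAccount token test I mean -- the names are placeholders, and kubectl create token assumes Kubernetes 1.24 or later (on older clusters the token would be read from the SA's secret instead):
kubectl create serviceaccount test-sa -n default
TOKEN=$(kubectl create token test-sa -n default)
kubectl --server=https://<master-node>:6443 --token=$TOKEN --insecure-skip-tls-verify get pods -n default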
kube-api-auth pod logs this:
time="2022-10-06T08:42:27Z" level=error msg="found 1 parts of token"
time="2022-10-06T08:42:27Z" level=info msg="Processing v1Authenticate request..."
Also the log is full of messages like this:
E1006 09:04:07.868108 1 reflector.go:139] pkg/mod/github.com/rancher/client-go@v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterAuthToken: failed to list *v3.ClusterAuthToken: the server could not find the requested resource (get clusterauthtokens.meta.k8s.io)
E1006 09:04:40.778350 1 reflector.go:139] pkg/mod/github.com/rancher/client-go@v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterAuthToken: failed to list *v3.ClusterAuthToken: the server could not find the requested resource (get clusterauthtokens.meta.k8s.io)
E1006 09:04:45.171554 1 reflector.go:139] pkg/mod/github.com/rancher/client-go@v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterUserAttribute: failed to list *v3.ClusterUserAttribute: the server could not find the requested resource (get clusteruserattributes.meta.k8s.io)
I found that SA tokens will not work this way, so I tried to use a Rancher user token, but that fails as well:
time="2022-10-06T08:37:34Z" level=info msg=" ...looking up token for kubeconfig-user-qq9nrc86vv"
time="2022-10-06T08:37:34Z" level=error msg="clusterauthtokens.cluster.cattle.io \"cattle-system/kubeconfig-user-qq9nrc86vv\" not found"
Checking the cattle-system namespace, there are no SA and secret entries corresponding to the users created in Rancher; however, I found related SA and secret entries in cattle-impersonation-system.
I tried creating a new user, but that too only resulted in new entries in the cattle-impersonation-system namespace, so I presume kube-api-auth wrongly assumes the secrets are located in the cattle-system namespace.
Now the questions:
Can I authenticate with downstream RKE2 cluster using normal SA tokens (not ones created through Rancher server)? If so, how?
What did I do wrong about adding the webhook authentication configuration? How to make it work?
I noticed that since I made the modifications described above, I cannot download the kubeconfig file from the Rancher UI for this cluster. What went wrong there?
Thanks in advance for any advice.

How to check content of a Noobaa bucket

I am able to check the status of a NooBaa bucket using the noobaa bucket status <bucket> command.
$ noobaa bucket status XYZ
INFO[0005] ✅ Exists: NooBaa "noobaa"
INFO[0005] ✅ Exists: Service "noobaa-mgmt"
INFO[0006] ✅ Exists: Secret "noobaa-operator"
INFO[0006] ✅ Exists: Secret "noobaa-admin"
INFO[0008] ✈️ RPC: bucket.read_bucket() Request: {Name:XYZ}
INFO[0010] ✅ RPC: bucket.read_bucket() Response OK: took 14.3ms
Bucket status:
Bucket : XYZ
OBC Namespace : xyz-namespace
OBC BucketClass : default-bucket-class
Type : REGULAR
Mode : OPTIMAL
ResiliencyStatus : OPTIMAL
QuotaStatus : QUOTA_NOT_SET
Num Objects : 1
Data Size : 3.000 B
Data Size Reduced : 5.000 B
Data Space Avail : 1.000 PB
But I am not able to check the content present inside a NooBaa bucket.
How can we check the contents of a NooBaa bucket, using the NooBaa CLI or any other way?
Your question made me realize that the noobaa CLI should have a noobaa object list command, so I opened a new issue for this enhancement on the operator GitHub repo. Thanks :)
Until this is added, there are several ways we use to list objects:
Run noobaa ui - notice that it opens the browser quickly, but on the terminal it prints the credentials for you to use for login. You can probably find the buckets and drill down to the objects in the UI on your own, and you can also check out some recorded videos that navigate the UI - for example this video.
Take the admin S3 credentials and endpoint from noobaa status and then use your favorite s3 client - I currently use aws-cli or rclone:
alias s3='AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint $NOOBAA_S3_ENDPOINT --no-verify-ssl s3'
and then:
s3 ls XYZ
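Since rclone was mentioned as an alternative, a hedged sketch of an equivalent rclone remote (endpoint and credentials taken from noobaa status; all values below are placeholders):
[noobaa]
type = s3
provider = Other
env_auth = false
access_key_id = NOOBAA_ACCESS_KEY_VALUE
secret_access_key = NOOBAA_SECRET_KEY_VALUE
endpoint = https://NOOBAA_S3_ENDPOINT
and then, mirroring the --no-verify-ssl above:
rclone --no-check-certificate ls noobaa:XYZ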
Not many have noticed, but the NooBaa system CR contains a useful Readme text in its status, with commands to "Test S3 client" - ready to copy-paste to set up your aws-cli, including kubectl port-forward to support secure networks and reading the credentials from secrets. Check it out with kubectl describe noobaa. This 40-second YouTube video shows it briefly. BTW, the Readme text is generated for the system, but it does not contain actual secrets, only kubectl commands to read those secrets if permitted.
$ kubectl describe noobaa
...
Phase: Ready
Readme:
Welcome to NooBaa!
-----------------
NooBaa Core Version: 5.3.0-9f579d9
NooBaa Operator Version: 2.1.0
Lets get started:
1. Connect to Management console:
Read your mgmt console login information (email & password) from secret: "noobaa-admin".
kubectl get secret noobaa-admin -n backup-service -o json | jq '.data|map_values(@base64d)'
Open the management console service - take External IP/DNS or Node Port or use port forwarding:
kubectl port-forward -n backup-service service/noobaa-mgmt 11443:443 &
open https://localhost:11443
2. Test S3 client:
kubectl port-forward -n backup-service service/s3 10443:443 &
NOOBAA_ACCESS_KEY=$(kubectl get secret noobaa-admin -n backup-service -o json | jq -r '.data.AWS_ACCESS_KEY_ID|@base64d')
NOOBAA_SECRET_KEY=$(kubectl get secret noobaa-admin -n backup-service -o json | jq -r '.data.AWS_SECRET_ACCESS_KEY|@base64d')
alias s3='AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint https://localhost:10443 --no-verify-ssl s3'
s3 ls
...
The last option, which should have been mentioned first, but which unfortunately I just saw is broken in the current version v2.1.0 (I opened a new issue), is to use the generic noobaa api command to call the object_api list_objects method like so:
noobaa api object list_objects '{ "bucket": "first.bucket" }'
I hope that helps, feel free to open github issues with suggestions/issues.
Thanks!
(NooBaa CTO)

not authorized to perform: rds:DescribeDBEngineVersions

I implemented a REST API in Django with django-rest-framework; on localhost it works fine with successful results.
When pushing this up to an existing AWS Elastic Beanstalk instance, I received:
{
  "detail": "Authentication credentials were not provided."
}
As a solution I followed this question: Authorization Credentials Stripped
But when I push my code to AWS EB I get this error:
Pipeline failed with error "Service:AmazonRDS, is not authorized to perform: rds:DescribeDBEngineVersions"
I have tried lots of solutions, but I get this error every time.
Note: I am using Python 3.6.
I found the answer to my problem.
I set the RDS policy and created a new custom_wsgi.config file in the .ebextensions directory with the following content:
files:
  "/etc/httpd/conf.d/wsgihacks.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      WSGIPassAuthorization On
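For context on the "RDS policy" part, a hedged sketch of a minimal IAM statement granting the action named in the error (which role it needs to be attached to -- the EB service role or the instance profile -- depends on the environment):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds:DescribeDBEngineVersions",
      "Resource": "*"
    }
  ]
}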

Terraform > The specified key does not exist

Running terraform plan complains that there's no S3 key in my bucket. Note: this key does not exist; however, I'm pretty sure Terraform is supposed to create it if it doesn't. The log is:
[DEBUG] [aws-sdk-go] <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>my-key</Key>
and the Terraform config is:
terraform {
  backend "s3" {
    bucket     = "<bucket>"
    key        = "my-key"
    region     = "eu-west-2"
    acl        = "private"
    kms_key_id = "<key>"
  }
}
Any suggestions?
You need to run terraform init before terraform plan to initialise the backend you have configured.
At least in recent versions of Terraform (I'm using 0.11.13) the S3 backend is automatically created if it doesn't already exist.
I spent several hours on this, only to find out that the remote state file will only be created after terraform apply (of course after terraform init && terraform plan).
$ terraform version
Terraform v1.0.4
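A sketch of the sequence after which the state object actually appears in the bucket, assuming the backend block above and valid AWS credentials:
$ terraform init    # configures the s3 backend; does not create the state object yet
$ terraform plan    # still only reads state
$ terraform apply   # the my-key state object is written once the apply completes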