Invalid resource in copy source ARN while copying between the same access points - amazon-s3

I am trying to copy a file from a source access point to a destination access point with the same URL except for a different file name:
e.g.
aws s3 cp s3://arn:aws:s3:eu-west-2:1234567890:accesspoint:my-access-point-name/path/to/local-1234567890.inprogress s3://arn:aws:s3:eu-west-2:1234567890:accesspoint:my-access-point-name/path/to/local-1234567890
Giving:
copy failed: s3://arn:aws:s3:eu-west-2:1234567890:accesspoint:my-access-point-name/path/to/local-1234567890.inprogress to s3://arn:aws:s3:eu-west-2:1234567890:accesspoint:my-access-point-name/path/to/local-1234567890
An error occurred (InvalidArgument) when calling the CopyObject operation: Invalid resource in copy source ARN

Replacing : with / just before the access point name worked!
aws s3 cp s3://arn:aws:s3:eu-west-2:1234567890:accesspoint/my-access-point-name/path/to/local-1234567890.inprogress s3://arn:aws:s3:eu-west-2:1234567890:accesspoint/my-access-point-name/path/to/local-1234567890
Strangely, : is fine if only one of the source or destination is an access point ARN, but not if both are.
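For what it's worth, the same copy should also be expressible with the lower-level s3api, where the access point ARN is passed as the bucket and the copy source uses the accesspoint/<name>/object/<key> form described in the CopyObject documentation. A sketch reusing the placeholder names from above, not tested against this exact setup:
aws s3api copy-object \
  --bucket arn:aws:s3:eu-west-2:1234567890:accesspoint/my-access-point-name \
  --key path/to/local-1234567890 \
  --copy-source arn:aws:s3:eu-west-2:1234567890:accesspoint/my-access-point-name/object/path/to/local-1234567890.inprogress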

rclone failing with "AccessControlListNotSupported" on cross-account copy -- AWS CLI Works

Quick Summary now that I think I see the problem:
rclone seems to always send an ACL with a copy request, with a default value of "private". This fails against a (2022) default AWS bucket, which (correctly) has ACLs disabled. I need a way to suppress sending the ACL in rclone.
Detail
I assume an IAM role and attempt an rclone copy from a data-center Linux box to a default-options, private, ACLs-disabled bucket in the same account as the role I assume. It succeeds.
I then configure a default-options, private, ACLs-disabled bucket in a different account from the role I assume. I attach a bucket policy to the cross-account bucket that trusts the role I assume. The role I assume has global permissions to write to S3 buckets anywhere.
I test the cross-account bucket policy by using the AWS CLI to copy the same Linux-box source file to the cross-account bucket. The copy works fine with the AWS CLI, suggesting that the connection and access permissions to the cross-account bucket are fine. DataSync (another AWS service) works fine too.
Problem: an rclone copy fails with the AccessControlListNotSupported error below.
status code: 400, request id: XXXX, host id: ZZZZ
2022/08/26 16:47:29 ERROR : bigmovie: Failed to copy: AccessControlListNotSupported: The bucket does not allow ACLs
status code: 400, request id: XXXX, host id: YYYY
And of course it is true that the bucket does not allow ACLs ... which is the desired best practice and the AWS default for new buckets. However, the bucket does have a bucket policy that trusts my assumed role, and that role/bucket-policy pair works just fine with the cross-account AWS CLI copy, but not with the rclone copy.
Given that the AWS CLI copies just fine cross-account to this bucket, am I missing one of rclone's numerous flags to get the same behaviour? Can anyone think of another possible cause?
I tested older, current, and beta rclone versions; all behave the same.
Version Info
os/version: centos 7.9.2009 (64 bit)
os/kernel: 3.10.0-1160.71.1.el7.x86_64 (x86_64)
os/type: linux
os/arch: amd64
go/version: go1.18.5
go/linking: static
go/tags: none
Failing Command
$ rclone copy bigmovie s3-standard:SOMEBUCKET/bigmovie -vv
Failing RClone Config
type = s3
provider = AWS
env_auth = true
region = us-east-1
endpoint = https://bucket.vpce-REDACTED.s3.us-east-1.vpce.amazonaws.com
#server_side_encryption = AES256
storage_class = STANDARD
#bucket_acl = private
#acl = private
Note that I've tested all permutations of the commented-out lines with similar results.
Note that I have tested with and without the private endpoint listed, with the same results for both the AWS CLI and rclone, i.e. the CLI works and rclone fails.
A log from the command with the -vv flag
2022/08/25 17:25:55 DEBUG : Using config file from "PERSONALSTUFF/rclone.conf"
2022/08/25 17:25:55 DEBUG : rclone: Version "v1.55.1" starting with parameters ["/usr/local/rclone/1.55/bin/rclone" "copy" "bigmovie" "s3-standard:SOMEBUCKET" "-vv"]
2022/08/25 17:25:55 DEBUG : Creating backend with remote "bigmovie"
2022/08/25 17:25:55 DEBUG : fs cache: adding new entry for parent of "bigmovie", "MYDIRECTORY/testbed"
2022/08/25 17:25:55 DEBUG : Creating backend with remote "s3-standard:SOMEBUCKET/bigmovie"
2022/08/25 17:25:55 DEBUG : bigmovie: Need to transfer - File not found at Destination
2022/08/25 17:25:55 ERROR : bigmovie: Failed to copy: s3 upload: 400 Bad Request: <?xml version="1.0" encoding="UTF-8"?>
AccessControlListNotSupported: The bucket does not allow ACLs (request/host IDs: 8DW1MQSHEN6A0CFAd3Rlnx/XezTB7OC79qr4QQuwjgR+h2VYj4LCZWLGTny9YAy985be5HsFgHcqX4azSDhDXefLE+U=)
2022/08/25 17:25:55 ERROR : Attempt 1/3 failed with 1 errors and: s3 upload: 400 Bad Request: <?xml version="1.0" encoding="UTF-8"?>
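One thing that may be worth testing (an assumption on my part, based on the current rclone S3 docs rather than on v1.55.1): recent rclone releases treat a blank acl / bucket_acl as "do not send any X-Amz-Acl header at all", which is what a bucket with ACLs disabled expects. A config sketch along those lines, keeping the remote name from the failing command:
[s3-standard]
type = s3
provider = AWS
env_auth = true
region = us-east-1
storage_class = STANDARD
# Blank values below are intended to suppress the ACL headers entirely
# (behaviour of newer rclone versions; an assumption, not verified against v1.55.1)
acl =
bucket_acl =
The same thing should be testable per-run with --s3-acl "" if the installed version supports a blank value there.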

Getting error while AWS EKS cluster backup using Velero tool

Please let me know what my mistake is!
I used this command to back up an AWS EKS cluster with the Velero tool, but it's not working:
./velero.exe install --provider aws --bucket backup-archive/eks-cluster-backup/prod-eks-cluster/ --secret-file ./minio.credentials --use-restic --backup-location-config region=minio,s3ForcePathStyle=true,s3Url=s3Url=s3://backup-archive/eks-cluster-backup/prod-eks-cluster/ --kubeconfig ../kubeconfig-prod-eks --plugins velero/velero-plugin-for-aws:v1.0.0
cat minio.credentials
[default]
aws_access_key_id=xxxx
aws_secret_access_key=yyyyy/zzzzzzzz
region=ap-southeast-1
Getting Error:
../kubectl.exe --kubeconfig=../kubeconfig-prod-eks.txt logs deployment/velero -n velero
time="2020-12-09T09:07:12Z" level=error msg="Error getting backup store for this location" backupLocation=default controller=backup-sync error="backup storage location's bucket name \"backup-archive/eks-cluster-backup/\" must not contain a '/' (if using a prefix, put it in the 'Prefix' field instead)" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/persistence/object_store.go:110" error.function=github.com/vmware-tanzu/velero/pkg/persistence.NewObjectBackupStore logSource="pkg/controller/backup_sync_controller.go:168"
Note: I have tried --bucket backup-archive, but it was still no use.
This is the source of your problem: --bucket backup-archive/eks-cluster-backup/prod-eks-cluster/.
The error says: must not contain a '/' .
This means it cannot contain a slash in the middle of the bucket name (leading/trailing slashes are trimmed, so that's not a problem). Source: https://github.com/vmware-tanzu/velero/blob/3867d1f434c0b1dd786eb8f9349819b4cc873048/pkg/persistence/object_store.go#L102-L111.
If you want to namespace your backups within a bucket, you may use the --prefix parameter. Like so:
--bucket backup-archive --prefix /eks-cluster-backup/prod-eks-cluster/.
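Put back into the original command, that would look roughly like the following. This is an untested sketch: it keeps the flags from the question, and the s3Url value, which in the question is duplicated and points at an s3:// URL, is shown here as a placeholder for the MinIO HTTP(S) endpoint, since that is what the plugin normally expects for a MinIO-backed location:
./velero.exe install --provider aws --bucket backup-archive --prefix eks-cluster-backup/prod-eks-cluster --secret-file ./minio.credentials --use-restic --backup-location-config region=minio,s3ForcePathStyle=true,s3Url=http://MINIO-HOST:9000 --kubeconfig ../kubeconfig-prod-eks --plugins velero/velero-plugin-for-aws:v1.0.0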

When listing objects using the IBM Cloud Object Storage CLI, and also in my Java program, I get a TLS handshake error

I can list my buckets in the COS CLI:
ibmcloud cos buckets
OK
2 buckets found in your account:
Name Date Created
cloud-object-storage-kc-cos-standard-8e7 May 20, 2020 at 14:40:37
cloud-object-storage-kc-cos-standard-nw6 Dec 14, 2020 at 16:35:48
But if I try to list the objects in the second bucket I get the following:
ibmcloud cos objects -bucket cloud-object-storage-kc-cos-standard-nw6 -region us-east
FAILED
RequestError: send request failed
caused by: Get https://cloud-object-storage-kc-cos-standard-nw6.s3.us-east.cloud-object-storage.appdomain.cloud/: tls: first record does not look like a TLS handshake
I do not know why I would get a TLS handshake error on such a call. If I try any other region, I get "The specified bucket was not found in your IBM Cloud account. This may be because you provided the wrong region. Provide the bucket's correct region and try again."
My Cloud Object Storage configuration is (X's are redacted data):
Last Updated Tuesday, December 15 2020 at 11:16:46
Default Region us-geo
Download Location /Users/xxxxxx#us.ibm.com/Downloads
CRN b6cc5f87-5867-4736-XXXX-cf70c34a1fb7
AccessKeyID
SecretAccessKey
Authentication Method IAM
URL Style VHost
Service Endpoint
To find the exact location of your COS bucket, you can try running the command below:
ibmcloud cos buckets-extended
buckets-extended: List all the extended buckets with pagination support.
Then pass the bucket's Location Constraint as the region in the command below:
ibmcloud cos objects --bucket vmac-code-engine-bucket --region us-standard
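Applied to the bucket from the question, the two commands would take this shape (the region value must be whatever LocationConstraint buckets-extended reports for that bucket; the angle-bracket value below is a placeholder, not a real region name):
ibmcloud cos buckets-extended
ibmcloud cos objects --bucket cloud-object-storage-kc-cos-standard-nw6 --region <LocationConstraint-from-the-listing>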

Cannot set bucket policy of amazon s3

I was simply following the "get started" tutorial here
But I failed at "Step 4 Add a Bucket Policy to Allow Public Reads". It always complains "access denied" with a red error icon.
I am not able to set it via command line either. Here is the command I use:
aws s3api put-bucket-policy --bucket bucket-name --policy
file://bucket-policy.json
Here is the error I got:
An error occurred (AccessDenied) when calling the PutBucketPolicy
operation: Access Denied
The issue was that you have to uncheck the Block Public Access boxes under Permissions -> Public access settings; with those enabled, S3 rejects public bucket policies with AccessDenied. Amazon failed to mention this in their tutorial. Bad tutorial.
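If you prefer to do it from the command line, the equivalent (to the best of my knowledge; double-check against your CLI version) is to relax the two policy-related Block Public Access settings with put-public-access-block and then retry put-bucket-policy:
aws s3api put-public-access-block --bucket bucket-name --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=false,RestrictPublicBuckets=false
aws s3api put-bucket-policy --bucket bucket-name --policy file://bucket-policy.json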

Amazon S3 + Fog warning: connecting to the matching region will be more performant

I get the following warning while querying Amazon S3 via the Fog gem:
[WARNING] fog: followed redirect to my-bucket.s3-external-3.amazonaws.com, connecting to the matching region will be more performant
How exactly do I "connect to the matching region"?
Set the :region option in the Fog connection parameters to the name of the region in which your bucket exists.
For example, I have a bucket called "bucket-a" in region "eu-west-1" and my s3 key and secret are in variables s3_key and s3_secret respectively.
I can connect to this region directly by opening my Fog connection as follows:
s3 = Fog::Storage.new(provider: 'AWS', aws_access_key_id: s3_key, aws_secret_access_key: s3_secret, region: 'eu-west-1')
And now when I list the contents, no region warning is issued:
s3.directories.get('bucket-a').files
If you want to do this for all your buckets, rather than on a bucket-by-bucket basis, you can set the following:
Fog::Storage::AWS::DEFAULT_REGION = 'eu-west-1'