Multipart upload to S3 - amazon-s3

I am trying to complete a multipart upload to S3. I was able to generate the key and upload ID with the command below, but when I pass those values to complete the upload, I get the error shown. From what I found online, this error appears when an int value is used where a string is expected. Can someone help me understand why this happens with the S3 upload?
bash-3.2$ aws s3api create-multipart-upload --bucket awspythnautomation --key 'docker'
{
    "Bucket": "awspythnautomation",
    "Key": "docker",
    "UploadId": "ySvpOo_9DwDLmfB84GqvJQAQeZQi1_U6_Qs2StKpvxCI.tKTFJKES9nNXDoY5zqkJX4yEuPdcICwTZ.X5xwkaNyYop1r9VOloMKjxji_TakQYLobYy7IcRoUUuHcebgh"
}
bash-3.2$ aws s3api complete-multipart-upload --multipart-upload fileb://Docker.dmg --bucket awspythnautomation --key 'docker' --upload-id ySvpOo_9DwDLmfB84GqvJQAQeZQi1_U6_Qs2StKpvxCI.tKTFJKES9nNXDoY5zqkJX4yEuPdcICwTZ.X5xwkaNyYop1r9VOloMKjxji_TakQYLobYy7IcRoUUuHcebgh
'in <string>' requires string as left operand, not int
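For reference, --multipart-upload expects a JSON description of parts that were already uploaded with upload-part, not the file itself (which is what fileb://Docker.dmg passes), and that mismatch is a likely source of the error. A rough sketch of the usual sequence, with <upload-id> and <etag-from-upload-part> as placeholders:
# upload each part (repeat with increasing --part-number) and note the ETag returned for it
aws s3api upload-part --bucket awspythnautomation --key 'docker' --part-number 1 --body Docker.dmg --upload-id <upload-id>
# then complete the upload with a JSON list of the uploaded parts, not the file
aws s3api complete-multipart-upload --bucket awspythnautomation --key 'docker' --upload-id <upload-id> --multipart-upload '{"Parts": [{"ETag": "<etag-from-upload-part>", "PartNumber": 1}]}'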

Related

Getting an error while backing up an AWS EKS cluster using the Velero tool

Please let me know what my mistake is!
I used this command to back up the AWS EKS cluster with the Velero tool, but it's not working:
./velero.exe install --provider aws --bucket backup-archive/eks-cluster-backup/prod-eks-cluster/ --secret-file ./minio.credentials --use-restic --backup-location-config region=minio,s3ForcePathStyle=true,s3Url=s3Url=s3://backup-archive/eks-cluster-backup/prod-eks-cluster/ --kubeconfig ../kubeconfig-prod-eks --plugins velero/velero-plugin-for-aws:v1.0.0
cat minio.credentials
[default]
aws_access_key_id=xxxx
aws_secret_access_key=yyyyy/zzzzzzzz
region=ap-southeast-1
Getting Error:
../kubectl.exe --kubeconfig=../kubeconfig-prod-eks.txt logs deployment/velero -n velero
time="2020-12-09T09:07:12Z" level=error msg="Error getting backup store for this location" backupLocation=default controller=backup-sync error="backup storage location's bucket name \"backup-archive/eks-cluster-backup/\" must not contain a '/' (if using a prefix, put it in the 'Prefix' field instead)" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/persistence/object_store.go:110" error.function=github.com/vmware-tanzu/velero/pkg/persistence.NewObjectBackupStore logSource="pkg/controller/backup_sync_controller.go:168"
Note: I have tried --bucket backup-archive, but it still did not work.
This is the source of your problem: --bucket backup-archive/eks-cluster-backup/prod-eks-cluster/.
The error says: must not contain a '/'.
This means it cannot contain a slash in the middle of the bucket name (leading/trailing slashes are trimmed, so that's not a problem). Source: https://github.com/vmware-tanzu/velero/blob/3867d1f434c0b1dd786eb8f9349819b4cc873048/pkg/persistence/object_store.go#L102-L111.
If you want to namespace your backups within a bucket, you may use the --prefix parameter. Like so:
--bucket backup-archive --prefix /eks-cluster-backup/prod-eks-cluster/.
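Putting the fix into the original command, a corrected install might look roughly like this (a sketch reusing the question's flags; <minio-endpoint> is a placeholder for your MinIO server URL, and the duplicated s3Url=s3Url= from the original command is dropped):
./velero.exe install --provider aws \
    --bucket backup-archive \
    --prefix /eks-cluster-backup/prod-eks-cluster/ \
    --secret-file ./minio.credentials \
    --use-restic \
    --backup-location-config region=minio,s3ForcePathStyle=true,s3Url=<minio-endpoint> \
    --kubeconfig ../kubeconfig-prod-eks \
    --plugins velero/velero-plugin-for-aws:v1.0.0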

Cannot set bucket policy on Amazon S3

I was simply following the "get started" tutorial here.
But I failed at "Step 4: Add a Bucket Policy to Allow Public Reads". It always complains "access denied" with a red error icon.
I am not able to set it via command line either. Here is the command I use:
aws s3api put-bucket-policy --bucket bucket-name --policy file://bucket-policy.json
Here is the error I got:
An error occurred (AccessDenied) when calling the PutBucketPolicy operation: Access Denied
The issue was that you have to uncheck the boxes under Permissions -> Public access settings. Amazon failed to mention this in their tutorial. Bad tutorial.
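If you prefer the command line, the same Block Public Access settings can be relaxed with put-public-access-block before retrying the policy. A minimal sketch, reusing the placeholder bucket-name from the question:
aws s3api put-public-access-block --bucket bucket-name --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false
aws s3api put-bucket-policy --bucket bucket-name --policy file://bucket-policy.json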

Uploading .html file to S3 Static website hosted bucket causes download of .html file in browser

I have an S3 bucket with 'Static Website Hosting' enabled. If I upload an HTML file to the bucket via the AWS Console, the HTML file opens successfully. If I upload the file using the AWS CLI, the file is downloaded rather than displayed in the browser. Why?
The first file is available here: https://s3.amazonaws.com/test-bucket-for-stackoverflow-post/page1.html
The second file is available here: https://s3.amazonaws.com/test-bucket-for-stackoverflow-post/page2.html
I uploaded the first file in the AWS Console; the second was uploaded using the following command:
aws s3api put-object --bucket test-bucket-for-stackoverflow-post --key page2.html --body page2.html
The second file is downloaded because of its 'Content-Type' header. That header is:
Content-Type: binary/octet-stream
If you want it to display, it should be:
Content-Type: text/html
Try adding --content-type text/html to your put-object command.
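Applied to the command from the question, that would be:
aws s3api put-object --bucket test-bucket-for-stackoverflow-post --key page2.html --body page2.html --content-type text/html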
Reference: https://docs.aws.amazon.com/cli/latest/reference/s3api/put-object.html

How can I check if an S3 object is encrypted using boto?

I'm writing a Python script to find out whether an S3 object is encrypted. I tried the following code, but key.encrypted always returns None even though I can see the object in S3 is encrypted.
keys = bucket.list()
for k in keys:
    print k.name, k.size, k.last_modified, k.encrypted, "\n"
k.encrypted always returns None.
For what it's worth, you can do this using boto3 (which can be used side-by-side with boto).
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket-name')
for obj in bucket.objects.all():
    # accessing the attribute below lazily loads the object's metadata
    key = s3.Object(bucket.name, obj.key)
    print(key.server_side_encryption)
See the boto3 docs for a list of available key attributes.
Expanding on #mfisherca's response, you can do this with the AWS CLI:
aws s3api head-object --bucket <bucket> --key <key>
# or query the value directly
aws s3api head-object --bucket <bucket> --key <key> \
--query ServerSideEncryption --output text
You can also check the encryption state for specific objects using the head_object call. Here's an example in Python/boto3:
#!/usr/bin/env python
import boto3

s3_client = boto3.client('s3')
head = s3_client.head_object(
    Bucket="<S3 bucket name>",
    Key="<S3 object key>"
)
if 'ServerSideEncryption' in head:
    print(head['ServerSideEncryption'])
See: http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.head_object

Change the default content type on multiple files that have been uploaded to an AWS S3 bucket

Using the AWS CLI, I uploaded 5 GB of files to an Amazon S3 bucket that I have made a static website. Some of the files the site references are .shtml files, but S3 has defaulted to a metadata Content-Type of binary/octet-stream, and I want those files to have a Content-Type of text/html; otherwise they don't work in the browser.
Is there an aws-cli s3api command I can use to change the content type for all files with a .shtml extension?
You can set the content type on specific file types like the following; the first command syncs everything except the .shtml files, and the second syncs only the .shtml files with the forced content type:
aws s3 sync ${BASE_DIR} s3://${BUCKET_NAME} --exclude '*.shtml'
aws s3 sync ${BASE_DIR} s3://${BUCKET_NAME} --exclude '*' --include '*.shtml' --no-guess-mime-type --content-type text/html
To modify the metadata on an Amazon S3 object, copy the object to itself and specify the metadata.
From StackOverflow: How can I change the content-type of an object using aws cli?:
$ aws s3api copy-object --bucket archive --content-type "application/rss+xml" \
--copy-source archive/test/test.html --key test/test.html \
--metadata-directive "REPLACE"
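To apply the same copy-in-place approach to every .shtml object at once, a recursive copy along these lines should work (a sketch; ${BUCKET_NAME} stands in for your bucket):
aws s3 cp s3://${BUCKET_NAME}/ s3://${BUCKET_NAME}/ --recursive \
    --exclude '*' --include '*.shtml' \
    --content-type text/html --metadata-directive REPLACE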