Change the default content type on multiple files that have been uploaded to an AWS S3 bucket - amazon-s3

Using aws-cli I uploaded 5 GB of files to an Amazon S3 bucket that I have set up as a static website. Some of the files the site references are .shtml files, but S3 has defaulted their metadata Content-Type to binary/octet-stream, and I want those files to have a Content-Type of text/html. Otherwise they don't work in the browser.
Is there an aws-cli s3api command I can use to change the content type for all files with a .shtml extension?

You can set the content type for specific file types by running two syncs, like the following:
aws s3 sync ${BASE_DIR} s3://${BUCKET_NAME} --exclude '*.shtml'
aws s3 sync ${BASE_DIR} s3://${BUCKET_NAME} --exclude '*' --include '*.shtml' --no-guess-mime-type --content-type text/html
The first sync uploads everything except the .shtml files with their guessed content types; the second uploads only the .shtml files with the Content-Type forced to text/html.

To modify the metadata on an Amazon S3 object, copy the object to itself and specify the metadata.
From StackOverflow: How can I change the content-type of an object using aws cli?:
$ aws s3api copy-object --bucket archive --content-type "application/rss+xml" \
--copy-source archive/test/test.html --key test/test.html \
--metadata-directive "REPLACE"
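To apply that to every .shtml object already in the bucket (as in the original question), here is a minimal sketch that lists matching keys and copies each object onto itself; the bucket name my-site-bucket is a placeholder, and keys containing tabs or newlines are not handled:
BUCKET=my-site-bucket   # placeholder bucket name
aws s3api list-objects-v2 --bucket "$BUCKET" \
    --query "Contents[?ends_with(Key, '.shtml')].Key" --output text |
tr '\t' '\n' |
while read -r key; do
    # Copy each object onto itself, replacing its metadata with the new Content-Type.
    aws s3api copy-object \
        --bucket "$BUCKET" \
        --copy-source "$BUCKET/$key" \
        --key "$key" \
        --content-type text/html \
        --metadata-directive REPLACE
done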

Related

Getting an error while backing up an AWS EKS cluster using the Velero tool

Please let me know what my mistake is!
I used this command to back up an AWS EKS cluster with the Velero tool, but it's not working:
./velero.exe install --provider aws --bucket backup-archive/eks-cluster-backup/prod-eks-cluster/ --secret-file ./minio.credentials --use-restic --backup-location-config region=minio,s3ForcePathStyle=true,s3Url=s3://backup-archive/eks-cluster-backup/prod-eks-cluster/ --kubeconfig ../kubeconfig-prod-eks --plugins velero/velero-plugin-for-aws:v1.0.0
cat minio.credentials
[default]
aws_access_key_id=xxxx
aws_secret_access_key=yyyyy/zzzzzzzz
region=ap-southeast-1
Getting Error:
../kubectl.exe --kubeconfig=../kubeconfig-prod-eks.txt logs deployment/velero -n velero
time="2020-12-09T09:07:12Z" level=error msg="Error getting backup store for this location" backupLocation=default controller=backup-sync error="backup storage location's bucket name \"backup-archive/eks-cluster-backup/\" must not contain a '/' (if using a prefix, put it in the 'Prefix' field instead)" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/persistence/object_store.go:110" error.function=github.com/vmware-tanzu/velero/pkg/persistence.NewObjectBackupStore logSource="pkg/controller/backup_sync_controller.go:168"
Note: I have also tried --bucket backup-archive, but it made no difference.
This is the source of your problem: --bucket backup-archive/eks-cluster-backup/prod-eks-cluster/.
The error says the bucket name must not contain a '/'.
This means it cannot contain a slash in the middle of the bucket name (leading/trailing slashes are trimmed, so that's not a problem). Source: https://github.com/vmware-tanzu/velero/blob/3867d1f434c0b1dd786eb8f9349819b4cc873048/pkg/persistence/object_store.go#L102-L111.
If you want to namespace your backups within a bucket, you may use the --prefix parameter. Like so:
--bucket backup-archive --prefix /eks-cluster-backup/prod-eks-cluster/.
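Applied to the command from the question, the install call would look roughly like this (a sketch; all other flags are kept as in the question, and the s3Url value, left unchanged here, typically needs to point at your MinIO endpoint rather than an s3:// path):
./velero.exe install --provider aws \
    --bucket backup-archive \
    --prefix eks-cluster-backup/prod-eks-cluster \
    --secret-file ./minio.credentials \
    --use-restic \
    --backup-location-config region=minio,s3ForcePathStyle=true,s3Url=s3://backup-archive/eks-cluster-backup/prod-eks-cluster/ \
    --kubeconfig ../kubeconfig-prod-eks \
    --plugins velero/velero-plugin-for-aws:v1.0.0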

aws cli sync json file issue while setting content-type

I'm trying to sync my JSON file to S3 with --content-type application/json, but when I inspect the response header, it is content-type: binary/octet-stream.
sh "aws s3 sync ./public ${mybucket} --exclude '*' --include '*.json' --content-type 'application/json' --cache-control public,max-age=31536000,immutable"
Any help is appreciated.

Uploading an .html file to an S3 static-website-hosted bucket causes the browser to download the .html file

I have an S3 bucket with 'Static Website Hosting' enabled. If I upload an HTML file to the bucket via the AWS Console, the HTML file opens successfully. If I upload the file using the AWS CLI, the file is downloaded rather than displayed in the browser. Why?
The first file is available here: https://s3.amazonaws.com/test-bucket-for-stackoverflow-post/page1.html
The second file is available here: https://s3.amazonaws.com/test-bucket-for-stackoverflow-post/page2.html
I uploaded the first file in the AWS Console, the second was uploaded using the following command:
aws s3api put-object --bucket test-bucket-for-stackoverflow-post --key page2.html --body page2.html
The second file is downloaded because of its 'Content-Type' header. That header is:
Content-Type: binary/octet-stream
If you want it to display, it should be:
Content-Type: text/html
Try adding --content-type text/html to your put-object command.
Reference: https://docs.aws.amazon.com/cli/latest/reference/s3api/put-object.html
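For example, the upload from the question with the header set explicitly:
aws s3api put-object --bucket test-bucket-for-stackoverflow-post --key page2.html --body page2.html --content-type text/html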

How can I check if an S3 object is encrypted using boto?

I'm writing a Python script to find out whether an S3 object is encrypted. I tried using the following code, but key.encrypted always returns None even though I can see that the object on S3 is encrypted.
keys = bucket.list()
for k in keys:
    print k.name, k.size, k.last_modified, k.encrypted, "\n"
k.encrypted always returns None.
For what it's worth, you can do this using boto3 (which can be used side by side with boto).
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket-name')
for obj in bucket.objects.all():
    key = s3.Object(bucket.name, obj.key)
    print(key.server_side_encryption)
See the boto3 docs for a list of available key attributes.
Expanding on #mfisherca's response, you can do this with the AWS CLI:
aws s3api head-object --bucket <bucket> --key <key>
# or query the value directly
aws s3api head-object --bucket <bucket> --key <key> \
--query ServerSideEncryption --output text
You can also check the encryption state of a specific object using the head_object call. Here's an example in Python/boto3:
#!/usr/bin/env python
import boto3

s3_client = boto3.client('s3')
head = s3_client.head_object(
    Bucket="<S3 bucket name>",
    Key="<S3 object key>"
)
if 'ServerSideEncryption' in head:
    print(head['ServerSideEncryption'])
See: http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.head_object

How to serve gzipped assets from Amazon S3

I am currently serving all of my static assets from Amazon S3. I would like to begin using gzipped components. I have gzipped the files and confirmed that Amazon is setting the correct headers. However, the styles are not loading.
I am new to gzipping components, so possibly I am missing something? I can't find much information about this for Amazon S3.
For future reference to anyone else with this problem:
Gzip your components, then remove the .gz extension, leaving only the .css or .js extension. Upload the files to your bucket.
From your S3 dashboard, pull up the properties for the file you just uploaded. Under the 'Metadata' header, enter this information:
'content-type' : 'text/css' or 'text/javascript'
'content-encoding' : 'gzip'
These values are not available in the dropdown by default, so you must type them in manually.
I also found a way to do this using the CLI, which is very useful when working with multiple files:
aws s3api put-object \
--bucket YOUR_BUCKET \
--key REMOTE_FILE.json \
--content-encoding gzip \
--content-type application/json \
--body LOCAL_FILE.json.gz
Notes:
Set content-type appropriately for what you're uploading
The file name on the server doesn't need to have the .gz extension
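If you have many files of the same type, a sync with the same headers is one possible approach (a sketch, not from the original answer; it assumes everything under ./dist is already-gzipped CSS that kept its .css extension, and YOUR_BUCKET is a placeholder):
aws s3 sync ./dist s3://YOUR_BUCKET \
    --exclude '*' --include '*.css' \
    --content-encoding gzip \
    --content-type text/css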