How can I check if an S3 object is encrypted using boto? - amazon-s3

I'm writing a Python script to find out whether an S3 object is encrypted. I tried the following code, but key.encrypted always returns None even though I can see on S3 that the object is encrypted.
keys = bucket.list()
for k in keys:
    print k.name, k.size, k.last_modified, k.encrypted, "\n"
k.encrypted always returns None.

For what it's worth, you can do this using boto3 (which can be used side-by-side with boto).
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket-name')
for obj in bucket.objects.all():
    key = s3.Object(bucket.name, obj.key)
    print(key.server_side_encryption)
See the boto3 docs for a list of available key attributes.

Expanding on #mfisherca's response, you can also do this with the AWS CLI:
aws s3api head-object --bucket <bucket> --key <key>
# or query the value directly
aws s3api head-object --bucket <bucket> --key <key> \
--query ServerSideEncryption --output text

You can also check the encryption state for specific objects using the head_object call. Here's an example in Python/boto3:
#!/usr/bin/env python
import boto3

s3_client = boto3.client('s3')
head = s3_client.head_object(
    Bucket="<S3 bucket name>",
    Key="<S3 object key>"
)
if 'ServerSideEncryption' in head:
    print(head['ServerSideEncryption'])
See: http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.head_object
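Putting the listing and head_object approaches together, a minimal sketch (the bucket name is a placeholder) that reports the encryption state of every object in a bucket could look like this:
import boto3

# Minimal sketch: report server-side encryption for every object in a bucket.
# "my-bucket-name" is a placeholder; substitute your own bucket.
s3_client = boto3.client("s3")
paginator = s3_client.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket="my-bucket-name"):
    for obj in page.get("Contents", []):
        head = s3_client.head_object(Bucket="my-bucket-name", Key=obj["Key"])
        # ServerSideEncryption is only present when the object is encrypted at rest
        # (e.g. "AES256" or "aws:kms").
        print(obj["Key"], head.get("ServerSideEncryption", "not encrypted"))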

Related

Run the same code against localstack without adding endpoint_url

Let's say I have the following code using boto3:
import os
import logging
import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
s3_client = boto3.client("s3")

def upload_file(file_name, bucket, object_name=None):
    """
    Upload a file to an S3 bucket.
    """
    try:
        if object_name is None:
            object_name = os.path.basename(file_name)
        response = s3_client.upload_file(
            file_name, bucket, object_name)
    except ClientError:
        logger.exception('Could not upload file to S3 bucket.')
        raise
    else:
        return response
This works fine against the actual AWS environment. Now I'm introducing localstack as a testing framework before doing the actual AWS upload.
My question is: how do I add localstack to this script without changing the code?
I know that if you add endpoint_url to the boto3 client, it will work only for localstack.
Is there any way I can use the same script for both, so that localstack is used when running locally and the actual AWS is used otherwise?
You can easily create a boto3 client that interacts with your LocalStack instance. Here is how you can modify your script for that purpose:
endpoint_url = "http://localhost.localstack.cloud:4566"
# alternatively, to use HTTPS endpoint on port 443:
# endpoint_url = "https://localhost.localstack.cloud"
s3_client = boto3.client("s3", endpoint_url=endpoint_url)
def upload_file(file_name, bucket, object_name=None):
    """
    Upload a file to an S3 bucket.
    """
    try:
        if object_name is None:
            object_name = os.path.basename(file_name)
        response = s3_client.upload_file(
            file_name, bucket, object_name)
    except ClientError:
        logger.exception('Could not upload file to S3 bucket.')
        raise
    else:
        return response
Alternatively, if you prefer to (or need to) set the endpoints directly, you can use the $LOCALSTACK_HOSTNAME environment variable which is available when executing user code in LocalStack:
import os
import boto3

endpoint_url = f"http://{os.getenv('LOCALSTACK_HOSTNAME')}:{os.getenv('EDGE_PORT')}"
client = boto3.client("s3", endpoint_url=endpoint_url)
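If you want one script that works against both LocalStack and real AWS (the question's actual ask), one option, sketched here with a hypothetical S3_ENDPOINT_URL variable, is to only pass endpoint_url when that variable is set:
import os
import boto3

# Hypothetical S3_ENDPOINT_URL variable: point it at the LocalStack edge URL when
# testing locally (e.g. "http://localhost.localstack.cloud:4566") and leave it
# unset to use the real AWS endpoints.
endpoint_url = os.getenv("S3_ENDPOINT_URL")

if endpoint_url:
    s3_client = boto3.client("s3", endpoint_url=endpoint_url)
else:
    s3_client = boto3.client("s3")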

Getting an error while backing up an AWS EKS cluster using the Velero tool

Please let me know what my mistake is!
I used this command to back up an AWS EKS cluster using the Velero tool, but it's not working:
./velero.exe install --provider aws --bucket backup-archive/eks-cluster-backup/prod-eks-cluster/ --secret-file ./minio.credentials --use-restic --backup-location-config region=minio,s3ForcePathStyle=true,s3Url=s3Url=s3://backup-archive/eks-cluster-backup/prod-eks-cluster/ --kubeconfig ../kubeconfig-prod-eks --plugins velero/velero-plugin-for-aws:v1.0.0
cat minio.credentials
[default]
aws_access_key_id=xxxx
aws_secret_access_key=yyyyy/zzzzzzzz
region=ap-southeast-1
Getting Error:
../kubectl.exe --kubeconfig=../kubeconfig-prod-eks.txt logs deployment/velero -n velero
time="2020-12-09T09:07:12Z" level=error msg="Error getting backup store for this location" backupLocation=default controller=backup-sync error="backup storage location's bucket name \"backup-archive/eks-cluster-backup/\" must not contain a '/' (if using a prefix, put it in the 'Prefix' field instead)" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/persistence/object_store.go:110" error.function=github.com/vmware-tanzu/velero/pkg/persistence.NewObjectBackupStore logSource="pkg/controller/backup_sync_controller.go:168"
Note: I have tried --bucket backup-archive, but it still doesn't work.
This is the source of your problem: --bucket backup-archive/eks-cluster-backup/prod-eks-cluster/.
The error says: must not contain a '/'.
This means it cannot contain a slash in the middle of the bucket name (leading/trailing slashes are trimmed, so that's not a problem). Source: https://github.com/vmware-tanzu/velero/blob/3867d1f434c0b1dd786eb8f9349819b4cc873048/pkg/persistence/object_store.go#L102-L111.
If you want to namespace your backups within a bucket, you may use the --prefix parameter. Like so:
--bucket backup-archive --prefix /eks-cluster-backup/prod-eks-cluster/.

Multi-part upload S3

I am trying to complete a multipart upload to S3. I was able to generate the key and upload ID with the command below, but when I pass the values to complete the upload, I get the error shown. From googling, I found that this error pops up when an int value is used where a string datatype is expected. Can someone please help me understand why this occurs with the S3 upload?
bash-3.2$ aws s3api create-multipart-upload --bucket awspythnautomation --key 'docker'
{
    "Bucket": "awspythnautomation",
    "Key": "docker",
    "UploadId": "ySvpOo_9DwDLmfB84GqvJQAQeZQi1_U6_Qs2StKpvxCI.tKTFJKES9nNXDoY5zqkJX4yEuPdcICwTZ.X5xwkaNyYop1r9VOloMKjxji_TakQYLobYy7IcRoUUuHcebgh"
}
bash-3.2$ aws s3api complete-multipart-upload --multipart-upload fileb://Docker.dmg --bucket awspythnautomation --key 'docker' --upload-id ySvpOo_9DwDLmfB84GqvJQAQeZQi1_U6_Qs2StKpvxCI.tKTFJKES9nNXDoY5zqkJX4yEuPdcICwTZ.X5xwkaNyYop1r9VOloMKjxji_TakQYLobYy7IcRoUUuHcebgh
'in <string>' requires string as left operand, not int
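For context, complete-multipart-upload expects a JSON description of the parts that were uploaded (part numbers and ETags), not the file itself, so passing fileb://Docker.dmg hands it binary file contents instead of that structure, which is likely what trips the error. A rough boto3 sketch of the same flow, reusing the bucket, key, and file name from the question and uploading a single part, might look like this:
import boto3

# Rough sketch of the multipart flow in boto3; bucket, key and file name are
# taken from the question and act as placeholders.
s3_client = boto3.client("s3")

mpu = s3_client.create_multipart_upload(Bucket="awspythnautomation", Key="docker")
upload_id = mpu["UploadId"]

with open("Docker.dmg", "rb") as f:
    part = s3_client.upload_part(
        Bucket="awspythnautomation", Key="docker",
        PartNumber=1, UploadId=upload_id, Body=f)

# complete_multipart_upload takes the list of uploaded parts (PartNumber + ETag),
# not the file contents.
s3_client.complete_multipart_upload(
    Bucket="awspythnautomation", Key="docker", UploadId=upload_id,
    MultipartUpload={"Parts": [{"PartNumber": 1, "ETag": part["ETag"]}]})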

Change the default content type on multiple files that have been uploaded to an AWS S3 bucket

Using aws-cli, I uploaded 5 GB of files to an Amazon S3 bucket that I have made a static website. Some of the files the site references are .shtml files, but S3 has defaulted to a metadata Content-Type of binary/octet-stream, and I want those files to have a Content-Type of text/html; otherwise they don't work in the browser.
Is there an aws-cli s3api command I can use to change the content type for all files with a .shtml extension?
You can set the content type on specific file types like the following:
aws s3 sync ${BASE_DIR} s3://${BUCKET_NAME} --exclude '*.shtml'
aws s3 sync ${BASE_DIR} s3://${BUCKET_NAME} --exclude '*' --include '*.shtml' --no-guess-mime-type --content-type text/html
To modify the metadata on an Amazon S3 object, copy the object to itself and specify the metadata.
From StackOverflow: How can I change the content-type of an object using aws cli?:
$ aws s3api copy-object --bucket archive --content-type "application/rss+xml" \
--copy-source archive/test/test.html --key test/test.html \
--metadata-directive "REPLACE"
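If you'd rather script it, here is a rough boto3 sketch (the bucket name is a placeholder) that copies every .shtml object onto itself with the Content-Type replaced:
import boto3

# Rough sketch: re-set Content-Type on every .shtml object by copying it onto itself.
# "my-site-bucket" is a placeholder bucket name.
s3_client = boto3.client("s3")
paginator = s3_client.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket="my-site-bucket"):
    for obj in page.get("Contents", []):
        if obj["Key"].endswith(".shtml"):
            s3_client.copy_object(
                Bucket="my-site-bucket",
                Key=obj["Key"],
                CopySource={"Bucket": "my-site-bucket", "Key": obj["Key"]},
                ContentType="text/html",
                # REPLACE is required for the new metadata to take effect; note it
                # discards any other user-defined metadata on the object.
                MetadataDirective="REPLACE")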

How to serve gzipped assets from Amazon S3

I am currently serving all of my static assets from Amazon S3. I would like to begin using gzipped components. I have gzipped and confirmed that Amazon is setting the correct headers. However, the styles are not loading.
I am new to gzipping components, so possibly I am missing something? I can't find too much information about this with Amazon S3.
For future reference to anyone else with this problem:
Gzip your components. Then remove the .gz extension leaving only the .css or .js extension. Upload the files to your bucket.
From your S3 dashboard, pull up the properties for the file that you just uploaded. Under the 'Metadata' header enter this information:
'content-type' : 'text/css' or 'text/javascript'
'content-encoding' : 'gzip'
These value options are not available by default (wtf) so you must manually type them.
I also found a way to do it using the CLI, which is very useful when working with multiple files:
aws s3api put-object \
--bucket YOUR_BUCKET \
--key REMOTE_FILE.json \
--content-encoding gzip \
--content-type application/json \
--body LOCAL_FILE.json.gz
Notes:
Set content-type appropriately for what you're uploading
The file name on the server doesn't need to have the .gz extension
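The same upload can be done from boto3 if you prefer scripting it; a minimal sketch with placeholder bucket, key, and file names:
import boto3

# Minimal sketch: upload an already-gzipped file with the headers S3 needs in order
# to serve it as gzip. Bucket, key and local file name are placeholders.
s3_client = boto3.client("s3")

with open("LOCAL_FILE.json.gz", "rb") as f:
    s3_client.put_object(
        Bucket="YOUR_BUCKET",
        Key="REMOTE_FILE.json",          # the key doesn't need the .gz extension
        Body=f,
        ContentEncoding="gzip",
        ContentType="application/json")  # match this to what you're uploading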