Enabling S3 versioning on a lot of buckets - amazon-s3

It's been decreed that all our S3 buckets should have access logs and versioning enabled. Unfortunately I have a lot of S3 buckets. Is there an efficient way of doing this that doesn't involve setting the attributes on each one individually in the console?

You can also develop your own custom AWS Config rule to manage the compliance of your S3 buckets (versioning and logging enabled).
https://aws.amazon.com/config/
You can find a lot of examples here:
https://github.com/awslabs/aws-config-rules
You can adapt this one to your needs:
https://github.com/awslabs/aws-config-rules/blob/master/python/s3_bucket_default_encryption_enabled.py

For most tasks on AWS, the simplest approach is the AWS CLI, especially for repetitive work.
You can use the AWS CLI with a simple bash script like this one, by rtrouton:
#!/bin/bash
# This script is designed to check the object versioning status of all S3 buckets associated with an AWS account
# and enable object versioning on any S3 buckets where object versioning is not enabled.

# Get list of S3 buckets from Amazon Web Services
s3_bucket_list=$(aws s3api list-buckets --query 'Buckets[*].Name' | sed -e 's/[][]//g' -e 's/"//g' -e 's/,//g' -e '/^$/d' -e 's/^[ \t]*//;s/[ \t]*$//')

# Loop through the list of S3 buckets and check the individual bucket's object version status.
for bucket in $(echo "$s3_bucket_list")
do
    version_status=$(aws s3api get-bucket-versioning --bucket "$bucket" | awk '/Status/ {print $2}' | sed 's/"//g')
    if [[ "$version_status" = "Enabled" ]]; then
        # If the object version status is Enabled, report that the S3 bucket has object versioning enabled.
        echo "The $bucket S3 bucket has object versioning enabled."
    elif [[ "$version_status" != "Enabled" ]]; then
        # If the object version is a status other than Enabled, report that the S3 bucket does not have
        # object versioning enabled, then enable object versioning
        echo "The $bucket S3 bucket does not have object versioning enabled. Enabling object versioning on the $bucket S3 bucket."
        aws s3api put-bucket-versioning --bucket "$bucket" --versioning-configuration Status=Enabled
    fi
done
For more information you can check the following document on the AWS website:
https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-versioning.html
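The question also asks for access logs. The same kind of loop can drive put-bucket-logging; the sketch below assumes a pre-existing central log bucket (the name my-log-bucket is a placeholder) that already accepts S3 server access log delivery:
#!/bin/bash
# Sketch only: enable server access logging on every bucket in the account.
# "my-log-bucket" is a placeholder; it must already exist and permit log delivery.
log_bucket="my-log-bucket"
for bucket in $(aws s3api list-buckets --query 'Buckets[].Name' --output text)
do
    aws s3api put-bucket-logging --bucket "$bucket" \
        --bucket-logging-status "{\"LoggingEnabled\":{\"TargetBucket\":\"$log_bucket\",\"TargetPrefix\":\"$bucket/\"}}"
done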

Related

AWS localstack s3 bucket endpoint fails to connect

I am using localstack version 0.12.19.4 on a Mac.
I have created an s3 bucket called mybucket
localstack start ------ s3 runs on port 4566
http://localhost:4566/health ---- everything is running
awslocal s3 mb s3://mybucket 
awslocal s3api put-bucket-acl --bucket mybucket --acl public-read
I add some files to my s3 bucket and then I check with awslocal and aws
aws --endpoint-url=http://127.0.0.1:4566 s3 ls
awslocal s3 ls
both show my bucket.
Now, from a docker container, when I try to access one of the files in the mybucket S3 bucket, I get the following error:
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "http://localhost:4566/mybucket/dev/us/2020_08_11/eea9efc9-5970-426b-b867-9f57d6d9548f/850f35c8-0ada-44e4-96e1-e050e3040609"
When I check the contents of the s3 bucket, I do see the specific file existing.
One more fact: when I retrieve the docker ports for localstack, I see
4566/tcp -> 127.0.0.1:4566
4571/tcp -> 127.0.0.1:4571
Any ideas as to what I am doing wrong or missing?
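One thing worth checking (a guess at the cause, not an answer recorded here): inside a Docker container, localhost refers to the container itself, not to the host machine where LocalStack publishes port 4566. Pointing the client at the host instead, for example via the host.docker.internal name available on Docker Desktop for Mac, would look like:
# From inside the container, talk to the LocalStack instance running on the host.
aws --endpoint-url=http://host.docker.internal:4566 s3 ls s3://mybucket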

How do I use the aws cli to block public access to all of my buckets?

I would like the "Access" column in the web console bucket list to read "not public" for each bucket.
for BUCKET_NAME in $(aws s3 --profile YOUR_PROFILE_HERE ls s3:// | cut -d' ' -f3); do aws s3api --profile YOUR_PROFILE_HERE put-public-access-block --bucket "$BUCKET_NAME" --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"; done;
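If you would rather not depend on the column layout of aws s3 ls, the same loop can read bucket names from s3api instead (same flags, just a different listing call):
for BUCKET_NAME in $(aws s3api list-buckets --profile YOUR_PROFILE_HERE --query 'Buckets[].Name' --output text); do
    aws s3api put-public-access-block --profile YOUR_PROFILE_HERE --bucket "$BUCKET_NAME" --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
done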

How do I determine if an S3 bucket has public access using aws-cli [closed]

I have a bucket that shows "public access" in the console, but when I run aws s3api get-public-access-block, I get an error:
$ aws s3api get-public-access-block --bucket my-test-bucket-name
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument operation: Invalid choice, valid choices are:
abort-multipart-upload | complete-multipart-upload
copy-object | create-bucket...
I am running aws-cli 1.15.83:
$ aws --version
aws-cli/1.15.83 Python/2.7.14 Linux/4.14.77-70.59.amzn1.x86_64 botocore/1.10.82
You can use aws s3api get-bucket-policy-status to find out which buckets have been identified as having public access:
aws s3api get-bucket-policy-status --bucket my-test-bucket-name
{
    "PolicyStatus": {
        "IsPublic": true
    }
}
The get-public-access-block function is related to new features released last week [1] that help to protect future buckets from being mistakenly created with public access.
Both get-public-access-block and get-bucket-policy-status require a newer version of awscli than 1.15.83. The version I am using that has both these commands is 1.16.58.
[1] https://aws.amazon.com/blogs/aws/amazon-s3-block-public-access-another-layer-of-protection-for-your-accounts-and-buckets/
You might be getting the error because you have not upgraded awscli.
Use the pip command to upgrade:
pip install --upgrade awscli
I was getting the same error at the start; after upgrading it should give the proper result.
Bash# aws s3api get-public-access-block --bucket my-test-bucket-name
An error occurred (NoSuchPublicAccessBlockConfiguration) when calling the
GetPublicAccessBlock operation: The public access block configuration was not found
^ This is what you'll see on a freshly created s3 bucket that's private by default, but has the potential to become public.
Bash# aws s3api put-public-access-block --bucket my-test-bucket-name --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
^ This is the command to enable the public access block.
Bash# aws s3api get-public-access-block --bucket my-test-bucket-name
{
    "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": true,
        "IgnorePublicAcls": true,
        "BlockPublicPolicy": true,
        "RestrictPublicBuckets": true
    }
}
^ A subsequent run of the same get status command will now show this output, when block public access is enabled.
There are a number of things to look at when you want to understand whether a bucket is public and why.
First, check whether block public access is enabled on your account or on the bucket, and whether policies or ACLs are blocked by those settings. If RestrictPublicBuckets is true then a bucket policy cannot make the bucket public; if IgnorePublicAcls is true then an ACL cannot make the bucket public. More details can be found here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket-publicaccessblockconfiguration.html
# get account level settings
aws s3control get-public-access-block --account-id <your account id>
# get bucket level settings
aws s3api get-public-access-block --bucket <your bucket name>
[Skip if RestrictPublicBuckets was true] Next, figure out the policy status. If the policy is public, that is probably the reason the bucket is marked as public.
aws s3api get-bucket-policy-status --bucket <your bucket name>
[Skip if IgnorePublicAcls was true] Check for a public bucket ACL (read or write with the grantee set to everyone or authenticated users). Note that if IgnorePublicAcls is true a public ACL has no effect, so if you decide to disable the public access block for some reason you might still want to check whether the ACL is public.
aws s3api get-bucket-acl --bucket <your bucket name>
Now you should be able to figure out what makes the bucket public if you see it marked as public in the console. However, until you block public ACLs using the bucket or account public access block, individual objects in your bucket may still be publicly accessible, because they can be shared using object-level ACLs, and checking every single object in your bucket can be challenging.
Another thing which can be hard to check is access points: a bucket can be made public through the policy of an attached access point, so even if your bucket policy is not public you might want to check whether the bucket has attached access points and check the policy status of each of them:
# list access points attached to the bucket, note that you need to specify bucket region
aws s3control list-access-points --bucket <your bucket name> --account-id <your account id> --region <your bucket region>
# retrieve access point policy status
aws s3control get-access-point-policy-status --region <your bucket region> --account-id <your account id> --name <access point name>
The best way to ensure security of your bucket is to enable public access block settings for both policy and ACL.
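As a rough sketch, the checks above for a single bucket can be strung together like this (the bucket name, account id and region are placeholders; some calls return an error such as NoSuchPublicAccessBlockConfiguration when nothing is configured, which is itself useful information):
#!/bin/bash
# Sketch only: run the public-access checks described above for one bucket.
bucket="my-test-bucket-name"   # placeholder
account_id="123456789012"      # placeholder
region="us-east-1"             # placeholder
echo "== Account-level public access block =="
aws s3control get-public-access-block --account-id "$account_id"
echo "== Bucket-level public access block =="
aws s3api get-public-access-block --bucket "$bucket"
echo "== Bucket policy status =="
aws s3api get-bucket-policy-status --bucket "$bucket"
echo "== Bucket ACL =="
aws s3api get-bucket-acl --bucket "$bucket"
echo "== Access points attached to the bucket =="
aws s3control list-access-points --bucket "$bucket" --account-id "$account_id" --region "$region"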

s3api getobject permission denied

I am trying to fetch a file from s3 using aws-cli
aws s3api get-object --bucket <bucket_name> --key /foo.com/bar/summary-report-yyyymmdd.csv.gz temp_file.csv.gz --profile <profile_name>
but I am getting the following error -
An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
I've rechecked my configuration using
aws configure --profile <profile_name>
and everything seems to be correct there. I am using the same credentials to browse and fetch the file on S3 browser without any issue.
Documentation is of minimal use as I have very limited access to this bucket. I cannot verify the permissions or use
aws s3 --profile <profile_name> ls
AccessDenied can mean you don't have permission, but it's also the error returned if the object does not exist (you can read here for the reason why this error is used).
You can make sure you have access to the bucket using the aws s3api list-objects command like
aws s3api list-objects --bucket <bucket_name> --query 'Contents[].{Key: Key, Size: Size}' --profile <profile_name>
Most probably, in your case the issue is the leading / in front of the key; try it without:
aws s3api get-object --bucket <bucket_name> --key foo.com/bar/summary-report-yyyymmdd.csv.gz temp_file.csv.gz --profile <profile_name>
For me the issue was KMS access. I found this helpful:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403/
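If you suspect KMS is involved, checking the bucket's default encryption shows whether objects are encrypted with a KMS key your credentials may not be able to use (this call needs the s3:GetEncryptionConfiguration permission, so with very limited access it may itself be denied):
# Shows whether the bucket defaults to SSE-KMS; downloading such objects also requires kms:Decrypt on the key.
aws s3api get-bucket-encryption --bucket <bucket_name> --profile <profile_name>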

Redirect output of console to a file on AWS S3

Say I have a website that returns JSON data when I send a GET request using curl. I want to redirect the output of curl to AWS S3. A new file should be created on S3 for it.
Currently I am able to redirect the output to store it locally.
curl -s -X GET 'http://website_that_returns_json.com' > folder_to_save/$(date +"%Y-%m-%d_%H-%M.json")
I have the AWS CLI and s3cmd installed. How would I redirect the output of curl to create a new file on AWS S3?
Assume:
AWS S3 access key and secret key are already set.
Location to store the file: mybucket/$(date +"%Y-%m-%d_%H-%M.json")
The AWS Command-Line Interface (CLI) has the ability to stream data to/from Amazon S3:
The following cp command uploads a local file stream from standard input to a specified bucket and key:
aws s3 cp - s3://mybucket/stream.txt
So, you could use:
curl xxx | aws s3 cp - s3://mybucket/object.txt
However, it's probably safer to save the file locally and then copy it to Amazon S3.
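Putting that together with the date-based key from the question, the streaming form would look something like this (untested sketch using the same URL and bucket placeholders as above):
# Stream the JSON response straight into a date-stamped object on S3.
curl -s -X GET 'http://website_that_returns_json.com' | aws s3 cp - "s3://mybucket/$(date +"%Y-%m-%d_%H-%M.json")"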
In case you'd like to run the command on a remote instance, use aws ssm send-command.
Then to redirect the output of that command to S3, you can use --output-s3-bucket-name parameter.
Here is a Bash script that runs a PowerShell script on the remote instance and uploads its output to an S3 bucket:
instanceId="i-xyz"
bucketName="bucket_to_save"
bucketDir="folder_to_save"
command="(Invoke-WebRequest -UseBasicParsing -Uri http://example.com).Content"
cmdId=$(aws ssm send-command --instance-ids "$instanceId" --document-name "AWS-RunPowerShellScript" --query "Command.CommandId" --output text --output-s3-bucket-name "$bucketName" --output-s3-key-prefix "$bucketDir" --parameters commands="'${command}'")
while [ "$(aws ssm list-command-invocations --command-id "$cmdId" --query "CommandInvocations[].Status" --output text)" == "InProgress" ]; do sleep 1; done
outputPath=$(aws ssm list-command-invocations --command-id "$cmdId" --details --query "CommandInvocations[].CommandPlugins[].OutputS3KeyPrefix" --output text)
echo "Command output uploaded at: s3://${bucketName}/${outputPath}"
aws s3 ls "s3://${bucketName}/${outputPath}"
To print the contents of the uploaded output files, run:
aws s3 ls s3://${bucketName}/${outputPath}/stderr.txt && aws s3 cp --quiet s3://${bucketName}/${outputPath}/stderr.txt /dev/stderr
aws s3 cp --quiet s3://${bucketName}/${outputPath}/stdout.txt /dev/stdout