I am trying to fetch a file from S3 using the AWS CLI:
aws s3api get-object --bucket <bucket_name> --key /foo.com/bar/summary-report-yyyymmdd.csv.gz temp_file.csv.gz --profile <profile_name>
but I am getting the following error:
An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
I've rechecked my configuration using
aws configure --profile <profile_name>
and everything seems to be correct there. I am using the same credentials to browse and fetch the file in S3 Browser without any issue.
Documentation is of minimal use as I have very limited access to this bucket. I cannot verify the permissions or use
aws s3 --profile <profile_name> ls
AccessDenied can mean you don't have permission, but it's also the error returned when the object does not exist: S3 deliberately answers AccessDenied rather than NotFound to callers without list permission, so they cannot probe for the existence of keys.
You can make sure you have access to the bucket using the aws s3api list-objects command, like this:
aws s3api list-objects --bucket <bucket_name> --query 'Contents[].{Key: Key, Size: Size}' --profile <profile_name>
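You can also narrow the listing to the prefix in question to confirm the exact key spelling (the same command, reusing the path from the question):
aws s3api list-objects --bucket <bucket_name> --prefix foo.com/bar/ --query 'Contents[].Key' --profile <profile_name>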
Most probably, in your case, the issue is the use of / in front of the key. Try it without the leading slash:
aws s3api get-object --bucket <bucket_name> --key foo.com/bar/summary-report-yyyymmdd.csv.gz temp_file.csv.gz --profile <profile_name>
For me, the issue was KMS access. I found this helpful:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403/
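If you suspect SSE-KMS is involved, one way to check (assuming your permissions allow reading the object's metadata; this call can itself be denied) is head-object, which reports the encryption settings. Downloading then additionally requires kms:Decrypt on the reported key:
aws s3api head-object --bucket <bucket_name> --key foo.com/bar/summary-report-yyyymmdd.csv.gz --query '{Encryption: ServerSideEncryption, KmsKeyId: SSEKMSKeyId}' --profile <profile_name>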
Related
I am using LocalStack version 0.12.19.4 on a Mac.
I have created an S3 bucket called mybucket:
localstack start                      # s3 runs on port 4566
http://localhost:4566/health          # everything is running
awslocal s3 mb s3://mybucket
awslocal s3api put-bucket-acl --bucket mybucket --acl public-read
I add some files to my S3 bucket and then check with both awslocal and aws:
aws --endpoint-url=http://127.0.0.1:4566 s3 ls
awslocal s3 ls
Both show that my bucket exists.
Now, from a Docker image, when I try to access one of the files in the mybucket S3 bucket, I get the following error:
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "http://localhost:4566/mybucket/dev/us/2020_08_11/eea9efc9-5970-426b-b867-9f57d6d9548f/850f35c8-0ada-44e4-96e1-e050e3040609"
When I check the contents of the S3 bucket, I do see that the specific file exists.
One more fact: when I retrieve the Docker ports for LocalStack, I see
4566/tcp -> 127.0.0.1:4566
4571/tcp -> 127.0.0.1:4571
Any ideas as to what I am doing wrong or missing?
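One common cause (an assumption here, since the container setup isn't shown): inside a Docker container, localhost refers to the container itself, not to the Mac running LocalStack, so the client must point at the host instead. On Docker for Mac that is typically host.docker.internal:
aws --endpoint-url=http://host.docker.internal:4566 s3 ls s3://mybucket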
I would like the "Access" column in the web console bucket list to read "not public" for each bucket.
for BUCKET_NAME in $(aws s3 --profile YOUR_PROFILE_HERE ls s3:// | cut -d' ' -f3); do
  aws s3api --profile YOUR_PROFILE_HERE put-public-access-block --bucket "$BUCKET_NAME" \
    --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
done
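To verify the change took effect on a given bucket, you can read the configuration back (same hypothetical profile placeholder as above):
aws s3api get-public-access-block --bucket YOUR_BUCKET_HERE --profile YOUR_PROFILE_HERE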
I want to list all of the files in an AWS S3 bucket that end in .css.
I saw this question: "Filter S3 list-objects results to find a key matching a pattern", and from reading it, I tried this:
aws s3api list-objects --bucket mybucket --query "Contents[?contains(Key, 'css')]"
That returned every file inside a /css folder as well as files with 'css' anywhere in the name. I want the equivalent of find "*.css". Is that possible?
Try using the ends_with function. Include the leading dot in the suffix, otherwise keys ending in, say, .scss would match as well:
aws s3api list-objects --bucket mybucket --query "Contents[?ends_with(Key, '.css')]"
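If you only need the key names rather than the full object records, you can project Key and switch to text output (a minor variation on the same query):
aws s3api list-objects --bucket mybucket --query "Contents[?ends_with(Key, '.css')].Key" --output text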
Say I have a website that returns JSON data when I send a GET request using curl. I want to redirect the output of curl to AWS S3; a new file should be created on S3 for it.
Currently I am able to redirect the output to store it locally.
curl -s -X GET 'http://website_that_returns_json.com' > folder_to_save/$(date +"%Y-%m-%d_%H-%M.json")
I have the AWS CLI and s3cmd installed. How would I redirect the output of curl to create a new file on AWS S3?
Assume:
AWS S3 access key and secret key are already set.
Location to store the file: mybucket/$(date +"%Y-%m-%d_%H-%M.json")
The AWS Command-Line Interface (CLI) has the ability to stream data to/from Amazon S3:
The following cp command uploads a local file stream from standard input to a specified bucket and key:
aws s3 cp - s3://mybucket/stream.txt
So, you could use:
curl xxx | aws s3 cp - s3://mybucket/object.txt
However, it's probably safer to save the file locally and then copy it to Amazon S3.
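A minimal sketch of that safer two-step approach, reusing the URL and timestamped naming from the question:
curl -s -X GET 'http://website_that_returns_json.com' -o /tmp/response.json
aws s3 cp /tmp/response.json "s3://mybucket/$(date +"%Y-%m-%d_%H-%M.json")"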
In case you'd like to run the command on a remote instance, use aws ssm send-command.
Then, to redirect the output of that command to S3, you can use the --output-s3-bucket-name parameter.
Here is a Bash script that runs a PowerShell script on the remote instance and uploads its output to an S3 bucket:
instanceId="i-xyz"
bucketName="bucket_to_save"
bucketDir="folder_to_save"
command="Invoke-WebRequest -UseBasicParsing -Uri http://example.com).Content"
cmdId=$(aws ssm send-command --instance-ids "$instanceId" --document-name "AWS-RunPowerShellScript" --query "Command.CommandId" --output text --output-s3-bucket-name "$bucketName" --output-s3-key-prefix "$bucketDir" --parameters commands="'${command}'")
while [[ "$(aws ssm list-command-invocations --command-id "$cmdId" --query "CommandInvocations[].Status" --output text)" =~ ^(Pending|InProgress)$ ]]; do sleep 1; done  # wait while still queued or running
outputPath=$(aws ssm list-command-invocations --command-id "$cmdId" --details --query "CommandInvocations[].CommandPlugins[].OutputS3KeyPrefix" --output text)
echo "Command output uploaded at: s3://${bucketName}/${outputPath}"
aws s3 ls "s3://${bucketName}/${outputPath}"
To print the command's uploaded stderr and stdout, run:
aws s3 ls s3://${bucketName}/${outputPath}/stderr.txt && aws s3 cp --quiet s3://${bucketName}/${outputPath}/stderr.txt /dev/stderr
aws s3 cp --quiet s3://${bucketName}/${outputPath}/stdout.txt /dev/stdout
I want to filter the files in an S3 bucket by their size, using the AWS CLI. For example, if I want to filter the files by their name, I would do:
$ aws s3api list-objects --bucket bucket-name --query "Contents[?contains(Key, 'key-name')]"
I'm looking for a similar command to filter these files by size, using the "<" and ">" operators.
I found out what my mistake was: I was using single quotes instead of backticks (`), which is why it wasn't working. The answer to the question is:
aws s3api list-objects --bucket bucket-name --query 'Contents[?Size<`2000`]'
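The > operator works the same way, and conditions can be combined; for example, to list only files between 1000 and 2000 bytes:
aws s3api list-objects --bucket bucket-name --query 'Contents[?Size > `1000` && Size < `2000`]'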