Connection to CloudFormation-generated S3 bucket times out - amazon-s3

I'm trying to solve an issue with my AWS CloudFormation template. The template includes a VPC with a private subnet, and a VPC endpoint to allow connections to S3 buckets.
The template itself includes 3 buckets, and I have a couple of pre-existing buckets already set up in the same region (in this case, eu-west-1).
I log into an EC2 instance in the private subnet, then use aws-cli commands to access S3 (e.g. sudo aws s3 ls bucketname).
My problem is that I can only list the contents of pre-existing buckets in that region, or new buckets that I create manually through the website. When I try to list CloudFormation-generated buckets it just hangs and times out:
[ec2-user@ip-10-44-1-129 ~]$ sudo aws s3 ls testbucket
HTTPSConnectionPool(host='vltestbucketxxx.s3.amazonaws.com', port=443): Max retries exceeded with url: /?delimiter=%2F&prefix=&encoding-type=url (Caused by ConnectTimeoutError(<botocore.awsrequest.AWSHTTPSConnection object at 0x7f2cc0bcf110>, 'Connection to vltestbucketxxx.s3.amazonaws.com timed out. (connect timeout=60)'))
It does not seem to be related to the VPC endpoint (setting its policy to allow everything has no effect):
{
  "Statement": [
    {
      "Action": "*",
      "Effect": "Allow",
      "Resource": "*",
      "Principal": "*"
    }
  ]
}
nor does AccessControl seem to affect it:
{
  "Resources": {
    "testbucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "AccessControl": "PublicReadWrite",
        "BucketName": "testbucket"
      }
    }
  }
}
Bucket policies don't seem to be the issue either (I've generated buckets with no policy attached, and again only the CloudFormation-generated ones time out). On the website, the configuration for a bucket that connects and one that times out looks identical to me.
Trying to access buckets in other regions also times out, but as I understand it CloudFormation creates buckets in the same region as the stack (and thus the VPC), so that shouldn't be it (the website also shows the buckets to be in the same region).
Does anyone have an idea of what the issue might be?
Edit: I can connect from the VPC public subnet, so maybe it is an endpoint problem after all?

When using a VPC endpoint, make sure that you've configured your client to send requests to the same endpoint that your VPC Endpoint is configured for via the ServiceName property (e.g., com.amazonaws.eu-west-1.s3).
To do this using the AWS CLI, set the AWS_DEFAULT_REGION environment variable or the --region command line option, e.g., aws s3 ls testbucket --region eu-west-1. If you don't set the region explicitly, the S3 client will default to using the global endpoint (s3.amazonaws.com) for its requests, which does not match your VPC Endpoint.
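The same idea applies when using an SDK. A minimal boto3 sketch (the bucket name is taken from the question) that pins the client to eu-west-1 so requests resolve to the regional S3 endpoint the VPC endpoint covers, rather than the global one:

import boto3

# Pin the client to the same region as the VPC endpoint (eu-west-1 here),
# so requests go to the regional S3 endpoint instead of s3.amazonaws.com.
s3 = boto3.client("s3", region_name="eu-west-1")

# "testbucket" is the bucket name used in the question.
response = s3.list_objects_v2(Bucket="testbucket")
for obj in response.get("Contents", []):
    print(obj["Key"])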

Related

CloudFront origin for specific region content

I have created four S3 buckets, each with a simple index.html file and each with unique content.
I have created a CloudFront distribution and assigned it four origins, one for each of the four buckets.
Each origin has an Origin Access Identity, and that OAI has been used in its related bucket's policy, e.g.:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity 123456789ABCDE"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-first-test-bucket/*"
    }
  ]
}
I have also set Block all public access to true for each bucket.
When I visit the CloudFront distribution name I see the content for my region.
However, when I use a geo-browser to test the distribution from another region (one closer to one of the other buckets) I see the same content.
How can I configure my CloudFront distribution to serve the closest region-specific content? (eg: us-east-1 bucket content served through CloudFront for New York users.)
A geo-browser is not reliable for testing this; you should test it with a good VPN instead.
To verify what I am saying, try to open a website that is blocked in China: a geo-browser will take you to it, because it only tries to trick the server into thinking the connection comes from China by changing the apparent IP address.
That cannot trick AWS, so test with a VPN (a paid one is preferable).
More info on how the AWS CloudFront CDN works:
- When the first user from a specific region requests a file, the file is streamed (copied) from S3 to the CloudFront edge server closest to that user's region.
- The file stays on that server temporarily (usually 24 hours).
- When a second user from the same region requests the same file, he/she gets the copy from that nearby CloudFront server too.
- If the same file changes on S3, it is updated in CloudFront in a very short time as well (from 1 second to 5 minutes).
So only the first request for the file is affected by the distance to the S3 bucket, which is negligible.
My recommendation is to use one S3 bucket only, with folders separating content by locale (us, fr, gb, ... etc.), and rely on the CloudFront CDN to distribute the content to the CDN servers for each region. I am using CloudFront in this way, and everything I wrote here is from real experiments I've done before.
Conclusion: if you use a CDN, the location of the storage server is not a factor in how quickly content is delivered.
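A hedged sketch of one common pattern for actually serving different content per viewer location from that single bucket: a CloudFront origin-request Lambda@Edge function that rewrites the path based on the CloudFront-Viewer-Country header. This assumes the header is configured to be forwarded to the origin and the bucket uses top-level locale folders such as /us, /fr, /gb (all names below are illustrative):

# Hypothetical Lambda@Edge origin-request handler (Python): prefixes the
# request path with the viewer's country folder, e.g. /us/index.html.
SUPPORTED = {"us", "fr", "gb"}  # assumed locale folders in the single bucket
DEFAULT = "us"

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]
    # CloudFront only adds this header if it is configured to be forwarded.
    country = headers.get("cloudfront-viewer-country", [{"value": DEFAULT}])[0]["value"].lower()
    prefix = country if country in SUPPORTED else DEFAULT
    request["uri"] = "/" + prefix + request["uri"]
    return request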
You can use a Route53 traffic policy. Add a Geolocation rule and then a Cloudfront distribution as an endpoint.

Why is S3 cross-region replication not working for us when we upload a file with PHP?

S3 cross-region replication is not working for us when we upload a file with PHP.
When we upload the file from the AWS interface it replicates to the other bucket and works great, but when we use the S3 API for PHP (putObject) the file uploads but doesn't replicate to the other bucket.
What are we missing here?
Thanks
As I commented, it would be great to see the bucket policy of the upload bucket, the bucket policy of the destination bucket, and the permissions granted to whatever IAM role / user the PHP is using.
My guess is that there's some difference in config/permissioning between the source bucket's owning account (which is likely what you use when manipulating from the AWS Console interface) and whatever account or role or user is representing your PHP code. For example:
If the owner of the source bucket doesn't own the object in the bucket, the object owner must grant the bucket owner READ and READ_ACP permissions with the object access control list (ACL)
Pending more info from the OP, I'll add some potentially helpful trouble-shooting resources:
Can't get amazon S3 cross-region replication between two accounts to work
AWS Troubleshooting Cross-Region Replication
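Following up on the ownership/ACL point above: if the PHP code uploads from a different account than the bucket owner, setting the ACL at upload time is a common fix. A hedged boto3 sketch of the equivalent call (the PHP SDK's putObject accepts the same ACL parameter; bucket and key names are placeholders):

import boto3

s3 = boto3.client("s3")

# Granting the bucket owner full control on upload avoids the case where the
# source-bucket owner can't access the object and replication fails for it.
s3.put_object(
    Bucket="source-bucket-name",   # placeholder
    Key="path/to/file.txt",        # placeholder
    Body=b"example payload",
    ACL="bucket-owner-full-control",
)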
I don't know if it is the same for replicating buckets between accounts, but I use this policy to replicate objects uploaded to a bucket in us-east-1 to a bucket in eu-west-1 and it works like a charm, both when uploading files manually and from a Python script.
{
  "Version": "2008-10-17",
  "Id": "S3-Console-Replication-Policy",
  "Statement": [
    {
      "Sid": "S3ReplicationPolicyStmt1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<AWS Account ID>:root"
      },
      "Action": [
        "s3:GetBucketVersioning",
        "s3:PutBucketVersioning",
        "s3:ReplicateObject",
        "s3:ReplicateDelete"
      ],
      "Resource": [
        "arn:aws:s3:::<replicated region ID>.<replicated bucket name>",
        "arn:aws:s3:::<replicated region ID>.<replicated bucket name>/*"
      ]
    }
  ]
}
Where:
- <AWS Account ID> is, of course, your AWS account ID
- <replicated region ID> is the AWS region ID (eu-west-1, us-east-1, ...) where the replicated bucket will be (in my case it is eu-west-1)
- <replicated bucket name> is the name of the bucket you want to replicate
So say you want to replicate a bucket called "my.bucket.com" into eu-west-1: the Resource ARN to put in the policy will be arn:aws:s3:::eu-west-1.my.bucket.com. Same for the entry with the trailing /*.
Also, the replication rule is set as follows (a scripted equivalent is sketched after this list):
- Source: entire bucket
- Destination: the bucket I mentioned above
- Destination options: leave all unchecked
- IAM role: Create new role
- Rule name: give it a significant name
- Status: Enabled
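For reference, the same rule can also be created programmatically. A minimal boto3 sketch, assuming versioning is already enabled on both buckets and a replication IAM role already exists (the role ARN and bucket names are placeholders):

import boto3

s3 = boto3.client("s3")

# Equivalent of the console settings above: entire bucket as source,
# one destination bucket, rule enabled.
s3.put_bucket_replication(
    Bucket="source-bucket-name",  # placeholder
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/replication-role",  # placeholder
        "Rules": [
            {
                "ID": "replicate-entire-bucket",
                "Status": "Enabled",
                "Prefix": "",  # empty prefix = entire bucket
                "Destination": {"Bucket": "arn:aws:s3:::destination-bucket-name"},  # placeholder
            }
        ],
    },
)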

Accessing different region s3 bucket from an ec2 instance

I have assigned a role with the following policy to my EC2 instance running in the us-west-2 region:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "*"
    }
  ]
}
and I am trying to access a bucket in the ap-southeast-1 region. The problem is that every aws s3 operation times out. I have also tried specifying the region in the command with --region ap-southeast-1.
From the documentation, I found this pointer -
Endpoints are supported within the same region only. You cannot create
an endpoint between a VPC and a service in a different region.
So, what is the process for accessing a bucket in a different region using aws-cli or a boto client from the instance?
Apparently, to access a bucket in a different region, the instance also needs access to the public internet. Therefore, the instance needs to have a public IP or it has to be behind a NAT.
I think it is not necessary to specify the region of the bucket in order to access it; you can check some boto3 examples here:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.get_object
I would check to make sure you've given the above permission to the correct User or Role.
Run the command;
aws sts get-caller-identity
You may think the EC2 instance is using credentials you've set when it may be using an IAM role.
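Putting those checks together, a small boto3 sketch (the bucket name is a placeholder) that first prints the identity actually in use and then lists a bucket that lives in ap-southeast-1, once the instance has a path to the public internet (public IP or NAT):

import boto3

# Equivalent of `aws sts get-caller-identity`: shows which role/user the
# instance credentials actually resolve to.
print(boto3.client("sts").get_caller_identity()["Arn"])

# Pin the client to the bucket's region to avoid relying on cross-region
# redirects through the global endpoint.
s3 = boto3.client("s3", region_name="ap-southeast-1")
for obj in s3.list_objects_v2(Bucket="my-apsoutheast-bucket").get("Contents", []):  # placeholder
    print(obj["Key"])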

Amazon S3 Permission problem - How to set permissions for all files at once?

I have uploaded some files using the Amazon AWS management console.
I got an HTTP 403 Access denied error. I found out that I needed to set the permission to view.
How would I do that for all the files on the bucket?
I know that it is possible to set permission on each file, but it's time-consuming when having many files that need to be viewable for everyone.
I suggest that you apply a bucket policy [1] to the bucket where you want to store public content. This way you don't have to set an ACL for every object. Here is an example of a policy that will make all the files in the bucket mybucket public.
{
  "Version": "2008-10-17",
  "Id": "http better policy",
  "Statement": [
    {
      "Sid": "readonly policy",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/sub/dirs/are/supported/*"
    }
  ]
}
That * in "Resource": "arn:aws:s3:::mybucket/sub/dirs/are/supported/*" allows recursion.
[1] Note that a Bucket Policy is different from an IAM Policy. (For one, you will get an error if you try to include Principal in an IAM Policy.) The Bucket Policy can be edited by going to the root of the bucket in your AWS web console and expanding Properties > Permissions. Subdirectories of a bucket also have Properties > Permissions, but there is no option to Edit bucket policy there.
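If you prefer to apply the bucket policy from code rather than the console, a minimal boto3 sketch using the example policy above (bucket name and prefix are the same placeholders; note that the bucket's Block Public Access settings must permit public policies for this to succeed):

import json
import boto3

bucket = "mybucket"  # placeholder from the example policy above

policy = {
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "readonly policy",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::" + bucket + "/sub/dirs/are/supported/*",
        }
    ],
}

# Attach the bucket policy; matching objects become publicly readable
# without touching per-object ACLs.
boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))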
You can select which directory you want to be public.
Press "More" and mark it as public; this will make the directory and all of its files publicly accessible.
You can only modify ACLs on a single item at a time (bucket or object), so you will have to change them one by one.
Some S3 management applications allow you to apply the same ACL to all items in a bucket, but internally they still apply the ACL to each item one by one.
If you upload your files programmatically, it's important to specify the ACL as you upload each file, so you don't have to modify it later. The problem with using an S3 management application (like CloudBerry, Transmit, ...) is that most of them use the default ACL (private read only) when you upload each file.
I used Cloudberry Explorer to do the job :)
Using S3 Browser you can update permissions using the gui, also recursively. It's a useful tool and free for non-commercial use.
To make a bulk of files public, do the following:
Go to S3 web interface
Open the required bucket
Select the required files and folders by clicking the checkboxes at the left of the list
Click «More» button at the top of the list, click «Make public»
Confirm by clicking «Make public». The files won't get public write access despite the warning saying «...read this object, read and write permissions».
You could set ACL on each file using aws cli:
BUCKET_NAME=example
BUCKET_DIR=media
NEW_ACL=public-read
aws s3 ls $BUCKET_NAME/$BUCKET_DIR/ | \
awk '{$1=$2=$3=""; print $0}' | \
xargs -t -I _ \
aws s3api put-object-acl --acl $NEW_ACL --bucket $BUCKET_NAME --key "$BUCKET_DIR/_"
I had the same problem while uploading files through a program (Java) to an S3 bucket:
Error: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'http://localhost:9000' is therefore not allowed access. The response had HTTP status code 403
I added the origin identity and changed the bucket policy and CORS configuration; then everything worked fine.
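For reference, a hedged boto3 sketch of a CORS configuration that would allow the http://localhost:9000 origin from the error above (the bucket name is a placeholder; adjust origins, methods, and headers to your case):

import boto3

boto3.client("s3").put_bucket_cors(
    Bucket="my-bucket",  # placeholder
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["http://localhost:9000"],
                "AllowedMethods": ["GET", "PUT", "POST"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)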
Transmit 5
I wanted to add this here for potential macOS users that already have the beautifully-crafted FTP app called Transmit by Panic.
I already had Transmit and it supports S3 buckets (not sure what version this came in, but I think the upgrades were free). It also supports recursively updating Read and Write permissions.
You simply right click the directory you want to update and select the Read and Write permissions you want to set them to.
It doesn't seem terribly fast but you can open up the log file by going Window > Transcript so you at least know that it's doing something.
Use AWS policy generator to generate a policy which fits your need. The principal in the policy generator should be the IAM user/role which you'd be using for accessing the object(s).
Resource ARN should be arn:aws:s3:::mybucket/sub/dirs/are/supported/*
Next, click on "Add statement" and follow through. You'll finally get a JSON representing the policy. Paste this in your s3 bucket policy management section which is at "your s3 bucket page in AWS -> permissions -> bucket policy".
This worked for me on DigitalOcean, which allegedly has the same API as S3:
s3cmd modify s3://[BUCKETNAME]/[DIRECTORY] --recursive --acl-public
The above sets all files to public.

How to make 10,000 files in S3 public

I have a folder in a bucket with 10,000 files. There seems to be no way to upload them and make them public straight away. So I uploaded them all, they're private, and I need to make them all public.
I've tried the AWS console, but it just gives an error (it works fine with folders with fewer files).
I've tried using S3 organizing in Firefox, same thing.
Is there some software or some script I can run to make all these public?
You can generate a bucket policy (see example below) which gives access to all the files in the bucket. The bucket policy can be added to a bucket through AWS console.
{
  "Id": "...",
  "Statement": [
    {
      "Sid": "...",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucket/*",
      "Principal": {
        "AWS": [ "*" ]
      }
    }
  ]
}
Also look at the following policy generator tool provided by Amazon.
http://awspolicygen.s3.amazonaws.com/policygen.html
If you are uploading for the first time, you can set the files to be public on upload on the command line:
aws s3 sync . s3://my-bucket/path --acl public-read
As documented in Using High-Level s3 Commands with the AWS Command Line Interface
Unfortunately it only applies the ACL when the files are uploaded. It does not (in my testing) apply the ACL to already uploaded files.
If you do want to update existing objects, you used to be able to sync the bucket to itself, but this seems to have stopped working.
[Not working anymore] This can be done from the command line:
aws s3 sync s3://my-bucket/path s3://my-bucket/path --acl public-read
(So this no longer answers the question, but leaving answer for reference as it used to work.)
I had to change several hundred thousand objects. I fired up an EC2 instance to run this, which makes it all go faster. You'll want to install the aws-sdk gem first.
Here's the code:
require 'rubygems'
require 'aws-sdk'

# Change this stuff.
AWS.config({
  :access_key_id => 'YOURS_HERE',
  :secret_access_key => 'YOURS_HERE',
})
bucket_name = 'YOUR_BUCKET_NAME'

s3 = AWS::S3.new()
bucket = s3.buckets[bucket_name]
bucket.objects.each do |object|
  puts object.key
  object.acl = :public_read
end
I had the same problem; the solution by @DanielVonFange is outdated, as a new version of the SDK is out.
Adding a code snippet that works for me right now with the AWS Ruby SDK:
require 'aws-sdk'

Aws.config.update({
  region: 'REGION_CODE_HERE',
  credentials: Aws::Credentials.new(
    'ACCESS_KEY_ID_HERE',
    'SECRET_ACCESS_KEY_HERE'
  )
})
bucket_name = 'BUCKET_NAME_HERE'

s3 = Aws::S3::Resource.new
s3.bucket(bucket_name).objects.each do |object|
  puts object.key
  object.acl.put({ acl: 'public-read' })
end
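For completeness, the same bulk ACL update can be done with Python and boto3; a minimal sketch using a paginator so it copes with buckets of any size (the bucket name is a placeholder):

import boto3

bucket = "YOUR_BUCKET_NAME"  # placeholder
s3 = boto3.client("s3")

# Page through every object and set its ACL to public-read, one call per object.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        print(obj["Key"])
        s3.put_object_acl(Bucket=bucket, Key=obj["Key"], ACL="public-read")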
Just wanted to add that with the new S3 Console you can select your folder(s) and select Make public to make all files inside the folders public. It works as a background task so it should handle any number of files.
Using the cli:
aws s3 ls s3://bucket-name --recursive > all_files.txt && grep .jpg all_files.txt > files.txt && cat files.txt | awk '{cmd="aws s3api put-object-acl --acl public-read --bucket bucket-name --key "$4;system(cmd)}'
Had this need myself, but the number of files makes it WAY too slow to do in serial. So I wrote a script that does it on iron.io's IronWorker service. Their 500 free compute hours per month are enough to handle even large buckets (and if you do exceed that, the pricing is reasonable). Since it is done in parallel it completes in less than a minute for the 32,000 objects I had. Also, I believe their servers run on EC2, so the communication between the job and S3 is quick.
Anybody is welcome to use my script for their own needs.
Have a look at BucketExplorer; it manages bulk operations very well and is a solid S3 client.
You would think they would make public read the default behavior, wouldn't you? : )
I shared your frustration while building a custom API to interface with S3 from a C# solution. Here is the snippet that accomplishes uploading an S3 object and setting it to public-read access by default:
public void Put(string bucketName, string id, byte[] bytes, string contentType, S3ACLType acl) {
    string uri = String.Format("https://{0}/{1}", BASE_SERVICE_URL, bucketName.ToLower());
    DreamMessage msg = DreamMessage.Ok(MimeType.BINARY, bytes);
    msg.Headers[DreamHeaders.CONTENT_TYPE] = contentType;
    msg.Headers[DreamHeaders.EXPECT] = "100-continue";
    msg.Headers[AWS_ACL_HEADER] = ToACLString(acl);
    try {
        Plug s3Client = Plug.New(uri).WithPreHandler(S3AuthenticationHeader);
        s3Client.At(id).Put(msg);
    } catch (Exception ex) {
        throw new ApplicationException(String.Format("S3 upload error: {0}", ex.Message));
    }
}
The ToACLString(acl) function returns public-read, BASE_SERVICE_URL is s3.amazonaws.com and the AWS_ACL_HEADER constant is x-amz-acl. The plug and DreamMessage stuff will likely look strange to you as we're using the Dream framework to streamline our http communications. Essentially we're doing an http PUT with the specified headers and a special header signature per aws specifications (see this page in the aws docs for examples of how to construct the authorization header).
To change ACLs on an existing 1,000 objects you could write a script, but it's probably easier to use a GUI tool to fix the immediate issue. The best I've used so far is from a company called CloudBerry for S3; it looks like they have a free 15-day trial for at least one of their products. I've just verified that it will allow you to select multiple objects at once and set their ACL to public through the context menu. Enjoy the cloud!
If your filenames have spaces, we can take Alexander Vitanov's answer above and run it through jq:
#!/bin/bash
# make every file public in a bucket example
bucket=www.example.com
IFS=$'\n' && for tricky_file in $(aws s3api list-objects --bucket "${bucket}" | jq -r '.Contents[].Key')
do
echo $tricky_file
aws s3api put-object-acl --acl public-read --bucket "${bucket}" --key "$tricky_file"
done