Accessing a different-region S3 bucket from an EC2 instance - amazon-s3

I have assigned a role with the following policy to my EC2 instance running in the us-west-2 region -
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "*"
    }
  ]
}
and I am trying to access a bucket in the ap-southeast-1 region. The problem is that every aws s3 operation is timing out. I have also tried specifying the region in the command with --region ap-southeast-1.
From the documentation, I found this pointer -
Endpoints are supported within the same region only. You cannot create
an endpoint between a VPC and a service in a different region.
So, what is the process to access a bucket in a different region using the aws-cli or a boto client from the instance?

Apparently, to access a bucket in a different region, the instance also needs access to the public internet. Therefore, the instance needs to have a public IP or it has to be behind a NAT.

I think it is not necessary to specify the region of the bucket in order to access it; you can check some boto3 examples here:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.get_object
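A minimal boto3 sketch along those lines (the bucket and key names below are placeholders; the instance role supplies the credentials):
import boto3

# The client picks up credentials from the instance role and the region from the
# default configuration; region_name can be passed explicitly if you want to pin it.
s3 = boto3.client("s3")

response = s3.get_object(Bucket="my-apse1-bucket", Key="path/to/object.txt")
print(len(response["Body"].read()), "bytes read")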

I would check to make sure you've given the above permissions to the correct user or role.
Run the command:
aws sts get-caller-identity
You may think the EC2 instance is using credentials you've set when it may be using an IAM role.
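The same check can be done from Python if that's easier from inside the application (a small boto3 sketch):
import boto3

sts = boto3.client("sts")
identity = sts.get_caller_identity()

# The Arn shows whether requests are signed with the instance role
# (...:assumed-role/<role-name>/<instance-id>) or with static user credentials.
print(identity["Account"], identity["Arn"])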

Related

S3 403 error for one Beanstalk environment but not the other in the same application, with the same IAM Instance Profile, getting the same S3 file

As the title says, I have created another Elastic Beanstalk environment in the same application. They both are currently trying to read the same S3 file. Both have the same IAM Instance Profile. The S3 bucket does not have a Bucket Policy. All the permissions are done via IAM.
Here are the errors I see
Error loading S3 properties file from https://s3.amazonaws.com/xxxxxx/1.0/xxx-secure.properties. The request made it to S3 but was rejected for some reason.
and then further down
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied;
I have read that 403 errors don't always actually mean Access Denied, but I am at a loss at what the issue could be.
Additional notes: I inherited 2 applications a few years ago, written by two different people. In application one, I have never had any issues with S3 permissions and creating new environments. In application 2 (this one), I often get issues when I try to change anything with an S3 location. Thus, I am keeping it the same for now.
S3 permissions (remember, it works for one environment, so I don't think this is it):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::xxxxxxx/*"
      ]
    }
  ]
}
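One way to narrow this down (a hedged debugging sketch, not a diagnosis; the bucket and key below are placeholders for the masked properties-file location) is to request the same object from the failing environment's instance with boto3 and see which identity and error code come back:
import boto3
from botocore.exceptions import ClientError

# Which role is actually signing the requests on this instance?
print(boto3.client("sts").get_caller_identity()["Arn"])

s3 = boto3.client("s3")
try:
    # Placeholder bucket/key standing in for the real properties-file path.
    s3.head_object(Bucket="xxxxxx", Key="1.0/xxx-secure.properties")
    print("HeadObject succeeded - this role can reach the object")
except ClientError as e:
    print("Failed:", e.response["Error"]["Code"])
As an aside, s3:ListBucket is evaluated against the bucket ARN itself (arn:aws:s3:::xxxxxxx, without /*), and without it a request for a missing or mistyped key surfaces as a 403 rather than a 404, which is one common way 403s appear that "don't actually mean Access Denied".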

AccessDenied error from CreateBucket permissions when using pandas to_csv to S3

I have a script running on an EC2 box that finishes by running df.to_csv('s3://<my_bucket_name>/<file_path>').
Run locally with my AWS admin credentials, this script runs fine and deposits the csv into the right bucket.
My S3 permissions for the EC2 instance are copied and pasted out of AWS' documentation: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_rw-bucket.html
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListObjectsInBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::<my_bucket_name>"]
    },
    {
      "Sid": "AllObjectActions",
      "Effect": "Allow",
      "Action": "s3:*Object*",
      "Resource": ["arn:aws:s3:::<my_bucket_name>/*"]
    }
  ]
}
When run on the EC2 instance, my error is botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the CreateBucket operation: Access Denied.
I don't understand why pandas/s3fs is trying to create a bucket when mine already exists. Suggestions elsewhere were to just provide s3:* access to EC2, but I'd prefer to be a little more restrictive than having no restrictions at all.
Any thoughts on how to resolve this?
Turns out this was more of an issue with the AWS Batch role that was running the EC2 instance. The write permissions are good enough to write to S3 without bucket-listing privileges. The AccessDenied error was a red herring for the more general problem that no privileges were being passed to the instance.
A quick look at the Pandas codebase didn't show me anything concrete, but my guess would be that it's checking to see if the bucket exists before listing/updating the objects and failing because it doesn't have the s3:ListAllMyBuckets permission.
You could confirm or deny this theory by giving your role that action (in its own statement), which would hopefully avoid having to give s3:* to it.
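Alternatively, if you'd rather not widen the policy at all, a hedged workaround sketch is to write the CSV through boto3 directly, which only needs s3:PutObject and skips any bucket-existence probe s3fs might make (the bucket name, key, and DataFrame below are placeholders):
import io

import boto3
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})  # stand-in for the real DataFrame

# Serialize locally, then upload the bytes with a single PutObject call.
buf = io.StringIO()
df.to_csv(buf, index=False)

s3 = boto3.client("s3")
s3.put_object(
    Bucket="<my_bucket_name>",
    Key="path/to/file.csv",
    Body=buf.getvalue().encode("utf-8"),
)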

Why is S3 cross-region replication not working for us when we upload a file with PHP?

S3 cross-region replication is not working for us when we upload a file with PHP.
When we upload the file from the AWS interface it replicates to the other bucket and works great, but when we use the S3 API for PHP (putObject) the file is uploaded but not replicated to the other bucket.
What are we missing here?
Thanks
As I commented, it would be great to see the bucket policy of the upload bucket, the bucket policy of the destination bucket, and the permissions granted to whatever IAM role / user the PHP is using.
My guess is that there's some difference in config/permissioning between the source bucket's owning account (which is likely what you use when manipulating from the AWS Console interface) and whatever account or role or user is representing your PHP code. For example:
If the owner of the source bucket doesn't own the object in the bucket, the object owner must grant the bucket owner READ and READ_ACP permissions with the object access control list (ACL)
Pending more info from the OP, I'll add some potentially helpful trouble-shooting resources:
Can't get amazon S3 cross-region replication between two accounts to work
AWS Troubleshooting Cross-Region Replication
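In the meantime, one quick check worth doing (a boto3 sketch; the bucket and key names are placeholders) is to look at the ReplicationStatus that S3 records on the object the PHP code uploaded: no value usually means the object never matched a replication rule, while PENDING or FAILED points at the rule or the replication role.
import boto3

s3 = boto3.client("s3")
head = s3.head_object(Bucket="source-bucket", Key="uploads/example.jpg")

# A missing key suggests the object did not match any replication rule;
# otherwise expect PENDING, COMPLETED, or FAILED.
print(head.get("ReplicationStatus", "no replication status recorded"))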
I don't know if it is the same for replicating buckets between accounts, but I use this policy to replicate objects uploaded to a bucket in us-east-1 to a bucket in eu-west-1 and it works like a charm, both when uploading files manually and from a Python script.
{
  "Version": "2008-10-17",
  "Id": "S3-Console-Replication-Policy",
  "Statement": [
    {
      "Sid": "S3ReplicationPolicyStmt1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<AWS Account ID>:root"
      },
      "Action": [
        "s3:GetBucketVersioning",
        "s3:PutBucketVersioning",
        "s3:ReplicateObject",
        "s3:ReplicateDelete"
      ],
      "Resource": [
        "arn:aws:s3:::<replicated region ID>.<replicated bucket name>",
        "arn:aws:s3:::<replicated region ID>.<replicated bucket name>/*"
      ]
    }
  ]
}
Where:
- <AWS Account ID> is, of course, your AWS account ID
- <replicated region ID> is the AWS region ID (eu-west-1, us-east-1, ...) where the replicated bucket will be (in my case it is eu-west-1)
- <replicated bucket name> is the name of the bucket you want to replicate.
So say you want to replicate a bucket called "my.bucket.com" to eu-west-1; the Resource ARN to put in the policy will be arn:aws:s3:::eu-west-1.my.bucket.com. Same for the variant with the trailing /*.
Also the replication rule is set as follows:
- Source: entire bucket
- Destination: the bucket I mentioned above
- Destination options: leave all unchecked
- IAM role: Create new role
- Rule name: give it a significant name
- Status: Enabled
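If you want to compare a rule like this one with the rule on your own source bucket programmatically, get_bucket_replication returns both the rules and the IAM role S3 assumes for the copy (a sketch; the bucket name is a placeholder):
import json

import boto3

s3 = boto3.client("s3")
config = s3.get_bucket_replication(Bucket="my.bucket.com")["ReplicationConfiguration"]

print(config["Role"])                         # IAM role S3 assumes to replicate
print(json.dumps(config["Rules"], indent=2))  # rules: filter, status, destination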

Connection to CloudFormation-generated S3 bucket times out

I'm trying to solve an issue with my AWS CloudFormation template. The template I have includes a VPC with a private subnet, and a VPC endpoint to allow connections to S3 buckets.
The template itself includes 3 buckets, and I have a couple of preexisting buckets already set up in the same region (in this case, eu-west-1).
I use aws-cli to log into an EC2 instance in the private subnet, then use aws-cli commands to access S3 (e.g. sudo aws s3 ls bucketname)
My problem is that I can only list the content of pre-existing buckets in that region, or new buckets that I create manually through the website. When I try to list cloudformation-generated buckets it just hangs and times out:
[ec2-user@ip-10-44-1-129 ~]$ sudo aws s3 ls testbucket
HTTPSConnectionPool(host='vltestbucketxxx.s3.amazonaws.com', port=443): Max retries exceeded with url: /?delimiter=%2F&prefix=&encoding-type=url (Caused by ConnectTimeoutError(<botocore.awsrequest.AWSHTTPSConnection object at 0x7f2cc0bcf110>, 'Connection to vltestbucketxxx.s3.amazonaws.com timed out. (connect timeout=60)'))
It does not seem to be related to the VPC endpoint (setting the config to allow everything has no effect)
{
  "Statement": [
    {
      "Action": "*",
      "Effect": "Allow",
      "Resource": "*",
      "Principal": "*"
    }
  ]
}
nor does AccessControl seem to affect it.
{
  "Resources": {
    "testbucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "AccessControl": "PublicReadWrite",
        "BucketName": "testbucket"
      }
    }
  }
}
Bucket policies don't seem to be the issue either (I've generated buckets with no policy attached, and again only the cloudformation generated ones time out). On the website, configuration for a bucket that connects and one that times out looks identical to me.
Trying to access buckets in other regions also times out, but as I understood it cloudformation generates buckets in the same region as the VPC, so that shouldn't be it (the website also shows the buckets to be in the same region).
Does anyone have an idea of what the issue might be?
Edit: I can connect from the VPC public subnet, so maybe it is an endpoint problem after all?
When using a VPC endpoint, make sure that you've configured your client to send requests to the same endpoint that your VPC Endpoint is configured for via the ServiceName property (e.g., com.amazonaws.eu-west-1.s3).
To do this using the AWS CLI, set the AWS_DEFAULT_REGION environment variable or the --region command line option, e.g., aws s3 ls testbucket --region eu-west-1. If you don't set the region explicitly, the S3 client will default to using the global endpoint (s3.amazonaws.com) for its requests, which does not match your VPC Endpoint.
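If the instance also talks to S3 from code, the same fix applies there; for example, a boto3 client can be pinned to the regional endpoint (a sketch; the bucket name is a placeholder):
import boto3

# An explicit region sends requests to s3.eu-west-1.amazonaws.com rather than
# the global s3.amazonaws.com endpoint, so they stay on the gateway endpoint's path.
s3 = boto3.client("s3", region_name="eu-west-1")

for page in s3.get_paginator("list_objects_v2").paginate(Bucket="testbucket"):
    for obj in page.get("Contents", []):
        print(obj["Key"])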

Move AWS S3 bucket to another AWS account

Does AWS provide a way to copy a bucket from one account to a different account? I am uploading several files to my own bucket for development purposes, but now I'm going to want to switch the bucket to the client's account.
What are all the possible solutions to do that?
You can copy the contents of one bucket to another owned by a different account, but you cannot transfer ownership of a bucket to a new account. The way to think about it is you're transferring ownership of the objects in the bucket, not the bucket itself.
Amazon has very detailed articles about this procedure.
In the source account, attach the following policy to the bucket you want to copy.
#Bucket policy in the source AWS account
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DelegateS3Access",
      "Effect": "Allow",
      "Principal": {"AWS": "222222222222"},
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::sourcebucket/*",
        "arn:aws:s3:::sourcebucket"
      ]
    }
  ]
}
Attach a policy to a user or group in the destination AWS account to delegate access to the bucket in the source AWS account. If you attach the policy to a group, make sure that the IAM user is a member of the group.
#User or group policy in the destination AWS account
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": [
      "arn:aws:s3:::sourcebucket",
      "arn:aws:s3:::sourcebucket/*",
      "arn:aws:s3:::destinationbucket",
      "arn:aws:s3:::destinationbucket/*"
    ]
  }
}
When these steps are completed, you can copy objects by using the AWS Command Line Interface (CLI) commands cp or sync. For example, the following aws s3 sync command could be used to copy the contents from a bucket in the source AWS account to a bucket in the destination AWS account.
aws s3 sync s3://sourcebucket s3://destinationbucket
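If you'd rather do the copy from code, a rough boto3 equivalent of that sync, run with the destination-account credentials delegated above, might look like this (the bucket names are placeholders):
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# List every object in the source bucket and copy it into the destination bucket.
for page in paginator.paginate(Bucket="sourcebucket"):
    for obj in page.get("Contents", []):
        s3.copy_object(
            Bucket="destinationbucket",
            Key=obj["Key"],
            CopySource={"Bucket": "sourcebucket", "Key": obj["Key"]},
        )
Note that copy_object performs a server-side copy, so the object data does not pass through the machine running the script.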
You cannot move the whole bucket to another account. You would have to delete the bucket in one account first and re-create a bucket with the same name in the other account. It can take up to 24 hours for a bucket name to become available again after you delete it.
Or you can create a new bucket in the target account, move all the data there, and then delete the old bucket.
There are different tools that can help you perform these actions, but I assume I shouldn't paste any links here.
Do you need to move the bucket to another region, or do you need to make these changes within one?