Python Meta Client Copy 403 Head Object Forbidden - amazon-s3

import boto3
import botocore
from botocore.config import Config
from datetime import timedelta, datetime, date
import json

def get_creds(role):
    session = botocore.session.get_session()
    aws_access_id = session.get_credentials().access_key
    aws_secret_id = session.get_credentials().secret_key
    aws_token = session.get_credentials().token
    return aws_access_id, aws_secret_id, aws_token

def create_connection(item, type):
    role = get_creds('arn:aws:iam::123456789:role/LambdaRole')
    my_config = Config(
        region_name='us-east-1',
        signature_version='s3v4'
    )
    if type == 'client':
        c = boto3.client(item,
                         config=my_config,
                         aws_access_key_id=role[0],
                         aws_secret_access_key=role[1],
                         aws_session_token=role[2],
                         )
    else:
        c = boto3.resource(item,
                           config=my_config,
                           aws_access_key_id=role[0],
                           aws_secret_access_key=role[1],
                           aws_session_token=role[2],
                           )
    return c

def lambda_handler(event, context):
    src_bucket = 'source_bucket'
    dest_bucket = 'destination_bucket'
    copy_to_prefix = dest_bucket + "/" + date.today().strftime("%Y/%m/%d") + '/'
    s3 = create_connection('s3', 'client')
    results = s3.list_objects_v2(Bucket=src_bucket)
    keys = []
    next_token = ''
    while next_token is not None:
        if next_token == '':
            results = s3.list_objects_v2(Bucket=src_bucket, Prefix='FBI/')
        elif next_token != '':
            results = s3.list_objects_v2(Bucket=src_bucket, Prefix='FBI/', ContinuationToken=next_token)
        next_token = results.get('NextContinuationToken')
        contents = results.get('Contents')
        for i in contents:
            k = i.get('Key')
            keys.append(k)
    s3_resource = create_connection('s3', 'resource')
    for k in keys:
        copy_source = {
            'Bucket': '{}'.format(src_bucket),
            'Key': '{}'.format(k)
        }
        extra_args = {'ACL': 'bucket-owner-full-control'}
        s3_resource.meta.client.copy(copy_source, dest_bucket, copy_to_prefix + '{}'.format(k.split("/", 1)[1]), extra_args)
This script results in "errorMessage": "An error occurred (403) when calling the HeadObject operation: Forbidden".
The script does list the files; I can print(keys) and see the list. It is also able to create the destination: I deleted the "sub-folder", ran the script, and it recreated the entire structure.
The problem seems to be the actual GET from the other account.
Bucket Policy on source account:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::source_bucket",
        "arn:aws:s3:::source_bucket/FBI/*"
      ]
    }
  ]
}
Inline IAM Policy for the role on destination account:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::source_bucket",
        "arn:aws:s3:::source_bucket/FBI/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::destination_bucket",
        "arn:aws:s3:::destination_bucket/*"
      ]
    }
  ]
}
I also added a bucket policy on the destination bucket matching the IAM policy, just to be sure.
Any ideas why I am seeing this error?
EDIT:
At one point, CloudWatch in Account A showed "arn:aws:sts::803456671434:assumed-role/<role_name>/s3_session".
However, even adding that to the bucket policy has not changed the outcome.

As it turns out, none of my code or permissions setup is the issue. I was unaware that the files I am attempting to copy were actually loaded into the account I am trying to copy from by yet another account.
Here is the crux of what happens:
Account A (the originator account that I did not know about) uploads to Account B (the account I am trying to copy from), but does not grant the bucket-owner-full-control ACL. Because the ACL is never updated, Account C (my account) has no permissions on the objects, even though it does have access to the bucket itself, since Account B owns the bucket.
Possible solutions:
When Account A uploads the object to a bucket in Account B, the object owner can grant permissions to both the bucket owner (Account B) and Account C by specifying their canonical IDs via the --grants option. In this case the object owner (Account A) directly grants Account C access in the object ACL. Example: aws s3 cp /localpath/ s3://AccountBbucketname/ --grants full=id=<Account B canonical ID> full=id=<Account C canonical ID>
Account A uploads the objects normally to the bucket in Account B but grants access to the bucket owner. Account B can then have an event notification trigger a Lambda function that overwrites the objects uploaded by Account A, so that ownership of the objects changes to Account B. In that case both the bucket and the objects are owned by Account B, and Account B can set up a bucket policy allowing Account C access (a minimal boto3 sketch of this copy-in-place step follows these options).
There is possibly a third solution involving IAM role chaining: an IAM role in Account A allows a role in Account B to assume it, and Account C then assumes the Account B role. That seems like a potentially cumbersome setup, and the chaining could get complicated.
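For the second option, the copy-in-place step could look roughly like the sketch below. This is only a sketch, assuming it runs in Account B with s3:GetObject/s3:PutObject on the bucket and that Account A has already granted the bucket owner access as described; the bucket name and prefix are placeholders.
import boto3

s3 = boto3.client('s3')
bucket = 'source_bucket'   # placeholder: bucket owned by Account B
prefix = 'FBI/'            # placeholder prefix

paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get('Contents', []):
        key = obj['Key']
        # Copying an object onto itself requires changing something (here the
        # metadata directive), otherwise S3 rejects the request. Objects larger
        # than 5 GB would need a multipart copy instead of copy_object.
        s3.copy_object(
            Bucket=bucket,
            Key=key,
            CopySource={'Bucket': bucket, 'Key': key},
            MetadataDirective='REPLACE',
        )
Once Account B owns the objects, the cross-account copy from Account C depends only on the bucket policy.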

Related

Unable to read file from encrypted s3 bucket

I'm unable to read a file from an encrypted S3 bucket in a Lambda.
Below is my policy document where I'm granting access to S3 as well as KMS. I've attached this policy to the Lambda.
When I try to read a file from the bucket, I get an Access Denied error.
I'm adding a kms:RequestAlias condition to the KMS statement so that the Lambda will only have access to keys that have mytoken in the alias.
I suspect this is where I'm making the mistake, because if I remove the condition, the Lambda gets access to all keys and reads the encrypted file without any issues.
Can someone help me restrict access to only the keys that have mytoken in the alias?
data "aws_iam_policy_document" "lambda_s3_policy_doc" {
statement {
sid = ""
effect = "Allow"
resources = [
"arn:aws:s3:::mybucket*",
"arn:aws:s3:::mybucket*/*"
]
actions = [
"s3:AbortMultipartUpload",
"s3:CreateBucket",
"s3:DeleteObject",
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:ListMultipartUploadParts",
"s3:PutObject"
]
}
statement {
effect = "Allow"
actions = [
"kms:Decrypt",
"kms:DescribeKey",
"kms:Encrypt",
"kms:GenerateDataKey"
]
resources = ["*"]
condition {
test = "StringLike"
variable = "kms:RequestAlias"
values = [
"alias/*mytoken*"
]
}
}
}
What worked for me (I was trying to download files from a couple of encrypted buckets directly from the AWS console, so your case is in fact slightly different) was replacing kms:RequestAlias with kms:ResourceAliases.
statement {
  sid    = "AllowKMSAccessUsingAliases"
  effect = "Allow"
  actions = [
    "kms:Decrypt",
  ]
  resources = [
    "arn:aws:kms:eu-central-1:111111111111:key/*",
  ]
  condition {
    test     = "ForAnyValue:StringEquals"
    variable = "kms:ResourceAliases"
    values = [
      "alias/alias-bucket-1",
      "alias/alias-bucket-2",
    ]
  }
}
According to the AWS documentation this makes sense, at least to me: you should use kms:RequestAlias when the alias is included as part of your KMS request.
When you use kms:ResourceAliases, what gets checked is the alias associated with the KMS key involved in the operation, regardless of whether the alias was explicitly included in the request.
So your Lambda function, when asking for decryption of a file in a bucket, is probably using the KMS key ID in the request instead of the KMS alias; if that is the case, kms:RequestAlias won't work because there is no alias in the request to be checked.
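One way to see which key is involved (and that no alias appears in the request) is to inspect what S3 reports for the object. A minimal boto3 sketch, with the bucket and key names as placeholders:
# Hypothetical check: see which KMS key S3 used for an object.
# Bucket and key names are placeholders.
import boto3

s3 = boto3.client('s3')
resp = s3.head_object(Bucket='mybucket', Key='path/to/encrypted-file')

# For SSE-KMS objects this is the key ARN/ID, not an alias, which is why a
# kms:RequestAlias condition never matches when S3 decrypts on your behalf.
print(resp.get('ServerSideEncryption'))   # e.g. 'aws:kms'
print(resp.get('SSEKMSKeyId'))            # key ARN, no alias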

aws cloudsearch uploadDocument returning signaturedoesnotmatch

I'm using com.amazonaws.services.cloudsearchdomain.AmazonCloudSearchDomainClient to call uploadDocuments(), passing the AWS secret key, access key ID, and endpoint.
Access policy: access to everything for all services.
It returns:
Service: AmazonCloudSearchDomain; Status Code: 403; Error Code: SignatureDoesNotMatch
But with the same package I have tried search() with the same credentials, and I get search results correctly, as expected.
Can someone please help with the above exception?
This may be caused by your access policy allowing public access to search requests but not to uploads. So there may be an issue with the credentials being passed, but you don't see that error when performing search requests because credentials aren't necessary for that type of request.
For example, the access policy below would allow anyone to search without presenting credentials, but any other operation (like uploading documents) would require a valid set of credentials with access to the CloudSearch domain (see the boto3 sketch after the policy).
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "cloudsearch:search"
    }
  ]
}
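As a quick way to test whether the credentials themselves can sign an upload, here is a hedged boto3 (Python) sketch; the document-service endpoint, credentials, and document batch below are placeholders, not values from the question:
# Hypothetical sketch: upload a document batch to a CloudSearch domain
# with explicitly supplied credentials. Endpoint and credentials are placeholders.
import json
import boto3

client = boto3.client(
    'cloudsearchdomain',
    endpoint_url='https://doc-mydomain-xxxxxxxxxxxxxxxxxxxxxxxxxx.us-east-1.cloudsearch.amazonaws.com',
    region_name='us-east-1',
    aws_access_key_id='AKIA...',    # placeholder
    aws_secret_access_key='...',    # placeholder
)

batch = [{'type': 'add', 'id': 'doc1', 'fields': {'title': 'hello'}}]
resp = client.upload_documents(
    documents=json.dumps(batch).encode('utf-8'),
    contentType='application/json',
)
print(resp['status'])
If this succeeds, the credentials are fine and the Java client configuration (region/endpoint mismatch or key whitespace) is the more likely culprit.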

How do I access s3 bucket from IAM account using Java

I have an IAM account with AmazonS3FullAccess, but when I try to use it to run s3.listObjects("name") I get a 403 error...
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>59C510407179770D</RequestId><HostId>aLPzqYkTKx6nkUWVtZWYS+2fYexzniKWkn2D9+aG6pdxBAjtxAcC85uvGC4HqDnQIifLaf+oy1E=</HostId></Error>
s3.doesBucketExistV2("name") returns true.
My policy looks like this...
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
Do I need to add the user somewhere?
Update
Looks like it could be a problem with not getting the AWS creds (which is weird because of this line)...
Deprecated. By doesBucketExistV2(String) which will correctly throw an exception when credentials are invalid instead of returning true. See Issue #1256.
If I run ((AmazonS3Client) s3).awsCredentialsProvider.getCredentials() it returns null. My creds are in an amazon.properties file like this...
@PropertySources({
  @PropertySource("classpath:amazon.properties")
})

// amazon.properties
amazon.accessKey=${AMZN_ACCESS_KEY}
amazon.secretKey=${AMZN_SECRET_KEY}
aws.accessKeyId=${AMZN_ACCESS_KEY}
aws.secretKey=${AMZN_SECRET_KEY}
and echo $AMZN_ACCESS_KEY returns the value I would expect.
Update 2
It appears to be something with the properties not getting read properly. If I am explicit like this...
BasicAWSCredentials awsCreds = new BasicAWSCredentials(key, secret);
final AmazonS3 s3 = AmazonS3ClientBuilder.standard()
    .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
    .withRegion(service.getRegion())
    .build();
It worked, so two questions: 1) Why does doesBucketExistV2 return true even when I am not logged in properly? 2) Why are the system properties not working?
You need to pass the bucket name as shown in the sample code below.
Refer here.
/* The following example lists two objects in a bucket. */
var params = {
  Bucket: "name",
  MaxKeys: 2
};
s3.listObjects(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});

Terraform s3 event notification error

I am having trouble trying to create S3 event notifications. Does anyone know how to resolve this?
The error is:
Error applying plan:
1 error(s) occurred:
* module.Test-S3-Bucket.aws_s3_bucket_notification.s3-notification: 1 error(s) occurred:
* aws_s3_bucket_notification.s3-notification: Error putting S3 notification configuration: InvalidArgument: Unable to validate the following destination configurations
  status code: 400, request id: AD9B5BF2FF84A6CB, host id: ShUVJ+TdkpqAZfpeDM3grkF9Vue3Q/AF0LydchperKTF6XdQyDM6BisZi/38pGAh/ZqS+gNyrSM=
Below is the code that gives me the error:
resource "aws_s3_bucket" "s3-bucket" {
bucket = "${var.bucket_name}"
acl = ""
lifecycle_rule {
enabled = true
prefix = ""
expiration {
days = 45
}
}
tags {
CostC = "${var.tag}"
}
}
resource "aws_s3_bucket_notification" "s3-notification" {
bucket = "${var.bucket_name}"
topic {
topic_arn = "arn:aws:sns:us-east-1:1223445555:Test"
events = [ "s3:ObjectCreated:*", "s3:ObjectRemoved:*" ]
filter_prefix = "test1/"
}
}
If you haven't done so already, you need to specify a policy on the topic that grants the SNS:Publish permission to S3 (restricted to the bucket specified in the Condition attribute). If you are also provisioning the topic via Terraform, something like this should do it (we know, as it caught us out just a few days ago too!):
resource "aws_sns_topic" "my-sns-topic" {
name = "Test"
policy = <<POLICY
{
"Version":"2012-10-17",
"Statement":[{
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": "SNS:Publish",
"Resource": "arn:aws:sns:us-east-1:1223445555:Test",
"Condition":{
"ArnLike":{"aws:SourceArn":"${aws_s3_bucket.s3-bucket.arn}"}
}
}]
}
POLICY
}
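If the topic is not managed by Terraform, the same policy can be attached from Python instead; a hedged boto3 sketch, with the topic ARN and bucket ARN as placeholders:
# Hypothetical sketch: attach the SNS:Publish policy to an existing topic with boto3.
# Topic ARN and bucket ARN are placeholders.
import json
import boto3

sns = boto3.client('sns')
topic_arn = 'arn:aws:sns:us-east-1:1223445555:Test'
bucket_arn = 'arn:aws:s3:::my-test-bucket'

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "SNS:Publish",
        "Resource": topic_arn,
        "Condition": {"ArnLike": {"aws:SourceArn": bucket_arn}},
    }],
}

sns.set_topic_attributes(
    TopicArn=topic_arn,
    AttributeName='Policy',
    AttributeValue=json.dumps(policy),
)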
Hope that helps.
Well, I know this is not your exact case, but I had the same error and didn't manage to find an answer here, and since this post is the first that Google gave me, I will leave the answer for my case here in the hope that it helps someone else.
So, I noticed this error after terraform apply, went to the console to see what had happened, and found this message:
The Lambda console can't validate one or more event sources for this trigger. The most common cause is when a source ARN includes a wildcard (*) character. You can manage unvalidated triggers using the AWS CLI or AWS SDK.
And guess what? I really did have a wildcard (*) character in the ARN, like this:
source_arn = "${aws_s3_bucket.bucket.arn}/*"
So I changed it to:
source_arn = aws_s3_bucket.bucket.arn
And it worked. So, if you are reading this, the same mistake might be present in your case.

Amazon S3 Access image by url

I have uploaded an image to Amazon S3. But how can I access this image by URL? I have made the folder and file public but I still get an AccessDenied error when I try to access it by the URL https://s3.amazonaws.com/bucket/path/image.png
This is an older question, but for anybody who comes across this question, once I made the file public I was able to access my image as https://mybucket.s3.amazonaws.com/myfolder/afile.jpg
In my case I had uploaded the image privately, so I was unable to access it. I used the following code:
const AWS = require('aws-sdk')
const myBucket = 'BUCKET_NAME'
const myKey = 'FILE_NAME.JPG'
const signedUrlExpireSeconds = 60 * 1
const s3 = new AWS.S3({
  accessKeyId: "ACCESS_KEY_ID",
  signatureVersion: 'v4',
  region: 'S3_REGION',
  secretAccessKey: "ACCESS_SECRET"
});
const url = s3.getSignedUrl('getObject', {
  Bucket: myBucket,
  Key: myKey,
  Expires: signedUrlExpireSeconds
})
console.log(url)
You can access your image by using:
https://s3.amazonaws.com/bucketname/foldername/imagename.jpg
or if there are no folders, you can do:
https://s3.amazonaws.com/bucketname/imagename.jpg
Upvote if this helps. It is accurate for the AWS console as of 30 May 2017.
Seems like you can now simply right-click on any folder inside a bucket and select 'Make Public' to make everything in that folder public. It may not work at the bucket level itself.
One of the easiest ways is to add a bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "MakeItPublic",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::yourbucketname.com/*"
  }]
}
Make sure you access the image using the same case as when it was uploaded and stored on S3.
For example, if you uploaded image_name.JPG, you should use that exact name, not image_name.jpg.
For future reference, if you want to access a file in Amazon S3 the URL needs to be something like:
bucketname.s3.region.amazonaws.com/foldername/image.png
Example: my-awesome-bucket.s3.eu-central-1.amazonaws.com/media/img/dog.png
Don't forget to set the object to public.
Inside S3, if you click on the object you will see a field called Object URL. That's the object's web address.
In your console, right-click on the image you want to access and click "Make Public"; when that's done, right-click on the image again, click "Properties", and copy the Link from the Extended view.
I came across this question whilst looking for a solution to a similar problem with being unable to access images.
It turns out that images with a % in their filename, when being accessed, must have the % symbol URL encoded to %25.
i.e. photo%20of%20a%20banana%20-%2019%20june%202016.jpg needs to be accessed via photo%2520of%2520a%2520banana%2520-%252019%2520june%25202016.jpg.
However, URL-encoding the full path didn't work for us, since the slashes, etc. would be encoded and the path would no longer work. In our specific case, simply replacing % with %25 in all access paths made the difference.
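For reference, the same translation can be done programmatically; a small Python sketch using the standard library (the key below is illustrative):
# Percent-encode an S3 key that itself contains literal '%' characters,
# while leaving the '/' path separators alone. The key is illustrative.
from urllib.parse import quote

key = "photo%20of%20a%20banana%20-%2019%20june%202016.jpg"
encoded = quote(key, safe="/")
print(encoded)
# photo%2520of%2520a%2520banana%2520-%252019%2520june%25202016.jpg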
I was having the same problem. The issue was spacing in the image URL. I did this to make it work:
String imgUrl=prizes.get(position).getImagePreview().replaceAll("\\s","%20");
Now pass this URL to Picasso:
Picasso.with(mContext)
       .load(imgUrl)
       .into(mImageView);
Just add the required read permission on the object in its Permissions settings.
To access private images via URL you must provide Query-string authentication. Query-string authentication version 4 requires the X-Amz-Algorithm, X-Amz-Credential, X-Amz-Signature, X-Amz-Date, X-Amz-SignedHeaders, and X-Amz-Expires parameters.
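Those query-string parameters are exactly what a presigned URL carries, so the usual approach is to generate one rather than build the URL by hand. A minimal boto3 sketch, with the bucket and key names as placeholders:
# Hypothetical sketch: generate a presigned GET URL for a private object.
# Bucket and key names are placeholders.
import boto3

s3 = boto3.client('s3')
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'mybucket', 'Key': 'path/image.png'},
    ExpiresIn=3600,  # seconds
)
print(url)  # contains X-Amz-Algorithm, X-Amz-Credential, X-Amz-Signature, ...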
Just to add to @akotian's answer: you can get the object URL by clicking the object in the console,
and to make it publicly accessible you can set the ACL programmatically while uploading the object to the bucket,
e.g. a sample Java request:
PutObjectRequest putObjectRequest = PutObjectRequest.builder()
    .contentType(contentType)
    .bucket(LOGO_BUCKET_NAME)
    .key(LOGO_FOLDER_PREFIX + fileName)
    .acl(ObjectCannedACL.PUBLIC_READ) // this makes it publicly readable
    .metadata(metadata)
    .build();
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
      ]
    }
  ]
}
Use this policy on the bucket; it makes it public.
Adding a bucket policy worked for me:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::yourbucketname/*"
    }
  ]
}
Turn off Block public access (bucket settings) from the Permissions tab inside your bucket. You also need to edit the permissions of the object: grant Read access to the "Everyone (public access)" grantee, check "I understand the effects of these changes on this object", and save the changes.
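The same two steps (relaxing Block Public Access and attaching the read policy) can also be scripted; a hedged boto3 sketch, with the bucket name as a placeholder:
# Hypothetical sketch: disable Block Public Access and attach a public-read
# bucket policy. The bucket name is a placeholder; only do this if the bucket
# really is meant to be public.
import json
import boto3

s3 = boto3.client('s3')
bucket = 'yourbucketname'

s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        'BlockPublicAcls': False,
        'IgnorePublicAcls': False,
        'BlockPublicPolicy': False,
        'RestrictPublicBuckets': False,
    },
)

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))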
Try changing your bucket; it may work.