Grant Lambda access to private S3 bucket

We have a Lambda function that needs to be able to access a private S3 bucket.
The bucket has 'block all public access' enabled and the following resource policy:
{
  "Version": "2012-10-17",
  "Id": "Policy1620740846405",
  "Statement": [
    {
      "Sid": "Stmt1620740843181",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::'''''':role/integrations-shopifyBucketOrdersFunctionRole-*****",
          "arn:aws:iam::'''''':root",
          "arn:aws:iam::''''''':user/transalisS3"
        ]
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::bucket/*",
        "arn:aws:s3:::bucket"
      ]
    }
  ]
}
I have also attached the AmazonS3FullAccess policy directly to the IAM role that the Lambda uses.
However, when the Lambda function tries to access the S3 bucket it gives an access denied error:
AccessDenied: Access Denied
An external system that connects to S3 using IAM User credentials also gets the same error when it tries to access the bucket.
Does anybody know what might be causing this error?
Below is the Lambda code that is erroring:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.bucketOrders = async (event, context) => {
  let response = {};
  let eventBucket = event.Records[0].s3.bucket.name;
  let eventFile = event.Records[0].s3.object.key;
  let decodedKey = decodeURIComponent(eventFile);
  try {
    let objectData = await s3.getObject({
      Bucket: eventBucket,
      Key: decodedKey,
    }).promise();
    // ... (the rest of the handler was cut off in the original post)
  } catch (err) {
    console.error(err); // this is where the AccessDenied error surfaces
    throw err;
  }
  return response;
};

When the Lambda application is created, there is an option to auto-generate the IAM role. That role has a permissions boundary with an invalid resource attached, which caused everything to fail.
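For anyone hitting the same thing, here is a minimal sketch (Node.js, aws-sdk v2; the role name is a placeholder) to check whether an auto-generated role carries a permissions boundary:

const AWS = require('aws-sdk');
const iam = new AWS.IAM();

// A permissions boundary caps everything the attached policies
// (including AmazonS3FullAccess) would otherwise allow.
iam.getRole({ RoleName: 'integrations-shopifyBucketOrdersFunctionRole' }, (err, data) => {
  if (err) return console.error(err);
  // undefined when no boundary is attached; otherwise the boundary policy ARN
  console.log(data.Role.PermissionsBoundary);
});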

Related

AWS Cognito IAM policy - How to limit access to S3 folder (.NET SDK ListObjectsV2)

I am trying to limit access for a Cognito user to specific folders in a bucket. The final target is to reach what is described here but I've simplified it for debugging.
The structure is as follows
MYBUCKET/111/content_111_1.txt
MYBUCKET/111/content_111_2.txt
MYBUCKET/222/content_222_1.txt
MYBUCKET/333/
I am performing a simple "list" call via the SDK.
using (AmazonS3Client s3Client = new AmazonS3Client(cognitoAWSCredentials))
{
    ListObjectsV2Request listRequest = new()
    {
        BucketName = "MYBUCKET"
    };
    ListObjectsV2Response listResponse = await s3Client.ListObjectsV2Async(listRequest);
}
I am authenticating via Cognito so I am updating Cognito's IAM policy linked to the authenticated role.
The following policy returns an S3 exception "Access Denied":
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::MYBUCKET",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "111",
            "111/*"
          ]
        }
      }
    }
  ]
}
The following policy returns all results (as expected).
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::MYBUCKET"
    }
  ]
}
This is supposed to be super straightforward (see here). There are other similar questions (such as this one and others), but with no final answer.
How should I write the IAM policy so that authenticated users can only access the contents of the folder "111"?
Best regards,
Andrej
I hope I now understand what I got wrong. "s3:prefix" is not some form of "filter that will only return the objects that match the prefix"; it is "a parameter that forces the caller to provide specific prefix information when executing the operation" (this is how the S3 documentation describes the condition key).
To answer my own question, starting from the IAM policy above:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::MYBUCKET",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "111",
            "111/*"
          ]
        }
      }
    }
  ]
}
If I call the SDK with the code below, I will indeed get an "Access Denied" because I have not specified a prefix that matches the IAM policy.
using (AmazonS3Client s3Client = new AmazonS3Client(cognitoAWSCredentials))
{
    ListObjectsV2Request listRequest = new()
    {
        BucketName = "MYBUCKET"
    };
    ListObjectsV2Response listResponse = await s3Client.ListObjectsV2Async(listRequest);
}
But if I do specify the prefix in my SDK call, S3 will return the expected results i.e., only the ones "starting with 111".
using (AmazonS3Client s3Client = new AmazonS3Client(cognitoAWSCredentials))
{
    ListObjectsV2Request listRequest = new()
    {
        BucketName = "MYBUCKET",
        Prefix = "111"
    };
    ListObjectsV2Response listResponse = await s3Client.ListObjectsV2Async(listRequest);
}
In other words, my problem was not in the way I had written the IAM policy but in the way I was expecting the "s3:prefix" to work.

Can't access images in S3 bucket using cognito identity

I'm testing displaying images from an S3 bucket using JavaScript, prior to making this part of an application.
I have an S3 bucket (non-public), named for this post IMAGE-BUCKET.
I created an identity role: GET-IMAGE.
I have temporarily given full S3 access to the GET-IMAGE role.
I have CORS defined for the bucket.
While testing I have disabled the browser cache.
Three issues:
1. I get a "403 Forbidden" response when images are accessed from the HTML/script below.
2. If I make a particular image public, that image displays; this is an issue with a large number of images.
3. If I make the entire bucket public, images do not display.
It seems the Cognito identity is not able to access the bucket, or there's an issue in the script below.
Also, setting the bucket public doesn't work either, unless each image is also set public. This bucket will be used privately, so this is only an issue while troubleshooting.
I have attached AmazonS3FullAccess to GET-IMAGE, and I also added the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AccessS3BucketIMAGEBUCKET",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Using HTML and script from the AWS documentation (modified):
<!DOCTYPE html>
<html>
<head>
<script src="https://sdk.amazonaws.com/js/aws-sdk-2.487.0.js"></script>
<script>
var albumBucketName = 'IMAGE-BUCKET';

// Initialize the Amazon Cognito credentials provider for GET-IMAGE:
AWS.config.region = 'us-east-1'; // Region
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: 'us-east-1:43ba4c15-ab2f-8880-93be-xxx',
});

// Create a new service object
var s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  params: { Bucket: albumBucketName }
});

// A utility function to create HTML.
function getHtml(template) {
  return template.join('\n');
}

// Show the photos that exist in an album.
function viewAlbum(albumName) {
  var albumPhotosKey = '/';
  s3.listObjects(function (err, data) {
    if (err) {
      return alert('There was an error viewing your album: ' + err.message);
    }
    // 'this' references the AWS.Response instance that represents the response
    var href = this.request.httpRequest.endpoint.href;
    var bucketUrl = href + albumBucketName + '/';
    var photos = data.Contents.map(function (photo) {
      var photoKey = photo.Key;
      var photoUrl = bucketUrl + encodeURIComponent(photoKey);
      return getHtml([
        '<span>',
        '<div>',
        '<br/>',
        '<img style="width:128px;height:128px;" src="' + photoUrl + '"/>',
        '</div>',
        '<div>',
        '<span>',
        photoKey.replace(albumPhotosKey, ''),
        '</span>',
        '</div>',
        '</span>',
      ]);
    });
    var message = photos.length ?
      '<p>The following photos are present.</p>' :
      '<p>There are no photos in this album.</p>';
    var htmlTemplate = [
      '<div>',
      '<button onclick="listAlbums()">',
      'Back To Albums',
      '</button>',
      '</div>',
      '<h2>',
      'Album: ' + albumName,
      '</h2>',
      message,
      '<div>',
      getHtml(photos),
      '</div>',
      '<h2>',
      'End of Album: ' + albumName,
      '</h2>',
      '<div>',
      '<button onclick="listAlbums()">',
      'Back To Albums',
      '</button>',
      '</div>',
    ];
    document.getElementById('viewer').innerHTML = getHtml(htmlTemplate);
    document.getElementsByTagName('img')[0].setAttribute('style', 'display:none;');
  });
}
</script>
</head>
<body>
<h1>Photo Album Viewer</h1>
<div id="viewer"></div>
<button onclick="viewAlbum('');">View All Images</button>
</body>
</html>
UPDATE:
If I grant public read in S3 Bucket Policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::IMAGE-BUCKET/*"
    }
  ]
}
It allows each image to be accessed, solving issues #2 and #3.
But this makes the bucket essentially public.
If I change the Bucket policy to limit to the Cognito identity, changing the principal as follows, again I am not able to access images via the html/script, getting 403 errors.
"Principal": {
"AWS": "arn:aws:iam::547299998870:role/Cognito_GET-IMAGEIDUnauth_Role"
}
UPDATE:
I've been reading online and checking some of the other related posts.
I've reduced it to the basic components; here's the latest configuration. It should be as simple as giving access to the GET-IMAGE role based on the documentation:
Under IAM Management Console > Roles > GET-IMAGE role (unauthenticated)
I added an inline policy:
{
  "Sid": "VisualEditor2",
  "Effect": "Allow",
  "Action": ["s3:GetObject", "s3:ListBucket"],
  "Resource": "arn:aws:s3:::IMAGE-BUCKET/*"
}
I removed the bucket policy -- it shouldn't be needed, since the GET-IMAGE role already has access. Role trust is already included by default. The HTML contains the credential:
IdentityPoolId: 'us-east-1:9bfadd6a-xxxx-41d4-xxxx-79ad7347xxa1'
Those are the most basic components; nothing else should be needed. However, it does not work. I made one of the images public and that image is displayed; the other images error with 403 Forbidden.
I've resolved the S3 access issue; I'm including all the settings and methods I used. To troubleshoot, I started testing with an actual AWS user, then stepped back to the Cognito identity. I've included notes regarding access by an AWS user for reference. I also abandoned the AWS sample HTML code and used a simple short function that displays output in the console, utilizing the getSignedUrl function. I'm not familiar with the AWS libraries, and finding getSignedUrl helped speed up testing and finally resolve the issue.
Used the following sample names throughout:
Cognito Role: GET-IMAGE
S3 Bucket: IMAGE-BUCKET
I'll go over both Cognito and AWS user access to S3, using HTML for simple demo and testing.
With Cognito:
SETTINGS:
Role: Create a Cognito identity role. To create one, follow this wizard:
https://console.aws.amazon.com/cognito/create/
Take note of the sample code AWS provides after it's created; you'll need the pool ID.
Permissions: Add role-level AND S3-level permissions.
Role level: IAM > Roles > GET-IMAGE_Unauth_Role
Add the following (JSON) to both the Auth and Unauth roles:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "s3Access",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::IMAGE-BUCKET/*"
      ]
    }
  ]
}
S3:
IMAGE-BUCKET > Permissions > Bucket Policy:
(JSON)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::547999998899:user/anAWSUser",
          "arn:aws:iam::547999998899:role/Cognito_GET-IMAGEIDUnauth_Role",
          "arn:aws:iam::547999998899:role/Cognito_GET-IMAGEIDAuth_Role"
        ]
      },
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::IMAGE-BUCKET/*"
      ]
    }
  ]
}
Note: I also added an AWS user for the credentials variant, used in the next section, "With User Credentials".
CORS:
IMAGE-BUCKET> Permissions > CORS
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Note: You can restrict the origin in the AllowedOrigin parameter.
HTML:
<!DOCTYPE html>
<html>
<head>
<script src="https://sdk.amazonaws.com/js/aws-sdk-2.487.0.js"></script>
<script>
// Replace IMAGE-BUCKET with your bucket name.
var BucketName = 'IMAGE-BUCKET';

// Cognito credentials (from the Cognito ID creation sample code)
AWS.config.region = 'us-east-1'; // Region
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: 'us-east-1:9999996a-f099-99d4-b999-79a99999aaa1',
});

// Create a new service object
var s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  params: { Bucket: BucketName }
});

// Test function. REPLACE Key with a test file name from S3.
function show1() {
  var UrlExpireSeconds = 180 * 1;
  var params = {
    Bucket: BucketName,
    Key: "20190815_file_name.jpg",
    Expires: UrlExpireSeconds
  };
  var url = s3.getSignedUrl('getObject', params);
  console.log('The URL is', url);
  document.getElementById('viewer').innerHTML =
    '<span>' +
    '<div>' +
    '<br/>' +
    '<img style="width:128px;height:128px;" src="' + url + '"/>' +
    '</div>' +
    '</span>';
}
show1();
</script>
</head>
<body>
<h1>S3 Test Image Display</h1>
<div id="viewer"></div>
<button onclick="show1();">View Image</button>
</body>
</html>
With User Credentials
You can also use credentials to authenticate a user to access S3. In the JavaScript above, comment out the Cognito credentials and use the following instead:
//access key ID, Secret
var cred = new AWS.Credentials('AKXXX283988CCCAA-ACCESS-KEY','kKsCuq7a9WNohmOYY8SApewie77493LgV-SECRET');
AWS.config.credentials=cred;
To get the access key and secret from AWS console:
IAM > Users > an-AWSUser > Security-Credentials
Under "Access Keys", click "Create Access Key"
===================
Note that the trust policy for an unauthenticated role is automatically created by AWS when you create a Cognito identity role; it doesn't have to be defined manually as mentioned before.
Also, ListBucket and bucket-level resource permissions (as in "IMAGE-BUCKET" without the /*) are not required; GetObject is all that's needed to access a file directly. In my case I'm accessing images by key and do not need to list bucket contents.
I set both the role and S3 bucket permissions; I did not test without the role permissions, so the bucket policy alone may be sufficient.
You are not defining the trust policy for your unauthenticated role correctly.
As per this documentation on cognito role trust and permissions, the trust policy for an unauthenticated role can be defined as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Federated": "cognito-identity.amazonaws.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "cognito-identity.amazonaws.com:aud": "YOUR_IDENTITY_POOL_ID"
        },
        "ForAnyValue:StringLike": {
          "cognito-identity.amazonaws.com:amr": "unauthenticated"
        }
      }
    }
  ]
}
When you use AWS.CognitoIdentityCredentials, under the hood your Cognito identity pool will first get a web identity ID for your user. As you don't provide a login with an authenticated token from an identity provider such as Cognito User Pools or Facebook, the ID is for an unauthenticated web identity.
Cognito will then call the Security Token Service's AssumeRoleWithWebIdentity method on your behalf in order to get credentials with the permissions you defined in the unauthenticated role's access policy, allowing the web identity to access the S3 bucket.
This is why the principal in the trust policy needs to be cognito-identity.amazonaws.com: it gives Cognito identity pools permission to call sts:AssumeRoleWithWebIdentity on behalf of the web identity in order to obtain IAM credentials.
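As a quick way to watch this exchange happen, here is a small verification sketch (aws-sdk for JavaScript v2, assuming the AWS.CognitoIdentityCredentials setup from the question is already on AWS.config): once the trust policy is correct, the caller identity resolves to the unauthenticated role.

// Force the credential exchange, then ask STS who we ended up being.
AWS.config.credentials.get(function (err) {
  if (err) return console.error('Credential exchange failed:', err);
  new AWS.STS().getCallerIdentity({}, function (err, data) {
    if (err) return console.error(err);
    // Expect something like arn:aws:sts::<account>:assumed-role/Cognito_GET-IMAGEIDUnauth_Role/<session>
    console.log('Resolved identity:', data.Arn);
  });
});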
The access policy part of the role, which defines what unauthenticated users can actually do, will continue to be as you originally defined it in your post:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AccessS3BucketIMAGEBUCKET",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Update
I notice that the last inline policy you have posted for your unauthenticated role won't work for s3.listObjects(). It will return a 403 because listing needs a slightly different resource statement that indicates the bucket itself, rather than the bucket's contents.
You can update your policy as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::IMAGE-BUCKET/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::IMAGE-BUCKET"
      ]
    }
  ]
}
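With the two statements in place, a quick check (aws-sdk for JavaScript v2, reusing the s3 client and IMAGE-BUCKET placeholder from the question) should show both calls passing:

// ListBucket is granted on the bucket ARN, GetObject on its contents.
s3.listObjects({ Bucket: 'IMAGE-BUCKET' }, function (err, data) {
  if (err) return console.error('listObjects denied:', err.message);
  console.log('Listed', data.Contents.length, 'object(s)');
  if (!data.Contents.length) return;
  s3.getObject({ Bucket: 'IMAGE-BUCKET', Key: data.Contents[0].Key }, function (err) {
    console.log(err ? 'getObject denied: ' + err.message : 'getObject allowed');
  });
});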

AWS Lambda is not copying files from one S3 bucket to another in the cloud, but it works locally

I want to use a Lambda function to copy content from one bucket to another. This is the Lambda I have created:
'use strict';
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
var srcBucket = "bkctFrom";
var destBucket = "bkctTo";

module.exports.hello = async event => {
  var params = {
    Bucket: srcBucket
  };
  await s3.listObjects(params, function (err, data) {
    if (err) {
      console.log(err, err.stack);
    } else {
      var cont = data['Contents'];
      var key = "";
      for (let [key, value] of Object.entries(cont)) {
        key = value['Key'];
        s3.copyObject({
          CopySource: srcBucket + '/' + key,
          Bucket: destBucket,
          Key: key
        },
        function (copyErr, copyData) {
          if (copyErr) { console.log(copyErr); }
          else { console.log(copyData); }
        });
      }
    }
  });
};
This function works well when I run it locally (sls invoke local -f hello): all the content is copied from the bucket bkctFrom to bkctTo.
But when I deploy it to AWS it doesn't work. There is no error log, only a successful execution result.
Locally, instead of null, I get the information about the files inserted in the bucket.
This is the policy I am using to create the role for this lambda:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListSourceAndDestinationBuckets",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketVersions"
      ],
      "Resource": [
        "arn:aws:s3:::bkctFrom",
        "arn:aws:s3:::bkctTo"
      ]
    },
    {
      "Sid": "SourceBucketGetObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::bkctFrom/*"
    },
    {
      "Sid": "DestinationBucketPutObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::bkctTo/*"
    }
  ]
}
I am using Serverless, and this is the .yml:
service: updaterepobucket

provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-west-2

functions:
  hello:
    handler: handler.hello
    # events:
    #   - http:
    #       path: users/create
    #       method: get
I am setting the role manually in the AWS console; here it has another name (only a test). Even if I grant full access to S3 it is still not working.
If it works from my local machine but not from the cloud, the conclusion is that it's probably something related to permissions. But I don't know what is missing in this policy. Any guess?
Two things:
a. It's not clear to me how you're referencing the policy from your serverless.yml file. I'd recommend referring to your policy using the iamManagedPolicies section (see https://serverless.com/framework/docs/providers/aws/guide/iam/ for more details).
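For example, a minimal serverless.yml sketch for point (a), assuming the policy above has already been created as a customer-managed IAM policy (the account ID and policy name here are hypothetical placeholders):

provider:
  name: aws
  runtime: nodejs12.x
  region: us-west-2
  # Hypothetical ARN of the copy policy, once created in IAM
  iamManagedPolicies:
    - arn:aws:iam::123456789012:policy/lambda-copy-policy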
b. Either way, the permissions might not be enough. See this AWS article, "Why can't I copy an object between two Amazon S3 buckets?": https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-copy-between-buckets/
Hope it helps

How to grant Lambda permission to upload a file to an S3 bucket in `terraform`?

I have the below Lambda function configuration in Terraform:
resource "aws_lambda_function" "test_lambda" {
# filename = "crawler/dist/deploy.zip"
s3_bucket = "${var.s3-bucket}"
s3_key = "${aws_s3_bucket_object.file_upload.key}"
# source_code_hash = "${filebase64sha256("file.zip")}"
function_name = "quote-crawler"
role = "arn:aws:iam::773592622512:role/LambdaRole"
handler = "handler.handler"
source_code_hash = "${data.archive_file.zipit.output_base64sha256}"
runtime = "${var.runtime}"
timeout = 180
environment {
variables = {
foo = "bar"
}
}
}
When I run the Lambda, I get the error "errorMessage": "An error occurred (AccessDenied) when calling the PutObject operation: Access Denied" when it tries to upload a file to the S3 bucket. It seems that the Lambda function doesn't have permission to access S3. The Terraform docs are not clear about how to configure this, and the permission configuration panel doesn't appear in the Lambda console either; a Lambda created by Terraform seems to offer me limited configuration. So how can I grant S3 permission to the Lambda?
To make it easy, you can do this in four steps:
1. Create a role.
2. Create a policy.
3. Attach the policy to the role.
4. Attach the role to the Lambda.
Create role.
resource "aws_iam_role" "role" {
name = "${var.env_prefix_name}-alb-logs-to-elk"
path = "/"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
Create a policy that grants the specified access to S3:
# Create a policy for the IAM role
resource "aws_iam_policy" "policy" {
  name        = "${var.env_prefix_name}-test-policy"
  description = "A test policy"
  policy      = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:*"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": "arn:aws:s3:::*"
    }
  ]
}
EOF
}
Attach the newly created policy to the IAM role:
resource "aws_iam_role_policy_attachment" "test-attach" {
role = "${aws_iam_role.role.name}"
policy_arn = "${aws_iam_policy.policy.arn}"
}
Now attach the role to the Lambda:
resource "aws_lambda_function" "test_lambda" {
# filename = "crawler/dist/deploy.zip"
s3_bucket = "${var.s3-bucket}"
s3_key = "${aws_s3_bucket_object.file_upload.key}"
# source_code_hash = "${filebase64sha256("file.zip")}"
function_name = "quote-crawler"
role = "${aws_iam_role.role.arn}"
handler = "handler.handler"
source_code_hash = "${data.archive_file.zipit.output_base64sha256}"
runtime = "${var.runtime}"
timeout = 180
environment {
variables = {
foo = "bar"
}
}
}
The IAM role associated with the function is not allowed to upload to S3.
The solution is to create an IAM policy allowing S3 access to your bucket (say read/write), which would look something like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListObjectsInBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::bucket-name"]
    },
    {
      "Sid": "AllObjectActions",
      "Effect": "Allow",
      "Action": "s3:*Object",
      "Resource": ["arn:aws:s3:::bucket-name/*"]
    }
  ]
}
Then, you need to attach this policy to the role used by your lambda function.
More info at:
https://www.terraform.io/docs/providers/aws/r/iam_role_policy.html
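As a sketch of that attachment step using the aws_iam_role_policy resource from the link above (Terraform 0.12 syntax; the resource and policy names are hypothetical, and "LambdaRole" is the role the question's function already references):

resource "aws_iam_role_policy" "lambda_s3_access" {
  name = "lambda-s3-access"
  role = "LambdaRole" # the role already attached to the function

  # Inline copy of the IAM policy shown above
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid      = "ListObjectsInBucket"
        Effect   = "Allow"
        Action   = ["s3:ListBucket"]
        Resource = ["arn:aws:s3:::bucket-name"]
      },
      {
        Sid      = "AllObjectActions"
        Effect   = "Allow"
        Action   = "s3:*Object"
        Resource = ["arn:aws:s3:::bucket-name/*"]
      }
    ]
  })
}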
I would do it in the following order (this code uses Terraform 0.12.*).
Create policy documents for the assume-role and S3 permissions:
data "aws_iam_policy_document" "lambda_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

data "aws_iam_policy_document" "lambda_s3" {
  statement {
    actions = [
      "s3:PutObject",
      "s3:PutObjectAcl"
    ]
    resources = [
      "arn:aws:s3:::bucket/*"
    ]
  }
}
Create an IAM policy
resource "aws_iam_policy" "lambda_s3" {
  name        = "lambda-s3-permissions"
  description = "Contains S3 put permission for lambda"
  policy      = data.aws_iam_policy_document.lambda_s3.json
}
Create a role
resource "aws_iam_role" "lambda_role" {
  name               = "lambda-role"
  assume_role_policy = data.aws_iam_policy_document.lambda_assume_role.json
}
Attach policy to role
resource "aws_iam_role_policy_attachment" "lambda_s3" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = aws_iam_policy.lambda_s3.arn
}
Attach role to lambda
resource "aws_lambda_function" "test_lambda" {
# filename = "crawler/dist/deploy.zip"
s3_bucket = var.s3-bucket
s3_key = aws_s3_bucket_object.file_upload.key
# source_code_hash = "${filebase64sha256("file.zip")}"
function_name = "quote-crawler"
role = aws_iam_role.lambda_role.arn
handler = "handler.handler"
source_code_hash = data.archive_file.zipit.output_base64sha256
runtime = var.runtime
timeout = 180
environment {
variables = {
foo = "bar"
}
}
}

Access Denied while attempting to put an object into S3 bucket

I am trying to refactor some code to allow upload of large images.
Initially, the images were stored in S3 from a Lambda function and it worked just fine in PROD. I have now extracted that part out of the function and am attempting to do it via the AWS SDK for Java.
This worked fine in the DEV environment because the bucket is public there. When I tested this with PROD settings, I got an access denied error.
The bucket is private in PROD and the user has access to all S3 actions.
I could access the bucket using the AWS CLI but when I try it using the AWS Java SDK I get an 'Access Denied' error. This is the code in Java. I have explicitly set the region just to make sure it was getting the right one, although I know the region is the default region.
BasicAWSCredentials awsCreds = new BasicAWSCredentials(AWS_ACCESS_KEY, AWS_SECRET_KEY);
AmazonS3 s3client = AmazonS3ClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        .build();

String imageS3Url = null;
ObjectMetadata d = new ObjectMetadata();
try {
    s3client.putObject(new PutObjectRequest(S3_BUCKET_NAME, s3Key, stream, d));
    imageS3Url = "https://s3-" + S3_REGION_NAME + ".amazonaws.com/" + S3_BUCKET_NAME + "/" + s3Key;
} catch (Exception ex) {
    log.debug(ex.getMessage());
}
Am I missing any configuration needed to grant the AWS Java SDK access to the S3 bucket? The AWS Java SDK version is 1.11.411.
Here are the anonymized versions of the bucket policy and IAM user policy:
{
  "Version": "2012-10-17",
  "Id": "PolicyABC",
  "Statement": [
    {
      "Sid": "Stmt123",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789:user/user-name"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucket-name"
    }
  ]
}
IAM user policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::bucket-name",
        "arn:aws:s3:::bucket-name/*"
      ]
    }
  ]
}