Error accessing S3 bucket from Lambda - InvalidBucketName

Using the AWS SDK for Java, I've created a Lambda function to read a CSV file from an S3 bucket. I've made the bucket public and can access it and the file easily from any browser.
To test it, I'm using the test button on the Lambda console with the basic hello-world test event template.
It fails with:
Error Message: The specified bucket is not valid. (Service: Amazon S3; Status Code: 400; Error Code: InvalidBucketName; Request ID: XXXXXXXXXXXXXXX)
The Lambda function and the S3 bucket are in the same region (us-east-1).
I've added the AmazonS3FullAccess policy to the lambda_basic_execution role.
AmazonS3 s3Client = AmazonS3ClientBuilder.standard().build();
I also tried:
AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();
and then make the call:
S3Object s3object = s3Client.getObject(new GetObjectRequest(
        bucketName, keyName));
bucketName is:
https://s3.amazonaws.com/<allAlphaLowerCaseBucketName>
keyName is:
<allAlphaLowerCaseKeyName>.csv
Any help is appreciated.

The bucket name is not the URL to the bucket, but only the actual name of your bucket.
S3Object s3object = s3Client.getObject(
        new GetObjectRequest(
                <allAlphaLowerCaseBucketName>,
                <allAlphaLowerCaseKeyName>.csv
        )
);
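In other words, pass only the bucket's name, not its https://s3.amazonaws.com/... URL. A minimal sketch of the corrected call, using hypothetical bucket and key names rather than the ones from the question:

// Hypothetical names for illustration only; substitute your own bucket and key.
String bucketName = "myinputbucket";   // just the bucket name: no scheme, host, or path
String keyName = "mydata.csv";         // the object key inside the bucket

AmazonS3 s3Client = AmazonS3ClientBuilder.standard().build();
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucketName, keyName));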

Related

S3 file upload failing

I am using the S3 Transfer Manager from the AWS SDK for Java 2.x to upload a file to an AWS S3 bucket, and it is failing with the following exception:
software.amazon.awssdk.core.exception.SdkClientException: Failed to send the request: Failed to write to TLS handler
Here is the code snippet:
StaticCredentialsProvider credentials = StaticCredentialsProvider.create(AWSSessionCredentials.create(access_key_id, secret_access_key));
S3AsyncClient s3AsyncClient = S3AsyncClient.crtBuilder().credentialsProvider(credentials).region(region).build();
S3TransferManager s3tm = S3TransferManager.builder().s3Client(s3AsyncClient).build();
UploadFileRequest uploadFileRequest = UploadFileRequest.builder()
        .putObjectRequest(req -> req.bucket("bucket").key("key"))
        .source(Paths.get("/file_to_upload.txt"))
        .build();
FileUpload upload = s3tm.uploadFile(uploadFileRequest);
upload.completionFuture().join();
What am I missing here? Please advise.
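As a side note on the snippet above: in the AWS SDK for Java 2.x the static credential classes are AwsBasicCredentials (access key and secret key) and AwsSessionCredentials (which also requires a session token), so the AWSSessionCredentials.create(...) call above with only two arguments will not compile. A minimal sketch of the provider construction, assuming plain (non-session) credentials and reusing the question's placeholder variable names:

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;

// access_key_id and secret_access_key are placeholders carried over from the question
StaticCredentialsProvider credentials = StaticCredentialsProvider.create(
        AwsBasicCredentials.create(access_key_id, secret_access_key));

This only addresses the credentials construction; the "Failed to write to TLS handler" error itself is a transport-level failure and is not necessarily caused by the credentials.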

I get an exception when I try to read a file from MinIO with the Amazon SDK

I am trying to use MinIO as a local Amazon S3 server. I started the MinIO server on my computer, created a test bucket, and uploaded one file, Completed.jpg. Now I have this file in MinIO and I can download it via the link http://localhost:9000/minio/testbucket/Completed.jpg. But when I try to read this file from Java, I get an exception. I wrote this test:
@Test
public void readObject() {
    ClientConfiguration clientConf = PredefinedClientConfigurations.defaultConfig()
            .withProtocol(Protocol.HTTPS)
            .withMaxErrorRetry(1);
    BasicAWSCredentials awsCredentials = new BasicAWSCredentials("minioadmin", "minioadmin");
    AmazonS3ClientBuilder builder = AmazonS3ClientBuilder.standard()
            .withClientConfiguration(clientConf)
            .withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
            .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://127.0.0.1:9000/minio", "us-east-1"));
    AmazonS3 amazonS3 = builder.build();
    S3Object object = amazonS3.getObject(new GetObjectRequest("testbucket", "Completed.jpg"));
    assertNotNull(object);
}
And this is the exception:
com.amazonaws.services.s3.model.AmazonS3Exception: All access to this bucket has been disabled. (Service: Amazon S3; Status Code: 403; Error Code: AllAccessDisabled; Request ID: /minio/testbucket/Completed.jpg; S3 Extended Request ID: 4a46a947-6473-4d53-bbb3-a4f908d444ce)
, S3 Extended Request ID: 4a46a947-6473-4d53-bbb3-a4f908d444ce
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1799)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1383)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1359)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1139)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:796)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:764)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:738)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:698)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:680)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:544)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:524)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5052)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4998)
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1486)
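For reference, S3 clients pointed at a standalone MinIO server are commonly configured with the server root as the endpoint (without the /minio console path) and with path-style access enabled. The following is only a sketch under that assumption, reusing the credentials and bucket from the test above, not a confirmed fix for the 403:

BasicAWSCredentials awsCredentials = new BasicAWSCredentials("minioadmin", "minioadmin");
AmazonS3 amazonS3 = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
        .withEndpointConfiguration(
                new AwsClientBuilder.EndpointConfiguration("http://127.0.0.1:9000", "us-east-1"))
        .withPathStyleAccessEnabled(true) // address the bucket in the path, not as a subdomain
        .build();
S3Object object = amazonS3.getObject(new GetObjectRequest("testbucket", "Completed.jpg"));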

How to configure `Terraform` to upload a zip file to an `s3` bucket and then deploy it to lambda

I use Terraform as the infrastructure framework in my application. Below is the configuration I use to deploy Python code to Lambda. It does three steps: 1. zip all dependencies and source code into a zip file; 2. upload the zipped file to an S3 bucket; 3. deploy it to a Lambda function.
But the deploy command terraform apply fails with the error below:
Error: Error modifying Lambda Function Code quote-crawler: InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.
status code: 400, request id: 2db6cb29-8988-474c-8166-f4332d7309de
on config.tf line 48, in resource "aws_lambda_function" "test_lambda":
48: resource "aws_lambda_function" "test_lambda" {
Error: Error modifying Lambda Function Code praw_crawler: InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.
status code: 400, request id: e01c83cf-40ee-4919-b322-fab84f87d594
on config.tf line 67, in resource "aws_lambda_function" "praw_crawler":
67: resource "aws_lambda_function" "praw_crawler" {
This means the deployment package doesn't exist in the S3 bucket, yet it succeeds the second time I run the command, so it looks like a timing issue: right after the zip file is uploaded to the S3 bucket, the key is not there yet, which is why the first deploy fails, while the second run a few seconds later finishes successfully and very quickly. Is there anything wrong in my configuration file?
The full terraform configuration file can be found: https://github.com/zhaoyi0113/quote-datalake/blob/master/config.tf
You need to wire up the dependencies properly to achieve this; otherwise, it will fail.
First, zip the files:
# Zip the Lambda function on the fly
data "archive_file" "source" {
  type        = "zip"
  source_dir  = "../lambda-functions/loadbalancer-to-es"
  output_path = "../lambda-functions/loadbalancer-to-es.zip"
}
Then upload it to S3, referencing the archive with source = "${data.archive_file.source.output_path}" so that the upload depends on the zip:
# Upload the zip to S3, and later update the Lambda function from S3
resource "aws_s3_bucket_object" "file_upload" {
  bucket = "${aws_s3_bucket.bucket.id}"
  key    = "lambda-functions/loadbalancer-to-es.zip"
  source = "${data.archive_file.source.output_path}" # this makes it depend on the zip
}
Then you are good to deploy the Lambda function. To make it depend on the upload, this line does the magic: s3_key = "${aws_s3_bucket_object.file_upload.key}"
resource "aws_lambda_function" "elb_logs_to_elasticsearch" {
function_name = "alb-logs-to-elk"
description = "elb-logs-to-elasticsearch"
s3_bucket = "${var.env_prefix_name}${var.s3_suffix}"
s3_key = "${aws_s3_bucket_object.file_upload.key}" # its mean its depended on upload key
memory_size = 1024
timeout = 900
timeouts {
create = "30m"
}
runtime = "nodejs8.10"
role = "${aws_iam_role.role.arn}"
source_code_hash = "${base64sha256(data.archive_file.source.output_path)}"
handler = "index.handler"
}
When using Terraform's archive_file, you may find that source_code_hash changes even when the code hasn't changed. If this is an issue for you, I created a module to fix this: lambda-python-archive.
This is a response to the top answer:
You need to use .output_base64sha256 from the archive_file data source for source_code_hash, instead of wrapping output_path in base64sha256, or else terraform plan never settles on the "no changes / up-to-date" message.
For example:
source_code_hash = "${data.archive_file.source.output_base64sha256}"

S3 get metadata of older version objects

I am getting a 405 Method Not Allowed error while trying to retrieve the metadata of older versions of an S3 object from a Java Lambda function.
AmazonS3 amazonS3 = getAmazonS3();
GetObjectRequest getObjectRequest = new GetObjectRequest(bucket, templateKey, versionId);
ObjectMetadata objectMetadata = amazonS3.getObject(getObjectRequest).getObjectMetadata(); // Exception thrown at this line

public AmazonS3 getAmazonS3() {
    String region = PropertyManager.getValue(PropertyKey.AWS_REGION.getKey(stage));
    return AmazonS3ClientBuilder.standard().withRegion(region)
            .withCredentials(new EnvironmentVariableCredentialsProvider()).build();
}
Stack trace in Lambda:
The specified method is not allowed against this resource. (Service: Amazon S3; Status Code: 405; Error Code: MethodNotAllowed; Request ID: 1D12DDA5F0493282): com.amazonaws.services.s3.model.AmazonS3Exception
com.amazonaws.services.s3.model.AmazonS3Exception: The specified method is not allowed against this resource. (Service: Amazon S3; Status Code: 405; Error Code: MethodNotAllowed; Request ID: 1D12DDA5F0493282), S3 Extended Request ID: jTNnAl8ifgsUlPMV0GEHAEVBtWwjTprEJy45C9BMJ5kTk/Qn8Pne8/ZM/tH27ZoeUtHrd1NeuyQ=
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1588)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1258)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1030)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:742)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4187)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4134)
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1385)
at com.ghx.templateengine.template.GetTemplateVersions.handleRequest(GetTemplateVersions.java:66)
It turned out that a few of the older versions of the S3 object were delete markers. AWS support conveyed that issuing a GET or HEAD against a version that is a delete marker results in a 405 error.
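Given that explanation, one way to avoid the 405 is to skip versions that are delete markers before fetching metadata. A minimal sketch with the v1 SDK, assuming the same amazonS3, bucket, and templateKey as above:

// List the object's versions and only fetch metadata for real versions,
// skipping delete markers (which would return 405 on GET/HEAD).
VersionListing versions = amazonS3.listVersions(bucket, templateKey);
for (S3VersionSummary summary : versions.getVersionSummaries()) {
    if (summary.isDeleteMarker()) {
        continue; // a delete marker has no retrievable content or metadata
    }
    ObjectMetadata metadata = amazonS3.getObjectMetadata(
            new GetObjectMetadataRequest(bucket, summary.getKey(), summary.getVersionId()));
    // ... use metadata ...
}
// Note: listVersions returns one page of results; use listNextBatchOfVersions for more.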

Amazon S3 Extended Request ID null

I'm trying to upload a file to an Amazon bucket using the S3 service. I get the exception below when connecting through the company's proxy server.
AWSCredentials credentials = new BasicSessionCredentials(
        uploadLocation.getAccessKeyId(), uploadLocation.getSecretAccessKey(), uploadLocation.getSessionToken());
ClientConfiguration clientConfiguration = new ClientConfiguration();
clientConfiguration.setProxyHost(proxyAddress);
clientConfiguration.setProxyPort(Integer.parseInt(proxyPort));
clientConfiguration.setProtocol(Protocol.HTTP);
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(credentials))
        .withClientConfiguration(clientConfiguration)
        .withRegion("us-east-1")
        .build();
PutObjectResult putObjectResult = s3Client.putObject(new PutObjectRequest(uploadLocation.getBucket(),
        uploadLocation.getObject_key(), file));
Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: null; S3 Extended Request ID: null), S3 Extended Request ID: null
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1592)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1257)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1029)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:741)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:715)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:697)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:665)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:647)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:511)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4227)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4174)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1722)
Please note that it works perfectly when the proxy is not used and hence the ClientConfiguration object is not required.
I don't have much visibility into the S3 bucket configuration, as a different vendor company uses Amazon S3 as their storage system.
Any suggestions, please?