S3 file upload failing - amazon-s3

I am using the S3 Transfer Manager (AWS SDK for Java v2) to upload a file to an AWS S3 bucket, and it is failing with the following exception:
software.amazon.awssdk.core.exception.SdkClientException: Failed to send the request: Failed to write to TLS handler
Here is the code snippet:
StaticCredentialsProvider credentials = StaticCredentialsProvider.create(AwsBasicCredentials.create(access_key_id, secret_access_key));
S3AsyncClient s3AsyncClient = S3AsyncClient.crtBuilder().credentialsProvider(credentials).region(region).build();
S3TransferManager s3tm = S3TransferManager.builder().s3Client(s3AsyncClient).build();
UploadFileRequest uploadFileRequest = UploadFileRequest.builder().putObjectRequest(req -> req.bucket("bucket").key("key")).source(Paths.get("/file_to_upload.txt")).build();
FileUpload upload = s3tm.uploadFile(uploadFileRequest);
upload.completionFuture().join();
What am I missing here? Please advise
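For reference, here is a self-contained version of the snippet above with the imports it needs; the credentials, region, bucket, key, and file path are placeholder values (the region is an assumption, not taken from the question), and the CRT-based client requires the AWS CRT dependency on the classpath:

import java.nio.file.Paths;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.FileUpload;
import software.amazon.awssdk.transfer.s3.model.UploadFileRequest;

public class TransferManagerUploadExample {
    public static void main(String[] args) {
        // Static credentials built from a plain access key / secret key pair (placeholders).
        StaticCredentialsProvider credentials = StaticCredentialsProvider.create(
                AwsBasicCredentials.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY"));

        // CRT-based async client, as in the question; region is a placeholder.
        S3AsyncClient s3AsyncClient = S3AsyncClient.crtBuilder()
                .credentialsProvider(credentials)
                .region(Region.US_EAST_1)
                .build();

        S3TransferManager transferManager = S3TransferManager.builder()
                .s3Client(s3AsyncClient)
                .build();

        // Upload a single file; bucket, key and path are the placeholders from the question.
        UploadFileRequest uploadFileRequest = UploadFileRequest.builder()
                .putObjectRequest(req -> req.bucket("bucket").key("key"))
                .source(Paths.get("/file_to_upload.txt"))
                .build();

        FileUpload upload = transferManager.uploadFile(uploadFileRequest);
        upload.completionFuture().join();

        transferManager.close();
        s3AsyncClient.close();
    }
}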

Related

Java Proxy-Authorization in the aws request header for uploading to S3 using a proxy server

I am trying to upload a file to AWS S3 and there is a proxy server in between; the proxy server uses a token.
I have been asked to send the credentials in a header as "Proxy-Authorization". Here is the code I am using.
com.amazonaws.ClientConfiguration config = new ClientConfiguration();
config.addHeader("Proxy-Authorization", "Basic $$$$$$$$$");
config.setProtocol(Protocol.HTTPS);
config.setProxyHost(proxyHost);
config.setProxyPort(proxyPort);
BasicAWSCredentials awsCreds = new BasicAWSCredentials(accessKey, secret);
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withClientConfiguration(config)
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        .withRegion(Regions.US_EAST_1)
        .build();
s3Client.putObject(name, key, file);
I am getting an HTTP 403 Forbidden error when I try to upload to S3 (using s3Client.putObject):
com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: null; S3 Extended Request ID: null;
I tried the same using a curl command with -H "Proxy-Authorization: Basic $$$$" and it worked fine. I would appreciate any help on how I can pass this header in the request from Java code.
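Note that ClientConfiguration.addHeader(...) adds the header to the S3 request itself; with an HTTPS endpoint that request travels inside the TLS tunnel, so the proxy may never see it. If the proxy token is really just Basic user:password credentials, an alternative worth trying (an assumption, not a confirmed fix for the 403) is to hand the proxy credentials to the SDK and let the underlying HTTP client authenticate against the proxy. A sketch with hypothetical placeholder values:

import java.io.File;

import com.amazonaws.ClientConfiguration;
import com.amazonaws.Protocol;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class ProxyUploadSketch {
    public static void main(String[] args) {
        ClientConfiguration config = new ClientConfiguration();
        config.setProtocol(Protocol.HTTPS);
        config.setProxyHost("proxy.example.com"); // hypothetical proxy host
        config.setProxyPort(8080);                // hypothetical proxy port
        // Instead of adding the Proxy-Authorization header manually, give the
        // proxy credentials to the SDK and let the HTTP client authenticate
        // against the proxy (assumes the token is plain Basic user:password).
        config.setProxyUsername("proxyUser");     // hypothetical
        config.setProxyPassword("proxyPassword"); // hypothetical

        BasicAWSCredentials awsCreds = new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY");
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withClientConfiguration(config)
                .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
                .withRegion(Regions.US_EAST_1)
                .build();

        // Bucket, key and file path are placeholders.
        s3Client.putObject("bucket-name", "object-key", new File("/path/to/file"));
    }
}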

I get an exception when I try to read a file from minio with the Amazon SDK

I am trying to use MinIO as a local Amazon S3 server. I started the MinIO server on my computer, created a test bucket, and uploaded one file, Completed.jpg. Now I have this file in MinIO and I can download it via the link http://localhost:9000/minio/testbucket/Completed.jpg. But when I try to read this file from Java, I get an exception. I wrote this test:
@Test
public void readObject() {
    ClientConfiguration clientConf = PredefinedClientConfigurations.defaultConfig()
            .withProtocol(Protocol.HTTPS)
            .withMaxErrorRetry(1);
    BasicAWSCredentials awsCredentials = new BasicAWSCredentials("minioadmin", "minioadmin");
    AmazonS3ClientBuilder builder = AmazonS3ClientBuilder.standard()
            .withClientConfiguration(clientConf)
            .withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
            .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://127.0.0.1:9000/minio", "us-east-1"));
    AmazonS3 amazonS3 = builder.build();
    S3Object object = amazonS3.getObject(new GetObjectRequest("testbucket", "Completed.jpg"));
    assertNotNull(object);
}
And this is the exception:
com.amazonaws.services.s3.model.AmazonS3Exception: All access to this bucket has been disabled. (Service: Amazon S3; Status Code: 403; Error Code: AllAccessDisabled; Request ID: /minio/testbucket/Completed.jpg; S3 Extended Request ID: 4a46a947-6473-4d53-bbb3-a4f908d444ce)
, S3 Extended Request ID: 4a46a947-6473-4d53-bbb3-a4f908d444ce
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1799)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1383)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1359)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1139)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:796)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:764)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:738)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:698)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:680)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:544)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:524)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5052)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4998)
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1486)
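For comparison, a minimal sketch of how this client is typically pointed at a local MinIO instance. It assumes the S3 API is served at the root of port 9000 (the /minio/... path in the browser link is MinIO's web console, not the API endpoint), that the protocol matches the plain-HTTP endpoint, and that path-style access is enabled so the bucket name is taken from the URL path. This is an assumption-based sketch, not a verified fix for the AllAccessDisabled error:

import com.amazonaws.ClientConfiguration;
import com.amazonaws.PredefinedClientConfigurations;
import com.amazonaws.Protocol;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

public class MinioReadSketch {
    public static void main(String[] args) {
        ClientConfiguration clientConf = PredefinedClientConfigurations.defaultConfig()
                .withProtocol(Protocol.HTTP) // the local endpoint below is plain HTTP
                .withMaxErrorRetry(1);
        BasicAWSCredentials awsCredentials = new BasicAWSCredentials("minioadmin", "minioadmin");

        AmazonS3 amazonS3 = AmazonS3ClientBuilder.standard()
                .withClientConfiguration(clientConf)
                .withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
                // API endpoint is the server root, without the /minio console path
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://127.0.0.1:9000", "us-east-1"))
                // MinIO is usually addressed with path-style URLs: http://host:9000/bucket/key
                .withPathStyleAccessEnabled(true)
                .build();

        S3Object object = amazonS3.getObject(new GetObjectRequest("testbucket", "Completed.jpg"));
        System.out.println("Content length: " + object.getObjectMetadata().getContentLength());
    }
}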

How to configure `Terraform` to upload a zip file to an `s3` bucket and then deploy it to lambda

I use Terraform as the infrastructure framework in my application. Below is the configuration I use to deploy Python code to Lambda. It does three steps: 1. zip all dependencies and source code into a zip file; 2. upload the zipped file to an s3 bucket; 3. deploy it to the lambda function.
But the deploy command terraform apply fails with the error below:
Error: Error modifying Lambda Function Code quote-crawler: InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.
status code: 400, request id: 2db6cb29-8988-474c-8166-f4332d7309de
on config.tf line 48, in resource "aws_lambda_function" "test_lambda":
48: resource "aws_lambda_function" "test_lambda" {
Error: Error modifying Lambda Function Code praw_crawler: InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.
status code: 400, request id: e01c83cf-40ee-4919-b322-fab84f87d594
on config.tf line 67, in resource "aws_lambda_function" "praw_crawler":
67: resource "aws_lambda_function" "praw_crawler" {
It means the deployment file doesn't exist in the s3 bucket. But it succeeds the second time I run the command. It seems like a timing issue: right after the zip file is uploaded to the s3 bucket, it doesn't exist there yet, which is why the first deploy failed. A few seconds later, the second run finishes successfully and very quickly. Is there anything wrong in my configuration file?
The full terraform configuration file can be found: https://github.com/zhaoyi0113/quote-datalake/blob/master/config.tf
You need to declare the dependencies properly to achieve this; otherwise, it will crash.
First, zip the files:
# Zip the Lambda function on the fly
data "archive_file" "source" {
  type        = "zip"
  source_dir  = "../lambda-functions/loadbalancer-to-es"
  output_path = "../lambda-functions/loadbalancer-to-es.zip"
}
Then upload it to s3, specifying the dependency on the zip via source = "${data.archive_file.source.output_path}"; this makes the upload depend on the archive:
# Upload the zip to s3 and then update the Lambda function from s3
resource "aws_s3_bucket_object" "file_upload" {
  bucket = "${aws_s3_bucket.bucket.id}"
  key    = "lambda-functions/loadbalancer-to-es.zip"
  source = "${data.archive_file.source.output_path}" # depends on the zip archive
}
Then you are good to go to deploy the Lambda. To make it depend on the upload, this line does the magic: s3_key = "${aws_s3_bucket_object.file_upload.key}"
resource "aws_lambda_function" "elb_logs_to_elasticsearch" {
function_name = "alb-logs-to-elk"
description = "elb-logs-to-elasticsearch"
s3_bucket = "${var.env_prefix_name}${var.s3_suffix}"
s3_key = "${aws_s3_bucket_object.file_upload.key}" # its mean its depended on upload key
memory_size = 1024
timeout = 900
timeouts {
create = "30m"
}
runtime = "nodejs8.10"
role = "${aws_iam_role.role.arn}"
source_code_hash = "${base64sha256(data.archive_file.source.output_path)}"
handler = "index.handler"
}
You may find that the source_code_hash changes even when the code hasn't changed when using Terraform's archive_file. If this is an issue for you, I created a module to fix this: lambda-python-archive.
This is a response to the top answer:
You need to add .output_base64sha256 to the source_code_hash instead of wrapping the path in base64sha256, or else terraform plan never settles on the "no changes / up-to-date" message.
For example:
source_code_hash = "${data.archive_file.source.output_bash64sha256}"

Amazon Rekognition API - IndexFaces prompting an error for external image id - When 'externalImageId' has folder structure in it

I am trying to invoke the IndexFaces API but I am getting an error:
*"exception":"com.amazonaws.services.rekognition.model.AmazonRekognitionException",
"message": "1 validation error detected: Value 'postman/postworld/postman_female.jpg' at 'externalImageId' failed to satisfy constraint: Member must satisfy regular expression pattern: [a-zA-Z0-9_.\\-:]+ (Service: AmazonRekognition; Status Code: 400; Error Code: ValidationException; Request ID: 3ac46c4d-3358-11e8-abd5-d5fb3ad03e33)",
"path": "/enrolluser"*
I was able to upload my file successfully into S3 using the so-called "folder structure" of S3. But when I try to read the same file for IndexFaces, it prompts an error related to 'externalImageId'.
Here is the path of my uploaded file in S3:
http://xxxxxx.s3.amazonaws.com/postman/postworld/postman_automated.jpg
If I get rid of the folder structure and dump the file directly, like:
http://xxxxxx.s3.amazonaws.com/postman_automated.jpg
then the IndexFaces API passes it successfully.
Can you please suggest how to pass the externalImageId when I do have the 'folder structure'? Currently I am passing the externalImageId through my Java code like:
enrolledFileName = userName + "/" + myWorldName + "/" + enrolledFileName;
System.out.println("The FILE NAME MANIPULATED IS: " + enrolledFileName);
String generateAmazonFaceId = amazonRekognitionManagerObj.addToCollectionForEnrollment(collectionName, bucketName, enrolledFileName);
System.out.println("The Generated FaceId is: " + generateAmazonFaceId);
The above code internally calls:
Image image = new Image().withS3Object(new S3Object().withBucket(bucketName)
        .withName(fileName));
IndexFacesRequest indexFacesRequest = new IndexFacesRequest()
        .withImage(image)
        .withCollectionId(collectionName)
        .withExternalImageId(fileName)
        .withDetectionAttributes("ALL");
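Since the validation pattern in the error message ([a-zA-Z0-9_.\-:]+) does not allow the / character, one possible workaround (an assumption on my part, not an officially documented approach) is to keep the folder structure in the S3 object key but derive the externalImageId from it by replacing / with a character the pattern does allow, such as : or _. A minimal sketch; the method and its parameters are hypothetical and mirror the variables used in the snippets above:

import com.amazonaws.services.rekognition.model.Image;
import com.amazonaws.services.rekognition.model.IndexFacesRequest;
import com.amazonaws.services.rekognition.model.S3Object;

public class IndexFacesIdSketch {

    // bucketName, collectionName and s3Key correspond to the variables in the question.
    static IndexFacesRequest buildRequest(String bucketName, String collectionName, String s3Key) {
        // The S3 key keeps its "folder structure"; only the external image ID has to
        // satisfy [a-zA-Z0-9_.\-:]+, so '/' is replaced with an allowed character.
        String externalImageId = s3Key.replace('/', ':'); // e.g. postman:postworld:postman_female.jpg

        Image image = new Image().withS3Object(new S3Object()
                .withBucket(bucketName)
                .withName(s3Key)); // full key, e.g. postman/postworld/postman_female.jpg

        return new IndexFacesRequest()
                .withImage(image)
                .withCollectionId(collectionName)
                .withExternalImageId(externalImageId) // sanitized ID instead of the raw key
                .withDetectionAttributes("ALL");
    }
}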

Error accessing s3 bucket from lambda - InvalidBucketName

Using the Java AWS SDK, I've created a lambda function to read a csv file from an s3 bucket. I've made the bucket public and can access it and the file easily from any browser.
To test it, I'm using the test button on the lambda console. I'm just using the hello world test config input template.
It fails with:
Error Message: The specified bucket is not valid. (Service: Amazon S3; Status Code: 400; Error Code: InvalidBucketName; Request ID: XXXXXXXXXXXXXXX)
Lambda function and s3 bucket are in the same region (us-east-1).
I've added the AmazonS3FullAccess policy to the lambda_basic_execution role.
AmazonS3 s3Client = AmazonS3ClientBuilder.standard().build();
I also tried
AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();
then the call:
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucketName, keyName));
bucketName is:
https://s3.amazonaws.com/<allAlphaLowerCaseBucketName>
keyName is:
<allAlphaLowerCaseKeyName>.csv
Any help is appreciated.
The bucket name is not the URL to the bucket, but only the actual name of your bucket.
S3Object s3object = s3Client.getObject(
        new GetObjectRequest(
                <allAlphaLowerCaseBucketName>,
                <allAlphaLowerCaseKeyName>.csv
        )
);
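Putting that together with the client construction from the question, here is a minimal sketch that reads the CSV content. The bucket and key names are placeholders, and it is shown as a plain main method rather than a Lambda handler for brevity; the key point is that the bucket is passed as its bare name, not as a URL:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

public class CsvFromS3Sketch {
    public static void main(String[] args) throws IOException {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard().build();

        // Bare bucket name, not "https://s3.amazonaws.com/<bucket>"
        String bucketName = "my-bucket-name"; // placeholder
        String keyName = "my-file.csv";       // placeholder

        S3Object s3object = s3Client.getObject(new GetObjectRequest(bucketName, keyName));
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(s3object.getObjectContent(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // process each CSV row
            }
        }
    }
}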