Amazon Rekognition API - IndexFaces throwing an error for externalImageId when it contains an S3 folder structure - amazon-s3

I am trying to invoke the IndexFaces API but I am getting an error:
*"exception":"com.amazonaws.services.rekognition.model.AmazonRekognitionException",
"message": "1 validation error detected: Value 'postman/postworld/postman_female.jpg' at 'externalImageId' failed to satisfy constraint: Member must satisfy regular expression pattern: [a-zA-Z0-9_.\\-:]+ (Service: AmazonRekognition; Status Code: 400; Error Code: ValidationException; Request ID: 3ac46c4d-3358-11e8-abd5-d5fb3ad03e33)",
"path": "/enrolluser"*
I was able to upload my file into S3 successfully using the so-called "folder structure" of S3. But when I try to read the same file for IndexFaces, it throws the error above related to 'externalImageId'.
Here is the S3 path of my uploaded file:
http://xxxxxx.s3.amazonaws.com/postman/postworld/postman_automated.jpg
If I get rid of the folder structure and dump the file directly, like:
http://xxxxxx.s3.amazonaws.com/postman_automated.jpg
then the IndexFaces API passes it successfully.
Can you please suggest how to pass the externalImageId when I do have the 'folder structure'? Currently I am building the externalImageId in my Java code like this:
enrolledFileName = userName + "/" + myWorldName + "/" + enrolledFileName;
System.out.println("The FILE NAME MANIPULATED IS: " + enrolledFileName);
String generateAmazonFaceId = amazonRekognitionManagerObj.addToCollectionForEnrollment(collectionName, bucketName, enrolledFileName);
System.out.println("The Generated FaceId is: " + generateAmazonFaceId);
The above code internally calls:
Image image = new Image().withS3Object(new S3Object().withBucket(bucketName)
        .withName(fileName));
IndexFacesRequest indexFacesRequest = new IndexFacesRequest()
        .withImage(image)
        .withCollectionId(collectionName)
        .withExternalImageId(fileName)
        .withDetectionAttributes("ALL");
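From the validation message, externalImageId must match [a-zA-Z0-9_.\-:]+, so it cannot contain the '/' characters that come with an S3 "folder structure" key. One workaround is to keep the full key for the S3Object (Rekognition can read the image from a nested key) and sanitize only the value passed to withExternalImageId, for example by replacing '/' with ':' or '_'. A minimal sketch of that idea, reusing the variables and Rekognition model classes from the snippet above (the choice of ':' as the replacement character is just one option the pattern allows):

// Keep the real S3 key (with slashes) for reading the object,
// but derive a pattern-safe externalImageId from it.
String s3Key = enrolledFileName;                  // e.g. "postman/postworld/postman_female.jpg"
String externalImageId = s3Key.replace("/", ":"); // ':' is allowed by [a-zA-Z0-9_.\-:]+

Image image = new Image().withS3Object(new S3Object()
        .withBucket(bucketName)
        .withName(s3Key));                        // the S3 key may contain '/'

IndexFacesRequest indexFacesRequest = new IndexFacesRequest()
        .withImage(image)
        .withCollectionId(collectionName)
        .withExternalImageId(externalImageId)     // sanitized id, no '/'
        .withDetectionAttributes("ALL");

If you need the folder path back later, you can reverse the replacement on your side; Rekognition simply returns the externalImageId as the opaque string you supplied.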

Related

S3 file upload failing

I am using the S3 Transfer Manager V2 to upload a file to an AWS S3 bucket and it is failing with the following exception:
software.amazon.awssdk.core.exception.SdkClientException: Failed to send the request: Failed to write to TLS handler
The following is the code snippet:
// v2 uses AwsBasicCredentials for an access key/secret pair (AwsSessionCredentials also needs a session token)
StaticCredentialsProvider credentials = StaticCredentialsProvider.create(
        AwsBasicCredentials.create(access_key_id, secret_access_key));
S3AsyncClient s3AsyncClient = S3AsyncClient.crtBuilder().credentialsProvider(credentials).region(region).build();
S3TransferManager s3tm = S3TransferManager.builder().s3Client(s3AsyncClient).build();
UploadFileRequest uploadFileRequest = UploadFileRequest.builder()
        .putObjectRequest(req -> req.bucket("bucket").key("key"))
        .source(Paths.get("/file_to_upload.txt"))
        .build();
FileUpload upload = s3tm.uploadFile(uploadFileRequest);
upload.completionFuture().join();
What am I missing here? Please advise.

Terraform fails to read an S3 prefix containing a file named '/'

I have created an S3 object resource in Terraform using the code below:
resource "aws_s3_bucket_object" "object" {
bucket = <bucket_name>
acl = "private"
key = "prefix"
source = "/dev/null"
}
After deploying, I accidentally created a file inside the prefix named /. Now when I try to apply new changes with terraform apply, it throws this error:
Error: error reading S3 Object (prefix/): Forbidden: Forbidden.

Unable to upload document with special character in AWS CloudSearch through Java SDK

I am trying to upload a document which has a special character in it. The JSON string is:
[{
  "type": "add",
  "id": 1234,
  "fields": {
    "copyrightline": "© 2005 Some company. All Rights Reserved."
  }
}]
When I remove '©' from the JSON, I am able to upload the document. When I keep the character '©', the error below is thrown:
AmazonCloudSearchDomainException: The request signature we calculated
does not match the signature you provided. Check your AWS Secret
Access Key and signing method. Consult the service documentation for
details. (Service: AmazonCloudSearchDomain; Status Code: 403; Error
Code: SignatureDoesNotMatch; Request ID:
d11a2497-aeac-11e9-b6fb-db6602f3004a)
I tried changing the encoding (UTF-8, UTF-16 and UTF-32) but with no success.
Here is the code which pushes the above string to CloudSearch:
UploadDocumentsRequest uploadDocumentsRequest = new UploadDocumentsRequest();
InputStream inputStream = org.apache.commons.io.IOUtils.toInputStream(testDataString, "UTF-8");
uploadDocumentsRequest.setDocuments(inputStream);
uploadDocumentsRequest.setContentType(ContentType.Applicationjson);
uploadDocumentsRequest.setContentLength((long) testDataString.length()); // character count, not byte count
UploadDocumentsResult uploadDocumentsResult = client.uploadDocuments(uploadDocumentsRequest);
I found out that the issue is with setContentLength(). If the length is invalid, the error is thrown. The following code change made it work:
uploadDocumentsRequest.setContentLength((long) testDataString.getBytes("UTF-8").length);
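The underlying reason '©' breaks the request is that String.length() counts characters, while the content length sent with the request must be the number of bytes actually transmitted; '©' encodes to two bytes in UTF-8, so the character count understates the body size, and that mismatch appears to be what trips the SignatureDoesNotMatch check here. A small, self-contained illustration of the difference (plain Java, no AWS calls):

import java.nio.charset.StandardCharsets;

public class ContentLengthDemo {
    public static void main(String[] args) {
        String copyrightLine = "© 2005 Some company. All Rights Reserved.";

        // String.length() counts chars; the HTTP body length must be in bytes.
        int charCount = copyrightLine.length();
        int byteCount = copyrightLine.getBytes(StandardCharsets.UTF_8).length;

        System.out.println("chars = " + charCount + ", UTF-8 bytes = " + byteCount);
        // byteCount is charCount + 1 here, because '©' takes two bytes in UTF-8.
    }
}

That is why the fix above computes the length from getBytes("UTF-8") on the same string that is converted into the InputStream.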

Error while testing a Google Dialogflow bot. Unable to send utterance file to bot: Botium Box

I am using the commercial version of Botium Box. I am able to connect to the Google Dialogflow bot and to add a test set which contains two utterance files and one convo file. On running the test set I get this error (Unable to send utterance file to bot):
Figure 001: screenshot of chatting manually with the bot.
Error:
Error: bot_first_reply/bot_first_reply-L1/Step 1 - tell utterance: error sending to bot
Error: Cannot send message to dialog flow container: {
Error: Deadline exceeded
at Http2CallStream.call.on (/app/agent/node_modules/@grpc/grpc-js/build/src/client.js:96:45)
at Http2CallStream.emit (events.js:202:15)
at process.nextTick (/app/agent/node_modules/@grpc/grpc-js/build/src/call-stream.js:71:22)
at processTicksAndRejections (internal/process/next_tick.js:74:9) code: 4, details: 'Deadline exceeded', metadata: Metadata { options: undefined, internalRepr: Map {} } }
at sessionClient.detectIntent.then.catch.err (/app/agent/node_modules/botium-connector-dialogflow/dist/botium-connector-dialogflow-cjs.js:704:16)

How to configure `Terraform` to upload a zip file to an `s3` bucket and then deploy it to Lambda

I use Terraform as the infrastructure framework in my application. Below is the configuration I use to deploy Python code to Lambda. It does three steps: 1. zip all dependencies and source code into a zip file; 2. upload the zipped file to an S3 bucket; 3. deploy it to a Lambda function.
But the deploy command terraform apply fails with the error below:
Error: Error modifying Lambda Function Code quote-crawler: InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.
status code: 400, request id: 2db6cb29-8988-474c-8166-f4332d7309de
on config.tf line 48, in resource "aws_lambda_function" "test_lambda":
48: resource "aws_lambda_function" "test_lambda" {
Error: Error modifying Lambda Function Code praw_crawler: InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.
status code: 400, request id: e01c83cf-40ee-4919-b322-fab84f87d594
on config.tf line 67, in resource "aws_lambda_function" "praw_crawler":
67: resource "aws_lambda_function" "praw_crawler" {
This means the deployment package doesn't exist in the S3 bucket, yet the command succeeds the second time I run it. It seems like a timing issue: right after uploading the zip file to the S3 bucket, the object isn't there yet, which is why the first deploy fails, while the second run a few seconds later finishes successfully and very quickly. Is there anything wrong in my configuration file?
The full terraform configuration file can be found: https://github.com/zhaoyi0113/quote-datalake/blob/master/config.tf
You need to declare the dependencies properly to achieve this; otherwise it will crash.
First, zip the files:
# Zip the Lambda function on the fly
data "archive_file" "source" {
  type        = "zip"
  source_dir  = "../lambda-functions/loadbalancer-to-es"
  output_path = "../lambda-functions/loadbalancer-to-es.zip"
}
Then upload it to S3, declaring its dependency on the zip: source = "${data.archive_file.source.output_path}" makes the bucket object depend on the archive.
# Upload the zip to S3, then update the Lambda function from S3
resource "aws_s3_bucket_object" "file_upload" {
  bucket = "${aws_s3_bucket.bucket.id}"
  key    = "lambda-functions/loadbalancer-to-es.zip"
  source = "${data.archive_file.source.output_path}" # depends on the zip archive
}
Then you are good to go to deploy the Lambda function. To make it depend on the upload, this line does the magic: s3_key = "${aws_s3_bucket_object.file_upload.key}"
resource "aws_lambda_function" "elb_logs_to_elasticsearch" {
function_name = "alb-logs-to-elk"
description = "elb-logs-to-elasticsearch"
s3_bucket = "${var.env_prefix_name}${var.s3_suffix}"
s3_key = "${aws_s3_bucket_object.file_upload.key}" # its mean its depended on upload key
memory_size = 1024
timeout = 900
timeouts {
create = "30m"
}
runtime = "nodejs8.10"
role = "${aws_iam_role.role.arn}"
source_code_hash = "${base64sha256(data.archive_file.source.output_path)}"
handler = "index.handler"
}
When using Terraform's archive_file, you may find that source_code_hash changes even when the code hasn't changed. If this is an issue for you, I created a module to fix it: lambda-python-archive.
This is a response to the top answer:
You need to use .output_base64sha256 from the archive_file data source for source_code_hash instead of wrapping the output path in base64sha256, or else terraform plan never settles with a "no changes / up-to-date" message.
For example:
source_code_hash = "${data.archive_file.source.output_bash64sha256}"