How to catch failed S3 copyObjectAsync with 200 OK result

I want to use CopyObjectAsync from the S3 SDK (.NET Core) to copy keys from one bucket to another.
In the AWS documentation (https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) I found: "A copy request might return an error when Amazon S3 receives the copy request or while Amazon S3 is copying the files. If the error occurs before the copy action starts, you receive a standard Amazon S3 error. If the error occurs during the copy operation, the error response is embedded in the 200 OK response. This means that a 200 OK response can contain either a success or an error. Design your application to parse the contents of the response and handle it appropriately."
However, there is no explanation of how or where to get the error. Will it be an exception?
I found a similar question:
How to catch failed S3 copyObject with 200 OK result in AWSJavaScriptSDK
But I wonder whether there is another explanation for why the SDK does not support this, and whether there are other ways to verify that the copy succeeded.
Thanks in advance,
Yossi
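
As far as I can tell, the official SDKs parse the error embedded in the 200 OK body and surface it as a service exception, so a normal return should already indicate success (that is also the gist of the linked JavaScript question). If you want an explicit guarantee, a defensive pattern is to verify the destination object yourself after the copy. Below is a minimal sketch using the AWS SDK for Java v1; the same pattern applies with the .NET SDK's CopyObjectAsync and GetObjectMetadataAsync. Bucket and key names are illustrative.

```java
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;

public class VerifiedCopy {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        try {
            // If an error is embedded in a 200 OK body, the SDK is expected to
            // surface it as an AmazonServiceException rather than return normally.
            s3.copyObject("source-bucket", "my-key", "dest-bucket", "my-key");

            // Defensive verification: confirm the destination exists and matches in size.
            ObjectMetadata src = s3.getObjectMetadata("source-bucket", "my-key");
            ObjectMetadata dst = s3.getObjectMetadata("dest-bucket", "my-key");
            if (src.getContentLength() != dst.getContentLength()) {
                throw new IllegalStateException("Copy verification failed: size mismatch");
            }
        } catch (AmazonServiceException e) {
            // Covers both errors returned up front and errors embedded in a 200 OK.
            System.err.println("Copy failed: " + e.getErrorMessage());
        } catch (SdkClientException e) {
            System.err.println("Client-side failure: " + e.getMessage());
        }
    }
}
```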

Related

Camel AWS2 S3 Component with Idempotent Consumer EIP throws software.amazon.awssdk.services.s3.model.NoSuchKeyException: The specified key does not exist

I am trying to use the Camel AWS2 S3 component to retrieve objects from the AWS S3 service.
Since more than one instance of this route will be running, I am using the Idempotent
Consumer EIP in the route to filter out duplicates, backed by a Hazelcast idempotent repository.
When I am running one instance of the application, everything works fine.
When I am running two instances of the application, I am seeing **software.amazon.awssdk.services.s3.model.NoSuchKeyException: The specified key does not exist.**
File consumption itself is not a problem; it works fine.
But if application instance 1 consumes a file from S3, I sometimes see the error in application instance 2,
and if application instance 2 consumes a file from S3, I sometimes see the error in application instance 1.
My route:
```java
from("aws2-s3://test-bucket?prefix=mypoc&moveAfterRead=true")
    .routeId("myRoute")
    .idempotentConsumer(simple("${header.CamelAwsS3BucketName}-${header.CamelAwsS3Key}"), repository)
    .skipDuplicate(true)
    .log(LoggingLevel.INFO, "Message received: ${header.CamelAwsS3Key}");
```
Exception:
Caused by: software.amazon.awssdk.services.s3.model.NoSuchKeyException: The specified key does not exist. (Service: S3, Status Code: 404, Request ID: WNVX1YQWZQWAFNMB, Extended Request ID: Vm1pTrjSbM71R9h/f7+ypr60/Gn4j5pzgCZDsAhtVzd9QZBGBbrq8U14DMPWf0GOmy5/pmJvbno=)
    at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleErrorResponse(CombinedResponseHandler.java:125) ~[sdk-core-2.17.247.jar:na]
But I am wondering why I am seeing the exception. It looks like both instances are trying to consume the same file and one of them throws the error.
Any insights on why we are seeing the exception?
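
For what it's worth, the race itself is easy to picture: both instances LIST the same key, the winner moves the object (moveAfterRead=true), and the loser's subsequent GET then returns a 404 before the idempotent filter ever sees the exchange. A minimal sketch of the same race against the bare AWS SDK v2 (bucket and key are illustrative), showing one way to treat NoSuchKeyException as "a peer already consumed it":

```java
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.NoSuchKeyException;

public class TolerantGet {
    public static void main(String[] args) {
        S3Client s3 = S3Client.create();
        GetObjectRequest request = GetObjectRequest.builder()
                .bucket("test-bucket")        // illustrative, from the route above
                .key("mypoc/some-object.txt") // illustrative
                .build();
        try {
            s3.getObjectAsBytes(request);
        } catch (NoSuchKeyException e) {
            // Another instance moved the object between our LIST and GET; safe to skip.
        }
    }
}
```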

DMS S3 source endpoint connection fails

I get the connection error below when trying to validate an S3 source endpoint in DMS.
Test Endpoint failed: Application-Status: 1020912, Application-Message: Failed to connect to database.
I followed all the steps listed in the links below, but I may still be missing something:
https://aws.amazon.com/premiumsupport/knowledge-center/dms-connection-test-fail-s3/
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.S3.html
The role associated with the endpoint does have access to the endpoint's S3 bucket, and DMS is listed as a trusted entity.
I got this same error when trying to use S3 as a target.
The one thing not mentioned in the documentation, and which turned out to be the root cause of my error, is that the DMS replication instance and the bucket need to be in the same region.
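
One quick way to check for that mismatch is to compare the bucket's region with the replication instance's region. A sketch using the AWS SDK for Java v2; the bucket name is illustrative, and note that older buckets in us-east-1 report an empty location constraint:

```java
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetBucketLocationRequest;

public class BucketRegionCheck {
    public static void main(String[] args) {
        S3Client s3 = S3Client.create();
        String constraint = s3.getBucketLocation(GetBucketLocationRequest.builder()
                        .bucket("my-dms-source-bucket") // illustrative name
                        .build())
                .locationConstraintAsString();
        // An empty/null constraint means the bucket lives in us-east-1.
        String bucketRegion = (constraint == null || constraint.isEmpty()) ? "us-east-1" : constraint;
        System.out.println("Bucket region: " + bucketRegion
                + " (must match the DMS replication instance's region)");
    }
}
```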

Does s4cmd support Signature Version 4? I am unable to upload files to an S3 bucket (London)

I am trying to upload files to an S3 bucket in London (eu-west-2), but s4cmd is not working:
s4cmd put /home/username/Documents/file-1.json s3://[BUCKETNAME]/file-1.json
The error when I run this command is:
[Exception] An error occurred (400) when calling the HeadObject operation: Bad Request
[Thread Failure] An error occurred (400) when calling the HeadObject operation: Bad Request
s3cmd works, but it is slow. s4cmd works for the US Standard region (us-east-1) but not for the London region.
Thanks in advance.
The aws s3 cp command in the AWS Command-Line Interface (CLI) uses multi-part upload to fully utilize available bandwidth, so it should give you pretty much the best speed possible.
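
On the original question: London (eu-west-2) opened after AWS stopped rolling out Signature Version 2, so it accepts SigV4 only, and tools that default to SigV2 fail there while still working in us-east-1, which matches the symptoms described. If a programmatic alternative helps, the AWS SDKs use SigV4 by default, and the Java SDK's TransferManager gives the same multi-part parallelism as aws s3 cp. A sketch using the AWS SDK for Java v1 (the bucket name is illustrative; the file path is from the question):

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;
import java.io.File;

public class UploadToLondon {
    public static void main(String[] args) throws InterruptedException {
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion("eu-west-2") // Signature Version 4 is used automatically
                .build();
        TransferManager tm = TransferManagerBuilder.standard()
                .withS3Client(s3)
                .build();
        Upload upload = tm.upload("my-bucket", "file-1.json", // bucket name illustrative
                new File("/home/username/Documents/file-1.json"));
        upload.waitForCompletion(); // blocks until the multi-part upload finishes
        tm.shutdownNow();
    }
}
```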

Amazon S3 File Read Timeout. Trying to download a file using Java

I am new to Amazon S3. I get the following error when trying to access a file on Amazon S3 from a simple Java method.
2016-08-23 09:46:48 INFO request:450 - Received successful response:200, AWS Request ID: F5EA01DB74D0D0F5
Caught an AmazonClientException, which means the client encountered an
internal error while trying to communicate with S3, such as not being
able to access the network.
Error Message: Unable to store object contents to disk: Read timed out
The exact same lines of code worked yesterday; I was able to download 100% of a 5 GB file in 12 minutes. Today I'm in a better-connected environment, but only 2% or 3% of the file downloads before the program fails.
The code I'm using to download:
s3Client.getObject(new GetObjectRequest("mybucket", file.getKey()), localFile);
You need to set the connection timeout and the socket timeout in your client configuration.
Here is an excerpt from a reference article on client configuration:
Several HTTP transport options can be configured through the com.amazonaws.ClientConfiguration object. Default values will suffice for the majority of users, but users who want more control can configure:
- Socket timeout
- Connection timeout
- Maximum retry attempts for retry-able errors
- Maximum open HTTP connections
Here is an example of how to do it:
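This is a minimal sketch using the SDK v1 ClientConfiguration; the timeout values are illustrative starting points, not recommendations:

```java
import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class TimeoutConfig {
    public static void main(String[] args) {
        ClientConfiguration config = new ClientConfiguration();
        config.setConnectionTimeout(10_000); // ms to establish the TCP connection
        config.setSocketTimeout(120_000);    // ms of socket inactivity before "Read timed out"
        config.setMaxErrorRetry(5);          // retries for retry-able errors

        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withClientConfiguration(config)
                .build();
        // Use s3Client.getObject(...) as in the question; slow-but-alive downloads
        // now get more headroom before the socket read times out.
    }
}
```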
See also: Downloading files >3Gb from S3 fails with "SocketTimeoutException: Read timed out"

Multiple file upload to BigQuery

I am trying to upload multiple files simultaneously to Google BigQuery using the command-line tool. I got the following error:
BigQuery error in load operation: Could not connect with BigQuery server.
Http response status: 503
Http response content:
Service Unavailable
Is there any way to work around this problem?
How do I upload multiple files simultaneously to Google BigQuery using the command-line tool?
Multiple file upload should work (and we use it every day). If you're getting a 503, that indicates something is wrong with the service. One thing to make sure of: if you're using a `*` on your command line, quote it so that the shell doesn't expand it automatically before it gets passed to bq.
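
The same point is visible if you bypass the shell entirely: with the BigQuery client library, the wildcard URI is passed to the service verbatim, so nothing expands it locally. A sketch using the google-cloud-bigquery Java client (dataset, table, and URI are illustrative):

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FormatOptions;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.TableId;

public class MultiFileLoad {
    public static void main(String[] args) throws InterruptedException {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        // The * wildcard is interpreted by BigQuery, not by a local shell.
        LoadJobConfiguration config = LoadJobConfiguration
                .newBuilder(TableId.of("my_dataset", "my_table"), // illustrative
                        "gs://my-bucket/data/part-*.csv")         // illustrative
                .setFormatOptions(FormatOptions.csv())
                .build();
        Job job = bigquery.create(JobInfo.of(config)).waitFor();
        if (job == null || job.getStatus().getError() != null) {
            System.err.println("Load failed"
                    + (job == null ? "" : ": " + job.getStatus().getError()));
        }
    }
}
```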
If you're getting a 503 error, can you retry the command with the flag --apilog=- (this needs to be one of the first parameters)? That will dump the interaction with the server to stdout. The problem may be obvious from that log, but if it isn't, can you update your question with the relevant portions of the log? If you're not comfortable posting that information on a public forum, you can e-mail it to me at tigani at google dot com.