Redshift Spectrum / The bucket you are attempting to access must be addressed using the specified endpoint - amazon-s3

I created a Parquet file in S3 and an external table pointing to it in Redshift Spectrum. Both my S3 bucket and Redshift cluster are in us-west-2, and I specified the region option when creating the external schema.
Queries run smoothly in Athena.
Yet when I run the query from a Redshift client, I get this error:
Amazon Invalid operation: S3 Query Exception (Fetch)
Details:
error: S3 Query Exception (Fetch)
code: 15001
context: Task failed due to an internal error.
HTTP response error code: 301 Message: PermanentRedirect The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
x-amz-request-id: XXXX
query: XXXXX
location: dory_util.cpp:689
process: query0_40 [pid=XXX]
-----------------------------------------------;
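For context, the external schema was created with the region specified explicitly, roughly like the following sketch (run here through the Redshift Data API; the cluster identifier, database, user, schema, and IAM role are hypothetical placeholders):

import boto3

# Hypothetical sketch: create the Spectrum external schema with the region pinned
# to us-west-2, matching the bucket and cluster. All identifiers are placeholders.
ddl = """
CREATE EXTERNAL SCHEMA spectrum_schema
FROM DATA CATALOG
DATABASE 'spectrum_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/mySpectrumRole'
REGION 'us-west-2'
CREATE EXTERNAL DATABASE IF NOT EXISTS;
"""
boto3.client("redshift-data").execute_statement(
    ClusterIdentifier="my-cluster",
    Database="dev",
    DbUser="awsuser",
    Sql=ddl,
)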

AWS has acknowledged the issue and released a patch overnight.

Please make sure that your Redshift cluster is running at least version 1.0.14016 in us-east-2 or us-west-2 and 1.0.1407 in us-east-1. To apply the patch immediately, move your cluster's maintenance window closer to the current day and time so the patch is picked up at your convenience.
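To check which revision a cluster is running and to pull the maintenance window forward, a boto3 sketch along these lines can be used (the cluster identifier and the window value are placeholders):

import boto3

redshift = boto3.client("redshift", region_name="us-west-2")

# Look up the current patch level of the cluster (identifier is a placeholder).
cluster = redshift.describe_clusters(ClusterIdentifier="my-cluster")["Clusters"][0]
print(cluster["ClusterVersion"], cluster["ClusterRevisionNumber"])

# Optionally move the maintenance window closer to now so the patch is applied sooner.
redshift.modify_cluster(
    ClusterIdentifier="my-cluster",
    PreferredMaintenanceWindow="mon:09:00-mon:09:30",
)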

Related

AWS S3 Connection in Druid

I have set up a clustered Druid deployment with the configuration described in the Druid documentation:
https://druid.apache.org/docs/latest/tutorials/cluster.html
I am using AWS S3 for deep storage. The following is a snippet of my common configuration file:
druid.extensions.loadList=["druid-datasketches", "mysql-metadata-storage", "druid-s3-extensions", "druid-orc-extensions", "druid-lookups-cached-global"]
# For S3:
druid.storage.type=s3
druid.storage.bucket=bucket-name
druid.storage.baseKey=druid/segments
#druid.storage.disableAcl=true
druid.storage.sse.type=s3
#druid.s3.accessKey=...
#druid.s3.secretKey=...
# For S3:
druid.indexer.logs.type=s3
druid.indexer.logs.s3Bucket=bucket-name
druid.indexer.logs.s3Prefix=druid/stage/indexing-logs
While running any ingestion task, I get an Access Denied error:
java.io.IOException: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: ; S3 Extended Request ID: ), S3 Extended Request ID:
at org.apache.druid.storage.s3.S3DataSegmentPusher.push(S3DataSegmentPusher.java:103) ~[?:?]
at org.apache.druid.segment.realtime.appenderator.AppenderatorImpl.lambda$mergeAndPush$4(AppenderatorImpl.java:791) ~[druid-server-0.19.0.jar:0.19.0]
at org.apache.druid.java.util.common.RetryUtils.retry(RetryUtils.java:87) ~[druid-core-0.19.0.jar:0.19.0]
at org.apache.druid.java.util.common.RetryUtils.retry(RetryUtils.java:115) ~[druid-core-0.19.0.jar:0.19.0]
at org.apache.druid.java.util.common.RetryUtils.retry(RetryUtils.java:105) ~[druid-core-0.19.0.jar:0.19.0]
I am using S3 for two purposes:
1. Reading data from S3 and ingesting it. This connection is working fine and data is being read from the S3 location.
2. Deep storage. This is where I am getting the error.
I am using the profile information authentication method to provide the S3 credentials, so I have already configured the AWS CLI with the appropriate credentials. Also, the S3 data is encrypted with AES256, so I have added druid.storage.sse.type=s3 to the config file.
Can someone help me out here? I am not able to debug the issue.
You asked how to approach debugging this. Normally I would:
1. SSH onto the EC2 instance and run aws sts get-caller-identity. This tells you which principal your requests are sent from. Then I would confirm that principal has the expected S3 access.
2. Confirm that I can write to the bucket in your configuration (see the sketch after these steps):
druid.storage.type=s3
druid.storage.bucket=<bucket-name>
druid.storage.baseKey=druid/segments
3. Try some of the other auth methods, such as exporting the keys into the environment (the third option), since that is a simple test. Then I would run step 1 again to confirm my principal reflects those keys, and then try running your code again.
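A minimal sketch of steps 1 and 2 in Python (the bucket name and key are placeholders, and this assumes boto3 picks up the same profile or instance role that the Druid processes run with):

import boto3

# Step 1: confirm which principal the requests are sent from.
print(boto3.client("sts").get_caller_identity()["Arn"])

# Step 2: attempt the same kind of write Druid performs for deep storage,
# including the AES256 server-side encryption implied by druid.storage.sse.type=s3.
boto3.client("s3").put_object(
    Bucket="bucket-name",
    Key="druid/segments/connectivity-test",
    Body=b"test",
    ServerSideEncryption="AES256",
)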

DMS S3 source endpoint connection fails

I get the connection error below when trying to validate an S3 source endpoint in DMS.
Test Endpoint failed: Application-Status: 1020912, Application-Message: Failed to connect to database.
I followed all the steps listed in the links below, but maybe I am still missing something:
https://aws.amazon.com/premiumsupport/knowledge-center/dms-connection-test-fail-s3/
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.S3.html
The role associated with the endpoint does have access to the endpoint's S3 bucket, and DMS is listed as a trusted entity.
I got this same error when trying to use S3 as a target.
The one thing not mentioned in the documentation, which turned out to be the root cause of my error, is that the DMS replication instance and the bucket need to be in the same region.
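A quick way to confirm the bucket's region before comparing it with the replication instance's region is a small boto3 check (the bucket name is a placeholder; buckets in us-east-1 report a LocationConstraint of None):

import boto3

# Look up the bucket's region and compare it with the region shown for the
# DMS replication instance in the console.
location = boto3.client("s3").get_bucket_location(Bucket="my-dms-bucket")
bucket_region = location["LocationConstraint"] or "us-east-1"
print("Bucket region:", bucket_region)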

PySpark Writing DataFrame Partitions to S3

I've been trying to partition and write a Spark DataFrame to S3, and I get an error.
df.write.partitionBy("year","month").mode("append")\
.parquet('s3a://bucket_name/test_folder/')
Error message is:
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception:
Status Code: 403, AWS Service: Amazon S3, AWS Request ID: xxxxxx,
AWS Error Code: SignatureDoesNotMatch,
AWS Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method.
However, when I simply write without partitioning, it does work.
df.write.mode("append").parquet('s3a://bucket_name/test_folder/')
What could be causing this problem?
I resolved this problem by upgrading from aws-java-sdk:1.7.4 to aws-java-sdk:1.11.199 and hadoop-aws:2.7.7 to hadoop-aws:3.0.0 in my spark-submit.
I set this in my python file using:
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.amazonaws:aws-java-sdk:1.11.199,org.apache.hadoop:hadoop-aws:3.0.0 pyspark-shell'
But you can also provide them as arguments to spark-submit directly.
I had to rebuild Spark providing my own version of Hadoop 3.0.0 to avoid dependency conflicts.
You can read some of my speculation as to the root cause here: https://stackoverflow.com/a/51917228/10239681
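As an alternative to setting PYSPARK_SUBMIT_ARGS, the same packages can usually be supplied through the SparkSession builder. This is only a sketch with the version strings from above; as noted, a Spark build with a matching Hadoop version may still be required:

from pyspark.sql import SparkSession

# Pull in the S3A connector and AWS SDK at session start instead of via
# PYSPARK_SUBMIT_ARGS; versions mirror the ones that worked above.
spark = (
    SparkSession.builder
    .config("spark.jars.packages",
            "com.amazonaws:aws-java-sdk:1.11.199,org.apache.hadoop:hadoop-aws:3.0.0")
    .getOrCreate()
)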

Does s4cmd support Signature Version 4? I am unable to upload files to an S3 bucket (London)

I am trying to upload files to an S3 bucket in London (eu-west-2), and s4cmd is not working.
s4cmd put /home/username/Documents/file-1.json s3://[BUCKETNAME]/file-1.json
The error when I run this command is:
[Exception] An error occurred (400) when calling the HeadObject operation: Bad Request
[Thread Failure] An error occurred (400) when calling the HeadObject operation: Bad Request
s3cmd works, but it is slow. s4cmd works for the US Standard region, but it does not work for the London region.
Thanks in advance.
The aws s3 cp command in the AWS Command-Line Interface (CLI) uses multi-part upload to fully utilize available bandwidth, so it should give you pretty much the best speed possible.
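If you would rather stay in Python, boto3's managed transfer behaves similarly: it signs requests with Signature Version 4 (required in newer regions such as eu-west-2) and switches to multi-part uploads for larger files. This is a sketch; the file path, bucket, and threshold are placeholders taken from the question:

import boto3
from boto3.s3.transfer import TransferConfig

# Upload with SigV4 signing and automatic multi-part transfer above 8 MB.
s3 = boto3.client("s3", region_name="eu-west-2")
s3.upload_file(
    "/home/username/Documents/file-1.json",
    "BUCKETNAME",
    "file-1.json",
    Config=TransferConfig(multipart_threshold=8 * 1024 * 1024),
)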

Elasticsearch Writes Into S3 Bucket for Metadata But Doesn't Write Into S3 Bucket For Scheduled Snapshots?

According to the documentation at http://www.elasticsearch.org/tutorials/2011/08/22/elasticsearch-on-ec2.html, I have included the respective API Access Key ID and its secret access key, and Elasticsearch is able to write to the S3 bucket as follows:
[2012-08-02 04:21:38,793][DEBUG][gateway.s3] [Schultz, Herman] writing to gateway org.elasticsearch.gateway.shared.SharedStorageGateway$2#4e64f6fe ...
[2012-08-02 04:21:39,337][DEBUG][gateway.s3] [Schultz, Herman] wrote to gateway org.elasticsearch.gateway.shared.SharedStorageGateway$2#4e64f6fe, took 543ms
However, when it comes to writing snapshots into the S3 bucket, the following error appears:
[2012-08-02 04:25:37,303][WARN ][index.gateway.s3] [Schultz, Herman] [plumbline_2012.08.02][3] failed to read commit point [commit-i] java.io.IOException: Failed to get [commit-i]
Caused by: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: E084E2ED1E68E710, AWS Error Code: InvalidAccessKeyId, AWS Error Message: The AWS Access Key Id you provided does not exist in our records.
[2012-08-02 04:36:06,696][WARN ][index.gateway] [Schultz, Herman] [plumbline_2012.08.02][0] failed to snapshot (scheduled) org.elasticsearch.index.gateway.IndexShardGatewaySnapshotFailedException: [plumbline_2012.08.02][0] Failed to perform snapshot (index files)
Is there a reason why this is happening, given that the access keys I have provided are able to write metadata but not create snapshots?
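One quick check, given the InvalidAccessKeyId in the log above, is to verify the key pair configured for snapshots outside of Elasticsearch. A hedged boto3 sketch (the key values are placeholders):

import boto3

# If this call also fails with an invalid-key error, the access key configured
# for snapshots is wrong or deactivated rather than merely under-privileged.
sts = boto3.client(
    "sts",
    aws_access_key_id="AKIAEXAMPLE",
    aws_secret_access_key="wJalrXUtnFEMI/EXAMPLEKEY",
)
print(sts.get_caller_identity()["Arn"])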