400 Bad Request when attempting to connect to AWS Neptune with IAM enabled

I am unable to connect to a Neptune instance that has IAM authentication enabled. I have followed the AWS documentation (correcting a few of my own errors along the way) but without luck.
When I connect via my Java application using the SigV4Signer, and when I use the Gremlin console, I get a 400 Bad Request WebSocket error:
o.a.t.g.d.Handler$GremlinResponseHandler : Could not process the response
io.netty.handler.codec.http.websocketx.WebSocketHandshakeException: Invalid handshake response getStatus: 400 Bad Request
at io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker13.verify(WebSocketClientHandshaker13.java:267)
at io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker.finishHandshake(WebSocketClientHandshaker.java:302)
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketClientHandler.channelRead0(WebSocketClientHandler.java:69)
When I run com.amazon.neptune.gremlin.driver.example.NeptuneGremlinSigV4Example (from my machine over port-forwarding AND from the EC2 jumphost) I get:
java.util.concurrent.TimeoutException: Timed out while waiting for an available host - check the client configuration and connectivity to the server if this message persists
I am able to connect to my Neptune instance using the older, deprecated certificate mechanism. I am connecting through an EC2 jumphost with port-forwarding.
I believe the SigV4 signing itself is working, since in the Neptune audit logs I can see connection attempts with my AWS access key:
1584098990319, <jumphost_ip>:47390, <db_instance_ip>:8182, HTTP_GET, [unknown], [unknown], "HttpObjectAggregator$AggregatedFullHttpRequest(decodeResult: success, version: HTTP/1.1, content: CompositeByteBuf(ridx: 0, widx: 0, cap: 0, components=0)) GET /gremlin HTTP/1.1 upgrade: websocket connection: upgrade sec-websocket-key: g44zxck9hTI9cZrq05V19Q== sec-websocket-origin: http://localhost:8182 sec-websocket-version: 13 Host: localhost:8182 X-Amz-Date: 20200313T112950Z Authorization: AWS4-HMAC-SHA256 Credential=<my_access_key>/20200313/eu-west-2/neptune-db/aws4_request, SignedHeaders=host;sec-websocket-key;sec-websocket-origin;sec-websocket-version;upgrade;x-amz-date, Signature=<the_signature> content-length: 0", /gremlin
This is the policy that I created:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "neptune-db:*"
            ],
            "Resource": [
                "arn:aws:neptune-db:eu-west-2:<my_aws_account>:*/*"
            ]
        }
    ]
}
I have previously tried with a policy that references my cluster resource id.
I created a new API user with this policy attached as its only permission (I've tried this twice).
IAM is showing me that the graph-user I created has not successfully logged in (duh).
It seems that the issue is with the IAM setup somewhere along the line. Is it possible to get more information out of AWS about why the connection attempt is failing?
I am using the most recent release of Neptune and the 3.4.3 Gremlin Driver and console. I am using Java 8 when running the NeptuneGremlinSigV4Example and building the libraries to deploy to the console.
thanks

It appears from the audit log output that the SigV4 signature is being created with localhost as the Host header. This is most likely because you're connecting to Neptune through a proxy. By default, the NeptuneGremlinSigV4Example assumes that you're connecting directly to a Neptune endpoint and reuses that endpoint as the Host header when creating the signature.
To get around this, you can use the following example code that overrides this process and allows you to use a proxy and still sign the request properly.
https://github.com/aws-samples/amazon-neptune-samples/tree/master/gremlin/gremlin-java-client-demo
I was able to get this to work using the following.
Create an SSL tunnel from your local workstation to your EC2 jumphost:
ssh -i <key-pem-file> -L 8182:<neptune-endpoint>:8182 ec2-user@<ec2-jumphost-hostname>
Set the following environment variables:
export AWS_ACCESS_KEY_ID=<access_key>
export AWS_SECRET_ACCESS_KEY=<secret_key>
export SERVICE_REGION=<region_id> (e.g. us-west-2)
Once the tunnel is up and your environment variables are set, use the following format with the Gremlin-Java-Client-Demo:
java -jar target/gremlin-java-client-demo.jar --nlb-endpoint localhost --lb-port 8182 --neptune-endpoint <neptune-endpoint> --port 8182 --enable-ssl --enable-iam-auth
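To verify the tunnel, credentials, and IAM policy end to end without the WebSocket layer, you can also sign a plain HTTP request against the instance's /status endpoint. A minimal sketch, assuming the third-party awscurl tool (a pip package, not part of the AWS CLI), run from the jumphost so that the signed Host header matches the real endpoint:
pip install awscurl
# awscurl reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment
# and signs the request with SigV4 for the neptune-db service.
awscurl --service neptune-db --region eu-west-2 "https://<neptune-endpoint>:8182/status"
If this succeeds from the jumphost but the tunnelled WebSocket connection still fails, that points back at the Host header mismatch described above.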


rclone failing with "AccessControlListNotSupported" on cross-account copy -- AWS CLI Works

Quick Summary now that I think I see the problem:
rclone seems to always send an ACL with a copy request, with a default value of "private". This fails against a (2022) default AWS bucket, which (correctly) assumes "No ACL". I need a way to suppress the ACL send in rclone.
Detail
I assume an IAM role and attempt an rclone copy from a data-center Linux box to a default-options, private, no-ACL bucket in the same account as the role I assume. It succeeds.
I then configure a default-options, private, no-ACL bucket in an account other than the one whose role I assume. I attach a bucket policy to the cross-account bucket that trusts the role I assume. The role I assume has global permissions to write S3 buckets anywhere.
I test the cross-account bucket policy by using the AWS CLI to copy the same linux box source file to the cross-account bucket. Copy works fine with AWS CLI, suggesting that the connection and access permissions to the cross account bucket are fine. DataSync (another AWS service) works fine too.
Problem: an rclone copy fails with the AccessControlListNotSupported error below.
status code: 400, request id: XXXX, host id: ZZZZ
2022/08/26 16:47:29 ERROR : bigmovie: Failed to copy: AccessControlListNotSupported: The bucket does not allow ACLs
status code: 400, request id: XXXX, host id: YYYY
And of course it is true that the bucket does not support ACLs ... which is the desired best practice and the AWS default for new buckets. However, the bucket does support a bucket policy that trusts my assumed role, and that role and bucket-policy pair works just fine with the AWS CLI copy across accounts, but not with the rclone copy.
Given that AWS CLI copies just fine cross account to this bucket, am I missing one of rclone's numerous flags to get the same behaviour? Anyone think of another possible cause?
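One way to confirm the bucket state that triggers this error is to query its ownership controls; a bucket created with the 2022 defaults reports BucketOwnerEnforced, which is the setting that rejects any request carrying an ACL. A quick check with the AWS CLI (bucket name is a placeholder):
# BucketOwnerEnforced in the output means the bucket rejects all ACLs.
aws s3api get-bucket-ownership-controls --bucket SOMEBUCKET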
Tested older, current, and beta rclone versions; all behave the same.
Version Info
os/version: centos 7.9.2009 (64 bit)
os/kernel: 3.10.0-1160.71.1.el7.x86_64 (x86_64)
os/type: linux
os/arch: amd64
go/version: go1.18.5
go/linking: static
go/tags: none
Failing Command
$ rclone copy bigmovie s3-standard:SOMEBUCKET/bigmovie -vv
Failing RClone Config
type = s3
provider = AWS
env_auth = true
region = us-east-1
endpoint = https://bucket.vpce-REDACTED.s3.us-east-1.vpce.amazonaws.com
#server_side_encryption = AES256
storage_class = STANDARD
#bucket_acl = private
#acl = private
Note that I've tested all permutations of the commented-out lines with similar results.
Note that I have tested with and without the private endpoint listed, with the same results for both AWS CLI and rclone, i.e. CLI works, rclone fails.
A log from the command with the -vv flag
2022/08/25 17:25:55 DEBUG : Using config file from "PERSONALSTUFF/rclone.conf"
2022/08/25 17:25:55 DEBUG : rclone: Version "v1.55.1" starting with parameters ["/usr/local/rclone/1.55/bin/rclone" "copy" "bigmovie" "s3-standard:SOMEBUCKET" "-vv"]
2022/08/25 17:25:55 DEBUG : Creating backend with remote "bigmovie"
2022/08/25 17:25:55 DEBUG : fs cache: adding new entry for parent of "bigmovie", "MYDIRECTORY/testbed"
2022/08/25 17:25:55 DEBUG : Creating backend with remote "s3-standard:SOMEBUCKET/bigmovie"
2022/08/25 17:25:55 DEBUG : bigmovie: Need to transfer - File not found at Destination
2022/08/25 17:25:55 ERROR : bigmovie: Failed to copy: s3 upload: 400 Bad Request: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessControlListNotSupported</Code><Message>The bucket does not allow ACLs</Message><RequestId>8DW1MQSHEN6A0CFA</RequestId><HostId>d3Rlnx/XezTB7OC79qr4QQuwjgR+h2VYj4LCZWLGTny9YAy985be5HsFgHcqX4azSDhDXefLE+U=</HostId></Error>
2022/08/25 17:25:55 ERROR : Attempt 1/3 failed with 1 errors and: s3 upload: 400 Bad Request: <?xml version="1.0" encoding="UTF-8"?>

MinIO Signature Mismatch

I have set up MinIO behind a reverse proxy in EKS. Everything worked well until MinIO was updated to RELEASE.2021-11-03T03-36-36Z. Now I am getting the following error when trying to access my MinIO bucket using the mc command-line utility: mc: <ERROR> Unable to list folder. The request signature we calculated does not match the signature you provided. Check your key and signing method.
mc version is RELEASE.2021-11-16T20-37-36Z. When I port-forward the MinIO container to localhost and access it in a browser at http://localhost:9001 I can get to it, but I can't log in anymore. I get the error:
Invalid Login, 403 Forbidden. This is seen in my MinIO container logs.
It also logs the following:
API: SYSTEM()
Time: 03:19:57 UTC 11/23/2021
DeploymentID: 60a8ed7a-7448-4a3d-9220-ff823facd54e
Error: The request signature we calculated does not match the signature you provided. Check your key and signing method. (*errors.errorString)
requestHeaders={"method":"POST","reqURI":"/minio/admin/v3/update?updateURL=","header":{"Authorization":["AWS4-HMAC-SHA256 Credential=<credential-scrubbed>/20211123//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=37850012ca8d27498793c514aa826f1c29b19ceae96057b9d46e24599cc8081b"],"Connection":["keep-alive"],"Content-Length":["0"],"Host":["<host-info-scrubbed>"],"User-Agent":["MinIO (darwin; amd64) madmin-go/0.0.1 mc/RELEASE.2021-11-16T20-37-36Z"],"X-Amz-Content-Sha256":["<scrubbed>"],"X-Amz-Date":["20211123T031957Z"],"X-Forwarded-For":["10.192.57.142"],"X-Forwarded-Host":["<host-info-scurbbed>"],"X-Forwarded-Path":["/minio/admin/v3/update"],"X-Forwarded-Port":["80"],"X-Forwarded-Proto":["http"],"X-Real-Ip":["10.192.57.142"]}}
5: cmd/auth-handler.go:154:cmd.validateAdminSignature()
4: cmd/auth-handler.go:165:cmd.checkAdminRequestAuth()
3: cmd/admin-handler-utils.go:41:cmd.validateAdminReq()
2: cmd/admin-handlers.go:87:cmd.adminAPIHandlers.ServerUpdateHandler()
1: net/http/server.go:2046:http.HandlerFunc.ServeHTTP()
When checking the proxy logs (NGINX), I see:
10.192.57.142 - - [24/Nov/2021:21:18:17 +0000] "GET / HTTP/1.1" 403 334 "-" "MinIO (darwin; amd64) minio-go/v7.0.16 mc/RELEASE.2021-11-16T20-37-36Z"
Any suggestions or advice on what I can do to resolve this would be great! I'm using the mc client on OSX.
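Since the SigV4 signature covers the Host header (and the request path), any proxy that rewrites either of them will invalidate the signature; the X-Forwarded-Path header in the log above suggests NGINX may be rewriting the path. A hedged way to isolate the proxy, assuming MinIO runs as a Kubernetes service (service name and namespace are placeholders), is to test against MinIO directly over a port-forward:
# Bypass NGINX entirely by port-forwarding straight to the MinIO service.
kubectl -n <namespace> port-forward svc/<minio-service> 9000:9000 &
# Point mc at the forwarded port and try a simple operation.
mc alias set direct http://localhost:9000 <access-key> <secret-key>
mc ls direct
If the direct connection works, the signature mismatch is being introduced by the proxy configuration rather than by mc or MinIO itself.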

ERR_SSL_VERSION_OR_CIPHER_MISMATCH from AWS API Gateway into Lambda

I have set up a lambda and attached an API Gateway deployment to it. The tests in the gateway console all work fine. I created an AWS certificate for *.hazeapp.net. I created a custom domain in the API gateway and attached that certificate. In the Route 53 zone, I created the alias record and used the target that came up under API gateway (the only one available). I named the alias rest.hazeapp.net. My client gets the ERR_SSL_VERSION_OR_CIPHER_MISMATCH error. Curl indicates that the TLS server handshake failed, which agrees with the SSL error. Curl indicates that the certificate CA checks out.
Am I doing something wrong?
I had this problem when my DNS entry pointed directly to the API Gateway deployment rather than to the one backing the custom domain name.
To find the domain name to point to:
aws apigateway get-domain-name --domain-name "<YOUR DOMAIN>"
The response contains the domain name to use. In my case I had a Regional deployment so the result was:
{
    "domainName": "<DOMAIN_NAME>",
    "certificateUploadDate": 1553011117,
    "regionalDomainName": "<API_GATEWAY_ID>.execute-api.eu-west-1.amazonaws.com",
    "regionalHostedZoneId": "...",
    "regionalCertificateArn": "arn:aws:acm:eu-west-1:<ACCOUNT>:certificate/<CERT_ID>",
    "endpointConfiguration": {
        "types": [
            "REGIONAL"
        ]
    }
}
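For a Regional endpoint, the Route 53 alias record should target the regionalDomainName and regionalHostedZoneId from this response, not the deployment's own execute-api URL. A hedged sketch with the AWS CLI (the zone ID and names are placeholders to fill in from your own account):
# Upsert an alias A record pointing rest.hazeapp.net at the custom domain's
# regional target, using the values returned by get-domain-name above.
aws route53 change-resource-record-sets --hosted-zone-id <YOUR_ZONE_ID> --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "rest.hazeapp.net",
      "Type": "A",
      "AliasTarget": {
        "DNSName": "<API_GATEWAY_ID>.execute-api.eu-west-1.amazonaws.com",
        "HostedZoneId": "<regionalHostedZoneId>",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'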

How to use Minio Client with a Minio server in Arch?

I want to be able to use the MinIO client with a MinIO server on Arch Linux.
I use Arch with MinIO version:
$ minio version
Version: DEVELOPMENT.GOGET
Release-Tag: DEVELOPMENT.GOGET
Commit-ID: DEVELOPMENT.GOGET
and Minio client version
$ mcli version
Version: 2018-03-23T17:45:52Z
Release-tag: DEVELOPMENT.2018-03-23T17-45-52Z
Commit-id: fe82b0381c5ccf004515be3bfd9b6fa733890005
When I try to add the host configuration, using the endpoint and credentials shown when starting the MinIO server, I get:
$ mcli config host add myminio <YOUR-S3-ENDPOINT> <YOUR-ACCESS-KEY> <YOUR-SECRET-KEY>
mcli: Configuration written to `/home/user/.mc/config.json`. Please update your access credentials.
mcli: Successfully created `/home/user/.mc/share`.
mcli: Initialized share uploads `/home/user/.mc/share/uploads.json` file.
mcli: Initialized share downloads `/home/user/.mc/share/downloads.json` file.
mcli: <ERROR> Unable to initialize new config from the provided credentials The request signature we calculated does not match the signature you provided. Check your key and signing method
On the server I get the following printout:
ERRO[0529] {
    "method": "GET",
    "reqURI": "/probe-bucket-sign/?location=",
    "header": {
        "Authorization": ["AWS4-HMAC-SHA256 Credential=<Credentials>/20180323/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=<signature>"],
        "Host": ["<YOUR-S3-ENDPOINT>"],
        "User-Agent": ["Minio (linux; amd64) minio-go/5.0.1"],
        "X-Amz-Content-Sha256": ["<...>"],
        "X-Amz-Date": ["20180323T174913Z"]
    }
}
cause=Signature does not match source=[auth-handler.go:122:checkRequestAuthType()]
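Signature mismatches at this step are often caused by the endpoint or the signature version rather than by the keys themselves. A hedged re-try, spelling out the scheme, port, and V4 signature explicitly (assuming this mcli build supports the --api flag, and with placeholder credentials):
# Include the scheme and port in the endpoint, and force the SigV4 signature.
mcli config host add myminio http://localhost:9000 <YOUR-ACCESS-KEY> <YOUR-SECRET-KEY> --api S3v4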

not authorized to perform: rds:DescribeDBEngineVersions

I implemented a REST API in Django with django-rest-framework; on localhost it works fine, with successful results.
When pushing this up to an existing AWS Elastic Beanstalk instance, I received:
{
"detail": "Authentication credentials were not provided."
}
As a solution I followed this question: Authorization Credentials Stripped.
But when I push my code to AWS EB I get this error:
Pipeline failed with error "Service:AmazonRDS, is not authorized to perform: rds:DescribeDBEngineVersions"
I have tried many solutions, but I get this error every time.
Note: I am using Python 3.6.
I found the answer to my problem.
I set the RDS policy and created a new custom_wsgi.config file in the .ebextensions directory with the following content:
files:
  "/etc/httpd/conf.d/wsgihacks.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      WSGIPassAuthorization On
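The rds:DescribeDBEngineVersions denial itself comes from the role that Elastic Beanstalk uses when it manages an attached RDS instance. A hedged sketch of granting that permission with a managed policy (the role name assumes the default EB service role; substitute whichever role appears in your error):
# Attach read-only RDS permissions to the Elastic Beanstalk service role.
aws iam attach-role-policy --role-name aws-elasticbeanstalk-service-role --policy-arn arn:aws:iam::aws:policy/AmazonRDSReadOnlyAccess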