How to use the Minio Client with a Minio server on Arch?

I want to be able to use the Minio client with a Minio server on Arch.
I am running Arch with this Minio version:
$ minio version
Version: DEVELOPMENT.GOGET
Release-Tag: DEVELOPMENT.GOGET
Commit-ID: DEVELOPMENT.GOGET
and this Minio client version:
$ mcli version
Version: 2018-03-23T17:45:52Z
Release-tag: DEVELOPMENT.2018-03-23T17-45-52Z
Commit-id: fe82b0381c5ccf004515be3bfd9b6fa733890005
When I try to add the host entry to the config file, using the command shown when the Minio server starts, I get:
$ mcli config host add myminio <YOUR-S3-ENDPOINT> <YOUR-ACCESS-KEY> <YOUR-SECRET-KEY>
mcli: Configuration written to `/home/user/.mc/config.json`. Please update your access credentials.
mcli: Successfully created `/home/user/.mc/share`.
mcli: Initialized share uploads `/home/user/.mc/share/uploads.json` file.
mcli: Initialized share downloads `/home/user/.mc/share/downloads.json` file.
mcli: <ERROR> Unable to initialize new config from the provided credentials The request signature we calculated does not match the signature you provided. Check your key and signing method
On the server I get this printout:
ERRO[0529] {
"method": "GET",
"reqURI": "/probe-bucket-sign/?location=",
"header": {
"Authorization": ["AWS4-HMAC-SHA256 Credential=<Creadentisl>/20180323/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=<signature>"],
"Host": ["<YOUR-S3-ENDPOINT>"],
"User-Agent": ["Minio (linux; amd64) minio-go/5.0.1"],
"X-Amz-Content-Sha256": ["<...>"],
"X-Amz-Date": ["20180323T174913Z"]
}
}
cause=Signature does not match source=[auth-handler.go:122:checkRequestAuthType()]
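For what it's worth, this particular error usually means the secret the client signs with differs from the one the server holds: a trailing space, a shell-expanded character such as $ or !, or a copy-paste artifact in the key. A minimal re-check sketch, assuming a local server on the default port 9000 and that MINIO_ACCESS_KEY / MINIO_SECRET_KEY hold the values the server printed at startup; the quoting keeps the shell from mangling the secret:
$ mcli config host add myminio http://localhost:9000 "$MINIO_ACCESS_KEY" "$MINIO_SECRET_KEY"
$ timedatectl status   # SigV4 signatures embed a timestamp, so large clock skew between client and server also breaks them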

Related

Setting up S3 compatible service for blob storage on Google Cloud Storage

PS: cross-posted on the Drone forums here.
I'm trying to set up an S3-like service for Drone logs. I've tested that my AWS_* values are set correctly in the container, and running aws-cli from inside the container gives correct output for:
aws s3api list-objects --bucket drone-logs --endpoint-url=https://storage.googleapis.com
However, the Drone server itself is unable to upload logs to the bucket, failing with the following error:
{"error":"InvalidArgument: Invalid argument.\n\tstatus code: 400, request id: , host id: ","level":"warning","msg":"manager: cannot upload complete logs","step-id":7,"time":"2023-02-09T12:26:16Z"}
On startup, the Drone server shows that the S3-related configuration was picked up correctly:
rpc:
  server: ""
  secret: my-secret
  debug: false
  host: drone.XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  proto: https
s3:
  bucket: drone-logs
  prefix: ""
  endpoint: https://storage.googleapis.com
  pathstyle: true
The environment variables inside the Drone server container are:
# env | grep -E 'DRONE|AWS' | sort
AWS_ACCESS_KEY_ID=GOOGXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
AWS_DEFAULT_REGION=us-east-1
AWS_REGION=us-east-1
AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_COOKIE_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_DATABASE_DATASOURCE=postgres://drone:XXXXXXXXXXXXXXXXXXXXXXXXXXXXX@35.XXXXXX.XXXX:5432/drone?sslmode=disable
DRONE_DATABASE_DRIVER=postgres
DRONE_DATABASE_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_GITHUB_CLIENT_ID=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_GITHUB_CLIENT_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_JSONNET_ENABLED=true
DRONE_LOGS_DEBUG=true
DRONE_LOGS_TRACE=true
DRONE_RPC_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_S3_BUCKET=drone-logs
DRONE_S3_ENDPOINT=https://storage.googleapis.com
DRONE_S3_PATH_STYLE=true
DRONE_SERVER_HOST=drone.XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_SERVER_PROTO=https
DRONE_STARLARK_ENABLED=true
The .drone.yaml being used is available here, on GitHub.
The server binary is built with the nolimit build tag:
go build -tags "nolimit" github.com/drone/drone/cmd/drone-server
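One way to narrow this down: the empty request id in the error and the "cannot upload complete logs" message hint that the failure happens in the S3 multipart-upload path rather than a simple PutObject (the AWS SDK's upload manager switches to multipart above a size threshold). A hedged probe that exercises the multipart API directly against the GCS endpoint, reusing the bucket from the question; probe-object is a placeholder key:
$ aws s3api create-multipart-upload --bucket drone-logs --key probe-object --endpoint-url=https://storage.googleapis.com
If this probe fails with a similar InvalidArgument, the limitation is in the endpoint's S3 interoperability layer rather than in the Drone configuration.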

rclone failing with "AccessControlListNotSupported" on cross-account copy -- AWS CLI Works

Quick summary, now that I think I see the problem:
rclone seems to always send an ACL with a copy request, defaulting to "private". This fails against a (2022) default AWS bucket, which (correctly) disables ACLs. I need a way to suppress sending the ACL in rclone.
Detail
I assume an IAM role and attempt to do an rclone copy from a data center Linux box to a default options private no-ACL bucket in the same account as the role I assume. It succeeds.
I then configure a default options private no-ACL bucket in another account than the role I assume. I attach a bucket policy to the cross-account bucket that trusts the role I assume. The role I assume has global permissions to write S3 buckets anywhere.
I test the cross-account bucket policy by using the AWS CLI to copy the same Linux-box source file to the cross-account bucket. The copy works fine with the AWS CLI, suggesting that connectivity and access permissions to the cross-account bucket are fine. DataSync (another AWS service) works fine too.
Problem: an rclone copy fails with the AccessControlListNotSupported error below.
2022/08/26 16:47:29 ERROR : bigmovie: Failed to copy: AccessControlListNotSupported: The bucket does not allow ACLs
    status code: 400, request id: XXXX, host id: YYYY
And of course it is true that the bucket does not allow ACLs ... which is the desired best practice and the AWS default for new buckets. However, the bucket does have a bucket policy that trusts my assumed role, and that role and bucket policy pair works just fine with the AWS CLI copy across accounts, but not with the rclone copy.
Given that the AWS CLI copies just fine cross-account to this bucket, am I missing one of rclone's numerous flags to get the same behaviour? Can anyone think of another possible cause?
I tested older, current, and beta rclone versions; all behave the same.
Version Info
os/version: centos 7.9.2009 (64 bit)
os/kernel: 3.10.0-1160.71.1.el7.x86_64 (x86_64)
os/type: linux
os/arch: amd64
go/version: go1.18.5
go/linking: static
go/tags: none
Failing Command
$ rclone copy bigmovie s3-standard:SOMEBUCKET/bigmovie -vv
Failing rclone Config
type = s3
provider = AWS
env_auth = true
region = us-east-1
endpoint = https://bucket.vpce-REDACTED.s3.us-east-1.vpce.amazonaws.com
#server_side_encryption = AES256
storage_class = STANDARD
#bucket_acl = private
#acl = private
Note that I've tested all permutations of the commented-out lines with similar results.
Note that I have tested with and without the private endpoint listed, with the same results for both the AWS CLI and rclone: the CLI works, rclone fails.
A log from the command with the -vv flag:
2022/08/25 17:25:55 DEBUG : Using config file from "PERSONALSTUFF/rclone.conf"
2022/08/25 17:25:55 DEBUG : rclone: Version "v1.55.1" starting with parameters ["/usr/local/rclone/1.55/bin/rclone" "copy" "bigmovie" "s3-standard:SOMEBUCKET" "-vv"]
2022/08/25 17:25:55 DEBUG : Creating backend with remote "bigmovie"
2022/08/25 17:25:55 DEBUG : fs cache: adding new entry for parent of "bigmovie", "MYDIRECTORY/testbed"
2022/08/25 17:25:55 DEBUG : Creating backend with remote "s3-standard:SOMEBUCKET/bigmovie"
2022/08/25 17:25:55 DEBUG : bigmovie: Need to transfer - File not found at Destination
2022/08/25 17:25:55 ERROR : bigmovie: Failed to copy: s3 upload: 400 Bad Request: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessControlListNotSupported</Code><Message>The bucket does not allow ACLs</Message><RequestId>8DW1MQSHEN6A0CFA</RequestId><HostId>d3Rlnx/XezTB7OC79qr4QQuwjgR+h2VYj4LCZWLGTny9YAy985be5HsFgHcqX4azSDhDXefLE+U=</HostId></Error>
2022/08/25 17:25:55 ERROR : Attempt 1/3 failed with 1 errors and: s3 upload: 400 Bad Request: <?xml version="1.0" encoding="UTF-8"?>
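A hedged sketch of the workaround usually suggested for this: newer rclone releases treat an acl option that is explicitly set to the empty string as "send no ACL header at all" (the failing run above is v1.55.1; check your version's S3 backend docs for the exact behaviour). Under that assumption, the remote config would become:
[s3-standard]
type = s3
provider = AWS
env_auth = true
region = us-east-1
storage_class = STANDARD
# Explicitly empty: recent rclone then omits the X-Amz-Acl header entirely.
acl =
bucket_acl =
After upgrading and blanking the ACL options, the same command ($ rclone copy bigmovie s3-standard:SOMEBUCKET/bigmovie -vv) should no longer trigger the AccessControlListNotSupported error.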

MinIO Signature Mismatch

I have set up MinIO behind a reverse proxy in EKS. Everything worked well until MinIO was updated to RELEASE.2021-11-03T03-36-36Z. Now I am getting the following error when trying to access my MinIO bucket using the mc command-line utility: mc: <ERROR> Unable to list folder. The request signature we calculated does not match the signature you provided. Check your key and signing method.
mc version is RELEASE.2021-11-16T20-37-36Z. When I port-forward the MinIO container to localhost and open http://localhost:9001 in a browser, I can reach the console but can no longer log in; I get the error `Invalid Login, 403 Forbidden`.
The MinIO container also logs the following:
API: SYSTEM()
Time: 03:19:57 UTC 11/23/2021
DeploymentID: 60a8ed7a-7448-4a3d-9220-ff823facd54e
Error: The request signature we calculated does not match the signature you provided. Check your key and signing method. (*errors.errorString)
requestHeaders={"method":"POST","reqURI":"/minio/admin/v3/update?updateURL=","header":{"Authorization":["AWS4-HMAC-SHA256 Credential=<credential-scrubbed>/20211123//s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=37850012ca8d27498793c514aa826f1c29b19ceae96057b9d46e24599cc8081b"],"Connection":["keep-alive"],"Content-Length":["0"],"Host":["<host-info-scrubbed>"],"User-Agent":["MinIO (darwin; amd64) madmin-go/0.0.1 mc/RELEASE.2021-11-16T20-37-36Z"],"X-Amz-Content-Sha256":["<scrubbed>"],"X-Amz-Date":["20211123T031957Z"],"X-Forwarded-For":["10.192.57.142"],"X-Forwarded-Host":["<host-info-scurbbed>"],"X-Forwarded-Path":["/minio/admin/v3/update"],"X-Forwarded-Port":["80"],"X-Forwarded-Proto":["http"],"X-Real-Ip":["10.192.57.142"]}}
5: cmd/auth-handler.go:154:cmd.validateAdminSignature()
4: cmd/auth-handler.go:165:cmd.checkAdminRequestAuth()
3: cmd/admin-handler-utils.go:41:cmd.validateAdminReq()
2: cmd/admin-handlers.go:87:cmd.adminAPIHandlers.ServerUpdateHandler()
1: net/http/server.go:2046:http.HandlerFunc.ServeHTTP()
When checking the proxy logs (NGINX), I see:
10.192.57.142 - - [24/Nov/2021:21:18:17 +0000] "GET / HTTP/1.1" 403 334 "-" "MinIO (darwin; amd64) minio-go/v7.0.16 mc/RELEASE.2021-11-16T20-37-36Z"
Any suggestions or advice on what I can do to resolve this would be great! I'm using the mc client on OSX.
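Two details in the logs stand out. The credential scope in the Authorization header (.../20211123//s3/aws4_request) has an empty region field, and the request arrives with X-Forwarded-* headers, i.e. through the proxy. SigV4 signs the Host header, so if NGINX rewrites Host on the way through, the signature MinIO recomputes will never match. A hedged sketch of the proxy directives MinIO's reverse-proxy documentation recommends; the upstream address is a placeholder:
location / {
    proxy_set_header Host $http_host;   # pass the client's Host through unchanged
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://minio-service:9000;   # placeholder upstream
}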

400 bad request when attempting connection to AWS Neptune with IAM enabled

I am unable to connect to a Neptune instance that has IAM enabled. I have followed the AWS documentation (correcting a few of my silly errors along the way) but without luck.
When I connect via my Java application using the SigV4Signer, and also when I use the Gremlin console, I get a 400 Bad Request WebSocket error:
o.a.t.g.d.Handler$GremlinResponseHandler : Could not process the response
io.netty.handler.codec.http.websocketx.WebSocketHandshakeException: Invalid handshake response getStatus: 400 Bad Request
at io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker13.verify(WebSocketClientHandshaker13.java:267)
at io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker.finishHandshake(WebSocketClientHandshaker.java:302)
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketClientHandler.channelRead0(WebSocketClientHandler.java:69)
When I run com.amazon.neptune.gremlin.driver.example.NeptuneGremlinSigV4Example (from my machine over port-forwarding AND from the EC2 jumphost) I get:
java.util.concurrent.TimeoutException: Timed out while waiting for an available host - check the client configuration and connectivity to the server if this message persists
I am able to connect to my Neptune instance using the older, deprecated certificate mechanism. I am using an EC2 jumphost instance and port-forwarding.
I believe the SigV4 aspect is working, since in the Neptune audit logs I can see attempts to connect with the aws_access_key:
1584098990319, <jumphost_ip>:47390, <db_instance_ip>:8182, HTTP_GET, [unknown], [unknown], "HttpObjectAggregator$AggregatedFullHttpRequest(decodeResult: success, version: HTTP/1.1, content: CompositeByteBuf(ridx: 0, widx: 0, cap: 0, components=0)) GET /gremlin HTTP/1.1 upgrade: websocket connection: upgrade sec-websocket-key: g44zxck9hTI9cZrq05V19Q== sec-websocket-origin: http://localhost:8182 sec-websocket-version: 13 Host: localhost:8182 X-Amz-Date: 20200313T112950Z Authorization: AWS4-HMAC-SHA256 Credential=<my_access_key>/20200313/eu-west-2/neptune-db/aws4_request, SignedHeaders=host;sec-websocket-key;sec-websocket-origin;sec-websocket-version;upgrade;x-amz-date, Signature=<the_signature> content-length: 0", /gremlin
This is the policy that I created:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "neptune-db:*"
      ],
      "Resource": [
        "arn:aws:neptune-db:eu-west-2:<my_aws_account>:*/*"
      ]
    }
  ]
}
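For reference, the Neptune IAM documentation scopes the Resource by the cluster resource ID (shown on the cluster's Configuration tab) rather than by name; a hedged sketch of that narrower entry, with the ID as a placeholder:
"Resource": [
    "arn:aws:neptune-db:eu-west-2:<my_aws_account>:<cluster-resource-id>/*"
]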
I have previously tried with a policy that references my cluster resource ID.
I created a new API user with this policy attached as its only permission. (I've tried this twice.)
IAM shows me that the graph-user I created has not successfully logged in (duh).
It seems the issue is with the IAM set-up somewhere along the line. Is it possible to get more information out of AWS about why the connection attempt is failing?
I am using the most recent release of Neptune, with the 3.4.3 Gremlin driver and console. I am using Java 8 when running the NeptuneGremlinSigV4Example and when building the libraries to deploy to the console.
Thanks.
It appears from the audit log output that the SigV4 Signature that is being created is using localhost as the Host header. This is most likely due to the fact that you're using a proxy to connect to Neptune. By default, the NeptuneGremlinSigV4Example assumes that you're connecting directly to a Neptune endpoint and reuses the endpoint as the Host header in creating the Signature.
To get around this, you can use the following example code that overrides this process and allows you to use a proxy and still sign the request properly.
https://github.com/aws-samples/amazon-neptune-samples/tree/master/gremlin/gremlin-java-client-demo
I was able to get this to work using the following.
Create an SSH tunnel from your local workstation to your EC2 jumphost:
ssh -i <key-pem-file> -L 8182:<neptune-endpoint>:8182 ec2-user@<ec2-jumphost-hostname>
Set the following environment variables:
export AWS_ACCESS_KEY_ID=<access_key>
export AWS_SECRET_ACCESS_KEY=<secret_key>
export SERVICE_REGION=<region_id>   # e.g. us-west-2
Once the tunnel is up and your environment variables are set, use the following format with the Gremlin-Java-Client-Demo:
java -jar target/gremlin-java-client-demo.jar --nlb-endpoint localhost --lb-port 8182 --neptune-endpoint <neptune-endpoint> --port 8182 --enable-ssl --enable-iam-auth
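As a side note, a hedged connectivity check once the tunnel is up: an unsigned request to Neptune's status endpoint should come back with an HTTP 403 authentication error rather than a connection failure, which separates networking problems from signing problems (-k because the certificate will not match localhost):
$ curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:8182/status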

How do I execute a curl request in Apache NiFi using either InvokeHTTP or ExecuteStreamCommand Processor?

So I am having difficulty sending a curl request to Hive. I want to take the JSON flowfile that I have created and send it as a command to Hive, but I keep getting errors when I try to configure the InvokeHTTP processor. For reference, here is my workflow as it currently stands:
ReplaceText -> UpdateAttribute -> InvokeHTTP -> Put processor
I have mostly tried to get the InvokeHTTP processor to work. The configuration I have is:
1. HTTP Method: POST
2. Remote URL: ${https://hive-prod-1.sample_text/alert}
3. SSL Context Service: StandardSSLContextService
4. Proxy Type: https
   - Content-Type: application/json
I then added a property:
5. curl: curl -X POST -H "Authorization: Bearer xWJbexxxxxxxx" -H "Content-Type: application/json"
I am not sure if my configuration is incorrect or if there is another issue going on.
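For what it's worth, running the equivalent request from a shell first can separate endpoint and auth problems from NiFi configuration problems. A hedged sketch, with the URL and token taken from the question and flowfile.json standing in for the flow-file content:
$ curl -X POST "https://hive-prod-1.sample_text/alert" \
    -H "Authorization: Bearer xWJbexxxxxxxx" \
    -H "Content-Type: application/json" \
    -d @flowfile.json
If this works from the command line, the remaining issue is almost certainly the InvokeHTTP configuration (Remote URL should be the bare URL, not wrapped in ${...}, and a request header is added as a dynamic property named after the header, e.g. Authorization, rather than a property named curl).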
When I tried to use/configure ExecuteStreamCommand:
1. Command Arguments: curl -X POST -H "Authorization: xxxxx" -H "Content-Type: application/json"
2. Command Path: application/json
3. Argument Delimiter: ;
Again, I am not sure whether the configuration is correct for either of these processors or whether it has something to do with a cert. When I run it I also get the error message `java.lang.IllegalStateException: TrustManagerFactory is not initialized`.
It sounds as if you have not successfully or completely configured the SSLContextService, which InvokeHTTP requires when connecting to a service that uses TLS. Your Hive instance is protected with TLS, so you need to obtain the public certificate of the Hive instance (you can do this via a browser, using openssl s_client, etc.), load the public certificate into a Java KeyStore (JKS) formatted truststore file as a trustedCertEntry, and then point the SSLContextService at that truststore file, as sketched below. For more information, see the first section of Tomas Zezula's article on NiFi SSL configuration.
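A hedged sketch of those certificate steps from a shell; the host name and file names are placeholders based on the question, and the truststore password is an example:
# Fetch the server's public certificate:
$ openssl s_client -connect hive-prod-1.sample_text:443 -servername hive-prod-1.sample_text </dev/null | openssl x509 -outform PEM > hive.pem
# Import it into a JKS truststore as a trustedCertEntry:
$ keytool -importcert -alias hive -file hive.pem -keystore truststore.jks -storepass changeit -noprompt
In the StandardSSLContextService, set Truststore Filename to the path of truststore.jks, Truststore Type to JKS, and Truststore Password to the one used above; that should also clear the `TrustManagerFactory is not initialized` error.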