How to configure the artifact store of the MLflow tracking service to connect to MinIO S3 using a MinIO STS-generated access_key, secret_key and session_token - amazon-s3

MinIO is configured with LDAP, and I am generating user credentials with AssumeRoleWithLDAPIdentity using the STS API (reference).
From the returned values, I'm setting the variables AWS_ACCESS_KEY, AWS_SECRET_KEY and AWS_SESSION_TOKEN (reference).
I'm getting an error when trying to push a model to MLflow for storage in the MinIO artifact store:
S3UploadFailedError: Failed to upload /tmp/tmph68xubhm/model/MLmodel to mlflow/1/xyz/artifacts/model/MLmodel: An error occurred (InvalidTokenId) when calling the PutObject operation: The security token included in the request is invalid
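For reference, a minimal sketch of how the STS values would be wired up before logging the model, assuming a hypothetical MinIO endpoint. Note that boto3 (which MLflow uses for S3 uploads) only reads `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`; credentials exported as `AWS_ACCESS_KEY` / `AWS_SECRET_KEY` are silently ignored, which would match the InvalidTokenId symptom:

```python
import os

# boto3 reads these exact variable names; AWS_ACCESS_KEY and
# AWS_SECRET_KEY are NOT recognized, so STS credentials set under
# those names never reach the upload call.
os.environ["AWS_ACCESS_KEY_ID"] = "<sts-access-key>"
os.environ["AWS_SECRET_ACCESS_KEY"] = "<sts-secret-key>"
os.environ["AWS_SESSION_TOKEN"] = "<sts-session-token>"

# Point boto3 at MinIO instead of AWS S3 (hypothetical endpoint).
os.environ["MLFLOW_S3_ENDPOINT_URL"] = "http://minio.example.com:9000"
```

With these set in the environment of the process that calls `mlflow.log_artifact` / `log_model`, the upload should authenticate against MinIO with the temporary STS credentials.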

Related

DataFlow :missing required authentication credential

I am getting the following error while running a Dataflow pipeline:
Error reporting inventory checksum: code: "Unauthenticated", message: "Request is missing required authentication credential.
Expected OAuth 2 access token, login cookie or other valid authentication credential.
We have created a service account dataflow@12345678.iam.gserviceaccount.com with the following roles:
BigQuery Data Editor
Cloud KMS CryptoKey Decrypter
Dataflow Worker
Logs Writer
Monitoring Metric Writer
Pub/Sub Subscriber
Pub/Sub Viewer
Storage Object Creator
And in our Python code we are using import google.auth.
Any idea what I am missing here?
I don't believe I need to create a key for the service account; however, I'm not sure whether an "OAuth 2 access token" for the service account needs to be created, and if so, how.
This was the issue in my case: https://cloud.google.com/dataflow/docs/guides/common-errors#lookup-policies
If you are trying to access a service over HTTP with a custom request (not using a client library), you can obtain an OAuth2 token for that service account from the metadata server of the worker VM. See this example for Cloud Run; you can use the same code snippet in Dataflow to get a token and use it with your custom HTTP request:
https://cloud.google.com/run/docs/authenticating/service-to-service#acquire-token
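The linked snippet boils down to something like the following sketch. It only works when run on GCP infrastructure (e.g. a Dataflow worker VM), where the metadata server is reachable:

```python
import json
import urllib.request


def fetch_metadata_token():
    """Fetch an OAuth2 access token for the VM's default service account
    from the GCE metadata server (reachable only from inside GCP, e.g. on
    a Dataflow worker VM)."""
    url = ("http://metadata.google.internal/computeMetadata/v1/"
           "instance/service-accounts/default/token")
    # The Metadata-Flavor header is required, or the server rejects the call.
    req = urllib.request.Request(url, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]


# The token is then attached to the custom HTTP request:
# headers = {"Authorization": f"Bearer {fetch_metadata_token()}"}
```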

ClusterAuthorizationException in Kafka while creating topic

Our Confluent Kafka is installed on AWS EC2. We are using SASL/SSL security protocol and LDAP for user authentication.
The following exception occurs when trying to create a topic:
ERROR [KafkaApi-0] Error when handling request: clientId=2, correlationId=0, api=UPDATE_METADATA, body={controller_id=2,controller_epoch=1,broker_epoch=8589934650,topic_states=[],live_brokers=[{id=2,end_points=[{port=9092,host=dfdp-080060041.dfdp.com,listener_name=PLAINTEXT,security_protocol_type=0},{port=9093,host=dfdp-080060041.dfdp.com,listener_name=SASL_SSL,security_protocol_type=3}],rack=null},{id=1,end_points=[{port=9092,host=dfdp-080060025.dfdp.com,listener_name=PLAINTEXT,security_protocol_type=0},{port=9093,host=dfdp-080060025.dfdp.com,listener_name=SASL_SSL,security_protocol_type=3}],rack=null},{id=0,end_points=[{port=9092,host=dfdp-080060013.dfdp.com,listener_name=PLAINTEXT,security_protocol_type=0},{port=9093,host=dfdp-080060013.dfdp.com,listener_name=SASL_SSL,security_protocol_type=3}],rack=null}]} (kafka.server.KafkaApis)
org.apache.kafka.common.errors.ClusterAuthorizationException: Request Request(processor=3, connectionId=10.80.60.13:9093-10.80.60.41:53554-0, session=Session(User:$BEB000-DRJTO9PK3C7L,dfdp-080060041.dfdp.com/10.80.60.41), listenerName=ListenerName(SASL_SSL), securityProtocol=SASL_SSL, buffer=null) is not authorized

S3 access denied when trying to run aws cli

Using the AWS CLI, I'm trying to run
aws cloudformation create-stack --stack-name FullstackLambda --template-url https://s3-us-west-2.amazonaws.com/awsappsync/resources/lambda/LambdaCFTemplate.yam --capabilities CAPABILITY_NAMED_IAM --region us-west-2
but I get the error
An error occurred (ValidationError) when calling the CreateStack operation: S3 error: Access Denied
I have already set my credentials with
aws configure
PS I got the create-stack command from the AppSync docs (https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-lambda-resolvers.html)
Looks like you accidentally dropped the letter l at the end of the template file name:
LambdaCFTemplate.yam -> LambdaCFTemplate.yaml
First make sure the S3 URL is correct, but since this is a 403, I doubt that's the case.
Yours could result from a few different scenarios:
1. If both the APIs and the IAM user are MFA-protected, you have to generate temporary credentials using aws sts get-session-token and use them.
2. Use a role to give CloudFormation read access to the template object in S3. First create an IAM role with read access to S3, then create a parameter like the one below and reference it in the resource's IamInstanceProfile block:
"InstanceProfile": {
    "Description": "Instance Profile Name",
    "Type": "String",
    "Default": "iam-test-role"
}

Appveyor cannot upload to S3

I've got an S3 access key and secret set up, and I've tried the credentials locally with the AWS CLI. However, when run on AppVeyor, the deployment gets Access Denied as follows:
Deploying using S3 provider
Uploading artifact "NOpenType/bin/Release/NOpenType.0.1.4-ci0187.nupkg" (25,708 bytes) to S3 bucket "nrasterizer-artifacts" as "master/NOpenType/bin/Release/NOpenType.0.1.4-ci0187.nupkg"
Access Denied
How do I resolve this and let AppVeyor upload to my bucket?
This could be due to any number of reasons:
1. Is the S3 provider properly configured? Obvious, but please recheck the key & secret, bucket names, etc.
2. Does the user have the appropriate permissions? You did mention that you tested the credentials locally, but there could be an S3 bucket policy that restricts uploads to a set of specific IP addresses.
As I was using the set_public: true setting, I needed the s3:PutObjectAcl permission in addition to s3:PutObject.
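An IAM policy statement granting both actions for the bucket mentioned above might look like the following sketch (scope the Resource to your own bucket):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::nrasterizer-artifacts/*"
    }
  ]
}
```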

Artifactory allow anonymous access for docker pull

I have enabled anonymous access for read operations on the Docker repository, but when I try to pull an image from the repository it asks for an authentication password.
What is the correct way to grant anonymous read access to a Docker repository? I have followed this documentation:
https://www.jfrog.com/confluence/display/RTF/Managing+Permissions
The following error is returned:
Error response from daemon: Get https://my-repo/v2/nginx/manifests/latest: unknown: Unauthorized