Our Confluent Kafka cluster is installed on AWS EC2. We are using the SASL/SSL security protocol with LDAP for user authentication.
The following exception occurs when trying to create a topic:
ERROR [KafkaApi-0] Error when handling request: clientId=2, correlationId=0, api=UPDATE_METADATA, body={controller_id=2,controller_epoch=1,broker_epoch=8589934650,topic_states=[],live_brokers=[{id=2,end_points=[{port=9092,host=dfdp-080060041.dfdp.com,listener_name=PLAINTEXT,security_protocol_type=0},{port=9093,host=dfdp-080060041.dfdp.com,listener_name=SASL_SSL,security_protocol_type=3}],rack=null},{id=1,end_points=[{port=9092,host=dfdp-080060025.dfdp.com,listener_name=PLAINTEXT,security_protocol_type=0},{port=9093,host=dfdp-080060025.dfdp.com,listener_name=SASL_SSL,security_protocol_type=3}],rack=null},{id=0,end_points=[{port=9092,host=dfdp-080060013.dfdp.com,listener_name=PLAINTEXT,security_protocol_type=0},{port=9093,host=dfdp-080060013.dfdp.com,listener_name=SASL_SSL,security_protocol_type=3}],rack=null}]} (kafka.server.KafkaApis)
org.apache.kafka.common.errors.ClusterAuthorizationException: Request Request(processor=3, connectionId=10.80.60.13:9093-10.80.60.41:53554-0, session=Session(User:$BEB000-DRJTO9PK3C7L,dfdp-080060041.dfdp.com/10.80.60.41), listenerName=ListenerName(SASL_SSL), securityProtocol=SASL_SSL, buffer=null) is not authorized
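The denied principal in the log (User:$BEB000-DRJTO9PK3C7L) belongs to the session on the SASL_SSL inter-broker listener, and UPDATE_METADATA is a controller-to-broker request, so one common cause is that an authorizer is enabled but the inter-broker principal is not authorized for ClusterAction (for example, it is not listed under super.users in server.properties). For reference, a minimal sketch of the client-side operation that triggers this, creating a topic over SASL_SSL with confluent-kafka's AdminClient (bootstrap host, SASL mechanism, and credentials are assumptions):

from confluent_kafka.admin import AdminClient, NewTopic

# Connection settings are placeholders for illustration; the broker-side
# authorization error above must be fixed first (e.g. via super.users).
admin = AdminClient({
    "bootstrap.servers": "dfdp-080060013.dfdp.com:9093",
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",           # assumption: PLAIN backed by LDAP
    "sasl.username": "myuser",           # placeholder LDAP user
    "sasl.password": "mypassword",       # placeholder
    "ssl.ca.location": "/path/to/ca.pem",
})

# create_topics() returns a dict of futures keyed by topic name.
futures = admin.create_topics([NewTopic("test-topic", num_partitions=3, replication_factor=3)])
for topic, future in futures.items():
    try:
        future.result()  # raises a KafkaException on failure
        print(f"created {topic}")
    except Exception as exc:
        print(f"failed to create {topic}: {exc}")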
I am getting the following error while running a Dataflow pipeline:
Error reporting inventory checksum: code: "Unauthenticated", message: "Request is missing required authentication credential.
Expected OAuth 2 access token, login cookie or other valid authentication credential.
We have created a service account dataflow#12345678.iam.gserviceaccount.com with the following roles:
BigQuery Data Editor
Cloud KMS CryptoKey Decrypter
Dataflow Worker
Logs Writer
Monitoring Metric Writer
Pub/Sub Subscriber
Pub/Sub Viewer
Storage Object Creator
And in our Python code we are using import google.auth.
Any idea what I am missing here?
I do not believe I need to create a key for the service account; however, I am not sure whether an "OAuth 2 access token" for the service account needs to be created, and if so, how?
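For what the question describes, no token normally has to be created by hand: under Application Default Credentials the library obtains and refreshes the OAuth 2 access token itself. A minimal sketch (the scope is an assumption):

import google.auth
import google.auth.transport.requests

# Application Default Credentials: on Dataflow workers this resolves to the
# worker service account via the metadata server, with no key file involved.
credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]  # assumed scope
)

# refresh() populates credentials.token with an OAuth 2 access token.
credentials.refresh(google.auth.transport.requests.Request())
print(credentials.token)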
This was the issue in my case: https://cloud.google.com/dataflow/docs/guides/common-errors#lookup-policies
If you are trying to access a service over HTTP with a custom request (not using a client library), you can obtain an OAuth2 token for that service account from the metadata server of the worker VM. See this example for Cloud Run; you can use the same code snippet in Dataflow to get a token and use it with your custom HTTP request:
https://cloud.google.com/run/docs/authenticating/service-to-service#acquire-token
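A sketch of that approach from inside a worker VM (the metadata endpoint is standard; the target URL is a placeholder):

import requests

TOKEN_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)

# The metadata server only answers requests that carry this header.
resp = requests.get(TOKEN_URL, headers={"Metadata-Flavor": "Google"})
resp.raise_for_status()
access_token = resp.json()["access_token"]

# Attach the token to the custom HTTP request (placeholder URL).
api_resp = requests.get(
    "https://example.googleapis.com/v1/resource",
    headers={"Authorization": f"Bearer {access_token}"},
)
print(api_resp.status_code)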
MinIO is configured with LDAP, and I am generating user credentials
with AssumeRoleWithLDAPIdentity using the STS API (reference).
From the above values, I'm setting the variables AWS_ACCESS_KEY, AWS_SECRET_KEY, and AWS_SESSION_TOKEN (reference).
I'm getting an error when trying to push a model to MLflow for storage in the MinIO artifact store:
S3UploadFailedError: Failed to upload /tmp/tmph68xubhm/model/MLmodel to mlflow/1/xyz/artifacts/model/MLmodel: An error occurred (InvalidTokenId) when calling the PutObject operation: The security token included in the request is invalid
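One way to narrow this down is to try the PutObject directly with boto3, bypassing MLflow, using the exact credentials returned by AssumeRoleWithLDAPIdentity. Note that boto3 reads AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the environment, not AWS_ACCESS_KEY and AWS_SECRET_KEY (endpoint and bucket below are placeholders):

import os
import boto3

# Explicitly pass the STS credentials; if relying on the environment instead,
# the names must be AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN.
s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.example.com:9000",  # placeholder MinIO endpoint
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
    aws_session_token=os.environ["AWS_SESSION_TOKEN"],
)

# If this also fails with InvalidTokenId, the STS credentials themselves
# (or the MinIO STS setup) are at fault rather than MLflow.
s3.put_object(Bucket="mlflow", Key="sanity-check.txt", Body=b"hello")

MLflow's S3 artifact client honors the same environment variables, plus MLFLOW_S3_ENDPOINT_URL for pointing it at the MinIO endpoint.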
I'm working on an ASP.NET Core project that will be deployed to AWS. Recently the client came back and requested that we pull a few values from AWS Secrets Manager. I'm using the permissions inherited from the IAM role associated with the EC2 instance that the service is deployed to.
In production use this service will be hosted by the client themselves on their own AWS account.
When I deploy to my own test AWS account the process works just fine, but when the client deploys to their own AWS account they get a 403 Forbidden response on the call to get the secret value. They have the secret and permissions policy set up the same way I do, but they still get the 403 error.
using Amazon.SecretsManager;
using Amazon.SecretsManager.Model;

// Uses the credentials inherited from the EC2 instance's IAM role.
AmazonSecretsManagerClient client = new AmazonSecretsManagerClient();

var secretRequest = new GetSecretValueRequest
{
    SecretId = "MySecretName"
};

// FAILS HERE with 403 Forbidden in the client's account
GetSecretValueResponse secretResponse = await client.GetSecretValueAsync(secretRequest);
It is an HttpRequestException with a message of "Response status code does not indicate success: 403 (Forbidden)."
My question isn't really a coding issue since this does work on my test AWS account. This seems like it must be an environment issue with the client's AWS account.
My experience with AWS is very limited, so I have no idea what could be causing this.
Is the customer trying to fetch the exact same secret you are using in your account? Cross-account access would require encrypting the secret with a customer-managed KMS key (CMK) and adding a resource policy granting access, as described in the AWS docs.
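If that cross-account scenario is the case, a sketch of attaching such a resource policy (shown in Python with boto3 for brevity; the account ID and names are placeholders, and the CMK's key policy must also grant kms:Decrypt to the other account):

import json
import boto3

secrets = boto3.client("secretsmanager")

# Placeholder policy letting another account (111122223333) read this secret.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "secretsmanager:GetSecretValue",
        "Resource": "*",
    }],
}

secrets.put_resource_policy(
    SecretId="MySecretName",
    ResourcePolicy=json.dumps(policy),
)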
I have enabled anonymous access for read operations on the Docker repository. But when I try to pull an image from the repository, it asks for authentication.
What is the correct way to set anonymous read access on a Docker repository? I have followed this documentation:
https://www.jfrog.com/confluence/display/RTF/Managing+Permissions
Following is the error:
Error response from daemon: Get https://my-repo/v2/nginx/manifests/latest: unknown: Unauthorized
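One way to see what the daemon is actually hitting is to probe the Docker Registry v2 endpoints anonymously and inspect the status codes and the WWW-Authenticate challenge (the host below is a placeholder):

import requests

BASE = "https://my-repo"  # placeholder registry host

# /v2/ is the Registry API entry point; with token auth it typically returns
# 401 plus a WWW-Authenticate: Bearer challenge naming the token endpoint.
ping = requests.get(f"{BASE}/v2/")
print(ping.status_code, ping.headers.get("WWW-Authenticate"))

# The manifest request the docker daemon makes for nginx:latest.
manifest = requests.get(
    f"{BASE}/v2/nginx/manifests/latest",
    headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
)
print(manifest.status_code, manifest.reason)

If anonymous pulls are configured correctly, requesting a token anonymously from the challenge's realm should succeed; if that still returns Unauthorized, the anonymous permission target probably does not cover this repository.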
Can anyone tell me why WSO2 API Manager does not authenticate? I have set up two WSO2 API Manager 1.8.0 instances and created an API. It works fine as a prototyped API, but after saving and publishing it and calling the API with an access token,
I get the following response:
<ams:fault xmlns:ams="http://wso2.org/apimanager/security">
<ams:code>900906</ams:code>
<ams:message>
No matching resource found in the API for the given request
</ams:message>
<ams:description>
Access failure for API: /api/stature, version: 1.0.0 with key: null
</ams:description>
</ams:fault>
and here is the wso2carbon.log:
TID[-1234] [AM] [2015-01-19 00:12:47,263] ERROR {org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler} -
API authentication failure org.wso2.carbon.apimgt.gateway.handlers.security.oauth.OAuthAuthenticator.authenticate(OAuthAuthenticator.java:212)
org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler.handleRequest(APIAuthenticationHandler.java:94)
org.apache.synapse.rest.API.process(API.java:284) org.apache.synapse.rest.RESTRequestHandler.dispatchToAPI(RESTRequestHandler.java:83)
org.apache.synapse.rest.RESTRequestHandler.process(RESTRequestHandler.java:64)
org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(Axis2SynapseEnvironment.java:220)
org.apache.synapse.core.axis2.SynapseMessageReceiver.receive(SynapseMessageReceiver.java:83)
org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:344)
org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:168)
org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)
"No matching resource found in the API for the given request"
The above issue occurs in these cases:
1. Your request URL is wrong, so API Manager could not match it against the existing published APIs.
2. The published API configuration is not deployed properly in the gateway. You can check this by browsing to the Gateway's synapse config api folder (inside /repository/deployment/server).
3. The relevant API entries are missing in the APIM DB.
Are you trying a distributed setup? Did you change the DBs? Check all three points above, and you can figure out the issue quickly. A request sketch for checking the first point follows.
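To rule out the first point, replay the call making sure the context, version, and resource path match the published API exactly (the gateway host, default HTTPS port 8243, resource path, and token below are placeholders/assumptions):

import requests

# Gateway URL format: https://<gateway-host>:8243/<context>/<version>/<resource>.
# The fault above names context /api/stature and version 1.0.0.
url = "https://gateway.example.com:8243/api/stature/1.0.0/some-resource"

resp = requests.get(url, headers={"Authorization": "Bearer <access-token>"})
print(resp.status_code)
print(resp.text)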