Kafka authorization and authentication ports - authentication

I have read the documentation on Kafka security here: https://kafka.apache.org/documentation/#security_authz_cli
and I was wondering about the ports used in the examples. For the authentication portion, under 7.2 Encryption and Authentication using SSL, they show these commands:
kafka-console-producer.sh --broker-list localhost:9093 --topic test --producer.config client-ssl.properties
kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --consumer.config client-ssl.properties
for the console producer and consumer use cases.
For the authorization portion, under 7.4 Authorization and ACLs, when showing how to add permissions for different users to the ACLs, they give this command:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic
So my question is: do the host and port in "broker-list" and "bootstrap-server" in the authentication portion have to be the same as those in "zookeeper.connect" in the authorization portion? They aren't the same in the examples given, and I'm trying to combine the authentication and authorization parts using SSL. Whether or not they need to match, why? Any documentation or tutorial on how to do this purely from the console is appreciated. I don't want to use Kerberos.
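For context, a minimal sketch of a server.properties that combines both parts; the paths and the authorizer class here are assumptions based on the Kafka docs of that era, not something given in the question:
# Listener that producers/consumers connect to over SSL (authentication)
listeners=SSL://localhost:9093
ssl.keystore.location=/path/to/server.keystore.jks
ssl.keystore.password=keystore_password
ssl.truststore.location=/path/to/server.truststore.jks
ssl.truststore.password=truststore_password
# ZooKeeper connection, also used by kafka-acls.sh to store ACLs (authorization)
zookeeper.connect=localhost:2181
# Enable the ACL authorizer
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
The broker port (9093 here) and the ZooKeeper port (2181) belong to two different services, which is why the documentation examples show different ports.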

Related

How to configure Kafka server with SASL_SSL and GSSAPI protocols

I am new to Apache Kafka, and here is what I have done so far:
Downloaded kafka_2.12-2.1.0
Made a batch file to run the ZooKeeper server:
start kafka_2.12-2.1.0\bin\windows\zookeeper-server-start.bat kafka_2.12-2.1.0\config\zookeeper.properties
Made a batch file for the Apache Kafka server:
start kafka_2.12-2.1.0\bin\windows\kafka-server-start.bat kafka_2.12-2.1.0\config\server.properties
Started a producer using a batch file:
start kafka_2.12-2.1.0\bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic 3drocket-player
It is running fine, but now I am looking into authentication, as I have to implement a consumer with specific auth settings (a requirement from the client): the security protocol is SASL_SSL and the SASL mechanism is GSSAPI.
For this reason I searched and found the Confluent documentation, but the problem is that it is too abstract about how to take each and every step.
I am looking for detailed configuration steps for my setup: how do I configure my Kafka server with the SASL_SSL protocol and the GSSAPI mechanism? Initially I read that GSSAPI/Kerberos requires a separate server, so do I need to install another server? Is there any built-in solution within Confluent Kafka?
Configure a SASL port in server.properties, e.g.:
listeners=SASL_SSL://host.name:port
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
ssl.keystore.location=/path/to/keystore.jks
ssl.keystore.password=keystore_password
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=truststore_password
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
https://kafka.apache.org/documentation/#security_configbroker
https://kafka.apache.org/documentation/#security_sasl_config
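The broker also needs a JAAS configuration with a KafkaServer section (described in the second link above); a minimal sketch, with the keytab path and principal as assumed placeholders:
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka_server.keytab"
principal="kafka/broker.example.com@EXAMPLE.COM";
};
Point the broker JVM at it, e.g. with -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf.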
Client:
When you run the Kafka client, you need to set these properties.
security.protocol=SASL_SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=truststore_password
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
https://kafka.apache.org/documentation/#security_configclients
https://kafka.apache.org/documentation/#security_sasl_kerberos_clientconfig
Then set up the JAAS configuration:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="path/to/kafka_client.keytab"
storeKey=true
useTicketCache=false
principal="kafka-client-1#EXAMPLE.COM";
};
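The client JVM must be pointed at that JAAS file, e.g. with -Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf. Alternatively (Kafka 0.10.2 and later), the same login module can be inlined in the client properties; a sketch with assumed paths:
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
  useKeyTab=true storeKey=true \
  keyTab="/path/to/kafka_client.keytab" \
  principal="kafka-client-1@EXAMPLE.COM";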
...
SASL/GSSAPI is for organizations using Kerberos (for example, by using Active Directory). You don’t need to install a new server just for Apache Kafka®. Ask your Kerberos administrator for a principal for each Kafka broker in your cluster and for every operating system user that will access Kafka with Kerberos authentication (via clients and tools).
https://docs.confluent.io/current/kafka/authentication_sasl/authentication_sasl_gssapi.html#kafka-sasl-auth-gssapi
....
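Putting it together, the console clients can then be run against the SASL_SSL listener; a sketch assuming the client properties above were saved as client-sasl.properties:
kafka-console-producer.sh --broker-list host.name:port --topic test --producer.config client-sasl.properties
kafka-console-consumer.sh --bootstrap-server host.name:port --topic test --consumer.config client-sasl.properties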

How to use Kafka with TLS peer verification turned off

I'm testing Kafka cluster creation using Let's Encrypt staging certs. After creating it, on my machine I run the Kafka-provided kafka-console-consumer.sh and kafka-console-producer.sh scripts. When I ran with Let's Encrypt production certs, it worked fine. But now that I'm using staging certs, I get this when I run the producer:
ERROR [Producer clientId=console-producer] Connection to node -1 (2.kafka.mysite.com/10.1.17.191:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
I use these properties for producer script:
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka" password="secret";
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
I'd like to have the option to ignore TLS verification, and I'd like it to be some parameter I can toggle (on the cluster or on the client). How can I achieve this? For anyone familiar with RabbitMQ, I think it's similar to VERIFY_PEER=false, a.k.a. VERIFY_NONE.
The Kafka configuration has the setting
ssl.client.auth
Its value can be set to required, requested, or none. You could set it to requested, which means client authentication is optional. Unlike required, with this option the client can choose not to provide authentication information about itself.
https://docs.confluent.io/current/installation/configuration/broker-configs.html
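As a sketch, the relevant broker-side line in server.properties would be:
# Request a client certificate but do not require one
ssl.client.auth=requested
Note that this only controls whether the broker demands certificates from clients; it is not a full equivalent of RabbitMQ's VERIFY_NONE on the client side.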

ElastAlert : Access to the Elastic search exposed by Oauth2

Context :
ElastAlert v0.1.29, in a Docker container on OpenShift
Elasticsearch 2.4.4, exposed by OpenShift aggregated logging (with OAuth2)
Hello,
From ElastAlert, I want to connect to Elasticsearch.
Elasticsearch authentication uses OAuth2.
OAuth2 requires the X-Proxy-Remote-User header and the token in the request headers:
Ex:
curl -k -H "Authorization: Bearer $token" -H "X-Proxy-Remote-User: $(oc whoami)" -H "X-Forwarded-For: 127.0.0.1" https://es.example.test/_cat/indices
I believe that ElastAlert doesn't support OAuth2 token authentication. Can you confirm?
Also, I don't think the client_key and client_cert TLS options are compatible with this, are they?
Thanks for your help
Regards
Loïc
From what I've read of the code, no, it only supports basic auth. This would be a nice feature if someone had the time to contribute.
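For reference, basic auth goes in ElastAlert's config.yaml; a minimal sketch with assumed values:
# config.yaml
es_host: es.example.test
es_port: 443
use_ssl: true
verify_certs: true
es_username: elastalert
es_password: secret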

SSL Handshake error using Curl to POST a file to a web service

I've been playing around with Curl, trying to do what should be a simple POST of a file to a web service for a couple of days and not getting anywhere.
The target POST service is unauthenticated HTTPS. When trying to run my POST request via Curl or via Informatica, I am getting an SSL handshake failure with both methods.
For example:
curl -X POST -F 'file=@filename.dat' https://url
I have been able to get this to work using Postman, so I know the service works. According to network security, SSL is disabled in this environment. Am I out of luck, or is there a way to get this to work without SSL?
Specific error encountered:
curl: (35) error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
By default, a client establishing an HTTPS connection will check the validity of the SSL certificate - otherwise, what's the point of using SSL?
In your case, you are saying "Pretend to use HTTPS but actually ignore the certificate", because it's invalid, or you are still getting one, or you are in the development phase (I hope the latter is true; get or create a valid server certificate when needed).
But curl doesn't know that. It assumes you are asking it to establish a connection with an HTTPS endpoint, so it will try to validate the certificate - which, in your case, may be the source of the failure.
Try curl -k -X POST -F 'file=@filename.dat' https://url
From the manpage:
-k, --insecure
(TLS) By default, every SSL connection curl makes is verified to be secure. This option allows curl to proceed and operate even for server connections otherwise considered insecure.
The server connection is verified by making sure the server's certificate contains the right name and verifies successfully using the cert store.
See this online resource for further details:
https://curl.haxx.se/docs/sslcerts.html
See also --proxy-insecure and --cacert.
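If the certificate is valid but signed by a CA that curl doesn't know, a safer alternative to -k is to pass the CA bundle explicitly; a sketch with an assumed path:
curl --cacert /path/to/ca-bundle.pem -X POST -F 'file=@filename.dat' https://url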

Using apache kafka in SSL mode

I'm trying to set up Kafka in SSL [1-way] mode. I've gone through the official documentation and successfully generated the certificates. I'll describe the behavior for two different cases. This setup has only one broker and one ZooKeeper.
Case-1: Inter-broker communication - Plaintext
Relevant entries in my server.properties file are as follows:
listeners=PLAINTEXT://localhost:9092, SSL://localhost:9093
ssl.keystore.location=/Users/xyz/home/ssl/server.keystore.jks
ssl.keystore.password=****
ssl.key.password=****
I've added a client-ssl.properties in kafka config dir with following entries:
security.protocol=SSL
ssl.truststore.location=/Users/xyz/home/ssl/client.truststore.jks
ssl.truststore.password=****
If I put bootstrap.servers=localhost:9093 or bootstrap.servers=localhost:9092 in my config/producer.properties file, my console producers/consumers work fine. Is that the intended behavior? If yes, then why? Because I'm specifically trying to connect to localhost:9093 from the producer/consumer in SSL mode.
Case-2: Inter-broker communication - SSL
Relevant entries in my server.properties file are as follows:
security.inter.broker.protocol=SSL
listeners=SSL://localhost:9093
ssl.keystore.location=/Users/xyz/home/ssl/server.keystore.jks
ssl.keystore.password=****
ssl.key.password=****
My client-ssl.properties file remains the same. I put bootstrap.servers=localhost:9093 in the producer.properties file. Now none of my producers/consumers can connect to Kafka. I get the following message:
WARN Error while fetching metadata with correlation id 0 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
What am I doing wrong?
In all these cases I'm using the following commands to start producers/consumers:
./kafka-console-producer.sh --broker-list localhost:9093 --topic test --producer.config ../config/client-ssl.properties
./kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --consumer.config ../config/client-ssl.properties
Make sure that the common name (CN) in your certificates matches your hostname.
The SSL protocol verifies the CN against the hostname. I guess here you should have CN=localhost.
I had a similar issue and that's how I fixed it.
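One way to check which CN the broker actually presents is to probe the SSL port, for example:
openssl s_client -connect localhost:9093 </dev/null 2>/dev/null | openssl x509 -noout -subject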
One important piece of information regarding this: the requirement that the CN equal the hostname can be deactivated by adding the following line to server.properties:
ssl.endpoint.identification.algorithm=
The default value for this setting is https, which activates hostname-to-CN verification. This has been the default since Kafka 2.0.
I've successfully tested an SSL setup (just on the broker side, though) with the following properties:
############################ SSL Config #################################
ssl.truststore.location=/path/to/kafka.truststore.jks
ssl.truststore.password=TrustStorePassword
ssl.keystore.location=/path/to/kafka.server.keystore.jks
ssl.keystore.password=KeyStorePassword
ssl.key.password=PrivateKeyPassword
security.inter.broker.protocol=SSL
listeners=SSL://localhost:9093
advertised.listeners=SSL://127.0.0.1:9093
ssl.client.auth=required
ssl.endpoint.identification.algorithm=
You can also find a Shell script to generate SSL certificates (with key- and truststores) alongside some documentation in this github project: https://github.com/confluentinc/confluent-platform-security-tools
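Since Kafka 2.0 clients perform the same hostname verification, so if your CN doesn't match the hostname you may need the equivalent line in client-ssl.properties as well; a sketch, for testing only:
# client-ssl.properties: disable hostname verification (do not use in production)
ssl.endpoint.identification.algorithm=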
Well, both the given answers point in the right direction, but some more detail is needed to end this confusion.
I generated the certs using this bash script from Confluent, and when I looked inside the file, it made sense. I'm pasting the relevant section here:
echo " NOTE: currently in Kafka, the Common Name (CN) does not need to be the FQDN of"
echo " this host. However, at some point, this may change. As such, make the CN"
echo " the FQDN. Some operating systems call the CN prompt 'first / last name'"
There you go. When you're generating the certs, make sure to enter localhost (or the FQDN) when it asks for first / last name. And remember that you need to use the same endpoint to expose the broker.
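To double-check which CN actually ended up in your keystore, you can inspect it with keytool, e.g.:
keytool -list -v -keystore server.keystore.jks | grep 'Owner:'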