How to configure Kafka server with SASL_SSL and GSSAPI protocols

I am new to Apache Kafka, and here is what I have done so far:
Downloaded kafka_2.12-2.1.0
Made a batch file to run the ZooKeeper server:
start kafka_2.12-2.1.0\bin\windows\zookeeper-server-start.bat kafka_2.12-2.1.0\config\zookeeper.properties
Made a batch file for the Apache Kafka server:
start kafka_2.12-2.1.0\bin\windows\kafka-server-start.bat kafka_2.12-2.1.0\config\server.properties
Started a producer using a batch file:
start kafka_2.12-2.1.0\bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic 3drocket-player
It is running fine, but now I need authentication: I have to implement a consumer with specific auth settings (a requirement from the client), namely the security protocol SASL_SSL and the SASL mechanism GSSAPI.
For this reason I searched the Confluent documentation, but the problem is that it is too abstract about how to take each step.
I am looking for detailed configuration steps for my setup: how do I configure my Kafka server with the SASL_SSL protocol and the GSSAPI mechanism? Initially I found that GSSAPI/Kerberos uses a separate server, so do I need to install another server? Is there any built-in solution within Confluent Kafka?

Configure a SASL port in server.properties, e.g.:
listeners=SASL_SSL://host.name:port
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
ssl.keystore.location=/path/to/keystore.jks
ssl.keystore.password=keystore_password
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=truststore_password
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
https://kafka.apache.org/documentation/#security_configbroker
https://kafka.apache.org/documentation/#security_sasl_config
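The broker side also needs a JAAS configuration with a KafkaServer section so the broker itself can authenticate to Kerberos. A minimal sketch, assuming a service principal kafka/host.name@EXAMPLE.COM and a keytab at /etc/security/keytabs/kafka_server.keytab (both are placeholders for whatever your Kerberos admin issues):
KafkaServer {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/kafka_server.keytab"
  principal="kafka/host.name@EXAMPLE.COM";
};
Point the broker JVM at it when starting Kafka, e.g. -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf (the path is an example).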
Client:
When you run the Kafka client, you need to set these properties.
security.protocol=SASL_SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=truststore_password
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
https://kafka.apache.org/documentation/#security_configclients
https://kafka.apache.org/documentation/#security_sasl_kerberos_clientconfig
Then set up the JAAS configuration:
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="path/to/kafka_client.keytab"
  storeKey=true
  useTicketCache=false
  principal="kafka-client-1@EXAMPLE.COM";
};
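Putting the pieces together on Windows, a sketch of how the console consumer could be launched, assuming the client properties above are saved in client.properties and the JAAS section in kafka_client_jaas.conf (both file names and paths are placeholders):
set KAFKA_OPTS=-Djava.security.auth.login.config=C:\kafka\kafka_client_jaas.conf
kafka_2.12-2.1.0\bin\windows\kafka-console-consumer.bat --bootstrap-server host.name:port --topic 3drocket-player --consumer.config client.properties
The JVM also needs to find your Kerberos configuration (krb5.conf, or krb5.ini on Windows, pointing at the KDC), which Krb5LoginModule uses to obtain tickets.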

Regarding the Kerberos server question, from the Confluent documentation:
SASL/GSSAPI is for organizations using Kerberos (for example, by using Active Directory). You don’t need to install a new server just for Apache Kafka®. Ask your Kerberos administrator for a principal for each Kafka broker in your cluster and for every operating system user that will access Kafka with Kerberos authentication (via clients and tools).
https://docs.confluent.io/current/kafka/authentication_sasl/authentication_sasl_gssapi.html#kafka-sasl-auth-gssapi
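To make "ask your Kerberos administrator for a principal" concrete, a rough sketch using MIT Kerberos's kadmin (the principal names and keytab paths here are illustrative assumptions, not Kafka requirements):
kadmin -q "addprinc -randkey kafka/broker1.example.com@EXAMPLE.COM"
kadmin -q "ktadd -k /etc/security/keytabs/kafka_server.keytab kafka/broker1.example.com@EXAMPLE.COM"
kadmin -q "addprinc -randkey kafka-client-1@EXAMPLE.COM"
kadmin -q "ktadd -k /etc/security/keytabs/kafka_client.keytab kafka-client-1@EXAMPLE.COM"
In an Active Directory environment the equivalent is creating service accounts and exporting keytabs with ktpass; either way, no extra server is installed alongside Kafka itself.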

Related

How to use Kafka with TLS peer verification turned off

I'm testing Kafka cluster creation using Let's Encrypt staging certs. After creating it, on my machine I run the Kafka-provided kafka-console-consumer.sh and kafka-console-producer.sh scripts. When I ran with Let's Encrypt production certs, it worked fine. But now that I'm using staging certs, I get this when I run the producer:
ERROR [Producer clientId=console-producer] Connection to node -1 (2.kafka.mysite.com/10.1.17.191:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
I use these properties for producer script:
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka" password="secret";
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
I'd like the option to ignore TLS verification, as some parameter I can toggle (on the cluster or on the client) to allow it. How can I achieve this? For anyone familiar with RabbitMQ, I think it's similar to VERIFY_PEER=false, aka VERIFY_NONE.
The Kafka configuration has the setting
ssl.client.auth
Its value can be set to required, requested, or none. You could set it to requested, which means client authentication is optional: unlike required, with this option set the client can choose not to provide authentication information about itself.
https://docs.confluent.io/current/installation/configuration/broker-configs.html
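A sketch of that broker-side setting in server.properties:
ssl.client.auth=requested
Note this only controls whether the broker demands a certificate from the client. For the opposite direction, the client skipping hostname verification of the broker's certificate, the client-side setting in Kafka is ssl.endpoint.identification.algorithm= (left empty), which is generally advisable only for testing.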

Kafka: enabling multiple authentication methods

I have worked on setting up authentication for Kafka clients in the past, and I have referred to:
https://kafka.apache.org/documentation/#security
https://docs.confluent.io/current/kafka/authentication_sasl/index.html#sasl-configuration-for-kafka-brokers
And other links as well.
As mentioned in the docs, we need a JAAS configuration file to specify the authentication method. I had one like below:
KafkaClient {
  org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required
  unsecuredLoginStringClaim_sub="admin";
};
This basically adds OAuth authentication for Kafka clients.
The question is: can I have multiple authentication methods enabled on a Kafka broker? I mean, can I enable both OAuthBearer and PLAIN authentication on Kafka, and let the client authenticate by either one of these methods?
OK, I found how we can do it.
Multiple SASL mechanisms can be enabled on the broker simultaneously while each client has to choose one mechanism.
In the JAAS config file, we have to specify the configuration for the multiple login modules as below:
KafkaServer {
  org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required
  unsecuredLoginStringClaim_sub="admin";

  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret"
  user_admin="admin-secret"
  user_alice="alice-secret";
};
Then we have to enable the SASL mechanisms in server.properties:
# List of enabled mechanisms, can be more than one
sasl.enabled.mechanisms=OAUTHBEARER,PLAIN
Then specify the SASL security protocol and mechanism for inter-broker communication in server.properties:
# Configure SASL_SSL if SSL encryption is enabled, otherwise configure SASL_PLAINTEXT
security.inter.broker.protocol=SASL_SSL
# Configure the appropriate inter-broker protocol
sasl.mechanism.inter.broker.protocol=PLAIN
Credit to - https://docs.confluent.io/current/kafka/authentication_sasl/index.html#enabling-multiple-sasl-mechanisms
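On the client side, each client then picks exactly one of the enabled mechanisms. A sketch of client properties for the PLAIN path, reusing the user_admin credentials defined above (for OAUTHBEARER you would instead set sasl.mechanism=OAUTHBEARER and reference the OAuthBearerLoginModule):
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="admin" \
  password="admin-secret";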

Apache Ignite discovery: connect to ZooKeeper using an SSL connection

I am trying to use ZooKeeper for node discovery with Apache Ignite. I have configured ZooKeeper to only accept SSL/TLS connections. How do I provide the ZooKeeper keystore details to the Apache Ignite ZookeeperDiscoverySpi? I have checked the documentation and source code of ignite-zookeeper.jar and I do not see any options to supply these details. Should I be providing them elsewhere in the Ignite config?
Solution:
Replace the ignite-zookeeper.jar dependency zookeeper-3.4.6.jar with the latest zookeeper-3.5.x.jar for proper SSL/Netty support.
Supply the SSL config details as JVM arguments (there are no options for this in the Ignite SPI API), as sketched below.
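A sketch of those JVM arguments, using the standard ZooKeeper 3.5 client system properties (paths and passwords are placeholders):
-Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
-Dzookeeper.client.secure=true
-Dzookeeper.ssl.keyStore.location=/path/to/keystore.jks
-Dzookeeper.ssl.keyStore.password=keystore_password
-Dzookeeper.ssl.trustStore.location=/path/to/truststore.jks
-Dzookeeper.ssl.trustStore.password=truststore_password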

Nifi: using zookeeper with SSL

I have a NiFi cluster with one ZooKeeper node and five NiFi nodes. I want to have SSL encryption from the ZooKeeper server to the NiFi client.
The NiFi documentation says:
Support for SSL in ZooKeeper is being actively developed and is expected to be available in the 3.5.x release version.
The new ZooKeeper 3.5.3-beta has SSL capabilities.
I installed ZooKeeper 3.5.3 but I am unable to secure the connection with SSL: I am getting NotSslRecordException.
How can I run NiFi with a secure ZooKeeper using SSL?
Thank you
It requires more than just running ZooKeeper 3.5.x. There is code in NiFi that uses the ZooKeeper client, and that code is not based on the 3.5.x client, so there is no way for NiFi to make an SSL connection.
Note that you also need to set up ZooKeeper to use SSL, for example:
zookeeper.ssl.keyStore.location="/path/to/your/keystore"
zookeeper.ssl.keyStore.password="keystore_password"
zookeeper.ssl.trustStore.location="/path/to/your/truststore"
zookeeper.ssl.trustStore.password="truststore_password"
Full documentation here: https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide
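On the ZooKeeper server side, the same guide has the server listen for TLS on a dedicated port via the Netty connection factory; a sketch (2281 is an arbitrary port choice):
# zoo.cfg
secureClientPort=2281
# JVM argument for the ZooKeeper server
-Dzookeeper.serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
A plain-TCP client talking to the secure port is exactly the kind of mismatch that can produce NotSslRecordException.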

How to set up SSL on WildFly 9 Domain Mode?

I currently have a WildFly 9 cluster up and running with access to my application over port 8080. I would like to set up SSL and have access only on port 8443, but I cannot seem to find any documentation for where the security realm and HTTPS listener are placed in domain mode.
I have the keystore and certificate all set up and was able to get https working in a demo using standalone mode, but I need to be able to do it in domain mode.
Can anyone help me out and share how they've accomplished this?
Solved it! It turns out that, for some reason, JBoss was not registering my security realm and HTTPS listener. To do this you need to use bin/jboss-cli and the following commands:
Run the "connect" command first.
/host=master/core-service=management/security-realm=SSLRealm/:add()
--- where SSLRealm is the name of the realm
/host=master/core-service=management/security-realm=SSLRealm/server-identity=ssl/:add(keystore-path=Keystore.jks, keystore-relative-to=jboss.domain.config.dir, keystore-password=password)
--- this assumes the keystore lives in the domain/configuration directory
Restart the server.
I then ran into issues figuring out the command to register the HTTPS listener, but I found the WildFly web console at serverURL:9990 has a way to do it too:
Once logged in to the web console:
Configuration->Profiles->for each profile which is used->Undertow->HTTP->View
From there
HTTP Server->default-server->view
Finally
HTTPS Listener->Add: enter a name like default-https, set Security Realm to the name chosen for the security realm (for this example, SSLRealm) and Socket Binding to https, and click Save.
Restart again
You should now have access at your serverURL:8443.
To set it up on the slave servers, you should only need to copy the keystore to each slave server's domain/configuration directory, add the security realm with /host=master/ replaced by /host=slave/ in the commands, and restart the server.
Double-check that the domain.xml file on each slave has the HTTPS listener you created originally in the web console (it should automatically be put into all of the cluster's domain.xml files).
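For reference, the CLI equivalent of the web-console listener step would presumably look something like this (the profile name full-ha is an assumption; substitute whichever profile your server groups actually use):
/profile=full-ha/subsystem=undertow/server=default-server/https-listener=default-https:add(socket-binding=https, security-realm=SSLRealm)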