I have a NiFi cluster with one ZooKeeper node and five NiFi nodes. I want to have SSL encryption from the ZooKeeper server to the NiFi client.
Reading the NiFi documentation, it says:
Support for SSL in ZooKeeper is being actively developed and is expected to be available in the 3.5.x release version.
The new ZooKeeper 3.5.3-beta has SSL capabilities.
I installed ZooKeeper 3.5.3 but I am unable to secure the connection with SSL: I am getting NotSslRecordException.
How can I run NiFi with a secure ZooKeeper using SSL?
Thank you
It requires more than just running ZooKeeper 3.5.x. The code in NiFi that uses the ZooKeeper client is not based on the 3.5.x client, so there is no way for NiFi to make an SSL connection.
Note that you also need to set up ZooKeeper itself to use SSL, for example:
zookeeper.ssl.keyStore.location="/path/to/your/keystore"
zookeeper.ssl.keyStore.password="keystore_password"
zookeeper.ssl.trustStore.location="/path/to/your/truststore"
zookeeper.ssl.trustStore.password="truststore_password"
Full documentation here: https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide
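For reference, a NotSslRecordException usually means the server is still speaking plaintext on the port you are connecting to: SSL in ZooKeeper 3.5.x requires the Netty connection factory on the server and the Netty client socket on the client. A minimal sketch of the JVM system properties involved (the port and paths are assumptions):
Server (plus secureClientPort=2281 in zoo.cfg):
-Dzookeeper.serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
-Dzookeeper.ssl.keyStore.location=/path/to/your/keystore
-Dzookeeper.ssl.keyStore.password=keystore_password
-Dzookeeper.ssl.trustStore.location=/path/to/your/truststore
-Dzookeeper.ssl.trustStore.password=truststore_password
Client:
-Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
-Dzookeeper.client.secure=true
plus the same keyStore/trustStore properties as above.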
Related
I am new to Apache Kafka, and here is what I have done so far:
Downloaded kafka_2.12-2.1.0
Made a batch file for ZooKeeper to run the ZooKeeper server:
start kafka_2.12-2.1.0\bin\windows\zookeeper-server-start.bat kafka_2.12-2.1.0\config\zookeeper.properties
Made a batch file for the Apache Kafka server:
start kafka_2.12-2.1.0\bin\windows\kafka-server-start.bat kafka_2.12-2.1.0\config\server.properties
Started a producer using a batch file:
start kafka_2.12-2.1.0\bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic 3drocket-player
It is running fine, but now I am looking at authentication, as I have to implement a consumer with specific auth settings (a requirement from the client): the security protocol is SASL_SSL and the SASL mechanism is GSSAPI.
For this reason I searched the Confluent documentation, but the problem is that it is too abstract about how to take each step.
I am looking for detailed configuration steps for my setup. How do I configure my Kafka server with SASL_SSL and the GSSAPI mechanism? Initially I found that GSSAPI/Kerberos needs a separate server; do I need to install another server, or is there a built-in solution within Confluent Kafka?
Configure a SASL port in server.properties, e.g.:
listeners=SASL_SSL://host.name:port
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
ssl.keystore.location=/path/to/keystore.jks
ssl.keystore.password=keystore_password
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=truststore_password
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
https://kafka.apache.org/documentation/#security_configbroker
https://kafka.apache.org/documentation/#security_sasl_config
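The broker side also needs a JAAS configuration for its Kerberos principal, passed to the broker JVM with -Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf. A sketch, where the keytab path and principal are assumptions to adapt to your realm:
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_server.keytab"
    principal="kafka/broker1.example.com@EXAMPLE.COM";
};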
Client:
When you run the Kafka client, you need to set these properties.
security.protocol=SASL_SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=truststore_password
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
https://kafka.apache.org/documentation/#security_configclients
https://kafka.apache.org/documentation/#security_sasl_kerberos_clientconfig
Then configure JAAS for the client:
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="path/to/kafka_client.keytab"
    storeKey=true
    useTicketCache=false
    principal="kafka-client-1@EXAMPLE.COM";
};
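To tie this to the Windows setup above, one way (file names here are assumptions) is to put the client properties in a file, point the JVM at the JAAS file via KAFKA_OPTS, and start a console consumer against the SASL_SSL port:
set KAFKA_OPTS=-Djava.security.auth.login.config=C:\path\to\kafka_client_jaas.conf
kafka_2.12-2.1.0\bin\windows\kafka-console-consumer.bat --bootstrap-server host.name:9093 --topic 3drocket-player --consumer.config client-sasl.properties
where client-sasl.properties contains the five client properties listed above.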
...
SASL/GSSAPI is for organizations using Kerberos (for example, by using Active Directory). You don’t need to install a new server just for Apache Kafka®. Ask your Kerberos administrator for a principal for each Kafka broker in your cluster and for every operating system user that will access Kafka with Kerberos authentication (via clients and tools).
https://docs.confluent.io/current/kafka/authentication_sasl/authentication_sasl_gssapi.html#kafka-sasl-auth-gssapi
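If your KDC is MIT Kerberos, creating a broker principal and exporting its keytab looks roughly like this (the realm, hostname, and paths are assumptions; with Active Directory your AD administrator creates the account and generates the keytab instead):
kadmin: addprinc -randkey kafka/broker1.example.com@EXAMPLE.COM
kadmin: ktadd -k /etc/security/keytabs/kafka_server.keytab kafka/broker1.example.com@EXAMPLE.COM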
....
I'm using Kafka version kafka_2.12-2.0.0 and received the below error after enabling SSL authentication. It seems to work fine with previous versions: kafka_2.12-1.1.0, 2.11-0.10.2.2, etc.
I don't understand why it is not working with the latest version, 2.0.0. Has anyone observed the same issue that I'm facing right now with the 2.0.0 version?
Below is my test environment Docker config file:
listeners=PLAINTEXT://:9092,SSL://:9093
ssl.client.auth=required
ssl.keystore.location=/path/to/server.keystore
ssl.keystore.password=<Key store password>
ssl.key.password=<private key password>
ssl.truststore.location=/path/to/truststore.keystore
ssl.truststore.password=<trust store password>
security.inter.broker.protocol=SSL
And here's the error:
[2018-10-01 09:33:38,984] ERROR [Controller id=1, targetBrokerId=1] Connection to node 1 failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
Can someone help me?
Without more details it's hard to tell for sure, but 2.0.0 introduced a change of behaviour related to the handling of SSL connections.
As mentioned in the 2.0.0 upgrade notes, the broker setting ssl.endpoint.identification.algorithm is now set to https. This enforces hostname verification to prevent "man-in-the-middle" attacks.
To restore the previous behaviour, you need to explicitly set this to an empty string:
ssl.endpoint.identification.algorithm=
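The safer long-term fix is to reissue broker certificates whose SAN (or CN) matches the hostname clients connect to, instead of disabling verification. With keytool that is roughly (keystore name, alias, validity, and hostname are assumptions):
keytool -keystore server.keystore.jks -alias broker -validity 365 -genkeypair -keyalg RSA -ext SAN=DNS:broker1.example.com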
I was also facing a similar issue: I had Kafka server 1.1.1 running and was using Kafka client 2.1.0 to push records. Changing the Kafka client to 1.1.1 solved my issue.
Hope this helps.
I am trying to use ZooKeeper for node discovery with Apache Ignite. I have configured ZooKeeper to accept only SSL/TLS connections. How do I provide the ZooKeeper keystore details to the Apache Ignite ZookeeperDiscoverySpi? I have checked the documentation and source code of ignite-zookeeper.jar and I do not see any options to supply these details. Should I be providing these details elsewhere in the Ignite config?
Solution:
Replace the ignite-zookeeper.jar dependency zookeeper-3.4.6.jar with the latest zookeeper-3.5.x.jar for proper SSL/Netty support.
Supply the SSL config details as JVM arguments (there are no options for this in the Ignite SPI API).
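Concretely, that means starting each Ignite node with the standard ZooKeeper 3.5 client SSL system properties and pointing the ZookeeperDiscoverySpi connection string at the secure client port; the paths and passwords below are assumptions:
-Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
-Dzookeeper.client.secure=true
-Dzookeeper.ssl.keyStore.location=/path/to/keystore.jks
-Dzookeeper.ssl.keyStore.password=keystore_password
-Dzookeeper.ssl.trustStore.location=/path/to/truststore.jks
-Dzookeeper.ssl.trustStore.password=truststore_password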
I need to secure my ZooKeeper with SSL and then configure the Kafka broker to access ZooKeeper over SSL.
Is it possible?
And if yes, how?
Thanks,
Alain
Connecting to Cassandra on the non-SSL port works.
But when I try to make an SSL connection by initializing an SSLContext, I get this strange exception.
I am using the Datastax driver cassandra-driver-core-2.0.0-rc3.jar:
Caused by: org.jboss.netty.channel.ChannelPipelineException: Failed to initialize a pipeline.
at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:208) [netty-3.9.0.Final.jar:]
at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:182) [netty-3.9.0.Final.jar:]
at com.datastax.driver.core.Connection.<init>(Connection.java:92) [cassandra-driver-core-2.0.0-rc3.jar:]
at com.datastax.driver.core.Connection$Factory.open(Connection.java:421) [cassandra-driver-core-2.0.0-rc3.jar:]
at com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:215) [cassandra-driver-core-2.0.0-rc3.jar:]
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:170) [cassandra-driver-core-2.0.0-rc3.jar:]
You need to install the Java Unlimited Strength JCE policy files; this has to be done on your client machine, in your JRE installation.
Here is one reference in the context of the Cassandra server, but Java clients using SSL need this as well:
http://www.pathin.org/tutorials/java-cassandra-cannot-support-tls_rsa_with_aes_256_cbc_sha-with-currently-installed-providers/
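If you cannot modify the JRE, another option is to restrict the negotiated cipher suites to 128-bit ones that the default policy already allows. A minimal sketch against driver 2.0.x; the contact point is an assumption, and the SSLContext here comes from the default javax.net.ssl.* system properties:
import javax.net.ssl.SSLContext;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.SSLOptions;

public class SecureConnect {
    public static void main(String[] args) throws Exception {
        // Uses the JVM default context; set the javax.net.ssl.trustStore /
        // javax.net.ssl.keyStore system properties before running.
        SSLContext ctx = SSLContext.getDefault();
        // A 128-bit suite: does not require the Unlimited Strength JCE policy.
        String[] cipherSuites = { "TLS_RSA_WITH_AES_128_CBC_SHA" };
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1") // assumption: local node
                .withSSL(new SSLOptions(ctx, cipherSuites))
                .build();
        try {
            cluster.connect();
            System.out.println("Connected over SSL");
        } finally {
            cluster.close();
        }
    }
}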