How to block Kafka topic creation/deletion by unauthorized users? (SSL)

I have set up Kafka-to-ZooKeeper authentication with SASL+ACL, and Kafka-to-producer/consumer authentication with two-way SSL, including encryption.
Enabling SASL and ACLs between Kafka and ZooKeeper prevents an unauthorized Kafka broker from logging in to the ZooKeeper cluster. But topic creation and deletion can still be done without any restriction.
zookeeper.properties
dataDir=/x02/lsesv2-s/data/Zookeeper
clientPort=15300
tickTime=2000
initLimit=10
syncLimit=5
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
quorum.auth.enableSasl=true
quorum.auth.learnerRequireSasl=true
quorum.auth.serverRequireSasl=true
quorum.auth.learner.loginContext=QuorumLearner
quorum.auth.server.loginContext=QuorumServer
server.1=172.25.33.12:15302:15301
server.2=172.25.33.13:15302:15301
server.3=172.25.33.11:15302:15301
zookeeper_jaas.conf
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="admin"
password="abc123"
user_admin="abc123";
};
QuorumServer {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_admin="abc123";
};
QuorumLearner {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="admin"
password="abc123";
};
I set the ACL with the code below:
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

final CountDownLatch connectedSignal = new CountDownLatch(1);
String connect = "localhost:15300";
ZooKeeper zooKeeper = null;
try
{
    String userName = "admin";
    String password = "mit123";
    // Wait until the session is actually connected before touching znodes
    zooKeeper = new ZooKeeper(connect, 5000, we ->
    {
        if (we.getState() == Watcher.Event.KeeperState.SyncConnected)
        {
            connectedSignal.countDown();
        }
    });
    connectedSignal.await();
    // Authenticate this session with the digest scheme
    zooKeeper.addAuthInfo("digest", (userName + ":" + password).getBytes());
    // Grant full permissions (cdrwa) to the digest user and to the SASL principal
    final String aclString = "auth:" + userName + ":" + password + ":" + "cdrwa" +
            ",sasl:" + userName + ":" + "cdrwa";
    // parseACLs(...) is a helper that turns the string above into a List<ACL>
    zooKeeper.setACL("/", parseACLs(aclString), -1);
} finally
{
    if (zooKeeper != null)
    {
        zooKeeper.close();
    }
}
The above code works, and below is the result after executing it:
Welcome to ZooKeeper!
JLine support is disabled
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
getAcl /
'sasl,'admin
: cdrwa
'digest,'admin:oiasY+rmnmmK9mec8kpnvv281HE=
: cdrwa
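The getAcl output above can be reproduced from the ZooKeeper CLI; a rough sketch, assuming the zkCli.sh bundled with ZooKeeper and the digest credentials used in the Java code above:
zkCli.sh -server localhost:15300
addauth digest admin:mit123
getAcl /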
Instead of editing the server.properties file, I override the Kafka properties when the broker is started.
Kafka properties
kafka/bin/kafka-server-start.sh /x02/lsesv2-s/current/kafka/config/server.properties
--override broker.id=1
--override zookeeper.connect=10g-flton-onl01:15300,10g-flton-onl02:15300,10g-flton-nor02:15300
--override num.network.threads=16
--override num.io.threads=16
--override socket.send.buffer.bytes=10240000
--override socket.receive.buffer.bytes=10240000
--override log.dirs=/x02/lsesv2-s/data/Kafka
--override offsets.topic.replication.factor=1
--override min.insync.replicas=1
--override inter.broker.listener.name=INTERNAL
--override listeners=INTERNAL://10g-flton-onl01:15307
--override advertised.listeners=INTERNAL://10g-flton-onl01:15307
--override listener.security.protocol.map=INTERNAL:SSL
--override security.protocol=SSL
--override ssl.client.auth=required
--override ssl.key.password=abc123
--override ssl.keystore.location=configs/MHV/kafka.server.keystore.jks
--override ssl.keystore.password=abc123
--override ssl.truststore.location=configs/MHV/kafka.server.truststore.jks
--override ssl.truststore.password=abc123
--override ssl.endpoint.identification.algorithm=
Kafka-to-producer/consumer authentication works fine, and ZooKeeper-to-Kafka authentication also works fine. But topic creation and deletion can still be performed by unauthorized users.
Topic creation
kafka/bin/kafka-topics.sh --create --zookeeper localhost:15300 --replication-factor 3 --partitions 8 --topic test
Topic deletion
kafka/bin/kafka-topics.sh --zookeeper localhost:15300 --delete --topic test
Note: I didn't set -Djava.security.auth.login.config=kafka_server_jaas.conf when creating or deleting topics, so these operations should have been rejected. But they are not.
How can I restrict topic creation and deletion to authorized users only?

From testing locally, it seems the required property is
KAFKA_ZOOKEEPER_SET_ACL: "true"
for the Confluent images, which maps directly to the broker property
zookeeper.set.acl
Reference
Also, as stated in Confluent's Kafka 101:
the metadata stored in ZooKeeper is such that only brokers will be able to modify the corresponding znodes, but znodes are world readable. 
Because we configured ZooKeeper to require SASL authentication, we need to set the java.security.auth.login.config system property while starting the kafka-topics tool:
A code example and docker-compose file are shown here.
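Putting that together outside Docker, here is a rough sketch (paths are placeholders, and it assumes the kafka_server_jaas.conf mentioned above contains a Client section with the ZooKeeper credentials):
# Broker: keep the existing overrides and add the ACL flag
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf"
kafka/bin/kafka-server-start.sh /x02/lsesv2-s/current/kafka/config/server.properties \
  --override zookeeper.set.acl=true \
  ...   # plus the other --override flags shown in the question

# Topic tools: only sessions that authenticate to ZooKeeper can now modify the topic znodes
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf"
kafka/bin/kafka-topics.sh --create --zookeeper localhost:15300 --replication-factor 3 --partitions 8 --topic test
Running kafka-topics.sh without the KAFKA_OPTS JAAS setting should then fail against the ACL-protected znodes; for znodes created before the change, Kafka ships a zookeeper-security-migration.sh tool to retrofit the ACLs.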

Related

Continuous TLS handshake error logs in vault nodes due to LB health check

I am getting continuous TLS handshake errors because my load balancer pings the Vault nodes every 5 seconds. The Kubernetes load balancer health-checks my Vault nodes with
nc -vz podip podPort
every 5 seconds. I have already disabled client certificate verification in my config.hcl, but I still see the logs below for my Vault pods:
kubectl logs pod-0 -n mynamespace
2020-09-02T01:13:32.957Z [INFO] http: TLS handshake error from 10.x.x.x:60056: EOF
2020-09-02T01:13:37.957Z [INFO] http: TLS handshake error from 10.x.x.x:23995: EOF
2020-09-02T01:13:42.957Z [INFO] http: TLS handshake error from 10.x.x.x:54165: EOF
Below is my config.hcl, which I am loading via a Kubernetes ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: raft-config
  labels:
    name: raft-config
data:
  config.hcl: |
    storage "raft" {
      path = "/vault-data"
      tls_skip_verify = "true"
      retry_join {
        leader_api_addr = "https://vault-cluster-0:8200"
        leader_ca_cert_file = "/opt/ca/vault.crt"
        leader_client_cert_file = "/opt/ca/vault.crt"
        leader_client_key_file = "/opt/ca/vault.key"
      }
      retry_join {
        leader_api_addr = "https://vault-cluster-1:8200"
        leader_ca_cert_file = "/opt/ca/vault.crt"
        leader_client_cert_file = "/opt/ca/vault.crt"
        leader_client_key_file = "/opt/ca/vault.key"
      }
      retry_join {
        leader_api_addr = "https://vault-cluster-2:8200"
        leader_ca_cert_file = "/opt/ca/vault.crt"
        leader_client_cert_file = "/opt/ca/vault.crt"
        leader_client_key_file = "/opt/ca/vault.key"
      }
    }
    seal "transit" {
      address = "https://vaulttransit:8200"
      disable_renewal = "false"
      key_name = "autounseal"
      mount_path = "transit/"
      tls_skip_verify = "true"
    }
    listener "tcp" {
      address = "0.0.0.0:8200"
      tls_cert_file = "/opt/ca/vault.crt"
      tls_key_file = "/opt/ca/vault.key"
      tls_skip_verify = "true"
      tls_disable_client_certs = "true"
    }
    ui = true
    disable_mlock = true
I am using an external open-source Vault image, and my load balancer is an internal LB (with an internal CA cert). I suspect my Vault pod is not able to recognize the CA cert provided by my load balancer when it pings port 8200 (the port on which Vault starts its TCP listener).
These logs are harmless and do not cause any issue, but they are unnecessary noise that I want to avoid. My Vault nodes are working over HTTPS and there seems to be no issue with their functionality.
Can someone please help me understand why the Vault TCP listener attempts a TLS handshake even though I have explicitly specified tls_disable_client_certs = "true"?
Again, these logs flood my pods every 5 seconds when my LB health-checks them with nc -vz podip podPort.
My Vault version is 1.5.3.
The messages are not about client certs or CA certs; a TLS handshake happens whether or not the client presents a certificate.
What happens is that a TCP connection is established and the Go library wants to start a TLS handshake, but the other side (the health checker) just hangs up, so the handshake never completes. Go then logs this message.
You are correct that it is harmless; it is purely a side effect of port-liveness health checking. It is, however, spammy and annoying.
You have two basic options to get around this:
filter the messages out of the logs when persisting them
change to a different type of health check
I would recommend the second option: switch to a different health check. Vault has a /sys/health endpoint that can be used with HTTPS health checks.
In addition to getting rid of the TLS warning messages, the health endpoint also allows you to check for active and unsealed nodes.
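For example, if the health check is a Kubernetes probe, a minimal sketch could replace the plain TCP check (readinessProbe shown; the field names are standard Kubernetes, and /v1/sys/health is Vault's health API path):
readinessProbe:
  httpGet:
    path: /v1/sys/health
    port: 8200
    scheme: HTTPS
  periodSeconds: 5
The endpoint returns different status codes for active, standby, sealed, and uninitialized nodes, and accepts query parameters such as standbyok=true if standby nodes should also count as healthy.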

DBVisualizer not able to connect to Kerberised Hive

We have an HDP (3.1.0) cluster with Hive (3.0.0.3.1). The cluster is Kerberized.
I am trying to connect to Hive with DbVisualizer, without success. The client (the machine I run DbVisualizer from) is a CentOS 7 machine.
Kerberos related
On the client, here is the /etc/krb5.conf (copied from one of the cluster's machines):
cat krb5.conf
[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = COMPANY.LOC
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
default_ccache_name = /tmp/krb5cc_%{uid}
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
[domain_realm]
COMPANY.LOC = COMPANY.LOC
[logging]
default = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
kdc = FILE:/var/log/krb5kdc.log
[realms]
COMPANY.LOC = {
admin_server = server.company.loc
kdc = server.company.loc
}
I used kinit and here is the result of klist:
[florianc#localhost etc]$ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: castelainf@COMPANY.LOC
Valid starting Expires Service principal
07/24/2020 09:12:03 07/24/2020 19:12:03 krbtgt/COMPANY.LOC@COMPANY.LOC
renew until 07/31/2020 09:11:59
DbVisualizer
Version: 11.0.4 (free)
Tools>Tool Properties>Specify overridden Java VM Properties here:
-Dsun.security.krb5.debug=true
-Djavax.security.auth.useSubjectCredsOnly=false
-Djava.security.krb5.conf="/etc/krb5.conf"
The driver JAR is the one provided by the cluster under Ambari > Hive > JDBC Standalone jar.
The database URL of the connection is:
jdbc:hive2://server1.company.loc:2181,server2.company.loc:2181,server3.company.loc:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_HOST@COMPANY.LOC
The error returned when trying to connect is the following:
Could not open client transport for any of the Server URI's in ZooKeeper: Can't get Kerberos realm
Edit 1
Using these URIs:
jdbc:hive2://server1.company.loc:2181/;principal=hive/_HOST@COMPANY.LO
jdbc:hive2://server1.company.loc:2181/;principal=hive/server1@COMPANY.LOC
jdbc:hive2://server1.company.loc:2181/;principal=hive/server1.company.loc@COMPANY.LOC
Always return:
Could not open client transport with JDBC Uri <URI>: Can't get Kerberos realm

Masstransit cannot access host machine RabbitMQ from a docker container

I created a simple .NET Core console application with Docker support. The following MassTransit code fails to connect to the RabbitMQ instance on the host machine, but a similar implementation using RabbitMQ.Client is able to connect to it.
MassTransit throws:
MassTransit.RabbitMqTransport.RabbitMqConnectionException: Connect failed: ctas@192.168.0.9:5672/ ---> RabbitMQ.Client.Exceptions.BrokerUnreachableException:
Host machine IP: 192.168.0.9
Using MassTransit
string rabbitMqUri = "rabbitmq://192.168.0.9/";
string userName = "ctas";
string password = "ctas#123";
string assetServiceQueue = "hello";

var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri(rabbitMqUri), hst =>
    {
        hst.Username(userName);
        hst.Password(password);
    });
    cfg.ReceiveEndpoint(host, assetServiceQueue, e =>
    {
        e.Consumer<AddNewAssetReceivedConsumer>();
    });
});

bus.Start();
Console.WriteLine("Service Running.... Press enter to exit");
Console.ReadLine();
bus.Stop();
Using RabbitMQ Client
public static void Main()
{
    var factory = new ConnectionFactory();
    factory.UserName = "ctas";
    factory.Password = "ctas#123";
    factory.VirtualHost = "watcherindustry";
    factory.HostName = "192.168.0.9";

    using (var connection = factory.CreateConnection())
    using (var channel = connection.CreateModel())
    {
        channel.QueueDeclare(queue: "hello",
                             durable: false,
                             exclusive: false,
                             autoDelete: false,
                             arguments: null);

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (model, ea) =>
        {
            var body = ea.Body;
            var message = Encoding.UTF8.GetString(body);
            Console.WriteLine(" [x] Received {0}", message);
        };
        channel.BasicConsume(queue: "hello",
                             autoAck: true,
                             consumer: consumer);

        Console.WriteLine(" Press [enter] to exit.");
        Console.ReadLine();
    }
}
Docker file
FROM microsoft/dotnet:1.1-runtime
ARG source
WORKDIR /app
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "TestClient.dll"]
I created an example and was able to connect to my host using the preview package from MassTransit.
Start rabbitmq in docker and expose ports on the host
docker run -d -p 5672:5672 -p 15672:15672 --hostname my-rabbit --name some-rabbit rabbitmq:3-management
Build and run console app.
docker build -t dotnetapp .
docker run -d -e RABBITMQ_URI=rabbitmq://guest:guest@172.17.0.2:5672 --name some-dotnetapp dotnetapp
To verify you are receiving messages, run
docker logs some-dotnetapp --follow
You should see the following output:
Application is starting...
Connecting to rabbitmq://guest:guest@172.17.0.2:5672
Received: Hello, World [08/12/2017 04:35:53]
Received: Hello, World [08/12/2017 04:35:58]
Received: Hello, World [08/12/2017 04:36:03]
Received: Hello, World [08/12/2017 04:36:08]
Received: Hello, World [08/12/2017 04:36:13]
...
Notes:
172.17.0.2 was the my-rabbit container's IP address, but you can replace it with your machine's IP address.
http://localhost:15672 is the RabbitMQ management console; log in with guest as both username and password.
Lastly, portainer.io is a very useful application for visually inspecting your local Docker environment.
Thanks for the response. I managed to resolve this issue. My findings are as follows.
To connect to a RabbitMQ instance running in another Docker container, both containers have to be connected to the same network. To do this:
Create a network:
docker network create -d bridge my_bridge
Connect both the app and RabbitMQ containers to that network:
docker network connect my_bridge <container name>
For the MassTransit URI, use the RabbitMQ container's IP on that network, or the container name.
To connect to a RabbitMQ instance on the host machine from an app in a Docker container, the MassTransit URI should include the machine name (I tried the IP, which did not work).
Try using the virtual host in the MassTransit configuration too; I am not sure why you decided to omit it.
var host = cfg.Host("192.168.0.9", "watcherindustry", hst =>
{
    hst.Username(userName);
    hst.Password(password);
});
Look at Alexey Zimarev's comment on your question. If your RabbitMQ runs in a container, it should be in your docker-compose file, and you should then use that entry in your endpoint definition to connect to it, because Docker creates an internal network that your source code can stay agnostic of...
rabbitmq:
  container_name: "rabbitmq-yournode01"
  hostname: rabbit
  image: rabbitmq:3.6.6-management
  environment:
    - RABBITMQ_DEFAULT_USER=yourusergoeshere
    - RABBITMQ_DEFAULT_PASS=yourpasswordgoeshere
    - RABBITMQ_DEFAULT_VHOST=vhost
  volumes:
    - rabbit-volume:/var/lib/rabbitmq
  ports:
    - "5672:5672"
    - "15672:15672"
In your app settings you should have something like:
"ConnectionString": "host=rabbitmq:5672;virtualHost=vhost;username=yourusergoeshere;password=yourpasswordgoeshere;timeout=0;prefetchcount=1",
And if you use EasyNetQ you could do:
_bus = RabbitHutch.CreateBus(_connectionString); // The one above
I hope it helps,
Juan

Unable to configure and run pithos.io using AWS Java SDK

I am trying to configure pithos.io on my server testmbr1.kabuter.com:8081:
Here is how I start pithos.io:
java -jar pithos-0.7.5-standalone.jar -f pithos.yaml
My pithos.yaml:
service:
  host: "0.0.0.0"
  port: 8081

logging:
  level: info
  console: true
  overrides:
    io.pithos: debug

options:
  service-uri: testmbr1.kabuter.com
  default-region: myregion

keystore:
  keys:
    AKIAIOSFODNN7EXAMPLE:
      master: true
      tenant: test@example.com
      secret: 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'

bucketstore:
  default-region: myregion
  cluster: "45.33.37.148"
  keyspace: storage

regions:
  myregion:
    metastore:
      cluster: "45.33.37.148"
      keyspace: storage
    storage-classes:
      standard:
        cluster: "45.33.37.148"
        keyspace: storage
        max-chunk: "128k"
        max-block-chunk: 1024

cassandra:
  saved_caches_directory: "target/db/saved_caches"
  data_file_directories:
    - "target/db/data"
  commitlog_directory: "target/db/commitlog"
I am using the AWS Java SDK to connect. Below is my JUnit test:
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import org.junit.Test;

@Test
public void testPithosIO() {
    try {
        ClientConfiguration config = new ClientConfiguration();
        config.setSignerOverride("S3SignerType");
        EndpointConfiguration endpointConfiguration = new EndpointConfiguration("http://testmbr1.kabuter.com:8081",
                "myregion");
        BasicAWSCredentials awsCreds = new BasicAWSCredentials("AKIAIOSFODNN7EXAMPLE",
                "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY");
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion("myregion")
                .withClientConfiguration(config)
                .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
                .withEndpointConfiguration(endpointConfiguration).build();
        s3Client.createBucket("mybucket1");
        System.out.println(s3Client.getRegionName());
        System.out.println(s3Client.listBuckets());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
My problems are: 1) I am getting: com.amazonaws.SdkClientException: Unable to execute HTTP request: Connect to mybucket1.testmbr1.kabuter.com:8081 [mybucket1.testmbr1.kabuter.com/198.105.254.130, mybucket1.testmbr1.kabuter.com/104.239.207.44] failed: connect timed out
This was fixed by adding a mybucket1.testmbr1 CNAME record pointing to testmbr1.kabuter.com.
2) While trying to create a bucket with s3Client.createBucket("mybucket1"), I am getting:
com.amazonaws.services.s3.model.AmazonS3Exception: The request signature we calculated does not match the signature you provided. Check your key and signing method. (Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: d98b7908-d11e-458a-be27-254b136f344a), S3 Extended Request ID: d98b7908-d11e-458a-be27-254b136f344a
How do I get it working? pithos.io seems to have limited documentation.
Any pointers?
Since my endpoint was using a non-standard port:
http://testmbr1.kabuter.com:8081
I had to define service-uri in pithos.yaml with the port as well:
service-uri: testmbr1.kabuter.com:8081
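To verify the fix from outside the SDK, a rough sketch with the AWS CLI (assuming the same keys are configured and, since pithos is set up above with the v2 signer via S3SignerType, that the v1 CLI is told to use v2 signing):
aws configure set aws_access_key_id AKIAIOSFODNN7EXAMPLE
aws configure set aws_secret_access_key wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
aws configure set default.s3.signature_version s3
aws s3 mb s3://mybucket1 --endpoint-url http://testmbr1.kabuter.com:8081
aws s3 ls --endpoint-url http://testmbr1.kabuter.com:8081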

Kafka SASL zookeeper authentication

I am facing the following error while enabling SASL authentication between ZooKeeper and the broker.
[2017-04-18 15:54:10,476] DEBUG Size of client SASL token: 0 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-04-18 15:54:10,476] ERROR cnxn.saslServer is null: cnxn object did not initialize its saslServer properly. (org.apache.zookeeper.server.ZooKeeperServer)
[2017-04-18 15:54:10,478] ERROR SASL authentication failed using login context 'Client'. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2017-04-18 15:54:10,478] DEBUG Received event: WatchedEvent state:AuthFailed type:None path:null (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] INFO zookeeper state changed (AuthFailed) (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] DEBUG Leaving process event (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] DEBUG Closing ZkClient... (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2017-04-18 15:54:10,478] DEBUG Closing ZooKeeper connected to localhost:2181 (org.I0Itec.zkclient.ZkConnection)
[2017-04-18 15:54:10,478] DEBUG Close called on already closed client (org.apache.zookeeper.ZooKeeper)
[2017-04-18 15:54:10,478] DEBUG Closing ZkClient...done (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,480] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkAuthFailedException: Authentication failure
at org.I0Itec.zkclient.ZkClient.waitForKeeperState(ZkClient.java:947)
at org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:924)
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1231)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:79)
at kafka.utils.ZkUtils$.apply(ZkUtils.scala:61)
at kafka.server.KafkaServer.initZk(KafkaServer.scala:329)
at kafka.server.KafkaServer.startup(KafkaServer.scala:187)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
[2017-04-18 15:54:10,482] INFO shutting down (kafka.server.KafkaServer)
The following configuration is in the JAAS file, which is passed via KAFKA_OPTS as a JVM parameter:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret";
};
Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret";
};
The Kafka broker's server.properties has the following extra fields set:
zookeeper.set.acl=true
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
ssl.client.auth=required
ssl.endpoint.identification.algorithm=HTTPS
ssl.keystore.location=path
ssl.keystore.password=anything
ssl.key.password=anything
ssl.truststore.location=path
ssl.truststore.password=anything
Zookeeper properties are as follows:
authProvider.1=org.apache.zookeeper.server.auth.DigestAuthenticationProvider
jaasLoginRenew=3600000
requireClientAuthScheme=sasl
I found the issue by increasing the log level to DEBUG. Basically, follow the steps below. I don't use SSL, but you can integrate it without any issue.
Following are my configuration files:
server.properties
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
auto.create.topics.enable=false
broker.id=0
listeners=SASL_PLAINTEXT://localhost:9092
advertised.listeners=SASL_PLAINTEXT://localhost:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
advertised.host.name=localhost
num.partitions=1
num.recovery.threads.per.data.dir=1
log.flush.interval.messages=30000000
log.flush.interval.ms=1800000
log.retention.minutes=30
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
delete.topic.enable=true
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
super.users=User:admin
zookeeper.properties
dataDir=/tmp/zookeeper
clientPort=2181
maxClientCnxns=0
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
producer.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
bootstrap.servers=localhost:9092
compression.type=none
consumer.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
group.id=test-consumer-group
Now, these are the most important files for making your server start without any issue:
zookeeper_jaas.conf
Server {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret";
};
kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret";
};
Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret";
};
After doing all this configuration, on a first terminal window:
Terminal 1 (start Zookeeper server)
From kafka root directory
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/usename/Documents/kafka_2.11-0.10.1.0/config/zookeeper_jaas.conf"
$ bin/zookeeper-server-start.sh config/zookeeper.properties
Terminal 2 (start Kafka server)
From kafka root directory
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/usename/Documents/kafka_2.11-0.10.1.0/config/kafka_server_jaas.conf"
$ bin/kafka-server-start.sh config/server.properties
[BEGIN UPDATE]
kafka_client_jaas.conf
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret";
};
Terminal 3 (start Kafka consumer)
On a client terminal, export the client JAAS conf file and start the consumer:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/kafka_client_jaas.conf"
$ ./bin/kafka-console-consumer.sh --new-consumer --zookeeper localhost:2181 --topic test-topic --from-beginning --consumer.config=config/consumer.properties --bootstrap-server=localhost:9092
Terminal 4 (start Kafka producer)
If you also want to produce, do this on another terminal window:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/kafka_client_jaas.conf"
$ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test-topic --producer.config=config/producer.properties
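Since the server.properties above enables SimpleAclAuthorizer (with allow.everyone.if.no.acl.found=true as a fallback), here is a rough sketch of granting a principal explicit access to the test topic if you later tighten that fallback; the principal, topic, and group names simply mirror the examples above:
$ bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
    --add --allow-principal User:admin \
    --producer --consumer --group test-consumer-group \
    --topic test-topic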
[END UPDATE]
You need to create a JAAS config file for ZooKeeper and make ZooKeeper use it.
Create a JAAS config file for ZooKeeper with content like this:
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_admin="admin-secret";
};
The user (admin) and password (admin-secret) must match the username and password that you have in the Client section of the Kafka JAAS config file.
To make ZooKeeper use the JAAS config file, pass the following JVM flag to ZooKeeper, pointing to the file created before:
-Djava.security.auth.login.config=/path/to/server/jaas/file.conf
If you are using the ZooKeeper included with the Kafka package, you can launch ZooKeeper like this, assuming that your ZooKeeper JAAS config file is located at ./config/zookeeper_jaas.conf:
EXTRA_ARGS=-Djava.security.auth.login.config=./config/zookeeper_jaas.conf ./bin/zookeeper-server-start.sh ./config/zookeeper.properties