Kafka and Zookeeper TLS - ssl

I am trying to enable TLS for Kafka broker exchanges and had a thought regarding ZooKeeper TLS. Currently, the Apache Kafka documentation says little about a ZK TLS setup (probably because it's a different Apache project) or any possible performance impact.
The question is: can I secure ONLY the broker-client and inter-broker exchanges? Do I also need to add TLS to ZooKeeper? Extra security isn't bad, but is it really necessary for ZooKeeper as well?

TLS support in ZooKeeper is only available in ZooKeeper 3.5, which is still in beta. Therefore, Kafka doesn't support TLS connections to ZooKeeper yet. That doesn't mean you can't do it, but it does mean you won't find much documentation on it, and if you run it on something important you are putting yourself at risk. In this case, I would say the extra security could hurt.
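For the broker-client and inter-broker side, which is the supported part, a minimal client-side sketch is shown below; the broker address, keystore/truststore paths and passwords are placeholders, and the keystore entries are only needed if the brokers require client authentication.

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class SslProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Point at the broker's SSL listener (hostname/port are placeholders).
            props.put("bootstrap.servers", "broker1:9093");
            props.put("security.protocol", "SSL");
            // Truststore containing the CA that signed the broker certificates.
            props.put("ssl.truststore.location", "/var/private/ssl/client.truststore.jks");
            props.put("ssl.truststore.password", "changeit");
            // Keystore: only required if the brokers enforce client authentication.
            props.put("ssl.keystore.location", "/var/private/ssl/client.keystore.jks");
            props.put("ssl.keystore.password", "changeit");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // ... send records over the TLS-secured broker connection ...
            }
        }
    }

The corresponding broker-side settings (an SSL listener plus the ssl.keystore.*/ssl.truststore.* entries) go in server.properties; none of this touches the ZooKeeper connection.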

Related

What protocol does Akka.NET use to communicate to nodes in the cluster?

For example, when setting the remote{} configuration, does that also set the transport that is used internally for cluster communication, for example the heartbeat messages?
I am not asking about any particular use case; I am asking so that I can better understand what's happening behind the scenes.
At the moment, Akka.NET (1.3) uses its own protocol for remote communication on top of a TCP connection - only a single connection is used for each node-to-node pair. This video discusses it in greater detail.
In the future, this will probably change to match the JVM version of Akka - the two major ideas are:
"lanes": multiple connections for each pair of nodes, to avoid the head-of-line blocking that is an inherent problem of TCP.
Support for other protocols, such as Aeron, which is also supported by Akka on the JVM.

Failover with Spring AMQP and RabbitMQ HA

There are multiple articles suggesting that a load balancer should be used in front of a RabbitMQ cluster.
However, there are also multiple references stating that Spring AMQP has some failover implementation, such as resetting the connection when a broker comes back to life.
I have several questions regarding this topic (given that those articles are more or less old and it's 2018 today):
When using Spring AMQP, is load balancing still required?
If load balancing is still suggested, how would I handle affinity of a primary queue to its node? There would be a lot of traffic between cluster nodes, because a round-robin load balancer would hit the correct cluster node only 1/n of the time.
Does Spring AMQP support some kind of topology awareness that would allow it to consume from the correct node?
There were some articles suggesting that clients should publish/consume to nodes respecting the locality of queues. Does this still apply? How does this all fit together given load balancing, Spring AMQP failover and the CachingConnectionFactory?
Can anybody please answer these questions and provide relevant references for verification?
Thanks a lot
For each of your bullets:
A load balancer makes little sense with the default configuration of Spring AMQP, since it opens a single, long-lived connection that is shared across all consumers. In 2.0, you can configure the RabbitTemplate to use a separate connection; using a different connection for publishers and consumers is the recommended configuration and will be the default in 2.1.
It might make sense to use a load balancer if you configure the connection factory to cache connections (instead of just channels), since each component then gets its own connection (see the configuration sketch after this list).
See next bullet.
See Queue Affinity and the LocalizedQueueConnectionFactory. It uses the management plugin to determine which node currently hosts the queue and connects to that node. It will not work with a load balancer since it needs to connect to the actual node.
It is my understanding from several discussions that queue affinity is only needed in the most extreme environments and that, in most environments, the difference is immeasurable. However, environments and networks differ so much that YMMV, so you may want to test. My general rule of thumb is to avoid premature optimization, since the added complexity of the configuration may simply not be worth the benefit (and you may not have a problem in the first place).
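A rough configuration sketch of the two options above, assuming Spring AMQP 1.7/2.0-era APIs; the host names, node names and credentials are placeholders, and the exact LocalizedQueueConnectionFactory constructor may vary slightly between versions.

    import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
    import org.springframework.amqp.rabbit.connection.CachingConnectionFactory.CacheMode;
    import org.springframework.amqp.rabbit.connection.LocalizedQueueConnectionFactory;

    public class RabbitConnectionSketch {

        // Connection (not just channel) caching: each component gets its own connection,
        // which is the scenario where a load balancer can start to make sense.
        public CachingConnectionFactory connectionCachingFactory() {
            CachingConnectionFactory cf = new CachingConnectionFactory("rabbit-lb.example.com");
            cf.setCacheMode(CacheMode.CONNECTION);
            cf.setConnectionCacheSize(10);
            return cf;
        }

        // Queue affinity: the LocalizedQueueConnectionFactory queries the management plugin
        // to find the node hosting the queue and connects directly to it, so it must be given
        // the real node addresses rather than a load-balanced address.
        public LocalizedQueueConnectionFactory localizedFactory(CachingConnectionFactory defaultCf) {
            String[] addresses = { "rabbit1:5672", "rabbit2:5672" };
            String[] adminUris = { "http://rabbit1:15672", "http://rabbit2:15672" };
            String[] nodes = { "rabbit@rabbit1", "rabbit@rabbit2" };
            return new LocalizedQueueConnectionFactory(defaultCf, addresses, adminUris, nodes,
                    "/", "guest", "guest", false, null);
        }
    }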

TLS/SSL connection to Redis when using spring-data-redis

What is the recommended way to make a TLS/SSL connection to Redis sentinel using spring-data-redis and Jedis?
I'm using spring-data-redis 1.8.3.RELEASE with Jedis 2.9.0.
I understand that Redis does not provide direct support for TLS/SSL and instead recommends a secure proxy like spiped or stunnel. So let's assume I have set up the appropriate secure tunnels.
I can see that JedisConnectionFactory has a setUseSsl(boolean useSsl) method, but the value only seems to be used in createRedisPool() and not createRedisSentinelPool(), which leads me to think it is currently not possible with Redis sentinel.
Additionally, even when using standalone Redis and setting useSsl to true, there doesn't appear to be a way to set the SSLSocketFactory or its parameters, so it will likely end up relying on the JVM's SSL system properties, which is problematic if those aren't the SSL properties you want to use to connect to the secure tunnel.
Just trying to confirm if my above assumptions are correct, and if not then looking for pointers in the right direction. Thanks.
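For reference, a minimal sketch of the standalone path described above (spring-data-redis 1.8.x with Jedis), pointing the connection factory at a local stunnel/spiped endpoint; the host and port are placeholders.

    import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
    import org.springframework.data.redis.core.StringRedisTemplate;

    public class RedisTunnelSketch {

        public StringRedisTemplate redisTemplate() {
            JedisConnectionFactory factory = new JedisConnectionFactory();
            // Connect to the local end of the stunnel/spiped tunnel rather than Redis directly.
            factory.setHostName("127.0.0.1");
            factory.setPort(6380);
            // Only honoured by the standalone pool (createRedisPool), as noted above; the TLS
            // handshake itself falls back to the JVM-wide javax.net.ssl.* system properties.
            factory.setUseSsl(true);
            factory.afterPropertiesSet();
            return new StringRedisTemplate(factory);
        }
    }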

Kafka Zookeeper security

I am using Kafka version 0.10.2.0. Is there a way to secure communication between the ZooKeeper client (i.e. ZkClient) and the ZooKeeper server with SSL? I found a way to do it through SASL, but I want it through SSL.
ZooKeeper 3.5 includes SSL support, but it is still in alpha, so Kafka doesn't yet support it. The highest supported version is 3.4, which only includes SASL.
Ref: https://issues.apache.org/jira/browse/ZOOKEEPER-1000
This can still be achieved with a simple workaround, described in the steps below:
Install zookeeper-3.5.1-alpha (to use its .jar files; any 3.5+ version can be used)
Replace the default zookeeper*.jar in <kafka-installation-folder>\libs with /zookeeper-3.5.1-alpha/zookeeper-3.5.1-alpha.jar
Copy /zookeeper-3.5.1-alpha/lib/netty-3.7.0.Final.jar into <kafka-installation-folder>\libs
Apply the relevant changes to enable SSL on ZooKeeper (https://cwiki-test.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide); a client-side sketch follows below
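As an illustration of that last step, a hedged sketch of the client-side system properties described in the linked ZooKeeper SSL User Guide, set before the ZooKeeper client connects; the keystore/truststore paths and passwords are placeholders.

    public class ZooKeeperClientSslSetup {

        // Per the ZooKeeper 3.5 SSL User Guide, the client must switch to the Netty
        // connection implementation and enable secure client mode.
        static void enableClientSsl() {
            System.setProperty("zookeeper.clientCnxnSocket",
                    "org.apache.zookeeper.ClientCnxnSocketNetty");
            System.setProperty("zookeeper.client.secure", "true");
            // Placeholder keystore/truststore locations and passwords.
            System.setProperty("zookeeper.ssl.keyStore.location", "/path/to/client.keystore.jks");
            System.setProperty("zookeeper.ssl.keyStore.password", "changeit");
            System.setProperty("zookeeper.ssl.trustStore.location", "/path/to/client.truststore.jks");
            System.setProperty("zookeeper.ssl.trustStore.password", "changeit");
        }
    }

On the server side, the same guide has ZooKeeper listen on a secureClientPort and use serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory in zoo.cfg.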

Can I configure Apache ActiveMQ to use the STOMP protocol over UDP?

I'm developing a STOMP binding for Ada, which works fine using TCP/IP as the transport between the client and an ActiveMQ server configured as a STOMP broker. I thought I would support UDP as well (i.e. STOMP over UDP); however, the lack of pertinent information in the ActiveMQ documentation and in web searches suggests to me that this isn't possible, and perhaps doesn't even make sense :-)
Confirmation one way or the other (and an ActiveMQ configuration excerpt if this is possible) would be appreciated.
This is not implemented in ActiveMQ at the moment, as the STOMP transport uses TCP only. It would be possible to implement, so if you have the time to do it, give it a try.