I have two Kafka brokers and I want to enable SSL encryption for them. I read the docs, and they say to generate an SSL key for each broker.
I created one SSL key and used the same key on both brokers. Why can't we create it once and use it on all brokers? Does that carry any risk?
The risk is that if one key is compromised, you have now compromised all of the brokers instead of just one. Every organization has its own requirements for this kind of thing, so I recommend checking with the security team that runs other distributed applications in your organization to see what they do and why.
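For reference, a minimal sketch of what per-broker key generation and the matching broker SSL settings can look like; hostnames, ports, file paths and passwords are placeholders, and signing the certificates with a CA that is present in the truststore is omitted here:

# one keystore (and private key) per broker, with the broker's hostname in the certificate
keytool -keystore kafka.broker1.keystore.jks -alias broker1 -genkeypair -keyalg RSA -validity 365 -ext SAN=DNS:broker1
keytool -keystore kafka.broker2.keystore.jks -alias broker2 -genkeypair -keyalg RSA -validity 365 -ext SAN=DNS:broker2

# server.properties on broker1 (broker2 points at its own keystore)
listeners=SSL://broker1:9093
security.inter.broker.protocol=SSL
ssl.keystore.location=/var/private/ssl/kafka.broker1.keystore.jks
ssl.keystore.password=<keystore-password>
ssl.key.password=<key-password>
ssl.truststore.location=/var/private/ssl/kafka.truststore.jks
ssl.truststore.password=<truststore-password>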
I can see from the Erlang TLS 1.3 documentation that we can enable session resumption on the server by setting, for example,
{session_tickets, stateless},
The documentation also states
Session tickets are protected by application traffic keys, and in stateless tickets, the opaque data structure itself is self-encrypted.
I take it that by "application traffic keys" they mean the key provided in the keyfile. Is there any way to configure the session tickets to be protected/synchronized by some custom key material that can be distributed to many servers, so that clients can resume sessions against any of these servers?
OpenSSL has SSL_CTX_set_tlsext_ticket_key_cb which lets you manage tickets manually. I'm looking to do something similar in Erlang.
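For context, the resumption setup I am referring to looks roughly like this in recent Erlang/OTP releases (a sketch only; the port, certificate paths and hostname are placeholders, and this does not cover the custom ticket-key part of the question):

%% server side: TLS 1.3 with stateless session tickets
{ok, ListenSocket} = ssl:listen(4443, [
    {versions, ['tlsv1.3']},
    {session_tickets, stateless},
    {certfile, "server.pem"},
    {keyfile, "server.key"}
]).

%% client side: let the ssl application store and reuse tickets automatically
{ok, Socket} = ssl:connect("server.example", 4443, [
    {versions, ['tlsv1.3']},
    {session_tickets, auto}
]).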
The client must know all brokers when using the Failover Transport, right? Like this:
failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)
Is there an optimization so that the client does not have to know about every broker?
Put a TCP load balancer in front of the brokers and only forward requests to the master broker. The LB can check which broker is online by reading the "Slave" attribute of the broker via Jolokia/JMX.
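As an illustration, assuming the default web console port 8161, default credentials and a broker named "localhost" (all of which you would adjust to your setup), such a health check could query Jolokia roughly like this:

curl -u admin:admin "http://broker1:8161/api/jolokia/read/org.apache.activemq:type=Broker,brokerName=localhost/Slave"

The active (master) broker reports false for this attribute, while the slave reports true.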
A standalone approach would be to provide a URL pointing to a comma-separated list of broker URIs to try in case of failure. This can be done using the updateURIsURL option in the failover URI.
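For example, the client URI could look like this (the file path is illustrative), with the referenced file containing a comma-separated list such as tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616:

failover:(tcp://broker1:61616)?updateURIsURL=file:///etc/activemq/brokers.txt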
There are also some possibilities to auto-discover brokers using multicast or by querying an LDAP directory, but that requires certain infrastructure to be in place. Read more about it here.
I have a design for a RabbitMQ topology, but recently learned that RabbitMQ federation ignores messages that aren't "directly published" to the upstream exchange. This is a problem, because I am using a combination of exchange-to-exchange bindings and federation, so my setup isn't working.
Essentially, our setup is to have messages flowing into one exchange on an "inbound" server, federated to an exchange on a "routing" server, which is bound to another exchange on the routing server, which in turn is federated to an "outgoing" server (which is where clients create queues and bind them). The reasoning behind the exchange-to-exchange binding is to force the routing to happen there, instead of allowing it to happen all the way upstream, as would occur without that link. For load reasons, we can't afford for the routing to happen upstream on the "inbound" servers.
Is there a way to re-publish messages in the routing server so federation picks them up, or something to that effect? Is there something other than federation I should use in this topology?
Yes, the shovel plugin allows you to do just that. It consumes from one exchange and re-publishes to another, and the exchanges can be on the same or different nodes.
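As a rough sketch, a dynamic shovel that replaces the exchange-to-exchange binding on the routing server, consuming from the first exchange and re-publishing into the second, could be declared like this (the exchange names, shovel name and routing-key pattern are placeholders; "amqp://" means the local node); because the messages are re-published, the downstream federation link sees them as directly published:

rabbitmqctl set_parameter shovel routing-repub '{"src-uri": "amqp://", "src-exchange": "routing-in", "src-exchange-key": "#", "dest-uri": "amqp://", "dest-exchange": "routing-out"}'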
I am trying to demonstrate to others that my queue is using SSL, however from the RabbitMQ web management tools there seems to be no distinction over which queues are using SSL and which are not.
Using the RabbitMQ management UI on localhost, I am able to see all my queues. I have set up SSL on port 5671 successfully using the troubleshooting guide from the RabbitMQ website.
Using MassTransit I have configured my incoming bus to use localhost:5671/my_queue_name with a client certificate and all is working successfully - I just can't confirm to others that the queue is secure. If I get a message from the queue using the web management tools, I can read the (JSON) message in plain text. Any ideas how I can prove my messages are secure?
I've attempted using BusDriver to peek at the queues but get nothing back (regardless of whether SSL is used or not).
SSL is used to secure connections, not to encrypt queue contents.
What SSL gives you is that communication from clients to RabbitMQ is encrypted, so you can reasonably be sure that nobody tampered with your messages in transit.
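One way to demonstrate this is to list the connections and show that they are established over TLS, for example:

rabbitmqctl list_connections name user ssl ssl_protocol ssl_cipher

The ssl column is true for connections made on the 5671 listener, and the Connections page of the management UI exposes similar per-connection details.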
Also if you need to validate that the sender of the message is a particular user, you could use this RabbitMQ extension: http://www.rabbitmq.com/validated-user-id.html
Why can only Java provide support for the failover protocol in ActiveMQ, and not other languages?
My doubt is this: with a failover URI like failover://(tcp://host1:61616,tcp://host2:61616)?randomize=false, the client still connects using one of the inner URLs, such as tcp://host1:61616, so how does the broker come to know whether the call used the failover protocol, and how does the broker decide that it needs to replicate the message?
Please understand that the failover protocol is meant for reconnect logic on the client side only; the AMQ broker isn't even aware whether a client is using the failover protocol or not.
From the official AMQ documentation:
The Failover transport layers reconnect logic on top of any of the other transports.

The Failover configuration syntax allows you to specify any number of composite URIs. The Failover transport randomly chooses one of the composite URIs and attempts to establish a connection to it. If it does not succeed, or if it subsequently fails, a new connection is established to one of the other URIs in the list.
I'm not sure what you mean by replication here, but as per the official doc:
The Failover transport tracks transactions by default. The inflight transactions are replayed on reconnection.
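To make the client-side point concrete, here is a minimal Java/JMS sketch (host names and the destination are placeholders): the reconnect handling lives entirely inside ActiveMQConnectionFactory, and whichever broker is reached just sees an ordinary TCP connection.

import javax.jms.Connection;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverClient {
    public static void main(String[] args) throws Exception {
        // the failover/reconnect logic runs in the client library, not in the broker
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://host1:61616,tcp://host2:61616)?randomize=false");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        session.createProducer(session.createQueue("TEST.QUEUE"))
               .send(session.createTextMessage("hello"));
        connection.close();
    }
}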
There are different ways to set up an HA solution with ActiveMQ.
If clients connect to host1 and host2 using the failover protocol, then the broker side needs to be set up for HA as well.
One solution is to cluster host1 and host2 in an active-active setup. Messages are then propagated when they are asked for; the queues are shared across the entire cluster among all AMQ brokers.
Otherwise, if the active-active solution is not preferred, a master-slave solution can be set up where the two brokers, host1 and host2, share the data area (for instance using a database for persistence or a shared SAN disk).
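For illustration, a shared-database master/slave setup could point both brokers' activemq.xml at the same datasource, roughly like this (the "#mysql-ds" bean is a placeholder you would define yourself):

<persistenceAdapter>
    <jdbcPersistenceAdapter dataDirectory="${activemq.data}" dataSource="#mysql-ds"/>
</persistenceAdapter>

Whichever broker obtains the database lock becomes the master; the other waits as a slave until the lock is released.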
There are more possible combinations of setups, but the failover protocol assumes that the overall solution can handle messages arriving at different brokers if one goes down. As far as I know, there is no other magic in the failover protocol from the broker's perspective.