I can see from the Erlang TLS 1.3 documentation that we can enable session resumption on the server by setting, for example:
{session_tickets, stateless},
The documentation also states:
Session tickets are protected by application traffic keys, and in stateless tickets, the opaque data structure itself is self-encrypted.
I take it that by "application traffic keys" they mean the key provided in the keyfile. Is there any way to configure the session tickets to be protected/synchronized by some custom key material that can be distributed to many servers, so that clients can resume sessions against any of these servers?
OpenSSL has SSL_CTX_set_tlsext_ticket_key_cb which lets you manage tickets manually. I'm looking to do something similar in Erlang.
We're running WebSphere MQ 9.1 and our Telemetry (MQTT) channel is configured to require SSL authentication.
Our certificates have lifespans of just a few months, and we want to automate the process of replacing them. I can easily create a new .kdb file and place it in the SSLKEYR location, but this doesn't automatically make the channel use the new certificates.
I have tried the REFRESH SECURITY TYPE(SSL) command and it succeeds (output: AMQ8560I: IBM MQ security cache refreshed.). I would think this should work; see: https://www.ibm.com/docs/en/ibm-mq/9.1?topic=authorities-refreshing-tls-security
Refreshing TLS security
If you make a change to the key repository, you can refresh the copy of the key repository that is held in memory while a channel is running, without restarting the channel. When you refresh the cached copy of the key repository, the TLS channels that are currently running on the queue manager are updated with the new information.
About this task
When a channel is secured using TLS, the digital certificates and their associated private keys are stored in the key repository. A copy of the key repository is held in memory while a channel is running. If you make a change to the key repository, you can refresh the copy of the key repository that is held in memory without restarting the channel.
When you refresh the cached copy of the key repository, all TLS channels that are currently running are updated:
Sender, server, and cluster-sender channels that use TLS are allowed to complete the current batch of messages. The channels then run the SSL handshake again with the refreshed view of the key repository.
All other channel types that use TLS are stopped. If the partner end of the stopped channel has retry values defined, the channel retries and runs the SSL handshake again. The new SSL handshake uses the refreshed view of the contents of the key repository, the location of the LDAP server to be used for the Certificate Revocation Lists, and the location of the key repository. In the case of server-connection channel, the client application loses its connection to the queue manager and has to reconnect to continue.
However, when I replace the KDB with a key repository containing different certificates and refresh security, my clients still reconnect after I purge them from the channel. When I restart the channel, the clients stay offline as expected.
Why doesn't the security refresh work in this case (is it because it's a telemetry channel?), and is there a way to solve this puzzle without stopping and starting the channel?
In IBM MQ, MQTT components are called MQXR. There are 3 log files you can check:
(1)
Windows: {MQ_DATA_PATH}\qmgrs\{qmgr_name}\mqxr.stdout
Windows: {MQ_DATA_PATH}\qmgrs\{qmgr_name}\mqxr.stderr
Unix: {MQ_DATA_PATH}/qmgrs/{qmgr_name}/mqxr.stdout
Unix: {MQ_DATA_PATH}/qmgrs/{qmgr_name}/mqxr.stderr
(2)
Windows: {MQ_DATA_PATH}\qmgrs\{qmgr_name}\errors\mqxr_0.log
Unix: {MQ_DATA_PATH}/qmgrs/{qmgr_name}/errors/mqxr_0.log
The log file mqxr_0.log should have any error messages related to refreshing security.
Here's an interesting note from the MQ Knowledge Center:
All other channel types using TLS are stopped with a STOP CHANNEL MODE(FORCE) STATUS(INACTIVE) command. If the partner end of the stopped message channel has retry values defined, the channel retries and the new TLS handshake uses the refreshed view of the contents of the TLS key repository, the location of the LDAP server to be used for Certification Revocation Lists, and the location of the key repository. In the case of a server-connection channel, the client application loses its connection to the queue manager and has to reconnect in order to continue.
So, your issuing the STOP CHANNEL command is basically doing what the note says the REFRESH SECURITY command will do.
Finally, you should probably open a PMR (help ticket) with IBM to see what they say and possibly get the issue fixed if it is a bug.
Considering the Redis security documentation, are my thoughts below right?
Redis does not provide strong security functions by itself.
Redis already assumes that only trusted Redis clients are connecting in a secured network.
Simple security settings, for example IP restrictions in the OS firewall, are one way to protect it.
I don't think that Redis's security model is wrong. Basically, Redis is a backend program running in a private network, just like database servers are.
Redis's built-in security is weak, but security still matters.
The document itself mentions different methods to address the weak points, such as implementing authentication.
It also mentions that "Redis is not optimized for maximum security but for maximum performance and simplicity". Hence, it is up to the developer to implement security around it.
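Just to make the authentication point concrete, here is a minimal sketch using the Jedis Java client. The host, port, and the requirepass value ("s3cret") assumed to be set in redis.conf are all placeholders of my own, not anything from the Redis docs:

import redis.clients.jedis.Jedis;

public class RedisAuthSketch {
    public static void main(String[] args) {
        // Connect to a Redis instance that lives on a private network segment.
        try (Jedis jedis = new Jedis("10.0.0.5", 6379)) {
            // requirepass "s3cret" is assumed in redis.conf; AUTH must succeed
            // before any other command is accepted.
            jedis.auth("s3cret");
            jedis.set("greeting", "hello");
            System.out.println(jedis.get("greeting"));
        }
    }
}

Note that this only adds a shared password; it does not replace network-level restrictions such as the firewall rules mentioned above.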
I have two Kafka brokers, and I want to use SSL encryption for them. I read the docs, and they say to generate an SSL key for each broker.
I created one SSL key and used the same key on both brokers. Why can't we create it once and use it on all brokers? Does that carry any risk?
The risk is that if that one key is compromised, you have now compromised all of the brokers instead of just one. Every organization has its own requirements for this kind of thing, so I recommend checking with the security team that runs other distributed applications in your organization to see what they do and why.
From both the official Kafka docs, as well as an ocean of blogs that churned up during the course of my travels, it looks like I can spin up a Kafka broker whose server.properties config file contains:
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:Bob;User:Alice
This defines two superusers (Bob + Alice) who can then produce messages to, and consume messages from, my broker's topics.
But how do I leverage these users from the client side? If I have a Java client that needs to send messages to my Kafka broker, how does that client "authenticate" itself as 'Bob' (or 'Alice', or any other superuser)?
And where are the super user passwords defined/used?!?
I did some digging this week and it looks like "basic auth"-style (username + password) credentials are not supported in Kafka proper.
It looks like you can set up Kerberos or a similar solution (JAAS/SASL, etc.) to create a ticket service that works with Kafka, which is what these ACLs seem to be for. I think the gist is that you would first authenticate against, say, Kerberos, at which point you would be granted a ticket/token. You would then present your username/principal along with your ticket to Kafka, and Kafka would work with Kerberos to ensure the ticket was still valid. I think that's how it works, based on some obscure/vague blogs I was able to get my hands on.
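To make that concrete, here's a rough sketch of what the Kerberos route might look like from the Java client side. I haven't verified this end to end; the broker address, keytab path, and principal are placeholders, and it assumes a client version that supports the sasl.jaas.config property:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KerberosProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1.example.com:9093");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Authenticate via the Kerberos ticket service rather than a username/password.
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "GSSAPI");
        props.put("sasl.kerberos.service.name", "kafka");
        // Inline JAAS config; the Kerberos principal is what Kafka maps to a
        // principal such as User:Bob when it evaluates ACLs and super.users.
        props.put("sasl.jaas.config",
                "com.sun.security.auth.module.Krb5LoginModule required "
                + "useKeyTab=true keyTab=\"/etc/security/keytabs/bob.keytab\" "
                + "principal=\"bob@EXAMPLE.COM\";");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "hello"));
        }
    }
}

So, as far as I can tell, there are no superuser passwords as such; the identity comes from the SASL/Kerberos layer, and super.users just lists which authenticated principals bypass the ACL checks.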
I also see evidence that Kafka currently, or plans on, having some kind of integration-layer support with LDAP, and so you might be able to hook your Kafka cluster up to AD or similar.
The best way to manage Kafka authentication, weirdly enough, seems to be the Yahoo! Kafka Manager tool, which seems to be a very popular, well-maintained project rife with recent updates and community support. This is likely what I will run with, at least for the time being. HTH.
In the context of Tomcat, can session replication take place without enabling sticky sessions?
I understand the purpose of sticky sessions is to have the client 'stick' to one server throughout the session. With session replication, the client's interaction with the server is replicated throughout the cluster (many web servers).
In the above case, can session replication take place? I.e., the client's session is spread throughout the web servers, and each interaction with any one web server is replicated across the cluster, thus allowing seamless interaction.
AFAIK, Tomcat clustering does not support non-sticky sessions. From the Tomcat docs:
Make sure that your loadbalancer is configured for sticky session mode.
But there's a solution (which I created, so you know I'm biased :-)) called memcached-session-manager (msm) that also supports non-sticky sessions. msm uses memcached (or any backend speaking the memcached protocol) as the backend for session backup/storage.
In non-sticky mode, sessions are stored only in memcached and no longer in Tomcat, as with non-sticky sessions the session store must be external (to avoid stale data).
It also supports session locking: with non-sticky sessions, multiple parallel requests may hit different Tomcat instances and could modify the session in parallel, so that some of the session changes might get overwritten by others. Session locking allows synchronization of parallel requests going to different Tomcat instances.
The msm home page mainly describes the sticky session approach (as it started with this only), for details regarding non-sticky sessions you might search or ask on the mailing list.
Details and examples regarding the configuration can be found in the msm wiki (SetupAndConfiguration).
Just to give you an idea regarding the complexity: what you need is one or more memcached servers running (or something similar that speaks the memcached protocol) and an updated Tomcat context.xml like this:
<Context>
  ...
  <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
           memcachedNodes="n1:host1.domain.com:11211,n2:host2.domain.com:11211"
           sticky="false"
           sessionBackupAsync="false"
           lockingMode="auto"
           requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
           />
</Context>
Your load-balancer does not need special configuration, so once you have these things in place you can start testing your application.
An excellent article about this topic here:
http://www.mulesoft.com/tomcat-cluster
The terracotta product they mention has a simplistic tutorial here:
http://www.terracotta.org/documentation/web-sessions/get-started
Terracotta works with Tomcat, but you must pay attention to which parts of Terracotta are free and which are commercial. A couple of years ago their redundant store was a paid feature, and I don't remember this solution being a separate product.
I have actually found the inverse of this question. Session replication for me (Tomcat 7) with the OOTB options only works properly WITHOUT sticky sessions. After turning the logging up, I found that with jvmRoute enabled my session ID goes from A123456789 to A123456789.01, i.e. suffixed with the jvmRoute. That session is successfully replicated from Node 01 in the cluster to Node 02 but uses the same ID (A123456789.01). When I take Node 01 out of the cluster, the traffic starts to stick on Node 02, which then looks for a session A123456789.02 that of course does not exist. The .01 session is there, but basically sits idle until it expires. If I bring the other server back up so the sessions are replicated, and then take 02 down, I get even odder behaviour because the session is picked up where it left off.
For me, so far, session replication without sticky sessions (just regular round-robin among the nodes in the cluster) is the only thing that has worked.