Is Optional Traffic Possible For Cassandra Inter-node Encryption?

We can implement client-to-node encryption for C* with optional traffic (both encrypted and unencrypted connections are handled):
client_encryption_options:
    enabled: false
    # If enabled and optional is set to true, encrypted and unencrypted connections are handled.
    optional: false
    keystore: conf/.keystore
    keystore_password: cassandra
We don't have the same parameter available for inter-node communication.
Is there a way we can tweak the apache source code for custom implementation of C* with optional traffic for inter-node encryption?
Also, can we implement inter-node encryption for C* without incurring downtime?
Any links to apache source code for inter-node encryption would be great.
Thanks in advance

There is a way to implement inter-node encryption without downtime by utilizing different ports for non-encrypted and encrypted traffic.
Take a look at ssl_storage_port and storage_port in the cassandra.yaml file. By keeping both ports open for a short transition period, you can effectively support optional encryption.
Note: In Cassandra 4.0 there is an optional flag on inter-node encryption, and ssl_storage_port is deprecated.
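For reference, a minimal cassandra.yaml sketch of both approaches; the port numbers, paths, and passwords are placeholders, and field names follow recent cassandra.yaml defaults:

Pre-4.0, run both ports side by side during the rolling transition:

storage_port: 7000
ssl_storage_port: 7001
server_encryption_options:
    internode_encryption: all
    keystore: conf/.keystore
    keystore_password: cassandra
    truststore: conf/.truststore
    truststore_password: cassandra

In 4.0, a single storage_port with the optional flag:

server_encryption_options:
    internode_encryption: all
    # When true, both encrypted and unencrypted inter-node connections are accepted.
    optional: true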

Related

SSL and PLAINTEXT in Kafka

Can we use PLAINTEXT and SSL in the production environment, especially when brokers are within the same subnet and protected by a firewall?
I want to use SSL only for external connections like Kafka Connect.
Yes, you can list as many listeners as you have ports available in the listeners property on the brokers.
For instance, you can define SASL_PLAINTEXT between the brokers for replication, and allow PLAINTEXT/SASL_PLAINTEXT and SSL/SASL_SSL listeners for external traffic.
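A sketch of what that could look like in server.properties (host names and ports are placeholders):

# Internal listener for inter-broker replication, external SSL listener for clients
listeners=SASL_PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093
advertised.listeners=SASL_PLAINTEXT://broker1.internal:9092,SSL://broker1.example.com:9093
security.inter.broker.protocol=SASL_PLAINTEXT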

Ejabberd clustering over SSL?

Our web and mobile application suite is used by some government agencies requiring strong security. We're providing XMPP-based chat. We used Openfire as XMPP server, but it turned out Openfire clustering (provided by Hazelcast plugin) does not allow Openfire nodes to communicate over SSL. We're not allowed to use node-to-node communications without SSL.
So, we're currently looking at Ejabberd XMPP server as a (more scalable) alternative to Openfire. But it looks like Ejabberd cluster nodes also communicate without SSL. Is it possible to set up Ejabberd cluster with nodes using SSL to talk to each other?
There are two ways to enable clustering over TLS with ejabberd:
You can set Erlang distribution over TLS: http://erlang.org/doc/apps/ssl/ssl_distribution.html
You can use VPN to protect your cluster.
Typically, the second solution is best for performance, as you offload the SSL processing from your cluster to a lower-level layer. Clustering in ejabberd is not intended to be set up over the internet, as you need low latency between your nodes for optimal operation.
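As a sketch of the first option, the Erlang VM can be started with the TLS distribution flags from the linked documentation (placing this in ejabberdctl.cfg and the certificate paths are assumptions about your particular setup):

# In ejabberdctl.cfg (assumed), pass the distribution flags to the VM:
ERL_OPTIONS="-proto_dist inet_tls \
  -ssl_dist_opt server_certfile /etc/ejabberd/node-cert.pem \
  -ssl_dist_opt server_keyfile /etc/ejabberd/node-key.pem"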

Can I use kafka over Internet?

Is kafka suitable for Internet-use?
More precisely, what I want is to expose kafka topics as "public interface", then external consumers (or producers) can connect to it. Is it possible?
I hear there are problems if I want to use the cluster in both internal and external networks, because it is then hard to configure advertised.host.name. Is that true?
And do I have to expose ZooKeeper as well? I think the new consumer/producer API no longer needs that.
Kafka's wire protocol is TCP-based and works fine over the public internet. In the latest versions of Kafka you can configure multiple interfaces for both internal and external traffic. Examples of Kafka over the internet in production include several Kafka-as-a-Service offerings from Heroku, IBM MessageHub, and Confluent Cloud.
You do not need to expose zookeeper if the Kafka clients use the new consumer API.
You may also choose to expose a REST Proxy such as the open source Confluent REST Proxy as a more client firewall friendly interface since it runs over HTTP(S) and will not be blocked by most corporate or personal firewalls.
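For example, producing a message through the REST Proxy's v2 JSON endpoint looks roughly like this (host and topic are placeholders):

curl -X POST https://rest-proxy.example.com/topics/test \
  -H "Content-Type: application/vnd.kafka.json.v2+json" \
  -d '{"records": [{"value": {"hello": "world"}}]}'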
I would personally not expose the Kafka server directly to clients via TCP for these reasons, only to name a few:
If a bad client opens too many connections, this may affect the stability of the Kafka platform and may affect other clients too
Too many open files on the Kafka server; HW/SW settings and OS tuning are needed to limit uncontrolled clients
If you need to add a Kafka server to increase scalability, you may need to go through a lot of low-level configuration (firewall, IP visibility, certificates, etc.) on both the client and server side. Other products address these problems using gateways or proxies: Coherence uses extend proxy clients, TIBCO EMS uses routed destinations, other software (many JMS servers) uses store-and-forward mechanisms, etc.
Maintenance of the Kafka nodes, in case of clients attached to the Kafka servers, will also have to consider the needs of the clients and the SLA (service level agreement) that has been defined with them (e.g. 24x7x365)
If you also use Kafka as a back-end service, a multi-layered architecture should be taken into consideration: FE gateways and BE services, etc.
Other considerations require understanding what exactly you consider to be an external (over-the-internet) consumer/producer in your system. Is it a component of your system that needs to access the Kafka servers? Is it internal or external to your organization, etc.?
...
Naturally, all these considerations can also be correctly addressed using a direct TCP connection to the Kafka servers, but I would personally use a different solution: HTTP proxies. Or, at least, I would use a dedicated FE Kafka server (or a couple of servers for HA) for each client, forwarding the messages to the main Kafka group of servers.
It is possible to expose Kafka over the internet (in fact, that's how managed Kafka providers such as Aiven and Instaclustr make their money) but you have to ensure that it is adequately secured. At minimum:
ZooKeeper nodes should reside in a private subnet and not be routable from outside. ZK's security is inadequate and, at any rate, it is no longer required to bootstrap Kafka clients with ZK address(es).
Limit access to the brokers at the network level. If all your clients connect from a trusted network, then set appropriate firewall rules. If in AWS, use VPC peering or Direct Connect if you are connecting cloud-to-cloud or cloud-to-ground. If most of your clients are on a trusted network but a relative minority are not, force the latter to go via a VPN tunnel. Finally, if you want to allow connectivity from arbitrary locations, you'll just have to allow * on port 9092 (or whichever port you configure the brokers to listen on); just make sure that the other ports are closed.
Enable TLS (SSL) for client-broker connections. This is easily configured with a self-signed CA. Depending on how you expose your listeners, you may need to disable SSL hostname verification on the client. (The certificate chain of trust breaks if the advertised host names don't match the certificate's common name.) The clients will need the CA certificate installed (the same CA that signed the brokers' certs); see the sketch after this list.
Optionally, you may enable mutual TLS authentication; however, this is logistically more taxing, as it requires each client to have its own private key that is signed by a CA trusted by the broker.
Use SASL to authenticate the client to the broker and create individual users for each application and each person that is expected to access the cluster.
Issue minimally-sufficient cluster- and topic-level access privileges in the ACLs for each user, following the Principle of Least Privilege (PoLP).
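As a rough sketch of the broker- and client-side settings described above (host names, file paths, passwords, topic/group names, and the choice of SCRAM as the SASL mechanism are placeholders/assumptions):

# Broker (server.properties)
listeners=SASL_SSL://0.0.0.0:9092
advertised.listeners=SASL_SSL://kafka1.example.com:9092
ssl.keystore.location=/etc/kafka/broker.keystore.jks
ssl.keystore.password=changeit
sasl.enabled.mechanisms=SCRAM-SHA-256

# Client (e.g. producer.properties)
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
ssl.truststore.location=/etc/kafka/client.truststore.jks
ssl.truststore.password=changeit
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="app1" password="app1-secret";
# Disable hostname verification only if advertised names don't match the cert
ssl.endpoint.identification.algorithm=

# Minimal per-user ACL, using the CLI shipped with Kafka
kafka-acls.sh --bootstrap-server kafka1.example.com:9092 \
  --command-config admin.properties \
  --add --allow-principal User:app1 \
  --operation Read --topic orders --group app1-consumers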
One other thing to bear in mind: Not all tooling supports SASL/SSL connectivity and some tools actually require a connection to ZooKeeper nodes (which will not be reachable in the above setup). Make sure any tooling you rely on uses the 'new' style of connectivity directly to the Kafka brokers and does not require a Zookeeper connection.
Beyond configuring client TLS, brokers have to have public IPs, which we try to avoid. Normally, for other services, we hide everything behind load balancers. Would this be possible with Kafka?
I'm not sure the Confluent REST Proxy hosted on a public server is a real option when you need the high-performance batching of the Java producer client.

How to enable TLS Renegotiation in Tomcat?

I want to enable TLS renegotiation in Tomcat as described in https://www.rfc-editor.org/rfc/rfc5746. Tomcat will use the JSSE implementation for SSL. Which cipher suite should I use to enable it?
Tomcat Version: 6.0.44
Java version: Java 1.8
Protocol - TLS 1.2
All Java 8 and 7 releases, and Java 6 from 6u22 up, enable secure renegotiation per RFC 5746. See the documentation. By default, it is used if the peer offers or accepts it; if the peer does not, the connection is still made, but renegotiation is not done because it would/could be insecure. This can be varied in two ways:
Set the system property sun.security.ssl.allowLegacyHelloMessages to false. JSSE will not make the connection if the peer does not agree to RFC 5746. This is not actually more secure, but it is more visibly secure to simple-minded basic scanners, and to people who care about simple-minded basic scanners, like auditors.
Set the system property sun.security.ssl.allowUnsafeRenegotiation to true. This is less secure if the application depends on peer credentials checked after a message. Since the client always checks the server before sending any data, this means that if the server requests (not requires) client authentication and checks the auth status after a request, it can wrongly accept a forged-prefix request.
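For Tomcat specifically, either property would typically be passed as a JVM option, e.g. in a setenv script (the placement is an assumption about your installation):

# CATALINA_HOME/bin/setenv.sh
# Require RFC 5746 secure renegotiation from peers:
CATALINA_OPTS="$CATALINA_OPTS -Dsun.security.ssl.allowLegacyHelloMessages=false"
# Or (less secure) allow renegotiation with non-5746 peers:
# CATALINA_OPTS="$CATALINA_OPTS -Dsun.security.ssl.allowUnsafeRenegotiation=true"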
The protocol implementation of RFC 5746 sometimes uses a fake "ciphersuite" (officially SCSV, a Signalling Cipher Suite Value) in ClientHello. A JSSE client can be configured, via the "ciphersuite" name TLS_EMPTY_RENEGOTIATION_INFO_SCSV, to use either this SCSV or the extension. All servers always use the extension, so this configuration has no effect on a JSSE server.

How can I add an SSL proxy and Authentication layer for Redis

As "Redis is not optimized for maximum security but for maximum performance and simplicity." How can I add an SSL proxy and Authentication layer for Redis ?
Are nginx or twemproxy good for this?
You could use stunnel for this purpose.
Recently, there has been some talk about including OpenSSL in Redis, but the ETA is still unknown.
Some recommend spiped instead of stunnel; it does not provide SSL, but it can be used to secure your connections to Redis.
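A minimal stunnel sketch for the server side, terminating TLS in front of a local Redis (certificate path and ports are placeholders):

# /etc/stunnel/stunnel.conf
[redis]
accept  = 0.0.0.0:6380
connect = 127.0.0.1:6379
cert    = /etc/stunnel/redis.pem
key     = /etc/stunnel/redis.pem

On the client side, another stunnel instance in client mode (client = yes) can forward a local plaintext port to the encrypted one.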
ElastiCache provides Redis which supports TLS encryption in transit. No need to run a separate proxy.
http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/in-transit-encryption.html