Apache Kafka doesn't start after SSL configuration

I have Apache Kafka (v. 2.13-3.0.0) installed on a remote Ubuntu server.
I followed this tutorial to secure my cluster:
https://medium.com/egen/securing-kafka-cluster-using-sasl-acl-and-ssl-dec15b439f9d
but when I try to start Kafka with the JAAS config file using the commands:
export KAFKA_OPTS=-Djava.security.auth.login.config=<kafka-binary-dir>/config/kafka_server_jaas.conf
./bin/kafka-server-start.sh ./config/server.properties
I receive the error:
[2021-11-12 10:30:47,864] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2021-11-12 10:30:48,089] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2021-11-12 10:30:48,099] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
java.lang.ClassNotFoundException: kafka.security.auth.SimpleAclAuthorizer
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:398)
at org.apache.kafka.common.utils.Utils.loadClass(Utils.java:417)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:406)
at kafka.security.authorizer.AuthorizerUtils$.createAuthorizer(AuthorizerUtils.scala:31)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1583)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1394)
at kafka.Kafka$.buildServer(Kafka.scala:67)
at kafka.Kafka$.main(Kafka.scala:87)
at kafka.Kafka.main(Kafka.scala)
This is the SSL config in the server.properties file:
########### SECURITY using SCRAM-SHA-512 and SSL
listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093,SASL_SSL://localhost:9094
advertised.listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093,SASL_SSL://localhost:9094
security.inter.broker.protocol=SASL_SSL
ssl.endpoint.identification.algorithm=
ssl.client.auth=required
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
sasl.enabled.mechanisms=SCRAM-SHA-512
# Broker security settings
ssl.truststore.location=/home/kafka/Downloads/kafka_2.13-3.0.0/config/truststore/kafka.truststore.jks
ssl.truststore.password=giuseppe
ssl.keystore.location=/home/kafka/Downloads/kafka_2.13-3.0.0/config/keystore/kafka.keystore.jks
ssl.keystore.password=giuseppe
ssl.key.password=giuseppe
# ACLs
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:admin
#zookeeper SASL
zookeeper.set.acl=false
########### SECURITY using SCRAM-SHA-512 and SSL
If I comment out the two ACL rows, I receive this error instead:
[2021-11-12 11:05:29,301] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-11-12 11:05:29,331] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /tmp/kafka-logs. A Kafka instance in another process or thread is using this directory.
at kafka.log.LogManager.$anonfun$lockLogDirs$1(LogManager.scala:241)
at scala.collection.StrictOptimizedIterableOps.flatMap(StrictOptimizedIterableOps.scala:117)
at scala.collection.StrictOptimizedIterableOps.flatMap$(StrictOptimizedIterableOps.scala:104)
at scala.collection.mutable.ArraySeq.flatMap(ArraySeq.scala:37)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:236)
at kafka.log.LogManager.<init>(LogManager.scala:112)
at kafka.log.LogManager$.apply(LogManager.scala:1283)
at kafka.server.KafkaServer.startup(KafkaServer.scala:254)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
What is the cause? Could it be a wrong configuration?
Thanks.
Update:
Changing the row to:
# ACLs
authorizer.class.name=org.apache.kafka.server.authorizer.Authorizer
I receive this new error:
[2021-11-12 16:51:57,613] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
org.apache.kafka.common.KafkaException: Could not find a public no-argument constructor for org.apache.kafka.server.authorizer.Authorizer
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:392)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:406)
at kafka.security.authorizer.AuthorizerUtils$.createAuthorizer(AuthorizerUtils.scala:31)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1583)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1394)
at kafka.Kafka$.buildServer(Kafka.scala:67)
at kafka.Kafka$.main(Kafka.scala:87)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.lang.NoSuchMethodException: org.apache.kafka.server.authorizer.Authorizer.<init>()
at java.base/java.lang.Class.getConstructor0(Class.java:3508)
at java.base/java.lang.Class.getDeclaredConstructor(Class.java:2711)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:390)
... 7 more

It seems that if you change
kafka.security.auth.SimpleAclAuthorizer
to
kafka.security.authorizer.AclAuthorizer
it should work; it worked for me.
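For reference, the corrected ACL block in server.properties (reusing the super.users line from the question) would then read:
# ACLs
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
super.users=User:admin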

Kafka 3.0 removed SimpleAclAuthorizer
Pull request - https://github.com/apache/kafka/commit/976e78e405d57943b989ac487b7f49119b0f4af4#diff-e0ccf1b5c964d2c303b6a69a8b8b67df5a6bfbae8aa514f580d353c4c6bf8e36
The blog seems to be using version 2.2.0.

Related

Cannot access localhost:8443/ejbca

I'm new to EJBCA and I have to install it on a virtual machine for work.
Ubuntu 20.04
ejbca_7_4_3_2
wildfly-18.0.0.Final
mariadb-server version: 10.3.32-MariaDB-0ubuntu0.20.04.1 Ubuntu 20.04
openjdk version "1.8.0_312"
Apache Ant(TM) version 1.10.7 compiled on October 24 2019
After a few tries (and a lot of virtual machines cloned and deleted), I finally got the "build successfully" message with the commands ant runinstall and ant deploy-keystore.
But when I try to use the URL https://localhost:8443/ejbca/ (the certificate SuperAdmin.p12 is installed), my browser (Firefox 96.0 64-bit) gives the message:
An error occurred during a connection to localhost:8443. Cannot communicate securely with peer: no common encryption algorithm(s).
Error code: SSL_ERROR_NO_CYPHER_OVERLAP
I have these errors in my log file; the first one is related to ant -q clean deployear,
and the last appears every time I try to access the URL https://localhost:8443/ejbca/:
ERROR [org.jboss.as.jsf] (MSC service thread 1-1) WFLYJSF0002: Could not load JSF managed bean class: org.ejbca.ui.web.admin.peerconnector.PeerConnectorMBean
ERROR [io.undertow.request] (default I/O-2) Closing SSLConduit after exception on handshake: javax.net.ssl.SSLHandshakeException: no cipher suites in common
at sun.security.ssl.Alert.createSSLException(Alert.java:131)
at sun.security.ssl.Alert.createSSLException(Alert.java:117)
at sun.security.ssl.TransportContext.fatal(TransportContext.java:311)
at sun.security.ssl.TransportContext.fatal(TransportContext.java:267)
at sun.security.ssl.TransportContext.fatal(TransportContext.java:258)
at sun.security.ssl.ServerHello$T12ServerHelloProducer.chooseCipherSuite(ServerHello.java:461)
at sun.security.ssl.ServerHello$T12ServerHelloProducer.produce(ServerHello.java:296)
at sun.security.ssl.SSLHandshake.produce(SSLHandshake.java:421)
at sun.security.ssl.ClientHello$T12ClientHelloConsumer.consume(ClientHello.java:1020)
at sun.security.ssl.ClientHello$ClientHelloConsumer.onClientHello(ClientHello.java:727)
at sun.security.ssl.ClientHello$ClientHelloConsumer.consume(ClientHello.java:693)
at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:377)
at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444)
at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:981)
at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:968)
at java.security.AccessController.doPrivileged(Native Method)
at sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:915)
at io.undertow.protocols.ssl.SslConduit$5.run(SslConduit.java:1072)
at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1982)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
at java.lang.Thread.run(Thread.java:748)
ERROR [io.undertow.request] (default I/O-2) Closing SSLConduit after exception
This sounds like a TLS configuration issue. You will find the TLS configuration you set up when configuring WildFly in the commands you ran, such as:
/opt/wildfly/bin/jboss-cli.sh --connect '/subsystem=elytron/server-ssl-context=httpspriv:add(key-manager=httpsKM,protocols=["TLSv1.2"],use-cipher-suites-order=false,cipher-suite-filter="TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",trust-manager=httpsTM,need-client-auth=true)'
The result ends up in standalone.xml in WildFly, and you can modify it there directly, for example if you have EC keys in the server certificate while the cipher-suite filter above selects RSA-based suites.
In server.log you should also see, when WildFly starts up, whether there are any errors in parsing the values or the keystores.
Make sure that your server and client certificates have keys and algorithms that match the TLS algorithm settings; otherwise WildFly will remove those algorithms.
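If you are unsure what ended up in the configuration, you can read the ssl-context back with the same CLI (assuming it is named httpspriv, as in the command above):
/opt/wildfly/bin/jboss-cli.sh --connect '/subsystem=elytron/server-ssl-context=httpspriv:read-resource'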

Kafka Security implementation issue SASL SSL and SCRAM

I'm facing an error while starting the Kafka server.
I have set up SSL, and it's working fine for the 3 Kafka brokers. ZooKeeper is also set up with SSL.
Now I have tried to set up SCRAM with SASL_SSL for the Kafka broker via the server property file.
It's not working. I created a user with the following command:
kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zookeeper-client.properties --entity-type users --entity-name broker-admin --alter --add-config 'SCRAM-SHA-512=[password=DEM123]'
and I can see the user is created.
But when I try to run the Kafka broker with the command
kafka-server-start.sh -daemon server-0.properties
it fails; checking the server.log file shows:
[2021-10-05 16:21:38,369] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /config/users/broker-admin
Can anyone help me?
Let me share my zookeeper.properties file:
dataDir=/var/www/kafka/data/zookeeper
clientPort=2181
secureClientPort=2182
authProvider.x509=org.apache.zookeeper.server.auth.X509AuthenticationProvider
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
ssl.trustStore.location=/var/www/kafka/ssl/kafka.zookeeper.truststore.jks
ssl.trustStore.password=zookeepbook
ssl.keyStore.location=/var/www/kafka/ssl/kafka.zookeeper.keystore.jks
ssl.keyStore.password=zookeepbook
ssl.clientAuth=need
maxClientCnxns=0
admin.enableServer=true
admin.serverPort=9090
server.1=localhost:2888:3888
server.properties file content:
broker.id=0
listeners=SASL_SSL://localhost:9092
advertised.listeners=SASL_SSL://localhost:9092
zookeeper.connect=localhost:2182
log.dirs=/var/www/kafka/data/broker-0
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
num.partitions=3
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.client.enable=true
zookeeper.ssl.protocol=TLSv1.2
zookeeper.ssl.truststore.location=/var/www/kafka/ssl/kafka.broker-0.truststore.jks
zookeeper.ssl.truststore.password=zookeepbookbrk0
zookeeper.ssl.keystore.location=/var/www/kafka/ssl/kafka.broker-0.keystore.jks
zookeeper.ssl.keystore.password=zookeepbookbrk0
zookeeper.set.acl=true
ssl.truststore.location=/var/www/kafka/ssl/kafka.broker-0.truststore.jks
ssl.truststore.password=zookeepbookbrk0
ssl.keystore.location=/var/www/kafka/ssl/kafka.broker-0.keystore.jks
ssl.keystore.password=zookeepbookbrk0
ssl.key.password=zookeepbookbrk0
security.inter.broker.protocol=SASL_SSL
ssl.client.auth=none
ssl.protocol=TLSv1.2
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
listener.name.sasl_ssl.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username='broker-admin' password=DEM123;
super.users=User:broker-admin
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
Can you try setting 'skipACL=yes' in your zookeeper.properties?
If you authenticated with ZooKeeper using SSL client certs when you created the 'broker-admin' user, I think it's because access from anywhere other than where you executed that command is denied.
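A sketch of that change; note that skipACL is also documented as the ZooKeeper JVM system property zookeeper.skipACL, which you can pass through KAFKA_OPTS when ZooKeeper is started via Kafka's scripts (kafka-run-class.sh picks it up), so if the properties-file form does not take effect, try the system-property form:
# zookeeper.properties
skipACL=yes
# or, equivalently, via the JVM system property:
export KAFKA_OPTS="-Dzookeeper.skipACL=yes"
bin/zookeeper-server-start.sh config/zookeeper.properties
Be aware this disables ACL checks entirely, so treat it as a temporary measure to repair the znode ACLs rather than a permanent setting.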

kafka connect to s3 fails to start

I'm trying to configure Kafka Connect to send my data from Kafka to S3.
I'm a newbie in Kafka, and I'm trying to implement this flow without any SSL encryption just to get the hang of it.
kafka version: 2.12-2.2.0
kafka-connect: 4.1.1 (https://api.hub.confluent.io/api/plugins/confluentinc/kafka-connect-s3/versions/4.1.1/archive)
In the server.properties file, the only change I made was setting advertised.listeners to my EC2 IP:
advertised.listeners=PLAINTEXT://ip:9092
kafka-connect properties:
# Kafka broker IP addresses to connect to
bootstrap.servers=localhost:9092
# Path to directory containing the connector jar and dependencies
plugin.path=/root/kafka_2.12-2.2.0/plugins/
# Converters to use to convert keys and values
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
# The internal converters Kafka Connect uses for storing offset and configuration data
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
security.protocol=SASL_PLAINTEXT
consumer.security.protocol=SASL_PLAINTEXT
my s3-sink.properties file:
name=s3.sink
connector.class=io.confluent.connect.s3.S3SinkConnector
tasks.max=1
topics=my_topic
s3.region=us-east-1
s3.bucket.name=my_bucket
s3.part.size=5242880
flush.size=3
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.json.JsonFormat
schema.generator.class=io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator
partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner
schema.compatibility=NONE
I'm launching kafka-connect with the following command:
connect-standalone.sh kafka-connect.properties s3-sink.properties
At first I got the following error:
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
From other posts I saw that I need to create a JAAS config file, so that's what I did:
cat config/kafka_server_jass.conf
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="userName"
serviceName="kafka"
password="password";
};
and:
export KAFKA_OPTS="-Djava.security.auth.login.config=/root/kafka_2.12-2.2.0/config/kafka_server_jass.conf"
Now I'm getting the following error:
Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: Failed to configure SaslClientAuthenticator
Caused by: org.apache.kafka.common.KafkaException: Principal could not be determined from Subject, this may be a transient failure due to Kerberos re-login
help :)
You might also need to define principal and keytab inside your JAAS configuration:
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="userName"
serviceName="kafka"
password="password"
useKeyTab=true
keyTab="/etc/security/keytabs/kafka_server.keytab"
principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};
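Note that useKeyTab, keyTab and principal are Kerberos options, and the PlainLoginModule shown above will ignore them. If keytab-based authentication is really what you want, a sketch of the Kerberos (GSSAPI) form of the entry would be (the keytab path and principal are the example values from above, not real ones):
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka_server.keytab"
principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};
This would also need sasl.mechanism=GSSAPI (rather than PLAIN) in the worker properties.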

Open MQ broker won't start

We're running Glassfish 4.1.1 (Payara) with MQ 5.1.1. It's an HA setup with a load balancer and cluster.
Glassfish is running OK. The problem is that MQ won't start.
I think that a remote MQ is starting. I can do imqcmd list bkr -b and I get successful results.
However, when I do imqcmd list bkr (or imqcmd list jmx, without -b hostname) I get:
Host Primary Port
-------------------------
localhost 7676
WARNING: [C4003]: Error occurred on connection creation [localhost:7676]. - cause: java.net.SocketException: Connection reset
Error while connecting to the broker on host 'localhost' and port '7676'.
I'd like to get rid of the error and see my network IP instead of localhost.
Also GF server.log gives this:
[2017-04-12T11:54:46.516-0400] [Payara 4.1] [SEVERE] [rardeployment.start_failed] [javax.enterprise.resource.resourceadapter.com.sun.enterprise.connectors] [tid: _ThreadID=42 _ThreadName=admin-listener(2)] [timeMillis: 1492012486516] [levelValue: 1000] [[
RAR6035 : Resource adapter start failed.
javax.resource.spi.ResourceAdapterInternalException: java.security.PrivilegedActionException: javax.resource.spi.ResourceAdapterInternalException: MQJMSRA_RA4001: start:Aborting:Exception starting EMBEDDED broker=Broker failed to start
at com.sun.enterprise.connectors.jms.system.ActiveJmsResourceAdapter.startResourceAdapter(ActiveJmsResourceAdapter.java:557)
at com.sun.enterprise.connectors.ActiveOutboundResourceAdapter.init(ActiveOutboundResourceAdapter.java:130)
...
Caused by: java.lang.RuntimeException: Broker failed to start
at com.sun.messaging.jmq.jmsclient.runtime.impl.BrokerInstanceImpl.start(BrokerInstanceImpl.java:205)
at com.sun.messaging.jms.blc.EmbeddedBrokerRunner.start(EmbeddedBrokerRunner.java:331)
at com.sun.messaging.jms.blc.LifecycleManagedBroker.start(LifecycleManagedBroker.java:457)
... 92 more
Caused by: java.io.IOException: [B3297]: Unable to make directory <mydirectory>/imq/instances/imqbroker/etc
at com.sun.messaging.jmq.jmsserver.Broker.initializePasswdFile(Broker.java:376)
I'm wondering where the directory that it is unable to create is configured.
I've been debugging this for days. I need to know where to configure the IP for the embedded broker, and where to set up the jmxrmi URL.
Any help would be appreciated. Thanks!
I found the solution to this problem. We had a broken symlink to the OpenMQ application directory within the Glassfish application directory. On domain startup, Glassfish could not find MQ and therefore could not start the embedded broker. Once we fixed the symlink, the embedded broker started up on Glassfish domain startup (asadmin start-domain).
I knew the embedded broker was not starting because the "imq" folder was not being created in <domaindir>/
Check for those broken symlinks!!
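If you need to hunt for them, GNU find can list broken symlinks directly; a quick sketch (the Glassfish path is a placeholder for your own install directory):
find /path/to/glassfish -xtype l
-xtype l matches links whose target no longer exists, which is exactly the broken-symlink case described above.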

Setting up secure cassandra cluster (java.lang.RuntimeException: Failed to setup secure pipeline at )

I have followed the steps mentioned on https://github.com/PatrickCallaghan/datastax-ssl-secure-cluster/blob/master/README.md for setting up a secure SSL cassandra cluster. I receive the same error as you "Failed to setup secure pipeline". I overrode my cassandra.yaml cipher suites as mentioned by the website and I still get the same error.
My cassandra.yaml looks like this:
client_encryption_options:
    enabled: true
    # If enabled and optional is set to true encrypted and unencrypted connections are handled.
    optional: false
    keystore: ***/ssl/cassandra3_keystore.jks
    keystore_password: ****
    # require_client_auth: false
    # Set trustore and truststore_password if require_client_auth is true
    # truststore: conf/.truststore
    # truststore_password: cassandra
    # More advanced defaults below:
    # protocol: TLS
    # algorithm: SunX509
    # store_type: JKS
    cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA]
Could someone guide me on what I could do?
Here is the full error trace:
Exception (java.lang.RuntimeException) encountered during startup: Failed to setup secure pipeline
java.lang.RuntimeException: Failed to setup secure pipeline
at org.apache.cassandra.transport.Server$AbstractSecureIntializer.<init>(Server.java:354)
at org.apache.cassandra.transport.Server$SecureInitializer.<init>(Server.java:411)
at org.apache.cassandra.transport.Server.start(Server.java:152)
at org.apache.cassandra.service.NativeTransportService$$Lambda$203.0000000040E88830.accept(Unknown Source)
at java.util.Collections$SingletonSet.forEach(Collections.java:4778)
at org.apache.cassandra.service.NativeTransportService.start(NativeTransportService.java:128)
at org.apache.cassandra.service.CassandraDaemon.startNativeTransport(CassandraDaemon.java:633)
at org.apache.cassandra.service.CassandraDaemon.start(CassandraDaemon.java:495)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:600)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714)
Caused by: java.io.IOException: Error creating the initializing the SSL Context
at org.apache.cassandra.security.SSLFactory.createSSLContext(SSLFactory.java:170)
at org.apache.cassandra.transport.Server$AbstractSecureIntializer.<init>(Server.java:350)
... 9 more
Caused by: java.security.NoSuchAlgorithmException: SunX509 KeyManagerFactory not available
at sun.security.jca.GetInstance.getInstance(GetInstance.java:171)
at javax.net.ssl.KeyManagerFactory.getInstance(KeyManagerFactory.java:12)
at org.apache.cassandra.security.SSLFactory.createSSLContext(SSLFactory.java:146)
... 10 more
ERROR 15:36:01 Exception encountered during startup
java.lang.RuntimeException: Failed to setup secure pipeline
at org.apache.cassandra.transport.Server$AbstractSecureIntializer.<init>(Server.java:354) ~[apache-cassandra-3.7.jar:3.7]
at org.apache.cassandra.transport.Server$SecureInitializer.<init>(Server.java:411) ~[apache-cassandra-3.7.jar:3.7]
at org.apache.cassandra.transport.Server.start(Server.java:152) ~[apache-cassandra-3.7.jar:3.7]
at org.apache.cassandra.service.NativeTransportService$$Lambda$203.0000000040E88830.accept(Unknown Source) ~[na:na]
at java.util.Collections$SingletonSet.forEach(Collections.java:4778) ~[na:1.8.0-internal]
at org.apache.cassandra.service.NativeTransportService.start(NativeTransportService.java:128) ~[apache-cassandra-3.7.jar:3.7]
at org.apache.cassandra.service.CassandraDaemon.startNativeTransport(CassandraDaemon.java:633) [apache-cassandra-3.7.jar:3.7]
at org.apache.cassandra.service.CassandraDaemon.start(CassandraDaemon.java:495) [apache-cassandra-3.7.jar:3.7]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:600) [apache-cassandra-3.7.jar:3.7]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714) [apache-cassandra-3.7.jar:3.7]
Caused by: java.io.IOException: Error creating the initializing the SSL Context
at org.apache.cassandra.security.SSLFactory.createSSLContext(SSLFactory.java:170) ~[apache-cassandra-3.7.jar:3.7]
at org.apache.cassandra.transport.Server$AbstractSecureIntializer.<init>(Server.java:350) ~[apache-cassandra-3.7.jar:3.7]
... 9 common frames omitted
Caused by: java.security.NoSuchAlgorithmException: SunX509 KeyManagerFactory not available
at sun.security.jca.GetInstance.getInstance(GetInstance.java:171) ~[na:1.8.0-internal]
at javax.net.ssl.KeyManagerFactory.getInstance(KeyManagerFactory.java:12) ~[na:8.0 build_20150122]
at org.apache.cassandra.security.SSLFactory.createSSLContext(SSLFactory.java:146) ~[apache-cassandra-3.7.jar:3.7]
... 10 common frames omitted
You can get around it by overriding the cipher suites for both the node-to-node and client-node properties, e.g.
cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA]
This is because of the following problem in Oracle Java: http://www.pathin.org/tutorials/java-cassandra-cannot-support-tls_rsa_with_aes_256_cbc_sha-with-currently-installed-providers/
Once you have downloaded the files it points to, you can copy them to the correct library directory on your server, e.g.
scp * root@server:/usr/lib/jvm/java-7-oracle/jre/lib/security/
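In cassandra.yaml, the override therefore goes in both encryption blocks; a minimal sketch with everything else (keystores, passwords) elided:
server_encryption_options:
    internode_encryption: all
    cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA]
client_encryption_options:
    enabled: true
    cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA]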