I have Cassandra 2.1.0 running on Debian 7.6.0, with cqlsh running on the same machine. When I try to connect through cqlsh,
$/usr/local/cassandra-2.1.0/bin/cqlsh --ssl --debug
I get the following error message:
Using CQL driver: <module 'cassandra' from '/usr/local/cassandra-2.1.0/bin/../lib/cassandra-driver-internal-only-2.1.0.post.zip/cassandra-driver-2.1.0.post/cassandra/__init__.py'>
Connection error: ('Unable to connect to any servers', {'127.0.0.1': SSLError(0, '_ssl.c:340: error:00000000:lib(0):func(0):reason(0)')})
The details are as follows. Please let me know how to resolve this issue. Thanks in advance.
Server side
As explained in the DataStax documentation (http://www.datastax.com/documentation/cassandra/2.1/cassandra/security/secureSSLCertificates_t.html), I have generated a keystore and modified cassandra.yaml as follows:
client_encryption_options:
enabled: true
keystore: /usr/local/cassandra-2.1.0/ssl/.keystore
keystore_password: ***********
I have exported the public key of the server.
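(For reference, that export step presumably looked something like the following; the alias name is an assumption, being whatever was used when the keystore was generated:
keytool -exportcert -alias node0 -keystore /usr/local/cassandra-2.1.0/ssl/.keystore -file cassandra_node0.cert)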
Client side
Copied the public key exported in the previous step into ~/keys/cassandra_node0.cert.
Modified ~/.cassandra/cqlshrc as follows:
[connection]
hostname = 127.0.0.1
port = 9042
factory = cqlshlib.ssl.ssl_transport_factory
[tracing]
max_trace_wait = 10.0
[ssl]
certfile = ~/keys/cassandra_node0.cert
validate = true
I had the same issue.
Although you have probably found the solution by now, I think it can be beneficial to record the solution for other people.
I followed the documentation from here to create a .pem certificate.
My cqlshrc SSL configuration looks as follows:
[ssl]
certfile = /ssl/cqlsh.pem
validate = false
That worked for me.
As with all SSL-related topics in Cassandra's documentation, this part isn't covered well enough.
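A likely culprit in the original question is the certificate format: keytool -exportcert emits binary DER by default, whereas cqlsh's Python SSL layer expects PEM. Assuming that is the issue here, the conversion would be (paths illustrative):
openssl x509 -inform der -in ~/keys/cassandra_node0.cert -out ~/keys/cqlsh.pem
Alternatively, pass -rfc to keytool to export in PEM format directly.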
Related
I enabled SSL on the Cassandra nodes on port 9142. The service runs fine when tested locally, but I get an AllNodesFailedException when deploying on the ECS cluster, even though the same keystore is used locally. The non-SSL port 9042 works fine.
Failed to instantiate [com.datastax.oss.driver.api.core.CqlSession]:
Factory method 'session' threw exception; nested exception is
com.datastax.oss.driver.api.core.AllNodesFailedException: Could not
reach any contact point, make sure you've provided valid addresses
(showing first 3 nodes, use getAllErrors() for more):
Node(endPoint=ip-10-18-28-203.us-west-2.compute.internal/10.18.28.203:9142,
hostId=null, hashCode=6551c917):
[io.netty.channel.ConnectTimeoutException: connection timed out:
ip-10-18-28-203.us-west-2.compute.internal/10.18.28.203:9142],
Node(endPoint=ip-10-18-8-110.us-west-2.compute.internal/10.18.8.110:9142,
hostId=null, hashCode=36985f57):
[io.netty.channel.ConnectTimeoutException: connection timed out:
ip-10-18-8-110.us-west-2.compute.internal/10.18.8.110:9142],
Node(endPoint=ip-10-18-7-47.us-west-2.compute.internal/10.18.7.47:9142,
hostId=null, hashCode=8eab7e9):
[io.netty.channel.ConnectTimeoutException: connection timed out:
ip-10-18-7-47.us-west-2.compute.internal/10.18.7.47:9142]
cassandra.yaml properties
server_encryption_options:
internode_encryption: none
keystore: /etc/cassandra/conf/casskeystore
keystore_password: changeit
truststore: conf/.truststore
truststore_password: cassandra
client_encryption_options:
enabled: true
optional: true
keystore: /etc/cassandra/conf/casskeystore
keystore_password: changeit
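Worth noting: ConnectTimeoutException is a TCP-level timeout, so the TLS handshake never even starts and the keystore is unlikely to be at fault. Under the setup described, the first thing to rule out would be the security groups / network ACLs between the ECS tasks and the Cassandra nodes: port 9042 being reachable does not imply 9142 is. A quick probe from inside a task container might look like this (hostname illustrative):
openssl s_client -connect ip-10-18-28-203.us-west-2.compute.internal:9142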
I'm deploying a Flask app on Heroku using a Redis premium plan. I get the following error: 'SSL Certification Verify Failed'. Attempted fixes:
Downgrading to Redis 5
Passing ssl_cert_reqs=None to the Redis constructor in redis-py
A solution to this problem could either:
explain how to disable TLS certificate verification on Heroku Redis premium plans, or
explain how to make TLS certificate verification work on Heroku Redis premium plans.
From Heroku's docs, this may be a hint: 'you must enable TLS in your Redis client’s configuration in order to connect to a Redis 6 database'. I don't understand what this means.
I solved my problem by adding ?ssl_cert_reqs=CERT_NONE to the end of REDIS_URL in my Heroku config.
You can disable TLS certificate verification on Heroku by downgrading to Redis 5 and passing ssl_cert_reqs=None to the Redis constructor.
$ heroku addons:create heroku-redis:premium-0 --version 5
import os
from redis import ConnectionPool, Redis

# With Redis 5 on Heroku, REDIS_URL is a plain redis:// URL, so no TLS
# certificate comes into play; `app` is the Flask application instance.
connection_pool = ConnectionPool.from_url(os.environ.get('REDIS_URL'))
app.redis = Redis(connection_pool=connection_pool, ssl_cert_reqs=None)
My mistake was not doing both at the same time.
An ideal solution would explain how to configure TLS certificate verification for Redis 6.
The docs are actually incorrect: you have to set SSL verification to verify_none, because TLS happens automatically.
From Heroku support:
"Our data infrastructure uses self-signed certificates so certificates
can be cycled regularly... you need to set the verify_mode
configuration variable to OpenSSL::SSL::VERIFY_NONE"
I solved this by setting ssl_params to verify_none:
ssl_params: { verify_mode: OpenSSL::SSL::VERIFY_NONE }
For me it was where I configure Redis (in a Sidekiq initializer):
# config/initializers/sidekiq.rb
Sidekiq.configure_client do |config|
config.redis = { url: ENV['REDIS_URL'], size: 1, network_timeout: 5,
ssl_params: { verify_mode: OpenSSL::SSL::VERIFY_NONE } }
end
Sidekiq.configure_server do |config|
config.redis = { url: ENV['REDIS_URL'], size: 7, network_timeout: 5,
ssl_params: { verify_mode: OpenSSL::SSL::VERIFY_NONE } }
end
On Heroku (assuming the Heroku Redis add-on), the Redis TLS route already has the ssl_cert_reqs param sorted out. A common oversight that can cause errors like this on Heroku is using REDIS_URL instead of REDIS_TLS_URL.
Solution:
redis_url = os.environ.get('REDIS_TLS_URL')
This solution works with Redis 6 and Python on Heroku:
import os, redis
redis_url = os.getenv('REDIS_URL')
redis_store = redis.from_url(redis_url, ssl_cert_reqs=None)
In my local development environment I do not use Redis with the rediss scheme, so I use a function like this to make both cases work:
import os
import redis

def get_redis_store():
    '''
    Get a Redis client based on the URL configured in the
    REDIS_URL environment variable.

    Returns
    -------
    redis.Redis
    '''
    redis_url = os.getenv('REDIS_URL')
    if redis_url.startswith('rediss://'):
        # TLS URL (e.g. Heroku Redis 6): skip certificate verification,
        # since Heroku uses self-signed certificates.
        redis_store = redis.from_url(redis_url, ssl_cert_reqs=None)
    else:
        redis_store = redis.from_url(redis_url)
    return redis_store
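Usage is then identical in both environments:
store = get_redis_store()
store.ping()  # raises redis.exceptions.ConnectionError if unreachable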
If you are using the django-rq wrapper and trying to deal with this, be sure not to use the URL parameter together with SSL_CERT_REQS. There is an outstanding issue that describes all of this, but basically you need to specify each connection parameter instead of using the URL, as in the sketch below.
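A hedged sketch of what that looks like in settings.py (the SSL and SSL_CERT_REQS keys are assumptions based on django-rq's per-parameter RQ_QUEUES format; host and password are placeholders):
# settings.py
RQ_QUEUES = {
    'default': {
        'HOST': 'ec2-xx-xx-xx-xx.compute-1.amazonaws.com',  # placeholder
        'PORT': 6379,
        'DB': 0,
        'PASSWORD': 'your-redis-password',  # placeholder
        'SSL': True,
        'SSL_CERT_REQS': None,  # skip certificate verification
    },
}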
I am finding it impossible to set up an encrypted connection with a RabbitMQ broker using Python's pika library on the client side. My starting point was the pika tutorial example here, but I cannot make it work. I have proceeded as follows.
(1) The RabbitMQ configuration file was:
listeners.tcp.default = 5672
listeners.ssl.default = 5671
ssl_options.verify = verify_peer
ssl_options.fail_if_no_peer_cert = false
ssl_options.cacertfile = /etc/cert/tms.crt
ssl_options.certfile = /etc/cert/tms.crt
ssl_options.keyfile = /etc/cert/tmsPrivKey.pem
auth_mechanisms.1 = PLAIN
auth_mechanisms.2 = AMQPLAIN
auth_mechanisms.3 = EXTERNAL
(2) The rabbitmq-auth-mechanism-ssl plugin was enabled with the following command:
rabbitmq-plugins enable rabbitmq_auth_mechanism_ssl
Successful enabling was confirmed by checking the plugin's status with rabbitmq-plugins list.
(3) The correctness of the TLS certificates was verified by using openssl tools as described here.
(4) The client-side program to set up the connection was:
#!/usr/bin/env python
import logging
import pika
import ssl
from pika.credentials import ExternalCredentials
logging.basicConfig(level=logging.INFO)
context = ssl.create_default_context(cafile="/Xyz/sampleNodeCert/tms.crt")
context.load_cert_chain("/Xyz/sampleNodeCert/node.crt",
                        "/Xyz/sampleNodeCert/nodePrivKey.pem")
ssl_options = pika.SSLOptions(context, '127.0.0.1')
conn_params = pika.ConnectionParameters(host='127.0.0.1',
                                        port=5671,
                                        ssl_options=ssl_options,
                                        credentials=ExternalCredentials())
with pika.BlockingConnection(conn_params) as conn:
    ch = conn.channel()
    ch.queue_declare("foobar")
    ch.basic_publish("", "foobar", "Hello, world!")
    print(ch.basic_get("foobar"))
(5) The client-side program failed with the following error message:
pika.exceptions.ProbableAuthenticationError: ConnectionClosedByBroker: (403) 'ACCESS_REFUSED - Login was refused using authentication mechanism EXTERNAL. For details see the broker logfile.'
(6) The log message in the RabbitMQ broker was:
2019-10-15 20:17:46.028 [info] <0.642.0> accepting AMQP connection <0.642.0> (127.0.0.1:48252 -> 127.0.0.1:5671)
2019-10-15 20:17:46.032 [error] <0.642.0> Error on AMQP connection <0.642.0> (127.0.0.1:48252 -> 127.0.0.1:5671, state: starting):
EXTERNAL login refused: user 'CN=www.node.com,O=Node GmbH,L=NodeTown,ST=NodeProvince,C=DE' - invalid credentials
2019-10-15 20:17:46.043 [info] <0.642.0> closing AMQP connection <0.642.0> (127.0.0.1:48252 -> 127.0.0.1:5671)
(7) The environment in which this test was done is Ubuntu 18.04 using RabbitMQ 3.7.17 on Erlang 22.0.7. On the client side, python3 version 3.6.8 was used.
Questions: Does anyone have any idea as to why my test fails? Where can I find a complete working example of setting up an encrypted connection to RabbitMQ using pika?
NB: I am familiar with this post but none of the tips in the post helped me.
After studying the link provided above by Luke Bakken, I am now in a position to answer my own question. The main change with respect to my original example is that I configure the RabbitMQ broker with a passwordless user whose name matches the CN field of the TLS certificate on both the server and the client side. To illustrate, I go through my example again in detail below:
(1) The RabbitMQ configuration file is:
listeners.tcp.default = 5672
listeners.ssl.default = 5671
ssl_cert_login_from = common_name
ssl_options.verify = verify_peer
ssl_options.fail_if_no_peer_cert = true
ssl_options.cacertfile = /etc/cert/tms.crt
ssl_options.certfile = /etc/cert/tms.crt
ssl_options.keyfile = /etc/cert/tmsPrivKey.pem
auth_mechanisms.1 = EXTERNAL
auth_mechanisms.2 = PLAIN
auth_mechanisms.3 = AMQPLAIN
Note that, with the ssl_cert_login_from configuration option, I am asking for the username of the RabbitMQ account to be taken from the "common name" (CN) field of the TLS certificate.
(2) The rabbitmq-auth-mechanism-ssl plugin is enabled with the following command:
rabbitmq-plugins enable rabbitmq_auth_mechanism_ssl
Successful enabling can be confirmed by checking the plugin's status with rabbitmq-plugins list.
(3) The signed TLS certificate must have the issuer and subject CN fields equal to each other and equal to the hostname of the RabbitMQ broker node. In my case, inspection of the RabbitMQ log file (in /var/log/rabbitmq) shows that the broker is running on a node called rabbit@pnp-vm2. The host name is therefore pnp-vm2. In order to check the CN fields of the client-side certificate, I use the following command:
ap@pnp-vm2: openssl x509 -noout -text -in /etc/cert/node.crt | fgrep CN
Issuer: C = CH, ST = CH, L = Location, O = Organization GmbH, CN = pnp-vm2
Subject: C = DE, ST = NodeProvince, L = NodeTown, O = Node GmbH, CN = pnp-vm2
As you can see, both the Issuer CN field and the Subject CN field are equal to "pnp-vm2" (which is the hostname of the RabbitMQ broker, see above). I tried using this name for only one of the two CN fields, but then the connection to the broker could not be established. In my test environment it was easy to create a client certificate with identical CN names but, in an operational environment, this may be a lot harder to do. Also, I do not quite understand the reason for this constraint: is it a bug or is it a feature? And does it originate in the particular RabbitMQ library I am using (Python's pika) or in the AMQP protocol? These questions probably deserve a dedicated post.
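For completeness, the passwordless user described in the introduction can presumably be created along these lines (the username must equal the certificate's CN value, here pnp-vm2; clearing the password ensures the account can only authenticate via the EXTERNAL mechanism):
rabbitmqctl add_user 'pnp-vm2' 'temporary-password'
rabbitmqctl clear_password 'pnp-vm2'
rabbitmqctl set_permissions -p / 'pnp-vm2' '.*' '.*' '.*'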
(4) The client-side program to set up the connection is:
#!/usr/bin/env python
import logging
import pika
import ssl
from pika.credentials import ExternalCredentials
logging.basicConfig(level=logging.INFO)
context = ssl.create_default_context(cafile="/home/ap/RocheTe/cert/sampleNodeCert/tms.crt")
context.load_cert_chain("/home/ap/RocheTe/cert/sampleNodeCert/node.crt",
                        "/home/ap/RocheTe/cert/sampleNodeCert/nodePrivKey.pem")
ssl_options = pika.SSLOptions(context, 'pnp-vm2')
conn_params = pika.ConnectionParameters(host='a.b.c.d',
                                        port=5671,
                                        ssl_options=ssl_options,
                                        credentials=ExternalCredentials(),
                                        heartbeat=0)
with pika.BlockingConnection(conn_params) as conn:
    ch = conn.channel()
    ch.queue_declare("foobar")
    ch.basic_publish("", "foobar", "Hello, world!")
    print(ch.basic_get("foobar"))
    input("Press Enter to continue...")
Here, "a.b.c.d" is the IP address of the machine on which the RabbitMQ broker is running.
(5) The environment in which this test was done is Ubuntu 18.04 using RabbitMQ 3.7.17 on Erlang 22.0.7. On the client side, python3 version 3.6.8 was used.
One final word of warning: with this configuration, I was able to establish a secure connection to the RabbitMQ Broker but, for reasons which I still do not understand, it became impossible to start the RabbitMQ Web Management Tool...
This is my first post on Stack Overflow; I hope I didn't choose the wrong section.
Context:
The Kafka heap size is configured in the following file:
/etc/systemd/system/kafka.service
with the following parameter:
Environment="KAFKA_HEAP_OPTS=-Xms6g -Xmx6g"
OS is "CentOS Linux release 7.7.1908".
Kafka is "confluent-kafka-2.12-5.3.1-1.noarch", installed from the following repository :
# Confluent REPO
[Confluent.dist]
name=Confluent repository (dist)
baseurl=http://packages.confluent.io/rpm/5.3/7
gpgcheck=1
gpgkey=http://packages.confluent.io/rpm/5.3/archive.key
enabled=1
[Confluent]
name=Confluent repository
baseurl=http://packages.confluent.io/rpm/5.3
gpgcheck=1
gpgkey=http://packages.confluent.io/rpm/5.3/archive.key
enabled=1
I activated SSL on a 3-machine Kafka cluster a few days ago, and suddenly the following command stopped working:
kafka-topics --bootstrap-server <the.fqdn.of.server>:9093 --describe --topic <TOPIC-NAME>
It returns the following error:
[2019-10-03 11:38:52,790] ERROR Uncaught exception in thread 'kafka-admin-client-thread | adminclient-1':(org.apache.kafka.common.utils.KafkaThread)
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1152)
at java.lang.Thread.run(Thread.java:748)
The following line appears in the server's log (/var/log/kafka/server.log) when I query it via "kafka-topics":
[2019-10-03 11:41:11,913] INFO [SocketServer brokerId=<ID>] Failed authentication with /<ip.of.the.server> (SSL handshake failed) (org.apache.kafka.common.network.Selector)
I was able to use this command properly BEFORE implementing SSL on the cluster. Here is the configuration I'm using.
All functionality (consumers, producers, ...) works properly except "kafka-topics":
# SSL Configuration
ssl.truststore.location=<truststore-path>
ssl.truststore.password=<truststore-password>
ssl.keystore.type=<keystore-type>
ssl.keystore.location=<keystore-path>
ssl.keystore.password=<keystore-password>
# Enable SSL between brokers
security.inter.broker.protocol=SSL
# Listeners
listeners=SSL://<fqdn.of.the.server>:9093
advertised.listeners=SSL://<fqdn.of.the.server>:9093
There is no problem with the certificate, which is signed by an internal CA that I added to the truststore specified in the configuration. openssl shows no errors:
openssl s_client -connect <fqdn.of.the.server>:9093 -tls1
>> Verify return code: 0 (ok)
The following command works fine with SSL, thanks to the parameter "-consumer.config client-ssl.properties":
kafka-console-consumer --bootstrap-server <fqdn.of.the.server>:9093 --topic <TOPIC-NAME> -consumer.config client-ssl.properties
"client-ssl.properties" content is :
security.protocol=SSL
ssl.truststore.location=<truststore-path>
ssl.truststore.password=<truststore-password>
Right now, I'm forced to use "--zookeeper", which, according to the documentation, is deprecated:
--zookeeper <String: hosts> DEPRECATED, The connection string for
the zookeeper connection in the form
host:port. Multiple hosts can be
given to allow fail-over.
And of course, it works fine:
kafka-topics --zookeeper <fqdn.of.the.server>:2181 --describe --topic <TOPIC-NAME>
Topic:<TOPIC-NAME> PartitionCount:3 ReplicationFactor:2
Configs:
Topic: <TOPIC-NAME> Partition: 0 Leader: <ID-3> Replicas: <ID-3>,<ID-1> Isr: <ID-1>,<ID-3>
Topic: <TOPIC-NAME> Partition: 1 Leader: <ID-1> Replicas: <ID-1>,<ID-2> Isr: <ID-2>,<ID-1>
Topic: <TOPIC-NAME> Partition: 2 Leader: <ID-2> Replicas: <ID-2>,<ID-3> Isr: <ID-2>,<ID-3>
So, my question is: why am I unable to use "--bootstrap-server" at the moment? Because of the zookeeper deprecation, I'm worried about no longer being able to inspect my topics and their details...
I believe that kafka-topics needs the same option as kafka-console-consumer, i.e. "-consumer.config"...
Ask if any additional details are needed.
Thanks a lot, I hope my question is clear and readable.
Blyyyn
I finally found a way to deal with this SSL error. The key is to use the following setting:
--command-config client-ssl.properties
This works with most Kafka commands, like kafka-consumer-groups and of course kafka-topics. See the examples below:
kafka-consumer-groups --bootstrap-server <kafka-hostname>:<kafka-port> --group <consumer-group> --topic <topic> --reset-offsets --to-offset <offset> --execute --command-config <ssl-config>
kafka-topics --list --bootstrap-server <kafka-hostname>:<kafka-port> --command-config client-ssl.properties
Here, <ssl-config> is "client-ssl.properties"; see the initial post for its content.
Beware: if you use an IP address for --bootstrap-server, you'll get an error if the machine's certificate doesn't include a subject alternative name with that IP address. Try to have correct DNS resolution and use the FQDN if possible.
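For reference, a subject alternative name covering both the DNS name and the IP can be included when generating the broker key, roughly like this (alias, keystore path, and values illustrative):
keytool -genkeypair -alias broker -keyalg RSA -keystore kafka.keystore.jks -dname "CN=<fqdn.of.the.server>" -ext "SAN=dns:<fqdn.of.the.server>,ip:<ip.of.the.server>"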
Hope this solution will help, cheers!
Blyyyn
Stop your brokers and run the command below (assuming you have more than 1.5 GB of RAM on your server):
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
then start your brokers on all 3 nodes and try again.
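If the heap is set through the systemd unit shown in the question, the same change presumably belongs in that unit file rather than in a one-off shell export, followed by a reload (a sketch):
# in /etc/systemd/system/kafka.service
Environment="KAFKA_HEAP_OPTS=-Xmx1G -Xms1G"
# then:
systemctl daemon-reload
systemctl restart kafka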
Note that for consumer and producer clients you need to prefix security.protocol accordingly inside your client-ssl.properties.
For Kafka Consumers:
consumer.security.protocol=SASL_SSL
For Kafka Producers:
producer.security.protocol=SASL_SSL
I am trying to set up SSL in CouchDB 1.6.1 on Raspbian, and I get this:
** {badarg,[{ets,select_delete,
[undefined,[{{{undefined,'_','_'},'_'},[],[true]}]],
[]},
{ets,match_delete,2,[{file,"ets.erl"},{line,655}]},
{ssl_pkix_db,remove_certs,2,[{file,"ssl_pkix_db.erl"},{line,221}]},
{ssl_connection,terminate,3,
[{file,"ssl_connection.erl"},{line,934}]},
{tls_connection,terminate,3,
[{file,"tls_connection.erl"},{line,326}]},
{gen_fsm,terminate,7,[{file,"gen_fsm.erl"},{line,595}]},
{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,517}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,237}]}]}
My version of Erlang is Erlang/OTP 17 [erts-6.2].
My local.ini file contains:
[daemons]
httpsd = {couch_httpd, start_link, [https]}
[ssl]
port = 6984
cert_file = /etc/couchdb/cert/couchdb.crt
key_file = /etc/couchdb/cert/couchdb.key
HTTP works fine. Any idea?
Cheers.
This looks like the crash fixed by this pull request, merged into Erlang/OTP 18.2.
To the best of my knowledge, the crash itself is harmless: it occurs during connection termination, because the code is trying to clean up something that was never set up. However, it might draw your attention away from other errors happening before it, such as an incorrect path to the key or certificate files.