redis-benchmark fails to get CONFIG - redis

ERROR: failed to fetch CONFIG from 127.0.0.1:6379
WARNING: Could not fetch server CONFIG
It looks like it requires certificates in order to connect to the Redis instance.
I don't see any TLS options in the redis-benchmark help output.
Any idea how to build/compile redis-benchmark with TLS options enabled?

The version of Redis that you use, 6.0.5, doesn't have a TLS option for redis-benchmark; it was introduced in 6.2-rc1: https://github.com/redis/redis/releases/tag/6.2-rc1.
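On 6.2 and later, redis-benchmark accepts the same TLS flags as redis-cli, provided the binaries were compiled with TLS support. A minimal sketch (the version tag and certificate paths are illustrative):
git clone --branch 6.2.6 https://github.com/redis/redis.git
cd redis
make BUILD_TLS=yes
# redis-benchmark now exposes --tls, --cacert, --cert and --key
src/redis-benchmark --tls --cacert /path/to/ca.crt --cert /path/to/client.crt --key /path/to/client.key -h 127.0.0.1 -p 6379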

Related

Command not found while starting the secured zookeeper CLI to connect to ZK server

I have configured the ZK server to use SSL (signed cert, truststore, keystore, modified zookeeper.properties; all setup done and good). ZooKeeper starts and listens on port 2182 for SSL requests, and there are no errors in the ZooKeeper and Kafka server logs.
#new properties added in kafka/config/zookeeper.properties
secureClientPort=2182
authProvider.x509=org.apache.zookeeper.server.auth.X509AuthenticationProvider
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
ssl.trustStore.location=/path/to/ssl/kafka.zookeeper.truststore.jks
ssl.trustStore.password=serversecret
ssl.keyStore.location=/path/to/ssl/kafka.zookeeper.keystore.jks
ssl.keyStore.password=serversecret
ssl.clientAuth=need
Now, to connect to the secure ZooKeeper using the ZK CLI, I am following a similar approach: create a zk-client cert, get it signed, and create a truststore and keystore for it. I created the properties file and am trying to connect to the ZK server, but I get an error:
Command not found: Command not found /path/to/ssl/zookeeper-client.properties
$ kafka/bin/zookeeper-shell.sh localhost:2182 -zk-tls-config-file /Users/path/to/ssl/zookeeper-client.properties
Connecting to localhost:2182
ZooKeeper -server host:port cmd args
addauth scheme auth
close
.....
Command not found: Command not found /Users/path/to/ssl/zookeeper-client.properties
My zookeeper-client.properties looks like this
$cat /Users/path/to/ssl/zookeeper-client.properties
#zookeeper.connect=localhost:2182
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.client.enable=true
zookeeper.ssl.protocol=TLSv1.2
zookeeper.ssl.truststore.location=/Users/path/to/ssl/kafka.zookeeper-client.truststore.jks
zookeeper.ssl.truststore.password=serversecret
zookeeper.ssl.keystore.location=/Users/path/to/ssl/kafka.zookeeper-client.keystore.jks
zookeeper.ssl.keystore.password=serversecret
Kafka server logs at ZooKeeper startup:
[2021-07-16 11:27:38,676] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NettyServerCnxnFactory)
[2021-07-16 11:27:43,760] INFO bound to port 2181 (org.apache.zookeeper.server.NettyServerCnxnFactory)
.....
[2021-07-16 11:27:43,819] INFO Using org.apache.zookeeper.server.NettyServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
[2021-07-16 11:27:43,819] INFO binding to port 0.0.0.0/0.0.0.0:2182 (org.apache.zookeeper.server.NettyServerCnxnFactory)
[2021-07-16 11:27:43,821] INFO bound to port 2182 (org.apache.zookeeper.server.NettyServerCnxnFactory)
...
When I try to connect to port 2182 with the zk-client, the server log doesn't show an entry (probably because the client cannot connect, as the command that initiates the connection fails).
I am using kafka_2.12, which ships with zookeeper-3.5.7.
What am I missing here? To me the configuration looks as expected, and the ZK CLI shouldn't throw this error.
Reference :
https://atsc.com.sg/docs/edp/7-security/zookeeper-mutual-tls/
https://docs.confluent.io/platform/current/security/zk-security.html
Thanks,
JE
I think the problem is that your CLI comes from an older version that does not yet support this parameter. Check your execution path: are you truly executing from the current version?
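For context, the -zk-tls-config-file option was only added to zookeeper-shell.sh in Kafka 2.5.0 (KIP-515); on older scripts the argument falls through to the ZooKeeper command parser, which produces exactly this "Command not found" message. A quick way to check what is actually being executed (a sketch):
# Which script is on the PATH, and which Kafka version does it belong to?
which zookeeper-shell.sh
kafka/bin/kafka-topics.sh --version
# On Kafka 2.5+ the invocation from the question should be accepted:
kafka/bin/zookeeper-shell.sh localhost:2182 -zk-tls-config-file /Users/path/to/ssl/zookeeper-client.properties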

How to force 'OpenConnect' client to use TLS 1.0

I'm using OpenConnect version v8.05 on Red Hat Enterprise Linux 8.1 (Ootpa) in order to connect to a server.
The server only accepts SSLv3 and TLSv1.0 ciphers, and I don't have access to the server for security updates/upgrades.
When I try to connect:
[root@RHEL8 ~]# openconnect --authenticate XXX.XXX.XXX.XXX:443 -status -msg -debug
MTU 0 too small
POST https://XXX.XXX.XXX.XXX/
Connected to XXX.XXX.XXX.XXX:443
SSL negotiation with XXX.XXX.XXX.XXX
SSL connection failure: A packet with illegal or unsupported version was received.
Failed to open HTTPS connection to XXX.XXX.XXX.XXX
Failed to obtain WebVPN cookie
I have changed the OpenSSL minimum protocol by editing:
/etc/crypto-policies/back-ends/opensslcnf.config
MinProtocol = TLSv1.0
Now I'm able to handshake with the server using 'openssl s_client -connect', but the OpenConnect client is still not able to connect to the server.
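For reference, the relaxed minimum can be verified directly, and on RHEL 8 the supported way to lower the system-wide floor is the crypto-policies tool rather than hand-editing the back-end file (a sketch):
# Force a TLS 1.0 handshake to confirm the server-side change works
openssl s_client -connect XXX.XXX.XXX.XXX:443 -tls1
# Supported alternative to editing opensslcnf.config by hand;
# LEGACY re-enables TLS 1.0/1.1 system-wide, so use with care
sudo update-crypto-policies --set LEGACY
update-crypto-policies --show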
How can I force it to use TLS 1.0?
I have filed an issue on their community issue tracker and got useful info.
It is possible to allow this insecure connection with any version newer than 8.05 (currently not available in the RPM repositories), as mentioned by the maintainer:
$ ./openconnect --gnutls-priority "NONE:+VERS-SSL3.0:+VERS-TLS1.0:%NO_EXTENSIONS:%SSL3_RECORD_VERSION:+3DES-CBC:+ARCFOUR-128:+MD5:+SHA1:+COMP-ALL:+KX-ALL" ***
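Since no release newer than 8.05 is packaged, building from source is the practical route. A minimal sketch, assuming the usual autotools build dependencies are installed (the clone URL is OpenConnect's upstream repository):
git clone https://gitlab.com/openconnect/openconnect.git
cd openconnect
./autogen.sh
./configure
make
sudo make install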

How to use Kafka with TLS peer verification turned off

I'm testing Kafka cluster creation using Let's Encrypt staging certs. After creating the cluster, I run the Kafka-provided kafka-console-consumer.sh and kafka-console-producer.sh scripts on my machine. When I ran with Let's Encrypt production certs, it worked fine. But now that I'm using staging certs, I get this when I run the producer:
ERROR [Producer clientId=console-producer] Connection to node -1 (2.kafka.mysite.com/10.1.17.191:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
I use these properties for the producer script:
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka" password="secret";
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
I'd like the option to ignore TLS verification, toggled by some parameter (on the cluster or on the client). How can I achieve this? For anyone familiar with RabbitMQ, I think it's similar to VERIFY_PEER=false, aka VERIFY_NONE.
The Kafka configuration has the setting
ssl.client.auth
Its value can be set to required, requested, or none. You could set it to requested, which means client authentication is optional. Unlike required, if this option is set the client can choose not to provide authentication information about itself.
https://docs.confluent.io/current/installation/configuration/broker-configs.html
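Note that ssl.client.auth governs whether the broker verifies the client. For the direction the question asks about (the client not verifying the broker's certificate), Kafka has no direct VERIFY_NONE equivalent; the closest workaround is to trust the staging CA explicitly and disable hostname verification on the client. A sketch, with placeholder paths and passwords:
# Import the Let's Encrypt staging root into a client truststore
keytool -importcert -alias le-staging -file staging-root.pem -keystore client.truststore.jks -storepass changeit -noprompt
# Additions to the client properties file;
# the empty ssl.endpoint.identification.algorithm disables hostname checks
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit
ssl.endpoint.identification.algorithm=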

Apache kafka 2.0.0 version - Connection to node 1 failed authentication due to: SSL handshake

I'm using Kafka version kafka_2.12-2.0.0 and received the below error after enabling SSL authentication. It seems to work fine with previous versions: kafka_2.12-1.1.0, 2.11-0.10.2.2, etc.
I don't understand why it is not working with the latest version, 2.0.0. Has anyone observed the same issue that I'm facing right now with the 2.0.0 version?
Below is my test environment docker config file.
listeners=PLAINTEXT://:9092,SSL://:9093
ssl.client.auth=required
ssl.keystore.location=/path/to/server.keystore
ssl.keystore.password=<Key store password>
ssl.key.password=<private key password>
ssl.truststore.location=/path/to/truststore.keystore
ssl.truststore.password=<trust store password>
security.inter.broker.protocol=SSL
And here's the error:
[2018-10-01 09:33:38,984] ERROR [Controller id=1, targetBrokerId=1] Connection to node 1 failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
Can someone help me?
Without more details it's hard to tell for sure, but 2.0.0 introduced a change of behaviour related to the handling of SSL connections.
As mentioned in the 2.0.0 upgrade notes, the broker setting ssl.endpoint.identification.algorithm is now set to https. This enforces hostname verification to prevent "man-in-the-middle" attacks.
To restore the previous behaviour, you need to explicitly set it to an empty string:
ssl.endpoint.identification.algorithm=
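A more durable fix than disabling the check is to issue broker certificates whose SAN matches the advertised hostname, so that verification passes. A hedged sketch using keytool, with placeholder host names and passwords:
# Generate a broker key pair whose SAN matches the broker's hostname
keytool -genkeypair -alias broker -keyalg RSA -validity 365 -keystore server.keystore -storepass changeit -keypass changeit -dname "CN=broker1.example.com" -ext "SAN=DNS:broker1.example.com,IP:10.1.2.3"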
I was also facing a similar issue: I had a Kafka 1.1.1 server running and was using a 2.1.0 Kafka client to push records. Changing the Kafka client to 1.1.1 solved my issue.
Hope this helps.

Mosquitto certificate SSL23_GET_CLIENT_HELLO:unknown protocol

I've been desperately trying to get my MQTT clients to connect to an MQTT broker which is set up with a certificate from a CA (Let's Encrypt: https://pypi.python.org/pypi/letsencrypt/0.4.1). I'm using the same certificate for my HTTPS site, and that seems to work fine. I'm not sure if that has any bearing on this, though.
I've used this guide to set up the certificates for the broker: http://mosquitto.org/2015/12/using-lets-encrypt-certificates-with-mosquitto/
The broker, v1.4.8, seems to work fine with the following config:
cafile chain.pem
certfile cert.pem
keyfile privkey.pem
[ ok ] mosquitto is running.
Clients attempting to connect to this broker with debug messages yield:
Client mosqsub/42074-titan sending CONNECT
On my broker's side, the log shows this error message:
1457358950: New connection from NOT.TELLING.YOU.OBVIOUSLY on port 8883.
1457358950: OpenSSL Error: error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol
1457358950: Socket error on client <unknown>, disconnecting.
I've searched high and wide for a solution to this, sadly there is little to nothing out there.
Any help would be greatly appreciated! Thank you!
I ran into this problem with the paho.mqtt.c MQTT client library when I was using tcp as a protocol instead of ssl.
So I had to use
ssl://1.2.3.4:56789
instead of
tcp://1.2.3.4:56789
Also, when using paho.mqtt.c, make sure you are linking against the libs with SSL support and that those libs were actually built with SSL support! There used to be a bug in a CMake file in which a define was missing (OPENSSL), and thus the SSL libraries did not offer SSL support...
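A quick way to check which paho.mqtt.c variant an application is linked against (a sketch; exact library names can vary by distro, and your_mqtt_app is a placeholder):
# libpaho-mqtt3cs / libpaho-mqtt3as are the SSL-enabled variants;
# libpaho-mqtt3c / libpaho-mqtt3a are built without TLS support
ldconfig -p | grep paho
ldd ./your_mqtt_app | grep -i paho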
My guess is that you've not enabled TLS mode - did you pass --cafile to mosquitto_sub?
This worked for me just to test out a simple secure publish-subscribe.
I used https://github.com/owntracks/tools/blob/master/TLS/generate-CA.sh to generate the certificates (in /share/mosquitto), simply:
generate-CA.sh
I configured mosquitto.conf (including full logging) with:
log_dest file /var/log/mosquitto.log
log_type all
cafile /share/mosquitto/ca.crt
certfile /share/mosquitto/localhost.crt
keyfile /share/mosquitto/localhost.key
I subscribed (with debug enabled) with:
mosquitto_sub -h localhost -t test -p 8883 --insecure -d --cafile /share/mosquitto/ca.crt
I published with:
mosquitto_pub -h localhost -t test -p 8883 --cafile /share/mosquitto/ca.crt -m "Hi" --insecure
I started getting this issue very recently on one of my cloud Mosquitto brokers.
I'm connecting to this broker from another VPS with a Python client, using the paho.mqtt.client library.
Everything was working until one fine day it all broke. The cause might have been regular updates or something else, but it suddenly started giving me a handshake error, exactly the same error mentioned by the OP:
Client connection from AREA51 failed: error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol.
On the Python client I was using transport=tcp and connecting to the secure MQTT port using TLS. This had worked fine earlier. After hitting this issue I updated OpenSSL to the latest version, but that did not resolve it.
My broker was allowing ssl/tcp and websocket connections from all other clients, and the same Python code worked fine on my local machine.
So it was clear that something was wrong with the transport mechanism on my other VPS (the client).
Digging into the Python MQTT library, I found that the transport mechanism can be changed. Simply changing the client code to:
client = mqtt.Client(transport="websockets")
which earlier was:
client = mqtt.Client(transport="tcp")
resolved my issue.
I did have to change the port in the connection call to the one where my secure websocket was running.
I hope this might help someone in similar situation.
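For reference, the broker must actually expose a WebSockets listener for that client-side change to work. A minimal mosquitto.conf sketch, with placeholder ports and certificate paths:
# TLS over raw TCP
listener 8883
cafile /etc/mosquitto/certs/chain.pem
certfile /etc/mosquitto/certs/cert.pem
keyfile /etc/mosquitto/certs/privkey.pem
# TLS over WebSockets (requires mosquitto built with websockets support)
listener 9001
protocol websockets
cafile /etc/mosquitto/certs/chain.pem
certfile /etc/mosquitto/certs/cert.pem
keyfile /etc/mosquitto/certs/privkey.pem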