Redis SSL Errors In Logs - No Actual Issues?

I have a strange question. By all rights, I believe I have a fully functional 6-node cluster (3 masters, 3 replicas) running Redis 6.2.6 on Ubuntu Server. The client key appears to work and I get responses from all nodes as expected.
However, my logs for all 6 nodes are spamming:
Error accepting a client connection: error:1408F10B:SSL routines:ssl3_get_record:wrong version number (conn: fd=20)
Even at the least verbose logging level Redis has, warning, this keeps happening. Am I missing something and I actually DO have a problem, or is there a bug, and is there a way to stop it spewing this short of turning off logging?
Config:
port 0
bind 127.0.0.1
tls-port 6381
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
appendfsync everysec
tcp-backlog 65536
tcp-keepalive 0
maxclients 10000
loglevel warning
logfile "/var/log/redis/redis-cluster-6381.log"
tls-replication yes
tls-cluster yes
tls-auth-clients no
tls-protocols "TLSv1.2 TLSv1.3"
tls-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
tls-ca-cert-dir /opt/redis-ssl
tls-cert-file /opt/redis-ssl/redis-cluster-01.mydomain.pem
tls-key-file /opt/redis-ssl/redis-cluster-01.mydomain.key
tls-ca-cert-file /opt/redis-ssl/digicert-ca.crt
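For what it's worth, that specific OpenSSL error (ssl3_get_record: wrong version number) is what Redis logs when a plaintext, non-TLS client connects to the TLS port. A minimal way to confirm that on one node, using the port and CA path from the config above (tls-auth-clients no means no client cert is needed):
# Plaintext connection to the TLS port: expect the command to fail and the
# same "wrong version number" line to appear in the Redis log
redis-cli -p 6381 ping
# TLS connection for comparison: expect PONG and no new log line
redis-cli --tls -p 6381 --cacert /opt/redis-ssl/digicert-ca.crt ping
If the first command reproduces the log line, something on the box is periodically opening plaintext connections to 6381; local health checks and monitoring agents are the usual suspects.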

Did you ever find a solution to this?
I have the same problem. As you mention, it appears that there is no impact, but it is annoying. What is more frustrating is that I have (what I think is) an identical setup (I am using Sentinel; same Redis version, 6.2.6) and the problem does not occur there.
What I also see is that the message comes up immediately after "Accepted 127.0.0.1:38348", and if you look at the log it points to the same ID as the one used by redis-cli, so it looks like it could be something to do with communication from "this" machine. To re-stress: I have no issues connecting from RedisInsight 2 or Python, and redis-cli works fine.
We did have an issue connecting to the "OK" instance from .NET (StackExchange.Redis), but it looks like that was just a StackExchange.Redis parameter.
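Following up on the "Accepted 127.0.0.1:38348" observation: since the source port of the offending connection shows up in the log, one way to identify the process behind it is to catch it while the connection is still open. A rough sketch (it assumes the connections recur often enough to be caught; ss is part of iproute2):
# Show local connections to the TLS port together with the owning process
sudo ss -tnp dst 127.0.0.1:6381
# Repeat it until the culprit shows up
watch -n 1 "sudo ss -tnp dst 127.0.0.1:6381"
# Or look up a specific source port seen in the log (38348 here)
sudo lsof -nP -iTCP:38348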

Related

Redis connection refused error in the application logs

We saw "Connection refused to ip:263*" to redis instances from the application logs. To solve we changed the port number from 26** to 6379 and it worked fine.
Upon analysis we found that one of the slave Redis servers has port 26380 open, checked using the
netstat -tupln
command, but the other server does not. Upon reading we found that 26380, 26379 and 26381 are ports used by Sentinel. We suspect these 2**** ports should be open on all servers and due to some reason they are not.
Please tell us how to do the following (a rough sketch of these checks follows the list):
check the Sentinel logs
check if Sentinel is configured
check if it is running
check what could have caused it to stop suddenly
find the Redis logs for the port
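A rough sketch of those checks; the config and log paths below are typical package defaults and are assumptions, so substitute your own:
# Is a Sentinel process running, and with which config file?
ps aux | grep [s]entinel
# Can it be reached on its port?
redis-cli -p 26379 ping
redis-cli -p 26379 sentinel masters
# Where is it configured to log? (assumed config path)
grep -E '^(port|logfile|dir)' /etc/redis/sentinel.conf
# Recent log entries (assumed log path)
tail -n 100 /var/log/redis/sentinel.log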
EDIT
This is what I can see in the Sentinel logs:
2907:signal-handler (1653294181) Received SIGTERM scheduling shutdown...
2907:X 23 May 16:23:01.105 # User requested shutdown...
2907:X 23 May 16:23:01.105 * Removing the pid file.
2907:X 23 May 16:23:01.106 # Sentinel is now ready to exit, bye bye...
433:X 23 May 16:25:08.364 # Creating Server TCP listening socket ipaddress:26379: bind: Cannot assign requested address
anotheripaddress
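That last "bind: Cannot assign requested address" line is the likely reason Sentinel is no longer running on that server: it is being asked to bind to an IP address the host does not currently have, so it exits at startup. A quick cross-check (addresses are redacted above; the sentinel.conf path is an assumption):
# Addresses actually assigned to this host
ip -br addr
# What Sentinel is told to bind to
grep -E '^(bind|port)' /etc/redis/sentinel.conf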

Configuration of Running Redis Instance in Swisscom CloudFoundry

I am trying to read the configuration of the running Redis instance. I want to better understand how Redis is configured, especially in regard to persistence settings.
I have successfully connected to the running Redis instance (via an SSH tunnel) and tried to execute the following commands:
CONFIG GET *
CONFIG GET appendonly
However, I get the message
ERR unknown command 'CONFIG'
If I invoke the command "CONFIG GET" without any parameters I get the message
Invalid input argument for command: 'CONFIG GET', passed 0 arguments, must be in range 1 - 1
So the command is known. Seems to be a permission issue!? Is there a way to get the configuration?
The current Redis offering (March 2019) has the following settings for persistence:
appendonly yes
appendfsync everysec
It runs with 2 replicas.
Please note that this applies to the current service offering of Swisscom and might change in the future.
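For anyone else hitting this: managed offerings commonly disable or rename CONFIG, but INFO is usually left available and reports the effective persistence and replication state, which covers most of what CONFIG GET would show here. Over the same SSH tunnel:
INFO persistence
INFO replication
INFO persistence includes fields such as aof_enabled, rdb_last_save_time and aof_rewrite_in_progress; INFO replication shows the role and the number of connected replicas.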

Node address in Infinispan

I start up an Infinispan cache which joins a cluster. It is the only cache in the cluster.
Now, I connect using JMX to see what ports are being used.
I click on:
CacheManager / MyCache/ CacheManager/ Attributes
Under clusterMembers, I see [mymachine-54202]
Thinking 54202 is the port, I run both
lsof -i udp
lsof -i tcp
I am on a Mac and I can't see anything on 54202. What does 54202 correspond to then?
Thanks
It's a random number appended to the host name to differentiate between multiple cache managers running on the same box; it is not a port.
For more details, see http://docs.jboss.org/infinispan/5.0/apidocs/config.html#ce_global_transport

Redis crashes instantly without error

I've got Redis installed on my VM, and I haven't used it in a while. (Last time I used it, it did work, and now it doesn't; nothing's changed in that time, about a month.) Needless to say I'm deeply confused, but I'll post as much info as I can.
$ redis-server
The server starts, but throws a warning about overcommit_memory being set to 0. I'm on a VM, so I can't change this setting from 0 to 1 even if I wanted to, which I wouldn't want to anyway for my purposes. I've written a custom redis.config file, though, which I want it to use (and which I was using in the past), so starting it with the default config file doesn't do me much good. Let's try this again.
$ redis-server redis.config
$
Nothing. Silence. No error message, just didn't start.
$ nohup redis-server redis.config > nohup.out&
I get a process ID, but then $ ps shows the process listed as stopped, and it shortly disappears. Again, no errors, and no output in nohup.out nor in the log file for Redis. Below is the redis.config I'm using (without the comments, to keep it short):
daemonize yes
pidfile [my-user-account-path]/redis/redis.pid
port 0
bind 127.0.0.1
unixsocket [my-user-account-path]/tmp/redis.sock
unixsocketperm 770
timeout 10
tcp-keepalive 60
loglevel warning
logfile [my-user-account-path]/redis/logs/redis.log
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error no
rdbcompression no
rdbchecksum no
dbfilename dump.rdb
dir [my-user-account-path]/redis/db
slave-serve-stale-data yes
slave-priority 100
appendonly no
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
# ADVANCED CONFIG is left at all default settings
I'm sure it's probably something stupid, maybe even a permissions thing somewhere (I've tried executing this as root, FYI), to no avail. Has anyone ever experienced something similar with Redis?
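One way to surface a silent failure like this (a sketch; it relies on redis-server accepting config overrides on the command line after the config file, supported since Redis 2.6): run the same config in the foreground with logging sent to stdout, so any startup error ends up on the terminal rather than in a log file the server may not even be able to create, and double-check that every directory the config points at exists and is writable.
# Same config, but stay in the foreground and log to stdout
redis-server redis.config --daemonize no --logfile "" --loglevel verbose
# Verify the directories referenced by the config exist and are writable
ls -ld [my-user-account-path]/redis [my-user-account-path]/redis/logs
ls -ld [my-user-account-path]/redis/db [my-user-account-path]/tmp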
I have been experiencing Redis crashes as well. Just an FYI: the person responsible for much of Redis' development, Salvatore Sanfilippo, aka antirez, keeps an interesting blog that has some insight on Redis crashes:
http://antirez.com/news/43

RPC Authentication error

Last week I was using RPC and could run my RPC server program just fine. However, today I tried to start it again and I am getting this error:
Cannot register service: RPC: Authentication error; why = Client credential too weak
unable to register (X_PROG, X_VERS, udp)
Can anybody tell me what the cause of this error can be?
rpcinfo gives me this:
program version netid address service owner
100000 4 tcp6 ::.0.111 portmapper superuser
100000 3 tcp6 ::.0.111 portmapper superuser
100000 4 udp6 ::.0.111 portmapper superuser
100000 3 udp6 ::.0.111 portmapper superuser
100000 4 tcp 0.0.0.0.0.111 portmapper superuser
100000 3 tcp 0.0.0.0.0.111 portmapper superuser
100000 2 tcp 0.0.0.0.0.111 portmapper superuser
100000 4 udp 0.0.0.0.0.111 portmapper superuser
100000 3 udp 0.0.0.0.0.111 portmapper superuser
100000 2 udp 0.0.0.0.0.111 portmapper superuser
100000 4 local /run/rpcbind.sock portmapper superuser
100000 3 local /run/rpcbind.sock portmapper superuser
The weird thing is that I haven't even been using this pc the past week.
Are there any services that should be running?
Hope you can help me out.
Grtz Stefan
This error is linked to rpcbind, so you should stop the portmap service like this:
sudo -i service portmap stop
then
sudo -i rpcbind -i -w
and at the end start the portmap service again:
sudo -i service portmap start
I realize this is an older thread, but Google finds it among the top 3 results and people are still discovering the nfs service error. Even Red Hat's RHN fix didn't work.
As of December 2013 on RHEL 6.4 (x64), patched as of November 2013, the only solution was changing the permissions on the tcp_wrappers config files. Because we had secured the box pretty heavily, we had permissions of 640 on /etc/hosts.allow and /etc/hosts.deny, both owned by root:root. We did try giving these files different group ownership, but nothing corrected the issue when nfs started.
Once we put the perms back to "out-of-the-box" (644), the nfs (rquotad) service started up as expected. The same was true if we moved hosts.allow/hosts.deny out of the way entirely.
What a pain that was to figure out. The SELinux logs may have helped if I had looked sooner.
Now, if we had left SELinux in enforcing mode this MAY not have been an issue. I still have to test that theory.
Good luck.
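For anyone hitting the same thing, the check and fix described above boil down to the following (stock file locations assumed):
# Check current ownership and permissions
ls -l /etc/hosts.allow /etc/hosts.deny
# Restore the out-of-the-box state (644, owned by root:root)
chmod 644 /etc/hosts.allow /etc/hosts.deny
chown root:root /etc/hosts.allow /etc/hosts.deny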
Making the change persistent on Ubuntu 12.04
(assuming security implications of running rpcbind with -i are irrelevant):
echo 'OPTIONS="-w -i"' | sudo tee /etc/default/rpcbind
sudo service portmap restart
Yet Another Solution: CentOS 7.3 edition
In addition to rpcbind, I also had to allow mountd in /etc/hosts.allow:
rpcbind : ALL : allow
mountd : ALL : allow
This finally allowed me to not only execute rpcinfo, but showmount and mount as well.
None of the solutions presented here so far worked for me on the Debian Squeeze to Wheezy upgrade.
In my case the sole thing I had to do was to replace all occurrences of "portmapper" (or "portmap", I'm no longer sure which) in /etc/hosts.allow with "rpcbind". That was all. (Otherwise ypbind couldn't connect to rpcbind via localhost.)
This also happens if iptables is used and it is blocking UDP connections for localhost. I ran into this today: stopping iptables made the connections start working.
You will then need to figure out which rules broke it.
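Rather than leaving the firewall off, listing the rules with counters and line numbers usually narrows down which one is rejecting the local traffic (rpcbind listens on port 111); a sketch:
# Show INPUT rules with hit counters and line numbers
sudo iptables -L INPUT -n -v --line-numbers
# Look for DROP/REJECT rules matching udp, 127.0.0.1 or dpt:111, then
# delete or adjust the offending rule by its line number, for example:
# sudo iptables -D INPUT <line-number>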
I think it is worth mentioning that if you see errors like:
0-rpc-service: Could not register with portmap
it can be related to restrictive hosts.allow and hosts.deny files and a missing entry for localhost in the hosts.allow file.
I had this kind of problem when setting up NFS with GlusterFS.
In my /etc/hosts.allow file I have added:
ALL: 127.0.0.1 : ALLOW
and the problem with registering the service with portmap went away and everything is working.
Note: with GlusterFS remember to restart the glusterd service
/etc/init.d/glusterd restart
I was receiving an error like this on RHEL 7:
ypserv: Cannot register service: RPC: Authentication error; why = Client credential too weak
when starting ypbind. I tried everything, including the '-i' to rpcbind above. In the end, as XTaran mentioned, modifying /etc/hosts.allow by adding this line:
rpcbind: 127.0.0.1
worked for me.
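After adding that line it is worth confirming that local registrations are accepted again before moving on (service names as on RHEL 7):
# rpcbind should answer locally and list the registered programs
rpcinfo -p 127.0.0.1
# restart the service that previously failed to register
systemctl restart ypbind
systemctl status ypbind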
FWIW, here's an 'alternative' solution.
Check the /etc/hosts.deny file. It should say something like:
rpcbind mountd nfsd statd lockd rquotad : ALL
Ensure that there is a blank last line in this file.
Check the /etc/hosts.allow file. It should say something like:
rpcbind mountd nfsd statd lockd rquotad: 127.0.0.1 192.168.1.100
Ensure that there is a blank last line in this file.
The "trick" (for me) was the blank final line in the file(s).