Redis crashes instantly without error

I've got Redis installed on my VM, and I haven't used it in a while. (Last time I was using it, it did work, and now it doesn't; nothing's changed in that time, which is about a month.) Needless to say I'm deeply confused, but I'll post as much info as I can.
$ redis-server
Server starts, but throws a warning about overcommit memory being set to 0. I'm on a VM, so I can't change this setting from 0 to 1 even if I wanted to, which I wouldn't want to anyway for my purposes. I've written a custom redis.config file, though, which I want it to use (and which I was using in the past), so starting it with the default config file doesn't do me much good. Let's try that again with my config file.
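(Side note on that warning, before moving on: on a host where the kernel parameters can be changed, it is normally cleared with the stock sysctl setting below. It doesn't apply to a locked-down VM, but for completeness:)
$ sudo sysctl vm.overcommit_memory=1
$ echo "vm.overcommit_memory = 1" | sudo tee -a /etc/sysctl.conf    # persist the setting across reboots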
$ redis-server redis.config
$
Nothing. Silence. No error message, just didn't start.
$ nohup redis-server redis.config > nohup.out&
I get a process ID, but when I then run $ ps I see the process listed as stopped, and it shortly disappears. Again, no errors, and no output in nohup.out or in the Redis log file. Below is the redis.config I'm using (without the comments, to keep it short):
daemonize yes
pidfile [my-user-account-path]/redis/redis.pid
port 0
bind 127.0.0.1
unixsocket [my-user-account-path]/tmp/redis.sock
unixsocketperm 770
timeout 10
tcp-keepalive 60
loglevel warning
logfile [my-user-account-path]/redis/logs/redis.log
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error no
rdbcompression no
rdbchecksum no
dbfilename dump.rdb
dir [my-user-account-path]/redis/db
slave-serve-stale-data yes
slave-priority 100
appendonly no
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
# ADVANCED CONFIG is left at all default settings
I'm sure it's probably something stupid, probably even a permissions thing somewhere (I've tried executing this as root, FYI, to no avail). Has anyone ever experienced something similar with Redis?
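One hedged way to surface whatever is killing the process, assuming a redis-server new enough (2.6+) to accept command-line overrides after the config file path, is to force a foreground run with more verbose logging, so a fatal config or permission error is not lost in the daemonize fork. It is also worth checking that the paths referenced in the config actually exist, since Redis typically bails out during config load if it cannot open the logfile or chdir into dir:
$ redis-server redis.config --daemonize no --loglevel verbose
$ ls -ld [my-user-account-path]/redis/logs [my-user-account-path]/redis/db    # both must exist and be writable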

I have been experiencing Redis crashes as well. Just an FYI: the person responsible for much of Redis' development, Salvatore Sanfilippo, aka antirez, keeps an interesting blog that has some insight on Redis crashes:
http://antirez.com/news/43

Related

"client_loop: send disconnect: Broken pipe" while running long experiments with bash script

I am connected through SSH to a Linux virtual machine to run long experiments (3 hours per program) for academic research. When my computer is not in use I get the error message: client_loop: send disconnect: Broken pipe. I have looked at this forum and tried many of the solutions, such as:
In my ~/.ssh, creating a file named config (and setting its permissions with sudo chmod 644 ~/.ssh/config) and adding the following lines:
ServerAliveInterval 60
ServerAliveCountMax 100000
In /etc/ssh/ssh_config I have added the following:
Host*
ServerAliveInterval 60
ServerAliveCountMax 100000
And finally, in /etc/ssh/sshd_config I have added the following:
TCPKeepAlive yes
ClientAliveInterval 60
ClientAliveCountMax 100000
I have set up my MacBook so that it won't go to sleep, using the command sudo pmset -a disablesleep 1 and by changing all the power-saving settings.
However, when I go away from the computer for ~1 hour without using it actively (so the screensaver is on), I still get this message.
I really don't know where to look at this point. The only things I can think of are MaxStartups 10:30:100 in /etc/ssh/sshd_config or ConnectTimeout 0 in /etc/ssh/ssh_config, but I wasn't entirely sure what the impact of changing these would be.
Any suggestions to solve this problem would be appreciated!
Thanks!
edit/update: I notice that when I leave my computer on overnight but am not running a bash script, I do not get the broken pipe error.
edit/update 2: I find that I can leave my computer unattended for at least 30 minutes without a broken pipe error.
I solved it by adding the following in /etc/ssh/ssh_config on my MacBook:
Host *
ServerAliveInterval 60 #add this line
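For what it's worth, the same keepalive can also be tested on a single connection without touching any config files, which makes it easier to confirm that this is what actually stops the drops (standard OpenSSH client options; user@remote-host is a placeholder):
$ ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=100000 user@remote-host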

Redis SSL Errors In Logs - No Actual Issues?

I have a strange question. I by all rights believe I have a fully functional 6-node cluster (3 masters, 3 replicas) running Redis 6.2.6 on Ubuntu Server. The client key appears to work and I get responses from all nodes as expected.
However, my logs for all 6 nodes are spamming:
Error accepting a client connection: error:1408F10B:SSL routines:ssl3_get_record:wrong version number (conn: fd=20)
Even at the lowest logging level available (warning), this keeps happening. Am I missing something and I actually DO have a problem, or is this a bug, and is there a way to stop it spewing these messages beyond turning off logging?
Config:
port 0
bind 127.0.0.1
tls-port 6381
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
appendfsync everysec
tcp-backlog 65536
tcp-keepalive 0
maxclients 10000
loglevel warning
logfile "/var/log/redis/redis-cluster-6381.log"
tls-replication yes
tls-cluster yes
tls-auth-clients no
tls-protocols "TLSv1.2 TLSv1.3"
tls-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
tls-ca-cert-dir /opt/redis-ssl
tls-cert-file /opt/redis-ssl/redis-cluster-01.mydomain.pem
tls-key-file /opt/redis-ssl/redis-cluster-01.mydomain.key
tls-ca-cert-file /opt/redis-ssl/digicert-ca.crt
Did you ever find the solution to this?
I have the same problem. As you mention, it appears that there is no impact, but it is annoying. What is more frustrating is that I have (what I think is) an identical setup (I am using Sentinel; same Redis version 6.2.6) and the problem does not occur there.
What I also see is that the message comes up immediately after "Accepted 127.0.0.1:38348", and if you look, it points to that same ID as being used for redis-cli, so it looks like it could be something to do with communication from "this" machine. To re-stress: I have no issues connecting from RedisInsight 2 or Python, and redis-cli works fine.
We did have an issue connecting to the "OK" instance from .NET (StackExchange.Redis), but it looks like that was just a StackExchange.Redis parameter.
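In case it helps narrow it down: that particular OpenSSL message usually means something opened a plaintext TCP connection to a TLS-enabled port, so one rough check is to compare a plain and a TLS connection against the tls-port and see which one reproduces the log line (cert/key/CA paths taken from the config above; adjust for the node you test against):
$ redis-cli -p 6381 ping    # plaintext against the TLS port; expect an error and, likely, the "wrong version number" log entry
$ redis-cli -p 6381 --tls \
    --cert /opt/redis-ssl/redis-cluster-01.mydomain.pem \
    --key /opt/redis-ssl/redis-cluster-01.mydomain.key \
    --cacert /opt/redis-ssl/digicert-ca.crt ping    # proper TLS; expect PONG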

Redis server deleted after rebooting the machine

I am working with Redis 3.0.7 on Ubuntu. Every time I reboot and then start the Redis server using "nohup redis-server &", all the key-value content is deleted.
I checked the snapshotting settings in redis.conf and I have the default configuration of:
save 900 1
save 300 10
save 60 10000
The machine was up for a long time (months) before the first reboot.
Any idea why this could be happening?
OK, apparently I have to start the server from the directory where the dump file actually is. Found it; it works now.
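For anyone hitting the same thing, a sketch of the two usual ways to make that stick, so the data does not depend on which directory the server happens to be started from (/path/to/redis-data is a placeholder):
# Option 1: pin the working directory in redis.conf, where dump.rdb is read and written
dir /path/to/redis-data
# Option 2: change into that directory before starting, as before
$ cd /path/to/redis-data && nohup redis-server &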

How to disable Redis RDB and AOF?

How to completely disable RDB and AOF?
I don't care about persistence and want it to be in-memory only.
I have already commented out these lines:
#save 900 1
#save 300 10
#save 60 10000
But this did not help and I see that Redis still tries to write to disk.
I know that Redis wants to write to disk because I get this error: "Failed opening .rdb for saving: Permission denied"
I don't care about the error, because I want to disable the Persistence altogether.
If you want to change the Redis instance that is already running, log into it with redis-cli, and
disable the AOF:
config set appendonly no
disable the RDB:
config set save ""
If you want these changes to remain effective after restarting Redis, use
config rewrite
to write these changes back to the Redis conf file.
If your Redis has not started yet, just make these changes in redis.conf:
appendonly no
save ""
Make sure there are no lines like "save 60 1000" after the lines above, since a later save line would override the earlier one.
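A quick way to confirm persistence is really off after those changes (plain redis-cli commands against a local instance):
$ redis-cli config get save          # expect an empty value once snapshots are disabled
$ redis-cli config get appendonly    # expect "no"
$ redis-cli info persistence | grep aof_enabled    # expect aof_enabled:0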
Update: please look at Fibonacci's answer. Mine is wrong, although it was accepted.
Commenting the "dbfilename" line in redis.conf should do the trick.

Redis is configured to save RDB snapshots, but is currently not able to persist on disk

I get the following error whenever I execute any command that modifies data in Redis:
Redis is configured to save RDB snapshots, but is currently not able to persist on disk.
Commands that may modify the data set are disabled.
Please check Redis logs for details about the error.
I installed Redis using brew on a Mac. How can I find the location of the log files where redis-server logs information? I tried looking for the Redis conf file, but couldn't find it either.
What is the default location of [1] the Redis conf file and [2] the Redis log file?
How do I get rid of the above error and become able to execute commands that modify data in Redis?
When installing with brew the logfile is set to stdout. You need to edit /usr/local/etc/redis.conf and change logfile to something else. I set mine to:
logfile /var/log/redis-server.log
You'll also want to make sure the user that runs Redis has write permission to the logfile, or Redis will simply fail to launch completely. Then just restart Redis:
brew services restart redis
After restarting it'll take a while for the error to show up in the logs, because it happens after redis fails its timed flushes. You should be seeing something like:
[7051] 29 Dec 02:37:47.164 # Background saving error
[7051] 29 Dec 02:37:53.009 * 10 changes in 300 seconds. Saving...
[7051] 29 Dec 02:37:53.010 * Background saving started by pid 7274
[7274] 29 Dec 02:37:53.010 # Failed opening .rdb for saving: Permission denied
After a brew install it attempts to save to /usr/local/var/db/redis/ and since redis is probably running as your current user and not root, it can't write to it. Once redis has permission to write to the directory, your logfile will say:
[7051] 29 Dec 03:08:59.098 * 1 changes in 900 seconds. Saving...
[7051] 29 Dec 03:08:59.098 * Background saving started by pid 8833
[8833] 29 Dec 03:08:59.099 * DB saved on disk
[7051] 29 Dec 03:08:59.200 * Background saving terminated with success
and the stop-writes-on-bgsave-error error will no longer get raised.
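One hedged way to grant that permission on a typical brew setup, assuming Redis runs as your login user and the data directory is the /usr/local/var/db/redis/ mentioned above:
$ sudo chown -R "$(whoami)" /usr/local/var/db/redis
$ brew services restart redis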
I guess it is a bit late to add an answer here, but I came across your question because I had the same error. I got it solved by changing the dir variable in my redis.conf like this:
# The filename where to dump the DB
dbfilename dump.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir /root/path/to/dir/with/write/access/
The default value is: ./, so depending on how you launch your redis server you might not be able to save snapshots.
Hope it helps someone!
In my case I resolved this issue with the steps below.
Cause: by default Redis stores data in ./, and if Redis runs as the redis user it will not be able to write to that directory, so you will face the above error.
Resolution:
Step #1 (enter a valid location where Redis can do write operations)
root#fpe:/var/lib/redis# vim /etc/redis/redis.conf
dir /var/lib/redis  # (the redis user must have write permission on this location)
Step #2 (connect with redis-cli, point the write directory at that location, and trigger a save)
127.0.0.1:6379> CONFIG SET dir "/var/lib/redis"
127.0.0.1:6379> BGSAVE
This will enable Redis to write data to the dump file.
I was going through the GitHub discussion, and the proposed solution is to run
config set stop-writes-on-bgsave-error no
in the redis-cli.
here's the link
https://github.com/redis/redis/issues/584#issuecomment-11416418
Steps to fix this error:
Go to the Redis CLI by typing redis-cli
127.0.0.1:6379> config set stop-writes-on-bgsave-error no
After that, try to set a key and read it back:
127.0.0.1:6379> set test_key 'Test Value'
127.0.0.1:6379> get test_key
"Test Value"
Check the following places:
/usr/local/Cellar/redis...
/usr/local/var/log/redis.log
/usr/local/etc/redis.conf
This error often indicates an issue with write permissions; make sure your RDB directory is writable.
It is usually because of permission limits. In my case, it was Redis disabling writes. You can run redis-cli in the shell and then run the following command:
config set stop-writes-on-bgsave-error no