I'm running a 6-node cluster on Redis 5.0.6, using the official Redis Docker image, with the following in my config file:
SAVE ""
appendonly no
I can confirm these settings are loaded running:
config get save
1) "save"
2) ""
config get appendonly
1) "appendonly"
2) "no"
But Redis is still creating a dump.rdb file frequently:
info persistence
# Persistence
loading:0
rdb_changes_since_last_save:364575
rdb_bgsave_in_progress:1
rdb_last_save_time:1570058274
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:68
rdb_current_bgsave_time_sec:54
rdb_last_cow_size:445624320
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0
-rw-r--r-- 1 redis redis Oct 2 19:19 dump.rdb
-rw-r--r-- 1 redis redis Oct 2 18:02 nodes.conf
-rw-r--r-- 1 redis redis Oct 2 19:20 temp-260.rdb
I have checked whether BGSAVE commands are being issued by my application by running INFO COMMANDSTATS, and that is not the case.
I have tried setting a very large value for save (CONFIG SET save "99999999999 1215752191") to see if it changes the frequency of the snapshots, and it had no effect: snapshots are still being taken at the same frequency (every few seconds).
Is persistence something that cannot be disabled in a cluster? Is there any other way to disable persistence?
Thank you,
The configuration proposed in the question is correct for disabling persistence.
Related
I use the Redisson client, but the client fails with the error "MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk":
Unable to send PING command over channel: [id: 0x04130153, L:/171.20.0.8:38080 - R:10.3.236.102/10.3.236.102:6379]
org.redisson.client.RedisException: MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.. channel: [id: 0x04130153, L:/171.20.0.8:38080 - R:10.3.236.102/10.3.236.102:6379] command: (PING), params: []
The Redis server log shows no error:
{"log":"3443340:C 09 Apr 00:12:41.648 * DB saved on disk\n","stream":"stdout","time":"2022-04-09T00:12:41.649083457Z"}
{"log":"3443340:C 09 Apr 00:12:41.772 * RDB: 38 MB of memory used by copy-on-write\n","stream":"stdout","time":"2022-04-09T00:12:41.77335587Z"}
{"log":"7:M 09 Apr 00:12:42.024 * Background saving terminated with success\n","stream":"stdout","time":"2022-04-09T00:12:42.025019006Z"}
{"log":"7:M 09 Apr 00:12:45.027 *
This happened because the server time was not correct.
I am trying to slow down the startup process of Redis, so that when we issue the command to start the Redis server and at the same time run
info persistence
it should report loading:1, but right now I am getting:
loading:0
rdb_changes_since_last_save:1024
rdb_bgsave_in_progress:0
rdb_last_save_time:1530558451
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:0
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0
I have to rewrite some parts of this answer due to new information I got from the hint by @ItamarHaber.
If you have only RDB set up
You can safely issue INFO PERSISTENCE and get loading:1.
Loading an RDB snapshot of a 2 GB DB (500 MB on disk) takes about 15 seconds on my machine (just for reference).
If you have AOF set up
During startup, Redis replays operations (it may do a compaction, removing operations that overwrite the same keys) to bring the DB state back to what it was before shutdown. This means you either get a meaningful answer (like the one above) or this:
BUSY Redis is busy running a script. You can only call SCRIPT KILL or SHUTDOWN NOSAVE.
if there were some big Lua scripts in the server's history. So with this configuration you need to be aware of that, or use a completely different method of checking whether Redis is ready:
tail -f the Redis log file and look for the lines:
oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
and
Ready to accept connections
Everything in between is the time Redis spends starting up and loading saved data.
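The RDB-only case above can also be automated by polling INFO PERSISTENCE and parsing the loading flag. A sketch, assuming you already have the raw INFO text (the parsing helper is mine, not part of redis-py):

```python
def is_loading(info_text):
    """Parse the raw text of `INFO persistence` and return True while
    Redis is still loading the dataset from disk (loading:1)."""
    for line in info_text.splitlines():
        line = line.strip()
        if line.startswith("loading:"):
            return line.split(":", 1)[1] == "1"
    return False  # field absent: assume not loading
```

A readiness probe would simply loop until `is_loading(...)` returns False.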
p.s. How to fill Redis with test data
I use a Celery worker server with Redis as the broker URL (for receiving tasks) as well as the result backend:
BROKER_URL = 'redis://localhost:6379/2'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/2'
app = Celery('myceleryapp', broker=BROKER_URL,backend=CELERY_RESULT_BACKEND)
I launch the Celery worker server using celery -A myceleryapp worker -l info -c 8.
The worker processes start processing my tasks from the Redis queue until, at some point, I receive the infamous MISCONF Redis error and the Celery worker process terminates:
Unrecoverable error: ResponseError('MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.',)
I checked the redis log files in /var/log/redis and the tail end of the file has the following
24745:C 19 Aug 09:20:26.169 * RDB: 0 MB of memory used by copy-on-write
1590:M 19 Aug 09:20:26.247 * Background saving terminated with success
1590:M 19 Aug 09:25:27.080 * 10 changes in 300 seconds. Saving...
1590:M 19 Aug 09:25:27.081 * Background saving started by pid 25397
25397:C 19 Aug 09:25:27.082 # Write error saving DB on disk: No space left on device
1590:M 19 Aug 09:25:27.181 # Background saving error
1590:M 19 Aug 09:51:03.042 * 1 changes in 900 seconds. Saving...
1590:M 19 Aug 09:51:03.042 * Background saving started by pid 26341
26341:C 19 Aug 09:51:03.405 * DB saved on disk
26341:C 19 Aug 09:51:03.405 * RDB: 22 MB of memory used by copy-on-write
1590:M 19 Aug 09:51:03.487 * Background saving terminated with success
The dump.rdb file is being written to /var/lib/redis/dump.rdb.
Since the logs reported No space left on device, I checked the disk space where /var is mounted, and there seems to be sufficient space left (1.2GB).
How do I get to the root cause of this error if there is enough disk space? Of course, to prevent this error from happening I could run config set stop-writes-on-bgsave-error no in redis-cli, but I want to find the root cause. Any help or pointers?
Maybe this is caused by the swap file: if the swap file takes up the 1.2 GB of space on your disk, Redis will complain that there is no space to write.
Try the swapon -s command to check this.
I think 1.2 GB is not enough if this disk also takes RAM page swap; you should point the RDB dir at a bigger disk.
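One way to sanity-check the "enough space" assumption: BGSAVE first writes a full temp-<pid>.rdb on the same filesystem before renaming it over dump.rdb, so the free space that matters is on the filesystem holding the RDB dir (CONFIG GET dir), and it must fit a whole extra copy of the dump. A stdlib sketch; the path is an assumption:

```python
import os
import shutil

def room_for_bgsave(rdb_path):
    """Return (free_bytes, dump_bytes, ok): BGSAVE needs roughly another
    full copy of the dump as temp-<pid>.rdb on the same filesystem."""
    dump_bytes = os.path.getsize(rdb_path) if os.path.exists(rdb_path) else 0
    free_bytes = shutil.disk_usage(os.path.dirname(rdb_path) or ".").free
    return free_bytes, dump_bytes, free_bytes > dump_bytes
```

Also worth checking: inode exhaustion (df -i) can produce the same "No space left on device" error even when df -h shows free space, as can per-container or quota limits.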
Today I took a look and found that one of our Redis servers has a very serious problem: it can't save.
info
# Persistence
loading:0
rdb_changes_since_last_save:82904
rdb_bgsave_in_progress:1
rdb_last_save_time:1444978843
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:1444978844
rdb_current_bgsave_time_sec:1448356070
Basically, the last bgsave happened on Oct 11th. I can't restart it because I think I'll lose all the in-memory data.
What should I do?
I'm getting "OOM command not allowed" when trying to set a key.
maxmemory is set to 500M with maxmemory-policy volatile-lru, and I'm setting a TTL for each key sent to Redis.
The INFO command returns: used_memory_human:809.22M
If maxmemory is set to 500M, how did I reach 809M?
The INFO command does not show any keyspaces; how is that possible?
KEYS * returns "(empty list or set)". I've tried changing the DB number, and still no keys are found.
Here is info command output:
redis-cli -p 6380
redis 127.0.0.1:6380> info
# Server
redis_version:2.6.4
redis_git_sha1:00000000
redis_git_dirty:0
redis_mode:standalone
os:Linux 2.6.32-358.14.1.el6.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.4.7
process_id:28291
run_id:229a2ee688bdbf677eaed24620102e7060725350
tcp_port:6380
uptime_in_seconds:1492488
uptime_in_days:17
lru_clock:1429357
# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:848529904
used_memory_human:809.22M
used_memory_rss:863551488
used_memory_peak:848529192
used_memory_peak_human:809.22M
used_memory_lua:31744
mem_fragmentation_ratio:1.02
mem_allocator:jemalloc-3.0.0
# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1375949883
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
# Stats
total_connections_received:3
total_commands_processed:8
instantaneous_ops_per_sec:0
rejected_connections:0
expired_keys:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
# Replication
role:master
connected_slaves:0
# CPU
used_cpu_sys:18577.25
used_cpu_user:1376055.38
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
# Keyspace
redis 127.0.0.1:6380>
Redis' maxmemory volatile-lru policy can fail to free enough memory if the maxmemory limit is already used by the non-volatile keys.
Any chance you changed the number of databases? If you use a very large number, the initial memory usage may be high.
In our case, maxmemory was set to a high amount, then someone on the team changed it to a lower amount after data had already been stored.
My problem was that old data wasn't being released, and it caused the Redis DB to get jammed up quickly.
In Python, I cleared the cache server by running:
import redis

red = redis.StrictRedis(...)
red.flushdb()
And then limited the TTL to 24 hours by saving the file with ex:
red.set(<FILENAME>, png, ex=(60*60*24))
Memory is controlled in the config, so your instance is limited as it says. You can either look in your redis.conf or, from the CLI, issue config get maxmemory to get the limit.
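Note that the maxmemory directive in redis.conf accepts unit suffixes (per the redis.conf comments: 1k = 1000 bytes, 1kb = 1024, 1m = 1000^2, 1mb = 1024^2, 1g, 1gb, case-insensitive), while config get maxmemory reports plain bytes. A small converter sketch for comparing the two (the helper is illustrative, not a redis-py function):

```python
def to_bytes(value):
    """Convert a Redis memory string like '500mb' to bytes.
    Redis units: 1k = 1000, 1kb = 1024, 1m = 1000^2, 1mb = 1024^2, etc."""
    units = {"k": 10**3, "kb": 2**10,
             "m": 10**6, "mb": 2**20,
             "g": 10**9, "gb": 2**30}
    v = str(value).strip().lower()
    # Check two-letter suffixes before one-letter ones so 'mb' isn't read as 'b'
    for suffix in sorted(units, key=len, reverse=True):
        if v.endswith(suffix):
            return int(v[: -len(suffix)]) * units[suffix]
    return int(v)  # already plain bytes
```

This makes it easy to compare a configured "500mb" against the used_memory byte count from INFO.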
If you manage this Redis instance, you'll need to consult and adjust the config file, usually found at /etc/redis.conf or /etc/redis/redis.conf.
If you are using a Redis provider, you will need to contact them about increasing your limit.
To debug this issue, you need to check what actions were performed against Redis, either manually in redis-cli or from the code.
It is possible you ran KEYS * with very little memory left to accommodate the memory consumed by that command, which leads to throttling of the cache service.
In code, your changes might affect key insertion (duplicate vs. unique data in the DB), and this can push overall memory use in the system past the limit.