I understand Redis AOF and RDB persistence options and have read the doc (maybe not thoroughly). What I want to ask is this: is it possible to eliminate the possibility of data loss with Redis?
Setting appendfsync to always seems to be the closest solution. However, there is still the possibility that Redis crashes just after responding to the client with "OK" and before persisting the data to disk. The client would have no way to know that the data was lost, which results in inconsistency.
As far as I can tell, an option to make Redis respond only after the fsync completes would resolve the issue (or perhaps an additional WAITFSYNC command). Is that possible?
I think for now the safest option is to add appendonly yes to your Redis config, provided you are using version 1.1 or greater.
appendfsync always is the slowest of the fsync policies. If you are okay with that, then sure, you can use it. But if you care about your DB's performance, use appendfsync everysec.
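For reference, a minimal redis.conf sketch of the durable setup (stock directive names; pick exactly one appendfsync policy):

appendonly yes
appendfsync always
# appendfsync everysec   <- faster, but up to ~1 second of writes at risk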
The append-only file is a fully-durable strategy for Redis: every time Redis receives a command that changes the dataset (e.g. SET), it appends it to the AOF. When you restart Redis, it replays the AOF to rebuild the state.
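For illustration, commands are appended to the AOF in the Redis protocol (RESP) format, so a SET foo bar roughly becomes:

*3
$3
SET
$3
foo
$3
bar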
I'm fairly new to Redis and working on an Azure Cache for Redis implementation.
In the Azure documentation on Redis troubleshooting, it's stated that there can be long-running commands and that the Redis command documentation shows the time complexity of all commands.
I couldn't find anything about the config set maxmemory-policy command.
Is my assumption correct that setting/changing the maxmemory-policy itself is not an expensive command (unlike e.g. resharding/rebalancing a cluster)?
(I know, "expensive" does not really have a proper definition here :) )
Thanks for any hints or answers!
Yes, the config set command itself is not an expensive command. It iterates over the config array, which contains about 200 items, to find the config and set it. That's all.
However, after the setting takes effect, Redis might need to free memory for evicted items, either periodically or on each command. That's a cost.
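For example, on a stock Redis server the change is a single quick command (managed services such as Azure Cache may expose this through their own settings UI rather than the raw CONFIG command; allkeys-lru is just an arbitrary example value):

redis-cli CONFIG SET maxmemory-policy allkeys-lru
redis-cli CONFIG GET maxmemory-policy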
We are using a Redis server in production with a 6 GB data size. Initially we thought Redis could be used as a memory cache only; if it restarted, we could repopulate from the persistent data store with minimal downtime. Now we have realized that re-populating data from the persistent store is not a good idea at all; it is causing major service downtime.
We want to evaluate Redis persistence options using an RDB and AOF combination. We tried taking an RDB snapshot once an hour and committing to the AOF file at a one-second interval in test environments. The AOF file is growing too big, in the test environment alone. We analyzed the AOF file content and noticed a lot of keys that we don't want to persist to disk; we need them only in Redis memory.
Is there any way to stop logging certain types of keys (a block list of keys) to the AOF file?
Generally, Redis does not provide a way to exclude certain types of keys from persistence. If you need some keys to persist to disk and others not to, you should use two independent Redis instances, one for each type, and configure their persistence settings appropriately. Divide and conquer.
Note: it is possible, however, to control what gets persisted in the AOF inside the context of a Lua script - see the "Selective replication of commands" section of EVAL's documentation. That said, besides the consistency risks, it would be too much of a hassle to use this approach for what you need, IMO.
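For completeness, a minimal sketch of that Lua approach (works on Redis 3.2+; KEYS[1] and ARGV[1] are simply the script's inputs):

redis.replicate_commands()           -- switch the script to effects-based replication
redis.set_repl(redis.REPL_SLAVE)     -- propagate effects to replicas only, skip the AOF
redis.call('SET', KEYS[1], ARGV[1])  -- this write is not appended to the AOF
redis.set_repl(redis.REPL_ALL)       -- restore default propagation
return redis.status_reply('OK')

Note that a later AOF rewrite serializes whatever is in memory, so the key would reappear in the rewritten AOF anyway - another reason this approach is a hassle.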
If we enable the append-only file (appendonly yes) in the redis.conf file, every operation which changes the Redis database is logged in that file.
Now, suppose Redis has used all the memory allocated to it by the "maxmemory" directive in the redis.conf file.
To store more data, it starts removing data according to one of the eviction behaviours (volatile-lru, allkeys-lru, etc.) specified in the redis.conf file.
Suppose some data gets removed from main memory, but its log will still be there in the append-only file (correct me if I am wrong). Can we get that data back using this append-only file?
Simply put, is there any way we can get that removed data back into main memory? For example, can we store that data on disk and load it into main memory when required?
I got this answer from Google Groups; I'm sharing it:

Eviction of keys is recorded in the AOF as explicit DEL commands, so when the file is replayed, full consistency is maintained. The AOF is used only to recover the dataset after a restart, and is not used by Redis for serving data. If the key still exists in it (with a subsequent eviction DEL), the only way to "recover" it is by manually editing the AOF to remove the respective deletion and restarting the server.
Another answer:
The AOF, as its name suggests, is a file that's appended to. It's not a database that Redis searches through and deletes the creation record when a deletion record is encountered. In my opinion, that would be too much work for too little gain.
As mentioned previously, a configuration that re-writes the AOF (see the BGREWRITEAOF command as one example) will erase any keys from the AOF that had been deleted, and now you can't recover those keys from the AOF file. The AOF is not the best medium for recovering deleted keys. It's intended as a way to recover the database as it existed before a crash - without any deleted keys.
If you want to be able to recover data after it was deleted, you need a different kind of backup. Most likely a snapshot (RDB) file that's been archived with the date/time at which it was saved. If you learn that you need to recover data, select the snapshot file from a time you know the key existed, load it into a separate Redis instance, and retrieve the key with RESTORE or GET or similar commands. As has been mentioned, it's possible to parse the RDB or AOF file contents to extract data from them without loading the file into a running Redis instance. The downside to this approach is that such tools are separate from the Redis code and may not always understand changes to the file formats the way the Redis server does. You decide which approach offers the speed and reliability you want.
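A sketch of that archiving idea (hypothetical paths; note that BGSAVE runs in the background, so a real script should poll LASTSAVE until the timestamp changes before copying):

redis-cli BGSAVE
cp /var/lib/redis/dump.rdb /backups/dump-$(date +%F-%H%M%S).rdb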
But its log will still be there in the append-only file (correct me if I am wrong). Can we get that data back using this append-only file?
NO, you CANNOT get the data back. When Redis evicts a key, it also appends a delete command to the AOF. After the AOF is rewritten, anything about the evicted key will be removed.
Is there any way we can get that removed data back into main memory? Like, can we store that data on disk and load it into main memory when required?
NO, you CANNOT do that. You have to use another durable data store (e.g. MySQL, MongoDB) for saving data to disk, and use Redis as a cache.
This question is about Redis persistence.
I'm using Redis as a 'fast backend' for a social networking website. It's a single-server setup, and I've been steadily transferring PostgreSQL responsibilities to Redis. Currently in /etc/redis/redis.conf, the appendonly setting is set to appendonly no. Snapshotting settings are save 900 1, save 300 10, and save 60 10000. All this is true for both production and development. As per the production logs, save 60 10000 gets invoked heavily. Does this mean that, practically, I'm getting backups every 60 seconds?
Some literature suggests using AOF and RDB backups together. Thus I was weighing turning appendonly on and using appendfsync everysec. For anyone who has experience with both sides of the coin:
1) Will using appendonly on and appendfsync everysec cause a performance downgrade? Will it hit the CPU? The write load is on the high side.
2) Once I restart the redis server with these new settings, I'll still lose the last 60 secs of my data, correct?
3) Are restart times something to worry about? My dump.rdb file is small; ~90MB.
I'm trying to find out more about redis persistence, and getting my expectations right. Personally, I'm fine with losing 60s of data in the case of a catastrophe, thus whether I should use AOF is also something I'm pondering. Feel free to chime in. Thanks!
Does this mean that practically, I'm getting backups every 60 seconds?
NO. Redis does a background save after 60 seconds only if at least 10000 keys have been changed. Otherwise, it doesn't do a background save.
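Each save line reads "save <seconds> <changes>": snapshot only if at least that many keys changed within the window. Your settings from the question:

save 900 1      # after 900 s if at least 1 key changed
save 300 10     # after 300 s if at least 10 keys changed
save 60 10000   # after 60 s if at least 10000 keys changed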
Will using appendonly on and appendfsync everysec cause a performance downgrade? Will it hit the CPU? The write load is on the high side.
It depends on many things, e.g. disk performance (SSD vs. HDD), write/read load (QPS), data model, and so on. You need to do a benchmark with your own data in your specific environment.
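As a rough starting point, redis-benchmark can simulate a write-heavy load (the flag values here are arbitrary placeholders, not a recommendation):

redis-benchmark -t set,get -n 100000 -d 256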
Once I restart the redis server with these new settings, I'll still lose the last 60 secs of my data, correct?
NO. If you turn on both AOF and RDB, then when Redis restarts, the AOF file will be used to rebuild the database. Since you configured appendfsync everysec, you will lose at most the last 1 second of data.
Are restart times something to worry about? My dump.rdb file is small; ~90MB.
If you turn on AOF, then when Redis restarts, it replays the log in the AOF file to rebuild the database. Normally the AOF file is larger than the RDB file, and recovery might be slower than from an RDB file. Should you worry about that? Do a benchmark with your own data in your specific environment.
EDIT
IMPORTANT NOTICE
Assume that you already set Redis to use RDB saving and have written lots of data to it. After a while, you want to turn on AOF saving. NEVER MODIFY THE CONFIG FILE TO TURN ON AOF AND THEN RESTART REDIS, OTHERWISE YOU'LL LOSE EVERYTHING.
Because once you set appendonly yes in redis.conf and restart Redis, it will load data from the AOF file, no matter whether the file exists or not. If the file doesn't exist, Redis creates an empty file and tries to load data from that empty file, so you'll lose everything.
In fact, you don't have to restart Redis to turn on AOF. Instead, you can use the config set command to turn it on dynamically: config set appendonly yes.
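For example (CONFIG REWRITE, available since Redis 2.8, optionally writes the change back into redis.conf so it also survives the next restart):

redis-cli CONFIG SET appendonly yes
redis-cli CONFIG REWRITE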
I'm building a Redis DB which consumes nearly all of my machine's memory. If Redis starts to save to disk while heavy inserting is going on, the memory consumption is more or less doubled (as described in the documentation). This leads to terrible performance in my case.
I would like to force Redis not to store any data while I'm inserting, and then trigger the save manually afterwards. That should solve my problem, but however I configure the "save" setting, at some point in time Redis starts to save to disk. Any hint how to prevent Redis from doing so?
You can disable saving by commenting out all the "save" lines in your redis.conf.
Alternately, if you don't want to edit any .conf files, run Redis with:
redis-server --save ""
As per the example config (search for save):
It is also possible to remove all the previously configured save points by adding a save directive with a single empty string argument, like in the following example:
save ""
I would also suggest looking at persistence-only slaves (master/slave replication), i.e. having the slaves persist data instead of the master; see the sketch below.
Take a look at this LINK
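A minimal sketch of that persistence-only replica setup (directive names per modern Redis; <master-ip> is a placeholder):

# master's redis.conf: no persistence on the serving node
save ""
appendonly no

# replica's redis.conf: this node does the disk work
replicaof <master-ip> 6379
appendonly yes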