What will happen when the AOF file for Redis is too big, even after rewriting? - redis

I am reading about Redis persistence; right now two ways are available for keeping Redis data persistent:
AOF
RDB
OK, I will skip the basic meanings of "AOF" and "RDB". I have a question about AOF: what will happen when the AOF file for Redis is too big, even after being rewritten? I have searched on Google, but found no result; someone said the redis-server will fail to start up when the size of the AOF file reaches 3G or 4G. Could anyone tell me? Thanks a lot.

Redis doesn't limit the size of the AOF file; you can safely have a large one. One of my Redis instances writes an AOF file of 95G, and it can reload the file successfully. Of course, that takes a very long time.
someone said the redis-server will fail to start up when the size of the AOF file reaches 3G or 4G
I'm not sure whether the problem that person hit is a file-system limit. On some old file systems, a single file cannot exceed 2G or 4G; on modern file systems, that limit is gone.
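If you want to keep an eye on how large the file is getting, the INFO persistence section exposes AOF size counters. A minimal sketch, assuming a local instance on the default port:

    # aof_current_size and aof_base_size are reported in bytes;
    # aof_rewrite_in_progress shows whether a rewrite is running right now
    redis-cli info persistence | grep aof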

Related

AOF file size greater than memory allocated to redis nodes

I have Redis nodes with 16Gi each, but the AOF file keeps growing past 16Gi and has now reached 24Gi.
Is the AOF file a journal of all the keys?
I am adding keys, processing them, and deleting them, then adding new keys. So will the AOF keep a record of all the deleted keys as well?
It should be fine
The answer to your fundamental question is that the AOF file exceeding the size of your Redis instance is not something that should overly concern you. The AOF file is a record of all the commands that have been executed against Redis up to the current point. Its purpose is to let you re-run all those commands to put Redis back into its current state, so the fact that it has grown larger than the database is not a concern: when the AOF file has been replayed all the way through, your Redis instance will be exactly as large as it currently is.
You might want to think about AOF rewrites
That said, it may be worth looking into how AOF rewrites are operating on your Redis instance. Redis can rewrite the AOF file periodically to make it smaller and more efficient to replay after a disaster.
A couple of points:
If you are running Redis 2.2 or earlier, you will want to call BGREWRITEAOF from time to time to keep the size of the AOF file under control.
If you are on a more modern version of Redis, take a look at the rewrite settings in your redis.conf file, e.g. auto-aof-rewrite-percentage and auto-aof-rewrite-min-size (see the sketch below).
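A minimal sketch of both approaches, assuming a local instance on the default port (the values shown are the defaults shipped in redis.conf; tune them for your workload):

    # Equivalent redis.conf lines:
    #   auto-aof-rewrite-percentage 100   -> rewrite once the AOF has grown
    #                                        100% past the last rewrite's size
    #   auto-aof-rewrite-min-size 64mb    -> but never rewrite below this size
    redis-cli config set auto-aof-rewrite-percentage 100
    redis-cli config set auto-aof-rewrite-min-size 64mb

    # Or trigger a rewrite manually at a quiet moment:
    redis-cli bgrewriteaof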

Optimize a redis appendonly file

I have an appendonly.aof file that has grown too large (1.5 GB and 29,558,054 lines).
When I try to load Redis, it hangs on "Loading data into memory" for what seems like all day (it still hasn't finished).
Is there anything I can do to optimize this file? It likely contains many redundant transactions (like deleting the same record).
Or anything I can do to see progress, so I know whether I'm waiting for nothing and how long it will take, before I try to restore an older backup?
With Redis 4+ you can use a mixed format to optimize the append-only file by setting aof-use-rdb-preamble to yes.
With this setting in place, Redis dumps the data in RDB format into the AOF file on every BGREWRITEAOF call, which you can verify from the AOF file's contents: it starts with the REDIS keyword.
Upon restart, with that REDIS keyword in the AOF file and aof-use-rdb-preamble enabled, Redis loads the RDB preamble followed by the remaining AOF contents.
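A short sketch of enabling and checking this, assuming a local instance and the single-file appendonly.aof layout used before Redis 7:

    # Enable the RDB preamble (can also be set in redis.conf)
    redis-cli config set aof-use-rdb-preamble yes
    # Rewrite the file; the rewritten AOF starts with an RDB-format preamble
    redis-cli bgrewriteaof
    # Once the rewrite finishes, the file should begin with the magic
    # string "REDIS"
    head -c 5 appendonly.aof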
You can enable this in your Redis server's configuration based on the above.
And if you are using Docker, be careful about how frequently your container gets restarted, since every restart has to reload the file.
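As for seeing progress while that "Loading data into memory" phase runs: Redis still answers INFO during loading, so you can poll the loading counters. A small sketch, assuming a local instance:

    # loading_loaded_bytes, loading_loaded_perc and loading_eta_seconds
    # report how far along the load is and a rough time estimate
    redis-cli info persistence | grep loading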

Redis AOF from RDB base file

I'm looking for the best way to back up my Redis data.
I have read about RDB and AOF, and I think the best way would be to combine them as follows:
Create an RDB snapshot periodically, and only keep the AOF from that point on.
That way, on restart, Redis can restore the RDB file (which is faster than replaying the whole AOF) and then replay the AOF for just the last few seconds of writes.
The AOF file would contain every write since the last RDB.
My question is: is this available in Redis? And are there any downsides to it?
This is how Redis works by default.
See the comments about the aof-use-rdb-preamble configuration in the default redis.conf.
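To confirm what your server is actually running with, you can query the setting at runtime (a sketch assuming a local instance; the default is yes on recent Redis versions):

    # "yes" means AOF rewrites produce exactly what you described:
    # an RDB snapshot followed by the writes logged since that snapshot
    redis-cli config get aof-use-rdb-preamble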

redis wiped out the copied rdb file during it starts

We have encountered a weird Redis issue.
1. I upgraded my Redis from an old version to a new one.
2. I brought up Redis with clean data.
3. I copied the previous RDB file into the data directory.
4. I restarted Redis to load the data.
Then I found that my data was wiped out in step 4. Have any of you encountered this? What could be the possible root cause?
We suspect that Redis was receiving new requests at the time. Could that be a possible cause?
Before shutting down, Redis persists its data to disk (unless persistence is completely disabled in the config), so you should not try such "hot swapping" of the RDB file while the Redis server is running: Redis simply overwrote your file on exit. Instead, stop the Redis server first, then replace the RDB file so it gets loaded on the next start (and later saved back properly).
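A sketch of the safe order of operations; the service name and paths here are illustrative, so adjust them to your setup:

    # 1. Stop the server first, so it cannot save over your file on exit
    systemctl stop redis
    # 2. Now replace the RDB file in the data directory
    cp /backup/dump.rdb /var/lib/redis/dump.rdb
    # 3. Start the server; it loads the replaced file on boot
    systemctl start redis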

What's the performance impact of a large Apache access.log?

If log files like access.log or error.log get very large, will that impact the performance of Apache or of users' requests? From my understanding, Apache doesn't read entire logs into memory; it just uses a file handle to append. Right? If so, I shouldn't have to remove the logs manually whenever they get large, apart from file-system concerns. Please correct me if I'm wrong. Or is there any Apache log I/O issue I should watch out for when running it?
Thanks very much.
Well, I totally agree with you. To my understanding, Apache accesses the log files through file handles and just appends each new message at the end of the file. That's why a huge log file makes no difference for writing. But if you want to open the file or feed it to some kind of log-monitoring tool, the huge size will slow down reading it.
So I would suggest you use log rotation for a better overall result.
This suggestion comes directly from the Apache website.
Log Rotation
On even a moderately busy server, the quantity of information stored in the log files is very large. The access log file typically grows 1 MB or more per 10,000 requests. It will consequently be necessary to periodically rotate the log files by moving or deleting the existing logs. This cannot be done while the server is running, because Apache will continue writing to the old log file as long as it holds the file open. Instead, the server must be restarted after the log files are moved or deleted so that it will open new log files.
From the Apache Software Foundation site
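That same documentation illustrates manual rotation with a graceful restart; the snippet below follows its example (run it from your log directory, and note the sleep gives in-flight requests time to finish logging before compression):

    mv access_log access_log.old
    mv error_log error_log.old
    apachectl graceful
    sleep 600
    gzip access_log.old error_log.old

It also describes piped logging with the bundled rotatelogs program as a way to rotate logs without restarting the server at all.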