I am using Redis 2.8.4 and have enabled append-only persistence in the configuration. It is working fine for my requirements, but the problem is that the file keeps growing larger, so it may cause a storage shortage in the future. Is there any solution?
Redis has a way to shrink the AOF by rewriting it without the redundant operations. For example, if you are incrementing a counter 100 times, you'll end up with a single key in your dataset containing the final value, but 100 entries in your AOF; 99 of those entries are not needed to rebuild the current state.
It is an automatic job in Redis 2.8.4; see redis.conf for auto-aof-rewrite-percentage and auto-aof-rewrite-min-size. You can also trigger it manually with the BGREWRITEAOF command, see https://redis.io/commands/bgrewriteaof.
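For reference, the relevant defaults in the 2.8 redis.conf look like this (the 100% / 64mb values are the stock settings; tune them to your growth pattern):

# Rewrite the AOF once it has grown by 100% since the last rewrite,
# but never while the file is smaller than 64 MB.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

You can also trigger a rewrite by hand:

redis-cli BGREWRITEAOF

Note that the rewrite shrinks the file on disk; it does not reduce the memory used by the dataset itself.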
I would like to cache very large amounts of data in an Infinispan 13 cache that uses passivation to disk. I've accomplished this with the following configuration:
<persistence passivation="true">
<file-store purge="true"/>
</persistence>
<memory storage="OFF_HEAP" max-size="1GB" when-full="REMOVE"/>
However, now I would like to set a maximum size for the file-store, e.g. 50 GB, and have the cache delete overflowing entries completely.
Is there a way to do this? I could not find any option to limit the size of a file-store in the documentation.
Thank you!
There is no way to specifically limit the total size of the stored files. Depending on your use case, setting compaction-ratio lower may help free some space; see https://docs.jboss.org/infinispan/13.0/configdocs/infinispan-config-13.0.html under file-store.
You can use expiration, though, to remove entries after a given period of time: https://infinispan.org/docs/stable/titles/configuring/configuring.html#expiration_configuring-memory-usage This removes those entries from the cache, which in turn hits the compaction-ratio sooner and cleans up old files.
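As a rough sketch of the expiration route, you can add an expiration element alongside the <persistence> and <memory> elements you already have; the 24-hour lifespan (in milliseconds) is only an illustrative value:

<!-- illustrative value: entries expire 24 hours after creation -->
<expiration lifespan="86400000"/>

Expired entries are removed from both memory and the file store, so compaction can reclaim the space afterwards.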
I'm wondering about the most efficient way to store this data.
I need to track 30-50 million data points per day. It needs extremely fast reads and writes, so I'm using Redis.
The data only needs to last for 24 hours, at which point it will EXPIRE.
The data looks like this as a key/value hash
{
"statistics:a5ded391ce974a1b9a86aa5322ea9e90": {
xbi: 1,
bid: 0.24024,
xpl: 25.0,
acc: 40,
pid: 43,
cos: 0.025,
xmp: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
clu: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
}
I've replaced the actual string with a lot of x but that IS the proper length of the string.
So far, according to my calculations, this will use hundreds of GB of memory. Does that seem correct?
This is mostly ephemeral logging data that's important, but not important enough to support writing to disk or failover. I am comfortable keeping it on one machine, if that helps make this easier.
What would be the best way to reduce memory space in this scenario? Is there a better way I can do this? Does redis support 300GB on a single instance?
In redis.conf, set hash-max-ziplist-value to 1 more than the length of the field 'xmp'. Then restart Redis and watch your memory usage go down significantly.
The default value is 64. Increasing it increases CPU utilization when you modify or add new fields in the hash. But your use case seems to be create-only, and in that case there shouldn't be any drawbacks to increasing the setting.
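A hedged example of what that could look like in redis.conf; the 1024 is only a placeholder and simply has to exceed the length of the longest field value in the hash (xmp or clu, whichever is longer):

# Hashes stay ziplist-encoded as long as every field value is no longer
# than this and the number of fields stays below the entries limit.
hash-max-ziplist-value 1024
hash-max-ziplist-entries 128

You can verify the effect afterwards with OBJECT ENCODING on one of the keys; it should report ziplist instead of hashtable.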
this will use hundreds of GB of memory. Does that seem correct?
YES
Does redis support 300GB on a single instance?
YES
Is there a better way I can do this?
You can try the following methods:
Avoid Using Hash
Since you always get all fields of the log with HGETALL, there's NO need to save the log as a HASH. A HASH consumes more memory than a STRING.
You can serialize all fields into a string, and save the log as a key-value pair:
SET 'statistics:a5ded391ce974a1b9a86aa5322ea9e90' '{xbi: 1, bid: 0.24024, and other fields}'
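Since the data only needs to live for 24 hours anyway, the TTL can be attached in the same command (the EX option of SET is available from Redis 2.6.12 on):

SET 'statistics:a5ded391ce974a1b9a86aa5322ea9e90' '{xbi: 1, bid: 0.24024, and other fields}' EX 86400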
@Sripathi Krishnan's answer gives another way to avoid HASH, i.e. configuring Redis to encode the HASH into a ZIPLIST. It's a good idea if you don't share your Redis with other applications; otherwise, this modification might cause problems for them.
Compress The Data
In order to reduce memory usage, you can try to compress your data. Redis can store binary strings, so you can use gzip, snappy or another compression algorithm to compress the log text into a binary string, and save that into Redis.
Normally, you get better compression when the input is bigger. So you'd better compress the whole log instead of compressing each field one by one.
The side-effect is that the producer and consumer of the log need to spend some CPU compressing and decompressing the data. However, normally that's NOT a problem, and it also saves some network bandwidth.
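A minimal Python sketch of this idea, assuming the redis-py client and gzip; the key and the 24-hour TTL mirror the question, the field values are stand-ins:

import gzip
import json

import redis

r = redis.Redis()  # assumes a local Redis instance

log = {"xbi": 1, "bid": 0.24024, "xpl": 25.0, "acc": 40, "pid": 43,
       "cos": 0.025, "xmp": "x" * 700, "clu": "x" * 900}
key = "statistics:a5ded391ce974a1b9a86aa5322ea9e90"

# Serialize the whole log, compress it into a binary string,
# and store it with a 24-hour TTL.
r.set(key, gzip.compress(json.dumps(log).encode("utf-8")), ex=86400)

# Reading it back: decompress, then deserialize.
restored = json.loads(gzip.decompress(r.get(key)))

The same pattern works with snappy or lz4 instead of gzip; gzip is used here only because it ships with the standard library.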
Batch Write and Batch Read
As mentioned above, you get better compression from bigger inputs. So if you can write multiple logs in a batch, you can compress the whole batch to get a better ratio (see the sketch after these steps).
Compress multiple logs into a batch: compress(log1, log2, log3) -> batch1: batch-result
Put the batch result into Redis as a key-value pair: SET batch1 batch-result
Build an index for the batch: MSET log1 batch1 log2 batch1 log3 batch1
When you need to get the log:
Search the index to get the batch key: GET log1 -> batch1
Get the batch result: GET batch1 -> batch-result
Decompress the batch result and look up the log from the result
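A hedged Python sketch of the whole batch scheme, again with redis-py; the batch size, key names and 24-hour TTL are illustrative:

import gzip
import json

import redis

r = redis.Redis()

# Hypothetical batch of individual log keys -> log dicts.
logs = {
    "statistics:log1": {"xbi": 1, "bid": 0.24024},
    "statistics:log2": {"xbi": 0, "bid": 0.11111},
    "statistics:log3": {"xbi": 1, "bid": 0.33333},
}
batch_key = "batch:0001"

# 1. Compress the whole batch into one value and store it.
r.set(batch_key, gzip.compress(json.dumps(logs).encode("utf-8")), ex=86400)

# 2. Build the index: every log key points at the batch containing it.
r.mset({log_key: batch_key for log_key in logs})
for log_key in logs:
    r.expire(log_key, 86400)

# 3. Read one log back: index -> batch -> decompress -> look up.
wanted = "statistics:log2"
batch = json.loads(gzip.decompress(r.get(r.get(wanted))))
entry = batch[wanted]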
The last method is the most complicated one, and the extra index costs some extra memory. However, it can greatly reduce the size of your data.
Also, how much these methods achieve largely depends on your log data. You should do lots of benchmarking :)
We found ourselves facing this problem. The config is as follows:
Aerospike version : 3.14
Underlying hard disk : non-SSD
Variable Name Value
memory-size 5 GB
free-pct-memory 98 %
available_pct 4 %
max-void-time 0 millisec
stop-writes 0
stop-writes-pct 90 %
hwm-breached true
default-ttl 604,800 sec
max-ttl 315,360,000 sec
enable-xdr false
single-bin false
data-in-memory false
Can anybody please help us out with this? What could be a potential reason for it?
Aerospike only writes to free blocks. A block may contain any number of records that fit. If your write/update pattern is such that a block never falls below 50% active records (the default threshold for defragmenting: defrag-lwm-pct), then you have a bunch of "empty" space that can't be utilized. Read more about defrag in the managing storage page.
Recovering from this is much easier with a cluster that's not seeing any writes. You can increase defrag-lwm-pct so that more blocks become eligible and get defragmented.
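If you go that route, defrag-lwm-pct can be raised dynamically with asinfo; the namespace name below is a placeholder, and it is usually safer to step the value up gradually, since a higher threshold also increases defragmentation write load:

asinfo -v "set-config:context=namespace;id=YOUR_NAMESPACE;defrag-lwm-pct=60"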
Another cause could be just that the HDD isn't fast enough to keep up with defragmentation.
You can read more on possible resolutions in the Aerospike KB - Recovering from Available Percent Zero. Don't read past "Stop service on a node..."
You are basically not defragging your persistence storage device (75GB per node). From the snapshot you have posted, you have about a million records on 3 nodes with 21 million expired. So it looks like you are writing records with a very short TTL and the defrag is unable to keep up.
When you are in this state, can you post a few lines of output from the following?
$ grep defrag /var/log/aerospike/aerospike.log
and
$ grep thr_nsup /var/log/aerospike/aerospike.log
What is your write/update load? My suspicion is that you are only creating short-TTL records and reading, not updating.
Depending on what you are doing, increasing defrag-lwm-pct may actually make things worse for you. I would also tweak nsup-delete-sleep from its 100 microsecond default, but that will depend on what your log greps above show. So post those, and let's see.
(Edit: Also, the fact that you are not seeing evictions even though you are above the 50% HWM on persistence storage means your nsup thread is taking a very long time to run. That again points to the nsup-delete-sleep value needing tuning for your setup.)
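For what it's worth, on the 3.x line nsup-delete-sleep should be adjustable at runtime as well; the value below is purely an example, and the right number really depends on what the thr_nsup log lines show:

asinfo -v "set-config:context=service;nsup-delete-sleep=50"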
Using the Redis info command, I am able to get all the stats of the redis server.
I am also able to get the used memory metric.
How do I get the total memory allocated to the Redis instance, so I can get the percent of memory used?
You could do something like:
$ ps aux -m | grep redis-server | head -n1 | awk '{printf "VSZ: %dMB, RSS: %dMB\n", $5/1024, $6/1024}'
This will display the rounded virtual and resident memory sizes of the redis-server process in MB (to see the raw values in KB, remove the /1024 from both parameters).
By default, Redis will use as much memory as it needs, up to all available physical memory. You can limit the amount of memory allocated to Redis, though, using the maxmemory parameter in the redis.conf file (an example of checking the limit and current usage follows the excerpt below).
This is an excerpt from the file:
# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys with an
# EXPIRE set. It will try to start freeing keys that are going to expire
# in little time and preserve keys with a longer time to live.
# Redis will also try to remove objects from free lists if possible.
#
# If all this fails, Redis will start to reply with errors to commands
# that will use more memory, like SET, LPUSH, and so on, and will continue
# to reply to most read-only commands like GET.
#
# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
# 'state' server or cache, not as a real DB. When Redis is used as a real
# database the memory usage will grow over the weeks, it will be obvious if
# it is going to use too much memory in the long run, and you'll have the time
# to upgrade. With maxmemory after the limit is reached you'll start to get
# errors for write operations, and this may even lead to DB inconsistency.
#
# maxmemory <bytes>
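Once a limit is configured (e.g. maxmemory 2gb in redis.conf; the value is illustrative), you can read both numbers back from the server and compute the percentage yourself:

redis-cli CONFIG GET maxmemory
redis-cli INFO memory | grep used_memory:

used_memory divided by maxmemory gives the fraction in use. If maxmemory is 0 (the default, i.e. no limit), fall back on comparing used_memory against the machine's physical RAM, as with the ps approach above.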
We are using Redis 1.2.6 in a production environment. There are 161804 keys in Redis. The machine has 2GB of RAM.
Problem:
Read queries to Redis take 0.02 sec on average, but sometimes they take 1.5-2.0 secs, I think whenever Redis saves the modified keys to disk.
One strange thing I noticed before and after restarting Redis:
Before the restart, "changes_since_last_save" was increasing very fast and reached 3000+ (in 5 minutes). After the restart, "changes_since_last_save" stays below 20 or so.
Redis stats before restart:
{:bgrewriteaof_in_progress=>"0", :arch_bits=>"64", :used_memory=>"53288487", :total_connections_received=>"586171", :multiplexing_api=>"epoll", :used_memory_human=>"50.82M", :total_commands_processed=>"54714152", :uptime_in_seconds=>"1629606", :changes_since_last_save=>"3142", :role=>"master", :uptime_in_days=>"18", :bgsave_in_progress=>"0", :db0=>"keys=161863,expires=10614", :connected_clients=>"13", :last_save_time=>"1280912841", :redis_version=>"1.2.6", :connected_slaves=>"1"}
Redis stats after restart:
{:used_memory_human=>"49.92M", :total_commands_processed=>"6012", :uptime_in_seconds=>"1872", :changes_since_last_save=>"2", :role=>"master", :uptime_in_days=>"0", :bgsave_in_progress=>"0", :db0=>"keys=161823,expires=10464", :connected_clients=>"13", :last_save_time=>"1280917477", :redis_version=>"1.2.6", :connected_slaves=>"1", :bgrewriteaof_in_progress=>"0", :arch_bits=>"64", :used_memory=>"52341658", :total_connections_received=>"252", :multiplexing_api=>"epoll"}
Not sure what is going wrong here.
Thanks in advance.
Sunil
By default, Redis is configured to dump all data to disk from time to time depending on the number of keys that changed in a given time span (see the default config).
Another option is to use the append-only file, which is more lightweight but needs some maintenance: you need to run BGREWRITEAOF every once in a while so that your log doesn't get too big. There's more about this in the Redis config file.
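For reference, the two persistence options in redis.conf look roughly like this (the save lines are the long-standing defaults):

# Snapshot to disk if at least 1 key changed in 900 s,
# 10 keys in 300 s, or 10000 keys in 60 s.
save 900 1
save 300 10
save 60 10000

# Or switch to the append-only file instead.
appendonly yes

With 3000+ changes every five minutes, the "save 300 10" rule alone triggers a background save every few minutes, which matches the latency spikes you saw before the restart.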
As Tobias says, you should switch to 2.0 as soon as you can since it's faster and, in many cases, uses less memory than 1.2.6.