How to find the memory allocated to a Redis instance?

Using the Redis info command, I am able to get all the stats of the redis server.
I am also able to get the used memory metric.
How do I get the total memory allocated to the Redis instance, so I can get the percent of memory used?

You could do something like:
$ ps aux | grep [r]edis-server | head -n1 | awk '{printf "VSZ: %dMB, RSS: %dMB\n", $5/1024, $6/1024}'
This displays the rounded virtual (VSZ) and resident (RSS) memory sizes of the redis-server process in MB; ps reports both columns in KB, so remove the /1024 from both fields to see the raw values. (The [r]edis-server pattern keeps grep from matching its own process.)
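Alternatively, Redis reports the same figures itself through INFO: used_memory is the allocator-level usage and used_memory_rss is the resident size as seen by the OS:
$ redis-cli info memory | grep -E '^used_memory(_rss)?:'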

By default, Redis will use as much memory as it needs, up to all available physical memory. You can limit the amount of memory allocated to Redis, though, using the maxmemory parameter in the redis.conf file.
This is an excerpt from the file:
# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys with an
# EXPIRE set. It will try to start freeing keys that are going to expire
# in little time and preserve keys with a longer time to live.
# Redis will also try to remove objects from free lists if possible.
#
# If all this fails, Redis will start to reply with errors to commands
# that will use more memory, like SET, LPUSH, and so on, and will continue
# to reply to most read-only commands like GET.
#
# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
# 'state' server or cache, not as a real DB. When Redis is used as a real
# database the memory usage will grow over the weeks, it will be obvious if
# it is going to use too much memory in the long run, and you'll have the time
# to upgrade. With maxmemory after the limit is reached you'll start to get
# errors for write operations, and this may even lead to DB inconsistency.
#
# maxmemory <bytes>
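Once a maxmemory cap is set, the percentage used can be computed from INFO and CONFIG GET. A minimal sketch (it assumes maxmemory has been set to a non-zero value and that bc is installed):
used=$(redis-cli info memory | awk -F: '/^used_memory:/ {print $2}' | tr -d '\r')   # bytes in use
max=$(redis-cli config get maxmemory | tail -n 1)                                   # configured cap in bytes
echo "scale=2; $used * 100 / $max" | bc                                             # percent of the cap in use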

Related

Aerospike: percentage of available write blocks is low even though hard disk space is available

We found ourselves facing this problem. The config is as follows:
Aerospike version : 3.14
Underlying hard disk : non-SSD
Variable Name Value
memory-size 5 GB
free-pct-memory 98 %
available_pct 4 %
max-void-time 0 millisec
stop-writes 0
stop-writes-pct 90 %
hwm-breached true
default-ttl 604,800 sec
max-ttl 315,360,000 sec
enable-xdr false
single-bin false
data-in-memory false
Can anybody please help us out with this? What could be a potential reason?
Aerospike only writes to free blocks. A block may contain any number of records that fit. If your write/update pattern is such that a block never falls below 50% active records (the default threshold for defragmenting: defrag-lwm-pct), then you have a bunch of "empty" space that can't be utilized. Read more about defrag in the managing storage page.
Recovering from this is much easier with a cluster that isn't seeing any writes. You can increase defrag-lwm-pct so that more blocks become eligible and get defragmented, as sketched below.
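A hedged sketch of bumping that threshold at runtime with asinfo, assuming a namespace named test (substitute your own namespace):
$ asinfo -v "set-config:context=namespace;id=test;defrag-lwm-pct=60"   # blocks under 60% full become defrag-eligible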
Another cause could be just that the HDD isn't fast enough to keep up with defragmentation.
You can read more on possible resolutions in the Aerospike KB article "Recovering from Available Percent Zero". Don't follow the steps past "Stop service on a node..."
You are basically not defragging your persistence storage device (75 GB per node). From the snapshot you have posted, you have about a million records on 3 nodes with 21 million expired. So it looks like you are writing records with a very short TTL and the defrag is unable to keep up.
Can you post a few lines of output from each of the following, captured while you are in this state?
$ grep defrag /var/log/aerospike/aerospike.log
$ grep thr_nsup /var/log/aerospike/aerospike.log
What is your write/update load? My suspicion is that you are only creating short-TTL records and reading, not updating.
Depending on what you are doing, increasing defrag-lwm-pct may actually make things worse for you. I would also tweak nsup-delete-sleep from its 100-microsecond default, but that will depend on what your log greps above show. So post those, and let's see.
(Edit: Also, the fact that you are not seeing evictions even though you are above the 50% HWM on persistence storage means your nsup thread is taking a very long time to run. That again points to the nsup-delete-sleep value needing tuning for your setup.)
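If the log greps do point at nsup lagging, nsup-delete-sleep can also be changed at runtime. A sketch, assuming the Aerospike 3.x service-context syntax and an illustrative value of 10 microseconds (tune against your own logs):
$ asinfo -v "set-config:context=service;nsup-delete-sleep=10"   # sleep between deletes, in microseconds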

what will happen when .aof file will be very large?

I am using Redis 2.8.4 and have enabled append-only mode in the configuration. It is working fine as per my requirement, but the problem is that this file keeps growing, so it may eventually run the machine out of space. Is there any solution?
Redis has a way to compact the AOF file by removing operations that are no longer needed:
For example, if you are incrementing a counter 100 times, you'll end
up with a single key in your dataset containing the final value, but
100 entries in your AOF. 99 of those entries are not needed to rebuild
the current state.
This rewriting happens automatically in Redis 2.8.4; see redis.conf for auto-aof-rewrite-percentage and auto-aof-rewrite-min-size. You can also trigger it manually with the BGREWRITEAOF command, see https://redis.io/commands/bgrewriteaof.
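For reference, the stock 2.8 defaults rewrite once the AOF has doubled since the last rewrite, but never for files under 64 MB:
# redis.conf
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
And the manual trigger:
$ redis-cli bgrewriteaof
Background append only file rewriting started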

What is the memory formula of Redis Sorted Set?

I need to calculate how much memory a Redis SortedSet takes assuming my average element of the Sorted Set is X bytes.
If you know the average size of an element before it's stored in Redis, just do this:
1. Clear Redis of all data with the FLUSHALL command (it removes the keys from all databases).
2. Run INFO and check the used_memory_human field (it should be at or near zero).
3. Add/store your data in Redis.
4. Run INFO again and check used_memory_human; the increase is the memory Redis uses to store your objects.
Hope it helps.
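A minimal sketch of that procedure, using a hypothetical key zset:probe and 10,000 synthetic members (replace the members with data shaped like yours):
redis-cli flushall
before=$(redis-cli info memory | awk -F: '/^used_memory:/ {print $2}' | tr -d '\r')
for i in $(seq 1 10000); do
  redis-cli zadd zset:probe "$i" "member:$i" > /dev/null
done
after=$(redis-cli info memory | awk -F: '/^used_memory:/ {print $2}' | tr -d '\r')
echo "approx bytes per element: $(( (after - before) / 10000 ))"
Note that this measures per-key and per-member overhead (skiplist nodes, dict entries) as well as the payload, which is exactly what a back-of-the-envelope formula tends to miss.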

Why is Redis SET performance better than GET?

According to the Redis benchmarks, Redis can perform 100,000 SET operations/s but only 80,000 GET operations/s. Redis being an in-memory DB, this seems surprising, because typically one would expect memory writes to be somewhat slower than reads, e.g. considering that a SET needs to allocate memory before it can write a value.
Can someone explain why SET is faster than GET?
Actually, this is an artifact of the benchmark measuring, by default, more I/O overhead than actual command execution time. If you enable pipelining in the benchmark, it comes closer to measuring actual command performance, and the numbers change:
$ redis-benchmark -q -n 1000000 -P 32 set foo bar
set foo bar: 338964.03 requests per second
$ redis-benchmark -q -n 1000000 -P 32 get foo
get foo: 432713.09 requests per second
Now GET is faster :-)
We should include pipelining in our benchmark doc page.
EDIT: This is even more evident here:
redis 127.0.0.1:6379> info commandstats
# Commandstats
cmdstat_get:calls=1001568,usec=221845,usec_per_call=0.22
cmdstat_set:calls=831104,usec=498235,usec_per_call=0.60
This command reports the CPU time spent serving each request internally, without accounting for I/O. SET is about three times slower to process than GET.
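To reproduce this yourself, reset the counters first so the stats cover only the benchmark run (CONFIG RESETSTAT clears commandstats):
$ redis-cli config resetstat
$ redis-benchmark -q -n 1000000 -P 32 set foo bar
$ redis-benchmark -q -n 1000000 -P 32 get foo
$ redis-cli info commandstats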

Redis (1.2.6) : Slow queries

We are using Redis 1.2.6 in our production environment. There are 161,804 keys in Redis. The machine has 2 GB of RAM.
Problem:
Read queries to Redis take 0.02 sec on average, but sometimes they take 1.5-2.0 sec; I think this happens whenever Redis saves modified keys to disk.
One strange thing I noticed before and after restarting Redis:
Before the restart, changes_since_last_save was climbing fast and reached 3000+ within 5 minutes. After the restart, changes_since_last_save stays below 20 or so.
Redis stats before restart:
{:bgrewriteaof_in_progress=>"0", :arch_bits=>"64", :used_memory=>"53288487", :total_connections_received=>"586171", :multiplexing_api=>"epoll", :used_memory_human=>"50.82M", :total_commands_processed=>"54714152", :uptime_in_seconds=>"1629606", :changes_since_last_save=>"3142", :role=>"master", :uptime_in_days=>"18", :bgsave_in_progress=>"0", :db0=>"keys=161863,expires=10614", :connected_clients=>"13", :last_save_time=>"1280912841", :redis_version=>"1.2.6", :connected_slaves=>"1"}
Redis stats after restart:
{:used_memory_human=>"49.92M", :total_commands_processed=>"6012", :uptime_in_seconds=>"1872", :changes_since_last_save=>"2", :role=>"master", :uptime_in_days=>"0", :bgsave_in_progress=>"0", :db0=>"keys=161823,expires=10464", :connected_clients=>"13", :last_save_time=>"1280917477", :redis_version=>"1.2.6", :connected_slaves=>"1", :bgrewriteaof_in_progress=>"0", :arch_bits=>"64", :used_memory=>"52341658", :total_connections_received=>"252", :multiplexing_api=>"epoll"}
Not sure what is going wrong here.
Thanks in advance.
Sunil
By default Redis is configured to dump all data to disk from time to time, depending on the number of keys that changed in a time span (see the default config).
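The classic defaults in redis.conf look like this (save after 900 s if at least 1 key changed, and so on down to 60 s if 10,000 keys changed):
save 900 1
save 300 10
save 60 10000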
Another option is to use the append-only file, which is more lightweight but needs some maintenance: you need to run BGREWRITEAOF every once in a while so that your log doesn't get too big. There's more about this in the Redis config file.
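Enabling it is a two-line change in redis.conf; a sketch using the commonly chosen everysec fsync policy:
appendonly yes
appendfsync everysec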
As Tobias says, you should switch to 2.0 as soon as you can, since it's faster and, in many cases, uses less memory than 1.2.6.