One of the GPFS filesystems on our server has a very low inode limit, and that filesystem is used by Redis to write its AOF files and dump.rdb. We are planning to increase the inode limit online, meaning the applications and Redis won't stop while the infra team raises the limit. The command they have suggested is:
mmchfs gpfs-name --inode-limit 2.9G
I am worried: will there be any impact on the running Redis server (such as data loss or AOF file corruption) if they execute the command while Redis is running, or should we shut down Redis gracefully first?
Any suggestions will be very helpful.
Thanks
GPFS supports online inode expansion. The command you describe should not affect ongoing use of the file system.
The only thing to take into consideration is how much space is consumed on the filesystem: Spectrum Scale (GPFS) allocates space for inodes even if the data fits into the inode space. Apart from that, you can increase the limit online. If you are tight on space, I recommend increasing it gradually, and wait for each operation to complete; it is not instantaneous.
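Before raising the limit, it helps to see current inode usage. A minimal check (the mount point here is a placeholder; substitute your GPFS mount point) uses the standard df -i, which reports inode columns for any mounted filesystem, GPFS included:

```shell
# Report inode totals, used, and free for the given mount point.
# Replace / with your GPFS mount point (e.g. /gpfs) - the path is an assumption.
df -i /
```

Comparing IUse% before and after the gradual increases confirms each step took effect.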
Related
I'm using Redis and noticed that it crashes with the following error:
MISCONF Redis is configured to save RDB snapshots
I tried the solution suggested in this post,
but everything seems to be OK in terms of permissions and space.
The htop command tells me that Redis is consuming 70% of RAM. I tried to stop/restart Redis in order to flush it, but at startup the amount of RAM used by Redis grew dramatically and stopped at around 66%. I'm pretty sure that at that moment no process was using any Redis instance!
What is happening here?
The growing RAM usage is expected behaviour for Redis at first data load after a restart, when it reads the snapshot back from disk. Redis tends to allocate as much memory as it can unless you set the "maxmemory" option in your config file.
It allocates memory but does not release it immediately. Sometimes it takes hours; I have seen such cases.
A well-known fact about Redis is that it can allocate memory up to twice the size of the dataset it keeps.
I suggest you wait a couple of hours without any restart (Redis can keep working during this time: get/set operations etc.) and keep watching the memory.
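For reference, a minimal sketch of the relevant redis.conf directives (the values are illustrative, not recommendations):

```conf
# Cap the user-data memory Redis will use before the eviction policy kicks in.
maxmemory 2gb
# allkeys-lru evicts least-recently-used keys; the default, noeviction,
# instead returns errors on writes once the cap is hit.
maxmemory-policy allkeys-lru
```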
Please check this too, from the Redis documentation:
Redis will not always free up (return) memory to the OS when keys are
removed. This is not something special about Redis, but it is how most
malloc() implementations work. For example if you fill an instance
with 5GB worth of data, and then remove the equivalent of 2GB of data,
the Resident Set Size (also known as the RSS, which is the number of
memory pages consumed by the process) will probably still be around
5GB, even if Redis will claim that the user memory is around 3GB. This
happens because the underlying allocator can't easily release the
memory. For example often most of the removed keys were allocated in
the same pages as the other keys that still exist.
I need some help diagnosing and tuning the performance of my Redis setup (two redis-server instances on an Ubuntu 14.04 machine). Note that a write-heavy Django web application shares the VM with Redis. The machine has 8 cores and 25GB RAM.
I recently discovered that background saving was intermittently failing (with a fork() error) even when RAM wasn't exhausted. To remedy this, I applied the setting vm.overcommit_memory=1 (it was previously at the default).
Moreover vm.swappiness=2, vm.overcommit_ratio=50. I have disabled transparent huge pages in my set up as well via echo never > /sys/kernel/mm/transparent_hugepage/enabled (although haven't done echo never > /sys/kernel/mm/transparent_hugepage/defrag).
Right after changing the overcommit_memory setting, I noticed that I/O utilization went from 13% to 36% (on average). I/O operations per second doubled, redis-server CPU consumption more than doubled, and the memory it consumes went up 66%. Consequently, server response time has gone up substantially. This is how abruptly things escalated after applying vm.overcommit_memory=1:
Note that redis-server is the only component showing escalation; gunicorn, nginx, celery etc. are performing as before. Moreover, Redis has become very spiky.
Lastly, New Relic has started showing me 3 redis instances instead of 2 (bottom most graph). I think the forked child is counted as the 3rd:
My question is: how can I diagnose and salvage performance here? Being new to server administration, I'm unsure how to proceed. Help me find out what's going on here and how I can fix it.
free -m has the following output (in case needed):
total used free shared buffers cached
Mem: 28136 27912 224 576 68 6778
-/+ buffers/cache: 21064 7071
Swap: 0 0 0
Since you don't have swap enabled on your system (which might be worth reconsidering if you have SSDs) and your swappiness was set to a low value, you can't blame the escalation on increased swapping due to memory contention.
You're caching about 6GB of data in the VFS cache. Under memory contention this cache would have been depleted in favor of process working memory, so I believe it's safe to say memory is not an issue altogether.
It's a shot in the dark, but my guess is that your redis-server is configured to "sync"/"save" too often (search the Redis config file for "appendfsync"), and that by removing the memory allocation limitation, it now actually does its job :)
If the data is not super crucial, set appendfsync to no (the valid values are always, everysec and no) and perhaps tweak the save settings to cause less frequent saving.
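As a hedged sketch, those two knobs look like this in redis.conf (the save thresholds below are examples, not recommendations):

```conf
# Let the OS decide when to flush the AOF instead of fsyncing every write.
appendfsync no
# Snapshot only after 900s if at least 1 key changed (example threshold).
save 900 1
```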
BTW, regarding the redis & forked child, I believe you are correct.
I'm using redis as a client side caching mechanism.
Implemented with C# using stackexchange.redis.
I configured the snapshotting to "save 5 1" and rdbcompression is on.
The RDB mechanism loads the rdb file to memory every time it needs to append data.
The problem is when you have a fairly large RDB file and it's loaded to memory all at once. It chokes up the memory, disk and cpu for the average endpoint.
Is there a way to update the rdb file without loading the whole file to memory?
Also any other solution that lowers the load on the memory and cpu is welcome.
The RDB mechanism loads the rdb file to memory every time it needs to append data.
This isn't what the open source Redis server does (other variants, such as the MSFT fork, may behave differently). RDBs are created by copying the contents of memory to disk with a forked process. The dump file is never loaded, except when used for recovery. The increase in memory usage during the save depends on the amount of writes performed while the dump is in progress, because of the copy-on-write (COW) mechanism.
Also any other solution that lowers the load on the memory and cpu is welcome.
There are several ways to tackle this, depending on your requirements and budget. These include:
Using both RDB and AOF for data persistency, thus reducing the frequency of dumps.
Delegating persistency to a slave instance.
Sharding your databases and performing cascading dumps.
We tackled the problem by dropping RDB and now use AOF exclusively.
We reduced the memory peaks by lowering auto-aof-rewrite-percentage and also setting auto-aof-rewrite-min-size to the desired size.
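A hedged sketch of the corresponding redis.conf fragment (the numbers are illustrative, not the values the answerer actually used):

```conf
# Persist via AOF only; an empty save directive disables RDB snapshots.
appendonly yes
save ""
# Rewrite the AOF once it grows 50% past the last rewrite size...
auto-aof-rewrite-percentage 50
# ...but never below this floor (example value).
auto-aof-rewrite-min-size 64mb
```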
I am experimenting with redis 3.0 eviction policies on my local machine - I'd like to limit max memory so redis cannot consume more than 20 megabytes.
my configuration:
loglevel debug
maxmemory 20mb
maxmemory-policy noeviction
from here, I run redis-server with my configuration followed by
redis-benchmark -q -n 100000 -c 50 -P 12
to store a bunch of keys in memory. This puts memory usage for redis at 21MB on my mac, 1 megabyte over the specified limit. If I run it again, even more is consumed.
According to the Redis documentation this should be controlled by my maxmemory directive and eviction policy, with an error thrown on subsequent writes, but I am not finding that this is the case.
Why is redis-server consuming more memory than allotted?
The Redis maxmemory directive controls the user-data memory usage (as Itamar Haber says in a comment). But memory consumption is more complex than that:
It depends on the operating system.
It depends on the CPU and the compiler used (i.e. whether Redis is built as x86 or x64).
It depends on the allocator used (jemalloc by default in Redis).
In a real-world application (as Redis is) you have limited control over memory management, so the same application consumes different amounts of memory when compiled as x64 versus x86. In the case of Redis, the memory overhead can approach twice the data size.
Why this is important
Each time you write data to Redis, it allocates or reallocates memory through the allocator. The latter (jemalloc) has a complex strategy for this. In a few words, it rounds the allocation size up to the nearest power of two (if you need 17 bytes, 32 are allocated). Many Redis structures use the same policy. For example HASH (and ZSET, because a HASH is used under the hood) uses a policy like that. Strings use an even more brute-force strategy: double the size (with reallocation) while under SDS_MAX_PREALLOC (1 MB), then just allocate the needed size + SDS_MAX_PREALLOC.
So if you limit maxmemory, the memory actually used by the process in the OS can be a lot more, and Redis can't do anything about that.
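The power-of-two rounding described above can be sketched in a few lines of shell (this is an illustration of the size-class idea only; real jemalloc uses finer-grained size classes):

```shell
# Round a requested allocation size up to the nearest power of two,
# mimicking the coarse size-class behaviour described above.
round_up_pow2() {
  local n=$1 p=1
  while [ "$p" -lt "$n" ]; do p=$((p * 2)); done
  echo "$p"
}

round_up_pow2 17   # prints 32: a 17-byte request occupies a 32-byte class
```

So a dataset of many 17-byte values can occupy nearly twice its logical size, which is exactly why the process exceeds maxmemory.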
P.S. Please don't take this as advertising; your question is just very close to my own interests. There is a series of articles about real memory usage in Redis (they are all in Russian, sorry for that; I plan to translate them into English this new-year weekend and will update the links here afterwards; a partially translated version is available here).
I am trying to write some data to a namespace in Aerospike, but I don't have enough RAM for the whole data set.
How can I configure Aerospike so that a portion of the data is kept in RAM as a cache and the remainder is kept on the hard drive?
Can I reduce the number of copies of the data that Aerospike keeps in RAM?
I understand it can be done by modifying the contents of the aerospike.conf file, but how exactly do I achieve it?
You should look at the namespace storage configuration page in the Aerospike documentation:
http://www.aerospike.com/docs/operations/configure/namespace/storage/
How can I configure Aerospike so that a portion of the data is kept in RAM as a cache and the remainder is kept on the hard drive?
The post-write-queue parameter defines the amount of RAM used to keep recently written records in memory. As long as these records are still in the post-write-queue, Aerospike will read them directly from RAM rather than from disk. This lets you configure an LRU-like cache for a namespace with storage-engine device and data-in-memory false. Note that this evicts by least recently updated (or created), not least recently used (read or written).
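A hedged sketch of such a namespace stanza in aerospike.conf (parameter names follow the Aerospike storage docs; the namespace name, device path and sizes are placeholders you must adapt):

```conf
# Namespace with record data on disk and only the index plus a
# post-write cache in RAM.
namespace mydata {
    replication-factor 2          # copies across the cluster, not per node
    memory-size 4G                # RAM for the primary index
    storage-engine device {
        device /dev/sdb           # placeholder raw device
        data-in-memory false      # record data lives on disk, not RAM
        write-block-size 128K
        post-write-queue 512      # recently written blocks cached in RAM
    }
}
```

With data-in-memory false only the index consumes RAM per record, and post-write-queue (measured in write blocks) bounds how much recently written data stays cached.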