I am looking for suggestions on my RocksDB configuration. Our use case is to load 100GB of key-value pairs into RocksDB and, at runtime, only serve those key-value pairs from the DB. Keys are 32 bytes and values are 1.6 KB in size.
What we have right now: we use Hadoop to generate a 100GB SST file with the SstFileWriter API and save it to S3. Each new server that comes up ingests the file using db.ingestExternalFile(..). We use an i3.large machine (15.25 GiB RAM | 2 vCPUs | 475 GiB NVMe SSD). Current configuration:
BlockSize = 2KB
FormatVersion = 4
Read-Write = 100% read at runtime
With this configuration the average and P95 response times from RocksDB are ~1ms, but P99 and PMAX are pretty bad. We are looking for some way to reduce the PMAX response time, which is ~10x P95.
Thanks.
Can you load the whole DB into memory using tmpfs, i.e. just copy the data to RAM, and see if that helps? It may also make sense to compact the SST files in a separate job rather than ingesting at startup. This may need a change in your machine config to be geared more towards RAM and less towards SSD storage.
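For illustration, a minimal Python sketch of the "copy to RAM" idea, assuming a Linux box with a tmpfs mount at /dev/shm and enough RAM to hold the ~100GB data set (both paths are hypothetical). Note the i3.large in the question only has 15.25 GiB of RAM, so this only works after moving to a memory-heavy instance type, as suggested above.

import os
import shutil

# Hypothetical paths: SRC_DB is the directory the 100GB SST was ingested into,
# RAM_DB lives on a tmpfs mount, so reads never touch the SSD.
SRC_DB = "/data/rocksdb/kv-store"
RAM_DB = "/dev/shm/kv-store"

def copy_db_to_tmpfs(src=SRC_DB, dst=RAM_DB):
    # Plain file copy of the RocksDB directory; run this at startup,
    # then point the RocksDB open call (read-only) at dst instead of src.
    if os.path.exists(dst):
        shutil.rmtree(dst)
    shutil.copytree(src, dst)
    return dst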
I'm wondering about the most efficient way to store this data.
I need to track 30-50 million data points per day. It needs to be extremely fast to read and write, so I'm using Redis.
The data only needs to last for 24 hours, at which point it will EXPIRE.
The data looks like this as a key/value hash:
{
"statistics:a5ded391ce974a1b9a86aa5322ea9e90": {
xbi: 1,
bid: 0.24024,
xpl: 25.0,
acc: 40,
pid: 43,
cos: 0.025,
xmp: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
clu: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
}
I've replaced the actual string with a lot of x but that IS the proper length of the string.
So far, according to my calculations, this will use hundreds of GB of memory. Does that seem correct?
This is mostly ephemeral logging data that's important, but not important enough to try to support writing to disk or failovers. I am comfortable keeping it on one machine, if that helps make this easier.
What would be the best way to reduce memory space in this scenario? Is there a better way I can do this? Does redis support 300GB on a single instance?
In redis.conf, set hash-max-ziplist-value to 1 more than the length of the field 'xmp'. Then restart Redis, and watch your memory usage go down significantly.
The default value is 64. Increasing it increases CPU utilization when you modify or add new fields in the hash, but your use case seems to be create-only, and in that case there shouldn't be any drawbacks to increasing the setting.
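As a rough sketch of how you might verify the effect, assuming the redis-py client and an assumed threshold of 1024 bytes (pick len(xmp) + 1 for your data). The same parameter can also be changed at runtime with CONFIG SET instead of editing redis.conf and restarting.

import redis

r = redis.Redis()

# Raise the ziplist threshold above the length of the longest field value
# (1024 is an assumed value here).
r.config_set("hash-max-ziplist-value", 1024)

r.hset("statistics:a5ded391ce974a1b9a86aa5322ea9e90", mapping={
    "xbi": 1, "bid": 0.24024, "xmp": "x" * 1000, "clu": "x" * 800,
})

# Should now report "ziplist" (or "listpack" on newer Redis) instead of "hashtable".
print(r.object("encoding", "statistics:a5ded391ce974a1b9a86aa5322ea9e90"))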
this will use hundreds of GB of memory. Does that seem correct?
YES
Does redis support 300GB on a single instance?
YES
Is there a better way I can do this?
You can try the following methods:
Avoid Using Hash
Since you always get all fields of the log with HGETALL, there's NO need to save the log as HASH. HASH consumes more memory than STRING.
You can serialize all fields into a string, and save the log as a key-value pair:
SET 'statistics:a5ded391ce974a1b9a86aa5322ea9e90' '{xbi: 1, bid: 0.24024, and other fields}'
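For illustration, a minimal redis-py sketch of that approach, assuming JSON as the serialization format and reusing the 24-hour expiry from the question (field values are the placeholders from the question):

import json
import redis

r = redis.Redis()

fields = {"xbi": 1, "bid": 0.24024, "xpl": 25.0, "acc": 40,
          "pid": 43, "cos": 0.025, "xmp": "x" * 1000, "clu": "x" * 800}

# One flat STRING per log instead of a HASH, expiring after 24 hours.
key = "statistics:a5ded391ce974a1b9a86aa5322ea9e90"
r.setex(key, 24 * 60 * 60, json.dumps(fields))

# Reading it back replaces HGETALL with a single GET plus a deserialize.
log = json.loads(r.get(key))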
@Sripathi Krishnan's answer gives another way to avoid HASH, i.e. configuring Redis to encode the HASH into a ZIPLIST. It's a good idea if you don't share your Redis with other applications; otherwise, this modification might cause problems for others.
Compress The Data
In order to reduce memory usage, you can try to compress your data. Redis can store binary strings, so you can use gzip, snappy or another compression algorithm to compress the log text into a binary string, and save that into Redis.
Normally, you get better compression when the input is bigger, so you're better off compressing the whole log instead of compressing each field one by one.
The side-effect is that the producer and consumer of the log need to spend some CPU to compress and decompress the data. However, normally that's NOT a problem, and it also saves some network bandwidth.
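A hedged sketch of that idea with Python's built-in gzip module on top of the same string layout (snappy would work the same way with a different library; the 24-hour TTL is from the question, everything else is illustrative):

import gzip
import json
import redis

r = redis.Redis()

def save_log(key, fields, ttl=24 * 60 * 60):
    # Compress the whole serialized log, not field by field.
    blob = gzip.compress(json.dumps(fields).encode("utf-8"))
    r.setex(key, ttl, blob)

def load_log(key):
    blob = r.get(key)
    return json.loads(gzip.decompress(blob)) if blob is not None else None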
Batch Write and Batch Read
As I mentioned above, bigger inputs compress better. So if you can write multiple logs in a batch, compressing the whole batch gives you a better ratio.
Compress multiple logs into a batch: compress(log1, log2, log3) -> batch1: batch-result
Put the batch result into Redis as a key-value pair: SET batch1 batch-result
Build an index for the batch: MSET log1 batch1 log2 batch1 log3 batch1
When you need to get the log:
Search the index to get the batch key: GET log1 -> batch1
Get the batch result: GET batch1 -> batch-result
Decompress the batch result and look up the log from the result
The last method is the most complicated one, and the extra index costs some extra memory. However, it can greatly reduce the size of your data.
Also, what these methods can achieve largely depends on your logs. You should do lots of benchmarking :)
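A sketch of the batch + index scheme in Python/redis-py, under the same assumptions as above (JSON + gzip, 24-hour TTL on both the batch blob and the index entries); the key names batch1/log1 from the steps above are illustrative:

import gzip
import json
import redis

r = redis.Redis()
TTL = 24 * 60 * 60

def write_batch(batch_key, logs):
    # logs: {log_key: {field: value, ...}, ...}
    blob = gzip.compress(json.dumps(logs).encode("utf-8"))
    pipe = r.pipeline()
    pipe.setex(batch_key, TTL, blob)          # SET batch1 batch-result
    for log_key in logs:                      # index: log1 -> batch1, log2 -> batch1, ...
        pipe.setex(log_key, TTL, batch_key)
    pipe.execute()

def read_log(log_key):
    batch_key = r.get(log_key)                # GET log1 -> batch1
    if batch_key is None:
        return None
    logs = json.loads(gzip.decompress(r.get(batch_key)))  # GET batch1, then decompress
    return logs.get(log_key)                  # look up the log inside the batch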
We found ourselves facing this problem. Config is as follows:
Aerospike version : 3.14
Underlying hard disk : non-SSD
Variable Name Value
memory-size 5 GB
free-pct-memory 98 %
available_pct 4 %
max-void-time 0 millisec
stop-writes 0
stop-writes-pct 90 %
hwm-breached true
default-ttl 604,800 sec
max-ttl 315,360,000 sec
enable-xdr false
single-bin false
data-in-memory false
Can anybody please help us out with this? What could be a potential reason for available_pct being this low (and hwm-breached being true)?
Aerospike only writes to free blocks. A block may contain any number of records that fit. If your write/update pattern is such that a block never falls below 50% active records (the default threshold for defragmenting: defrag-lwm-pct), then you have a bunch of "empty" space that can't be utilized. Read more about defrag in the managing storage page.
Recovering from this is much easier with a cluster that's not seeing any writes. You can increase defrag-lwm-pct so that more blocks become eligible and get defragmented.
Another cause could simply be that the HDD isn't fast enough to keep up with defragmentation.
You can read more on possible resolutions in the Aerospike KB - Recovering from Available Percent Zero. Don't read past "Stop service on a node..."
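As a hedged illustration of raising defrag-lwm-pct at runtime, assuming the Aerospike Python client's info_all helper and a namespace named "test" (both assumptions); the same info command can be sent with asinfo, and the new value should also go into aerospike.conf to survive restarts:

import aerospike

client = aerospike.client({"hosts": [("127.0.0.1", 3000)]}).connect()

# Make blocks with up to 60% live data eligible for defragmentation
# (the default threshold is 50). The command is sent to every node.
resp = client.info_all("set-config:context=namespace;id=test;defrag-lwm-pct=60")
print(resp)

client.close()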
You are basically not defragging your persistence storage device (75GB per node). From the snapshot you have posted, you have about a million records on 3 nodes with 21 million expired. So it looks like you are writing records with a very short ttl and the defrag is unable to keep up.
Can you post the output of a few lines, when you are in this state, of:
$ grep defrag /var/log/aerospike/aerospike.log
and
$ grep thr_nsup /var/log/aerospike/aerospike.log ?
What is your write/update load? My suspicion is that you are only creating short-ttl records and reading, not updating.
Depending on what you are doing, increasing defrag-lwm-pct may actually make things worse for you. I would also tweak nsup-delete-sleep from its 100 microsecond default, but it will depend on what the log greps above show. So post those, and let's see.
(Edit: Also, the fact that you are not seeing evictions even though you are above the 50% HWM on persistence storage means your nsup thread is taking a very long time to run. That again points to the nsup-delete-sleep value needing tuning for your setup.)
I want to monitor my Redis cache cluster on ElastiCache. From AWS/ElastiCache I am able to get metrics like FreeableMemory and BytesUsedForCache. If I am not wrong, BytesUsedForCache is the memory used by the cluster (assuming there is only one node in the cluster). I want to calculate the percentage of memory used. Can anyone help me get the percentage of memory usage in Redis?
We had the same issue since we wanted to monitor the percentage of ElastiCache Redis memory that is consumed by our data.
As you wrote correctly, you need to look at BytesUsedForCache - that is the amount of memory (in bytes) consumed by the data you've stored in Redis.
The other two important numbers are
The available RAM of the AWS instance type you use for your ElastiCache node, see https://aws.amazon.com/elasticache/pricing/
Your value for parameter reserved-memory-percent (check your ElastiCache parameter group). That's the percentage of RAM that is reserved for "nondata purposes", i.e. for the OS and whatever AWS needs to run there to manage your ElastiCache node. By default this is 25 %. See https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/redis-memory-management.html#redis-memory-management-parameters
So the total available memory for your data in ElastiCache is
(100 - reserved-memory-percent) / 100 * instance-RAM-size
(In our case, we use instance type cache.r5.2xlarge with 52.82 GB RAM, and we have the default setting of reserved-memory-percent = 25%.
Checking with the INFO command in Redis, I see that maxmemory_human = 39.61 GB, which is equal to 75% of 52.82 GB.)
So the ratio of used memory to available memory is
BytesUsedForCache / ((100 - reserved-memory-percent) / 100 * instance-RAM-size)
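As a worked example with the numbers from above (the BytesUsedForCache reading is made up for illustration):

# cache.r5.2xlarge with 52.82 GB RAM and the default reserved-memory-percent of 25
instance_ram_bytes = 52.82 * 1024**3
reserved_memory_percent = 25

# ~39.6 GB, which matches the maxmemory_human value reported by INFO
available_bytes = (100 - reserved_memory_percent) / 100 * instance_ram_bytes

bytes_used_for_cache = 20 * 1024**3          # hypothetical CloudWatch value: 20 GB in use
used_pct = bytes_used_for_cache / available_bytes * 100
print(f"{used_pct:.1f}% of the usable cache memory is in use")  # ~50.5%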
By combining the FreeableMemory and BytesUsedForCache metrics, you can derive the total memory available to ElastiCache in non-cluster mode (not sure if this applies to cluster mode too).
Here is the NRQL we're using to monitor the cache:
SELECT Max(`provider.bytesUsedForCache.Sum`) / (Max(`provider.bytesUsedForCache.Sum`) + Min(`provider.freeableMemory.Sum`)) * 100 FROM DatastoreSample WHERE provider = 'ElastiCacheRedisNode'
This is based on the following:
FreeableMemory: the amount of free memory available on the host. This is derived from the RAM, buffers and cache that the OS reports as freeable. (AWS CacheMetrics, HostLevel)
BytesUsedForCache: the total number of bytes allocated by Redis for all purposes, including the dataset, buffers, etc. This is derived from the used_memory statistic in Redis INFO. (AWS CacheMetrics, Redis)
So BytesUsedForCache (amount of memory used by Redis) + FreeableMemory (amount of memory Redis can still claim) = total memory that Redis can use.
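If you want to compute the same ratio outside New Relic, here is a hedged boto3 sketch pulling both metrics straight from CloudWatch (the cluster id and region are placeholders):

from datetime import datetime, timedelta

import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")

def latest_average(metric_name, cache_cluster_id="my-redis-001"):
    # Fetch the most recent 5-minute average for one ElastiCache node.
    resp = cw.get_metric_statistics(
        Namespace="AWS/ElastiCache",
        MetricName=metric_name,
        Dimensions=[{"Name": "CacheClusterId", "Value": cache_cluster_id}],
        StartTime=datetime.utcnow() - timedelta(minutes=15),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )
    datapoints = sorted(resp["Datapoints"], key=lambda d: d["Timestamp"])
    return datapoints[-1]["Average"]

used = latest_average("BytesUsedForCache")
freeable = latest_average("FreeableMemory")
print(f"memory used: {used / (used + freeable) * 100:.1f}%")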
With the release of the 18 additional CloudWatch metrics, you can now use DatabaseMemoryUsagePercentage and see the percentage of memory utilization in Redis.
View more about the metric in the memory section here
You would have to calculate this based on the size of the node you have selected. See these 2 posts for more information.
Pricing doc gives you the size of your setup.
https://aws.amazon.com/elasticache/pricing/
https://forums.aws.amazon.com/thread.jspa?threadID=141154
When making an AMI and storing it in S3 using the ec2-bundle-vol/ec2-upload-bundle/ec2-register trifecta in AWS, I get 36 10 MB image chunks. From a readability/testing standpoint I would much prefer something like 4 100 MB images or one 3.5 GB file.
I am not seeing an easy way to change this behavior without finding and reverse-engineering the Ruby script wrapped in ec2-bundle-vol.
Alternately, is there a good reason for three dozen small files?
Unfortunately there's no way without modifying the Ruby script.
The chunk size is hardcoded
$AWS_PATH/amitools/ec2/lib/ec2/amitools/bundle.rb line 14
CHUNK_SIZE = 10 * 1024 * 1024 # 10 MB in bytes.
However, registering the bundle might not work if the chunks have a different size.
We are using Redis 1.2.6 in a production environment. There are 161,804 keys in Redis. The machine has 2 GB of RAM.
Problem:
Read queries to Redis take 0.02 sec on average, but sometimes they take 1.5-2.0 secs, I think whenever Redis saves modified keys to disk.
One strange thing I noticed before and after restarting Redis:
Before the restart, "changes_since_last_save" was changing very fast and reaching 3000+ (in 5 minutes). After the restart, "changes_since_last_save" stays below 20 or so.
Redis stats before restart:
{:bgrewriteaof_in_progress=>"0", :arch_bits=>"64", :used_memory=>"53288487", :total_connections_received=>"586171", :multiplexing_api=>"epoll", :used_memory_human=>"50.82M", :total_commands_processed=>"54714152", :uptime_in_seconds=>"1629606", :changes_since_last_save=>"3142", :role=>"master", :uptime_in_days=>"18", :bgsave_in_progress=>"0", :db0=>"keys=161863,expires=10614", :connected_clients=>"13", :last_save_time=>"1280912841", :redis_version=>"1.2.6", :connected_slaves=>"1"}
Redis stats after restart:
{:used_memory_human=>"49.92M", :total_commands_processed=>"6012", :uptime_in_seconds=>"1872", :changes_since_last_save=>"2", :role=>"master", :uptime_in_days=>"0", :bgsave_in_progress=>"0", :db0=>"keys=161823,expires=10464", :connected_clients=>"13", :last_save_time=>"1280917477", :redis_version=>"1.2.6", :connected_slaves=>"1", :bgrewriteaof_in_progress=>"0", :arch_bits=>"64", :used_memory=>"52341658", :total_connections_received=>"252", :multiplexing_api=>"epoll"}
Not sure what is going wrong here.
Thanks in advance.
Sunil
By default Redis is configured to dump all data to disk from time to time, depending on the number of keys that changed in a time span (see the default config).
Another option is to use the append-only file, which is more lightweight but needs some kind of maintenance – you need to run BGREWRITEAOF every once in a while so that your log doesn't get too big. There's more on the Redis config file about this.
As Tobias says, you should switch to 2.0 as soon as you can since it's faster and, in many cases, uses less memory than 1.2.6.
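For reference, the corresponding section of a stock redis.conf looks roughly like this (the exact defaults can vary between versions):

# Snapshot to disk if at least <changes> keys changed within <seconds> seconds:
save 900 1
save 300 10
save 60 10000

# Or switch to the append-only file instead of periodic dumps
# (remember to run BGREWRITEAOF periodically so the log doesn't grow too big):
appendonly yes
appendfsync everysec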