I have a Redis cluster with 3-4 different types of keys, each with its own prefix. I want to get the percentage distribution of value sizes for a particular key type.
e.g.
Redis keys: catalog:styleId1, catalog:styleId2, size:styleId1, size:styleId2, etc.
Expected output:
9% catalog:style 7560 bytes
12% catalog:style x bytes
Even a rough range would work; I just need a simple way to analyze the data size of a particular key type.
I tried --bigkeys, but it can't be restricted to a particular key pattern.
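One approach that should work here is to SCAN the keys of one prefix and sum their reported memory usage - a rough sketch, assuming Redis 4.0+ (for MEMORY USAGE); the catalog: prefix is just an example, and in a cluster this has to be run against each node separately:

# Scan all keys matching the prefix and sum MEMORY USAGE (reported in bytes).
total=0
for key in $(redis-cli --scan --pattern 'catalog:*'); do
  bytes=$(redis-cli MEMORY USAGE "$key")
  total=$((total + ${bytes:-0}))   # a key may expire between the SCAN and the MEMORY USAGE call
done
echo "catalog:* -> $total bytes"

Dividing that total by used_memory from INFO memory then gives the percentage share of each key type.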
Actually the question is about the capacity of a single Redis instance, regardless of memory size.
The reference says:
Redis can handle up to 2^32 keys, and was tested in practice to handle
at least 250 million keys per instance. Every hash, list, set, and
sorted set, can hold 2^32 elements. In other words your limit is likely
the available memory in your system.
So regardless of the server's memory size, can I create 4 sets and fill each of them with almost 2^32 keys in a single Redis instance? That would mean 4*(2^32) keys in total.
Sets do not contain keys; they contain strings.
Redis Sets are an unordered collection of Strings.
Of course, your string could happen to share the same characters as one of your keys, but there's nothing special about that. So, yes, you could have four sets containing up to 4 * (2^32) strings, but the total number of keys would still be limited to 2^32.
I have a file with 13 million floats, each of which has an associated integer index. The original size of the file is 80 MB.
We want to pass multiple indexes to fetch float data. That is the only reason I needed a hash's field and value structure; a list does not support passing multiple indexes to get.
I stored them as a hash in Redis, with the index as the field and the float as the value. On checking memory usage it was about 970 MB.
Storing the 13 million values as a list uses 280 MB.
Is there any optimization I can use?
Thanks in advance.
Running on ElastiCache.
You can get a really good optimization by creating buckets of index-to-float values.
Small hashes are very memory-optimized internally.
So assume your data in original file looks like this:
index, float_value
2,3.44
5,6.55
6,7.33
8,34.55
And currently you have stored them one index to one float value, in a hash or a list.
You can apply this bucketing optimization:
The hash key would be index/1000 (integer division, so each bucket holds up to 1000 consecutive indexes), the field would be the index, and the value would be the float value.
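As a small sketch of that layout (the floatbucket: prefix and the bucket size of 1000 are only illustrative; for the memory saving to kick in, the configured hash-max-ziplist-entries / hash-max-zipmap-entries must be at least the bucket size):

# indexes 2, 5, 6 and 8 all fall into bucket 0 (index / 1000)
HSET floatbucket:0 2 3.44
HSET floatbucket:0 5 6.55
HSET floatbucket:0 6 7.33
HSET floatbucket:0 8 34.55
# single lookup
HGET floatbucket:0 2
# several indexes from the same bucket in one call
HMGET floatbucket:0 2 5 8

HMGET also covers the original requirement of fetching multiple indexes at once, as long as the client computes the bucket for each index.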
More details here as well:
At first, we decided to use Redis in the simplest way possible: for
each ID, the key would be the media ID, and the value would be the
user ID:
SET media:1155315 939
GET media:1155315
939
While prototyping this solution, however, we found that Redis needed about 70 MB to store 1,000,000 keys this way. Extrapolating to
the 300,000,000 we would eventually need, it was looking to be around
21GB worth of data — already bigger than the 17GB instance type on
Amazon EC2.
We asked the always-helpful Pieter Noordhuis, one of Redis’ core
developers, for input, and he suggested we use Redis hashes. Hashes in
Redis are dictionaries that can be encoded in memory very
efficiently; the Redis setting ‘hash-zipmap-max-entries’ configures
the maximum number of entries a hash can have while still being
encoded efficiently. We found this setting was best around 1000; any
higher and the HSET commands would cause noticeable CPU activity. For
more details, you can check out the zipmap source file.
To take advantage of the hash type, we bucket all our Media IDs into
buckets of 1000 (we just take the ID, divide by 1000 and discard the
remainder). That determines which key we fall into; next, within the
hash that lives at that key, the Media ID is the lookup key within
the hash, and the user ID is the value. An example, given a Media ID
of 1155315, which means it falls into bucket 1155 (1155315 / 1000 =
1155):
HSET "mediabucket:1155" "1155315" "939" HGET "mediabucket:1155"
"1155315"
"939" The size difference was pretty striking; with our 1,000,000 key prototype (encoded into 1,000 hashes of 1,000 sub-keys each),
Redis only needs 16MB to store the information. Expanding to 300
million keys, the total is just under 5GB — which in fact, even fits
in the much cheaper m1.large instance type on Amazon, about 1/3 of the
cost of the larger instance we would have needed otherwise. Best of
all, lookups in hashes are still O(1), making them very quick.
If you’re interested in trying these combinations out, the script we
used to run these tests is available as a Gist on GitHub (we also
included Memcached in the script, for comparison — it took about 52MB
for the million keys)
I have a standalone Redis server with around 8000 keys at any given instant.
The used_memory is showing around 8.5 GB.
My individual key-value size is at most around 50 KB; by that calculation the used_memory should be well under 1 GB (50 KB * 8000 = 400 MB).
I am using Spring RedisTemplate with the default pool configuration to connect to Redis.
Any idea what I should look into to narrow down where the memory is being consumed?
A zset internally uses two data structures to hold the same elements, in order to get O(log(N)) insert and remove operations on a sorted data structure.
Specifically, the two data structures are:
Hash Table
Skip list
For the ideal case, memory usage according to my research is ordered as follows:
hset < set < zset
I would recommend switching to hashes (hset) if you have hierarchical data to store. This lowers memory consumption, but might make lookups a teeny-tiny bit slower (only if one key holds more than, say, a couple of hundred records).
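A minimal sketch of what that recommendation looks like (the user:1000 key and its fields are made up for illustration; multi-field HSET needs Redis 4.0+, older versions use HMSET):

# one top-level key per attribute: every key pays its own per-key overhead
SET user:1000:name "Alice"
SET user:1000:email "alice@example.com"
# the same data grouped under a single hash key
HSET user:1000 name "Alice" email "alice@example.com"
HGET user:1000 name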
I'm trying to analyze the size of our Redis DB and tweak how we store our data, per a few articles such as https://davidcel.is/posts/the-story-of-my-redis-database/
and https://engineering.instagram.com/storing-hundreds-of-millions-of-simple-key-value-pairs-in-redis-1091ae80f74c
I've read the documentation about "key sizes" (e.g. https://redis.io/commands/object)
and tried running various tools like:
redis-cli --bigkeys
and also tried to read the output from the redis-cli:
INFO memory
The size semantics are not clear to me.
Does the reported size reflect ONLY the size of the key itself, i.e. if my key is "abc" and the value is "value1", is the reported size only for the "abc" portion? The same question applies to complex data structures for that key, such as a hash, array, or list.
Trial and error doesn't seem to give me a clear result.
Different tools give different answers.
First, read about --bigkeys - it reports big value sizes in the keyspace, excluding the space taken by the key's name. Note that here the size of the value means something different for each data type, i.e. strings are sized by their STRLEN (bytes), whereas all other types by the number of their nested elements.
So it basically gives little indication of actual memory usage; rather, it does what it is intended to do - find big keys (not big key names, only estimated big values).
INFO MEMORY is a different story. The used_memory is reported in bytes and reflects the entire RAM consumption of key names, their values and all associated overheads of the internal data structures.
There is also DEBUG OBJECT, but note that its output is not a reliable way to measure the memory consumption of a key in Redis - the serializedlength field is the number of bytes needed for persisting the object, not its actual footprint in memory, which includes various administrative overheads on top of the data itself.
Lastly, as of v4 we have the MEMORY USAGE command that does a much better job - see https://github.com/antirez/redis-doc/pull/851 for the details.
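For example (the key names and byte counts below are purely illustrative; the optional SAMPLES argument controls how many nested elements of an aggregate value are sampled, with 0 meaning all of them):

MEMORY USAGE somekey
(integer) 59
MEMORY USAGE somebighash SAMPLES 0
(integer) 102400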
I am writing a JAR that fetches a large amount of data from an Oracle DB and stores it in Redis. The details are stored properly, but the set and hash keys I defined in the JAR seem to be getting capped in the Redis DB. There should be nearly 200 hash keys and 300 set keys, but I get only 29 keys when running KEYS * in Redis. Please help with how to increase the limit on Redis memory, or on hash or set key storage size.
Note: I changed the
hash-max-zipmap-entries 1024
hash-max-zipmap-value 64
manually in the redis.conf file, but it's not taking effect. Does it need to be changed anywhere else?
There is no limit on the number of set or hash keys you can put in a Redis instance, other than the amount of memory (check the maxmemory and maxmemory-policy parameters).
The hash-max-zipmap-entries parameter is completely unrelated: it only controls memory optimization.
I suggest using the MONITOR command to check which commands are actually being sent to the Redis instance.
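For example (both are standard commands; keep in mind that MONITOR streams every command the server receives and adds noticeable overhead, so run it only briefly against a busy instance):

redis-cli CONFIG GET maxmemory
redis-cli CONFIG GET maxmemory-policy
redis-cli MONITOR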
hash-max-zipmap-value keeps the hash field/value storage in Redis memory-optimized; lookups inside these compact hashes are amortized O(N), so overly long entries would in turn increase the latency of the system.
These settings are available in redis.conf.
If a hash grows beyond the specified limits, it is converted internally to the regular hash table encoding and thereby no longer provides the memory advantage that small hashes do.
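To check or adjust this on a running server, something like the following can be used from redis-cli (myhash is an illustrative key name; newer Redis versions renamed the parameters to hash-max-ziplist-entries / hash-max-ziplist-value, and Redis 7 to hash-max-listpack-entries / hash-max-listpack-value, so use whichever names CONFIG GET hash-max-* returns):

CONFIG GET hash-max-ziplist-entries
CONFIG SET hash-max-ziplist-entries 1024
CONFIG SET hash-max-ziplist-value 64
# returns "ziplist"/"listpack" while the hash is still compactly encoded, "hashtable" once converted
OBJECT ENCODING myhash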