I have about 250 MB of data stored in Redis as a single hash object. I am using Spring RedisTemplate to read the data from Redis, but it takes around 30 to 35 seconds.
redisTemplate.opsForHash().put("masterMap","masterMap", masterMap);
redisTemplate.opsForHash().get("masterMap","masterMap");
The requirement is to get the data in milliseconds, yet it is taking 30 to 35 seconds. How can I read data of this size from Redis quickly? Are there alternative ways to read data from Redis, or do I have to change some configuration?
Can someone please guide me on this?
Do a profiler run.
If you spend most of the time deserializing this data, then consider faster serialization methods, such as Protocol Buffers or Cap'n Proto.
If you spend most of the time reading a massive amount of data from the socket, then try to decrease the amount stored. Use compression and/or normalization. For example, if there are a lot of strings with low cardinality, store their dictionary as a separate structure.
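If profiling points at serialization or raw transfer volume, one option is to wrap whatever serializer you already use with GZIP compression so less data crosses the socket. Below is a minimal sketch for Spring Data Redis; the class name GzipRedisSerializer is my own, and a Protobuf- or Kryo-based serializer could be plugged in as the inner one instead.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

import org.springframework.data.redis.serializer.RedisSerializer;
import org.springframework.data.redis.serializer.SerializationException;

/**
 * Wraps any RedisSerializer and GZIP-compresses its output before it is
 * written to Redis, so less data has to travel over the socket.
 */
public class GzipRedisSerializer<T> implements RedisSerializer<T> {

    private final RedisSerializer<T> inner;

    public GzipRedisSerializer(RedisSerializer<T> inner) {
        this.inner = inner;
    }

    @Override
    public byte[] serialize(T value) throws SerializationException {
        byte[] raw = inner.serialize(value);
        if (raw == null) {
            return null;
        }
        try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
             GZIPOutputStream gzip = new GZIPOutputStream(bos)) {
            gzip.write(raw);
            gzip.finish(); // flush the compressed data and GZIP trailer into bos
            return bos.toByteArray();
        } catch (IOException e) {
            throw new SerializationException("GZIP compression failed", e);
        }
    }

    @Override
    public T deserialize(byte[] bytes) throws SerializationException {
        if (bytes == null) {
            return null;
        }
        try (GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(bytes));
             ByteArrayOutputStream bos = new ByteArrayOutputStream()) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = gzip.read(buffer)) != -1) {
                bos.write(buffer, 0, read);
            }
            return inner.deserialize(bos.toByteArray());
        } catch (IOException e) {
            throw new SerializationException("GZIP decompression failed", e);
        }
    }
}
```

Wire it in with something like redisTemplate.setHashValueSerializer(new GzipRedisSerializer<>(new JdkSerializationRedisSerializer())) and profile again; whether compression helps depends on how compressible the 250 MB payload actually is.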
I have a 100 GB dataset with rows in the format shown below:
cookie,iplong1,iplong2..,iplongN
I am currently trying to fit this data into Redis as a sorted set. I also need to set a TTL for each of those IPs. To implement a per-element TTL within the set, I plan to give each IP a score equal to an epoch timestamp, and perhaps write a separate script that parses the scores and removes expired IPs accordingly. That said, I am noticing that this 100 GB dataset takes almost 100 GB of memory in Redis, so I was wondering whether there is a more efficient way to pack the data into Redis with a minimal memory footprint.
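To make the score-as-TTL idea concrete, here is a minimal sketch using Spring Data Redis in which the score stores the expiry epoch second and a periodic cleanup trims expired members. The IpTtlStore class and the "cookie:" key prefix are assumptions for illustration only.

```java
import java.time.Duration;
import java.time.Instant;

import org.springframework.data.redis.core.StringRedisTemplate;

/**
 * Sketch of the "score as expiry" idea: the member is the IP (as a string),
 * the score is the epoch second at which it should expire. A periodic
 * cleanup job trims every member whose score lies in the past.
 */
public class IpTtlStore {

    private final StringRedisTemplate redis;
    private final Duration ttl;

    public IpTtlStore(StringRedisTemplate redis, Duration ttl) {
        this.redis = redis;
        this.ttl = ttl;
    }

    /** Add or refresh an IP for a cookie; the score is the expiry time. */
    public void addIp(String cookie, String ipLong) {
        long expiresAt = Instant.now().plus(ttl).getEpochSecond();
        redis.opsForZSet().add("cookie:" + cookie, ipLong, expiresAt);
    }

    /** Cleanup pass: remove every member whose expiry time has passed. */
    public void removeExpired(String cookie) {
        long now = Instant.now().getEpochSecond();
        redis.opsForZSet().removeRangeByScore("cookie:" + cookie, 0, now);
    }
}
```

removeExpired would be called from the separate cleanup job you mention, for example on a schedule while iterating the keys with SCAN.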
I would also be happy to know whether there is any other tech stack that can handle this better. The dataset is updated frequently from hourly logs, and we also expect to be able to read from it concurrently at a fast rate.
Thanks in advance.
We are planning to introduce a distributed cache (Redis) for our application. We have data stored in a map that is around 2 GB in size and held as a single object. Currently it lives in context scope, along with plenty of other objects stored there.
Now we plan to move all of this context data into Redis. The map takes a large amount of memory, and we would have to store it as a single key-value object.
Is Redis suitable for this requirement, and which data type should we use to store this data in Redis?
Please suggest how to implement this.
So, you didn't finish the discussion in the other question and started a new one? 2 GB is a lot. Suppose you have a 1 Gb/s link between your servers: you need 16 seconds just to transfer the raw data. Add protocol costs and deserialization costs, and you are at around 20 seconds. These are hardware limitations. Of course you could get a 10 Gb/s link, or even multiplex two for 20 Gb/s, but is that the way to go? The real solution is to break this data into parts and perform only partial updates.
As for the data type: use the basic String type; there is really no other option when you store just one value, since the other types are complex structures and you only need a single blob.
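If you do break the data into parts as suggested, a Redis hash with one field per map entry lets you read and update single entries without moving the whole 2 GB. Here is a minimal sketch with Spring Data Redis; the masterMap key name and the helper class are assumptions for illustration.

```java
import java.util.Map;

import org.springframework.data.redis.core.HashOperations;
import org.springframework.data.redis.core.RedisTemplate;

/**
 * Sketch of the "break it into parts" advice: instead of one 2 GB value,
 * each map entry becomes its own hash field, so single entries can be
 * read and updated without transferring the whole blob.
 */
public class PartitionedMapStore {

    private final HashOperations<String, String, Object> hashOps;

    public PartitionedMapStore(RedisTemplate<String, Object> template) {
        this.hashOps = template.opsForHash();
    }

    /** Initial load: one field per entry instead of one giant value. */
    public void loadAll(Map<String, Object> masterMap) {
        hashOps.putAll("masterMap", masterMap);
    }

    /** Partial update: only the changed entry crosses the network. */
    public void updateEntry(String field, Object value) {
        hashOps.put("masterMap", field, value);
    }

    /** Partial read: fetch a single entry, not the whole map. */
    public Object readEntry(String field) {
        return hashOps.get("masterMap", field);
    }
}
```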
I intend to use Chronicle Map instead of Redis. The application scenario: every day the memoryData module loads hundreds of millions of records from the database into Chronicle Map, and dozens of JVMs continuously read those records, each JVM running hundreds of threads. Probably because I do not understand Chronicle Map well enough, the code performs poorly and runs slower and slower until it runs out of memory. I wonder whether the above is the correct way to use Chronicle Map.
Because Chronicle Map stores your data off-heap, it can hold more data than fits in main memory, but it performs better when all the data does fit in memory (so, if possible, consider increasing your machine's memory; if that is not possible, try using an SSD). Another reason for poor performance may be how you have sized the map in the Chronicle Map builder, for example how you have set the maximum number of entries: if this is too large, it will affect performance.
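As a rough illustration of sizing the map up front, here is a sketch with the Chronicle Map 3.x builder; the key/value types, the sample averages, the entry count, and persisting to a file so several JVMs can share the map are all placeholder assumptions to adapt to your own data.

```java
import java.io.File;
import java.io.IOException;

import net.openhft.chronicle.map.ChronicleMap;

/**
 * Sketch of sizing a Chronicle Map via its builder. The averages and the
 * entry count below are placeholders; use realistic figures measured from
 * your actual records.
 */
public class RecordMapFactory {

    public static ChronicleMap<CharSequence, CharSequence> create(File file) throws IOException {
        return ChronicleMap
                .of(CharSequence.class, CharSequence.class)
                .name("records")
                // Size for the real number of records, not a wild overestimate:
                // a far-too-large entries() hint wastes memory and hurts performance.
                .entries(500_000_000L)
                .averageKey("some-representative-key")
                .averageValue("a representative value of typical length")
                // Persisting to a file lets multiple JVMs map the same data.
                .createPersistedTo(file);
    }
}
```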
I am using Redis as a time series database. I am importing MySQL data into Redis, reshaping it into score/value pairs so that it fits into sorted sets. I have 26 tables, and at some point each table can grow to 100 million records.
Is it okay to store that much data in Redis, given that Redis keeps its data in memory?
Is there a chance of Redis crashing? If so, how often would it crash?
Is it okay to use Redis for my task?
You should ask yourself how you intend to query your data: will you access single values or do range scans? (Both patterns on a plain Redis sorted set are sketched after the list below.)
Depending on your answers, a more specialized solution might be a better fit for your problem:
Warp 10 (disclaimer: I help build it)
InfluxDB
KairosDB
OpenTSDB
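If you stay with plain Redis sorted sets, the difference between the two access patterns looks roughly like the sketch below (Spring Data Redis; the "ts:" key prefix and the helper class are assumptions). A single-value lookup is effectively a range scan with equal bounds, which is one reason the query pattern matters so much when choosing a store.

```java
import java.util.Set;

import org.springframework.data.redis.core.StringRedisTemplate;

/**
 * Sketch of a sorted set used as a time series: the score is the
 * epoch-second timestamp, the member encodes the sample value.
 */
public class TimeSeriesAccess {

    private final StringRedisTemplate redis;

    public TimeSeriesAccess(StringRedisTemplate redis) {
        this.redis = redis;
    }

    /** Write one sample: timestamp as score, encoded value as member. */
    public void add(String table, long epochSecond, String value) {
        redis.opsForZSet().add("ts:" + table, value, epochSecond);
    }

    /** Range scan: every sample between two timestamps (inclusive). */
    public Set<String> scan(String table, long fromEpoch, long toEpoch) {
        return redis.opsForZSet().rangeByScore("ts:" + table, fromEpoch, toEpoch);
    }
}
```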
How much data do sessions take in general? If I'm expecting traffic of around 1,000 hits a day, will the 5 MB limit of the free plan of Redis hosting providers work for me?
It depends on what type of data structure is used to store individual sessions. Take a look at this article, which summarizes the memory usage of the data structures provided by Redis. It might be a little outdated in terms of memory optimisation, but it is still a good resource for getting a rough estimate.
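As a very rough back-of-the-envelope check (assuming each session is a small hash of a handful of fields, on the order of 1 KB including Redis's per-key overhead): 1,000 sessions a day × ~1 KB ≈ 1 MB, which fits under a 5 MB cap as long as sessions expire after about a day. Larger session payloads or longer TTLs change that quickly, so measure a real session key with MEMORY USAGE once you have one.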