How to clean up Redis entries periodically?

I'm using a map (Redis hash) from a hash of each entry to an epoch timestamp. Over a day, the map accumulates about 5 million entries and takes up too much memory. About 80% of the entries are not updated frequently, and I want to delete the entries whose epoch timestamp has not been updated in the last 24 hours. The cleanup can happen weekly to keep the map from growing out of bounds over time.
I tried setting a TTL on the entries, but it slowed down Redis lookups and inserts. I was also thinking of maintaining a sorted set mapping each hour to the hashes touched in that hour, but that would take up more space, and I'm not sure a single set value can be large enough to hold millions of hashes. What are some better approaches to handle this cleanup?
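One workable variant of the sorted-set idea is to score each member by its epoch timestamp instead of bucketing by hour: the weekly job can then pull stale members by score range and delete them in batches. A minimal sketch assuming redis-py, where entry:ts is the existing hash and entry:by_ts is a made-up companion sorted set:

```python
import time
import redis

r = redis.Redis()

def touch(entry_hash: str) -> None:
    # Record/update an entry: keep the hash and the sorted set in step.
    now = int(time.time())
    pipe = r.pipeline()
    pipe.hset("entry:ts", entry_hash, now)          # existing hash -> timestamp map
    pipe.zadd("entry:by_ts", {entry_hash: now})     # same members, scored by timestamp
    pipe.execute()

def weekly_cleanup(max_age_seconds: int = 24 * 3600) -> int:
    # Delete members whose timestamp is older than the cutoff, in batches.
    cutoff = int(time.time()) - max_age_seconds
    removed = 0
    while True:
        stale = r.zrangebyscore("entry:by_ts", "-inf", cutoff, start=0, num=10_000)
        if not stale:
            break
        pipe = r.pipeline()
        pipe.hdel("entry:ts", *stale)
        pipe.zrem("entry:by_ts", *stale)
        pipe.execute()
        removed += len(stale)
    return removed
```

The trade-off is that the sorted set duplicates every member, so it roughly doubles the per-entry key memory; in exchange, the cleanup never has to scan the whole hash.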

Related

Best Redis Data Type For Distributed Computation

I have an application that needs to use Redis with the following requirements:
A producer storing tens of millions of string records up to 128 bytes each
Indexing the records, since each worker needs to access records from its own predetermined range X to Y, so that multiple workers can process them in parallel
Deleting the processed records and storing the results back in redis under a different index
Which redis data type is optimal for this?
I am considering sorted sets, where I would write the original strings into one set and the results into another, though I have read somewhere that sorted-set entries come with a 64-byte overhead, and I'd like to save as much memory as possible since that allows me to process more records. Another alternative is a simple SET key value, where I would use, say, indexes 0-100,000,000 for the records to be processed and 100,000,000-200,000,000 for the corresponding result records.
Does anyone know how much memory overhead exists for each solution, or can anyone propose a better one?
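Not an answer on the exact overheads, but for concreteness, here is a rough redis-py sketch of the two alternatives being weighed; all key names and the "processing" step are placeholders:

```python
import redis

r = redis.Redis()

# Option A: sorted sets, record index used as the score.
# Note that sorted-set members must be unique, so identical records would collapse.
def store_record_zset(index: int, record: bytes) -> None:
    r.zadd("records:todo", {record: index})

def fetch_range_zset(x: int, y: int) -> list[bytes]:
    return r.zrangebyscore("records:todo", x, y)

# Option B: plain string keys addressed by index; "rec:" / "res:" prefixes are made up.
def store_record_plain(index: int, record: bytes) -> None:
    r.set(f"rec:{index}", record)

def process_range_plain(x: int, y: int) -> None:
    for i in range(x, y):
        value = r.get(f"rec:{i}")
        if value is None:
            continue
        result = value.upper()              # placeholder for the real processing
        pipe = r.pipeline()
        pipe.set(f"res:{i}", result)        # result stored under a different index/prefix
        pipe.delete(f"rec:{i}")             # processed record deleted
        pipe.execute()
```

Keep in mind that option B also pays a per-key overhead for every top-level key, so neither layout is overhead-free; measuring both with MEMORY USAGE on a sample is the safest way to compare.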

In TimescaleDB how to add retention policy for size instead of time interval?

In TimescaleDB, how can I set a maximum size in terms of GB/MB for a particular table/hypertable, so that when it reaches that size it begins to delete the oldest rows to accommodate new rows of data?
From the documentation it is evident that only a retention policy based on a time interval can be added for a hypertable. Is it possible to add a retention policy based on size?
A retention policy drops entire chunks, and chunks are measured by time interval, so it does not make sense to define the policy by size rather than by time. The policy drops a chunk only after the entire chunk is older than the given interval; thus, if the chunk size is 7 days and the retention policy is 3 days, the oldest dropped data will be 10 days old (the dropped chunk contains data from 10 to 3 days old). Chunks are represented internally by tables, so dropping a chunk is dropping a table, which is the most efficient way to delete data in PostgreSQL. Deleting row by row is much more expensive than dropping or truncating a table and doesn't free space until VACUUM is run.
TimescaleDB expects that you know your application load well enough to translate the desired size into a time interval.
The time dimension column is not required to have a time type; it can be a number. What matters is that the column increases over time and that it is clear how to use it in queries and how to define the chunk size. So it is possible to use a counter as the time dimension column and increment it by 1 (or by the row size) for each row, as sketched below. Note that synchronizing the counter can be a bottleneck.
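A rough illustration of that counter-as-dimension idea, assuming TimescaleDB 2.x and psycopg2; the table and column names are made up:

```python
import psycopg2

conn = psycopg2.connect("dbname=mydb")  # assumed connection string
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE events (
            row_counter BIGINT NOT NULL,   -- incremented by 1 (or by the row size) per row
            payload     JSONB
        )
    """)
    # For integer dimensions, chunk_time_interval is expressed in counter units,
    # so each chunk here covers 1,000,000 counter increments.
    cur.execute(
        "SELECT create_hypertable('events', 'row_counter', chunk_time_interval => 1000000)"
    )
```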
It is also possible to write a user-defined action, i.e. define your own procedure to be executed on a regular schedule as a custom policy.
Summary of the three possible solutions:
Give a good estimate of the chunk size, which is the way TimescaleDB expects to be used.
Define a numerical time dimension column with a counter-like implementation.
Write a custom policy using a user-defined action (see the sketch below).
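A sketch of that last option, assuming TimescaleDB 2.x: an externally scheduled job (it could equally be packaged as a user-defined action in PL/pgSQL) that drops the oldest chunks whenever the hypertable exceeds a size budget. The table name metrics, the 50 GB limit, and a timestamp time dimension are assumptions here:

```python
import psycopg2

SIZE_LIMIT_BYTES = 50 * 1024**3   # size budget, adjust as needed
HYPERTABLE = "metrics"

def enforce_size_budget(conn) -> None:
    with conn.cursor() as cur:
        while True:
            cur.execute("SELECT hypertable_size(%s)", (HYPERTABLE,))
            size = cur.fetchone()[0]
            if size is None or size <= SIZE_LIMIT_BYTES:
                break
            # Find the upper time bound of the oldest chunk ...
            cur.execute(
                """
                SELECT range_end
                FROM timescaledb_information.chunks
                WHERE hypertable_name = %s
                ORDER BY range_end
                LIMIT 1
                """,
                (HYPERTABLE,),
            )
            oldest_range_end = cur.fetchone()[0]
            # ... and drop every chunk that is entirely older than that bound.
            cur.execute(
                "SELECT drop_chunks(%s, older_than => %s::timestamptz)",
                (HYPERTABLE, oldest_range_end),
            )
            conn.commit()

if __name__ == "__main__":
    enforce_size_budget(psycopg2.connect("dbname=mydb"))  # assumed connection string
```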

Databricks datamart update optimization

I have a performance issue I'd like your input on.
This is entirely based on Databricks on Azure, with storage on Azure Data Lake Storage. The tech stack is less than two years old and is on the most recent release.
Say I have a datamart Delta table, 100 columns, 30,000,000 rows, grows 225,000 rows every calendar-quarter.
There isn't a Datawarehouse in this architecture, so the newest 225,000 rows are simply appended to the datamart; 30,000,000+ and growing every quarter.
Two columns are a dimension key Dim1_cd and a matching Dim1_desc.
There are 36 other dimension key-value pairs in the datamart, much like the Dim1 pair.
The datamart is a list of transactions, has a Period column, eg. "2021Q3", and Period is the first level and only partition of the datamart.
The partition currently divides the Delta table into 15 partition folders, each holding around 150 Parquet files of roughly 100 MB each.
A calendar quarter later, a new set of files is delivered to be appended to the datamart. One of them is a Dim1_lookup.txt file, which is first read into a Dim1_deltaTable and has only two columns, Dim1_cd and Dim1_desc; each row is distinct (third normal form). On disk, the Dim1_lookup.txt file is only 55 KB.
Applying the newest version of this dimension sometimes takes only 3-4 minutes, when there are no Dim1_desc values needing to be updated. Other times there are 20,000 to 100,000 updates to be written across hundreds to thousands of Parquet files, and that can take an unpleasantly long time.
Of course, writing the code of a delta table update to apply the Dim1_deltaTable is no big challenge.
But what can you suggest to optimize the updates?
Ideally you would have a data warehouse backing the datamart, but that is not the case in this architecture.
You might want to partition on Dim1_desc to take advantage of Delta's data skipping, but there are 36 other _desc fields with the same update concerns.
What do you think is possible to optimize the update and minimize update processing time?
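Not a full answer, but for concreteness, here is a minimal PySpark/Delta sketch of the update itself: a MERGE keyed on Dim1_cd whose update clause only fires when Dim1_desc actually differs, so files with no real change are not rewritten. The paths and the tab-delimited read are assumptions:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the quarterly lookup file (assumed tab-delimited with a header).
dim1_updates = (spark.read
                .option("header", "true")
                .option("sep", "\t")
                .csv("/mnt/dropzone/Dim1_lookup.txt"))

datamart = DeltaTable.forPath(spark, "/mnt/lake/datamart")  # assumed path

(datamart.alias("t")
 .merge(dim1_updates.alias("s"), "t.Dim1_cd = s.Dim1_cd")
 .whenMatchedUpdate(
     condition="t.Dim1_desc <> s.Dim1_desc",   # skip rows whose description is already current
     set={"Dim1_desc": "s.Dim1_desc"})
 .execute())
```

How much this saves depends on how the changed codes are spread across files; running OPTIMIZE with ZORDER on the dimension code columns is a commonly suggested complement, at the cost of the rewrite time it takes.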

Oracle: Fastest way of doing stats on millions of rows?

I have developed an application which allows users to enter measurements - these are stored in an Oracle database. Each measurement "session" could contain around 100 measurements. There could be around 100 measurement sessions in a "batch", so that's 10,000 measurements per batch. There could easily be around 1000 batches at some point, bringing the total number of measurements into the millions.
The problem is that calculations and statistics need to be performed on the measurements. It ranges from things like average measurements per batch to statistics across the last 6 months of measurements.
My question is: is there any way that I can make the process of calculating these statistics faster? Either through the types of queries I'm running or the structure of the database?
Thanks!
I assume that most calculations will be performed on either a single session or a single batch. If this is the case, then it's important that sessions and batches aren't distributed all over the disk.
In order to achieve the desired data clustering, you probably want to create an index-organized table (IOT) organized by batch and session. That way, the measurements belonging to the same session or the same batch are close together on disk, and queries for a session or a batch will touch only a small number of disk pages.
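For illustration, the DDL for such a table might look roughly like this (column names and connection details are guesses; the key points are ORGANIZATION INDEX and a primary key that leads with batch and session), shown here via python-oracledb:

```python
import oracledb

conn = oracledb.connect(user="app", password="***", dsn="dbhost/orclpdb1")
with conn.cursor() as cur:
    # Rows of an IOT are stored in primary-key order, so one batch's or one
    # session's measurements end up physically adjacent.
    cur.execute("""
        CREATE TABLE measurements (
            batch_id    NUMBER NOT NULL,
            session_id  NUMBER NOT NULL,
            meas_seq    NUMBER NOT NULL,
            meas_value  NUMBER,
            CONSTRAINT measurements_pk
                PRIMARY KEY (batch_id, session_id, meas_seq)
        ) ORGANIZATION INDEX
    """)
```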
Unfortunately, since the calculations that needed to be carried out were not limited to just a few, I could not calculate them at the end of every measurement session.
In the end the queries did not take that long, around 3 minutes to calculate all the stats. For end users this was still an unacceptably long time to wait, BUT the good thing was that the stats did not necessarily have to be completely up to date.
Therefore I used a materialized view to take a 'snapshot' of the stats, and set it to update itself every morning at 2am. Then, when the user requested the stats from the materialized view it was instant!
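A sketch of that approach with hypothetical names: a materialized view of the per-batch stats that does a complete refresh at 2 a.m. every day, again via python-oracledb:

```python
import oracledb

conn = oracledb.connect(user="app", password="***", dsn="dbhost/orclpdb1")
with conn.cursor() as cur:
    # Snapshot the expensive aggregates; user queries then read the precomputed rows.
    cur.execute("""
        CREATE MATERIALIZED VIEW batch_stats
        BUILD IMMEDIATE
        REFRESH COMPLETE
        START WITH TRUNC(SYSDATE + 1) + 2/24
        NEXT TRUNC(SYSDATE + 1) + 2/24
        AS
        SELECT batch_id,
               COUNT(*)           AS n_measurements,
               AVG(meas_value)    AS avg_value,
               STDDEV(meas_value) AS stddev_value
        FROM   measurements
        GROUP  BY batch_id
    """)
```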

Efficiently storing 7.300.000.000 rows

How would you tackle the following storage and retrieval problem?
Roughly 2.000.000 rows will be added each day (365 days/year) with the following information per row:
id (unique row identifier)
entity_id (takes on values between 1 and 2.000.000 inclusive)
date_id (incremented by one each day; will take on values between 1 and 3.650 (ten years: 10*365))
value_1 (takes on values between 1 and 1.000.000 inclusive)
value_2 (takes on values between 1 and 1.000.000 inclusive)
entity_id combined with date_id is unique. Hence, at most one row per entity and date can be added to the table. The database must be able to hold 10 years worth of daily data (7.300.000.000 rows (3.650*2.000.000)).
What is described above is the write pattern. The read pattern is simple: all queries will be made on a specific entity_id, i.e. retrieve all rows describing entity_id = 12345.
Transactional support is not needed, but the storage solution must be open-sourced. Ideally I'd like to use MySQL, but I'm open for suggestions.
Now - how would you tackle the described problem?
Update: I was asked to elaborate regarding the read and write patterns. Writes to the table will be done in one batch per day where the new 2M entries will be added in one go. Reads will be done continuously with one read every second.
"Now - how would you tackle the described problem?"
With simple flat files.
Here's why:
"all queries will be made on a specific entity_id. I.e. retrieve all rows describing entity_id = 12345."
You have 2.000.000 entities. Partition based on entity number:
level1= entity/10000
level2= (entity/100)%100
level3= entity%100
Each file of data is then level1/level2/level3/batch_of_data
You can then read all of the files in a given part of the directory to return samples for processing.
If someone wants a relational database, then load files for a given entity_id into a database for their use.
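A sketch of that layout (base directory, file name, and the CSV row format are assumptions; the level formulas are the ones above):

```python
import os

BASE_DIR = "/data/entities"   # assumed root of the partitioned directory tree

def entity_path(entity_id: int) -> str:
    level1 = entity_id // 10000
    level2 = (entity_id // 100) % 100
    level3 = entity_id % 100
    return os.path.join(BASE_DIR, str(level1), str(level2), str(level3))

def append_row(entity_id: int, date_id: int, value_1: int, value_2: int) -> None:
    path = entity_path(entity_id)
    os.makedirs(path, exist_ok=True)
    with open(os.path.join(path, "batch_of_data"), "a") as f:
        f.write(f"{entity_id},{date_id},{value_1},{value_2}\n")

def read_entity(entity_id: int) -> list[str]:
    # "retrieve all rows describing entity_id = 12345" is just one file read
    with open(os.path.join(entity_path(entity_id), "batch_of_data")) as f:
        return f.readlines()
```

Since level1/level2/level3 together reconstruct the entity number, each leaf directory holds exactly one entity's data.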
Edit On day numbers.
The date_id/entity_id uniqueness rule is not something that has to be handled. It's (a) trivially imposed on the file names and (b) irrelevant for querying.
The date_id "rollover" doesn't mean anything -- there's no query, so there's no need to rename anything. The date_id should simply grow without bound from the epoch date. If you want to purge old data, then delete the old files.
Since no query relies on date_id, nothing ever needs to be done with it. It can be the file name for all that it matters.
To include the date_id in the result set, write it in the file with the other four attributes that are in each row of the file.
Edit on open/close
For writing, you have to leave the file(s) open. You do periodic flushes (or close/reopen) to assure that stuff really is going to disk.
You have two choices for the architecture of your writer.
Have a single "writer" process that consolidates the data from the various source(s). This is helpful if queries are relatively frequent. You pay for merging the data at write time.
Have several files open concurrently for writing. When querying, merge these files into a single result. This is helpful if queries are relatively rare. You pay for merging the data at query time.
Use partitioning. With your read pattern you'd want to partition by entity_id hash.
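For example (a sketch using pymysql, assuming the surrogate id can be dropped because (entity_id, date_id) is unique): hash-partition on entity_id and lead the primary key with entity_id, so each entity's ten years of rows cluster together in the InnoDB clustered index.

```python
import pymysql

conn = pymysql.connect(host="localhost", user="app", password="***", database="metrics")
with conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE daily_values (
            entity_id INT UNSIGNED      NOT NULL,  -- 1 .. 2,000,000
            date_id   SMALLINT UNSIGNED NOT NULL,  -- 1 .. 3,650
            value_1   INT UNSIGNED      NOT NULL,  -- 1 .. 1,000,000
            value_2   INT UNSIGNED      NOT NULL,  -- 1 .. 1,000,000
            PRIMARY KEY (entity_id, date_id)
        ) ENGINE=InnoDB
          PARTITION BY HASH(entity_id) PARTITIONS 64
    """)
```

With entity_id leading the primary key, "all rows for entity_id = 12345" is a single range scan regardless of how many partitions you choose.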
You might want to look at these questions:
Large primary key: 1+ billion rows MySQL + InnoDB?
Large MySQL tables
Personally, I'd also think about calculating your row width to give you an idea of how big your table will be (as per the partitioning note in the first link).
HTH.,
S
Your application appears to have the same characteristics as mine. I wrote a MySQL custom storage engine to efficiently solve the problem. It is described here
Imagine your data is laid out on disk as an array of 2M fixed length entries (one per entity) each containing 3650 rows (one per day) of 20 bytes (the row for one entity per day).
Your read pattern reads one entity. It is contiguous on disk, so it takes 1 seek (about 8 milliseconds) and reads 3650 x 20 = about 80K at maybe 100 MB/sec ... so it is done in a fraction of a second, easily meeting your 1-query-per-second read pattern.
The update has to write 20 bytes in 2M different places on disk. In the simplest case this would take 2M seeks, each of which takes about 8 milliseconds, so it would take 2M * 8 ms = 4.5 hours. If you spread the data across 4 "raid0" disks it could take 1.125 hours.
However, the places are only 80K apart, which means there are 200 such places within a 16 MB block (typical disk cache size), so it could operate at anything up to 200 times faster (1 minute). Reality is somewhere between the two.
My storage engine operates on that kind of philosophy, although it is a little more general purpose than a fixed length array.
You could code exactly what I have described. Putting the code into a MySQL pluggable storage engine means that you can use MySQL to query the data with various report generators etc.
By the way, you could eliminate the date and entity id from the stored row (because they are the array indexes) and maybe the unique id, if you don't really need it since (entity id, date) is unique, and store the two values as 3-byte ints. Then your stored row is 6 bytes, you have 700 updates per 16 MB block, and therefore faster inserts and a smaller file.
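A sketch of the addressing arithmetic behind that layout (file name and row packing are assumptions; the storage engine does something along these lines internally):

```python
DAYS = 3650       # one slot per day over ten years
ROW_BYTES = 20    # id, entity_id, date_id, value_1, value_2 packed into 20 bytes

def row_offset(entity_id: int, date_id: int) -> int:
    # Entities and days are 1-based in the question; the offset is pure arithmetic,
    # so no index lookup is ever needed.
    return ((entity_id - 1) * DAYS + (date_id - 1)) * ROW_BYTES

def read_entity(path: str, entity_id: int) -> bytes:
    # One seek plus one contiguous read (3650 * 20 bytes) covers the entity's history.
    with open(path, "rb") as f:
        f.seek(row_offset(entity_id, 1))
        return f.read(DAYS * ROW_BYTES)

def write_row(path: str, entity_id: int, date_id: int, row: bytes) -> None:
    assert len(row) == ROW_BYTES
    with open(path, "r+b") as f:
        f.seek(row_offset(entity_id, date_id))
        f.write(row)
```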
Edit Compare to Flat Files
I notice that the comments generally favor flat files. Don't forget that directories are just indexes implemented by the file system, and they are generally optimized for relatively small numbers of relatively large items. Access to files is generally optimized so that it expects a relatively small number of files to be open, and has a relatively high overhead for open and close and for each file that is open. All of those "relatively" are relative to the typical use of a database.
Using file system names as an index for an entity_id, which I take to be a non-sparse integer from 1 to 2 million, is counter-intuitive. In a program you would use an array, not a hash table, for example, and you are inevitably going to incur a great deal of overhead for an expensive access path that could simply be an array indexing operation.
Therefore if you use flat files, why not use just one flat file and index it?
Edit on performance
The performance of this application is going to be dominated by disk seek times. The calculations I did above determine the best you can do (although you can make INSERT quicker by slowing down SELECT; you can't make them both better). It doesn't matter whether you use a database, flat files, or one flat file, except that you can add more seeks that you don't really need and slow it down further. For example, indexing (whether it's the file system index or a database index) causes extra I/Os compared to "an array lookup", and these will slow you down.
Edit on benchmark measurements
I have a table that looks very much like yours (or almost exactly like one of your partitions). It was 64K entities not 2M (1/32 of yours), and 2788 'days'. The table was created in the same INSERT order that yours will be, and has the same index (entity_id,day). A SELECT on one entity takes 20.3 seconds to inspect the 2788 days, which is about 130 seeks per second as expected (on 8 millisec average seek time disks). The SELECT time is going to be proportional to the number of days, and not much dependent on the number of entities. (It will be faster on disks with faster seek times. I'm using a pair of SATA2s in RAID0 but that isn't making much difference).
If you re-order the table into entity order
ALTER TABLE x ORDER BY entity, day;
Then the same SELECT takes 198 milliseconds (because it is reading the entity's rows in order in a single disk access).
However the ALTER TABLE operation took 13.98 DAYS to complete (for 182M rows).
There are a few other things the measurements tell you:
1. Your index file is going to be as big as your data file. It is 3 GB for this sample table. That means (on my system) the whole index works at disk speeds, not memory speeds.
2. Your INSERT rate will decline logarithmically. The INSERT into the data file is linear, but the insert of the key into the index is log. At 180M records I was getting 153 INSERTs per second, which is also very close to the seek rate. It shows that MySQL is updating a leaf index block for almost every INSERT (as you would expect, because it is indexed on entity but inserted in day order). So you are looking at 2M/153 secs = 3.6 hrs to do your daily insert of 2M rows (divided by whatever effect you can get by partitioning across systems or disks).
I had a similar problem (although at a much bigger scale: about your yearly usage every day).
Using one big table brought me to a screeching halt; you can pull a few months' worth, but I guess you'll eventually partition it.
Don't forget to index the table, or else you'll be left fiddling with a tiny trickle of data on every query; oh, and if you want to do mass queries, use flat files.
Your description of the read patterns is not sufficient. You'll need to describe what amounts of data will be retrieved, how often and how much deviation there will be in the queries.
This will allow you to consider doing compression on some of the columns.
Also consider archiving and partitioning.
If you want to handle huge data with millions of rows, it can be treated much like a time-series database that logs a timestamp and saves the data. Some ways to store data like this are InfluxDB and MongoDB.