Data loss without downtime with Redis

Redis seems to have lost some of my data without the server process dying. The first new data to persist seems to be at 12:26. The Redis logs are below. redis-cli info stats shows the process uptime is 3 days. Is RDB background saving failing? There is ample disk space available.
The Redis version is 4.0.6.
24121:M 16 Dec 12:17:26.011 * 10 changes in 300 seconds. Saving...
24121:M 16 Dec 12:17:26.117 * Background saving started by pid 370
370:C 16 Dec 12:17:44.994 * DB saved on disk
370:C 16 Dec 12:17:45.068 * RDB: 167 MB of memory used by copy-on-write
24121:M 16 Dec 12:17:45.260 * Background saving terminated with success
24121:M 16 Dec 12:21:19.891 * DB saved on disk
24121:M 16 Dec 12:21:21.465 * DB saved on disk
24121:M 16 Dec 12:22:00.152 * DB saved on disk
24121:M 16 Dec 12:22:00.474 * DB saved on disk
24121:M 16 Dec 12:22:32.699 * DB saved on disk
24121:M 16 Dec 12:22:33.044 * DB saved on disk
24121:M 16 Dec 12:22:33.579 * DB saved on disk
24121:M 16 Dec 12:22:33.993 * DB saved on disk
24121:M 16 Dec 12:22:34.462 * DB saved on disk
24121:M 16 Dec 12:22:35.167 * DB saved on disk
24121:M 16 Dec 12:22:35.500 * DB saved on disk
24121:M 16 Dec 12:22:36.107 * DB saved on disk
24121:M 16 Dec 12:23:02.170 * DB saved on disk
24121:M 16 Dec 12:23:02.564 * DB saved on disk
24121:M 16 Dec 12:23:02.853 * DB saved on disk
24121:M 16 Dec 12:23:03.142 * DB saved on disk
24121:M 16 Dec 12:23:03.505 * DB saved on disk
24121:M 16 Dec 12:23:03.792 * DB saved on disk
24121:M 16 Dec 12:23:04.174 * DB saved on disk
24121:M 16 Dec 12:23:04.526 * DB saved on disk
24121:M 16 Dec 12:23:04.898 * DB saved on disk
24121:M 16 Dec 12:23:05.214 * DB saved on disk
24121:M 16 Dec 12:23:05.573 * DB saved on disk
24121:M 16 Dec 12:23:06.078 * DB saved on disk
24121:M 16 Dec 12:23:06.266 * DB saved on disk
24121:M 16 Dec 12:23:06.452 * DB saved on disk
24121:M 16 Dec 12:23:19.422 * DB saved on disk
24121:M 16 Dec 12:23:29.048 * DB saved on disk
24121:M 16 Dec 12:23:38.699 * DB saved on disk
24121:M 16 Dec 12:23:48.633 * DB saved on disk
24121:M 16 Dec 12:23:58.422 * DB saved on disk
24121:M 16 Dec 12:24:08.165 * DB saved on disk
24121:M 16 Dec 12:24:18.620 * DB saved on disk
24121:M 16 Dec 12:24:28.847 * DB saved on disk
24121:M 16 Dec 12:24:38.802 * DB saved on disk
24121:M 16 Dec 12:24:48.660 * DB saved on disk
24121:M 16 Dec 12:24:58.978 * DB saved on disk
24121:M 16 Dec 12:25:11.011 * DB saved on disk
24121:M 16 Dec 12:25:21.948 * DB saved on disk
24121:M 16 Dec 12:25:32.383 * DB saved on disk
24121:M 16 Dec 12:25:43.789 * DB saved on disk
24121:M 16 Dec 12:25:58.678 * DB saved on disk
24121:M 16 Dec 12:26:10.804 * DB saved on disk
24121:M 16 Dec 12:26:21.522 * DB saved on disk
24121:M 16 Dec 12:26:32.147 * DB saved on disk
24121:M 16 Dec 12:26:42.517 * DB saved on disk
24121:M 16 Dec 12:26:52.922 * DB saved on disk
24121:M 16 Dec 12:31:53.081 * 10 changes in 300 seconds. Saving...
24121:M 16 Dec 12:31:53.092 * Background saving started by pid 8671
8671:C 16 Dec 12:31:54.833 * DB saved on disk
8671:C 16 Dec 12:31:54.839 * RDB: 12 MB of memory used by copy-on-write
24121:M 16 Dec 12:31:54.898 * Background saving terminated with success

I guess this FAQ provides an answer to your question:
Redis doesn't really lose keys randomly. If the keys have disappeared,
then it is likely because of one of the following reasons:
Expiration: The TTL specified on a key was hit, so the system removed
the key. More details around Redis expiration can be found in the
documentation for the EXPIRE command. TTL values can be set through
operations like SET, PSETEX or EXPIRE.
The INFO command can be used to get stats about how many keys have
expired using the expired_keys entry under the STATS section. You can
also see the number of keys with a TTL value, as well as the average
TTL value, in the KEYSPACE section.
# Stats
expired_keys:46583
# Keyspace
db0:keys=3450,expires=2,avg_ttl=91861015336
See related article with debugging tips
Eviction: Under memory pressure, the system evicts keys to free up memory. When the used_memory or used_memory_rss values reported by the INFO command approach the configured maxmemory setting, the system starts evicting keys based on your configured memory policy, as described here. You can monitor the number of keys evicted using the same INFO command mentioned previously:
# Stats
evicted_keys:13224
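If you want to check both counters quickly, commands along these lines should work (a minimal sketch; the INFO field names are standard, but adjust connection options for your setup):
redis-cli info stats | grep -E 'expired_keys|evicted_keys'
redis-cli config get maxmemory
redis-cli config get maxmemory-policy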

Related

The expired time of data and `keep` parameter in TDengine

I'm using TDengine database 2.1.3.0. I created a table with keep set to 10.
I found that data inserted 10 days ago is still in the database.
What may be the cause?
There is another parameter called "days", with a default value of 10, in TDengine. TDengine stores data in one data file per 10-day span, and a data file is not deleted until all the data in it has expired.
For example:
one data file may hold data from Mar 15 to Mar 25. On Mar 26, the data from Mar 15 has expired, but the file also holds data from Mar 16 to Mar 25, so the file cannot be deleted yet. The Mar 15 data is therefore still in the database.
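For illustration, a database created along these lines (a sketch; the database name is made up, and keep/days are the parameters discussed above) keeps each data file until its newest rows pass the retention window:
create database demo keep 10 days 10;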

SQL generate a table, count or function for graph

Firstly, I have no code to show (but for good reason). I need a pointer or direction before I try again, as I have already failed a few times trying to create a recursive function and so on. I have kind of given up and thought I would ask you experts, as I am lost and stressed.
My Scenario is this.
I'm creating a graph in PHP using JSON, and that's all fine. However, the data I need is my issue.
I have records that have a start and an end date.
Example
ID 14
Start_Date 03/08/2021
End_Date 07/08/2021
Running a stored procedure to grab records and count between 1 Aug and 10 Aug would display the above as a single record.
I'm trying to create a line chart that would have 1 Aug to 2 Aug null, then 3 Aug through 7 Aug displaying 1, and finally 8 to 10 Aug null.
1 Aug 2021 0
2 Aug 2021 0
3 Aug 2021 1
4 Aug 2021 1
5 Aug 2021 1
6 Aug 2021 1
7 Aug 2021 1
8 Aug 2021 0
9 Aug 2021 0
10 Aug 2021 0
Is this possible? I have nearly given up.
The nearest I came was using a loop to create a temporary table and inserting records; it was NOT pretty and certainly was embarrassing. If I recreated it and posted it here I would die of shame for sure.
So if anyone can point me in the right direction or offer a suggestion, it would be very much appreciated.
Thank you for reading.
You need to start with a list of dates. There are many ways to generate such a list: perhaps you have an existing table, or your database supports a suitable function. SQL (in general) supports recursive CTEs, which are an alternative method.
Once you have the dates, you can use left join and group by to get the counts you want. Here is an example using MySQL syntax:
with recursive dates as (
    select date('2021-08-01') as dte
    union all
    select dte + interval 1 day
    from dates
    where dte < '2021-08-10'
)
select d.dte, count(t.id)
from dates d
left join t
    on d.dte between t.start_date and t.end_date
group by d.dte;
Here is a db<>fiddle.

How to get difference between 2 timestamp values in SAP Hana?

I have two timestamp columns and would like to find the difference between those times in SAP HANA.
I have not found an easy way to do this, unlike in other databases. For better understanding, an example is given below:
COLUMN1:          Thu Oct 01 2020 09:18:08 GMT+0200 (CEST)
COLUMN2:          Thu Oct 01 2020 15:49:40 GMT+0200 (CEST)
Resulting column: 06 hours 31 min 32s
You can use the xx_between functions (DAYS_BETWEEN, SECONDS_BETWEEN, NANO100_BETWEEN) and then do some math, as in:
SELECT
    TO_VARCHAR(TO_INTEGER(FLOOR(SECONDS_BETWEEN(T."START_TIME", T."END_TIME") / 86400)), '00') || 'D ' ||        -- whole days
    TO_VARCHAR(MOD(TO_INTEGER(FLOOR(SECONDS_BETWEEN(T."START_TIME", T."END_TIME") / 3600)), 24), '00') || ':' || -- remaining hours
    TO_VARCHAR(MOD(TO_INTEGER(FLOOR(SECONDS_BETWEEN(T."START_TIME", T."END_TIME") / 60)), 60), '00') || ':' ||   -- remaining minutes
    TO_VARCHAR(MOD(SECONDS_BETWEEN(T."START_TIME", T."END_TIME"), 60), '00') AS "DURATION"                       -- remaining seconds
FROM "TABLE1" T
The code is written so that each step of the time conversion is visible; it could be shortened a lot if needed. For the example above, it produces 00D 06:31:32.

Get the latest date in SQL from text format

I'm trying to get the latest date from a CSV file. The dates are stored in this form:
NOV 14 2010
FEB 1 2012
JUN 18 2014
and my query is like
SELECT Max(date) from table
I'm getting
NOV 14 2010
Any idea?
They are likely being treated as strings (varchar), not DateTimes. Try:
SELECT MAX(CAST(TABLE.date as DateTime)) FROM TABLE
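If the data is in MySQL instead, STR_TO_DATE can parse this format explicitly (a sketch; the table and column names are placeholders):
SELECT MAX(STR_TO_DATE(`date`, '%b %e %Y')) FROM mytable;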

Sorting records based on modified timestamp?

I am trying to sort a list of records that have been created using a screen scraping script. The script adds the following style of date and time (timestamp) to each record:
13 Jan 14:49
The script runs every 15 minutes, but if I set the sort order to 'time DESC' it doesn't really make sense, because it lists the records as follows:
13 Jan 14:49
13 Jan 12:32
13 Jan 09:45
08 Feb 01:10
07 Feb 23:31
07 Feb 06:53
06 Feb 23:15
As you can see, it sorts the first figure correctly (the day of the month in number form), but it puts February after January. To add to the confusion, it puts the latest date in February at the top of the February section.
Is there a better way of sorting these so they are in a more understandable order?
If you are storing the values in a database, simply use the datetime column type when creating the field. The database will treat the field as a point in time and sort the values chronologically.
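For instance (a minimal sketch; the table and column names are made up):
CREATE TABLE records (
    id INT PRIMARY KEY,
    scraped_at DATETIME
);
-- The database compares datetimes chronologically, so this is all the sorting needed.
SELECT * FROM records ORDER BY scraped_at DESC;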
Otherwise, if you are storing the values elsewhere, for example in a flat file, convert the formatted time to Unix time. Unix time is an integer, so it is easier to sort.
Time.parse("13 Jan 09:45").to_i
# => 1326444300
Time.parse("08 Feb 01:10").to_i
# => 1328659800
You can always convert a Unix time back to a Time instance.
Time.at(1328659800).to_s
# => "2012-02-08 01:10:00 +0100"