Seconds to timestamp as Days:HH:MM:SS - SQL

So I have the total difference in seconds between two timestamps, let's say:
Table Ex
Timestamp_A | Timestamp_B | Seconds
2022-10-12 15:19:02 | 2022-11-28 15:35:38 | 4,061,796
2022-11-21 09:58:25 | 2022-11-21 09:58:27 | 2
I used DATEDIFF('s', Timestamp_A, Timestamp_B) to produce the seconds.
I want to be able to convert the seconds to something like Days Hours:Minutes:Seconds or at least a way to represent days (DD:HH:MM:SS).
So for these two examples, I'd have:
Table Ex
Timestamp_A | Timestamp_B | Seconds | Converted
2022-10-12 15:19:02 | 2022-11-28 15:35:38 | 4,061,796 | 47 00:16:36
2022-11-21 09:58:25 | 2022-11-21 09:58:27 | 2 | 00 00:00:02
I tried messing around with to_varchar combined with to_timestamp, but to no avail.
Any help is appreciated

You can use this SQL UDF to do it; that's easier than complicating the main query with the logic to calculate the formatted string:
create or replace function DHMS(sec int)
returns string
language sql strict immutable
as $$
-- whole days, zero-padded to two digits
to_varchar(floor(sec/86400), '00') || ' '
-- leftover seconds, rendered as a time of day on a dummy date
|| to_varchar(dateadd(seconds, sec - floor(sec/86400) * 86400, '1970-01-01 00:00:00'), 'HH:MI:SS')
$$;
with T1 as
(
  select
    COLUMN1::timestamp as Timestamp_A,
    COLUMN2::timestamp as Timestamp_B,
    COLUMN3::int as Seconds
  from (values
    ('2022-10-12 15:19:02', '2022-11-28 15:35:38', 4061796),
    ('2022-11-21 09:58:25', '2022-11-21 09:58:27', 2))
)
select *, DHMS(seconds) from T1;
Output:
TIMESTAMP_A             | TIMESTAMP_B             | SECONDS | DHMS(SECONDS)
2022-10-12 15:19:02.000 | 2022-11-28 15:35:38.000 | 4061796 | 47 00:16:36
2022-11-21 09:58:25.000 | 2022-11-21 09:58:27.000 | 2       | 00 00:00:02
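If you'd rather not materialize the Seconds column first, you can feed the DATEDIFF from the question straight into the UDF. A minimal sketch, assuming the question's table is named Ex:
select Timestamp_A, Timestamp_B,
       DHMS(datediff('s', Timestamp_A, Timestamp_B)) as Converted
from Ex;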
If you need more room for the days, just add one or more additional zeros to the '00' format string in the to_varchar(floor(sec/86400), '00') part.
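For example, a variant padded to four day digits might look like this (DHMS4 is just an illustrative name, following the same pattern as above):
create or replace function DHMS4(sec int)
returns string
language sql strict immutable
as $$
-- same logic, but days zero-padded to four digits
to_varchar(floor(sec/86400), '0000') || ' '
|| to_varchar(dateadd(seconds, sec - floor(sec/86400) * 86400, '1970-01-01 00:00:00'), 'HH:MI:SS')
$$;
-- select DHMS4(4061796);  -- '0047 00:16:36'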

Related

Extract 30 minutes from timestamp and group it by 30 mins time interval - PGSQL

In PostgreSQL I am extracting the hour from the timestamp using the query below.
select count(*) as logged_users, EXTRACT(hour from login_time::timestamp) as Hour
from loginhistory
where login_time::date = '2021-04-21'
group by Hour order by Hour;
And the output is as follows:
 logged_users | hour
--------------+------
           27 |    7
           82 |    8
          229 |    9
         1620 |   10
         1264 |   11
         1990 |   12
         1027 |   13
         1273 |   14
         1794 |   15
         1733 |   16
          878 |   17
          126 |   18
           21 |   19
            5 |   20
            3 |   21
            1 |   22
I want the same output from the same SQL, but grouped by 30-minute intervals. Please suggest.
SELECT to_timestamp((extract(epoch FROM login_time::timestamp)::bigint / 1800) * 1800)::timestamp AS interval_30_min
, count(*) AS logged_users
FROM loginhistory
WHERE login_time::date = '2021-04-21' -- inefficient!
GROUP BY 1
ORDER BY 1;
Extracting the epoch gets the number of seconds since the epoch. Integer division truncates. Multiplying back effectively rounds down, achieving the same as date_trunc() for arbitrary time intervals.
1800 because 30 minutes contain 1800 seconds.
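A quick illustration of the arithmetic, using a made-up timestamp (result shown for a UTC session):
SELECT to_timestamp((extract(epoch FROM timestamp '2021-04-21 10:47:13')::bigint / 1800) * 1800)::timestamp;
-- epoch 1619002033 / 1800 = 899445 (integer division truncates)
-- 899445 * 1800 = 1619001000, which is 2021-04-21 10:30:00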
Detailed explanation:
Truncate timestamp to arbitrary intervals
The cast to timestamp makes me wonder about the actual data type of login_time? If it's timestamptz, the cast depends on your current time zone setting and sets you up for surprises if that setting changes. See:
How do I match an entire day to a datetime field?
Subtract hours from the now() function
Ignoring time zones altogether in Rails and PostgreSQL
Depending on the actual data type, and exact definition of your date boundaries, there is a more efficient way to phrase your WHERE clause.
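For instance, if login_time is a plain timestamp, a sargable half-open range avoids casting every row (a sketch, not from the original answer):
WHERE login_time >= timestamp '2021-04-21'
AND   login_time <  timestamp '2021-04-22'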
You can change the column on which you're aggregating to use the minute too:
select
  count(*) as logged_users,
  CONCAT(EXTRACT(hour from login_time::timestamp), '-',
         CASE WHEN EXTRACT(minute from login_time::timestamp) < 30 THEN 0 ELSE 30 END) as HalfHour
from loginhistory
where login_time::date = '2021-04-21'
group by HalfHour
order by HalfHour;
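Note that HalfHour is text here, so ordering by it sorts lexically ('10-0' comes before '9-30'). A hedged variant that keeps the groups in chronological order is to order by the earliest time in each group instead:
select
  count(*) as logged_users,
  CONCAT(EXTRACT(hour from login_time::timestamp), '-',
         CASE WHEN EXTRACT(minute from login_time::timestamp) < 30 THEN 0 ELSE 30 END) as HalfHour
from loginhistory
where login_time::date = '2021-04-21'
group by HalfHour
order by min(login_time::timestamp);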

Incorrect date time format in Oracle DB, convert to hours and minutes

Don't ask me why, but for some reason we have a date-time column that is in the wrong format, and I need help converting it.
Example timestamp from DB: 01-OCT-20 12.18.44.000000000 AM
In the example above the hours value is actually 18 and the minutes value is 44.
Not sure how this happened, but 12 is the default for everything. All I want to do is get the difference in HH:MM between 2 timestamps, but I don't know how to convert this properly with the hours being in the minutes section and the minutes being in the seconds section.
Example of what I'm looking for:
01-OCT-20 12.18.44.000000000 AM - 01-OCT-20 12.12.42.000000000 AM
Output: 06:02. So the timespan would be 6 hours and 2 minutes in this case.
Thanks,
The stored minutes are really hours, and the stored seconds are really minutes.
To turn a value stored in the minutes position into the same number of hours, you multiply by 60; the same applies for seconds to minutes.
So, if you want to convert the time part to the correct value, take the time since midnight and multiply it all by 60.
If you want the difference between the correct time and the stored time (after multiplying by 60) then you subtract the stored time, which can be simplified to just multiplying the time since midnight by 59.
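As a worked example with the first sample value: the stored time since midnight is 00:18:44 (18 minutes 44 seconds); multiplied by 60 that is 18:44:00, the corrected time of day, and multiplied by 59 it is 18:25:16, which is exactly the DIFFERENCE shown in the output below.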
So to get the time difference you can use:
SELECT (value - TRUNC(value))*59 AS difference,
       value + (value - TRUNC(value))*59 AS updated_value
FROM   table_name;
So, for your sample data:
CREATE TABLE table_name ( value ) AS
SELECT TO_TIMESTAMP( '01-OCT-20 12.18.44.000000000 AM', 'DD-MON-RR HH12.MI.SS.FF9 AM' ) FROM DUAL
Then the output is:
DIFFERENCE | UPDATED_VALUE
:---------------------------- | :-------------------------
+000000000 18:25:16.000000000 | 2020-10-01 18:44:00.000000
db<>fiddle here
If you want to compare two wrong values, just subtract one timestamp from the other and multiply by 60 (assuming that the hour will always be 12 AM, i.e. 00 on the 24-hour clock):
SELECT (value1 - value2) * 60 AS difference,
       value1,
       value1 + (value1 - TRUNC(value1))*59 AS updated_value1,
       value2,
       value2 + (value2 - TRUNC(value2))*59 AS updated_value2
FROM   table_name;
So, for the sample data:
CREATE TABLE table_name ( value1, value2 ) AS
SELECT TO_TIMESTAMP( '01-OCT-20 12.18.44.000000000 AM', 'DD-MON-RR HH12.MI.SS.FF9 AM' ),
TO_TIMESTAMP( '01-OCT-20 12.12.42.000000000 AM', 'DD-MON-RR HH12.MI.SS.FF9 AM' )
FROM DUAL
The output is:
DIFFERENCE | VALUE1 | UPDATED_VALUE1 | VALUE2 | UPDATED_VALUE2
:---------------------------- | :------------------------- | :------------------------- | :------------------------- | :-------------------------
+000000000 06:02:00.000000000 | 2020-10-01 00:18:44.000000 | 2020-10-01 18:44:00.000000 | 2020-10-01 00:12:42.000000 | 2020-10-01 12:42:00.000000
Which gives the difference as 6 hours and 2 minutes.
db<>fiddle here
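If you want the literal HH:MM string the question asks for rather than an interval, a minimal sketch (Oracle syntax, using the sample column names above) is to extract the hour and minute fields from that scaled difference:
SELECT TO_CHAR(EXTRACT(HOUR   FROM (value1 - value2) * 60), 'FM00') || ':' ||
       TO_CHAR(EXTRACT(MINUTE FROM (value1 - value2) * 60), 'FM00') AS hhmm
FROM   table_name;
-- hhmm: 06:02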

Oracle query returns no result if extending time on condition

If I do:
SELECT count(*) FROM XX where "date" >= '8-APR-2015' and "date" <= '8-APR-2016'
It would return many rows, but if I do:
SELECT count(*) FROM XX where "date" >= '8-APR-2010' and "date" <= '8-APR-2016'
It returns 0. How is that possible? If anything, I should get more rows, because I'm increasing the range that is valid for retrieval. Any ideas?
EDIT:
NLS_TIMESTAMP_FORMAT 'DD-MON-RR HH.MI.SSXFF'
NLS_DATE_FORMAT DD-MON-RR
If you look at the execution plans for the two queries, particularly the predicate information, you'll see that the first one does:
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 3 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
|* 2 | TABLE ACCESS FULL| XX | 1 | 13 | 3 (0)| 00:00:01 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
2 - filter("date">=TO_TIMESTAMP('8-APR-2015') AND
"date"<=TO_TIMESTAMP('8-APR-2016'))
while the second does:
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 0 (0)| |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
|* 2 | FILTER | | | | | |
|* 3 | TABLE ACCESS FULL| XX | 1 | 13 | 3 (0)| 00:00:01 |
----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(NULL IS NOT NULL)
3 - filter("date">=TO_TIMESTAMP('8-APR-2010') AND
"date"<=TO_TIMESTAMP('8-APR-2016'))
And since NULL IS NOT NULL is never true, that gets zero rows. But that's down to your NLS settings. With other format masks it does not have that filter step.
You can get a sense of what's happening if you look at how those to_timestamp() calls are evaluated with your NLS format settings:
alter session set nls_timestamp_format = 'DD-MON-RR HH.MI.SSXFF';
select to_char(to_timestamp('8-APR-2015'), 'YYYY-MM-DD') as from_1,
to_char(to_timestamp('8-APR-2016'), 'YYYY-MM-DD') as to_1,
to_char(to_timestamp('8-APR-2010'), 'YYYY-MM-DD') as from_2,
to_char(to_timestamp('8-APR-2016'), 'YYYY-MM-DD') as to_2
from dual;
FROM_1 TO_1 FROM_2 TO_2
---------- ---------- ---------- ----------
2015-04-08 2016-04-08 2020-04-08 2016-04-08
The first pair of dates looks OK - 2015 is before 2016. But the second 'from' date has come out as 2020, not 2010; and since Oracle is smart enough to realise that 2020 is later than 2016, it knows there can be no data that matches, and adds the impossible condition to short-circuit and avoid redundant data access.
Compare that with a mask that handles four-digit years properly:
alter session set nls_timestamp_format = 'DD-MON-RRRR HH.MI.SSXFF';
select to_char(to_timestamp('8-APR-2015'), 'YYYY-MM-DD') as from_1,
to_char(to_timestamp('8-APR-2016'), 'YYYY-MM-DD') as to_1,
to_char(to_timestamp('8-APR-2010'), 'YYYY-MM-DD') as from_2,
to_char(to_timestamp('8-APR-2016'), 'YYYY-MM-DD') as to_2
from dual;
FROM_1 TO_1 FROM_2 TO_2
---------- ---------- ---------- ----------
2015-04-08 2016-04-08 2010-04-08 2016-04-08
Now the second 'from' date is correct.
The difference is down to how the RR format mask behaves, though this specific behaviour isn't really documented.
What's actually happening is down to Oracle's helpfulness in trying to be flexible in interpreting format masks. As it says in the docs, just under the table of datetime format elements, "Oracle Database converts strings to dates with some flexibility" - but the effects of that are sometimes a bit unexpected.
It's actually the bit after RR that's throwing it out. You can see that with this little demo:
with t as (
select 1998 + level as year from dual connect by level < 16
)
select year, to_char(to_timestamp(to_char(year), 'RR HH'), 'YYYY-MM-DD HH24:MI:SS')
from t;
YEAR TO_CHAR(TO_TIMESTAM
---------- -------------------
1999 1999-04-01 00:00:00
2000 2000-04-01 00:00:00
2001 2020-04-01 01:00:00
2002 2020-04-01 02:00:00
2003 2020-04-01 03:00:00
2004 2020-04-01 04:00:00
2005 2020-04-01 05:00:00
2006 2020-04-01 06:00:00
2007 2020-04-01 07:00:00
2008 2020-04-01 08:00:00
2009 2020-04-01 09:00:00
2010 2020-04-01 10:00:00
2011 2020-04-01 11:00:00
2012 2020-04-01 12:00:00
2013 2013-04-01 00:00:00
The RR mask only seems to look at the first two digits of the year, but when being helpful Oracle also tries to handle four-digit years for you, and that works for 2015 and 2016. It would work for other years too if the mask didn't have a time component. But it does, and Oracle prefers to interpret the third and fourth characters of your four-digit year using the HH part of the mask.
So for 2010, it sees the '10', decides it can interpret that as an HH value, does so, and then converts only the remaining two digits '20' using the RR mask - which it treats as 2020. So you end up with 10am on April 8th 2020. The same thing happens for 2001 through 2012 (2000 escapes, as you can see above, because '00' is not a valid HH value). When you get to 2013, '13' is no longer valid for the HH mask either, so it goes back to treating all four digits as the year. If the NLS format mask had HH24 then it would 'break' for 2013-2023 as well.
The moral is to never rely on NLS settings. (And never use 2-digit years, or 2-digit year masks). Convert strings to dates/timestamp explicitly:
where "date" >= to_timestamp('8-APR-2015', 'DD-MON-YYYY')
and "date" <= to_timestamp('8-APR-2016', 'DD-MON-YYYY');
... though preferably not with month names, as those are also NLS-dependent; you can, however, specify that you want them interpreted as English:
where "date" >= to_timestamp('8-APR-2015', 'DD-MON-YYYY', 'NLS_DATE_LANGUAGE=ENGLISH')
and "date" <= to_timestamp('8-APR-2016', 'DD-MON-YYYY', 'NLS_DATE_LANGUAGE=ENGLISH');
Or even better for fixed values, use ANSI date/timestamp literals:
where "date" >= timestamp '2010-04-08 00:00:00'
and "date" <= timestamp '2016-04-08 00:00:00';

How to average data on periods from a table in SQL

I'm trying to average data over specific periods of time and, at the same time, get an average date from each of these periods.
Having data like:
value | datetime
-------+------------------------
15 | 2015-08-16 01:00:40+02
22 | 2015-08-16 01:01:40+02
16 | 2015-08-16 01:02:40+02
19 | 2015-08-16 01:03:40+02
21 | 2015-08-16 01:04:40+02
18 | 2015-08-16 01:05:40+02
29 | 2015-08-16 01:06:40+02
16 | 2015-08-16 01:07:40+02
16 | 2015-08-16 01:08:40+02
15 | 2015-08-16 01:09:40+02
I would like to obtain something like in one query:
value | datetime
-------+------------------------
18.6 | 2015-08-16 01:03:00+02
18.8 | 2015-08-16 01:08:00+02
where each value is the average of 5 consecutive input values, and each datetime is the middle (or average) of the corresponding 5 input datetimes; 5 represents the interval length n.
I saw some posts that put me on track with avg, group by, and averaging dates in SQL, but I'm still not able to work out what to do exactly.
I'm working with PostgreSQL 9.4.
You would need to share more information, but here is a way to do it. Here is more information on it: HERE
SELECT AVG(value) AS avg_value,
       to_timestamp(AVG(EXTRACT(epoch FROM datetime))) AS avg_datetime
FROM database.table
WHERE datetime > date1
AND datetime < date2;
Something like
SELECT
  to_timestamp(round(AVG(EXTRACT(epoch from datetime)))) as middleDate,
  avg(value) AS avgValue
FROM
  myTable
GROUP BY
  (id) / ((SELECT Count(*) FROM myTable) / 100);
roughly fills my requirements, with 100 controlling the length of the averaged intervals (it is approximately the number of output rows).
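If you want groups of exactly n = 5 consecutive rows, as in the example in the question, one hedged option (plain PostgreSQL, assuming rows are bucketed in datetime order) is to number the rows and integer-divide:
SELECT to_timestamp(round(avg(extract(epoch FROM datetime)))) AS middleDate,
       avg(value) AS avgValue
FROM (
  SELECT value, datetime,
         (row_number() OVER (ORDER BY datetime) - 1) / 5 AS grp  -- 0,0,0,0,0,1,1,...
  FROM myTable
) sub
GROUP BY grp
ORDER BY grp;
-- for the sample data this yields the two averages 18.6 and 18.8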

Group records by time

I have a table containing a datetime column and some misc other columns. The datetime column represents an event happening. It either contains a time (the event happened at that time) or NULL (the event didn't happen).
I now want to count the number of records happening in specific intervals (15 minutes), but do not know how to do that.
example:
id | time | foreign_key
1 | 2012-01-01 00:00:01 | 2
2 | 2012-01-01 00:02:01 | 4
3 | 2012-01-01 00:16:00 | 1
4 | 2012-01-01 00:17:00 | 9
5 | 2012-01-01 00:31:00 | 6
I now want to create a query that creates a result set similar to:
interval | COUNT(id)
2012-01-01 00:00:00 | 2
2012-01-01 00:15:00 | 2
2012-01-01 00:30:00 | 1
Is this possible in SQL or can anyone advise what other tools I could use? (e.g. exporting the data to a spreadsheet program would not be a problem)
Give this a try:
select datetime((strftime('%s', time) / 900) * 900, 'unixepoch') interval,
count(*) cnt
from t
group by interval
order by interval
Check the fiddle here.
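To see why this works, take row 4 from the example: 2012-01-01 00:17:00 is epoch 1325377020 (treated as UTC); 1325377020 / 900 = 1472641 with integer truncation, and 1472641 * 900 = 1325376900, which is 2012-01-01 00:15:00, the start of its quarter-hour.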
I have limited SQLite background (and no practice instance), but I'd try grabbing the minutes using
strftime( FORMAT, TIMESTRING, MOD, MOD, ...)
with the %M format specifier (http://souptonuts.sourceforge.net/readme_sqlite_tutorial.html)
Then divide that by 15 and get the FLOOR of your quotient to figure out which quarter-hour you're in (e.g., 0, 1, 2, or 3)
cast(x as int)
Getting the floor value of a number in SQLite?
Strung together it might look something like:
Select cast(strftime('%M', your_time_field) as int) / 15 from your_table
(the cast happens before dividing by 15, since strftime returns a string; the integer division then gives you the floor)
Then group by the quarter-hour.
Sorry I don't have exact syntax for you, but that approach should enable you to get the functional groupings, after which you can massage the output to make it look how you want.
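Strung together, a hedged complete version of that approach (SQLite, using the table and column names from the question) might build a label from the hour and the quarter-hour start minute:
SELECT strftime('%Y-%m-%d %H:', time) ||
       printf('%02d', (CAST(strftime('%M', time) AS INTEGER) / 15) * 15) AS quarter,
       COUNT(*) AS cnt
FROM t
GROUP BY quarter
ORDER BY quarter;
-- e.g. 2012-01-01 00:17:00 falls into quarter '2012-01-01 00:15'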