IF between time in SAP HANA - SQL

SELECT SERIES_ROUND(TO_TIMESTAMP(DATE_TIME,'YYYYMMDDHH24MISS'),'INTERVAL 30 MINUTE') as TIME
returns a result like "Mon Nov 02 2020 00:00:00 GMT+0100 (CET)".
What I need is to return the shift name:
if the time is between 08:00:00 and 16:30:00, return SHIFT A
if the time is between 16:30:00 and 00:15:00, return SHIFT B
if the time is between 00:15:00 and 08:00:00, return SHIFT C
Any help appreciated.

Something like the following should work:
SELECT CASE WHEN CAST(TO_TIMESTAMP(DATE_TIME,'YYYYMMDDHH24MISS') AS TIME) BETWEEN TO_TIME ('08:00:00', 'HH24:MI:SS') AND TO_TIME ('16:30:00', 'HH24:MI:SS') THEN 'Shift A'
WHEN CAST(TO_TIMESTAMP(DATE_TIME,'YYYYMMDDHH24MISS') AS TIME) BETWEEN TO_TIME ('00:15:00', 'HH24:MI:SS') AND TO_TIME ('08:00:00', 'HH24:MI:SS') THEN 'Shift C'
ELSE 'Shift B' END

This reads a bit like a query that should process the clock-in times of workers or something similar.
Therefore I used clock_ins as the name for the timestamped events in my example code.
To make this work correctly, it helps to perform the different computations and data type conversions step by step:
WITH clock_ins AS (
SELECT 1 AS ID, to_timestamp('20200101080012', 'YYYYMMDDHH24MISS' ) AS DATE_TIME FROM DUMMY
UNION ALL
SELECT 2 AS ID, to_timestamp('20200101043135', 'YYYYMMDDHH24MISS' ) AS DATE_TIME FROM DUMMY
UNION ALL
SELECT 3 AS ID, to_timestamp('20200101120056', 'YYYYMMDDHH24MISS' ) AS DATE_TIME FROM DUMMY
UNION ALL
SELECT 4 AS ID, to_timestamp('20200101170012', 'YYYYMMDDHH24MISS' ) AS DATE_TIME FROM DUMMY
),
clock_ins_rounded AS (
SELECT
ID
, SERIES_ROUND(DATE_TIME,'INTERVAL 30 MINUTE') AS clock_in
, to_time(SERIES_ROUND(DATE_TIME,'INTERVAL 30 MINUTE')) AS clock_in_time
FROM
CLOCK_INS)
SELECT
id
, CLOCK_IN
, clock_in_time
, CASE
WHEN clock_in_time BETWEEN to_time('08:00:00') AND to_time('16:30:00') THEN
'SHIFT A'
WHEN ( clock_in_time >= to_time('16:30:00')
OR clock_in_time <= to_time('00:15:00')) THEN
'SHIFT B'
WHEN clock_in_time BETWEEN to_time('00:15:00') AND to_time('08:00:00') THEN
'SHIFT C'
ELSE
'NO SHIFT'
END AS SHIFT
FROM
clock_ins_rounded;
ID|CLOCK_IN |CLOCK_IN_TIME|SHIFT |
--|-------------------|-------------|-------|
1|2020-01-01 08:00:00| 08:00:00|SHIFT A|
2|2020-01-01 04:30:00| 04:30:00|SHIFT C|
3|2020-01-01 12:00:00| 12:00:00|SHIFT A|
4|2020-01-01 17:00:00| 17:00:00|SHIFT B|
The first WITH-clause CLOCK_INS generates the test data and includes at least one case for every rule the OP mentioned.
The second WITH-clause CLOCK_INS_ROUNDED performs the "rounding" to 30-minute intervals via the SERIES_ROUND function (good choice!). It also takes the result of this rounding and creates a time-of-day version of it via the TO_TIME() function. This column is then made available as CLOCK_IN_TIME.
The final and main SELECT clause projects the computed CLOCK_IN timestamp, the CLOCK_IN_TIME column, and also the assigned shifts.
The shifts are assigned via a CASE expression. To make this work correctly it is important to use TIME data types for the comparisons. Leaving out the TO_TIME() conversion for each of the conditions would lead to implicit type conversion, and the comparisons would be evaluated as string comparisons, which is NOT what you want for this use case.
The other important bit to notice is the rule for SHIFT B. This rule describes an interval that starts before and ends after midnight. Because we are dealing with time-of-day data here, not with intervals, a BETWEEN comparison would not work correctly for the case of BETWEEN 16:30 AND 00:15.
Therefore, we need to take care of this in the code and translate the condition to LATER THAN 16:30 OR EARLIER THAN 00:15.
Finally, it is a good idea to include an ELSE branch in CASE expressions, so that it is easy to detect data that is not handled by any rule, or rules that fail to "catch" data they should have caught.
In summary, these are the main points to consider for this requirement:
implement one computation at a time and avoid overly nested expressions (to make the code easier to read and understand)
use an appropriate data type for the comparison semantics (e.g. use a TIME data type for time comparisons like "earlier" or "later"). This applies both to the data being processed and to the comparison values in the code.
when dealing with time-related data types, make sure to understand the differences between date-time, interval, and time-of-day data
when using CASE expressions, include an ELSE branch for data that is not handled by the other rules
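To make the rounding and the wrap-around rule concrete, the same logic can be sketched in runnable Python (the helpers round_to_30min and assign_shift are illustrative names, not HANA functions):

```python
from datetime import datetime, time, timedelta

def round_to_30min(ts):
    """Round a datetime to the nearest 30-minute boundary, like
    SERIES_ROUND(..., 'INTERVAL 30 MINUTE')."""
    half = timedelta(minutes=30)
    base = ts.replace(minute=0, second=0, microsecond=0)
    return base + round((ts - base) / half) * half

def assign_shift(t):
    """Assign a shift to a time-of-day value; SHIFT B wraps past
    midnight, so it needs an OR of two comparisons, not BETWEEN."""
    if time(8, 0) <= t <= time(16, 30):
        return 'SHIFT A'
    if t >= time(16, 30) or t <= time(0, 15):
        return 'SHIFT B'
    return 'SHIFT C'
```

For example, assign_shift(round_to_30min(datetime(2020, 1, 1, 4, 31, 35)).time()) yields 'SHIFT C', matching row 2 of the result set above.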

Related

Get time difference between Log records

I have a log table that tracks a bug's status. I would like to extract the amount of time spent when the log changes from OPEN (OldStatus) to FIXED or REQUEST CLOSE (NewStatus). Right now, my query looks at the max and min of the log, which does not produce the result I want. For example, bug #1 was fixed in 2 hours on 2020-01-01, then reopened (OldStatus) and got a REQUEST CLOSE (NewStatus) in 3 hours on 2020-12-12. I want the query to return two rows, each with the date and the number of hours spent to fix the bug since its most recent open time.
Here's the data:
CREATE TABLE Log (
BugID int,
CurrentTime timestamp,
Person varchar(20),
OldStatus varchar(20),
NewStatus varchar(20)
);
INSERT INTO Log (BugID, CurrentTime, Person, OldStatus, NewStatus)
VALUES (1, '2020-01-01 00:00:00', 'A', 'OPEN', 'In Progress'),
(1, '2020-01-01 00:00:01', 'A', 'In Progress', 'REVIEW In Progress'),
(1, '2020-01-01 02:00:00', 'A', 'In Progress', 'FIXED'),
(1, '2020-01-01 06:00:00', 'B', 'OPEN', 'In Progress'),
(1, '2020-01-01 00:00:00', 'B', 'In Progress', 'REQUEST CLOSE');
SELECT DATEDIFF(HOUR, start_time, finish_time) AS Time_Spent_Min
FROM (
SELECT BugId,
MAX(CurrentTime) as finish_time,
MIN(CurrentTime) as start_time
FROM Log
WHERE (OldStatus = 'OPEN' AND NewStatus = 'In Progress') OR NewStatus = 'FIXED'
) AS TEMP
This is a type of gaps-and-islands problem.
There are a number of solutions, here is one:
We need to assign a grouping ID to each island of OPEN -> In Progress. We can use a windowed conditional COUNT to get a grouping number for each start point.
To get a grouping for the end point, we need to pull in the previous row's NewStatus using LAG, then do another conditional COUNT on that.
We then simply group by BugId and our calculated grouping, and return the start and end times.
WITH IslandStart AS (
SELECT *,
COUNT(CASE WHEN OldStatus = 'OPEN' AND NewStatus = 'In Progress' THEN 1 END)
OVER (PARTITION BY BugID ORDER BY CurrentTime ROWS UNBOUNDED PRECEDING) AS GroupStart,
LAG(NewStatus) OVER (PARTITION BY BugID ORDER BY CurrentTime) AS Prev_NewStatus
FROM Log l
),
IslandEnd AS (
SELECT *,
COUNT(CASE WHEN Prev_NewStatus IN ('CLAIM FIXED', 'REQUEST CLOSE') THEN 1 END)
OVER (PARTITION BY BugID ORDER BY CurrentTime ROWS UNBOUNDED PRECEDING) AS GroupEnd
FROM IslandStart l
)
SELECT
BugId,
MAX(CurrentTime) as finish_time,
MIN(CurrentTime) as start_time,
DATEDIFF(minute, MIN(CurrentTime), MAX(CurrentTime)) AS Time_Spent_Min
FROM IslandEnd l
WHERE GroupStart = GroupEnd + 1
GROUP BY
BugId,
GroupStart;
Notes:
timestamp is not meant for actual dates and times; use datetime or datetime2 instead
You may need to adjust the COUNT condition if OPEN -> In Progress is not always the first row of an island
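For illustration only, the numbering trick behind the windowed conditional COUNT can be sketched in plain Python (number_islands is a made-up name; in the database this work is done by the window functions above):

```python
def number_islands(rows):
    """rows: (old_status, new_status) tuples ordered by CurrentTime for one
    bug. A running count of island-start rows numbers each island, which is
    what the conditional COUNT ... OVER (ORDER BY CurrentTime) computes."""
    group = 0
    out = []
    for old, new in rows:
        if old == 'OPEN' and new == 'In Progress':
            group += 1  # a new OPEN -> In Progress island starts here
        out.append((group, old, new))
    return out
```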
You have a few competing factors here:
You should use a SmallDateTime, DateTime2 or DateTimeOffset typed column to store the actual time in the log. These types allow calculating the difference between values using DateDiff() and DateAdd() and other date/time-based comparison logic, whereas Timestamp is designed to be used as a concurrency token: you can use it to determine whether one record is more recent than another, but you shouldn't try to use it to determine the actual time of the event.
What is difference between datetime and timestamp
DATETIMEOFFSET, DATE, TIME, SMALLDATETIME, DATETIME, SYSDATETIME and SYSUTCDATETIME
You have not explained the expected workflow; we can only assume that the flow is [OPEN] => [In Progress] => [CLAIM FIXED], with 'In Progress' as an interim state that the question itself never mentions. What actually happens here is that this structure can really only tell you the time spent in the 'In Progress' state. That is probably OK for your needs, as it is the time spent actually working, but it is important to recognise that we do not know when the bug was changed to 'OPEN' in the first place, unless that is also logged - and we would need to see that data to account for it.
Your example dataset does not cover enough combinations for you to notice that the existing logic will fail as soon as you add more than one bug. What is more, you have asked to calculate the number of hours, but your example data only varies by minutes and has no example where the bug is completed at all.
Without a realistic set of data to test with, you will find it hard to debug your logic, and hard to convince yourself that it actually works before you execute it against a larger dataset. It can help to have a scripted scenario, much like your post here, but you should create data that reflects that script.
You use 'FIXED' in your example, but 'CLAIM FIXED' in query, so which one is it?
Step 1: Structure
Change the datatype of CurrentTime to a DateTime based column. Your application logic may drive requirements here. If your system is cloud based or international, then you may see benefits from using DateTimeOffset instead of having to convert into UTC, otherwise if you do not need high precision timing in your logs, it is very common to use SmallDateTime for logging.
Many ORM and application frameworks will allow you to configure a DateTime based column as the concurrency token, if you need one at all. If you are not happy using a lower-precision value for concurrency, you could keep the two columns side by side; to compare the time difference between two records, we need a DateTime based type.
In the case of a log, we rarely allow or expect log records to be edited. If your logs are read-only, a concurrency token may not be necessary at all, especially if you only use it to detect concurrent edits of individual records.
NOTE: You should consider using an enum or FK for the Status concept. Already in your example dataset there was a typo ('In Progerss'); a numeric comparison for the status may provide some performance benefits, and it will help to prevent spelling mistakes, especially when FK or lookup lists are used from the application logic.
Step 2: Example Data
If the requirement is to calculate the number of hours spent between records, then we need to create some simple examples that show a difference of a few hours, and then add some examples where the same bug is opened, fixed and then re-opened.
bug #1 was fixed in 2 hours on 2020-01-01, then reopened and got fixed in 3 hours on 2020-12-12
The following table shows the known data states and the expected hours; we need to throw in a few more data stories to validate that the end query handles obvious boundary conditions like multiple bugs and overlapping dates:
BUG #|Time               |Previous State|New State  |Hrs In Progress|
-----|-------------------|--------------|-----------|---------------|
1    |2020-01-01 08:00:00|OPEN          |In Progress|               |
1    |2020-01-01 10:00:00|In Progress   |FIXED      |(2 hrs)        |
1    |2020-12-10 09:00:00|FIXED         |OPEN       |               |
1    |2020-12-12 09:30:00|OPEN          |In Progress|               |
1    |2020-12-12 12:30:00|In Progress   |FIXED      |(3 hrs)        |
2    |2020-03-17 11:15:00|OPEN          |In Progress|               |
2    |2020-03-17 14:30:00|In Progress   |FIXED      |(3.25 hrs)     |
3    |2020-08-22 10:00:00|OPEN          |In Progress|               |
3    |2020-08-22 16:30:00|In Progress   |FIXED      |(6.5 hrs)      |
Step 3: Query
What is interesting to notice here is that 'In Progress' is actually the significant state to query against. What we actually want is to see all rows where the OldStatus is 'In Progress', and to link each such row to the most recent record before it with the same BugID and a NewStatus equal to 'In Progress'.
What is interesting in the above table is that not all the expected hours are whole numbers (integers), which makes using DateDiff a little tricky, because it only counts boundary changes, not the total elapsed time. To highlight this, look at the next two queries; the first one spans 59 minutes, the other only 2 minutes:
SELECT DateDiff(HOUR, '2020-01-01 08:00:00', '2020-01-01 08:59:00') -- 0 (59m)
SELECT DateDiff(HOUR, '2020-01-01 08:59:00', '2020-01-01 09:01:00') -- 1 (2m)
However, the SQL results show the first query as 0 hours, while the second query returns 1 hour. That is because DateDiff only counts HOUR boundary crossings; it is not actually subtracting the time values at all.
To work around this, we can use MINUTE or MI as the date part argument and divide the result by 60.
SELECT CAST(ROUND(DateDiff(MI, '2020-01-01 08:00:00', '2020-01-01 08:59:00')/60.0,2) as Numeric(10,2)) -- 0.98
SELECT CAST(ROUND(DateDiff(MI, '2020-01-01 08:59:00', '2020-01-01 09:01:00')/60.0,2) as Numeric(10,2)) -- 0.03
You can choose to format this in other ways by calculating the modulo to get the minutes in whole numbers instead of a fraction but that is out of scope for this post, understanding the limitations of DateDiff is what is important to take this further.
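To experiment with this behaviour outside SQL Server, here is a Python sketch that mimics the two approaches (datediff_hour and fractional_hours are illustrative helpers, not T-SQL):

```python
from datetime import datetime, timedelta

def datediff_hour(start, end):
    """Mimic DATEDIFF(HOUR, ...): count hour-boundary crossings,
    not elapsed time."""
    trunc = lambda d: d.replace(minute=0, second=0, microsecond=0)
    return (trunc(end) - trunc(start)) // timedelta(hours=1)

def fractional_hours(start, end):
    """The workaround: whole minutes divided by 60, rounded to 2 places."""
    minutes = (end - start) // timedelta(minutes=1)
    return round(minutes / 60.0, 2)
```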
There are a number of ways to correlate a previous record within the same table. If you need other values from that record, you might use a join with a sub-query to return the TOP 1 of all the records before the current one, you could use window queries, or a CROSS APPLY to perform a nested lookup. The following uses CROSS APPLY, which is NOT standard across all RDBMS, but I feel it keeps MS SQL queries really clean:
SELECT [Fixed].BugID, [start_time], [Fixed].[CurrentTime] as [finish_time]
, DATEDIFF(MI, [start_time], [Fixed].[CurrentTime]) / 60 AS Time_Spent_Hr
, DATEDIFF(MI, [start_time], [Fixed].[CurrentTime]) % 60 AS Time_Spent_Min
FROM Log as Fixed
CROSS APPLY (SELECT MAX(CurrentTime) AS start_time
FROM Log as Started
WHERE Fixed.BugID = Started.BugID
AND Started.NewStatus = 'In Progress'
AND CurrentTime < Fixed.CurrentTime) as Started
WHERE Fixed.OldStatus = 'In Progress'
You can play with this fiddle: http://sqlfiddle.com/#!18/c408d4/3
However the results show this:
BugID|start_time          |finish_time         |Time_Spent_Hr|Time_Spent_Min|
-----|--------------------|--------------------|-------------|--------------|
1    |2020-01-01T08:00:00Z|2020-01-01T10:00:00Z|2            |0             |
1    |2020-12-12T09:30:00Z|2020-12-12T12:30:00Z|3            |0             |
2    |2020-03-17T11:15:00Z|2020-03-17T14:30:00Z|3            |15            |
3    |2020-08-22T10:00:00Z|2020-08-22T16:30:00Z|6            |30            |
If I assume that every "open" is followed by one "fixed" before the next open, then you can basically use lead() to solve this problem.
This version unpivots the data, so you could have "open" and "fixed" in the same row:
select l.*, datediff(hour, currenttime, fixed_time)
from (select v.*,
lead(v.currenttime) over (partition by v.bugid order by v.currenttime) as fixed_time
from log l cross apply
(values (bugid, currentTime, oldStatus),
(bugid, currentTime, newStatus)
) v(bugid, currentTime, status)
where v.status in ('OPEN', 'FIXED')
) l
where status = 'OPEN';
Here is a db<>fiddle, which uses data compatible with your explanation. (Your sample data is not correct.)
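Under that same assumption - every OPEN is followed by a FIXED before the next OPEN - the lead() pairing amounts to zipping consecutive events, which can be sketched in Python (open_to_fixed_hours is an illustrative name):

```python
def open_to_fixed_hours(events):
    """events: (timestamp, status) tuples for one bug, ordered by time,
    already filtered to 'OPEN' and 'FIXED' rows (the unpivot step).
    Pair each OPEN with the following event, as lead() does."""
    return [(t1, t2, (t2 - t1).total_seconds() / 3600)
            for (t1, s1), (t2, _) in zip(events, events[1:])
            if s1 == 'OPEN']
```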

BQ: Select latest date from multiple columns

Good day, all. I wrote a question relating to this earlier, but now I have encountered another problem.
I have to calculate the timestamp difference between the install_time and contributor_time columns. HOWEVER, I have three contributor_time columns, and I need to select the latest time from those columns first and then subtract it from install_time.
Sample Data
users|install_time|contributor_time_1|contributor_time_2|contributor_time_3|
-----|------------|------------------|------------------|------------------|
1    |8:00        |7:45              |7:50              |7:55              |
2    |10:00       |9:15              |9:45              |9:30              |
3    |11:00       |10:30             |null              |null              |
For example, in the table above I would need to select contributor_time_3 and subtract it from install_time for user 1. For user 2, I would do the same, but with contributor_time_2.
Sample Results
users|install_time|time_diff_min|
-----|------------|-------------|
1    |8:00        |5            |
2    |10:00       |15           |
3    |11:00       |30           |
The problem I am facing is that 1) the contributor_time columns are in string format and 2) some of them have 'null' string values (which means that I cannot cast them to a timestamp).
I created a query, but I am facing an error stating that I cannot subtract a string from a timestamp. So I added safe_cast; however, the time_diff_min results only show when all three contributor_time columns cast to a timestamp. For example, in the sample table above, only the first two rows are returned.
The query I have so far is below:
SELECT
users,
install_time,
TIMESTAMP_DIFF(install_time, greatest(contributor_time_1, contributor_time_2, contributor_time_3), MINUTE) as ctct_min
FROM
(SELECT
users,
install_time,
safe_cast(contributor_time_1 as timestamp) as contributor_time_1,
safe_cast(contributor_time_2 as timestamp) as contributor_time_2,
safe_cast(contributor_time_3 as timestamp) as contributor_time_3,
FROM
(SELECT
users,
install_time,
case when contributor_time_1 = 'null' then '0' else contributor_time_1 end as contributor_time_1,
....
FROM datasource
Any help to point me in the right direction is appreciated! Thank you in advance!
Consider below
select users, install_time,
time_diff(
parse_time('%H:%M',install_time),
greatest(
parse_time('%H:%M',contributor_time_1),
parse_time('%H:%M',contributor_time_2),
parse_time('%H:%M',contributor_time_3)
),
minute) as time_diff_min
from `project.dataset.table`
If applied to the sample data in your question, the output matches the Sample Results above.
Above can be refactored slightly into below
create temp function latest_time(arr any type) as ((
select parse_time('%H:%M',val) time
from unnest(arr) val
order by time desc
limit 1
));
select users, install_time,
time_diff(
parse_time('%H:%M',install_time),
latest_time([contributor_time_1, contributor_time_2, contributor_time_3]),
minute) as time_diff_min
from `project.dataset.table`
This is less verbose, with no redundant parsing, and gives the same result - so it is just a matter of preference.
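For illustration, the latest_time idea - parse each value, take the latest - can be sketched in Python; unlike a plain GREATEST over the parsed values, this sketch explicitly skips the 'null' strings (parse_time_or_none and latest_time are made-up names):

```python
from datetime import datetime

def parse_time_or_none(s):
    """Parse an 'H:MM' string, treating the literal string 'null' as missing."""
    if s is None or s == 'null':
        return None
    return datetime.strptime(s, '%H:%M').time()

def latest_time(values):
    """Latest non-missing time of day among the contributor_time columns."""
    parsed = [t for t in map(parse_time_or_none, values) if t is not None]
    return max(parsed) if parsed else None
```

With the sample row for user 3, latest_time(['10:30', 'null', 'null']) returns 10:30 instead of failing on the 'null' strings.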
You can use greatest():
select t.*,
time_diff(install_time, greatest(contributor_time_1, contributor_time_2, contributor_time_3), minute) as diff_min
from t;
Note: this assumes that the values are never NULL, which seems reasonable based on your sample data.

Oracle SQL: How to best go about counting how many values were in time intervals? Database query vs. pandas (or more efficient libraries)?

I currently have to wrap my head around programming the following task.
Situation: suppose we have one column with time data (Year-Month-Day Hours-Minutes). Our program gets the input (weekday, starttime, endtime, timeslot) and should return the interval (of length timeslot) with the fewest values. For further context, the database has several million entries.
So our program would be specified as
def calculate_optimal_window(weekday, starttime, endtime, timeslot):
return optimal_window
Example: suppose we want to input
weekday = Monday, starttime = 10:00, endtime = 12:00, timeslot = 30 minutes.
Here we want to count how many entries there are between 10:00 and 12:00, compute the number of values in every single 30-minute slot (i.e. 10:00-10:30, 10:01-10:31, etc.), and in the end return the slot with the fewest values. How would you go about formulating an efficient query?
Since I'm working with an Oracle SQL database, my second question is: would it be more efficient to work with libraries like Dask or Vaex to get the filtering and counting done? Where is the bottleneck in this situation?
Happy to provide more information if the formulation was too blurry.
All the best.
This part:
Since I'm working with an Oracle SQL database, my second question is:
would it be more efficient to work with libraries like Dask or Vaex to
get the filtering and counting done? Where is the bottleneck in this
situation?
Depending on your server's specs and the cluster/machine you have available for Dask, it is rather likely that the bottleneck in your analysis would be the transfer of data between the SQL and Dask workers, even in the (likely) case that this can be efficiently parallelised. From the DB's point of view, selecting data and serialising it is likely at least as expensive as counting in a relatively small number of time bins.
I would start by investigating how long the process takes with SQL alone, and whether this is acceptable, before moving the analysis to Dask. Usual rules would apply: having good indexing and sharding on the time index.
You should at least do the basic filtering and counting in the SQL query. With a simple predicate, Oracle can decide whether to use an index or a partition and potentially reduce the database processing time. And sending fewer rows will significantly decrease the network overhead.
For example:
select trunc(the_time, 'MI') the_minute, count(*) the_count
from test1
where the_time between timestamp '2021-01-25 10:00:00' and timestamp '2021-01-25 11:59:59'
group by trunc(the_time, 'MI')
order by the_minute desc;
(The trickiest part of these queries will probably be off-by-one issues. Do you really want "between 10:00 and 12:00", or do you want "between 10:00 and 11:59:59"?)
Optionally, you can perform the entire calculation in SQL. I would wager the SQL version will be slightly faster, again because of the network overhead. But sending one result row versus 120 aggregate rows probably won't make a noticeable difference unless this query is frequently executed.
At this point, the question veers into the more subjective question about where to put the "business logic". I bet most programmers would prefer your Python solution to my query. But one minor advantage of doing all the work in SQL is keeping all of the weird date logic in one place. If you process the results in multiple steps there are more chances for an off-by-one error.
--Time slots with the smallest number of rows.
--(There will be lots of ties because the data is so boring.)
with dates as
(
--Enter literals or bind variables here:
select
cast(timestamp '2021-01-25 10:00:00' as date) begin_date,
cast(timestamp '2021-01-25 11:59:59' as date) end_date,
30 timeslot
from dual
)
--Choose the rows with the smallest counts.
select begin_time, end_time, total_count
from
(
--Rank the time slots per count.
select begin_time, end_time, total_count,
dense_rank() over (order by total_count) smallest_when_1
from
(
--Counts per timeslot.
select begin_time, end_time, sum(the_count) total_count
from
(
--Counts per minute.
select trunc(the_time, 'MI') the_minute, count(*) the_count
from test1
where the_time between (select begin_date from dates) and (select end_date from dates)
group by trunc(the_time, 'MI')
order by the_minute desc
) counts
join
(
--Time ranges.
select
begin_date + ((level-1)/24/60) begin_time,
begin_date + ((level-1)/24/60) + (timeslot/24/60) end_time
from dates
connect by level <=
(
--The number of different time ranges.
select (end_date - begin_date) * 24 * 60 - timeslot + 1
from dates
)
) time_ranges
on the_minute between begin_time and end_time
group by begin_time, end_time
)
)
where smallest_when_1 = 1
order by begin_time;
You can run a db<>fiddle here.
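The shape of the calculation - counts per minute, then a sliding window over those counts - can also be sketched in Python to sanity-check results (least_busy_slot is an illustrative name; ties resolve to the earliest slot here):

```python
from collections import Counter
from datetime import datetime, timedelta

def least_busy_slot(times, start, end, slot_minutes):
    """Return (window_start, count) for the slot_minutes window inside
    [start, end] containing the fewest events, sliding one minute at a time."""
    per_min = Counter(t.replace(second=0, microsecond=0) for t in times)
    best = None
    cur = start
    while cur + timedelta(minutes=slot_minutes) <= end:
        count = sum(per_min[cur + timedelta(minutes=m)]
                    for m in range(slot_minutes))
        if best is None or count < best[1]:
            best = (cur, count)
        cur += timedelta(minutes=1)
    return best
```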

Multiple aggregate sums from different conditions in one sql query

Whereas I believe this is a fairly general SQL question, I am working in PostgreSQL 9.4 without an option to use other database software, and thus request that any answer be compatible with its capabilities.
I need to be able to return multiple aggregate totals from one query, such that each sum is in a new row and each grouping is determined by a unique span of time, e.g. WHERE time_stamp BETWEEN '2016-02-07' AND '2016-02-14'. The number of records that satisfy the WHERE clause is unknown and may be zero, in which case ideally the result is "0". This is what I have worked out so far:
(
SELECT SUM(minutes) AS min
FROM downtime
WHERE time_stamp BETWEEN '2016-02-07' AND '2016-02-14'
)
UNION ALL
(
SELECT SUM(minutes)
FROM downtime
WHERE time_stamp BETWEEN '2016-02-14' AND '2016-02-21'
)
UNION ALL
(
SELECT SUM(minutes)
FROM downtime
WHERE time_stamp BETWEEN '2016-02-28' AND '2016-03-06'
)
UNION ALL
(
SELECT SUM(minutes)
FROM downtime
WHERE time_stamp BETWEEN '2016-03-06' AND '2016-03-13'
)
UNION ALL
(
SELECT SUM(minutes)
FROM downtime
WHERE time_stamp BETWEEN '2016-03-13' AND '2016-03-20'
)
UNION ALL
(
SELECT SUM(minutes)
FROM downtime
WHERE time_stamp BETWEEN '2016-03-20' AND '2016-03-27'
)
Result:
min
---+-----
1 | 119
2 | 4
3 | 30
4 |
5 | 62
6 | 350
That query gets me almost the exact result that I want; certainly good enough in that I can do exactly what I need with the results. Time spans with no records are blank but that was predictable, and whereas I would prefer "0" I can account for the blank rows in software.
But, while it isn't terrible for the 6 weeks that it represents, I want to be flexible and able to do the same thing for different time spans and for a different number of data points, such as each day in a week, each week in 3 or 6 months, each month in 1 or 2 years, etc. As written above, it feels as if it is going to get tedious fast: for instance, 1-week spans over a 2-year period means 104 sub-queries.
What I'm after is a more elegant way to get the same (or similar) result.
I also don't know if doing 104 iterations of a similar query to the above (vs. the 6 that it does now) is a particularly efficient usage.
Ultimately I am going to write some code which will help me build (and thus abstract away) the long, ugly query--but it would still be great to have a more concise and scale-able query.
In Postgres, you can generate a series of times and then use these for the aggregation:
select g.dte, coalesce(sum(dt.minutes), 0) as minutes
from generate_series('2016-02-07'::timestamp, '2016-03-20'::timestamp, interval '7 day') g(dte) left join
downtime dt
on dt.time_stamp >= g.dte and dt.time_stamp < g.dte + interval '7 day'
group by g.dte
order by g.dte;
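The generate_series plus LEFT JOIN pattern amounts to bucketing rows into fixed 7-day bins and zero-filling the empty ones; here is a Python sketch of the same logic (weekly_sums is a made-up helper):

```python
from datetime import datetime, timedelta

def weekly_sums(rows, start, weeks):
    """rows: (time_stamp, minutes) pairs. Returns (bin_start, total) for
    each 7-day bin, with 0 for bins that match no rows."""
    bins = [start + timedelta(days=7 * i) for i in range(weeks)]
    totals = {b: 0 for b in bins}
    for ts, minutes in rows:
        idx = (ts - start).days // 7
        if 0 <= idx < weeks:
            totals[bins[idx]] += minutes
    return [(b, totals[b]) for b in bins]
```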

How to filter table to date when it has a timestamp with time zone format?

I have a very large dataset - records in the hundreds of millions/billions.
I would like to filter the data in this column - I am only showing 2 records of millions:
arrival_time
2019-04-22 07:36:09.870+00
2019-06-07 09:46:09.870+00
How can I filter the data in this column to only the date part? That is, I would like to filter where the arrival_time is 2019-04-22, as this would give me the first record and any other records with the matching date of 2019-04-22.
I have tried casting the column (timestamp::date = '2019-04-22'), but this is costly and does not work well given the vast number of records.
sample code is:
select
*
from
mytable
where
arrival_time::timestamp::date = '2019-09-30'
Again, this is very costly, as the cast to date happens before the filtering!
Any ideas? I am using PostgreSQL and pgAdmin 4.
This query:
where (arrival_time::timestamp)::date = '2019-09-30'
is converting arrival_time to another type. That generally precludes the use of an index and makes it harder for the optimizer to choose the best execution path.
Instead, compare to same data type:
where arrival_time >= '2019-09-30'::timestamp and
      arrival_time < ('2019-09-30'::timestamp + interval '1 day')
You can try to filter for the upper and lower bounds of that day.
...
WHERE arrival_time >= '2019-04-22'::timestamp
AND arrival_time < '2019-04-23'::timestamp
...
That way, an index on arrival_time can be used, which should help improve performance.
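The half-open range trick generalises to any day: compute [day 00:00, next day 00:00) once and compare the raw timestamps against those bounds, instead of casting every stored value. A small Python sketch (day_bounds and on_day are illustrative names):

```python
from datetime import date, datetime, timedelta

def day_bounds(day):
    """Half-open [00:00 of day, 00:00 of next day) bounds, so the stored
    timestamp column can be compared as-is (index-friendly in SQL)."""
    start = datetime(day.year, day.month, day.day)
    return start, start + timedelta(days=1)

def on_day(ts, day):
    start, end = day_bounds(day)
    return start <= ts < end
```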