Create DateFirst in Select - sql

I have a SELECT statement that requires DATEFIRST = 1 (Monday).
In the US the default is 7 (Sunday).
How can I embed the DATEFIRST behaviour in the SELECT statement itself so I can create it as a view?
SET DATEFIRST 1;
SELECT
    T_APPLICANT.APPL_ID AS empID,
    T_APPLICANT.APPL_LASTNAME,
    T_APPLICANT.APPL_FIRSTNAME,
    T_APPLICANT_ASSIGNMENT.ASS_STARTDATE,
    DATEPART(ww, dbo.T_APPLICANT_ASSIGNMENT.ASS_STARTDATE) AS WeekNo,
    DATEPART(WEEKDAY, dbo.T_APPLICANT_ASSIGNMENT.ASS_STARTDATE) AS WeekDay,
    DATEPART(ww, GETDATE()) AS CurWeekNo,
    T_APPLICANT_ASSIGNMENT.ASS_HOURS AS Total_Assigned_hrs,
    T_APPLICANT_ASSIGNMENT.ASS_BILL AS AvgBill_Rate,
    T_APPLICANT_ASSIGNMENT.ASS_PAY AS AvgPay_Rate,
    (T_APPLICANT_ASSIGNMENT.ASS_HOURS * T_APPLICANT_ASSIGNMENT.ASS_PAY) AS Total_AmtPaid,
    (T_APPLICANT_ASSIGNMENT.ASS_HOURS * T_APPLICANT_ASSIGNMENT.ASS_BILL) AS Total_AmtBilled,
    (LTRIM(STR(DATEPART(yy, T_APPLICANT_ASSIGNMENT.ASS_STARTDATE))) + '-'
     + LTRIM(STR(DATEPART(M, T_APPLICANT_ASSIGNMENT.ASS_STARTDATE)))
    ) AS YearMo
FROM
    T_APPLICANT
    RIGHT OUTER JOIN T_APPLICANT_ASSIGNMENT
        ON T_APPLICANT.APPL_ID = T_APPLICANT_ASSIGNMENT.APPL_ID
WHERE
    DATEPART(ww, dbo.T_APPLICANT_ASSIGNMENT.ASS_STARTDATE)
        BETWEEN DATEPART(ww, GETDATE()) AND DATEPART(ww, GETDATE()) + 1
    AND DATEPART(yy, T_APPLICANT_ASSIGNMENT.ASS_STARTDATE) = DATEPART(yy, GETDATE())
    AND ASS_STATUS = 'A';

Unless proven otherwise, you can't SET DATEFIRST inside a view, nor inside a user-defined function. So how do you get a view that returns week and weekday numbers as if DATEFIRST were set to 1? By using different calculations.
I haven't figured out yet how to calculate the week number, regardless of the DATEFIRST setting, as if the weeks started on Monday. That's a tricky one. I know one could link to a calendar table with the week numbers, but that's not the goal here.
However, the WEEKDAY can also be calculated without using DATEPART, for example by combining a CASE with a FORMAT, because the names of the weekdays remain the same regardless of the DATEFIRST setting. And an ISO_WEEK also always starts on Monday, so it can be used in the WHERE clause to filter on the current week and the next week.
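The DATEFIRST-independent weekday expression used in the view below can be checked outside SQL Server. A minimal Python sketch, assuming SQL Server's rule that DATEPART(WEEKDAY) returns 1 for the day @@DATEFIRST points at:

```python
# Model SQL Server's DATEPART(WEEKDAY): it returns 1 for the day
# @@DATEFIRST points at. iso_day uses 1 = Monday ... 7 = Sunday.
def datepart_weekday(iso_day, datefirst):
    return (iso_day - datefirst) % 7 + 1

# The view's normalisation: (((DATEPART(WEEKDAY, dt) + @@DATEFIRST - 2) % 7) + 1
def monday_first(weekday, datefirst):
    return ((weekday + datefirst - 2) % 7) + 1

# Regardless of DATEFIRST, the normalised weekday equals the ISO day (Mon = 1).
for datefirst in range(1, 8):
    for iso_day in range(1, 8):
        assert monday_first(datepart_weekday(iso_day, datefirst), datefirst) == iso_day
```

With DATEFIRST = 7, a Monday yields DATEPART(WEEKDAY) = 2, and the expression maps it back to 1; with DATEFIRST = 1, a Monday is already 1 and stays 1.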
create table testdatefirst (
id int primary key not null identity(1,1),
dt date not null
)
GO
with rcte as
(
select cast('2018-12-24' as date) dt
union all
select dateadd(day, 1, dt)
from rcte
where dt < cast('2019-03-01' as date)
)
insert into testdatefirst (dt)
select *
from rcte
order by dt
GO
CREATE view vw_testdatefirst AS
select dt
, FORMAT(dt,'ddd','en-GB') as [dayname]
, DATEPART(WEEKDAY, dt) as [weekday]
, DATEPART(WEEK, dt) as [week]
-- , DATEPART(ISO_WEEK, dt) as [ISO_WEEK]
, case FORMAT(dt,'ddd','en-GB')
when 'Mon' then 1
when 'Tue' then 2
when 'Wed' then 3
when 'Thu' then 4
when 'Fri' then 5
when 'Sat' then 6
when 'Sun' then 7
end as [weekday2]
, (((DATEPART(WEEKDAY, dt) + @@DATEFIRST - 2) % 7) + 1) AS [weekday3]
from testdatefirst
where DATEPART(ISO_WEEK, dt) between DATEPART(ISO_WEEK, '2019-01-01') and DATEPART(ISO_WEEK, '2019-01-01')+1
GO
set datefirst 7;
GO
select @@datefirst as [datefirst];
select * from vw_testdatefirst order by dt;
GO
| datefirst |
| :-------- |
| 7 |
dt | dayname | weekday | week | weekday2 | weekday3
:------------------ | :------ | ------: | ---: | -------: | -------:
31/12/2018 00:00:00 | Mon | 2 | 53 | 1 | 1
01/01/2019 00:00:00 | Tue | 3 | 1 | 2 | 2
02/01/2019 00:00:00 | Wed | 4 | 1 | 3 | 3
03/01/2019 00:00:00 | Thu | 5 | 1 | 4 | 4
04/01/2019 00:00:00 | Fri | 6 | 1 | 5 | 5
05/01/2019 00:00:00 | Sat | 7 | 1 | 6 | 6
06/01/2019 00:00:00 | Sun | 1 | 2 | 7 | 7
07/01/2019 00:00:00 | Mon | 2 | 2 | 1 | 1
08/01/2019 00:00:00 | Tue | 3 | 2 | 2 | 2
09/01/2019 00:00:00 | Wed | 4 | 2 | 3 | 3
10/01/2019 00:00:00 | Thu | 5 | 2 | 4 | 4
11/01/2019 00:00:00 | Fri | 6 | 2 | 5 | 5
12/01/2019 00:00:00 | Sat | 7 | 2 | 6 | 6
13/01/2019 00:00:00 | Sun | 1 | 3 | 7 | 7
set datefirst 1;
GO
select @@datefirst as [datefirst];
select * from vw_testdatefirst order by dt;
GO
| datefirst |
| :-------- |
| 1 |
dt | dayname | weekday | week | weekday2 | weekday3
:------------------ | :------ | ------: | ---: | -------: | -------:
31/12/2018 00:00:00 | Mon | 1 | 53 | 1 | 1
01/01/2019 00:00:00 | Tue | 2 | 1 | 2 | 2
02/01/2019 00:00:00 | Wed | 3 | 1 | 3 | 3
03/01/2019 00:00:00 | Thu | 4 | 1 | 4 | 4
04/01/2019 00:00:00 | Fri | 5 | 1 | 5 | 5
05/01/2019 00:00:00 | Sat | 6 | 1 | 6 | 6
06/01/2019 00:00:00 | Sun | 7 | 1 | 7 | 7
07/01/2019 00:00:00 | Mon | 1 | 2 | 1 | 1
08/01/2019 00:00:00 | Tue | 2 | 2 | 2 | 2
09/01/2019 00:00:00 | Wed | 3 | 2 | 3 | 3
10/01/2019 00:00:00 | Thu | 4 | 2 | 4 | 4
11/01/2019 00:00:00 | Fri | 5 | 2 | 5 | 5
12/01/2019 00:00:00 | Sat | 6 | 2 | 6 | 6
13/01/2019 00:00:00 | Sun | 7 | 2 | 7 | 7


Oracle SQL revenue YTD computation

I want to write an Oracle SQL query to compute the monthly YTD revenue (cumulative sum) for all possible combinations of the given dimensions. Some months have no transactions and hence no revenue; in that case the previous month's YTD revenue must be displayed for that dimension combination. Given table:
| Month | site | channel | type | revenue |
| ----- | ---- | ------- | ---- | ------- |
| 2017-02 | abc | 1 | A | 50 |
| 2017-04 | abc | 2 | B | 100 |
| 2018-12 | xyz | 1 | A | 150 |
Sample Desired output:
| Month | site | channel | type | ytd revenue |
| ----- | ---- | ------- | ---- | ------- |
| 2017-01 | abc | 1 | A | 0 |
| 2017-02 | abc | 1 | A | 50 |
| 2017-03 | abc | 1 | A | 50 |
| 2017-04 | abc | 1 | A | 50 |
| ------ | --- | -- | -- | --- |
| 2018-12 | abc | 1 | A | 1000 |
| ----- | -- | -- | -- | --- |
| 2017-04 | abc | 2 | A | 100 |
| ---- | --- | - | - | -- |
| 2018-12 | abc | 2 | A | 10 |
| --- | -- | - | - | -- |
| 2018-12 | xyz | 1 | A | 150 |
The fiscal year starts in the 1st month and ends in the 12th month, so the cumulative sum (YTD revenue) must run from month 1 to month 12 every year, for all dimension combinations, as illustrated in the sample output above.
Use a PARTITION OUTER JOIN:
SELECT ADD_MONTHS( t.year, c.month - 1 ) AS month,
t.site,
t.channel,
t.type,
SUM( COALESCE( t.revenue, 0 ) ) OVER (
PARTITION BY t.site, t.channel, t.type, t.year
ORDER BY c.month
) AS ytd_revenue
FROM (
SELECT LEVEL AS month
FROM DUAL
CONNECT BY LEVEL <= 12
) c
LEFT OUTER JOIN (
SELECT t.*,
TRUNC( month, 'YY' ) AS year
FROM table_name t
) t
PARTITION BY ( site, channel, type, year )
ON ( c.month = EXTRACT( MONTH FROM t.month ) );
Which, for the sample data:
CREATE TABLE table_name ( Month, site, channel, type, revenue ) AS
SELECT DATE '2017-02-01', 'abc', 1, 'A', 50 FROM DUAL UNION ALL
SELECT DATE '2017-04-01', 'abc', 2, 'B', 100 FROM DUAL UNION ALL
SELECT DATE '2018-12-01', 'xyz', 1, 'A', 150 FROM DUAL;
Outputs:
MONTH | SITE | CHANNEL | TYPE | YTD_REVENUE
:------------------ | :--- | ------: | :--- | ----------:
2017-01-01 00:00:00 | abc | 1 | A | 0
2017-02-01 00:00:00 | abc | 1 | A | 50
2017-03-01 00:00:00 | abc | 1 | A | 50
2017-04-01 00:00:00 | abc | 1 | A | 50
2017-05-01 00:00:00 | abc | 1 | A | 50
2017-06-01 00:00:00 | abc | 1 | A | 50
2017-07-01 00:00:00 | abc | 1 | A | 50
2017-08-01 00:00:00 | abc | 1 | A | 50
2017-09-01 00:00:00 | abc | 1 | A | 50
2017-10-01 00:00:00 | abc | 1 | A | 50
2017-11-01 00:00:00 | abc | 1 | A | 50
2017-12-01 00:00:00 | abc | 1 | A | 50
2017-01-01 00:00:00 | abc | 2 | B | 0
2017-02-01 00:00:00 | abc | 2 | B | 0
2017-03-01 00:00:00 | abc | 2 | B | 0
2017-04-01 00:00:00 | abc | 2 | B | 100
2017-05-01 00:00:00 | abc | 2 | B | 100
2017-06-01 00:00:00 | abc | 2 | B | 100
2017-07-01 00:00:00 | abc | 2 | B | 100
2017-08-01 00:00:00 | abc | 2 | B | 100
2017-09-01 00:00:00 | abc | 2 | B | 100
2017-10-01 00:00:00 | abc | 2 | B | 100
2017-11-01 00:00:00 | abc | 2 | B | 100
2017-12-01 00:00:00 | abc | 2 | B | 100
2018-01-01 00:00:00 | xyz | 1 | A | 0
2018-02-01 00:00:00 | xyz | 1 | A | 0
2018-03-01 00:00:00 | xyz | 1 | A | 0
2018-04-01 00:00:00 | xyz | 1 | A | 0
2018-05-01 00:00:00 | xyz | 1 | A | 0
2018-06-01 00:00:00 | xyz | 1 | A | 0
2018-07-01 00:00:00 | xyz | 1 | A | 0
2018-08-01 00:00:00 | xyz | 1 | A | 0
2018-09-01 00:00:00 | xyz | 1 | A | 0
2018-10-01 00:00:00 | xyz | 1 | A | 0
2018-11-01 00:00:00 | xyz | 1 | A | 0
2018-12-01 00:00:00 | xyz | 1 | A | 150
Or, if you want the complete date range rather than just each year:
WITH calendar ( month ) AS (
SELECT ADD_MONTHS( start_month, LEVEL - 1 )
FROM (
SELECT MIN( ADD_MONTHS( TRUNC( ADD_MONTHS( month, -3 ), 'YY' ), 3 ) ) AS start_month,
ADD_MONTHS( MAX( TRUNC( ADD_MONTHS( month, -3 ), 'YY' ) ), 14 ) AS end_month
FROM table_name
)
CONNECT BY
ADD_MONTHS( start_month, LEVEL - 1 ) <= end_month
)
SELECT TO_CHAR( c.month, 'YYYY-MM' ) AS month,
t.site,
t.channel,
t.type,
SUM( COALESCE( t.revenue, 0 ) ) OVER (
PARTITION BY t.site, t.channel, t.type, TRUNC( c.month, 'YY' )
ORDER BY c.month
) AS ytd_revenue
FROM calendar c
LEFT OUTER JOIN (
SELECT t.*,
TRUNC( month, 'YY' ) AS year
FROM table_name t
) t
PARTITION BY ( site, channel, type )
ON ( c.month = t.month )
ORDER BY
site, channel, type, month;
Which outputs:
MONTH | SITE | CHANNEL | TYPE | YTD_REVENUE
:------------------ | :--- | ------: | :--- | ----------:
2017-01-01 00:00:00 | abc | 1 | A | 0
2017-02-01 00:00:00 | abc | 1 | A | 50
2017-03-01 00:00:00 | abc | 1 | A | 50
2017-04-01 00:00:00 | abc | 1 | A | 50
2017-05-01 00:00:00 | abc | 1 | A | 50
2017-06-01 00:00:00 | abc | 1 | A | 50
2017-07-01 00:00:00 | abc | 1 | A | 50
2017-08-01 00:00:00 | abc | 1 | A | 50
2017-09-01 00:00:00 | abc | 1 | A | 50
2017-10-01 00:00:00 | abc | 1 | A | 50
2017-11-01 00:00:00 | abc | 1 | A | 50
2017-12-01 00:00:00 | abc | 1 | A | 50
2018-01-01 00:00:00 | abc | 1 | A | 0
2018-02-01 00:00:00 | abc | 1 | A | 0
2018-03-01 00:00:00 | abc | 1 | A | 0
2018-04-01 00:00:00 | abc | 1 | A | 0
2018-05-01 00:00:00 | abc | 1 | A | 0
2018-06-01 00:00:00 | abc | 1 | A | 0
2018-07-01 00:00:00 | abc | 1 | A | 0
2018-08-01 00:00:00 | abc | 1 | A | 0
2018-09-01 00:00:00 | abc | 1 | A | 0
2018-10-01 00:00:00 | abc | 1 | A | 0
2018-11-01 00:00:00 | abc | 1 | A | 0
2018-12-01 00:00:00 | abc | 1 | A | 0
2017-01-01 00:00:00 | abc | 2 | B | 0
2017-02-01 00:00:00 | abc | 2 | B | 0
2017-03-01 00:00:00 | abc | 2 | B | 0
2017-04-01 00:00:00 | abc | 2 | B | 100
2017-05-01 00:00:00 | abc | 2 | B | 100
2017-06-01 00:00:00 | abc | 2 | B | 100
2017-07-01 00:00:00 | abc | 2 | B | 100
2017-08-01 00:00:00 | abc | 2 | B | 100
2017-09-01 00:00:00 | abc | 2 | B | 100
2017-10-01 00:00:00 | abc | 2 | B | 100
2017-11-01 00:00:00 | abc | 2 | B | 100
2017-12-01 00:00:00 | abc | 2 | B | 100
2018-01-01 00:00:00 | abc | 2 | B | 0
2018-02-01 00:00:00 | abc | 2 | B | 0
2018-03-01 00:00:00 | abc | 2 | B | 0
2018-04-01 00:00:00 | abc | 2 | B | 0
2018-05-01 00:00:00 | abc | 2 | B | 0
2018-06-01 00:00:00 | abc | 2 | B | 0
2018-07-01 00:00:00 | abc | 2 | B | 0
2018-08-01 00:00:00 | abc | 2 | B | 0
2018-09-01 00:00:00 | abc | 2 | B | 0
2018-10-01 00:00:00 | abc | 2 | B | 0
2018-11-01 00:00:00 | abc | 2 | B | 0
2018-12-01 00:00:00 | abc | 2 | B | 0
2017-01-01 00:00:00 | xyz | 1 | A | 0
2017-02-01 00:00:00 | xyz | 1 | A | 0
2017-03-01 00:00:00 | xyz | 1 | A | 0
2017-04-01 00:00:00 | xyz | 1 | A | 0
2017-05-01 00:00:00 | xyz | 1 | A | 0
2017-06-01 00:00:00 | xyz | 1 | A | 0
2017-07-01 00:00:00 | xyz | 1 | A | 0
2017-08-01 00:00:00 | xyz | 1 | A | 0
2017-09-01 00:00:00 | xyz | 1 | A | 0
2017-10-01 00:00:00 | xyz | 1 | A | 0
2017-11-01 00:00:00 | xyz | 1 | A | 0
2017-12-01 00:00:00 | xyz | 1 | A | 0
2018-01-01 00:00:00 | xyz | 1 | A | 0
2018-02-01 00:00:00 | xyz | 1 | A | 0
2018-03-01 00:00:00 | xyz | 1 | A | 0
2018-04-01 00:00:00 | xyz | 1 | A | 0
2018-05-01 00:00:00 | xyz | 1 | A | 0
2018-06-01 00:00:00 | xyz | 1 | A | 0
2018-07-01 00:00:00 | xyz | 1 | A | 0
2018-08-01 00:00:00 | xyz | 1 | A | 0
2018-09-01 00:00:00 | xyz | 1 | A | 0
2018-10-01 00:00:00 | xyz | 1 | A | 0
2018-11-01 00:00:00 | xyz | 1 | A | 0
2018-12-01 00:00:00 | xyz | 1 | A | 150
Fiscal Years (April to March):
WITH calendar ( month ) AS (
SELECT ADD_MONTHS( start_month, LEVEL - 1 )
FROM (
SELECT MIN( TRUNC( ADD_MONTHS( month, -3 ), 'YY' ) ) AS start_month,
ADD_MONTHS( MAX( TRUNC( ADD_MONTHS( month, -3 ), 'YY' ) ), 11 ) AS end_month
FROM table_name
)
CONNECT BY
ADD_MONTHS( start_month, LEVEL - 1 ) <= end_month
)
SELECT TO_CHAR( ADD_MONTHS( c.month, 3 ), 'YYYY-MM' ) AS month,
t.site,
t.channel,
t.type,
SUM( COALESCE( t.revenue, 0 ) ) OVER (
PARTITION BY t.site, t.channel, t.type, TRUNC( c.month, 'YY' )
ORDER BY c.month
) AS ytd_revenue
FROM calendar c
LEFT OUTER JOIN (
SELECT ADD_MONTHS( month, -3 ) AS month,
site,
channel,
type,
revenue,
TRUNC( ADD_MONTHS( month, -3 ), 'YY' ) AS year
FROM table_name t
) t
PARTITION BY ( site, channel, type )
ON ( c.month = t.month )
ORDER BY
site, channel, type, month;
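The month-shifting trick above (ADD_MONTHS by -3 before truncating to the year) amounts to relabelling each date with a fiscal year that starts in April. A small Python sketch of that mapping (a hypothetical helper, not part of the answer's SQL):

```python
def fiscal_year(year: int, month: int, start_month: int = 4) -> int:
    # A date belongs to the fiscal year that began in start_month of either
    # the same calendar year (month >= start_month) or the previous one.
    return year if month >= start_month else year - 1

# April 2017 through March 2018 all belong to fiscal year 2017.
assert fiscal_year(2017, 4) == 2017
assert fiscal_year(2018, 3) == 2017
```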
If I understand correctly, you can use cross join to get all the rows and then left join and a cumulative sum to get the most recent value:
select m.month, sct.site, sct.channel, sct.type,
       sum(revenue) over (partition by sct.site, sct.channel, sct.type, trunc(m.month, 'YYYY')
                          order by m.month) as ytd_revenue
from (select distinct month from t) m cross join
     (select distinct site, channel, type from t) sct left join
     t
     on t.month = m.month and t.site = sct.site and
        t.channel = sct.channel and t.type = sct.type;
This assumes that all months are available in the data. If not, you need to generate the months, either with an explicit list or with some sort of generator such as:
with months(month) as (
select date '2019-01-01' as month
from dual
union all
select month + interval '1' month
from months
where month < date '2021-01-01'
)
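Engine specifics aside, both answers implement the same idea: densify to one row per month per dimension combination, then take a running sum within each year. A minimal Python sketch of that logic over hypothetical in-memory rows:

```python
from itertools import accumulate

# Hypothetical sample rows: (year, month, site, channel, type, revenue),
# mirroring the question's table.
rows = [
    (2017, 2, "abc", 1, "A", 50),
    (2017, 4, "abc", 2, "B", 100),
    (2018, 12, "xyz", 1, "A", 150),
]

def ytd(rows):
    # Densify: one slot per month (defaulting to 0) for each
    # (site, channel, type, year) combination that appears in the data.
    groups = {}
    for year, month, site, channel, typ, rev in rows:
        key = (site, channel, typ, year)
        groups.setdefault(key, [0] * 12)[month - 1] += rev
    # Running total across the 12 months of each year.
    return {key: list(accumulate(monthly)) for key, monthly in groups.items()}

result = ytd(rows)
```

A month with no transactions simply carries the previous month's cumulative value forward, which is exactly the behaviour the question asks for.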

Time dimension table in SQLPlus

I'm doing a data warehouse project for college with Oracle Database (SQL*Plus).
I need to create the time dimension table and populate it. The table needs to be like this:
It needs to go from 2004 to 2019.
I've tried different things and queries that I've found, but they don't work and, sadly, I don't know enough about SQL*Plus to create one on my own (or to successfully modify one). I'm completely lost.
Thank you very much for your help and patience.
Do not store all the columns; use virtual columns instead to calculate the derived data, otherwise you will find that your columns can become inconsistent:
CREATE TABLE table_name (
id NUMBER(10,0)
GENERATED ALWAYS AS IDENTITY
CONSTRAINT table_name__id__pk PRIMARY KEY,
"DATE" DATE
CONSTRAINT table_name__date__nn NOT NULL
CONSTRAINT table_name__date__u UNIQUE
CONSTRAINT table_name__date__chk CHECK ( "DATE" = TRUNC( "DATE" ) ),
id_day_of_week NUMBER(1,0)
GENERATED ALWAYS AS ( "DATE" - TRUNC( "DATE", 'IW' ) + 1 ),
day_of_week VARCHAR2(9)
GENERATED ALWAYS AS ( CAST( TRIM( TO_CHAR( "DATE", 'DAY', 'NLS_DATE_LANGUAGE = AMERICAN' ) ) AS VARCHAR2(9) ) ),
is_holiday NUMBER(1,0)
CONSTRAINT table_name__id_holiday__chk CHECK ( is_holiday IN ( 0, 1 ) ),
id_month NUMBER(2,0)
GENERATED ALWAYS AS ( EXTRACT( MONTH FROM "DATE" ) ),
month VARCHAR2(9)
GENERATED ALWAYS AS ( CAST( TRIM( TO_CHAR( "DATE", 'MONTH', 'NLS_DATE_LANGUAGE = AMERICAN' ) ) AS VARCHAR2(9) ) ),
id_year NUMBER(5,0)
GENERATED ALWAYS AS ( EXTRACT( YEAR FROM "DATE" ) ),
id_total NUMBER(1,0)
GENERATED ALWAYS AS ( 1 ),
total CHAR(5)
GENERATED ALWAYS AS ( 'Total' )
);
Note:
You should not name the column DATE as it is a reserved word; you will need to surround it in double-quotes and use the same case every time you use it.
The id_day_of_week is based on the day of the ISO8601 week because relying on TO_CHAR( "DATE", 'D' ) depends on the NLS_TERRITORY setting as to which day of the week is the first day; this way it is independent of any settings.
The day_of_week and month columns have a fixed language.
It is unclear what id_total and total should contain so these are generated as literal values; if you want to have non-static data in these columns then remove the GENERATED ... part of the declaration.
Then you can populate it using:
INSERT INTO table_name ( "DATE", is_holiday )
SELECT DATE '2004-01-01' + LEVEL - 1, 0
FROM DUAL
CONNECT BY DATE '2004-01-01' + LEVEL - 1 < DATE '2020-01-01';
And update the holiday dates using an UPDATE statement according to your territory.
Then if you do:
SELECT *
FROM table_name
ORDER BY "DATE" ASC
FETCH FIRST 32 ROWS ONLY;
The output is:
ID | DATE | ID_DAY_OF_WEEK | DAY_OF_WEEK | IS_HOLIDAY | ID_MONTH | MONTH | ID_YEAR | ID_TOTAL | TOTAL
-: | :-------- | -------------: | :---------- | ---------: | -------: | :------- | ------: | -------: | :----
1 | 01-JAN-04 | 4 | THURSDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
2 | 02-JAN-04 | 5 | FRIDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
3 | 03-JAN-04 | 6 | SATURDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
4 | 04-JAN-04 | 7 | SUNDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
5 | 05-JAN-04 | 1 | MONDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
6 | 06-JAN-04 | 2 | TUESDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
7 | 07-JAN-04 | 3 | WEDNESDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
8 | 08-JAN-04 | 4 | THURSDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
9 | 09-JAN-04 | 5 | FRIDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
10 | 10-JAN-04 | 6 | SATURDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
11 | 11-JAN-04 | 7 | SUNDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
12 | 12-JAN-04 | 1 | MONDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
13 | 13-JAN-04 | 2 | TUESDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
14 | 14-JAN-04 | 3 | WEDNESDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
15 | 15-JAN-04 | 4 | THURSDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
16 | 16-JAN-04 | 5 | FRIDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
17 | 17-JAN-04 | 6 | SATURDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
18 | 18-JAN-04 | 7 | SUNDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
19 | 19-JAN-04 | 1 | MONDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
20 | 20-JAN-04 | 2 | TUESDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
21 | 21-JAN-04 | 3 | WEDNESDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
22 | 22-JAN-04 | 4 | THURSDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
23 | 23-JAN-04 | 5 | FRIDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
24 | 24-JAN-04 | 6 | SATURDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
25 | 25-JAN-04 | 7 | SUNDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
26 | 26-JAN-04 | 1 | MONDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
27 | 27-JAN-04 | 2 | TUESDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
28 | 28-JAN-04 | 3 | WEDNESDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
29 | 29-JAN-04 | 4 | THURSDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
30 | 30-JAN-04 | 5 | FRIDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
31 | 31-JAN-04 | 6 | SATURDAY | 0 | 1 | JANUARY | 2004 | 1 | Total
32 | 01-FEB-04 | 7 | SUNDAY | 0 | 2 | FEBRUARY | 2004 | 1 | Total
create table date_dim (
    id number(38),
    "DATE" date,
    id_dayofweek number(38),
    dayofweek varchar2(100),
    id_holiday number(38),
    id_month number(38),
    month varchar2(100),
    id_year number(38),
    id_total number(38),
    total varchar2(100)
);
Use the statement above to create the table.
Regarding the data, you can generate it with a CONNECT BY clause:
insert into date_dim
(select level as id, to_date('31-DEC-2003', 'DD-MON-YYYY') + level as date1,
case when ltrim(rtrim(to_char(to_date('31-DEC-2003', 'DD-MON-YYYY') + level, 'Day'))) = 'Monday' then 2
when ltrim(rtrim(to_char(to_date('31-DEC-2003', 'DD-MON-YYYY') + level, 'Day'))) = 'Tuesday' then 3
when ltrim(rtrim(to_char(to_date('31-DEC-2003', 'DD-MON-YYYY') + level, 'Day'))) = 'Wednesday' then 4
when ltrim(rtrim(to_char(to_date('31-DEC-2003', 'DD-MON-YYYY') + level, 'Day'))) = 'Thursday' then 5
when ltrim(rtrim(to_char(to_date('31-DEC-2003', 'DD-MON-YYYY') + level, 'Day'))) = 'Friday' then 6
when ltrim(rtrim(to_char(to_date('31-DEC-2003', 'DD-MON-YYYY') + level, 'Day'))) = 'Saturday' then 7
when ltrim(rtrim(to_char(to_date('31-DEC-2003', 'DD-MON-YYYY') + level, 'Day'))) = 'Sunday' then 1 end as id_dayofweek,
to_char(to_date('31-DEC-2003', 'DD-MON-YYYY') + level, 'Day') as dayofweek,
0 as id_holiday,
to_char(to_date('31-DEC-2003', 'DD-MON-YYYY') + level, 'MM') as id_month,
to_char(to_date('31-DEC-2003', 'DD-MON-YYYY') + level, 'Month') as month,
to_char(to_date('31-DEC-2003', 'DD-MON-YYYY') + level, 'YYYY') as year,
1 as id_total,
'Total' as Total
from dual
connect by level <= 5844);
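Whichever DDL you choose, every derived column is a pure function of the date, so the whole dimension can be generated deterministically. A Python sketch of the same derivations (Monday = 1, matching the ISO-week trick in the first answer; English names are hard-coded here rather than locale-dependent):

```python
from datetime import date, timedelta

DAYS = ["MONDAY", "TUESDAY", "WEDNESDAY", "THURSDAY", "FRIDAY", "SATURDAY", "SUNDAY"]
MONTHS = ["JANUARY", "FEBRUARY", "MARCH", "APRIL", "MAY", "JUNE",
          "JULY", "AUGUST", "SEPTEMBER", "OCTOBER", "NOVEMBER", "DECEMBER"]

def dim_row(d: date) -> dict:
    # Same derivations as the virtual columns / CASE expressions above.
    return {
        "date": d,
        "id_day_of_week": d.isoweekday(),       # "DATE" - TRUNC("DATE",'IW') + 1
        "day_of_week": DAYS[d.isoweekday() - 1],
        "id_month": d.month,
        "month": MONTHS[d.month - 1],
        "id_year": d.year,
    }

# One row per day from 2004-01-01 through 2019-12-31 (5844 days).
start, end = date(2004, 1, 1), date(2019, 12, 31)
dim = [dim_row(start + timedelta(days=i)) for i in range((end - start).days + 1)]
```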

Is there a function in Google BigQuery to find the first and last date of the ISO week number / week number of a calendar year?

Let us assume a calendar week whose week number is 02 of 2020.
I am looking for a way to find the beginning and end dates of that week.
Any pointers to built-in functions or other approaches would be helpful.
I don't see a direct way, but with the existing date functions it is super easy to build a lookup table which you can query:
CREATE TABLE day_of_week_table AS
SELECT
date,
EXTRACT(ISOYEAR FROM date) AS isoyear,
EXTRACT(ISOWEEK FROM date) AS isoweek,
EXTRACT(WEEK FROM date) AS week,
EXTRACT(DAYOFWEEK FROM date) AS dayOfWeek
FROM UNNEST(GENERATE_DATE_ARRAY('2020-01-01', '2021-01-01')) AS date
ORDER BY date;
The first few rows of this table:
| date | isoyear | isoweek | week | dayOfWeek |
+------------+---------+---------+------+-----------+
| 2020-01-01 | 2020 | 1 | 0 | 4 |
| 2020-01-02 | 2020 | 1 | 0 | 5 |
| 2020-01-03 | 2020 | 1 | 0 | 6 |
| 2020-01-04 | 2020 | 1 | 0 | 7 |
| 2020-01-05 | 2020 | 1 | 1 | 1 |
| 2020-01-06 | 2020 | 2 | 1 | 2 |
| 2020-01-07 | 2020 | 2 | 1 | 3 |
| 2020-01-08 | 2020 | 2 | 1 | 4 |
| 2020-01-09 | 2020 | 2 | 1 | 5 |
| 2020-01-10 | 2020 | 2 | 1 | 6 |
| 2020-01-11 | 2020 | 2 | 1 | 7 |
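Outside BigQuery, the same boundaries fall straight out of the ISO calendar. For comparison, a Python sketch (Python 3.8+ for `date.fromisocalendar`):

```python
from datetime import date, timedelta

def iso_week_bounds(iso_year: int, iso_week: int):
    # Monday (ISO day 1) and Sunday (ISO day 7) of the given ISO week.
    monday = date.fromisocalendar(iso_year, iso_week, 1)
    return monday, monday + timedelta(days=6)

# ISO week 2 of 2020 runs Mon 2020-01-06 .. Sun 2020-01-12,
# matching the lookup table above.
bounds = iso_week_bounds(2020, 2)
```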

Is there a way to COUNT the amount of non-null between the NULLs in a column?

I have a query that is pulling financial figures and flags whether they have hit a target or not. I have a column that populates a 1 if the target is hit, and it NULLs if the target isn't hit. This is a simple CASE statement.
I need to be able count how many consecutive rows in that column are populated with a 1, and then stop counting when a NULL is hit, and then start counting again from the next non-null.
I have tried every combination of "COUNT(*) OVER" I can possibly think of, all not quite giving me the result I need.
I'll post the entire query as it's not too long -
SELECT
*,
CASE
WHEN zzz.Flag_hit_Target IS NOT NULL THEN COUNT(*) OVER (PARTITION BY zzz.Flag_hit_Target ORDER BY CAST(zzz.Close_month as DATE) DESC)
ELSE NULL
END AS Counter
FROM
(
SELECT
zz.Close_month,
SUM(MRP) as Total_MRP,
zz.Target,
CASE
WHEN SUM(MRP) >= zz.Target THEN 1
ELSE NULL
END AS Flag_hit_target
FROM
(
SELECT
Opp.id,
opp.MRP__c as MRP,
1500 as Target,
CONCAT(DATENAME(month, Closedate), ' ', DATEPART(year, Closedate)) as Close_month
FROM Table1 as Opp WITH (NOLOCK)
WHERE OPP_type__c = 'Opp Type 1'
AND Appointment_setter1__c = 'Person 1'
AND Stagename = 'Closed (Won)'
) as zz
GROUP BY zz.Close_month, zz.Target
) as zzz
ORDER by CAST(zzz.Close_month as DATE) desc
With this I get the following results -
+----------------+-----------------+---------+
| Close_month | Flag_hit_target | Counter |
+----------------+-----------------+---------+
| June 2019 | NULL | NULL |
| April 2019 | NULL | NULL |
| March 2019 | 1 | 1 |
| February 2019 | NULL | NULL |
| January 2019 | 1 | 2 |
| November 2018 | NULL | NULL |
| October 2018 | NULL | NULL |
| September 2018 | NULL | NULL |
| July 2018 | NULL | NULL |
| June 2018 | 1 | 3 |
| May 2018 | NULL | NULL |
| April 2018 | 1 | 4 |
| March 2018 | NULL | NULL |
| February 2018 | 1 | 5 |
| January 2018 | 1 | 6 |
| December 2017 | 1 | 7 |
| October 2017 | NULL | NULL |
| September 2017 | 1 | 8 |
| August 2017 | 1 | 9 |
| July 2017 | 1 | 10 |
| June 2017 | 1 | 11 |
| May 2017 | NULL | NULL |
| April 2017 | 1 | 12 |
| March 2017 | NULL | NULL |
| February 2017 | 1 | 13 |
| January 2017 | 1 | 14 |
+----------------+-----------------+---------+
The results I am after is as following (notice the end column) -
+----------------+-----------------+---------+
| Close_month | Flag_hit_target | Counter |
+----------------+-----------------+---------+
| June 2019 | NULL | NULL |
| April 2019 | NULL | NULL |
| March 2019 | 1 | 1 |
| February 2019 | NULL | NULL |
| January 2019 | 1 | 1 |
| November 2018 | NULL | NULL |
| October 2018 | NULL | NULL |
| September 2018 | NULL | NULL |
| July 2018 | NULL | NULL |
| June 2018 | 1 | 1 |
| May 2018 | NULL | NULL |
| April 2018 | 1 | 1 |
| March 2018 | NULL | NULL |
| February 2018 | 1 | 3 |
| January 2018 | 1 | 2 |
| December 2017 | 1 | 1 |
| October 2017 | NULL | NULL |
| September 2017 | 1 | 4 |
| August 2017 | 1 | 3 |
| July 2017 | 1 | 2 |
| June 2017 | 1 | 1 |
| May 2017 | NULL | NULL |
| April 2017 | 1 | 1 |
| March 2017 | NULL | NULL |
| February 2017 | 1 | 2 |
| January 2017 | 1 | 1 |
+----------------+-----------------+---------+
Thank you!
A solution is to use a ROW_NUMBER for all records, and subtract the ROW_NUMBER value of the last NULL record before each date.
Setup:
IF OBJECT_ID('tempdb..#Test') IS NOT NULL
DROP TABLE #Test
CREATE TABLE #Test (
Date DATE,
Flag BIT)
INSERT INTO #Test (
Date,
Flag)
VALUES
('2019-09-01', NULL),
('2019-08-01', NULL),
('2019-07-01', 1),
('2019-06-01', NULL),
('2019-05-01', 1),
('2019-04-01', NULL),
('2019-03-01', NULL),
('2019-02-01', NULL),
('2019-01-01', 1),
('2018-12-01', NULL),
('2018-11-01', 1),
('2018-10-01', NULL),
('2018-09-01', 1),
('2018-08-01', 1),
('2018-07-01', 1),
('2018-06-01', NULL),
('2018-05-01', 1),
('2018-04-01', 1),
('2018-03-01', 1),
('2018-02-01', 1),
('2018-01-01', NULL)
Solution:
;WITH DataWithRowNumber AS
(
SELECT
T.*,
RowNumber = -1 + ROW_NUMBER() OVER (ORDER BY T.Date)
FROM
#Test AS T
)
SELECT
D.Date,
D.Flag,
D.RowNumber,
M.MaxPreviousNullRowNumber,
RowNumberRest = D.RowNumber - M.MaxPreviousNullRowNumber,
Counter = CASE WHEN D.Flag IS NOT NULL THEN D.RowNumber - M.MaxPreviousNullRowNumber END
FROM
DataWithRowNumber AS D
OUTER APPLY (
SELECT
MaxPreviousNullRowNumber = MAX(R.RowNumber)
FROM
DataWithRowNumber AS R
WHERE
R.Date < D.Date AND
R.Flag IS NULL) AS M
ORDER By
D.RowNumber DESC
Result:
+------------+------+-----------+--------------------------+---------------+---------+
| Date | Flag | RowNumber | MaxPreviousNullRowNumber | RowNumberRest | Counter |
+------------+------+-----------+--------------------------+---------------+---------+
| 2019-09-01 | NULL | 20 | 19 | 1 | NULL |
| 2019-08-01 | NULL | 19 | 17 | 2 | NULL |
| 2019-07-01 | 1 | 18 | 17 | 1 | 1 |
| 2019-06-01 | NULL | 17 | 15 | 2 | NULL |
| 2019-05-01 | 1 | 16 | 15 | 1 | 1 |
| 2019-04-01 | NULL | 15 | 14 | 1 | NULL |
| 2019-03-01 | NULL | 14 | 13 | 1 | NULL |
| 2019-02-01 | NULL | 13 | 11 | 2 | NULL |
| 2019-01-01 | 1 | 12 | 11 | 1 | 1 |
| 2018-12-01 | NULL | 11 | 9 | 2 | NULL |
| 2018-11-01 | 1 | 10 | 9 | 1 | 1 |
| 2018-10-01 | NULL | 9 | 5 | 4 | NULL |
| 2018-09-01 | 1 | 8 | 5 | 3 | 3 |
| 2018-08-01 | 1 | 7 | 5 | 2 | 2 |
| 2018-07-01 | 1 | 6 | 5 | 1 | 1 |
| 2018-06-01 | NULL | 5 | 0 | 5 | NULL |
| 2018-05-01 | 1 | 4 | 0 | 4 | 4 |
| 2018-04-01 | 1 | 3 | 0 | 3 | 3 |
| 2018-03-01 | 1 | 2 | 0 | 2 | 2 |
| 2018-02-01 | 1 | 1 | 0 | 1 | 1 |
| 2018-01-01 | NULL | 0 | NULL | NULL | NULL |
+------------+------+-----------+--------------------------+---------------+---------+
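The ROW_NUMBER-minus-last-NULL construction computes, for each non-NULL row, how many consecutive non-NULL flags have occurred since the most recent NULL. In procedural terms, over the flags in ascending date order, it is equivalent to this Python sketch:

```python
def consecutive_counter(flags):
    # Counter is None at each NULL flag and otherwise counts
    # consecutive non-NULL flags since the last NULL.
    out, run = [], 0
    for f in flags:
        if f is None:
            run = 0
            out.append(None)
        else:
            run += 1
            out.append(run)
    return out

# Flags from the setup data, oldest (2018-01-01) first.
flags = [None, 1, 1, 1, 1, None, 1, 1, 1, None, 1,
         None, 1, None, None, None, 1, None, 1, None, None]
counters = consecutive_counter(flags)
```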
Ryan, you need to implement a SQL running total here; please check this link:
https://codingsight.com/calculating-running-total-with-over-clause-and-partition-by-clause-in-sql-server/

How to Group by 6 days in Postgresql

I want to convert this type of data to a 6-day GROUP BY format.
+-----+--------------+------------+
| gid | cnt | date |
+-----+--------------+------------+
| 1 | 1 | 2012-02-05 |
| 2 | 2 | 2012-02-06 |
| 3 | 1 | 2012-02-07 |
| 4 | 1 | 2012-02-08 |
| 5 | 1 | 2012-02-09 |
| 6 | 2 | 2012-02-10 |
| 7 | 3 | 2012-02-11 |
| 8 | 1 | 2012-02-12 |
| 9 | 1 | 2012-02-13 |
| 10 | 2 | 2012-02-14 |
| 11 | 3 | 2012-02-15 |
| 12 | 4 | 2012-02-16 |
| 13 | 1 | 2012-02-17 |
| 14 | 1 | 2012-02-18 |
| 15 | 1 | 2012-02-19 |
| 16 | NULL | 2012-02-20 |
| 17 | 6 | 2012-02-21 |
| 18 | NULL | 2012-02-22 |
+-----+--------------+------------+
The desired output groups these rows into consecutive 6-day buckets; the date column is continuous (one row per day).
If I understand correctly you need something like this:
WITH x AS (
    SELECT date::date, (random() * 3)::int AS cnt
    FROM generate_series('2012-02-05'::date, '2012-02-22'::date, '1 day'::interval) AS date
)
SELECT start::date,
       (start + '5 day'::interval)::date AS "end",
       sum(cnt)
FROM generate_series(
       (SELECT min(date) FROM x),
       (SELECT max(date) FROM x),
       '6 day'::interval
     ) AS start
LEFT JOIN x ON x.date >= start AND x.date <= start + '5 day'::interval
GROUP BY 1, 2
ORDER BY 1;
In x I emulate your table with random counts; replace x with your real table. Note that end is a reserved word and must be quoted, and that the series must step by 6 days so the 6-day windows (start through start + 5 days) do not overlap.
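The same bucketing can be sketched procedurally: assign each date to bucket (date − min_date) // 6 and sum the counts per bucket, treating NULL as 0. A Python sketch using the question's sample counts:

```python
from datetime import date, timedelta
from collections import defaultdict

def group_by_6_days(rows):
    # rows: iterable of (date, cnt) pairs, where cnt may be None.
    start = min(d for d, _ in rows)
    buckets = defaultdict(int)
    for d, cnt in rows:
        buckets[(d - start).days // 6] += cnt or 0
    # Label each bucket with its first and last day (a 6-day window).
    return {(start + timedelta(days=6 * b), start + timedelta(days=6 * b + 5)): total
            for b, total in sorted(buckets.items())}

# Counts from the question, for 2012-02-05 through 2012-02-22.
cnts = [1, 2, 1, 1, 1, 2, 3, 1, 1, 2, 3, 4, 1, 1, 1, None, 6, None]
rows = [(date(2012, 2, 5) + timedelta(days=i), c) for i, c in enumerate(cnts)]
grouped = group_by_6_days(rows)
```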