I am getting data where maintenance_due_date falls within the next 7 days, or where sysdate is greater than maintenance_due_date. Now I want to add one more condition: if no data is available in the maintenance_schedule table for an instrument_no and plan_id, I still want to get the rows for that plan_id from the activity_detail table.
SELECT
instrument_no,
frequency,
detail,
plan_id,
activity_id,
period,
-- maintenance_date,
maintenance_due_date
FROM
(
SELECT
ms.instrument_no,
ad.frequency,
ad.detail,
ad.plan_id,
ad.activity_id,
ad.period,
-- ms.maintenance_date,
MAX(ms.maintenance_due_date) maintenance_due_date
FROM
maintenance_schedule ms,
activity_detail ad
--left join maintenance_schedule ms1 on ms1.plan_id = ad.plan_id and ms1.instrument_no = ms.instrument_no --am.inactive <> 1
WHERE
ms.instrument_no = 10073
AND ad.plan_id = 100
AND ms.plan_id = ad.plan_id
and ad.end_date is null
AND ms.activity_id = ad.activity_id
GROUP BY
ms.instrument_no,
ad.frequency,
ad.detail,
ad.plan_id,
ad.activity_id,
ad.period
-- ms.maintenance_date
)
WHERE
( maintenance_due_date BETWEEN sysdate AND sysdate + 7
OR maintenance_due_date < sysdate)
order by activity_id asc, maintenance_due_date asc;
Your inner query should be changed to use a LEFT JOIN so that you can handle the NULL values from a missing row in both WHERE clauses (inner and outer query). For readability I moved your inner query into a CTE named activities. You don't have to do that - you can leave it where it was if you prefer.
Here are your (imagined) sample data and the new activities CTE with some comments in the code:
WITH
maintenance_schedule AS
( Select 100 "PLAN_ID", 1 "ACTIVITY_ID", To_Date('20.12.2022', 'dd.mm.yyyy') "MAINTENANCE_DUE_DATE", 10073 "INSTRUMENT_NO", 'some other data' "SOME_COLUMN" From Dual Union All
Select 100 "PLAN_ID", 2 "ACTIVITY_ID", To_Date('29.12.2022', 'dd.mm.yyyy') "MAINTENANCE_DUE_DATE", 10073 "INSTRUMENT_NO", 'some other data' "SOME_COLUMN" From Dual
),
activity_detail AS
( Select 1 "ACTIVITY_ID", 100 "PLAN_ID", 'details about ...' "DETAIL", 'frequency X...' "FREQUENCY", 'A' "PERIOD", Null "END_DATE" From Dual Union All
Select 2 "ACTIVITY_ID", 100 "PLAN_ID", 'details about ...' "DETAIL", 'frequency Y...' "FREQUENCY", 'B' "PERIOD", Null "END_DATE" From Dual Union All
Select 3 "ACTIVITY_ID", 100 "PLAN_ID", 'details about ...' "DETAIL", 'frequency Z...' "FREQUENCY", 'C' "PERIOD", Null "END_DATE" From Dual
),
activities AS
( SELECT ms.instrument_no,
ad.frequency,
ad.detail,
ad.plan_id,
ad.activity_id,
ad.period,
MAX(ms.maintenance_due_date) maintenance_due_date
FROM activity_detail ad
-- LEFT JOIN (line below) is the solution to your problem because it gives you options to handle null values from a missing row
LEFT JOIN maintenance_schedule ms ON( ms.plan_id = ad.plan_id And ms.activity_id = ad.activity_id )
WHERE ( Nvl(ms.instrument_no, 10073) = 10073 AND ad.plan_id = 100 AND ad.end_date Is Null)
-- you can use this ---> ( Nvl(ms.instrument_no, 10073) = 10073 AND ad.plan_id = 100 AND ad.end_date Is Null)
-- OR this ---> ( ms.instrument_no = 10073 AND ad.plan_id = 100 AND ad.end_date Is Null) OR ( ms.instrument_no Is Null AND ad.plan_id = 100 AND ad.end_date Is Null)
GROUP BY ms.instrument_no,
ad.frequency,
ad.detail,
ad.plan_id,
ad.activity_id,
ad.period
)
With the above data, your main SQL should look like below:
SELECT instrument_no,
frequency,
detail,
plan_id,
activity_id,
period,
maintenance_due_date
FROM activities -- moved your subquery to CTE definition above for readability - you can leave it here if you wish so
WHERE ( Nvl(maintenance_due_date, sysdate + 1) BETWEEN sysdate AND sysdate + 7 OR maintenance_due_date < sysdate)
-- you can use this ---> ( Nvl(maintenance_due_date, sysdate + 1) BETWEEN sysdate AND sysdate + 7 OR maintenance_due_date < sysdate)
-- OR this ---> ( maintenance_due_date BETWEEN sysdate AND sysdate + 7 OR maintenance_due_date < sysdate OR maintenance_due_date Is Null)
ORDER BY activity_id, maintenance_due_date
R e s u l t :
INSTRUMENT_NO FREQUENCY      DETAIL            PLAN_ID ACTIVITY_ID PERIOD MAINTENANCE_DUE_DATE
------------- -------------- ----------------- ------- ----------- ------ --------------------
        10073 frequency X... details about ...     100           1 A      20-DEC-22
        10073 frequency Y... details about ...     100           2 B      29-DEC-22
              frequency Z... details about ...     100           3 C
Additionally, you can use, let's say, Nvl(INSTRUMENT_NO, 0) in the SELECT list to supply a value for the missing data if you want.
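For example, a minimal sketch of that option, reusing the activities CTE defined above (column list shortened for brevity):
SELECT Nvl(instrument_no, 0) AS instrument_no,  -- 0 stands in for the missing maintenance_schedule row
       activity_id,
       maintenance_due_date
FROM activities
WHERE ( Nvl(maintenance_due_date, sysdate + 1) BETWEEN sysdate AND sysdate + 7
        OR maintenance_due_date < sysdate )
ORDER BY activity_id, maintenance_due_date;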
I need to write a query that gives me the count with the following logic. The example below shows that ACCOUNT_ID 123 signed up on 2020-02-21, so M0 is 1, and then the same ACCOUNT_ID had an event in the consecutive month, so M1 is 1.
M0 is the signup date
M1 is signup date + 1 month
M2 is signup date + 2 consecutive months
M3 is signup date + 3 consecutive months
WITH M_O AS (
SELECT
parsed_data."ACCOUNT_ID" AS "parsed_data.account_id",
MIN(TO_CHAR(TO_DATE(parsed_data."TIMESTAMP"::timestamp_ntz ), 'YYYY-MM-DD')) AS "SIGNUP",
COUNT(DISTINCT (parsed_data."ACCOUNT_ID") ) AS "COUNT_USERS_O"
FROM "PUBLIC"."PARSED_DATA"
AS parsed_data
WHERE (parsed_data."ACCOUNT_ID") IS NOT NULL
AND (((parsed_data."EVENT") = 'Started'))
AND (
((TO_CHAR(TO_DATE(parsed_data."TIMESTAMP"::timestamp_ntz ), 'YYYY-MM-DD')) >= '2020-02-21')
AND ((parsed_data."TIMESTAMP"::timestamp_ntz ) < CURRENT_DATE())
)
GROUP BY 1),
M_1 AS (
SELECT
parsed_data."ACCOUNT_ID" AS "parsed_data.account_id",
TO_CHAR(TO_DATE(parsed_data."TIMESTAMP"::timestamp_ntz ), 'YYYY-MM-DD') AS "parsed_data.timestamp_date",
COUNT(DISTINCT (parsed_data."ACCOUNT_ID") ) AS "COUNT_USERS_1"
FROM "PUBLIC"."PARSED_DATA"
AS parsed_data INNER JOIN M_O ON parsed_data.account_id = M_O."parsed_data.account_id"
WHERE
(parsed_data."ACCOUNT_ID") IS NOT NULL
AND (((parsed_data."EVENT") = 'Started'))
AND (
(TO_CHAR(TO_DATE(parsed_data."TIMESTAMP"::timestamp_ntz ), 'YYYY-MM-DD')) >= DATEADD('MONTH', 1, SIGNUP)
AND ((parsed_data."TIMESTAMP"::timestamp_ntz ) < CURRENT_DATE())
)
GROUP BY 1,2
)
It looks like you want to create cohorts? As in "establish the creation date for each id, and then look how they changed their behavior every month thereafter".
This code should work:
with events as (
select 1 id, '2020-01-01'::date e_date
union all select 1, '2020-02-03'
union all select 2, '2020-03-01'
union all select 2, '2020-05-08'
union all select 3, '2020-08-01'
union all select 3, '2020-09-02'
union all select 3, '2020-09-22'
union all select 3, '2020-09-30'
union all select 3, '2020-10-10'
),
first_per_id as (
select id, min(e_date) first_date
from events
group by id
)
select a.id
, count_if(e_date>=dateadd(month, 0, first_date) and e_date<dateadd(month, 1, first_date)) m0
, count_if(e_date>=dateadd(month, 1, first_date) and e_date<dateadd(month, 2, first_date)) m1
, count_if(e_date>=dateadd(month, 2, first_date) and e_date<dateadd(month, 3, first_date)) m2
from events a
join first_per_id b
on a.id=b.id
group by 1
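With the sample events above, that should return something along these lines:
ID  M0  M1  M2
1   1   1   0
2   1   0   1
3   1   3   1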
I have a table that has aggregations down to the hour level YYYYMMDDHH. The data is aggregated and loaded by an external process (I don't have control over). I want to test the data on a monthly basis.
The question I am looking to answer is: Does every hour in the month exist?
I'm looking to produce output that will return a 1 if the hour exists or 0 if the hour does not exist.
The aggregation table looks something like this...
YYYYMM YYYYMMDD YYYYMMDDHH DATA_AGG
201911 20191101 2019110100 100
201911 20191101 2019110101 125
201911 20191101 2019110103 135
201911 20191101 2019110105 95
… … … …
201911 20191130 2019113020 100
201911 20191130 2019113021 110
201911 20191130 2019113022 125
201911 20191130 2019113023 135
And defined as...
CREATE TABLE YYYYMMDDHH_DATA_AGG (
YYYYMM VARCHAR,
YYYYMMDD VARCHAR,
YYYYMMDDHH VARCHAR,
DATA_AGG INT
);
I'm looking to produce the following below...
YYYYMMDDHH HOUR_EXISTS
2019110100 1
2019110101 1
2019110102 0
2019110103 1
2019110104 0
2019110105 1
... ...
In the example above, two hours do not exist, 2019110102 and 2019110104.
I assume I'd have to join the aggregation table against a computed table that contains all the YYYYMMDDHH combos???
The database is Snowflake, but assume most generic ANSI SQL queries will work.
You can get what you want with a recursive CTE.
The recursive CTE generates the list of possible hours, and then a simple left outer join gets you the flag for whether you have any records that match that hour.
WITH RECURSIVE CTE (YYYYMMDDHH) as
(
SELECT YYYYMMDDHH
FROM YYYYMMDDHH_DATA_AGG
WHERE YYYYMMDDHH = (SELECT MIN(YYYYMMDDHH) FROM YYYYMMDDHH_DATA_AGG)
UNION ALL
SELECT TO_VARCHAR(DATEADD(HOUR, 1, TO_TIMESTAMP(C.YYYYMMDDHH, 'YYYYMMDDHH')), 'YYYYMMDDHH') YYYYMMDDHH
FROM CTE C
WHERE TO_VARCHAR(DATEADD(HOUR, 1, TO_TIMESTAMP(C.YYYYMMDDHH, 'YYYYMMDDHH')), 'YYYYMMDDHH') <= (SELECT MAX(YYYYMMDDHH) FROM YYYYMMDDHH_DATA_AGG)
)
SELECT
C.YYYYMMDDHH,
IFF(A.YYYYMMDDHH IS NOT NULL, 1, 0) HOUR_EXISTS
FROM CTE C
LEFT OUTER JOIN YYYYMMDDHH_DATA_AGG A
ON C.YYYYMMDDHH = A.YYYYMMDDHH;
If your time range is too long, you'll have issues with the CTE recursing too much. You can create a table or temp table with all of the possible hours instead. For example:
CREATE OR REPLACE TEMPORARY TABLE HOURS (YYYYMMDDHH VARCHAR) AS
SELECT TO_VARCHAR(DATEADD(HOUR, SEQ4(), TO_TIMESTAMP((SELECT MIN(YYYYMMDDHH) FROM YYYYMMDDHH_DATA_AGG), 'YYYYMMDDHH')), 'YYYYMMDDHH')
FROM TABLE(GENERATOR(ROWCOUNT => 10000)) V
ORDER BY 1;
SELECT
H.YYYYMMDDHH,
IFF(A.YYYYMMDDHH IS NOT NULL, 1, 0) HOUR_EXISTS
FROM HOURS H
LEFT OUTER JOIN YYYYMMDDHH_DATA_AGG A
ON H.YYYYMMDDHH = A.YYYYMMDDHH
WHERE H.YYYYMMDDHH <= (SELECT MAX(YYYYMMDDHH) FROM YYYYMMDDHH_DATA_AGG);
You can then fiddle with the generator count to make sure you have enough hours.
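If it helps with sizing, a quick check of how many rows the generator actually needs (a sketch against the same table, assuming the stored values parse with the YYYYMMDDHH format used above):
SELECT DATEDIFF(HOUR,
                TO_TIMESTAMP(MIN(YYYYMMDDHH), 'YYYYMMDDHH'),
                TO_TIMESTAMP(MAX(YYYYMMDDHH), 'YYYYMMDDHH')) + 1 AS hours_needed
FROM YYYYMMDDHH_DATA_AGG;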
You can generate a table with every hour of the month and LEFT OUTER JOIN your aggregation to it:
WITH EVERY_HOUR AS (
SELECT TO_CHAR(DATEADD(HOUR, HH, TO_DATE(YYYYMM::TEXT, 'YYYYMM')),
'YYYYMMDDHH')::NUMBER YYYYMMDDHH
FROM (SELECT DISTINCT YYYYMM FROM YYYYMMDDHH_DATA_AGG) t
CROSS JOIN (
SELECT ROW_NUMBER() OVER (ORDER BY NULL) - 1 HH
FROM TABLE(GENERATOR(ROWCOUNT => 745))
) h
QUALIFY YYYYMMDDHH < (YYYYMM + 1) * 10000
)
SELECT h.YYYYMMDDHH, NVL2(a.YYYYMM, 1, 0) HOUR_EXISTS
FROM EVERY_HOUR h
LEFT OUTER JOIN YYYYMMDDHH_DATA_AGG a ON a.YYYYMMDDHH = h.YYYYMMDDHH
Here's something that might help get you started. I'm guessing you want to have 'synthetic' [YYYYMMDD] values? Otherwise, if the values aren't there, then they shouldn't appear in the list.
DROP TABLE IF EXISTS #_hours
DROP TABLE IF EXISTS #_temp
--Populate a table with hours ranging from 00 to 23
CREATE TABLE #_hours ([hour_value] VARCHAR(2))
DECLARE @_i INT = 0
WHILE (@_i < 24)
BEGIN
INSERT INTO #_hours
SELECT FORMAT(@_i, '0#')
SET @_i += 1
END
-- Replicate OP's sample data set
CREATE TABLE #_temp (
[YYYYMM] INTEGER
, [YYYYMMDD] INTEGER
, [YYYYMMDDHH] INTEGER
, [DATA_AGG] INTEGER
)
INSERT INTO #_temp
VALUES
(201911, 20191101, 2019110100, 100),
(201911, 20191101, 2019110101, 125),
(201911, 20191101, 2019110103, 135),
(201911, 20191101, 2019110105, 95),
(201911, 20191130, 2019113020, 100),
(201911, 20191130, 2019113021, 110),
(201911, 20191130, 2019113022, 125),
(201911, 20191130, 2019113023, 135)
SELECT X.YYYYMM, X.YYYYMMDD, X.YYYYMMDDHH
-- Case: If 'target_hours' doesn't exist, then 0, else 1
, CASE WHEN X.target_hours IS NULL THEN '0' ELSE '1' END AS [HOUR_EXISTS]
FROM (
-- Select right 2 characters from converted [YYYYMMDDHH] to act as 'target values'
SELECT T.*
, RIGHT(CAST(T.[YYYYMMDDHH] AS VARCHAR(10)), 2) AS [target_hours]
FROM #_temp AS T
) AS X
-- Right join to keep all of our hours and only the target hours that match.
RIGHT JOIN #_hours AS H ON H.hour_value = X.target_hours
Sample output:
YYYYMM YYYYMMDD YYYYMMDDHH HOUR_EXISTS
201911 20191101 2019110100 1
201911 20191101 2019110101 1
NULL NULL NULL 0
201911 20191101 2019110103 1
NULL NULL NULL 0
201911 20191101 2019110105 1
NULL NULL NULL 0
With (almost) standard sql, you can do a cross join of the distinct values of YYYYMMDD to a list of all possible hours and then left join to the table:
select concat(d.YYYYMMDD, h.hour) as YYYYMMDDHH,
case when t.YYYYMMDDHH is null then 0 else 1 end as hour_exists
from (select distinct YYYYMMDD from tablename) as d
cross join (
select '00' as hour union all select '01' union all
select '02' union all select '03' union all
select '04' union all select '05' union all
select '06' union all select '07' union all
select '08' union all select '09' union all
select '10' union all select '11' union all
select '12' union all select '13' union all
select '14' union all select '15' union all
select '16' union all select '17' union all
select '18' union all select '19' union all
select '20' union all select '21' union all
select '22' union all select '23'
) as h
left join tablename as t
on concat(d.YYYYMMDD, h.hour) = t.YYYYMMDDHH
order by concat(d.YYYYMMDD, h.hour)
Maybe in Snowflake you can construct the list of hours with a sequence much more easily instead of all those UNION ALLs.
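For instance, a possible Snowflake replacement for that hour list (a sketch using the GENERATOR table function rather than the UNION ALL chain):
select lpad(row_number() over (order by null) - 1, 2, '0') as hour
from table(generator(rowcount => 24))
That could stand in for the derived table h above.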
This version accounts for the full range of days, across months and years. It's a simple cross join of the set of possible days with the set of possible hours of the day -- left joined to actual dates.
set first = (select min(yyyymmdd::number) from YYYYMMDDHH_DATA_AGG);
set last = (select max(yyyymmdd::number) from YYYYMMDDHH_DATA_AGG);
with
hours as (select row_number() over (order by null) - 1 h from table(generator(rowcount=>24))),
days as (
select
row_number() over (order by null) - 1 as n,
to_date($first::text, 'YYYYMMDD')::date + n as d,
to_char(d, 'YYYYMMDD') as yyyymmdd
from table(generator(rowcount=>($last-$first+1)))
)
select days.yyyymmdd || lpad(hours.h,2,0) as YYYYMMDDHH, nvl2(t.yyyymmddhh,1,0) as HOUR_EXISTS
from days cross join hours
left join YYYYMMDDHH_DATA_AGG t on t.yyyymmddhh = days.yyyymmdd || lpad(hours.h,2,0)
order by 1
;
$first and $last can be packed in as sub-queries if you prefer.
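For example, one way to pack them in (a sketch that over-generates a fixed number of days, since the generator rowcount has to be a literal - 10000 here is an arbitrary upper bound - and then filters against the table's own max date):
with
hours as (select row_number() over (order by null) - 1 h from table(generator(rowcount=>24))),
days as (
    select
        row_number() over (order by null) - 1 as n,
        (select to_date(min(yyyymmdd), 'YYYYMMDD') from YYYYMMDDHH_DATA_AGG) + n as d,
        to_char(d, 'YYYYMMDD') as yyyymmdd
    from table(generator(rowcount=>10000))  -- rowcount must be a constant, so over-generate and filter below
)
select days.yyyymmdd || lpad(hours.h,2,0) as YYYYMMDDHH, nvl2(t.yyyymmddhh,1,0) as HOUR_EXISTS
from days cross join hours
left join YYYYMMDDHH_DATA_AGG t on t.yyyymmddhh = days.yyyymmdd || lpad(hours.h,2,0)
where days.yyyymmdd <= (select max(yyyymmdd) from YYYYMMDDHH_DATA_AGG)
order by 1;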
First: I have two tables with a primary key (Agent_ID). I want to join both tables and filter on Agent_Type = 1 and status = 1.
Second: for agents who have not done any transaction in the last three months, get the month-wise total transaction value for their last active year.
Agent table
Agent_ID Agent_Type
234 1
456 1
567 1
678 0
Agent_Transaction table
Agent_ID Amount Transaction_Date status
234 70 23/7/2019 1
234 54 11/6/2019 0
234 30 23/5/2019 1
456 56 12/1/2019 1
456 80 15/3/2019 1
456 99 20/2/2019 1
456 76 23/12/2018 1
567 56 10/10/2018 0
567 60 30/6/2018 1
456
select Agent_ID,CONCAT(Extract(MONTH from Agent_Transaction.Transaction_Date),
EXTRACT (YEAR FROM Agent_Transaction.Transaction_Date))as MONTH_YEAR,
SUM(Agent_Transaction.Amount)AS TOTAL
from Agent
inner join Agent_Transaction
on Agent_Transaction.Agent_ID = Agent.Agent_ID
where Agent.Agent_Type='1' AND Agent_Transaction.status='1' AND
(Agent_Transaction.Transaction_Date between ADD_MONTHS(SYSDATE,-3) and SYSDATE)
GROUP BY Agent.Agent_ID,
CONCAT(Extract(MONTH from Agent_Transaction.Transaction_Date),EXTRACT (YEAR FROM Agent_Transaction.Transaction_Date)),
Agent_Transaction.Amount
But I didn't get what I expected.
As far as I understood the requirement, you can use the following query:
SELECT
A.AGENT_ID,
TO_CHAR(TRUNC(ATR.TRANSACTION_DATE, 'MONTH'), 'MONYYYY'), -- YOU CAN USE DIFFERENT FORMAT ACCORDING TO REQUIREMENT
SUM(AMOUNT) AS TOTAL_MONTHWISE_AMOUNT -- MONTHWISE TRANSACTION TOTAL
FROM
AGENT A
JOIN AGENT_TRANSACTION ATR ON ( A.AGENT_ID = ATR.AGENT_ID )
WHERE
-- EXCLUDING THE AGENTS WHICH HAVE DONE NO TRANSACTION IN LAST THREE MONTHS USING FOLLOWING NOT IN
ATR.AGENT_ID NOT IN (
SELECT
DISTINCT ATR_IN1.AGENT_ID
FROM
AGENT_TRANSACTION ATR_IN1
WHERE
ATR_IN1.TRANSACTION_DATE > ADD_MONTHS(SYSDATE, - 3)
AND ATR_IN1.STATUS = 1 -- YOU CAN USE IT ACCORDING TO REQUIREMENT
)
-- FETCHING LAST YEAR DATA
AND EXTRACT(YEAR FROM ATR.TRANSACTION_DATE) = EXTRACT(YEAR FROM ADD_MONTHS(SYSDATE, - 12))
AND A.AGENT_TYPE = 1
AND ATR.STATUS = 1
GROUP BY
A.AGENT_ID,
TRUNC(ATR.TRANSACTION_DATE, 'MONTH');
Please comment if minor changes are required or you need different logic.
Cheers!!
-- Update --
Updated the query after OP described the original issue:
SELECT
AGENT_ID,
TO_CHAR(TRUNC(TRANSACTION_DATE, 'MONTH'), 'MONYYYY'), -- YOU CAN USE DIFFERENT FORMAT ACCORDING TO REQUIREMENT
SUM(AMOUNT) AS TOTAL_MONTHWISE_AMOUNT -- MONTHWISE TRANSACTION TOTAL
FROM
(
SELECT
A.AGENT_ID,
TRUNC(ATR.TRANSACTION_DATE, 'MONTH') AS TRANSACTION_DATE,
MAX(TRUNC(ATR.TRANSACTION_DATE, 'MONTH')) OVER(
PARTITION BY A.AGENT_ID
) AS LAST_TR_DATE,
AMOUNT,
AGENT_TYPE,
STATUS
FROM
AGENT A
JOIN AGENT_TRANSACTION ATR ON ( A.AGENT_ID = ATR.AGENT_ID )
WHERE
A.AGENT_TYPE = 1
AND ATR.STATUS = 1
)
WHERE
-- EXCLUDING THE AGENTS WHICH HAVE DONE NO TRANSACTION IN LAST THREE MONTHS USING FOLLOWING NOT IN
LAST_TR_DATE > ADD_MONTHS(SYSDATE, - 3)
-- FETCHING LAST YEAR DATA
AND TRANSACTION_DATE BETWEEN ADD_MONTHS(LAST_TR_DATE, - 12) AND LAST_TR_DATE
GROUP BY
AGENT_ID,
TRANSACTION_DATE;
Cheers!!
-- Update --
Your exact query should look like this:
SELECT
AGENT_ID,
TO_CHAR(TRUNC(TX_TIME, 'MONTH'), 'MONYYYY') AS MONTHYEAR,
SUM(TX_VALUE) AS TOTALMONTHWISE
FROM
(
SELECT
A.AGENT_ID,
TRUNC(ATR.TX_TIME, 'MONTH') AS TX_TIME, -- changed this alias name
MAX(TRUNC(ATR.TX_TIME, 'MONTH')) OVER(
PARTITION BY A.AGENT_ID
) AS LAST_TR_DATE,
ATR.TX_VALUE,
A.AGENT_TYPE_ID
FROM
TBLEZ_AGENT A
JOIN TBLEZ_TRANSACTION ATR ON ( A.AGENT_ID = ATR.SRC_AGENT_ID )
WHERE
A.AGENT_TYPE_ID = '3'
AND ATR.STATUS = '0'
AND ATR.TX_TYPE_ID = '5'
)
WHERE
LAST_TR_DATE < ADD_MONTHS(SYSDATE, - 3)
AND ( TX_TIME BETWEEN ADD_MONTHS(LAST_TR_DATE, - 12) AND LAST_TR_DATE )
GROUP BY
AGENT_ID,
TX_TIME;
-- UPDATE --
In response to this comment: "Hi Tejash, how to get the total day-wise to above my scenario?"
SELECT
AGENT_ID,
TX_TIME,
SUM(TX_VALUE) AS TOTALDAYWISE
FROM
(
SELECT
A.AGENT_ID,
TRUNC(ATR.TX_TIME) AS TX_TIME, -- changed TRUNC INPUT PARAMETER -- REMOVED MONTH IN TRUNC
MAX(TRUNC(ATR.TX_TIME)) OVER( -- changed TRUNC INPUT PARAMETER -- REMOVED MONTH IN TRUNC
PARTITION BY A.AGENT_ID
) AS LAST_TR_DATE,
ATR.TX_VALUE,
A.AGENT_TYPE_ID
FROM
TBLEZ_AGENT A
JOIN TBLEZ_TRANSACTION ATR ON ( A.AGENT_ID = ATR.SRC_AGENT_ID )
WHERE
A.AGENT_TYPE_ID = '3'
AND ATR.STATUS = '0'
AND ATR.TX_TYPE_ID = '5'
)
WHERE
LAST_TR_DATE < ADD_MONTHS(SYSDATE, - 3)
AND ( TRUNC(TX_TIME, 'MONTH') BETWEEN ADD_MONTHS(LAST_TR_DATE, - 12) AND LAST_TR_DATE )
-- changed TRUNC INPUT PARAMETER -- ADDED MONTH IN TRUNC
GROUP BY
AGENT_ID,
TX_TIME;
Cheers!!
SELECT
AGENT_ID,
TO_CHAR(TRUNC(TRANSC_DATE, 'MONTH'), 'MONYYYY') AS MONTHYEAR,
SUM(TX_VALUE) AS TOTALMONTHWISE
FROM
(
SELECT
A.AGENT_ID,
TRUNC(ATR.TX_TIME, 'MONTH') AS TRANSC_DATE,
MAX(TRUNC(ATR.TX_TIME, 'MONTH')) OVER(
PARTITION BY A.AGENT_ID
) AS LAST_TR_DATE,
ATR.TX_VALUE,
A.AGENT_TYPE_ID
FROM
TBLEZ_AGENT A
JOIN TBLEZ_TRANSACTION ATR ON ( A.AGENT_ID = ATR.SRC_AGENT_ID )
WHERE
A.AGENT_TYPE_ID = '3'
AND ATR.STATUS = '0'
AND ATR.TX_TYPE_ID = '5'
)
WHERE
LAST_TR_DATE > ADD_MONTHS(SYSDATE, - 3)
AND ( TRANSC_DATE BETWEEN ADD_MONTHS(LAST_TR_DATE, - 12) AND LAST_TR_DATE )
GROUP BY
AGENT_ID, TRANSC_DATE
I have a subquery which is used for an Oracle database, but I want to use an equivalent query for a SQL Server database.
I didn't figure out how to migrate the TO_TIMESTAMP(TO_CHAR(TO_DATE part and also didn't know how to handle the thing with rownums in T-SQL.
Is it even possible to migrate this query?
SELECT 0 run_id,
0 tran_id,
0 sort_id,
' ' tran_type,
10 prod_id,
72 type_id,
1 value,
TO_TIMESTAMP(TO_CHAR(TO_DATE('2016-03-18 00:00:00', 'YYYY.MM.DD HH24:MI:SS') + rownum -1, 'YYYY.MM.DD') || to_char(sw.end_time, 'HH24:MI:SS'), 'YYYY.MM.DD HH24:MI:SS') event_publication,
EXTRACT (YEAR
FROM (TO_DATE('2016-03-18 00:00:00', 'YYYY.MM.DD HH24:MI:SS') + rownum -1)) y,
EXTRACT (MONTH
FROM (TO_DATE('2016-03-18 00:00:00', 'YYYY.MM.DD HH24:MI:SS') + rownum -1)) mo,
EXTRACT (DAY
FROM (TO_DATE('2016-03-18 00:00:00', 'YYYY.MM.DD HH24:MI:SS') + rownum -1)) d,
to_number(to_char (sw.end_time, 'HH24')) h,
to_number(to_char (sw.end_time, 'MI')) mi,
to_number(to_char (sw.end_time, 'SS')) s,
0 ms
FROM all_objects ao,
settlement_win sw,
prod_def pd
WHERE pd.prod_id = 10
AND sw.country = pd.country
AND sw.commodity = pd.commodity
AND rownum <= TO_DATE('2016-03-18 23:59:00', 'YYYY.MM.DD HH24:MI:SS') -TO_DATE('2016-03-18 00:00:00', 'YYYY.MM.DD HH24:MI:SS')+1
The first thing to address is the use of rownum, which has no direct equivalent in T-SQL, but we can mimic it. For this particular query you need to recognize that the table ALL_OBJECTS is only being used to produce a number of rows; it has no other purpose in the query.
In TSQL we can generate rows using a CTE and there are many many variants of this, but for here I suggest:
;WITH
cteDigits AS (
SELECT 0 AS digit UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL
SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9
)
, cteTally AS (
SELECT
d1s.digit
+ d10s.digit * 10
+ d100s.digit * 100 /* add more like this as needed */
-- + d1000s.digit * 1000 /* add more like this as needed */
+ 1 AS rownum
FROM cteDigits d1s
CROSS JOIN cteDigits d10s
CROSS JOIN cteDigits d100s /* add more like this as needed */
--CROSS JOIN cteDigits d1000s /* add more like this as needed */
)
This will quickly spin up 1,000 rows as is, and it can be extended to produce many more rows by adding more cross joins. Note this returns a column called rownum which starts at 1, thus mimicking the Oracle rownum.
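As a quick sanity check, here is a self-contained version with a fourth cross join added (so the pattern for extending it is visible) that you can run on its own:
;WITH
cteDigits AS (
    SELECT 0 AS digit UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL
    SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9
)
, cteTally AS (
    SELECT
        d1s.digit
        + d10s.digit * 10
        + d100s.digit * 100
        + d1000s.digit * 1000   -- the extra level takes the range from 1,000 to 10,000 rows
        + 1 AS rownum
    FROM cteDigits d1s
    CROSS JOIN cteDigits d10s
    CROSS JOIN cteDigits d100s
    CROSS JOIN cteDigits d1000s
)
SELECT COUNT(*) AS row_count, MIN(rownum) AS min_rownum, MAX(rownum) AS max_rownum
FROM cteTally;   -- expect 10000, 1 and 10000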
So next you can just add some of the remaining query, like this:
SELECT
0 run_id
, 0 tran_id
, 0 sort_id
, ' ' tran_type
, 10 prod_id
, 72 type_id
, 1 value
, convert(varchar, dateadd(day, rownum - 1,'20160318'),121) event_publication
-- several missing rows here
, 0 ms
FROM cteTally
CROSS JOIN settlement_win sw
INNER JOIN prod_def pd ON sw.country = pd.country AND sw.commodity = pd.commodity
WHERE pd.prod_id = 10
AND rownum <= datediff(day,'20160318','20160318') + 1
Note that you really do not need a to_timestamp() equivalent; you just need the ability to output date and time to the maximum precision of your data, which appears to be the level of seconds.
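For instance, a minimal illustration of the available precision with CONVERT styles (style 120 stops at seconds, style 121 keeps milliseconds):
SELECT CONVERT(varchar(19), GETDATE(), 120) AS to_seconds,       -- yyyy-mm-dd hh:mi:ss
       CONVERT(varchar(23), GETDATE(), 121) AS to_milliseconds;  -- yyyy-mm-dd hh:mi:ss.mmm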
To progress further (I think) requires an understanding of the data held in the column sw.end_time. If this can be converted to the MSSQL datetime data type, then it is just a matter of adding a number of days to that value to arrive at the event_publication, and similarly, once sw.end_time is converted to a datetime data type, you can use DATEPART() to get the hours, minutes and seconds from that column. e.g.
, DATEADD(day,rownum-1,CONVERT(datetime, sw.end_time)) AS event_publication
Also, if such a calculation works then it would be possible to use an APPLY operator to simplify the overall query, something like this:
;WITH
cteDigits AS (
SELECT 0 AS digit UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL
SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9
)
, cteTally AS (
SELECT
d1s.digit
+ d10s.digit * 10
+ d100s.digit * 100 /* add more like this as needed */
-- + d1000s.digit * 1000 /* add more like this as needed */
+ 1 AS rownum
FROM cteDigits d1s
CROSS JOIN cteDigits d10s
CROSS JOIN cteDigits d100s /* add more like this as needed */
--CROSS JOIN cteDigits d1000s /* add more like this as needed */
)
SELECT
0 run_id
, 0 tran_id
, 0 sort_id
, ' ' tran_type
, 10 prod_id
, 72 type_id
, 1 value
, convert(varchar(23), CA.Event_publication, 121) Event_publication
, datepart(day,CA.Event_publication) dd
, datepart(month,CA.Event_publication) mm
, datepart(year,CA.Event_publication) yyyy
, datepart(hour,CA.Event_publication) hh24
, datepart(minute,CA.Event_publication) mi
, datepart(second,CA.Event_publication) ss
, 0 ms
FROM cteTally
CROSS JOIN settlement_win sw
INNER JOIN prod_def pd ON sw.country = pd.country AND sw.commodity = pd.commodity
CROSS APPLY (
SELECT DATEADD(day,rownum-1,CONVERT(datetime, sw.end_time)) AS event_publication ) CA
WHERE pd.prod_id = 10
AND rownum <= datediff(day,'20160318','20160318') + 1
NB: It may be necessary to include this datediff(day,'19000101','20160318') (which equals 42445) in the calculation of the event_date, e.g.
SELECT DATEADD(day,42445 + (rownum-1),CONVERT(datetime, sw.end_time)) AS event_publication
One last point is that you could use datetime2 instead of datetime if you really do need a greater degree of time precision, but there is no easily apparent requirement for that.
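If you do go down that route, a minimal comparison of the two types (the literals are just examples to show the precision difference):
SELECT CONVERT(datetime,     '2016-03-18 23:59:59.997')     AS dt,   -- datetime: ~3.33 ms resolution
       CONVERT(datetime2(7), '2016-03-18 23:59:59.9999999') AS dt2;  -- datetime2(7): 100 ns resolution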
I have the following script that shows the current month and the previous 11 months from sysdate...
select * from (
select level-1 as num,
to_char(add_months(trunc(sysdate,'MM'),- (level-1)),'MM')||'-'||to_char(add_months(trunc(sysdate,'MM'),- (level-1)),'YYYY') as dte
from dual
connect by level <= 12
)
pivot (
max(dte) as "DATE"
for num in (0 as "CURRENT", 1 as "1", 2 as "2", 3 as "3", 4 as "4", 5 as "5",6 as "6",7 as "7",8 as "8",9 as "9",10 as "10", 11 as "11"))
I want to create a table that shows delivery qty where the delivery date ('MM-YYYY') equals the date generated from the above script.
I get the delivery qty and delivery date from the following
select dp.catnr,
nvl(sum(dp.del_qty),0) del_qty
from bds_dhead#sid_to_cdsuk dh,
bds_dline#sid_to_cdsuk dp
where dp.dhead_no = dh.dhead_no
and dh.d_status = '9'
and dp.article_no = 9||'2EDVD0007'
and to_char(trunc(dh.actshpdate),'MM')||'-'||to_char(trunc(dh.actshpdate),'YYYY') = -- this is where I would like to match the result of the above script
group by dp.catnr
The results would look something like...
Any ideas would be much appreciated.
Thanks, SMORF
with date_series as (
select add_months(trunc(sysdate,'MM'), 1 - lvl) start_date,
add_months(trunc(sysdate,'MM'), 2-lvl) - 1/24/60/60 end_date
from (select level lvl from dual connect by level <= 12)
),
your_table as (
select 'catnr1' catnr, 100500 del_qty, sysdate actshpdate from dual
union all select 'catnr1' catnr, 10 del_qty, sysdate-30 actshpdate from dual
union all select 'catnr2' catnr, 15 del_qty, sysdate-60 actshpdate from dual
),
subquery as (
select to_char(ds.start_date, 'MM-YYYY') dte, t.catnr, sum(nvl(t.del_qty, 0)) del_qty
from date_series ds left join your_table t
on (t.actshpdate between ds.start_date and ds.end_date)
group by to_char(ds.start_date, 'MM-YYYY'), t.catnr
)
select * from subquery pivot (sum(del_qty) s for dte in ('11-2013' d1, '12-2013' d2, '08-2014' d10, '09-2014' d11, '10-2014' d12))
where catnr is not null;