We are trying to find continuous date ranges from a table.
The expected output is attached in the image below:
Expected Output
create column table "PS_CMP_TIME_ANALYTICS"."Temp2" (
"ID" integer,
"Period" date);
INSERT INTO "PS_CMP_TIME_ANALYTICS"."Temp2" VALUES (4, '2010-04-03');
INSERT INTO "PS_CMP_TIME_ANALYTICS"."Temp2" VALUES (5, '2010-04-07');
INSERT INTO "PS_CMP_TIME_ANALYTICS"."Temp2" VALUES (2, '2010-04-10');
INSERT INTO "PS_CMP_TIME_ANALYTICS"."Temp2" VALUES (3, '2010-04-15');
INSERT INTO "PS_CMP_TIME_ANALYTICS"."Temp2" VALUES (6, '2010-04-16');
INSERT INTO "PS_CMP_TIME_ANALYTICS"."Temp2" VALUES (7, '2010-04-17');
INSERT INTO "PS_CMP_TIME_ANALYTICS"."Temp2" VALUES (3, '2010-04-22');
INSERT INTO "PS_CMP_TIME_ANALYTICS"."Temp2" VALUES (4, '2010-04-24');
INSERT INTO "PS_CMP_TIME_ANALYTICS"."Temp2" VALUES (7, '2010-04-30');
INSERT INTO "PS_CMP_TIME_ANALYTICS"."Temp2" VALUES (2, '2010-05-01');
INSERT INTO "PS_CMP_TIME_ANALYTICS"."Temp2" VALUES (5, '2010-05-02');
INSERT INTO "PS_CMP_TIME_ANALYTICS"."Temp2" VALUES (3, '2010-05-03');
The query we tried in SAP HANA and the output we are getting are shown below:
SELECT MIN("Period") AS BeginRange,
MAX("Period") AS EndRange
FROM (
SELECT "Period",
--DATEDIFF(D, ROW_NUMBER() OVER(ORDER BY "Period"), "Period") AS DtRange
cast(ROW_NUMBER() OVER(ORDER BY "Period") as date) as xyz,
days_between(to_date(cast(ROW_NUMBER() OVER(ORDER BY "Period") as
Date),'YYYY-MM-DD'), "Period") AS DtRange
FROM "PS_CMP_TIME_ANALYTICS"."Temp2") AS dt
GROUP BY DtRange;
But we are not getting the output as expected. Find attached the output we got using SAP HANA SQL; the END date should be different.
Our Output
How can we achieve this in SAP HANA SQL?
You can use the below code in SAP HANA:
SELECT MIN("Period") as "Start_Date",MAX("Period") as "End_Date",
(days_between(Min("Period"),Max("Period")) + 1) as "Days"
FROM
(select "Period",add_days("Period" ,- ROW_NUMBER() OVER(ORDER BY "Period"))rn
from "SchemaName"."Temp2")
GROUP BY rn
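To see why this works, you can run just the inner query on its own: subtracting the row number from each date maps every date in a consecutive run to the same anchor value, so grouping by rn separates the islands. A minimal illustration against the sample data above (column names are only illustrative):
select "Period",
       ROW_NUMBER() OVER (ORDER BY "Period") as row_num,
       -- consecutive dates collapse to the same anchor date, e.g. 2010-04-15,
       -- 2010-04-16 and 2010-04-17 (row numbers 4, 5, 6) all yield 2010-04-11
       add_days("Period", - ROW_NUMBER() OVER (ORDER BY "Period")) as rn
from "PS_CMP_TIME_ANALYTICS"."Temp2"
order by "Period";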
Actually, you are looking for the upper bounds of the data islands in your ordered table column, which is similar to finding the lower limits of the gaps.
Another solution is the following SQLScript, where I used the SQL LEAD() function to calculate the next value in the date field:
with cte as (
select
"ID", "Period", LEAD("Period", 1) over (order by "Period") NextDate
from "Temp2"
)
select "ID", "Period"
from cte
where IFNULL(DAYS_BETWEEN("Period", NextDate), 9) > 1;  -- a NULL NextDate (last row) counts as a gap
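If the start of each continuous range is needed as well, LAG() can be combined with LEAD() in the same CTE. This is only a sketch under the same sample schema ("Boundary_Start" and "Boundary_End" are illustrative column names, not from the original post):
with cte as (
select
    "ID", "Period",
    LAG("Period", 1) over (order by "Period") PrevDate,
    LEAD("Period", 1) over (order by "Period") NextDate
from "Temp2"
)
select "ID", "Period",
    -- no previous date within one day: this row starts a range
    case when IFNULL(DAYS_BETWEEN(PrevDate, "Period"), 9) > 1 then 'START' end as "Boundary_Start",
    -- no following date within one day: this row ends a range
    case when IFNULL(DAYS_BETWEEN("Period", NextDate), 9) > 1 then 'END' end as "Boundary_End"
from cte
where IFNULL(DAYS_BETWEEN(PrevDate, "Period"), 9) > 1
   or IFNULL(DAYS_BETWEEN("Period", NextDate), 9) > 1;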
I also used SQL CTE expressions, which might be new syntax for many HANA developers coming from an ABAP programming background.
But I use it a lot, and it is especially powerful with multiple CTEs in a single SELECT statement.
The output of the above SQLScript query is as follows:
Related
I have a table with several index dates, and I need to build groups of rows where the interval between the first and last row does not exceed a specific value.
A reproducible example looks like this:
create table myTable (id CHAR(1), rownr INTEGER, index DATE);
insert into myTable values ('A', 1, '2018-01-01');
insert into myTable values ('A', 2, '2018-08-01');
insert into myTable values ('A', 3, '2019-03-04');
insert into myTable values ('A', 4, '2019-09-09');
insert into myTable values ('A', 5, '2020-02-15');
insert into myTable values ('B', 1, '2020-12-31');
insert into myTable values ('B', 2, '2021-03-04');
insert into myTable values ('B', 3, '2022-01-01');
insert into myTable values ('B', 4, '2022-02-20');
I need a new variable index_grp such that the index dates of all rows in a group are within 1 year. The first row whose index date is more than 1 year after the current minimum index date starts a new group (window). The desired result looks like this:
I tried window functions with a moving frame:
select id, rownr, index
, min(index) over(partition by id order by index range between interval '1' year preceding and current row) as index_grp
from myTable;
but obviously the index_grp of row A3 would be 2018-08-01, so the windows would overlap.
In Teradata-SQL there is a neat little addon to the analytical function, called "reset when", to start a new window when a given condition is met.
However, I need an ANSI-SQL solution.
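One ANSI-style option is a recursive CTE that carries the running group anchor from row to row, since the start of a group depends on previously assigned groups and cannot come from a single window function. This is only a sketch, assuming rownr numbers the rows 1, 2, 3, ... per id with no gaps (some databases omit the RECURSIVE keyword, and the column name index may need quoting):
with recursive grp as (
    -- anchor: the first row of each id starts its own group
    select id, rownr, index, index as index_grp
    from myTable
    where rownr = 1
    union all
    -- step: keep the current anchor unless the next row lies more than
    -- 1 year after it, in which case that row starts a new group
    select t.id, t.rownr, t.index,
           case when t.index > g.index_grp + interval '1' year
                then t.index
                else g.index_grp
           end
    from grp g
    join myTable t
      on t.id = g.id
     and t.rownr = g.rownr + 1
)
select id, rownr, index, index_grp
from grp
order by id, rownr;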
Given the following table
CREATE TABLE #YourTable (id int, PLU int, Siteid int, description varchar(50))
INSERT #YourTable VALUES (1, 8972, 2, 'Beer')
INSERT #YourTable VALUES (2, 8972, 3, 'cider')
INSERT #YourTable VALUES (3, 8972, 4, 'Beer')
INSERT #YourTable VALUES (4, 8973, 2, 'Vodka')
INSERT #YourTable VALUES (5, 8973, 3, 'Vodka')
INSERT #YourTable VALUES (6, 8973, 4, 'Vodka')
I am trying to write a query that would give me all rows that have multiple distinct description values against a PLU.
So in the example above I would want to return rows 1, 2 and 3, as they have both a 'cider' value and a 'beer' value for a PLU of 8972.
I thought 'GROUP BY' and 'HAVING' was the way to go but I can't seem to get it to work correctly.
SELECT P.PLU, P.Description
FROM #YourTable P
GROUP BY P.PLU, P.Description
HAVING COUNT(DISTINCT(P.DESCRIPTION)) > 1
Any help appreciated.
You shouldn't GROUP BY the description if you are doing a DISTINCT COUNT on it (then it will always be just 1). Try something like this:
SELECT P2.PLU, P2.Description
FROM #YourTable P2
WHERE P2.PLU in (
SELECT P.PLU
FROM #YourTable P
GROUP BY P.PLU
HAVING COUNT(DISTINCT(P.DESCRIPTION)) > 1
)
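A window-function alternative that returns the qualifying rows in a single pass is sketched below; since COUNT(DISTINCT ...) is not supported as a window function in SQL Server, comparing the partition's MIN and MAX description does the same job here:
SELECT id, PLU, Siteid, description
FROM (
    SELECT *,
           MIN(description) OVER (PARTITION BY PLU) AS min_desc,
           MAX(description) OVER (PARTITION BY PLU) AS max_desc
    FROM #YourTable
) p
WHERE min_desc <> max_desc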
Using the provided table, I would like to sample, say, 2 users per day so that the users assigned to the two days are different. Of course the problem I have is more sophisticated, but this simple example gives the idea.
drop table if exists test;
create table test (
user_id int,
day_of_week int);
insert into test values (1, 1);
insert into test values (1, 2);
insert into test values (2, 1);
insert into test values (2, 2);
insert into test values (3, 1);
insert into test values (3, 2);
insert into test values (4, 1);
insert into test values (4, 2);
insert into test values (5, 1);
insert into test values (5, 2);
insert into test values (6, 1);
insert into test values (6, 2);
The expected results would look like this:
create table results (
user_id int,
day_of_week int);
insert into results values (1, 1);
insert into results values (2, 1);
insert into results values (3, 2);
insert into results values (6, 2);
You can use window functions. Here is an example, although the details depend on your database (functions for random numbers vary by database):
select t.*
from (select t.*, row_number() over (partition by day_of_week order by random()) as seqnum
from test t
) t
where seqnum <= 2;
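Note that sampling each day independently can hand the same user to both days. If the two sets must be disjoint (as in the expected results), one option is to put the distinct users in a random order and deal two of them to each day. This is just a sketch, assuming the days are numbered 1 and 2 and every user has a row for both days (random() again varies by database):
select t.user_id, t.day_of_week
from test t
join (
    -- rank the distinct users in a random order
    select user_id, row_number() over (order by random()) as rn
    from (select distinct user_id from test) u
) r
  on r.user_id = t.user_id
 -- users ranked 1-2 serve day 1, users ranked 3-4 serve day 2
 and t.day_of_week = ceil(r.rn / 2.0)
where r.rn <= 4;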
I have a requirement with some business rules to implement in SQL (within a PL/SQL block): I need to evaluate those rules and, according to the result, perform the corresponding update, delete or insert into a target table.
My database model contains a "staging" and a "real" table. The real table stores records inserted in the past and the staging one contains "fresh" data coming from somewhere that needs to be merged into the real one.
Basically these are my business rules:
Delta of staging MINUS real --> insert those rows into the real table
Delta of real MINUS staging --> delete those rows from the real table
Rows whose PK is the same but any other field differs: update
(Those "MINUS" operations compare ALL the fields to determine equality and to distinguish the 3rd case)
I haven't figured out how to accomplish these tasks with a MERGE statement without overlap between the rules: any suggestions for the merge structure? Is it possible to do it all together within the same merge?
Thank you!
If I understand your task correctly, the following code should do the job:
--drop table real;
--drop table stag;
create table real (
id NUMBER,
col1 NUMBER,
col2 VARCHAR(10)
);
create table stag (
id NUMBER,
col1 NUMBER,
col2 VARCHAR(10)
);
insert into real values (1, 1, 'a');
insert into real values (2, 2, 'b');
insert into real values (3, 3, 'c');
insert into real values (4, 4, 'd');
insert into real values (5, 5, 'e');
insert into real values (6, 6, 'f');
insert into real values (7, 6, 'g'); -- PK the same but at least one column different
insert into real values (8, 7, 'h'); -- PK the same but at least one column different
insert into real values (9, 9, 'i');
insert into real values (10, 10, 'j'); -- in real but not in stag
insert into stag values (1, 1, 'a');
insert into stag values (2, 2, 'b');
insert into stag values (3, 3, 'c');
insert into stag values (4, 4, 'd');
insert into stag values (5, 5, 'e');
insert into stag values (6, 6, 'f');
insert into stag values (7, 7, 'g'); -- PK the same but at least one column different
insert into stag values (8, 8, 'g'); -- PK the same but at least one column different
insert into stag values (9, 9, 'i');
insert into stag values (11, 11, 'k'); -- in stag but not in real
merge into real
using (WITH w_to_change AS (
select *
from (select stag.*, 'I' as action from stag
minus
select real.*, 'I' as action from real
)
union (select real.*, 'D' as action from real
minus
select stag.*, 'D' as action from stag
)
)
-- for an id present with both 'I' and 'D' (a changed row), max(action) keeps 'I',
-- i.e. the staging version, since 'I' > 'D'
, w_group AS (
select id, max(action) as max_action
from w_to_change
group by id
)
select w_to_change.*
from w_to_change
join w_group
on w_to_change.id = w_group.id
and w_to_change.action = w_group.max_action
) tmp
on (real.id = tmp.id)
when matched then
update set real.col1 = tmp.col1, real.col2 = tmp.col2
delete where tmp.action = 'D'
when not matched then
insert (id, col1, col2) values (tmp.id, tmp.col1, tmp.col2);
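A quick sanity check after running the merge (just a sketch): the symmetric difference between real and stag should be empty, so both of the following queries should return no rows.
-- rows still in real that are missing or different in stag
select * from real
minus
select * from stag;

-- rows in stag that did not make it into real
select * from stag
minus
select * from real;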
I have been trying to move away from using DECODE to pivot rows in Oracle 11g, where there is a handy PIVOT function. But I may have found a limitation:
I'm trying to return 2 columns for each value in the base table. Something like:
SELECT somethingId, splitId1, splitName1, splitId2, splitName2
FROM (SELECT somethingId, splitId
FROM SOMETHING JOIN SPLIT ON ... )
PIVOT ( MAX(splitId) FOR displayOrder IN (1 AS splitId1, 2 AS splitId2),
MAX(splitName) FOR displayOrder IN (1 AS splitName1, 2 as splitName2)
)
I can do this with DECODE, but I can't wrestle the syntax to let me do it with PIVOT. Is this even possible? Seems like it wouldn't be too hard for the function to handle.
Edit: is StackOverflow maybe not the right Overflow for SQL questions?
Edit: anyone out there?
From oracle-developer.net it would appear that it can be done like this:
SELECT somethingId, split1_id AS splitId1, split1_name AS splitName1,
       split2_id AS splitId2, split2_name AS splitName2
FROM (SELECT somethingId, displayOrder, splitId, splitName
      FROM SOMETHING JOIN SPLIT ON ... )
PIVOT ( MAX(splitId) AS id,
        MAX(splitName) AS name
        FOR displayOrder IN (1 AS split1, 2 AS split2)
      )
I'm not sure from what you provided what the data looks like or exactly what you would like. Perhaps if you posted the decode version of the query that returns the data you are looking for and/or the definition for the source data, we could better answer your question. Something like this would be helpful:
create table something (somethingId Number(3), displayOrder Number(3)
, splitID Number(3));
insert into something values (1, 1, 10);
insert into something values (2, 1, 11);
insert into something values (3, 1, 12);
insert into something values (4, 1, 13);
insert into something values (5, 2, 14);
insert into something values (6, 2, 15);
insert into something values (7, 2, 16);
create table split (SplitID Number(3), SplitName Varchar2(30));
insert into split values (10, 'Bob');
insert into split values (11, 'Carrie');
insert into split values (12, 'Alice');
insert into split values (13, 'Timothy');
insert into split values (14, 'Sue');
insert into split values (15, 'Peter');
insert into split values (16, 'Adam');
SELECT *
FROM (
SELECT somethingID, displayOrder, so.SplitID, sp.splitname
FROM SOMETHING so JOIN SPLIT sp ON so.splitID = sp.SplitID
)
PIVOT ( MAX(splitId) id, MAX(splitName) name
FOR (displayOrder, displayOrder) IN ((1, 1) AS split, (2, 2) as splitname)
);
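With the same sample tables, the multi-measure pivot can also be written with a single FOR column; Oracle then builds the output column names from the IN alias plus the measure alias (here D1_ID, D1_NAME, D2_ID, D2_NAME). A sketch:
SELECT *
FROM (
    SELECT somethingID, displayOrder, so.SplitID, sp.splitname
    FROM something so JOIN split sp ON so.splitID = sp.SplitID
)
PIVOT ( MAX(splitId) id, MAX(splitName) name
        FOR displayOrder IN (1 AS d1, 2 AS d2)
      );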