Cross apply to fill down values with multiple columns - SQL

I have a table with a few columns. I want to fill down values to replace nulls, but this is complicated by the additional columns. Here is a sample of what I have:
date     id1  id2  id3  id4  value
1/1/14   a    1    1    1    1.2
1/2/14   a    1    1    1    NULL
1/8/14   a    1    1    1    2.3
1/1/14   a    2    1    1    10.1
1/2/14   a    2    1    1    12.3
1/17/14  a    2    1    1    NULL
1/18/14  a    2    1    1    10.8
1/1/14   a    2    3    1    100.3
1/2/14   a    2    3    1    NULL
1/6/14   a    2    3    1    110.4
I want to copy down the value while it remains within a "group" of id1-id4. For example, all of the "a-1-1-1" rows should be isolated from the "a-2-1-1" rows in terms of which values are copied down. The output I need is:
date     id1  id2  id3  id4  value
1/1/14   a    1    1    1    1.2
1/2/14   a    1    1    1    1.2
1/8/14   a    1    1    1    2.3
1/1/14   a    2    1    1    10.1
1/2/14   a    2    1    1    12.3
1/17/14  a    2    1    1    12.3
1/18/14  a    2    1    1    10.8
1/1/14   a    2    3    1    100.3
1/2/14   a    2    3    1    100.3
1/6/14   a    2    3    1    110.4
I can do this for a single column using CROSS APPLY but the syntax for the multiple columns is confusing me. The SQL to generate the temp data is:
CREATE TABLE #test
(
    date DATETIME
    ,id1 VARCHAR(1)
    ,id2 INT
    ,id3 INT
    ,id4 INT
    ,value FLOAT
);
INSERT INTO #test VALUES
('2014-01-01','a','1','1','1','1.2')
,('2014-01-02','a','1','1','1',NULL)
,('2014-01-08','a','1','1','1','2.3')
,('2014-01-01','a','2','1','1','10.1')
,('2014-01-02','a','2','1','1','12.3')
,('2014-01-17','a','2','1','1',NULL)
,('2014-01-18','a','2','1','1','10.8')
,('2014-01-01','a','2','3','1','100.3')
,('2014-01-02','a','2','3','1',NULL)
,('2014-01-06','a','2','3','1','110.4')
;
SELECT * FROM #test;

You can use apply for this:
select t.*, coalesce(t.value, tprev.value) as value
from #test t outer apply
     (select top 1 t2.value
      from #test t2
      where t2.id1 = t.id1 and t2.id2 = t.id2 and t2.id3 = t.id3 and t2.id4 = t.id4 and
            t2.date < t.date and t2.value is not null
      order by t2.date desc
     ) tprev;
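Not part of the original answer, but as a hedged alternative sketch on SQL Server 2012 or later, the same fill-down can be done with window functions instead of a correlated subquery (table and column names match the sample above):
-- Sketch only: grp carries the date of the most recent non-NULL value within each id1-id4 group,
-- so every NULL row shares a partition with exactly one non-NULL row, which MAX() copies down.
SELECT date, id1, id2, id3, id4,
       COALESCE(value,
                MAX(value) OVER (PARTITION BY id1, id2, id3, id4, grp)) AS value
FROM (
    SELECT t.*,
           MAX(CASE WHEN value IS NOT NULL THEN date END)
               OVER (PARTITION BY id1, id2, id3, id4
                     ORDER BY date
                     ROWS UNBOUNDED PRECEDING) AS grp
    FROM #test t
) t;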

Related

How to count numbers which are defined in another table, also showing zero counts

This is the current situation:
Table1
key  some_id  date      class
1    1        1.1.2000  2
1    2        1.1.2000  2
2    1        1.1.1999  3
...  ...      ...       ...
I'm counting the classes and providing the information through a view, using the following select statement:
SELECT key, date, class, count(class) as cnt
FROM table1
GROUP BY key, date, class
The result would be:
key  date      class  cnt
1    1.1.2000  2      2
2    1.1.1999  3      1
...  ...       ...    ...
but now there is another table which includes all possible class-codes, e.g.
parameter_key  class_code
1              1
1              2
1              3
2              1
...            ...
For my view I'm only querying data for parameter_key 1. And the view now needs to show all possible class_codes, also if the count would be 0.
So my desired result table is:
key  date      class  cnt
1    1.1.2000  1      0
1    1.1.2000  2      2
1    1.1.2000  3      0
2    1.1.1999  1      0
2    1.1.1999  2      0
2    1.1.1999  3      1
...  ...       ...    ...
but I just can't get my head around how to do this. I've tried to add a right join like this, but that does not change anything (probably because I join on the class column and then aggregate, so classes with nothing to count never show up?):
SELECT key, date, class, count(class) as cnt
FROM table1
RIGHT JOIN table2 on table1.class = table2.class and table2.parameter_key = 1
GROUP BY key, date, class
Any idea on how to achieve the desired result table?
Use a PARTITIONed join:
SELECT t2.parameter_key AS key,
t1."DATE",
t2.class_code AS class,
count(t1.class) as cnt
FROM table2 t2
LEFT OUTER JOIN table1 t1
PARTITION BY (t1."DATE")
ON (t1.class = t2.class_code AND t1.key = t2.parameter_key)
WHERE t2.parameter_key = 1
GROUP BY
t2.parameter_key,
t1."DATE",
t2.class_code
Which, for the sample data:
CREATE TABLE table1 (key, some_id, "DATE", class) AS
SELECT 1, 1, DATE '2000-01-01', 2 FROM DUAL UNION ALL
SELECT 1, 2, DATE '2000-01-01', 2 FROM DUAL UNION ALL
SELECT 2, 1, DATE '1999-01-01', 3 FROM DUAL;
CREATE TABLE table2 (parameter_key, class_code) AS
SELECT 1, 1 FROM DUAL UNION ALL
SELECT 1, 2 FROM DUAL UNION ALL
SELECT 1, 3 FROM DUAL UNION ALL
SELECT 2, 1 FROM DUAL;
Outputs:
KEY  DATE                 CLASS  CNT
1    1999-01-01 00:00:00  1      0
1    1999-01-01 00:00:00  2      0
1    1999-01-01 00:00:00  3      0
1    2000-01-01 00:00:00  1      0
1    2000-01-01 00:00:00  2      2
1    2000-01-01 00:00:00  3      0
Or, depending on how you want to manage the join conditions:
SELECT t1.key,
t1."DATE",
t2.class_code AS class,
count(t1.class) as cnt
FROM table2 t2
LEFT OUTER JOIN table1 t1
PARTITION BY (t1.key, t1."DATE")
ON (t1.class = t2.class_code)
WHERE t2.parameter_key = 1
GROUP BY
t1.key,
t1."DATE",
t2.class_code
Which outputs:
KEY  DATE                 CLASS  CNT
1    2000-01-01 00:00:00  1      0
1    2000-01-01 00:00:00  2      2
1    2000-01-01 00:00:00  3      0
2    1999-01-01 00:00:00  1      0
2    1999-01-01 00:00:00  2      0
2    1999-01-01 00:00:00  3      1
db<>fiddle here
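As a hedged aside (not from the original answer): the PARTITION BY outer-join clause is Oracle-specific. A more portable sketch that produces the same result as the second query is to cross-join the distinct (key, date) pairs with the class codes for parameter_key 1, then left-join the facts back in:
-- Portable alternative sketch; same tables and columns as above, not the original answer's approach.
SELECT d.key,
       d."DATE",
       c.class_code AS class,
       COUNT(t1.class) AS cnt
FROM (SELECT DISTINCT key, "DATE" FROM table1) d
CROSS JOIN (SELECT class_code FROM table2 WHERE parameter_key = 1) c
LEFT OUTER JOIN table1 t1
       ON t1.key = d.key
      AND t1."DATE" = d."DATE"
      AND t1.class = c.class_code
GROUP BY d.key, d."DATE", c.class_code
ORDER BY d.key, c.class_code;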

How to update a column based on values of other columns

I have a table as below:
row_wid  id  code  sub_code  item_nbr  orc_cnt  part_cnt  variance  reporting_date  var_start_date
1        1   ABC   PQR       23AB      0        1         1         11-10-2019      NULL
2        1   ABC   PQR       23AB      0        1         1         12-10-2019      NULL
3        1   ABC   PQR       23AB      1        1         0         13-10-2019      NULL
4        1   ABC   PQR       23AB      1        2         1         14-10-2019      NULL
5        1   ABC   PQR       23AB      1        3         2         15-10-2019      NULL
I have to update the var_start_date column with min(reporting_date) for each combination of id, code, sub_code and item_nbr, but only until the variance field becomes zero. A row with variance = 0 should keep a NULL var_start_date, and the rows after it should get the next min(reporting_date). FYI, variance is calculated as part_cnt - orc_cnt.
So my output should look like this:
row_wid  id  code  sub_code  item_nbr  orc_cnt  part_cnt  variance  reporting_date  var_start_date
1        1   ABC   PQR       23AB      0        1         1         11-10-2019      11-10-2019
2        1   ABC   PQR       23AB      0        1         1         12-10-2019      11-10-2019
3        1   ABC   PQR       23AB      1        1         0         13-10-2019      NULL
4        1   ABC   PQR       23AB      1        2         1         14-10-2019      14-10-2019
5        1   ABC   PQR       23AB      1        3         2         15-10-2019      14-10-2019
I am trying to use the query below to divide the data into sets:
SELECT DISTINCT MIN(reporting_date)
OVER (partition by id, code,sub_code,item_nbr ORDER BY row_wid ),
RANK() OVER (partition by id, code,sub_code,item_nbr ORDER BY row_wid)
AS rnk,id, code,sub_code,item_nbr,orc_cnt,part_cnt,variance,row_wid
FROM TABLE T1
But I don't know how to include the variance field to split the sets.
I would suggest:
select t.*,
       (case when variance <> 0
             then min(case when variance <> 0 then reporting_date end) over
                      (partition by id, code, sub_code, item_nbr, grouping)
        end) as new_reporting_date
from (select t.*,
             sum(case when variance = 0 then 1 else 0 end) over
                 (partition by id, code, sub_code, item_nbr
                  order by reporting_date) as grouping
      from t
     ) t;
Note that this does not use a JOIN. It should be more efficient than an answer that does.
Try as below:
SELECT T.*,
       CASE WHEN T.variance = 0 THEN NULL
            ELSE MIN(reporting_date) OVER (PARTITION BY T1.[Rank] ORDER BY T1.[Rank])
       END AS New_var_start_date
FROM mytbl T
LEFT JOIN (
    SELECT row_wid, variance,
           COUNT(CASE variance WHEN 0 THEN 1 END)
               OVER (ORDER BY row_wid ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) + 1 AS [Rank]
    FROM mytbl
) T1 ON T.row_wid = T1.row_wid
SQL FIDDLE DEMO
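Both answers build the values with a SELECT; as a hedged follow-up sketch (assuming the table really is called mytbl, as in the second answer, and that row_wid uniquely identifies a row), the first approach can be folded into an actual UPDATE:
-- Sketch only: compute the group start dates in a derived table, then join back on row_wid.
UPDATE t
SET t.var_start_date = c.new_start
FROM mytbl t
JOIN (
    SELECT row_wid,
           CASE WHEN variance <> 0
                THEN MIN(CASE WHEN variance <> 0 THEN reporting_date END)
                         OVER (PARTITION BY id, code, sub_code, item_nbr, grp)
           END AS new_start
    FROM (
        SELECT *,
               SUM(CASE WHEN variance = 0 THEN 1 ELSE 0 END)
                   OVER (PARTITION BY id, code, sub_code, item_nbr
                         ORDER BY reporting_date) AS grp
        FROM mytbl
    ) s
) c ON c.row_wid = t.row_wid;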

View Level Number on Recursive Table SQL

I have the following table:
--------------------------------------------
ID ParentID Item
--------------------------------------------
1 root
2 1 AA
3 1 BB
4 1 CC
5 1 DD
6 2 A1
7 6 A11
ff.
I want to have the following result:
ID ParentID Item Level
---------------------------------------------
1 root 0
2 1 AA 1
3 1 BB 1
4 1 CC 1
5 1 DD 1
6 2 A1 2
7 6 A11 3
ff.
What is the best way to create the new Level column? Should I add a column with a formula, something like a computed column, or maybe a function?
How can I achieve that in T-SQL?
You would use a recursive CTE:
with cte as (
select t.id, t.parentid, t.item, 0 as lvl
from t
where parentid is null
union all
select t.id, t.parentid, t.item, cte.lvl + 1 as lvl
from t join
cte
on t.parentid = cte.id
)
select *
from cte;
Storing this data in the table is . . . cumbersome, because you need to keep it updated. You might want to just calculate it on-the-fly when you need it.
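To make that concrete, here is a hedged, self-contained sketch of the recursive CTE against a temp table built from the sample rows (table and column names are illustrative, and the root's ParentID is assumed to be NULL):
-- Illustrative names only; assumes the root row has a NULL parentid.
CREATE TABLE #items (id INT, parentid INT, item VARCHAR(10));
INSERT INTO #items VALUES
    (1, NULL, 'root'), (2, 1, 'AA'), (3, 1, 'BB'), (4, 1, 'CC'),
    (5, 1, 'DD'), (6, 2, 'A1'), (7, 6, 'A11');

WITH cte AS (
    SELECT id, parentid, item, 0 AS lvl
    FROM #items
    WHERE parentid IS NULL            -- anchor: the root row(s)
    UNION ALL
    SELECT i.id, i.parentid, i.item, cte.lvl + 1
    FROM #items i
    JOIN cte ON i.parentid = cte.id   -- walk down one level per recursion step
)
SELECT *
FROM cte
ORDER BY id
OPTION (MAXRECURSION 0);              -- lift the default 100-level limit for deep trees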
Simply using DENSE_RANK:
CREATE TABLE #YourTable (ID INT, ParentID VARCHAR(10), Item VARCHAR(10))
INSERT into #YourTable VALUES(1,' ','root')
INSERT into #YourTable VALUES(2,'1','AA')
INSERT into #YourTable VALUES(3,'1','BB')
INSERT into #YourTable VALUES(4,'1','CC')
INSERT into #YourTable VALUES(5,'1','DD')
INSERT into #YourTable VALUES(6,'2','A1')
INSERT into #YourTable VALUES(7,'6','A11')
SELECT ID,ParentID,Item
,(DENSE_RANK() OVER(ORDER BY ISNULL(NULLIF(ParentID,''),0)))-1 [Level]
FROM #YourTable
Output:
ID ParentID Item Level
1 root 0
2 1 AA 1
3 1 BB 1
4 1 CC 1
5 1 DD 1
6 2 A1 2
7 6 A11 3
Hope it helps you.

Returning a list of rows that are unique by type and returning the first pass

ID UserID TYPE PASS DATE
1 12 TRACK1 1 20140101
2 32 TRACK2 0 20140105
3 43 PULL1 1 20140105
4 66 PULL2 1 20140110
5 54 PULL1 0 20140119
6 54 TRACK1 0 20140120
Users can take multiple attempts at a 'Type', so they can take 'TRACK1' multiple times, or 'PULL2' multiple times.
I want to return the first PASS (1) for each unique 'Type' for each user.
I want to return both pass and fail rows, but only the first instance of a pass or fail.
How can I do this?
Sample table and output:
ID UserID TYPE PASS DATE
1 12 TRACK1 1 20140101
2 12 TRACK2 0 20140105
3 12 PULL1 1 20140105
4 12 PULL2 1 20140110
5 12 PULL1 0 20140119
6 12 TRACK1 0 20140120
7 12 TRACK1 0 20140121
8 12 PULL1 1 20140115
9 12 TRACK2 0 20140125
output:
1 12 TRACK1 1 20140101
2 12 TRACK2 0 20140105
3 12 PULL1 1 20140105
4 12 PULL2 1 20140110
select t1.*
from UserTrackStatus t1
join
(
select userid,
type,
min(date) as min_date
from UserTrackStatus
group by userid, type
) t2 on t1.userid = t2.userid and t1.type = t2.type and t1.date = t2.min_date
SQLFiddle
Just do it with a CTE and ROW_NUMBER to identify which record comes first:
;WITH cte AS (
SELECT *
,ROW_NUMBER() OVER ( PARTITION BY [UserID], [Type] ORDER BY [date] ASC ) AS rn
FROM MyTable
WHERE PASS = 1
)
SELECT *
FROM cte
WHERE rn = 1
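One hedged note, not part of the original answer: the sample output above also keeps a failed first attempt (TRACK2 with PASS = 0), so if the goal is the first attempt per UserID and Type regardless of PASS, drop the WHERE PASS = 1 filter and keep only the rn test:
-- Sketch only: first row per (UserID, Type) by date, whether it passed or not.
;WITH cte AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY [UserID], [Type] ORDER BY [date] ASC) AS rn
    FROM MyTable
)
SELECT ID, [UserID], [Type], PASS, [date]
FROM cte
WHERE rn = 1
ORDER BY ID;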

DB2 SQL group by / count distinct column values

If I have a table with values like this:
ID SUBID FLAG
-----------------
1 1 1
1 2 (null)
2 3 1
2 3 (null)
3 4 1
4 5 1
4 6 (null)
5 7 0
6 8 (null)
7 9 1
and I would like to get all the IDs where 'FLAG' is only ever set to 1, so in this case the query would return:
ID SUBID FLAG
-----------------
3 4 1
7 9 1
How can I achieve this?
Try this:
SELECT * FROM flags where flag=1
and ID NOT in( SELECT ID FROM flags where flag !=1 OR flag IS NULL)
I don't have a DB2 instance to test on, but this might work:
select t1.id, t1.subid, t1.flag
from yourtable t1
inner join
(
select id
from yourtable
group by id
having count(id) = 1
) t2
on t1.id = t2.id
where t1.flag = 1;
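A further hedged sketch (assuming the table is named flags, as in the first answer): this keeps every id whose rows are all FLAG = 1, counting NULLs as failures, and it also works when a qualifying id has more than one row:
-- Sketch only: an id qualifies when every one of its rows has flag = 1.
SELECT f.id, f.subid, f.flag
FROM flags f
JOIN (
    SELECT id
    FROM flags
    GROUP BY id
    HAVING COUNT(*) = COUNT(CASE WHEN flag = 1 THEN 1 END)
) ok ON ok.id = f.id;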