How to set a conditional row number based on specific values from the table - SQL

I have a query that returns a table, and based on a value (if it exists) I want to set the row number.
I have a solution, but it looks long and I think it could be simpler, with less code. Below is a sample with the expected results:
If the query returns a Client with NULL:
----------------------
Process | Client
A       | NULL
A       | B
A       | B
A       | B
A       | C
A       | C
A       | C
Output should be:
----------------------
Process | Client | RowNumber
A       | NULL   | 1
A       | B      | 2
A       | B      | 3
A       | B      | 4
A       | C      | 2
A       | C      | 3
A       | C      | 4
If the query returns no NULL Client:
----------------------
Process | Client
A       | B
A       | B
A       | B
A       | C
A       | C
A       | C
Output should be:
----------------------
Process | Client | RowNumber
A       | B      | 1
A       | B      | 2
A       | B      | 3
A       | C      | 1
A       | C      | 2
A       | C      | 3

I'm not sure whether NULL should always be treated as 'B', but if so, you can handle it like this:
select t.*,
       row_number() over (partition by process, coalesce(client, 'B')
                          order by (select null)) as RowNumber
from t;
-- no WHERE clause: COALESCE folds the NULL row into the 'B' partition
Oh, I see, you are not setting the NULL to 'B' but adding the number of NULLs to the other values. That is also quite simple:
select t.*,
       (row_number() over (partition by process, client order by (select null)) +
        case when client is null then 0
             else sum(case when client is null then 1 else 0 end) over ()
        end
       ) as RowNumber
from t;
-- no WHERE clause here either: filtering out the NULL row would also zero out
-- the windowed SUM, because WHERE is evaluated before window functions
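A slightly shorter variant of the same idea (just a sketch, using the same table t as above): count the NULL clients per process with a windowed COUNT and use that as the offset for the non-NULL clients. When there are no NULL clients the offset is 0, so the same query also covers the second sample.
select t.*,
       row_number() over (partition by process, client order by (select null))
         + case when client is null then 0
                else count(case when client is null then 1 end)
                         over (partition by process)
           end as RowNumber
from t;
-- count(case when client is null then 1 end) counts only the NULL-client rows
-- in each process, because COUNT(expr) ignores NULL expressions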

DROP TABLE IF EXISTS mytable;
CREATE TABLE mytable (Process char(1), Client char(1));
INSERT INTO mytable VALUES
('A', NULL),
('A', 'B'),
('A', 'B'),
('A', 'B'),
('A', 'C'),
('A', 'C'),
('A', 'C');

-- with a NULL value: add a fixed offset of 1 to every non-NULL client
-- (matches the sample, which contains exactly one NULL row)
select
    Process,
    Client,
    ROW_NUMBER() OVER (partition by Process, Client order by (select null))
        + CASE WHEN Client is null THEN 0 ELSE 1 END AS R
from mytable;

-- without a NULL value
select
    Process,
    Client,
    ROW_NUMBER() OVER (partition by Process, Client order by (select null)) AS R
from mytable
where Client is not null;

…
declare @t table (process varchar(10), client varchar(10));
insert into @t (process, client)
values
('A', null),
('A', 'B'), ('A', 'B'), ('A', 'B'),
('A', 'C'), ('A', 'C'),
('A', ''), ('A', ''), ('A', ' '), ('A', ' '),
('A', 'ZXY'), ('A', 'ZXY'),
('X', 'B'), ('X', 'B'), ('X', 'B'),
('X', 'C'), ('X', 'C');
select *,
    row_number() over (partition by process, client order by client)
    -- if there is a NULL client in the process then add 1 to every non-NULL client
    + case when client is not null
                and min(case when client is null then 0 else 1 end) over (partition by process) = 0
           then 1 else 0 end
    -- alternative NULL test:
    --+ case when client is not null and min(isnull(ASCII(client + '.'), 0)) over (partition by process) = 0 then 1 else 0 end
    as rownumber
from
(
    select *
    from @t
    --where client is not null
) as t;

Related

Condense or merge rows with null values not using group by

Let's say I have a select which returns the following data:
select nr, name, val_1, val_2, val_3
from table
Nr. | Name | Value 1 | Value 2 | Value 3
-----+------------+---------+---------+---------
1 | Max | 123 | NULL | NULL
1 | Max | NULL | 456 | NULL
1 | Max | NULL | NULL | 789
9 | Lisa | 1 | NULL | NULL
9 | Lisa | 3 | NULL | NULL
9 | Lisa | NULL | NULL | Hello
9 | Lisa | 9 | NULL | NULL
I'd like to condense the rows down to the bare minimum.
I want the following result:
Nr. | Name | Value 1 | Value 2 | Value 3
-----+------------+---------+---------+---------
1 | Max | 123 | 456 | 789
9 | Lisa | 1 | NULL | Hello
9 | Lisa | 3 | NULL | NULL
9 | Lisa | 9 | NULL | NULL
For condensing the rows with Max (Nr. 1) a group by of the max values would help.
select nr, name, max(val_1), max(val_2), max(val_3)
from table
group by nr, name
But I am unsure how to get the desired result for Lisa (Nr. 9). One of Lisa's rows contains a value in the Value 3 column; in this example it should be condensed with the first row that matches Nr and Name and has a NULL value in Value 3.
I'm thankful for every input!
The basic principle is the same as in Vladimir's solution. This one uses UNPIVOT and PIVOT:
with cte as
(
    select nr, name, col, val,
           rn = row_number() over (partition by nr, name, col order by val)
    from [table]
    unpivot
    (
        val
        for col in (val_1, val_2, val_3)
    ) u
)
select *
from
(
    select nr, name, rn, col, val
    from cte
) d
pivot
(
    max(val)
    for col in ([val_1], [val_2], [val_3])
) p
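For reference, the unpivoting CTE can also be run on its own (same [table] as above) to see the intermediate shape the PIVOT works from; UNPIVOT drops the NULL entries automatically, which is what lets the per-column renumbering line up:
select nr, name, col, val,
       rn = row_number() over (partition by nr, name, col order by val)
from [table]
unpivot
(
    val
    for col in (val_1, val_2, val_3)
) u
order by nr, col, rn;   -- one row per non-NULL value, renumbered within each column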
Here is one way to do it. Assign a row number to each column separately, sorting so that NULLs come last, then join the rows back together on these row numbers and remove rows where all values are NULL.
Run just the CTE first and examine the intermediate result to understand how it works (a ready-to-run snippet for that follows the result below).
Sample data
DECLARE @T TABLE (Nr varchar(10), Name varchar(10), V1 varchar(10), V2 varchar(10), V3 varchar(10));
INSERT INTO @T VALUES
('1', 'Max ', '123' , NULL , NULL ),
('1', 'Max ', NULL , '456', NULL ),
('1', 'Max ', NULL , NULL , '789'),
('9', 'Lisa', '1' , NULL , NULL ),
('9', 'Lisa', '3' , NULL , NULL ),
('9', 'Lisa', NULL , NULL , 'Hello'),
('9', 'Lisa', '9' , NULL , NULL );
Query
WITH CTE
AS
(
SELECT
Nr
,Name
,V1
,V2
,V3
-- here we use CASE WHEN V1 IS NULL THEN 1 ELSE 0 END to put NULLs last
,ROW_NUMBER() OVER (PARTITION BY Nr ORDER BY CASE WHEN V1 IS NULL THEN 1 ELSE 0 END, V1) AS rn1
,ROW_NUMBER() OVER (PARTITION BY Nr ORDER BY CASE WHEN V2 IS NULL THEN 1 ELSE 0 END, V2) AS rn2
,ROW_NUMBER() OVER (PARTITION BY Nr ORDER BY CASE WHEN V3 IS NULL THEN 1 ELSE 0 END, V3) AS rn3
FROM @T AS T
)
SELECT
T1.Nr
,T1.Name
,T1.V1
,T2.V2
,T3.V3
FROM
CTE AS T1
INNER JOIN CTE AS T2 ON T2.Nr = T1.Nr AND T2.rn2 = T1.rn1
INNER JOIN CTE AS T3 ON T3.Nr = T1.Nr AND T3.rn3 = T1.rn1
WHERE
T1.V1 IS NOT NULL
OR T2.V2 IS NOT NULL
OR T3.V3 IS NOT NULL
ORDER BY
T1.Nr, T1.rn1
;
Result
+----+------+-----+------+-------+
| Nr | Name | V1 | V2 | V3 |
+----+------+-----+------+-------+
| 1 | Max | 123 | 456 | 789 |
| 9 | Lisa | 1 | NULL | Hello |
| 9 | Lisa | 3 | NULL | NULL |
| 9 | Lisa | 9 | NULL | NULL |
+----+------+-----+------+-------+
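Following the "run just the CTE first" advice, here is a minimal way to look at that intermediate result on its own (same @T sample data, same batch, CTE repeated so the statement is self-contained):
WITH CTE AS
(
    SELECT
        Nr, Name, V1, V2, V3
        ,ROW_NUMBER() OVER (PARTITION BY Nr ORDER BY CASE WHEN V1 IS NULL THEN 1 ELSE 0 END, V1) AS rn1
        ,ROW_NUMBER() OVER (PARTITION BY Nr ORDER BY CASE WHEN V2 IS NULL THEN 1 ELSE 0 END, V2) AS rn2
        ,ROW_NUMBER() OVER (PARTITION BY Nr ORDER BY CASE WHEN V3 IS NULL THEN 1 ELSE 0 END, V3) AS rn3
    FROM @T AS T
)
SELECT *
FROM CTE
ORDER BY Nr, rn1;   -- the main query stitches together rows where rn1 = rn2 = rn3 within each Nr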

How to insert a value from another row into a column - SQL Server

I'm working on SQL for a project, and I need to update Wh_A and Wh_B based on some rules.
This is table_A:
| Code | Warehouse | StockOnHand | Wh_A | Wh_B
----------------------------------------------------
| 001  | A         | 10          | NULL | NULL
| 001  | B         | 20          | NULL | NULL
| 003  | A         | 30          | NULL | NULL
| 003  | B         | 40          | NULL | NULL
I want to populate the columns Wh_A and Wh_B. For example, let's work on the first row: Wh_A should get the value of the StockOnHand column, since this row belongs to warehouse "A". That is easy to do using an UPDATE with a CASE WHEN statement.
What is difficult for me is to populate the column Wh_B with the column StockOnHand from the second row.
The table should be like this at the end.
| Code | Warehouse | StockOnHand | Wh_A | Wh_B
----------------------------------------------------
| 001  | A         | 10          | 10   | 20
| 001  | B         | 20          | 10   | 20
| 003  | A         | 30          | 30   | 40
| 003  | B         | 40          | 30   | 40
This is what I have done so far...
update Table_A set
    Wh_A = (case when warehouse = 'A' then stockOnHand
                 when warehouse = 'B' then ... end),
    Wh_B = (case when warehouse = 'A' then stockOnHand
                 when warehouse = 'B' then ... end)
You can use window functions:
select
    code,
    warehouse,
    stockOnHand,
    max(case when warehouse = 'A' then stockOnHand end)
        over (partition by code) wh_a,
    max(case when warehouse = 'B' then stockOnHand end)
        over (partition by code) wh_b
from table_a
It is easy to turn this into an update query using an updatable CTE:
with cte as (
    select
        wh_a,
        wh_b,
        max(case when warehouse = 'A' then stockOnHand end)
            over (partition by code) new_wh_a,
        max(case when warehouse = 'B' then stockOnHand end)
            over (partition by code) new_wh_b
    from table_a
)
update cte set wh_a = new_wh_a, wh_b = new_wh_b
declare @Table table (
    Code char(3) not null,
    Warehouse char(1) not null,
    StockOnHand int not null,
    Wh_A int null,
    Wh_B int null,
    primary key (Code, Warehouse)
);
insert into @Table (Code, Warehouse, StockOnHand)
values
('001', 'A', 10),
('001', 'B', 20),
('003', 'A', 30),
('003', 'B', 40);
update t set
    Wh_A = (select top 1 StockOnHand from @Table a where a.Warehouse = 'A' and a.Code = t.Code),
    Wh_B = (select top 1 StockOnHand from @Table b where b.Warehouse = 'B' and b.Code = t.Code)
from @Table t;
select * from @Table;
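For completeness, a sketch of the earlier answer's updatable-CTE approach run against this same @Table sample data (it must be in the same batch as the declaration above):
with cte as (
    select Wh_A, Wh_B,
           max(case when Warehouse = 'A' then StockOnHand end) over (partition by Code) as new_wh_a,
           max(case when Warehouse = 'B' then StockOnHand end) over (partition by Code) as new_wh_b
    from @Table
)
update cte set Wh_A = new_wh_a, Wh_B = new_wh_b;

select * from @Table;  -- Wh_A and Wh_B are now populated per Code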

select SQL rows depending on non-identical results of a query

I am trying to select the columns which are relevant, without knowing in advance which ones they are.
I do a:
select *
from table
where id = '1'
The result I get is maybe 10 rows and 100+ columns:
|id | column1 | column2 | column3 | column4 | column5 |....
| 1 | a | b | c | d | e |....
| 1 | a | XXX | c | d | e |....
| 1 | a | b | c | YYY | e |....
| 1 | a | b | c | d | e |....
For every row, one (or more) of the column values is different, but I don't know which one(s).
Is there any way I can create a temp table with the first query and then do a subquery to display only the columns which don't have the same value in all the rows?
So the result would look like this:
|id | column2 | column4 |
| 1 | b | d |
| 1 | XXX | d |
| 1 | b | YYY |
| 1 | b | d |
Since columns 2 and 4 were the ones with non-identical data, these are the ones I want to see.
As already mentioned, this would require dynamic SQL.
Maybe this will help you:
SELECT id,
       COUNT(DISTINCT column_1) * 1.0 / COUNT(*) AS relevance_column_1,
       COUNT(DISTINCT column_2) * 1.0 / COUNT(*) AS relevance_column_2,
       COUNT(DISTINCT column_3) * 1.0 / COUNT(*) AS relevance_column_3
       -- ... and so on for the remaining columns
INTO Column_Relevance        -- creates the Column_Relevance table
FROM your_table              -- placeholder for the real table name
GROUP BY id;
A column that never changes for an id ends up with the minimum possible relevance (one distinct value divided by the row count for that id); any column with a higher relevance contains differing values, and those are the ones you want to keep. You can build the whole statement in Excel in a few minutes.
Once the table is created, build a select statement based on the column relevance that returns the real column where the values differ and the string 'ignore' where they don't (a sketch of this follows below).
This is far from perfect, but maybe it helps you a little.
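A rough sketch of that second step, assuming character columns, the Column_Relevance table built above, and a placeholder source table named your_table (not from the original post):
SELECT t.id,
       IIF(r.relevance_column_1 > 1.0 / n.row_cnt, t.column_1, 'ignore') AS column_1,
       IIF(r.relevance_column_2 > 1.0 / n.row_cnt, t.column_2, 'ignore') AS column_2,
       IIF(r.relevance_column_3 > 1.0 / n.row_cnt, t.column_3, 'ignore') AS column_3
FROM your_table AS t
JOIN Column_Relevance AS r ON r.id = t.id
JOIN (SELECT id, COUNT(*) AS row_cnt FROM your_table GROUP BY id) AS n ON n.id = t.id
WHERE t.id = '1';
-- 'ignore' appears for every column whose values are identical across the id's rows,
-- because such a column's relevance is exactly 1.0 / row_cnt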
Here is a way you could use some aggregation to help. You said you have nearly 100 columns, so this could take some effort to create, but once it is done it would be fine. And this is just for analysis. You could utilize sys.columns to build the code for you, but then we are back in the land of dynamic SQL (a sketch of that follows the example below).
declare @Something table
(
    ID int
    , Column1 varchar(10)
    , Column2 varchar(10)
    , Column3 varchar(10)
    , Column4 varchar(10)
    , Column5 varchar(10)
);
insert @Something
values
(1, 'a', 'b', 'c', 'd', 'e')
, (1, 'a', 'XXX', 'c', 'd', 'e')
, (1, 'a', 'b', 'c', 'YYY', 'e')
, (1, 'a', 'b', 'c', 'd', 'e')
;
with MinMax as
(
    select ID
        , MIN(Column1) as Col1Min
        , MAX(Column1) as Col1Max
        , MIN(Column2) as Col2Min
        , MAX(Column2) as Col2Max
        , MIN(Column3) as Col3Min
        , MAX(Column3) as Col3Max
        , MIN(Column4) as Col4Min
        , MAX(Column4) as Col4Max
        , MIN(Column5) as Col5Min
        , MAX(Column5) as Col5Max
    from @Something
    group by ID
)
select s.ID
    , Column1 = case when mm.Col1Max = mm.Col1Min then '' else s.Column1 end
    , Column2 = case when mm.Col2Max = mm.Col2Min then '' else s.Column2 end
    , Column3 = case when mm.Col3Max = mm.Col3Min then '' else s.Column3 end
    , Column4 = case when mm.Col4Max = mm.Col4Min then '' else s.Column4 end
    , Column5 = case when mm.Col5Max = mm.Col5Min then '' else s.Column5 end
from @Something s
join MinMax mm on mm.ID = s.ID
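And a hedged sketch of the sys.columns idea mentioned above: generate the MIN/MAX lines instead of typing a hundred of them by hand. dbo.YourWideTable is a placeholder; this only works for a persisted table, since table variables do not appear in sys.columns:
select '    , MIN(' + QUOTENAME(c.name) + ') as ' + c.name + 'Min'
     + '   , MAX(' + QUOTENAME(c.name) + ') as ' + c.name + 'Max' as generated_line
from sys.columns as c
where c.object_id = OBJECT_ID('dbo.YourWideTable')
  and c.name <> 'ID'          -- skip the grouping column
order by c.column_id;
-- copy the generated lines into a MinMax-style CTE like the one above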
Have you tried using DISTINCT? It returns only unique rows:
select *
from table
where id = '1'
|id | column2 | column4 |
| 1 | a | a |
| 1 | a | a |
| 1 | b | d |
| 1 | b | d |
select distinct * from table where id= '1'
|id | column2 | column4 |
| 1 | a | a |
| 1 | b | d |
I hope this helps you.

Select distinct one field, other first non-empty or null

I have a table:
| Id | val |
| --- | ---- |
| 1 | null |
| 1 | qwe1 |
| 1 | qwe2 |
| 2 | null |
| 2 | qwe4 |
| 3 | qwe5 |
| 4 | qew6 |
| 4 | qwe7 |
| 5 | null |
| 5 | null |
Is there any easy way to select distinct 'id' values together with the first non-null 'val' value, or null if none exists? For example,
the result should be:
| Id | val |
| --- | ---- |
| 1 | qwe1 |
| 2 | qwe4 |
| 3 | qwe5 |
| 4 | qew6 |
| 5 | null |
In your case a simple GROUP BY should be the solution:
SELECT Id
,MIN(val)
FROM dbo.mytable
GROUP BY Id
Whenever you use a GROUP BY, you have to use an aggregate function on all columns that are not listed in the GROUP BY.
If an Id has a value (val) other than NULL, this value will be returned.
If there are just NULLs for the Id, NULL will be returned.
As far as I understood (regarding your comment), this is exactly what you're trying to achieve.
If you always want to have "the first" value <> NULL, you'll need another sort criterion (like a timestamp column) and could solve it with a window function, as sketched below.
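A minimal sketch of that idea, assuming a timestamp column named created_at that does not exist in the question's table:
SELECT Id, val
FROM (
    SELECT Id, val,
           ROW_NUMBER() OVER (PARTITION BY Id
                              ORDER BY CASE WHEN val IS NULL THEN 1 ELSE 0 END,
                                       created_at) AS rn
    FROM dbo.mytable
) AS x
WHERE rn = 1;
-- non-NULL values sort first; among them the oldest created_at wins,
-- and an Id whose values are all NULL still returns its (NULL) row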
If you want the first non-NULL value (where "first" is based on id), then MIN() doesn't quite do it. Window functions do:
select t.*
from (select t.*,
row_number() over (partition by id
order by (case when val is not null then 1 else 2 end),
id
) as seqnum
from t
) t
where seqnum = 1;
Create the table (from the SQL Fiddle):
CREATE TABLE tab1(pid integer, id integer, val varchar(25))
Insert dummy records:
insert into tab1
values (1, 1 , null),
(2, 1 , 'qwe1' ),
(3, 1 , 'qwe2'),
(4, 2 , null ),
(5, 2 , 'qwe4' ),
(6, 3 , 'qwe5' ),
(7, 4 , 'qew6' ),
(8, 4 , 'qwe7' ),
(9, 5 , null ),
(10, 5 , null );
Then run the query below:
SELECT Id ,MIN(val) as val FROM tab1 GROUP BY Id;

Count the Number of Consecutive Occurrences of Values in a Table

I have the table below:
create table #t (Id int, Name char)
insert into #t values
(1, 'A'),
(2, 'A'),
(3, 'B'),
(4, 'B'),
(5, 'B'),
(6, 'B'),
(7, 'C'),
(8, 'B'),
(9, 'B')
I want to count consecutive values in the Name column:
+------+------------+
| Name | Repetition |
+------+------------+
| A | 2 |
| B | 4 |
| C | 1 |
| B | 2 |
+------+------------+
The best thing I tried is:
select Name
, COUNT(*) over (partition by Name order by Id) AS Repetition
from #t
order by Id
but it doesn't give me the expected result.
One approach is the difference of row numbers:
select name, count(*) as Repetition
from (select t.*,
             (row_number() over (order by id) -
              row_number() over (partition by name order by id)
             ) as grp
      from #t t
     ) t
group by grp, name;
The logic is easiest to understand if you run the subquery and look at the values of each row number separately and then look at the difference.
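Following that advice, here is the subquery on its own with both row numbers and their difference shown side by side (same #t as in the question):
select t.Id, t.Name,
       row_number() over (order by Id) as rn_overall,
       row_number() over (partition by Name order by Id) as rn_per_name,
       row_number() over (order by Id)
         - row_number() over (partition by Name order by Id) as grp
from #t t
order by t.Id;
-- rn_overall increases on every row while rn_per_name restarts per Name,
-- so their difference (grp) stays constant within each consecutive run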
You could use windowed functions like LAG and running total:
WITH cte AS (
SELECT Id, Name, grp = SUM(CASE WHEN Name = prev THEN 0 ELSE 1 END) OVER(ORDER BY id)
FROM (SELECT *, prev = LAG(Name) OVER(ORDER BY id) FROM #t) s
)
SELECT name, cnt = COUNT(*)
FROM cte
GROUP BY grp,name
ORDER BY grp;
The CTE returns a group number:
+-----+-------+-----+
| Id | Name | grp |
+-----+-------+-----+
| 1 | A | 1 |
| 2 | A | 1 |
| 3 | B | 2 |
| 4 | B | 2 |
| 5 | B | 2 |
| 6 | B | 2 |
| 7 | C | 3 |
| 8 | B | 4 |
| 9 | B | 4 |
+-----+-------+-----+
And the main query groups rows by the grp column calculated earlier:
+-------+-----+
| name | cnt |
+-------+-----+
| A | 2 |
| B | 4 |
| C | 1 |
| B | 2 |
+-------+-----+
I have used a recursive CTE and minimised the use of ROW_NUMBER, also avoiding COUNT(*).
I think it will perform better, but in the real world it depends on what other filters you apply to reduce the number of rows affected.
If Id has gaps, the extra CTE (CTE2) is used to generate a contiguous id.
;with CTE2 as
(
    -- regenerate a contiguous id in case the real Id values have gaps
    select ROW_NUMBER() over (order by Id) as id, Name, 1 as Repetition, 1 as Marker
    from #t
)
, CTE as
(
    select top 1 cast(id as int) as id, Name, 1 as Repetition, 1 as Marker
    from CTE2
    order by id
    union all
    select cast(a.id as int), a.Name
        , case when a.Name = c.Name then c.Repetition + 1 else 1 end
        , case when a.Name = c.Name then c.Marker else c.Marker + 1 end
    from CTE2 a
    inner join CTE c on a.id = c.id + 1
)
, CTE1 as
(
    select *, ROW_NUMBER() over (partition by Marker order by id desc) as rn
    from CTE c
)
select Name, Repetition
from CTE1
where rn = 1
order by id;
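One practical note, assuming SQL Server's default settings: a recursive CTE stops after 100 recursion levels, so for tables longer than that the final SELECT needs a MAXRECURSION hint, for example:
select Name, Repetition
from CTE1
where rn = 1
order by id
option (maxrecursion 0);   -- 0 removes the 100-level limit; use a sensible cap in production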