SQLite - First Per Group - Composite Order & Opposing Sort Order - sql

I'm looking for options on how to pick the first record per group in SQLite, where the sorting of the group is across a composite key.
Example Table:
Key_1 | Sort1 | Sort2 | Val_1 | Val_2
------+-------+-------+-------+------
  1   |   1   |   3   |   0   |   2
  1   |   1   |   2   |   2   |   4
  1   |   1   |   1   |   4   |   6
  1   |   2   |   2   |   6   |   8
  1   |   2   |   1   |   8   |   1
  2   |   1   |   2   |   0   |   5
  2   |   1   |   1   |   1   |   6
  2   |   2   |   3   |   2   |   7
  2   |   2   |   2   |   3   |   8
  2   |   2   |   1   |   4   |   9
Objective:
- Sort data by Key_1 ASC, Sort1 ASC, Sort2 DESC
- Select first record per unique Key_1
Key_1 | Sort1 | Sort2 | Val_1 | Val_2
------+-------+-------+-------+------
  1   |   1   |   3   |   0   |   2
  2   |   1   |   2   |   0   |   5
Analytic Function Solution...
SELECT
    *
FROM
    (
        SELECT
            *,
            ROW_NUMBER() OVER (PARTITION BY Key_1
                               ORDER BY Sort1,
                                        Sort2 DESC) AS group_ordinal
        FROM
            table
    ) sorted
WHERE
    group_ordinal = 1
Laborious ANSI-92 approach...
SELECT
    table.*
FROM
    table
    INNER JOIN
    (
        SELECT
            table.Key_1, table.Sort1, MAX(table.Sort2) AS Sort2
        FROM
            table
            INNER JOIN
            (
                SELECT
                    Key_1, MIN(Sort1) AS Sort1
                FROM
                    table
                GROUP BY
                    Key_1
            ) first_Sort1
                ON  table.Key_1 = first_Sort1.Key_1
                AND table.Sort1 = first_Sort1.Sort1
        GROUP BY
            table.Key_1, table.Sort1
    ) first_Sort1_last_Sort2
        ON  table.Key_1 = first_Sort1_last_Sort2.Key_1
        AND table.Sort1 = first_Sort1_last_Sort2.Sort1
        AND table.Sort2 = first_Sort1_last_Sort2.Sort2
This involves a lot of nesting and self-joins, which is cumbersome enough when it involves just two sort columns.
My actual case has six sort columns.
I also would like to avoid anything like the following, as it is not (to my knowledge) guaranteed / deterministic...
SELECT
    table.*
FROM
    table
GROUP BY
    table.Key_1
ORDER BY
    MIN(table.Sort1),
    MAX(table.Sort2)
Are there any other options that I'm just not seeing?

I believe this will work in SQLite:
select t.*
from table t
where exists (select 1
              from (select t2.*
                    from table t2
                    where t2.Key_1 = t.Key_1
                    order by t2.Sort1 asc, t2.Sort2 desc
                    limit 1
                   ) t2
              where t2.Sort1 = t.Sort1 and t2.Sort2 = t.Sort2
             );
My concern is whether SQLite allows correlated references in nested subqueries. If not, you can just use = and concatenate the values together:
select t.*
from table t
where (Sort1 || ':' || Sort2) =
      (select (Sort1 || ':' || Sort2)
       from table t2
       where t2.Key_1 = t.Key_1
       order by Sort1 asc, Sort2 desc
       limit 1
      );
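Another option, if the table is an ordinary rowid table (not a WITHOUT ROWID table): pick each group's first rowid with a single correlated ORDER BY ... LIMIT 1 subquery, which keeps the whole composite sort in one place. A minimal sketch, assuming the table is named my_table:

-- Select the row whose rowid comes first per Key_1 under the composite
-- ordering (Sort1 ASC, Sort2 DESC).
SELECT t.*
FROM my_table t
WHERE t.rowid = (SELECT t2.rowid
                 FROM my_table t2
                 WHERE t2.Key_1 = t.Key_1
                 ORDER BY t2.Sort1 ASC, t2.Sort2 DESC
                 LIMIT 1);

Additional sort columns only add terms to the inner ORDER BY, and an index on (Key_1, Sort1, Sort2 DESC) can let SQLite answer the subquery with a short seek per outer row.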

Related

Grouping data using PostgreSQL based on 2 fields

I have a problem with grouping data in PostgreSQL. Let's say that I have a table called my_table:
some_id | description    | other_id
--------+----------------+---------
   1    | description-1  |    a
   1    | description-2  |    b
   2    | description-3  |    a
   2    | description-4  |    a
   3    | description-5  |    a
   3    | description-6  |    b
   3    | description-7  |    b
   4    | description-8  |    a
   4    | description-9  |    a
   4    | description-10 |    a
...
I would like to group the data by some_id and then differentiate which groups have the same other_id and which have different other_id values.
I am expecting two result sets: one with the groups where every row has the same other_id, and one with the groups where the other_id values differ.
Expected result
some_id | description    | other_id
--------+----------------+---------
   2    | description-3  |    a
   2    | description-4  |    a
   4    | description-8  |    a
   4    | description-9  |    a
   4    | description-10 |    a
AND
some_id | description    | other_id
--------+----------------+---------
   1    | description-1  |    a
   1    | description-2  |    b
   3    | description-5  |    a
   3    | description-6  |    b
   3    | description-7  |    b
I am open to suggestions using either Sequelize or a raw query.
Thank you.
One approach, using MIN and MAX as analytic functions:
WITH cte AS (
    SELECT *,
           MIN(other_id) OVER (PARTITION BY some_id) min_other_id,
           MAX(other_id) OVER (PARTITION BY some_id) max_other_id
    FROM yourTable
)
-- groups where every other_id is the same
SELECT some_id, description, other_id
FROM cte
WHERE min_other_id = max_other_id;

-- groups where the other_id values differ
-- (repeat the CTE, since a CTE is scoped to a single statement)
WITH cte AS (
    SELECT *,
           MIN(other_id) OVER (PARTITION BY some_id) min_other_id,
           MAX(other_id) OVER (PARTITION BY some_id) max_other_id
    FROM yourTable
)
SELECT some_id, description, other_id
FROM cte
WHERE min_other_id <> max_other_id;
You can also do this using exists and not exists:
-- all same
select t.*
from my_table t
where not exists (select 1
                  from my_table t2
                  where t2.some_id = t.some_id and t2.other_id <> t.other_id
                 );

-- any different
select t.*
from my_table t
where exists (select 1
              from my_table t2
              where t2.some_id = t.some_id and t2.other_id <> t.other_id
             );
Note that this ignores NULL values. If you want them treated as a "different" value then use is distinct from rather than <>.
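For reference, a minimal sketch of that NULL-aware variant in PostgreSQL (same table name as above):

-- "all same" query with NULL treated as a value of its own:
-- IS DISTINCT FROM is true when exactly one side is NULL.
SELECT t.*
FROM my_table t
WHERE NOT EXISTS (SELECT 1
                  FROM my_table t2
                  WHERE t2.some_id = t.some_id
                    AND t2.other_id IS DISTINCT FROM t.other_id);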

Get some values from the table by selecting

I have a table:
| id | Number | Address |
|----|--------|---------|
| 1  | 0      | NULL    |
| 1  | 1      | NULL    |
| 1  | 2      | 50      |
| 1  | 3      | NULL    |
| 2  | 0      | 10      |
| 3  | 1      | 30      |
| 3  | 2      | 20      |
| 3  | 3      | 20      |
| 4  | 0      | 75      |
| 4  | 1      | 22      |
| 4  | 2      | 30      |
| 5  | 0      | NULL    |
I need to get: the NUMBER of the last ADDRESS change for each ID.
I wrote this select:
select dh.id, dh.number from table dh where dh.number =
    (select max(min(t.number)) from table t where t.id = dh.id group by t.address)
But this select does not correctly handle the case where the address first changed and then changed back to a previous value. For example, for id = 1 the GROUP BY returns:
| Number |
| -------- |
| NULL |
| 50 |
I have been thinking about this select for several days, and I will be happy to receive any help.
You can do this using row_number() -- twice:
select t.id, min(number)
from (select t.*,
             row_number() over (partition by id order by number desc) as seqnum1,
             row_number() over (partition by id, address order by number desc) as seqnum2
      from t
     ) t
where seqnum1 = seqnum2
group by id;
What this does is enumerate the rows by number in descending order:
Once per id.
Once per id and address.
The two values agree exactly on the rows of the most recent address run within each id; aggregation then pulls back the earliest row of that run, i.e. the number at which the last change happened.
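If it helps to see the intermediate step, a small sketch that just exposes the two enumerations (table name t as in the answer) before the seqnum1 = seqnum2 filter:

-- Peek at the enumeration: the rows where seqnum1 = seqnum2 form the
-- most recent address run within each id.
select t.id, t.number, t.address,
       row_number() over (partition by id order by number desc) as seqnum1,
       row_number() over (partition by id, address order by number desc) as seqnum2
from t
order by t.id, t.number desc;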
I answered my question myself; in case anyone needs it, here is my solution:
select * from table dh1 where dh1.number = (
    select max(x.number)
    from (
        select
            dh2.id, dh2.number, dh2.address,
            lag(dh2.address) over (order by dh2.number asc) as prev
        from table dh2 where dh1.id = dh2.id
    ) x
    where NVL(x.address, 0) <> NVL(x.prev, 0)
);
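An equivalent way to express that idea without correlating into the inline view is to partition the LAG directly and then aggregate. A minimal sketch, keeping the question's column names and the 0 sentinel for NULL (your_table stands in for the real table name; use COALESCE instead of NVL outside Oracle):

-- Flag rows whose address differs from the previous row of the same id,
-- then keep the highest such number per id.
SELECT id, MAX(number) AS last_change_number
FROM (
    SELECT id, number, address,
           LAG(address) OVER (PARTITION BY id ORDER BY number) AS prev
    FROM your_table
) x
WHERE NVL(x.address, 0) <> NVL(x.prev, 0)
GROUP BY id;

As with the original query, ids whose address never changes away from NULL (like id = 5 in the sample) simply drop out.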

How to get count from one table which is mutually dependent on another table

I have two tables.
Let's name the first table QC_Meeting_Master and the second table QC_Project_Master. I want to calculate the count of Problems_ID, which depends on the second table.
QC_Meeting_Master:
ID | QC_ID | Problems_ID
---+-------+------------
 1 |   1   |      2
 2 |   1   |      7

QC_Project_Master:
ID | QC_ID | Problem_ID
---+-------+-----------
 1 |   1   |     7
 2 |   1   |     7
 3 |   1   |     7
 4 |   1   |     7
 5 |   1   |     2
 6 |   1   |     2
 7 |   1   |     2
select COUNT(Problem_ID) from [QC_Project_Master] where Problem_ID in
(select Problems_ID from QC_Meeting_Master QMM join QC_Project_Master QPM on QMM.Problems_ID = QPM.Problem_ID)
I have to calculate the count of QC_Project_Master.Problem_ID on the basis of QC_Meeting_Master.Problems_ID.
That means for Problems_ID = 2 in QC_Meeting_Master the count should be 3,
and for Problems_ID = 7 the count should be 4.
Use conditional aggregation:
select sum(case when t2.Problem_ID = 2 then 1 else 0 end),
       sum(case when t2.Problem_ID = 7 then 1 else 0 end)
from table1 t1
join table2 t2 on t1.QC_ID = t2.QC_ID and t1.Problems_ID = t2.Problem_ID
If you need all the group counts, then use the below:
select t2.QC_ID, t2.Problem_ID, count(*)
from table1 t1
join table2 t2
  on t1.QC_ID = t2.QC_ID and t1.Problems_ID = t2.Problem_ID
group by t2.QC_ID, t2.Problem_ID
As far as I understood your problem, this is simple aggregation and a JOIN, as below:
SELECT mm.QC_ID, mm.Problems_ID, pm.cnt
FROM QC_Meeting_Master mm
INNER JOIN
(
    SELECT QC_ID, Problem_ID, COUNT(*) cnt
    FROM QC_Project_Master
    GROUP BY QC_ID, Problem_ID
) pm
    ON pm.QC_ID = mm.QC_ID AND pm.Problem_ID = mm.Problems_ID;
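If the QC_ID match is not actually needed, a hedged simpler reading of the requirement is a join on the problem id alone (table and column names taken from the question):

-- For each Problems_ID in the meeting table, count the matching rows in
-- the project table (gives 3 for Problems_ID 2 and 4 for Problems_ID 7).
SELECT mm.Problems_ID, COUNT(pm.Problem_ID) AS cnt
FROM QC_Meeting_Master mm
LEFT JOIN QC_Project_Master pm
       ON pm.Problem_ID = mm.Problems_ID
GROUP BY mm.Problems_ID;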

Efficient ROW_NUMBER increment when column matches value

I'm trying to find an efficient way to derive the column Expected below from only Id and State. What I want is for the number Expected to increase each time State is 0 (ordered by Id).
+----+-------+----------+
| Id | State | Expected |
+----+-------+----------+
| 1 | 0 | 1 |
| 2 | 1 | 1 |
| 3 | 0 | 2 |
| 4 | 1 | 2 |
| 5 | 4 | 2 |
| 6 | 2 | 2 |
| 7 | 3 | 2 |
| 8 | 0 | 3 |
| 9 | 5 | 3 |
| 10 | 3 | 3 |
| 11 | 1 | 3 |
+----+-------+----------+
I have managed to accomplish this with the following SQL, but the execution time is very poor when the data set is large:
WITH Groups AS
(
SELECT Id, ROW_NUMBER() OVER (ORDER BY Id) AS GroupId FROM tblState WHERE State=0
)
SELECT S.Id, S.[State], S.Expected, G.GroupId FROM tblState S
OUTER APPLY (SELECT TOP 1 GroupId FROM Groups WHERE Groups.Id <= S.Id ORDER BY Id DESC) G
Is there a simpler and more efficient way to produce this result? (In SQL Server 2012 or later)
Just use a cumulative sum:
select s.*,
sum(case when state = 0 then 1 else 0 end) over (order by id) as expected
from tblState s;
Another method uses a correlated subquery:
select *,
       (select count(*)
        from table t1
        where t1.id <= t.id and t1.state = 0
       ) as expected
from table t;
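A quick way to sanity-check either query is to build the sample rows from the question (table and column names taken from the question) and compare the computed column against Expected:

-- Recreate the sample data and compute Expected with the cumulative sum.
CREATE TABLE tblState (Id int PRIMARY KEY, State int);
INSERT INTO tblState (Id, State) VALUES
    (1,0),(2,1),(3,0),(4,1),(5,4),(6,2),(7,3),(8,0),(9,5),(10,3),(11,1);

SELECT s.Id, s.State,
       SUM(CASE WHEN s.State = 0 THEN 1 ELSE 0 END) OVER (ORDER BY s.Id) AS Expected
FROM tblState s
ORDER BY s.Id;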

T-SQL: Best way to replace NULL with most recent non-null value?

Assume I have this table:
+----+-------+
| id | value |
+----+-------+
| 1 | 5 |
| 2 | 4 |
| 3 | 1 |
| 4 | NULL |
| 5 | NULL |
| 6 | 14 |
| 7 | NULL |
| 8 | 0 |
| 9 | 3 |
| 10 | NULL |
+----+-------+
I want to write a query that will replace any NULL value with the last value in the table that was not null in that column.
I want this result:
+----+-------+
| id | value |
+----+-------+
| 1 | 5 |
| 2 | 4 |
| 3 | 1 |
| 4 | 1 |
| 5 | 1 |
| 6 | 14 |
| 7 | 14 |
| 8 | 0 |
| 9 | 3 |
| 10 | 3 |
+----+-------+
If no previous value existed, then NULL is OK. Ideally, this should be able to work even with an ORDER BY. So for example, if I ORDER BY [id] DESC:
+----+-------+
| id | value |
+----+-------+
| 10 | NULL |
| 9 | 3 |
| 8 | 0 |
| 7 | 0 |
| 6 | 14 |
| 5 | 14 |
| 4 | 14 |
| 3 | 1 |
| 2 | 4 |
| 1 | 5 |
+----+-------+
Or even better if I ORDER BY [value] DESC:
+----+-------+
| id | value |
+----+-------+
| 6 | 14 |
| 1 | 5 |
| 2 | 4 |
| 9 | 3 |
| 3 | 1 |
| 8 | 0 |
| 4 | 0 |
| 5 | 0 |
| 7 | 0 |
| 10 | 0 |
+----+-------+
I think this might involve some kind of analytic function - somehow partitioning over the value column - but I'm not sure where to look.
You can use a running sum to set groups and use max to fill in the null values.
select id, max(value) over (partition by grp) as value
from (select id, value,
             sum(case when value is not null then 1 else 0 end) over (order by id) as grp
      from tbl
     ) t
Change the ordering in the over() clauses (for example to order by id desc or order by value desc) to get the other results in the question.
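For instance, a hedged sketch of the order by value desc variant (same assumed table name tbl); note that in SQL Server NULLs sort last under DESC, so they land in the group of the smallest non-NULL value:

-- Groups follow the value-descending order; the fill values match the
-- question's third table (add an ORDER BY for presentation).
select id,
       max(value) over (partition by grp) as value
from (select id, value,
             sum(case when value is not null then 1 else 0 end)
                 over (order by value desc) as grp
      from tbl
     ) t;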
The best way has been covered by Itzik Ben-Gan here: The Last non NULL Puzzle.
Below is a solution which, for 10 million rows, completes in around 20 seconds on my system:
SELECT
    id,
    value1,
    CAST(
        SUBSTRING(
            MAX(CAST(id AS binary(4)) + CAST(value1 AS binary(4)))
                OVER (ORDER BY id
                      ROWS UNBOUNDED PRECEDING),
            5, 4)
        AS int) AS lastval
FROM dbo.T1;
This solution assumes your id column is indexed.
You can also try using a correlated subquery:
select id,
       case when value is not null then value
            else (select top 1 value from table
                  where id < t.id and value is not null
                  order by id desc)
       end value
from table t
Result:
id value
1 5
2 4
3 1
4 1
5 1
6 14
7 14
8 0
9 3
10 3
If the NULLs are scattered, I use a WHILE loop to fill them in.
However, if the NULLs come in longer consecutive runs, there are faster ways to do it.
So here's one approach:
First find the records we want to update: rows where the value is NULL but the prior row's value is not NULL.
SELECT C.VALUE, N.ID
FROM TABLE C
INNER JOIN TABLE N
ON C.ID + 1 = N.ID
WHERE C.VALUE IS NOT NULL
AND N.VALUE IS NULL;
Use that to update: (bit hazy on this syntax but you get the idea)
UPDATE N
SET VALUE = C.Value
FROM TABLE C
INNER JOIN TABLE N
ON C.ID + 1 = N.ID
WHERE C.VALUE IS NOT NULL
AND N.VALUE IS NULL;
... now just keep doing it till you run out of rows:
-- This is needed to set @@ROWCOUNT to non-zero
SELECT 1;

WHILE @@ROWCOUNT <> 0
BEGIN
    UPDATE N
    SET VALUE = C.Value
    FROM TABLE C
    INNER JOIN TABLE N
        ON C.ID + 1 = N.ID
    WHERE C.VALUE IS NOT NULL
      AND N.VALUE IS NULL;
END
The other way is to use a similar query to get a range of ids to update. This works much faster if your NULLs usually fall on consecutive ids.
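A hedged sketch of that range-based idea as a single set-based statement (YourTable and its id/value columns are assumed names): for every non-NULL anchor row, compute the range of ids up to the next non-NULL row and fill the whole range at once.

-- Each non-NULL anchor row C fills the NULL rows between C.ID and the
-- next non-NULL id; anchors with no trailing NULLs simply match nothing.
UPDATE T
SET    VALUE = R.FillValue
FROM   YourTable T
JOIN  (SELECT C.ID + 1 AS RangeStart,
              C.VALUE  AS FillValue,
              COALESCE((SELECT MIN(X.ID) - 1
                        FROM   YourTable X
                        WHERE  X.ID > C.ID
                          AND  X.VALUE IS NOT NULL),
                       (SELECT MAX(ID) FROM YourTable)) AS RangeEnd
       FROM   YourTable C
       WHERE  C.VALUE IS NOT NULL) R
    ON T.ID BETWEEN R.RangeStart AND R.RangeEnd
WHERE  T.VALUE IS NULL;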
Here is one simple approach using OUTER APPLY:
CREATE TABLE #table(id INT, value INT)
INSERT INTO #table VALUES
(1,5),
(2,4),
(3,1),
(4,NULL),
(5,NULL),
(6,14),
(7,NULL),
(8,0),
(9,3),
(10,NULL)
SELECT t.id, ISNULL(t.value, t3.value) value
FROM #table t
OUTER APPLY(SELECT id FROM #table WHERE id = t.id AND VALUE IS NULL) t2
OUTER APPLY(SELECT TOP 1 value
            FROM #table
            WHERE id <= t2.id AND VALUE IS NOT NULL
            ORDER BY id DESC) t3
OUTPUT:
id VALUE
---------
1 5
2 4
3 1
4 1
5 1
6 14
7 14
8 0
9 3
10 3
Using this sample data:
if object_id('tempdb..#t1') is not null drop table #t1;
create table #t1 (id int primary key, [value] int null);
insert #t1 values(1,5),(2,4),(3,1),(4,NULL),(5,NULL),(6,14),(7,NULL),(8,0),(9,3),(10,NULL);
I came up with:
with x(id, [value], grouper) as (
    select *,
           row_number() over (order by id) - sum(iif([value] is null, 1, 0)) over (order by id)
    from #t1
)
select id, min([value]) over (partition by grouper)
from x;
I noticed, however, that Vamsi Prabhala beat me to it... My solution is identical to what he posted. (arghhhh!). So I thought I'd try a recursive solution. Here's a pretty efficient use of a recursive cte (provided that ID is indexed):
with sorted as (select *, seqid = row_number() over (order by id) from #t1),
     firstRecord as (select top(1) * from #t1 order by id),
     prev as
     (
         select t.id, t.[value], lastid = 1, lastvalue = null
         from sorted t
         where t.id = 1
         union all
         select t2.id, t2.[value], lastid + 1, isnull(prev.[value], lastvalue)
         from sorted t2
         join prev on t2.id = prev.lastid + 1
     )
select id, [value] = isnull([value], lastvalue) --, *
from prev;
Normally I don't like recursive CTEs (rCTE for short), but in this case it offered an elegant solution and was faster than using the window aggregate functions (sum over, min over...). Comparing the execution plans, the rCTE gets it done with two index seeks, one of which is for just one row. Unlike the window aggregate solution, the rCTE does not require a sort, and running with STATISTICS IO ON shows the rCTE produces much less I/O.
All this said, don't use either of these solutions; what TheGameiswar posted will perform the best by far. His solution on a properly indexed id column would be lightning fast.
The following UPDATE statement can be used; please test it before use:
update #table
set value = newvalue
from (
    select
        s.id, s.value,
        (select top 1 t.value
         from #table t
         where t.id <= s.id and t.value is not null
         order by t.id desc) as newvalue
    from #table s
) u
where #table.id = u.id and #table.value is null
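Since the answer says to test it first, one way is to run the derived table on its own to preview which rows would change and what they would be set to (same temp table #table as above):

-- Preview: NULL rows together with the value they would receive.
select s.id, s.value,
       (select top 1 t.value
        from #table t
        where t.id <= s.id and t.value is not null
        order by t.id desc) as newvalue
from #table s
where s.value is null;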
Stop worrying... here's the answer for you :)
SELECT *
INTO #TempIsNotNull
FROM YourTable
WHERE value IS NOT NULL;

SELECT *
INTO #TempIsNull
FROM YourTable
WHERE value IS NULL;

UPDATE YourTable
SET YourTable.value = UpdateDtls.value
FROM YourTable
JOIN (
    SELECT OuterTab1.id,
           #TempIsNotNull.value
    FROM #TempIsNull OuterTab1
    CROSS JOIN #TempIsNotNull
    WHERE OuterTab1.id - #TempIsNotNull.id > 0
      AND (OuterTab1.id - #TempIsNotNull.id) = (SELECT TOP 1
                                                       OuterTab1.id - #TempIsNotNull.id
                                                FROM #TempIsNull InnerTab
                                                CROSS JOIN #TempIsNotNull
                                                WHERE OuterTab1.id - #TempIsNotNull.id > 0
                                                  AND OuterTab1.id = InnerTab.id
                                                ORDER BY (OuterTab1.id - #TempIsNotNull.id) ASC)
) AS UpdateDtls
    ON (YourTable.id = UpdateDtls.id)