Finding next node in T-SQL

Suppose I have the following table (Date + CustNum is a unique index):
RowId  Date        CustNum
1      1-Jan-2021  0001
2      1-Jan-2021  0002
3      1-Jan-2021  0004
4      2-Jan-2021  0001
5      3-Jan-2021  0001
6      3-Jan-2021  0004
7      7-Jan-2021  0004
The table has ~500K records.
What is the best method to get the previous and next RowId for the same CustNum?
RowId  Date        CustNum  CustPrevRowId  CustNextRowId
1      1-Jan-2021  0001     NULL           4
2      1-Jan-2021  0002     NULL           NULL
3      1-Jan-2021  0004     NULL           6
4      2-Jan-2021  0001     1              5
5      3-Jan-2021  0001     4              NULL
6      3-Jan-2021  0004     3              7
7      7-Jan-2021  0004     6              NULL
I've tried using correlated subqueries, but I ran into a performance issue:
SELECT T1.*,
(SELECT TOP 1 RowID FROM T T2 WHERE T2.CustNum = T1.CustNum AND T2.Date < T1.Date ORDER BY DATE DESC) AS CustPrevRowId,
(SELECT TOP 1 RowID FROM T T2 WHERE T2.CustNum = T1.CustNum AND T2.Date > T1.Date ORDER BY DATE ) AS CustNextRowId
FROM T T1

As already pointed out in the comments, you can use two window functions:
LAG retrieves a value from the previous row in the same partition, given a specified order
LEAD does the same, but gets the value from the following row instead
In this specific case, you want to:
partition by "CustNum" (since you want the previous/next row for each customer number)
order by the date field (so that the previous/next RowId is determined by date)
SELECT *,
       LAG([RowId])  OVER (PARTITION BY [CustNum] ORDER BY [Date]) AS CustPrevRowId,
       LEAD([RowId]) OVER (PARTITION BY [CustNum] ORDER BY [Date]) AS CustNextRowId
FROM tab
ORDER BY RowId
Check the demo here.
Note: the last ORDER BY RowId clause is not necessary.
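A note on the performance issue mentioned in the question: with ~500K rows, an index whose key matches the PARTITION BY and ORDER BY columns lets LAG/LEAD read the rows already sorted instead of sorting at run time. A minimal sketch, assuming the table is named T and the columns are named as shown above (adjust names to your schema):
-- Hypothetical supporting index for the window functions above.
CREATE NONCLUSTERED INDEX IX_T_CustNum_Date
    ON T (CustNum, [Date])
    INCLUDE (RowId);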

Related

Database schema pattern for grouping transactions

I am working on an accounting system in which there is a way to revert transactions that were made by mistake.
There are processes that run on invoices and generate transactions.
One process can generate multiple transactions for an invoice, and multiple processes can be run on the same invoice.
The schema looks like this:
Transactions
=========================================================
Id | InvoiceId | InvoiceProcessType | Amount | CreatedOn
1  | 1         | 23                 | 10.00  | Today
2  | 1         | 23                 | 13.00  | Today
3  | 1         | 23                 | 17.00  | Yesterday
4  | 1         | 23                 | 32.00  | Yesterday
Now 1 and 2 happened together, and 3 and 4 happened together. If I want to revert the latter (3, 4), what would be a possible way to group them?
One possible solution is to add a ProcessCount column that is incremented on every process run.
The new schema would look like this:
Transactions
========================================================================
Id | InvoiceId | InvoiceProcessType | Amount | CreatedOn | ProcessCount
1  | 1         | 23                 | 10.00  | Today     | 1
2  | 1         | 23                 | 13.00  | Today     | 1
3  | 1         | 23                 | 17.00  | Yesterday | 2
4  | 1         | 23                 | 32.00  | Yesterday | 2
Is there any other way I can implement this?
TIA
If you are basing the batching on an arbitrary time frame between the createdon date/time values, then you can use lag() and a cumulative sum. For instance, if two rows belong to the same batch when they are within an hour of each other, then:
select t.*,
       sum(case when prev_createdon > dateadd(hour, -1, createdon) then 0 else 1 end)
           over (partition by invoiceid order by createdon, id) as processcount
from (select t.*,
             lag(createdon) over (partition by invoiceid order by createdon, id) as prev_createdon
      from transactions t
     ) t;
That said, it would seem that your processing needs to be enhanced. Each time the code runs, a row should be inserted into some table (say processes). The id generated from that insertion should be used to insert into transactions. That way, you can keep the information about when -- and who and what and so on -- inserted particular transactions.
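A minimal sketch of that idea; the table and column names below (Processes, ProcessId, RunAt) are assumptions for illustration, not part of the question:
-- Hypothetical batch table: one row per process run.
CREATE TABLE Processes (
    ProcessId int IDENTITY(1,1) PRIMARY KEY,
    InvoiceId int NOT NULL,
    RunAt     datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);

-- Transactions then reference the batch that created them,
-- so reverting a batch is a simple filter on ProcessId.
ALTER TABLE Transactions
    ADD ProcessId int NULL
        CONSTRAINT FK_Transactions_Processes REFERENCES Processes (ProcessId);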
You can use dense_rank to identify the batches, as follows:
select t.*,
dense_rank() over (partition by InvoiceId
order by CreatedOn desc) as ProcessCount
from your_table t
You can then revert (or delete) as per your requirement. There is no need to explicitly maintain a ProcessCount column; it can be derived with the above query.
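For example, to revert the most recent batch for each invoice (ProcessCount = 1 under the descending order above), one sketch is to delete through a CTE; the table name Transactions is assumed here:
with ranked as (
    select *,
           dense_rank() over (partition by InvoiceId order by CreatedOn desc) as ProcessCount
    from Transactions
)
delete from ranked
where ProcessCount = 1;  -- rows created by the latest process run per invoice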

Generate sequence based on the value in the previous row and current row

I have the table below containing student information.
S_ID Group_ID Date Score
12345 1 1/1/2015 1
12345 1 2/1/2015 2
12345 1 3/1/2015 4
12345 1 4/1/2015 5
12345 1 9/1/2015 3
12345 1 10/1/2015 8
12345 2 1/1/2015 2
12345 2 2/1/2015 4
12345 2 3/1/2015 6
I want to generate a new table for a few students after adding a Sequence column, as shown below:
S_ID Group_ID Date Score Sequence
12345 1 1/1/2015 1 1
12345 1 2/1/2015 2 2
12345 1 3/1/2015 4 3
12345 1 4/1/2015 5 4
12345 1 9/1/2015 3 3
12345 1 10/1/2015 8 4
12345 2 1/1/2015 2 2
12345 2 2/1/2015 4 3
12345 2 3/1/2015 6 4
Rules:
Sequence should be generated for each combination of S_ID and Group_ID.
For the first record, the sequence number is the same as the Score.
From the 2nd record onwards, it is 1 + the previous sequence number.
If the difference between the previous row's date and the current row's date is more than 100 days, the sequence number restarts (it becomes the same as the Score for that record again).
This is a large table and I am looking for the most optimized SQL. Any help would be greatly appreciated.
The trick here is to find where the sequence numbers start over: at a new student, a new group, or when the gap from the previous date is too big. For the latter, you can use lag() to calculate a "new streak starts here" flag and then take a cumulative sum of that flag to get a grouping.
select t.*,
(first_value(score) over (partition by s_id, group_id, grp order by date) +
row_number() over (partition by s_id, group_id, grp order by date) - 1
) as sequence
from (select t.*,
sum(case when prev_date is null or prev_date < date - 100
then 1 else 0
end) over (partition by s_id, group_id order by date) as grp
from (select t.*,
lag(date) over (partition by s_id, group_id order by date) as prev_date
from t
) t
) t;
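To see how this plays out, here is a hand trace of the inner steps for the sample group (12345, 1); "flag" is the case expression before it is summed:
Date        prev_date   flag   grp
1/1/2015    NULL        1      1
2/1/2015    1/1/2015    0      1
3/1/2015    2/1/2015    0      1
4/1/2015    3/1/2015    0      1
9/1/2015    4/1/2015    1      2
10/1/2015   9/1/2015    0      2
The gap from 4/1/2015 to 9/1/2015 is roughly 153 days, so a new group starts there. Within grp 1 the sequence then runs 1, 2, 3, 4 (starting from the first Score, 1); within grp 2 it runs 3, 4 (starting from Score 3), matching the expected output.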

SQL query to group by data but with order by clause

I have a table booking containing the following data:
GUEST_NO HOTEL_NO DATE_FROM DATE_TO ROOM_NO
1 1 2015-05-07 2015-05-08 103
1 1 2015-05-11 2015-05-12 104
1 1 2015-05-14 2015-05-15 103
1 1 2015-05-17 2015-05-20 101
2 2 2015-05-01 2015-05-02 204
2 2 2015-05-04 2015-05-05 203
2 2 2015-05-17 2015-05-22 202
What I want is the following result.
1) It should show Guest_no, Hotel_no, Room_no, and a Count column giving the number of times the combination of the previous three columns is repeated.
So the output should look like:
GUEST_NO HOTEL_NO ROOM_NO Count
1 1 103 2
1 1 104 1
1 1 101 1
2 2 204 1
etc. But I want the result ordered, e.g. the output should be ordered by bk.date_to desc.
My query is below; it shows the count, but when I add the order by it doesn't work as I want:
select bk.guest_no, bk.hotel_no, bk.room_no,
count(bk.guest_no+bk.hotel_no+bk.room_no) as noOfTimesRoomBooked
from booking bk
group by bk.guest_no, bk.hotel_no, bk.room_no, bk.date_to
order by bk.date_to desc
With the order by added, the result is different, because ordering by the date_to column means I also have to add that column to the group by clause, which ends up producing a different result, as below:
GUEST_NO HOTEL_NO ROOM_NO Count
1 1 103 1
1 1 104 1
1 1 103 1
1 1 101 1
2 2 204 1
Which is not the output I want.
I want those four columns, ordered by date_to descending, with Count being the number of repetitions of the first three columns.
I think a good way to do this would be grouping by guest_no, hotel_no and room_no, and sorting by the maximum (i.e. most recent) booking date in each group.
SELECT
guest_no,
hotel_no,
room_no,
COUNT(1) AS BookingCount
FROM
booking
GROUP BY
guest_no,
hotel_no,
room_no
ORDER BY
MAX(date_to) DESC;
Maybe this is what you're looking for?
select
guest_no,
hotel_no,
room_no,
count(*) as Count
from
booking
group by
guest_no,
hotel_no,
room_no
order by
min(date_to) desc
Or maybe max() instead of min(). SQL Fiddle: http://sqlfiddle.com/#!6/e684c/3
You could try this.
select t.* from
(
select bk.guest_no, bk.hotel_no, bk.room_no, bk.date_to,
count(*) as noOfTimesBooked from booking bk
group by bk.guest_no, bk.hotel_no, bk.room_no, bk.date_to
) t
order by t.date_to
You will also have to select date_to and then group the result by it.
If you use a 'group by' clause, SQL Server only allows you to 'order by' grouped columns or aggregates. So you can make a subquery and use 'order by' in the outer query.
SELECT * FROM
    (SELECT bk.guest_no, bk.hotel_no, bk.room_no,
            COUNT(bk.guest_no + bk.hotel_no + bk.room_no) AS noOfTimesRoomBooked,
            (SELECT MAX(date_to) FROM booking CK
             WHERE CK.guest_no = bk.guest_no
               AND CK.hotel_no = bk.hotel_no
               AND CK.room_no  = bk.room_no) AS DATEBOOK
     FROM booking bk
     GROUP BY bk.guest_no, bk.hotel_no, bk.room_no) A
ORDER BY DATEBOOK DESC
It might help you.

How to add a running count to rows in a 'streak' of consecutive days

Thanks to Mike for the suggestion to add the create/insert statements.
create table test (
pid integer not null,
date date not null,
primary key (pid, date)
);
insert into test values
(1,'2014-10-1')
, (1,'2014-10-2')
, (1,'2014-10-3')
, (1,'2014-10-5')
, (1,'2014-10-7')
, (2,'2014-10-1')
, (2,'2014-10-2')
, (2,'2014-10-3')
, (2,'2014-10-5')
, (2,'2014-10-7');
I want to add a new column that is 'days in current streak'
so the result would look like:
pid | date | in_streak
-------|-----------|----------
1 | 2014-10-1 | 1
1 | 2014-10-2 | 2
1 | 2014-10-3 | 3
1 | 2014-10-5 | 1
1 | 2014-10-7 | 1
2 | 2014-10-2 | 1
2 | 2014-10-3 | 2
2 | 2014-10-4 | 3
2 | 2014-10-6 | 1
I've been trying to use the answers from
PostgreSQL: find number of consecutive days up until now
Return rows of the latest 'streak' of data
but I can't work out how to use the dense_rank() trick with other window functions to get the right result.
Building on this table (not using the SQL keyword "date" as a column name):
CREATE TABLE tbl(
pid int
, the_date date
, PRIMARY KEY (pid, the_date)
);
Query:
SELECT pid, the_date
, row_number() OVER (PARTITION BY pid, grp ORDER BY the_date) AS in_streak
FROM (
SELECT *
, the_date - '2000-01-01'::date
- row_number() OVER (PARTITION BY pid ORDER BY the_date) AS grp
FROM tbl
) sub
ORDER BY pid, the_date;
Subtracting a date from another date yields an integer. Since you are looking for consecutive days, every next row would be greater by one. If we subtract row_number() from that, the whole streak ends up in the same group (grp) per pid. Then it's simple to deal out numbers per group.
grp is calculated with two subtractions, which should be fastest. An equally fast alternative could be:
the_date - row_number() OVER (PARTITION BY pid ORDER BY the_date) * interval '1d' AS grp
One multiplication, one subtraction. String concatenation and casting is more expensive. Test with EXPLAIN ANALYZE.
Don't forget to partition by pid additionally in both steps, or you'll inadvertently mix groups that should be separated.
Using a subquery, since that is typically faster than a CTE. There is nothing here that a plain subquery couldn't do.
And since you mentioned it: dense_rank() is obviously not necessary here. Basic row_number() does the job.
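To see why every row of a streak lands in the same grp, here is a hand trace for pid 1 of the sample data (day numbers counted from 2000-01-01, as in the query above):
the_date     days since 2000-01-01   row_number   grp
2014-10-01   5387                    1            5386
2014-10-02   5388                    2            5386
2014-10-03   5389                    3            5386
2014-10-05   5391                    4            5387
2014-10-07   5393                    5            5388
The three consecutive dates collapse onto grp 5386, and each break in the streak pushes grp to a new value.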
You'll get more attention if you include CREATE TABLE statements and INSERT statements in your question.
create table test (
pid integer not null,
date date not null,
primary key (pid, date)
);
insert into test values
(1,'2014-10-1'), (1,'2014-10-2'), (1,'2014-10-3'), (1,'2014-10-5'),
(1,'2014-10-7'), (2,'2014-10-1'), (2,'2014-10-2'), (2,'2014-10-3'),
(2,'2014-10-5'), (2,'2014-10-7');
The principle is simple. A streak of distinct, consecutive dates minus row_number() is a constant. You can group by the constant, and take the dense_rank() over that result.
with grouped_dates as (
select pid, date,
(date - (row_number() over (partition by pid order by date) || ' days')::interval)::date as grouping_date
from test
)
select * , dense_rank() over (partition by grouping_date order by date) as in_streak
from grouped_dates
order by pid, date
pid  date        grouping_date  in_streak
---  ----------  -------------  ---------
1    2014-10-01  2014-09-30     1
1    2014-10-02  2014-09-30     2
1    2014-10-03  2014-09-30     3
1    2014-10-05  2014-10-01     1
1    2014-10-07  2014-10-02     1
2    2014-10-01  2014-09-30     1
2    2014-10-02  2014-09-30     2
2    2014-10-03  2014-09-30     3
2    2014-10-05  2014-10-01     1
2    2014-10-07  2014-10-02     1

Sql query for start and end of a column

I am trying to write a query to get the start and end serials for a particular sale.
Serial Sale_id
0001 1
0002 1
0003 1
0004 2
0005 2
0006 1
0007 1
0008 1
I am looking for a result like
Quantity Start End
3 0001 0003
3 0006 0008
Can someone please help me get this one right?
This query (adapted from the book SQL MVP Deep Dives) should give you what you want:
SELECT
    Sale_id, -- optional, include if you want the Sale_id
    -- if Serial is a (VAR)CHAR type use this expression
    Quantity = MAX(CAST(Serial AS INT)) - MIN(CAST(Serial AS INT)) + 1,
    -- if Serial is INT already, use this one instead
    --Quantity = MAX(Serial) - MIN(Serial) + 1,
    [Start] = MIN(Serial),
    [End] = MAX(Serial)
FROM (
    SELECT
        Sale_id,
        Serial,
        RowNumber =
            Serial - ROW_NUMBER() OVER (PARTITION BY Sale_id ORDER BY Serial)
    FROM YourTable
) a
--WHERE Sale_id = 1 -- optional limiting WHERE clause
--WHERE Sale_id IN (1,2) -- optional limiting WHERE clause
GROUP BY Sale_id, RowNumber
ORDER BY [Start];
I made the assumption that Serial is stored as a character type and included a cast to INT. If it is a numeric type already, switch to the commented-out line without the casts for the Quantity.
I chose to include the Sale_id column in the output; if you don't want it, just comment it out. If you want to limit the results to one or more particular Sale_ids, include one of the commented WHERE clauses.
Sample SQL Fiddle
Sample output:
Sale_id Quantity Start End
----------- ----------- ---------- ----------
1 3 0001 0003
2 2 0004 0005
1 3 0006 0008
(3 row(s) affected)
You can do this by counting the number of non-similar sales ids before each one. This is a constant for sequences where the sales ids are the same. Then use this for grouping:
select count(*), min(serial_no), max(serial_no)
from (select t.*,
             (select count(*)
              from your_table t2
              where t2.serial_no <= t.serial_no and
                    t2.sale_id <> t.sale_id
             ) as grp
      from your_table t
     ) t
group by grp;
If you only care about sale_id = 1, then put a where clause in the middle subquery.
Here is a SQL Fiddle.
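For example, a sketch of that filter (same placeholder table name as above; the output aliases are only illustrative):
-- Limit the islands to a single sale before grouping them.
select count(*) as quantity, min(serial_no) as start_serial, max(serial_no) as end_serial
from (select t.*,
             (select count(*)
              from your_table t2
              where t2.serial_no <= t.serial_no and
                    t2.sale_id <> t.sale_id
             ) as grp
      from your_table t
      where t.sale_id = 1
     ) t
group by grp;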