How to pick the latest row - SQL

I have two tables, namely Price List (Table A) and Order Record (Table B):

Table A
SKU   OfferDate   Amt
AAA   20120115    22
AAA   20120223    24
AAA   20120331    25
AAA   20120520    28

Table B
Customer   SKU   OrderDate
A001       AAA   20120201
B001       AAA   20120410
C001       AAA   20120531

I have to retrieve the latest pricing for each customer. The expected output should look like this:

Customer   SKU   OrderDate   Amt
A001       AAA   20120201    28
B001       AAA   20120410    28
C001       AAA   20120531    28
Thanks.

Here is T-SQL. I'm not sure what you are running - add that as a tag to your question for better answers. I wrote this before the edit of the OP, so double-check the columns.
EDITED per x-zeros' comment
SELECT B.CUSTOMER, S.SKU, B.ORDERDATE, S.Amt
FROM TABLE_B B
INNER JOIN
    (SELECT C.SKU, C.OFFERDATE, C.Amt,
            ROW_NUMBER() OVER (PARTITION BY C.SKU ORDER BY C.OFFERDATE DESC) X
     FROM TABLE_A C
    ) S ON S.X = 1 AND B.SKU = S.SKU
ORDER BY B.CUSTOMER
CREATE TABLE TABLE_A
(SKU varchar(8), OfferDate Date, Amt int)
INSERT INTO TABLE_A
VALUES ('AAA', '2012-01-15', 22),
       ('AAA', '2012-02-23', 24),
       ('AAA', '2012-03-31', 25),
       ('AAA', '2012-05-20', 28),
       ('BBB', '2011-01-15', 33),
       ('BBB', '2011-02-23', 35),
       ('BBB', '2011-03-31', 36),
       ('BBB', '2011-05-20', 39),
       ('CCC', '2012-01-15', 43),
       ('CCC', '2012-02-23', 45),
       ('CCC', '2012-03-31', 47),
       ('CCC', '2012-04-18', 44)
CREATE TABLE TABLE_B
(CUSTOMER varchar(8),SKU varchar(8), OrderDate Date)
INSERT INTO TABLE_B
VALUES('A001','AAA','2012-02-01'),
('B001','AAA','2012-04-10'),
('C001','AAA','2012-05-31'),
('A001','BBB','2011-02-01'),
('B001','BBB','2011-04-10'),
('C001','BBB','2011-05-31'),
('B001','CCC','2011-04-10'),
('C001','CCC','2011-05-31')
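For reference, the same "latest offer per SKU" filter can also be written with TOP (1) WITH TIES instead of filtering the ROW_NUMBER in the join - just a sketch against the same tables, equivalent to the query above:

SELECT B.CUSTOMER, S.SKU, B.ORDERDATE, S.Amt
FROM TABLE_B B
INNER JOIN
    (SELECT TOP (1) WITH TIES C.SKU, C.OFFERDATE, C.Amt
     FROM TABLE_A C
     ORDER BY ROW_NUMBER() OVER (PARTITION BY C.SKU ORDER BY C.OFFERDATE DESC)
    ) S ON B.SKU = S.SKU
ORDER BY B.CUSTOMER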


Display Average Billing Amount For Each Customer only between years 2019-2021

QUESTION: Display the average billing amount for each customer, only for the years 2019-2021.
If a customer doesn't have any billing amount for one of those years, then count it as 0.
OUTPUT:
Customer_ID | Customer_Name | AVG_Billed_Amount
------------|---------------|------------------
1           | A             | 87.00
2           | B             | 200.00
3           | C             | 183.00

EXPLANATION:
If a customer doesn't have any billing record for one of these 3 years, we need to count it as one record with billing_amount = 0.
For example, Customer C doesn't have any record for year 2020, so for C the average will be
(250 + 300 + 0) / 3 = 183.33, or 183.00.
The temp table has the following data:
DROP TABLE IF EXISTS #TEMP;
CREATE TABLE #TEMP
(
Customer_ID INT
, Customer_Name NVARCHAR(100)
, Billing_ID NVARCHAR(100)
, Billing_creation_Date DATETIME
, Billed_Amount INT
);
INSERT INTO #TEMP
SELECT 1, 'A', 'ID1', TRY_CAST('10-10-2020' AS DATETIME), 100 UNION ALL
SELECT 1, 'A', 'ID2', TRY_CAST('11-11-2020' AS DATETIME), 150 UNION ALL
SELECT 1, 'A', 'ID3', TRY_CAST('12-11-2021' AS DATETIME), 100 UNION ALL
SELECT 2, 'B', 'ID4', TRY_CAST('10-11-2019' AS DATETIME), 150 UNION ALL
SELECT 2, 'B', 'ID5', TRY_CAST('11-11-2020' AS DATETIME), 200 UNION ALL
SELECT 2, 'B', 'ID6', TRY_CAST('12-11-2021' AS DATETIME), 250 UNION ALL
SELECT 3, 'C', 'ID7', TRY_CAST('01-01-2018' AS DATETIME), 100 UNION ALL
SELECT 3, 'C', 'ID8', TRY_CAST('05-01-2019' AS DATETIME), 250 UNION ALL
SELECT 3, 'C', 'ID9', TRY_CAST('06-01-2021' AS DATETIME), 300
-----------------------------------------------------------------------------------
Here, 'A' has 3 transactions - two in year 2020 (100 + 150) and one in year 2021 (100), but none in 2019 (so Billed_Amount = 0).
So the average will be calculated as (100 + 150 + 100 + 0) / 4.
DECLARE @BILL_DATE DATE = (SELECT Billing_creation_date FROM #temp GROUP BY customer_id, Billing_creation_date) /* -- this throws an error, as @BILL_DATE won't accept multiple values */
OUTPUT should look like this:

Customer_ID | Customer_Name | AVG_Billed_Amount
1           | A             | 87.00
2           | B             | 200.00
3           | C             | 183.00
You just need a formula to count the number of missing years.
That's 3 - COUNT(DISTINCT YEAR(Billing_creation_Date)).
Then the average = SUM() / (COUNT() + (3 - COUNT(DISTINCT YEAR))) ...
SELECT
    Customer_ID,
    Customer_Name,
    SUM(Billed_Amount) * 1.0
        / (COUNT(*) + 3 - COUNT(DISTINCT YEAR(Billing_creation_Date)))
        AS AVG_Billed_amount
FROM
    #temp
WHERE
    Billing_creation_Date >= '2019-01-01'
    AND Billing_creation_Date < '2022-01-01'
GROUP BY
    Customer_ID,
    Customer_Name
Demo : https://dbfiddle.uk/ILcfiGWL
Note: The WHERE clause in another answer here would cause a scan of the table, due to hiding the filtered column behind a function. The way I've formed the WHERE clause allows a "Range Seek" if the column is in an index.
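To make the note concrete, here is the contrast being described, as a sketch (assuming an index that covers Billing_creation_Date):

-- Hides the filtered column behind a function: the index can only be scanned
WHERE YEAR(Billing_creation_Date) BETWEEN 2019 AND 2021

-- Bare column compared against constants: a range seek is possible
WHERE Billing_creation_Date >= '2019-01-01'
  AND Billing_creation_Date < '2022-01-01'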
Here is a query that can do that:
select s.Customer_ID, s.Customer_Name, sum(s.Billed_amount) * 1.0 / (6 - count(1)) as AVG_Billed_Amount
from (
    select Customer_ID, Customer_Name, sum(Billed_Amount) as Billed_amount
    from #TEMP
    where year(Billing_creation_Date) between 2019 and 2021
    group by Customer_ID, Customer_Name, year(Billing_creation_Date)
) as s
group by s.Customer_ID, s.Customer_Name;
According to your description, customer C will be 137.5000, not 183.00, since 2018 is not counted and 2020 is not there.

Fetch conditional rows in SQL Server

I need a query like the one below. The ApplicationID and InvoiceNumber columns show purchases made. Negative values in the Revenue column indicate a refund. The ApplicationID does not change when a purchase is refunded, but the InvoiceNumber changes for the refund. I identify refunds by the revenue totals of different InvoiceNumbers within the same ApplicationID summing to zero. For example, customer A bought 4 products with InvoiceNumber=AA in the ApplicationID=11 purchase, but refunded 2 of them (InvoiceNumber=BB). I want to get the rows that remain after the refunds are removed. So in this example, rows 1-2 and 5-6 eliminate each other for ApplicationID=11 and only rows 3-4 remain. In addition, the ApplicationID=22 and ApplicationID=33 rows will also come through, as they contain no refunds. Finally, rows 3, 4, 7, 8 and 9 should be returned. How do I do this?
CustomerCode  ApplicationID  InvoiceNumber  Date       Revenue
A             11             AA             1.01.2020  150
A             11             AA             2.01.2020  200
A             11             AA             1.01.2020  250
A             11             AA             1.01.2020  300
A             11             BB             5.01.2020  -150
A             11             BB             5.01.2020  -200
A             22             CC             7.02.2020  500
A             22             DD             7.02.2020  700
A             11             AA             2.01.2020  800
I wrote the result I want below. I want to exclude the rows whose revenue sums to zero per CustomerCode and ApplicationID, and fetch all the other columns.
example code:
select a.CustomerCode,a.ApplicationID from Table a
group by CustomerCode,a.ApplicationID
having SUM(Revenue)>0
My desired result:
CustomerCode  ApplicationID  InvoiceNumber  Date       Revenue
A             11             AA             1.01.2020  250
A             11             AA             1.01.2020  300
A             22             CC             7.02.2020  500
A             22             DD             7.02.2020  700
A             11             AA             2.01.2020  800
I think you've gone down a route of needing to sum your results to remove certain rows from your data but that's not necessarily the case.
You can use a LEFT JOIN back to itself, joining on CustomerCode, ApplicationID and Revenue = -Revenue; this effectively finds "purchase" rows that have an associated "refund" row (and vice versa). You can then just filter them out with your WHERE clause.
Here's the code I used:
DROP TABLE IF EXISTS #Orders
CREATE TABLE #Orders (CustomerCode VARCHAR(1), ApplicationID INT, InvoiceNumber VARCHAR(2), [Date] DATE, Revenue INT)
INSERT INTO #Orders (CustomerCode, ApplicationID, InvoiceNumber, Date, Revenue)
VALUES ('A', 11, 'AA', '2020-01-01', 150),
('A', 11, 'AA', '2020-01-02', 200),
('A', 11, 'AA', '2020-01-01', 250),
('A', 11, 'AA', '2020-01-01', 300),
('A', 11, 'BB', '2020-01-05', -150),
('A', 11, 'BB', '2020-01-05', -200),
('A', 22, 'CC', '2020-01-07', 500),
('A', 22, 'DD', '2020-01-07', 700),
('A', 11, 'AA', '2020-01-02', 800)
SELECT O.CustomerCode, O.ApplicationID, O.InvoiceNumber, O.Date, O.Revenue
FROM #Orders AS O
LEFT JOIN #Orders AS O2 ON O2.ApplicationID = O.ApplicationID AND O2.CustomerCode = O.CustomerCode AND O.Revenue = -O2.Revenue
WHERE O2.ApplicationID IS NULL
And this is the output:
CustomerCode  ApplicationID  InvoiceNumber  Date        Revenue
A             11             AA             2020-01-01  250
A             11             AA             2020-01-01  300
A             22             CC             2020-01-07  500
A             22             DD             2020-01-07  700
A             11             AA             2020-01-02  800
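For what it's worth, the same anti-join can be phrased with NOT EXISTS, which reads more directly as "rows with no matching refund" - a sketch against the same #Orders table, not part of the original answer:

SELECT O.CustomerCode, O.ApplicationID, O.InvoiceNumber, O.Date, O.Revenue
FROM #Orders AS O
WHERE NOT EXISTS
    (SELECT 1
     FROM #Orders AS O2
     WHERE O2.ApplicationID = O.ApplicationID
       AND O2.CustomerCode = O.CustomerCode
       AND O2.Revenue = -O.Revenue)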

Dynamically Insert into Table A Based on Row_Num from Table B

I've condensed some data into TableB which looks like the following:
+========+==========+=========+
| AreaID | AreaName | Row_Num |
+========+==========+=========+
| 506 | BC-VanW | 1 |
+--------+----------+---------+
| 3899 | BC-VicS | 2 |
+--------+----------+---------+
| 1253 | AB-CalW | 3 |
+--------+----------+---------+
There are 2000 unique rows in total, Row_Num from 1 to 2000, and every AreaID in this table is naturally unique as well.
I now want to insert into a blank table, TableA, which has the following columns:
+========+==========+=========+============+===========+
| AreaID | StartDT | EndDT | MarketCode |Allocation |
+========+==========+=========+============+===========+
The insert statement I want to use repeats everything except for the AreaID.
I was attempting some things earlier, and this is a basic look at what I have that I'm hoping Stack Overflow could help me expand on:
DECLARE #AreaID NVARCHAR(4)
SET #AreaID = (SELECT AreaID FROM TableB WHERE Row_Num = 1)
DECLARE #Sql NVARCHAR(MAX)
SET #Sql = N'
INSERT INTO TableA (AreaID, StartDt, EndDt, MarketCode, Allocation) VALUES ('+#AreaID+', ''2020-11-01 00:00:00.000'', ''2049-12-31 00:00:00.000'' , 31 , 25.00);
INSERT INTO TableA (AreaID, StartDt, EndDt, MarketCode, Allocation) VALUES ('+#AreaID+', ''2020-11-01 00:00:00.000'', ''2049-12-31 00:00:00.000'' , 38 , 60.00);
INSERT INTO TableA (AreaID, StartDt, EndDt, MarketCode, Allocation) VALUES ('+#AreaID+', ''2020-11-01 00:00:00.000'', ''2049-12-31 00:00:00.000'' , 39 , 15.00);
'
EXEC sp_executesql #Sql
GO
From here I would want it to 'loop' through the Row_Nums, once and once only, and run the full insert query above eventually doing it for all 2000 Row_Nums.
Of course, if there is a more efficient way please let me know and I will take a look.
Thanks!
I think you want a cross join with a fixed list of values:
INSERT INTO TableA (AreaID, StartDt, EndDt, MarketCode, Allocation)
SELECT b.AreaID, x.*
FROM tableB b
CROSS APPLY (VALUES
('2020-11-01 00:00:00.000', '2049-12-31 00:00:00.000', 31, 25.00),
('2020-11-01 00:00:00.000', '2049-12-31 00:00:00.000', 38, 60.00),
('2020-11-01 00:00:00.000', '2049-12-31 00:00:00.000', 39, 15.00)
) x(StartDt, EndDt, MarketCode, Allocation)
If the date range is always the same, this can be simplified a little:
INSERT INTO TableA (AreaID, StartDt, EndDt, MarketCode, Allocation)
SELECT b.AreaID, '2020-11-01 00:00:00.000', '2049-12-31 00:00:00.000', x.*
FROM tableB b
CROSS APPLY (VALUES
(31 , 25.00),
(38 , 60.00),
(39 , 15.00)
) x(MarketCode, Allocation)
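Since the answer describes this as a cross join, it may help to see that the CROSS APPLY over a VALUES list is the same thing here as an explicit CROSS JOIN against a derived table - a sketch using the same assumed table and column names:

INSERT INTO TableA (AreaID, StartDt, EndDt, MarketCode, Allocation)
SELECT b.AreaID, '2020-11-01 00:00:00.000', '2049-12-31 00:00:00.000', x.MarketCode, x.Allocation
FROM TableB b
CROSS JOIN (VALUES
    (31, 25.00),
    (38, 60.00),
    (39, 15.00)
) x(MarketCode, Allocation)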

Get sales records totaling more than $1000 in any 3-hour timespan?

I'm asking because I'm not sure what to google for - attempts that seemed obvious to me returned nothing useful.
I have sales of objects coming into the database at particular datetimes with particular $ values. I want to get all groups of sales records that a) fall within any 3-hour time frame (not just "on the hour" windows like 1am-4am), and b) total >= $1000.
The table looks like:
Sales
SaleId int primary key
Item varchar
SaleAmount money
SaleDate datetime
Even just a suggestion on what I should be googling for would be appreciated lol!
EDIT:
OK, after trying the cross apply solution - it's close but not quite there. To illustrate, consider the following sample data:
-- table & data script
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Sales](
[pkid] [int] IDENTITY(1,1) NOT NULL,
[item] [int] NULL,
[amount] [money] NULL,
[saledate] [datetime] NULL,
CONSTRAINT [PK_Sales] PRIMARY KEY CLUSTERED
(
[pkid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
INSERT [dbo].[Sales] VALUES (1, 649.3800, CAST(N'2017-12-31T21:46:19.000' AS DateTime))
INSERT [dbo].[Sales] VALUES (1, 830.6700, CAST(N'2018-01-01T08:38:58.000' AS DateTime))
INSERT [dbo].[Sales] VALUES (1, 321.0400, CAST(N'2018-01-01T09:08:04.000' AS DateTime))
INSERT [dbo].[Sales] VALUES (3, 762.0300, CAST(N'2018-01-01T07:26:30.000' AS DateTime))
INSERT [dbo].[Sales] VALUES (2, 733.5100, CAST(N'2017-12-31T12:04:07.000' AS DateTime))
INSERT [dbo].[Sales] VALUES (3, 854.5700, CAST(N'2018-01-01T08:32:11.000' AS DateTime))
INSERT [dbo].[Sales] VALUES (2, 644.1700, CAST(N'2017-12-31T17:49:59.000' AS DateTime))
INSERT [dbo].[Sales] VALUES (1, 304.7700, CAST(N'2018-01-01T08:01:50.000' AS DateTime))
INSERT [dbo].[Sales] VALUES (2, 415.1200, CAST(N'2017-12-31T20:27:28.000' AS DateTime))
INSERT [dbo].[Sales] VALUES (3, 698.1700, CAST(N'2018-01-01T02:39:28.000' AS DateTime))
A simple adaptation of the cross apply solution from the comments, to go item by item:
select s.*
, s2.saleamount_sum
from Sales s cross apply
(select sum(s_in.amount) as saleamount_sum
from Sales s_in
where s.item = s_in.item
and s.saledate >= s_in.saledate and s_in.saledate < dateadd(hour, 3, s.saledate)
) s2
where s2.saleamount_sum > 1000
order by s.item, s.saledate
So the actual data (sorted by item/time) looks like:
pkid  item  amount  saledate
1     1     649.38  2017-12-31 21:46:19.000
8     1     304.77  2018-01-01 08:01:50.000
2     1     830.67  2018-01-01 08:38:58.000
3     1     321.04  2018-01-01 09:08:04.000
5     2     733.51  2017-12-31 12:04:07.000
7     2     644.17  2017-12-31 17:49:59.000
9     2     415.12  2017-12-31 20:27:28.000
10    3     698.17  2018-01-01 02:39:28.000
4     3     762.03  2018-01-01 07:26:30.000
6     3     854.57  2018-01-01 08:32:11.000
and the result of the cross apply method:
pkid  item  amount  saledate          saleamount_sum
2     1     830.67  1/1/18 8:38 AM    1784.82
3     1     321.04  1/1/18 9:08 AM    2105.86
7     2     644.17  12/31/17 5:49 PM  1377.68
9     2     415.12  12/31/17 8:27 PM  1792.8
4     3     762.03  1/1/18 7:26 AM    1460.2
6     3     854.57  1/1/18 8:32 AM    2314.77
The issue can be seen by considering the method's analysis of Item 1. From the data, we see that the FIRST sale of item 1 does not participate in any 3-hour-over-$1000 group. The second, third, and fourth Item 1 sales, however, do participate, and they are correctly picked out: pkid = 2 and 3. But their sums aren't right - both of their sums include the very FIRST sale of Item 1, which does not participate in the timespan/amount condition. I would have expected the saleamount_sum for pkid 2 to be 1135.44, and for pkid 3 to be 1456.48 (their reported sums minus the first non-participating sale).
Hopefully that makes sense. I'll try fiddling with the cross apply query to get it. Anyone who can quickly see how to get what I'm after, please feel free to chime in.
thanks,
-sff
Here is one method using apply:
select t.*, tt.saleamount_sum
from t cross apply
(select sum(t2.saleamount) as saleamount_sum
from t t2
where t2.saledate >= t.saledate and t2.saledate < dateadd(hour, 3, t.saledate)
) tt
where tt.saleamount_sum > 1000;
Edit:
If you want this per item (which is not specified in the question), then you need a condition to that effect:
select t.*, tt.saleamount_sum
from t cross apply
(select sum(t2.saleamount) as saleamount_sum
from t t2
where t2.item = t.item and t2.saledate >= t.saledate and t2.saledate < dateadd(hour, 3, t.saledate)
) tt
where tt.saleamount_sum > 1000;
Your query had one wrong comparison (s.saledate >= s_in.saledate) instead of s_in.saledate >= s.saledate. The inner query below looks for the next 3 hours for each row of the outer query.
Sample data
DECLARE #Sales TABLE (
[pkid] [int] IDENTITY(1,1) NOT NULL,
[item] [int] NULL,
[amount] [money] NULL,
[saledate] [datetime] NULL
);
INSERT INTO #Sales VALUES (1, 649.3800, CAST(N'2017-12-31T21:46:19.000' AS DateTime))
INSERT INTO #Sales VALUES (1, 830.6700, CAST(N'2018-01-01T08:38:58.000' AS DateTime))
INSERT INTO #Sales VALUES (1, 321.0400, CAST(N'2018-01-01T09:08:04.000' AS DateTime))
INSERT INTO #Sales VALUES (3, 762.0300, CAST(N'2018-01-01T07:26:30.000' AS DateTime))
INSERT INTO #Sales VALUES (2, 733.5100, CAST(N'2017-12-31T12:04:07.000' AS DateTime))
INSERT INTO #Sales VALUES (3, 854.5700, CAST(N'2018-01-01T08:32:11.000' AS DateTime))
INSERT INTO #Sales VALUES (2, 644.1700, CAST(N'2017-12-31T17:49:59.000' AS DateTime))
INSERT INTO #Sales VALUES (1, 304.7700, CAST(N'2018-01-01T08:01:50.000' AS DateTime))
INSERT INTO #Sales VALUES (2, 415.1200, CAST(N'2017-12-31T20:27:28.000' AS DateTime))
INSERT INTO #Sales VALUES (3, 698.1700, CAST(N'2018-01-01T02:39:28.000' AS DateTime))
INSERT INTO #Sales VALUES (4, 600, CAST(N'2018-01-01T02:39:01.000' AS DateTime))
INSERT INTO #Sales VALUES (4, 600, CAST(N'2018-01-01T02:39:02.000' AS DateTime))
INSERT INTO #Sales VALUES (4, 600, CAST(N'2018-01-01T02:39:03.000' AS DateTime))
INSERT INTO #Sales VALUES (4, 600, CAST(N'2018-01-01T02:39:04.000' AS DateTime))
INSERT INTO #Sales VALUES (4, 600, CAST(N'2018-01-01T02:39:05.000' AS DateTime))
INSERT INTO #Sales VALUES (4, 600, CAST(N'2018-01-01T02:39:06.000' AS DateTime))
Query
select
s.*
, s2.saleamount_sum
from
#Sales AS s
cross apply
(
select sum(s_in.amount) as saleamount_sum
from #Sales AS s_in
where
s.item = s_in.item
and s_in.saledate >= s.saledate
and s_in.saledate < dateadd(hour, 3, s.saledate)
) AS s2
where s2.saleamount_sum > 1000
order by s.item, s.saledate
;
Result
+------+------+--------+-------------------------+----------------+
| pkid | item | amount | saledate | saleamount_sum |
+------+------+--------+-------------------------+----------------+
| 8 | 1 | 304.77 | 2018-01-01 08:01:50.000 | 1456.48 |
| 2 | 1 | 830.67 | 2018-01-01 08:38:58.000 | 1151.71 |
| 7 | 2 | 644.17 | 2017-12-31 17:49:59.000 | 1059.29 |
| 4 | 3 | 762.03 | 2018-01-01 07:26:30.000 | 1616.60 |
| 11 | 4 | 600.00 | 2018-01-01 02:39:01.000 | 3600.00 |
| 12 | 4 | 600.00 | 2018-01-01 02:39:02.000 | 3000.00 |
| 13 | 4 | 600.00 | 2018-01-01 02:39:03.000 | 2400.00 |
| 14 | 4 | 600.00 | 2018-01-01 02:39:04.000 | 1800.00 |
| 15 | 4 | 600.00 | 2018-01-01 02:39:05.000 | 1200.00 |
+------+------+--------+-------------------------+----------------+
I added 6 rows with item=4 to the sample data. They are all within 3 hours and there are 5 subsets of these 6 rows that have a sum larger than 1000. Technically this result is correct, but do you really want this kind of result?
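If only one row per item is wanted (for example, the window with the largest 3-hour total), one option is to rank the qualifying windows per item and keep the top one - just a sketch layered on the query above, using the same #Sales table:

SELECT ranked.*
FROM
(
    SELECT
        s.*
        , s2.saleamount_sum
        , ROW_NUMBER() OVER (PARTITION BY s.item ORDER BY s2.saleamount_sum DESC) AS rn
    FROM #Sales AS s
    CROSS APPLY
    (
        SELECT SUM(s_in.amount) AS saleamount_sum
        FROM #Sales AS s_in
        WHERE s.item = s_in.item
          AND s_in.saledate >= s.saledate
          AND s_in.saledate < DATEADD(HOUR, 3, s.saledate)
    ) AS s2
    WHERE s2.saleamount_sum > 1000
) AS ranked
WHERE ranked.rn = 1
ORDER BY ranked.item;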
To get all sales within a specified hour-of-day interval:
SELECT SaleId, SUM(SaleAmount) AS amount FROM Sales WHERE DATEPART(HOUR, SaleDate) BETWEEN 1 AND 4 GROUP BY SaleId HAVING SUM(SaleAmount) >= 1000;
You can add other conditions in the WHERE clause.
If you're looking for fixed periods like 0:00-3:00, 3:00-6:00, you can group by those intervals. The following query rounds the hour down to a multiple of three, combines it with the date, and groups on that:
select format(dt, 'yyyy-MM-dd') + ' ' +
       cast(datepart(hour, dt) / 3 * 3 as varchar(2)) as period
     , sum(amount) as total
from sales
group by
    format(dt, 'yyyy-MM-dd') + ' ' +
    cast(datepart(hour, dt) / 3 * 3 as varchar(2))
having sum(amount) > 1000
Working example at rextester.
If you're looking for any 3-hour period, like 0:33-3:33 or 12:01-15:01, see Gordon Linoff's answer.

Selecting the second (middle) row between MIN & MAX values in SQL Server

I have the following table:
TicketNumber CallDate
--------------------------------------------
101 10/09/2015 3:15:43 PM
101 10/09/2015 3:45:43 PM
101 11/19/2015 2:23:09 PM
I want to select the min date, the middle date and the max date. It is easy to get the first and last dates using MIN and MAX, but how do I SELECT (get) the second date?
SELECT
TicketNumber
, MIN(CallDate) CallDate1
, MAX(CallDate) CallDate3
, COUNT(TicketNumber) [Count]
FROM Table1
WHERE -(conditions)-
GROUP BY TicketNumber
HAVING COUNT(TicketNumber)=3
Between the MIN and MAX dates in the SELECT statement, I want the date of the second row.
The expected output should be:
TicketNumber CallDate1 CallDate2 CallDate3 Count
------------------------------------------------------------------------------------------
101 10/9/2015 3:15:43 PM 10/9/2015 3:45:43 PM 11/19/2015 2:23:09 PM 3
Here is one possible variant. First number and count all rows, then filter only those TicketNumbers that have three tickets and PIVOT the result.
Sample data
DECLARE #Tickets TABLE (TicketNumber int, CallDate datetime2(0));
INSERT INTO #Tickets (TicketNumber, CallDate) VALUES
(101, '2015-10-09 03:15:43'),
(101, '2015-10-09 03:45:43'),
(101, '2015-11-19 02:23:09'),
(102, '2015-11-20 02:23:09'),
(102, '2015-11-19 02:23:09'),
(102, '2015-11-21 02:23:09'),
(103, '2015-11-10 02:23:09'),
(103, '2015-11-19 02:23:09'),
(104, '2015-11-11 02:23:09'),
(104, '2015-11-01 02:23:09'),
(104, '2015-11-21 02:23:09'),
(104, '2015-11-30 02:23:09');
Query
WITH
CTE
AS
(
SELECT
TicketNumber
,CallDate
,ROW_NUMBER() OVER (PARTITION BY TicketNumber ORDER BY CallDate) AS rn
,COUNT(*) OVER (PARTITION BY TicketNumber) AS cnt
FROM
#Tickets AS T
)
SELECT
P.TicketNumber
,[1] AS CallDate1
,[2] AS CallDate2
,[3] AS CallDate3
,cnt
FROM
CTE
PIVOT (MIN(CTE.CallDate) FOR rn IN ([1], [2], [3])) AS P
WHERE cnt = 3
ORDER BY P.TicketNumber;
Result
+--------------+---------------------+---------------------+---------------------+-----+
| TicketNumber | CallDate1 | CallDate2 | CallDate3 | cnt |
+--------------+---------------------+---------------------+---------------------+-----+
| 101 | 2015-10-09 03:15:43 | 2015-10-09 03:45:43 | 2015-11-19 02:23:09 | 3 |
| 102 | 2015-11-19 02:23:09 | 2015-11-20 02:23:09 | 2015-11-21 02:23:09 | 3 |
+--------------+---------------------+---------------------+---------------------+-----+
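As a side note, the same reshaping can be done with conditional aggregation instead of PIVOT - a sketch repeating the same CTE against the #Tickets sample above:

WITH CTE AS
(
    SELECT
        TicketNumber
        ,CallDate
        ,ROW_NUMBER() OVER (PARTITION BY TicketNumber ORDER BY CallDate) AS rn
        ,COUNT(*) OVER (PARTITION BY TicketNumber) AS cnt
    FROM #Tickets AS T
)
SELECT
    TicketNumber
    ,MAX(CASE WHEN rn = 1 THEN CallDate END) AS CallDate1
    ,MAX(CASE WHEN rn = 2 THEN CallDate END) AS CallDate2
    ,MAX(CASE WHEN rn = 3 THEN CallDate END) AS CallDate3
    ,MAX(cnt) AS cnt
FROM CTE
WHERE cnt = 3
GROUP BY TicketNumber
ORDER BY TicketNumber;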
This can also be achieved using table joins:
SELECT t1.TicketNumber, t2.CallDate1, t1.CallDate AS CallDate2, t2.CallDate3, t2.Count
FROM tickets AS t1
JOIN (
SELECT TicketNumber, MIN(CallDate) AS CallDate1, MAX(CallDate) AS CallDate3,
COUNT(TicketNumber) AS Count
FROM tickets
GROUP BY TicketNumber
HAVING COUNT(TicketNumber)=3
) AS t2
ON t1.TicketNumber = t2.TicketNumber
WHERE t1.CallDate > t2.CallDate1
AND t1.CallDate < t2.CallDate3