Pivot Time on row and Date on Column in SQL Server 2012 - sql

I want to pivot date and time from two joined tables.
Table: Shipping
+----+---------+--------------+--------------+----------------+
| ID | PartNum | ForecastTime | ForecastDate | ForecastNetqty |
+----+---------+--------------+--------------+----------------+
| 1  | x001    | 8:00         | 20180101     | 5              |
| 2  | x001    | 12:00        | 20180101     | 10             |
| 3  | x002    | 12:00        | 20180102     | 15             |
| 4  | x003    | 08:00        | 20180101     | 13             |
| 5  | x003    | 12:00        | 20180103     | 12             |
| 6  | x004    | 8:00         | 20180104     | 10             |
| 7  | x004    | 12:00        | 20180104     | 5              |
| 8  | x005    |              | 20180103     | 5              |
| 9  | x005    | 8:00         | 20180104     | 13             |
| 10 | x005    | 12:00        | 20180104     | 15             |
+----+---------+--------------+--------------+----------------+
Table: Masterdata
+-----+--------+-------------+---------------+----------------+
| ID |Material| Shipto | DV | CusMaterialNum |
+-----+--------+-------------+---------------+----------------+
| 1 | 12345 | 11200 | 0101 | x001 |
| 2 | 98765 | 11201 | 0202 | x002 |
| 3 | 45678 | 11202 | 0303 | x003 |
| 4 | 12354 | 11203 | 0404 | x004 |
| 5 | 12365 | 11204 | 0505 | x005 |
+-----+--------+-------------+---------------+----------------+
I want to build this report, looping over dates from the min to the max of ForecastDate:
+---------+----------+--------+------+--------------+----------+----------+----------+----------+
| PartNum | Material | Shipto | DV   | ForecastTime | 20180101 | 20180102 | 20180103 | 20180104 |
+---------+----------+--------+------+--------------+----------+----------+----------+----------+
| x001    | 12345    | 11200  | 0101 | 08:00        | 5        |          |          |          |
|         | 12345    | 11200  | 0101 | 12:00        | 10       |          |          |          |
| x002    | 98765    | 11201  | 0202 | 12:00        |          | 15       |          |          |
| x003    | 45678    | 11202  | 0303 | 8:00         | 13       |          |          |          |
|         | 45678    | 11202  | 0303 | 12:00        |          |          | 12       |          |
| x004    | 12354    | 11203  | 0404 | 08:00        | 5        |          |          | 10       |
|         | 12354    | 11203  | 0404 | 12:00        | 10       |          |          | 5        |
| x005    | 12365    | 11204  | 0505 |              |          |          | 5        |          |
|         | 12365    | 11204  | 0505 | 8:00         |          |          |          | 13       |
|         | 12365    | 11204  | 0505 | 12:00        |          |          |          | 15       |
+---------+----------+--------+------+--------------+----------+----------+----------+----------+

If you know all your dates, you can use this code (note that table variables are declared with @, not #):
declare @Shipping table (ID int, PartNum varchar(100), ForecastTime time, ForecastDate date, ForecastNetqty int);
insert into @Shipping values
( 1 , 'x001' , '8:00'  , '20180101' , 5 ),
( 2 , 'x001' , '12:00' , '20180101' , 10 ),
( 3 , 'x002' , '12:00' , '20180102' , 15 ),
( 4 , 'x003' , '8:00'  , '20180101' , 13 ),
( 5 , 'x003' , '12:00' , '20180103' , 12 ),
( 6 , 'x004' , '8:00'  , '20180104' , 10 ),
( 7 , 'x004' , '12:00' , '20180104' , 5 ),
( 8 , 'x005' , null    , '20180103' , 5 ),
( 9 , 'x005' , '8:00'  , '20180104' , 13 ),
( 10 , 'x005' , '12:00' , '20180104' , 15 );
declare @Masterdata table (ID int, Material int, Shipto int, DV varchar(100), CusMaterialNum varchar(100));
insert into @Masterdata values
( 1 , 12345 , 11200 , '0101' , 'x001' ),
( 2 , 98765 , 11201 , '0202' , 'x002' ),
( 3 , 45678 , 11202 , '0303' , 'x003' ),
( 4 , 12354 , 11203 , '0404' , 'x004' ),
( 5 , 12365 , 11204 , '0505' , 'x005' );
with d as
(
    select s.PartNum,
           m.Material,
           m.Shipto,
           m.DV,
           s.ForecastTime,
           s.ForecastDate,
           s.ForecastNetqty
    from @Shipping s
    join @Masterdata m on s.PartNum = m.CusMaterialNum
)
select *
from d pivot (sum(ForecastNetqty) for ForecastDate in ([2018-01-01], [2018-01-02], [2018-01-03], [2018-01-04])) p;
If your dates are dynamic and you cannot hard-code in ([2018-01-01], [2018-01-02], [2018-01-03], [2018-01-04]), you need dynamic SQL to build the IN list.
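A sketch of the dynamic version (assuming Shipping and Masterdata exist as permanent tables here, since table variables are not visible inside a dynamic batch; SQL Server 2012 has no STRING_AGG, so STUFF with FOR XML PATH builds the column list):

```sql
declare @cols nvarchar(max), @sql nvarchar(max);

-- build ",[2018-01-01],[2018-01-02],..." from the distinct dates, then strip the leading comma
select @cols = stuff((
    select distinct ',' + quotename(convert(char(10), ForecastDate, 120))
    from Shipping
    for xml path(''), type
).value('.', 'nvarchar(max)'), 1, 1, '');

set @sql = N'
with d as
(
    select s.PartNum, m.Material, m.Shipto, m.DV,
           s.ForecastTime, s.ForecastDate, s.ForecastNetqty
    from Shipping s
    join Masterdata m on s.PartNum = m.CusMaterialNum
)
select *
from d pivot (sum(ForecastNetqty) for ForecastDate in (' + @cols + ')) p;';

exec sp_executesql @sql;
```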

Related

How to get interpolation value in SQL Server?

I want to get interpolation value for NULL. Interpolation is a statistical method by which related known values are used to estimate an unknown price or potential yield of a security. Interpolation is achieved by using other established values that are located in sequence with the unknown value.
Here is my sample table and code.
https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=673fcd5bc250bd272e8b6da3d0eddb90
I want to get this result:
| SEQ | cat01 | cat02 | dt_day | price | coeff |
+-----+-------+-------+------------+-------+--------+
| 1 | 230 | 1 | 2019-01-01 | 16000 | 0 |
| 2 | 230 | 1 | 2019-01-02 | NULL | 1 |
| 3 | 230 | 1 | 2019-01-03 | 13000 | 0 |
| 4 | 230 | 1 | 2019-01-04 | NULL | 1 |
| 5 | 230 | 1 | 2019-01-05 | NULL | 2 |
| 6 | 230 | 1 | 2019-01-06 | NULL | 3 |
| 7 | 230 | 1 | 2019-01-07 | 19000 | 0 |
| 8 | 230 | 1 | 2019-01-08 | 20000 | 0 |
| 9 | 230 | 1 | 2019-01-09 | 21500 | 0 |
| 10 | 230 | 1 | 2019-01-10 | 21500 | 0 |
| 11 | 230 | 1 | 2019-01-11 | NULL | 1 |
| 12 | 230 | 1 | 2019-01-12 | NULL | 2 |
| 13 | 230 | 1 | 2019-01-13 | 23000 | 0 |
| 1 | 230 | 2 | 2019-01-01 | NULL | 1 |
| 2 | 230 | 2 | 2019-01-02 | NULL | 2 |
| 3 | 230 | 2 | 2019-01-03 | 12000 | 0 |
| 4 | 230 | 2 | 2019-01-04 | 17000 | 0 |
| 5 | 230 | 2 | 2019-01-05 | 22000 | 0 |
| 6 | 230 | 2 | 2019-01-06 | NULL | 1 |
| 7 | 230 | 2 | 2019-01-07 | 23000 | 0 |
| 8 | 230 | 2 | 2019-01-08 | 23200 | 0 |
| 9 | 230 | 2 | 2019-01-09 | NULL | 1 |
| 10 | 230 | 2 | 2019-01-10 | NULL | 2 |
| 11 | 230 | 2 | 2019-01-11 | NULL | 3 |
| 12 | 230 | 2 | 2019-01-12 | NULL | 4 |
| 13 | 230 | 2 | 2019-01-13 | 23000 | 0 |
I used the code below, but I think it is incorrect.
coeff is each NULL's position within its run of consecutive NULLs.
The code is my attempt at implementing the interpolation:
I tried to find the known values on either side of each gap and divide their difference by the number of missing rows.
But this code is incorrect.
WITH ROW_VALUE AS
(
SELECT SEQ
, dt_day
, cat01
, cat02
, price
, ROW_NUMBER() OVER (ORDER BY dt_day) AS sub_seq
FROM (
SELECT SEQ
, cat01
, cat02
, dt_day
, dt_week
, dt_month
, price
FROM temp01
WHERE price IS NOT NULL
)val
)
,STEP_CHANGE AS(
SELECT RV1.SEQ AS id_Start
, RV1.SEQ - 1 AS id_End
, RV1.cat01
, RV1.cat02
, RV1.dt_day
, RV1.price
, (RV2.price - RV1.price)/(RV2.SEQ - RV1.SEQ) AS change1
FROM ROW_VALUE RV1
LEFT JOIN ROW_VALUE RV2 ON RV1.cat01 = RV2.cat01
AND RV1.cat02 = RV2.cat02
AND RV1.SEQ = RV2.SEQ - 1
)
SELECT *
FROM STEP_CHANGE
ORDER BY cat01, cat02, dt_day
Please, let me know what a good way to fill NULL using linear relationships.
If there is another good way, please recommend it.
If I assume that you mean linear interpolation between the previous price and the next price based on the number of days that passed, then you can use the following method:
Use window functions to get the next and previous days with prices for each row.
Use window functions or joins to get the prices on those days as well.
Use arithmetic to calculate the linear interpolation.
Your SQL fiddle uses SQL Server, so I assume that is the database you are using. The code looks like this:
select t.*,
       coalesce(t.price,
                (tprev.price +
                 (tnext.price - tprev.price) / datediff(day, t.prev_price_day, t.next_price_day) *
                 datediff(day, t.prev_price_day, t.dt_day)
                )
               ) as imputed_price
from (select t.*,
             max(case when price is not null then dt_day end) over (partition by cat01, cat02 order by dt_day asc) as prev_price_day,
             min(case when price is not null then dt_day end) over (partition by cat01, cat02 order by dt_day desc) as next_price_day
      from temp01 t
     ) t left join
     temp01 tprev
       on tprev.cat01 = t.cat01 and
          tprev.cat02 = t.cat02 and
          tprev.dt_day = t.prev_price_day left join
     temp01 tnext
       on tnext.cat01 = t.cat01 and
          tnext.cat02 = t.cat02 and
          tnext.dt_day = t.next_price_day
order by t.cat01, t.cat02, t.dt_day;
Here is a db<>fiddle.

Get successive differences of rows, including both the first and last row, grouped by one or more columns

I'm trying to get the successive differences of rows of data in SQL, including a difference from 0 for the first row and back to 0 after the last row, where the rows are grouped by multiple columns.
I have two tables that look like this
Date Value
+------------+-------+ +------------+-------+------+------+
| Date | Name | | Date | Value | Name | Type |
+------------+-------+ +------------+-------+------+------+
| 2019-10-10 | A | | 2019-10-11 | 10 | A | X |
| 2019-10-11 | A | | 2019-10-12 | 11 | A | X |
| 2019-10-12 | A | | 2019-10-14 | 20 | A | X |
| 2019-10-13 | A | | 2019-10-11 | 10 | A | Y |
| 2019-10-14 | A | | 2019-10-12 | 22 | A | Y |
| 2019-10-15 | A | | 2019-10-14 | 30 | A | Y |
| 2019-10-10 | B | | 2019-10-11 | 10 | B | X |
| 2019-10-11 | B | | 2019-10-12 | 33 | B | X |
| 2019-10-12 | B | | 2019-10-14 | 40 | B | X |
| 2019-10-13 | B | | 2019-10-11 | 10 | B | Y |
| 2019-10-14 | B | | 2019-10-12 | 44 | B | Y |
| 2019-10-15 | B | | 2019-10-15 | 50 | B | Y |
+------------+-------+ +------------+-------+------+------+
The Date table holds the universe of dates for the different names. The Value table has values of different types for each name. I'd like to get a set of successive differences for every value, grouped by Name and Type.
The end result I'm looking for is
+------------+-------+------+-------+---------------+------------+
| Date | Name | Type | Value | PreviousValue | Difference |
+------------+-------+------+-------+---------------+------------+
| 2019-10-11 | A | X | 10 | 0 | 10 |
| 2019-10-12 | A | X | 11 | 10 | 1 |
| 2019-10-14 | A | X | 20 | 11 | 9 |
| 2019-10-15 | A | X | 0 | 20 | -20 |
| 2019-10-11 | A | Y | 10 | 0 | 10 |
| 2019-10-12 | A | Y | 22 | 10 | 12 |
| 2019-10-14 | A | Y | 30 | 22 | 8 |
| 2019-10-15 | A | Y | 0 | 30 | -30 |
| 2019-10-11 | B | X | 10 | 0 | 10 |
| 2019-10-12 | B | X | 33 | 10 | 23 |
| 2019-10-14 | B | X | 40 | 33 | 7 |
| 2019-10-15 | B | X | 0 | 40 | -40 |
| 2019-10-11 | B | Y | 10 | 0 | 10 |
| 2019-10-12 | B | Y | 44 | 10 | 34 |
| 2019-10-15 | B | Y | 50 | 44 | 10 |
+------------+-------+------+-------+---------------+------------+
Note that the B–Y set of rows illustrates an important point—we might have a value for the last date, in which case there's no need for an "extra" row for that set.
The closest I can get right now is
SELECT
d.[Date],
d.[Name],
v.[Type],
v.[Value],
[PreviousValue] = COALESCE(LAG(v.[Value]) OVER (PARTITION BY d.[Name], v.[Type] ORDER BY d.[Date]), 0),
[Difference] = v.[Value] - COALESCE(LAG(v.[Value]) OVER (PARTITION BY d.[Name], v.[Type] ORDER BY d.[Date]), 0)
FROM
[Dates] d
LEFT JOIN
[Values] v
ON
d.[Date] = v.[Date]
AND d.[Name] = v.[Name]
But this doesn't produce the difference for the last row.
Since some data is missing on either side, you have to make up for it somehow.
One trick is to create the missing data by careful joining.
The example below first joins the types to the Dates data, so that a FULL JOIN with the Values data can also be done on the type.
Then, after adding enough COALESCEs or ISNULLs, calculating the metrics becomes easy.
CREATE TABLE [Dates](
[Date] DATE NOT NULL,
[Name] VARCHAR(8) NOT NULL,
PRIMARY KEY ([Date], [Name])
);
INSERT INTO [Dates]
([Date], [Name]) VALUES
('2019-10-10','A')
, ('2019-10-11','A')
, ('2019-10-12','A')
, ('2019-10-13','A')
, ('2019-10-14','A')
, ('2019-10-15','A')
, ('2019-10-10','B')
, ('2019-10-11','B')
, ('2019-10-12','B')
, ('2019-10-13','B')
, ('2019-10-15','B')
;
CREATE TABLE [Values](
[Id] INT IDENTITY(1,1) PRIMARY KEY,
[Date] DATE NOT NULL,
[Name] VARCHAR(8) NOT NULL,
[Value] INTEGER NOT NULL,
[Type] VARCHAR(8) NOT NULL
);
INSERT INTO [Values]
([Date], [Value], [Name], [Type]) VALUES
('2019-10-11', 10, 'A', 'X')
, ('2019-10-12', 11, 'A', 'X')
, ('2019-10-14', 20, 'A', 'X')
, ('2019-10-11', 10, 'A', 'Y')
, ('2019-10-12', 22, 'A', 'Y')
, ('2019-10-14', 30, 'A', 'Y')
, ('2019-10-11', 10, 'B', 'X')
, ('2019-10-12', 33, 'B', 'X')
, ('2019-10-14', 40, 'B', 'X')
, ('2019-10-11', 10, 'B', 'Y')
, ('2019-10-12', 44, 'B', 'Y')
, ('2019-10-15', 50, 'B', 'Y')
;
WITH CTE_DATA AS
(
SELECT
[Name] = COALESCE(d.[Name],v.[Name])
, [Type] = COALESCE(tp.[Type],v.[Type])
, [Date] = COALESCE(d.[Date],v.[Date])
, [Value] = ISNULL(v.[Value], 0)
FROM [Dates] AS d
INNER JOIN
(
SELECT [Name], [Type], MAX([Date]) AS [Date]
FROM [Values]
GROUP BY [Name], [Type]
) AS tp
ON tp.[Name] = d.[Name]
FULL JOIN [Values] AS v
ON v.[Date] = d.[Date]
AND v.[Name] = d.[Name]
AND v.[Type] = tp.[Type]
WHERE v.[Type] IS NOT NULL
OR d.[Date] > tp.[Date]
)
SELECT
[Name], [Type], [Date], [Value]
, [PreviousValue] = ISNULL(LAG([Value]) OVER (PARTITION BY [Name], [Type] ORDER BY [Date]), 0)
, [Difference] = [Value] - ISNULL(LAG([Value]) OVER (PARTITION BY [Name], [Type] ORDER BY [Date]), 0)
FROM CTE_DATA
ORDER BY [Name], [Type], [Date]
Name | Type | Date | Value | PreviousValue | Difference
:--- | :--- | :------------------ | ----: | ------------: | ---------:
A | X | 11/10/2019 00:00:00 | 10 | 0 | 10
A | X | 12/10/2019 00:00:00 | 11 | 10 | 1
A | X | 14/10/2019 00:00:00 | 20 | 11 | 9
A | X | 15/10/2019 00:00:00 | 0 | 20 | -20
A | Y | 11/10/2019 00:00:00 | 10 | 0 | 10
A | Y | 12/10/2019 00:00:00 | 22 | 10 | 12
A | Y | 14/10/2019 00:00:00 | 30 | 22 | 8
A | Y | 15/10/2019 00:00:00 | 0 | 30 | -30
B | X | 11/10/2019 00:00:00 | 10 | 0 | 10
B | X | 12/10/2019 00:00:00 | 33 | 10 | 23
B | X | 14/10/2019 00:00:00 | 40 | 33 | 7
B | X | 15/10/2019 00:00:00 | 0 | 40 | -40
B | Y | 11/10/2019 00:00:00 | 10 | 0 | 10
B | Y | 12/10/2019 00:00:00 | 44 | 10 | 34
B | Y | 15/10/2019 00:00:00 | 50 | 44 | 10
Test on db<>fiddle here
Just use lag() with the default value argument:
[PreviousValue] = COALESCE(LAG(v.Value, 1, 0) OVER (PARTITION BY d.[Name], v.[Type] ORDER BY d.[Date]), 0)
[Difference] = v.[Value] - COALESCE(LAG(v.Value, 1, 0) OVER (PARTITION BY d.[Name], v.[Type] ORDER BY d.[Date]), 0)
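As a stand-alone illustration of the three-argument LAG (a sketch against a throwaway table variable, not the asker's schema):

```sql
declare @t table (grp char(1), d date, v int);
insert into @t values ('A', '2019-10-11', 10), ('A', '2019-10-12', 11), ('A', '2019-10-14', 20);

select grp, d, v,
       lag(v, 1, 0) over (partition by grp order by d) as prev_v,
       v - lag(v, 1, 0) over (partition by grp order by d) as diff
from @t;
-- the first row per group gets prev_v = 0, so its diff equals its own value
```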

Returning MIN Row_Number() SQL

This is probably the clunkiest query I have ever made. I have to use a read-only account, so I can't use temp tables or anything else to make this easier. The goal is to return the MIN(RowNum) where sumPiecesScrapped = maxSum. I have tried wrapping the entire query in another subquery to return the MIN(RowNum); however, the data is one-to-many on the primary key JobNo, and when I join on JobNo and StepNo I get the same result as below.
SELECT
JobNo,
StepNo,
sumPiecesScrapped,
maxSum,
CASE
WHEN sumPiecesScrapped = maxSum THEN ROW_NUMBER() OVER(PARTITION BY JobNo ORDER BY JobNo, StepNo)
ELSE 0
END AS RowNum
FROM
(
SELECT
JobNo,
StepNo,
sumPiecesScrapped
FROM
(
SELECT
JobNo,
StepNo,
SUM(PiecesScrapped) as sumPiecesScrapped
FROM
(
SELECT
JobNo,
StepNo,
PiecesFinished,
PiecesScrapped
FROM TimeTicketDet
) tt2
GROUP BY JobNo, StepNo
) tt3
GROUP BY JobNo, StepNo, sumPiecesScrapped
) tt4
LEFT JOIN
(
SELECT
JobNo as tt5JobNo,
MAX(PiecesScrapped) as maxSum
FROM
(
SELECT
JobNo,
PiecesScrapped
FROM TimeTicketDet
) tt5
GROUP BY JobNo
) tt5
ON tt5.tt5JobNo = tt4.JobNo
WHERE tt4.JobNo = '12345'
Result:
+-------+--------+-------------------+--------+--------+
| JobNo | StepNo | sumPiecesScrapped | maxSum | RowNum |
+-------+--------+-------------------+--------+--------+
| 12345 | 10 | 0 | 5 | 0 |
| 12345 | 20 | 1 | 5 | 0 |
| 12345 | 30 | 5 | 5 | 3 |
| 12345 | 40 | 5 | 5 | 4 |
| 12345 | 60 | 5 | 5 | 5 |
| 12345 | 70 | 5 | 5 | 6 |
+-------+--------+-------------------+--------+--------+
Desired Result:
+-------+--------+-------------------+--------+--------+
| JobNo | StepNo | sumPiecesScrapped | maxSum | RowNum |
+-------+--------+-------------------+--------+--------+
| 12345 | 10 | 0 | 5 | 0 |
| 12345 | 20 | 1 | 5 | 0 |
| 12345 | 30 | 5 | 5 | 3 |
| 12345 | 40 | 5 | 5 | 3 |
| 12345 | 60 | 5 | 5 | 3 |
| 12345 | 70 | 5 | 5 | 3 |
+-------+--------+-------------------+--------+--------+
Other Possible Result:
+-------+--------+-------------------+--------+-----------+
| JobNo | StepNo | sumPiecesScrapped | maxSum | RowNum |
+-------+--------+-------------------+--------+-----------+
| 12345 | 10 | 0 | 5 | 0 |
| 12345 | 20 | 1 | 5 | 0 |
| 12345 | 30 | 5 | 5 | Something |
| 12345 | 40 | 5 | 5 | 0 |
| 12345 | 60 | 5 | 5 | 0 |
| 12345 | 70 | 5 | 5 | 0 |
+-------+--------+-------------------+--------+-----------+
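One way to get the first desired result (a sketch, assuming TimeTicketDet has the JobNo, StepNo, and PiecesScrapped columns used in the question) is to number the rows once in a CTE and then take a windowed MIN of that number over the rows that hit the max; a windowed MAX can also replace the self-join that computed maxSum:

```sql
with agg as (
    select JobNo, StepNo, sum(PiecesScrapped) as sumPiecesScrapped
    from TimeTicketDet
    group by JobNo, StepNo
), numbered as (
    select JobNo, StepNo, sumPiecesScrapped,
           max(sumPiecesScrapped) over (partition by JobNo) as maxSum,
           row_number() over (partition by JobNo order by StepNo) as rn
    from agg
)
select JobNo, StepNo, sumPiecesScrapped, maxSum,
       case when sumPiecesScrapped = maxSum
            then min(case when sumPiecesScrapped = maxSum then rn end)
                 over (partition by JobNo)
            else 0
       end as RowNum
from numbered
where JobNo = '12345';
```

This stays a single read-only SELECT, so it should work under a read-only account.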

SQL Dynamic Pivoting a "Week" Table

Table design:
| PeriodStart | Person | Day1 | Day2 | Day3 | Day4 | Day5 | Day6 | Day7 |
-------------------------------------------------------------------------
| 01/01/2018 | 123 | 2 | 4 | 6 | 8 | 10 | 12 | 14 |
| 01/15/2018 | 246 | 1 | 3 | 5 | 7 | 9 | 11 | 13 |
I am trying to create a pivot statement that can dynamically transpose both rows.
Desired output:
| Date | Person | Values |
--------------------------------
| 01/01/2018 | 123 | 2 |
| 01/02/2018 | 123 | 4 |
| 01/03/2018 | 123 | 6 |
| 01/04/2018 | 123 | 8 |
| 01/05/2018 | 123 | 10 |
| 01/06/2018 | 123 | 12 |
| 01/15/2018 | 246 | 1 |
| 01/16/2018 | 246 | 3 |
| 01/17/2018 | 246 | 5 |
... and so on. Date order not important
The following query will help initialize things:
DECLARE @WeekTable TABLE (
    [PeriodStart] datetime
    , [Person] int
    , [Day1] int
    , [Day2] int
    , [Day3] int
    , [Day4] int
    , [Day5] int
    , [Day6] int
    , [Day7] int
)
INSERT INTO @WeekTable(
    [PeriodStart],[Person],[Day1],[Day2],[Day3],[Day4],[Day5],[Day6],[Day7]
)
VALUES ('01/01/2018','123','2','4','6','8','10','12','14')
, ('01/15/2018','246','1','3','5','7','9','11','13')
Another option is to do it with the APPLY operator:
SELECT DATEADD(DAY, Days - 1, Dates) AS [Date], a.Person, Value
FROM @WeekTable t CROSS APPLY (
    VALUES (PeriodStart, Person, 1, Day1), (PeriodStart, Person, 2, Day2),
           (PeriodStart, Person, 3, Day3), (PeriodStart, Person, 4, Day4),
           (PeriodStart, Person, 5, Day5), (PeriodStart, Person, 6, Day6),
           (PeriodStart, Person, 7, Day7)
) a (Dates, Person, Days, Value)
You can use UNPIVOT to turn the day columns back into rows and then parse the number out of the column name (note the - 1, so that Day1 maps to PeriodStart itself):
with periods as (
    select * from @WeekTable
    unpivot ([Values] for [Day] in (Day1, Day2, Day3, Day4, Day5, Day6, Day7)) x
)
select dateadd(day, convert(int, substring([Day], 4, 1)) - 1, PeriodStart) as [Date], Person, [Values]
from periods
fiddle

How to Group by 6 days in Postgresql

I want to convert this type of data into a 6-day GROUP BY format.
+-----+--------------+------------+
| gid | cnt | date |
+-----+--------------+------------+
| 1 | 1 | 2012-02-05 |
| 2 | 2 | 2012-02-06 |
| 3 | 1 | 2012-02-07 |
| 4 | 1 | 2012-02-08 |
| 5 | 1 | 2012-02-09 |
| 6 | 2 | 2012-02-10 |
| 7 | 3 | 2012-02-11 |
| 8 | 1 | 2012-02-12 |
| 9 | 1 | 2012-02-13 |
| 10 | 2 | 2012-02-14 |
| 11 | 3 | 2012-02-15 |
| 12 | 4 | 2012-02-16 |
| 13 | 1 | 2012-02-17 |
| 14 | 1 | 2012-02-18 |
| 15 | 1 | 2012-02-19 |
| 16 | NULL | 2012-02-20 |
| 17 | 6 | 2012-02-21 |
| 18 | NULL | 2012-02-22 |
+-----+--------------+------------+
The dates are continuous.
If I understand correctly you need something like this:
WITH x AS (
    SELECT date::date, (random() * 3)::int AS cnt
    FROM generate_series('2012-02-05'::date, '2012-02-22'::date, '1 day'::interval) AS date
)
SELECT start::date,
       (start + '5 day'::interval)::date AS end,
       sum(cnt)
FROM generate_series(
        (SELECT min(date) FROM x),
        (SELECT max(date) FROM x),
        '6 day'::interval
     ) AS start
LEFT JOIN x ON (x.date >= start AND x.date <= start + '5 day'::interval)
GROUP BY 1, 2
ORDER BY 1
In x I emulate your table.
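An alternative sketch (assuming a real table t(cnt, date) shaped like the one in the question): bucket each row by integer division of its day offset from the first date, which avoids generating a series at all:

```sql
SELECT min(date)     AS period_start,  -- first date present in the bucket
       min(date) + 5 AS period_end,
       sum(cnt)      AS total
FROM (
    SELECT cnt, date,
           (date - min(date) OVER ()) / 6 AS bucket  -- integer days since the first date, in 6-day buckets
    FROM t
) sub
GROUP BY bucket
ORDER BY period_start;
```

Because the dates are continuous, min(date) within each bucket is its true start day.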