I need to pivot a table as shown below using the column "channel" and grouping based on units.
Actual table:
The result I need is shown below
I'm not an expert with pivoting and unpivoting concepts; I'm trying the query below to achieve the above result:
SELECT [service_point_ID]
,isnull([1],0) - isnull([2],0) as net_usage_value
,[units]
,[1]
,[2]
,[channel_ID]
,[date]
,[time]
,[is_estimate]
,[UTC_offset]
,[import_history_id]
FROM #temp1
AS SourceTable PIVOT(sum(usage_value) FOR channel IN([1],[2])) AS PivotTable
If I execute this query, I get the result below.
The same logic is achieved in R - reference link: Pivot using Multiple columns.
Here is the SQL fiddle for this one
CREATE TABLE #temp1
(
Service_point_ID varchar(10) NULL,
usage_value decimal(18,6) NULL,
units varchar(10) NULL,
[date] Date NULL,
[time] time NULL,
channel varchar(2) NULL,
[Channel_ID] varchar(2) NULL,
is_estimate varchar(2) NULL,
UTC_Offset varchar(20) NULL
)
INSERT INTO #temp1 VALUES ('123',1.000000,'kvarh','2017-01-01','0015','1','11','A','-500')
INSERT INTO #temp1 VALUES ('123',0.200000,'kvarh','2017-01-01','0015','2','11','A','-500')
INSERT INTO #temp1 VALUES ('123',0.200000,'kwh','2017-01-01','0015','1','11','A','-500')
INSERT INTO #temp1 VALUES ('123',0.400000,'kwh','2017-01-01','0015','2','11','A','-500')
Any help is much appreciated.
This is a solution using the PIVOT function:
declare @table table(
service_point_id int,
usage_value float,
units varchar(10),
[date] date,
[time] char(4),
channel int,
channel_id int,
is_estimate char(1),
utc_offset int,
import_history int,
datecreated datetime
)
--example data you provided
insert into @table values
(123, 1, 'kvarh', '2017-01-01', '0015', 1, 11, 'A', -500, 317, '2018-03-20 10:32:42.817'),
(123, 0.2, 'kwh', '2017-01-01', '0015', 1, 33, 'A', -500, 317, '2018-03-20 10:32:42.817'),
(123, 0.3, 'kvarh', '2017-01-01', '0015', 2, 11, 'A', -500, 317, '2018-03-20 10:32:42.817'),
(123, 0.4, 'kwh', '2017-01-01', '0015', 2, 33, 'A', -500, 317, '2018-03-20 10:32:42.817')
--pivot query that does the work; it's only a matter of aggregating one column, as mentioned already, so the pivot query is really simple and concise
select *, [1]-[2] [net_usage_value] from
(select * from @table) [t]
pivot (
max(usage_value)
for channel in ([1],[2])
) [a]
SELECT [service_point_ID]
,sum(isnull([1],0) - isnull([2],0)) as net_usage_value
,[units]
,sum(isnull([1],0))[1]
,sum(isnull([2],0))[2]
,[channel_ID]
,[date]
,[time]
,[is_estimate]
,[UTC_offset]
,[import_history_id]
FROM #temp1
AS SourceTable PIVOT(sum(usage_value) FOR channel IN([1],[2])) AS PivotTable
group by [service_point_ID], [units],[channel_ID]
,[date]
,[time]
,[is_estimate]
,[UTC_offset]
,[import_history_id]
An inner join will outperform the PIVOT syntax. SQL Server pivot vs. multiple join
select a.usage_value - b.usage_value as net_usage_value , other columns
from #temp1 a inner join #temp1 b on a.service_point_id = b.service_point_id
and a.units = b.units
and a.channel = 1
and b.channel = 2
This gets around the GROUP BY as well.
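For reference, a minimal complete version of that self-join against the question's #temp1 sample might look like the sketch below. Joining on [date] and [time] as well is an assumption (so each reading interval is paired with its own channel-2 row); drop those predicates if they do not apply to your data.
SELECT  a.Service_point_ID,
        a.usage_value - b.usage_value AS net_usage_value,
        a.units,
        a.usage_value AS channel_1_value,
        b.usage_value AS channel_2_value,
        a.Channel_ID,
        a.[date],
        a.[time],
        a.is_estimate,
        a.UTC_Offset
FROM #temp1 a
INNER JOIN #temp1 b
        ON  a.Service_point_ID = b.Service_point_ID
        AND a.units            = b.units
        AND a.[date]           = b.[date]
        AND a.[time]           = b.[time]
WHERE a.channel = '1'
  AND b.channel = '2';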
I have a fairly poorly designed DB that I'm trying to pull reports from. I'm attempting to sum the values in the GuestCount column; however, with the structure of the joins, I'm getting a cartesian situation that makes the sum inaccurate. I can't use SUM(DISTINCT) because I'm not trying to sum the distinct values of GuestCount, but rather the sum over distinct rows.
Here's the SQL to set up the Tables:
CREATE TABLE TesttblTransactions (
ID int,
[sysdate] date,
TxnHour tinyint,
Facility nvarchar(50),
TableID int,
[Check] int,
Item int,
Parent int
)
Create Table TesttblTablesGuests (
ID int,
Facility nvarchar(50),
TableID int,
GuestCount tinyint,
TableDate Date
)
Create Table TesttblFacilities (
ID int,
ClientKey nvarchar(50),
Brand nvarchar(50),
OrgFacilityID nvarchar(50),
UnitID smallint
)
INSERT INTO testtbltransactions (ID, [Sysdate], TxnHour, Facility, TableID, [Check], Item, Parent)
VALUES
(1, '20221201', 7, 'JOES', 1001, 12345, 8898989, 0),
(2, '20221201', 7, 'JOES', 1001, 12345, 8776767, 1),
(3, '20221201', 7, 'JOES', 1001, 12345, 856643, 0),
(4, '20221201', 7, 'THE DIVE', 1001, 67890, 662342, 0),
(5, '20221201', 7, 'THE DIVE', 1001, 67890, 244234, 0),
(6, '20221201', 7, 'JOES', 1002, 12344, 873323, 0);
INSERT INTO testtblTablesGuests (ID, Facility, TableID, GuestCount, TableDate)
VALUES
(1, 'JOES', 1001, 4, '20221201'),
(2, 'THE DIVE', 1001, 1, '20221201'),
(3, 'JOES', 1002, 1, '20221201');
INSERT INTO testtblFacilities (ID, ClientKey, Brand, OrgFacilityID, UnitID)
VALUES
(1, 'JOES', 'Joes Hospitality Group LLC', 'Joes Bar', 987),
(2, 'THE DIVE', 'The Dive Restaurant Group', 'The Dive', 565);
--Here's the SQL that I need for reporting but can't seem to get working:
Declare @StartDate as Date = '12-1-2022'
Declare @EndDate as Date = '12-1-2022'
--The query we want to work
SELECT
TesttblFacilities.ClientKey,
TesttblFacilities.Brand,
format(testtbltransactions.sysdate,'yyyy-MM-dd') AS [Date],
'H' AS Frequency,
Testtbltransactions.[TxnHour] AS [Hour],
TesttblFacilities.UnitID AS [UnitID],
'Dine In Guest Count' as Metric,
Sum(TesttblTablesGuests.GuestCount) AS [Value]
FROM ((Testtbltransactions
JOIN Testtbltablesguests ON (Testtbltablesguests.TableDate = Testtbltransactions.sysdate) AND (Testtbltransactions.FACILITY = Testtbltablesguests.facility) AND (Testtbltransactions.tableid = Testtbltablesguests.tableid))
JOIN TesttblFacilities ON Testtbltransactions.FACILITY = TesttblFacilities.ClientKey)
Where (((Testtbltransactions.parent)=0))
and Testtbltransactions.sysdate >= @StartDate
and Testtbltransactions.sysdate <= @EndDate
GROUP BY TesttblFacilities.ClientKey, Testtblfacilities.UnitID, TesttblFacilities.Brand, Testtbltransactions.facility, Testtbltransactions.sysdate, Testtbltransactions.TxnHour
I'm getting 9 and 2, instead of 5 and 1.
In the comments, NBK suggested doing a subquery, and it took me a while, but I think I found something that works:
Declare @StartDate as Date = '12-1-2022'
Declare @EndDate as Date = '12-1-2022'
Select
t1.txnhour,
t1.facility,
SUM(t1.guestcount) from
(
Select Distinct
TesttblTransactions.TableID as TableID,
testtbltransactions.[txnHour] as txnhour,
testtbltransactions.Facility as Facility,
testtbltablesguests.GuestCount as guestcount,
testtbltransactions.Parent as parent
From TesttblTransactions
Join TesttblTablesGuests on TesttblTablesGuests.TableID = TesttblTransactions.TableID and testtbltablesguests.Facility = TesttblTransactions.Facility
Where (((Testtbltransactions.parent)=0))
and Testtbltransactions.sysdate >= @StartDate
and Testtbltransactions.sysdate <= @EndDate
) T1
Group by t1.Facility, t1.txnhour
I'm going to continue to refine this, but I think I should be able to move forward with this.
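Another way to sidestep the fan-out, kept deliberately plain, is to collapse the transactions to one row per facility, table, and hour before joining to the guest counts, so the item-level rows can never multiply GuestCount. Below is only a sketch against the sample tables above, not a tested report; adjust the output columns to your real requirements.
DECLARE @StartDate date = '12-1-2022';
DECLARE @EndDate   date = '12-1-2022';

SELECT f.ClientKey,
       f.Brand,
       FORMAT(t.sysdate, 'yyyy-MM-dd') AS [Date],
       'H' AS Frequency,
       t.TxnHour AS [Hour],
       f.UnitID,
       'Dine In Guest Count' AS Metric,
       SUM(g.GuestCount) AS [Value]
FROM (
        -- one row per facility / day / hour / table, no matter how many items were rung in
        SELECT DISTINCT Facility, sysdate, TxnHour, TableID
        FROM TesttblTransactions
        WHERE Parent = 0
          AND sysdate >= @StartDate
          AND sysdate <= @EndDate
     ) t
JOIN TesttblTablesGuests g
  ON  g.TableDate = t.sysdate
  AND g.Facility  = t.Facility
  AND g.TableID   = t.TableID
JOIN TesttblFacilities f
  ON  f.ClientKey = t.Facility
GROUP BY f.ClientKey, f.Brand, t.sysdate, t.TxnHour, f.UnitID;
With the sample rows above this returns 5 for JOES and 1 for THE DIVE in hour 7.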
Looking for a non-fancy, easily debuggable solution suitable for a junior developer...
In SQL Server 2008 R2, I have to update data from the #data table into the #tests table in the desired format. I am not sure how I would achieve this result using a T-SQL query.
NOTE: the temp tables have only 3 columns per set for sample purposes, but the real table has more than 50 columns for each set.
Here is what my tables look like:
IF OBJECT_ID('tempdb..#tests') IS NOT NULL
DROP TABLE #tests
GO
CREATE TABLE #tests
(
id int,
FirstName varchar(100),
LastName varchar(100),
UniueNumber varchar(100)
)
IF OBJECT_ID('tempdb..#data') IS NOT NULL
DROP TABLE #data
GO
CREATE TABLE #data
(
id int,
FirstName1 varchar(100),
LastName1 varchar(100),
UniueNumber1 varchar(100),
FirstName2 varchar(100),
LastName2 varchar(100),
UniueNumber2 varchar(100),
FirstName3 varchar(100),
LastName3 varchar(100),
UniueNumber3 varchar(100),
FirstName4 varchar(100),
LastName4 varchar(100),
UniueNumber4 varchar(100),
FirstName5 varchar(100),
LastName5 varchar(100),
UniueNumber5 varchar(100),
FirstName6 varchar(100),
LastName6 varchar(100),
UniueNumber6 varchar(100),
FirstName7 varchar(100),
LastName7 varchar(100),
UniueNumber7 varchar(100)
)
INSERT INTO #data
VALUES (111, 'Tom', 'M', '12345', 'Sam', 'M', '65432', 'Chris', 'PATT', '54656', 'Sean', 'Meyer', '865554', 'Mike', 'Max', '999999', 'Tee', 'itc', '656546444', 'Mickey', 'Mul', '65443231')
INSERT INTO #data
VALUES (222, 'Kurr', 'P', '22222', 'Yammy', 'G', '33333', 'Saras', 'pi', '55555', 'Man', 'Shey', '666666', 'Max', 'Dopit', '66666678', '', '', '', '', '', '')
INSERT INTO #data
VALUES (333, 'Mia', 'K', '625344', 'Tee', 'TE', '777766', 'david', 'mot', '4444444', 'Jeff', 'August', '5666666', 'Mylee', 'Max', '0000000', '', '', '', 'Amy', 'Marr', '55543444')
SELECT *
FROM #data
I want to insert/update data in the #tests table from the #data table.
Insert data into the #tests table if the id and UniueNumber combination from #data does not exist there. If the combination exists, then update the #tests table from #data.
This is the desired output in the #tests table:
Here is an option that will dynamically UNPIVOT your data without using Dynamic SQL
To be clear: UNPIVOT would be more performant, but you don't have to enumerate the 50 columns.
This assumes your columns end with a numeric suffix, i.e. FirstName##.
Example
Select ID
,FirstName
,LastName
,UniueNumber -- You could use SSN = UniueNumber
From (
SELECT A.ID
,Grp
,Col = replace([Key],Grp,'')
,Value
FROM #data A
Cross Apply (
Select [Key]
,Value
,Grp = substring([Key],patindex('%[0-9]%',[Key]),25)
From OpenJson( (Select A.* For JSON Path,Without_Array_Wrapper ) )
) B
) src
Pivot ( max(Value) for Col in ([FirstName],[LastName],[UniueNumber]) ) pvt
Order By ID,Grp
Results
UPDATE: XML version
Select ID
,FirstName
,LastName
,UniueNumber
From (
SELECT A.ID
,Grp = substring(Item,patindex('%[0-9]%',Item),50)
,Col = replace(Item,substring(Item,patindex('%[0-9]%',Item),50),'')
,Value
FROM #data A
Cross Apply ( values (convert(xml,(Select A.* for XML RAW)))) B(XData)
Cross Apply (
Select Item = xAttr.value('local-name(.)', 'varchar(100)')
,Value = xAttr.value('.','varchar(max)')
From B.XData.nodes('//#*') xNode(xAttr)
) C
Where Item not in ('ID')
) src
Pivot ( max(Value) for Col in (FirstName,LastName,UniueNumber) ) pvt
Order By ID,Grp
One way is to query each group of columns separately and UNION the results
SELECT
id,
FirstName1 as FirstName,
LastName1 as LastName,
UniueNumber1 AS SSN
FROM #data
UNION
SELECT
id,
FirstName2 as FirstName,
LastName2 as LastName,
UniueNumber2 AS SSN
FROM #data
UNION
...
There's not a way to cleanly "loop through" the 7 groups of columns - you'll spend more time building a loop to create the query dynamically than just copying and pasting the query 6 times and changing the number.
Of course, it's best to avoid the type of structure you have in #data now if at all possible.
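If enumerating the columns once is acceptable, a middle ground is to unpivot all seven groups in a single scan with CROSS APPLY (VALUES ...) and let a MERGE handle the insert-or-update against #tests. This is only a sketch built on the #data/#tests definitions above, and it assumes id plus UniueNumber is the matching key.
-- Unpivot the seven column groups in one pass over #data and upsert into #tests
-- (sketch; CROSS APPLY (VALUES ...) and MERGE both work on SQL Server 2008 R2).
;WITH unpivoted AS
(
    SELECT d.id, v.FirstName, v.LastName, v.UniueNumber
    FROM #data d
    CROSS APPLY (VALUES
        (d.FirstName1, d.LastName1, d.UniueNumber1),
        (d.FirstName2, d.LastName2, d.UniueNumber2),
        (d.FirstName3, d.LastName3, d.UniueNumber3),
        (d.FirstName4, d.LastName4, d.UniueNumber4),
        (d.FirstName5, d.LastName5, d.UniueNumber5),
        (d.FirstName6, d.LastName6, d.UniueNumber6),
        (d.FirstName7, d.LastName7, d.UniueNumber7)
    ) v (FirstName, LastName, UniueNumber)
    WHERE NULLIF(v.UniueNumber, '') IS NOT NULL   -- skip the empty slots
)
MERGE #tests AS t
USING unpivoted AS s
      ON t.id = s.id AND t.UniueNumber = s.UniueNumber
WHEN MATCHED THEN
    UPDATE SET t.FirstName = s.FirstName,
               t.LastName  = s.LastName
WHEN NOT MATCHED THEN
    INSERT (id, FirstName, LastName, UniueNumber)
    VALUES (s.id, s.FirstName, s.LastName, s.UniueNumber);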
I have created a CTE (Common Table Expression) as follows:
DECLARE @N VARCHAR(100)
WITH CAT_NAM AS (
SELECT ID, NAME
FROM TABLE1
WHERE YEAR(DATE) = YEAR(GETDATE())
)
SELECT @N = STUFF((
SELECT ','''+ NAME+''''
FROM CAT_NAM
WHERE ID IN (20,23,25,30,37)
FOR XML PATH ('')
),1,1,'')
The result of the above CTE is 'A','B','C','D','F'.
Now I need to check 4 different columns, CAT_NAM_1, CAT_NAM_2, CAT_NAM_3, CAT_NAM_4, against the result of the CTE and form them into one column, like so:
Select
case when CAT_NAM_1 in (@N) then CAT_NAM_1
when CAT_NAM_2 in (@N) then CAT_NAM_2
when CAT_NAM_3 in (@N) then CAT_NAM_3
when CAT_NAM_4 in (@N) then CAT_NAM_4
end as CAT
from table2
When I try to do the above, I get an error; please help me fix it.
If my approach is wrong, please help me with the right one.
I am not exactly sure what you are trying to do, but if I understand correctly, the following script shows one possible technique. I have created some table variables to mimic the data you presented and then wrote a SELECT statement to do what I think you asked (but I am not sure).
DECLARE @TABLE1 AS TABLE (
ID INT NOT NULL,
[NAME] VARCHAR(10) NOT NULL,
[DATE] DATE NOT NULL
);
INSERT INTO @TABLE1(ID,[NAME],[DATE])
VALUES (20, 'A', '2021-01-01'), (23, 'B', '2021-02-01'),
(25, 'C', '2021-03-01'),(30, 'D', '2021-04-01'),
(37, 'E', '2021-05-01'),(40, 'F', '2021-06-01');
DECLARE @TABLE2 AS TABLE (
ID INT NOT NULL,
CAT_NAM_1 VARCHAR(10) NULL,
CAT_NAM_2 VARCHAR(10) NULL,
CAT_NAM_3 VARCHAR(10) NULL,
CAT_NAM_4 VARCHAR(10) NULL
);
INSERT INTO @TABLE2(ID,CAT_NAM_1,CAT_NAM_2,CAT_NAM_3,CAT_NAM_4)
VALUES (1,'A',NULL,NULL,NULL),(2,NULL,'B',NULL,NULL);
;WITH CAT_NAM AS (
SELECT ID, [NAME]
FROM @TABLE1
WHERE YEAR([DATE]) = YEAR(GETDATE())
AND ID IN (20,23,25,30,37,40)
)
SELECT CASE
WHEN EXISTS(SELECT 1 FROM CAT_NAM WHERE CAT_NAM.[NAME] = CAT_NAM_1) THEN CAT_NAM_1
WHEN EXISTS(SELECT 1 FROM CAT_NAM WHERE CAT_NAM.[NAME] = CAT_NAM_2) THEN CAT_NAM_2
WHEN EXISTS(SELECT 1 FROM CAT_NAM WHERE CAT_NAM.[NAME] = CAT_NAM_3) THEN CAT_NAM_3
WHEN EXISTS(SELECT 1 FROM CAT_NAM WHERE CAT_NAM.[NAME] = CAT_NAM_4) THEN CAT_NAM_4
ELSE '?' -- not sure what you want if there is no match
END AS CAT
FROM @TABLE2;
You can do a bit of set-based logic for this
SELECT
ct.NAME
FROM table2 t2
CROSS APPLY (
SELECT v.NAME
FROM (VALUES
(t2.CAT_NAM_1),
(t2.CAT_NAM_2),
(t2.CAT_NAM_3),
(t2.CAT_NAM_4)
) v(NAME)
INTERSECT
SELECT ct.NAME
FROM CAT_NAM ct
WHERE ct.ID IN (20,23,25,30,37)
) ct;
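To try that against the sample data from the previous answer, the CTE and the query can be combined as in the sketch below. The YEAR filter from the question is left out so the 2021 sample rows still match, and note that rows with no matching category simply disappear, because CROSS APPLY behaves like an inner join; if more than one of the four columns matches, you get one row per match, unlike the CASE version, which returns only the first.
-- Usage sketch against @TABLE1/@TABLE2 from the previous answer.
;WITH CAT_NAM AS (
    SELECT ID, [NAME]
    FROM @TABLE1
    WHERE ID IN (20, 23, 25, 30, 37)
)
SELECT t2.ID, ct.[NAME] AS CAT
FROM @TABLE2 t2
CROSS APPLY (
    SELECT v.[NAME]
    FROM (VALUES (t2.CAT_NAM_1), (t2.CAT_NAM_2),
                 (t2.CAT_NAM_3), (t2.CAT_NAM_4)) v([NAME])
    INTERSECT
    SELECT c.[NAME] FROM CAT_NAM c
) ct;
-- With the sample rows this returns CAT = 'A' for ID 1 and CAT = 'B' for ID 2.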
This is a new version of my question, since the original seemed to be confusing. Sorry. I figured it out; see the code if you're interested. The notes on the solution are in there. Thanks for your help!
I got it to work this far, but the OriginationL column (L is for Little and B is for Big) is not correct. It's taking the correct date but not the correct Origination.
CREATE TABLE MyTable
(
LoadTagID INT,
EnteredDateTime datetime,
JobNumber VARCHAR(50),
Origination VARCHAR(50)
)
INSERT INTO MyTable VALUES
(1, '2015-02-09 00:00:00.00', 11111, 'Here')
,(2, '2015-02-09 00:00:00.00', 22222, 'There')
,(3, '2016-03-09 00:00:00.00', 11111, 'Outside')
,(4, '2016-08-09 00:00:00.00', 12578, 'Anywhere')
,(252, '2017-06-29 00:00:00.00', 12345, 'Here')
,(253, '2017-08-01 00:00:00.00', 99999, 'There')
,(254, '2017-08-04 00:00:00.00', 12345, 'Outside')
,(255, '2017-08-09 00:00:00.00', 12345, 'Anywhere')
,(256, '2017-08-10 00:00:00.00', 99999, 'Anywhere')
,(257, '2017-08-10 00:00:00.00', 123456, 'Anywhere')
,(258, '2017-08-11 00:00:00.00', 123456, 'Over Yonder')
,(259, '2017-08-13 00:00:00.00', 99999, 'Under The Bridge')
--Select * From MyTable
CREATE TABLE #LTTB1 --MAX
(
LoadTagID varchar(50),
JobNumber varchar(50),
EnteredDateTime varchar(50),
Origination varchar(50)
)
CREATE TABLE #LTTB2 --MIN
(
LoadTagID varchar(50),
JobNumber varchar(50),
EnteredDateTime varchar(50),
Origination varchar(50)
)
CREATE TABLE #LTTB3
(
LoadTagIDL varchar(50),
JobNumberL varchar(50),
EnteredDateTimeL varchar(50),
OriginationL varchar(50),
LoadTagID varchar(50),
JobNumber varchar(50),
EnteredDateTime varchar(50),
Origination varchar(50)
)
INSERT INTO #LTTB1
SELECT
MAX(LoadTagID) AS LoadTagID,
JobNumber,
MAX(EnteredDateTime) AS EnteredDateTime,
MAX(Origination) AS Origination
FROM MyTable
WHERE CONVERT (Date, EnteredDateTime) >= CONVERT (Date, GETDATE()-10) --Gets the last 10 days.
GROUP BY JobNumber ORDER BY JobNumber
INSERT INTO #LTTB2
SELECT MIN(LoadTagID) AS LoadTagIDL,
JobNumber AS JobNumberL,
MIN(EnteredDateTime) AS EnteredDateTimeL,
MAX(Origination) AS OriginationL --MAX! This needed to be max!! Why?
FROM MyTable
Where CONVERT (Date, EnteredDateTime) >= CONVERT (Date, GETDATE()-60) --Goes further back in case one is a long one.
GROUP BY JobNumber ORDER BY JobNumber
INSERT INTO #LTTB3
SELECT L.LoadTagID AS LoadTagIDL
, L.JobNumber AS JobNumberL
, L.EnteredDateTime AS EnteredDateTimeL
, L.Origination AS OriginationL
, B.LoadTagID, B.JobNumber, B.EnteredDateTime, B.Origination
FROM #LTTB1 B --MAX
INNER JOIN #LTTB2 L ON B.JobNumber = L.JobNumber
Select * From #LTTB3
So for JobNumber 12345, 6/29 is correct, but it should be "Here" and not "Anywhere".
For 99999 everything is correct, but for 8/1 it should be "There" and not "Anywhere". That seems to be the middle value in the set. I'm so confused.
Does anyone know why it's grabbing that value? Thank you.
SELECT *
FROM mytable
WHERE LoadTagID=(SELECT MIN(LoadTagID)
FROM mytable)
OR LoadTagID=(SELECT MAX(LoadTagID)
FROM mytable);
Here is a query according to the output you want:
CREATE TABLE MyTable
(
LoadTagID INT,
Date Date,
Job INT,
Origination VARCHAR(20)
)
INSERT INTO MyTable
VALUES(
252, '6/29/17', 12345, 'Here')
,(253, '8/1/17', 99999, 'There')
,(254, '8/4/17', 12345, 'Outside')
,(255, '8/8/17', 12345, 'Anywhere')
--SELECT * FROM MyTable
SELECT * INTO #Table1
FROM MyTable
WHERE LoadTagID IN (SELECT MIN(LoadTagID)
FROM MyTable)
SELECT * INTO #Table2
FROM MyTable
WHERE LoadTagID IN (SELECT MAX(LoadTagID)
FROM MyTable)
SELECT * INTO #T3
FROM ( SELECT * FROM #Table1 T1
UNION ALL
SELECT * FROM #Table2 T2
) A
SELECT #T3.Date,
#T3.Job,
#T3.LoadTagID,
#T3.Origination
FROM #T3
LEFT JOIN #Table1 T1
ON T1.Job=#T3.Job
WHERE T1.Job IS NOT NULL
INSERT INTO #LTTB1
SELECT
MAX(LoadTagID) AS LoadTagID,
JobNumber,
MAX(EnteredDateTime) AS EnteredDateTime,
MAX(Origination) AS Origination
FROM MyTable
WHERE CONVERT (Date, EnteredDateTime) >= CONVERT (Date, GETDATE()-10) --Gets the last 10 days.
GROUP BY JobNumber ORDER BY JobNumber
INSERT INTO #LTTB2
select LoadTagID
,JobNumber
,EnteredDateTime
,Origination from (
select *, ROW_NUMBER() Over(partition by jobnumber order by EnteredDateTime) l
from MyTable
Where CONVERT (Date, EnteredDateTime) >= CONVERT (Date, GETDATE()-60)
)lk
where lk.l=1
INSERT INTO #LTTB3
SELECT L.LoadTagID AS LoadTagIDL
, L.JobNumber AS JobNumberL
, L.EnteredDateTime AS EnteredDateTimeL
, L.Origination AS OriginationL
, B.LoadTagID, B.JobNumber, B.EnteredDateTime, B.Origination
FROM #LTTB1 B --MAX
INNER JOIN #LTTB2 L ON B.JobNumber = L.JobNumber
Select * From MyTable
Select * From #LTTB3
--I hope your problem has been solved now.
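For what it's worth, the reason the original query pulled the "wrong" Origination is that each aggregate is computed independently within the group: MIN/MAX(Origination) returns the alphabetically first or last string, which has nothing to do with the row holding the earliest or latest EnteredDateTime. The ROW_NUMBER approach fixes that because it keeps whole rows. The same idea can be applied to the #LTTB1 (latest-row) side too; here is a sketch, meant to run in place of the earlier INSERT INTO #LTTB1:
-- Sketch: pick the single latest row per JobNumber instead of aggregating
-- each column separately, so Origination stays consistent with the date.
INSERT INTO #LTTB1
SELECT LoadTagID,
       JobNumber,
       EnteredDateTime,
       Origination
FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY JobNumber
                              ORDER BY EnteredDateTime DESC) AS rn
    FROM MyTable
    WHERE CONVERT(date, EnteredDateTime) >= CONVERT(date, GETDATE() - 10)
) x
WHERE x.rn = 1;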
I have a table Product with 8 columns, and I have to count the distinct values in them. To achieve this, I need to create a stored procedure.
So, how do I create it?
This is the sample I have written for two columns:
create or replace
PROCEDURE GET_COUNT
( count_prod OUT INT,
count_partner OUT INT) IS
BEGIN
SELECT
count(*),
count(distinct(BUSINESS))
INTO
count_prod,
count_partner from Product where ACTIVE Like 'Y';
END GET_COUNT;
Is this right, or do I have to do it using a cursor?
Try this:
DECLARE @DataSource TABLE
(
[Col01] CHAR(1)
,[Col02] TINYINT
,[Col03] CHAR(3)
);
INSERT INTO @DataSource ([Col01], [Col02], [Col03])
VALUES ('A', 1, 'AAA')
,('A', 1, 'AAA')
,('A', 2, 'AAA')
,('B', 1, 'AAA')
,('B', 3, 'AAB')
,('A', 3, 'ACA');
SELECT COUNT(DISTINCT [Col01]) AS [Col01Count]
,COUNT(DISTINCT [Col02]) AS [Col02Count]
,COUNT(DISTINCT [Col03]) AS [Col03Count]
FROM @DataSource;
The COUNT aggregate (like most aggregate functions) supports a DISTINCT clause, which should solve your issue.
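If the counts need to come back from a stored procedure, as in the question, the same COUNT(DISTINCT ...) list can be assigned straight to OUTPUT parameters. The question's own syntax looks like Oracle PL/SQL, so treat the following only as a SQL Server flavoured sketch; the Product, BUSINESS, and ACTIVE names come from the question, and the remaining six columns would follow the same pattern.
-- Sketch: distinct counts returned through OUTPUT parameters.
CREATE PROCEDURE GET_COUNT
    @count_prod    INT OUTPUT,
    @count_partner INT OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    SELECT @count_prod    = COUNT(*),
           @count_partner = COUNT(DISTINCT BUSINESS)
    FROM Product
    WHERE ACTIVE = 'Y';
END;
GO

-- Example call:
DECLARE @prod INT, @partner INT;
EXEC GET_COUNT @count_prod = @prod OUTPUT, @count_partner = @partner OUTPUT;
SELECT @prod AS ProductCount, @partner AS DistinctBusinessCount;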