Pivot table multiple column values in a single column - SQL

I've got a table like this:
id PID VID Type PriceA PriceB
41 297 2 128 70.000 80.000
42 297 3 256 90.000 100.000
43 297 4 300 110.000 120.000
44 297 5 400 130.000 140.000
45 294 2 128 10.000 50.000
46 294 3 256 20.000 60.000
47 294 4 300 30.000 70.000
48 294 5 400 40.000 80.000
49 294 6 450 50.000 85.000
50 294 7 470 45.000 75.000
What I want to do is run a query with a PID parameter and get a result like this:
PID | 128 | 256 | 300 | 400
297 | 70.000 / 80.000 | 90.000 / 100.000 | 110.000 / 120.000 | 130.000 / 140.000
I have tried several different options (PIVOT, subqueries, etc.), but I could not make it work.

This is a full working example:
CREATE TABLE DataSource
(
[ID] TINYINT
,[PID] SMALLINT
,[VID] TINYINT
,[Type] SMALLINT
,[PriceA] VARCHAR(32)
,[PriceB] VARCHAR(32)
)
INSERT INTO DataSource ([ID],[PID],[VID],[Type],[PriceA],[PriceB])
VALUES (41,297,2,128,70.000,80.000)
,(42,297,3,256,90.000,100.000)
,(43,297,4,300,110.000,120.000)
,(44,297,5,400,130.000,140.000)
,(45,294,2,128,10.000,50.000)
,(46,294,3,256,20.000,60.000)
,(47,294,4,300,30.000,70.000)
,(48,294,5,400,40.000,80.000)
,(49,294,6,450,50.000,85.000)
,(50,294,7,470,45.000,75.000)
SELECT *
FROM
(
SELECT [PID]
,[Type]
,[PriceA] + ' / ' + [PriceB] AS [Price]
FROM DataSource
) AS DataSource
PIVOT
(
MAX([Price]) FOR [Type] IN ([128],[256],[300],[400],[450], [470])
) PVT
The output is like this (row order may vary):
PID | 128 | 256 | 300 | 400 | 450 | 470
294 | 10.000 / 50.000 | 20.000 / 60.000 | 30.000 / 70.000 | 40.000 / 80.000 | 50.000 / 85.000 | 45.000 / 75.000
297 | 70.000 / 80.000 | 90.000 / 100.000 | 110.000 / 120.000 | 130.000 / 140.000 | NULL | NULL
The idea is to build the column [PriceA] + ' / ' + [PriceB] and then to make the pivot.
Note that I have hardcoded the possible [Type] values. If you need to make this dynamic, you can build the PIVOT statement as a SQL string and then execute it with the sp_executesql procedure, as sketched below.
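A minimal sketch of that dynamic variant (it builds the IN (...) list from the distinct [Type] values; the variable names here are illustrative):
DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX)
-- build the column list, e.g. [128],[256],[300],[400],[450],[470]
SELECT @cols = STUFF((
    SELECT DISTINCT ',' + QUOTENAME(CAST([Type] AS NVARCHAR(10)))
    FROM DataSource
    FOR XML PATH('')
), 1, 1, '')
SET @sql = N'
SELECT *
FROM
(
    SELECT [PID], [Type], [PriceA] + '' / '' + [PriceB] AS [Price]
    FROM DataSource
) AS src
PIVOT (MAX([Price]) FOR [Type] IN (' + @cols + ')) AS PVT'
EXEC sp_executesql @sql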

Related

Why can't I convert this varchar to numeric?

I have a table with values pasted in, but they initially start as varchar. I need to convert them to numeric, so I did
convert(decimal(10,3), cw.col7)
But this is returning Error 8114: Error converting data type varchar to numeric. The reason I'm asking is that it does not give this error for a similar data set. Are there sometimes strange anomalies when using convert() or decimal()? Or should I maybe convert to float first?
The data:
col7
490.440
2
934
28,108.000
33,226.000
17,347.000
1,561.000
57
0
421.350
64
1,100.000
0
0
3,584
202.432
0
3,280
672.109
1,150
0
104
411.032
18,016
40
510,648
443,934.000
18,705
322,254
301
9,217
18,075
16,100
395
706,269
418,313
7,170
40,450
2,423
1,300
2,311
94,000.000
17,463
0
228
884
557
153
13
0
0
212.878
45,000.000
152
24,400
3,675
11,750
987
23,725
268,071
4,520.835
286,000
112,912.480
9,000
1,316
1,020
215,244
123,967
6,911
1,088.750
138,644
16,924
7,848
33,017
464,463
618
72,391
9,367
507,635.950
588,087
92,890
17,266
0
1,414,547
89,080
664
101,635
1,552,992
175
356
7,000
0
0
445
507,381
24,016
469,983
0
0
147,737
3,521
88,210
18,433.000
21,775
3,607
34,774
7,642
42,680
1,255
10,880
350,409.800
19,394.520
2,476,257.400
778.480
1,670.440
9,710
24,931.600
3,381.800
2,900
18,000
4,121
3,750
62,200
952
29.935
17.795
11.940
902
36,303
1,240
1,020
617
817
620
92,648
70,925
82,924
19,162.200
1,213.720
2,871
3,180
91,600
645
607
155,100
6
840
1,395
112
6,721
3,850
40
4,032
5,912
1,040
872
56
1,856
179
Try_Convert(money, ...) will handle the comma, while Try_Convert(decimal(10, 3), ...) will return NULL for those values.
Example
Select col7
,AsMoney = Try_Convert(money,col7)
,AsDecimal = Try_Convert(decimal(10, 3),col7)
from YourTable
Returns the converted values; AsDecimal comes back NULL for every value that contains a comma.
Try using CAST and remove the comma:
SELECT CAST(REPLACE(cw.col7, ',', '') AS DECIMAL(10,3))
FROM your_table cw
and, as suggested by John Cappelletti, you need more than 3 decimals, so you should use:
SELECT CAST(REPLACE(cw.col7, ',', '') AS DECIMAL(12,4))
FROM your_table cw
Run this query:
select cw.col7
from cw
where try_convert(decimal(10, 3), cw.col7) is null;
This will show you the values that do not convert successfully. (If cw.col7 could be NULL then add and cw.col7 is not null to make the output more meaningful.)
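If the commas turn out to be the only problem, the two earlier suggestions can be combined into one defensive expression (a sketch, assuming col7 only ever contains digits, commas, and a decimal point):
SELECT cw.col7,
       TRY_CONVERT(decimal(12, 4), REPLACE(cw.col7, ',', '')) AS AsDecimal
FROM cw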

How to make a start writing this CTE in SQL Server?

I have a table which contains data of different elements in a database. This is a count of all elements in the database (which is restored daily, so no DDL/DML triggers possible)
Table looks like this:
LogDate SYSTEM_TABLE VIEW SQL_TABLE_VALUED_FUNCTION DEFAULT_CONSTRAINT SQL_STORED_PROCEDURE RULE FOREIGN_KEY_CONSTRAINT SERVICE_QUEUE SQL_INLINE_TABLE_VALUED_FUNCTION CHECK_CONSTRAINT USER_TABLE PRIMARY_KEY_CONSTRAINT INTERNAL_TABLE TYPE_TABLE SQL_TRIGGER SQL_SCALAR_FUNCTION UNIQUE_CONSTRAINT
20150204 45 253 60 1401 5259 2 784 3 4 95 2141 1604 26 4 16 195 33
20150203 45 253 60 1401 5259 2 784 3 4 95 2141 1604 16 4 16 195 33
20150202 45 253 60 1401 5259 2 784 3 4 95 2141 1604 21 4 16 195 33
20150201 45 253 60 1401 5259 2 784 3 4 95 2141 1604 25 4 16 195 33
20150131 45 253 60 1401 5259 2 784 3 4 95 2141 1604 21 4 16 195 33
What I would like to do is compare the most recent log date (20150204) with the previous logdate (20150203) and see if there are any changes between the elements. This will then fire off an email to the relevant developer for them to investigate (but this section isn't important at the moment, just highlighting the changes between the logdates for now).
ETA:
It's part of a much larger query that uses temp tables etc:
IF OBJECT_ID('tempdb..#DBTotsTEMP') is not null drop table #DBTotsTEMP
--declare variables
DECLARE #DynamicPivotQuery as NVARCHAR(MAX)
DECLARE #ColumnName as NVARCHAR(MAX)
SELECT RIGHT(date, 4) + RIGHT(LEFT(date, 5), 2) + LEFT(date, 2) AS LogDate, [count] as CNT, type_desc
INTO [#DBTotsTEMP]
FROM BI_STG.SSRS.MH_DB_Totals
ORDER BY LogDate DESC
SELECT #ColumnName= ISNULL(#ColumnName + ',', '') + QUOTENAME([type_desc])
FROM (
SELECT DISTINCT [type_desc] FROM #DBTotsTEMP) as TypeDescs
SET #DynamicPivotQuery =
N'SELECT LogDate, ' + #ColumnName + '
INTO #MH_DB_Totals
FROM #DBTotsTEMP
PIVOT (SUM([CNT])
FOR [type_desc] in (' + #ColumnName + ')) as PVTTable
ORDER BY LogDate desc
select * from #MH_DB_Totals'
EXEC sp_executesql #DynamicPivotQuery
and I've got no idea where the CTE part should go, or how to highlight changes in the figures!
Sorted it: I used SELECT INTO with the IDENTITY function to add row numbers, then joined the temp table back to itself on rowid = rowid - 1.
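For reference, a rough sketch of that fix (the table name #PivotedTotals and the compared column are illustrative, not from the original script):
-- number the pivoted rows, newest LogDate first
SELECT IDENTITY(INT, 1, 1) AS rowid, *
INTO #Numbered
FROM #PivotedTotals   -- assumed to hold the result of the dynamic PIVOT above
ORDER BY LogDate DESC
-- join each day to the previous day and compare the counts column by column
SELECT curr.LogDate, prev.LogDate AS PrevLogDate,
       curr.[USER_TABLE] - prev.[USER_TABLE] AS UserTableChange   -- repeat per column of interest
FROM #Numbered curr
JOIN #Numbered prev ON curr.rowid = prev.rowid - 1
A ROW_NUMBER() OVER (ORDER BY LogDate DESC) computed in the SELECT is a more deterministic alternative to IDENTITY here, since SELECT ... INTO does not guarantee that identity values follow the ORDER BY.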

Getting a larger result when converting int to float, doing division, and multiplying with int?

I have three columns, as shown below in TableA:
Student Day Shifts
129 11 4
91 9 6
166 19 8
164 26 12
146 11 6
147 16 8
201 8 3
164 4 2
186 8 6
165 7 4
171 10 4
104 5 4
1834 134 67
I am writing a TVF to calculate the value of Points generated for Students, as below:
ALTER function Statagic(
#StartDate date
)
RETURNS TABLE
AS
RETURN
(
with src as
( select
Division=case when Shifts=0 then 0 else cast(Day as float)/cast(Shifts as float) end,*
from TableA
)
,tgt as
(select *,Points=Student*Division from src
)
select * from tgt)
When I execute the above TVF (select * from Statagic('3/16/2014')), my output is below:
129 11 4 2.75 354.75
91 9 6 1.5 136.5
166 19 8 2.375 394.25
164 26 12 2.16666666666667 355.333333333333
146 11 6 1.83333333333333 267.666666666667
147 16 8 2 294
201 8 3 2.66666666666667 536
164 4 2 2 328
186 8 6 1.33333333333333 248
165 7 4 1.75 288.75
171 10 4 2.5 427.5
104 5 4 1.25 130
1834 134 67 2 3668
Note:
The last row of the table is the total of the other rows for all three columns. But when I add up the last two columns of the TVF output across the detail rows, the result is not the same as the last output row; I am getting more.
Please help; I am struggling to fix this and have tried everything I can think of.
select 354.75+136.5+394.25+355.333333333333+267.666666666667+294+536+328+248+288.75+427.5+130  -- = 3760.750000000000
3668 is not equal to 3760.75 (I am getting about 100 more).
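This is arithmetic rather than a bug: each per-row Points value uses that row's own Day/Shifts ratio, while the total row uses the overall ratio 134 / 67 = 2, and a sum of individually weighted ratios only equals the grand-total calculation when every row has the same ratio. A small check that shows both figures side by side (column names as in the question; the WHERE clause that excludes the pre-computed total row is an assumption):
SELECT SUM(Student * CAST([Day] AS float) / Shifts)            AS SumOfDetailPoints,   -- 3760.75
       SUM(Student) * SUM(CAST([Day] AS float)) / SUM(Shifts)  AS TotalRowCalculation  -- 1834 * 134 / 67 = 3668
FROM TableA
WHERE Shifts <> 0
  AND Student <> 1834   -- assumption: this is how the total row is identified and excluded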

HSQLDB query to replace a null value with a value derived from another record

This is a small excerpt from a much larger table, call it LOG:
RN EID FID FRID TID TFAID
1 364 509 7045 null 7452
2 364 509 7045 7452 null
3 364 509 7045 7457 null
4 375 512 4525 5442 5241
5 375 513 4525 5863 5241
6 375 515 4525 2542 5241
7 576 621 5632 null 5452
8 576 621 5632 2595 null
9 672 622 5632 null 5966
10 672 622 5632 2635 null
I would like a query that will replace the null in the 'TFAID' column with the 'TFAID' value from another row that has the same 'FID'.
Desired output would therefore be:
RN EID FID FRID TID TFAID
1 364 509 7045 null 7452
2 364 509 7045 7452 7452
3 364 509 7045 7457 7452
4 375 512 4525 5442 5241
5 375 513 4525 5863 5241
6 375 515 4525 2542 5241
7 576 621 5632 null 5452
8 576 621 5632 2595 5452
9 672 622 5632 null 5966
10 672 622 5632 2635 5966
I know that something like
SELECT RN,
EID,
FID,
FRID,
TID,
COALESCE(TFAID, {insert clever code here}) AS TFAID
FROM LOG
is what I need, but I can't for the life of me come up with the clever bit of SQL that will fill in the proper TFAID.
HSQLDB supports SQL features that can be used as alternatives. These features are not supported by some other databases.
CREATE TABLE LOG (RN INT, EID INT, FID INT, FRID INT, TID INT, TFAID INT);
-- using LATERAL
SELECT l.RN, l.EID, l.FID, l.FRID, l.TID,
COALESCE(l.TFAID, f.TFAID) AS TFAID
FROM LOG l , LATERAL (SELECT MAX(TFAID) AS TFAID FROM LOG f WHERE f.FID = l.FID) f
-- using scalar subquery
SELECT l.RN, l.EID, l.FID, l.FRID, l.TID,
COALESCE(l.TFAID, (SELECT MAX(TFAID) AS TFAID FROM LOG f WHERE f.FID = l.FID)) AS TFAID
FROM LOG l
Here is one approach. This aggregates the log to get the value and then joins the result in:
SELECT l.RN, l.EID, l.FID, l.FRID, l.TID,
COALESCE(l.TFAID, f.TFAID) AS TFAID
FROM LOG l join
(select fid, max(tfaid) as tfaid
from log
group by fid
) f
on l.fid = f.fid;
There may be other approaches that are more efficient. However, HSQL doesn't implement all SQL features.
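If your HSQLDB version supports window functions (an assumption worth checking for your release), the same fill-in can also be written without a join or correlated subquery:
SELECT RN, EID, FID, FRID, TID,
       COALESCE(TFAID, MAX(TFAID) OVER (PARTITION BY FID)) AS TFAID
FROM LOG;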

SQL Pivot Table isn't working

SQL 2005
I have a temp table:
Year PercentMale PercentFemale PercentHmlss PercentEmployed TotalSrvd
2008 100 0 0 100 1
2009 55 40 0 80 20
2010 64 35 0 67 162
2011 69 27 0 34 285
2012 56 43 10 1 58
and I want to create a query to display the data like this:
2008 2009 2010 2011 2012
PercentMale 100 55 64 69 56
PercentFemale - 40 35 27 43
PercentHmlss - - - - 10
PercentEmployed 100 80 67 34 1
TotalSrvd 1 20 162 285 58
Can I use a pivot table to accomplish this? If so, how? I've tried using a pivot but have found no success.
select PercentHmlss,PercentMale,Percentfemale,
PercentEmployed,[2008],[2009],[2010],[2011],[2012] from
(select PercentHmlss,PercentMale, Percentfemale, PercentEmployed,
TotalSrvd,year from #TempTable)as T
pivot (sum (TotalSrvd) for year
in ([2008],[2009],[2010],[2011],[2012])) as pvt
This is the result:
PercentHmlss PercentMale Percentfemale PercentEmployed [2008] [2009] [2010] [2011] [2012]
0 55 40 80 NULL 20 NULL NULL NULL
0 64 35 67 NULL NULL 162 NULL NULL
0 69 27 34 NULL NULL NULL 285 NULL
0 100 0 100 1 NULL NULL NULL NULL
10 56 43 1 NULL NULL NULL NULL 58
Thanks.
For this to work, you will want to perform an UNPIVOT and then a PIVOT:
SELECT *
from
(
select year, quantity, type
from
(
select year, percentmale, percentfemale, percenthmlss, percentemployed, totalsrvd
from t
) x
UNPIVOT
(
quantity for type
in
([percentmale]
, [percentfemale]
, [percenthmlss]
, [percentemployed]
, [totalsrvd])
) u
) x1
pivot
(
sum(quantity)
for Year in ([2008], [2009], [2010], [2011], [2012])
) p
See a SQL Fiddle with a Demo
Edit: further explanation.
You were close with the PIVOT query you tried, in that you got the Year data into the column format you wanted. However, since you want the values that were originally in the columns (percentmale, percentfemale, etc.) to appear as rows, you need to unpivot the data first.
Basically, what you are doing is taking the original data and placing it all in rows keyed by year. The UNPIVOT is going to place your data in this format:
Year Quantity Type
2008 100 percentmale
2008 0 percentfemale
etc
Once you have transformed the data into this format, then you can perform the PIVOT to get the result you want.