Consider the following query:
DECLARE @T1 TABLE(
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [Data] VARCHAR(100),
    [Column1] VARCHAR(100),
    [Column2] VARCHAR(100),
    [Column3] VARCHAR(100));

INSERT INTO @T1([Data],[Column1],[Column2],[Column3])
VALUES
    ('Data1','C11','C21','C31'),
    ('Data2','C12','C22','C32'),
    ('Data3','C13','C23','C33'),
    ('Data4','C14','C24','C34'),
    ('Data5','C15','C25','C35');

SELECT * FROM @T1;
The output looks like the following:

Id  Data   Column1  Column2  Column3
--  -----  -------  -------  -------
1   Data1  C11      C21      C31
2   Data2  C12      C22      C32
3   Data3  C13      C23      C33
4   Data4  C14      C24      C34
5   Data5  C15      C25      C35
Now we want to keep the Data column and, for each of the other columns, stack the result of selecting that column into the final table. In other words, the following query produces the desired output:
-- I am looking for a better solution than below!
DECLARE @output TABLE([Data] VARCHAR(100), [Column] VARCHAR(100));

INSERT INTO @output([Data],[Column])
(SELECT [Data],[Column1] FROM @T1
 UNION
 SELECT [Data],[Column2] FROM @T1
 UNION
 SELECT [Data],[Column3] FROM @T1)

SELECT * FROM @output
What would be a better, cleaner approach to produce the final output? As the number of columns increases, every new column requires a separate SELECT in the UNION, which feels like a crude solution. Ideally I am looking for a pivot-based solution, but I couldn't come up with anything concrete.
Certainly Yogesh's solution would be more performant. However, since your columns expand over time, here is one approach that will "dynamically" unpivot your data WITHOUT actually using dynamic SQL:
Example
Select A.[Data]
      ,C.*
 From  @T1 A
 Cross Apply ( values (cast((Select A.* for XML RAW) as xml))) B(XMLData)
 Cross Apply (
                Select Item  = xAttr.value('local-name(.)', 'varchar(100)')
                      ,Value = xAttr.value('.','varchar(100)')
                 From  B.XMLData.nodes('//@*') xNode(xAttr)
                 Where xAttr.value('local-name(.)','varchar(100)') not in ('Id','Data','Other-Columns','To-Exclude')
             ) C
Returns
I often use APPLY instead of UNION:
select t1.data, t2.cols
from @T1 t1 cross apply
     (values ([column1]), ([column2]), ([column3])) t2(cols);
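For completeness, the same APPLY pattern can feed the @output table from the question directly; this is only a sketch reusing the variables declared above, instead of one UNION branch per column:

-- Sketch: populate @output with CROSS APPLY ... VALUES rather than a UNION per column
insert into @output([Data], [Column])
select t1.[Data], v.col
from @T1 t1
cross apply (values ([Column1]), ([Column2]), ([Column3])) v(col);

select * from @output;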
Related
I have a table and I want to pivot it; my desired output is @tab2.
My table is as follows:
declare @tab1 table(name varchar(50), mobile varchar(10), address varchar(100))
insert into @tab1 values('Test','2612354598','CG-10')
select * from @tab1
My desired output is:
declare @tab2 table(colname varchar(50), value varchar(100))
insert into @tab2 values('name','Test'),('mobile','2612354598'),('address','CG-10')
select * from @tab2
Please help
What you are looking for is UNPIVOT, not PIVOT. Use UNPIVOT as follows; make sure that all columns have the same datatype.
select
    colname,
    value
from (
    -- UNPIVOT needs the source columns to share one type, so cast them all to varchar(100)
    select cast(name as varchar(100)) as name,
           cast(mobile as varchar(100)) as mobile,
           address
    from @tab1
) t
unpivot
(
    value
    for colname in ([name], [mobile], [address])
) unpiv;
If using SQL Server you can use UNPIVOT.
SELECT colname, valueid
FROM
    (SELECT CAST(name AS varchar(100)) name,
            CAST(mobile AS varchar(100)) mobile,
            address
     FROM @tab1) p
UNPIVOT
    (valueid FOR colname IN (name, mobile, address)) AS unpvt;
You need to CAST() the columns so the type is the same for all of them.
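As a side note, the CROSS APPLY (VALUES ...) pattern from the first question also works here and sidesteps UNPIVOT's strict same-type requirement, because the VALUES constructor implicitly converts the columns to a common type; a minimal sketch against @tab1:

select v.colname, v.value
from @tab1
cross apply (values ('name', name),
                    ('mobile', mobile),
                    ('address', address)) v(colname, value);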
I have a table with two columns: ParameterName and ParameterValue.
The easy case is when a parameter has a direct value, for example '$A' = 'SOME VALUE'.
The problem is that sometimes a parameter's value comes from another parameter, for example '$B' = '$A'.
This situation can get more complicated: the second parameter may also get its value from a third parameter, and so on.
I know it is probably a common case with an easy solution, but I couldn't find the answer and I don't know what this type of situation is called.
Can anyone help me? I need to resolve the value for all parameters. I thought the answer was a recursive CTE, but after trying it, it seemed it was not.
I put the code for my sample table below:
DECLARE @T TABLE
(
    ParameterName  NVARCHAR(128) NULL,
    ParameterValue NVARCHAR(128) NULL
)

INSERT @T
VALUES ( '$A', 'SOME VALUE'),
       ( '$B', '$A')

SELECT * FROM @T
The answer was a recursive CTE after all, and it worked as shown below. I also added more data to my table.
DECLARE @T TABLE
(
    ParameterName  NVARCHAR(128) NULL,
    ParameterValue NVARCHAR(128) NULL
)

INSERT @T
VALUES ( '$A', 'SOME VALUE'),
       ( '$B', '$A'),
       ( '$C', 'AAAAA'),
       ( '$D', '$A'),
       ( '$E', '$D')

;WITH VALS
AS ( SELECT ParameterName, ParameterValue
     FROM @T
     WHERE ParameterValue NOT LIKE '$%'
     UNION ALL
     SELECT T.ParameterName, V.ParameterValue
     FROM @T AS T
     INNER JOIN VALS AS V ON T.ParameterValue = V.ParameterName
   )
SELECT * FROM VALS
Now it works like this:

ParameterName  ParameterValue
-------------  --------------
$A             SOME VALUE
$C             AAAAA
$B             SOME VALUE
$D             SOME VALUE
$E             SOME VALUE
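One defensive hedge worth adding: if the data ever contains a reference cycle (say $X pointing to $Y and $Y back to $X), the recursion will run until SQL Server's default limit of 100 levels and then fail. You can make that limit explicit by appending a query hint to the final SELECT; the value 100 here is just an illustrative assumption:

SELECT * FROM VALS
OPTION (MAXRECURSION 100); -- assumed cap; adjust to your longest parameter chain (0 = unlimited)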
I have a table with over 500 columns, dynamically created and named by the user. New columns can be created by the user, but none can be deleted.
I was given the task to program a keyword search that searches through all columns for a specific string, and returns the ID of that record. As you can imagine, the query currently looks something like:
SELECT form_id FROM table_name WHERE col1 LIKE '%str%' OR col2 LIKE '%str%' or col3 LIKE '%str%'.. etc.
It is unbelievably slow. To combat this, I'm trying to create another table, where this data is stored in a different format like this:
form_id, col_name, value
1, 'col2', 'some random value'
1, 'col1', 'another random value'
And then searching using:
SELECT id FROM new_table_name WHERE value LIKE '%str%'
I can export all the data, format it, and insert it into the new table. But how would I go about keeping the new table updated? Is it possible to have triggers that automatically insert/update the new table when the original one is modified, even though I don't know the column names beforehand?
Another option just for fun
Declare @YourTable Table (EmpID int, EmpName varchar(50), Salary int, Location varchar(100))
Insert Into @YourTable Values
 (1,'Arul',100,null)
,(2,'Jane',120,'New York')
If 2016+ use JSON
Select *
 From  @YourTable A
 Where (Select A.* For JSON Path, Without_Array_Wrapper) like '%Jane%'
If <2016 use XML
Select *
 From  @YourTable A
 Where (Select A.* For XML Raw) like '%Jane%'
Both would Return
EmpID  EmpName  Salary  Location
2      Jane     120     New York
If you want an exact match, you can quote the string in the pattern, e.g. '%"Jane"%'.
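For instance, a quick sketch of the exact-match variant of the JSON query above (only the LIKE pattern changes, matching the quoted JSON value):

Select *
 From  @YourTable A
 Where (Select A.* For JSON Path, Without_Array_Wrapper) like '%"Jane"%'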
You can construct the table by unpivoting the original table:
select t.form_id, v.col, v.value
from t cross apply
(values ('col1', col1), ('col2', col2), . . . ) v(col, value);
You can then keep it up-to-date with insert and delete triggers for existing data. Then you will need DDL triggers to handle users adding new columns.
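A minimal sketch of such an insert trigger, assuming the wide table is named table_name and the unpivoted copy is new_table_name as above; the trigger name and the short column list are placeholders (in practice you would generate the VALUES list from sys.columns, and a delete trigger would remove the matching rows):

CREATE TRIGGER trg_table_name_unpivot_ins   -- hypothetical name
ON table_name
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- copy each inserted row into the long/EAV table, one row per column
    INSERT INTO new_table_name (form_id, col_name, value)
    SELECT i.form_id, v.col_name, v.value
    FROM inserted i
    CROSS APPLY (VALUES ('col1', i.col1),
                        ('col2', i.col2),
                        ('col3', i.col3)   -- extend for the remaining columns
                ) v(col_name, value);
END;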
Seems like you are looking for an EAV model.
Here is one approach that does NOT require you to list the 500 columns.
Full Disclosure: This is NOT recommended for HUGE tables. UNPIVOT is more performant.
Also note that if you DON'T want null values remove ,ELEMENTS XSINIL
Example
Declare @YourTable Table (EmpID int, EmpName varchar(50), Salary int, Location varchar(100))
Insert Into @YourTable Values
 (1,'Arul',100,null)
,(2,'Jane',120,'New York')
Select Entity = A.EmpID
      ,C.*
 From  @YourTable A
 Cross Apply ( values (cast((Select A.* for XML RAW,ELEMENTS XSINIL) as xml))) B(XMLData)
 Cross Apply (
                Select Attribute = a.value('local-name(.)','varchar(100)')
                      ,Value     = a.value('.','varchar(max)')
                 From  B.XMLData.nodes('/row') as C1(n)
                 Cross Apply C1.n.nodes('./*') as C2(a)
             ) C
Returns

Entity  Attribute  Value
1       EmpID      1
1       EmpName    Arul
1       Salary     100
1       Location             <-- NULL values display as an empty string (see the note regarding nulls above)
2       EmpID      2
2       EmpName    Jane
2       Salary     120
2       Location   New York
EDIT - If 2016+ ... JSON
Select A.[EmpID]
      ,Attribute = B.[Key]
      ,Value     = B.[Value]
 From  @YourTable A
 Cross Apply ( Select * From OpenJson((Select A.* For JSON Path, Without_Array_Wrapper)) ) B
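One caveat: unlike the XML version above (which keeps NULLs via ELEMENTS XSINIL), FOR JSON PATH drops NULL properties by default, so Arul's Location row would simply be missing. A sketch that keeps them adds the INCLUDE_NULL_VALUES option:

Select A.[EmpID]
      ,Attribute = B.[Key]
      ,Value     = B.[Value]
 From  @YourTable A
 Cross Apply ( Select * From OpenJson((Select A.* For JSON Path, Without_Array_Wrapper, Include_Null_Values)) ) B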
I have this data stored in the table variables @oldValues and @newValues (each has the columns CommitteeID, CommitteeName and CommitteeMemberID):
The two tables above will contain 1 row maximum. My goal is to insert this into a new table, @jsonTable:
DECLARE @jsonTable TABLE
(
    [Field] nvarchar(max),
    [From]  nvarchar(max),
    [To]    nvarchar(max)
);
and store the from/to values from the old and new variables.
Output must be like this:
Field              From   To      <-- these are column names
-----------------  -----  -----
CommitteeID        1      1
CommitteeName      Test   Test2
CommitteeMemberID  1      3
How can I achieve that?
Thanks in advance
It can be done with a plain UNION ALL:

select 'CommitteeID' [Field],
       (select cast(CommitteeID as varchar(max)) from @oldValues) [From],
       (select cast(CommitteeID as varchar(max)) from @newValues) [To]
union all
select 'CommitteeName',
       (select CommitteeName from @oldValues),
       (select CommitteeName from @newValues)
union all
select 'CommitteeMemberID',
       (select cast(CommitteeMemberID as varchar(max)) from @oldValues),
       (select cast(CommitteeMemberID as varchar(max)) from @newValues)
If you have only one row:
select v.*
from @oldValues ov cross join
     @newValues nv outer apply
     (values ('CommitteeID', ov.CommitteeID, nv.CommitteeID),
             ('CommitteeName', ov.CommitteeName, nv.CommitteeName),
             ('CommitteeMemberID', ov.CommitteeMemberID, nv.CommitteeMemberID)
     ) v(field, [from], [to]);
Note: This assumes that the types for the values are all compatible. Otherwise, you may need to convert/cast values to strings.
EDIT:
To be explicit, the casts are:
select v.*
from @oldValues ov cross join
     @newValues nv outer apply
     (values ('CommitteeID', cast(ov.CommitteeID as nvarchar(255)), cast(nv.CommitteeID as nvarchar(255))),
             ('CommitteeName', cast(ov.CommitteeName as nvarchar(255)), cast(nv.CommitteeName as nvarchar(255))),
             ('CommitteeMemberID', cast(ov.CommitteeMemberID as nvarchar(255)), cast(nv.CommitteeMemberID as nvarchar(255)))
     ) v(field, [from], [to]);
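If the end goal from the question is to land these rows in @jsonTable, the same query can drive the insert; a minimal sketch reusing the variables declared in the question:

insert into @jsonTable ([Field], [From], [To])
select v.field, v.[from], v.[to]
from @oldValues ov cross join
     @newValues nv outer apply
     (values ('CommitteeID', cast(ov.CommitteeID as nvarchar(255)), cast(nv.CommitteeID as nvarchar(255))),
             ('CommitteeName', cast(ov.CommitteeName as nvarchar(255)), cast(nv.CommitteeName as nvarchar(255))),
             ('CommitteeMemberID', cast(ov.CommitteeMemberID as nvarchar(255)), cast(nv.CommitteeMemberID as nvarchar(255)))
     ) v(field, [from], [to]);

select * from @jsonTable;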
I think the UNPIVOT operator is the most appropriate solution for this need.
For an UNPIVOT operation all column types must be the same, which is why we cast every column to a common type.
DECLARE @oldValues as TABLE (CommitteeID INT, CommitteeName VARCHAR(20), CommitteeMemberID INT)
INSERT INTO @oldValues VALUES (1,'Test',1)

DECLARE @newValues as TABLE (CommitteeID INT, CommitteeName VARCHAR(20), CommitteeMemberID INT)
INSERT INTO @newValues VALUES (1,'Test2',3)

DECLARE @jsonTable TABLE
(
    [Field] nvarchar(max),
    [From]  nvarchar(max),
    [To]    nvarchar(max)
);
;WITH FromTable AS (
    SELECT [Field], [From]
    FROM (SELECT CAST(CommitteeID AS VARCHAR(255)) CommitteeID,
                 CAST(CommitteeName AS VARCHAR(255)) CommitteeName,
                 CAST(CommitteeMemberID AS VARCHAR(255)) CommitteeMemberID
          FROM @oldValues) p
    UNPIVOT ( [From] FOR [Field]
              IN ( CommitteeID, CommitteeName, CommitteeMemberID)) as UNPVT
)
, ToTable AS (
    SELECT [Field], [To]
    FROM (SELECT CAST(CommitteeID AS VARCHAR(255)) CommitteeID,
                 CAST(CommitteeName AS VARCHAR(255)) CommitteeName,
                 CAST(CommitteeMemberID AS VARCHAR(255)) CommitteeMemberID
          FROM @newValues) p
    UNPIVOT ( [To] FOR [Field]
              IN ( CommitteeID, CommitteeName, CommitteeMemberID)) as UNPVT
)
SELECT F.*, T.[To]
FROM FromTable F
FULL JOIN ToTable T ON F.[Field] = T.[Field]
New columns can easily be added to the SELECT and to the IN list of the query.
To make missing columns easy to spot, I used a FULL JOIN.
Simple question, just out of curiosity.
For example, selecting the values 1, 2, 3 so that the result is a table with one column and three rows.
Something like this: select values (1),(2),(3)
*with one select statement
An example for my comment in your post.
DECLARE @TABLE TABLE (ONE INT, TWO INT, THREE INT)
INSERT INTO @TABLE VALUES (1,2,3)

SELECT UP.COL, UP.VALUE
FROM @TABLE
UNPIVOT (VALUE FOR COL IN (ONE, TWO, THREE)) UP
Query:
DECLARE @t TABLE (i1 INT, i2 INT, i3 INT)
INSERT INTO @t VALUES (1, 2, 3)

SELECT t.*
FROM @t
CROSS APPLY (
    VALUES (i1), (i2), (i3)
) t(value)
Output:
value
-----------
1
2
3
Additional info:
http://blog.devart.com/is-unpivot-the-best-way-for-converting-columns-into-rows.html
As it appears, there is a simple piece of code that does what I've been searching for:
select c from (values (1),(2),(3)) D(c);