Row into column SQL Server 2005/8

I've just started to get deeper into SQL Server and I have a problem: I need to transform a row into columns, but I can't figure it out.
The row looks like this:
Columns: T1  T2  T3  ...  T20
Values:   1   0   9  ...  15
I want to receive something like this:
Col   Val
_________
T1    1
T2    0
T3    9
...
T20   15
I know I have to use a pivot and I have read about it, but I can't figure it out.

You have to use the UNPIVOT table operator for this, like this:
SELECT col, val
FROM Tablename AS t
UNPIVOT
(
    Val
    FOR Col IN (T1, T2, ..., T20)
) AS u;
SQL Fiddle Demo.
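Two things worth knowing about UNPIVOT (a side note, not from the original answer): it silently drops rows whose value is NULL, and every column in the IN list must share the same data type. If either of those is a problem, a CROSS APPLY (VALUES ...) rewrite produces the same shape while keeping NULLs and allowing casts. A minimal sketch against the same Tablename:
SELECT v.col, v.val
FROM Tablename AS t
CROSS APPLY (VALUES ('T1', t.T1),
                    ('T2', t.T2),
                    -- ... one ('name', column) pair per column, up to T20
                    ('T20', t.T20)) AS v(col, val);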
Update 1
If you want to do this dynamically for any number of columns, without writing them out manually, the only way I can think of is to read the column names from the information_schema.columns view and then use dynamic SQL to build the FOR col IN (...) list, like this:
DECLARE @cols AS NVARCHAR(MAX);
DECLARE @query AS NVARCHAR(MAX);

SELECT @cols = STUFF((SELECT DISTINCT ',' + QUOTENAME(column_name)
                      FROM information_schema.columns
                      WHERE table_name = 'tablename'
                      FOR XML PATH(''), TYPE
                     ).value('.', 'NVARCHAR(MAX)')
                     , 1, 1, '');

SELECT @query = 'SELECT col, val
FROM tablename AS t
UNPIVOT
(
    val
    FOR col IN ( ' + @cols + ' )
) AS u;';

EXECUTE(@query);
Updated SQL Fiddle Demo
This will give you:
| COL | VAL |
-------------
| T1 | 1 |
| T10 | 15 |
| T11 | 33 |
| T12 | 31 |
| T13 | 12 |
| T14 | 10 |
| T15 | 12 |
| T16 | 9 |
| T17 | 10 |
| T18 | 2 |
| T19 | 40 |
| T2 | 0 |
| T20 | 21 |
| T3 | 9 |
| T4 | 2 |
| T5 | 3 |
| T6 | 10 |
| T7 | 14 |
| T8 | 15 |
| T9 | 20 |
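One thing to watch for with the dynamic version: it unpivots every column that information_schema.columns returns for the table. If tablename also carries columns you don't want reshaped (a key column, say; that's a hypothetical here), add a filter while building @cols, for example:
WHERE table_name = 'tablename'
  AND column_name LIKE 'T%'   -- assumption: only the T1..T20 columns should be unpivoted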

Related

Selecting specific fields during runtime from a table

How do we select fields dynamically?
I've got a table Table1:
+----------+-------+----+
| Table1Id | x | y |
+==========+=======+====+
| 52 | alex | aa |
+----------+-------+----+
| 43 | liza | aa |
+----------+-------+----+
| 21 | harry | bb |
+----------+-------+----+
| 21 | harry | bb |
+----------+-------+----+
I'd like to join on this Table2:
+----------+----------+--------+------+
| Table2Id | Table1Id | aa | bb |
+==========+==========+========+======+
| 1 | 52 | red | tall |
+----------+----------+--------+------+
| 2 | 43 | blue | thin |
+----------+----------+--------+------+
| 3 | 21 | orange | fat |
+----------+----------+--------+------+
The result I'm looking for is:
+-------+-------+----+----------+
| xyzid | x | y | NewField |
+=======+=======+====+==========+
| 52 | alex | aa | red |
+-------+-------+----+----------+
| 43 | liza | aa | blue |
+-------+-------+----+----------+
| 21 | harry | bb | fat |
+-------+-------+----+----------+
As you can see, Table1's y column holds the exact field name to grab from Table2.
How do I select specific fields from a table when those field names are actually stored as data in another table?
Just another option which would be a bit more dynamic:
No need for the CASE.
Table2 could have any number of columns.
No need to convert the columns to strings or a common datatype.
BUT ... I suspect it is a bit less performant than The Impalar's answer.
I should note that this is for SQL Server 2017+. Oddly enough, 2016 requires a string literal for the path argument of JSON_VALUE or JSON_QUERY.
Example (dbFiddle):
Select Distinct
xyzid = A.Table1Id
,A.x
,A.y
,NewField = JSON_VALUE(B.JS,'$.'+A.Y)
From Table1 A
Join ( Select [Table1Id]
,JS=(Select B1.* For JSON Path,Without_Array_Wrapper)
From Table2 B1 ) B
on A.[Table1Id]=B.[Table1Id]
UPDATE - 2016 Version
Select xyzid = A.Table1Id
,A.x
,A.y
,NewField = (select max([Value]) from OpenJSON(B.JS) where [key]=A.Y collate database_default)
From Table1 A
Join ( Select [Table1Id]
,JS=(Select B1.* For JSON Path,Without_Array_Wrapper )
From Table2 B1 ) B
on A.[Table1Id]=B.[Table1Id]
2nd 2016 Approach -- Perhaps slightly more performant dbFiddle
Select xyzid = A.Table1Id
,A.x
,A.y
,NewField = B.Value
From Table1 A
Join (
Select [Table1Id]
,[Key]
,Value
From Table2 B1
Cross Apply OpenJson((Select B1.* For JSON Path,Without_Array_Wrapper ) )
) B
on A.[Table1Id]=B.[Table1Id]
and A.Y = B.[Key] collate database_default
You can select columns dynamically using CASE, as in:
select
    a.Table1Id as xyzid,
    a.x,
    a.y,
    case
        when a.y = 'aa' then b.aa
        when a.y = 'bb' then b.bb
    end as NewField
from (select distinct * from table1) a
join table2 b on b.Table1Id = a.Table1Id
Result:
xyzid x y newfield
------ ------ --- --------
52 alex aa red
43 liza aa blue
21 harry bb fat
See running example at DB Fiddle.

SQL query to get values of schema table row value in data table's column value

There are two database tables as shown below, Table_1 and Table_2.
The values in Table_1's city columns match the id values of Table_2.
Table_1
| date |city_1 | city_2 | ... | city_100 |
+-----------+-------+--------+-----+----------+
| 20.02.2018| 2 | 44 | ... | 98 |
| 21.02.2018| 1 | 25 | ... | 17 |
| ... | ... | ... | ... | ... |
Table_2
| id | name |
+-------+---------+
| 1 | newyork |
| 2 | london |
| ... | ... |
| 100 | istanbul|
Expected result is below
| date | city_1 | city_2 | ... | city_100 |
+-----------+------------+------------+-------+-----------+
| 20.02.2018| london | india | ... | canada |
| 21.02.2018| newyork | srilanka | ... | austria |
| ... | ... | ... | ... | ... |
What is the SQL query to get the result above?
Thanks
You have to join Table_1 with Table_2 once for each city column you have, like this:
SELECT
t1.date, c1.name, c2.name, c3.name, ... c100.name
FROM Table_1 AS t1
JOIN Table_2 AS c1 ON t1.city_1 = c1.id
JOIN Table_2 AS c2 ON t1.city_2 = c2.id
JOIN Table_2 AS c3 ON t1.city_3 = c3.id
...
JOIN Table_2 AS c100 ON t1.city_100 = c100.id
If you are using Postgres, you can do something like this:
select data.id,
x.names[1] as city_name_1,
x.names[2] as city_name_2,
x.names[3] as city_name_3
from data
join lateral (
select array_agg(ct.name order by e.idx) as names
from unnest(array[city_1, city_2, city_3]) with ordinality as e(id, idx)
left join cities ct on e.id = ct.id
) x on true;
You still need to list all city "names" twice: once in the array inside the derived table, and once on the outside to get each name in a separate column.
If you can also live with a comma separated list of names you could use something like this:
select d.id,
       string_agg(x.name, ',' order by x.idx) as names
from data d
join lateral (
    select ct.name, e.idx
    from unnest(array[city_1, city_2, city_3]) with ordinality as e(id, idx)
    left join cities ct on e.id = ct.id
) x on true
group by d.id;
Or you can aggregate all the names into a single JSON value, then you don't need to hardcode any column name:
select d.id, x.names
from data d
join lateral (
select jsonb_object_agg(j.col, ct.name) as names
from cities ct
left join jsonb_each_text(to_jsonb(d) - 'id') as j(col, id) on j.id::int = ct.id
where j.col is not null
) x on true;
(I replaced your date column with an id column in my example)
Online example: https://rextester.com/NEBGX64778
In SQL Server (you haven't specified a DBMS), you could do something like this.
Sample table structure and data
CREATE TABLE sample (
date date, city_1 int, city_2 int, city_3 int, city_n int
);
INSERT INTO sample
VALUES
('2018-02-20', 4, 44, 98, ..),
('2018-02-21', 1, 25, 17, ..);
CREATE TABLE names (
id int,
name varchar(50)
);
INSERT INTO names
VALUES
(1, 'NewYork'),
(4, 'London'),
(17, 'Istanbul'),
(25, 'Colombo'),
(44, 'Vienna'),
(98, 'Helsinki');
Query 01
SELECT *
FROM (SELECT t1.date, t1.city, names.NAME
FROM (SELECT date, upvt.city, upvt.id
FROM sample
UNPIVOT ( id
FOR city IN (city_1, city_2, city_3, city_n) ) upvt) t1
INNER JOIN names ON t1.id = names.id) t2
PIVOT ( Min(NAME)
FOR city IN (city_1, city_2, city_3, city_n) ) AS pvt;
Query 01: Output
+----------------------+----------+----------+----------+----------+
| date | city_1 | city_2 | city_3 | city_n |
+----------------------+----------+----------+----------+----------+
| 20/02/2018 00:00:00 | London | Vienna | Helsinki | .... |
| 21/02/2018 00:00:00 | NewYork | Colombo | Istanbul | .... |
+----------------------+----------+----------+----------+----------+
Online Demo: https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=a14672b53b457d4ae59e6c9076cd9755
But if you don't want to write the column names (city_1, city_2, city_n) out by hand, you can use this dynamic query.
Query 02: Get the column names
Example: city_1, city_2, city_n
SELECT column_name
FROM information_schema.columns
WHERE table_name = N'sample'
AND column_name LIKE 'city_%';
Query 02: Output
+-------------+
| column_name |
+-------------+
| city_1 |
| city_2 |
| city_3 |
+-------------+
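A small aside on the filter: in a T-SQL LIKE pattern the underscore matches any single character, so 'city_%' would also match a column named, say, cityXregion (a hypothetical name). If you ever need to match the literal underscore only, bracket it:
AND column_name LIKE 'city[_]%'   -- [_] matches a literal underscore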
Query 03: Dynamic Query
DECLARE @cols AS NVARCHAR(max),
        @query AS NVARCHAR(max);

SET @cols = STUFF(( SELECT DISTINCT ',' + QUOTENAME(column_name)
                    FROM information_schema.columns
                    WHERE table_name = N'sample' AND column_name LIKE 'city_%'
                    FOR XML PATH(''), TYPE ).value('.', 'NVARCHAR(MAX)'), 1, 1, '');

SET @query = 'SELECT *
FROM (SELECT t1.date, t1.city, names.NAME
      FROM (SELECT date, upvt.city, upvt.id
            FROM sample
            UNPIVOT (id
                     FOR city IN (' + @cols + ')) upvt) t1
      INNER JOIN names ON t1.id = names.id) t2
PIVOT (Min(NAME)
       FOR city IN (' + @cols + ')) AS pvt';

--select @query;
--select @cols;
execute(@query);
Query 03: Output
+----------------------+----------+----------+----------+
| date | city_1 | city_2 | city_3 |
+----------------------+----------+----------+----------+
| 20/02/2018 00:00:00 | London | Vienna | Helsinki |
| 21/02/2018 00:00:00 | NewYork | Colombo | Istanbul |
+----------------------+----------+----------+----------+
Online Demo: https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=e2d7f10a22a3e11044fc552ff73b14c5

Getting the last updated name

I have a table with records like this:
+------+------+
| ID | name |
+------+------+
| 1 | A |
| 2 | B |
| 3 | C |
| 4 | A |
| 5 | B |
| 6 | A |
| 7 | A |
| 8 | A |
+------+------+
I need to find the point where the value last changed to A from a different value; in this example, that would be the row with ID 6.
Try this query (MySQL syntax):
select min(ID)
from records
where name = 'A'
and ID >=
(
select max(ID)
from records
where name <> 'A'
);
Illustration:
select * from records;
+------+------+
| ID | name |
+------+------+
| 1 | A |
| 2 | B |
| 3 | C |
| 4 | A |
| 5 | B |
| 6 | A |
| 7 | A |
| 8 | A |
+------+------+
-- run query:
+---------+
| min(ID) |
+---------+
| 6 |
+---------+
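One edge case, not covered in the answer above: if every row in records has name = 'A', the inner subquery returns NULL, ID >= NULL is never true, and the outer query returns NULL. A guarded sketch, assuming the IDs are positive:
select min(ID)
from records
where name = 'A'
  and ID >= coalesce((select max(ID) from records where name <> 'A'), 0);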
Using the Lag function...
SELECT Max([ID])
FROM (SELECT [name], [ID],
Lag([name]) OVER (ORDER BY [ID]) AS PrvVal
FROM tablename) tbl
WHERE [name] = 'A'
AND prvval <> 'A'
Online Demo: http://www.sqlfiddle.com/#!18/a55eb/2/0
If you want to get the whole row, you can do this...
SELECT Top 1 *
FROM (SELECT [name], [ID],
Lag([name]) OVER (ORDER BY [ID]) AS PrvVal
FROM tablename) tbl
WHERE [name] = 'A' AND prvval <> 'A'
ORDER BY [ID] DESC
Online Demo: http://www.sqlfiddle.com/#!18/a55eb/22/0
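One detail about the LAG version: when 'A' is the very first row of the table, its PrvVal is NULL and the prvval <> 'A' test filters it out. If you want "no previous row" to count as a change (an assumption about the intended semantics), widen the predicate:
WHERE [name] = 'A'
  AND (prvval <> 'A' OR prvval IS NULL)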
The ANSI SQL below uses a self-join on the previous id,
and the where-clause keeps the rows whose name differs from the previous row's.
select max(t1.ID) as ID
from YourTable as t1
left join YourTable as t2 on t1.ID = t2.ID+1
where (t1.name <> t2.name or t2.name is null)
and t1.name = 'A';
It should work on most RDBMSs, including MS SQL Server.
Note that the ID+1 join assumes there are no gaps between the IDs.
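If the IDs can have gaps, one gap-tolerant sketch against the same YourTable compares each row with the row holding the greatest smaller ID instead of ID+1:
select max(t1.ID) as ID
from YourTable as t1
where t1.name = 'A'
  and not exists (select 1
                  from YourTable as t2
                  where t2.name = t1.name
                    and t2.ID = (select max(t3.ID)
                                 from YourTable as t3
                                 where t3.ID < t1.ID));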

SUM values in SQL starting from a specific point in another table

I have a table that lists the index/order, the name, and the value. For example, it looks like this:
TABLE1:
ID | NAME | VALUE
1 | A | 2
2 | B | 5
3 | C | 2
4 | D | 7
5 | E | 0
Now, I have another table that has a random list of NAMEs. It'll just show either A, B, C, D, or E. Depending on what the NAME is, I wanted to calculate the SUM of all the values that it will take to get to E. Does that make sense?
So if for example, my table looks like this:
TABLE2:
NAME
D
B
A
I'd want another column next to NAME that'll show the sum. So D would have 7 because the next event is E. B would have to be the sum of 5, 2, and 7 because B is 5, C is 2, and D is 7. And A would have the sum of 2, 5, 2, and 7, and so on.
Hopefully this is easy to understand.
I actually don't have much at all aside from joining the two tables and getting the current value for each NAME, but I wasn't sure how to keep accumulating from there.
SELECT T2.NAME, T1.VALUE
FROM Table1 T1
LEFT JOIN Table2 T2 ON T1.NAME = T2.NAME
Is doing this even possible, or am I wasting my time? Should I be doing this in application code instead, or should I write a function?
I wasn't sure where to start and I was hoping someone could help me out.
Thank you in advance!
The query is in two parts; this is hard to see at first, so I'll walk through each step.
Step 1: Obtain the rolling sum
Join table1 to itself for any letters greater than itself:
select *
from table1 t1
inner join table1 t2 on t2.name >= t1.name
order by t1.name
This produces the following table
+ -- + ---- + ----- + -- + ---- + ----- +
| id | name | value | id | name | value |
+ -- + ---- + ----- + -- + ---- + ----- +
| 1 | A | 2 | 1 | A | 2 |
| 1 | A | 2 | 2 | B | 5 |
| 1 | A | 2 | 3 | C | 2 |
| 1 | A | 2 | 4 | D | 7 |
| 1 | A | 2 | 5 | E | 0 |
| 2 | B | 5 | 2 | B | 5 |
| 2 | B | 5 | 3 | C | 2 |
| 2 | B | 5 | 4 | D | 7 |
| 2 | B | 5 | 5 | E | 0 |
| 3 | C | 2 | 3 | C | 2 |
| 3 | C | 2 | 4 | D | 7 |
| 3 | C | 2 | 5 | E | 0 |
| 4 | D | 7 | 4 | D | 7 |
| 4 | D | 7 | 5 | E | 0 |
| 5 | E | 0 | 5 | E | 0 |
+ -- + ---- + ----- + -- + ---- + ----- +
Notice that if we group by the name from t1, we can get the rolling sum by summing the values from t2. This query
select t1.name,
SUM(t2.value) as SumToE
from table1 t1
inner join table1 t2
on t2.name >= t1.name
group by t1.name
gives us the rolling sums we want
+ ---- + ------ +
| name | sumToE |
+ ---- + ------ +
| A | 16 |
| B | 14 |
| C | 9 |
| D | 7 |
| E | 0 |
+ ---- + ------ +
Note: This is equivalent to using a windowed function that sums over a set, but it is much easier to visually see what you're doing via this joining technique.
Step 2: Join the rolling sum
Now that you have this rolling sum for each letter, you simply join it to table2 for the letters you want
select t1.*
from table2 t2
inner join (
select t1.name,
SUM(t2.value) as SumToE
from table1 t1
inner join table1 t2
on t2.name >= t1.name
group by t1.name
) t1 on t1.name = t2.name
Result:
+ ---- + ------ +
| name | sumToE |
+ ---- + ------ +
| A | 16 |
| B | 14 |
| D | 7 |
+ ---- + ------ +
As gregory suggests, you can do this with a simple windowed function, which (in this case) will sum up all the rows after and including the current one based on the ID value. Obviously there are a number of different ways in which you can slice your data, though I'll leave that up to you to explore :)
declare @t table(ID int, Name nvarchar(50), Val int);
insert into @t values(1,'A',2),(2,'B',5),(3,'C',2),(4,'D',7),(5,'E',0);

select ID    -- The desc makes the preceding work the right way. This is
      ,Name  -- essentially shorthand for "sum(Val) over (order by ID rows between current row and unbounded following)"
      ,Val   -- which is functionally the same, but a lot more typing...
      ,sum(Val) over (order by ID desc rows unbounded preceding) as s
from @t
order by ID;
Which will output:
+----+------+-----+----+
| ID | Name | Val | s |
+----+------+-----+----+
| 1 | A | 2 | 16 |
| 2 | B | 5 | 14 |
| 3 | C | 2 | 9 |
| 4 | D | 7 | 7 |
| 5 | E | 0 | 0 |
+----+------+-----+----+
CREATE TABLE #tempTable2(name VARCHAR(1))
INSERT INTO #tempTable2(name)
VALUES('D')
INSERT INTO #tempTable2(name)
VALUES('B')
INSERT INTO #tempTable2(name)
VALUES('A')
CREATE TABLE #tempTable(id INT, name VARCHAR(1), value INT)
INSERT INTO #temptable(id,name,value)
VALUES(1,'A',2)
INSERT INTO #temptable(id,name,value)
VALUES(2,'B',5)
INSERT INTO #temptable(id,name,value)
VALUES(3,'C',2)
INSERT INTO #temptable(id,name,value)
VALUES(4,'D',7)
INSERT INTO #temptable(id,name,value)
VALUES(5,'E',0)
;WITH x AS
(
SELECT id, value, name, RunningTotal = value
FROM dbo.#temptable
WHERE id = (SELECT MAX(id) FROM #temptable)
UNION ALL
SELECT y.id, y.value, y.name, x.RunningTotal + y.value
FROM x
INNER JOIN dbo.#temptable AS y ON
y.id = x.id - 1
)
SELECT x.id, x.value, x.name, x.RunningTotal
FROM x
JOIN #tempTable2 t2 ON
x.name = t2.name
ORDER BY x.id
DROP TABLE #tempTable
DROP TABLE #tempTable2
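A caveat on the recursive CTE (my note, not the original poster's): it walks #temptable one row per recursion level, and SQL Server stops recursive CTEs after 100 levels by default, so on a table with more than about 100 rows you would need to lift the limit by appending a hint to the final SELECT:
ORDER BY x.id
OPTION (MAXRECURSION 0)   -- 0 removes the default limit of 100 recursion levels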

How can I get any row that contains any of a list of substrings?

I have a table with a varchar(max) column. I would like to know if it's possible to get all rows that contain any substring from a list.
I know that for one value I can use LIKE 'AAA%', but I don't know if LIKE has any way to express something like WHERE ... IN ().
Something like this:
select * from TableA where
TableA.Field1 contains any (select Fild1 from TableB where field2 > 5);
Here TableA.Field1 and TableB.Field1 are varchar(max).
Thank you so much.
select * from TableA
where exists (select 1 from TableB
where TableA.Field1 like '%' + TableB.Fild1 + '%'
and TableB.field2 > 5)
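One caveat with the LIKE approach: any %, _ or [ characters stored in TableB.Fild1 act as wildcards. If those values should always be matched literally, CHARINDEX does a plain substring search instead; a sketch against the same tables:
select *
from TableA
where exists (select 1 from TableB
              where charindex(TableB.Fild1, TableA.Field1) > 0
                and TableB.field2 > 5)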
SQL Server also includes the CONTAINS full-text predicate (it requires a full-text index on the column being searched), so something like this may work:
SELECT *
FROM TableA
JOIN TableB ON CONTAINS(TableA.Field1, TableB.Fild1) AND TableB.field2 > 5
You may need to adjust for your requirements.
Here is the documentation on contains
https://msdn.microsoft.com/en-us/library/ms187787.aspx
You can use a dynamic sql query.
Query
declare @query varchar(max)

select @query = 'select * from t1 where ' +
       STUFF
       (
           (
               select ' OR col1 like ''' + col1 + '%'''
               from t2
               for xml path('')
           ),
           1, 4, ''
       );

execute(@query);
Example
Table - t1
+----+-------+
| id | col1 |
+----+-------+
| 1 | AAAAA |
| 2 | BBBBB |
| 3 | CCCCC |
| 4 | DDDDD |
| 5 | EEEEE |
+----+-------+
Table - t2
+------+
| col1 |
+------+
| AA |
| BB |
| CC |
+------+
Output
+----+-------+
| id | col1 |
+----+-------+
| 1 | AAAAA |
| 2 | BBBBB |
| 3 | CCCCC |
+----+-------+