How to transpose a table in SQL?

+------+-------+
|  a   |   b   |
+------+-------+
|  1   | a,b,c |
|  1   | d,e,f |
|  1   | g,h   |
+------+-------+
I would like output like the one below, produced by a SQL script:
1 , a,b,c,d,e,f,g,h

I'm assuming you're using SQL Server for this, but you can modify the query to work for MySQL.
This one is a little tricky, but you can use the STUFF() function to concatenate the strings. It comes out as a query similar to this:
SELECT
    Results.[a],
    STUFF((
        SELECT ', ' + [b]
        FROM #YourTable t
        WHERE t.a = Results.a
        FOR XML PATH(''), TYPE).value('(./text())[1]','VARCHAR(MAX)')
    ,1,2,'') AS NameValues
FROM #YourTable Results
GROUP BY Results.[a]
Basically, the inner FOR XML PATH subquery walks through the matching rows in your table and concatenates the values of the [b] column, with a comma and space between each value; STUFF() then strips the leading separator.
Then you group by the [a] column so that you get one result row per [a] value instead of one row for each original row in the table.
I encourage you to check out this post for credit for the query and other possible solutions to your question.

For Postgres you can use:
select a, string_agg(b::text, ',' order by b)
from the_table
group by a;
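The same aggregation idea is portable: SQLite, for instance, exposes it as group_concat(). Here is a minimal runnable sketch using Python's stdlib sqlite3 module against an in-memory database (the table name is made up to mirror the Postgres answer):

```python
import sqlite3

# In-memory table mirroring the question's sample data (hypothetical name).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE the_table (a INTEGER, b TEXT)")
conn.executemany("INSERT INTO the_table VALUES (?, ?)",
                 [(1, "a,b,c"), (1, "d,e,f"), (1, "g,h")])

# group_concat() is SQLite's counterpart to Postgres string_agg() and the
# SQL Server STUFF(... FOR XML PATH('')) trick. Note that SQLite does not
# guarantee concatenation order, although a simple scan like this one
# typically returns values in insertion order.
row = conn.execute(
    "SELECT a, group_concat(b, ',') FROM the_table GROUP BY a"
).fetchone()
print(row)
```
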

Related

Pivot with column name in Postgres

I have the following table tbl:
column1 | column2 | column3
-----------------------------------
1 | 'value1' | 3
2 | 'value2' | 4
How do I "pivot" with column names to produce output like:
column1 | 1 | 2
column2 | 'value1' |'value2'
column3 | 3 | 4
As has been commented, the issue of data types is undefined in the question.
If you are OK with all result columns being type text (every data type can be converted to text), you can use one of these:
Plain SQL
WITH cte AS (
SELECT nu.*
FROM tbl t
, LATERAL (
VALUES
(1, t.column1::text)
, (2, t.column2)
, (3, t.column3::text)
) nu(rn, c)
)
SELECT *
FROM (TABLE cte OFFSET 0 LIMIT 3) c1
JOIN (TABLE cte OFFSET 3 LIMIT 3) c2 USING (rn);
The same with useful column names:
WITH cte AS (
SELECT nu.*
FROM tbl t
, LATERAL (
VALUES
('column1', t.column1::text)
, ('column2', t.column2)
, ('column3', t.column3::text)
) nu(rn, c)
)
SELECT * FROM (
SELECT *
FROM (TABLE cte OFFSET 0 LIMIT 3) c1
JOIN (TABLE cte OFFSET 3 LIMIT 3) c2 USING (rn)
) t (key, row1, row2);
Works in any modern version of Postgres.
The SQL string has to be adapted to the number of rows and columns. See fiddles below!
Using a document type as stepping stone
Makes for shorter code.
With many rows and many columns, performance of the SQL solution may scale better because the intermediate derived table is smaller.
(Both are limited either way, as you can't have more than ~1600 table columns in Postgres.)
Since everything is converted to text anyway, hstore seems most efficient. See:
Key value pair in PostgreSQL
SELECT key
, arr[1] AS row1
, arr[2] AS row2
FROM (
SELECT x.key, array_agg(x.value) AS arr
FROM tbl t, each(hstore(t)) x
GROUP BY 1
) sub
ORDER BY 1;
Technically speaking, we would have to enforce the right sort order in array_agg(), but that should work without an explicit ORDER BY. To be absolutely sure you can add one: array_agg(x.value ORDER BY t.ctid), using ctid for lack of a better ordering column.
You can do the same with JSON functions in Postgres 9.3+. Just replace each(hstore(t)) with json_each_text(row_to_json(t)). The rest is identical.
These fiddles demonstrate how to scale each query:
Original example with 2 rows of 3 columns:
db<>fiddle here
Scaled up to 3 rows of 4 columns:
db<>fiddle here
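Outside the database, the same unpivot-then-regroup idea behind each(hstore(t)) is easy to sketch in plain Python (illustrative only; row and column names mirror the question, not any Postgres API):

```python
# Each source row becomes a set of (key, value) pairs, like each(hstore(t))
# in the answer above; grouping those pairs by key turns columns into rows,
# i.e. transposes the table. Everything is converted to text, as discussed.
rows = [
    {"column1": 1, "column2": "value1", "column3": 3},
    {"column1": 2, "column2": "value2", "column3": 4},
]

transposed = {}
for row in rows:
    for key, value in row.items():
        transposed.setdefault(key, []).append(str(value))

for key in sorted(transposed):
    print(key, *transposed[key])
```
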

Split SQL string with specific string instead of separator?

I have table that looks like:
|ID | String
|546 | 1,2,1,5,7,8
|486 | 2,4,8,1,5,1
|465 | 18,11,20,1,4,18,11
|484 | 11,10,11,12,50,11
I want to split the string to this:
|ID | String
|546 | 1,2
|546 | 1,5
|486 | 1,5,1
|486 | 1
|465 | 1,4
My goal is to show ID and all the strings starting with 1 with just the next number after them.
I filtered out all rows not matching '%1,%', but I don't know how to continue.
If you use SQL Server 2016+, you may try to use a JSON-based approach. You need to transform the data into a valid JSON array and parse the JSON array with OPENJSON(). Note that STRING_SPLIT() is not an option here, because as is mentioned in the documentation, the output rows might be in any order and the order is not guaranteed to match the order of the substrings in the input string.
Table:
CREATE TABLE Data (
ID int,
[String] varchar(100)
)
INSERT INTO Data
(ID, [String])
VALUES
(546, '1,2,1,5,7,8'),
(486, '2,4,8,1,5,1'),
(465, '18,11,20,1,4,18,11'),
(484, '11,10,11,12,50,11')
Statement:
SELECT
ID,
CONCAT(FirstValue, ',', SecondValue) AS [String]
FROM (
SELECT
d.ID,
j.[value] As FirstValue,
LEAD(j.[value]) OVER (PARTITION BY d.ID ORDER BY CONVERT(int, j.[key])) AS SecondValue
FROM Data d
CROSS APPLY OPENJSON(CONCAT('[', d.[String], ']')) j
) t
WHERE t.FirstValue = '1'
Result:
----------
ID String
----------
465 1,4
486 1,5
486 1,
546 1,2
546 1,5
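The OPENJSON()/LEAD() logic — split the string while preserving position, pair each element with the next one, then keep pairs whose first element is '1' — can be sketched outside SQL as well. A Python illustration (not part of the T-SQL answer; CONCAT's NULL-to-empty-string behavior is mimicked for the trailing element):

```python
# id -> comma-separated string, mirroring the sample table
data = {
    546: "1,2,1,5,7,8",
    486: "2,4,8,1,5,1",
    465: "18,11,20,1,4,18,11",
    484: "11,10,11,12,50,11",
}

results = []
for id_, s in sorted(data.items()):
    parts = s.split(",")           # positional split, like OPENJSON's [key]
    nexts = parts[1:] + [None]     # LEAD() analogue: the next value, or None
    for first, second in zip(parts, nexts):
        if first == "1":           # WHERE t.FirstValue = '1'
            # CONCAT treats NULL as '', so a trailing '1' yields '1,'
            results.append((id_, "1," + (second or "")))
print(results)
```
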
Something like:
SELECT ID, S.value
FROM Data
CROSS APPLY STRING_SPLIT(REPLACE(',' + String, ',1,', '#1,'), '#') AS S
WHERE value LIKE '1,%'
?

How to search in SQL if any character of one string is contained in another string?

I am developing some complex SQL conversion queries where I have to search a column containing concatenated characters (e.g. 'LKA') for another string of concatenated characters in any order (e.g. 'AL').
I already tried using the LIKE keyword like this:
SELECT UMCOD FROM x WHERE UMCOD LIKE y
where UMCOD is, for example, 'LKA' and y is 'AL'. This clearly cannot work because I did not use wildcards.
For example I have the following sql tables:
CREATE TABLE `searchable_chars` ( `charstring` CHAR(9) NOT NULL );
CREATE TABLE `searchme` ( `tosearch` CHAR(9) NOT NULL );
INSERT INTO `searchable_chars` (`charstring`) VALUES ('LKA');
INSERT INTO `searchme` (`tosearch`) VALUES ('AL'), ('L'), ('U'), ('KU'), ('A');
The query (like the non-working one below):
SELECT x.charString, y.toSearch FROM `searchable_chars` AS x LEFT JOIN `searchme` AS y ON x.charstring LIKE y.toSearch
should return the following table
+------------+----------+
| charString | toSearch |
+------------+----------+
| LKA | AL |
| LKA | L |
| LKA | KU |
| LKA | A |
+------------+----------+
I hope you know what I mean. I know how to solve it using js or any other language, but I want to solve it using pure SQL.
Try the query below:
SELECT x.charString, y.toSearch FROM `searchable_chars` AS x LEFT JOIN `searchme` AS y ON x.charstring LIKE '%'+ y.toSearch +'%'
I have not used DB2, but according to documentation there is a REGEXP_LIKE predicate which might possibly be used to do what you want:
SELECT x.charString, y.toSearch FROM `searchable_chars` AS x
LEFT JOIN `searchme` AS y ON REGEXP_LIKE(x.charstring, '[' || y.toSearch || ']')
https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_73/db2/rbafzregexp_like.htm
As I have no DB2 instance, I have not tried this...
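The character-class idea generalizes: a row matches when any character of toSearch occurs in charstring, which is just a set intersection. A small Python sketch of that test, using the question's sample values (illustrative only, not DB2):

```python
charstring = "LKA"
candidates = ["AL", "L", "U", "KU", "A"]

# A match means the two strings share at least one character -- the same
# test that REGEXP_LIKE(charstring, '[' || tosearch || ']') performs with
# a regex character class.
matches = [c for c in candidates if set(c) & set(charstring)]
print(matches)
```

This returns every candidate except 'U', matching the table the asker expects.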

SQL Server: Split values from columns with multiple values, into multiple rows [duplicate]

This question already has answers here:
Turning a Comma Separated string into individual rows
(16 answers)
Closed 4 years ago.
I have data that currently looks like this (pipe indicates separate columns):
ID | Sex | Purchase | Type
1 | M | Apple, Apple | Food, Food
2 | F | Pear, Barbie, Soap | Food, Toys, Cleaning
As you can see, the Purchase and Type columns feature multiple values that are comma delimited (some of the cells in these columns actually have up to 50+ values recorded within). I want the data to look like this:
ID | Sex | Purchase | Type
1 | M | Apple | Food
1 | M | Apple | Food
2 | F | Pear | Food
2 | F | Barbie | Toys
2 | F | Soap | Cleaning
Any ideas on how would I be able to do this with SQL? Thanks for your help everyone.
Edit: Just to show that this is different to some of the other questions. The key here is that data for each unique row is contained across two separate columns i.e. the second word in "Purchase" should be linked with the second word in "Type" for ID #1. The other questions I've seen was where the multiple values had been contained in just one column.
Basically, you will require a delimiter-split function. There are many around. Here I am using DelimitedSplit8K from Jeff Moden: http://www.sqlservercentral.com/articles/Tally+Table/72993/
-- create the sample table
create table #sample
(
ID int,
Sex char,
Purchase varchar(20),
Type varchar(20)
)
-- insert the sample data
insert into #sample (ID, Sex, Purchase, Type) select 1, 'M', 'Apple,Apple', 'Food,Food'
insert into #sample (ID, Sex, Purchase, Type) select 2, 'F', 'Pear,Barbie,Soap', 'Food,Toys,Cleaning'
select s.ID, s.Sex, Purchase = p.Item, Type = t.Item
from #sample s
cross apply DelimitedSplit8K(Purchase, ',') p
cross apply DelimitedSplit8K(Type, ',') t
where p.ItemNumber = t.ItemNumber
drop table #sample
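The key point in that query — matching p.ItemNumber = t.ItemNumber so the n-th purchase pairs with the n-th type — is the same as zipping the two split lists positionally. A quick Python sketch of that pairing (not the DelimitedSplit8K code itself):

```python
rows = [
    (1, "M", "Apple,Apple", "Food,Food"),
    (2, "F", "Pear,Barbie,Soap", "Food,Toys,Cleaning"),
]

out = []
for id_, sex, purchase, type_ in rows:
    # zip() pairs the n-th purchase with the n-th type, which is what
    # the ItemNumber equality enforces in the T-SQL above.
    for p, t in zip(purchase.split(","), type_.split(",")):
        out.append((id_, sex, p, t))
print(out)
```
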
EDIT: The original question as posted had the data as strings, with pipe characters as column delimiters and commas within the columns. The below solution works for that.
The question has since been edited to show that the input data is actually in columns, not as a single string.
I've left the solution here as an interesting version of the original question.
This is an interesting problem. I have a solution that works for a single row of your data. I don't know from the question if you are going to process it row by row, but I assume you will.
If so, this will work. I suspect there might be a better way using xml or without the temp tables, but in any case this is one solution.
declare @row varchar(1000); set @row='2 | F | Pear, Barbie, Soap | Food, Toys, Cleaning'
declare @v table(i int identity, val varchar(1000), subval varchar(100))
insert @v select value as val, subval from STRING_SPLIT(@row,'|')
cross apply (select value as subval from STRING_SPLIT(value,',') s) subval
declare @v2 table(col_num int, subval varchar(100), correlation int)
insert @v2
select col_num, subval,
DENSE_RANK() over (partition by v.val order by i) as correlation
from @v v
join (
select val, row_number() over (order by fst) as Col_Num
from (select val, min(i) as fst from @v group by val) colnum
) c on c.val=v.val
order by i
select col1.subval as ID, col2.subval as Sex, col3.subval as Purchase, col4.subval as Type
from @v2 col1
join @v2 col2 on col2.col_num=2
join @v2 col3 on col3.col_num=3
join @v2 col4 on col4.col_num=4 and col4.correlation=col3.correlation
where col1.col_num=1
Result is:
ID Sex Purchase Type
2 F Pear Food
2 F Barbie Toys
2 F Soap Cleaning
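The core of that row-by-row approach — split on '|' into columns, split each column on ',', and correlate sub-values by position — can be sketched in a few lines of Python (illustrative only, using the same sample row):

```python
row = "2 | F | Pear, Barbie, Soap | Food, Toys, Cleaning"

# Split on '|' into columns, then each column on ',' into sub-values;
# matching positions in the Purchase and Type columns belong together,
# which is what the `correlation` rank enforces in the T-SQL above.
cols = [[v.strip() for v in col.split(",")] for col in row.split("|")]
id_, sex = cols[0][0], cols[1][0]
out = [(id_, sex, p, t) for p, t in zip(cols[2], cols[3])]
print(out)
```
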

Adding Row Numbers to a SELECT Query Result in SQL Server Without Using the ROW_NUMBER() Function

I need to add row numbers to a SELECT query result without using the ROW_NUMBER() function,
and without using user-defined functions or stored procedures.
Select (obtain the row number) as [Row], field1, field2, fieldn from aTable
UPDATE
I am using SAP B1 DI API to build a query; this system does not allow the use of the ROW_NUMBER() function in the SELECT statement.
Bye.
I'm not sure if this will work for your particular situation or not, but can you execute this query with a stored procedure? If so, you can:
A) Create a temp table with all your normal result columns, plus a Row column as an auto-incremented identity.
B) Select-Insert your original query, sans the row column (SQL will fill this in automatically for you)
C) Select * on the temp table for your result set.
Not the most elegant solution, but will accomplish the row numbering you are wanting.
This query will give you the row_number,
SELECT
(SELECT COUNT(*) FROM @table t2 WHERE t2.field <= t1.field) AS row_number,
field,
otherField
FROM @table t1
but there are some restrictions when you want to use it. You have to have a column in your table (field in the example) that is unique and numeric, which you can use as a reference. For example:
DECLARE #table TABLE
(
field INT,
otherField VARCHAR(10)
)
INSERT INTO #table(field,otherField) VALUES (1,'a')
INSERT INTO #table(field,otherField) VALUES (4,'b')
INSERT INTO #table(field,otherField) VALUES (6,'c')
INSERT INTO #table(field,otherField) VALUES (7,'d')
SELECT * FROM @table
returns
field | otherField
------------------
1 | a
4 | b
6 | c
7 | d
and
SELECT
(SELECT COUNT(*) FROM @table t2 WHERE t2.field <= t1.field) AS row_number,
field,
otherField
FROM @table t1
returns
row_number | field | otherField
-------------------------------
1 | 1 | a
2 | 4 | b
3 | 6 | c
4 | 7 | d
This is the solution without functions and stored procedures, but as I said there are the restrictions. But anyway, maybe it is enough for you.
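The correlated COUNT(*) trick is portable across databases. Here is a runnable sketch against SQLite via Python's stdlib sqlite3, with the same table and data as the example above (the table name t is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (field INTEGER, otherField TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, "a"), (4, "b"), (6, "c"), (7, "d")])

# Row number = count of rows whose key is <= this row's key.
# Relies on `field` being unique, exactly as the answer notes.
rows = conn.execute("""
    SELECT (SELECT COUNT(*) FROM t t2 WHERE t2.field <= t1.field),
           field, otherField
    FROM t t1
    ORDER BY field
""").fetchall()
print(rows)
```
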
RRUZ, you might be able to hide the use of a function by wrapping your query in a View. It would be transparent to the caller. I don't see any other options, besides the ones already mentioned.