Is there a built-in function for comma-separated column values in DB2 SQL?
Example: if a column contains an ID and there are 3 rows with the same ID but three different roles, the roles should be concatenated with a comma.
ID | Role
------------
4555 | 2
4555 | 3
4555 | 4
The output should look like the following, per row:
4555 2,3,4
The LISTAGG function is new in DB2 LUW 9.7.
See the examples:
create table myTable (id int, category int);
insert into myTable values (1, 1);
insert into myTable values (2, 2);
insert into myTable values (5, 1);
insert into myTable values (3, 1);
insert into myTable values (4, 2);
Example: SELECT without any ordering within the grouped column
select category, LISTAGG(id, ', ') as ids from myTable group by category;
result:
CATEGORY IDS
--------- -----
1 1, 5, 3
2 2, 4
Example: SELECT with an ORDER BY clause within the grouped column
select
category,
LISTAGG(id, ', ') WITHIN GROUP(ORDER BY id ASC) as ids
from myTable
group by category;
result:
CATEGORY IDS
--------- -----
1 1, 3, 5
2 2, 4
I think this smaller query does what you want.
It is the equivalent of MySQL's GROUP_CONCAT in DB2.
SELECT
NUM,
SUBSTR(xmlserialize(xmlagg(xmltext(CONCAT( ', ',ROLES))) as VARCHAR(1024)), 3) as ROLES
FROM mytable
GROUP BY NUM;
This will output something like:
NUM ROLES
---- -------------
1 111, 333, 555
2 222, 444
assuming your original result was something like this:
NUM ROLES
---- ---------
1 111
2 222
1 333
2 444
1 555
Depending on the DB2 version you have, you can use XML functions to achieve this.
Example table with some data
create table myTable (id int, category int);
insert into myTable values (1, 1);
insert into myTable values (2, 2);
insert into myTable values (3, 1);
insert into myTable values (4, 2);
insert into myTable values (5, 1);
Aggregate results using xml functions
select category,
xmlserialize(XMLAGG(XMLELEMENT(NAME "x", id) ) as varchar(1000)) as ids
from myTable
group by category;
results:
CATEGORY IDS
-------- ------------------------
1 <x>1</x><x>3</x><x>5</x>
2 <x>2</x><x>4</x>
Use replace to make the result look better
select category,
replace(
replace(
replace(
xmlserialize(XMLAGG(XMLELEMENT(NAME "x", id) ) as varchar(1000))
, '</x><x>', ',')
, '<x>', '')
, '</x>', '') as ids
from myTable
group by category;
Cleaned result
CATEGORY IDS
-------- -----
1 1,3,5
2 2,4
I just saw a better solution that uses XMLTEXT instead of XMLELEMENT.
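For reference, a rough, untested sketch of that XMLTEXT variant against the same myTable; the RTRIM(CHAR(id)) conversion of the integer to text is an assumption:
select category,
       substr(
         xmlserialize(
           xmlagg(xmltext(',' || rtrim(char(id))) order by id) as varchar(1000)
         ), 2) as ids   -- substr(..., 2) drops the leading comma
from myTable
group by category;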
Since DB2 9.7.5 there is a function for that:
LISTAGG(colname, separator)
Check this article for more information: Using LISTAGG to Turn Rows of Data into a Comma Separated List
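Applied to the ID/Role table from the original question, a minimal sketch would be (the table name roles_table is assumed, since the question doesn't name the table):
SELECT id,
       LISTAGG(role, ',') WITHIN GROUP (ORDER BY role) AS roles
FROM roles_table
GROUP BY id;
-- expected: 4555   2,3,4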
My problem was to transpose row fields (CLOB) into a column (VARCHAR) as a CSV and use the transposed table for reporting, because transposing at the report layer slows the report down.
One way to go is to use recursive SQL. You can find many articles about that, but it's difficult and resource-consuming if you want to join all of your recursively transposed columns.
I created multiple global temporary tables where I stored single transposed columns with one key identifier. Eventually I had 6 temp tables for joining 6 columns, but due to limited resource allocation I wasn't able to bring all the columns together. I opted for the three formulas below, and then I only had to run one query, which gave me output in 10 seconds.
I found various articles on using XML2CLOB functions and ended up with three different variants.
REPLACE(VARCHAR(XML2CLOB(XMLAGG(XMLELEMENT(NAME "A",ALIASNAME.ATTRIBUTENAME)))),'</A><A>', ',') AS TRANSPOSED_OUTPUT
NVL(TRIM(',' FROM REPLACE(REPLACE(REPLACE(CAST(XML2CLOB(XMLAGG(XMLELEMENT(NAME "E", ALIASNAME.ATTRIBUTENAME))) AS VARCHAR(100)),'<E>',' '),'</E>',','), '<E/>', 'Nothing')), 'Nothing') as TRANSPOSED_OUTPUT
RTRIM(REPLACE(REPLACE(REPLACE(VARCHAR(XMLSERIALIZE(XMLAGG(XMLELEMENT(NAME "A",ALIASNAME.ATTRIBUTENAME) ORDER BY ALIASNAME.ATTRIBUTENAME) AS CLOB)), '</A><A>',','),'<A>',''),'</A>','')) AS TRANSPOSED_OUTPUT
Make sure you are casting your "ATTRIBUTENAME" to VARCHAR in a subquery and then referencing it here, as sketched below.
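As an illustration only, with placeholder names (key_id, clob_column, source_table) since the real tables aren't shown, the subquery casts the CLOB attribute to VARCHAR before the XML aggregation runs:
SELECT t.key_id,
       REPLACE(REPLACE(REPLACE(
           CAST(XMLSERIALIZE(XMLAGG(XMLELEMENT(NAME "A", t.attr)) AS CLOB) AS VARCHAR(1024)),
           '</A><A>', ','), '<A>', ''), '</A>', '') AS TRANSPOSED_OUTPUT
FROM (
    SELECT key_id,
           CAST(clob_column AS VARCHAR(1024)) AS attr   -- cast the CLOB here
    FROM source_table
) t
GROUP BY t.key_id;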
Another possibility is a recursive CTE:
with tablewithrank as (
  select id, category,
         rownumber() over(partition by category order by id) as rangid,
         (select count(*) from myTable f2 where f1.category = f2.category) as nbidbycategory
  from myTable f1
),
cte (id, category, rangid, nbidbycategory, rangconcat) as (
  select id, category, rangid, nbidbycategory, cast(id as varchar(500))
  from tablewithrank
  where rangid = 1
  union all
  select f2.id, f2.category, f2.rangid, f2.nbidbycategory,
         cast(f1.rangconcat as varchar(500)) || ',' || cast(f2.id as varchar(500))
  from cte f1
  inner join tablewithrank f2
    on f1.rangid = f2.rangid - 1 and f1.category = f2.category
)
select category, rangconcat as IDS from cte
where rangid = nbidbycategory
Try this:
SELECT GROUP_CONCAT(field1, field2, field3, field4 SEPARATOR ', ')
Related question:
I have a table with three columns:
[ID] [name] [link]
1 sample_name_1 sample_link_1
2 sample_name_2 sample_link_2
3 sample_name_3 sample_link_3
I need to somehow group them into one column, so the ideal result is this:
[one_column]
1
sample_name_1
sample_link_1
2
sample_name_2
sample_link_2
3
sample_name_3
sample_link_3
Does anyone have any suggestions on where to look and how to get it done in SQL Server?
You may try to use the VALUES table value constructor with CROSS APPLY:
Table:
CREATE TABLE MyTable (
ID int,
name varchar(50),
link varchar(50)
)
INSERT INTO MyTable (ID, name, link)
VALUES
(1, 'sample_name_1', 'sample_link_1'),
(2, 'sample_name_2', 'sample_link_2'),
(3, 'sample_name_3', 'sample_link_3')
Statement:
SELECT v.one_column
FROM MyTable t
CROSS APPLY (VALUES
(1, CONVERT(varchar(50), ID)),
(2, CONVERT(varchar(50), name)),
(3, CONVERT(varchar(50), link))
) v (rn, one_column)
ORDER BY t.ID, v.rn
Result:
one_column
1
sample_name_1
sample_link_1
2
sample_name_2
sample_link_2
3
sample_name_3
sample_link_3
While this is something you should do in your presentation layer (i.e. your app or website), you can do it in SQL:
select one_column
from
(
    select cast(id as varchar(10)) as one_column, id as sortkey1, 1 as sortkey2 from mytable
    union all
    select name as one_column, id as sortkey1, 2 as sortkey2 from mytable
    union all
    select link as one_column, id as sortkey1, 3 as sortkey2 from mytable
) unioned
order by sortkey1, sortkey2;
This question already has answers here: SQL Query to concatenate column values from multiple rows in Oracle (10 answers). Closed 4 years ago.
I have a table like this
id | Name
===========
1 | A
2 | A
3 | A
4 | B
5 | B
6 | C
I am writing select id from tbl where name = 'A'. I want to get all three ids (1,2,3) comma-separated in a single variable, and then I want to use that variable in another select query's IN clause. Any help, please?
As others have pointed out, using listagg() should do the trick:
SELECT listagg(id, ',') WITHIN GROUP (ORDER BY id) as concatenation
FROM mytable
WHERE name = 'A'
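If the end goal is just to feed those ids into another query's IN clause, you usually don't need the comma-separated string at all; a plain subquery does it directly (other_table and some_id are assumed names):
SELECT *
FROM other_table
WHERE some_id IN (SELECT id FROM tbl WHERE name = 'A');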
This works:
create table ns_1111(col1 number,col2 varchar(20));
insert into ns_1111 values(1,'A');
insert into ns_1111 values(2,'A');
insert into ns_1111 values(3,'A');
insert into ns_1111 values(4,'B');
insert into ns_1111 values(5,'B');
insert into ns_1111 values(6,'C');
SELECT * FROM ns_1111;
select * from (SELECT LISTAGG(col1, ', ') WITHIN GROUP (ORDER BY col1)
FROM ns_1111 group by col2) where rownum<=1 ;
output:
1, 2, 3
Say I have a table like this:
DROP TABLE tmp;
CREATE TABLE tmp (id SERIAL, name TEXT);
INSERT INTO tmp VALUES (1, 'one'), (2, 'two'), (3, 'three'), (4, 'four'), (5, 'five');
SELECT id, name FROM tmp;
It's like this:
id | name
----+-------
1 | one
2 | two
3 | three
4 | four
5 | five
(5 rows)
Then I have the array ARRAY[3,1,2]. I want to query the table by this array, so I can get back ARRAY['three', 'one', 'two']. I think this should be very easy, but I just can't figure it out.
Thanks in advance.
To preserve the array order, it needs to be unnested in index order (using row_number()) and then joined to the tmp table:
SELECT array_agg(name ORDER BY f.ord)
FROM (
select row_number() over() as ord, a
FROM unnest(ARRAY[3, 1, 2]) AS a
) AS f
JOIN tmp ON tmp.id = f.a;
array_agg
-----------------
{three,one,two}
(1 row)
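On PostgreSQL 9.4 and later, the same order-preserving join can also be written with WITH ORDINALITY instead of row_number(); a sketch:
SELECT array_agg(t.name ORDER BY a.ord)
FROM unnest(ARRAY[3, 1, 2]) WITH ORDINALITY AS a(id, ord)
JOIN tmp t ON t.id = a.id;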
Use the unnest function:
SELECT id, name FROM tmp
WHERE id IN (SELECT unnest(your_array));
There is a different technique, as suggested by Eelke: you can also use the ANY operator.
SELECT id, name FROM tmp WHERE id = ANY (ARRAY[3, 1, 2]);
If you want to return the array as output then try this:
SELECT array_agg(name) FROM tmp WHERE id = ANY (ARRAY[3, 1, 2]);
I have a table like the one below:
DECLARE @ProductTotals TABLE
(
    id int,
    value nvarchar(50)
)
which has the following values:
1, 'abc'
2, 'abc'
1, 'abc'
3, 'abc'
I want to update this table so that it has the following values
1, 'abc'
2, 'abc_1'
1, 'abc'
3, 'abc_2'
Could someone help me out with this?
Use a cursor to move over the table and try to insert every row into a second temporary table. If you get a collision (detected with a select), you can run a second query to get the maximum number (if any) that is already appended to your item.
Once you know which maximum number is in use (use ISNULL to cover the case of the first duplicate), just run an update against your original table and keep going with your scan; a rough sketch of this idea follows below.
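This is only a loose sketch of the cursor idea, assuming the @ProductTotals table variable from the question; it keeps a side table of values already seen with a running counter, rather than re-querying the maximum suffix as described above:
DECLARE @id int, @value nvarchar(50), @n int;
DECLARE @seen TABLE (value nvarchar(50), cnt int);

DECLARE cur CURSOR LOCAL STATIC FOR
    SELECT DISTINCT id, value FROM @ProductTotals ORDER BY id;

OPEN cur;
FETCH NEXT FROM cur INTO @id, @value;
WHILE @@FETCH_STATUS = 0
BEGIN
    IF EXISTS (SELECT 1 FROM @seen WHERE value = @value)
    BEGIN
        -- collision: this value was already used by an earlier id
        SELECT @n = cnt FROM @seen WHERE value = @value;

        UPDATE @ProductTotals
        SET value = @value + '_' + CAST(@n AS nvarchar(10))
        WHERE id = @id AND value = @value;

        UPDATE @seen SET cnt = cnt + 1 WHERE value = @value;
    END
    ELSE
        -- the first id to use this value keeps it unchanged
        INSERT INTO @seen (value, cnt) VALUES (@value, 1);

    FETCH NEXT FROM cur INTO @id, @value;
END
CLOSE cur;
DEALLOCATE cur;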
Are you looking to remove the duplicates, or just change the values so they aren't duplicated?
To change the values, use:
update producttotals
set value = 'abc_1'
where id =2;
update producttotals
set value = 'abc_2'
where id =3;
To find duplicate rows, do a
select id, value
from producttotals
group by id, value
having count(*) > 1;
Assuming SQL Server 2005 or greater
DECLARE @ProductTotals TABLE
(
    id int,
    value nvarchar(50)
)

INSERT INTO @ProductTotals
VALUES (1, 'abc'),
       (2, 'abc'),
       (1, 'abc'),
       (3, 'abc')

;WITH CTE as
(SELECT
    ROW_NUMBER() OVER (Partition by value order by id) rn,
    id,
    value
FROM
    @ProductTotals),
new_values as (
SELECT
    pt.id,
    pt.value,
    pt.value + '_' + CAST(ROW_NUMBER() OVER (partition by pt.value order by pt.id) as varchar) new_value
FROM
    @ProductTotals pt
    INNER JOIN CTE
        ON pt.id = CTE.id
        and pt.value = CTE.value
WHERE
    pt.id NOT IN (SELECT id FROM CTE WHERE rn = 1)) --remove any with the lowest ID for the value
UPDATE
    pt
SET
    pt.value = nv.new_value
FROM
    @ProductTotals pt
    inner join new_values nv
        ON pt.id = nv.id and pt.value = nv.value

SELECT * FROM @ProductTotals
Will produce the following
id value
----------- --------------------------------------------------
1 abc
2 abc_1
1 abc
3 abc_2
Explanation of the SQL
The first CTE creates a row number partitioned by value, so the numbering restarts whenever it sees a new value:
rn id value
-------------------- ----------- --------
1 1 abc
2 1 abc
3 2 abc
4 3 abc
The second CTE, called new_values, ignores any IDs that are associated with an rn of 1. So rn 1 and rn 2 get removed because they share the same ID. It also uses ROW_NUMBER() again to determine the number for the new_value:
id value new_value
----------- ------ -------------
2 abc abc_1
3 abc abc_2
The final statement just updates the old value with the new value.