SQL Server 2000 equivalent of GROUP_CONCAT function - sql

I tried to use the GROUP_CONCAT function in SQL Server 2000, but it returns an error:
"'group_concat' is not a recognized function name"
So I guess there is another function for GROUP_CONCAT in SQL Server 2000? Can you tell me what it is?

Unfortunately since you are using SQL Server 2000 you cannot use FOR XML PATH to concatenate the values together.
Let's say we have the following sample Data:
CREATE TABLE yourtable ([id] int, [name] varchar(4));
-- SQL Server 2000 does not support multi-row VALUES lists, so insert row by row
INSERT INTO yourtable ([id], [name]) VALUES (1, 'John');
INSERT INTO yourtable ([id], [name]) VALUES (1, 'Jim');
INSERT INTO yourtable ([id], [name]) VALUES (2, 'Bob');
INSERT INTO yourtable ([id], [name]) VALUES (3, 'Jane');
INSERT INTO yourtable ([id], [name]) VALUES (3, 'Bill');
INSERT INTO yourtable ([id], [name]) VALUES (4, 'Test');
INSERT INTO yourtable ([id], [name]) VALUES (4, '');
One way you could generate the list together would be to create a function. A sample function would be:
CREATE FUNCTION dbo.List
(
    @id int
)
RETURNS VARCHAR(8000)
AS
BEGIN
    DECLARE @r VARCHAR(8000)
    SELECT @r = ISNULL(@r + ', ', '') + name
    FROM dbo.yourtable
    WHERE id = @id
      AND name > '' -- add filter if you think you will have empty strings
    RETURN @r
END
Then when you query the data, you will pass a value into the function to concatenate the data into a single row:
select distinct id, dbo.list(id) Names
from yourtable;
See SQL Fiddle with Demo. This gives you a result:
| ID | NAMES      |
--------------------
| 1  | John, Jim  |
| 2  | Bob        |
| 3  | Jane, Bill |
| 4  | Test       |
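For completeness: on SQL Server 2005 and later, the same result can be produced without a scalar function by using FOR XML PATH, the approach ruled out above for 2000. A sketch, assuming the same yourtable:

```sql
-- SQL Server 2005+ only; not valid on SQL Server 2000
SELECT DISTINCT t.id,
       STUFF((SELECT ', ' + name
              FROM dbo.yourtable
              WHERE id = t.id
                AND name > ''   -- same empty-string filter as the function
              FOR XML PATH('')), 1, 2, '') AS Names
FROM dbo.yourtable AS t;
```

STUFF strips the leading ", " that the subquery puts before the first name.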

Related

Using GROUP BY with FOR XML PATH in SQL Server 2016

I am trying to:
1. group by ID, and
2. aggregate multiple comments into a single row.
Right now, I can do part 2 for a single ID (ID = 1006), but I would like to aggregate comments for all IDs. I am struggling with where and how to add a "group by" clause to my query.
Here is the query:
create table Comments (ID int, Comment nvarchar(150), RegionCode int)
insert into Comments values (1006, 'I', 1)
, (1006, 'am', 1)
, (1006, 'good', 1)
, (1006, 'bad', 2)
, (2, 'You', 1)
, (2, 'are', 1)
, (2, 'awesome', 1)
SELECT
SUBSTRING((SELECT Comment
FROM Comments
WHERE ID = 1006 AND RegionCode != 2
FOR XML PATH('')), 1, 999999) AS Comment_Agg
My desired result is one row per ID, with all of that ID's comments aggregated together.
FYI, I am using FOR XML PATH here to aggregate multiple comments into a single row because STRING_AGG function is not supported in my version - SQL Server 2016 (v13.x).
Please try the following solution.
SQL
-- DDL and sample data population, start
DECLARE @tbl TABLE (ID int, Comment nvarchar(150));
INSERT INTO @tbl VALUES
(1006, 'I'),
(1006, 'am'),
(1006, 'good'),
(2, 'You'),
(2, 'are'),
(2, 'awesome');
-- DDL and sample data population, end
DECLARE @separator CHAR(1) = SPACE(1);
SELECT p.ID
    , STUFF((SELECT @separator + Comment
             FROM @tbl AS c
             WHERE c.ID = p.ID
             FOR XML PATH('')), 1, DATALENGTH(@separator), '') AS Result -- LEN() ignores a trailing space, so use DATALENGTH here
FROM @tbl AS p
GROUP BY p.ID
ORDER BY p.ID;
Output
+------+-----------------+
| ID | Result |
+------+-----------------+
| 2 | You are awesome |
| 1006 | I am good |
+------+-----------------+
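On SQL Server 2017 (14.x) and later, the whole STUFF/FOR XML PATH construction collapses into a single STRING_AGG call. A sketch against the same sample data:

```sql
DECLARE @tbl TABLE (ID int, Comment nvarchar(150));
INSERT INTO @tbl VALUES
(1006, 'I'), (1006, 'am'), (1006, 'good'),
(2, 'You'), (2, 'are'), (2, 'awesome');

SELECT ID,
       STRING_AGG(Comment, ' ') AS Result  -- piece order is not guaranteed without WITHIN GROUP
FROM @tbl
GROUP BY ID
ORDER BY ID;
```

Note that, as with the FOR XML version, the order of the concatenated comments is not guaranteed unless you add a WITHIN GROUP (ORDER BY ...) clause.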

Adding hyphen in a column in sql table

I need help understanding how to add a hyphen to a column where the values are as follows,
8601881, 9700800,2170
The hyphen is supposed to be just before the last digit. There are multiple such values in the column, and the numbers can be 5, 6, or more digits long, but the hyphen always goes before the last digit.
Any help is greatly appreciated.
The expected output should be as follows,
860188-1,970080-0,217-0
select concat(substring(value, 1, len(value)-1), '-', substring(value, len(value), 1)) from data;
Here is the full example:
create table data(value varchar(100));
insert into data values('6789567');
insert into data values('98765434');
insert into data values('1234567');
insert into data values('876545');
insert into data values('342365');
select concat(substring(value, 1, len(value)-1), '-', substring(value, len(value), 1)) from data;
| (No column name) |
| :--------------- |
| 678956-7 |
| 9876543-4 |
| 123456-7 |
| 87654-5 |
| 34236-5 |
In case the OP meant there can be multiple numbers in the column value, here is the solution:
create table data1(value varchar(100));
insert into data1 values('6789567,5467474,846364');
insert into data1 values('98765434,6474644,76866,68696');
insert into data1 values('1234567,35637373');
select t.value, string_agg(concat(substring(token.value, 1, len(token.value)-1), '-',
substring(token.value, len(token.value), 1)), ',') as result
from data1 t cross apply string_split(value, ',') as token group by t.value;
value | result
:--------------------------- | :-------------------------------
1234567,35637373 | 123456-7,3563737-3
6789567,5467474,846364 | 678956-7,546747-4,84636-4
98765434,6474644,76866,68696 | 9876543-4,647464-4,7686-6,6869-6
Using SQL SERVER 2017, you can leverage STRING_SPLIT, STUFF, & STRING_AGG to handle this fairly easily.
DECLARE @T TABLE (val VARCHAR(100)) ;
INSERT INTO @T (val) VALUES ('8601881,9700800,2170') ;
SELECT t.val,
       STRING_AGG(STUFF(ss.value, LEN(ss.value), 0, '-'), ',') AS Parsed
FROM @T AS t
CROSS APPLY STRING_SPLIT(t.val, ',') AS ss
GROUP BY t.val ;
Returns
8601881,9700800,2170 => 860188-1,970080-0,217-0
STRING_SPLIT breaks them into individual values, STUFF inserts the hyphen into each individual value, STRING_AGG combines them back into a single row per original value.
You can use the LEN and LEFT/RIGHT functions to get your desired output. The logic is given below:
Note: this will work for values of any length.
DECLARE @T VARCHAR(MAX) = '8601881'
SELECT LEFT(@T, LEN(@T)-1) + '-' + RIGHT(@T, 1)
If your data already contains a dash/hyphen and you have to store it in a varchar or nvarchar column, just prefix the string literal with N.
For example:
insert into users(id, studentId) VALUES (6, N'12345-1001-67890');

Updating a column with JSON data in another table

I have seen a lot on JSON and SQL Server but haven't been able to find what I am looking for.
I want to update columns in one table by retrieving JSON values from another table.
Let's say I have the below table:
table : people
+-------+-----------+
| id | name |
+-------+-----------+
| 1 | John |
| 2 | Mary |
| 3 | Jeff |
| 4 | Bill |
| 5 | Bob |
+-------+-----------+
And let's pretend I have another table filled with rows of JSON like the following:
table : archive
+-------+----------------------------------------------------------------+
| id | json |
+-------+----------------------------------------------------------------+
| 1 |[{ "Column":"name","values": { "old": "Jeff", "new": "John"}}] |
| 2 |[{ "Column":"name","values": { "old": "Rose", "new": "Mary"}}] |
+-------+----------------------------------------------------------------+
Now the idea is to change John's name to Jeff.
UPDATE people
SET name = JSON_QUERY(archive.json, '$values.old')
WHERE ID = 1
The above SQL may make no sense but I'm just trying to get across my current logic of what I'm trying to do. I hope it makes some sense.
If more information is needed please ask.
You can read your JSON using OPENJSON and a double CROSS APPLY, each with a WITH clause. Then you can use an UPDATE ... FROM to change the values in @people:
declare @people table (id int, [name] varchar(50))
insert into @people values
 (1, 'John')
,(2, 'Mary')
,(3, 'Jeff')
,(4, 'Bill')
,(5, 'Bob' )
declare @json table (id int, [json] nvarchar(max))
insert into @json values
 (1,'[{ "Column":"name","values": { "old": "Jeff", "new": "John"}}]')
,(2,'[{ "Column":"name","values": { "old": "Rose", "new": "Mary"}}]')
update p
set [name] = d.old
from @people p
inner join
(
    select id
         , c.old
         , c.new
    from @json a
    cross apply openjson([json]) with
    (
        [Column] nvarchar(50)
      , [values] nvarchar(MAX) as JSON
    ) b
    cross apply openjson(b.[values]) with
    (
        old nvarchar(50)
      , new nvarchar(50)
    ) c
) d
on p.id = d.id
Before the update, ids 1 and 2 hold 'John' and 'Mary'; after the update, they hold 'Jeff' and 'Rose' (the old values from the JSON).
I asked you some questions in a comment above:

"Two remarks: Am I correct, that you mixed old and new values? And am I correct, that the above is just a sample and you are looking for a generic solution, where the updates might affect different columns, maybe even more than one per row? At least the JSON would allow more elements in the object-array."
But - as a start - you can try this:
--mockup your set-up (thanks @Andrea, I used yours)
declare @people table (id int, [name] varchar(50))
insert into @people values
 (1, 'John')
,(2, 'Mary')
,(3, 'Jeff')
,(4, 'Bill')
,(5, 'Bob' )
declare @json table (id int, [json] nvarchar(max))
insert into @json values
 (1,'[{ "Column":"name","values": { "old": "Jeff", "new": "John"}}]')
,(2,'[{ "Column":"name","values": { "old": "Rose", "new": "Mary"}}]')
--This will - at least - return everything you need. The rest is - presumably - dynamic statement building and EXEC():
SELECT p.*
      ,A.[Column]
      ,JSON_VALUE(A.[values],'$.old') AS OldValue
      ,JSON_VALUE(A.[values],'$.new') AS NewValue
FROM @people p
INNER JOIN @json j ON p.id=j.id
CROSS APPLY OPENJSON(j.[json])
WITH([Column] VARCHAR(100), [values] NVARCHAR(MAX) AS JSON) A;
The result (old and new seem to be swapped):
id name Column OldValue NewValue
1 John name Jeff John
2 Mary name Rose Mary
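As a sketch of the "dynamic statement building and EXEC()" step hinted at above, the query below builds one UPDATE statement per archive row from the Column/old pairs. It assumes the data lives in permanent tables dbo.people and dbo.archive (my naming; table variables declared in the outer batch are not visible inside EXEC'd dynamic SQL), and it is illustrative rather than production-ready (no handling of NULLs or non-string columns):

```sql
-- Assumes permanent tables dbo.people(id, name) and dbo.archive(id, json)
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + N'UPDATE dbo.people SET ' + QUOTENAME(A.[Column])
    + N' = N''' + REPLACE(JSON_VALUE(A.[values], '$.old'), N'''', N'''''') + N''''
    + N' WHERE id = ' + CAST(ar.id AS nvarchar(10)) + N';' + CHAR(13)
FROM dbo.archive AS ar
CROSS APPLY OPENJSON(ar.[json])
     WITH ([Column] varchar(100), [values] nvarchar(max) AS JSON) AS A;

PRINT @sql;   -- inspect the generated statements before running them
EXEC (@sql);
```

QUOTENAME guards the column name and REPLACE doubles any embedded quotes in the value; because the array can hold several objects, this also naturally handles more than one column update per row.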

How to find next bigger / equal text in SQL Server

I have a table like this
id | name
------+-----------
1 | aaa
5 | aaa
2 | bbb
4 | bbb
10 | bbb
7 | ccc
9 | ccc
In my Windows Forms app, I need a "find next" button that will:
1. first find "aaa" with id 1
2. then find "aaa" with id 5
3. and then find "bbb" with id 2
I use this query
select
    min(name)
from
    [table]
where
    [name] >= @name
but it always returns "aaa",
and
select
    min(name)
from
    [table]
where
    [name] > @name
this does not return the other ids.
select top 1
    [name], [id]
from [table]
where ( [name] = @name
        and [id] > @id
      )
   or [name] > @name
order by [name], [id]
or
LEAD looks at the next row when the rows are ordered by the ORDER BY expressions within the OVER clause.
select [name], [next_id] as [id]
from (select [id], [name], lead([id]) over (order by [name], [id]) as [next_id]
      from [table]
     ) t
where [name] = @name
  and [id] = @id
You can sort the records and then on each button click just read the next record:
select name,
id
from Table
order by name,
id
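If you take this route but still want the server to hand back one row at a time, the same ordering can be paged with OFFSET ... FETCH (SQL Server 2012+). Here @skip is a hypothetical parameter holding how many rows the app has already consumed:

```sql
select [name], [id]
from [table]
order by [name], [id]
offset @skip rows fetch next 1 rows only;
```

Each button click increments @skip by one and re-runs the query.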
Alright, I think this is what you want:
create table TestTable(id int, name nvarchar(max))
GO
insert into TestTable
values
(1, 'aaa'),
(5, 'aaa'),
(2, 'bbb'),
(4, 'bbb'),
(10, 'bbb'),
(7, 'ccc'),
(9, 'ccc')
GO
create function dbo.FindNext(@lastResultOrdinal int) returns nvarchar(max)
as begin
    return (select name from TestTable order by name, id offset @lastResultOrdinal rows fetch next 1 rows only)
end
GO
create function dbo.LastOrdinalWasLast(@lastResultOrdinal int) returns bit
as begin
    if ((select count(id) from TestTable) = @lastResultOrdinal)
        return 1
    return 0
end
GO
--Fetching First Result:
select dbo.FindNext(0)
select dbo.LastOrdinalWasLast(1)
--Fetching Last Result:
select dbo.FindNext(6)
select dbo.LastOrdinalWasLast(7)
The dbo.FindNext function is supposed to do the work when given the ordinal (zero-based index + 1) of the current row, i.e. the number of rows already fetched; dbo.LastOrdinalWasLast checks whether there are any more rows to fetch.

How do I join an unknown number of rows to another row?

I have this scenario:
Table A:
---------------
ID| SOME_VALUE|
---------------
1 | 123223 |
2 | 1232ff |
---------------
Table B:
------------------
ID | KEY | VALUE |
------------------
23 | 1 | 435 |
24 | 1 | 436 |
------------------
KEY is a reference to Table A's ID. Can I somehow join these tables so that I get the following result:
Table C
-------------------------
ID| SOME_VALUE| | |
-------------------------
1 | 123223 |435 |436 |
2 | 1232ff | | |
-------------------------
Table C should be able to have any given number of columns depending on how many matching values that are found in Table B.
I hope this enough to explain what I'm after here.
Thanks.
You need to use a Dynamic PIVOT clause in order to do this.
EDIT:
Ok so I've done some playing around and based on the following sample data:
Create Table TableA
(
IDCol int,
SomeValue varchar(50)
)
Create Table TableB
(
IDCol int,
KEYCol int,
Value varchar(50)
)
Insert into TableA
Values (1, '123223')
Insert Into TableA
Values (2,'1232ff')
Insert into TableA
Values (3, '222222')
Insert Into TableB
Values( 23, 1, 435)
Insert Into TableB
Values( 24, 1, 436)
Insert Into TableB
Values( 25, 3, 45)
Insert Into TableB
Values( 26, 3, 46)
Insert Into TableB
Values( 27, 3, 435)
Insert Into TableB
Values( 28, 3, 437)
You can execute the following Dynamic SQL.
declare @sql varchar(max)
declare @pivot_list varchar(max)
declare @pivot_select varchar(max)
Select
    @pivot_list = Coalesce(@pivot_list + ', ', '') + '[' + Value + ']',
    @pivot_select = Coalesce(@pivot_select, ', ', '') + 'IsNull([' + Value + '],'''') as [' + Value + '],'
From
(
    Select distinct Value From dbo.TableB
) PivotCodes
Set @sql = '
;With p as (
    Select a.IdCol,
           a.SomeValue,
           b.Value
    From dbo.TableA a
    Left Join dbo.TableB b on a.IdCol = b.KeyCol
)
Select IdCol, SomeValue ' + Left(@pivot_select, Len(@pivot_select)-1) + '
From p
Pivot ( Max(Value) for Value in (' + @pivot_list + '
    )
) as pvt
'
exec (@sql)
This gives you the following output:
Although this works at the moment, it would be a nightmare to maintain. I'd recommend trying to achieve these results somewhere else, i.e. not in SQL!
Good luck!
As Barry has amply illustrated, it's possible to get multiple columns using a dynamic pivot.
I've got a solution that might get you what you need, except that it puts all of the values into a single VARCHAR column. If you can split those results, then you can get what you need.
This method is a trick in SQL Server 2005 that you can use to form a string out of a column of values.
CREATE TABLE #TableA (
ID INT,
SomeValue VARCHAR(50)
);
CREATE TABLE #TableB (
ID INT,
TableAKEY INT,
BValue VARCHAR(50)
);
INSERT INTO #TableA VALUES (1, '123223');
INSERT INTO #TableA VALUES (2, '1232ff');
INSERT INTO #TableA VALUES (3, '222222');
INSERT INTO #TableB VALUES (23, 1, 435);
INSERT INTO #TableB VALUES (24, 1, 436);
INSERT INTO #TableB VALUES (25, 3, 45);
INSERT INTO #TableB VALUES (26, 3, 46);
INSERT INTO #TableB VALUES (27, 3, 435);
INSERT INTO #TableB VALUES (28, 3, 437);
SELECT
a.ID
,a.SomeValue
,RTRIM(bvals.BValues) AS ValueList
FROM #TableA AS a
OUTER APPLY (
-- This has the effect of concatenating all of
-- the BValues for the given value of a.ID.
SELECT b.BValue + ' ' AS [text()]
FROM #TableB AS b
WHERE a.ID = b.TableAKEY
ORDER BY b.ID
FOR XML PATH('')
) AS bvals (BValues)
ORDER BY a.ID
;
You'll get this as a result:
ID SomeValue ValueList
--- ---------- --------------
1 123223 435 436
2 1232ff NULL
3 222222 45 46 435 437
This looks like something a database shouldn't do. First, a table cannot have an arbitrary number of columns depending on whatever you'll store, so you will have to set a maximum number of values anyway. You can get around this by using comma-separated values as the value for that cell (or a similar pivot-like solution).
However, if you do have tables A and B, I recommend keeping those two tables, as they seem to be pretty normalised. Should you need a list of b.value given an input a.some_value, the following SQL query gives that list:
select b.value from a, b where b.key = a.id and a.some_value = 'INPUT_VALUE';