Dynamically create tables and insert into them from another table with CSV values - SQL

I have a table with CSV values in the columns, as below:
ID Name text
1 SID,DOB 123,12/01/1990
2 City,State,Zip NewYork,NewYork,01234
3 SID,DOB 456,12/21/1990
What I need to get as output in this scenario is two tables with the corresponding values:
ID SID DOB
1 123 12/01/1990
3 456 12/21/1990
ID City State Zip
2 NewYork NewYork 01234
Is there any way of achieving this using a cursor or any other method in SQL Server?

There are several ways that this can be done. One way that I would suggest is to split the data from the comma-separated list into multiple rows.
Since you are using SQL Server, you could implement a recursive CTE to split the data, then apply a PIVOT function to create the columns that you want.
;with cte (id, NameItem, Name, textItem, text) as
(
select id,
cast(left(Name, charindex(',',Name+',')-1) as varchar(50)) NameItem,
stuff(Name, 1, charindex(',',Name+','), '') Name,
cast(left(text, charindex(',',text+',')-1) as varchar(50)) textItem,
stuff(text, 1, charindex(',',text+','), '') text
from yt
union all
select id,
cast(left(Name, charindex(',',Name+',')-1) as varchar(50)) NameItem,
stuff(Name, 1, charindex(',',Name+','), '') Name,
cast(left(text, charindex(',',text+',')-1) as varchar(50)) textItem,
stuff(text, 1, charindex(',',text+','), '') text
from cte
where Name > ''
and text > ''
)
select id, SID, DOB
into table1
from
(
select id, nameitem, textitem
from cte
where nameitem in ('SID', 'DOB')
) d
pivot
(
max(textitem)
for nameitem in (SID, DOB)
) piv;
See SQL Fiddle with Demo. The recursive version will work great, but if you have a large dataset you could run into performance issues, so you could also use a user-defined function to split the data:
create FUNCTION [dbo].[Split](@String1 varchar(MAX), @String2 varchar(MAX), @Delimiter char(1))
returns @temptable TABLE (colName varchar(MAX), colValue varchar(max))
as
begin
    declare @idx1 int
    declare @slice1 varchar(8000)
    declare @idx2 int
    declare @slice2 varchar(8000)

    select @idx1 = 1
    if len(@String1) < 1 or @String1 is null return

    while @idx1 != 0
    begin
        set @idx1 = charindex(@Delimiter, @String1)
        set @idx2 = charindex(@Delimiter, @String2)

        if @idx1 != 0
        begin
            set @slice1 = left(@String1, @idx1 - 1)
            set @slice2 = left(@String2, @idx2 - 1)
        end
        else
        begin
            set @slice1 = @String1
            set @slice2 = @String2
        end

        if (len(@slice1) > 0)
            insert into @temptable(colName, colValue) values(@slice1, @slice2)

        set @String1 = right(@String1, len(@String1) - @idx1)
        set @String2 = right(@String2, len(@String2) - @idx2)

        if len(@String1) = 0 break
    end
    return
end;
Then you can use a CROSS APPLY to get the result for each row:
select id, SID, DOB
into table1
from
(
select t.id,
c.colname,
c.colvalue
from yt t
cross apply dbo.split(t.name, t.text, ',') c
where c.colname in ('SID', 'DOB')
) src
pivot
(
max(colvalue)
for colname in (SID, DOB)
) piv;
See SQL Fiddle with Demo
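As a side note, and purely as a hedged sketch: on SQL Server 2022 or later, STRING_SPLIT accepts an enable_ordinal argument, which lets you pair each name with its value by position and skip both the recursive CTE and the user-defined function. Table and column names below follow the yt/table1 naming used above:
select id, SID, DOB
into table1
from
(
    -- ordinal = 1 asks STRING_SPLIT to return the position of each item,
    -- so name items and text items can be matched up pairwise
    select t.id, n.value as colname, v.value as colvalue
    from yt t
    cross apply string_split(t.Name, ',', 1) n
    cross apply string_split(t.text, ',', 1) v
    where n.ordinal = v.ordinal
      and n.value in ('SID', 'DOB')
) src
pivot
(
    max(colvalue)
    for colname in (SID, DOB)
) piv;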

You'd need to approach this as a multi-step ETL project. I'd probably start with exporting the two types of rows into a couple of staging tables. So, for example:
select * from yourtable /* rows that start with a number */
where substring(text,1,1) in
('0','1','2','3','4','5','6','7','8','9')
select * from yourtable /* rows that don't start with a number */
where substring(text,1,1)
not in ('0','1','2','3','4','5','6','7','8','9')
/* or simply this to follow your example explicitly */
select * from yourtable where name like 'sid%'
select * from yourtable where name like 'city%'
Once you get the two types separated, you can split them out with one of the many already-written split functions readily found on the interweb.
Aaron Bertrand (who is on here often) has written up a great post on the variety of ways to split comma-delimited strings using SQL. Each of the methods is compared and contrasted here:
http://www.sqlperformance.com/2012/07/t-sql-queries/split-strings
If your row count is minimal (under 50k, let's say) and this is going to be a one-time operation, then pick the easiest way and don't worry too much about all the performance numbers.
If you have a ton of rows or this is an ETL process that will run all the time then you'll really want to pay attention to that stuff.

A simple solution using cursors to build temporary tables. This has the limitation of making all columns VARCHAR and would be slow for large amounts of data.
--** Set up example data
DECLARE @Source TABLE (ID INT, Name VARCHAR(50), [text] VARCHAR(200));
INSERT INTO @Source
        (ID, Name, [text])
VALUES (1, 'SID,DOB', '123,12/01/1990')
     , (2, 'City,State,Zip', 'NewYork,NewYork,01234')
     , (3, 'SID,DOB', '456,12/21/1990');

--** Declare variables
DECLARE @Name VARCHAR(200) = '';
DECLARE @Text VARCHAR(1000) = '';
DECLARE @SQL VARCHAR(MAX);

--** Set up cursor for the tables
DECLARE cursor_table CURSOR FAST_FORWARD READ_ONLY FOR
    SELECT s.Name
    FROM @Source AS s
    GROUP BY Name;

OPEN cursor_table
FETCH NEXT FROM cursor_table INTO @Name;

WHILE @@FETCH_STATUS = 0
BEGIN
    --** Dynamically create a temp table with the specified columns
    SET @SQL = 'CREATE TABLE ##Table (' + REPLACE(@Name, ',', ' VARCHAR(50),') + ' VARCHAR(50));';
    EXEC(@SQL);

    --** Set up cursor to insert the rows
    DECLARE row_cursor CURSOR FAST_FORWARD READ_ONLY FOR
        SELECT s.[text]
        FROM @Source AS s
        WHERE Name = @Name;

    OPEN row_cursor;
    FETCH NEXT FROM row_cursor INTO @Text;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        --** Dynamically insert the row
        SELECT @SQL = 'INSERT INTO ##Table VALUES (''' + REPLACE(@Text, ',', ''',''') + ''');';
        EXEC(@SQL);
        FETCH NEXT FROM row_cursor INTO @Text;
    END

    --** Display the table
    SELECT *
    FROM ##Table;

    --** Housekeeping
    CLOSE row_cursor;
    DEALLOCATE row_cursor;
    DROP TABLE ##Table;

    FETCH NEXT FROM cursor_table INTO @Name;
END

CLOSE cursor_table;
DEALLOCATE cursor_table;
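The loop above only displays each ##Table before dropping it. If, as in the original question, the goal is to end up with persistent tables (table1, table2, and so on), a hedged variation could replace the display step inside the loop with a dynamic SELECT ... INTO; the tbl_ naming convention here is an assumption:
--** Persist the group into its own table instead of (or before) displaying it
DECLARE @TableName SYSNAME = 'tbl_' + REPLACE(@Name, ',', '_');  -- e.g. tbl_SID_DOB
SET @SQL = 'SELECT * INTO ' + QUOTENAME(@TableName) + ' FROM ##Table;';
EXEC(@SQL);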

Related

Compare two columns' values in a table and show the matching values in a new column (column values are comma-separated)

Here I want to compare the comma-separated values from the columns roles and userroles, then show the matching values in another column.
I found an example (compare comma separated values in sql) and used a cursor to iterate over the rows one by one. It's working for me, but I feel there should be a better way to do it.
Any help is much appreciated.
Create table #temp4
(
user_id int,
permission_id int,
roles varchar(max),
userroles varchar(max),
matchingrolesinthisrow varchar(max))
Insert Into #temp4 values
( 1, -12010, '2341,8760,3546', '1000,1001,1002,1003', null),
( 1, -334, '1002,1001,3467', '2341,1002,3467', null),
( 2, -12349, '9876,9982,6548', '1001,1002,2341', null)
Below is the result table I am looking for:
user_id  permission_id  roles           userroles            matchingrolesinthisrow
1        -12010         2341,8760,3546  1000,1001,1002,1003
1        -334           1002,1001,3467  2341,1002,3467       1002,3467
2        -12349         9876,9982,6548  1001,1002,2341
My attempt so far is below, and it's working; please guide me to do this in a better way.
DECLARE @user_id INT
DECLARE @permission_id INT
DECLARE @roles VARCHAR(MAX)
DECLARE @userroles VARCHAR(MAX)
DECLARE @matchingrolesinthisrow VARCHAR(MAX)

declare cur CURSOR LOCAL for
    select user_id, permission_id, roles, userroles, matchingrolesinthisrow from #temp4 order by 1

open cur
fetch next from cur into @user_id, @permission_id, @roles, @userroles, @matchingrolesinthisrow

while @@FETCH_STATUS = 0 BEGIN
    print (@roles)
    print (@userroles)

    --execute on each row
    UPDATE #temp4
    SET matchingrolesinthisrow = T1.[Item]
    FROM [developers].[Split](@roles, ',') AS T1
    INNER JOIN [developers].[Split](@userroles, ',') AS T2 on T1.[Item] = T2.[Item]
    Where roles = @roles and userroles = @userroles and permission_id = @permission_id and user_id = @user_id

    fetch next from cur into @user_id, @permission_id, @roles, @userroles, @matchingrolesinthisrow
END

close cur
deallocate cur
--Split function
CREATE FUNCTION [developers].[Split]
(
    @s VARCHAR(max),
    @split CHAR(1)
)
RETURNS @temptable TABLE ([Item] VARCHAR(MAX))
AS
BEGIN
    DECLARE @x XML
    SELECT @x = CONVERT(xml, '<root><s>' + REPLACE(@s, @split, '</s><s>') + '</s></root>');

    INSERT INTO @temptable
    SELECT [Value] = T.c.value('.', 'varchar(20)')
    FROM @x.nodes('/root/s') T(c);

    RETURN
END;
Ideally, you should store each individual piece of information in a separate row, so you would have two separate tables, roles and userroles, which are foreign-keyed to this one.
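A minimal sketch of that normalized shape, purely as an illustration (the column names here are assumptions; only the table names come from the suggestion above):
-- One row per role value instead of a comma-separated list
CREATE TABLE roles (
    user_id       int NOT NULL,
    permission_id int NOT NULL,
    role_id       int NOT NULL
);

CREATE TABLE userroles (
    user_id int NOT NULL,
    role_id int NOT NULL
);

-- "matching roles" then becomes a plain join, with no string splitting at all
SELECT r.user_id, r.permission_id, r.role_id
FROM roles r
JOIN userroles ur ON ur.user_id = r.user_id AND ur.role_id = r.role_id;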
Be that as it may, this does not need cursors. You can use STRING_SPLIT and STRING_AGG to get the result you want very easily:
SELECT
t4.user_id,
t4.permission_id,
t4.roles,
t4.userroles,
matchingrolesinthisrow = (
SELECT STRING_AGG(r.value, ',')
FROM STRING_SPLIT(t4.roles, ',') r
JOIN STRING_SPLIT(t4.userroles, ',') ur ON ur.value = r.value
)
FROM #temp4 t4;
SQL Fiddle
If you are on an earlier version of SQL Server, you may have to use custom table-valued functions to do this: STRING_AGG requires SQL Server 2017, and STRING_SPLIT itself requires SQL Server 2016, so on anything older you would substitute a split function such as [developers].[Split] above.
Here is the shared answer modified to avoid STRING_AGG by using STUFF with FOR XML PATH (this still relies on STRING_SPLIT, so it needs SQL Server 2016 or later):
SELECT
t4.user_id,
t4.permission_id,
t4.roles,
t4.userroles,
STUFF((SELECT N',' + CONVERT(nvarchar(2000),r.value)
FROM STRING_SPLIT(t4.roles, ',') r
INNER JOIN STRING_SPLIT(t4.userroles, ',') ur ON ur.value = r.value
FOR XML PATH(N''), TYPE).value(N'.[1]', N'nvarchar(max)'), 1, 1, N'') as matchingrolesinthisrow
FROM #temp4 t4;

How to reverse the strings in one of the columns in SQL Server? [closed]

I want all the entries in the name column reversed and stored in another column, in SQL Server 2008.
I do not want to use the built-in REVERSE function.
I want to do it using loops.
name reversename
---------- ----------
john nhoj
kevin nivek
paul luap
Table structure:
contractor (contractno, name, email, phoneno)
I only want each value in the name column reversed.
I have tried this:
DECLARE @revString VARCHAR(55) = ''
DECLARE @string2 VARCHAR(55) = (SELECT NAME FROM CONTRACTOR)
DECLARE @ln INT = LEN(@string2)

WHILE @ln > 0
BEGIN
    SET @revString = @revString + SUBSTRING(@string2, @ln, 1)
    SET @ln = @ln - 1
END

SELECT @string2, @revString, @ln
I found a solution using loops too:
DECLARE @NAME VARCHAR(MAX)
DECLARE @REVERSE TABLE(
    Name VARCHAR(MAX),
    ReverseName VARCHAR(MAX))

DECLARE NAME_CURSOR CURSOR FOR
    SELECT DISTINCT NAME FROM CONTRACTOR

OPEN NAME_CURSOR
FETCH NEXT FROM NAME_CURSOR INTO @NAME

WHILE @@FETCH_STATUS = 0
BEGIN
    DECLARE @revString VARCHAR(55) = ''
    DECLARE @string2 VARCHAR(55) = @NAME
    DECLARE @ln INT = LEN(@string2)

    WHILE @ln > 0
    BEGIN
        SET @revString = @revString + SUBSTRING(@string2, @ln, 1)
        SET @ln = @ln - 1
    END

    INSERT INTO @REVERSE VALUES (@string2, @revString)
    FETCH NEXT FROM NAME_CURSOR INTO @NAME
END

SELECT * FROM @REVERSE

CLOSE NAME_CURSOR
DEALLOCATE NAME_CURSOR
OK, clearly a homework assignment. Syntax and techniques aside, the take-away here is that you should REALLY avoid loops when possible... think data sets.
Consider the following
1) Subquery B1 will create a record for each character in the string by using an ad-hoc tally table (a permanent tally/numbers table would do the trick as well). The internal/temporary results are one row per character, carrying its position N and the character C.
2) The XML Path portion in the Cross Apply B will consolidate the records in DESCENDING order of N
Example
Declare @YourTable Table ([name] varchar(50))
Insert Into @YourTable Values
 ('john')
,('kevin')
,('paul')

Select A.Name
      ,ReverseName = B.NewString
From  @YourTable A
Cross Apply (
    Select NewString = Stuff((Select '' + C
                              From (
                                    Select N, C = substring(A.Name, N, 1)
                                    From (Select Top (len(A.Name)) N = Row_Number() Over (Order By (Select NULL)) From master..spt_values) A1
                                   ) B1
                              Order By N Desc
                              FOR XML PATH(''), TYPE).value('(./text())[1]', 'NVARCHAR(MAX)')
                             ,1,0,'')
            ) B
Final results: each name paired with its reversal (john -> nhoj, kevin -> nivek, paul -> luap).
This uses a kind of loop as well, by joining with a tally table and concatenating the values in reversed order.
DECLARE @test table(name varchar(10))
INSERT @test values('John'),('Tom Jones')

;WITH N(N) AS
(
    SELECT 1
    FROM (VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) M(N)
),
tally(N) AS
(
    SELECT ROW_NUMBER() OVER(ORDER BY N.N)
    FROM N, N a, N b, N c, N d, N e, N f
)
SELECT *
FROM @test t
CROSS APPLY
(
    SELECT
    (
        SELECT z
        FROM (SELECT substring(name, N, 1) z, N
              FROM tally
              WHERE N <= LEN(name)) t1
        ORDER BY N DESC
        FOR xml path(''), type
    ).value('.', 'varchar(max)') z
) y
Create this function
CREATE FUNCTION [dbo].[revString] (@input VARCHAR(250))
RETURNS VARCHAR(250)
AS BEGIN
    DECLARE @strCount int;
    DECLARE @revStr varchar(250) = '';
    DECLARE @cnt int;

    SET @strCount = LEN(@input);
    SET @cnt = @strCount;

    WHILE @cnt > 0
    BEGIN
        SET @revStr = @revStr + substring(@input, @cnt, 1);
        SET @cnt = @cnt - 1;
    END

    RETURN @revStr
END

select dbo.revString('ASSDE') --Results (EDSSA)
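Applied to the contractor table from the question, usage would presumably look like this (reversename here is just a column alias, not an existing column):
SELECT name, dbo.revString(name) AS reversename
FROM contractor;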

Finding Uppercase Character then Adding Space

I bought a SQL World City/State database. In the state database it has the state names pushed together. Example: "NorthCarolina", or "SouthCarolina"...
Is there a way in SQL to loop through, find the uppercase characters, and add a space, so that "NorthCarolina" becomes "North Carolina"?
Create this function
if object_id('dbo.SpaceBeforeCaps') is not null
    drop function dbo.SpaceBeforeCaps
GO
create function dbo.SpaceBeforeCaps(@s varchar(100)) returns varchar(100)
as
begin
    declare @return varchar(100);
    set @return = left(@s, 1);

    declare @i int;
    set @i = 2;

    while @i <= len(@s)
    begin
        if ASCII(substring(@s, @i, 1)) between ASCII('A') and ASCII('Z')
            set @return = @return + ' ' + substring(@s, @i, 1)
        else
            set @return = @return + substring(@s, @i, 1)
        set @i = @i + 1;
    end;

    return @return;
end;
GO
Then you can use it to update your database
update tbl set statename = dbo.SpaceBeforeCaps(statename);
There are a couple of ways to approach this:
Construct a function using a pattern and the PATINDEX feature (a sketch of this appears below).
Chain minimal REPLACE statements for each case (e.g. REPLACE(state_name, 'hC', 'h C') for your example case). This is kind of a hack, but might actually give you the best performance, since you have such a small set of replacements.
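As a hedged sketch of the first (PATINDEX) option, not taken verbatim from anywhere above: a binary collation lets PATINDEX find each lowercase-then-uppercase boundary case-sensitively, and STUFF inserts the space; this could form the body of such a function.
DECLARE @s varchar(100) = 'NorthCarolina';
-- find a lowercase letter immediately followed by an uppercase letter
DECLARE @p int = PATINDEX('%[a-z][A-Z]%', @s COLLATE Latin1_General_BIN);
WHILE @p > 0
BEGIN
    SET @s = STUFF(@s, @p + 1, 0, ' ');   -- insert a space between the two letters
    SET @p = PATINDEX('%[a-z][A-Z]%', @s COLLATE Latin1_General_BIN);
END
SELECT @s;   -- North Carolina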
If you absolutely cannot create functions and need this as a one-off, you can use a recursive CTE to break the string up (and add the space at the same time where required), then recombine the characters using FOR XML. Elaborate example below:
-- some sample data
create table #tmp (id int identity primary key, statename varchar(100));
insert #tmp select 'NorthCarolina';
insert #tmp select 'SouthCarolina';
insert #tmp select 'NewSouthWales';
-- the complex query updating the "statename" column in the "#tmp" table
;with cte(id,seq,char,rest) as (
select id,1,cast(left(statename,1) as varchar(2)), stuff(statename,1,1,'')
from #tmp
union all
select id,seq+1,case when ascii(left(rest,1)) between ascii('A') and ascii('Z')
then ' ' else '' end + left(rest,1)
, stuff(rest,1,1,'')
from cte
where rest > ''
), recombined as (
select a.id, (select b.char+''
from cte b
where a.id = b.id
order by b.seq
for xml path, type).value('/','varchar(100)') fixed
from cte a
group by a.id
)
update t
set statename = c.fixed
from #tmp t
join recombined c on c.id = t.id
where statename != c.fixed;
-- check the result
select * from #tmp
id          statename
----------- ---------------
1           North Carolina
2           South Carolina
3           New South Wales

Compare two list items

I am trying to compare a database field which stores a comma-separated list of items with a variable which is, unfortunately, also a comma-separated list.
Example:
In this case, a user can belong to multiple groups, and content access is also allocated to multiple groups.
contentid | group
(1) (c,d)
(2) (a,c)
(3) (b)
So, I need to select all content where user is in group (a,c). In this case, contentid 1,2 should be returned.
Here's a safe but slow solution for SQL 2008
BEGIN
    -- setup
    DECLARE @tbl TABLE (
        [contentid] INT
       ,[group] VARCHAR(MAX)
    )
    INSERT INTO @tbl VALUES
     (1, 'c,d')
    ,(2, 'a,c')
    ,(3, 'd')

    -- send your request as simple xml
    DECLARE @param XML
    SET @param = '<g>a</g><g>c</g>'

    -- query
    SELECT DISTINCT contentid
    FROM @tbl t
    INNER JOIN @param.nodes('/g') AS t2(g)
        ON ',' + t.[group] + ',' LIKE '%,' + t2.g.value('.', 'varchar(max)') + ',%'
END
You just pass your query in as an XML snippet instead of a comma separated list.
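If the caller only has a comma-separated list to start with, one hedged way to build that XML snippet is the same REPLACE trick used by the Split function earlier on this page (assuming the group names contain no characters that need XML escaping):
DECLARE @groups VARCHAR(MAX) = 'a,c';
DECLARE @param XML = CAST('<g>' + REPLACE(@groups, ',', '</g><g>') + '</g>' AS XML);
-- @param is now <g>a</g><g>c</g>, ready for the query above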
If your group names are single characters, or you can be sure the names are not character subsets of each other (e.g. GroupA, GroupAB), then the join condition can be simplified to:
ON t.[group] LIKE '%' + t2.g.value('.', 'varchar(max)') + '%'
If you're using an RDBMS without XML parsing capability, you'll have to string-split your query into a temp table and work it that way.
You really should not be using comma-separated values inside your columns. It would be much better if the [group] column only contained one value and you had repeated entries with a UNIQUE constraint on the composite (contentid, group).
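A minimal sketch of that design, purely as an illustration (the table name content_group and the column sizes are assumptions); the original question then becomes a simple join or IN against the caller's group list:
CREATE TABLE content_group (
    contentid INT NOT NULL,
    [group]   VARCHAR(10) NOT NULL,
    CONSTRAINT UQ_content_group UNIQUE (contentid, [group])
);

-- "all content where the user is in group (a, c)"
SELECT DISTINCT contentid
FROM content_group
WHERE [group] IN ('a', 'c');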
You might find this question and answer useful: How do I split a string so I can access item x?
Or you could always use something like this:
create function SplitString(
    @string varchar(max),
    @delimiter char(1)
)
returns @items table (item varchar(max))
as
begin
    declare @index int set @index = 0
    if (@delimiter is null) set @delimiter = ','

    declare @prevdelimiter int set @prevdelimiter = 0

    while (@index < len(@string)) begin
        if (substring(@string, @index, 1) = @delimiter) begin
            insert into @items
            select substring(@string, @prevdelimiter, @index - @prevdelimiter)
            set @prevdelimiter = @index + 1
        end
        set @index = @index + 1
    end

    --last item (or only item, if there were no delimiters)
    insert into @items
    select substring(@string, @prevdelimiter, @index - @prevdelimiter + 1)

    return
end
go
declare @content table(contentid int, [group] varchar(max))

insert into @content
select 1, 'c,d'
union
select 2, 'a,c'
union
select 3, 'b'

declare @groups varchar(max) set @groups = 'a,c'

declare @grouptable table(item varchar(max))
insert into @grouptable
select * from dbo.SplitString(@groups, ',')

select * From @content
where (select count(*) from @grouptable g1 join dbo.SplitString([group], ',') g2 on g1.item = g2.item) > 0

Inserting records from a table with delimited strings

I have a table structure that contains an identifier column and a column that contains a delimited string. What I would like to achieve is to insert the delimited string into a new table as individual records, one for each of the values in the split string.
My table structure for the source table is as follows:
CREATE TABLE tablea(personID VARCHAR(8), delimStr VARCHAR(100))
Some sample data:
INSERT INTO tablea (personID, delimStr) VALUES ('A001','Monday, Tuesday')
INSERT INTO tablea (personID, delimStr) VALUES ('A002','Monday, Tuesday, Wednesday')
INSERT INTO tablea (personID, delimStr) VALUES ('A003','Monday')
My destination table is as follows:
CREATE TABLE tableb(personID VARCHAR(8), dayName VARCHAR(10))
I am attempting to create a stored procedure to undertake the insert; my SP so far looks like this:
CREATE PROCEDURE getTKWorkingDays
    @pos integer = 1
  , @previous_pos integer = 0
AS
BEGIN
    DECLARE @value varchar(50)
          , @string varchar(100)
          , @ttk varchar(8)

    WHILE @pos > 0
    BEGIN
        SELECT @ttk = personID
             , @string = delimStr
        FROM dbo.tablea

        SET @pos = CHARINDEX(',', @string, @previous_pos + 1)
        IF @pos > 0
        BEGIN
            SET @value = SUBSTRING(@string, @previous_pos + 1, @pos - @previous_pos - 1)
            INSERT INTO dbo.tableb ( personID, dayName ) VALUES ( @ttk, @value )
            SET @previous_pos = @pos
        END
    END

    IF @previous_pos < LEN(@string)
    BEGIN
        SET @value = SUBSTRING(@string, @previous_pos + 1, LEN(@string))
        INSERT INTO dbo.tableb ( personID, dayName ) VALUES ( @ttk, @value )
    END
END
The data that was inserted was incorrect (only 1 record, out of the 170 or so in the original table, which after splitting the delimited strings should result in about 600 or so records in the new table).
What I am expecting to see using the sample data above is:
personID dayName
A001 Monday
A001 Tuesday
A002 Monday
A002 Tuesday
A002 Wednesday
A003 Monday
Is anyone able to point out any resources or identify where I am going wrong, and how to make this query work?
The Database is MS SQL Server 2000.
I thank you in advance for any assistance you are able to provide.
Matt
Well your SELECT statement which gets the "next" person doesn't have a WHERE clause, so I'm not sure how SQL Server will know to move to the next person. If this is a one-time task, why not use a cursor?
CREATE TABLE #n(n INT PRIMARY KEY);
INSERT #n(n) SELECT TOP 100 number FROM [master].dbo.spt_values
    WHERE number > 0 GROUP BY number ORDER BY number;

DECLARE
    @PersonID VARCHAR(8), @delimStr VARCHAR(100),
    @str VARCHAR(100), @c CHAR(1);

DECLARE c CURSOR LOCAL FORWARD_ONLY STATIC READ_ONLY
    FOR SELECT PersonID, delimStr FROM dbo.tablea;

OPEN c;
FETCH NEXT FROM c INTO @PersonID, @delimStr;
SET @c = ',';

WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT @delimStr = @c + @delimStr + @c;

    -- INSERT dbo.tableb(personID, [dayName])
    SELECT @PersonID, LTRIM(SUBSTRING(@delimStr, n+1, CHARINDEX(@c, @delimStr, n+1)-n-1))
    FROM #n AS n
    WHERE n.n <= LEN(@delimStr) - 1
    AND SUBSTRING(@delimStr, n.n, 1) = @c;

    FETCH NEXT FROM c INTO @PersonID, @delimStr;
END

CLOSE c;
DEALLOCATE c;
DROP TABLE #n;
If you create a permanent numbers table (with more than 100 rows, obviously) you can use it for many purposes. You could create a split function that allows you to do the above without a cursor (well, without an explicit cursor). But this would probably work best later, when you finally get off of SQL Server 2000. Newer versions of SQL Server have much more flexible and extensible ways of performing splitting and joining.
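For illustration only, here is a hedged sketch of that cursor-free shape, assuming a permanent numbers table dbo.Numbers(n) exists (it is not part of the question's schema); tablea and tableb are as defined in the question, and only SQL Server 2000-era functions are used:
-- Prepend a delimiter so every item is preceded by a comma, then take the
-- text between each comma and the next one.
INSERT dbo.tableb (personID, [dayName])
SELECT a.personID,
       LTRIM(SUBSTRING(',' + a.delimStr, n.n + 1,
             CHARINDEX(',', a.delimStr + ',', n.n) - n.n))
FROM dbo.tablea AS a
JOIN dbo.Numbers AS n
  ON n.n <= LEN(',' + a.delimStr)
 AND SUBSTRING(',' + a.delimStr, n.n, 1) = ',';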