My table data looks like this:

Sno  Component  Subcomponent  IRNo
1    1          C1 to C100    001
2    1          C101 to C200  002
3    1          C201 to C300  003
4    1          C301,C400     004
5    1          C401,C500     005
If the user enters C50 into the textbox, it should return the data from the first row, because C50 falls between C1 and C100 (C1 to C100).
Likewise, if the user enters C340, it should return the data from Sno 4, because C340 falls between C301 and C400 (C301,C400).
How can I write the query for this in SQL Server?
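For anyone who wants to reproduce the setup, here is a minimal sketch of the table and sample rows (the table name T and the column types are my assumptions; the answers below refer to the columns by slightly different names such as Comp/SubComp or Subcomponent):

CREATE TABLE T (
    Sno          int,
    Component    int,
    Subcomponent varchar(50),
    IRNo         varchar(10)
);

INSERT INTO T (Sno, Component, Subcomponent, IRNo) VALUES
    (1, 1, 'C1 to C100',   '001'),
    (2, 1, 'C101 to C200', '002'),
    (3, 1, 'C201 to C300', '003'),
    (4, 1, 'C301,C400',    '004'),
    (5, 1, 'C401,C500',    '005');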
This is a terrible design and should be replaced with a better one if possible.
If re-designing is not possible, then this answer by Eduard Uta is a good one, but it still has one drawback compared to my suggested solution:
It assumes that the Subcomponent will always contain exactly one letter followed by a number, and that the range specified in the table has the same letter on both sides. A range like AB1 to AC100 might be possible (at least I don't think there's a way to prevent it using pure T-SQL).
This is the only reason I present my solution as well. Eduard already got my vote up.
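To make the idea concrete before the full query, here is a minimal sketch (my addition, not part of the original answer) of how PATINDEX can split a value such as 'AB150' into its letter prefix and numeric part; the variable name @v is just for illustration:

DECLARE @v varchar(50) = 'AB150';

-- PATINDEX finds the position of the first digit (3 here),
-- so LEFT gives the letter prefix and RIGHT the numeric remainder.
SELECT LEFT(@v, PATINDEX('%[0-9]%', @v) - 1)                           AS LetterPart, -- 'AB'
       CAST(RIGHT(@v, LEN(@v) - (PATINDEX('%[0-9]%', @v) - 1)) AS int) AS NumberPart; -- 150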
DECLARE @Var varchar(50) = 'C50'
-- also try 'AB150' and 'C332'

;WITH CTE AS (
    SELECT Sno, Comp, SubComp,
           LEFT(FromValue, PATINDEX('%[0-9]%', FromValue) - 1) AS FromLetter,
           CAST(RIGHT(FromValue, LEN(FromValue) - (PATINDEX('%[0-9]%', FromValue) - 1)) AS int) AS FromNumber,
           LEFT(ToValue, PATINDEX('%[0-9]%', ToValue) - 1) AS ToLetter,
           CAST(RIGHT(ToValue, LEN(ToValue) - (PATINDEX('%[0-9]%', ToValue) - 1)) AS int) AS ToNumber
    FROM
    (
        SELECT Sno, Comp, SubComp,
               LEFT(SubComp,
                    CASE WHEN CHARINDEX(' to ', SubComp) > 0 THEN
                              CHARINDEX(' to ', SubComp) - 1
                         WHEN CHARINDEX(',', SubComp) > 0 THEN
                              CHARINDEX(',', SubComp) - 1
                    END
               ) AS FromValue,
               RIGHT(SubComp,
                     CASE WHEN CHARINDEX(' to ', SubComp) > 0 THEN
                               LEN(SubComp) - (CHARINDEX(' to ', SubComp) + 3)
                          WHEN CHARINDEX(',', SubComp) > 0 THEN
                               CHARINDEX(',', SubComp) - 1
                     END
               ) AS ToValue
        FROM T
    ) InnerQuery
)
SELECT Sno, Comp, SubComp
FROM CTE
WHERE LEFT(@Var, PATINDEX('%[0-9]%', @Var) - 1) BETWEEN FromLetter AND ToLetter
  AND CAST(RIGHT(@Var, LEN(@Var) - (PATINDEX('%[0-9]%', @Var) - 1)) AS int) BETWEEN FromNumber AND ToNumber
sqlfiddle here
No comments about the design. One solution for your question is using a CTE to sanitize the range boundaries and get them into a format that you can work with, like so:
DECLARE @inputVal varchar(100) = 'C340'

-- sanitize input:
SELECT @inputVal = RIGHT(@inputVal, (LEN(@inputVal) - 1))

;WITH cte (Sno,
           SubcomponentStart,
           SubcomponentEnd,
           IRNo
          )
AS
(
    SELECT
        Sno,
        CASE WHEN Subcomponent LIKE '%to%'
             THEN REPLACE(SUBSTRING(Subcomponent, 2, CHARINDEX('to', Subcomponent)), 'to', '')
             ELSE REPLACE(SUBSTRING(Subcomponent, 2, CHARINDEX(',', Subcomponent)), ',', '')
        END AS SubcomponentStart,
        CASE WHEN Subcomponent LIKE '%to%'
             THEN REPLACE(SUBSTRING(Subcomponent, CHARINDEX('to', Subcomponent) + 4, LEN(Subcomponent)), 'to', '')
             ELSE REPLACE(SUBSTRING(Subcomponent, CHARINDEX(',', Subcomponent) + 3, LEN(Subcomponent)), ',', '')
        END AS SubcomponentEnd,
        IRNo
    FROM test
)
SELECT t.*
FROM test t
INNER JOIN cte c
    ON t.Sno = c.Sno
WHERE CAST(@inputVal AS int) BETWEEN CAST(c.SubcomponentStart AS INT) AND CAST(c.SubcomponentEnd AS INT)
SQL Fiddle / tested here: http://sqlfiddle.com/#!6/1b9f0/19
For example, say you receive the user's entry in the variable @UserEntry, and the entered value is 'C5'.
-- Start From Here --
SET @UserEntry = SUBSTRING(@UserEntry, 2, LEN(@UserEntry) - 1)

SELECT *
FROM <tablename>
WHERE CONVERT(int, @UserEntry) >= CONVERT(int, SUBSTRING(Subcomponent, 2, CHARINDEX('to', Subcomponent, 1) - 2))
  AND CONVERT(int, @UserEntry) <= CONVERT(int, SUBSTRING(Subcomponent, CHARINDEX('c', Subcomponent, 2) + 1, LEN(Subcomponent) - CHARINDEX('c', Subcomponent, 3)))
Say I have a column in a database that consists of a comma separated list of IDs (please don't ask why :( ), i.e. a column like this:
id | ids
----------
1 | 1,3,4
2 | 2
3 | 1,2,5
And a table the ids relate to:
id | thing
---------------
1 | fish
2 | elephant
3 | monkey
4 | mongoose
5 | kiwi
How can I select a comma-separated list of the things, based on an id in the first table? For instance, selecting 1 would give me 'fish,monkey,mongoose', 3 would give me 'fish,elephant,kiwi', etc.
Thanks!
Try this
SELECT ID, things = STUFF(
    (
        SELECT ',' + t2.thing
        FROM Table2 AS t2
        INNER JOIN Table1 AS ti
            ON ',' + ti.ids + ',' LIKE '%,' + CONVERT(VARCHAR(12), t2.id) + ',%'
        WHERE ti.ID = tout.ID
        FOR XML PATH, TYPE
    ).value('.[1]', 'nvarchar(max)'), 1, 1, '')
FROM Table1 AS tout
ORDER BY ID
SQL FIDDLE DEMO
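A short illustration (my addition, not part of the original answer) of why the ids string is padded with commas on both sides in the LIKE comparison: it guarantees that only whole ids match, so 3 is found in '1,3,4' while 33 is not falsely matched.

SELECT CASE WHEN ',1,3,4,' LIKE '%,3,%'  THEN 'match' ELSE 'no match' END AS finds_3,
       CASE WHEN ',1,3,4,' LIKE '%,33,%' THEN 'match' ELSE 'no match' END AS finds_33;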
Basically this will be the whole query:
WITH CTE AS
(
    SELECT t1.id, t2.thing
    FROM Table1 t1
    CROSS APPLY dbo.DelimitedSplit8K(ids, ',') x
    INNER JOIN Table2 t2 ON x.item = t2.id
)
SELECT DISTINCT id,
       STUFF((SELECT ',' + c1.thing
              FROM CTE c1
              WHERE c1.id = c2.id
              FOR XML PATH ('')
             ), 1, 1, '') AS things
FROM CTE c2
But first you may notice that I have used the DelimitedSplit8K function for splitting. It is available from SQLServerCentral - http://www.sqlservercentral.com/articles/Tally+Table/72993/ -
but I will post the code below. You can use any other splitting function as well, but this one is really good and fast.
The other steps I have already mentioned in the comments: after splitting, we JOIN to the other table to get the names and then use STUFF and FOR XML PATH to concatenate the names back into one string.
SQLFiddleDEMO
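As a side note (not part of the original answer): if you are on SQL Server 2016 or later, the built-in STRING_SPLIT function can take the place of the custom splitter, although it does not guarantee the order of the returned items.

-- STRING_SPLIT returns one row per item, in a column named "value" (SQL Server 2016+)
SELECT value FROM STRING_SPLIT('1,3,4', ',');

-- In the query above it would roughly replace the DelimitedSplit8K call:
--     CROSS APPLY STRING_SPLIT(t1.ids, ',') x
--     INNER JOIN Table2 t2 ON x.value = t2.id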
Splitting function:
CREATE FUNCTION [dbo].[DelimitedSplit8K]
/**********************************************************************************************************************
Purpose:
Split a given string at a given delimiter and return a list of the split elements (items).
Notes:
1. Leading and trailing delimiters are treated as if an empty string element were present.
2. Consecutive delimiters are treated as if an empty string element were present between them.
3. Except when spaces are used as a delimiter, all spaces present in each element are preserved.
Returns:
iTVF containing the following:
ItemNumber = Element position of Item as a BIGINT (not converted to INT to eliminate a CAST)
Item = Element value as a VARCHAR(8000)
Statistics on this function may be found at the following URL:
http://www.sqlservercentral.com/Forums/Topic1101315-203-4.aspx
CROSS APPLY Usage Examples and Tests:
--=====================================================================================================================
-- TEST 1:
-- This tests for various possible conditions in a string using a comma as the delimiter. The expected results are
-- laid out in the comments
--=====================================================================================================================
--===== Conditionally drop the test tables to make reruns easier for testing.
-- (this is NOT a part of the solution)
IF OBJECT_ID('tempdb..#JBMTest') IS NOT NULL DROP TABLE #JBMTest
;
--===== Create and populate a test table on the fly (this is NOT a part of the solution).
-- In the following comments, "b" is a blank and "E" is an element in the left to right order.
-- Double Quotes are used to encapsulate the output of "Item" so that you can see that all blanks
-- are preserved no matter where they may appear.
SELECT *
INTO #JBMTest
FROM ( --# & type of Return Row(s)
SELECT 0, NULL UNION ALL --1 NULL
SELECT 1, SPACE(0) UNION ALL --1 b (Empty String)
SELECT 2, SPACE(1) UNION ALL --1 b (1 space)
SELECT 3, SPACE(5) UNION ALL --1 b (5 spaces)
SELECT 4, ',' UNION ALL --2 b b (both are empty strings)
SELECT 5, '55555' UNION ALL --1 E
SELECT 6, ',55555' UNION ALL --2 b E
SELECT 7, ',55555,' UNION ALL --3 b E b
SELECT 8, '55555,' UNION ALL --2 E b
SELECT 9, '55555,1' UNION ALL --2 E E
SELECT 10, '1,55555' UNION ALL --2 E E
SELECT 11, '55555,4444,333,22,1' UNION ALL --5 E E E E E
SELECT 12, '55555,4444,,333,22,1' UNION ALL --6 E E b E E E
SELECT 13, ',55555,4444,,333,22,1,' UNION ALL --8 b E E b E E E b
SELECT 14, ',55555,4444,,,333,22,1,' UNION ALL --9 b E E b b E E E b
SELECT 15, ' 4444,55555 ' UNION ALL --2 E (w/Leading Space) E (w/Trailing Space)
SELECT 16, 'This,is,a,test.' --E E E E
) d (SomeID, SomeValue)
;
--===== Split the CSV column for the whole table using CROSS APPLY (this is the solution)
SELECT test.SomeID, test.SomeValue, split.ItemNumber, Item = QUOTENAME(split.Item,'"')
FROM #JBMTest test
CROSS APPLY dbo.DelimitedSplit8K(test.SomeValue,',') split
;
--=====================================================================================================================
-- TEST 2:
-- This tests for various "alpha" splits and COLLATION using all ASCII characters from 0 to 255 as a delimiter against
-- a given string. Note that not all of the delimiters will be visible and some will show up as tiny squares because
-- they are "control" characters. More specifically, this test will show you what happens to various non-accented
-- letters for your given collation depending on the delimiter you chose.
--=====================================================================================================================
WITH
cteBuildAllCharacters (String,Delimiter) AS
(
SELECT TOP 256
'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789',
CHAR(ROW_NUMBER() OVER (ORDER BY (SELECT NULL))-1)
FROM master.sys.all_columns
)
SELECT ASCII_Value = ASCII(c.Delimiter), c.Delimiter, split.ItemNumber, Item = QUOTENAME(split.Item,'"')
FROM cteBuildAllCharacters c
CROSS APPLY dbo.DelimitedSplit8K(c.String,c.Delimiter) split
ORDER BY ASCII_Value, split.ItemNumber
;
-----------------------------------------------------------------------------------------------------------------------
Other Notes:
1. Optimized for VARCHAR(8000) or less. No testing or error reporting for truncation at 8000 characters is done.
2. Optimized for single character delimiter. Multi-character delimiters should be resolved externally from this
function.
3. Optimized for use with CROSS APPLY.
4. Does not "trim" elements just in case leading or trailing blanks are intended.
5. If you don't know how a Tally table can be used to replace loops, please see the following...
http://www.sqlservercentral.com/articles/T-SQL/62867/
6. Changing this function to use NVARCHAR(MAX) will cause it to run twice as slow. It's just the nature of
VARCHAR(MAX) whether it fits in-row or not.
7. Multi-machine testing for the method of using UNPIVOT instead of 10 SELECT/UNION ALLs shows that the UNPIVOT method
is quite machine dependent and can slow things down quite a bit.
-----------------------------------------------------------------------------------------------------------------------
Credits:
This code is the product of many people's efforts including but not limited to the following:
cteTally concept originally by Itzik Ben-Gan and "decimalized" by Lynn Pettis (and others) for a bit of extra speed
and finally redacted by Jeff Moden for a different slant on readability and compactness. Hat's off to Paul White for
his simple explanations of CROSS APPLY and for his detailed testing efforts. Last but not least, thanks to
Ron "BitBucket" McCullough and Wayne Sheffield for their extreme performance testing across multiple machines and
versions of SQL Server. The latest improvement brought an additional 15-20% improvement over Rev 05. Special thanks
to "Nadrek" and "peter-757102" (aka Peter de Heer) for bringing such improvements to light. Nadrek's original
improvement brought about a 10% performance gain and Peter followed that up with the content of Rev 07.
I also thank whoever wrote the first article I ever saw on "numbers tables" which is located at the following URL
and to Adam Machanic for leading me to it many years ago.
http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html
-----------------------------------------------------------------------------------------------------------------------
Revision History:
Rev 00 - 20 Jan 2010 - Concept for inline cteTally: Lynn Pettis and others.
Redaction/Implementation: Jeff Moden
- Base 10 redaction and reduction for CTE. (Total rewrite)
Rev 01 - 13 Mar 2010 - Jeff Moden
- Removed one additional concatenation and one subtraction from the SUBSTRING in the SELECT List for that tiny
bit of extra speed.
Rev 02 - 14 Apr 2010 - Jeff Moden
- No code changes. Added CROSS APPLY usage example to the header, some additional credits, and extra
documentation.
Rev 03 - 18 Apr 2010 - Jeff Moden
- No code changes. Added notes 7, 8, and 9 about certain "optimizations" that don't actually work for this
type of function.
Rev 04 - 29 Jun 2010 - Jeff Moden
- Added WITH SCHEMABINDING thanks to a note by Paul White. This prevents an unnecessary "Table Spool" when the
function is used in an UPDATE statement even though the function makes no external references.
Rev 05 - 02 Apr 2011 - Jeff Moden
- Rewritten for extreme performance improvement especially for larger strings approaching the 8K boundary and
for strings that have wider elements. The redaction of this code involved removing ALL concatenation of
delimiters, optimization of the maximum "N" value by using TOP instead of including it in the WHERE clause,
and the reduction of all previous calculations (thanks to the switch to a "zero based" cteTally) to just one
instance of one add and one instance of a subtract. The length calculation for the final element (not
followed by a delimiter) in the string to be split has been greatly simplified by using the ISNULL/NULLIF
combination to determine when the CHARINDEX returned a 0 which indicates there are no more delimiters to be
had or to start with. Depending on the width of the elements, this code is between 4 and 8 times faster on a
single CPU box than the original code especially near the 8K boundary.
- Modified comments to include more sanity checks on the usage example, etc.
- Removed "other" notes 8 and 9 as they were no longer applicable.
Rev 06 - 12 Apr 2011 - Jeff Moden
- Based on a suggestion by Ron "Bitbucket" McCullough, additional test rows were added to the sample code and
the code was changed to encapsulate the output in pipes so that spaces and empty strings could be perceived
in the output. The first "Notes" section was added. Finally, an extra test was added to the comments above.
Rev 07 - 06 May 2011 - Peter de Heer, a further 15-20% performance enhancement has been discovered and incorporated
into this code which also eliminated the need for a "zero" position in the cteTally table.
**********************************************************************************************************************/
--===== Define I/O parameters
        (@pString VARCHAR(8000), @pDelimiter CHAR(1))
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
--===== "Inline" CTE Driven "Tally Table" produces values from 0 up to 10,000...
     -- enough to cover NVARCHAR(4000)
WITH E1(N) AS (
               SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
               SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
               SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
              ),                          --10E+1 or 10 rows
     E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
     E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
 cteTally(N) AS (--==== This provides the "base" CTE and limits the number of rows right up front
                 -- for both a performance gain and prevention of accidental "overruns"
                 SELECT TOP (ISNULL(DATALENGTH(@pString),0)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
                ),
cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just once for each delimiter)
                 SELECT 1 UNION ALL
                 SELECT t.N+1 FROM cteTally t WHERE SUBSTRING(@pString,t.N,1) = @pDelimiter
                ),
cteLen(N1,L1) AS (--==== Return start and length (for use in substring)
                  SELECT s.N1,
                         ISNULL(NULLIF(CHARINDEX(@pDelimiter,@pString,s.N1),0)-s.N1,8000)
                  FROM cteStart s
                 )
--===== Do the actual split. The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found.
SELECT ItemNumber = ROW_NUMBER() OVER(ORDER BY l.N1),
       Item       = SUBSTRING(@pString, l.N1, l.L1)
FROM cteLen l
;
Try this one -
Query:
DECLARE @temp TABLE (id INT, ids NVARCHAR(50))
INSERT INTO @temp (id, ids)
VALUES (1, '1,3,4'), (2, '2'), (3, '1,2,5')

DECLARE @thing TABLE (id INT, thing NVARCHAR(50))
INSERT INTO @thing (id, thing)
VALUES (1, 'fish'), (2, 'elephant'), (3, 'monkey'), (4, 'mongoose'), (5, 'kiwi')

;WITH cte AS (
    SELECT t.id, t2.thing
    FROM (
        SELECT
              id    = t.c.value('@n', 'INT')
            , token = t.c.value('@s', 'NVARCHAR(50)')
        FROM (
            SELECT field = CAST('<t s = "' +
                REPLACE(
                      t.ids + ','
                    , ','
                    , '" n = "' + CAST(t.id AS VARCHAR(10))
                        + '" /><t s = "') + '" />' AS XML)
            FROM @temp t
        ) d
        CROSS APPLY field.nodes('/t') t(c)
        WHERE t.c.exist('@n') = 1
    ) t
    JOIN @thing t2 ON t.token = t2.id
)
SELECT id, things = STUFF((
    SELECT ', ' + t2.thing
    FROM cte t2
    WHERE t2.id = t.id
    FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, '')
FROM @temp t
Results:
id things
----------- --------------------------
1 fish, monkey, mongoose
2 elephant
3 fish, elephant, kiwi
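To see why this works, it helps to look at the XML string that the inner REPLACE builds (an illustration I am adding; the row (1, '1,3,4') is taken from the sample data). Each id becomes a <t> node carrying the row's id in the n attribute, plus one trailing empty node that the exist('@n') filter discards:

-- Produces: <t s = "1" n = "1" /><t s = "3" n = "1" /><t s = "4" n = "1" /><t s = "" />
SELECT CAST('<t s = "' + REPLACE('1,3,4' + ',', ',', '" n = "1" /><t s = "') + '" />' AS XML);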
First, read this: http://www.sommarskog.se/arrays-in-sql.html
One way which works even in SQL Server 2005 and below(?) is using this Split function:
CREATE FUNCTION Split
(
    @ItemList NVARCHAR(MAX),
    @delimiter CHAR(1)
)
RETURNS @IDTable TABLE (Item VARCHAR(50))
AS
BEGIN
    DECLARE @tempItemList NVARCHAR(MAX)
    SET @tempItemList = @ItemList

    DECLARE @i INT
    DECLARE @Item NVARCHAR(4000)

    SET @tempItemList = REPLACE(@tempItemList, ' ', '')
    SET @i = CHARINDEX(@delimiter, @tempItemList)

    WHILE (LEN(@tempItemList) > 0)
    BEGIN
        IF @i = 0
            SET @Item = @tempItemList
        ELSE
            SET @Item = LEFT(@tempItemList, @i - 1)

        INSERT INTO @IDTable(Item) VALUES(@Item)

        IF @i = 0
            SET @tempItemList = ''
        ELSE
            SET @tempItemList = RIGHT(@tempItemList, LEN(@tempItemList) - @i)

        SET @i = CHARINDEX(@delimiter, @tempItemList)
    END
    RETURN
END
Now this query works:
Declare @firstID int
SET @firstID = 1

SELECT a.id, a.thing AS Animal
FROM dbo.Animals a
WHERE id IN (
    SELECT Item
    FROM dbo.Split((SELECT TOP 1 ids FROM dbo.Things WHERE id = @firstID), ',')
)
Demo
I'm currently doing a data conversion project and need to strip all alphabetical characters from a string. Unfortunately I can't create or use a function, as we don't own the source machine, which makes the methods I've found in previous posts unusable.
What would be the best way to do this in a select statement? Speed isn't too much of an issue, as this will only be running over 30,000 records or so and is a one-off statement.
You can do this in a single statement. You're not really creating a statement with 200+ REPLACEs, are you?!
update tbl
set S = U.clean
from tbl
cross apply
(
    select Substring(tbl.S, v.number, 1)
    -- this table will cater for strings up to length 2047
    from master..spt_values v
    where v.type = 'P' and v.number between 1 and len(tbl.S)
      and Substring(tbl.S, v.number, 1) like '[0-9]'
    order by v.number
    for xml path ('')
) U(clean)
Working SQL Fiddle showing this query with sample data
Replicated below for posterity:
create table tbl (ID int identity, S varchar(500))
insert tbl select 'asdlfj;390312hr9fasd9uhf012 3or h239ur ' + char(13) + 'asdfasf'
insert tbl select '123'
insert tbl select ''
insert tbl select null
insert tbl select '123 a 124'
Results
ID S
1 390312990123239
2 123
3 (null)
4 (null)
5 123124
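Since the question asked for a SELECT rather than an UPDATE, the same spt_values / FOR XML technique can be used as a correlated subquery in a plain SELECT; this is just a sketch against the same sample table:

select tbl.ID,
       (
           select Substring(tbl.S, v.number, 1)
           from master..spt_values v
           where v.type = 'P' and v.number between 1 and len(tbl.S)
             and Substring(tbl.S, v.number, 1) like '[0-9]'
           order by v.number
           for xml path ('')
       ) as clean
from tbl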
A recursive CTE comes to help here.
;WITH CTE AS
(
    SELECT
          [ProductNumber] AS OrigProductNumber
        , CAST([ProductNumber] AS VARCHAR(100)) AS [ProductNumber]
    FROM [AdventureWorks].[Production].[Product]

    UNION ALL

    SELECT OrigProductNumber
         , CAST(STUFF([ProductNumber], PATINDEX('%[^0-9]%', [ProductNumber]), 1, '') AS VARCHAR(100)) AS [ProductNumber]
    FROM CTE
    WHERE PATINDEX('%[^0-9]%', [ProductNumber]) > 0
)
SELECT * FROM CTE
WHERE PATINDEX('%[^0-9]%', [ProductNumber]) = 0
OPTION (MAXRECURSION 0)
output:
OrigProductNumber ProductNumber
WB-H098 098
VE-C304-S 304
VE-C304-M 304
VE-C304-L 304
TT-T092 092
RichardTheKiwi's script wrapped in a function, for use in SELECTs without CROSS APPLY.
I also added the dot, because in my case I use it for double and money values within a varchar field.
CREATE FUNCTION dbo.ReplaceNonNumericChars (@string VARCHAR(5000))
RETURNS VARCHAR(1000)
AS
BEGIN
    SET @string = REPLACE(@string, ',', '.')
    SET @string = (SELECT SUBSTRING(@string, v.number, 1)
                   FROM master..spt_values v
                   WHERE v.type = 'P'
                     AND v.number BETWEEN 1 AND LEN(@string)
                     AND (SUBSTRING(@string, v.number, 1) LIKE '[0-9]'
                          OR SUBSTRING(@string, v.number, 1) LIKE '[.]')
                   ORDER BY v.number
                   FOR XML PATH('')
                  )
    RETURN @string
END
GO
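A quick usage sketch (the input value is just an example of my own):

-- The comma is first converted to a dot, then only digits and dots are kept;
-- for this input the function returns '12.50'.
SELECT dbo.ReplaceNonNumericChars('EUR 12,50') AS cleaned;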
Thanks RichardTheKiwi +1
Well if you really can't use a function, I suppose you could do something like this:
SELECT REPLACE(REPLACE(REPLACE(LOWER(col),'a',''),'b',''),'c','')
FROM dbo.table...
Obviously it would be a lot uglier than that, since I only handled the first three letters, but it should give the idea.
I have a column "A" that are separated by commas and I want to find all the unique values in Column A.
Here's a very short example:
Column A
111, 222
333
444
777,999
I want a query which gives me the following value:
Column C
111
222
333
444
777
999
Ignoring the obvious problems with your table design as alluded to in all the comments, and accepting that this might prove very slow on a huge table, here's how I might do it.
First... I would create a statement that turns all the rows into one big, massive comma-delimited list.
DECLARE @tmp VarChar(max)
SET @tmp = ''

SELECT @tmp = @tmp + ColumnA + ',' FROM TableA
Then use the table-valued UDF Split described in this SO answer to turn that massive string back into a table, with a DISTINCT clause to ensure that it's unique.
https://stackoverflow.com/a/2837662/261997
SELECT DISTINCT * FROM dbo.Split(',', @tmp)
You can use the well-known Split function in combination with CROSS APPLY to split rows into multiple rows:
select ltrim(rtrim(s.s)) as colC
from @t t
cross apply
    dbo.split(',', t.colA) s
Full code example:
if object_id('dbo.Split') is not null
    drop function dbo.Split
go

CREATE FUNCTION dbo.Split (@sep char(1), @s varchar(512))
RETURNS table
AS
RETURN (
    WITH Pieces(pn, start, stop) AS (
        SELECT 1, 1, CHARINDEX(@sep, @s)
        UNION ALL
        SELECT pn + 1, stop + 1, CHARINDEX(@sep, @s, stop + 1)
        FROM Pieces
        WHERE stop > 0
    )
    SELECT pn,
           SUBSTRING(@s, start, CASE WHEN stop > 0 THEN stop - start ELSE 512 END) AS s
    FROM Pieces
)
go

declare @t table (colA varchar(max))
insert @t select '111, 223'
union all select '333'
union all select '444'
union all select '777,999';

select ltrim(rtrim(s.s)) as colC
from @t t
cross apply
    dbo.split(',', t.colA) s
I have a table that has a column named 'languages', but it has the following types of values:
english; polish; portuguese;
.. etc.
I want to split so I can insert it in another table as:
english
polish
portuguese
And go on.
I already searched on Google and found this split function:
CREATE FUNCTION dbo.Split (@sep char(1), @s varchar(512))
RETURNS table
AS
RETURN (
    WITH Pieces(pn, start, stop) AS (
        SELECT 1, 1, CHARINDEX(@sep, @s)
        UNION ALL
        SELECT pn + 1, stop + 1, CHARINDEX(@sep, @s, stop + 1)
        FROM Pieces
        WHERE stop > 0
    )
    SELECT pn,
           SUBSTRING(@s, start, CASE WHEN stop > 0 THEN stop - start ELSE 512 END) AS s
    FROM Pieces
)
I already tested it with this:
SELECT * FROM dbo.Split(' ', 'I hate bunnies')
So I tried to adapt this to my case:
INSERT INTO labbd11..language(language) SELECT s FROM dbo.Split(';', disciplinabd..movies.languages)
Then it gives me this exception:
The multi-part identifier "disciplinabd..movies.languages" could not be bound. Severity 16
Any ideas?
Best regards,
Valter Henrique.
Use CROSS APPLY
INSERT INTO labbd11..language(language)
SELECT DISTINCT s.s
FROM disciplinabd..movies m
CROSS APPLY dbo.Split(';', m.languages) S
But if I read your query correctly, you are splitting the languages from ALL movies, and inserting the resultant languages from the movie into the language table (1 column only). Hope this is a test query, otherwise it has no business merit at all.
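One detail worth handling (a refinement I am adding, assuming the sample format 'english; polish; portuguese;'): splitting on ';' leaves a leading space on every item after the first and produces an empty element for the trailing delimiter, so you may want to trim and filter before inserting:

INSERT INTO labbd11..language(language)
SELECT DISTINCT LTRIM(RTRIM(s.s))
FROM disciplinabd..movies m
CROSS APPLY dbo.Split(';', m.languages) s
WHERE LTRIM(RTRIM(s.s)) <> ''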