How to pass an array into a SQL Server stored procedure?
For example, I have a list of employees. I want to use this list as a table and join it with another table, but the list of employees should be passed as a parameter from C#.
SQL Server 2016 (or newer)
You can pass in a delimited list or JSON and use STRING_SPLIT() or OPENJSON().
STRING_SPLIT():
CREATE PROCEDURE dbo.DoSomethingWithEmployees
@List varchar(max)
AS
BEGIN
SET NOCOUNT ON;
SELECT value FROM STRING_SPLIT(@List, ',');
END
GO
EXEC dbo.DoSomethingWithEmployees @List = '1,2,3';
OPENJSON():
CREATE PROCEDURE dbo.DoSomethingWithEmployees
@List varchar(max)
AS
BEGIN
SET NOCOUNT ON;
SELECT value FROM OPENJSON(CONCAT('["',
REPLACE(STRING_ESCAPE(@List, 'JSON'),
',', '","'), '"]')) AS j;
END
GO
EXEC dbo.DoSomethingWithEmployees @List = '1,2,3';
I wrote more about this here:
Handling an unknown number of parameters in SQL Server
Ordered String Splitting in SQL Server with OPENJSON
SQL Server 2008 (or newer)
First, in your database, create the following two objects:
CREATE TYPE dbo.IDList
AS TABLE
(
ID INT
);
GO
CREATE PROCEDURE dbo.DoSomethingWithEmployees
@List AS dbo.IDList READONLY
AS
BEGIN
SET NOCOUNT ON;
SELECT ID FROM @List;
END
GO
Now in your C# code:
// Obtain your list of ids to send, this is just an example call to a helper utility function
int[] employeeIds = GetEmployeeIds();
DataTable tvp = new DataTable();
tvp.Columns.Add(new DataColumn("ID", typeof(int)));
// populate DataTable from your List here
foreach(var id in employeeIds)
tvp.Rows.Add(id);
using (conn)
{
SqlCommand cmd = new SqlCommand("dbo.DoSomethingWithEmployees", conn);
cmd.CommandType = CommandType.StoredProcedure;
SqlParameter tvparam = cmd.Parameters.AddWithValue("@List", tvp);
// these next lines are important to map the C# DataTable object to the correct SQL User Defined Type
tvparam.SqlDbType = SqlDbType.Structured;
tvparam.TypeName = "dbo.IDList";
// execute query, consume results, etc. here
}
SQL Server 2005
If you are using SQL Server 2005, I would still recommend a split function over XML. First, create a function:
CREATE FUNCTION dbo.SplitInts
(
@List VARCHAR(MAX),
@Delimiter VARCHAR(255)
)
RETURNS TABLE
AS
RETURN ( SELECT Item = CONVERT(INT, Item) FROM
( SELECT Item = x.i.value('(./text())[1]', 'varchar(max)')
FROM ( SELECT [XML] = CONVERT(XML, '<i>'
+ REPLACE(@List, @Delimiter, '</i><i>') + '</i>').query('.')
) AS a CROSS APPLY [XML].nodes('i') AS x(i) ) AS y
WHERE Item IS NOT NULL
);
GO
Now your stored procedure can just be:
CREATE PROCEDURE dbo.DoSomethingWithEmployees
@List VARCHAR(MAX)
AS
BEGIN
SET NOCOUNT ON;
SELECT EmployeeID = Item FROM dbo.SplitInts(@List, ',');
END
GO
And in your C# code you just have to pass the list as '1,2,3,12'...
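For example, a minimal sketch of the calling side (this assumes an open SqlConnection named conn; the procedure and parameter names come from the code above):
// using System.Data; using System.Data.SqlClient;
int[] employeeIds = { 1, 2, 3, 12 };
string list = string.Join(",", employeeIds);   // "1,2,3,12"
using (SqlCommand cmd = new SqlCommand("dbo.DoSomethingWithEmployees", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@List", SqlDbType.VarChar, -1).Value = list;   // -1 = varchar(max)
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        // consume the EmployeeID rows here
    }
}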
I find that passing table-valued parameters simplifies the maintainability of a solution that uses them, and it often performs better than other implementations, including XML and string splitting.
The inputs are clearly defined (no one has to guess if the delimiter is a comma or a semi-colon) and we do not have dependencies on other processing functions that are not obvious without inspecting the code for the stored procedure.
Compared to solutions involving user defined XML schema instead of UDTs, this involves a similar number of steps but in my experience is far simpler code to manage, maintain and read.
In many solutions you may only need one or a few of these UDTs (user-defined types), which you re-use across many stored procedures. As in this example, the common requirement is to pass through a list of ID pointers; the procedure name describes what context those IDs should represent, so the type name should stay generic.
Based on my experience, there is a tricky but nice solution to this problem: create a delimited expression from the employee IDs. You only need to build a string expression like ';123;434;365;', in which 123, 434 and 365 are employee IDs. By calling the procedure below and passing this expression to it, you can fetch the desired records, and you can easily join the "another table" into this query. This solution works in all versions of SQL Server. Also, in comparison with using a table variable or temp table, it is a very fast and lightweight solution.
CREATE PROCEDURE dbo.DoSomethingOnSomeEmployees @List AS varchar(max)
AS
BEGIN
SELECT EmployeeID
FROM EmployeesTable
-- inner join AnotherTable on ...
where @List like '%;'+cast(employeeID as varchar(20))+';%'
END
GO
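For completeness, here is a minimal C# sketch of how the delimited expression might be built and passed (my own illustration, not part of the original answer; it assumes an open SqlConnection named conn):
// using System.Data; using System.Data.SqlClient;
int[] employeeIds = { 123, 434, 365 };
string list = ";" + string.Join(";", employeeIds) + ";";   // ";123;434;365;"
using (SqlCommand cmd = new SqlCommand("dbo.DoSomethingOnSomeEmployees", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@List", SqlDbType.VarChar, -1).Value = list;
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        // consume the EmployeeID rows here
    }
}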
Use a table-valued parameter for your stored procedure.
When you pass it in from C#, you'll add the parameter with the data type of SqlDbType.Structured.
See here: http://msdn.microsoft.com/en-us/library/bb675163.aspx
Example:
// Assumes connection is an open SqlConnection object.
using (connection)
{
// Create a DataTable with the modified rows.
DataTable addedCategories =
CategoriesDataTable.GetChanges(DataRowState.Added);
// Configure the SqlCommand and SqlParameter.
SqlCommand insertCommand = new SqlCommand(
"usp_InsertCategories", connection);
insertCommand.CommandType = CommandType.StoredProcedure;
SqlParameter tvpParam = insertCommand.Parameters.AddWithValue(
"@tvpNewCategories", addedCategories);
tvpParam.SqlDbType = SqlDbType.Structured;
// Execute the command.
insertCommand.ExecuteNonQuery();
}
You need to pass it as an XML parameter.
Edit: quick code from my project to give you an idea:
CREATE PROCEDURE [dbo].[GetArrivalsReport]
@DateTimeFrom AS DATETIME,
@DateTimeTo AS DATETIME,
@HostIds AS XML(xsdArrayOfULong)
AS
BEGIN
DECLARE @hosts TABLE (HostId BIGINT)
INSERT INTO @hosts
SELECT arrayOfUlong.HostId.value('.','bigint') data
FROM @HostIds.nodes('/arrayOfUlong/u') as arrayOfUlong(HostId)
-- ... rest of the procedure ...
END
Then you can use the @hosts table variable to join with your tables.
We defined arrayOfUlong as a built-in XML schema to maintain data integrity, but you don't have to do that. I'd recommend using it, so here is quick code to make sure you always get XML containing longs.
IF NOT EXISTS (SELECT * FROM sys.xml_schema_collections WHERE name = 'xsdArrayOfULong')
BEGIN
CREATE XML SCHEMA COLLECTION [dbo].[xsdArrayOfULong]
AS N'<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:element name="arrayOfUlong">
<xs:complexType>
<xs:sequence>
<xs:element maxOccurs="unbounded"
name="u"
type="xs:unsignedLong" />
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>';
END
GO
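If it helps, here is a rough C# sketch of how the @HostIds XML might be built and passed to GetArrivalsReport (this is my own illustration, not code from the original answer; it assumes an open SqlConnection named conn, and the date values are placeholders):
// using System; using System.Data; using System.Data.SqlClient; using System.Linq; using System.Xml.Linq;
long[] hostIds = { 123, 456 };
// Build <arrayOfUlong><u>123</u><u>456</u></arrayOfUlong> to match the xsdArrayOfULong schema
string hostIdsXml = new XElement("arrayOfUlong",
    hostIds.Select(id => new XElement("u", id))).ToString();
using (SqlCommand cmd = new SqlCommand("dbo.GetArrivalsReport", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@DateTimeFrom", SqlDbType.DateTime).Value = DateTime.UtcNow.AddDays(-1);
    cmd.Parameters.Add("@DateTimeTo", SqlDbType.DateTime).Value = DateTime.UtcNow;
    cmd.Parameters.Add("@HostIds", SqlDbType.Xml).Value = hostIdsXml;
    // execute and consume the report rows here
}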
Context is always important, such as the size and complexity of the array. For small to mid-size lists, several of the answers posted here are just fine, though some clarifications should be made:
For splitting a delimited list, a SQLCLR-based splitter is the fastest. There are numerous examples around if you want to write your own, or you can just download the free SQL# library of CLR functions (which I wrote, but the String_Split function, and many others, are completely free).
Splitting XML-based arrays can be fast, but you need to use attribute-based XML, not element-based XML (which is the only type shown in the answers here, though @AaronBertrand's XML example is the best, as his code uses the text() XML function). For more info (i.e. performance analysis) on using XML to split lists, check out "Using XML to pass lists as parameters in SQL Server" by Phil Factor.
Using TVPs is great (assuming you are using at least SQL Server 2008) as the data is streamed to the proc and shows up pre-parsed and strongly typed as a table variable. HOWEVER, in most cases, storing all of the data in a DataTable means duplicating the data in memory as it is copied from the original collection. Hence the DataTable method of passing in TVPs does not work well for larger sets of data (i.e. it does not scale well).
XML, unlike simple delimited lists of ints or strings, can handle more than one-dimensional arrays, just like TVPs. But also just like the DataTable TVP method, XML does not scale well, as it more than doubles the data size in memory since it needs to additionally account for the overhead of the XML document.
With all of that said, IF the data you are using is large or is not very large yet but consistently growing, then the IEnumerable TVP method is the best choice as it streams the data to SQL Server (like the DataTable method), BUT doesn't require any duplication of the collection in memory (unlike any of the other methods). I posted an example of the SQL and C# code in this answer:
Pass Dictionary to Stored Procedure T-SQL
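For illustration, here is a minimal sketch of that streaming approach (my own sketch, not the exact code from the linked answer), reusing the dbo.IDList type and dbo.DoSomethingWithEmployees procedure shown earlier and assuming an open SqlConnection named conn:
// using System.Collections.Generic; using System.Data; using System.Data.SqlClient;
// using Microsoft.SqlServer.Server;   // SqlMetaData, SqlDataRecord
static IEnumerable<SqlDataRecord> ToIdRecords(IEnumerable<int> ids)
{
    // One reusable record describing a single-column row that matches dbo.IDList
    SqlDataRecord record = new SqlDataRecord(new SqlMetaData("ID", SqlDbType.Int));
    foreach (int id in ids)
    {
        record.SetInt32(0, id);
        yield return record;   // rows are streamed one at a time, no DataTable copy
    }
}
// Usage:
using (SqlCommand cmd = new SqlCommand("dbo.DoSomethingWithEmployees", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    SqlParameter p = cmd.Parameters.Add("@List", SqlDbType.Structured);
    p.TypeName = "dbo.IDList";
    p.Value = ToIdRecords(employeeIds);   // any IEnumerable<int> source
    // execute and consume results here
}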
As others have noted above, one way to do this is to convert your array to a string and then split the string inside SQL Server.
As of SQL Server 2016, there's a built-in way to split strings called
STRING_SPLIT()
It returns a set of rows that you can insert into your temp table (or real table).
DECLARE @str varchar(200)
SET @str = '123;456;789;246;22;33;44;55;66'
SELECT value FROM STRING_SPLIT(@str, ';')
would yield:
value
-----
123
456
789
246
22
33
44
55
66
If you want to get fancier:
DECLARE @tt TABLE (
thenumber int
)
DECLARE @str varchar(200)
SET @str = '123;456;789;246;22;33;44;55;66'
INSERT INTO @tt
SELECT value FROM STRING_SPLIT(@str, ';')
SELECT * FROM @tt
ORDER BY thenumber
would give you the same results as above (except the column name is "thenumber"), but sorted. You can use the table variable like any other table, so you can easily join it with other tables in the DB if you want.
Note that your database has to be at compatibility level 130 or higher in order for the STRING_SPLIT() function to be recognized. You can check your compatibility level with the following query:
SELECT compatibility_level
FROM sys.databases WHERE name = 'yourdatabasename';
Most languages (including C#) have a "join" function you can use to create a string from an array.
int[] myarray = {22, 33, 44};
string sqlparam = string.Join(";", myarray);
Then you pass sqlparam as your parameter to the stored procedure above.
This will help you. :) Follow the next steps:
Open the query editor.
Copy and paste the following code as it is; it will create the function that converts a delimited string to a table of ints.
CREATE FUNCTION dbo.SplitInts
(
@List VARCHAR(MAX),
@Delimiter VARCHAR(255)
)
RETURNS TABLE
AS
RETURN ( SELECT Item = CONVERT(INT, Item) FROM
( SELECT Item = x.i.value('(./text())[1]', 'varchar(max)')
FROM ( SELECT [XML] = CONVERT(XML, '<i>'
+ REPLACE(@List, @Delimiter, '</i><i>') + '</i>').query('.')
) AS a CROSS APPLY [XML].nodes('i') AS x(i) ) AS y
WHERE Item IS NOT NULL
);
GO
Create the Following stored procedure
CREATE PROCEDURE dbo.sp_DeleteMultipleId
@List VARCHAR(MAX)
AS
BEGIN
SET NOCOUNT ON;
DELETE FROM TableName WHERE Id IN (SELECT Id = Item FROM dbo.SplitInts(@List, ','));
END
GO
Execute this SP using EXEC sp_DeleteMultipleId '1,2,3,12', where the argument is the comma-separated string of IDs that you want to delete.
You can convert your array to a string in C# and pass it as a stored procedure parameter, as below:
int[] intarray = { 1, 2, 3, 4, 5 };
string idList = string.Join(",", intarray);   // "1,2,3,4,5"
SqlCommand command = new SqlCommand();
command.Connection = connection;
command.CommandText = "sp_DeleteMultipleId";
command.CommandType = CommandType.StoredProcedure;
command.Parameters.Add("@List", SqlDbType.VarChar).Value = idList;
This will delete multiple rows in a single stored proc call. All the best.
There is no support for arrays in SQL Server, but there are several ways to pass a collection to a stored proc:
By using a DataTable
By using XML: try converting your collection to an XML format and then pass it as an input to the stored procedure
The below link may help you
passing collection to a stored procedure
Starting in SQL Server 2016 you can bring the list in as an NVARCHAR() and use OPENJSON
DECLARE @EmployeeList nvarchar(500) = '[1,2,15]'
SELECT *
FROM Employees
WHERE ID IN (SELECT value FROM OPENJSON(@EmployeeList))
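A minimal C# sketch of the calling side (my own addition, assuming an open SqlConnection named conn; the query text mirrors the T-SQL above):
// using System.Data; using System.Data.SqlClient;
int[] employeeIds = { 1, 2, 15 };
string json = "[" + string.Join(",", employeeIds) + "]";   // "[1,2,15]"
string sql = "SELECT * FROM Employees WHERE ID IN (SELECT value FROM OPENJSON(@EmployeeList));";
using (SqlCommand cmd = new SqlCommand(sql, conn))
{
    cmd.Parameters.Add("@EmployeeList", SqlDbType.NVarChar, 500).Value = json;
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        // consume the employee rows here
    }
}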
I've been searching through all the examples and answers for how to pass an array to SQL Server without the hassle of creating a new table type, until I found this link; below is how I applied it to my project:
--The following code is going to get an Array as Parameter and insert the values of that
--array into another table
Create Procedure Proc1
@UserId int,       -- just an Id param
@s nvarchar(max)   -- this is the array you're going to pass from your C# code to your sproc
AS
BEGIN
declare @xml xml
set @xml = N'<root><r>' + replace(@s,',','</r><r>') + '</r></root>'
Insert into UserRole (UserID,RoleID)
select
@UserId [UserId], t.value('.','varchar(max)') as [RoleId]
from @xml.nodes('//root/r') as a(t)
END
Hope you enjoy it
Starting in SQL Server 2016 you can simply use the built-in STRING_SPLIT function.
Example:
WHERE (@LocationId IS NULL OR Id IN (SELECT value FROM STRING_SPLIT(@LocationId, ',')))
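From C#, the optional filter could be passed like this (a sketch only; the procedure name dbo.GetLocations is hypothetical, and an open SqlConnection named conn is assumed):
// using System; using System.Data; using System.Data.SqlClient;
string locationIds = "3,7,12";   // or null for "no filter"
using (SqlCommand cmd = new SqlCommand("dbo.GetLocations", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@LocationId", SqlDbType.VarChar, -1).Value =
        (object)locationIds ?? DBNull.Value;
    // execute and consume results here
}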
CREATE TYPE dumyTable
AS TABLE
(
RateCodeId int,
RateLowerRange int,
RateHigherRange int,
RateRangeValue int
);
GO
CREATE PROCEDURE spInsertRateRanges
@dt AS dumyTable READONLY
AS
BEGIN
SET NOCOUNT ON;
INSERT tblRateCodeRange(RateCodeId,RateLowerRange,RateHigherRange,RateRangeValue)
SELECT *
FROM @dt
END
It took me a long time to figure this out, so in case anyone needs it...
This is based on the SQL 2005 method in Aaron's answer, and using his SplitInts function (I just removed the delim param since I'll always use commas). I'm using SQL 2008 but I wanted something that works with typed datasets (XSD, TableAdapters) and I know string params work with those.
I was trying to get his function to work in a "where in (1,2,3)" type clause, and having no luck the straight-forward way. So I created a temp table first, and then did an inner join instead of the "where in". Here is my example usage, in my case I wanted to get a list of recipes that don't contain certain ingredients:
CREATE PROCEDURE dbo.SOExample1
(
@excludeIngredientsString varchar(MAX) = ''
)
AS
/* Convert string to table of ints */
DECLARE @excludeIngredients TABLE (ID int)
insert into @excludeIngredients
select ID = Item from dbo.SplitInts(@excludeIngredientsString)
/* Select recipes that don't contain any ingredients in our excluded table */
SELECT r.Name, r.Slug
FROM Recipes AS r LEFT OUTER JOIN
RecipeIngredients as ri inner join
@excludeIngredients as ei on ri.IngredientID = ei.ID
ON r.ID = ri.RecipeID
WHERE (ri.RecipeID IS NULL)
Related
Is it possible to replace multiple words in a string in sql without using multiple replace functions?
For example, I have a string where I need to replace the word 'POLYESTER' with 'POLY', 'COTTON' with 'CTN', 'GRAPHIC' with 'GRPHC', etc., in order to keep the string length at a maximum of, say, 30 without losing much of the readability of its contents (we can't use SUBSTRING to limit the characters, since it can completely trim off meaningful parts at the end of the string). So we decided to shorten some keywords as above.
The current query I have used:
SELECT
REPLACE(REPLACE('**Some string value **COTTON **Some string value ** POLYESTER', 'POLYESTER', 'POLY'), 'COTTON', 'CTN')
If I have 10 keywords like this, what will be the best way to achieve the result other than using multiple REPLACE functions? I am using SQL Server 2012.
Considering SQL Server is your only instrument (not C# or another application), as a workaround, use a temp or persistent table to store the replacement options.
IF OBJECT_ID('tempdb..#tmp') IS NOT NULL
DROP TABLE #tmp
CREATE TABLE #tmp (
fromText VARCHAR(16),
toText VARCHAR(16)
);
INSERT INTO #tmp (fromText, toText)
VALUES
('POLYESTER', 'POLY'),
('COTTON', 'CTN'),
('GRAPHIC', 'GRPHC')
DECLARE @someValue AS NVARCHAR(MAX) =
'**Some string value **COTTON **Some string value ** POLYESTER';
SELECT @someValue = REPLACE(@someValue, fromText, toText) FROM #tmp;
PRINT @someValue
and the result is:
**Some string value **CTN **Some string value ** POLY.
The answer of mehmetx is actually very nice.
If you need your replacement functionality on a regular basis, you could think about using a normal table instead of a temporary table.
But if you need this logic only once in a while, and performance is not much of an issue, you could avoid the additional replacements table altogether and use a table expression in the FROM clause instead. Something like this:
DECLARE @someValue AS NVARCHAR(MAX) = '**Some string value **COTTON **Some string value ** POLYESTER';
SELECT @someValue = REPLACE(@someValue, fromText, toText)
FROM
(VALUES
('POLYESTER', 'POLY'),
('COTTON', 'CTN'),
('GRAPHIC', 'GRPHC')
) AS S (fromText, toText);
EDIT:
I noticed that this logic regrettably does not work as expected when used in an UPDATE statement to update existing data in a table.
For that purpose (if needed), I created a user-defined function that performs the replacement logic. I called it MultiReplace. And it does not use the replacement data from a temporary table, but from a "normal" table, which I called Replacements.
The following code demonstrates it. It uses a data table called MyData, which gets updated with all replacements in the Replacements table using the MultiReplace function:
IF OBJECT_ID('MultiReplace') IS NOT NULL
DROP FUNCTION MultiReplace;
IF OBJECT_ID('Replacements') IS NOT NULL
DROP TABLE Replacements;
IF OBJECT_ID('MyData') IS NOT NULL
DROP TABLE MyData;
GO
CREATE TABLE Replacements (
fromText VARCHAR(100),
toText VARCHAR(100)
);
CREATE TABLE MyData (
SomeValue VARCHAR(MAX)
)
GO
CREATE FUNCTION MultiReplace(@someValue AS VARCHAR(MAX))
RETURNS VARCHAR(MAX)
AS
BEGIN
SELECT @someValue = REPLACE(@someValue, fromText, toText) FROM Replacements;
RETURN @someValue;
END;
GO
INSERT INTO MyData (SomeValue)
VALUES
('**Some string value **COTTON **Some string value ** POLYESTER');
INSERT INTO Replacements (fromText, toText)
VALUES
('POLYESTER', 'POLY'),
('COTTON', 'CTN'),
('GRAPHIC', 'GRPHC');
SELECT * FROM MyData;
UPDATE MyData SET SomeValue = dbo.MultiReplace(SomeValue)
SELECT * FROM MyData;
But perhaps using multiple REPLACE statements might be more straightforward after all?...
EDIT 2:
Based on the short conversation in the comments, I could propose a simpler solution that uses multiple REPLACE statements in a clearer way. I have only tested it on SQL Server 2019; I am not sure if it will work correctly on SQL Server 2012.
Again, I use a table called MyData for testing here. But there are no additional database objects anymore.
Regrettably, I did not get it to work with a temporary table containing the replacement values.
-- Preparations:
IF OBJECT_ID('MyData') IS NOT NULL
DROP TABLE MyData;
CREATE TABLE MyData (
SomeValue VARCHAR(MAX)
);
INSERT INTO MyData
VALUES
('**Some string value **COTTON **Some string value ** POLYESTER'),
('**Another string value **GRAPHIC **Another string value ** POLYESTER');
-- Actual work:
SELECT * FROM MyData; -- Show the state before updating
DECLARE @someValue VARCHAR(MAX);
UPDATE MyData
SET
@someValue = SomeValue,
@someValue = REPLACE(@someValue, 'POLYESTER', 'POLY'),
@someValue = REPLACE(@someValue, 'COTTON', 'CTN'),
@someValue = REPLACE(@someValue, 'GRAPHIC', 'GRPHC'),
SomeValue = @someValue;
SELECT * FROM MyData; -- Show the state after updating
I am a newbie learning Microsoft technologies.
I am stuck with an issue in SQL Server where I need your help.
I have an XML file in the format below;
please see it for your reference:
<Permissions><Denied><User/><Roles/><Groups/></Denied><Allowed><Users/><Roles>admin,user,reader,writer,</Roles><Groups/></Allowed></Permissions>
I need to read the Roles node values and insert those comma-separated values into the test table, where I will pass permissionid as a parameter to the stored procedure.
Here are the table columns (I need to insert a single role per row in the test table, based on transitionid):
create table test
(
empid int identity(1,1),
roles varchar(40),
transitionid int
)
You have two problems here: getting the data from the XML and splitting it.
If you're using SQL 2016 you're in luck - there's a new STRING_SPLIT function. You could use that like so:
declare @xml xml = '<Permissions><Denied><User/><Roles/><Groups/></Denied><Allowed><Users/><Roles>admin,user,reader,writer,</Roles><Groups/></Allowed></Permissions>';
declare @test table
(
empid int identity(1,1),
roles varchar(40),
transitionid int
)
INSERT @test (roles)
select b.value
FROM @xml.nodes('//Roles/text()') x(csv)
CROSS APPLY STRING_SPLIT(CAST(x.csv.query('.') AS VARCHAR(MAX)), ',') b
where b.value <> ''
select * from @test
Otherwise, you'll have to do something similar using a custom string-splitting method, which you can read more about in "How do I split a string so I can access item x?" or at https://sqlperformance.com/2012/07/t-sql-queries/split-strings - basically, both require either writing a custom T-SQL function or CLR code that is imported into SQL Server. You could then use the same method as above (replacing STRING_SPLIT with the name of your custom string-splitting function).
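For illustration only, here is a rough sketch of what a SQLCLR splitter might look like (the class, method, and column names are my own assumptions, not code from those links; after compiling it you would register it with CREATE ASSEMBLY and CREATE FUNCTION):
// using System.Collections; using System.Data.SqlTypes; using Microsoft.SqlServer.Server;
public class StringSplitter
{
    [SqlFunction(FillRowMethodName = "FillRow", TableDefinition = "value nvarchar(4000)")]
    public static IEnumerable SplitString(SqlString input, SqlString delimiter)
    {
        if (input.IsNull || delimiter.IsNull || delimiter.Value.Length == 0)
            yield break;
        foreach (string item in input.Value.Split(delimiter.Value[0]))
            yield return item;   // each yielded item becomes one row via FillRow
    }
    public static void FillRow(object row, out SqlString value)
    {
        value = new SqlString((string)row);
    }
}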
For a project, we are using a table (named txtTable) that contains all the texts. And each column contains a different language (for example column L9 is English, column L7 is German, etc..).
TextID L9 L7 L16 L10 L12
------------------------------------------------------
26 Archiving Archivierung NULL NULL NULL
27 Logging Protokollierung NULL NULL NULL
28 Comments Kommentar NULL NULL NULL
This table is located in a database on a Microsoft SQL Server 2005. The big problem is that this database name changes each time the program is restarted. This behavior is typical of this third-party program and cannot be changed.
Next to this database and on the same server is our own database. In this database are several tables that point to the textID for generating data for reporting (SQL Server Reporting Services) in the correct language. This database contains also a table "ProjectSettings" with some properties like the name of the texttable database, and the stored procedures to generate the reporting data.
The way we currently request the texts in the right language from this table with the changing database name is by building a dynamic SQL query and executing it in a stored procedure.
Now we were wondering if there is a cleaner way to get the texts in the right language. We were thinking about creating a function with the textID and the language as parameters, but we cannot find a good way to do this. We thought about a function so we can just use it in the select statement, but this doesn't work:
CREATE FUNCTION [dbo].[GetTextFromLib]
(
@TextID int,
@LanguageColumn Varchar(5)
)
RETURNS varchar(255)
AS
BEGIN
-- return variables
DECLARE @ResultVar varchar(255)
-- Local variables
DECLARE @TextLibraryDatabaseName varchar(1000)
DECLARE @nvcSqlQuery varchar(1000)
-- get the report language database name
SELECT @TextLibraryDatabaseName = TextLibraryDatabaseName FROM ProjectSettings
SET @nvcSqlQuery = 'SELECT @ResultVar =' + @LanguageColumn + ' FROM [' + @TextLibraryDatabaseName + '].dbo.TXTTable WHERE TEXTID = ' + cast(@TextID as varchar(30))
EXEC(@nvcSqlQuery)
-- Return the result of the function
RETURN @ResultVar
END
Is there any way to work around this so we don’t have to use the dynamic sql in our stored procedures so it is only ‘contained’ in 1 function?
Thanks in advance & kind regards,
Kurt
Yes, it is possible with the help of the synonym mechanism introduced in SQL Server 2005. You can create a synonym during your setup procedure based on data from the ProjectSettings table and use it in your function. Your code will look something like this:
UPDATE: The function code is commented out here because it still contains dynamic SQL, which does not work in a function, as Kurt said in his comment. The new version of the function is below this code.
-- Creating synonym for TXTTable table
-- somewhere in code when processing current settings
-- Suppose your synonym name is 'TextLibrary'
--
-- Drop previously created synonym
IF EXISTS (SELECT * FROM sys.synonyms WHERE name = N'TextLibrary')
DROP SYNONYM TextLibrary
-- Creating synonym using dynamic SQL
-- Local variables
DECLARE @TextLibraryDatabaseName varchar(1000)
DECLARE @nvcSqlQuery varchar(1000)
-- get the report language database name
SELECT @TextLibraryDatabaseName = TextLibraryDatabaseName FROM ProjectSettings
SET @nvcSqlQuery = 'CREATE SYNONYM TextLibrary FOR [' + @TextLibraryDatabaseName + '].dbo.TXTTable'
EXEC(@nvcSqlQuery)
-- Synonym created
/* UPDATE: This code is commented but left for discussion consistency
-- Function code
CREATE FUNCTION [dbo].[GetTextFromLib]
(
@TextID int,
@LanguageColumn Varchar(5)
)
RETURNS varchar(255)
AS
BEGIN
-- return variables
DECLARE @ResultVar varchar(255)
-- Local variables
DECLARE @nvcSqlQuery varchar(1000)
SET @nvcSqlQuery = 'SELECT @ResultVar =' + @LanguageColumn + ' FROM TextLibrary WHERE TEXTID = ' + cast(@TextID as varchar(30))
EXEC(@nvcSqlQuery)
-- Return the result of the function
RETURN @ResultVar
END
*/
UPDATE: This is one more attempt to solve the problem. Now it uses an XML trick:
-- Function code
CREATE FUNCTION [dbo].[GetTextFromLib]
(
@TextID int,
@LanguageColumn Varchar(5)
)
RETURNS varchar(255)
AS
BEGIN
-- return variables
DECLARE @ResultVar varchar(255)
-- Local variables
DECLARE @XmlVar XML
-- Select required record into XML variable
-- XML has each table column value in element with corresponding name
SELECT @XmlVar = ( SELECT * FROM TextLibrary
WHERE TEXTID = @TextID
FOR XML RAW, ELEMENTS )
-- Select value of required element from XML
SELECT @ResultVar = Element.value('(.)[1]', 'varchar(255)')
FROM @XmlVar.nodes('/row/*') AS T(Element)
WHERE Element.value('local-name(.)', 'varchar(50)') = @LanguageColumn
-- Return the result of the function
RETURN @ResultVar
END
Hope this helps.
Credits to the answerer of this question at Stack Overflow: How to get node name and values from an xml variable in t-sql
To me, it sounds like a total PITA... However, how large is this database of "words" you are dealing with? Especially if it is not changing much and remains fairly constant, why not, on some regular cycle (such as each morning), have one dynamically generated query that reads the database that changes and synchronizes it to a "standard" table name in YOUR database that won't change? Then all your queries run against YOUR version, and you completely remove the constant dynamic queries. Yes, there would need to be a synchronizing stored procedure, but if it can be run on a schedule you should be fine, and again, how large is the table of "words" for proper language context?
Scenario
I have a stored procedure written in T-Sql using SQL Server 2005.
"SEL_ValuesByAssetName"
It accepts a unique string "AssetName".
It returns a table of values.
Question
Instead of calling the stored procedure multiple times and having to make a database call every time I do this, I want to create another stored procedure that accepts a list of all the "AssetNames", calls the stored procedure "SEL_ValuesByAssetName" for each asset name in the list, and then returns the ENTIRE TABLE OF VALUES.
Pseudo Code
foreach(value in @AllAssetsList)
{
@AssetName = value
SEL_ValueByAssetName(@AssetName)
UPDATE #TempTable
}
How would I go about doing this?
It will look quite clumsy using stored procedures, but can you use table-valued functions instead?
In the case of table-valued functions, it would look something like this:
SELECT al.Value AS AssetName, av.* FROM @AllAssetsList AS al
CROSS APPLY SEL_ValuesByAssetName(al.Value) AS av
Sample implementation:
First of all, we need to create a Table-Valued Parameter type:
CREATE TYPE [dbo].[tvpStringTable] AS TABLE(Value varchar(max) NOT NULL)
Then, we need a function to get a value of a specific asset:
CREATE FUNCTION [dbo].[tvfGetAssetValue]
(
@assetName varchar(max)
)
RETURNS TABLE
AS
RETURN
(
-- Add the SELECT statement with parameter references here
SELECT 0 AS AssetValue
UNION
SELECT 5 AS AssetValue
UNION
SELECT 7 AS AssetValue
)
Next, a function to return a list of AssetName, AssetValue pairs for an assets list:
CREATE FUNCTION [dbo].[tvfGetAllAssets]
(
@assetsList tvpStringTable READONLY
)
RETURNS TABLE
AS
RETURN
(
-- Add the SELECT statement with parameter references here
SELECT al.Value AS AssetName, av.AssetValue FROM @assetsList al
CROSS APPLY tvfGetAssetValue(al.Value) AS av
)
Finally, we can test it:
DECLARE @names tvpStringTable
INSERT INTO @names VALUES ('name1'), ('name2'), ('name3')
SELECT * FROM [Test].[dbo].[tvfGetAllAssets] (@names)
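If the list of names originates in C#, the tvpStringTable parameter might be passed to the function like this (a sketch based on the types above; it assumes an open SqlConnection named conn):
// using System.Data; using System.Data.SqlClient;
DataTable names = new DataTable();
names.Columns.Add("Value", typeof(string));
names.Rows.Add("name1");
names.Rows.Add("name2");
names.Rows.Add("name3");
using (SqlCommand cmd = new SqlCommand("SELECT * FROM [dbo].[tvfGetAllAssets](@names);", conn))
{
    SqlParameter p = cmd.Parameters.Add("@names", SqlDbType.Structured);
    p.TypeName = "dbo.tvpStringTable";
    p.Value = names;
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        // consume the AssetName / AssetValue rows here
    }
}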
In MSSQL 2000 I would make @allAssetsList a varchar comma-separated values list (and keep in mind that the maximum length is 8000).
I would create a temporary table in memory, parse this string and insert the values into that table, then do a simple query with the condition where assetName in (select assetName from #tempTable).
I wrote about MSSQL 2000 because I am not sure whether MSSQL 2005 has some new data type like an array that can be passed as a literal to the SP.
I made the following function in SQL Server 2008 earlier this week that takes two parameters and uses them to select a column of "detail" records and returns them as a single varchar list of comma separated values. Now that I get to thinking about it, I would like to take this table and application-specific function and make it more generic.
I am not well-versed in defining SQL functions, as this is my first. How can I change this function to accept a single "column" worth of data, so that I can use it in a more generic way?
Instead of calling:
SELECT ejc_concatFormDetails(formuid, categoryName)
I would like to make it work like:
SELECT concatColumnValues(SELECT someColumn FROM SomeTable)
Here is my function definition:
FUNCTION [DNet].[ejc_concatFormDetails](@formuid AS int, @category as VARCHAR(75))
RETURNS VARCHAR(1000) AS
BEGIN
DECLARE @returnData VARCHAR(1000)
DECLARE @currentData VARCHAR(75)
DECLARE dataCursor CURSOR FAST_FORWARD FOR
SELECT data FROM DNet.ejc_FormDetails WHERE formuid = @formuid AND category = @category
SET @returnData = ''
OPEN dataCursor
FETCH NEXT FROM dataCursor INTO @currentData
WHILE (@@FETCH_STATUS = 0)
BEGIN
SET @returnData = @returnData + ', ' + @currentData
FETCH NEXT FROM dataCursor INTO @currentData
END
CLOSE dataCursor
DEALLOCATE dataCursor
RETURN SUBSTRING(@returnData,3,1000)
END
As you can see, I am selecting the column data within my function and then looping over the results with a cursor to build my comma separated varchar.
How can I alter this to accept a single parameter that is a result set and then access that result set with a cursor?
Others have answered your main question - but let me point out another problem with your function - the terrible use of a CURSOR!
You can easily rewrite this function to use no cursor, no WHILE loop - nothing like that. It'll be tons faster, and a lot easier, too - much less code:
FUNCTION DNet.ejc_concatFormDetails
(@formuid AS int, @category as VARCHAR(75))
RETURNS VARCHAR(1000)
AS
RETURN
SUBSTRING(
(SELECT ', ' + data
FROM DNet.ejc_FormDetails
WHERE formuid = @formuid AND category = @category
FOR XML PATH('')
), 3, 1000)
The trick is to use the FOR XML PATH('') - this returns a concatenated list of your data columns and your fixed ', ' delimiters. Add a SUBSTRING() on that and you're done! As easy as that..... no dog-slow CURSOR, no messy concatenation and all that gooey code - just one statement and that's all there is.
You can use table-valued parameters:
-- Table-valued parameters require a user-defined table type:
CREATE TYPE dbo.MyTableType AS TABLE (
Column1 int,
Column2 nvarchar(50),
Column3 datetime
);
GO
CREATE FUNCTION MyFunction(
@Data dbo.MyTableType READONLY
)
RETURNS NVARCHAR(MAX)
AS BEGIN
/* here you can do what you want */
RETURN NULL; -- placeholder return
END
You can use Table Valued Parameters as of SQL Server 2008, which would allow you to pass a TABLE variable in as a parameter. The limitations and examples for this are all in that linked article.
However, I'd also point out that using a cursor could well be painful for performance.
You don't need to use a cursor, as you can do it all in 1 SELECT statement:
SELECT @MyCSVString = COALESCE(@MyCSVString + ', ', '') + data
FROM DNet.ejc_FormDetails
WHERE formuid = @formuid AND category = @category
No need for a cursor
Your question is a bit unclear. In your first SQL statement it looks like you're trying to pass columns to the function, but there is no WHERE clause. In the second SQL statement you're passing a collection of rows (results from a SELECT). Can you supply some sample data and expected outcome?
Without fully understanding your goal, you could look into changing the parameter to be a table variable. Fill a table variable local to the calling code and pass that into the function. You could do that as a stored procedure though and wouldn't need a function.