update table with dynamic sql query - sql

For a project, we are using a table (named txtTable) that contains all the texts. Each column contains a different language (for example, column L9 is English, column L7 is German, etc.).
TextID   L9          L7                L16    L10    L12
---------------------------------------------------------
26       Archiving   Archivierung      NULL   NULL   NULL
27       Logging     Protokollierung   NULL   NULL   NULL
28       Comments    Kommentar         NULL   NULL   NULL
This table is located in a database on a Microsoft SQL Server 2005. The big problem is that this database name changes each time the program is restarted. This is typical behavior for this third-party program and cannot be changed.
Next to this database, and on the same server, is our own database. In this database are several tables that point to the textID for generating data for reporting (SQL Server Reporting Services) in the correct language. This database also contains a table "ProjectSettings" with some properties, like the name of the text table database, and the stored procedures that generate the reporting data.
The way we currently request the texts in the right language from this table with the changing database name is by building a dynamic SQL query and executing it in a stored procedure.
Now we are wondering if there is a cleaner way to get the texts in the right language. We were thinking about creating a function with the textID and the language as parameters, but we cannot find a good way to do this. The idea of a function is that we can just use it in the SELECT statement, but this doesn't work:
CREATE FUNCTION [dbo].[GetTextFromLib]
(
    @TextID int,
    @LanguageColumn varchar(5)
)
RETURNS varchar(255)
AS
BEGIN
    -- return variable
    DECLARE @ResultVar varchar(255)
    -- Local variables
    DECLARE @TextLibraryDatabaseName varchar(1000)
    DECLARE @nvcSqlQuery varchar(1000)
    -- get the report language database name
    SELECT @TextLibraryDatabaseName = TextLibraryDatabaseName FROM ProjectSettings
    SET @nvcSqlQuery = 'SELECT @ResultVar = ' + @LanguageColumn + ' FROM [' + @TextLibraryDatabaseName + '].dbo.TXTTable WHERE TEXTID = ' + cast(@TextID as varchar(30))
    EXEC(@nvcSqlQuery)
    -- Return the result of the function
    RETURN @ResultVar
END
Is there any way to work around this so we don't have to use dynamic SQL in our stored procedures, so that it is only 'contained' in one function?
Thanks in advance & kind regards,
Kurt

Yes, it is possible with the help of the synonym mechanism introduced in SQL Server 2005. You can create the synonym during your setup procedure, based on data from the ProjectSettings table, and then use it in your function. Your code will look something like this:
UPDATE: The function code is commented out here because it still contains dynamic SQL, which does not work in a function, as Kurt said in his comment. The new version of the function is below this code.
-- Creating a synonym for the TXTTable table
-- somewhere in code when processing current settings
-- Suppose your synonym name is 'TextLibrary'
--
-- Drop previously created synonym
IF EXISTS (SELECT * FROM sys.synonyms WHERE name = N'TextLibrary')
    DROP SYNONYM TextLibrary

-- Create the synonym using dynamic SQL
-- Local variables
DECLARE @TextLibraryDatabaseName varchar(1000)
DECLARE @nvcSqlQuery varchar(1000)

-- get the report language database name
SELECT @TextLibraryDatabaseName = TextLibraryDatabaseName FROM ProjectSettings

SET @nvcSqlQuery = 'CREATE SYNONYM TextLibrary FOR [' + @TextLibraryDatabaseName + '].dbo.TXTTable'
EXEC(@nvcSqlQuery)
-- Synonym created
/* UPDATE: This code is commented out but left for discussion consistency
-- Function code
CREATE FUNCTION [dbo].[GetTextFromLib]
(
    @TextID int,
    @LanguageColumn varchar(5)
)
RETURNS varchar(255)
AS
BEGIN
    -- return variable
    DECLARE @ResultVar varchar(255)
    -- Local variables
    DECLARE @nvcSqlQuery varchar(1000)
    SET @nvcSqlQuery = 'SELECT @ResultVar = ' + @LanguageColumn + ' FROM TextLibrary WHERE TEXTID = ' + cast(@TextID as varchar(30))
    EXEC(@nvcSqlQuery)
    -- Return the result of the function
    RETURN @ResultVar
END
*/
UPDATE: Here is one more attempt to solve the problem. This one uses an XML trick:
-- Function code
CREATE FUNCTION [dbo].[GetTextFromLib]
(
    @TextID int,
    @LanguageColumn varchar(5)
)
RETURNS varchar(255)
AS
BEGIN
    -- return variable
    DECLARE @ResultVar varchar(255)
    -- Local variables
    DECLARE @XmlVar XML

    -- Select the required record into an XML variable.
    -- The XML has each table column value in an element with the corresponding name.
    SELECT @XmlVar = ( SELECT * FROM TextLibrary
                       WHERE TEXTID = @TextID
                       FOR XML RAW, ELEMENTS )

    -- Select the value of the required element from the XML
    SELECT @ResultVar = Element.value('(.)[1]', 'varchar(255)')
    FROM @XmlVar.nodes('/row/*') AS T(Element)
    WHERE Element.value('local-name(.)', 'varchar(50)') = @LanguageColumn

    -- Return the result of the function
    RETURN @ResultVar
END
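With the sample data from the question, usage would look something like this (a sketch, assuming the TextLibrary synonym has been created as shown above):

SELECT TextID,
       dbo.GetTextFromLib(TextID, 'L9') AS EnglishText,
       dbo.GetTextFromLib(TextID, 'L7') AS GermanText
FROM TextLibrary
WHERE TextID IN (26, 27, 28)
-- e.g. dbo.GetTextFromLib(26, 'L7') should return 'Archivierung'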
Hope this helps.
Credits to the answerer of this question at Stack Overflow: How to get node name and values from an xml variable in t-sql

To me, it sounds like a total PITA... However, how large is this database of "words" you are dealing with? Especially if it is not changing much and remains pretty constant: why not, on some normal cycle (such as every morning), run one dynamically generated query that reads from the database whose name changes and synchronizes it into a "standard" table name in YOUR database that won't change? Then all your queries run against YOUR version, and you completely remove the constant dynamic queries. Yes, there would need to be a synchronizing stored procedure, but if it can be run on a schedule you should be fine; again, it depends on how large the table of "words" for proper language context is.
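A minimal sketch of such a synchronizing procedure, assuming the third-party database name is still read from ProjectSettings and the local copy is a table named dbo.TextLibraryCache (both the procedure and table names here are hypothetical):

CREATE PROCEDURE dbo.SyncTextLibrary
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @TextLibraryDatabaseName varchar(1000)
    DECLARE @sql varchar(2000)

    -- get the current name of the third-party database
    SELECT @TextLibraryDatabaseName = TextLibraryDatabaseName FROM ProjectSettings

    -- refresh the local copy; all other code then queries dbo.TextLibraryCache directly
    SET @sql = 'TRUNCATE TABLE dbo.TextLibraryCache;
                INSERT INTO dbo.TextLibraryCache
                SELECT * FROM [' + @TextLibraryDatabaseName + '].dbo.TXTTable;'
    EXEC(@sql)
END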

Related

Passing an INT array into a SQL stored procedure [duplicate]

How to pass an array into a SQL Server stored procedure?
For example, I have a list of employees. I want to use this list as a table and join it with another table. But the list of employees should be passed as a parameter from C#.
SQL Server 2016 (or newer)
You can pass in a delimited list or JSON and use STRING_SPLIT() or OPENJSON().
STRING_SPLIT():
CREATE PROCEDURE dbo.DoSomethingWithEmployees
    @List varchar(max)
AS
BEGIN
    SET NOCOUNT ON;
    SELECT value FROM STRING_SPLIT(@List, ',');
END
GO

EXEC dbo.DoSomethingWithEmployees @List = '1,2,3';
OPENJSON():
CREATE PROCEDURE dbo.DoSomethingWithEmployees
    @List varchar(max)
AS
BEGIN
    SET NOCOUNT ON;
    SELECT value FROM OPENJSON(CONCAT('["',
        REPLACE(STRING_ESCAPE(@List, 'JSON'),
        ',', '","'), '"]')) AS j;
END
GO

EXEC dbo.DoSomethingWithEmployees @List = '1,2,3';
I wrote more about this here:
Handling an unknown number of parameters in SQL Server
Ordered String Splitting in SQL Server with OPENJSON
SQL Server 2008 (or newer)
First, in your database, create the following two objects:
CREATE TYPE dbo.IDList
AS TABLE
(
    ID INT
);
GO

CREATE PROCEDURE dbo.DoSomethingWithEmployees
    @List AS dbo.IDList READONLY
AS
BEGIN
    SET NOCOUNT ON;
    SELECT ID FROM @List;
END
GO
Now in your C# code:
// Obtain your list of ids to send; this is just an example call to a helper utility function
int[] employeeIds = GetEmployeeIds();

DataTable tvp = new DataTable();
tvp.Columns.Add(new DataColumn("ID", typeof(int)));

// populate the DataTable from your list here
foreach (var id in employeeIds)
    tvp.Rows.Add(id);

using (conn)
{
    SqlCommand cmd = new SqlCommand("dbo.DoSomethingWithEmployees", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    SqlParameter tvparam = cmd.Parameters.AddWithValue("@List", tvp);
    // these next lines are important to map the C# DataTable object to the correct SQL user-defined type
    tvparam.SqlDbType = SqlDbType.Structured;
    tvparam.TypeName = "dbo.IDList";
    // execute query, consume results, etc. here
}
SQL Server 2005
If you are using SQL Server 2005, I would still recommend a split function over XML. First, create a function:
CREATE FUNCTION dbo.SplitInts
(
    @List      VARCHAR(MAX),
    @Delimiter VARCHAR(255)
)
RETURNS TABLE
AS
    RETURN ( SELECT Item = CONVERT(INT, Item) FROM
        ( SELECT Item = x.i.value('(./text())[1]', 'varchar(max)')
          FROM ( SELECT [XML] = CONVERT(XML, '<i>'
              + REPLACE(@List, @Delimiter, '</i><i>') + '</i>').query('.')
          ) AS a CROSS APPLY [XML].nodes('i') AS x(i) ) AS y
        WHERE Item IS NOT NULL
    );
GO
Now your stored procedure can just be:
CREATE PROCEDURE dbo.DoSomethingWithEmployees
    @List VARCHAR(MAX)
AS
BEGIN
    SET NOCOUNT ON;
    SELECT EmployeeID = Item FROM dbo.SplitInts(@List, ',');
END
GO
And in your C# code you just have to pass the list as '1,2,3,12'...
I find that passing table-valued parameters simplifies the maintainability of a solution and often gives better performance than other implementations, including XML and string splitting.
The inputs are clearly defined (no one has to guess if the delimiter is a comma or a semi-colon) and we do not have dependencies on other processing functions that are not obvious without inspecting the code for the stored procedure.
Compared to solutions involving user defined XML schema instead of UDTs, this involves a similar number of steps but in my experience is far simpler code to manage, maintain and read.
In many solutions you may only need one or a few of these UDTs (user-defined types), which you re-use across many stored procedures. As in this example, the common requirement is to pass through a list of ID pointers; the procedure name describes what context those IDs should represent, while the type name should stay generic.
Based on my experience, creating a delimited expression from the employeeIDs is a tricky but nice solution for this problem. You only have to create a string expression like ';123;434;365;', in which 123, 434 and 365 are employeeIDs. By calling the procedure below and passing this expression to it, you can fetch your desired records, and you can easily join the "another table" into this query. This solution is suitable for all versions of SQL Server. Also, in comparison with using a table variable or temp table, it is a very fast and optimized solution.
CREATE PROCEDURE dbo.DoSomethingOnSomeEmployees @List AS varchar(max)
AS
BEGIN
    SELECT EmployeeID
    FROM EmployeesTable
    -- inner join AnotherTable on ...
    WHERE @List LIKE '%;' + cast(employeeID as varchar(20)) + ';%'
END
GO
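A call would then look like this, using the delimited expression described above:

EXEC dbo.DoSomethingOnSomeEmployees @List = ';123;434;365;'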
Use a table-valued parameter for your stored procedure.
When you pass it in from C# you'll add the parameter with the data type SqlDbType.Structured.
See here: http://msdn.microsoft.com/en-us/library/bb675163.aspx
Example:
// Assumes connection is an open SqlConnection object.
using (connection)
{
    // Create a DataTable with the modified rows.
    DataTable addedCategories =
        CategoriesDataTable.GetChanges(DataRowState.Added);

    // Configure the SqlCommand and SqlParameter.
    SqlCommand insertCommand = new SqlCommand(
        "usp_InsertCategories", connection);
    insertCommand.CommandType = CommandType.StoredProcedure;
    SqlParameter tvpParam = insertCommand.Parameters.AddWithValue(
        "@tvpNewCategories", addedCategories);
    tvpParam.SqlDbType = SqlDbType.Structured;

    // Execute the command.
    insertCommand.ExecuteNonQuery();
}
You need to pass it as an XML parameter.
Edit: quick code from my project to give you an idea:
CREATE PROCEDURE [dbo].[GetArrivalsReport]
    @DateTimeFrom AS DATETIME,
    @DateTimeTo   AS DATETIME,
    @HostIds      AS XML(xsdArrayOfULong)
AS
BEGIN
    DECLARE @hosts TABLE (HostId BIGINT)

    INSERT INTO @hosts
    SELECT arrayOfUlong.HostId.value('.', 'bigint') data
    FROM @HostIds.nodes('/arrayOfUlong/u') AS arrayOfUlong(HostId)
Then you can use the @hosts table variable to join with your tables.
We defined arrayOfUlong as a built-in XML schema to maintain data integrity, but you don't have to do that. I'd recommend using it, so here's quick code to make sure you always get an XML of longs.
IF NOT EXISTS (SELECT * FROM sys.xml_schema_collections WHERE name = 'xsdArrayOfULong')
BEGIN
CREATE XML SCHEMA COLLECTION [dbo].[xsdArrayOfULong]
AS N'<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:element name="arrayOfUlong">
<xs:complexType>
<xs:sequence>
<xs:element maxOccurs="unbounded"
name="u"
type="xs:unsignedLong" />
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>';
END
GO
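A call to the procedure then passes the IDs as XML conforming to that schema; the dates and IDs below are made up for illustration:

EXEC dbo.GetArrivalsReport
    @DateTimeFrom = '2019-01-01',
    @DateTimeTo   = '2019-01-31',
    @HostIds      = N'<arrayOfUlong><u>1001</u><u>1002</u></arrayOfUlong>'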
Context is always important, such as the size and complexity of the array. For small to mid-size lists, several of the answers posted here are just fine, though some clarifications should be made:
For splitting a delimited list, a SQLCLR-based splitter is the fastest. There are numerous examples around if you want to write your own, or you can just download the free SQL# library of CLR functions (which I wrote, but the String_Split function, and many others, are completely free).
Splitting XML-based arrays can be fast, but you need to use attribute-based XML, not element-based XML (which is the only type shown in the answers here), though @AaronBertrand's XML example is the best as his code uses the text() XML function. For more info (i.e. performance analysis) on using XML to split lists, check out "Using XML to pass lists as parameters in SQL Server" by Phil Factor.
Using TVPs is great (assuming you are using at least SQL Server 2008, or newer) as the data is streamed to the proc and shows up pre-parsed and strongly-typed as a table variable. HOWEVER, in most cases, storing all of the data in a DataTable means duplicating the data in memory as it is copied from the original collection. Hence using the DataTable method of passing in TVPs does not work well for larger sets of data (i.e. it does not scale well).
XML, unlike simple delimited lists of ints or strings, can handle more than one-dimensional arrays, just like TVPs. But also just like the DataTable TVP method, XML does not scale well as it more than doubles the data size in memory, since it additionally has to account for the overhead of the XML document.
With all of that said, IF the data you are using is large or is not very large yet but consistently growing, then the IEnumerable TVP method is the best choice as it streams the data to SQL Server (like the DataTable method), BUT doesn't require any duplication of the collection in memory (unlike any of the other methods). I posted an example of the SQL and C# code in this answer:
Pass Dictionary to Stored Procedure T-SQL
As others have noted above, one way to do this is to convert your array to a string and then split the string inside SQL Server.
As of SQL Server 2016, there's a built-in way to split strings called
STRING_SPLIT()
It returns a set of rows that you can insert into your temp table (or real table).
DECLARE @str varchar(200)
SET @str = '123;456;789;246;22;33;44;55;66'
SELECT value FROM STRING_SPLIT(@str, ';')
would yield:
value
-----
123
456
789
246
22
33
44
55
66
If you want to get fancier:
DECLARE @tt TABLE (
    thenumber int
)

DECLARE @str varchar(200)
SET @str = '123;456;789;246;22;33;44;55;66'

INSERT INTO @tt
SELECT value FROM STRING_SPLIT(@str, ';')

SELECT * FROM @tt
ORDER BY thenumber
would give you the same results as above (except the column name is "thenumber"), but sorted. You can use the table variable like any other table, so you can easily join it with other tables in the DB if you want.
Note that your database has to be at compatibility level 130 or higher in order for the STRING_SPLIT() function to be recognized. You can check your compatibility level with the following query:
SELECT compatibility_level
FROM sys.databases WHERE name = 'yourdatabasename';
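If it is lower, the level can be raised; 130 corresponds to SQL Server 2016 (yourdatabasename is a placeholder, as above):

ALTER DATABASE yourdatabasename SET COMPATIBILITY_LEVEL = 130;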
Most languages (including C#) have a "join" function you can use to create a string from an array.
int[] myarray = {22, 33, 44};
string sqlparam = string.Join(";", myarray);
Then you pass sqlparam as your parameter to the stored procedure above.
This will help you. :) Follow the next steps:
Open the Query Editor.
Copy and paste the following code as it is; it will create the function which converts the string to ints.
CREATE FUNCTION dbo.SplitInts
(
    @List      VARCHAR(MAX),
    @Delimiter VARCHAR(255)
)
RETURNS TABLE
AS
    RETURN ( SELECT Item = CONVERT(INT, Item) FROM
        ( SELECT Item = x.i.value('(./text())[1]', 'varchar(max)')
          FROM ( SELECT [XML] = CONVERT(XML, '<i>'
              + REPLACE(@List, @Delimiter, '</i><i>') + '</i>').query('.')
          ) AS a CROSS APPLY [XML].nodes('i') AS x(i) ) AS y
        WHERE Item IS NOT NULL
    );
GO
Create the following stored procedure:
CREATE PROCEDURE dbo.sp_DeleteMultipleId
    @List VARCHAR(MAX)
AS
BEGIN
    SET NOCOUNT ON;
    DELETE FROM TableName WHERE Id IN ( SELECT Id = Item FROM dbo.SplitInts(@List, ',') );
END
GO
Execute this SP using exec sp_DeleteMultipleId '1,2,3,12', where '1,2,3,12' is the string of IDs you want to delete.
You can convert your array to a string in C# and pass it as a stored procedure parameter, as below:
int[] intarray = { 1, 2, 3, 4, 5 };
string idList = string.Join(",", intarray);

SqlCommand command = new SqlCommand();
command.Connection = connection;
command.CommandText = "sp_DeleteMultipleId";
command.CommandType = CommandType.StoredProcedure;
command.Parameters.Add("@List", SqlDbType.VarChar).Value = idList;
This will delete multiple rows in a single stored proc call. All the best.
There is no support for arrays in SQL Server, but there are several ways you can pass a collection to a stored proc:
By using a DataTable
By using XML. Try converting your collection into an XML format and then pass it as an input to a stored procedure.
The link below may help you:
passing collection to a stored procedure
Starting in SQL Server 2016 you can bring the list in as an NVARCHAR() and use OPENJSON
DECLARE @EmployeeList nvarchar(500) = '[1,2,15]'

SELECT *
FROM Employees
WHERE ID IN (SELECT value FROM OPENJSON(@EmployeeList))
I've been searching through all the examples and answers for how to pass any array to SQL Server without the hassle of creating a new table type, till I found this link; below is how I applied it to my project:
-- The following code is going to get an array as a parameter and insert the values of that
-- array into another table
CREATE PROCEDURE Proc1
    @UserId int,         -- just an Id param
    @s nvarchar(max)     -- this is the array you're going to pass from C# code to your sproc
AS
BEGIN
    DECLARE @xml xml
    SET @xml = N'<root><r>' + replace(@s, ',', '</r><r>') + '</r></root>'

    INSERT INTO UserRole (UserID, RoleID)
    SELECT
        @UserId AS [UserId], t.value('.', 'varchar(max)') AS [RoleId]
    FROM @xml.nodes('//root/r') AS a(t)
END
Hope you enjoy it
Starting in SQL Server 2016 you can simply use the built-in STRING_SPLIT function.
Example:
WHERE (@LocationId IS NULL OR Id IN (SELECT value FROM STRING_SPLIT(@LocationId, ',')))
CREATE TYPE dumyTable
AS TABLE
(
    RateCodeId int,
    RateLowerRange int,
    RateHigherRange int,
    RateRangeValue int
);
GO

CREATE PROCEDURE spInsertRateRanges
    @dt AS dumyTable READONLY
AS
BEGIN
    SET NOCOUNT ON;

    INSERT tblRateCodeRange (RateCodeId, RateLowerRange, RateHigherRange, RateRangeValue)
    SELECT *
    FROM @dt
END
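To use it, declare a variable of the table type, fill it, and pass it to the procedure; the sample values here are made up:

DECLARE @rates dumyTable;

INSERT INTO @rates (RateCodeId, RateLowerRange, RateHigherRange, RateRangeValue)
VALUES (1, 0, 100, 10),
       (2, 101, 200, 20);

EXEC spInsertRateRanges @dt = @rates;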
It took me a long time to figure this out, so in case anyone needs it...
This is based on the SQL 2005 method in Aaron's answer, and using his SplitInts function (I just removed the delim param since I'll always use commas). I'm using SQL 2008 but I wanted something that works with typed datasets (XSD, TableAdapters) and I know string params work with those.
I was trying to get his function to work in a "where in (1,2,3)" type clause, and having no luck the straight-forward way. So I created a temp table first, and then did an inner join instead of the "where in". Here is my example usage, in my case I wanted to get a list of recipes that don't contain certain ingredients:
CREATE PROCEDURE dbo.SOExample1
(
    @excludeIngredientsString varchar(MAX) = ''
)
AS
/* Convert string to table of ints */
DECLARE @excludeIngredients TABLE (ID int)

INSERT INTO @excludeIngredients
SELECT ID = Item FROM dbo.SplitInts(@excludeIngredientsString)

/* Select recipes that don't contain any ingredients in our excluded table */
SELECT r.Name, r.Slug
FROM Recipes AS r LEFT OUTER JOIN
     RecipeIngredients AS ri INNER JOIN
     @excludeIngredients AS ei ON ri.IngredientID = ei.ID
     ON r.ID = ri.RecipeID
WHERE (ri.RecipeID IS NULL)
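A call would then look like this; the ingredient IDs are made-up examples:

EXEC dbo.SOExample1 @excludeIngredientsString = '3,5,11'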

Execute table valued function from row values

Given a table as below, where fn contains the name of an existing table-valued function and param contains the parameter to be passed to that function:
fn | param
----------------
'fn_one' | 1001
'fn_two' | 1001
'fn_one' | 1002
'fn_two' | 1002
Is there a way to get a resulting table like this by using set-based operations?
The resulting table would contain 0-* lines for each line from the first table.
param | resultval
---------------------------
1001 | 'fn_one_result_a'
1001 | 'fn_one_result_b'
1001 | 'fn_two_result_one'
1002 | 'fn_two_result_one'
I thought I could do something like (pseudo)
select t1.param, t2.resultval
from table1 t1
cross join exec sp_executesql('select * from '+t1.fn+'('+t1.param+')') t2
but that gives a syntax error at exec sp_executesql.
Currently we're using cursors to loop through the first table and insert into a second table with exec sp_executesql. While this does the job correctly, it is also the heaviest part of a frequently used stored procedure and I'm trying to optimize it. Changes to the data model would probably imply changes to most of the core of the application, and that would cost more than just throwing hardware at SQL Server.
I believe that this should do what you need, using dynamic SQL to generate a single statement that can give you your results and then using that with EXEC to put them into your table. The FOR XML trick is a common one for concatenating VARCHAR values together from multiple rows. It has to be written with the AS [text()] for it to work.
--=========================================================
-- Set up
--=========================================================
CREATE TABLE dbo.TestTableFunctions (function_name VARCHAR(50) NOT NULL, parameter VARCHAR(20) NOT NULL)

INSERT INTO dbo.TestTableFunctions (function_name, parameter)
VALUES ('fn_one', '1001'), ('fn_two', '1001'), ('fn_one', '1002'), ('fn_two', '1002')

CREATE TABLE dbo.TestTableFunctionsResults (function_name VARCHAR(50) NOT NULL, parameter VARCHAR(20) NOT NULL, result VARCHAR(200) NOT NULL)
GO

CREATE FUNCTION dbo.fn_one
(
    @parameter VARCHAR(20)
)
RETURNS TABLE
AS
RETURN
    SELECT 'fn_one_' + @parameter AS result
GO

CREATE FUNCTION dbo.fn_two
(
    @parameter VARCHAR(20)
)
RETURNS TABLE
AS
RETURN
    SELECT 'fn_two_' + @parameter AS result
GO

--=========================================================
-- The important stuff
--=========================================================
DECLARE @sql VARCHAR(MAX)

SELECT @sql =
(
    SELECT 'SELECT ''' + T1.function_name + ''', ''' + T1.parameter + ''', F.result FROM ' + T1.function_name + '(' + T1.parameter + ') F UNION ALL ' AS [text()]
    FROM
        TestTableFunctions T1
    FOR XML PATH ('')
)

SELECT @sql = SUBSTRING(@sql, 1, LEN(@sql) - 10)

INSERT INTO dbo.TestTableFunctionsResults
EXEC(@sql)

SELECT * FROM dbo.TestTableFunctionsResults

--=========================================================
-- Clean up
--=========================================================
DROP TABLE dbo.TestTableFunctions
DROP TABLE dbo.TestTableFunctionsResults
DROP FUNCTION dbo.fn_one
DROP FUNCTION dbo.fn_two
GO
The first SELECT statement (ignoring the setup) builds a string which has the syntax to run all of the functions in your table, returning the results all UNIONed together. That makes it possible to run the string with EXEC, which means that you can then INSERT those results into your table.
A couple of quick notes though... First, the functions must all return identical result set structures - the same number of columns with the same data types (technically, they might be able to be different data types if SQL Server can always do implicit conversions on them, but it's really not worth the risk). Second, if someone were able to update your functions table they could use SQL injection to wreak havoc on your system. You'll need that to be tightly controlled and I wouldn't let users just enter in function names, etc.
You cannot access objects by referencing their names stored as data in a SQL statement. One method would be to use a case expression:
select t1.*,
(case when fn = 'fn_one' then dbo.fn_one(t1.param)
when fn = 'fn_two' then dbo.fn_two(t1.param)
end) as resultval
from table1 t1 ;
Interestingly, you could encapsulate the case as another function, and then do:
select t1.*, dbo.fn_generic(t1.fn, t1.param) as resultval
from table1 t1 ;
However, in SQL Server, you cannot use dynamic SQL in a user-defined function (defined in T-SQL), so you would still need to use case or similar logic.
Either of these methods is likely to be much faster than a cursor, because they do not require issuing multiple queries.

Efficiently replacing many characters from a string

I would like to know the most efficient way of removing any occurrence of characters like , ; / " from a varchar column.
I have a function like this but it is incredibly slow. The table has about 20 million records.
CREATE FUNCTION [dbo].[Udf_getcleanedstring] (@s VARCHAR(255))
RETURNS VARCHAR(255)
AS
BEGIN
    DECLARE @o VARCHAR(255)

    SET @o = Replace(@s, '/', '')
    SET @o = Replace(@o, '-', '')
    SET @o = Replace(@o, ';', '')
    SET @o = Replace(@o, '"', '')

    RETURN @o
END
Whichever method you use, it is probably worth adding a
WHERE YourCol LIKE '%[-/;"]%'
unless you suspect that a very large proportion of rows will in fact contain at least one of the characters that need to be stripped. (The - goes first inside the brackets so it is treated as a literal character rather than a range.)
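For example, an UPDATE using the scalar function from the question could be filtered like this; the table and column names are placeholders:

UPDATE dbo.YourTable
SET YourCol = dbo.Udf_getcleanedstring(YourCol)
WHERE YourCol LIKE '%[-/;"]%'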
As you are using this in an UPDATE statement, simply adding the WITH SCHEMABINDING attribute to the function can massively improve things and allow the UPDATE to proceed row by row rather than needing to cache the entire operation in a spool first for Halloween Protection.
Nested REPLACE calls in TSQL are slow anyway though as they involve multiple passes through the strings.
You could knock up a CLR function as below (if you haven't worked with these before then they are very easy to deploy from an SSDT project as long as CLR execution is permitted on the server). The UPDATE plan for this too does not contain a spool.
The regular expression uses (?:) to denote a non-capturing group, with the various characters of interest separated by the alternation character | as /|-|;|\" (the " needs to be escaped in the string literal, so it is preceded by a backslash).
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Text.RegularExpressions;

public partial class UserDefinedFunctions
{
    private static readonly Regex regexStrip =
        new Regex("(?:/|-|;|\")", RegexOptions.Compiled);

    [SqlFunction]
    public static SqlString StripChars(SqlString Input)
    {
        return Input.IsNull ? null : regexStrip.Replace((string)Input, "");
    }
}
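If you prefer to register it by hand rather than publishing from an SSDT project, the T-SQL binding would look roughly like this; the assembly name and path here are assumptions, not part of the original answer:

CREATE ASSEMBLY StringUtils
FROM 'C:\Deploy\StringUtils.dll'   -- hypothetical path to the compiled assembly
WITH PERMISSION_SET = SAFE;
GO

CREATE FUNCTION dbo.StripChars (@Input nvarchar(max))
RETURNS nvarchar(max)
AS EXTERNAL NAME StringUtils.UserDefinedFunctions.StripChars;
GO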
I want to show the huge performance difference between using the two types of USER DEFINED FUNCTIONS:
a user TABLE function
a user SCALAR function
See the test example:
use AdventureWorks2012
go

-- create a table for the test
create table dbo.FindString (ColA int identity(1,1) not null primary key, ColB varchar(max));
go

-- fill the table with 1,000,000 copies of the sample text
declare @text varchar(max) = 'A web server can handle a Hypertext Transfer Protocol request either by reading
a file from its file ; system based on the URL <> path or by handling the request using logic that is specific
to the type of resource. In the case that special logic is invoked the query string will be available to that logic
for use in its processing, along with the path component of the URL.';
insert into dbo.FindString(ColB)
select @text
go 1000000

-- alter one of the scalar functions posted in the answers to this thread (it must already exist)
alter function [dbo].[udf_getCleanedString]
(
    @s varchar(max)
)
returns varchar(max)
as
begin
    return replace(replace(replace(replace(@s,'/',''),'-',''),';',''),'"','')
end
go

-- create a table function from the scalar function above
create function [dbo].[utf_getCleanedString]
(
    @s varchar(255)
)
returns table
as return
(
    select replace(replace(replace(replace(@s,'/',''),'-',''),';',''),'"','') as String
)
go

-- clear the buffer cache
DBCC DROPCLEANBUFFERS;
go

-- update process using the USER TABLE FUNCTION
update Dest with(rowlock) set
    Dest.ColB = D.String
from dbo.FindString Dest
cross apply utf_getCleanedString(Dest.ColB) as D
go

DBCC DROPCLEANBUFFERS;
go

-- update process using the USER SCALAR FUNCTION
update Dest with(rowlock) set
    Dest.ColB = dbo.udf_getCleanedString(Dest.ColB)
from dbo.FindString Dest
go
Comparing the execution plans of the two UPDATE statements shows the difference clearly: the table function (UTF) performs much better than the scalar function (USF). Both do the same thing, replacing characters in a string, but one returns a scalar and the other returns a table.
Another important thing to look at is the I/O statistics (SET STATISTICS IO ON;).
How about nesting them together in a single call:
create function [dbo].[udf_getCleanedString]
(
    @s varchar(255)
)
returns varchar(255)
as
begin
    return replace(replace(replace(replace(@s,'/',''),'-',''),';',''),'"','')
end
Or you may want to do an UPDATE on the table itself for the first time. Scalar functions are pretty slow.
Here is a similar question asked previously; I like the approach mentioned there:
How to Replace Multiple Characters in SQL?
declare @badStrings table (item varchar(50))

INSERT INTO @badStrings (item)
SELECT '>' UNION ALL
SELECT '<' UNION ALL
SELECT '(' UNION ALL
SELECT ')' UNION ALL
SELECT '!' UNION ALL
SELECT '?' UNION ALL
SELECT '#'

declare @testString varchar(100), @newString varchar(100)

set @testString = 'Juliet ro><0zs my s0x()rz!!?!one!#!#!#!'
set @newString = @testString

SELECT @newString = Replace(@newString, item, '') FROM @badStrings

select @newString -- returns 'Juliet ro0zs my s0xrzone'

SQL query help - declaration of variables within a function

I'm trying to write a SQL function but am having problems with declaring the variables I need for use in the WHERE clause.
Here's the code:
CREATE FUNCTION fn_getEmployeePolicies(@employeeid smallint)
RETURNS TABLE
AS
DECLARE @empLoc varchar
DECLARE @empBusA varchar
DECLARE @empType varchar

@empLoc = SELECT Location FROM fn_getEmployeeDetails(@employeeid)
@empBusA = SELECT BusinessArea FROM fn_getEmployeeDetails(@employeeid)
@empType = SELECT Type FROM fn_getEmployeeDetails(@employeeid)

RETURN select PolicyId, PolicyGroupBusinessArea.BusinessArea, policysignoff.PolicyGroupLocation.Location, policysignoff.PolicyGroupEmployeeType.EmployeeType
from policysignoff.PolicyGroupPolicy
LEFT JOIN policysignoff.PolicyGroupBusinessArea on policysignoff.PolicyGroupBusinessArea.PolicyGroupId = policysignoff.PolicyGroupPolicy.PolicyGroupId
LEFT JOIN policysignoff.PolicyGroupLocation on policysignoff.PolicyGroupLocation.PolicyGroupId = policysignoff.PolicyGroupPolicy.PolicyGroupId
LEFT JOIN policysignoff.PolicyGroupEmployeeType on policysignoff.PolicyGroupEmployeeType.PolicyGroupId = policysignoff.PolicyGroupPolicy.PolicyGroupId
where BusinessArea = @empBusA
AND EmployeeType = @empType
AND Location = @empLoc
GO
The logic I am trying to build in is:
'given an employeeId, return all "applicable" policies'
An "Applicable" policy is one where the Business Area, Location and EmployeeType match that of the user.
I am trying to use another function (fn_getEmployeeDetails) to return the BusArea, Loc & EmpType for the given user.
Then with the results of that (stored as variables) I can run my select statement to return the policies.
The problem I am having is trying to get the variables declared correctly within the function.
Any help or tips would be appreciated.
Thanks in advance!
Without knowing what your error actually is, I can only say that you probably do not want to use varchar as a datatype without specifying a length.
DECLARE #empLoc varchar will declare a varchar with length 1.
Chances are it should be something like varchar(255) or similar.
Second, to set the variables you'll either need to use SET with parentheses around the SELECT, or do the assignment inside the SELECT statement:
SET @empLoc = (SELECT Location FROM fn_getEmployeeDetails(@employeeid))
or
SELECT @empLoc = Location FROM fn_getEmployeeDetails(@employeeid)
There are subtle differences between these two methods, but for your purpose right now I don't think it's important.
EDIT:
Based on your comment you lack a BEGIN after AS, and an END before GO.
Basically, your function syntax is mixing up an "inline" table function with a "multi-statement" function.
Such a function "template" should look something like this:
CREATE FUNCTION <Table_Function_Name, sysname, FunctionName>
(
    -- Add the parameters for the function here
    <@param1, sysname, @p1> <data_type_for_param1, , int>,
    <@param2, sysname, @p2> <data_type_for_param2, , char>
)
RETURNS
<@Table_Variable_Name, sysname, @Table_Var> TABLE
(
    -- Add the column definitions for the TABLE variable here
    <Column_1, sysname, c1> <Data_Type_For_Column1, , int>,
    <Column_2, sysname, c2> <Data_Type_For_Column2, , int>
)
AS
BEGIN
    -- Fill the table variable with the rows for your result set
    RETURN
END
GO
(script taken from sql server management studio)
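Applied to the question's function, a multi-statement version might look roughly like the sketch below. The return table definition and the varchar lengths are assumptions, since the question does not show them:

CREATE FUNCTION fn_getEmployeePolicies (@employeeid smallint)
RETURNS @policies TABLE
(
    PolicyId int,
    BusinessArea varchar(255),
    Location varchar(255),
    EmployeeType varchar(255)
)
AS
BEGIN
    DECLARE @empLoc varchar(255)
    DECLARE @empBusA varchar(255)
    DECLARE @empType varchar(255)

    -- all three details can be fetched in a single assignment SELECT
    SELECT @empLoc = Location,
           @empBusA = BusinessArea,
           @empType = Type
    FROM fn_getEmployeeDetails(@employeeid)

    INSERT INTO @policies (PolicyId, BusinessArea, Location, EmployeeType)
    SELECT PolicyId, PolicyGroupBusinessArea.BusinessArea, policysignoff.PolicyGroupLocation.Location, policysignoff.PolicyGroupEmployeeType.EmployeeType
    FROM policysignoff.PolicyGroupPolicy
    LEFT JOIN policysignoff.PolicyGroupBusinessArea ON policysignoff.PolicyGroupBusinessArea.PolicyGroupId = policysignoff.PolicyGroupPolicy.PolicyGroupId
    LEFT JOIN policysignoff.PolicyGroupLocation ON policysignoff.PolicyGroupLocation.PolicyGroupId = policysignoff.PolicyGroupPolicy.PolicyGroupId
    LEFT JOIN policysignoff.PolicyGroupEmployeeType ON policysignoff.PolicyGroupEmployeeType.PolicyGroupId = policysignoff.PolicyGroupPolicy.PolicyGroupId
    WHERE BusinessArea = @empBusA
      AND EmployeeType = @empType
      AND Location = @empLoc

    RETURN
END
GO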

How do I make a function in SQL Server that accepts a column of data?

I made the following function in SQL Server 2008 earlier this week that takes two parameters and uses them to select a column of "detail" records and returns them as a single varchar list of comma separated values. Now that I get to thinking about it, I would like to take this table and application-specific function and make it more generic.
I am not well-versed in defining SQL functions, as this is my first. How can I change this function to accept a single "column" worth of data, so that I can use it in a more generic way?
Instead of calling:
SELECT ejc_concatFormDetails(formuid, categoryName)
I would like to make it work like:
SELECT concatColumnValues(SELECT someColumn FROM SomeTable)
Here is my function definition:
FUNCTION [DNet].[ejc_concatFormDetails](@formuid AS int, @category as VARCHAR(75))
RETURNS VARCHAR(1000) AS
BEGIN
    DECLARE @returnData VARCHAR(1000)
    DECLARE @currentData VARCHAR(75)

    DECLARE dataCursor CURSOR FAST_FORWARD FOR
        SELECT data FROM DNet.ejc_FormDetails WHERE formuid = @formuid AND category = @category

    SET @returnData = ''

    OPEN dataCursor
    FETCH NEXT FROM dataCursor INTO @currentData

    WHILE (@@FETCH_STATUS = 0)
    BEGIN
        SET @returnData = @returnData + ', ' + @currentData
        FETCH NEXT FROM dataCursor INTO @currentData
    END

    CLOSE dataCursor
    DEALLOCATE dataCursor

    RETURN SUBSTRING(@returnData, 3, 1000)
END
As you can see, I am selecting the column data within my function and then looping over the results with a cursor to build my comma separated varchar.
How can I alter this to accept a single parameter that is a result set and then access that result set with a cursor?
Others have answered your main question - but let me point out another problem with your function - the terrible use of a CURSOR!
You can easily rewrite this function to use no cursor, no WHILE loop - nothing like that. It'll be tons faster, and a lot easier, too - much less code:
FUNCTION DNet.ejc_concatFormDetails
    (@formuid AS int, @category as VARCHAR(75))
RETURNS VARCHAR(1000)
AS
    RETURN
        SUBSTRING(
            (SELECT ', ' + data
             FROM DNet.ejc_FormDetails
             WHERE formuid = @formuid AND category = @category
             FOR XML PATH('')
            ), 3, 1000)
The trick is to use FOR XML PATH('') - this returns a concatenated list of your data columns and your fixed ', ' delimiters. Add a SUBSTRING() on that and you're done! As easy as that... no dog-slow CURSOR, no messy concatenation and all that gooey code - just one statement and that's all there is.
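Usage is the same as with the original cursor-based version; the parameter values here are made up for illustration:

SELECT DNet.ejc_concatFormDetails(42, 'SomeCategory')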
You can use table-valued parameters. You first create a user-defined table type and then take it as a READONLY parameter (the function body below is just a placeholder):
CREATE TYPE dbo.MyTableType AS TABLE (
    Column1 int,
    Column2 nvarchar(50),
    Column3 datetime
);
GO

CREATE FUNCTION dbo.MyFunction (@Data dbo.MyTableType READONLY)
RETURNS NVARCHAR(MAX)
AS BEGIN
    /* here you can do what you want */
    RETURN NULL
END
You can use Table Valued Parameters as of SQL Server 2008, which would allow you to pass a TABLE variable in as a parameter. The limitations and examples for this are all in that linked article.
However, I'd also point out that using a cursor could well be painful for performance.
You don't need to use a cursor, as you can do it all in 1 SELECT statement:
SELECT @MyCSVString = COALESCE(@MyCSVString + ', ', '') + data
FROM DNet.ejc_FormDetails
WHERE formuid = @formuid AND category = @category
No need for a cursor
Your question is a bit unclear. In your first SQL statement it looks like you're trying to pass columns to the function, but there is no WHERE clause. In the second SQL statement you're passing a collection of rows (results from a SELECT). Can you supply some sample data and expected outcome?
Without fully understanding your goal, you could look into changing the parameter to be a table variable. Fill a table variable local to the calling code and pass that into the function. You could do that as a stored procedure though and wouldn't need a function.