SQL scalar-valued functions incredibly slow

SQL Server 2012 - I have a (complicated) view, and one of the columns needs to have anything non-numeric stripped out. The following works up to a point:
STUFF(dbo.campaign_tracking_clicks.tt_cpn, 1, PATINDEX('%[0-9]%', dbo.campaign_tracking_clicks.tt_cpn) - 1, '') AS emailerid
I get an error whenever anything but numbers appears at the end of the value.
I have a scalar-valued function:
/****** Object: UserDefinedFunction [dbo].[KeepNumCharacters] Script Date: 10/11/2016 1:05:51 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER FUNCTION [dbo].[KeepNumCharacters](@Temp varchar(100)) RETURNS varchar(100)
AS
BEGIN
    WHILE PATINDEX('%[^0-9]%', @Temp) > 0
        SET @Temp = STUFF(@Temp, PATINDEX('%[^0-9]%', @Temp), 1, '')
    RETURN @Temp
END
I'm using;
dbo.KeepNumCharacters(dbo.campaign_tracking_clicks.tt_cpn) AS emailerid
But it's taking a very long time to execute. I've searched and searched without finding an alternative.

Yes, scalar user-defined functions often make queries slow. Sometimes very slow.
See for example T-SQL User-Defined Functions: the good, the bad, and the ugly.
In your case I don't see how to rewrite the scalar function into an inlined table-valued function.
One option is to add an extra column to your table that would hold the result of your scalar function calculations. You can write a trigger that would keep its content in sync with the main column as the main column changes.
It will slow down the updates and inserts, but it will speed up your SELECT queries.
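As a hedged sketch of that approach (the new column name emailerid and the key column click_id are assumptions, not from the question):

```sql
-- Sketch only: persist the cleaned value and keep it in sync with a trigger.
-- emailerid (new column) and click_id (the table's key) are assumed names.
ALTER TABLE dbo.campaign_tracking_clicks ADD emailerid varchar(100) NULL;
GO
CREATE TRIGGER dbo.trg_clicks_emailerid
ON dbo.campaign_tracking_clicks
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Re-run the scalar function only for the rows that changed.
    UPDATE t
    SET t.emailerid = dbo.KeepNumCharacters(i.tt_cpn)
    FROM dbo.campaign_tracking_clicks AS t
    JOIN inserted AS i ON i.click_id = t.click_id;
END
GO
```

The scalar function still runs, but only once per modified row at write time, so SELECT queries against the view read a plain column.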

Related

How to do row-level security allowing multiple accessId to match the row's id in SQL Server?

I use a function as a BLOCK and FILTER predicate in order to implement row-level security.
CREATE FUNCTION [dbo].[AccessPredicate]
(@accessId int)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
SELECT 1 AS AccessPredicateResult
WHERE @accessId = CAST(SESSION_CONTEXT(N'accessId') AS int)
I now need to modify the session context variable to hold any number of accessIds. Since it is a SQL_VARIANT type, I cannot use a TABLE variable or a custom type. I resorted to a CSV list as NVARCHAR, e.g. 1,2,3.
That means I need to update the SELECT query in my function to do a 'WHERE @accessId IN (...)'. Of course I can't just
SELECT 1 AS AccessPredicateResult
WHERE @accessId IN (CAST(SESSION_CONTEXT(N'accessId') AS VARCHAR)) -- where accessId = '1,2,3'
Without liking the implications, I tried putting the whole query into a @sql VARCHAR variable and then running EXEC sp_executesql @sql, but that failed because SQL Server cannot have a DECLARE in a table-valued function (for whatever reason!?). Nor does a table-valued function allow EXECing a stored procedure, as far as my research suggests.
I'm blocked by this, can't think of a way to do this now.
So I've got the allowed accessId in my C# code and somehow need to persist them in the session context (or are there other, well performing, methods?) so that my predicate can use it to confirm the row in question is accessible. What's the best way forward now?
Thanks
You can use STRING_SPLIT in a subquery:
CREATE OR ALTER FUNCTION dbo.AccessPredicate
(@accessId int)
RETURNS TABLE
WITH SCHEMABINDING
AS RETURN
SELECT 1 AS AccessPredicateResult
WHERE @accessId IN (
    SELECT s.value
    FROM STRING_SPLIT(CAST(SESSION_CONTEXT(N'accessId') AS varchar(4000)), ',') AS s
);
I warn you though, that RLS is not foolproof security, and can be circumvented by a user able to craft custom queries.
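For context, a minimal sketch of how such a predicate is typically bound to a table (the table name dbo.MyTable and its AccessId column are placeholders, not from the question):

```sql
-- Sketch only: dbo.MyTable and AccessId are placeholder names.
CREATE SECURITY POLICY dbo.AccessPolicy
ADD FILTER PREDICATE dbo.AccessPredicate(AccessId) ON dbo.MyTable,
ADD BLOCK PREDICATE dbo.AccessPredicate(AccessId) ON dbo.MyTable
WITH (STATE = ON);
```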
As a side note: Why does SQL Server not allow variables or EXEC in functions?
EXEC is not allowed because a function must not be side-effecting.
Furthermore, an inline table function must be a single statement, therefore it makes no sense to DECLARE variables. It is effectively a parameterized view, and functions that way as far as the compiler is concerned.
If you do need to "declare" variables, you can use a VALUES virtual table along with CROSS APPLY. For example:
CREATE OR ALTER FUNCTION dbo.AccessPredicate
(@accessId int)
RETURNS TABLE
WITH SCHEMABINDING
AS RETURN
SELECT c2.Calc2 AS AccessPredicateResult
FROM (VALUES(
SomeCalculationHere
)) AS c1(Calc1)
CROSS APPLY (VALUES(
SomeFunction(c1.Calc1)
)) AS c2(Calc2);

Using results of one stored procedure in another stored procedure - SQL Server

Is there any way to use the results of one stored procedure in another stored procedure without using a table variable or temp table? The logic is as follows:
IF (@example = 1)
BEGIN
    DECLARE @temp TABLE (Id INT);
    INSERT INTO @temp
    EXEC [Procedure1] @ItemId = @StockId;
    SET @Cost = (SELECT TOP 1 Id FROM @temp);
END
Ideally I would like to know if there is a way to do this without having to use a temp table. I've looked around online but can't find anything that works. Any advice would be great.
In general, if you want to use user-defined code in a SELECT, then it is better to phrase the code as a user-defined function rather than a user-defined procedure.
That is, procedures should be used for their side effects and functions should be used for their return values.
That said, you can use openquery (documented here) to run an exec on a linked server. The linked server can be the server you are running on.
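A hedged sketch of the OPENQUERY route (the loopback linked server name LOOPBACK and the parameter value are assumptions; note that OPENQUERY only accepts a literal query string, so parameters cannot be bound into it):

```sql
-- Sketch only: assumes a linked server named LOOPBACK that points back
-- at the current instance (set up via sp_addlinkedserver).
SELECT TOP (1) Id
FROM OPENQUERY(LOOPBACK, 'EXEC dbo.Procedure1 @ItemId = 42');
```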

Pass list to SQL custom type

I want to pass a list of school years from Coldfusion 10 to a stored procedure on SQL Server 2008 R2. I created a custom type in MSSQL:
CREATE TYPE YearListType AS Table (years VARCHAR(10))
And then I declare it in my stored procedure:
CREATE PROCEDURE [getCounts]
@years YearListType READONLY
....
SELECT .......
WHERE school_year IN (SELECT * FROM @years)
Now in my Coldfusion, I call the stored procedure this way:
<cfstoredproc procedure="[getCounts]" datasource="...">
<cfprocparam cfsqltype="cf_sql_varchar" value="#yrlist#">
....
</cfstoredproc>
The variable yrlist is a comma delimited list. Sample value looks like:
"2001-2002,2002-2003,2003-2004"
When I execute, I get a CF error:
Operand type clash: varchar is incompatible with YearListType
I understand the error, but I don't know how else to pass the list. I tried adding list="yes" to the cfprocparam, but I get an error saying the list parameter isn't compatible with cfprocparam. As far as I know, there is no cf_sql_list type, is there?
How should I pass a list of values to my stored procedure? Should I even use a custom SQL data type at all?
I've read this and this, but I can't figure out the Coldfusion side of it.
I've done this before. You first need to create a function to split the list apart, then call it in your SQL like this. FTR: I didn't create this; I found it via Google several years ago. If I had the original source links I would have referenced them here. I'm sure Google still has them. Just search for "T-SQL fnSplit".
SELECT blah FROM foo WHERE bar IN (SELECT item FROM dbo.fnSplit(@valueList, ','))
This is the function you need to create.
USE [DBNAME]
GO
/****** Object: UserDefinedFunction [dbo].[fnSplit] Script Date: 09/06/2011 19:08:22 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE FUNCTION [dbo].[fnSplit](
@sInputList VARCHAR(8000) -- List of delimited items
, @sDelimiter VARCHAR(8000) = ',' -- delimiter that separates items
) RETURNS @List TABLE (item VARCHAR(8000))
BEGIN
DECLARE @sItem VARCHAR(8000)
WHILE CHARINDEX(@sDelimiter,@sInputList,0) <> 0
BEGIN
SELECT
@sItem=RTRIM(LTRIM(SUBSTRING(@sInputList,1,CHARINDEX(@sDelimiter,@sInputList,0)-1))),
@sInputList=RTRIM(LTRIM(SUBSTRING(@sInputList,CHARINDEX(@sDelimiter,@sInputList,0)+LEN(@sDelimiter),LEN(@sInputList))))
IF LEN(@sItem) > 0
INSERT INTO @List SELECT @sItem
END
IF LEN(@sInputList) > 0
INSERT INTO @List SELECT @sInputList -- Put the last item in
RETURN
END
GO
ColdFusion does not have a "list" attribute for <cfprocparam>, but it does for <cfqueryparam>. So you could try calling your stored procedure using a sql statement, like you would from within SQL Server Management Studio, and use <cfqueryparam> to wrap your parameters.
This still won't address the issue that ColdFusion does not (and cannot) understand SQL Server custom datatypes. If you are generating the year list programmatically rather than from user input (or you are sanitizing the user input before using it), you can omit the cfsqltype attribute from <cfqueryparam>.

SQL to Oracle Date Issue

We are currently in the middle of a migration from SQL Server to Oracle. What best practices should I apply?
We have also encountered some problems, such as the DATEADD function not working in Oracle.
MSSQL Code
USE [TEST]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER FUNCTION [dbo].[GET_MONTHS_LAST_DAY](@MON int)
RETURNS DATETIME
AS
BEGIN
RETURN DATEADD(dd, -DAY(DATEADD(m,1,getdate())), DATEADD(m,-@MON+1,datediff(dd,0,getdate())))
END
Converted Oracle
create or replace
FUNCTION GET_MONTHS_LAST_DAY
(
v_MON IN NUMBER
)
RETURN DATE
AS
BEGIN
RETURN utils.dateadd('DD', -utils.day_(utils.dateadd('M', 1, SYSDATE)), utils.dateadd('M', -v_MON + 1, utils.datediff('DD', 0, SYSDATE)));
END;
Any idea why I cannot compile the Oracle function? The only thing I see here is that the DATEADD function is not available in Oracle. Thanks.
Your script looks like an Oracle SQL Developer automatic conversion script.
You need to generate the utils package. You can use the "generate utils package" button (the 2nd button in the capture) in the utils window.
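As an alternative sketch (not from the original answer), the function can also be compiled without the generated utils package by using native Oracle date functions, assuming the intent of the T-SQL original is "the last day of the month v_MON months before the current month":

```sql
-- Sketch only: native Oracle equivalent using ADD_MONTHS and LAST_DAY.
CREATE OR REPLACE FUNCTION GET_MONTHS_LAST_DAY (v_MON IN NUMBER)
RETURN DATE
AS
BEGIN
  -- TRUNC(SYSDATE) drops the time portion, matching the midnight result
  -- produced by datediff(dd, 0, getdate()) in the T-SQL version.
  RETURN LAST_DAY(ADD_MONTHS(TRUNC(SYSDATE), -v_MON));
END;
```

For example, with v_MON = 1 on 15 January this returns 31 December of the previous year, the same as the T-SQL function on non-edge dates.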

Print Dynamic Parameter Values

I've used dynamic SQL for many tasks and continuously run into the same problem: Printing values of variables used inside the Dynamic T-SQL statement.
EG:
Declare @SQL nvarchar(max), @Params nvarchar(max), @DebugMode bit, @Foobar int
select @DebugMode=1,@Foobar=364556423
set @SQL='Select @Foobar'
set @Params=N'@Foobar int'
if @DebugMode=1 print @SQL
exec sp_executeSQL @SQL,@Params
,@Foobar=@Foobar
The print results of the above code are simply "Select @Foobar". Is there any way to dynamically print the values and variable names of the SQL being executed? Or, when doing the print, replace parameters with their actual values so the SQL is re-runnable?
I have played with creating a function or two to accomplish something similar, but ran into data type conversions, pattern-matching truncation issues, and non-dynamic solutions. I'm curious how other developers solve this issue without printing each and every variable manually.
I don't believe the evaluated statement is available, meaning your example query 'Select @Foobar' is never persisted anywhere as 'Select 364556423'.
Even in a profiler trace you would see the statement hit the cache as '(@Foobar int)select @foobar'.
This makes sense, since a big benefit of using sp_executesql is that it can cache the statement in a reliable form without variables evaluated; otherwise, if it replaced the variables and executed that statement, we would just see execution plan bloat.
Updated: here's a step in the right direction.
All of this could be cleaned up and wrapped in a nice function, with inputs (@Statement, @ParamDef, @ParamVal), that would return the "prepared" statement. I'll leave some of that as an exercise for you, but please post back when you improve it!
It uses the split function from here.
set nocount on;
declare @Statement varchar(100), -- the raw sql statement
@ParamDef varchar(100), -- the raw param definition
@ParamVal xml -- the ParamName -to- ParamValue mapping as xml
-- the internal params:
declare @YakId int,
@Date datetime
select @YakId = 99,
@Date = getdate();
select @Statement = 'Select * from dbo.Yak where YakId = @YakId and CreatedOn > @Date;',
@ParamDef = '@YakId int, @Date datetime';
-- you need to construct this xml manually... maybe use a table var to clean this up
set @ParamVal = ( select *
from ( select '@YakId', cast(@YakId as varchar(max)) union all
select '@Date', cast(@Date as varchar(max))
) d (Name, Val)
for xml path('Parameter'), root('root')
)
-- do the work
declare @pStage table (pName varchar(100), pType varchar(25), pVal varchar(100));
;with
c_p (p)
as ( select replace(ltrim(rtrim(s)), ' ', '.')
from dbo.Split(',', @ParamDef)d
),
c_s (pName, pType)
as ( select parsename(p, 2), parsename(p, 1)
from c_p
),
c_v (pName, pVal)
as ( select p.n.value('Name[1]', 'varchar(100)'),
p.n.value('Val[1]', 'varchar(100)')
from @ParamVal.nodes('root/Parameter')p(n)
)
insert into @pStage
select s.pName, s.pType, case when s.pType = 'datetime' then quotename(v.pVal, '''') else v.pVal end -- expand this case to deal with other types
from c_s s
join c_v v on
s.pName = v.pName
-- replace pName with pValue in statement
select @Statement = replace(@Statement, pName, isnull(pVal, 'null'))
from @pStage
where charindex(pName, @Statement) > 0;
print @Statement;
On the topic of how most people do it, I will only speak to what I do:
Create a test script that will run the procedure using a wide range of valid and invalid input. If the parameter is an integer, I will send it '4' (instead of 4), but I'll only try 1 oddball string value like 'agd'.
Run the values against a data set of representative size and data value distribution for what I'm doing. Use your favorite data generation tool (there are several good ones on the market) to speed this up.
I'm generally debugging like this on a more ad hoc basis, so collecting the results from the SSMS results window is as far as I need to take it.
The best way I can think of is to capture the query as it comes across the wire using a SQL Trace. If you place something unique in your query string (as a comment), it is very easy to apply a filter for it in the trace so that you don't capture more than you need.
However, it isn't all peaches & cream.
This is only suitable for a Dev environment, maybe QA, depending on how rigid your shop is.
If the query takes a long time to run, you can mitigate that by adding "TOP 1", "WHERE 1=2", or a similar limiting clause to the query string when @DebugMode = 1. Otherwise, you could end up waiting a while for it to finish each time.
For long queries where you can't add something to the query string only for debug mode, you could capture the command text in a StmtStarted event, then cancel the query as soon as you have the command.
If the query is an INSERT/UPDATE/DELETE, you will need to force a rollback if @DebugMode = 1 and you don't want the change to occur. If you're not currently using an explicit transaction, doing that would add extra overhead.
Should you go this route, there is some automation you can achieve to make life easier. You can create a template for the trace creation and start/stop actions. You can log the results to a file or table and process the command text from there programmatically.