How to make this SQL query more dynamic and linear?

I have a view named VIEW_FORM_SALE_SUBMISSION_INFO
and I have a query like this:
select a.FormSl
from VIEW_FORM_SALE_SUBMISSION_INFO as a
where FormSl between convert(int,'113990') and convert(int,'1131000')
My objective:
I want to pass the two values inside the convert() calls as parameters.
I also want this query to be more linear, i.e. can I do these conversions before the query runs, so that I can use the BETWEEN operator as simply as possible?

How about this?
select a.FormSl
from VIEW_FORM_SALE_SUBMISSION_INFO as a
where FormSl between @param1 and @param2
Where @param1 and @param2 are integer parameters?
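For illustration, here is one hedged sketch of how those integer parameters could be supplied, by wrapping the query in a stored procedure (the procedure name dbo.GetFormSlRange is made up for this example):
CREATE PROCEDURE dbo.GetFormSlRange
(
    @param1 INT,
    @param2 INT
)
AS
BEGIN
    SET NOCOUNT ON;
    -- The bounds arrive as integers, so no CONVERT is needed in the query itself
    SELECT a.FormSl
    FROM VIEW_FORM_SALE_SUBMISSION_INFO AS a
    WHERE a.FormSl BETWEEN @param1 AND @param2;
END
GO
EXEC dbo.GetFormSlRange @param1 = 113990, @param2 = 1131000;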

To parameterise this, you could make a table valued function.
CREATE FUNCTION myFunction (@lower_bound AS VARCHAR(32), @upper_bound AS VARCHAR(32))
RETURNS TABLE AS
RETURN
select a.FormSl
from VIEW_FORM_SALE_SUBMISSION_INFO as a
where FormSl between convert(int,@lower_bound) and convert(int,@upper_bound)
GO
SELECT * FROM dbo.myFunction('11111', '22222')
In terms of optimising the Convert, what you have is as optimal as it will get.
If you want to improve it further, don't pass in strings; pass in integers in the first place.
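For example, a sketch of the same inline table-valued function taking INT parameters directly (the name myFunctionInt is made up here), so the CONVERT disappears entirely:
CREATE FUNCTION dbo.myFunctionInt (@lower_bound INT, @upper_bound INT)
RETURNS TABLE AS
RETURN
(
    -- No string-to-int conversion needed; callers pass integers
    SELECT a.FormSl
    FROM VIEW_FORM_SALE_SUBMISSION_INFO AS a
    WHERE a.FormSl BETWEEN @lower_bound AND @upper_bound
)
GO
SELECT * FROM dbo.myFunctionInt(113990, 1131000)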

Related

Table Variable in SQL Server Function from Input Columns

I would like to create a function that returns a column based on input from three other columns. As temporary tables are not allowed within functions, is it possible to create a table variable from three input columns?
CREATE FUNCTION dbo.convert_value(
@CustomerID VARCHAR(MAX),
@CustomerValue VARCHAR(MAX),
@CustomerDescription VARCHAR(MAX)
)
RETURNS FLOAT
AS BEGIN
DECLARE @CustomerTable TABLE (
UniquePatientUID VARCHAR(MAX),
ResultValue VARCHAR(MAX),
PracticeDescription VARCHAR(MAX)
);
-- How can I insert @UniquePatientUID, @ResultValue and @PracticeDescription into @CustomerTable?
END
The context of this question is that I have a SQL script that uses temporary tables and many UPDATE and ALTER TABLE statements, that I need to convert into a function. That script begins with the three columns mentioned, and adds a fourth column, Converted_Value, which is calculated with several hundred lines of code and manipulating temporary tables. Is there any hope here?
A table variable insert is really no different from a regular insert, so don't use temp tables. Rather than altering the table later, just declare it with that fourth column up front and allow it to be NULL.
INSERT INTO @CustomerTable (UniquePatientUID, ResultValue, PracticeDescription)
VALUES (@CustomerID, @CustomerValue, @CustomerDescription);
Don't forget to return the FLOAT.
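Putting those pieces together, a minimal sketch of the whole function might look like the following; the UPDATE is only a placeholder standing in for the real multi-hundred-line calculation, which isn't shown in the question:
CREATE FUNCTION dbo.convert_value(
    @CustomerID VARCHAR(MAX),
    @CustomerValue VARCHAR(MAX),
    @CustomerDescription VARCHAR(MAX)
)
RETURNS FLOAT
AS BEGIN
    DECLARE @CustomerTable TABLE (
        UniquePatientUID VARCHAR(MAX),
        ResultValue VARCHAR(MAX),
        PracticeDescription VARCHAR(MAX),
        Converted_Value FLOAT NULL  -- fourth column declared up front, nullable
    );

    INSERT INTO @CustomerTable (UniquePatientUID, ResultValue, PracticeDescription)
    VALUES (@CustomerID, @CustomerValue, @CustomerDescription);

    -- placeholder only: the real calculation would populate Converted_Value here
    UPDATE @CustomerTable SET Converted_Value = TRY_CAST(ResultValue AS FLOAT);

    RETURN (SELECT TOP (1) Converted_Value FROM @CustomerTable);
END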
A table variable is a table, so you can just use INSERT INTO ... VALUES ....
INSERT INTO @CustomerTable (UniquePatientUID, ResultValue, PracticeDescription)
VALUES
(@UniquePatientUID, @ResultValue, @PracticeDescription)
Unless you need a table variable for some specific reason, why not just work with the variables as a derived table expression? i.e.
;with inputs (UniquePatientUID, ResultValue, PracticeDescription) as
(
select @UniquePatientUID, @ResultValue, @PracticeDescription
)
select *
from inputs
Table variables fall out of scope after the function call, and you can't pass table variables into or out of functions either. So really, all a table variable does here is serve as a familiar piece of bookkeeping for SQL developers. But table variables aren't free, which is the only reason I'm curious what your use case is.
If you don't need to return them as a set or something similar, you can just interact with the variables directly too.
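For instance, if the calculation can be expressed against the parameters alone, a sketch of the direct approach (the name dbo.convert_value_direct is mine, and TRY_CAST is only a stand-in for the real conversion logic):
CREATE FUNCTION dbo.convert_value_direct(
    @CustomerID VARCHAR(MAX),
    @CustomerValue VARCHAR(MAX),
    @CustomerDescription VARCHAR(MAX)
)
RETURNS FLOAT
AS BEGIN
    -- No table variable at all: compute straight from the scalar parameters
    RETURN TRY_CAST(@CustomerValue AS FLOAT);
END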

FnSplit not working for SQL stored procedure to take multiple parameters

I have a stored procedure that currently takes in one value (ChainId) for a parameter. I am trying to allow the user to select multiple values of ChainId. My WHERE clause is below. Could someone help point me in a better direction than I am going now? Currently the query runs but returns no data if I select multiple values for the parameter.
WHERE EndAuth is null AND CL.CHIND in(
SELECT [Value] FROM dbo.FnSplit(@ChainId, ','))
ORDER BY CL.CHIND
This is a popular function in SQL Server, so I'll assume you're working with that. Make sure your parameter is of type Varchar(MAX). @ChainId is passed as your string (ideally from SSRS) and ',' is passed as your delimiter. In SSRS, if you have a text box for your users to manually enter multiple values, they will enter something like 'value1, value2, value3'.
Test this out:
Declare @Yes_No Varchar(Max)
Set @Yes_No = 'y,n'
Select @Yes_No
Select * from SplitString('y,n',',')
Select * from SplitString(@Yes_No,',')
Your results will be
y,n
----
y
n
----
y
n
The reason I say to use Varchar(Max), and not Int or Varchar(10) for example, is that a shorter type would truncate the parameter and stop the function from reading all of the values.
Try this:
Declare @Yes_No Varchar(1)
Set @Yes_No = 'y,n'
Select * from SplitString(@Yes_No,',')
The result will be:
y
The reason is that the variable only holds a value one character in length, so the function receives just 'y' and there isn't much to split.
This is just the way SSRS accepts parameters. FN_Split isn't necessarily a built-in function, but a widely popular one designed to let you pass multiple values in a single string with a pre-specified delimiter. So make sure you also go to your parameter in the report and specify that it allows multiple values. You will also want to supply a list of potential values for your users to select from, either by manually populating a small list or by providing another data source in the form of a stored procedure or table.
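If your database doesn't already have a splitter deployed, here is a minimal sketch of one; the exact FnSplit in your environment may differ, but this version returns a [Value] column to match the WHERE clause above:
CREATE FUNCTION dbo.FnSplit (@List VARCHAR(MAX), @Delimiter VARCHAR(5))
RETURNS @Items TABLE ([Value] VARCHAR(MAX))
AS
BEGIN
    DECLARE @pos INT = CHARINDEX(@Delimiter, @List);
    WHILE @pos > 0
    BEGIN
        -- take everything before the next delimiter as one value
        INSERT INTO @Items ([Value]) VALUES (LTRIM(RTRIM(LEFT(@List, @pos - 1))));
        SET @List = SUBSTRING(@List, @pos + LEN(@Delimiter), LEN(@List));
        SET @pos = CHARINDEX(@Delimiter, @List);
    END
    IF LEN(@List) > 0
        INSERT INTO @Items ([Value]) VALUES (LTRIM(RTRIM(@List)));
    RETURN;
END
On SQL Server 2016 and later you could also use the built-in STRING_SPLIT function instead of a hand-rolled splitter.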

How do I use collection as a parameter in SQL Server (2008) SP?

So, I have this stored procedure (let's call it SP1) that takes two strings as parameters (say string1 and string2).
However, I need to call this stored procedure a lot from my code, so to make things faster I was thinking of doing it in bulk: collecting all of the parameters into a collection of some sort and then sending that over in one go.
From what I understand, I need to use a DataTable on the code side and a custom table type as the parameter on the SQL Server side - OK, cool. But now what?
How do I get from there to the point where I actually run
EXEC SP1 string1, string2 or something along those lines?
I'm not sure whether this is what you want to achieve: instead of passing those two string parameters repeatedly, you would like to pass a table holding all of the string rows?
If so, you could use a UDF within the SP.
Check here:
CREATE FUNCTION [dbo].[Name]
(
@parameter VARCHAR(MAX) -- give the parameter whatever type you need
)
RETURNS @Table TABLE
(
col1 varchar(50),
col2 varchar(50)
)
AS
BEGIN
-- the query that inserts each record into @Table goes here
RETURN
END
Then use this [Name] UDF in your SP.
Create a table type
CREATE TYPE Strings AS TABLE ( String1 VARCHAR(50), String2 VARCHAR(50) )
Alter your procedure to accept table type as input
ALTER PROCEDURE Usp_procname (@strings STRINGS READONLY)
AS
-- ...
To call the procedure
DECLARE @strings STRINGS
INSERT INTO @strings
(String1,String2)
SELECT 'String1','String2'
UNION ALL
SELECT 'String3','String4'
UNION ALL
SELECT 'String5','String6'
EXEC Usp_procname @strings
This way you can pass more than one string in a single call. Make sure you update the logic inside the procedure to handle more than one string, for example with a set-based statement like the sketch below.
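As a rough sketch of what handling more than one string set-based could look like inside the procedure (dbo.SomeTable and its columns are invented purely for illustration):
ALTER PROCEDURE Usp_procname (@strings STRINGS READONLY)
AS
BEGIN
    SET NOCOUNT ON;
    -- process every row of the table-valued parameter in a single statement
    INSERT INTO dbo.SomeTable (ColumnA, ColumnB)
    SELECT s.String1, s.String2
    FROM @strings AS s;
END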
In a case like this, I usually concatenate the strings and split them on the server side, or even pass XML if I have multiple columns. SQL Server is quite fast at processing XML. You can try all the methods, check the processing time, and then choose the best one.
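A minimal sketch of the XML variant, with made-up element and attribute names, shredding the document with nodes() and value():
DECLARE @xml XML = N'<rows>
  <row string1="String1" string2="String2" />
  <row string1="String3" string2="String4" />
</rows>';
-- each <row> element becomes one result row
SELECT r.n.value('@string1', 'varchar(50)') AS String1,
       r.n.value('@string2', 'varchar(50)') AS String2
FROM @xml.nodes('/rows/row') AS r(n);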

How do I pass an array of values to a function in Sybase (15.x)

It seems that Sybase omitted a median() function in Sybase ASE (15.x), while the typical sum(), min(), max(), count(), etc. are available.
As a result, I was thinking that I could create a UDF (User Defined Function) that would fill that gap. I see a few examples of UDF taking a value (or fixed set of values) and returning a value; like this one: http://www.sypron.nl/udf.html.
Unfortunately, I don't see any examples where a function takes an array of values as parameters. I saw the ugly hack of concatenating all the values into a long string and passing that, but I would rather explore a cleaner way of doing it. I could also require that whatever calls the function insert the data into a predetermined temp table that the function can then read, but that seems ugly too.
Any suggestions?
You can use a temporary table as the array. Consider the example below:
create table #t
(
id int
)
insert into #t values (1)
go
create function fun()
returns int
as
begin
declare @id int
select @id = id from #t
return @id
end
go
select dbo.fun()

T-SQL User defined function overloading?

I understand that T-SQL is not object oriented. I need to write a set of functions that mimics method overloading in C#.
Is function overloading supported in T-SQL in any way? If there is a hack to do this, is it recommended?
No, there is no way to do this.
I recommend you revisit the requirement, as "make apples look like oranges" is often difficult to do, and of questionable value.
One thing I have done successfully is to write the function in such a way as to allow it to handle null values, and then call it with nulls in place of the parameters you would like to omit.
Example:
create function ActiveUsers
(
@departmentId int,
@programId int
)
returns int
as
begin
declare @count int
select @count = count(*)
from users
where
departmentId = isnull(@departmentId, departmentId)
and programId = isnull(@programId, programId)
return @count
end
go
Uses:
select dbo.ActiveUsers(1,3) -- users in department 1 and program 3
select dbo.ActiveUsers(null,3) -- all users in program 3, regardless of department
select dbo.ActiveUsers(null,null) -- all users
You could pass in a sql_variant, but it comes with all sorts of hazards around it; you can't really use strong typing like you can with OO languages and overloading.
If you need to find the base type within your function, you can use the SQL_VARIANT_PROPERTY function.
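For example, a quick sketch of inspecting a sql_variant's underlying type with SQL_VARIANT_PROPERTY:
DECLARE @v sql_variant = CONVERT(DATETIME, '2020-01-01');
SELECT SQL_VARIANT_PROPERTY(@v, 'BaseType')  AS BaseType,  -- e.g. datetime
       SQL_VARIANT_PROPERTY(@v, 'Precision') AS [Precision],
       SQL_VARIANT_PROPERTY(@v, 'MaxLength') AS MaxLength;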
You can pass in an array of values within a single string and parse them out using this technique by Erland Sommarskog.
Create a function with a varchar(max) parameter or several if necessary, then have your parameter values in that string like:
param1;param2;param3;param4
or
param1:type;param2:type;param3:type
or
calltype|param1;param2;param3
etc, you are only limited by your imagination...
Use the technique from the link to split apart this array and use program logic to use those values as you wish.
One solution would be to utilize the sql_variant data type. This example works as long as you use the same datatype for both values. Returns whatever datatype you send it.
create function dbo.Greater(
@val1 sql_variant
,@val2 sql_variant
) returns sql_variant
as
begin
declare @rV sql_variant
set @rV = case when @val1 >= @val2 then @val1
else @val2 end
return @rV
end
go
A solution I've had some luck with is either creating a number of functions that each takes a different data type - or casting all input to NVARCHAR(MAX).
1. creating a number of functions that each takes a different data type
CREATE FUNCTION [dbo].[FunctionNameDatetime2]
CREATE FUNCTION [dbo].[FunctionNameInt]
CREATE FUNCTION [dbo].[FunctionNameString] --(this is not a typo)
CREATE FUNCTION [dbo].[FunctionNameUniqueidentifier]
...
Problem: duplication of code, and a lot of functions
2. Cast all input to NVARCHAR(MAX)
CREATE FUNCTION [dbo].[IntToNvarchar]
(
@Key INT
)
RETURNS NVARCHAR(MAX)
AS
BEGIN
RETURN ISNULL(CAST(@Key AS NVARCHAR), '');
END
GO
CREATE FUNCTION [dbo].[FunctionName]
(
@Key NVARCHAR(MAX)
)
RETURNS CHAR(32)
AS
BEGIN
DECLARE @something CHAR(32)
-- do stuff ...
RETURN @something;
END
SELECT [dbo].[FunctionName]([dbo].[IntToNvarchar](25))
Problem: less elegant code than overloading
I overload functions all the time, but I happen to know that these kinds of issues are often highly platform-dependent.
On our DB2 system, I routinely overload like the following:
CREATE Function Schema1.F1 (parm date)
returns date
return parm + 1 month;
CREATE Function Schema1.F1 (parm timestamp)
returns date
return date(parm) + 1 month;
This is actually quite useful when you have multiple queries which have similar formatting requirements.
The only problem I have found with this so far is that you had better be sure you want the function, because a plain DROP FUNCTION "schema"."name" fails since it cannot determine which overload to drop. If anyone knows how to drop overloaded SQL functions, let me know!
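On DB2 for LUW at least, you can usually disambiguate either by naming the parameter types in the DROP, or by giving each overload a SPECIFIC name when you create it; a hedged sketch:
-- drop one overload by its parameter signature
DROP FUNCTION Schema1.F1(DATE);
DROP FUNCTION Schema1.F1(TIMESTAMP);
-- or create with a SPECIFIC name and drop by that
-- CREATE FUNCTION Schema1.F1 (parm TIMESTAMP) RETURNS DATE SPECIFIC F1_TS RETURN DATE(parm) + 1 MONTH;
-- DROP SPECIFIC FUNCTION Schema1.F1_TS;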