Why would someone use WHERE 1=1 AND <conditions> in a SQL clause (either in SQL obtained through concatenated strings, or in a view definition)?
I've seen it suggested somewhere that this protects against SQL injection, but that seems very weird to me.
If there is injection, WHERE 1=1 AND injected OR 1=1 would have the same result as injected OR 1=1.
Later edit: What about the usage in a view definition?
Thank you for your answers.
Still, I don't understand why someone would use this construction for defining a view, or use it inside a stored procedure.
Take this for example:
CREATE VIEW vTest AS
SELECT * FROM Table WHERE 1=1 AND Table.Field=Value
If the list of conditions is not known at compile time and is instead built at run time, you don't have to worry about whether you have one or more than one condition. You can generate them all like:
and <condition>
and concatenate them all together. With the 1=1 at the start, the initial and has something to associate with.
I've never seen this used for any kind of injection protection, as you say it doesn't seem like it would help much. I have seen it used as an implementation convenience. The SQL query engine will end up ignoring the 1=1 so it should have no performance impact.
Just adding some example code to Greg's answer:
Dim sqlstmt As New StringBuilder
sqlstmt.Append("SELECT * FROM Products")
sqlstmt.Append(" WHERE 1=1")
' From now on you don't have to worry about whether you must
' append AND or WHERE, because you know the WHERE is there
If ProductCategoryID <> 0 Then
    sqlstmt.AppendFormat(" AND ProductCategoryID = {0}", ProductCategoryID)
End If
If MinimumPrice > 0 Then
    sqlstmt.AppendFormat(" AND Price >= {0}", MinimumPrice)
End If
I've seen it used when the number of conditions can be variable.
You can concatenate conditions using an " AND " string. Then, instead of counting the number of conditions you're passing in, you place a "WHERE 1=1" at the end of your stock SQL statement and throw on the concatenated conditions.
Basically, it saves you having to do a test for conditions and then add a "WHERE" string before them.
Seems like a lazy way to always know that your WHERE clause is already defined, allowing you to keep adding conditions without having to check whether each is the first one.
Indirectly Relevant: when 1=2 is used:
CREATE TABLE New_table_name
as
select *
FROM Old_table_name
WHERE 1 = 2;
This will create a new table with the same schema as the old table, but with no rows. (Very handy if you want to load some data for comparison.)
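SQL Server has no CREATE TABLE ... AS SELECT; a sketch of the equivalent there (using the same placeholder table names as above) would be SELECT ... INTO:
-- WHERE 1=2 is never true, so the new table gets the old table's
-- column definitions but no rows (constraints and indexes are not copied).
SELECT *
INTO New_table_name
FROM Old_table_name
WHERE 1 = 2;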
I found this pattern useful when I'm testing or double-checking things on the database, as I can very quickly comment out other conditions:
CREATE VIEW vTest AS
SELECT * FROM Table WHERE 1=1
AND Table.Field=Value
AND Table.IsValid=true
turns into:
CREATE VIEW vTest AS
SELECT * FROM Table WHERE 1=1
--AND Table.Field=Value
--AND Table.IsValid=true
The 1=1 expression is commonly used in generated SQL code. It can simplify the SQL-generating code by reducing the number of conditional statements.
Actually, I've seen this sort of thing used in BIRT reports. The query passed to the BIRT runtime is of the form:
select a,b,c from t where a = ?
and the '?' is replaced at runtime by an actual parameter value selected from a drop-down box. The choices in the drop-down are given by:
select distinct a from t
union all
select '*' from sysibm.sysdummy1
so that you get all possible values plus "*". If the user selects "*" from the drop-down box (meaning all values of a should be selected), the query has to be modified (by JavaScript) before being run.
Since the "?" is a positional parameter and MUST remain there for other things to work, the JavaScript modifies the query to be:
select a,b,c from t where ((a = ?) or (1=1))
That basically removes the effect of the WHERE clause while still leaving the positional parameter in place.
I've also seen the AND case used by lazy coders whilst dynamically creating an SQL query.
Say you have to dynamically create a query that starts with select * from t and checks:
the name is Bob; and
the salary is > $20,000
Some people would add the first condition with a WHERE and subsequent ones with an AND, thus:
select * from t where name = 'Bob' and salary > 20000
Lazy programmers (and that's not necessarily a bad trait) wouldn't distinguish between the added conditions, they'd start with select * from t where 1=1 and just add AND clauses after that.
select * from t where 1=1 and name = 'Bob' and salary > 20000
WHERE 1=0 is used to check whether a table exists: the statement compiles against the table but can never return rows. I don't know why 1=1 is used.
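As a sketch of that existence check (the table name is a placeholder): the predicate is never true, so no rows come back, but the statement still fails if the table is missing:
-- Errors with "Invalid object name" if Some_table does not exist;
-- otherwise returns an empty result set without reading any data.
SELECT * FROM Some_table WHERE 1 = 0;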
While I can see that 1=1 would be useful for generated SQL, a technique I use in PHP is to create an array of clauses and then do
implode (" AND ", $clauses);
thus avoiding the problem of having a leading or trailing AND. Obviously this is only useful if you know that you are going to have at least one clause!
Here's a closely related example: using a SQL MERGE statement to update the target table with all values from the source table when there is no common attribute to join on, e.g.:
MERGE INTO Circles
USING
(
SELECT pi
FROM Constants
) AS SourceTable
ON 1 = 1
WHEN MATCHED THEN
UPDATE
SET circumference = 2 * SourceTable.pi * radius;
If you came here searching for WHERE 1, note that WHERE 1 and WHERE 1=1 are identical in effect. WHERE 1 is rarely used because some database systems reject it, considering WHERE 1 not really boolean.
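For example (based on my understanding of the dialects: MySQL treats integers as booleans, SQL Server does not):
SELECT * FROM t WHERE 1;      -- accepted by MySQL; SQL Server raises a syntax error
SELECT * FROM t WHERE 1 = 1;  -- portable form, accepted by both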
Why would someone use WHERE 1=1 AND <proper conditions>
I've seen homespun frameworks do stuff like this (blush), as this allows lazy string-building practices to be applied to both the WHERE and AND SQL keywords.
For example (I'm using C# as an example here), consider the conditional appending of the following predicates in a SQL query string builder:
var sqlQuery = "SELECT * FROM FOOS WHERE 1 = 1";
if (shouldFilterForBars)
{
    sqlQuery = sqlQuery + " AND Bars > 3";
}
if (shouldFilterForBaz)
{
    sqlQuery = sqlQuery + " AND Baz < 12";
}
The "benefit" of WHERE 1 = 1 means that no special code is needed:
For AND - whether zero, one or both predicates (Bars and Baz's) should be applied, which would determine whether the first AND is required. Since we already have at least one predicate with the 1 = 1, it means AND is always OK.
For no predicates at all - In the case where there are ZERO predicates, then the WHERE must be dropped. But again, we can be lazy, because we are again guarantee of at least one predicate.
This is obviously a bad idea and would recommend using an established data access framework or ORM for parsing optional and conditional predicates in this way.
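For contrast, here is a minimal sketch of the safer, parameterized shape in T-SQL; the flag and threshold variables are hypothetical stand-ins for the C# booleans and literals above:
DECLARE @shouldFilterForBars bit = 1,  -- hypothetical flags mirroring the C# example
        @shouldFilterForBaz bit = 0,
        @minBars int = 3,
        @maxBaz int = 12;

DECLARE @sql nvarchar(max) = N'SELECT * FROM FOOS WHERE 1 = 1';
IF @shouldFilterForBars = 1 SET @sql = @sql + N' AND Bars > @minBars';
IF @shouldFilterForBaz = 1 SET @sql = @sql + N' AND Baz < @maxBaz';

-- The values travel as parameters, not as concatenated literals.
EXEC sp_executesql @sql,
    N'@minBars int, @maxBaz int',
    @minBars = @minBars, @maxBaz = @maxBaz;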
Having reviewed all the answers, I decided to perform some experiments, like:
SELECT
*
FROM MyTable
WHERE 1=1
Then I checked with other numbers:
WHERE 2=2
WHERE 10=10
WHERE 99=99
etc.
Having done all the checks, the query run time is the same, even without the WHERE clause. I am not a fan of the syntax.
This is useful when you have to build a dynamic query whose WHERE clause appends filter options. For example, suppose status 0 means inactive and 1 means active. There are only two real options (0 and 1), but if you also want to display all records, it is handy to fall back to WHERE 1=1.
See the sample below:
Declare @SearchValue varchar(8)
Declare @SQLQuery varchar(max) = '
Select [FirstName]
,[LastName]
,[MiddleName]
,[BirthDate]
,Case
when [Status] = 0 then ''Inactive''
when [Status] = 1 then ''Active''
end as [Status]
From [SomeTable] a'  -- FROM clause added; it was missing in the original sample, and [SomeTable] is a placeholder for the table behind alias "a"
Declare @SearchOption nvarchar(100)
If (@SearchValue = 'Active')
Begin
Set @SearchOption = ' Where a.[Status] = 1'
End
If (@SearchValue = 'Inactive')
Begin
Set @SearchOption = ' Where a.[Status] = 0'
End
If (@SearchValue = 'All')
Begin
Set @SearchOption = ' Where 1=1'
End
Set @SQLQuery = @SQLQuery + @SearchOption
Exec(@SQLQuery);
Saw this in production code and asked seniors for help.
Their answer:
-We use 1=1 so when we have to add a new condition we can just type
and <condition>
and get on with it.
I usually do this when I am building dynamic SQL for a report with many dropdown values a user can select. Since the user may or may not select a value from each dropdown, figuring out which condition will be the first in the WHERE clause is a pain. So we pad the query with a WHERE 1=1 and add all the conditions after that.
Something like:
select column1, column2 from myTable where 1=1 {name} {age};
Then we would build the WHERE clause like this and pass it as a parameter value:
string name_whereClause = ddlName.SelectedIndex > 0 ? "AND name = '" + ddlName.SelectedValue + "'" : "";
As the WHERE clause selections are unknown to us until runtime, this helps us a great deal in deciding whether to include an 'AND' or a 'WHERE'.
Making "where 1=1" the standard for all your queries also makes it trivially easy to validate the sql by replacing it with where 1 = 0, handy when you have batches of commands/files.
Also makes it trivially easy to find the end of the end of the from/join section of any query. Even queries with sub-queries if properly indented.
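For example, a sketch of that 1=0 validation trick (table and column names are made up): the statement still parses, compiles, and resolves every name, but returns no rows:
-- Original query in the batch:
SELECT Id, Name FROM dbo.Products WHERE 1 = 1 AND Price > 10;
-- Validation pass: flip the constant and the same statement returns zero rows.
SELECT Id, Name FROM dbo.Products WHERE 1 = 0 AND Price > 10;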
I first came across this back with ADO and classic ASP; the answer I got was: performance.
If you do a straight
Select * from tablename
and pass that in as SQL command/text, you would get a noticeable performance increase with the
Where 1=1
added; it was a visible difference. Something to do with table headers being returned as soon as the first condition is met, or some other craziness; anyway, it did speed things up.
Using a predicate like 1=1 is a normal hint sometimes used to force the access plan to use, or not use, an index scan. The reason this is used is that you may have a multi-nested join query with many predicates in the WHERE clause, where sometimes even using all of the indexes causes the access plan to read each table with a full table scan. This is just one of many hints used by DBAs to trick a DBMS into using a more efficient path. Just don't throw one in; you need a DBA to analyze the query, since it doesn't always work.
Here is a use case... however I am not too concerned with the technicalities of why I should or not use 1 = 1.
I am writing a function, using pyodbc to retrieve some data from SQL Server. I was looking for a way to force a filler after the where keyword in my code. This was a great suggestion indeed:
if _where == '': _where = '1=1'
...
...
...
cur.execute(f'select {predicate} from {table_name} where {_where}')
The reason is that I could not include the keyword 'where' itself inside the _where clause variable. So, I think any dummy condition that evaluates to true would do as a filler.
Is there a difference in performance when querying something like this?
Query 1:
SELECT * FROM Customer WHERE Name=Name
Query 2:
SELECT * FROM Customer
I will use it in a conditional select-all:
SELECT * FROM Customer
WHERE Name = CASE WHEN @QueryAll='true' THEN Name ELSE @SearchValue END
If there's no performance issue in Query 1 and Query 2, I think this is shorthand for:
IF @QueryAll='true'
SELECT * FROM Customer
ELSE
SELECT * FROM Customer WHERE Name=@SearchValue
You should read Dynamic Search Conditions in T‑SQL by Erland Sommarskog.
If you use SQL Server 2008 or later, then use OPTION(RECOMPILE) and write the query like this:
SELECT *
FROM Customer
WHERE
(Name = @SearchValue OR @QueryAll='true')
OPTION (RECOMPILE);
I usually pass NULL for @SearchValue to indicate that the parameter should be ignored, rather than using a separate parameter @QueryAll. With that convention the query becomes this:
SELECT *
FROM Customer
WHERE
(Name = @SearchValue OR @SearchValue IS NULL)
OPTION (RECOMPILE);
Edit
For details see the link above. In short, OPTION (RECOMPILE) instructs SQL Server to recompile the execution plan of the query every time it is run, and SQL Server will not cache the generated plan. Recompilation also means that the values of any variables are effectively inlined into the query, so the optimizer knows them.
So, if @SearchValue is NULL, the optimizer is smart enough to generate the plan as if the query were this:
SELECT *
FROM Customer
If @SearchValue has a non-NULL value 'abc', the optimizer is smart enough to generate the plan as if the query were this:
SELECT *
FROM Customer
WHERE (Name = 'abc')
The obvious drawback of OPTION (RECOMPILE) is the added overhead of recompilation (usually around a few hundred milliseconds), which can be significant if you run the query very often.
Query 2 would be faster with an index on the Name column, and you should select just the fields you need, not all of them.
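For instance (the index name is arbitrary), a supporting index plus a narrower column list might look like:
CREATE INDEX IX_Customer_Name ON Customer (Name);
-- Select only what you need, so the index alone can satisfy the query:
SELECT Name FROM Customer WHERE Name = @SearchValue;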
For some guidance in optional parameter queries, take a look here: Sometimes the Simplest Solution Isn't the Best Solution (The Optional Parameter Problem)
I'm creating a stored procedure for searching some data in my database according to some criteria input by the user.
My sql code looks like this:
Create Procedure mySearchProc
(
@IDCriteria bigint=null,
...
@MaxDateCriteria datetime=null
)
as
select Col1,...,Coln from MyTable
where (@IDCriteria is null or ID=@IDCriteria)
...
and (@MaxDateCriteria is null or Date<@MaxDateCriteria)
Edit: I have around 20 possible parameters, and each combination of n non-null parameters can happen.
Is it ok performance-wise to write this kind of code? (I'm using MS SQL Server 2008)
Would generating SQL code containing only the needed where clauses be notably faster?
OR clauses are notorious for causing performance issues, mainly because they tend to force table scans instead of index seeks. If you can write the query without ORs, you'll be better off.
where (@IDCriteria is null or ID=@IDCriteria)
and (@MaxDateCriteria is null or Date<@MaxDateCriteria)
If you write these criteria, SQL Server will not know whether it is better to use the index on IDs or the index on Dates.
For proper optimization, it is far better to write separate queries for each case and use IF to guide you to the correct one.
IF @IDCriteria is not null and @MaxDateCriteria is not null
--query
WHERE ID = @IDCriteria and Date < @MaxDateCriteria
ELSE IF @IDCriteria is not null
--query
WHERE ID = @IDCriteria
ELSE IF @MaxDateCriteria is not null
--query
WHERE Date < @MaxDateCriteria
ELSE
--query
WHERE 1 = 1
If you expect to need different plans out of the optimizer, you need to write different queries to get them!!
Would generating SQL code containing only the needed where clauses be notably faster?
Yes - if you expect the optimizer to choose between different plans.
Edit:
DECLARE @CustomerNumber int, @CustomerName varchar(30)
SET @CustomerNumber = 123
SET @CustomerName = '123'
SELECT * FROM Customers
WHERE (CustomerNumber = @CustomerNumber OR @CustomerNumber is null)
AND (CustomerName = @CustomerName OR @CustomerName is null)
CustomerName and CustomerNumber are indexed. The optimizer says: "Clustered Index Scan with parallelization". You can't write a worse single-table query.
Edit : I've around 20 possible parameters, and each combination of n non-null parameters can happen.
We had a similar "search" functionality in our database. When we looked at the actual queries issued, 99.9% of them used an AccountIdentifier. In your case, I suspect either one column is always supplied, or one of two columns is always supplied. This would lead to 2 or 3 cases respectively.
It's not important to remove ORs from the whole structure. It is important to remove ORs from the column(s) that you expect the optimizer to use to access the indexes.
So, to boil down the above comments:
Create a separate sub-procedure for each of the most popular variations of specific parameter combinations, and within a dispatcher procedure call the appropriate one from an IF ELSE structure, the final ELSE clause of which builds a query dynamically to cover the remaining cases.
Perhaps only one or two cases may be specifically coded at first, but as time goes by and particular combinations of parameters are identified as being statistically significant, implementation procedures may be written and the master IF ELSE construct extended to identify those cases and call the appropriate sub-procedure.
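A minimal sketch of that dispatcher shape (all procedure names here are hypothetical):
CREATE PROCEDURE dbo.SearchDispatch
    @IDCriteria bigint = NULL,
    @MaxDateCriteria datetime = NULL
AS
BEGIN
    IF @IDCriteria IS NOT NULL AND @MaxDateCriteria IS NULL
        EXEC dbo.Search_ByID @IDCriteria;            -- specifically coded, popular case
    ELSE IF @MaxDateCriteria IS NOT NULL AND @IDCriteria IS NULL
        EXEC dbo.Search_ByMaxDate @MaxDateCriteria;  -- another popular case
    ELSE
        -- remaining combinations: fall through to the generic/dynamic implementation
        EXEC dbo.Search_Generic @IDCriteria, @MaxDateCriteria;
END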
Regarding "Would generating SQL code containing only the needed where clauses be notably faster?"
I don't think so, because this way you effectively remove the positive effects of query plan caching.
You could perform selective queries, in order of the most common / most efficient (indexed etc.) parameters, and add the PK(s) to a temporary table.
That would create a (hopefully small!) subset of data.
Then join that temporary table with the main table, using the full WHERE clause in this style:
SELECT ...
FROM #TempTable AS T
JOIN dbo.MyTable AS M
ON M.ID = T.ID
WHERE (@IDCriteria IS NULL OR M.ID=@IDCriteria)
...
AND (@MaxDateCriteria IS NULL OR M.Date<@MaxDateCriteria)
to refine the (small) subset.
What if constructs like these were replaced:
WHERE (@IDCriteria IS NULL OR @IDCriteria=ID)
AND (@MaxDateCriteria IS NULL OR Date<@MaxDateCriteria)
AND ...
with ones like these:
WHERE ID = ISNULL(@IDCriteria, ID)
AND Date < ISNULL(@MaxDateCriteria, DATEADD(millisecond, 1, Date))
AND ...
or is this just coating the same unoptimizable query in syntactic sugar?
Choosing the right index is hard for the optimizer. IMO, this is one of few cases where dynamic SQL is the best option.
This is one of the cases where I use code building, or a sproc for each search option.
Since your search is so complex, I'd go with code building.
You can do this either in code or with dynamic SQL.
Just be careful of SQL injection.
I suggest going one step further than some of the other suggestions - think about degeneralizing at a much higher abstraction level, preferably the UI structure. Usually this seems to happen when the problem is being pondered in data mode rather than user-domain mode.
In practice, I've found that almost every such query has one or more non-null, fairly selective columns that would be reasonably optimizable, if one (or more) were specified. Furthermore, these are usually reasonable assumptions that users can understand.
Example: Find Orders by Customer; or Find Orders by Date Range; or Find Orders By Salesperson.
If this pattern applies, then you can decompose your hypergeneralized query into more purposeful subqueries that also make sense to users, and you can reasonably prompt for required values (or ranges), and not worry too much about crafting efficient expressions for subsidiary columns.
You may still end up with an "All Others" category. But at least then if you provide what is essentially an open-ended Query By Example form, then users will have some idea what they're getting into. Doing what you describe really puts you in the role of trying to out-think the query optimizer, which is folly IMHO.
I'm currently working with SQL 2005, so I don't know if the 2008 optimizer acts differently. That being said, I've found that you need to do a couple of things...
Make sure that you are using WITH RECOMPILE on your procedure.
Use CASE expressions to cause short-circuiting of the logic. At least in 2005 this is NOT done with OR conditions. For example:
SELECT
    ...
FROM
    ...
WHERE
    (1 =
        CASE
            WHEN @my_column IS NULL THEN 1
            WHEN my_column = @my_column THEN 1
            ELSE 0
        END
    )
The CASE expression will cause the SQL Server optimizer to recognize that it doesn't need to continue past the first WHEN that matches. In this example it's not a big deal, but in my search procs a non-null parameter often meant searching another table through a subquery for the existence of a matching row, which got costly. Once I made this change the search procs started running much faster.
My suggestion is to build the SQL string. You will gain maximum performance from indexes and reuse of execution plans.
DECLARE @sql nvarchar(4000);
SET @sql = N''
IF @param1 IS NOT NULL
    SET @sql = @sql + CASE WHEN @sql = N'' THEN N'' ELSE N' AND ' END + N'param1 = @param1';
IF @param2 IS NOT NULL
    SET @sql = @sql + CASE WHEN @sql = N'' THEN N'' ELSE N' AND ' END + N'param2 = @param2';
...
IF @paramN IS NOT NULL
    SET @sql = @sql + CASE WHEN @sql = N'' THEN N'' ELSE N' AND ' END + N'paramN = @paramN';
IF @sql <> N''
    SET @sql = N' WHERE ' + @sql;
SET @sql = N'SELECT ... FROM myTable' + @sql;
EXEC sp_executesql @sql, N'@param1 type, @param2 type, ..., @paramN type', @param1, @param2, ..., @paramN;
Each time the procedure is called, passing different parameters, there is a different optimal execution plan for getting the data. The problem is that SQL Server has cached an execution plan for your procedure and will use a sub-optimal (read: terrible) execution plan.
I would recommend:
1. Create specific SPs for frequently run execution paths (i.e. passed parameter sets), optimised for each scenario.
2. Keep your main generic SP for edge cases (presuming they are rarely run), but use the WITH RECOMPILE clause to cause a new execution plan to be created each time the procedure is run, as sketched below.
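For reference, WITH RECOMPILE goes in the procedure header; a sketch using the parameters from the question (the procedure name is hypothetical):
CREATE PROCEDURE mySearchProcEdgeCases
    @IDCriteria bigint = NULL,
    @MaxDateCriteria datetime = NULL
WITH RECOMPILE  -- a fresh plan is compiled on every call; nothing is cached
AS
SELECT Col1 FROM MyTable
WHERE (@IDCriteria IS NULL OR ID = @IDCriteria)
  AND (@MaxDateCriteria IS NULL OR Date < @MaxDateCriteria);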
We use OR clauses checking against NULLs for optional parameters to great effect. It works very well without the RECOMPILE option, so long as the execution path is not drastically altered by passing different parameters.
The DBA here at work is trying to turn my straightforward stored procs into a dynamic SQL monstrosity. Admittedly, my stored procedure might not be as fast as they'd like, but I can't help but believe there's an adequate way to do what is basically a conditional join.
Here's an example of my stored proc:
SELECT
*
FROM
table
WHERE
(
@Filter IS NULL OR table.FilterField IN
(SELECT Value FROM dbo.udfGetTableFromStringList(@Filter, ','))
)
The UDF turns a comma delimited list of filters (for example, bank names) into a table.
Obviously, having the filter condition in the where clause isn't ideal. Any suggestions of a better way to conditionally join based on a stored proc parameter are welcome. Outside of that, does anyone have any suggestions for or against the dynamic sql approach?
Thanks
You could INNER JOIN on the table returned from the UDF instead of using it in an IN clause
Your UDF might be something like
CREATE FUNCTION [dbo].[csl_to_table] (@list varchar(8000))
RETURNS @list_table TABLE ([id] INT)
AS
BEGIN
    DECLARE @index INT,
            @start_index INT,
            @id INT
    SELECT @index = 1
    SELECT @start_index = 1
    WHILE @index <= DATALENGTH(@list)
    BEGIN
        IF SUBSTRING(@list, @index, 1) = ','
        BEGIN
            SELECT @id = CAST(SUBSTRING(@list, @start_index, @index - @start_index) AS INT)
            INSERT @list_table ([id]) VALUES (@id)
            SELECT @start_index = @index + 1
        END
        SELECT @index = @index + 1
    END
    SELECT @id = CAST(SUBSTRING(@list, @start_index, @index - @start_index) AS INT)
    INSERT @list_table ([id]) VALUES (@id)
    RETURN
END
and then INNER JOIN on the ids in the returned table. This UDF assumes that you're passing INTs in your comma-separated list.
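The join would then look something like this (a sketch; the table and column names come from the question above):
SELECT t.*
FROM [table] AS t
INNER JOIN dbo.csl_to_table(@Filter) AS f
    ON t.FilterField = f.[id]  -- only rows whose FilterField appears in the list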
EDIT:
In order to handle a null or no value being passed in for @Filter, the most straightforward way that I can see would be to execute a different query within the sproc based on the @Filter value. I'm not certain how this affects the cached execution plan (will update if someone can confirm), or whether the end result would be faster than your original sproc; I think the answer here lies in testing.
Looks like the rewrite of the code is being addressed in another answer, but a good argument against dynamic SQL in a stored procedure is that it breaks the ownership chain.
That is, when you call a stored procedure normally, it executes under the permissions of the stored procedure owner, EXCEPT when executing dynamic SQL with the execute command; for the context of the dynamic SQL it reverts back to the permissions of the caller, which may be undesirable depending on your security model.
In the end, you are probably better off compromising and rewriting it to address the concerns of the DBA while avoiding dynamic SQL.
I am not sure I understand your aversion to dynamic SQL. Perhaps it is that your UDF has nicely abstracted away some of the messiness of the problem, and you feel dynamic SQL will bring that back. Well, consider that most if not all DAL or ORM tools rely extensively on dynamic SQL, and I think your problem could be restated as "how can I nicely abstract away the messiness of dynamic SQL".
For my part, dynamic SQL gives me exactly the query I want, and subsequently the performance and behavior I am looking for.
I don't see anything wrong with your approach. Rewriting it to use dynamic SQL to execute two different queries based on whether @Filter is null seems silly to me, honestly.
The only potential downside I can see of what you have is that it could cause some difficulty in determining a good execution plan. But if the performance is good enough as it is, there's no reason to change it.
No matter what you do (and the answers here all have good points), be sure to compare the performance and execution plans of each option.
Sometimes, hand optimization is simply pointless if it impacts your code maintainability and really produces no difference in how the code executes.
I would first simply look at changing the IN to a simple LEFT JOIN with a NULL check (this doesn't get rid of your UDF, but it should only get called once):
SELECT *
FROM table
LEFT JOIN dbo.udfGetTableFromStringList(@Filter, ',') AS filter
ON table.FilterField = filter.Value
WHERE @Filter IS NULL
OR filter.Value IS NOT NULL
It appears that you are trying to write a single query to deal with two scenarios:
1. @Filter = "x,y,z"
2. @Filter IS NULL
To optimise scenario 1, I would INNER JOIN on the UDF, rather than use an IN clause...
SELECT * FROM table
INNER JOIN dbo.udfGetTableFromStringList(@Filter, ',') AS filter
ON table.FilterField = filter.Value
To optimise for scenario 2, I would NOT try to adapt the existing query; instead I would deliberately keep the cases separate, using either an IF statement, or a UNION where a WHERE clause simulates the IF...
TSQL IF
IF (@Filter IS NULL)
    SELECT * FROM table
ELSE
    SELECT * FROM table
    INNER JOIN dbo.udfGetTableFromStringList(@Filter, ',') AS filter
    ON table.FilterField = filter.Value
UNION to Simulate IF
SELECT * FROM table
INNER JOIN dbo.udfGetTableFromStringList(@Filter, ',') AS filter
ON table.FilterField = filter.Value
UNION ALL
SELECT * FROM table WHERE @Filter IS NULL
The advantage of such designs is that each case is simple, and determining which case applies is itself simple. Combining the two into a single query, however, leads to compromises such as LEFT JOINs, and so introduces a significant performance loss to each.