Is there any difference, performance-wise, between these filter methods?
Method 1: WHERE (@Col1 IS NULL OR t.Col1 = @Col1)
Method 2: WHERE 1 = CASE WHEN @Col1 IS NULL THEN 1 ELSE CASE WHEN t.Col1 = @Col1 THEN 1 ELSE 0 END END
If you know your Col1 column doesn't itself contain any NULL values, you can do this:
WHERE Col1 = COALESCE(@Col1, Col1)
Otherwise, your CASE expression should typically do a little better than the OR. I emphasize "typically" because every table is different; you should always profile to know for sure.
Unfortunately, the fastest way is typically to use dynamic SQL to exclude the condition from the query in the first place when the parameter is NULL. But of course, save that as an optimization of last resort.
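To make the dynamic-SQL idea concrete, here is a minimal sketch using Python's stdlib sqlite3 (my choice for a runnable stand-in; the table `t` and column `col1` are invented for illustration). The condition is appended only when the parameter is supplied, and values are still passed as bound parameters:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("a",), ("b",), (None,)])

def search(col1=None):
    # Build the WHERE clause only when the parameter is supplied,
    # so the engine never sees the unused condition at all.
    # Values stay bound via placeholders; only the structure varies.
    sql = "SELECT col1 FROM t"
    params = []
    if col1 is not None:
        sql += " WHERE col1 = ?"
        params.append(col1)
    return conn.execute(sql, params).fetchall()

print(search("a"))  # [('a',)]
print(search())     # all three rows, including the NULL one
```

Because each generated statement contains no dead predicate, the optimizer can plan each shape on its own merits.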
Why not use COALESCE?
WHERE Col1 = COALESCE(@Col1, Col1)
EDIT (thanks to Joel's comment below): this works only if Col1 does not allow NULLs, or if it does allow NULLs and you want the NULL values excluded when @Col1 is NULL or absent. So, if it allows NULLs and you want them included when the @Col1 parameter is NULL or absent, modify as follows:
WHERE COALESCE(@Col1, Col1) IS NULL OR Col1 = COALESCE(@Col1, Col1)
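The difference between the two predicates is easy to demonstrate. A quick sqlite3 sketch (my own illustration; the table and data are invented), with the parameter absent:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("a",), (None,)])

p = None  # the optional parameter is absent

# Plain COALESCE trick: the NULL row is silently dropped,
# because col1 = col1 is not true when col1 is NULL.
plain = conn.execute(
    "SELECT col1 FROM t WHERE col1 = COALESCE(?, col1)", (p,)).fetchall()

# Modified predicate: the NULL row is kept.
kept = conn.execute(
    "SELECT col1 FROM t "
    "WHERE COALESCE(?1, col1) IS NULL OR col1 = COALESCE(?1, col1)",
    (p,)).fetchall()

print(plain)  # [('a',)]
print(kept)   # [('a',), (None,)]
```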
Yes. CASE has a guaranteed evaluation order, while OR does not. Many programmers rely on OR short-circuiting and are surprised to learn that a set-oriented declarative language like SQL does not guarantee boolean operator short-circuit.
That being said, using OR and CASE in WHERE clauses is bad practice. Separate the condition into a clear IF statement and run a separate query on each branch:
IF @Col1 IS NOT NULL
    SELECT ... WHERE col1 = @Col1;
ELSE
    SELECT ... WHERE <alternatecondition>;
Placing the condition inside the WHERE clause usually defeats the optimizer, which cannot guess what @Col1 will be and produces a bad plan involving a full scan.
Update
Since I got tired of explaining again and again that boolean short-circuit is not guaranteed in SQL, I decided to write a full blog article about it: SQL Server boolean operator short-circuit. There you'll find a simple counterexample showing that boolean short-circuit is not only not guaranteed, but that relying on it can actually be very dangerous, since it can result in run-time errors.
I have a Vertica table, "CUSTOMER", which contains around 10 columns. Each column contains a few NULL values, so I have to write one query that will replace all the NULL values with '0'.
Is it possible to do this in Vertica? Can anyone please help me with that?
You use coalesce():
select coalesce(col1, 0) as col1, . . .
from t;
You can incorporate similar logic into an update as well.
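Both forms can be sketched with sqlite3 standing in for Vertica (my illustration; the lowercase `customer` table with two columns is invented, since the question's real schema isn't shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (col1 INTEGER, col2 INTEGER)")
conn.executemany("INSERT INTO customer VALUES (?, ?)",
                 [(1, None), (None, 2)])

# Read-time replacement: NULLs become 0 in the result set only;
# the stored data is unchanged.
rows = conn.execute(
    "SELECT COALESCE(col1, 0), COALESCE(col2, 0) FROM customer").fetchall()
print(rows)  # [(1, 0), (0, 2)]

# The same logic as an UPDATE, rewriting the stored values.
conn.execute("UPDATE customer SET col1 = COALESCE(col1, 0), "
             "col2 = COALESCE(col2, 0)")
print(conn.execute("SELECT col1, col2 FROM customer").fetchall())
```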
In a SELECT, as @GordonLinoff says, you use COALESCE(), or the slightly faster NVL(), IFNULL() or ISNULL() functions. Those three are all synonyms of each other and take exactly two arguments, while COALESCE() is more flexible (at a cost), accepting a variable-length argument list and returning the first non-NULL value in it.
For updating, strive to update only the rows you need to update, and go, for each column:
UPDATE t SET col1=0 WHERE col1 IS NULL;
UPDATE t SET col2=0 WHERE col2 IS NULL;
Granted, in an extreme case you might end up updating the same row as many times as it has columns, in which case you have gained nothing; but it is worth planning to minimise how often you update each row.
Or, you might consider:
UPDATE t SET
col1 = NVL(col1,0)
, col2 = NVL(col2,0)
, col3 = NVL(col3,0)
[...]
WHERE col1 IS NULL
OR col2 IS NULL
OR col3 IS NULL
[...]
;
Being columnar, and because each UPDATE in Vertica is internally a DELETE plus an INSERT anyway, it makes no difference whether you update just one column or all columns.
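The single-statement form above can be sketched with sqlite3 (my stand-in; sqlite spells Vertica's two-argument NVL as IFNULL, and the table is invented). The WHERE guard is what keeps fully non-NULL rows untouched:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 INTEGER, col2 INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, 1), (None, 5), (7, None)])

# One UPDATE, touching only rows that actually contain a NULL.
cur = conn.execute(
    "UPDATE t SET col1 = IFNULL(col1, 0), col2 = IFNULL(col2, 0) "
    "WHERE col1 IS NULL OR col2 IS NULL")
print(cur.rowcount)  # 2 -- the fully non-NULL row is left alone
print(conn.execute("SELECT col1, col2 FROM t").fetchall())
```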
I think I'm encountering a fairly simple problem in PL/SQL on an Oracle database (10g), and I'm hoping one of you can help me out.
I'm trying to explain this as clearly as possible, but it's hard for me.
When I try to compare VARCHAR2 values from two different tables to check whether I need to create a new record or can re-use the ID of an existing one, the comparison goes wrong. All is fine when both fields contain a value; that results in 'a' = 'a', which it understands. But when both fields are NULL (or '', which Oracle turns into NULL), it cannot compare the fields.
I found a 'solution' to this problem but I'm certain there is a better way.
rowTable1 TABLE1%ROWTYPE;
iReUsableID INT;
SELECT * INTO rowTable1
FROM TABLE1
WHERE TABLE1ID = 'someID';
SELECT TABLE2ID INTO iReUsableID
FROM TABLE2
WHERE NVL(SOMEFIELDNAME,' ') = NVL(rowTable1.SOMEFIELDNAME,' ');
So NVL changes the NULL value to ' ', after which the comparison works as expected.
Thanks in advance,
Dennis
You can use the LNNVL function (http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions078.htm) and reverse the condition:
SELECT TABLE2ID INTO iReUsableID
FROM TABLE2
WHERE LNNVL(SOMEFIELDNAME != rowTable1.SOMEFIELDNAME);
Your method is fine, unless one of the values could be a space. The "standard" way of doing the comparison is to test for NULL explicitly:
WHERE col1 = col2 OR (col1 IS NULL AND col2 IS NULL)
In Oracle, comparisons on strings are encumbered by the fact that Oracle treats the empty string as NULL. This is a peculiarity of Oracle and not a problem in other databases.
In Oracle (or, I believe, any RDBMS), one NULL is not equal to another NULL. Therefore, you need the workaround you have stated if you want two NULL values to be considered the same. Note that defaulting NULL values to '' (empty) rather than ' ' (space) will not work in Oracle, since it treats the empty string as NULL.
From Wikipedia (originally the ISO spec, but I couldn't access it):
Since Null is not a member of any data domain, it is not considered a "value", but rather a marker (or placeholder) indicating the absence of value. Because of this, comparisons with Null can never result in either True or False, but always in a third logical result, Unknown.
As mentioned by Jan Spurny, you can use LNNVL for comparison. However, it would be wrong to say that a comparison is actually being made when both values being compared are NULL.
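The three-valued result described in the quote is easy to observe. A quick sqlite3 sketch (my illustration; any engine behaves the same here), including the NVL-style sentinel workaround from the question, spelled with COALESCE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# NULL = NULL evaluates to Unknown (surfaced as NULL), not True or False.
print(conn.execute("SELECT NULL = NULL").fetchone())  # (None,)
print(conn.execute("SELECT NULL = 'a'").fetchone())   # (None,)

# The sentinel workaround forces a real comparison: both NULLs
# become ' ', and ' ' = ' ' is True (1).
print(conn.execute(
    "SELECT COALESCE(NULL, ' ') = COALESCE(NULL, ' ')").fetchone())  # (1,)
```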
This is indeed a simple and usable way to compare NULLs.
You cannot compare NULLs directly, since NULL is not equal to NULL.
You must provide your own logic for how you would like to compare them, which is what you've done with NVL().
Bear in mind that you are treating NULL as a space, so a ' ' in one table will compare equal to a NULL in the other table in your case.
There are other ways (e.g. LNNVL), but I don't think they are in any sense "better".
Which query should be faster:
UPDATE table1
SET field1 = COALESCE(field1, someValue)
WHERE foreignKeyField = someKeyValue
or
UPDATE table1
SET field1 = someValue
WHERE foreignKeyField = someKeyValue AND field1 is null
in MS SQL Server? What does it depend on?
ISNULL will have less overhead, depending on the NULL conditions, I guess. Here is a test comparison of COALESCE vs. ISNULL vs. IS NULL OR: http://blogs.x2line.com/al/archive/2004/03/01/189.aspx
Also check out this blog post for a performance comparison of ISNULL vs. COALESCE:
http://sqlblog.com/blogs/adam_machanic/archive/2006/07/12/performance-isnull-vs-coalesce.aspx which says:
ISNULL appears to pretty consistently out-perform COALESCE by an
average of 10 or 12 percent.
Well, let me first point out that your queries have two different meanings. Updating table1 and writing field1 on every matching row, even rows where it is already non-NULL, is not the same operation as updating only the rows where field1 is NULL. What is your intention with this update?
Take this Fiddle for example:
SELECT Field FROM Test WHERE Field IS NULL;
SELECT COALESCE(Field,'') FROM Test;
The first query returns a single record and the second returns 2 records.
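The fiddle's result is easy to reproduce; here is a sqlite3 sketch (my stand-in, with an invented two-row `test` table matching the description):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (field TEXT)")
conn.executemany("INSERT INTO test VALUES (?)", [("x",), (None,)])

# IS NULL is a filter: it narrows the result to the NULL row.
only_null = conn.execute(
    "SELECT field FROM test WHERE field IS NULL").fetchall()

# COALESCE is a projection: it rewrites NULL but filters nothing.
all_rows = conn.execute(
    "SELECT COALESCE(field, '') FROM test").fetchall()

print(len(only_null))  # 1
print(len(all_rows))   # 2
```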
You may be wondering about the performance of:
SELECT COALESCE(Field,'') FROM Test;
SELECT ISNULL(Field,'') FROM Test;
Although I haven't tested it, ISNULL is supposed to be 20-30% more efficient.
Hope some of this helps. Good luck.
This question already has answers here:
How to search with multiple criteria from a database with SQL?
(6 answers)
Closed 3 years ago.
If I have a query that runs a search with optional parameters, my usual approach is to pass in NULL for any unused criteria and have a WHERE clause that looks like:
WHERE (@Param IS NULL OR Field = @Param) AND (@Param2 IS NULL OR Field2 = @Param2)
If I have a condition that is a little more expensive to evaluate, say a LIKE clause, will SQL Server evaluate the IS NULL and then short-circuit the more complex condition? Is there something I could look at in the execution plan to find out whether it does? And is this behavior dependent on the plan chosen by the optimizer, such that the answer might differ between my development and production environments? (I've got limited access to see what's going on in production, so I'm hoping the behavior is consistent.)
I use CASE expressions.
So yours looks like this:
WHERE (@Param IS NULL OR Field = @Param) AND (@Param2 IS NULL OR Field2 = @Param2)
And I would write it like this:
WHERE CASE WHEN @Param IS NULL THEN 1 ELSE Field END = CASE WHEN @Param IS NULL THEN 1 ELSE @Param END AND
      CASE WHEN @Param2 IS NULL THEN 1 ELSE Field2 END = CASE WHEN @Param2 IS NULL THEN 1 ELSE @Param2 END
Have a look at the query execution plans you get, too; in a lot of cases, when you use the OR condition, the query optimizer resorts to a table scan. You can also make the comparison operator conditional, for example switching between '=' and LIKE based on an @Param3 flag, although that requires repeating the comparison, since a CASE expression can only return values, not operators.
Hope this helps.
I found a more practical solution that avoids the use of dynamic SQL.
Suppose you have a bit field and the parameter is 1, 0 or NULL (= both). Then use the following SQL:
WHERE a.FIELD = CASE WHEN ISNULL(@PARAM, -1) = -1 THEN a.FIELD ELSE @PARAM END
In other words: if you don't provide an explicit value for @PARAM, the predicate becomes a.FIELD = a.FIELD, which is always true. For nullable fields you need to apply ISNULL to the field itself as well; otherwise, rows where the field is NULL are filtered out, because NULL = NULL does not evaluate to true.
Example: on an INT field that can be NULL,
AND ISNULL(a.CALL_GENERAL_REQUIREMENTS,-1) = CASE WHEN ISNULL(@CALL_GENERAL_REQUIREMENTS,-2) = -2 THEN ISNULL(a.CALL_GENERAL_REQUIREMENTS,-1) ELSE @CALL_GENERAL_REQUIREMENTS END
As you can see, I map a NULL @CALL_GENERAL_REQUIREMENTS parameter to -2, because the real values to select on can be 0 (not passed), 1 (passed) or -1 (not evaluated yet); so -2 means "do not select on this field".
Example on a nullable string field:
AND ISNULL(a.CALL_RESULT,'') = CASE WHEN ISNULL(@CALL_RESULT,'') = '' THEN ISNULL(a.CALL_RESULT,'') ELSE @CALL_RESULT END
All of this works like a charm, avoids a lot of hassle with building a dynamic SQL string by concatenation, and does not require any special permissions for running an EXEC statement.
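The INT-field pattern can be sketched with sqlite3 (my stand-in for SQL Server; T-SQL's two-argument ISNULL is spelled COALESCE here, and the one-column table `a` with a shortened column name `req` is invented for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (req INTEGER)")  # nullable INT field
conn.executemany("INSERT INTO a VALUES (?)", [(0,), (1,), (-1,), (None,)])

def search(param=None):
    # -2 is the "no filter" sentinel, because 0, 1 and -1 are all
    # real values; the field itself is coalesced to -1 so NULL rows
    # are not lost to NULL = NULL comparisons.
    return conn.execute(
        "SELECT req FROM a WHERE COALESCE(req, -1) = "
        "CASE WHEN COALESCE(?1, -2) = -2 THEN COALESCE(req, -1) "
        "ELSE ?1 END",
        (param,)).fetchall()

print(len(search()))  # 4 -- parameter absent: no filtering at all
print(search(1))      # [(1,)]
```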
Hope this helps anyone as it helped me.
Have a nice day.
I have a stored procedure with a parameter, name, which I want to use in a WHERE clause to match the value of a column, i.e. something like
where col1 = name
Now of course this fails to match NULL to NULL, because of the way NULL works. Do I need to do
where ((name is null and col1 is null) or col1 = name)
in situations like this, or is there a more concise way of doing it?
You can use the DECODE function in the following fashion:
where decode(col1, name, 0) is not null
Quoting the SQL reference:
In a DECODE function, Oracle considers two nulls to be equivalent.
I think your own suggestion is the best way to do it.
What you have done is correct. There is a more concise way, but it isn't really better:
where nvl(col1,'xx') = nvl(name,'xx')
The trouble is, you have to make sure that the value you use for NULLs ('xx' in my example) could never be a real value in the data.
If col1 is indexed, it would be best (performance-wise) to split the query in two:
SELECT *
FROM mytable
WHERE col1 = name
UNION ALL
SELECT *
FROM mytable
WHERE name IS NULL AND col1 IS NULL
This way, Oracle can optimize both queries independently, so only one of the two parts actually produces rows, depending on whether the name passed in is NULL.
Oracle, though, does not index NULL values in a plain single-column b-tree index, so searching for a NULL value will always result in a full table scan.
If your table is large, holds few NULL values and you search for them frequently, you can create a function-based index:
CREATE INDEX ix_mytable_col1__null ON mytable (CASE WHEN col1 IS NULL THEN 1 END)
and use it in a query:
SELECT *
FROM mytable
WHERE col1 = name
UNION ALL
SELECT *
FROM mytable
WHERE CASE WHEN col1 IS NULL THEN 1 END = CASE WHEN name IS NULL THEN 1 END
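The split-query idea can be sketched with sqlite3 (my illustration; sqlite, unlike Oracle, does index NULLs, so this only demonstrates the result set, not the plan; `?1` stands in for the procedure's `name` parameter, and the table data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (col1 TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?)", [("a",), ("b",), (None,)])

def find(name=None):
    # Each branch has a simple, independently optimizable predicate;
    # for any given name, only one branch can produce rows.
    return conn.execute(
        "SELECT col1 FROM mytable WHERE col1 = ?1 "
        "UNION ALL "
        "SELECT col1 FROM mytable WHERE ?1 IS NULL AND col1 IS NULL",
        (name,)).fetchall()

print(find("a"))  # [('a',)]
print(find())     # [(None,)]
```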
Keep it the way you have it. It's more intuitive, less buggy, works in any database, and is faster. The concise way is not always the best. See: (PL/SQL) What is the simplest expression to test for a changed value in an Oracle on-update trigger?
SELECT * FROM table
WHERE parameter IS NULL OR column = parameter;