Get distinct value from a list of T-SQL parameters

I am using a tool to produce SQL queries and I need to filter one of the queries with multiple parameters.
The query is similar to this:
Select *
From Products
Where (@ProductTypeIds is null or Products.ProductTypeId in (@ProductTypeIds))
I know the above query is not valid traditional SQL; read on.
Essentially, I'm trying to apply a filter where, if nothing is passed for the @ProductTypeIds parameter, the WHERE condition is not applied.
When multiple parameters are passed, though, @ProductTypeIds is translated by the tool into the following query:
Select *
From Products
Where (@ProductTypeIds1, @ProductTypeIds2 is null or Products.ProductTypeId in (@ProductTypeIds1, @ProductTypeIds2))
Which is clearly an invalid query. So I thought I could be clever and use COALESCE to check if they are null:
Select *
From Products
Where (COALESCE(@ProductTypeIds, null) is null or Products.ProductTypeId in (@ProductTypeIds))
This query is being translated correctly, however, now my use of COALESCE throws an error:
At least one of the arguments to COALESCE must be an expression that is not the NULL constant.
How can I efficiently check whether @ProductTypeIds (which is being translated into @ProductTypeIds1, @ProductTypeIds2) is all NULL, so I can either apply the filter or ignore it?
In other words, is there a way to DISTINCT a list of parameters and check whether the final result is NULL?
Thanks

I have no idea how your tool works, but try the following.
Instead of checking for NULL, check for a value that will never appear in your params, like:
WHERE COALESCE(@ProductTypeIds1, @ProductTypeIds2, -666) = -666 OR ...
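Plugged into the original query, the sentinel approach might look like this (a sketch only; it assumes -666 can never be a real ProductTypeId and that the tool expands the parameter into @ProductTypeIds1, @ProductTypeIds2 as shown above):
Select *
From Products
-- COALESCE returns -666 only when every parameter is NULL,
-- in which case the filter is skipped; otherwise IN applies as usual
Where (COALESCE(@ProductTypeIds1, @ProductTypeIds2, -666) = -666
       or Products.ProductTypeId in (@ProductTypeIds1, @ProductTypeIds2))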

Related

Which one of these two SELECT statements is correct?

I am a little bit confused and have no idea which one of these two SELECT statements is correct:
SELECT Value FROM visibility WHERE site_info LIKE '%site_is_down%';
OR
SELECT Value FROM visibility WHERE site_info = 'site_is_down';
Since I run both of these I get the same result, but I am interested in which one is correct, since the Value column is of the VARCHAR data type. Or are both of these SELECTs incorrect?
Result set running the first SELECT:
Value
0
Result set running the second SELECT:
Value
0
The two statements do not do the same thing.
The first statement filters on rows whose site_info contains the string 'site_is_down'. The surrounding '%' are wildcards, so it would match on something like 'It looks like site_is_down right now'.
The second query, with the equality condition, filters on site_info whose content is exactly 'site_is_down'.
Everything that the second query returns is also returned by the first query - but the opposite is not true.
Which statement is "correct" depends on your actual requirement.
If both queries are useful for you, I'd use the second query, as it is simpler and runs faster.
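As a quick illustration with hypothetical data (table and column names taken from the question):
-- Two test rows: one exact value, one containing the value as a substring
CREATE TABLE visibility (Value VARCHAR(10), site_info VARCHAR(100));
INSERT INTO visibility VALUES ('0', 'site_is_down');
INSERT INTO visibility VALUES ('0', 'It looks like site_is_down right now');
SELECT Value FROM visibility WHERE site_info LIKE '%site_is_down%'; -- returns both rows
SELECT Value FROM visibility WHERE site_info = 'site_is_down';      -- returns only the first row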

SQL conversion failed when converting

The situation is the following:
A column xy is defined as varchar(25). In a view (SQL Server Mgmt Studio 2008) I filtered out all values with letters (-> is not like '%[A-Z]%') and converted the column to int (cast(xy as int)).
If I now try to make comparisons with that column (e.g. where xy < 1000), I'm getting a conversion error, and the message contains a value that should have been filtered out by "is not like '%[A-Z]%'". What's wrong?
Thanks for your help in advance...
This works (it filters out, for example, the value 'G8111'):
SELECT unid
FROM CD_UNITS AS a INNER JOIN DEF_STATION AS b ON a.STATION = b.STATION
WHERE (b.CURENT = 'T') and UNID like '%[A-Z]%'
But when I put that in a view and run a select on it:
select * from my_view where xy < 3000
the system says 'Conversion failed when converting the varchar value 'G8111' to data type int.', but 'G8111' should be filtered out by the query above...
The optimizer does crazy things at times, so despite the fact that an "inner" filter[1] "should" protect you, the optimizer may still push the conversion lower down than the filter and cause such errors.
The only semi-documented place where it will not do this is within a CASE expression:
The CASE statement(sic) evaluates its conditions sequentially and stops with the first condition whose condition is satisfied. In some situations, an expression is evaluated before a CASE statement receives the results of the expression as its input.
...
You should only depend on order of evaluation of the WHEN conditions for scalar expressions (including non-correlated sub-queries that return scalars), not for aggregate expressions
So the only way that should currently work would be:
CASE WHEN xy NOT LIKE '%[^0-9]%' THEN CONVERT(int,xy) END < 1000
This also uses a double-negative with LIKE to ensure that it only attempts the conversion when the value only contains digits.
[1] Whether this be in a subquery, a CTE, a view, or even just considering the logical processing order of the SELECT and WHERE clauses. Within a single query, the optimizer can and will push conversion operations past filters.
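Applied to the question's setup, a sketch of the guarded view might look like this (the base table name is hypothetical; the pattern and CASE guard come from the answer above):
-- The conversion happens only inside the CASE, so rows like 'G8111'
-- become NULL instead of raising a conversion error
CREATE VIEW my_view AS
SELECT CASE WHEN xy NOT LIKE '%[^0-9]%' THEN CONVERT(int, xy) END AS xy
FROM dbo.some_base_table  -- hypothetical source table
GO
-- NULL < 3000 evaluates to UNKNOWN, so non-numeric rows simply drop out:
SELECT * FROM my_view WHERE xy < 3000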

SQL Select to keep out fields that are NULL

I am trying to connect a FileMaker DB to a Firebird SQL DB both ways: import to FM and export back to the Firebird DB.
So far it works using the MBS Plug-in, but FM 13 Pro cannot handle NULL.
That means that, for example, timestamp fields that are empty (NULL) produce a "0" value.
In time terms, that means something like 01.01.1889 00:00:00.
So my idea was to simply ignore fields containing NULL.
But here my poor knowledge stops.
First I thought I could do this with WHERE, but this ignores whole record sets:
SELECT * FROM TABLE WHERE FIELD IS NOT NULL
Also I tried to filter it later on like this:
If (IsEmpty (MBS("SQL.GetFieldAsDateTime"; $command; "FIELD") ) = 0 ; MBS("SQL.GetFieldAsDateTime"; $command; "FIELD"))
With no result either.
This is a direct answer to halfbit's suggestion, which is correct but not for this SQL dialect. In a query, to provide a replacement value when a field is NULL, you need to use COALESCE(x,y): if x is null, y will be used, and if y is also null, the result is NULL. That's why I commonly use it like COALESCE(table.field,'') so that a constant is always output if table.field happens to be NULL.
select COALESCE(null,'Hello') as stackoverflow from rdb$database
You can use COALESCE() with more than two arguments; I just used two for conciseness.
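For the timestamp case from the question, a sketch might be (field and table names are placeholders, and the sentinel date is arbitrary):
-- Replace NULL timestamps with a recognizable sentinel so FileMaker
-- never receives a bare NULL (Firebird syntax):
SELECT COALESCE(ts_field, CAST('1900-01-01 00:00:00' AS TIMESTAMP)) AS ts_field
FROM your_table;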
I don't know the specific SQL dialect, but
SELECT field1, field2, VALUE(field, 0), ... FROM TABLE
should help you:
VALUE returns the first argument, i.e. your field, if it is NOT NULL, or the second argument if it is.

Check if value exists in Postgres array

Using Postgres 9.0, I need a way to test if a value exists in a given array. So far I came up with something like this:
select '{1,2,3}'::int[] @> (ARRAY[]::int[] || value_variable::int)
But I keep thinking there should be a simpler way to do this; I just can't see it. This seems better:
select '{1,2,3}'::int[] @> ARRAY[value_variable::int]
I believe it will suffice. But if you have other ways to do it, please share!
Simpler with the ANY construct:
SELECT value_variable = ANY ('{1,2,3}'::int[])
The right operand of ANY (between parentheses) can either be a set (result of a subquery, for instance) or an array. There are several ways to use it:
SQLAlchemy: how to filter on PgArray column types?
IN vs ANY operator in PostgreSQL
Important difference: Array operators (<@, @>, && et al.) expect array types as operands and support GIN or GiST indexes in the standard distribution of PostgreSQL, while the ANY construct expects an element type as its left operand and can be supported with a plain B-tree index (with the indexed expression to the left of the operator, not the other way round as it seems to be in your example). Example:
Index for finding an element in a JSON array
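To make the index difference concrete, a minimal sketch (hypothetical table and column names):
-- An int[] column and a plain int column:
CREATE TABLE tbl (arr int[], elem int);
CREATE INDEX tbl_arr_gin ON tbl USING gin (arr);  -- serves arr @> ARRAY[...], arr && ARRAY[...]
CREATE INDEX tbl_elem_btree ON tbl (elem);        -- serves elem = ANY ('{1,2,3}')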
None of this works for NULL elements. To test for NULL:
Check if NULL exists in Postgres array
Watch out for the trap I got into: when checking whether a certain value is not present in an array, you shouldn't do:
SELECT value_variable != ANY('{1,2,3}'::int[])
but use
SELECT value_variable != ALL('{1,2,3}'::int[])
instead.
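A minimal illustration of the difference (literal values for clarity):
SELECT 2 != ANY ('{1,2,3}'::int[]);  -- true: 2 differs from 1, so ANY is satisfied
SELECT 2 != ALL ('{1,2,3}'::int[]);  -- false: 2 equals one element, which is the answer you want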
"but if you have other ways to do it, please share."
You can compare two arrays. If any of the values in the left array overlap the values in the right array, then it returns true. It's kind of hackish, but it works.
SELECT '{1}' && '{1,2,3}'::int[]; -- true
SELECT '{1,4}' && '{1,2,3}'::int[]; -- true
SELECT '{4}' && '{1,2,3}'::int[]; -- false
In the first and second queries, the value 1 is in the right array.
Notice that the second query is true even though the value 4 is not contained in the right array.
For the third query, no values in the left array (i.e., 4) are in the right array, so it returns false.
unnest can be used as well.
It expands an array into a set of rows, and then checking whether a value exists is as simple as using IN or NOT IN.
e.g.
id => uuid
exception_list_ids => uuid[]
select * from table where id NOT IN (select unnest(exception_list_ids) from table2)
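The same idea as a self-contained example (literal values for illustration):
SELECT 4 IN (SELECT unnest('{1,2,3}'::int[]));      -- false
SELECT 4 NOT IN (SELECT unnest('{1,2,3}'::int[]));  -- true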
This one works fine for me; maybe it will be useful for someone:
select * from your_table where array_column::text ilike ANY (ARRAY['%text_to_search%'::text]);
"Any" works well. Just make sure that the any keyword is on the right side of the equal to sign i.e. is present after the equal to sign.
Below statement will throw error: ERROR: syntax error at or near "any"
select 1 where any('{hello}'::text[]) = 'hello';
Whereas the example below works fine:
select 1 where 'hello' = any('{hello}'::text[]);
When looking for the existence of an element in an array, proper casting is required to get past the Postgres SQL parser. Here is one example query using the array-contains operator in the join clause:
For simplicity I only list the relevant part:
table1.other_name text[] -- an array of text
The join part of the SQL is shown:
from table1 t1 join table2 t2 on t1.other_name::text[] @> ARRAY[t2.panel::text]
The following also works
on t2.panel = ANY(t1.other_name)
I am just guessing that the extra casting is required because the parser does not fetch the table definition to figure out the exact type of the column. Others, please comment on this.

Matching BIT to DATETIME in CASE statement

I'm attempting to create a T-SQL CASE statement to filter a query based on whether a field is NULL or contains a value. It would be simple if you could assign NULL or NOT NULL as the result of a CASE, but that doesn't appear possible.
Here's the pseudocode:
WHERE DateColumn = CASE @BitInput
WHEN 0 THEN (all null dates)
WHEN 1 THEN (any non-null date)
WHEN NULL THEN (return all rows)
From my understanding, the WHEN 0 condition can be achieved by not providing a WHEN condition at all (to return a NULL value).
The WHEN 1 condition seems like it could use a wildcard character but I'm getting an error regarding type conversion. Assigning the column to itself fixes this.
I have no idea what to do for the WHEN NULL condition. My internal logic seems to think assigning the column to itself should solve this but it does not as stated above.
I have recreated this using dynamic SQL but for various reasons I'd prefer to have it created in the native code.
I'd appreciate any input. Thanks.
The CASE expression (as OMG Ponies said) is mixing and matching datatypes (as you spotted); in addition, you cannot compare to NULL using = or WHEN.
WHERE
(@BitInput = 0 AND DateColumn IS NULL)
OR
(@BitInput = 1 AND DateColumn IS NOT NULL)
OR
@BitInput IS NULL
You could probably write it using CASE, but what you want is really an OR.
You can also use IF..ELSE or UNION ALL to separate the 3 cases; a UNION ALL sketch follows below.
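A minimal sketch of the UNION ALL variant (the table name YourTable is hypothetical; each branch carries one of the three conditions, which can let the optimizer build a separate plan per branch):
SELECT * FROM YourTable WHERE @BitInput = 0 AND DateColumn IS NULL
UNION ALL
SELECT * FROM YourTable WHERE @BitInput = 1 AND DateColumn IS NOT NULL
UNION ALL
SELECT * FROM YourTable WHERE @BitInput IS NULL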