How to prevent null values for table-valued function parameters?

I have a TSQL Table-Valued Function and it is complex. I want to ensure that one of the parameters cannot be null. Yet, when I specify NOT NULL after my parameter declaration I am presented with SQL errors.
Is it possible to prevent a parameter of a Table-Valued Function from being assigned null by the calling SQL?

In my opinion, it'd be better to check for NULL values at the beginning of your function and use RAISERROR (no, that's not a typo) to raise an exception. EDIT: Unfortunately, this doesn't work for UDFs, so you'll have to go with option 2.
You also have the option of specifying "RETURNS NULL ON NULL INPUT" when you create your function. If this flag is specified, the function will return NULL if any of its inputs are null...kind of paradoxical, but it may be what you want.
From the MSDN CREATE FUNCTION documentation (quoted because they don't have an anchor on their page, bleh):
RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT
Specifies the OnNULLCall attribute of a scalar-valued function. If not specified, CALLED ON NULL INPUT is implied by default. This means that the function body executes even if NULL is passed as an argument.
If RETURNS NULL ON NULL INPUT is specified in a CLR function, it indicates that SQL Server can return NULL when any of the arguments it receives is NULL, without actually invoking the body of the function. If the method of a CLR function specified in <method_specifier> already has a custom attribute that indicates RETURNS NULL ON NULL INPUT, but the CREATE FUNCTION statement indicates CALLED ON NULL INPUT, the CREATE FUNCTION statement takes precedence.
The OnNULLCall attribute cannot be specified for CLR table-valued functions.
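For illustration, here's a minimal sketch of how the flag looks on a plain T-SQL scalar UDF (dbo.EchoUpper is a hypothetical name; as the quote notes, the attribute belongs to scalar-valued functions, so it won't attach to a table-valued function):

CREATE FUNCTION dbo.EchoUpper (@s VARCHAR(100))
RETURNS VARCHAR(100)
WITH RETURNS NULL ON NULL INPUT
AS
BEGIN
    -- With the option above, a NULL @s makes the call return NULL
    -- without this body being executed.
    RETURN UPPER(@s);
END;

-- SELECT dbo.EchoUpper(NULL);  -- NULL, body never runs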
Hope it helps somewhat, and I do agree that it's needlessly confusing.

Related

How does PostgreSQL 'remember' the result of a function?

I am facing strange behavior in PostgreSQL; could you please clarify it for me?
I created a function that returns constants from a constants table.
CREATE TABLE constants (key varchar PRIMARY KEY, value varchar);

CREATE OR REPLACE FUNCTION get_constant(_key varchar) RETURNS varchar
AS $$ SELECT value FROM constants WHERE key = _key; $$
LANGUAGE sql IMMUTABLE;
Then I added a constant to the table.
insert into constants(key, value)
values('const', '1')
;
Then, if I change the value of the constant and call the function directly:
select get_constant('const');
the result is CORRECT.
BUT!
If I call the function from another procedure, for example:
CREATE OR REPLACE PROCEDURE etl.test()
LANGUAGE plpgsql
AS $$
BEGIN
    RAISE NOTICE '%', etl.get_constant('const');
END $$;
Then it remembers the first result of the call, and the output of RAISE NOTICE does not change even if I change the constant in the table.
But if I recreate the procedure, the new constant value is printed correctly.
I tried to find documentation about this and googled things like 'cache results of PostgreSQL procedure', etc., but found nothing.
Could you clarify this and link to the documentation for this behavior?
The documentation for CREATE FUNCTION says this about the IMMUTABLE keyword:
IMMUTABLE indicates that the function cannot modify the database and always returns the same result when given the same argument values; that is, it does not do database lookups or otherwise use information not directly present in its argument list. If this option is given, any call of the function with all-constant arguments can be immediately replaced with the function value.
So by declaring etl.get_constant with that keyword, you're telling Postgres "the output of this function will always be the same for a given input, forever".
The call etl.get_constant('const') has "all-constant arguments" - the value 'const' won't ever change. Since you've told Postgres that etl.get_constant will always return the same output for the same input, it immediately replaces the function call with the result.
So when you call etl.test() it doesn't run etl.get_constant at all, it just returns the value it got earlier, which you told it would be valid forever.
Compare that with the next paragraph on the same page (emphasis mine):
STABLE indicates that the function cannot modify the database, and that within a single table scan it will consistently return the same result for the same argument values, but that its result could change across SQL statements. This is the appropriate selection for functions whose results depend on database lookups, parameter variables (such as the current time zone), etc.
So if your "constant" is subject to change, but not within the scope of a particular query, you should mark it STABLE, not IMMUTABLE.

Is it possible to change the name of a parameter in a PostgreSQL function?

I have a Postgres function defined as follows:
CREATE OR REPLACE FUNCTION my_test_function(query_since timestamp) RETURNS TABLE () ...
Can I update the name of the parameter, query_since, to query_from without dropping the function?
The documentation for CREATE OR REPLACE FUNCTION makes it clear that I cannot change the argument types using this SQL command. While it does not specifically mention argument names, I suspect the same restriction applies.
Checking the manual: the Description section does mention a restriction on names:
To replace the current definition of an existing function, use CREATE OR REPLACE FUNCTION. It is not possible to change the name or
argument types of a function this way (if you tried, you would
actually be creating a new, distinct function).
Bold emphasis mine.
(But, like you commented, seems to refer to the function name rather than argument names.)
Either way, since you can refer to parameter (argument) names inside the function body in PL/pgSQL or SQL functions, simply renaming is not an option.
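Since the RETURNS TABLE column list is elided in the question, the sketch below uses a hypothetical one-column list and body just to show the drop-and-recreate pattern:

DROP FUNCTION my_test_function(timestamp);

CREATE FUNCTION my_test_function(query_from timestamp)
RETURNS TABLE (some_col integer)  -- hypothetical column list
AS $$
    SELECT 1 WHERE now() >= query_from;  -- hypothetical body
$$ LANGUAGE sql;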

Infinite optional parameters

In essence, I'd like the ability to create a scalar function which accepts a variable number of parameters and concatenates them together to return a single VARCHAR. In other words, I want the ability to create a fold over an uncertain number of variables and return the result of the fold as a VARCHAR, similar to .Aggregate in C# or Concatenate in Common Lisp.
My (procedural) pseudo code for such a function is as follows:
define a VARCHAR variable
foreach non-null parameter convert it to a VARCHAR and add it to the VARCHAR variable
return the VARCHAR variable as the result of the function
Is there an idiomatic way to do something like this in MS-SQL? Does MS-SQL Server have anything similar to the C# params/Common Lisp &rest keyword?
-- EDIT --
Is it possible to do something similar to this without using table-valued parameters, so that a call to the function could look like:
MY_SCALAR_FUNC('A', NULL, 'C', 1)
instead of having to go through the rigmarole of setting up and inserting into a new temporary table each time the function is called?
For a set of items, you could consider passing a table of values to your function:
Pass table as parameter into sql server UDF
See also http://technet.microsoft.com/en-us/library/ms191165(v=sql.105).aspx
To answer your question directly: no, there is no equivalent to the params keyword. The approach I'd use is the one above - create a user-defined table type, populate it with one row per value, and pass that to your scalar function to operate on.
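A minimal sketch of that approach, using hypothetical names (dbo.VarcharList for the table type; dbo.MY_SCALAR_FUNC to match the call in the question):

CREATE TYPE dbo.VarcharList AS TABLE (val VARCHAR(100));
GO
CREATE FUNCTION dbo.MY_SCALAR_FUNC (@items dbo.VarcharList READONLY)
RETURNS VARCHAR(MAX)
AS
BEGIN
    DECLARE @result VARCHAR(MAX) = '';
    -- Fold the non-null values into one string (note: row order is not guaranteed here)
    SELECT @result = @result + val
    FROM @items
    WHERE val IS NOT NULL;
    RETURN @result;
END;
GO
-- Usage: fill a variable of the table type and pass it in
DECLARE @t dbo.VarcharList;
INSERT INTO @t (val) VALUES ('A'), (NULL), ('C'), ('1');
SELECT dbo.MY_SCALAR_FUNC(@t);  -- AC1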
EDIT: If you want to avoid table parameters, and are on SQL 2012, look at the CONCAT function:
http://technet.microsoft.com/en-us/library/hh231515.aspx
CONCAT ( string_value1, string_value2 [, string_valueN ] )
This only applies to the built-in CONCAT function; you couldn't roll your own function with a "params"-style declaration.
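For example, the call shape from the question maps straight onto it, since CONCAT treats NULL arguments as empty strings and implicitly converts the integer:

SELECT CONCAT('A', NULL, 'C', 1);  -- AC1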

SSRS dynamic group expression needs to be null if a parameter is null

I have an SSRS report that I am doing dynamic grouping on. Regular grouping on a field name that is provided through a report parameter is working with no problems.
The problem that I am having is that I want to avoid the grouping if the parameter is null.
I tried what this article suggested (checking for null in the IIF statement), but it isn't working for me:
http://www.optimusbi.com/2012/10/12/dynamic-grouping-ssrs/
NOT WORKING:
Setting GROUP_3 report parameter to NULL and checking for null in the grouping expression.
=IIF(Parameters!GROUP_3.Value is Nothing,1,Fields(Parameters!GROUP_3.Value).Value)
Result:
The IIF expression doesn't seem to be evaluating the null value properly. I get this as the result...
The Group expression for the grouping ‘GROUP_3’ contains an error: The
expression references the field '', which does not exist in the Fields
collection. Expressions can only refer to fields within the current
dataset scope or, if inside an aggregate, the specified dataset scope.
Letters in the names of fields must use the correct case.
(rsRuntimeErrorInExpression)
I also tried setting the parameter to blank and using the expression below, but I get the same error message.
=IIF(Parameters!GROUP_3.Value = "",1,Fields(Parameters!GROUP_3.Value).Value)
Is there something I am doing wrong here? Any suggestions?
The IIF() call evaluates all of the arguments passed to it. So when GROUP_3 doesn't have a value, you're still trying to reference a non-existent member of the Fields collection in the third argument of IIF().
It's possible (though ugly) to work around this with another embedded IIF() thusly:
IIF(Parameters!GROUP_3.Value is Nothing,1,Fields(IIF(Parameters!GROUP_3.Value is Nothing,"VALID COLUMN NAME",Parameters!GROUP_3.Value)).Value)
In case it isn't obvious, you need to replace "VALID COLUMN NAME" with a column name from your dataset. It doesn't matter which column it is, as long as it's always in the dataset. This means that when GROUP_3 is Nothing, the third argument uses a valid column reference just to get around the error; the outer IIF() still returns 1, and the grouping behaves as you desire.

What is the correct way to check for null values in PL/SQL?

IF purpose = null THEN
    v_purpose := '';
ELSE
    v_purpose := ' for ' || purpose;
END IF;
When purpose is null, it still goes to the else...why?!
The correct test is
IF purpose IS NULL THEN
This is because NULL is not a value stored in a field. It is an attribute about the field stored elsewhere (but within the row).
Setting a field to NULL looks like an ordinary assignment, so it seems perfectly natural to expect to test for it by direct comparison. However, for it to work as it does, I surmise the SQL assignment primitive has a magic hidden aspect that diverts assignment of the special symbol NULL into setting an attribute rather than the field.
NULL is a special value in SQL that cannot be compared using the equality operator. You need to use special operators IS and IS NOT when testing if a value is NULL.
Here's a good overview of the idea. And an excerpt:
NOTE: Null in Oracle is an absence of information. A null can be assigned but it cannot be equated with anything, including itself. NULL values represent missing or unknown data. NULL values are not an integer, a character, or any other specific data type. Note that NULL is not the same as an empty data string or the numerical value '0'.
The result of the expression purpose = null is unknown, regardless of what purpose is (see Paul Sasik's answer for more details). Since it's unknown, it's not necessarily true, so execution bypasses the inside of the IF block and falls into the ELSE block.
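A small anonymous block (a sketch; assumes server output is enabled, e.g. SET SERVEROUTPUT ON in SQL*Plus) makes the difference visible:

DECLARE
    purpose VARCHAR2(20) := NULL;
BEGIN
    IF purpose = NULL THEN
        DBMS_OUTPUT.PUT_LINE('= NULL branch');   -- never reached: the comparison yields UNKNOWN
    ELSIF purpose IS NULL THEN
        DBMS_OUTPUT.PUT_LINE('IS NULL branch');  -- this is what prints
    ELSE
        DBMS_OUTPUT.PUT_LINE('ELSE branch');
    END IF;
END;
/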