I want to call a function by passing multiple values in a single parameter, like this:
SELECT * FROM jobTitle('270,378');
Here is my function.
CREATE OR REPLACE FUNCTION test(int)
RETURNS TABLE (job_id int, job_reference int, job_job_title text
, job_status text) AS
$$
BEGIN
RETURN QUERY
select jobs.id,jobs.reference, jobs.job_title,
ltrim(substring(jobs.status,3,char_length(jobs.status))) as status
FROM jobs ,company c
WHERE jobs."DeleteFlag" = '0'
and c.id= jobs.id and c.DeleteFlag = '0' and c.active = '1'
and (jobs.id = $1 or -1 = $1)
order by jobs.job_title;
END;
$$ LANGUAGE plpgsql;
Can someone help with the syntax? Or even provide sample code?
VARIADIC
As @mu demonstrated, VARIADIC is your friend. One more important detail:
You can also call a function using a VARIADIC parameter with an array type directly. Add the key word VARIADIC in the function call:
SELECT * FROM f_test(VARIADIC '{1, 2, 3}'::int[]);
is equivalent to:
SELECT * FROM f_test(1, 2, 3);
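The same duality exists in Python, where *args packs positional arguments into a sequence and * unpacks a sequence back into positional arguments. This is only an analogy to build intuition, not Postgres code:

```python
def f_test(*values):
    # values arrives as a tuple, like the VARIADIC int[] parameter
    return list(values)

direct = f_test(1, 2, 3)        # like: SELECT * FROM f_test(1, 2, 3)
unpacked = f_test(*[1, 2, 3])   # like: SELECT * FROM f_test(VARIADIC '{1,2,3}'::int[])
assert direct == unpacked == [1, 2, 3]
```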
Other advice
In Postgres 9.1 or later, right() with a negative length is a faster and simpler way to trim leading characters from a string:
right(j.status, -2)
is equivalent to:
substring(j.status, 3, char_length(j.status))
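To see what right() with a negative length does, here is a small Python model of its behavior (a sketch of the semantics, not the Postgres source):

```python
def right(s, n):
    # Mimics Postgres right(): n >= 0 keeps the last n characters;
    # n < 0 removes the first |n| characters instead.
    return s[-n:] if n < 0 else s[len(s) - n:]

assert right('__status', -2) == 'status'   # drop 2 leading characters
assert right('abcde', 2) == 'de'           # keep last 2 characters
```

In the function below, ltrim() then strips any leftover leading spaces, as in ltrim(right(j.status, -2)).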
You have jobs."DeleteFlag" (double-quoted) as well as c.DeleteFlag (unquoted) in your query. One of the two is probably incorrect. See:
PostgreSQL Error: Relation already exists
"DeleteFlag" = '0' indicates another problem. Unlike other RDBMS, Postgres properly supports the boolean data type. If the flag holds boolean data (true / false / NULL) use the boolean type. A character type like text would be inappropriate / inefficient.
Proper function
You don't need PL/pgSQL here. You can use a simpler SQL function:
CREATE OR REPLACE FUNCTION f_test(VARIADIC int[])
RETURNS TABLE (id int, reference int, job_title text, status text)
LANGUAGE sql AS
$func$
SELECT j.id, j.reference, j.job_title
, ltrim(right(j.status, -2)) AS status
FROM company c
JOIN job j USING (id)
WHERE c.active
AND NOT c.delete_flag
AND NOT j.delete_flag
AND (j.id = ANY($1) OR '{-1}'::int[] = $1)
ORDER BY j.job_title
$func$;
db<>fiddle here
Old sqlfiddle
Don't do strange and horrible things like converting a list of integers to a CSV string. This:
jobTitle('270,378')
is not what you want. You want to say things like this:
jobTitle(270, 378)
jobTitle(array[270, 378])
If you're going to be calling jobTitle by hand then a variadic function would probably be easiest to work with:
create or replace function jobTitle(variadic int[])
returns table (...) as $$
-- $1 will be an array of integers in here, so use unnest(), IN, ANY, ... as needed
Then you can jobTitle(6), jobTitle(6, 11), jobTitle(6, 11, 23, 42), ... as needed.
If you're going to be building the jobTitle arguments in SQL then the explicit-array version would probably be easier to work with:
create or replace function jobTitle(int[])
returns table (...) as $$
-- $1 will be an array of integers in here, so use unnest(), IN, ANY, ... as needed
Then you could jobTitle(array[6]), jobTitle(array[6, 11]), ... as needed and you could use all the usual array operators and functions to build argument lists for jobTitle.
I'll leave the function's internals as an exercise for the reader.
Related
I was trying to create a Snowflake SQL UDF
that computes the product of all the values in a column and returns the result to the user.
So first, I tried the following approach:
-- The UDF that returns the result.
CREATE OR REPLACE FUNCTION PRODUCT_OF_COL_VAL()
RETURNS FLOAT
LANGUAGE SQL
AS
$$
SELECT EXP(SUM(LN(COL))) AS RESULT FROM SCHEMA.SAMPLE_TABLE
$$
The above code executes perfectly fine.
But as you can see above, I have hardcoded the table name and column name, which is not what I actually want.
So I tried the following approach, passing the column name dynamically:
CREATE OR REPLACE FUNCTION PRODUCT_OF_COL_VAL(COL VARCHAR)
RETURNS FLOAT
LANGUAGE SQL
AS
$$
SELECT EXP(SUM(LN(COL))) AS RESULT from SCHEMA.SAMPLE_TABLE
$$
But it throws the following error:
Numeric value 'Col' is not recognized
To elaborate, the data type of the column I am passing is NUMBER(38,6),
and in the background it is doing the following:
EXP(SUM(LN(TO_DOUBLE(COL))))
Does anyone have any idea why this runs fine in scenario 1 but not in scenario 2?
Hopefully we will be able to have this kind of UDF one day. In the meantime, consider this answer using ARRAY_AGG() and a Python UDF:
Sample usage:
select count(*) how_many, multimy(array_agg(score)) multiplied, tags[0] tag
from stack_questions
where score > 0
group by tag
limit 100
The UDF in Python - which also protects against numbers beyond float's limits:
create or replace function multimy (x array)
returns float
language python
handler = 'x'
runtime_version = '3.8'
as
$$
import math
def x(x):
    res = math.prod(x)
    return res if math.log10(res) < 308 else 'NaN'
$$
;
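The handler logic can be sanity-checked as plain Python outside Snowflake (the function name multimy and the 10^308 double-precision threshold are taken from the UDF above):

```python
import math

def multimy(x):
    # Product of all values; fall back to 'NaN' once the result
    # leaves double-precision range (beyond about 1e308).
    res = math.prod(x)
    return res if math.log10(res) < 308 else 'NaN'

assert multimy([2.0, 3.0, 4.0]) == 24.0
assert multimy([1e200, 1e200]) == 'NaN'   # product overflows float range
```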
The parameter you defined in the SQL UDF is evaluated as a literal:
When you call the function like PRODUCT_OF_COL_VAL('Col'), the SQL statement you execute becomes:
SELECT EXP(SUM(LN('Col'))) AS RESULT from SCHEMA.SAMPLE_TABLE
What you want to do is to generate a new SQL based on parameters, and it's only possible using "stored procedures". Check this one:
Dynamic SQL in a Snowflake SQL Stored Procedure
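The distinction can be sketched in Python string terms (the column and table names here are hypothetical):

```python
col = "MY_COL"  # hypothetical column name passed as the argument

# What the SQL UDF effectively does: the parameter is substituted as a
# quoted literal, so LN() receives the string 'MY_COL', not the column.
as_literal = f"SELECT EXP(SUM(LN('{col}'))) FROM SCHEMA.SAMPLE_TABLE"

# What dynamic SQL in a stored procedure can do: splice the name in
# as an identifier before the statement is compiled.
as_identifier = f"SELECT EXP(SUM(LN({col}))) FROM SCHEMA.SAMPLE_TABLE"

assert "LN('MY_COL')" in as_literal
assert "LN(MY_COL)" in as_identifier
```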
I am trying to call/convert a numeric variable into a string inside a user-defined function. I was thinking about using to_char, but it didn't work.
My function is like this:
create or replace function ntile_loop(x numeric)
returns setof numeric as
$$
select
max("billed") as _____(to_char($1,'99')||"%"???) from
(select "billed", "id","cm",ntile(100)
over (partition by "id","cm" order by "billed")
as "percentile" from "table_all") where "percentile"=$1
group by "id","cm","percentile";
$$
language sql;
My purpose is to define a new variable "x%" as its name, with x varying as the function input. In context, x is numeric and will be called again later in the function as a numeric (this part of code wasn't included in the sample above).
What I want to return:
I simply want to return a block of code so that every time I change the percentile number, I don't have to run this block of code again and again. I'd like to calculate 5, 10, 20, 30, ....90th percentile and display all of them in the same table for each id+cm group.
That's why I was thinking about macro or function, but didn't find any solutions I like.
Thank you for your answers. Yes, I will definitely read the basics while I am learning. Today is my second day using SQL, but I have to generate some results immediately.
Converting numeric to text is the least of your problems.
My purpose is to define a new variable "x%" as its name, with x
varying as the function input.
First of all: there are no variables in an SQL function. SQL functions are just wrappers for valid SQL statements. Input and output parameters can be named, but names are static, not dynamic.
You may be thinking of a PL/pgSQL function, where you have procedural elements including variables. Parameter names are still static, though. There are no dynamic variable names in plpgsql. You can execute dynamic SQL with EXECUTE but that's something different entirely.
While it is possible to declare a static variable with a name like "123%" it is really exceptionally uncommon to do so. Maybe for deliberately obfuscating code? Other than that: Don't. Use proper, simple, legal, lower case variable names without the need to double-quote and without the potential to do something unexpected after a typo.
Since the window function ntile() returns integer and you run an equality check on the result, the input parameter should be integer, not numeric.
To assign a variable in plpgsql you can use the assignment operator := for a single variable or SELECT INTO for any number of variables. Either way, you want the query to return a single row or you have to loop.
If you want the maximum billed from the chosen percentile, you don't GROUP BY x, y. That might return multiple rows and does not do what you seem to want. Use plain max(billed) without GROUP BY to get a single row.
You don't need to double quote perfectly legal column names.
A valid function might look like this. It's not exactly what you were trying to do, which cannot be done. But it may get you closer to what you actually need.
CREATE OR REPLACE FUNCTION ntile_loop(x integer)
RETURNS SETOF numeric as
$func$
DECLARE
myvar numeric;
BEGIN
SELECT INTO myvar max(billed)
FROM (
SELECT billed, id, cm
,ntile(100) OVER (PARTITION BY id, cm ORDER BY billed) AS tile
FROM table_all
) sub
WHERE sub.tile = $1;
-- do something with myvar, depending on the value of $1 ...
END
$func$ LANGUAGE plpgsql;
Long story short, you need to study the basics before you try to create sophisticated functions.
Plain SQL
After Q update:
I'd like to calculate 5, 10, 20, 30, ....90th percentile and display
all of them in the same table for each id+cm group.
This simple query should do it all:
SELECT id, cm, tile, max(billed) AS max_billed
FROM (
SELECT billed, id, cm
,ntile(100) OVER (PARTITION BY id, cm ORDER BY billed) AS tile
FROM table_all
) sub
WHERE (tile%10 = 0 OR tile = 5)
AND tile <= 90
GROUP BY 1,2,3
ORDER BY 1,2,3;
% .. modulo operator
GROUP BY 1,2,3 .. ordinal references to items in the SELECT list
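A quick Python check confirms which tiles the filter keeps: exactly the 5th and every 10th percentile up to the 90th:

```python
# Mirror the WHERE clause: (tile%10 = 0 OR tile = 5) AND tile <= 90
wanted = [t for t in range(1, 101) if (t % 10 == 0 or t == 5) and t <= 90]
assert wanted == [5, 10, 20, 30, 40, 50, 60, 70, 80, 90]
```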
It looks like you're looking for return query execute, returning the result from a dynamic SQL statement:
http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html
http://www.postgresql.org/docs/current/static/plpgsql-statements.html
If I write a query like this:
with WordBreakDown (idx, word, wordlength) as (
select
row_number() over () as idx,
word,
character_length(word) as wordlength
from
unnest(string_to_array('yo momma so fat', ' ')) as word
)
select
cast(wbd.idx + (
select SUM(wbd2.wordlength)
from WordBreakDown wbd2
where wbd2.idx <= wbd.idx
) - wbd.wordlength as integer) as position,
cast(wbd.word as character varying(512)) as part
from
WordBreakDown wbd;
... I get a table of 4 rows like so:
1;"yo"
4;"momma"
10;"so"
13;"fat"
... this is what I want. HOWEVER, if I wrap this into a function like so:
drop type if exists split_result cascade;
create type split_result as(
position integer,
part character varying(512)
);
drop function if exists split(character varying(512), character(1));
create function split(
_s character varying(512),
_sep character(1)
) returns setof split_result as $$
begin
return query
with WordBreakDown (idx, word, wordlength) as (
select
row_number() over () as idx,
word,
character_length(word) as wordlength
from
unnest(string_to_array(_s, _sep)) as word
)
select
cast(wbd.idx + (
select SUM(wbd2.wordlength)
from WordBreakDown wbd2
where wbd2.idx <= wbd.idx
) - wbd.wordlength as integer) as position,
cast(wbd.word as character varying(512)) as part
from
WordBreakDown wbd;
end;
$$ language plpgsql;
select * from split('yo momma so fat', ' ');
... I get:
1;"yo momma so fat"
I'm scratching my head on this. What am I screwing up?
UPDATE
Per the suggestions below, I have replaced the function as such:
CREATE OR REPLACE FUNCTION split(_string character varying(512), _sep character(1))
RETURNS TABLE (postition int, part character varying(512)) AS
$BODY$
BEGIN
RETURN QUERY
WITH wbd AS (
SELECT (row_number() OVER ())::int AS idx
,word
,length(word) AS wordlength
FROM unnest(string_to_array(_string, rpad(_sep, 1))) AS word
)
SELECT (sum(wordlength) OVER (ORDER BY idx))::int + idx - wordlength
,word::character varying(512) -- AS part
FROM wbd;
END;
$BODY$ LANGUAGE plpgsql;
... which keeps my original function signature for maximum compatibility, and the lion's share of the performance gains. Thanks to the answerers, I found this to be a multifaceted learning experience. Your explanations really helped me understand what was going on.
Observe this:
select length(' '::character(1));
length
--------
0
(1 row)
A cause of this confusion is a bizarre definition of character type in SQL standard. From Postgres documentation for character types:
Values of type character are physically padded with spaces to the specified width n, and are stored and displayed that way. However, the padding spaces are treated as semantically insignificant. Trailing spaces are disregarded when comparing two values of type character, and they will be removed when converting a character value to one of the other string types.
So you should use string_to_array(_s, rpad(_sep,1)).
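In Python terms, the effect is as if the separator lost its trailing spaces before the split. This is a model of the character(1) semantics, not the actual implementation:

```python
sep = ' '                  # the character(1) argument
effective = sep.rstrip()   # trailing spaces are semantically insignificant -> ''

s = 'yo momma so fat'
# string_to_array with an empty separator returns the whole string as one
# element, which matches the single-row result observed in the question.
parts = [s] if effective == '' else s.split(effective)
assert parts == ['yo momma so fat']
```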
You had several constructs that probably did not do what you think they would.
Here is a largely simplified version of your function, that is also quite a bit faster:
CREATE OR REPLACE FUNCTION split(_string text, _sep text)
RETURNS TABLE (postition int, part text) AS
$BODY$
BEGIN
RETURN QUERY
WITH wbd AS (
SELECT (row_number() OVER ())::int AS idx
,word
,length(word) AS wordlength
FROM unnest(string_to_array(_string, _sep)) AS word
)
SELECT (sum(wordlength) OVER (ORDER BY idx))::int + idx - wordlength
,word -- AS part
FROM wbd;
END;
$BODY$ LANGUAGE plpgsql;
Explanation
Use another window function to sum up the word lengths. Faster, simpler and cleaner. This makes for most of the performance gain. A lot of sub-queries slow you down.
Use the data type text instead of character varying or even character(). character varying and character are awful types, mostly just there for compatibility with the SQL standard and historical reasons. There is hardly anything you can do with those that could not better be done with text. In the meantime @Tometzky has explained why character(1) was a particularly bad choice for the parameter type. I fixed that by using text instead.
As @Tometzky demonstrated, unnest(string_to_array(..)) is faster than regexp_split_to_table(..) - even if just a tiny bit for small strings like we use here (max. 512 characters). So I switched back to your original expression.
length() does the same as character_length().
In a query with only one table source (and no other possible naming conflicts) you might as well not table-qualify column names. Simplifies the code.
We need an integer value in the end, so I cast all numerical values (bigint in this case) to integer right away, so additions and subtractions are done with integer arithmetic which is generally fastest.
'value'::int is just shorter syntax for cast('value' as integer) and otherwise equivalent.
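The window-function arithmetic, sum(wordlength) OVER (ORDER BY idx) + idx - wordlength, can be cross-checked with a few lines of Python:

```python
def split_with_positions(s, sep=' '):
    # 1-based start position of each word, accounting for the separators
    out, running = [], 0
    for idx, word in enumerate(s.split(sep), start=1):
        running += len(word)  # running sum = sum(wordlength) OVER (ORDER BY idx)
        out.append((running + idx - len(word), word))
    return out

assert split_with_positions('yo momma so fat') == [
    (1, 'yo'), (4, 'momma'), (10, 'so'), (13, 'fat')]
```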
I found the answer, but I don't understand it.
The string_to_array(_s, _sep) function does not split with a non-varying character parameter; even if I wrote it like this it would not work:
string_to_array(_s, cast(_sep as character varying(1)))
BUT if I redefined the parameters as such:
drop function if exists split(character varying(512), character(1));
create function split(
_s character varying(512),
_sep character varying(1)
... all of a sudden it works as I expected. Dunno what to make of this, and really not the answer I wanted... now I have changed the signature of the function, which is not what I wanted to do.
When I want to test the behavior of some PostgreSQL function FOO() I'd find it useful to execute a query like SELECT FOO(bar), bar being some data I use as a direct input without having to SELECT from a real table.
I read we can omit the FROM clause in a statement like SELECT 1 but I don't know the correct syntax for multiple inputs. I tried SELECT AVG(1, 2) for instance and it does not work.
How can I do that?
With PostgreSQL you can use a VALUES expression to generate an inlined table:
VALUES computes a row value or set of row values specified by value expressions. It is most commonly used to generate a "constant table" within a larger command, but it can be used on its own.
Emphasis mine. Then you can apply your aggregate function to that "constant table":
select avg(x)
from (
values (1.0), (2.0)
) as t(x)
Or just select expr if expr is not an aggregate function:
select sin(1);
You could also define your own avg function that operates on an array and hide your FROM inside the function:
create function avg(double precision[]) returns double precision as $$
select avg(x) from unnest($1) as t(x);
$$ language 'sql';
And then:
=> select avg(array[1.0, 2.0, 3.0, 4.0]);
avg
-----
2.5
But that's just getting silly unless you're doing this quite often.
Also, if you're using 8.4+, you can write variadic functions and do away with the array. The internals are the same as the array version, you just add variadic to the argument list:
create function avg(variadic double precision[]) returns double precision as $$
select avg(x) from unnest($1) as t(x);
$$ language 'sql';
And then call it without the array stuff:
=> select avg(1.0, 1.2, 2.18, 11, 3.1415927);
avg
------------
3.70431854
(1 row)
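A quick cross-check of that average in Python, using the same numbers:

```python
vals = [1.0, 1.2, 2.18, 11, 3.1415927]
avg = sum(vals) / len(vals)
assert abs(avg - 3.70431854) < 1e-8   # matches the result shown above
```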
Thanks to depesz for the round-about-through-google pointer to variadic function support in PostgreSQL.
To express a SET in most varieties of SQL, you need to actually express a table:
SELECT
AVG(inlineTable.val)
FROM
(
SELECT 1 AS Val
UNION ALL
SELECT 2 AS Val
)
AS inLineTable
I have the following function, based on the SQL Functions Returning Sets section of the PG docs, which accepts two arrays of equal length, and unpacks them into a set of rows with two columns.
CREATE OR REPLACE FUNCTION unpack_test(
in_int INTEGER[],
in_double DOUBLE PRECISION[],
OUT out_int INTEGER,
OUT out_double DOUBLE PRECISION
) RETURNS SETOF RECORD AS $$
SELECT $1[rowx] AS out_int, $2[rowx] AS out_double
FROM generate_series(1, array_upper($1, 1)) AS rowx;
$$ LANGUAGE SQL STABLE;
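Conceptually, the function zips the two input arrays into parallel rows; in Python terms (an analogy, not Postgres code):

```python
in_int = [1, 2, 3]
in_double = [1.5, 2.5, 3.5]

# generate_series(1, array_upper($1, 1)) walks the indexes; pairing
# $1[rowx] with $2[rowx] at each index is a zip of the two arrays.
rows = list(zip(in_int, in_double))
assert rows == [(1, 1.5), (2, 2.5), (3, 3.5)]
```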
I execute the function in PGAdmin3, like this:
SELECT unpack_test(int_col, double_col) FROM test_data
It basically works, but the output looks like this:
|unpack_test|
|record |
|-----------|
|(1, 1) |
|-----------|
|(2, 2) |
|-----------|
...
In other words, the result is a single record, as opposed to two columns. I found this question that seems to provide an answer, but it deals with a function that selects from a table directly, whereas mine accepts the columns as arguments, since it needs to generate the series used to iterate over them. I therefore can't call it using SELECT * FROM function, as suggested in that answer.
First, you'll need to create a type for the return value of your function. Something like this could work:
CREATE TYPE unpack_test_type AS (out_int int, out_double double precision);
Then change your function to return this type instead of record.
Then you can use it like this:
SELECT (unpack_test).out_int, (unpack_test).out_double FROM
(SELECT unpack_test(int_col, double_col) FROM test_data) as test
It doesn't seem possible to take a function returning a generic record type and use it in this manner.