A PL/pgSQL function with varying return type (and varying inner query)

The data
Suppose I have the following data:
create temp table my_data1 (
id serial, val text
);
create temp table my_data2 (
id serial, val int
);
insert into my_data1(id, val)
values (default, 'a'), (default, 'c'), (default, 'd'), (default, 'b');
insert into my_data2(id, val)
values (default, 1), (default, 3), (default, 4), (default, 2);
The problem
I would like to write a plpgsql function with 2 arguments: tbl (taking the value my_data1 or my_data2) and order_by (which can be id, val, or null). The function should fetch all rows from the table specified in tbl and order them by the column specified in order_by.
Below are 2 solutions I have found (see also sqlfiddle). The question is which of them is preferable, and whether an even better solution exists.
Solution using temp table
I came up with the following workaround:
create function my_work(tbl text, order_by text default null)
returns text as
$my_work$
declare
q text;
begin
q := 'select * from ' || quote_ident(tbl);
if order_by is not null then
q := q || ' order by ' || quote_ident(order_by);
end if;
return q;
end
$my_work$ language plpgsql;
create function my_fetch(_query text, into_table text)
returns void as
$my_fetch$
begin
execute format($$
create temp table %I
on commit drop
as %s
$$, quote_ident(into_table), _query);
end
$my_fetch$ language plpgsql;
Then it remains to execute the following lines (preferably wrapped in begin/commit):
select my_fetch(my_work('my_data1','id'), 'my_tmp');
select * from my_tmp;
Are there any negative side effects to this solution, e.g. is creating a temp table costly?
Another solution (using pg_typeof)
I've also read a great post on various approaches to dynamic queries with varying results. From the options mentioned there it seems the following is the best solution for my situation:
create or replace function not_my_work(_tbl_type anyelement, order_by text default null)
returns setof anyelement as
$func$
declare
q text;
begin
q := format('
select *
from %s
', pg_typeof(_tbl_type));
if order_by is not null then
q := q || ' order by ' || quote_ident(order_by);
end if;
return query execute q;
end
$func$ language plpgsql;
select not_my_work(null::my_data1, 'id');
Does this approach have any advantages over the approach using temp table?

I have two comments on the first solution.
First, use either %I or quote_ident() in the format() function, not both. Compare:
with q(s) as (
values ('abba'), ('ABBA')
)
select
quote_ident(s) ok1,
format('%I', s) ok2,
format('%I', quote_ident(s)) bad_idea
from q;
ok1 | ok2 | bad_idea
--------+--------+------------
abba | abba | abba
"ABBA" | "ABBA" | """ABBA"""
(2 rows)
Second, you do not need two functions:
create or replace function my_select(into_table text, tbl text, order_by text default null)
returns void as $function$
declare
q text;
begin
q := 'select * from ' || quote_ident(tbl);
if order_by is not null then
q := q || ' order by ' || quote_ident(order_by);
end if;
execute format($$
create temp table %I
on commit drop
as %s
$$, into_table, q);
end
$function$ language plpgsql;
begin;
select my_select('my_tmp', 'my_data1', 'id');
select * from my_tmp;
commit;
BEGIN
my_select
-----------
(1 row)
id | val
----+-----
1 | a
2 | c
3 | d
4 | b
(4 rows)
COMMIT
In this particular case, the second solution is better.
A temporary table is not particularly expensive, but it is still unnecessary, and its cost grows with the amount of data in the table.
If you have a good alternative to creating a temporary table, use it.
Besides, the need to wrap the function call and the subsequent select query in a transaction can be a bit cumbersome in some cases.
The second solution is smart and ideally suited to the task at hand.
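For example, the polymorphic variant can be consumed in a single plain statement, with no transaction block and no temp table (usage sketch against the sample data):
select * from not_my_work(null::my_data2, 'val');  -- rows of my_data2, ordered by val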

Related

In Postgres, how would I retrieve the default value of a column, preferably inline in an insert statement?

Here's my example table:
CREATE TABLE IF NOT EXISTS public.cars
(
id serial PRIMARY KEY,
make varchar(32) not null,
model varchar(32),
has_automatic_transmission boolean not null default false,
created_on_date timestamptz not null DEFAULT NOW()
);
I have a function that allows my data service to insert a car into the database. It looks like this:
drop function if exists cars_insert;
create function cars_insert
(
in make_in text,
in model_in text,
in has_automatic_transmission_in boolean,
in created_on_date_in timestamptz
)
returns public.cars as
$$
declare result_set public.cars;
begin
insert into cars
(
make,
model,
has_automatic_transmission,
created_on_date
)
values
(
make_in,
model_in,
has_automatic_transmission_in,
created_on_date_in
)
returning * into result_set;
return result_set;
end;
$$
language plpgsql;
This works really well until the service wants to insert a car with no value for has_automatic_transmission or created_on_date. In that case they'd send null for those parameters and would expect the database to use a default value. But instead the database rejects that null for obvious reasons (NOT NULL!).
What I want to do is have the insert routine do a coalesce to DEFAULT, but that doesn't work. Here's the logic I want for the insert:
insert into cars
(
make,
model,
has_automatic_transmission,
created_on_date
)
values
(
make_in,
model_in,
COALESCE(has_automatic_transmission_in, DEFAULT),
COALESCE(created_on_date_in, DEFAULT)
)
How can I effectively achieve that? Ideally it'd be some method I can apply inline to every column so that we don't need special knowledge of which columns do or don't have defaults, but I'll take anything at this point...
Except I'd like to avoid Dynamic SQL if possible.
Since you need to pass values to a function and want to insert default values instead of NULL dynamically, you could look them up like this (but see the disclaimer below!):
CREATE OR REPLACE FUNCTION cars_insert (make_in text
, model_in text
, has_automatic_transmission_in boolean
, created_on_date_in timestamptz)
RETURNS public.cars AS
$func$
INSERT INTO cars(make, model, has_automatic_transmission, created_on_date)
VALUES (make_in
, model_in
, COALESCE(has_automatic_transmission_in
, (SELECT pg_get_expr(d.adbin, d.adrelid)::bool -- default_value
FROM pg_catalog.pg_attribute a
JOIN pg_catalog.pg_attrdef d ON (d.adrelid, d.adnum) = (a.attrelid, a.attnum)
WHERE a.attrelid = 'public.cars'::regclass
AND a.attname = 'has_automatic_transmission'))
, COALESCE(created_on_date_in
, (SELECT pg_get_expr(d.adbin, d.adrelid)::timestamptz -- default_value
FROM pg_catalog.pg_attribute a
JOIN pg_catalog.pg_attrdef d ON (d.adrelid, d.adnum) = (a.attrelid, a.attnum)
WHERE a.attrelid = 'public.cars'::regclass
AND a.attname = 'created_on_date'))
)
RETURNING *;
$func$
LANGUAGE sql;
db<>fiddle here
You also have to know the column type to cast the text returned from pg_get_expr().
I simplified to an SQL function, as nothing here requires PL/pgSQL.
See:
Get the default values of table columns in Postgres?
However, this only works for constants and for types with a defined cast from text. Other expressions (incl. functions) are not evaluated without dynamic SQL. now() in the example only happens to work by coincidence, as 'now' (ignoring parentheses) is a special input string for timestamptz that evaluates to the same as the function now(). A misleading coincidence. See:
Difference between now() and current_timestamp
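A quick way to observe the difference is a prepared statement, where constants are folded as soon as the statement is parsed (illustrative sketch):
PREPARE ts_demo AS
SELECT 'now'::timestamptz AS frozen  -- coerced once, at PREPARE time
     , now()              AS live;   -- evaluated at each execution
EXECUTE ts_demo;  -- both columns look identical the first time
-- EXECUTE ts_demo again in a later transaction:
-- "frozen" still shows the PREPARE time, "live" has moved on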
To make it work for expressions that have to be evaluated, dynamic SQL is required - which you ruled out. But if dynamic SQL is allowed, it's much more efficient to build the target list of the INSERT dynamically and omit columns that are supposed to get default values. Or keep the target list constant and switch NULL values for the DEFAULT keyword (see the sketch after the links below). See:
Function to INSERT dynamic list of columns in multiple tables
Test for null in function with varying parameters
Generate DEFAULT values in a CTE UPSERT using PostgreSQL 9.3
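A minimal sketch of the second variant, keeping the target list constant and swapping NULL arguments for the DEFAULT keyword. The function name cars_insert_dyn is hypothetical; it assumes the cars table from the question:
CREATE OR REPLACE FUNCTION cars_insert_dyn(make_in text
                                         , model_in text
                                         , has_automatic_transmission_in boolean
                                         , created_on_date_in timestamptz)
  RETURNS public.cars AS
$func$
DECLARE
   result_set public.cars;
BEGIN
   EXECUTE format(
      'INSERT INTO cars (make, model, has_automatic_transmission, created_on_date)
       VALUES ($1, $2, %s, %s)
       RETURNING *'
    , CASE WHEN has_automatic_transmission_in IS NULL THEN 'DEFAULT' ELSE '$3' END
    , CASE WHEN created_on_date_in            IS NULL THEN 'DEFAULT' ELSE '$4' END)
   INTO result_set
   USING make_in, model_in, has_automatic_transmission_in, created_on_date_in;

   RETURN result_set;
END
$func$ LANGUAGE plpgsql;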
I like Erwin's solution for its playfulness, but it is quite expensive to run these subqueries in every INSERT. For practical purposes, I would recommend one of the following:
Have four INSERT statements in the function, one for each combination of default/non-default arguments, and use IF statements to pick the right one.
Don't use DEFAULT, but write a BEFORE INSERT trigger that replaces NULLs with the appropriate value.
Of course this will add overhead too. You should benchmark the different options.
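A minimal sketch of the trigger option, assuming the cars table from the question (note that it duplicates knowledge of the default values):
CREATE OR REPLACE FUNCTION cars_fill_defaults()
  RETURNS trigger AS
$$
BEGIN
   -- swap NULLs for the values the column defaults would produce
   NEW.has_automatic_transmission := COALESCE(NEW.has_automatic_transmission, false);
   NEW.created_on_date            := COALESCE(NEW.created_on_date, now());
   RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER cars_fill_defaults
BEFORE INSERT ON cars
FOR EACH ROW EXECUTE PROCEDURE cars_fill_defaults();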
Building on the suggestions made by previous commentators, I would write a function that generates, in a dynamic fashion, an insert function for each table.
The advantage of such approach is that the resulting insert function will not use dynamic SQL at all.
The function-generating function:
CREATE OR REPLACE FUNCTION f_generate_insert_function(tableid regclass) RETURNS VOID LANGUAGE PLPGSQL AS
$$
DECLARE
tablename text := tableid::text;
funcname text := tablename || '_insert';
ddl text := $ddl$
CREATE OR REPLACE FUNCTION %s (%s) RETURNS %s LANGUAGE PLPGSQL AS $func$
DECLARE
result_set %s;
BEGIN
INSERT INTO %s
(
%s
)
VALUES
(
%s
)
RETURNING * INTO result_set;
RETURN result_set;
END;
$func$
$ddl$;
argument_list text := '';
column_list text := '';
value_list text := '';
r record;
BEGIN
FOR r IN
SELECT attname nam, pg_catalog.format_type(atttypid, atttypmod) typ, pg_catalog.pg_get_expr(adbin, adrelid) def
FROM pg_catalog.pg_attribute
JOIN pg_catalog.pg_type t
ON t.oid = atttypid
LEFT JOIN pg_catalog.pg_attrdef
ON adrelid = attrelid AND adnum = attnum AND atthasdef
WHERE attrelid = tableid
AND attnum > 0
LOOP
IF r.def LIKE 'nextval%' THEN
CONTINUE;
END IF;
argument_list := argument_list || r.nam || '_in ' || r.typ || ',';
column_list := column_list || r.nam || ',';
IF r.def IS NULL THEN
value_list := value_list || r.nam || '_in,';
ELSE
value_list := value_list || 'coalesce(' || r.nam || '_in,' || r.def || '),';
END IF;
END LOOP;
argument_list := rtrim(argument_list, ',');
column_list := rtrim(column_list, ',');
value_list := rtrim(value_list, ',');
EXECUTE format(ddl, funcname, argument_list, tablename, tablename, tablename, column_list, value_list);
END;
$$;
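Generating the insert function for the example table is then a single call (the text literal is implicitly cast to regclass):
SELECT f_generate_insert_function('public.cars');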
In your case, the resulting insert function will be:
CREATE OR REPLACE FUNCTION public.cars_insert(make_in character varying, model_in character varying, has_automatic_transmission_in boolean, created_on_date_in timestamp with time zone)
RETURNS cars
LANGUAGE plpgsql
AS $function$
DECLARE
result_set cars;
BEGIN
INSERT INTO cars
(
make,model,has_automatic_transmission,created_on_date
)
VALUES
(
make_in,model_in,coalesce(has_automatic_transmission_in,false),coalesce(created_on_date_in,now())
)
RETURNING * INTO result_set;
RETURN result_set;
END;
$function$
You need two INSERT statements: one that fills the nullable columns and another that omits them, since a default is only applied when the column is not referenced in the INSERT at all.
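A minimal sketch of that branching, assuming for brevity that only created_on_date may be omitted (hypothetical body for the cars_insert function from the question):
IF created_on_date_in IS NULL THEN
   -- omit the column entirely, so its DEFAULT (now()) applies
   INSERT INTO cars (make, model, has_automatic_transmission)
   VALUES (make_in, model_in, has_automatic_transmission_in)
   RETURNING * INTO result_set;
ELSE
   INSERT INTO cars (make, model, has_automatic_transmission, created_on_date)
   VALUES (make_in, model_in, has_automatic_transmission_in, created_on_date_in)
   RETURNING * INTO result_set;
END IF;
With two optional columns the combinations grow to four branches, as noted above.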

Replacing Placeholder values with another table's data

I have 2 tables. The first table contains rows with placeholders and the second table contains the values for those placeholders.
I want a query that fetches data from the first table and replaces the placeholders with the actual values stored in the second table.
Ex:
Table1 Data
id value
608CB424-90BF-4B08-8CF8-241C7635434F jdbc:postgresql://{POSTGRESIP}:{POSTGRESPORT}/{TESTDB}
CDA4C3D4-72B5-4422-8071-A29D32BD14E0 https://{SERVICEIP}/svc/{TESTSERVICE}/
Table2 Data
id placeholder value
201FEBFE-DF92-4474-A945-A592D046CA02 POSTGRESIP 1.2.3.4
20D9DE14-643F-4CE3-B7BF-4B7E01963366 POSTGRESPORT 5432
45611605-F2D9-40C8-8C0C-251E300E183C TESTDB mytest
FA8E2E4E-014C-4C1C-907E-64BAE6854D72 SERVICEIP 10.90.30.40
45B76C68-8A0F-4FD3-882F-CA579EC799A6 TESTSERVICE mytest-service
Required output is
id value
608CB424-90BF-4B08-8CF8-241C7635434F jdbc:postgresql://1.2.3.4:5432/mytest
CDA4C3D4-72B5-4422-8071-A29D32BD14E0 https://10.90.30.40/svc/mytest-service/
If you want to use Python-like named placeholders then you need a helper function written in plpythonu:
create extension plpythonu;
create or replace function formatpystring( str text, a json ) returns text immutable language plpythonu as $$
import json
d = json.loads(a)
return str.format(**d)
$$;
Then a simple test:
select formatpystring('{foo}.{bar}', '{"foo": "win", "bar": "amp"}');
formatpystring
----------------
win.amp
Finally you need to compose those arguments from your tables. It is simple:
select t1.id, formatpystring(t1.value, json_object_agg(t2.placeholder, t2.value)) as value
from table1 as t1, table2 as t2
group by t1.id, t1.value;
(The query was not tested, but it shows the direction.)
A (clumsy) dynamic SQL implementation, joining against the translation table and building nested replace() calls:
This function will not be very efficient, but the translation table is probably relatively small.
CREATE TABLE xlat_table (aa text ,bb text);
INSERT INTO xlat_table (aa ,bb ) VALUES( 'BBB', '/1.2.3.4/')
,( 'ccc', 'OMG') ,( 'ddd', '/4.3.2.1/') ;
CREATE FUNCTION dothe_replacements(_arg1 text) RETURNS text
AS
$func$
DECLARE
script text;
braced text;
res text;
found record; -- (aa text, bb text, xx text);
BEGIN
script := '';
res := format('%L', _arg1);
for found IN SELECT xy.aa,xy.bb
, regexp_matches(_arg1, '{\w+}','g' ) AS xx
FROM xlat_table xy
LOOP
-- RAISE NOTICE '#xx=%', found.xx[1];
-- RAISE NOTICE 'aa=%', found.aa;
-- RAISE NOTICE 'bb=%', found.bb;
braced := '{'|| found.aa || '}';
IF (found.xx[1] = braced ) THEN
-- RAISE NOTICE 'Res=%', res;
script := format ('replace(%s, %L, %L)'
,res,braced,found.bb);
res := format('%s', script);
END IF;
END LOOP;
if(length(script) =0) THEN return res; END IF;
script :='Select '|| script;
-- RAISE NOTICE 'script=%', script;
EXECUTE script INTO res;
return res;
END;
$func$
LANGUAGE plpgsql;
SELECT dothe_replacements( 'aaa{BBB}ccc{ddd}eee' );
SELECT dothe_replacements( '{AAA}bbb{CCC}DDD}{EEE}' );
Results:
CREATE TABLE
INSERT 0 3
CREATE FUNCTION
dothe_replacements
-----------------------------
aaa/1.2.3.4/ccc/4.3.2.1/eee
(1 row)
dothe_replacements
--------------------------
'{AAA}bbb{CCC}DDD}{EEE}'
(1 row)
The above method has quadratic behaviour (wrt the number of xlat entries), which is horrible.
But we could dynamically create a function (once) and call it multiple times
(a poor man's generator).
Selecting only the relevant entries from the xlat table should probably be added.
And, of course, you should re-create the function every time the xlat table is changed.
CREATE FUNCTION create_replacement_function(_name text) RETURNS void
AS
$func$
DECLARE
argname text;
res text;
script text;
braced text;
found record; -- (aa text, bb text, xx text);
BEGIN
script := '';
argname := '_arg1';
res :=format('%I', argname);
for found IN SELECT xy.aa,xy.bb
FROM xlat_table xy
LOOP
-- RAISE NOTICE 'aa=%', found.aa;
-- RAISE NOTICE 'bb=%', found.bb;
-- RAISE NOTICE 'Res=%', res;
braced := '{'|| found.aa || '}';
script := format ('replace(%s, %L, %L)'
,res,braced,found.bb);
res := format('%s', script);
END LOOP;
script :=FORMAT('CREATE FUNCTION %I (_arg1 text) RETURNS text AS
$omg$
BEGIN
RETURN %s;
END;
$omg$ LANGUAGE plpgsql;', _name, script);
RAISE NOTICE 'script=%', script;
EXECUTE script ;
return ;
END;
$func$
LANGUAGE plpgsql;
SELECT create_replacement_function( 'my_function');
SELECT my_function('aaa{BBB}ccc{ddd}eee' );
SELECT my_function( '{AAA}bbb{CCC}DDD}{EEE}' );
And the result:
CREATE FUNCTION
NOTICE: script=CREATE FUNCTION my_function (_arg1 text) RETURNS text AS
$omg$
BEGIN
RETURN replace(replace(replace(_arg1, '{BBB}', '/1.2.3.4/'), '{ccc}', 'OMG'), '{ddd}', '/4.3.2.1/');
END;
$omg$ LANGUAGE plpgsql;
create_replacement_function
-----------------------------
(1 row)
my_function
-----------------------------
aaa/1.2.3.4/ccc/4.3.2.1/eee
(1 row)
my_function
------------------------
{AAA}bbb{CCC}DDD}{EEE}
(1 row)
The following offers a plpgsql solution with a single function.
You'll notice I've 'renamed' the value column; it's bad practice to use reserved/key words as object names. Also, soq is the schema I use for all SO code.
The process first takes the holder-values from table2 and generates a set of key-value pairs (in this case hstore, but jsonb would also work). It then builds an array of the placeholder names found in the value column (my column name: val_string). Finally, it iterates over that array, replacing each holder name with the value from the key-value pairs, using the array element as the lookup key.
The performance would not be great with a larger volume from either table. If you need to process a large volume at a time, a single-row temp table may yield better performance.
create or replace function soq.replace_holders( place_holder_line_in text)
returns text
language plpgsql
as $$
declare
l_holder_values hstore;
l_holder_line text;
l_holder_array text[];
l_indx integer;
begin
-- transform columns to key-value pairs of holder-value
select string_agg(place,',')::hstore
into l_holder_values
from (
select concat( '"',place_holder,'"=>"',place_value,'"') place
from soq.table2
) p;
-- raise notice 'holder_array_in==%',l_holder_values;
-- extract the text line and build array of place_holder names
select phv, string_to_array (string_agg(v,','),',')
into l_holder_line,l_holder_array
from (
select replace(replace(place_holder_line_in,'{',''),'}','') phv
, replace(replace(replace(regexp_matches(place_holder_line_in,'({[^}]+})','g')::text ,'{',''),'}',''),'"','') v
) s
group by phv;
-- raise notice 'Array==%',l_holder_array::text;
-- replace each key from text line with the corresponding value
for l_indx in 1 .. array_length(l_holder_array,1)
loop
l_holder_line = replace(l_holder_line,l_holder_array[l_indx],l_holder_values -> l_holder_array[l_indx]);
end loop;
-- done
return l_holder_line;
end;
$$;
-- Test driver
select id, soq.replace_holders(val_string) result_value from soq.table1;
I have created a simple query for this solution and it works as required.
WITH RECURSIVE cte(id, value, level) AS (
SELECT id,value, 0 as level
FROM Table1
UNION
SELECT ts.id,replace(ts.value,'{'||tp.placeholder||'}',tp.value) as value, level+1
FROM cte ts, Table2 tp WHERE ts.value LIKE CONCAT('%',tp.placeholder, '%')
)
SELECT id, value FROM cte c
where level =
(
select Max(level)
from cte c2 where c.id=c2.id
)
Output is
id value
CDA4C3D4-72B5-4422-8071-A29D32BD14E0 https://10.90.30.40/svc/mytest-service/
608CB424-90BF-4B08-8CF8-241C7635434F jdbc:postgresql://1.2.3.4:5432/mytest

Multiple ALTER TABLE ADD COLUMN in one SQL function call

I came across some weird behaviour I'd like to understand.
I create a plpgsql function that does nothing except ALTER TABLE ADD COLUMN. I call it 2 times on the same table:
A) In a single SELECT sentence
B) In a SQL function with same SELECT as in A)
Results are different: A) creates two columns, while B) creates only one column. Why?
Code:
CREATE FUNCTION add_text_column(table_name text, column_name text) RETURNS VOID
LANGUAGE plpgsql
AS $fff$
BEGIN
EXECUTE '
ALTER TABLE ' || table_name || '
ADD COLUMN ' || column_name || ' text;
';
END;
$fff$
;
-- this function is called only in B
CREATE FUNCTION add_many_text_columns(table_name text) RETURNS VOID
LANGUAGE SQL
AS $fff$
WITH
col_names (col_name) AS (
VALUES
( 'col_1' ),
( 'col_2' )
)
SELECT add_text_column(table_name, col_name)
FROM col_names
;
$fff$
;
-- A)
CREATE TABLE a (id integer);
WITH
col_names (col_name) AS (
VALUES
( 'col_1' ),
( 'col_2' )
)
SELECT add_text_column('a', col_name)
FROM col_names
;
SELECT * FROM a;
-- B)
CREATE TABLE b (id integer);
SELECT add_many_text_columns('b');
SELECT * FROM b;
Result:
CREATE FUNCTION
CREATE FUNCTION
CREATE TABLE
add_text_column
-----------------
(2 rows)
id | col_1 | col_2
----+-------+-------
(0 rows)
CREATE TABLE
add_many_text_columns
-----------------------
(1 row)
id | col_1
----+-------
(0 rows)
I'm using PostgreSQL 10.4. Please note that this is only a minimal working example, not the full functionality I need.
CREATE OR REPLACE FUNCTION g(i INTEGER)
RETURNS VOID AS $$
BEGIN
RAISE NOTICE 'g called with %', i;
END
$$ LANGUAGE plpgsql;
CREATE OR REPLACE FUNCTION t(i INTEGER)
RETURNS VOID AS $$
SELECT g(id)
FROM generate_series(1, i) id;
$$ LANGUAGE SQL;
What do you think happens when I run SELECT t(4)? The only statement printed from g() is g called with 1.
The reason for this is that your add_many_text_columns function returns a single result (void). Because it's SQL and simply returns the result of a SELECT statement, it seems to stop executing after getting the first result, which makes sense if you think about it: it can only return one result, after all.
Now change the function to:
CREATE OR REPLACE FUNCTION t(i INTEGER)
RETURNS SETOF VOID AS $$
SELECT g(id)
FROM generate_series(1, i) id;
$$ LANGUAGE SQL;
And run SELECT t(4) again, and now this is printed:
g called with 1
g called with 2
g called with 3
g called with 4
Because the function now returns SETOF VOID, it doesn't stop after the first result and executes it fully.
So back to your functions: you could change your SQL function to return SETOF VOID, but that doesn't really make much sense; better, I think, to change it to plpgsql and have it do a PERFORM:
CREATE OR REPLACE FUNCTION t(i INTEGER)
RETURNS VOID AS $$
BEGIN
PERFORM g(id)
FROM generate_series(1, i) id;
END
$$ LANGUAGE plpgsql;
That will execute the statement fully and it still returns a single VOID.
eurotrash provided a good explanation.
Alternative solution 1
CREATE OR REPLACE FUNCTION t(i INTEGER)
RETURNS VOID AS
$func$
SELECT g(id)
FROM generate_series(1, i) id;
SELECT null::void;
$func$ LANGUAGE sql;
Because, quoting the manual:
SQL functions execute an arbitrary list of SQL statements, returning
the result of the last query in the list. In the simple (non-set)
case, the first row of the last query's result will be returned.
By adding a dummy SELECT at the end we prevent Postgres from stopping after processing the first row of the query that returns multiple rows.
Alternative solution 2
CREATE OR REPLACE FUNCTION t(i INTEGER)
RETURNS bigint AS
$func$
SELECT count(g(id))
FROM generate_series(1, i) id;
$func$ LANGUAGE sql;
By using an aggregate function, all underlying rows are processed in any case. The function returns bigint (that's what count() returns), so we get the number of rows as result.
Alternative solution 3
If you need to return void for some unknown reason, you can cast:
CREATE OR REPLACE FUNCTION t(i INTEGER)
RETURNS VOID AS
$func$
SELECT count(g(id))::text::void
FROM generate_series(1, i) id;
$func$ LANGUAGE sql;
The cast to text is a stepping stone because the cast from bigint to void is not defined.
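For instance, the direct cast fails (illustrative):
SELECT 1::bigint::void;  -- ERROR:  cannot cast type bigint to void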

Iterate through column names to get counts in a PL/pgSQL function

I have a table in my Postgres database that I'm trying to determine fill rates for (that is, I'm trying to understand how often data is/isn't missing). I need to make a function that, for each column (in a list of a couple dozen columns I've selected), counts the number and percentage of rows with non-null values.
The problem is, I don't really know how to iterate through a list of columns in a programmatic way, because I don't know how to reference a column from a string of its name. I've read about how you can use the EXECUTE command to run dynamically-written SQL, but I haven't been able to get it to work. Here's my current function:
CREATE OR REPLACE FUNCTION get_fill_rates() RETURNS TABLE (field_name text, fill_count integer, fill_percentage float) AS $$
DECLARE
fields text[] := array['column_a', 'column_b', 'column_c'];
total_rows integer;
BEGIN
SELECT reltuples INTO total_rows FROM pg_class WHERE relname = 'my_table';
FOR i IN array_lower(fields, 1) .. array_upper(fields, 1)
LOOP
field_name := fields[i];
EXECUTE 'SELECT COUNT(*) FROM my_table WHERE $1 IS NOT NULL' INTO fill_count USING field_name;
fill_percentage := fill_count::float / total_rows::float;
RETURN NEXT;
END LOOP;
END;
$$ LANGUAGE plpgsql;
SELECT * FROM get_fill_rates() ORDER BY fill_count DESC;
This function, as written, returns every field as having a 100% fill rate, which I know to be false. How can I make this function work?
I know you already solved it, but let me suggest that you avoid concatenating identifiers into dynamic queries; you can use format() with an identifier wildcard instead:
CREATE OR REPLACE FUNCTION get_fill_rates() RETURNS TABLE (field_name text, fill_count integer, fill_percentage float) AS $$
DECLARE
fields text[] := array['column_a', 'column_b', 'column_c'];
table_name name := 'my_table';
total_rows integer;
BEGIN
SELECT reltuples INTO total_rows FROM pg_class WHERE relname = table_name;
FOREACH field_name IN ARRAY fields
LOOP
EXECUTE format('SELECT COUNT(*) FROM %I WHERE %I IS NOT NULL', table_name, field_name) INTO fill_count;
fill_percentage := fill_count::float / total_rows::float;
RETURN NEXT;
END LOOP;
END;
$$ LANGUAGE plpgsql;
Doing it this way helps prevent SQL injection attacks and reduces query parse overhead a bit. More info here.
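To see what the identifier wildcard buys, feed it a malicious 'table name' (illustration):
SELECT format('SELECT count(*) FROM %I', 'my_table; DROP TABLE my_table;');
-- => SELECT count(*) FROM "my_table; DROP TABLE my_table;"
-- the injected text becomes a single (harmless) quoted identifier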
I figured out the solution after I wrote my question but before I submitted it -- since I've already done the work of writing the question, I'll just go ahead and share the answer. The problem was in my EXECUTE statement, specifically the USING field_name bit. I think it was getting treated as a string literal that way, which meant the query was evaluating whether "a string literal" IS NOT NULL, which, of course, is always true.
Instead of parameterizing the column name, I need to inject it directly into the query string. So, I changed my EXECUTE line to the following:
EXECUTE 'SELECT COUNT(*) FROM my_table WHERE ' || field_name || ' IS NOT NULL' INTO fill_count;
Some problems in the code aside (see below), this can be substantially faster and simpler with a single scan over the table in a plain query:
SELECT v.*
FROM (
SELECT count(column_a) AS ct_column_a
, count(column_b) AS ct_column_b
, count(column_c) AS ct_column_c
, count(*)::numeric AS ct
FROM my_table
) sub
, LATERAL (
VALUES
(text 'column_a', ct_column_a, round(ct_column_a / ct, 3))
, (text 'column_b', ct_column_b, round(ct_column_b / ct, 3))
, (text 'column_c', ct_column_c, round(ct_column_c / ct, 3))
) v(field_name, fill_count, fill_percentage);
The crucial "trick" here is that count() only counts non-null values to begin with, no tricks required.
I rounded the percentage to 3 decimal digits, which is optional. For this I cast to numeric.
Use a VALUES expression to unpivot the results and get one row per field.
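A quick illustration of that behavior:
-- count(expr) skips NULLs, count(*) counts every row:
SELECT count(x) AS non_null, count(*) AS all_rows
FROM  (VALUES (1), (NULL), (3)) t(x);
-- non_null = 2, all_rows = 3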
For repeated use, or if you have a long list of columns to process, you can generate and execute the query dynamically. But, again, don't run a separate count for each column. Just build the above query dynamically:
CREATE OR REPLACE FUNCTION get_fill_rates(tbl regclass, fields text[])
RETURNS TABLE (field_name text, fill_count bigint, fill_percentage numeric) AS
$func$
BEGIN
RETURN QUERY EXECUTE (
-- RAISE NOTICE '%', ( -- to debug if needed
SELECT
'SELECT v.*
FROM (
SELECT count(*)::numeric AS ct
, ' || string_agg(format('count(%I) AS %I', fld, 'ct_' || fld), ', ') || '
FROM ' || tbl || '
) sub
, LATERAL (
VALUES
(text ' || string_agg(format('%L, %2$I, round(%2$I/ ct, 3))', fld, 'ct_' || fld), ', (') || '
) v(field_name, fill_count, fill_pct)
ORDER BY v.fill_count DESC'
FROM unnest(fields) fld
);
END
$func$ LANGUAGE plpgsql;
Call:
SELECT * FROM get_fill_rates('my_table', '{column_a, column_b, column_c}');
As you can see, this works for any given table and column list now.
And all identifiers are properly quoted automatically, using format() or by the built-in virtues of the regclass type.
Related:
Table name as a PostgreSQL function parameter
How to unpivot a table in PostgreSQL
Query for crosstab view
Convert one row into multiple rows with fewer columns
Your original query could be improved like this, but this is just lipstick on a pig. Do not use this inefficient approach.
CREATE OR REPLACE FUNCTION get_fill_rates()
RETURNS TABLE (field_name text, fill_count bigint, fill_percentage float) AS
$$
DECLARE
fields text[] := '{column_a, column_b, column_c}'; -- must be legal identifiers!
total_rows float; -- use float right away
BEGIN
SELECT reltuples INTO total_rows FROM pg_class WHERE relname = 'my_table';
FOREACH field_name IN ARRAY fields -- use FOREACH
LOOP
EXECUTE 'SELECT COUNT(*) FROM my_table WHERE ' || field_name || ' IS NOT NULL'
INTO fill_count;
fill_percentage := fill_count / total_rows; -- already type float
RETURN NEXT;
END LOOP;
END
$$ LANGUAGE plpgsql;
Plus, pg_class.reltuples is only an estimate. Since you are counting anyway, use an actual count.
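For comparison (illustrative):
-- planner estimate, maintained by VACUUM / ANALYZE (can lag reality):
SELECT reltuples::bigint AS estimated_rows
FROM   pg_class
WHERE  relname = 'my_table';
-- exact, at the cost of a scan:
SELECT count(*) AS exact_rows FROM my_table;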
Related:
Iterating over integer[] in PL/pgSQL
Fast way to discover the row count of a table in PostgreSQL

Using dynamic query + user defined datatype in Postgres

I need a function to normalize the values in my input table features.
My features table has 9 columns, of which x1, x2, ..., x6 are the input columns I need to scale.
I'm able to do it by using a static query:
create or replace function scale_function()
returns void as $$
declare tav1 features%rowtype; rang1 features%rowtype;
begin
select avg(n),avg(x0),avg(x1),avg(x2),avg(x3),avg(x4),avg(x5),avg(x6),avg(y)
into tav1 from features;
select max(n)-min(n),max(x0)-min(x0),max(x1)-min(x1),max(x2)-min(x2),max(x3)-min(x3),
max(x4)-min(x4),max(x5)-min(x5),max(x6)-min(x6),max(y)-min(y)
into rang1 from features;
update features
set x1= (x1-tav1.x1)/(rang1.x1),x2= (x2-tav1.x2)/(rang1.x2),
x3= (x3-tav1.x3)/(rang1.x3),x4= (x4-tav1.x4)/(rang1.x4),
x5= (x5-tav1.x5)/(rang1.x5),x6= (x6-tav1.x6)/(rang1.x6),
y= (y-tav1.y)/(rang1.y);
return;
end;
$$ language plpgsql;
But now I require a dynamic query to scale n column values, i.e., x1, x2, ..., xn (say I have 200+ columns) in my features table. I'm trying this code, but it won't work as there is an issue with a user-defined data type:
create or replace function scale_function(n int)
returns void as $$
declare
tav1 features%rowtype;
rang1 features%rowtype;
query1 text :=''; query2 text :='';
begin
for i in 0..n
loop
query1 := query1 ||',avg(x'||i||')';
query2 := query2||',max(x'||i||')-min(x'||i||')';
end loop;
query1 := 'select avg(n)'||query1||',avg(y) into tav1 from features;';
execute query1;
query2 := 'select max(n)-min(n)'||query2||',max(y)-min(y) into rang1 from features;';
execute query2;
update features
set x1= (x1-tav1.x1)/(rang1.x1), ... ,xn=(xn-tav1.xn)/(rang1.xn)
,y= (y-tav1.y)/(rang1.y);
return;
end;
$$ language plpgsql;
Here I'm trying to take the avg() values of the columns into a user-defined rowtype tav1 and then use that tav1 value in the update.
Can anyone help me with how to update the features table values using a dynamic query for 'n' such columns?
************ Error ************
ERROR: column "avg" specified more than once
SQL state: 42701
Context: SQL statement "select avg(n),avg(x0),avg(x1),avg(x2),avg(x3),avg(x4),avg(x5),avg(x6),avg(y) into tav1 from features;"
PL/pgSQL function scale_function(integer) line 12 at EXECUTE statement
I'm using PostgreSQL 9.3.0.
Basic UPDATE
Replace the first query with this much shorter and more efficient single UPDATE command:
UPDATE features
SET (x1,x2,x3,x4,x5,x6, y)
= ((x1 - g.avg1) / g.range1
, (x2 - g.avg2) / g.range2
-- , (x3 - ...
, (y - g.avgy) / g.rangey)
FROM (
SELECT avg(x1) AS avg1, max(x1) - min(x1) AS range1
, avg(x2) AS avg2, max(x2) - min(x2) AS range2
-- , avg(x3) ...
, avg(y) AS avgy, max(y) - min(y) AS rangey
FROM features
) g;
About the short UPDATE syntax:
SQL update fields of one table from fields of another one
Dynamic function
Building on the simpler query, here is a dynamic function for any number of columns:
CREATE OR REPLACE FUNCTION scale_function_dyn()
RETURNS void AS
$func$
DECLARE
cols text; -- list of target columns
vals text; -- list of values to insert
aggs text; -- column list for aggregate query
BEGIN
SELECT INTO cols, vals, aggs
string_agg(quote_ident(attname), ', ')
, string_agg(format('(%I - g.%I) / g.%I'
, attname, 'avg_' || attname, 'range_' || attname), ', ')
, string_agg(format('avg(%1$I) AS %2$I, max(%1$I) - min(%1$I) AS %3$I'
, attname, 'avg_' || attname, 'range_' || attname), ', ')
FROM pg_attribute
WHERE attrelid = 'features'::regclass
AND attname NOT IN ('n', 'x0') -- exclude columns from update
AND NOT attisdropped -- no dropped (dead) columns
AND attnum > 0; -- no system columns
EXECUTE format('UPDATE features
SET (%s) = (%s)
FROM (SELECT %s FROM features) g'
, cols, vals, aggs);
END
$func$ LANGUAGE plpgsql;
Related answer with more explanation:
Update multiple columns in a trigger function in plpgsql
SQL Fiddle.