In Postgres, how would I retrieve the default value of a column, preferably inline in an insert statement? - sql

Here's my example table:
CREATE TABLE IF NOT EXISTS public.cars
(
id serial PRIMARY KEY,
make varchar(32) not null,
model varchar(32),
has_automatic_transmission boolean not null default false,
created_on_date timestamptz not null DEFAULT NOW()
);
I have a function that allows my data service to insert a car into the database. It looks like this:
drop function if exists cars_insert;
create function cars_insert
(
in make_in text,
in model_in text,
in has_automatic_transmission_in boolean,
in created_on_date_in timestamptz
)
returns public.cars as
$$
declare result_set public.cars;
begin
insert into cars
(
make,
model,
has_automatic_transmission,
created_on_date
)
values
(
make_in,
model_in,
has_automatic_transmission_in,
created_on_date_in
)
returning * into result_set;
return result_set;
end;
$$
language 'plpgsql';
This works really well until the service wants to insert a car with no value for has_automatic_transmission or created_on_date. In that case they'd send null for those parameters and would expect the database to use a default value. But instead the database rejects that null for obvious reasons (NOT NULL!).
What I want to do is have the insert routine do a coalesce to DEFAULT, but that doesn't work. Here's the logic I want for the insert:
insert into cars
(
make,
model,
has_automatic_transmission,
created_on_date
)
values
(
make_in,
model_in,
COALESCE(has_automatic_transmission_in, DEFAULT),
COALESCE(created_on_date_in, DEFAULT)
)
How can I effectively achieve that? Ideally it'd be some method I can apply inline to every column so that we don't need special knowledge of which columns do or don't have defaults, but I'll take anything at this point...
Except I'd like to avoid Dynamic SQL if possible.
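For reference, DEFAULT works only as a bare item in a VALUES list; it can't be nested inside an expression, which is exactly why the COALESCE above fails:
INSERT INTO cars (make, model, has_automatic_transmission, created_on_date)
VALUES ('Ford', 'Focus', DEFAULT, DEFAULT);                      -- fine: DEFAULT stands alone
-- VALUES ('Ford', 'Focus', COALESCE(NULL, DEFAULT), DEFAULT);   -- syntax error: DEFAULT inside an expression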

While you need to pass values to a function, and want to insert default values instead of NULL dynamically, you could look them up like this (but see disclaimer below!):
CREATE OR REPLACE FUNCTION cars_insert (make_in text
, model_in text
, has_automatic_transmission_in boolean
, created_on_date_in timestamptz)
RETURNS public.cars AS
$func$
INSERT INTO cars(make, model, has_automatic_transmission, created_on_date)
VALUES (make_in
, model_in
, COALESCE(has_automatic_transmission_in
, (SELECT pg_get_expr(d.adbin, d.adrelid)::bool -- default_value
FROM pg_catalog.pg_attribute a
JOIN pg_catalog.pg_attrdef d ON (d.adrelid, d.adnum) = (a.attrelid, a.attnum)
WHERE a.attrelid = 'public.cars'::regclass
AND a.attname = 'has_automatic_transmission'))
, COALESCE(created_on_date_in
, (SELECT pg_get_expr(d.adbin, d.adrelid)::timestamptz -- default_value
FROM pg_catalog.pg_attribute a
JOIN pg_catalog.pg_attrdef d ON (d.adrelid, d.adnum) = (a.attrelid, a.attnum)
WHERE a.attrelid = 'public.cars'::regclass
AND a.attname = 'created_on_date'))
)
RETURNING *;
$func$
LANGUAGE sql;
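An illustrative call: passing NULLs now falls back to the looked-up defaults.
SELECT * FROM cars_insert('Toyota', 'Corolla', NULL, NULL);
-- has_automatic_transmission comes back false; created_on_date comes from the stored default (but see the caveat below)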
You also have to know the column type to cast the text returned from pg_get_expr().
I simplified to an SQL function, as nothing here requires PL/pgSQL.
See:
Get the default values of table columns in Postgres?
However, this only works for constants and types where a cast from text is defined. Other expressions (incl. functions) are not evaluated without dynamic SQL. now() in the example only happens to work by coincidence, as 'now' (ignoring parentheses) is a special input string for timestamptz that evaluates to the same as the function now(). Misleading coincidence. See:
Difference between now() and current_timestamp
To make it work for expressions that have to be evaluated, dynamic SQL is required - which you ruled out. But if dynamic SQL is allowed, it's much more efficient to build the target list of the INSERT dynamically and omit columns that are supposed to get default values. Or keep the target list constant and switch NULL values for the DEFAULT keyword. See:
Function to INSERT dynamic list of columns in multiple tables
Test for null in function with varying parameters
Generate DEFAULT values in a CTE UPSERT using PostgreSQL 9.3

I like Erwin's solution from the playfulness point of view, but it is quite expensive to have these subqueries in every INSERT. For practical purposes, I would recommend one of the following:
Have four INSERT statements in the function, one for each combination of default/non-default arguments, and use IF statements to pick the right one.
Don't use DEFAULT, but write a BEFORE INSERT trigger that replaces NULLs with the appropriate values (see the sketch below).
Of course this will add overhead too. You should benchmark the different options.
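A minimal sketch of the trigger variant for the cars table from the question (trigger and function names are made up):
CREATE OR REPLACE FUNCTION cars_fill_defaults()
RETURNS trigger AS
$$
BEGIN
    -- replace NULLs before the NOT NULL constraints are checked
    NEW.has_automatic_transmission := COALESCE(NEW.has_automatic_transmission, false);
    NEW.created_on_date := COALESCE(NEW.created_on_date, now());
    RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER cars_fill_defaults
BEFORE INSERT ON cars
FOR EACH ROW EXECUTE FUNCTION cars_fill_defaults();  -- use EXECUTE PROCEDURE on Postgres 10 and older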

Building on the suggestions made by previous commenters, I would write a function that dynamically generates an insert function for each table.
The advantage of such an approach is that the resulting insert function will not use dynamic SQL at all.
Function generating function:
CREATE OR REPLACE FUNCTION f_generate_insert_function(tableid regclass) RETURNS VOID LANGUAGE PLPGSQL AS
$$
DECLARE
tablename text := tableid::text;
funcname text := tablename || '_insert';
ddl text := $ddl$
CREATE OR REPLACE FUNCTION %s (%s) RETURNS %s LANGUAGE PLPGSQL AS $func$
DECLARE
result_set %s;
BEGIN
INSERT INTO %s
(
%s
)
VALUES
(
%s
)
RETURNING * INTO result_set;
RETURN result_set;
END;
$func$
$ddl$;
argument_list text := '';
column_list text := '';
value_list text := '';
r record;
BEGIN
FOR r IN
SELECT attname nam, pg_catalog.format_type(atttypid, atttypmod) typ, pg_catalog.pg_get_expr(adbin, adrelid) def
FROM pg_catalog.pg_attribute
JOIN pg_catalog.pg_type t
ON t.oid = atttypid
LEFT JOIN pg_catalog.pg_attrdef
ON adrelid = attrelid AND adnum = attnum AND atthasdef
WHERE attrelid = tableid
AND attnum > 0
LOOP
IF r.def LIKE 'nextval%' THEN
CONTINUE;
END IF;
argument_list := argument_list || r.nam || '_in ' || r.typ || ',';
column_list := column_list || r.nam || ',';
IF r.def IS NULL THEN
value_list := value_list || r.nam || '_in,';
ELSE
value_list := value_list || 'coalesce(' || r.nam || '_in,' || r.def || '),';
END IF;
END LOOP;
argument_list := rtrim(argument_list, ',');
column_list := rtrim(column_list, ',');
value_list := rtrim(value_list, ',');
EXECUTE format(ddl, funcname, argument_list, tablename, tablename, tablename, column_list, value_list);
END;
$$;
In your case, the resulting insert function will be:
CREATE OR REPLACE FUNCTION public.cars_insert(make_in character varying, model_in character varying, has_automatic_transmission_in boolean, created_on_date_in timestamp with time zone)
RETURNS cars
LANGUAGE plpgsql
AS $function$
DECLARE
result_set cars;
BEGIN
INSERT INTO cars
(
make,model,has_automatic_transmission,created_on_date
)
VALUES
(
make_in,model_in,coalesce(has_automatic_transmission_in,false),coalesce(created_on_date_in,now())
)
RETURNING * INTO result_set;
RETURN result_set;
END;
$function$
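A usage sketch, assuming the cars table from the first question and the generator above (call and arguments are illustrative):
SELECT f_generate_insert_function('public.cars');
SELECT * FROM cars_insert('Toyota', 'Corolla', NULL, NULL);  -- the NULLs are coalesced to the column defaults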

You need two INSERT statements: one where the columns in question are filled and another which omits them, because a default is only used if you do not reference the column in the INSERT at all.
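A minimal sketch of that branching inside the function body, handling only the created_on_date parameter (assumption: the other parameters are always supplied):
IF created_on_date_in IS NULL THEN
    INSERT INTO cars (make, model, has_automatic_transmission)
    VALUES (make_in, model_in, has_automatic_transmission_in)
    RETURNING * INTO result_set;
ELSE
    INSERT INTO cars (make, model, has_automatic_transmission, created_on_date)
    VALUES (make_in, model_in, has_automatic_transmission_in, created_on_date_in)
    RETURNING * INTO result_set;
END IF;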

Related

Replacing Placeholder values with another table's data

I have 2 tables. The first table contains rows with placeholders and the second table contains the values for those placeholders.
I want a query which fetches data from the first table and replaces placeholders with actual values which are stored in the second table.
Ex:
Table1 Data
id value
608CB424-90BF-4B08-8CF8-241C7635434F jdbc:postgresql://{POSTGRESIP}:{POSTGRESPORT}/{TESTDB}
CDA4C3D4-72B5-4422-8071-A29D32BD14E0 https://{SERVICEIP}/svc/{TESTSERVICE}/
Table2 Data
id placeholder value
201FEBFE-DF92-4474-A945-A592D046CA02 POSTGRESIP 1.2.3.4
20D9DE14-643F-4CE3-B7BF-4B7E01963366 POSTGRESPORT 5432
45611605-F2D9-40C8-8C0C-251E300E183C TESTDB mytest
FA8E2E4E-014C-4C1C-907E-64BAE6854D72 SERVICEIP 10.90.30.40
45B76C68-8A0F-4FD3-882F-CA579EC799A6 TESTSERVICE mytest-service
Required output is
id value
608CB424-90BF-4B08-8CF8-241C7635434F jdbc:postgresql://1.2.3.4:5432/mytest
CDA4C3D4-72B5-4422-8071-A29D32BD14E0 https://10.90.30.40/svc/mytest-service/
If you want to use Python-like named placeholders then you need a helper function written in plpythonu:
create extension plpythonu;
create or replace function formatpystring( str text, a json ) returns text immutable language plpythonu as $$
import json
d = json.loads(a)
return str.format(**d)
$$;
Then simple test:
select formatpystring('{foo}.{bar}', '{"foo": "win", "bar": "amp"}');
formatpystring
----------------
win.amp
Finally you need to compose those arguments from your tables. It is simple:
select t1.id, formatpystring(t1.value, json_object_agg(t2.placeholder, t2.value)) as value
from table1 as t1, table2 as t2
group by t1.id, t1.value;
(Query was not tested but you have the direction)
(Clumsy) dynamic SQL implementation, generating a nested replace() call and executing it:
This function will not be very efficient, but probably the translation table is relatively small.
CREATE TABLE xlat_table (aa text ,bb text);
INSERT INTO xlat_table (aa ,bb ) VALUES( 'BBB', '/1.2.3.4/')
,( 'ccc', 'OMG') ,( 'ddd', '/4.3.2.1/') ;
CREATE FUNCTION dothe_replacements(_arg1 text) RETURNS text
AS
$func$
DECLARE
script text;
braced text;
res text;
found record; -- (aa text, bb text, xx text);
BEGIN
script := '';
res := format('%L', _arg1);
for found IN SELECT xy.aa,xy.bb
, regexp_matches(_arg1, '{\w+}','g' ) AS xx
FROM xlat_table xy
LOOP
-- RAISE NOTICE '#xx=%', found.xx[1];
-- RAISE NOTICE 'aa=%', found.aa;
-- RAISE NOTICE 'bb=%', found.bb;
braced := '{'|| found.aa || '}';
IF (found.xx[1] = braced ) THEN
-- RAISE NOTICE 'Res=%', res;
script := format ('replace(%s, %L, %L)'
,res,braced,found.bb);
res := format('%s', script);
END IF;
END LOOP;
if(length(script) =0) THEN return res; END IF;
script :='Select '|| script;
-- RAISE NOTICE 'script=%', script;
EXECUTE script INTO res;
return res;
END;
$func$
LANGUAGE plpgsql;
SELECT dothe_replacements( 'aaa{BBB}ccc{ddd}eee' );
SELECT dothe_replacements( '{AAA}bbb{CCC}DDD}{EEE}' );
Results:
CREATE TABLE
INSERT 0 3
CREATE FUNCTION
dothe_replacements
-----------------------------
aaa/1.2.3.4/ccc/4.3.2.1/eee
(1 row)
dothe_replacements
--------------------------
'{AAA}bbb{CCC}DDD}{EEE}'
(1 row)
The above method has quadratic behaviour (with respect to the number of xlat entries), which is horrible.
But we could dynamically create a function (once) and call it multiple times
(a poor man's generator).
Selecting only the relevant entries from the xlat table should probably be added.
And, of course, you should re-create the function every time the xlat table is changed.
CREATE FUNCTION create_replacement_function(_name text) RETURNS void
AS
$func$
DECLARE
argname text;
res text;
script text;
braced text;
found record; -- (aa text, bb text, xx text);
BEGIN
script := '';
argname := '_arg1';
res :=format('%I', argname);
for found IN SELECT xy.aa,xy.bb
FROM xlat_table xy
LOOP
-- RAISE NOTICE 'aa=%', found.aa;
-- RAISE NOTICE 'bb=%', found.bb;
-- RAISE NOTICE 'Res=%', res;
braced := '{'|| found.aa || '}';
script := format ('replace(%s, %L, %L)'
,res,braced,found.bb);
res := format('%s', script);
END LOOP;
script :=FORMAT('CREATE FUNCTION %I (_arg1 text) RETURNS text AS
$omg$
BEGIN
RETURN %s;
END;
$omg$ LANGUAGE plpgsql;', _name, script);
RAISE NOTICE 'script=%', script;
EXECUTE script ;
return ;
END;
$func$
LANGUAGE plpgsql;
SELECT create_replacement_function( 'my_function');
SELECT my_function('aaa{BBB}ccc{ddd}eee' );
SELECT my_function( '{AAA}bbb{CCC}DDD}{EEE}' );
And the result:
CREATE FUNCTION
NOTICE: script=CREATE FUNCTION my_function (_arg1 text) RETURNS text AS
$omg$
BEGIN
RETURN replace(replace(replace(_arg1, '{BBB}', '/1.2.3.4/'), '{ccc}', 'OMG'), '{ddd}', '/4.3.2.1/');
END;
$omg$ LANGUAGE plpgsql;
create_replacement_function
-----------------------------
(1 row)
my_function
-----------------------------
aaa/1.2.3.4/ccc/4.3.2.1/eee
(1 row)
my_function
------------------------
{AAA}bbb{CCC}DDD}{EEE}
(1 row)
The following offers a plpgsql solution with a single function.
You'll notice I've 'renamed' the value column. It's bad practice to use reserved/key words as object names. Also, soq is the schema I use for all SO code.
The process first takes the holder-values from table2 and generates a set of key-value pairs (in this case hstore, but jsonb would also work). It then builds an array of the placeholder names found in the value column (my column name: val_string). Finally, it iterates over that array, replacing each holder name with the value from the key-value pairs, using the array element as the lookup key.
The performance would not be great with a larger volume from either table. If you need to process a large volume at a time, a temp table may yield better performance.
create or replace function soq.replace_holders( place_holder_line_in text)
returns text
language plpgsql
as $$
declare
l_holder_values hstore;
l_holder_line text;
l_holder_array text[];
l_indx integer;
begin
-- transform columns to key-value pairs of holder-value
select string_agg(place,',')::hstore
into l_holder_values
from (
select concat( '"',place_holder,'"=>"',place_value,'"') place
from soq.table2
) p;
-- raise notice 'holder_array_in==%',l_holder_values;
-- extract the text line and build array of place_holder names
select phv, string_to_array (string_agg(v,','),',')
into l_holder_line,l_holder_array
from (
select replace(replace(place_holder_line_in,'{',''),'}','') phv
, replace(replace(replace(regexp_matches(place_holder_line_in,'({[^}]+})','g')::text ,'{',''),'}',''),'"','') v
) s
group by phv;
-- raise notice 'Array==%',l_holder_array::text;
-- replace each key from text line with the corresponding value
for l_indx in 1 .. array_length(l_holder_array,1)
loop
l_holder_line = replace(l_holder_line,l_holder_array[l_indx],l_holder_values -> l_holder_array[l_indx]);
end loop;
-- done
return l_holder_line;
end;
$$;
-- Test driver
select id, soq.replace_holders(val_string) result_value from soq.table1;
I have created a simple query for this solution and it is working as required.
WITH RECURSIVE cte(id, value, level) AS (
SELECT id,value, 0 as level
FROM Table1
UNION
SELECT ts.id,replace(ts.value,'{'||tp.placeholder||'}',tp.value) as value, level+1
FROM cte ts, Table2 tp WHERE ts.value LIKE CONCAT('%',tp.placeholder, '%')
)
SELECT id, value FROM cte c
where level =
(
select Max(level)
from cte c2 where c.id=c2.id
)
Output is
id value
CDA4C3D4-72B5-4422-8071-A29D32BD14E0 https://10.90.30.40/svc/mytest-service/
608CB424-90BF-4B08-8CF8-241C7635434F jdbc:postgresql://1.2.3.4:5432/mytest

Iterate through column names to get counts in a PL/pgSQL function

I have a table in my Postgres database that I'm trying to determine fill rates for (that is, I'm trying to understand how often data is/isn't missing). I need to make a function that, for each column (in a list of a couple dozen columns I've selected), counts the number and percentage of rows with non-null values.
The problem is, I don't really know how to iterate through a list of columns in a programmatic way, because I don't know how to reference a column from a string of its name. I've read about how you can use the EXECUTE command to run dynamically-written SQL, but I haven't been able to get it to work. Here's my current function:
CREATE OR REPLACE FUNCTION get_fill_rates() RETURNS TABLE (field_name text, fill_count integer, fill_percentage float) AS $$
DECLARE
fields text[] := array['column_a', 'column_b', 'column_c'];
total_rows integer;
BEGIN
SELECT reltuples INTO total_rows FROM pg_class WHERE relname = 'my_table';
FOR i IN array_lower(fields, 1) .. array_upper(fields, 1)
LOOP
field_name := fields[i];
EXECUTE 'SELECT COUNT(*) FROM my_table WHERE $1 IS NOT NULL' INTO fill_count USING field_name;
fill_percentage := fill_count::float / total_rows::float;
RETURN NEXT;
END LOOP;
END;
$$ LANGUAGE plpgsql;
SELECT * FROM get_fill_rates() ORDER BY fill_count DESC;
This function, as written, returns every field as having a 100% fill rate, which I know to be false. How can I make this function work?
I know you already solved it, but let me suggest avoiding concatenation of identifiers in dynamic queries; you can use format() with an identifier wildcard instead:
CREATE OR REPLACE FUNCTION get_fill_rates() RETURNS TABLE (field_name text, fill_count integer, fill_percentage float) AS $$
DECLARE
fields text[] := array['column_a', 'column_b', 'column_c'];
table_name name := 'my_table';
total_rows integer;
BEGIN
SELECT reltuples INTO total_rows FROM pg_class WHERE relname = table_name;
FOREACH field_name IN ARRAY fields
LOOP
EXECUTE format('SELECT COUNT(*) FROM %I WHERE %I IS NOT NULL', table_name, field_name) INTO fill_count;
fill_percentage := fill_count::float / total_rows::float;
RETURN NEXT;
END LOOP;
END;
$$ LANGUAGE plpgsql;
Doing it this way will help you prevent SQL-injection attacks and will reduce query parse overhead a bit.
I figured out the solution after I wrote my question but before I submitted it -- since I've already done the work of writing the question, I'll just go ahead and share the answer. The problem was in my EXECUTE statement, specifically the USING field_name bit. The column name was being treated as a string literal when passed that way, which meant the query was evaluating whether "a string literal" IS NOT NULL, which, of course, is always true.
Instead of parameterizing the column name, I need to inject it directly into the query string. So, I changed my EXECUTE line to the following:
EXECUTE 'SELECT COUNT(*) FROM my_table WHERE ' || field_name || ' IS NOT NULL' INTO fill_count;
Some problems in the code aside (see below), this can be substantially faster and simpler with a single scan over the table in a plain query:
SELECT v.*
FROM (
SELECT count(column_a) AS ct_column_a
, count(column_b) AS ct_column_b
, count(column_c) AS ct_column_c
, count(*)::numeric AS ct
FROM my_table
) sub
, LATERAL (
VALUES
(text 'column_a', ct_column_a, round(ct_column_a / ct, 3))
, (text 'column_b', ct_column_b, round(ct_column_b / ct, 3))
, (text 'column_c', ct_column_c, round(ct_column_c / ct, 3))
) v(field_name, fill_count, fill_percentage);
The crucial "trick" here is that count() only counts non-null values to begin with, no tricks required.
I rounded the percentage to 3 decimal digits, which is optional. For this I cast to numeric.
Use a VALUES expression to unpivot the results and get one row per field.
For repeated use or if you have a long list of columns to process, you can generate and execute the query dynamically. But, again, don't run a separate count for each column. Just build above query dynamically:
CREATE OR REPLACE FUNCTION get_fill_rates(tbl regclass, fields text[])
RETURNS TABLE (field_name text, fill_count bigint, fill_percentage numeric) AS
$func$
BEGIN
RETURN QUERY EXECUTE (
-- RAISE NOTICE '%', ( -- to debug if needed
SELECT
'SELECT v.*
FROM (
SELECT count(*)::numeric AS ct
, ' || string_agg(format('count(%I) AS %I', fld, 'ct_' || fld), ', ') || '
FROM ' || tbl || '
) sub
, LATERAL (
VALUES
(text ' || string_agg(format('%L, %2$I, round(%2$I/ ct, 3))', fld, 'ct_' || fld), ', (') || '
) v(field_name, fill_count, fill_pct)
ORDER BY v.fill_count DESC'
FROM unnest(fields) fld
);
END
$func$ LANGUAGE plpgsql;
Call:
SELECT * FROM get_fill_rates('my_table', '{column_a, column_b, column_c}');
As you can see, this works for any given table and column list now.
And all identifiers are properly quoted automatically, using format() or by the built-in virtues of the regclass type.
Related:
Table name as a PostgreSQL function parameter
How to unpivot a table in PostgreSQL
Query for crosstab view
Convert one row into multiple rows with fewer columns
Your original query could be improved like this, but this is just lipstick on a pig. Do not use this inefficient approach.
CREATE OR REPLACE FUNCTION get_fill_rates()
RETURNS TABLE (field_name text, fill_count bigint, fill_percentage float) AS
$$
DECLARE
fields text[] := '{column_a, column_b, column_c}'; -- must be legal identifiers!
total_rows float; -- use float right away
BEGIN
SELECT reltuples INTO total_rows FROM pg_class WHERE relname = 'my_table';
FOREACH field_name IN ARRAY fields -- use FOREACH
LOOP
EXECUTE 'SELECT COUNT(*) FROM my_table WHERE ' || field_name || ' IS NOT NULL'
INTO fill_count;
fill_percentage := fill_count / total_rows; -- already type float
RETURN NEXT;
END LOOP;
END
$$ LANGUAGE plpgsql;
Plus, pg_class.reltuples is only an estimate. Since you are counting anyway, use an actual count.
Related:
Iterating over integer[] in PL/pgSQL
Fast way to discover the row count of a table in PostgreSQL

Safe way to open cursor with dynamic column name from user input

I am trying to write a function which opens a cursor with a dynamic column name in it.
And I am concerned about obvious SQL injection possibility here.
I was happy to see in the fine manual that this can be easily done, but when I try it in my example, it goes wrong with
error: column does not exist.
My current attempt can be condensed into this SQL Fiddle. Below, I present formatted code for this fiddle.
The goal of the tst() function is to be able to count distinct occurrences of values in any given column of a constant query.
I am asking for hint what am I doing wrong, or maybe some alternative way to achieve the same goal in a safe way.
CREATE TABLE t1 (
f1 character varying not null,
f2 character varying not null
);
CREATE TABLE t2 (
f1 character varying not null,
f2 character varying not null
);
INSERT INTO t1 (f1,f2) VALUES ('a1','b1'), ('a2','b2');
INSERT INTO t2 (f1,f2) VALUES ('a1','c1'), ('a2','c2');
CREATE OR REPLACE FUNCTION tst(p_field character varying)
RETURNS INTEGER AS
$BODY$
DECLARE
v_r record;
v_cur refcursor;
v_sql character varying := 'SELECT count(DISTINCT(%I)) as qty
FROM t1 LEFT JOIN t2 ON (t1.f1=t2.f1)';
BEGIN
OPEN v_cur FOR EXECUTE format(v_sql,lower(p_field));
FETCH v_cur INTO v_r;
CLOSE v_cur;
return v_r.qty;
END;
$BODY$
LANGUAGE plpgsql;
Test execution:
SELECT tst('t1.f1')
Provides error message:
ERROR: column "t1.f1" does not exist
Hint: PL/pgSQL function tst(character varying) line 1 at OPEN
This would work:
SELECT tst('f1');
The problem you are facing: format() interprets parameters concatenated with %I as one identifier. You are trying to pass a table-qualified column name that consists of two identifiers, which is interpreted as "t1.f1" (one name, double-quoted to preserve the otherwise illegal dot in the name).
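A quick demo of the difference (illustrative only):
SELECT format('%I', 't1.f1');        -- "t1.f1"  : one identifier, quoted to preserve the dot
SELECT format('%I.%I', 't1', 'f1');  -- t1.f1    : two identifiers, a table-qualified column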
If you want to pass table and column name, use two parameters:
CREATE OR REPLACE FUNCTION tst2(_col text, _tbl text = NULL)
RETURNS int AS
$func$
DECLARE
v_r record;
v_cur refcursor;
v_sql text := 'SELECT count(DISTINCT %s) AS qty
FROM t1 LEFT JOIN t2 USING (f1)';
BEGIN
OPEN v_cur FOR EXECUTE
format(v_sql, CASE WHEN _tbl <> '' -- rule out NULL and ''
THEN quote_ident(lower(_tbl)) || '.' ||
quote_ident(lower(_col))
ELSE quote_ident(lower(_col)) END);
FETCH v_cur INTO v_r;
CLOSE v_cur;
RETURN v_r.qty;
END
$func$ LANGUAGE plpgsql;
Aside: it's DISTINCT f1 - no parentheses around the column name, unless you want to make it a row type.
Actually, you don't need a cursor for this at all. Faster, simpler:
CREATE OR REPLACE FUNCTION tst3(_col text, _tbl text = NULL, OUT ct bigint) AS
$func$
BEGIN
EXECUTE format('SELECT count(DISTINCT %s) AS qty
FROM t1 LEFT JOIN t2 USING (f1)'
, CASE WHEN _tbl <> '' -- rule out NULL and ''
THEN quote_ident(lower(_tbl)) || '.' ||
quote_ident(lower(_col))
ELSE quote_ident(lower(_col)) END)
INTO ct;
RETURN;
END
$func$ LANGUAGE plpgsql;
I provided NULL as parameter default for convenience. This way you can call the function with just a column name or with column and table name. But not without column name.
Call:
SELECT tst3('f1', 't1');
SELECT tst3('f1');
SELECT tst3(_col := 'f1');
Same as for tst2().
Related answer:
Table name as a PostgreSQL function parameter

Function to find column names dynamically

In the function find_value_in_table() provided below, I am trying to find the name of any columns which have a record where the column value matches the string "255.255.255.255".
In short, something like this:
SELECT column_name
FROM dynamic_table
WHERE column_value = '255.255.255.255';
Note:
the name of the table is passed in as a parameter to the function, hence the "dynamic_table"
I am trying to determine the name of the column, not the data value.
This is a first step. Later I will parameterize the column_value as well. I know there is a table storing the value "255.255.255.255" somewhere, and want to prove functionality of this function by finding the table and column name storing this value.
The aim of all this: I have a big database that I am reverse-engineering. I know it contains, somewhere, some configuration parameters of an application. In the current configuration I know the precise value of some of these parameters (seen in the application config GUI: e.g. a computer name, IP addresses). I want to browse the entire database in order to determine which tables store this configuration information.
I have been building the function find_value() to return these clues.
How can this be done?
create or replace function find_all_columns(tablename in text)
return setof record as
$func$
declare r record;
begin
return select a.attname as "Column",
pg_catalog.format_type(a.atttypid, a.atttypmod) as "Datatype"
from
pg_catalog.pg_attribute a
where
a.attnum > 0
and not a.attisdropped
and a.attrelid = ( select c.oid from pg_catalog.pg_class c left join pg_catalog.pg_namespace n on n.oid = c.relnamespace where c.relname ~ '^(' || quote_ident(tablename) || ')$' and pg_catalog.pg_table_is_visible(c.oid);
end loop;
end;
$func$ language 'plpgsql';
create or replace function find_value_in_table(tablename text)
returns setof record as
$func$
declare r record;
return select
begin
for r in (select find_all_columns(tablename)) loop
return select * from tablename t where t... = "255.255.255.255" /* here column would be the value in the record: r.Column*/
end loop;
end;
$func$ language 'plpgsql';
create or replace function find_tables_name(_username text)
returns setof record as
$func$
declare
tbl text;
begin
for tbl in
select t.tablename from pg_tables t
where t.tableowner = _username and t.schemaname = 'public'
loop
return quote_ident(tbl);
end loop;
end;
$func$ language 'plpgsql';
create or replace function find_value(_username text, valuetofind text)
returns setof record as
$func$
declare r record;
begin
for r in (select find_tables_name(_username)) loop
return find_value_in_table( r.tablename );
end loop;
end;
$func$ language 'plpgsql';
One primitive way to achieve this would be to make a plain-text dump and use an editor of your choice (vim in my case) to search for the string.
But this function does a better job. :)
CREATE OR REPLACE FUNCTION find_columns(_owner text
,_valuetofind text
,_part bool = FALSE)
RETURNS TABLE (tbl text, col text, typ text) LANGUAGE plpgsql STRICT AS
$func$
DECLARE
_go bool;
_search_row text := '%' || _valuetofind || '%'; -- whole-row pre-test always matches partially
BEGIN
IF _part THEN -- search col for part of string?
_valuetofind := '%' || _valuetofind || '%';
END IF;
FOR tbl IN
SELECT quote_ident(t.schemaname) || '.' || quote_ident(t.tablename)
FROM pg_tables t
WHERE t.tableowner = _owner
-- AND t.schemaname = 'public' -- uncomment to only search one schema
LOOP
EXECUTE '
SELECT EXISTS (
SELECT 1 FROM ' || tbl || ' t WHERE t::text ~~ $1)' -- check whole row
INTO _go
USING _search_row;
IF _go THEN
FOR col, typ IN
SELECT quote_ident(a.attname) -- AS col
,pg_catalog.format_type(a.atttypid, a.atttypmod) -- AS typ
FROM pg_catalog.pg_attribute a
WHERE a.attnum > 0
AND NOT a.attisdropped
AND a.attrelid = tbl::regclass
LOOP
EXECUTE '
SELECT EXISTS (
SELECT 1
FROM ' || tbl || ' WHERE ' || col || '::text ~~ $1)' -- check col
INTO _go
USING _valuetofind;
IF _go THEN
RETURN NEXT;
END IF;
END LOOP;
END IF;
END LOOP;
END;
$func$;
COMMENT ON FUNCTION find_columns(text, text, boolean) IS 'Search all tables
owned by the "_owner" user for a value "_valuetofind" (text representation).
Match full or partial (_part).';
Call:
SELECT * FROM find_columns('postgres', '255.255.255.255');
SELECT * FROM find_columns('fadmin', '255.255.255.255', TRUE);
Returns:
tbl | col | typ
-----------------+-------------+------
event.eventkat | eventkat | text
public.foo | description | text
public.bar | filter | text
Tested with PostgreSQL 9.1
Major points
The function is a one-stop-shop.
I built an option to search for part of the value (_part). The default is to search for whole columns.
I built in a quick test on the whole row to eliminate tables that don't have the _valuetofind value in them at all (see the one-liner after this list). I use PostgreSQL's ability to convert whole rows to text quickly for this. This should make the function a lot faster - except when all or almost all tables qualify, or when tables only have a single column.
I define the return type as RETURNS TABLE (tbl text, col text, typ text) and assign the implicitly defined variables tbl, col and typ right away. So I don't need additional variables and can RETURN NEXT right away when a column qualifies.
Make heavy use of EXISTS here! That's the fastest option, as you are only interested whether the column has the value at all.
Use LIKE (or ~~ for short) instead of regular expressions. Simpler, faster.
I quote_ident() all identifiers right away.
EXECUTE *command* INTO USING is instrumental.
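The whole-row pre-test mentioned above boils down to a probe like this (the table name is illustrative):
SELECT EXISTS (SELECT 1 FROM some_table t WHERE t::text LIKE '%255.255.255.255%');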

Can I have a postgres plpgsql function return variable-column records?

I want to create a postgres function that builds the set of columns it
returns on-the-fly; in short, it should take in a list of keys, build
one column per-key, and return a record consisting of whatever that set
of columns was. Briefly, here's the code:
CREATE OR REPLACE FUNCTION reports.get_activities_for_report() RETURNS int[] AS $F$
BEGIN
RETURN ARRAY(SELECT activity_id FROM public.activity WHERE activity_id NOT IN (1, 2));
END;
$F$
LANGUAGE plpgsql
STABLE;
CREATE OR REPLACE FUNCTION reports.get_amount_of_time_query(format TEXT, _activity_id INTEGER) RETURNS TEXT AS $F$
DECLARE
_label TEXT;
BEGIN
SELECT label INTO _label FROM public.activity WHERE activity_id = _activity_id;
IF _label IS NOT NULL THEN
IF lower(format) = 'percentage' THEN
RETURN $$TO_CHAR(100.0 *$$ ||
$$ (SUM(CASE WHEN activity_id = $$ || _activity_id || $$ THEN EXTRACT(EPOCH FROM ended - started) END) /$$ ||
$$ SUM(EXTRACT(EPOCH FROM ended - started))),$$ ||
$$ '990.99 %') AS $$ || quote_ident(_label);
ELSE
RETURN $$SUM(CASE WHEN activity_id = $$ || _activity_id || $$ THEN ended - started END)$$ ||
$$ AS $$ || quote_ident(_label);
END IF;
END IF;
END;
$F$
LANGUAGE plpgsql
STABLE;
CREATE OR REPLACE FUNCTION reports.build_activity_query(format TEXT, activities int[]) RETURNS TEXT AS $F$
DECLARE
_activity_id INT;
query TEXT;
_activity_count INT;
BEGIN
_activity_count := array_upper(activities, 1);
query := $$SELECT agent_id, portal_user_id, SUM(ended - started) AS total$$;
FOR i IN 1.._activity_count LOOP
_activity_id := activities[i];
query := query || ', ' || reports.get_amount_of_time_query(format, _activity_id);
END LOOP;
query := query || $$ FROM public.activity_log_final$$ ||
$$ LEFT JOIN agent USING (agent_id)$$ ||
$$ WHERE started::DATE BETWEEN actual_start_date AND actual_end_date$$ ||
$$ GROUP BY agent_id, portal_user_id$$ ||
$$ ORDER BY agent_id$$;
RETURN query;
END;
$F$
LANGUAGE plpgsql
STABLE;
CREATE OR REPLACE FUNCTION reports.get_agent_activity_breakdown(format TEXT, start_date DATE, end_date DATE) RETURNS SETOF RECORD AS $F$
DECLARE
actual_end_date DATE;
actual_start_date DATE;
query TEXT;
_rec RECORD;
BEGIN
actual_start_date := COALESCE(start_date, '1970-01-01'::DATE);
actual_end_date := COALESCE(end_date, now()::DATE);
query := reports.build_activity_query(format, reports.get_activities_for_report());
FOR _rec IN EXECUTE query LOOP
RETURN NEXT _rec;
END LOOP;
END
$F$
LANGUAGE plpgsql;
This builds queries that look (roughly) like this:
SELECT agent_id,
portal_user_id,
SUM(ended - started) AS total,
SUM(CASE WHEN activity_id = 3 THEN ended - started END) AS "Label 1",
SUM(CASE WHEN activity_id = 4 THEN ended - started END) AS "Label 2"
FROM public.activity_log_final
LEFT JOIN agent USING (agent_id)
WHERE started::DATE BETWEEN actual_start_date AND actual_end_date
GROUP BY agent_id, portal_user_id
ORDER BY agent_id
When I try to call the get_agent_activity_breakdown() function, I get this error:
psql:2009-10-22_agent_activity_report_test.sql:179: ERROR: a column definition list is required for functions returning "record"
CONTEXT: SQL statement "SELECT * FROM reports.get_agent_activity_breakdown('percentage', NULL, NULL)"
PL/pgSQL function "test_agent_activity" line 92 at SQL statement
The trick is, of course, that the columns labeled 'Label 1' and 'Label
2' are dependent on the set of activities defined in the contents of the
activity table, which I cannot predict when calling the function. How
can I create a function to access this information?
If you really want to create such a table dynamically, maybe just create a temporary table within the function so it can have any columns you want. Let the function insert all rows into the table instead of returning them. The function can return just the name of the table, or you can use one fixed table name that you know. After running the function you simply select the data from that table. The function should also check whether the temporary table already exists and drop or truncate it.
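A rough sketch of that temp-table variant (function and table names are made up; the query text would come from the builder in the question):
CREATE OR REPLACE FUNCTION reports.run_into_temp(query TEXT)
RETURNS text AS $F$
BEGIN
    DROP TABLE IF EXISTS agent_activity_breakdown;
    EXECUTE 'CREATE TEMP TABLE agent_activity_breakdown AS ' || query;
    RETURN 'agent_activity_breakdown';  -- the caller then runs: SELECT * FROM agent_activity_breakdown;
END
$F$ LANGUAGE plpgsql;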
Simon's answer might be better overall in the end, I'm just telling you how to do it without changing what you've got.
From the docs:
from_item can be one of:
...
function_name ( [ argument [, ...] ] ) [ AS ] alias [ ( column_alias [, ...] | column_definition [, ...] ) ]
function_name ( [ argument [, ...] ] ) AS ( column_definition [, ...] )
In other words, later it says:
If the function has been defined as returning the record data type, then an alias or the key word AS must be present, followed by a column definition list in the form ( column_name data_type [, ... ] ). The column definition list must match the actual number and types of columns returned by the function.
I think the alias thing is only an option if you've predefined a type somewhere (like if you're mimicking the output of a predefined table, or have actually used CREATE TYPE...don't quote me on that, though.)
So, I think you would need something like:
SELECT *
FROM reports.get_agent_activity_breakdown('percentage', NULL, NULL)
AS (agent_id integer, portal_user_id integer, total something, ...)
The problem for you lies in the ...: you'll need to know the names and types of all the columns before you execute the query--so you'll end up selecting on public.activity twice.
Both Simon's and Kev's answers are good ones, but what I ended up doing was splitting the calls to the database into two queries:
1. Build the query using the query constructor methods I included in the question, and return that to the application.
2. Call the query directly, and return that data.
This is safe in my case because the dynamic column list is not subject to frequent change, so I don't need to worry about the query's target data changing in between these calls. Otherwise, though, my method might not work.
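In practice that two-step flow looks roughly like this (illustrative):
-- Step 1: ask the database to build the SQL text and hand it to the application
SELECT reports.build_activity_query('percentage', reports.get_activities_for_report());
-- Step 2: the application executes the returned SQL text as an ordinary query and reads the result set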
You cannot change the number of output columns, but you can use a refcursor and return an opened cursor.
more on http://okbob.blogspot.com/2008/08/using-cursors-for-generating-cross.html
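A minimal sketch of the refcursor variant, reusing the query builder from the question (function and cursor names are illustrative):
CREATE OR REPLACE FUNCTION reports.get_agent_activity_cursor(format TEXT)
RETURNS refcursor AS $F$
DECLARE
    cur refcursor := 'activity_cur';  -- a fixed cursor name so the caller can FETCH from it
BEGIN
    OPEN cur FOR EXECUTE reports.build_activity_query(format, reports.get_activities_for_report());
    RETURN cur;
END
$F$ LANGUAGE plpgsql;

-- Usage, inside a transaction:
-- BEGIN;
-- SELECT reports.get_agent_activity_cursor('percentage');
-- FETCH ALL FROM activity_cur;
-- COMMIT;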