I would like a fairly efficient way to condense an entire table to a hash value.
I have some tools that generate entire data tables, which can then be used to generate further tables, and so on. I'm trying to implement a simplistic build system to coordinate build runs and avoid repeating work. I want to be able to record hashes of the input tables so that I can later check whether they have changed. Building a table takes minutes or hours, so spending several seconds building hashes is acceptable.
A hack I have used is to just pipe the output of pg_dump to md5sum, but that requires transferring the entire table dump over the network to hash it on the local box. Ideally I'd like to produce the hash on the database server.
Finding the hash value of a row in postgresql gives me a way to calculate a hash for a row at a time, which could then be combined somehow.
Any tips would be greatly appreciated.
Edit to post what I ended up with: tinychen's answer didn't work for me directly, because I couldn't use 'plpgsql' apparently. When I implemented the function in SQL instead, it worked, but was very inefficient for large tables. So instead of concatenating all the row hashes and then hashing that, I switched to using a "rolling hash", where the previous hash is concatenated with the text representation of a row and then that is hashed to produce the next hash. This was much better; apparently running md5 on short strings millions of extra times is better than concatenating short strings millions of times.
-- rolling hash: md5 the previous state concatenated with the next value
create function zz_concat(text, text) returns text as
    'select md5($1 || $2);' language 'sql';

create aggregate zz_hashagg(text) (
    sfunc = zz_concat,
    stype = text,
    initcond = '');
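For example, usage looks like this (assuming a table example with primary key id; note the order by inside the aggregate call, which a later answer explains):
select zz_hashagg(CAST((example.*) AS text) order by id) from example;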
I know this is an old question, but here is my solution:
SELECT
    md5(CAST(array_agg(f.* ORDER BY id) AS text)) /* id is the table's primary key (to avoid random ordering) */
FROM
    foo f;
SELECT md5(array_agg(md5((t.*)::varchar))::varchar)
FROM (
    SELECT *
    FROM my_table
    ORDER BY 1
) AS t;
Just do it like this to create a table-hash aggregation function:
create function pg_concat(text, text) returns text as '
begin
    if $1 is null then
        return $2;
    else
        return $1 || $2;
    end if;
end;' language 'plpgsql';

create function pg_concat_fin(text) returns text as '
begin
    return $1;
end;' language 'plpgsql';

create aggregate pg_concat (
    basetype = text,
    sfunc = pg_concat,
    stype = text,
    finalfunc = pg_concat_fin);
Then you can use the pg_concat aggregate to calculate the table's hash value:
select md5(pg_concat(md5(CAST((f.*) AS text)))) from f order by id;
I had a similar requirement, to use when testing a specialized table replication solution.
@Ben's rolling MD5 solution (which he appended to the question) seems quite efficient, but there were a couple of traps that tripped me up.
The first (mentioned in some of the other answers) is that you need to ensure that the aggregate is performed in a known order over the table you are checking. The syntax for that is, e.g.:
select zz_hashagg(CAST((example.*) AS text) order by id) from example;
Note the order by is inside the aggregate.
The second is that using CAST((example.*) AS text) will not give identical results for two tables with the same column contents unless the columns were created in the same order. In my case that was not guaranteed, so to get a true comparison I had to list the columns separately, for example:
select zz_hashagg(CAST((example.id, example.a, example.c) AS text) order by id) from example;
For completeness (in case a subsequent edit should remove it) here is the definition of zz_hashagg from @Ben's question:
create function zz_concat(text, text) returns text as
    'select md5($1 || $2);' language 'sql';

create aggregate zz_hashagg(text) (
    sfunc = zz_concat,
    stype = text,
    initcond = '');
Tomas Greif's solution is nice, but for a sufficiently large table an "invalid memory alloc request size" error will occur. This can be overcome in two ways.
Option 1. Without batches
If the table is not too big, use string_agg and the bytea data type.
select
    md5(string_agg(c.row_hash, '' order by c.row_hash)) table_hash
from
    foo f
    cross join lateral (select ('\x' || md5(f::text))::bytea row_hash) c;
Option 2. With batches
If the query in the previous option fails with an error like
SQL Error [54000]: ERROR: out of memory
Detail: Cannot enlarge string buffer containing 1073741808 bytes by 16 more bytes.
then the row count limit is 1073741808 / 16 = 67108863, and the table should be divided into batches.
select
    md5(string_agg(t.batch_hash, '' order by t.batch_hash)) table_hash
from (
    select
        md5(string_agg(c.row_hash, '' order by c.row_hash)) batch_hash
    from
        foo f
        cross join lateral (select ('\x' || md5(f::text))::bytea row_hash) c
    group by substring(row_hash for 3)
) t;
The 3 in the group by clause divides the row hashes into 16,777,216 batches (2 gives 65,536 batches, 1 gives 256). Other batching methods (e.g. ntile) will also work.
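For instance, a hedged sketch of an ntile-based variant (the batch count 1024 is arbitrary; foo is the table from the examples above):
select
    md5(string_agg(t.batch_hash, '' order by t.batch_hash)) table_hash
from (
    select
        md5(string_agg(b.row_hash, '' order by b.row_hash)) batch_hash
    from (
        select
            c.row_hash,
            ntile(1024) over (order by c.row_hash) batch_no  -- spread the hashes evenly into 1024 batches
        from
            foo f
            cross join lateral (select ('\x' || md5(f::text))::bytea row_hash) c
    ) b
    group by b.batch_no
) t;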
P.S. If you need to compare two tables this post may help.
Great answers.
In case someone needs to avoid aggregate functions while keeping support for tables several GiB in size, you can use the following function, which carries only a small performance penalty compared to the best answers above on the largest tables.
CREATE OR REPLACE FUNCTION table_md5(
      table_name CHARACTER VARYING
    , VARIADIC order_key_columns CHARACTER VARYING [])
RETURNS CHARACTER VARYING AS $$
DECLARE
    order_key_columns_list CHARACTER VARYING;
    query CHARACTER VARYING;
    first BOOLEAN;
    i SMALLINT;
    working_cursor REFCURSOR;
    working_row_md5 CHARACTER VARYING;
    partial_md5_so_far CHARACTER VARYING;
BEGIN
    -- build a comma-separated ORDER BY column list
    order_key_columns_list := '';
    first := TRUE;
    FOR i IN 1..array_length(order_key_columns, 1) LOOP
        IF first THEN
            first := FALSE;
        ELSE
            order_key_columns_list := order_key_columns_list || ', ';
        END IF;
        order_key_columns_list := order_key_columns_list || order_key_columns[i];
    END LOOP;

    query := (
        'SELECT ' ||
            'md5(CAST(t.* AS TEXT)) ' ||
        'FROM (' ||
            'SELECT * FROM ' || table_name || ' ' ||
            'ORDER BY ' || order_key_columns_list ||
        ') t');

    OPEN working_cursor FOR EXECUTE (query);
    -- RAISE NOTICE 'opened cursor for query: ''%''', query;
    first := TRUE;
    LOOP
        FETCH working_cursor INTO working_row_md5;
        EXIT WHEN NOT FOUND;
        IF first THEN
            first := FALSE;
            SELECT working_row_md5 INTO partial_md5_so_far;
        ELSE
            -- rolling hash: fold each row hash into the running hash
            SELECT md5(working_row_md5 || partial_md5_so_far)
            INTO partial_md5_so_far;
        END IF;
        -- RAISE NOTICE 'partial md5 so far: %', partial_md5_so_far;
    END LOOP;
    -- RAISE NOTICE 'final md5: %', partial_md5_so_far;

    RETURN partial_md5_so_far::CHARACTER VARYING;
END;
$$ LANGUAGE plpgsql;
Used as:
SELECT table_md5(
'table_name', 'sorting_col_0', 'sorting_col_1', ..., 'sorting_col_n'
);
As for the algorithm, you could XOR all the individual MD5 hashes, or concatenate them and hash the concatenation.
If you want to do this completely server-side you probably have to create your own aggregation function, which you could then call.
select my_table_hash(md5(CAST((f.*) AS text))) from f order by id;
As an intermediate step, instead of copying the whole table to the client, you could just select the MD5 results for all rows, and run those through md5sum.
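A sketch of that intermediate step from the client side (a hedged example: the database name mydb is hypothetical, and psql's -qAt flags print bare, unaligned rows suitable for piping):
psql -qAt -c 'select md5(CAST((f.*) AS text)) from f order by id' mydb | md5sum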
Either way you need to establish a fixed sort order, otherwise you might end up with different checksums even for the same data.
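A hedged sketch of the XOR option mentioned above (the names xor_md5 and md5_xor_agg are illustrative; the 'x' || md5(...) trick casts the hex digest to a 128-bit string). Because XOR is commutative, no sort order is needed, but note that rows occurring an even number of times cancel out:
-- state type is bit; every value passing through is 128 bits wide
create function xor_md5(bit, text) returns bit as
'select case
    when $1 is null then (''x'' || md5($2))::bit(128)
    else $1 # (''x'' || md5($2))::bit(128)
end;' language 'sql';

create aggregate md5_xor_agg(text) (
    sfunc = xor_md5,
    stype = bit);

-- no ORDER BY required; the result is a 128-bit string, comparable directly:
select md5_xor_agg(CAST((f.*) AS text)) from f;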
Related
I implemented this function in my Postgres database: http://www.cureffi.org/2013/03/19/automatically-creating-pivot-table-column-names-in-postgresql/
Here's the function:
create or replace function xtab(
    tablename varchar, rowc varchar, colc varchar,
    cellc varchar, celldatatype varchar
) returns varchar language plpgsql as $$
declare
    dynsql1 varchar;
    dynsql2 varchar;
    columnlist varchar;
begin
    -- 1. retrieve list of column names.
    dynsql1 = 'select string_agg(distinct '||colc||'||'' '||celldatatype||''','','' order by '||colc||'||'' '||celldatatype||''') from '||tablename||';';
    execute dynsql1 into columnlist;
    -- 2. set up the crosstab query
    dynsql2 = 'select * from crosstab (
        ''select '||rowc||','||colc||','||cellc||' from '||tablename||' group by 1,2 order by 1,2'',
        ''select distinct '||colc||' from '||tablename||' order by 1''
        )
        as ct (
        '||rowc||' varchar,'||columnlist||'
        );';
    return dynsql2;
end
$$;
So now I can call the function:
select xtab('globalpayments','month','currency','(sum(total_fees)/sum(txn_amount)*100)::decimal(48,2)','text');
Which returns (because the return type of the function is varchar):
select * from crosstab (
'select month,currency,(sum(total_fees)/sum(txn_amount)*100)::decimal(48,2)
from globalpayments
group by 1,2
order by 1,2'
, 'select distinct currency
from globalpayments
order by 1'
) as ct ( month varchar,CAD text,EUR text,GBP text,USD text );
How can I get this function to not only generate the code for the dynamic crosstab, but also execute it? When I manually copy, paste, and execute the generated statement, I get the result I want; I want the function to assemble the dynamic query and execute it without that extra step.
Edit 1
This function comes close, but I need it to return more than just the first column of the first record.
Taken from: Are there any way to execute a query inside the string value (like eval) in PostgreSQL?
create or replace function eval(sql text) returns text as $$
declare
    as_txt text;
begin
    if sql is null then return null; end if;
    execute sql into as_txt;
    return as_txt;
end;
$$ language plpgsql;
usage: select * from eval($$select * from analytics limit 1$$)
However, it just returns the first column of the first record:
eval
----
2015
when the actual result looks like this:
Year, Month, Date, TPV_USD
---- ----- ------ --------
2016, 3, 2016-03-31, 100000
What you ask for is impossible. SQL is a strictly typed language. PostgreSQL functions need to declare a return type (RETURNS ..) at the time of creation.
A limited way around this is with polymorphic functions, if you can provide the return type at the time of the function call. But that's not evident from your question.
Refactor a PL/pgSQL function to return the output of various SELECT queries
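A minimal sketch of that polymorphic pattern, under the assumption that the caller knows the row type (the function name eval_to is illustrative):
CREATE FUNCTION eval_to(_query text, _rowtype anyelement)
   RETURNS SETOF anyelement LANGUAGE plpgsql AS
$$
BEGIN
   RETURN QUERY EXECUTE _query;  -- the result must match the supplied row type
END
$$;

-- the composite type of an existing table serves as the row type:
SELECT * FROM eval_to('SELECT * FROM globalpayments', NULL::globalpayments);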
You can return a completely dynamic result with anonymous records. But then you are required to provide a column definition list with every call. And how do you know about the returned columns? Catch 22.
There are various workarounds, depending on what you need or can work with. Since all your data columns seem to share the same data type, I suggest returning an array: text[]. Or you could return a document type like hstore or json. Related:
Dynamic alternative to pivot with CASE and GROUP BY
Dynamically convert hstore keys into columns for an unknown set of keys
But it might be simpler to just use two calls: 1: Let Postgres build the query. 2: Execute and retrieve returned rows.
Selecting multiple max() values using a single SQL statement
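With psql, the two-step flow can even be scripted with \gexec (psql 9.6 or later), which runs each value returned by the query as a new statement; a hedged sketch using the call from the question:
SELECT xtab('globalpayments', 'month', 'currency'
          , '(sum(total_fees)/sum(txn_amount)*100)::decimal(48,2)', 'text') \gexec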
I would not use the function from Eric Minikel as presented in your question at all. It is not safe against SQL injection by way of maliciously malformed identifiers. Use format() to build query strings unless you are running an outdated version older than Postgres 9.1.
A shorter and cleaner implementation could look like this:
CREATE OR REPLACE FUNCTION xtab(_tbl regclass, _row text, _cat text
                              , _expr text  -- still vulnerable to SQL injection!
                              , _type regtype)
  RETURNS text
  LANGUAGE plpgsql AS
$func$
DECLARE
   _cat_list text;
   _col_list text;
BEGIN
   -- generate categories for xtab param and col definition list
   EXECUTE format(
      $$SELECT string_agg(quote_literal(x.cat), '), (')
             , string_agg(quote_ident (x.cat), %L)
        FROM  (SELECT DISTINCT %I AS cat FROM %s ORDER BY 1) x$$
    , ' ' || _type || ', ', _cat, _tbl)
   INTO _cat_list, _col_list;

   -- generate query string
   RETURN format(
     'SELECT * FROM crosstab(
        $q$SELECT %I, %I, %s
           FROM %I
           GROUP BY 1, 2  -- only works if the 3rd column is an aggregate expression
           ORDER BY 1, 2$q$
      , $c$VALUES (%5$s)$c$
        ) ct(%1$I text, %6$s %7$s)'
   , _row, _cat, _expr  -- expr must be an aggregate expression!
   , _tbl, _cat_list, _col_list, _type);
END
$func$;
Same function call as your original version. The function crosstab() is provided by the additional module tablefunc which has to be installed. Basics:
PostgreSQL Crosstab Query
This handles column and table names safely. Note the use of object identifier types regclass and regtype. Also works for schema-qualified names.
Table name as a PostgreSQL function parameter
However, it is not completely safe while you pass a string to be executed as expression (_expr - cellc in your original query). This kind of input is inherently unsafe against SQL injection and should never be exposed to the general public.
SQL injection in Postgres functions vs prepared queries
Scans the table only once for both lists of categories and should be a bit faster.
Still can't return completely dynamic row types since that's strictly not possible.
Not quite impossible: you can still execute it (from a query, execute the string and return SETOF RECORD).
Then you have to specify the return record format. The reason in this case is that the planner needs to know the return format before it can make certain decisions (materialization comes to mind).
So in this case you would EXECUTE the query, return the rows and return SETOF RECORD.
For example, we could do something like this with a wrapper function but the same logic could be folded into your function:
CREATE OR REPLACE FUNCTION crosstab_wrapper
    (tablename varchar, rowc varchar, colc varchar,
     cellc varchar, celldatatype varchar)
RETURNS SETOF record LANGUAGE plpgsql AS $$
DECLARE
    outrow record;
BEGIN
    FOR outrow IN EXECUTE xtab($1, $2, $3, $4, $5)
    LOOP
        RETURN NEXT outrow;
    END LOOP;
END;
$$;
Then, when you call the function, you supply the record structure (AS (col1 type, col2 type, ...)), just as you do with crosstab or connectby.
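For example, a hypothetical call using the same column definition list as the generated query shown earlier:
SELECT *
FROM crosstab_wrapper('globalpayments', 'month', 'currency'
        , '(sum(total_fees)/sum(txn_amount)*100)::decimal(48,2)', 'text')
   AS ct (month varchar, CAD text, EUR text, GBP text, USD text);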
I am trying to write a script that drops some obsolete tables in a Postgres database. I want to be sure the tables are empty before dropping them. I also want the script to be safe to keep in our migration scripts, where it may run even after these tables have actually been dropped.
Here is my script:
CREATE OR REPLACE FUNCTION __execute(TEXT) RETURNS VOID AS $$
BEGIN EXECUTE $1; END;
$$ LANGUAGE plpgsql STRICT;
CREATE OR REPLACE FUNCTION __table_exists(TEXT, TEXT) RETURNS bool as $$
SELECT exists(SELECT 1 FROM information_schema.tables WHERE (table_schema, table_name, table_type) = ($1, $2, 'BASE TABLE'));
$$ language sql STRICT;
CREATE OR REPLACE FUNCTION __table_is_empty(TEXT) RETURNS bool as $$
SELECT not exists(SELECT 1 FROM $1 );
$$ language sql STRICT;
-- Start migration here
SELECT __execute($$
DROP TABLE oldtable1;
$$)
WHERE __table_exists('public', 'oldtable1')
AND __table_is_empty('oldtable1');
-- drop auxiliary functions here
And finally I got:
ERROR: syntax error at or near "$1"
LINE 11: SELECT not exists(SELECT 1 FROM $1 );
Is there any other way?
You must use EXECUTE if you want to pass a table name as a parameter in a Postgres function.
Example:
CREATE OR REPLACE FUNCTION __table_is_empty(param character varying)
RETURNS bool
AS $$
DECLARE
    v int;
BEGIN
    EXECUTE 'select 1 WHERE EXISTS (SELECT 1 FROM ' || quote_ident(param) || ')'
    INTO v;
    IF v THEN return false; ELSE return true; END IF;
END;
$$ LANGUAGE plpgsql;
Demo: http://sqlfiddle.com/#!12/09cb0/1
No, no, no. For many reasons.
@kordirko already pointed out the immediate cause of the error message: in plain SQL, variables can only be used for values, not for key words or identifiers. You can fix that with dynamic SQL, but that still doesn't make your code right.
You are applying programming paradigms from other programming languages. With PL/pgSQL, it is extremely inefficient to split your code into multiple separate tiny sub-functions. The overhead is huge in comparison.
Your actual call is also a time bomb. Expressions in the WHERE clause are executed in any order, so this may or may not raise an exception for non-existing table names:
WHERE __table_exists('public', 'oldtable1')
AND __table_is_empty('oldtable1');
... which will roll back your whole transaction.
Finally, you are completely open to race conditions. As @Frank already commented, a table can be in use by concurrent transactions, in which case open locks may stall your attempt to drop the table. This could also lead to deadlocks (which the system resolves by rolling back all but one of the competing transactions). Take out an exclusive lock yourself before you check whether the table is (still) empty.
Proper function
This is safe for concurrent use. It takes an array of table names (and optionally a schema name) and only drops existing, empty tables that are not locked in any way:
CREATE OR REPLACE FUNCTION f_drop_tables(_tbls text[] = '{}'
                                       , _schema text = 'public'
                                       , OUT drop_ct int) AS
$func$
DECLARE
   _tbl   text;  -- loop var
   _empty bool;  -- for empty check
BEGIN
   drop_ct := 0;  -- init!

   FOR _tbl IN
      SELECT quote_ident(table_schema) || '.'
          || quote_ident(table_name)  -- qualified & escaped table name
      FROM   information_schema.tables
      WHERE  table_schema = _schema
      AND    table_type = 'BASE TABLE'
      AND    table_name = ANY(_tbls)
   LOOP
      EXECUTE 'SELECT NOT EXISTS (SELECT 1 FROM ' || _tbl || ')'
      INTO _empty;  -- check first, only lock if empty

      IF _empty THEN
         EXECUTE 'LOCK TABLE ' || _tbl;  -- now table is ripe for the plucking

         EXECUTE 'SELECT NOT EXISTS (SELECT 1 FROM ' || _tbl || ')'
         INTO _empty;  -- recheck after lock

         IF _empty THEN
            EXECUTE 'DROP TABLE ' || _tbl;  -- go in for the kill
            drop_ct := drop_ct + 1;  -- count tables actually dropped
         END IF;
      END IF;
   END LOOP;
END
$func$ LANGUAGE plpgsql STRICT;
Call:
SELECT f_drop_tables('{foo1,foo2,foo3,foo4}');
To call with a different schema than the default 'public':
SELECT f_drop_tables('{foo1,foo2,foo3,foo4}', 'my_schema');
Major points
Reports the number of tables actually dropped. (Adapt to report info of your choice.)
Using the information schema like in your original. Seems the right choice here, but be aware of subtle limitations:
How to check if a table exists in a given schema
For use under heavy concurrent load (with long transactions), consider the NOWAIT option for the LOCK command and possibly catch exceptions from it.
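A hedged sketch of that variant, as a drop-in replacement for the plain LOCK line inside the loop (lock_not_available is the condition name for SQLSTATE 55P03):
BEGIN
   EXECUTE 'LOCK TABLE ' || _tbl || ' IN ACCESS EXCLUSIVE MODE NOWAIT';
EXCEPTION WHEN lock_not_available THEN
   CONTINUE;  -- skip tables currently locked by others instead of waiting
END;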
Per documentation on "Table-level Locks":
ACCESS EXCLUSIVE
Conflicts with locks of all modes (ACCESS SHARE, ROW SHARE, ROW EXCLUSIVE,
SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE,
and ACCESS EXCLUSIVE). This mode guarantees that the holder
is the only transaction accessing the table in any way.
Acquired by the ALTER TABLE, DROP TABLE, TRUNCATE, REINDEX, CLUSTER, and VACUUM FULL commands. This is also the
default lock mode for LOCK TABLE statements that do not specify a mode explicitly.
Bold emphasis mine.
I work on DB2 so I can't say for sure if this will work on a Postgres database, but try this:
select case when count(*) > 0 then True else False end from $1
...in place of:
SELECT not exists(SELECT 1 FROM $1 )
If Postgres does not have CASE / END expression capability, I'd be shocked if it didn't have some similar IF/THEN/ELSE-like expression to use as a substitute.
Here's my setup:
Table 1 (table_with_info): Contains a list of varchars with substrings that I'd like to replace.
Table 2 (sub_info): Contains two columns: the substring in table_with_info that I'd like to replace and the string I'd like to replace it with.
What I'd like to do is replace all the substrings in table_with_info with their substitutions in sub_info.
This works up to a point, but the issue is that select replace(...) returns a new row for each substituted word rather than applying all of the replacements within an individual row.
I'm explaining this the best I can, but I don't know if it's clear. Here's the code and an example of what's happening and what I'd like to happen.
Here's my code:
create table table_with_info
(
val varchar
);
insert into table_with_info values
('this this is test data');
create table sub_info
(
word_from varchar,
word_to varchar
);
insert into sub_info values
('this','replace1')
, ('test', 'replace2');
update table_with_info set val = (select replace("val", "word_from", "word_to")
                                  from "table_with_info", "sub_info");
The UPDATE doesn't work, as the SELECT returns two rows:
Row 1: replace1 replace1 is test data
Row 2: this this is replace2 data
What I'd like the select statement to return is:
Row 1: replace1 replace1 is test data
Any thoughts? I can't create UDFs on the system I'm running.
Your UPDATE statement is incorrect in multiple ways. Consult the manual before you try to run anything like this again. You introduce two cross joins that would make this statement extremely expensive, besides yielding nonsense.
To do this properly, you need to apply each UPDATE sequentially. In a single statement, one row version eliminates the other, while each replace would use the same original row version. You can use a DO statement for this, or wrap it in a plpgsql function, for instance:
DO
$do$
DECLARE
   r sub_info;
BEGIN
   FOR r IN
      TABLE sub_info
      -- SELECT * FROM sub_info ORDER BY ???  -- order is relevant
   LOOP
      UPDATE table_with_info
      SET    val = replace(val, r.word_from, r.word_to)
      WHERE  val LIKE ('%' || r.word_from || '%');  -- avoid empty updates
   END LOOP;
END
$do$;
Be aware that the order in which updates are applied can make a difference! If the first update creates a string where the second matches (but not otherwise) ..
So, order your rows in sub_info if that can be relevant.
Avoid empty updates. Without the additional WHERE clause, you would write many new row versions without changing anything. Expensive and useless.
Double-quotes are optional for legal, lower-case names.
->SQLfiddle
Expanding on Erwin's answer, a do block with dynamic SQL can do the trick as well:
do $$
declare
    rec record;
    repl text;
begin
    repl := 'val';  -- quote_ident() this if needed
    for rec in select word_from, word_to from sub_info
    loop
        repl := 'replace(' || repl || ', '
             || quote_literal(rec.word_from) || ', '
             || quote_literal(rec.word_to) || ')';
    end loop;
    -- now do them all in a single query
    execute 'update ' || 'table_with_info'::regclass || ' set val = ' || repl;
end;
$$ language plpgsql;
Optionally, build a like parameter in a similar way to avoid updating rows needlessly.
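A hedged sketch of that refinement, accumulating a LIKE filter alongside repl (the variable name cond is illustrative):
do $$
declare
    rec record;
    repl text := 'val';
    cond text := 'false';  -- neutral start for the OR chain
begin
    for rec in select word_from, word_to from sub_info
    loop
        repl := 'replace(' || repl || ', '
             || quote_literal(rec.word_from) || ', '
             || quote_literal(rec.word_to) || ')';
        cond := cond || ' or val like '
             || quote_literal('%' || rec.word_from || '%');
    end loop;
    execute 'update table_with_info set val = ' || repl
         || ' where ' || cond;  -- rows without any match are skipped
end;
$$ language plpgsql;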
I'm trying to write a function that conditionally prunes out data based on the parameters supplied. I want to be able to use default parameters so that if they aren't explicitly supplied, the function doesn't do any filtering. The first of two filtering types is temporal ranges, and the second is an array of primary keys.
For the temporal ranges I can use the default sentinel values +/- infinity. These match all the data and in essence disable the filter; they are provided by Postgres. Is there anything (a sentinel of some sort) I can use to disable the array-based filter?
Specifically, I'd pass the function an array of integers, but I want to disable this filter by default (i.e., when I don't pass the array). The obvious choice is to set the default parameter to an empty array. However, a WHERE ... IN with an empty array would match nothing (provided an empty array is even valid plpgsql). So the next obvious choice is an IF statement, or dynamic SQL. But knowing Postgres, there may be some trick or syntax I have missed.
edit:
My initial plan was to write a boolean UDF to emulate the behaviour I wanted. Something along the lines of:
CREATE OR REPLACE FUNCTION filter_category(
    category_array INTEGER[],
    current_category INTEGER
)
RETURNS BOOLEAN AS $$
BEGIN
    IF category_array = '{}' THEN
        RETURN TRUE;  -- empty array: filter disabled, match everything
    ELSE
        -- if in set return true, else false.
        RETURN current_category = ANY (category_array);
    END IF;
END;
$$ LANGUAGE plpgsql;
However, after some further thought on the matter, I think the query will have to get more complex. A WHERE ... IN (array) clause, or the function above, will probably not be very efficient. So to end up using any indices that might be present, I'll have to unnest the array of primary keys and perform a join.
And I should only do this when the array is not null. So the query will need at least one IF ... ELSE ... END IF statement. And if I need to do further filtering I might have to implement intermediary local datasets. Still, I'm interested in any advice :D
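For reference, a sketch of the unnest-and-join shape described above, borrowing the table and column names from the function in edit 2 below (in_categories_ stands for the integer-array parameter):
SELECT fp.*
FROM forum_posts fp
JOIN forum_topics ft ON fp.topic_id = ft.id
JOIN unnest(in_categories_) AS uc(id) ON uc.id = ft.category_id;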
edit 2:
My dynamic query looks like this:
CREATE OR REPLACE FUNCTION get_paginated_search_results(
    forum_id_ INTEGER,
    query_ CHARACTER VARYING,
    from_date_ TIMESTAMP WITHOUT TIME ZONE DEFAULT NULL,
    to_date_ TIMESTAMP WITHOUT TIME ZONE DEFAULT NULL,
    in_categories_ INTEGER[] DEFAULT '{}')
RETURNS SETOF post_result_entry AS $$
DECLARE
    join_string CHARACTER VARYING := ' ';
    from_where_date CHARACTER VARYING := ' ';
    to_where_date CHARACTER VARYING := ' ';
    query_string_ CHARACTER VARYING := ' ';
BEGIN
    IF from_date_ IS NOT NULL THEN
        from_where_date := ' AND fp.posted_at > ''' || from_date_ || '''';
    END IF;

    IF to_date_ IS NOT NULL THEN
        to_where_date := ' AND fp.posted_at < ''' || to_date_ || '''';
    END IF;

    CREATE LOCAL TEMP TABLE un_cat(id) ON COMMIT DROP AS
        (SELECT * FROM unnest(in_categories_));

    IF in_categories_ != '{}' THEN
        join_string := ' INNER JOIN forum_topics ft ON fp.topic_id = ft.id ' ||
                       ' INNER JOIN un_cat uc ON uc.id = ft.category_id ';
    END IF;

    query_string_ := '
        SELECT index, posted_at, post_text, name, join_date, quotes
        FROM forum_posts fp
        INNER JOIN forum_user fu ON
            fu.forum_id = fp.forum_id AND fu.id = fp.user_id' ||
        join_string ||
        'WHERE fu.forum_id = ' || forum_id_ || ' AND
            to_tsvector(''english'', fp.post_text) @@ to_tsquery(''english'', ''' || query_ || ''')' ||
        from_where_date ||
        to_where_date
        || ';';

    RAISE NOTICE '%', query_string_;

    RETURN QUERY
    EXECUTE query_string_;
END;
$$ LANGUAGE plpgsql;
You could use an empty array and roll it into your WHERE clause like this:
where ...
and ($1 = array[]::int[] or id = any ($1))
where $1 is whatever your int[] parameter is. Or you could use a NULL and do it the same way:
where ...
and ($1 is null or id = any ($1))
I'd check the EXPLAIN output on your "between -infinity and +infinity" query though; the optimizer might not recognize that as a tautology (assuming you don't have NULLs, so that it really is a tautology), and you could have a pointless table scan on your hands. Dynamic SQL or an ugly pile of IFs might serve you better.
Background: I'm converting a database table to a format that doesn't support null values. I want to replace the null values with an arbitrary number so my application can support null values.
Question: I'd like to search my whole table for a value ("999999", for example) to make sure that it doesn't appear in the table. I could write a script to test each column individually, but I wanted to know if there is a way I could do this in pure sql without enumerating each field. Is that possible?
You can use a special feature of the PostgreSQL type system:
SELECT *
FROM tbl t
WHERE t::text LIKE '%999999%';
There is a composite type of the same name for every table that you create in PostgreSQL. And there is a text representation for every type in PostgreSQL (to input / output values).
Therefore you can just cast the whole row to text and if the string '999999' is contained in any column (its text representation, to be precise) it is guaranteed to show in the query above.
You cannot rule out false positives completely, though, if separators and / or decorators used by Postgres for the row representation can be part of the search term. It's just very unlikely. And positively not the case for your search term '999999'.
There was a very similar question on codereview.SE recently. I added some more explanation in my answer there.
create or replace function test_values(real) returns setof record as
$$
declare
    query text;
    output record;
begin
    -- build one probe query per numeric column in the public schema
    for query in
        select 'select distinct ''' || table_name || '''::text table_name, '''
            || column_name || '''::text column_name from ' || quote_ident(table_name)
            || ' where ' || quote_ident(column_name) || ' = ''' || $1::text || '''::' || data_type
        from information_schema.columns
        where table_schema = 'public' and numeric_precision is not null
    loop
        raise notice '%', query;
        execute query::text into output;
        return next output;
    end loop;
    return;
end;
$$ language plpgsql;
select distinct * from test_values(999999) as t(table_name text, column_name text);