Find out which schema based on table values - sql

My database is separated into schemas based on clients (i.e.: each client has their own schema, with same data structure).
I also happen to have an external action that does not know which schema it should target. It comes from another part of the system that has no concept of clients and does not know in which client's set it is operating. Before I process it, I have to find out which schema the request needs to target.
To find the right schema, I have to find out which one holds the record R with a particular unique ID (string).
From my understanding, the following:
SET search_path TO schema1,schema2,schema3,...
will only look through the tables in schema1 (or the first schema that matches the table) and will not do a global search.
Is there a way for me to do a global search across all schemas, or am I just going to have to use a for loop and iterate through all of them, one at a time?

You could use inheritance for this. (Be sure to consider the limitations.)
Consider this little demo:
CREATE SCHEMA master; -- no access of others ..
CREATE SEQUENCE master.myseq; -- global sequence for globally unique ids
CREATE TABLE master.tbl (
  id int PRIMARY KEY DEFAULT nextval('master.myseq')
, foo text);
CREATE SCHEMA x;
CREATE table x.tbl() INHERITS (master.tbl);
INSERT INTO x.tbl(foo) VALUES ('x');
CREATE SCHEMA y;
CREATE table y.tbl() INHERITS (master.tbl);
INSERT INTO y.tbl(foo) VALUES ('y');
SELECT * FROM x.tbl; -- returns 'x'
SELECT * FROM y.tbl; -- returns 'y'
SELECT * FROM master.tbl; -- returns 'x' and 'y' <-- !!
Now, to actually identify the table a particular row lives in, use the tableoid:
SELECT *, tableoid::regclass AS table_name
FROM master.tbl
WHERE id = 2;
Result:
 id | foo | table_name
----+-----+------------
  2 | y   | y.tbl
You can derive the source schema from the tableoid, best by querying the system catalogs with the tableoid directly. (The displayed name depends on the setting of search_path.)
SELECT n.nspname
FROM master.tbl t
JOIN pg_class c ON c.oid = t.tableoid
JOIN pg_namespace n ON c.relnamespace = n.oid
WHERE t.id = 2;
This is also much faster than looping through many separate tables.

You will have to iterate over all namespaces. You can get a lot of this information from the pg_* system catalogs. In theory, you should be able to resolve the client -> schema mapping at request time without talking to the database so that the first SQL call you make is:
SET search_path = client1,global_schema;
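If you do end up searching from the database side, a catalog query along these lines lists every schema that contains the relevant table (a sketch; 'clients' is a placeholder table name), which you can then loop over:
SELECT n.nspname AS schema_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = 'clients'   -- hypothetical table name
AND c.relkind = 'r'
ORDER BY n.nspname;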

While I think Erwin's solution is probably preferable if you can re-structure your tables, an alternative that doesn't require any schema changes is to write a PL/PgSQL function that scans the tables using dynamic SQL based on the system catalog information.
Given:
CREATE SCHEMA a;
CREATE SCHEMA b;
CREATE TABLE a.testtab ( searchval text );
CREATE TABLE b.testtab (LIKE a.testtab);
INSERT INTO a.testtab(searchval) VALUES ('ham');
INSERT INTO b.testtab(searchval) VALUES ('eggs');
The following PL/PgSQL function searches all schemas containing tables named _tabname for values in _colname equal to _value and returns the first matching schema.
CREATE OR REPLACE FUNCTION find_schema_for_value(_tabname text, _colname text, _value text)
RETURNS text AS $$
DECLARE
    cur_schema text;
    foundval   integer;
BEGIN
    FOR cur_schema IN
        SELECT nspname
        FROM pg_class c
        INNER JOIN pg_namespace n ON (c.relnamespace = n.oid)
        WHERE c.relname = _tabname AND c.relkind = 'r'
    LOOP
        EXECUTE
            format('SELECT 1 FROM %I.%I WHERE %I = $1',
                   cur_schema, _tabname, _colname
            ) INTO foundval USING _value;

        IF foundval = 1 THEN
            RETURN cur_schema;
        END IF;
    END LOOP;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;
If there are no matches then NULL is returned. If there are multiple matches the result will be one of them, but no guarantee is made about which one. Add an ORDER BY clause to the schema query if you want to return (say) the first in alphabetical order. The function is also trivially modified to return setof text and RETURN NEXT cur_schema if you want to return all the matches; a sketch follows the examples below.
regress=# SELECT find_schema_for_value('testtab','searchval','ham');
find_schema_for_value
-----------------------
a
(1 row)
regress=# SELECT find_schema_for_value('testtab','searchval','eggs');
find_schema_for_value
-----------------------
b
(1 row)
regress=# SELECT find_schema_for_value('testtab','searchval','bones');
find_schema_for_value
-----------------------
(1 row)
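For completeness, the setof variant mentioned above could look roughly like this (an untested sketch under the same assumptions; it returns every schema that holds a match instead of just the first):
CREATE OR REPLACE FUNCTION find_schemas_for_value(_tabname text, _colname text, _value text)
RETURNS SETOF text AS $$
DECLARE
    cur_schema text;
    foundval   integer;
BEGIN
    FOR cur_schema IN
        SELECT nspname
        FROM pg_class c
        INNER JOIN pg_namespace n ON (c.relnamespace = n.oid)
        WHERE c.relname = _tabname AND c.relkind = 'r'
        ORDER BY nspname
    LOOP
        EXECUTE
            format('SELECT 1 FROM %I.%I WHERE %I = $1',
                   cur_schema, _tabname, _colname
            ) INTO foundval USING _value;

        IF foundval = 1 THEN
            RETURN NEXT cur_schema;  -- emit this schema and keep scanning
        END IF;
    END LOOP;
END;
$$ LANGUAGE plpgsql;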
By the way, you can re-use the table definitions without inheritance if you want, and you really should. Either use a common composite data type:
CREATE TYPE public.testtab AS ( searchval text );
CREATE TABLE a.testtab OF public.testtab;
CREATE TABLE b.testtab OF public.testtab;
in which case they share the same data type but not any data; or via LIKE:
CREATE TABLE public.testtab ( searchval text );
CREATE TABLE a.testtab (LIKE public.testtab);
CREATE TABLE b.testtab (LIKE public.testtab);
in which case they're completely unconnected to each other after creation.

Related

Creating a table with certain columns from another table

Let's say we have a table A with 200 different columns. We want to select the columns that contain a certain substring (e.g. the "host" substring in "host_id", "host_name", "average_host_rating"), and create a new table B with only those columns and their data imported from a .csv file.
I tried creating the new table manually, however this is not good practice and I want to improve the code by making it valid and functional even if I add more columns to table A.
Creating the table manually:
SELECT
listings.host_id,
listings.host_url ,
..
..
listings.host_name ,
listings.host_since
INTO host_table
FROM listings
WHERE TRUE;
Trying to create the table in a better way:
CREATE TABLE B AS
SELECT *
FROM A
WHERE A::text LIKE '%host%'
I expected it to create table B with every column that contains 'host' in its name, however it returned an exact copy of table A (and all its data). I tried different ways and methods of creating new tables, however the problem always was that I could not isolate only the columns with the specified substring ('host').
What could be wrong in my syntax, way of thinking or anything else?
Thanks in advance!
You may create and call a function with parameters. The function will dynamically choose the columns from information_schema.columns.
Note that where false is used because you mentioned the data will come from a CSV file, not from the original table.
create or replace function
  fn_gen_tab_text (curr_tab_in text, tab_text_in text, new_tab_in text)
RETURNS void AS
$body$
DECLARE
  v_sql TEXT;
BEGIN
  SELECT 'CREATE TABLE %I AS select ' || string_agg(column_name, ',')
         || ' from %I where false'
  INTO   v_sql
  FROM   information_schema.columns
  WHERE  table_name = curr_tab_in
  AND    column_name like '%' || tab_text_in || '%';

  EXECUTE format(v_sql, new_tab_in, curr_tab_in);
END
$body$ language plpgsql;
Call it as
select fn_gen_tab_text('host_table','host','new_table' );
DEMO

Left join with dynamic table name derived from column

I am new to PostgreSQL and I wonder if it's possible to use a number from table tdc as part of the table name in a left join: 'pa' || number. So for example if number is 456887 I want to left join with table pa456887. Something like this:
SELECT tdc.cpa, substring(tdc.ku,'[0-9]+') AS number, paTab.vym
FROM public."table_data_C" AS tdc
LEFT JOIN concat('pa' || number) AS paTab ON (paTab.cpa = tdc.cpa)
And I want to use only PostgreSQL, not additional code in PHP for example.
Either way, you need dynamic SQL.
Table name as given parameter
CREATE OR REPLACE FUNCTION foo(_number int)
  RETURNS TABLE (cpa int, nr text, vym text) AS  -- adapt to actual data types!
$func$
BEGIN
   RETURN QUERY EXECUTE format(
     'SELECT t.cpa, substring(t.ku, ''[0-9]+''), p.vym
      FROM   public."table_data_C" t
      LEFT   JOIN %s p USING (cpa)'
   , 'pa' || _number
   );
END
$func$ LANGUAGE plpgsql;
Call:
SELECT * FROM foo(456887)
Generally, you would sanitize table names with format() and its %I specifier to avoid SQL injection. With just an integer as dynamic input that's not necessary. More details and links in this related answer:
INSERT with dynamic table name in trigger function
Data model
There may be good reasons for the data model. Like partitioning / sharding or separate privileges ...
If you don't have such a good reason, consider consolidating multiple tables with identical structure into one and adding the number as a column, as sketched below. Then you don't need dynamic SQL.
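A consolidated model might look roughly like this (a sketch; pa_all and its columns are made-up names), after which the join needs no dynamic SQL:
-- One table instead of pa456887, pa456888, ... (hypothetical layout)
CREATE TABLE pa_all (
  nr  int NOT NULL,  -- the number that used to be part of the table name
  cpa int NOT NULL,
  vym text,
  PRIMARY KEY (nr, cpa)
);

SELECT tdc.cpa, substring(tdc.ku, '[0-9]+') AS nr, p.vym
FROM public."table_data_C" AS tdc
LEFT JOIN pa_all p ON p.cpa = tdc.cpa
                  AND p.nr = substring(tdc.ku, '[0-9]+')::int;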
Consider inheritance. Then you can add a condition on tableoid to only retrieve rows from a given child table:
SELECT * FROM parent_table
WHERE tableoid = 'pa456887'::regclass
Be aware of limitations for inheritance, though. Related answers:
Get the name of a row's source table when querying the parent it inherits from
Select (retrieve) all records from multiple schemas using Postgres
Name of 2nd table depending on value in 1st table
Deriving the name of the join table from values in the first table dynamically complicates things.
For only a few tables
LEFT JOIN each on tableoid. There is only one match per row, so use COALESCE.
SELECT t.*, t.tbl, COALESCE(p1.vym, p2.vym, p3.vym) AS vym
FROM  (
   SELECT cpa, ('pa' || substring(ku,'[0-9]+'))::regclass AS tbl
   FROM   public."table_data_C"
   -- WHERE <some condition>
   ) t
LEFT JOIN pa456887 p1 ON p1.cpa = t.cpa AND p1.tableoid = t.tbl
LEFT JOIN pa456888 p2 ON p2.cpa = t.cpa AND p2.tableoid = t.tbl
LEFT JOIN pa456889 p3 ON p3.cpa = t.cpa AND p3.tableoid = t.tbl
For many tables
Combine a loop with dynamic queries:
CREATE OR REPLACE FUNCTION foo(_number int)
  RETURNS TABLE (cpa int, nr text, vym text) AS
$func$
DECLARE
   _nr text;
BEGIN
   FOR _nr IN
      SELECT DISTINCT substring(ku, '[0-9]+')
      FROM   public."table_data_C"
   LOOP
      RETURN QUERY EXECUTE format(
        'SELECT t.cpa, $1, p.vym
         FROM   public."table_data_C" t
         LEFT   JOIN %I p USING (cpa)
         WHERE  t.ku LIKE ($1 || ''%%'')'
      , 'pa' || _nr
      )
      USING _nr;
   END LOOP;
END
$func$ LANGUAGE plpgsql;

Update multiple columns in a trigger function in plpgsql

Given the following schema:
create table account_type_a (
id SERIAL UNIQUE PRIMARY KEY,
some_column VARCHAR
);
create table account_type_b (
id SERIAL UNIQUE PRIMARY KEY,
some_other_column VARCHAR
);
create view account_type_a_view AS select * from account_type_a;
create view account_type_b_view AS select * from account_type_b;
I try to create a generic trigger function in plpgsql, which enables updating the view:
create trigger trUpdate instead of UPDATE on account_type_a_view
for each row execute procedure updateAccount();
create trigger trUpdate instead of UPDATE on account_type_b_view
for each row execute procedure updateAccount();
An unsuccessful effort of mine was:
create function updateAccount() returns trigger as $$
declare
target_table varchar := substring(TG_TABLE_NAME from '(.+)_view');
cols varchar;
begin
execute 'select string_agg(column_name,$1) from information_schema.columns
where table_name = $2' using ',', target_table into cols;
execute 'update ' || target_table || ' set (' || cols || ') = select ($1).*
where id = ($1).id' using NEW;
return NULL;
end;
$$ language plpgsql;
The problem is the update statement. I am unable to come up with a syntax that would work here. I have successfully implemented this in PL/Perl, but would be interested in a plpgsql-only solution.
Any ideas?
Update
As #Erwin Brandstetter suggested, here is the code for my PL/Perl solution. I incorporated some of his suggestions.
create function f_tr_up() returns trigger as $$
use strict;
use warnings;
my $target_table = quote_ident($_TD->{'table_name'}) =~ s/^([\w]+)_view$/$1/r;
my $NEW = $_TD->{'new'};
my $cols = join(',', map { quote_ident($_) } keys $NEW);
my $vals = join(',', map { quote_literal($_) } values $NEW);
my $query = sprintf(
"update %s set (%s) = (%s) where id = %d",
$target_table,
$cols,
$vals,
$NEW->{'id'});
spi_exec_query($query);
return;
$$ language plperl;
While #Gary's answer is technically correct, it fails to mention that PostgreSQL does support this form:
UPDATE tbl
SET (col1, col2, ...) = (expression1, expression2, ..)
Read the manual on UPDATE.
It's still tricky to get this done with dynamic SQL. I'll assume a simple case where views consist of the same columns as their underlying tables.
CREATE VIEW tbl_view AS SELECT * FROM tbl;
Problems
The special record NEW is not visible inside EXECUTE. I pass NEW as a single parameter with the USING clause of EXECUTE.
As discussed, UPDATE with list-form needs individual values. I use a subselect to split the record into individual columns:
UPDATE ...
FROM (SELECT ($1).*) x
(Parentheses around $1 are not optional.) This allows me to simply use two column lists built with string_agg() from the catalog table: one with and one without table qualification.
It's not possible to assign a row value as a whole to individual columns. The manual:
According to the standard, the source value for a parenthesized
sub-list of target column names can be any row-valued expression
yielding the correct number of columns. PostgreSQL only allows the
source value to be a row constructor or a sub-SELECT.
INSERT is simpler to implement. If the structure of view and table are identical we can omit the column definition list. (Can be improved, see below.)
Solution
I made a couple of updates to your approach to make it shine.
Trigger function for UPDATE:
CREATE OR REPLACE FUNCTION f_trg_up()
RETURNS TRIGGER
LANGUAGE plpgsql AS
$func$
DECLARE
_tbl regclass := quote_ident(TG_TABLE_SCHEMA) || '.'
|| quote_ident(substring(TG_TABLE_NAME from '(.+)_view$'));
_cols text;
_vals text;
BEGIN
SELECT INTO _cols, _vals
string_agg(quote_ident(attname), ', ')
, string_agg('x.' || quote_ident(attname), ', ')
FROM pg_attribute
WHERE attrelid = _tbl
AND NOT attisdropped -- no dropped (dead) columns
AND attnum > 0; -- no system columns
EXECUTE format('
UPDATE %s
SET (%s) = (%s)
FROM (SELECT ($1).*) x', _tbl, _cols, _vals)
USING NEW;
RETURN NEW; -- Don't return NULL unless you know what you're doing
END
$func$;
Trigger function for INSERT:
CREATE OR REPLACE FUNCTION f_trg_ins()
RETURNS TRIGGER
LANGUAGE plpgsql AS
$func$
DECLARE
_tbl regclass := quote_ident(TG_TABLE_SCHEMA) || '.'
|| quote_ident(substring(TG_TABLE_NAME FROM '(.+)_view$'));
BEGIN
EXECUTE format('INSERT INTO %s SELECT ($1).*', _tbl)
USING NEW;
RETURN NEW; -- Don't return NULL unless you know what you're doing
END
$func$;
Triggers:
CREATE TRIGGER trg_instead_up
INSTEAD OF UPDATE ON a_view
FOR EACH ROW EXECUTE FUNCTION f_trg_up();
CREATE TRIGGER trg_instead_ins
INSTEAD OF INSERT ON a_view
FOR EACH ROW EXECUTE FUNCTION f_trg_ins();
Before Postgres 11 the syntax (oddly) was EXECUTE PROCEDURE instead of EXECUTE FUNCTION - which also still works.
db<>fiddle here - demonstrating INSERT and UPDATE
Old sqlfiddle
Major points
Include the schema name to make the table reference unambiguous. There can be multiple tables of the same name in one database across multiple schemas!
Query pg_catalog.pg_attribute instead of information_schema.columns. Less portable, but much faster, and it allows using the table OID.
How to check if a table exists in a given schema
Table names are NOT safe against SQL injection when concatenated as strings for dynamic SQL. Escape with quote_ident() or format() or with an object-identifier type. This includes the special trigger function variables TG_TABLE_SCHEMA and TG_TABLE_NAME!
Cast to the object identifier type regclass to assert the table name is valid and get the OID for the catalog look-up.
Optionally use format() to build the dynamic query string safely.
No need for dynamic SQL for the first query on the catalog tables. Faster, simpler.
Use RETURN NEW instead of RETURN NULL in these trigger functions unless you know what you are doing. (NULL would cancel the INSERT for the current row.)
This simple version assumes that every table (and view) has a unique column named id. A more sophisticated version might use the primary key dynamically; see the sketch after this list.
The function for UPDATE allows the columns of view and table to be in any order, as long as the set is the same.
The function for INSERT expects the columns of view and table to be in identical order. If you want to allow arbitrary order, add a column definition list to the INSERT command, just like with UPDATE.
Updated version also covers changes to the id column by using OLD additionally.
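Such a dynamic primary-key lookup might be sketched like this (untested; it assumes a single-column primary key and uses a hypothetical table name):
SELECT a.attname AS pk_column
FROM pg_index i
JOIN pg_attribute a ON a.attrelid = i.indrelid
                   AND a.attnum = ANY (i.indkey)
WHERE i.indrelid = 'public.tbl'::regclass  -- hypothetical table name
AND i.indisprimary;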
Postgresql doesn't support updating multiple columns using the set (col1,col2) = select val1,val2 syntax.
To achieve the same in postgresql you'd use
update target_table
set col1 = d.val1,
col2 = d.val2
from source_table d
where d.id = target_table.id
This is going to make the dynamic query a bit more complex to build, as you'll need to turn the column-name list into individual field assignments. I'd suggest you use array_agg instead of string_agg, as an array is easier to process than splitting the string again; see the sketch below.
Postgresql UPDATE syntax
documentation on array_agg function
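A sketch of that idea (simplified and outside the trigger context; account_type_a simply reuses the table from the question): collect the column names into an array, then build the individual SET assignments from its elements:
DO $$
DECLARE
   _cols    text[];
   _setlist text;
BEGIN
   -- Collect the column names into an array ...
   SELECT array_agg(column_name::text ORDER BY ordinal_position)
   INTO   _cols
   FROM   information_schema.columns
   WHERE  table_schema = 'public'
   AND    table_name   = 'account_type_a';

   -- ... then build "col = d.col" assignments from the array elements.
   SELECT string_agg(format('%1$I = d.%1$I', c), ', ')
   INTO   _setlist
   FROM   unnest(_cols) AS t(c);

   RAISE NOTICE 'SET clause: %', _setlist;  -- e.g. id = d.id, some_column = d.some_column
END
$$;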

How to create sequence if not exists

I tried to use code from Check if sequence exists in Postgres (plpgsql) to create a sequence if it does not exist. Running this code two times causes an exception:
sequence ... already exists.
How to create sequence only if it does not exist?
If the sequence already exists, no message should be written and no error should occur, so I cannot use the stored procedure in the other answer to this question, since it writes a message to the log file every time the sequence exists.
do $$
begin
   SET search_path = '';
   IF not EXISTS (SELECT * FROM pg_class
                  WHERE relkind = 'S'
                  AND oid::regclass::text = 'firma1.' || quote_ident('myseq'))
   THEN
      SET search_path = firma1,public;
      create sequence myseq;
   END IF;
   SET search_path = firma1,public;
end$$;
select nextval('myseq')::int as nr;
Postgres 9.5 or later
IF NOT EXISTS was added to CREATE SEQUENCE in Postgres 9.5. That's the simple solution now:
CREATE SEQUENCE IF NOT EXISTS myschema.myseq;
But consider details of the outdated answer anyway ...
And you know about serial or IDENTITY columns, right?
Auto increment table column
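For reference, a minimal example of each (not from the original question):
CREATE TABLE t_serial (
  id serial PRIMARY KEY,  -- creates and owns its sequence automatically
  note text
);

CREATE TABLE t_identity (
  id int GENERATED ALWAYS AS IDENTITY PRIMARY KEY,  -- standard-conforming, Postgres 10+
  note text
);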
Postgres 9.4 or older
Sequences share the namespace with several other table-like objects. The manual:
The sequence name must be distinct from the name of any other
sequence, table, index, view, or foreign table in the same schema.
Bold emphasis mine. So there are three cases:
Name does not exist. -> Create sequence.
Sequence with the same name exists. -> Do nothing? Any output? Any logging?
Other conflicting object with the same name exists. -> Do something? Any output? Any logging?
Specify what to do in either case. A DO statement could look like this:
DO
$do$
DECLARE
   _kind "char";
BEGIN
   SELECT c.relkind
   FROM   pg_class     c
   JOIN   pg_namespace n ON n.oid = c.relnamespace
   WHERE  n.nspname = 'myschema'      -- schema and sequence name
   AND    c.relname = 'myseq'
   INTO   _kind;

   IF NOT FOUND THEN        -- name is free
      CREATE SEQUENCE myschema.myseq;
   ELSIF _kind = 'S' THEN   -- sequence exists
      -- do nothing?
   ELSE                     -- object name exists for different kind
      -- do something!
   END IF;
END
$do$;
Object types (relkind) in pg_class according to the manual:
r = ordinary table
i = index
S = sequence
v = view
m = materialized view
c = composite type
t = TOAST table
f = foreign table
Related:
How to check if a table exists in a given schema
I went a different route: just catch the exception:
DO
$$
BEGIN
   CREATE SEQUENCE myseq;
EXCEPTION WHEN duplicate_table THEN
   -- do nothing, it's already there
END
$$ LANGUAGE plpgsql;
One nice benefit to this is that you don't need to worry about what your current schema is.
If you don't need to preserve the potentially existing sequence, you could just drop it and then recreate it:
DROP SEQUENCE IF EXISTS id_seq;
CREATE SEQUENCE id_seq;
Older Postgres versions don't have CREATE SEQUENCE IF NOT EXISTS, and if a table has a default value that uses the sequence, just dropping the sequence can raise an error:
ERROR: cannot drop sequence (sequence_name) because other objects depend on it SQL state: 2BP01
For me, this one can help:
ALTER TABLE <tablename> ALTER COLUMN id DROP DEFAULT;
DROP SEQUENCE IF EXISTS <sequence_name>;
CREATE sequence <sequence_name>;
The information about sequences can be retrieved from information_schema.sequences.
Try something like this (untested):
...
IF not EXISTS (SELECT * FROM information_schema.sequences
WHERE sequence_schema = 'firma1' AND sequence_name = 'myseq') THEN
...
I have a function to clean all tables in my database application at any time. It is built dynamically, but the essence is that it deletes all data from each table and resets the sequences.
This is the code to reset the sequence of one of the tables:
perform relname from pg_statio_all_sequences where relname = 'privileges_id_seq';
if found then
   select setval('privileges_id_seq', 1, false) into i_result;
end if;
Hope this helps,
Loek
I am using Postgres 8.4; I see that you use 9.2. That could make a difference in where the information is stored.

Is there a way to define a named constant in a PostgreSQL query?

Is there a way to define a named constant in a PostgreSQL query? For example:
MY_ID = 5;
SELECT * FROM users WHERE id = MY_ID;
This question has been asked before (How do you use script variables in PostgreSQL?). However, there is a trick that I use for queries sometimes:
with const as (
select 1 as val
)
select . . .
from const cross join
<more tables>
That is, I define a CTE called const that holds the constant definitions. I can then cross join it into my query, any number of times at any level. I have found this particularly useful when I'm dealing with dates, and need to handle date constants across many subqueries.
PostgreSQL has no built-in way to define (global) variables like MySQL or Oracle. (There is a limited workaround using "customized options"). Depending on what you want exactly there are other ways:
For one query
You can provide values at the top of a query in a CTE like #Gordon already provided.
Global, persistent constant:
You could create a simple IMMUTABLE function for that:
CREATE FUNCTION public.f_myid()
RETURNS int LANGUAGE sql IMMUTABLE PARALLEL SAFE AS
'SELECT 5';
(Parallel safety settings only apply to Postgres 9.6 or later.)
It has to live in a schema that is visible to the current user, i.e. is in the respective search_path. Like the schema public, by default. If security is an issue, make sure it's the first schema in the search_path or schema-qualify it in your call:
SELECT public.f_myid();
Visible for all users in the database (that are allowed to access schema public).
Multiple values for current session:
CREATE TEMP TABLE val (val_id int PRIMARY KEY, val text);
INSERT INTO val(val_id, val) VALUES
( 1, 'foo')
, ( 2, 'bar')
, (317, 'baz');
CREATE FUNCTION f_val(_id int)
RETURNS text LANGUAGE sql STABLE PARALLEL RESTRICTED AS
'SELECT val FROM val WHERE val_id = $1';
SELECT f_val(2); -- returns 'bar'
Since the function body is checked against existing objects at creation time, you need to create a (temporary) table val before you can create the function - even though a temp table is dropped at the end of the session while the function persists. The function will raise an exception if the underlying table is not found at call time.
The current schema for temporary objects comes before the rest of your search_path per default - if not instructed otherwise explicitly. You cannot exclude the temporary schema from the search_path, but you can put other schemas first.
Evil creatures of the night (with the necessary privileges) might tinker with the search_path and put another object of the same name in front:
CREATE TABLE myschema.val (val_id int PRIMARY KEY, val text);
INSERT INTO myschema.val(val_id, val) VALUES (2, 'wrong');
SET search_path = myschema, pg_temp;
SELECT f_val(2); -- returns 'wrong'
It's not much of a threat, since only privileged users can alter global settings. Other users can only do it for their own session. Consider the related chapter of manual on creating functions with SECURITY DEFINER.
A hard-wired schema is typically simpler and faster:
CREATE FUNCTION f_val(_id int)
RETURNS text LANGUAGE sql STABLE PARALLEL RESTRICTED AS
'SELECT val FROM pg_temp.val WHERE val_id = $1';
Related answers with more options:
How to test my ad-hoc SQL with parameters in Postgres query window
Passing user id to PostgreSQL triggers
In addition to the sensible options Gordon and Erwin already mentioned (temp tables, constant-returning functions, CTEs, etc), you can also (ab)use the PostgreSQL GUC mechanism to create global-, session- and transaction-level variables.
See this prior post which shows the approach in detail.
I don't recommend this for general use, but it could be useful in narrow cases like the one mentioned in the linked question, where the poster wanted a way to provide the application-level username to triggers and functions.
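A minimal sketch of that GUC approach (the setting name app.current_user_id is made up; any two-part, prefixed name works):
-- Set a session-level "variable" (pass true instead of false to make it transaction-local)
SELECT set_config('app.current_user_id', '5', false);

-- Read it back anywhere in the session, e.g. inside a trigger, view or query
SELECT * FROM users WHERE id = current_setting('app.current_user_id')::int;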
I've found this solution:
with vars as (
SELECT * FROM (values(5)) as t(MY_ID)
)
SELECT * FROM users WHERE id = (SELECT MY_ID FROM vars)
I've found a mixture of the available approaches to be best:
Store your variables in a table:
CREATE TABLE vars (
id INT NOT NULL PRIMARY KEY DEFAULT 1,
zipcode INT NOT NULL DEFAULT 90210,
-- etc..
CHECK (id = 1)
);
Create a dynamic function, which loads the contents of your table, and uses it to:
Re/Create another separate static immutable getter function.
CREATE FUNCTION generate_var_getter()
RETURNS VOID AS $$
DECLARE
var_name TEXT;
var_value TEXT;
new_rows TEXT[];
new_sql TEXT;
BEGIN
FOR var_name IN (
SELECT columns.column_name
FROM information_schema.columns
WHERE columns.table_schema = 'public'
AND columns.table_name = 'vars'
ORDER BY columns.ordinal_position ASC
) LOOP
EXECUTE
FORMAT('SELECT %I FROM vars LIMIT 1', var_name)
INTO var_value;
new_rows := ARRAY_APPEND(
new_rows,
FORMAT('(''%s'', %s)', var_name, var_value)
);
END LOOP;
new_sql := FORMAT($sql$
CREATE OR REPLACE FUNCTION var_get(key_in TEXT)
RETURNS TEXT AS $config$
DECLARE
result NUMERIC;
BEGIN
result := (
SELECT value FROM (VALUES %s)
AS vars_tmp (key, value)
WHERE key = key_in
);
RETURN result;
END;
$config$ LANGUAGE plpgsql IMMUTABLE;
$sql$, ARRAY_TO_STRING(new_rows, ','));
EXECUTE new_sql;
RETURN;
END;
$$ LANGUAGE plpgsql;
Add an update trigger to your table, so that after you change one of your variables, generate_var_getter() is called, and the immutable var_get() function is recreated.
CREATE FUNCTION vars_regenerate_update()
RETURNS TRIGGER AS $$
BEGIN
PERFORM generate_var_getter();
RETURN NULL;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trigger_vars_regenerate_change
AFTER INSERT OR UPDATE ON vars
EXECUTE FUNCTION vars_regenerate_update();
Now you can easily keep your variables in a table, but also get blazing-fast immutable access to them. The best of both worlds:
INSERT INTO vars DEFAULT VALUES;
-- INSERT 0 1
SELECT var_get('zipcode')::INT;
-- 90210
UPDATE vars SET zipcode = 84111;
-- UPDATE 1
SELECT var_get('zipcode')::INT;
-- 84111
When your query uses "GROUP BY":
WITH const AS (
select 5 as MY_ID,
'2022-03-1'::date as MY_DAY)
SELECT u.user_group,
COUNT(*),
const.MY_DAY
FROM users u
CROSS JOIN const
WHERE 1=1
GROUP BY u.user_group, const.MY_ID, const.MY_DAY
The sample contains more fields than the OP's, but that may help more visitors who are looking into this subject.
Without GROUP BY:
WITH const AS (
select 5 as MY_ID)
SELECT u.* FROM users u
CROSS JOIN const
WHERE u.id = const.MY_ID
credits to #GordonLinoff
Without GROUP BY and no column-name conflicts:
WITH const AS (
select 5 as MY_ID)
SELECT users.* FROM users
CROSS JOIN const
WHERE id = MY_ID