How to use a temp sequence within a PostgreSQL function

I have some lines of SQL which take a set of IDs from the same GROUP_ID that are not contiguous (e.g. if some rows got deleted) and make them contiguous again. I wanted to turn this into a function for reusability purposes. The lines work if executed individually, but when I try to create the function I get the error
ERROR: relation "id_seq_temp" does not exist
LINE 10: UPDATE THINGS SET ID=nextval('id_se...
If I create the sequence outside of the function and use that sequence in the function instead, then the function is created successfully (schema-qualified or unqualified). However, I felt that creating the temp sequence inside the function, rather than leaving it in the schema, was a cleaner solution.
I have seen this question: Function shows error "relation my_table does not exist"
However, I'm using the public schema and schema qualifying the sequence with public. does not seem to help.
I've also seen this question: How to create a sql function using temp sequences and a SELECT on PostgreSQL8. I probably could use generate_series, but this adds a lot of complexity that sequences avoid, such as needing to know how big of a series to generate.
Here is my function. I anonymized some of the names, so apologies if that introduced a typo.
CREATE OR REPLACE FUNCTION reindex_ids(IN BIGINT) RETURNS VOID
LANGUAGE SQL
AS $$
CREATE TEMPORARY SEQUENCE id_seq_temp
  MINVALUE 1
  START WITH 1
  INCREMENT BY 1;
ALTER SEQUENCE id_seq_temp RESTART;
UPDATE THINGS SET ID=ID+2000 WHERE GROUP_ID=$1;
UPDATE THINGS SET ID=nextval('id_seq_temp') WHERE GROUP_ID=$1;
$$;
Is it possible to use a sequence you create within a function later in the function?

Answer to question
The reason is that SQL functions (LANGUAGE sql) are parsed and planned as a whole. All objects used must exist before the function runs.
You can switch to PL/pgSQL (LANGUAGE plpgsql), which plans each statement on demand. There you can create objects and use them in the next command.
See:
Why can PL/pgSQL functions have side effect, while SQL functions can't?
Since you are not returning anything, consider a PROCEDURE. (FUNCTION works, too.)
CREATE OR REPLACE PROCEDURE reindex_ids(IN bigint)
  LANGUAGE plpgsql AS
$proc$
BEGIN
   IF EXISTS ( SELECT FROM pg_catalog.pg_class
               WHERE  relname = 'id_seq_temp'
               AND    relnamespace = pg_my_temp_schema()
               AND    relkind = 'S') THEN
      ALTER SEQUENCE id_seq_temp RESTART;
   ELSE
      CREATE TEMP SEQUENCE id_seq_temp;
   END IF;

   UPDATE things SET id = id + 2000 WHERE group_id = $1;
   UPDATE things SET id = nextval('id_seq_temp') WHERE group_id = $1;
END
$proc$;
Call:
CALL reindex_ids(123);
This creates your temp sequence if it does not exist already.
If the sequence exists, it is reset. (Remember that temporary objects live for the duration of a session.)
In the unlikely event that some other object occupies the name, an exception is raised.
Alternative solutions
Solution 1
This usually works:
UPDATE things t
SET    id = t1.new_id
FROM  (
   SELECT pk_id, row_number() OVER (ORDER BY id) AS new_id
   FROM   things
   WHERE  group_id = $1  -- your input here
   ) t1
WHERE  t.pk_id = t1.pk_id;
It also updates each row only once, so roughly half the cost.
Replace pk_id with your PRIMARY KEY column, or any UNIQUE NOT NULL (combination of) column(s).
The trick is that the UPDATE typically processes rows according to the sort order of the subquery in the FROM clause. Updating in ascending order should never hit a duplicate key violation.
And the ORDER BY clause of the window function row_number() imposes that sort order on the resulting set. That's an undocumented implementation detail, so you might want to add an explicit ORDER BY to the subquery. But since the behavior of UPDATE is undocumented anyway, it still depends on an implementation detail.
You can wrap that into a plain SQL function.
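For instance, a minimal sketch of such a wrapper (reusing the reindex_ids name from the question; pk_id is assumed as above):
CREATE OR REPLACE FUNCTION reindex_ids(IN bigint)
  RETURNS void
  LANGUAGE sql AS
$$
UPDATE things t
SET    id = t1.new_id
FROM  (
   SELECT pk_id, row_number() OVER (ORDER BY id) AS new_id
   FROM   things
   WHERE  group_id = $1
   ) t1
WHERE  t.pk_id = t1.pk_id;
$$;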
Solution 2
Consider not doing what you are doing at all. Gaps in sequential numbers are typically expected and not a problem. Just live with it. See:
Serial numbers per group of rows for compound key

Related

How can I create a calculated column when creating a table in PostgreSQL, like SQL Server's LineTotal AS Price * Quantity? [duplicate]

Does PostgreSQL support computed / calculated columns, like MS SQL Server? I can't find anything in the docs, but as this feature is included in many other DBMSs I thought I might be missing something.
Eg: http://msdn.microsoft.com/en-us/library/ms191250.aspx
Postgres 12 or newer
STORED generated columns are introduced with Postgres 12 - as defined in the SQL standard and implemented by some RDBMS including DB2, MySQL, and Oracle. Or the similar "computed columns" of SQL Server.
Trivial example:
CREATE TABLE tbl (
  int1    int
, int2    int
, product bigint GENERATED ALWAYS AS (int1 * int2) STORED
);
VIRTUAL generated columns may come with one of the next iterations. (Not in Postgres 15, yet).
Related:
Attribute notation for function call gives error
Postgres 11 or older
Up to Postgres 11 "generated columns" are not supported.
You can emulate VIRTUAL generated columns with a function using attribute notation (tbl.col) that looks and works much like a virtual generated column. That's a bit of a syntax oddity which exists in Postgres for historic reasons and happens to fit the case. This related answer has code examples:
Store common query as column?
The expression (looking like a column) is not included in a SELECT * FROM tbl, though. You always have to list it explicitly.
This can also be supported with a matching expression index, provided the function is IMMUTABLE. Like:
CREATE FUNCTION col(tbl) ... AS ... -- your computed expression here
CREATE INDEX ON tbl(col(tbl));
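To make that concrete, here is a minimal sketch under assumed names (a line_items table and a line_total function standing in for tbl and col above), mirroring the index pattern shown:
CREATE TABLE line_items (item_id serial PRIMARY KEY, price numeric, qty numeric);

CREATE FUNCTION line_total(li line_items)
  RETURNS numeric
  LANGUAGE sql IMMUTABLE AS
$$ SELECT $1.price * $1.qty $$;  -- IMMUTABLE allows the expression index

CREATE INDEX line_items_total_idx ON line_items (line_total(line_items));

SELECT li.item_id, li.line_total FROM line_items li;  -- attribute notation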
Alternatives
Alternatively, you can implement similar functionality with a VIEW, optionally coupled with expression indexes. Then SELECT * can include the generated column.
"Persisted" (STORED) computed columns can be implemented with triggers in a functionally equivalent way.
Materialized views are a related concept, implemented since Postgres 9.3.
In earlier versions one can manage MVs manually.
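For example, a minimal sketch reusing tbl from above (CREATE MATERIALIZED VIEW needs Postgres 9.3+; the manual variant works before that):
-- Postgres 9.3+:
CREATE MATERIALIZED VIEW tbl_mv AS
SELECT int1, int2, int1 * int2 AS product FROM tbl;

REFRESH MATERIALIZED VIEW tbl_mv;  -- re-run after base data changes

-- Before 9.3, manage it manually:
CREATE TABLE tbl_mv_manual AS
SELECT int1, int2, int1 * int2 AS product FROM tbl;

TRUNCATE tbl_mv_manual;  -- "refresh" by hand
INSERT INTO tbl_mv_manual SELECT int1, int2, int1 * int2 FROM tbl;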
YES you can!! The solution should be easy, safe, and performant...
I'm new to PostgreSQL, but it seems you can create computed columns by using an expression index, paired with a view (the view is optional, but it makes life a bit easier).
Suppose my computation is md5(some_string_field), then I create the index as:
CREATE INDEX some_string_field_md5_index ON some_table(MD5(some_string_field));
Now, any queries that act on MD5(some_string_field) will use the index rather than computing it from scratch. For example:
SELECT MAX(some_field) FROM some_table GROUP BY MD5(some_string_field);
You can check this with explain.
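For instance, a plan check could look like this (the exact plan depends on your data and statistics):
EXPLAIN
SELECT MAX(some_field) FROM some_table GROUP BY MD5(some_string_field);
-- look for a scan on some_string_field_md5_index in the output,
-- rather than a per-row MD5() computation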
However at this point you are relying on users of the table knowing exactly how to construct the column. To make life easier, you can create a VIEW onto an augmented version of the original table, adding in the computed value as a new column:
CREATE VIEW some_table_augmented AS
SELECT *, MD5(some_string_field) as some_string_field_md5 from some_table;
Now any queries using some_table_augmented can use some_string_field_md5 without worrying about how it works; they just get good performance. The view doesn't copy any data from the original table, so it is good memory-wise as well as performance-wise. Note, however, that you can't update/insert into a view, only into the source table. If you really want to, I believe you can redirect inserts and updates to the source table using rules (I could be wrong on that last point, as I've never tried it myself).
Edit: it seems that if the query involves competing indexes, the planner may sometimes not use the expression index at all. The choice seems to be data-dependent.
One way to do this is with a trigger!
CREATE TABLE computed(
  one SERIAL,
  two INT NOT NULL
);
CREATE OR REPLACE FUNCTION computed_two_trg()
  RETURNS trigger
  LANGUAGE plpgsql
  SECURITY DEFINER
AS $BODY$
BEGIN
  NEW.two = NEW.one * 2;
  RETURN NEW;
END
$BODY$;

CREATE TRIGGER computed_500
  BEFORE INSERT OR UPDATE
  ON computed
  FOR EACH ROW
  EXECUTE PROCEDURE computed_two_trg();
The trigger is fired before the row is inserted or updated. It sets the field we want to compute on the NEW record and then returns that record.
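A quick check (assuming a fresh table, so the serial starts at 1):
INSERT INTO computed (two) VALUES (0);  -- the trigger overwrites the 0
SELECT one, two FROM computed;
--  one | two
-- -----+-----
--    1 |   2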
PostgreSQL 12 supports generated columns:
PostgreSQL 12 Beta 1 Released!
Generated Columns
PostgreSQL 12 allows the creation of generated columns that compute their values with an expression using the contents of other columns. This feature provides stored generated columns, which are computed on inserts and updates and are saved on disk. Virtual generated columns, which are computed only when a column is read as part of a query, are not implemented yet.
Generated Columns
A generated column is a special column that is always computed from other columns. Thus, it is for columns what a view is for tables.
CREATE TABLE people (
    ...,
    height_cm numeric,
    height_in numeric GENERATED ALWAYS AS (height_cm / 2.54) STORED
);
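A quick check (a minimal sketch, assuming the elided columns are nullable):
INSERT INTO people (height_cm) VALUES (254);
SELECT height_cm, height_in FROM people;
--  height_cm | height_in
-- -----------+-----------
--        254 |       100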
Well, I'm not sure if this is what you mean, but Postgres normally supports "dummy" ETL syntax.
I created one empty column in the table and then needed to fill it with calculated records depending on values in the row.
UPDATE table01
SET column03 = column01*column02; /*e.g. for multiplication of 2 values*/
It is so basic that I suspect it is not what you are looking for.
Obviously it is not dynamic; you run it once. But there is no obstacle to putting it into a trigger.
Example of creating an empty virtual column:
,(SELECT *
  FROM (VALUES ('')) A("virtual_col"))
Example of creating two virtual columns with values:
SELECT *
FROM (VALUES (45, 'Completed')
           , (1, 'In Progress')
           , (1, 'Waiting')
           , (1, 'Loading')
     ) A("Count", "Status")
ORDER BY "Count" DESC
I have code that works and uses the term calculated. I'm not on pure PostgreSQL though; we run on PADB.
Here is how it's used:
create table some_table as
select category,
       txn_type,
       indiv_id,
       accum_trip_flag,
       max(first_true_origin) as true_origin,
       max(first_true_dest)   as true_destination,
       max(id)                as id,
       count(id)              as tkts_cnt,
       (case when calculated tkts_cnt = 1 then 1 else 0 end) as one_way
from some_rando_table
group by 1,2,3,4;
A lightweight solution with a CHECK constraint:
CREATE TABLE example (
    discriminator INTEGER DEFAULT 0 NOT NULL CHECK (discriminator = 0)
);
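A quick check: the default passes, and anything else is rejected:
INSERT INTO example DEFAULT VALUES;              -- OK, discriminator = 0
INSERT INTO example (discriminator) VALUES (1);  -- ERROR: violates the CHECK constraint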

Postgres ATOMIC stored procedure INSERT INTO ... SELECT with one parameter and one set of rows from a table

I am trying to write a stored procedure to let a dev assign new user identities to a specified group when they don't already have one (i.e. insert a parameter and the output of a SELECT statement into a joining table) without hand-writing every pair of foreign keys as values. I know how I'd do it in T-SQL/SQL Server, but I'm working with a preexisting, unfamiliar Postgres database. I would strongly prefer to keep my stored procedures as LANGUAGE SQL/BEGIN ATOMIC, and this, together with online examples being simplified and/or using constants, has made it difficult for me to get my bearings.
Apologies in advance for the length; this is me trying to articulate why I do not believe this question is a duplicate, based on what I've been able to find searching on my own, though I may have overcorrected.
Schema (abstracted from the most identifying parts; these are not the original table names and I am not in a position to change what anything is called; I am also leaving out indexing for simplicity's sake) is like:
create table IF NOT EXISTS user_identities (
    id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY NOT NULL,
    [more columns not relevant to this query]
);

create table IF NOT EXISTS user_groups (
    id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY NOT NULL,
    name TEXT NOT NULL
);

create table IF NOT EXISTS group_identities (
    user_id BIGINT REFERENCES user_identities(id) ON DELETE RESTRICT NOT NULL,
    group_id BIGINT REFERENCES user_groups(id) ON DELETE RESTRICT NOT NULL
);
Expected dev behavior:
Add all predetermined identities intended to belong to a group in a single batch
Add identifying information for the new group (it is going to take a lot of convincing to bring the people involved around to using nested stored procedures for this if I ever can)
Bring the joining table up to date accordingly (what I've been asked to streamline).
If this were SQL Server I would do (error handling omitted for time and putting aside whether EXCEPT or NOT IN would be best for now, please)
create or alter proc add_identities_to_group
    @group_name varchar(50) = NULL
as BEGIN
    declare @use_group_id int
    if @group_name is NULL
        set @use_group_id = (select top 1 id from user_groups
                             where id not in (select group_id from group_identities)
                             order by id asc)
    ELSE set @use_group_id = (select id from user_groups where name = @group_name)
    insert into group_identities (user_id, group_id)
    select @use_group_id, id from user_identities
    where id not in (select user_id from group_identities)
END
GO
Obviously this is not going to fly in Postgres; part of why I want to stick with atomic stored procedures is staying in "neutral" SQL, both to be closer to my comfort zone and because I don't know what other languages the database is currently set up for, but my existing education has played kind of fast and loose with differentiating what was T-SQL specific at any point.
I am aware that this is not going to run for a wide variety of reasons because I'm still trying to internalize the syntax, but the bad/conceptual draft I have written so that I have anything to stare at is:
create or replace procedure add_identities_to_groups(
    group_name text default NULL ) language SQL
BEGIN ATOMIC
    declare use_group_id integer
    if group_name is NULL
        set use_group_id = (select Top 1 id from user_groups
                            where id not in (select user_id from group_identities)
                            order by id asc)
    ELSE set use_group_id = (select id from user_groups where name = group_name) ;
    insert into group_identities (group_id, user_id)
    select use_group_id, id from user_identities
    where id not in (select user_id from group_identities)
END ;
GO ;
Issues:
I have found neither answers for how to do this (combining a single variable and a column) with BEGIN ATOMIC, nor hard confirmation that it wouldn't work (e.g. can atomic stored procedures just not accept parameters? I cannot find an answer to this on my own). (This is part of why existing answers that I can find here and elsewhere haven't been clarifying for me.)
~~Don't know how to compensate for Postgres's not differentiating variables and parameters from column names at all. (This is why examples using a hardcoded constant haven't helped, and they make up virtually all of what I can find off StackOverflow itself.)~~ Not a problem if Postgres will handle that intelligently within the atomic block but that's one of the things I hadn't been able to confirm on my own.
Google results for "vanilla" SQL are unpredictably saturated with SQL Server anyway, and my lack of familiarity with Postgres is not doing me any favors, but I don't know anyone personally who has more experience than I do.
because I don't know what other languages the database is currently set up for
All supported Postgres versions always include PL/pgSQL.
If you want to use procedural elements like variables or conditional statements like IF, you need PL/pgSQL. So your procedure has to be defined with language plpgsql, which removes the possibility of using the ANSI standard BEGIN ATOMIC syntax.
Don't know how to compensate for Postgres's not differentiating variables and parameters from column names at all.
You don't. Most people simply use naming conventions to do that. In my environment we use p_ for parameters and l_ for "local" variables. Use whatever you prefer.
Quote from the manual
By default, PL/pgSQL will report an error if a name in an SQL statement could refer to either a variable or a table column. You can fix such a problem by renaming the variable or column, or by qualifying the ambiguous reference, or by telling PL/pgSQL which interpretation to prefer.
The simplest solution is to rename the variable or column. A common coding rule is to use a different naming convention for PL/pgSQL variables than you use for column names. For example, if you consistently name function variables v_something while none of your column names start with v_, no conflicts will occur.
As documented in the manual, the body for a procedure written in PL/pgSQL (or any other language that is not SQL) must be provided as a string. This is typically done using dollar quoting to make writing the source easier.
As documented in the manual, if you want to store the result of a single-row query in a variable, use select ... into ... from ....
As documented in the manual, an IF statement needs a THEN.
As documented in the manual, there is no TOP clause in Postgres (or standard SQL). Use limit or the standard-compliant fetch first 1 rows only instead.
To avoid a clash between names of variables and column names, most people use some kind of prefix for parameters and variables. This also helps to identify them in the code.
In Postgres it's usually faster to use NOT EXISTS instead of NOT IN.
In Postgres statements are terminated with ;. GO isn't a SQL command in SQL Server either - it's a client side thing supported by SSMS. To my knowledge, there is no SQL tool that works with Postgres that supports the GO "batch terminator" the same way SSMS does.
So a direct translation of your T-SQL code to PL/pgSQL could look like this:
create or replace procedure add_identities_to_groups(p_group_name text default NULL)
  language plpgsql
as
$$ --<< start of PL/pgSQL code
declare --<< start a block for all variables
  l_use_group_id integer;
begin --<< start the actual code
  if p_group_name is NULL THEN --<< THEN required
    select id
      into l_use_group_id
    from user_groups ug
    where not exists (select * from group_identities gi where gi.group_id = ug.id)
    order by ug.id asc
    limit 1;
  ELSE
    select id
      into l_use_group_id
    from user_groups
    where name = p_group_name;
  end if;

  insert into group_identities (group_id, user_id)
  select l_use_group_id, id
  from user_identities ui
  where not exists (select * from group_identities gi where gi.user_id = ui.id);
END;
$$;
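Call (the parameter is optional; 'admins' is just a hypothetical group name):
CALL add_identities_to_groups();          -- picks the first group without identities
CALL add_identities_to_groups('admins');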

How can I set data from one table to another according to the spatial relationship of geometries in these tables

I have two tables with data. I need to copy building_h from one table to the other according to the spatial relationships between the data in these tables. Spatial data in the source table is of polygon type; in the destination table, points. I made a query for points and polygons which intersect. It works nicely. But there are some points that do not intersect any polygon. My idea is to find, for each of these points, the closest polygon and take the building_h value from it. For this purpose I wrote a function which takes the id of the point and returns the building_h value. And it works fine in testing.
CREATE OR REPLACE FUNCTION closest_pol(int4)
RETURNS NUMERIC
AS
$$
DECLARE
  retVal NUMERIC;
BEGIN
  SELECT bgs.building_h INTO retVal
  FROM buildings_geoalert_spgg bgs, building_from_landuse_spgg bfl
  WHERE ST_INTERSECTS(bgs.geom, ST_BUFFER(bfl.geom_centr, 0.0006, 'quad_segs=8'))
    AND bfl.id = $1
  ORDER BY ST_INTERSECTION(bgs.geom, ST_BUFFER(bfl.geom_centr, 0.0006, 'quad_segs=8')) ASC
  LIMIT 1;
  RETURN retVal;
END;
$$
LANGUAGE plpgsql
STABLE
RETURNS NULL ON NULL INPUT;
But when I run the query for the whole table, it executes forever (for ~3000 rows with a NULL value). It ran for several hours before I stopped it.
UPDATE building_from_landuse_spgg AS bfl
SET building_h = (SELECT closest_pol(bfl.id))
WHERE bfl.building_h IS NULL;
Do you have an idea what I did wrong?
In your WHERE clause you're using ST_Intersects with a 0.0006 buffer that is created at query time. Consider using a GiST expression index with this buffer of yours (if for whatever reason you must use it):
CREATE INDEX idx_buffer ON building_from_landuse_spgg USING gist (ST_Buffer(geom_centr, 0.0006,'quad_segs=8'));
I'd argue that you do not need a function for that, since you could use the query inside the function as a subquery in your UPDATE statement. But if you have your reasons to stick with the function, you can get rid of the SELECT when calling it:
UPDATE building_from_landuse_spgg
SET building_h = closest_pol(id)
WHERE building_h IS NULL;
EDIT: As correctly mentioned by @JGH (see comments), using buffers is less efficient than simply using ST_DWithin. So, if you can afford it, create an index on the geometries used in the WHERE clause ..
CREATE INDEX idx_landuse_geom_centr ON building_from_landuse_spgg USING gist (geom_centr);
CREATE INDEX idx_geoalert_geom ON buildings_geoalert_spgg USING gist (geom);
.. and partially indexing building_h would also speed things up a bit, since you're only interested in the NULL records:
CREATE INDEX idx_landuse_building_h ON building_from_landuse_spgg (building_h)
WHERE building_h IS NULL;
Or, if you prefer a broader index that still puts NULL values first ..
CREATE INDEX idx_landuse_building_h ON building_from_landuse_spgg
(building_h NULLS FIRST);
And perhaps consider putting the code of your function in a subquery, e.g.
UPDATE building_from_landuse_spgg AS bfl
SET building_h = (
    SELECT bgs.building_h
    FROM buildings_geoalert_spgg bgs
    WHERE ST_DWithin(bgs.geom, bfl.geom_centr, 0.0006)
    ORDER BY ST_Distance(bgs.geom, bfl.geom_centr) ASC
    LIMIT 1)
WHERE bfl.building_h IS NULL;
Further reading:
Getting all Buildings in range of 5 miles from specified coordinates
ST_Distance

How to pass a set of rows from one function into another?

Overview
I'm using PostgreSQL 9.1.14, and I'm trying to pass the results of a function into another function. The general idea (specifics, with a minimal example, follow) is that we can write:
select * from (select * from foo ...)
and we can abstract the sub-select away in a function and select from it:
create function foos()
returns setof foo
language sql as $$
select * from foo ...
$$;
select * from foos()
Is there some way to abstract one level farther, so as to be able to do something like this (I know functions cannot actually have arguments with setof types):
create function more_foos( some_foos setof foo )
language sql as $$
select * from some_foos ... -- or unnest(some_foos), or ???
$$;
select * from more_foos(foos())
Minimal Example and Attempted Workarounds
I'm using PostgreSQL 9.1.14. Here's a minimal example:
-- 1. create a table x with three rows
drop table if exists x cascade;
create table if not exists x (id int, name text);
insert into x values (1,'a'), (2,'b'), (3,'c');
-- 2. xs() is a function with type `setof x`
create or replace function xs()
returns setof x
language sql as $$
select * from x
$$;
-- 3. xxs() should return the contents of x, too
-- Ideally the argument would be a `setof x`,
-- but that's not allowed (see below).
create or replace function xxs(x[])
returns setof x
language sql as $$
select x.* from x
join unnest($1) y
on x.id = y.id
$$;
When I load up this code, I get the expected output for the table definitions, and I can call and select from xs() as I'd expect. But when I try to pass the result of xs() to xxs(), I get an error that "function xxs(x) does not exist":
db=> \i test.sql
DROP TABLE
CREATE TABLE
INSERT 0 3
CREATE FUNCTION
CREATE FUNCTION
db=> select * from xs();
1 | a
2 | b
3 | c
db=> select * from xxs(xs());
ERROR: function xxs(x) does not exist
LINE 1: select * from xxs(xs());
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
I'm a bit confused about "function xxs(x) does not exist"; since the return type of xs() was setof x, I'd have expected the argument type to match setof x (or maybe x[]), not x. Following the complaints about the type, I can get to either of the following, but while with either definition I can select xxs(xs());, I can't select * from xxs(xs());.
create or replace function xxs( x )
returns setof x
language sql as $$
select x.* from x
join unnest(array[$1]) y -- unnest(array[...]) seems pretty bad
on x.id = y.id
$$;
create or replace function xxs( x )
returns setof x
language sql as $$
select * from x
where x.id in ($1.id)
$$;
db=> select xxs(xs());
(1,a)
(2,b)
(3,c)
db=> select * from xxs(xs());
ERROR: set-valued function called in context that cannot accept a set
Summary
What's the right way to pass the results of a set-returning function into another function?
(I have noted that create function … xxs( setof x ) … results in the error: ERROR: functions cannot accept set arguments, so the answer won't literally be passing a set of rows from one function to another.)
Table functions
I perform very high speed, complex database migrations for a living, using SQL as both the client and server language (no other language is used), all running server side, where the code rarely surfaces from the database engine. Table functions play a HUGE role in my work. I don't use "cursors" since they are too slow to meet my performance requirements, and everything I do is result set oriented. Table functions have been an immense help to me in completely eliminating use of cursors, achieving very high speed, and have contributed dramatically towards reducing code volume and improving simplicity.
In short, you use a query that references two (or more) table functions to pass the data from one table function to the next. The result set of the select query that calls the table functions serves as the conduit to pass the data from one to the next. On the DB2 platform/version I work on (and, based on a quick look at the 9.1 Postgres manual, the same appears to be true there), you can only pass a single row of column values as input to any of the table function calls, as you've discovered. However, because the table function call happens in the middle of a query's result set processing, you achieve the same effect of passing a whole result set to each table function call, albeit, in the database engine plumbing, the data is passed only one row at a time to each table function.
Table functions accept one row of input columns, and return a single result set back into the calling query (i.e. select) that called the function. The result set columns passed back from a table function become part of the calling query's result set, and are therefore available as input to the next table function, referenced later in the same query, typically as a subsequent join. The first table function's result columns are fed as input (one row at a time) to the second table function, which returns its result set columns into the calling query's result set. Both the first and second table function result set columns are now part of the calling query's result set, and are now available as input (one row at a time) to a third table function. Each table function call widens the calling query's result set via the columns it returns. This can go on and on until you start hitting limits on the width of a result set, which likely varies from one database engine to the next.
Consider this example (which may not match Postgres' syntax requirements or capabilities, as I work on DB2). It is one of many design patterns in which I use table functions, one of the simpler ones, one that I think is very illustrative, and one that I anticipate would have broad appeal if table functions were in heavy mainstream use (to my knowledge they are not, but I think they deserve more attention than they are getting).
In this example, the table functions in use are: VALIDATE_TODAYS_ORDER_BATCH, POST_TODAYS_ORDER_BATCH, and DATA_WAREHOUSE_TODAYS_ORDER_BATCH. On the DB2 version I work on, you wrap the table function inside "TABLE( place table function call and parameters here )", but based on quick look at a Postgres manual it appears you omit the "TABLE( )" wrapper.
create table TODAYS_ORDER_PROCESSING_EXCEPTIONS as (
    select TODAYS_ORDER_BATCH.*
          ,VALIDATION_RESULT.ROW_VALID
          ,POST_RESULT.ROW_POSTED
          ,WAREHOUSE_RESULT.ROW_WAREHOUSED
    from TODAYS_ORDER_BATCH
    cross join VALIDATE_TODAYS_ORDER_BATCH ( ORDER_NUMBER, [either pass the remainder of the order columns or fetch them in the function] )
         as VALIDATION_RESULT ( ROW_VALID ) -- example: 1/0 true/false Boolean returned
    left join POST_TODAYS_ORDER_BATCH ( ORDER_NUMBER, [either pass the remainder of the order columns or fetch them in the function] )
         as POST_RESULT ( ROW_POSTED ) -- example: 1/0 true/false Boolean returned
         on ROW_VALID = '1'
    left join DATA_WAREHOUSE_TODAYS_ORDER_BATCH ( ORDER_NUMBER, [either pass the remainder of the order columns or fetch them in the function] )
         as WAREHOUSE_RESULT ( ROW_WAREHOUSED ) -- example: 1/0 true/false Boolean returned
         on ROW_POSTED = '1'
    where coalesce( ROW_VALID, '0' )      = '0' -- Capture only exceptions and unprocessed work.
       or coalesce( ROW_POSTED, '0' )     = '0' -- Or, you can flip the logic to capture only successful rows.
       or coalesce( ROW_WAREHOUSED, '0' ) = '0'
) with data
If table TODAYS_ORDER_BATCH contains 1,000,000 rows, then VALIDATE_TODAYS_ORDER_BATCH will be called 1,000,000 times, once for each row.
If 900,000 rows pass validation inside VALIDATE_TODAYS_ORDER_BATCH, then POST_TODAYS_ORDER_BATCH will be called 900,000 times.
If only 850,000 rows successfully post, then VALIDATE_TODAYS_ORDER_BATCH needs some loopholes closed LOL, and DATA_WAREHOUSE_TODAYS_ORDER_BATCH will be called 850,000 times.
If 850,000 rows successfully made it into the Data Warehouse (i.e. no additional exceptions were generated), then table TODAYS_ORDER_PROCESSING_EXCEPTIONS will be populated with 1,000,000 - 850,000 = 150,000 exception rows.
The table function calls in this example are only returning a single column, but they could be returning many columns. For example, the table function validating an order row could return the reason why an order failed validation.
In this design, virtually all the chatter between a HLL and the database is eliminated, since the HLL requestor is asking the database to process the whole batch in ONE request. This results in a reduction of millions of SQL requests to the database, in a HUGE removal of millions of HLL procedure or method calls, and as a result provides a HUGE runtime improvement. In contrast, legacy code which often processes a single row at a time, would typically send 1,000,000 fetch SQL requests, 1 for each row in TODAYS_ORDER_BATCH, plus at least 1,000,000 HLL and/or SQL requests for validation purposes, plus at least 1,000,000 HLL and/or SQL requests for posting purposes, plus 1,000,000 HLL and/or SQL requests for sending the order to the data warehouse. Granted, using this table function design, inside the table functions SQL requests are being sent to the database, but when the database makes requests to itself (i.e from inside a table function), the SQL requests are serviced much faster (especially in comparison to a legacy scenario where the HLL requestor is doing single row processing from a remote system, with the worst case over a WAN - OMG please don't do that).
You can easily run into performance problems if you use a table function to "fetch a result set" and then join that result set to other tables. In that case, the SQL optimizer can't predict what set of rows will be returned from the table function, and therefore it can't optimize the join to subsequent tables. For that reason, I rarely use them for fetching a result set, unless I know that result set will be a very small number of rows, hence not causing a performance problem, or I don't need to join to subsequent tables.
In my opinion, one reason why table functions are underutilized is that they are often perceived as only a tool to fetch a result set, which often performs poorly, so they get written off as a "poor" tool to use.
Table functions are immensely useful for pushing more functionality over to the server, for eliminating most of the chatter between the database server and programs on remote systems, and even for eliminating chatter between the database server and external programs on the same server. Even chatter between programs on the same server carries more overhead than many people realize, and much of it is unnecessary. The heart of the power of table functions lies in using them to perform actions inside result set processing.
There are more advanced design patterns for using table functions that build on the above pattern, where you can maximize result set processing even further, but this post is a lot for most to absorb already.
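In Postgres specifically, a minimal sketch of this chaining idea can use LATERAL joins (available in Postgres 9.3+, so newer than the 9.1 in the question; all names are hypothetical and the function bodies are trivial stand-ins):
-- trivial stand-in for a validating table function
CREATE FUNCTION validate_order(bigint)
RETURNS TABLE (row_valid int)
LANGUAGE sql AS
$$ SELECT CASE WHEN $1 > 0 THEN 1 ELSE 0 END $$;

-- trivial stand-in for a posting table function
CREATE FUNCTION post_order(bigint)
RETURNS TABLE (row_posted int)
LANGUAGE sql AS
$$ SELECT 1 $$;

-- each row of the batch is fed through both functions in one query
SELECT b.*, v.row_valid, p.row_posted
FROM   todays_order_batch b
CROSS  JOIN LATERAL validate_order(b.order_number) v
LEFT   JOIN LATERAL post_order(b.order_number) p ON v.row_valid = 1;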

PostgreSQL - Rule to create a copy of the primaryID table

In my schema I want to have a PrimaryID and a SearchID. Every SearchID is the PrimaryID plus some text at the start. It needs to look like this:
PrimaryID = 1
SearchID = Search1
Since the PrimaryID is set to autoincrement, I was hoping I could use a PostgreSQL rule to do the following (pseudocode):
IF PRIMARYID CHANGES
{
SEARCHID = SEARCH(PRIMARYID)
}
This would hopefully occur exactly after the primaryID is updated and happen automatically. So, is this the best way of achieving this, and can anyone provide an example of how it is done?
Thank you
Postgres 11 introduced genuine generated columns. See:
Computed / calculated / virtual / derived columns in PostgreSQL
For older (or any) versions, you could emulate a "virtual generated column" with a special function. Say your table is named tbl and the serial primary key is named tbl_id:
CREATE FUNCTION search_id(t tbl)
RETURNS text STABLE LANGUAGE SQL AS
$$
SELECT 'Search' || $1.tbl_id;
$$;
Then you can:
SELECT t.tbl_id, t.search_id FROM tbl t;
Table-qualification in t.search_id is needed in this case. Since search_id is not found as column of table tbl, Postgres looks for a function that takes tbl as argument next.
Effectively, t.search_id is just a syntax variant of search_id(t), but makes usage rather intuitive.
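A quick check (assuming an otherwise empty tbl, so the serial starts at 1):
INSERT INTO tbl DEFAULT VALUES;
SELECT t.tbl_id, t.search_id FROM tbl t;
--  tbl_id | search_id
-- --------+-----------
--       1 | Search1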