Is it possible to insert a row only to a partition? - sql

I have an existing table that I want to partition.
Here is my script for creating the partitions:
create or replace function create_partition_and_insert_to_partition_bundle() returns trigger as
$$
declare
partition text;
dt_constraint text;
begin
dt_constraint := format( 'y%sq%s', (select date_part('year', date(new.created))), (select ceil(date_part('month', date(new.created))::float / 3)));
partition := format( 'bundle_%s', dt_constraint);
if not exists(select relname from pg_class where relname = partition) then
execute 'create table '||partition||' (like bundle including all) inherits (bundle)';
end if;
execute 'insert into ' || partition || ' values ( ($1).* )' using new;
return null;
end
$$ language plpgsql;
create trigger create_insert_partition_bundle before insert on bundle for each row execute procedure create_partition_and_insert_to_partition_bundle();
set constraint_exclusion = partition;
When I add a new row, the trigger runs and creates a new partition for the new quarter, but the row is also inserted into the parent table bundle. Is it possible to insert only into the partition, and how should I change my script?

Personally, I found inheritance very hard to understand, so I use declarative partitioning instead.
The following uses declarative partitioning. You don't even need a trigger.
CREATE TABLE bundle (
bundle_id int not null,
created date not null,
misc text
) PARTITION BY RANGE (created);
CREATE TABLE bundle_y2022q1 PARTITION OF bundle
FOR VALUES FROM ('2022-01-01') TO ('2022-04-01');
CREATE TABLE bundle_y2022q2 PARTITION OF bundle
FOR VALUES FROM ('2022-04-01') TO ('2022-07-01');
CREATE TABLE bundle_y2022q3 PARTITION OF bundle
FOR VALUES FROM ('2022-07-01') TO ('2022-10-01');
CREATE TABLE bundle_y2022q4 PARTITION OF bundle
FOR VALUES FROM ('2022-10-01') TO ('2023-01-01');
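With declarative partitioning, inserted rows are routed to the matching partition automatically. A quick sketch using the table above; the DEFAULT partition is an addition of mine, as a catch-all for dates outside the defined ranges:

```sql
-- Optional catch-all for dates outside the quarterly ranges above
CREATE TABLE bundle_default PARTITION OF bundle DEFAULT;

-- Rows are routed by the value of "created"; no trigger needed
INSERT INTO bundle VALUES (1, '2022-02-15', 'lands in bundle_y2022q1');
INSERT INTO bundle VALUES (2, '2022-11-03', 'lands in bundle_y2022q4');

-- tableoid::regclass shows which partition each row landed in
SELECT tableoid::regclass AS partition, * FROM bundle;
```

Querying the parent table sees all partitions, while each bundle_y2022qN table holds only its own quarter.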

Related

Trigger for conditional insert into table

I have subscription data that is being appended to a table in real time (via Kafka). I have set up a trigger such that once the data is added, it is checked for consistency. If the checks pass, some of the data should be added to other tables (which hold master information on the customer profile etc.). The checks function I wrote works fine, but I keep getting errors on the function used in the trigger. The function for the trigger is:
CREATE OR REPLACE FUNCTION update_tables()
RETURNS TRIGGER
LANGUAGE plpgsql
AS
$$
BEGIN
CASE (SELECT check_incoming_data()) WHEN 0
THEN INSERT INTO
sub_master(sub_id, sub_date, customer_id, product_id)
VALUES(
(SELECT sub_id::int FROM sub_realtime WHERE CTID = (SELECT MAX(CTID) FROM sub_realtime)),
(SELECT sub_date::date FROM sub_realtime WHERE CTID = (SELECT MAX(CTID) FROM sub_realtime)),
(SELECT customer_id::int FROM sub_realtime WHERE CTID = (SELECT MAX(CTID) FROM sub_realtime)),
(SELECT product_id::int FROM sub_realtime WHERE CTID = (SELECT MAX(CTID) FROM sub_realtime))
);
RETURN sub_master;
END CASE;
RETURN sub_master;
END;
$$
The trigger is then:
CREATE TRIGGER incoming_data
AFTER INSERT
ON claims_realtime_3
FOR EACH ROW
EXECUTE PROCEDURE update_tables();
What I am saying is 'if checks pass then select data from the last added row and add them to the master table'. What is the best way to structure this query?
Thanks a lot!
Trigger functions are executed for each row, and you must use a record-type variable called "NEW", which is automatically created by the database in trigger functions. "NEW" holds only the inserted record. For example, suppose I want to insert data into a users_log table when inserting records into a users table.
CREATE OR REPLACE FUNCTION users_insert()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
begin
insert into users_log
(
username,
first_name,
last_name
)
select
new.username,
new.first_name,
new.last_name;
return new;
END;
$function$;
create trigger store_data_to_history_insert
before insert
on users for each row execute function users_insert();
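Applied to the question's setup, the trigger function can read NEW directly instead of re-querying the table by CTID. A sketch, assuming check_incoming_data() and the column names from the question:

```sql
CREATE OR REPLACE FUNCTION update_tables()
RETURNS TRIGGER
LANGUAGE plpgsql
AS $$
BEGIN
    -- Only copy the row into the master table when the check passes
    IF check_incoming_data() = 0 THEN
        INSERT INTO sub_master (sub_id, sub_date, customer_id, product_id)
        VALUES (NEW.sub_id::int, NEW.sub_date::date,
                NEW.customer_id::int, NEW.product_id::int);
    END IF;
    RETURN NEW;  -- the return value of an AFTER trigger is ignored
END;
$$;
```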

Postgresql - Union table values from different schemas in a database into one table

I have a database called Knowledge in Postgres. It has multiple schemas, every schema has the same number of tables, and the tables have the same columns as well.
Now I want to create a new schema called Aggregate, with a table aggregate.table1, and put the values from schema1.table1 and schema2.table1 into it.
I also need to add a column to Aggregate.table1 that holds a value identifying the source schema.
If any value in schema1.table1 is updated, then aggregate.table1 should get the updated values.
Question:
Is this possible in PostgreSQL? If so, please help me with this.
I need this aggregated table for further processing.
You can try writing an anonymous code block to iterate over all schemas and tables, so that you can import your data into the aggregate schema. The following block searches for all tables contained in the schemas s1 and s2, creates a corresponding table in the schema s_agg, and finally copies their records.
DO $$
DECLARE row record;
BEGIN
FOR row IN SELECT * FROM pg_tables WHERE schemaname IN ('s1','s2') LOOP
EXECUTE 'CREATE TABLE IF NOT EXISTS s_agg.'||quote_ident(row.tablename)||
' AS TABLE ' || quote_ident(row.schemaname)||'.'|| quote_ident(row.tablename) ||
' WITH NO DATA;';
EXECUTE 'INSERT INTO s_agg.' || quote_ident(row.tablename)
|| ' SELECT * FROM '||quote_ident(row.schemaname)||'.'||quote_ident(row.tablename);
END LOOP;
END;
$$;
Demo
CREATE SCHEMA s1;
CREATE SCHEMA s2;
CREATE SCHEMA s_agg;
CREATE TABLE s1.t1 (id int);
INSERT INTO s1.t1 VALUES (1);
CREATE TABLE s2.t1 (id int);
INSERT INTO s2.t1 VALUES (42);
DO $$
DECLARE row record;
BEGIN
FOR row IN SELECT * FROM pg_tables WHERE schemaname IN ('s1','s2') LOOP
EXECUTE 'CREATE TABLE IF NOT EXISTS s_agg.'||quote_ident(row.tablename)||
' AS TABLE ' || quote_ident(row.schemaname)||'.'|| quote_ident(row.tablename) ||
' WITH NO DATA;';
EXECUTE 'INSERT INTO s_agg.' || quote_ident(row.tablename)
|| ' SELECT * FROM '||quote_ident(row.schemaname)||'.'||quote_ident(row.tablename);
END LOOP;
END;
$$;
-- contains values of t1 from s1 and s2
SELECT * FROM s_agg.t1;
id
----
1
42
Note: this code works under the assumption that the aggregate schema is either empty or has empty tables; otherwise data will be duplicated. If you run this periodically and the size of your tables isn't too large, you can add a DROP TABLE before the CREATE TABLE statement. To propagate every commit on all tables of all schemas, you would have to take a look at TRIGGERS or even logical replication.
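As a sketch of the trigger approach mentioned above, using the single-column t1 table from the demo (an UPDATE/DELETE trigger would be needed as well for full synchronization):

```sql
CREATE OR REPLACE FUNCTION s1.sync_t1_to_agg() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- Forward each newly inserted row into the aggregate copy
    INSERT INTO s_agg.t1 SELECT NEW.*;
    RETURN NEW;
END;
$$;

CREATE TRIGGER t1_to_agg
AFTER INSERT ON s1.t1
FOR EACH ROW EXECUTE FUNCTION s1.sync_t1_to_agg();
```

With this in place, every insert into s1.t1 also lands in s_agg.t1 within the same transaction.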

Return, but not write on INSERT Trigger Function with table split (postgres)

Good day,
I have the following problem:
I maintain a database with a huge table whose contents are split/clustered into daily tables.
To do so, I have a trigger function that inserts the data into the correct table.
This is my trigger:
CREATE OR REPLACE FUNCTION "public"."insert_example"()
RETURNS "pg_catalog"."trigger" AS $BODY$
DECLARE
_tabledate text;
_tablename text;
_start_date text;
_end_date text;
BEGIN
--Takes the current inbound "time" value and determines when midnight is for the given date
_tabledate := to_char(NEW."insert_date", 'YYYYMMDD');
_start_date := to_char(NEW."insert_date", 'YYYY-MM-DD 00:00:00');
_end_date := to_char(NEW."insert_date", 'YYYY-MM-DD 23:59:59');
_tablename := 'zz_example_'||_tabledate;
-- Check if the partition needed for the current record exists
PERFORM 1
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
AND c.relname = _tablename
AND n.nspname = 'public';
-- If the partition needed does not yet exist, then we create it:
-- Note that || is string concatenation (joining two strings to make one)
IF NOT FOUND THEN
EXECUTE '
CREATE TABLE IF NOT EXISTS "'||quote_ident(_tablename)||'" ( LIKE "example_table" INCLUDING ALL )
INHERITS ("example_table");
ALTER TABLE "'||quote_ident(_tablename)||'"
ADD FOREIGN KEY ("log_id") REFERENCES "foreign_example1" ("log_id") ON DELETE SET NULL,
ADD FOREIGN KEY ("detail_log_id") REFERENCES "foreign_example2" ("log_id") ON DELETE SET NULL,
ADD FOREIGN KEY ("image_id") REFERENCES "foreign_example3" ("image_id") ON DELETE SET NULL,
ADD CONSTRAINT date_check CHECK ("insert_date" >= timestamptz '||quote_literal(_start_date)||' and "insert_date" <= timestamptz '||quote_literal(_end_date)||');
CREATE TRIGGER "'||quote_ident(_tablename)||'_create_alarm" AFTER INSERT ON "'||quote_ident(_tablename)||'"
FOR EACH ROW
EXECUTE PROCEDURE "public"."external_trigger_example"();
';
END IF;
--
-- Insert the current record into the correct partition, which we are sure will now exist.
EXECUTE 'INSERT INTO public.' || quote_ident(_tablename) || ' VALUES ($1.*)' USING NEW;
RETURN NULL;
--RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100
This trigger actually works as expected, but this part is a problem:
RETURN NULL;
--RETURN NEW;
Whatever I return here is written to the table, together with the content written by the contained INSERT query!
So if I returned NEW, I would write duplicates.
This is no problem as long as I keep RETURN NULL, but then I am unable to do any INSERT ... RETURNING queries (as nothing is returned, of course).
So my question is:
How could I return the just-inserted ID without creating duplicates?
Or how could I write this trigger so that it returns something, but does not actually insert it (only using the contained INSERT from the trigger)?
Thanks for any help!

Limit inserts to 6 rows per id

I am learning Hibernate by creating a basic console app, using Oracle as the back end. I have a table where, if a student enters a 7th record, he should not be permitted to do so. How do I do this?
Well, besides triggers, you can create a materialized view and then a check constraint:
create materialized view log on test_table;
create materialized view mv_test_table
refresh FAST on COMMIT
ENABLE QUERY REWRITE
as
select id, count(*) cnts
from test_table
group by id;
alter table mv_test_table
add constraint check_cnts
check (cnts < 7);
You can also use a simple trigger (on the condition that your table has an ID column):
create or replace trigger trg_limit_row
after insert on your_table
for each row
begin
-- assumes that id is in the range 0-5, i.e. 6 rows per id
-- note: DML on the same table from a row-level trigger can raise
-- ORA-04091 (mutating table)
if :new.id > 5 then
execute immediate 'delete from your_table t where t.id = :1'
using :new.id;
end if;
end;
/

PostgreSQL - Shared temp table between functions

I want to know if it is possible to share a temporary table between functions that are called in a "main function", like this:
-- some sub function
create or replace function up_sub_function (str text)
returns table (id int, descr text) as $$
begin
return query select * from temp_table where descr like concat('%', str , '%');
end; $$
language plpgsql;
-- main function
create or replace function up_main_function ()
returns table (id int, descr text) as $$
begin
create temporary table if not exists temp_table (
id int,
descr text
);
insert into temp_table select id, descr from test_table;
return query select * from up_sub_function('a');
end; $$
language plpgsql;
BEGIN;
select * from up_main_function();
drop table temp_table;
COMMIT;
Can you show me the correct way to achieve this? I want to be able to populate a temporary table and then filter rows by calling other functions inside the main function.
Thanks and happy programming! :)
See the documentation: https://www.postgresql.org/docs/current/static/sql-createtable.html
Temp tables are valid for the entire session, that is, as long as you stay connected to the database.
In your case you only need it during the transaction, so you should create it with ON COMMIT DROP:
create temporary table if not exists temp_table (
id int,
descr text
) ON COMMIT DROP;
Once you have created the table, you can use it within any function in the current transaction.
You do not need BEGIN to start the transaction; a transaction is automatically started when the outer function is called.
Nested function calls share the same transaction, so they all see the table.
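Putting that together, a sketch of the question's two functions with ON COMMIT DROP (test_table and the column names are taken from the question):

```sql
create or replace function up_sub_function (str text)
returns table (id int, descr text) as $$
begin
    -- Reads the temp table created by the caller in the same transaction
    return query select t.id, t.descr
                 from temp_table t
                 where t.descr like concat('%', str, '%');
end; $$
language plpgsql;

create or replace function up_main_function ()
returns table (id int, descr text) as $$
begin
    create temporary table if not exists temp_table (
        id int,
        descr text
    ) on commit drop;
    insert into temp_table select t.id, t.descr from test_table t;
    return query select * from up_sub_function('a');
end; $$
language plpgsql;

-- The temp table lives for the duration of the transaction and is
-- dropped automatically at commit; no manual DROP TABLE is needed:
select * from up_main_function();
```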