Using PostgreSQL 9.0.4
Below is a structure very similar to that of my table:
CREATE TABLE departamento
(
id bigserial NOT NULL,
master_fk bigint,
nome character varying(100) NOT NULL,
CONSTRAINT departamento_pkey PRIMARY KEY (id),
CONSTRAINT departamento_master_fk_fkey FOREIGN KEY (master_fk)
REFERENCES departamento (id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION
)
And the function I created:
CREATE OR REPLACE FUNCTION fn_retornar_dptos_ate_raiz(bigint[])
RETURNS bigint[] AS
$BODY$
DECLARE
lista_ini_dptos ALIAS FOR $1;
dp_row departamento%ROWTYPE;
dpto bigint;
retorno_dptos bigint[];
BEGIN
BEGIN
PERFORM id FROM tbl_temp_dptos;
EXCEPTION
WHEN undefined_table THEN
EXECUTE 'CREATE TEMPORARY TABLE tbl_temp_dptos (id bigint NOT NULL) ON COMMIT DELETE ROWS';
END;
FOR i IN array_lower(lista_ini_dptos, 1)..array_upper(lista_ini_dptos, 1) LOOP
SELECT id, master_fk INTO dp_row FROM departamento WHERE id=lista_ini_dptos[i];
IF dp_row.id IS NOT NULL THEN
EXECUTE 'INSERT INTO tbl_temp_dptos VALUES ($1)' USING dp_row.id;
WHILE dp_row.master_fk IS NOT NULL LOOP
dpto := dp_row.master_fk;
SELECT id, master_fk INTO dp_row FROM departamento WHERE id=lista_ini_dptos[i];
EXECUTE 'INSERT INTO tbl_temp_dptos VALUES ($1)' USING dp_row.id;
END LOOP;
END IF;
END LOOP;
RETURN ARRAY(SELECT id FROM tbl_temp_dptos);
END;
$BODY$
LANGUAGE plpgsql VOLATILE
If you have any questions about the names, I can translate them.
What is the idea of the function? I first check whether the temporary table already exists (the PERFORM), and when the exception occurs I create the temporary table.
Then I take each element of the array and use it to fetch the id and master_fk of a department. If the lookup succeeds (the check that id is not null is probably unnecessary), I insert the id into the temporary table and start a second loop.
The second loop is intended to collect all parents of the department found in the previous steps (i.e., pick a department and insert it into the temporary table).
When the second loop ends, control returns to the first one. When that one ends, I return a bigint[] with what was recorded in the temporary table.
My problem is that the function returns me the same list I provide. What am I doing wrong?
There is a lot I would do differently, and to great effect.
Table definition
Starting with the table definition and naming conventions. These are mostly just opinions:
CREATE TEMP TABLE conta (conta_id bigint primary key, ...);
CREATE TEMP TABLE departamento (
dept_id serial PRIMARY KEY
, master_id int REFERENCES departamento (dept_id)
, conta_id bigint NOT NULL REFERENCES conta (conta_id)
, nome text NOT NULL
);
Major points
Are you sure you need a bigserial for departments? There are hardly that many on this planet. A plain serial should suffice.
I hardly ever use character varying with a length restriction. Unlike some other RDBMS, Postgres gains no performance whatsoever from such a restriction. Add a CHECK constraint if you really need to enforce a maximum length (see the sketch after these points). I mostly just use text and save myself the trouble.
I suggest a naming convention where the foreign key column shares the name with the referenced column, so master_id instead of master_fk, etc. It also allows the use of USING in joins.
And I rarely use the non-descriptive column name id. Using dept_id instead here.
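To illustrate the CHECK and USING points, a minimal sketch building on the definitions above (the limit of 100 is just the example length from the question):
ALTER TABLE departamento
   ADD CONSTRAINT nome_max_len CHECK (char_length(nome) <= 100);  -- only if you must cap the length

SELECT d.nome, c.*
FROM   departamento d
JOIN   conta c USING (conta_id);  -- works because both tables name the column conta_id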
PL/pgSQL function
It can be largely simplified to:
CREATE OR REPLACE FUNCTION f_retornar_plpgsql(lista_ini_depts VARIADIC int[])
RETURNS int[] AS
$func$
DECLARE
_row departamento; -- %ROWTYPE is just noise
BEGIN
IF NOT EXISTS ( -- simpler in 9.1+, see below
SELECT FROM pg_catalog.pg_class
WHERE relnamespace = pg_my_temp_schema()
AND relname = 'tbl_temp_dptos') THEN
CREATE TEMP TABLE tbl_temp_dptos (dept_id bigint NOT NULL)
ON COMMIT DELETE ROWS;
END IF;
FOR i IN array_lower(lista_ini_depts, 1) -- simpler in 9.1+, see below
.. array_upper(lista_ini_depts, 1) LOOP
SELECT * INTO _row -- since rowtype is defined, * is best
FROM departamento
WHERE dept_id = lista_ini_depts[i];
CONTINUE WHEN NOT FOUND;
INSERT INTO tbl_temp_dptos VALUES (_row.dept_id);
LOOP
SELECT * INTO _row
FROM departamento
WHERE dept_id = _row.master_id;
EXIT WHEN NOT FOUND;
INSERT INTO tbl_temp_dptos
SELECT _row.dept_id
WHERE NOT EXISTS (
SELECT FROM tbl_temp_dptos
WHERE dept_id = _row.dept_id);
END LOOP;
END LOOP;
RETURN ARRAY(SELECT dept_id FROM tbl_temp_dptos);
END
$func$ LANGUAGE plpgsql;
Call:
SELECT f_retornar_plpgsql(2, 5);
Or:
SELECT f_retornar_plpgsql(VARIADIC '{2,5}');
ALIAS FOR $1 is outdated syntax and discouraged. Use function parameters instead.
The VARIADIC parameter makes it more convenient to call. Related:
Pass multiple values in single parameter
You don't need EXECUTE for queries without dynamic elements. Nothing to gain here.
You don't need exception handling to create a table. Quoting the manual here:
Tip: A block containing an EXCEPTION clause is significantly more
expensive to enter and exit than a block without one. Therefore, don't
use EXCEPTION without need.
Postgres 9.1 or later has CREATE TEMP TABLE IF NOT EXISTS. I use a workaround for 9.0 to conditionally create the temp table.
Postgres 9.1 also offers FOREACH to loop through arrays.
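For illustration, on 9.1+ the two pieces above could look like this (a sketch; _dept_id is a hypothetical loop variable that would have to be declared):
CREATE TEMP TABLE IF NOT EXISTS tbl_temp_dptos (dept_id bigint NOT NULL)
ON COMMIT DELETE ROWS;

FOREACH _dept_id IN ARRAY lista_ini_depts LOOP
   -- same body as the indexed FOR loop in the function above
END LOOP;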
All that said, here comes the bummer: you don't need most of this.
SQL function with rCTE
Even in Postgres 9.0, a recursive CTE makes this a whole lot simpler:
CREATE OR REPLACE FUNCTION f_retornar_sql(lista_ini_depts VARIADIC int[])
RETURNS int[] AS
$func$
WITH RECURSIVE cte AS (
SELECT dept_id, master_id
FROM unnest($1) AS t(dept_id)
JOIN departamento USING (dept_id)
UNION ALL
SELECT d.dept_id, d.master_id
FROM cte
JOIN departamento d ON d.dept_id = cte.master_id
)
SELECT ARRAY(SELECT DISTINCT dept_id FROM cte) -- distinct values
$func$ LANGUAGE sql;
Same call.
Closely related answer with explanation:
Tree Structure and Recursion
SQL Fiddle demonstrating both.
I managed to fix my code. At the end of this answer is its final form, and any suggestions for improvement are welcome. Here are the changes:
1 - I provided the essential structure of my table, but in reality it is much bigger. Before the master_fk field there is a field called account_fk, and because the variable is declared as dp_row departamento%ROWTYPE, the entire structure of my table is copied into the variable. So if I fill only the first two fields, i.e. id and account_fk, then master_fk, which is the third field, will be null.
2 - @Nicolas was right: I ended up using the variable dpto for the second loop. I had forgotten to fill it inside the loop, and also to use it in the lookup done within the loop.
3 - I added an IF statement to make sure there would be no duplicates in the temporary table.
Here is the corrected structure of my table:
CREATE TABLE departamento
(
id bigserial NOT NULL,
account_fk bigint NOT NULL,
master_fk bigint,
nome character varying(100) NOT NULL,
CONSTRAINT departamento_pkey PRIMARY KEY (id),
CONSTRAINT departamento_account_fk_fkey FOREIGN KEY (account_fk)
REFERENCES conta (id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION,
CONSTRAINT departamento_master_fk_fkey FOREIGN KEY (master_fk)
REFERENCES departamento (id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION
)
My function as it is now:
CREATE OR REPLACE FUNCTION fn_retornar_dptos_ate_raiz(bigint[]) RETURNS bigint[] AS
$BODY$
DECLARE
lista_ini_dptos ALIAS FOR $1;
dp_row departamento%ROWTYPE;
dpto bigint;
BEGIN
BEGIN
PERFORM id FROM tbl_temp_dptos;
EXCEPTION
WHEN undefined_table THEN
EXECUTE 'CREATE TEMPORARY TABLE tbl_temp_dptos (id bigint NOT NULL) ON COMMIT DELETE ROWS';
END;
FOR i IN array_lower(lista_ini_dptos, 1)..array_upper(lista_ini_dptos, 1) LOOP
SELECT id, account_fk, master_fk INTO dp_row FROM departamento WHERE id=lista_ini_dptos[i];
EXECUTE 'INSERT INTO tbl_temp_dptos VALUES ($1)' USING dp_row.id;
dpto := dp_row.master_fk;
-- RAISE NOTICE 'dp_row: (%); ', dp_row.master_fk;
WHILE dpto IS NOT NULL LOOP
SELECT id, account_fk, master_fk INTO dp_row FROM departamento WHERE id=dpto;
IF NOT(select exists(select 1 from tbl_temp_dptos where id=dp_row.id limit 1)) THEN
EXECUTE 'INSERT INTO tbl_temp_dptos VALUES ($1)' USING dp_row.id;
END IF;
dpto := dp_row.master_fk;
-- RAISE NOTICE 'dp_row: (%); ', dp_row.master_fk;
END LOOP;
END LOOP;
RETURN ARRAY(SELECT id FROM tbl_temp_dptos);
END;
$BODY$
LANGUAGE plpgsql VOLATILE
Related
I have a table
CREATE TABLE items(
id SERIAL PRIMARY KEY,
group_id INT NOT NULL,
item_id INT NOT NULL,
name TEXT,
.....
.....
);
I am creating a function that
takes a set of row values for a single group_id, failing if multiple group_ids are present in the input rows
compares them with the matching values in the table (only for that group_id)
updates changed values (only for the input group_id)
inserts new values
deletes table rows that are absent from the input rows (comparing rows by group_id and item_id) (only for the input group_id)
This is my function definition:
CREATE OR REPLACE FUNCTION update_items(rows_input items[]) RETURNS boolean as $$
DECLARE
rows items[];
group_id_input integer;
BEGIN
-- get single group_id from input rows, fail if multiple group_id's present in input
-- read items of that group_id in table
-- compare input rows and table rows (of the same group_id)
-- create transaction
-- delete absent rows
-- upsert
-- return success of transaction (boolean)
END;
$$ LANGUAGE plpgsql;
I am trying to call the function in a query
select update_items(
(38,1,1283,"Name1"),
(39,1,1471,"Name2"),
(40,1,1333,"Name3")
);
I get the following error
Failed to run sql query: column "Name1" does not exist
I tried removing the id column values: that gives me the same error
What is the correct way to pass row values to a function that accepts table type array as arguments?
updates changed values
inserts new values
deletes table rows that are absent in the row input (compare rows with group_id and item_id)
If you want to do an upsert, you must upsert against a unique constraint.
So there are two unique constraints: the primary key (id) and (group_id, item_id).
INSERT ... ON CONFLICT needs to consider both unique constraints.
Since you want to pass the items[] type to the function, this also means that any id that is not in the input arguments will be deleted.
drop table if exists items cascade;
begin;
CREATE TABLE items(
id bigint GENERATED BY DEFAULT as identity PRIMARY KEY,
group_id INT NOT NULL,
item_id INT NOT NULL,
name TEXT
,unique(group_id,item_id)
);
insert into items values
(38,1,1283,'original_38'),
(39,1,1471,'original_39'),
(40,1,1333,'original_40'),
(42,1,1332,'original_42');
end;
main function:
CREATE OR REPLACE FUNCTION update_items (in_items items[])
RETURNS boolean
AS $FUNC$
DECLARE
iter items;
saved_ids bigint[];
BEGIN
saved_ids := (SELECT ARRAY (SELECT (unnest(in_items)).id));
DELETE FROM items
WHERE NOT (id = ANY (saved_ids));
FOREACH iter IN ARRAY in_items LOOP
INSERT INTO items
SELECT
iter.*
ON CONFLICT (id)
DO NOTHING;
INSERT INTO items
SELECT
iter.*
ON CONFLICT (group_id,
item_id)
DO UPDATE SET
name = EXCLUDED.name;
RAISE NOTICE 'rec.groupid: %, rec.items_id:%', iter.group_id, iter.item_id;
END LOOP;
RETURN TRUE;
END
$FUNC$
LANGUAGE plpgsql;
call it:
SELECT
*
FROM
update_items ('{"(38, 1, 1283, Name1) "," (39, 1, 1471, Name2) "," (40, 1, 1333, Name3)"}'::items[]);
references:
Iterating over integer[] in PL/pgSQL
How to match elements in an array of composite type?
IN vs ANY operator in PostgreSQL
Here's how I achieved UPSERT with DELETE missing rows, if anyone is looking to do the same.
CREATE OR REPLACE FUNCTION update_items(in_rows items[]) RETURNS INT AS $$
DECLARE
in_groups INTEGER[];
in_group_id INTEGER;
in_item_ids INTEGER[];
BEGIN
-- get single group id from input rows, fail if multiple group ids present in input
in_groups = (SELECT ARRAY (SELECT distinct(group_id) FROM UNNEST(in_rows)));
IF ARRAY_LENGTH(in_groups,1)>1 THEN
RAISE EXCEPTION 'Multiple group_ids found in input items: %', in_groups;
END IF;
in_group_id = in_groups[1];
-- delete items of this group that are absent in in_rows
in_item_ids := (SELECT ARRAY (SELECT (UNNEST(in_rows)).item_id));
DELETE FROM items
WHERE
item_id <> ALL (in_item_ids)
AND group_id = in_group_id;
-- upsert in_rows
INSERT INTO items
SELECT * FROM UNNEST(in_rows)
ON CONFLICT (group_id, item_id)
DO UPDATE SET
parent_group_id = EXCLUDED.parent_group_id,
mat_centre_id = EXCLUDED.mat_centre_id,
NAME = EXCLUDED.NAME,
opening_date = EXCLUDED.opening_date;
RETURN in_group_id;
-- return success of transaction (boolean)
END;
$$ LANGUAGE plpgsql;
This function removes rows that are missing from your in_rows
I have a table that looks like this:
CREATE TABLE label (
hid UUID PRIMARY KEY DEFAULT UUID_GENERATE_V4(),
name TEXT NOT NULL UNIQUE
);
I want to create a function that takes a list of names and inserts multiple rows into the table, ignoring duplicate names, and returns an array of the IDs generated for the rows it inserted.
This works:
CREATE OR REPLACE FUNCTION insert_label(nms TEXT[])
RETURNS UUID[]
AS $$
DECLARE
ids UUID[];
BEGIN
CREATE TEMP TABLE tmp_names(name TEXT);
INSERT INTO tmp_names SELECT UNNEST(nms);
WITH new_names AS (
INSERT INTO label(name)
SELECT tn.name
FROM tmp_names tn
WHERE NOT EXISTS(SELECT 1 FROM label h WHERE h.name = tn.name)
RETURNING hid
)
SELECT ARRAY_AGG(hid) INTO ids
FROM new_names;
DROP TABLE tmp_names;
RETURN ids;
END;
$$ LANGUAGE PLPGSQL;
I have many tables with the exact same columns as the label table, so I would like to have a function that can insert into any of them. I'd like to create a dynamic query to do that. I tried that, but this does not work:
CREATE OR REPLACE FUNCTION insert_label(h_tbl REGCLASS, nms TEXT[])
RETURNS UUID[]
AS $$
DECLARE
ids UUID[];
query_str TEXT;
BEGIN
CREATE TEMP TABLE tmp_names(name TEXT);
INSERT INTO tmp_names SELECT UNNEST(nms);
query_str := FORMAT('WITH new_names AS ( INSERT INTO %1$I(name) SELECT tn.name FROM tmp_names tn WHERE NOT EXISTS(SELECT 1 FROM %1$I h WHERE h.name = tn.name) RETURNING hid)', h_tbl);
EXECUTE query_str;
SELECT ARRAY_AGG(hid) INTO ids FROM new_names;
DROP TABLE tmp_names;
RETURN ids;
END;
$$ LANGUAGE PLPGSQL;
This is the output I get when I run that function:
psql=# select insert_label('label', array['how', 'now', 'brown', 'cow']);
ERROR: syntax error at end of input
LINE 1: ...SELECT 1 FROM label h WHERE h.name = tn.name) RETURNING hid)
^
QUERY: WITH new_names AS ( INSERT INTO label(name) SELECT tn.name FROM tmp_names tn WHERE NOT EXISTS(SELECT 1 FROM label h WHERE h.name = tn.name) RETURNING hid)
CONTEXT: PL/pgSQL function insert_label(regclass,text[]) line 19 at EXECUTE
The query generated by the dynamic SQL looks like it should be exactly the same as the query from static SQL.
I got the function to work by changing the return value from an array of UUIDs to a table of UUIDs and not using CTE:
CREATE OR REPLACE FUNCTION insert_label(h_tbl REGCLASS, nms TEXT[])
RETURNS TABLE (hid UUID)
AS $$
DECLARE
query_str TEXT;
BEGIN
CREATE TEMP TABLE tmp_names(name TEXT);
INSERT INTO tmp_names SELECT UNNEST(nms);
query_str := FORMAT('INSERT INTO %1$I(name) SELECT tn.name FROM tmp_names tn WHERE NOT EXISTS(SELECT 1 FROM %1$I h WHERE h.name = tn.name) RETURNING hid', h_tbl);
RETURN QUERY EXECUTE query_str;
DROP TABLE tmp_names;
RETURN;
END;
$$ LANGUAGE PLPGSQL;
I don't know if one way is better than the other, returning an array of UUIDs or a table of UUIDs, but at least I got it to work one of those ways. Plus, possibly not using a CTE is more efficient, so it may be better to stick with the version that returns a table of UUIDs.
What I would like to know is why the dynamic query did not work when using a CTE. The query it produced looked like it should have worked.
If anyone can let me know what I did wrong, I would appreciate it.
... why the dynamic query did not work when using a CTE. The query it produced looked like it should have worked.
No, it was only the CTE without the (required) outer query. (You had SELECT ARRAY_AGG(hid) INTO ids FROM new_names in the static version.)
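To make the dynamic version work as written, the CTE would have needed its own outer query inside the EXECUTE string, roughly like this (a sketch, keeping the rest of your original string as it was):
query_str := FORMAT('WITH new_names AS (
    INSERT INTO %1$I(name)
    SELECT tn.name FROM tmp_names tn
    WHERE NOT EXISTS (SELECT 1 FROM %1$I h WHERE h.name = tn.name)
    RETURNING hid
 )
 SELECT ARRAY_AGG(hid) FROM new_names', h_tbl);
EXECUTE query_str INTO ids;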
There are more problems, but just use this query instead:
INSERT INTO label(name)
SELECT unnest(nms)
ON CONFLICT DO NOTHING
RETURNING hid;
label.name is defined UNIQUE NOT NULL, so this simple UPSERT can replace your function insert_label() completely.
It's much simpler and faster. It also defends against possible duplicates from within your input array that you didn't cover, yet. And it's safe under concurrent write load - as opposed to your original, which might run into race conditions. Related:
How to use RETURNING with ON CONFLICT in PostgreSQL?
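If you do need the generated IDs collected into an array, a data-modifying CTE can wrap that same statement; a sketch with literal input values:
WITH ins AS (
   INSERT INTO label(name)
   SELECT unnest('{how,now,brown,cow}'::text[])
   ON CONFLICT DO NOTHING
   RETURNING hid
   )
SELECT array_agg(hid) AS ids
FROM   ins;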
I would just use the simple query and replace the table name.
But if you still want a dynamic function:
CREATE OR REPLACE FUNCTION insert_label(_tbl regclass, _nms text[])
RETURNS TABLE (hid uuid)
LANGUAGE plpgsql AS
$func$
BEGIN
RETURN QUERY EXECUTE format(
$$
INSERT INTO %s(name)
SELECT unnest($1)
ON CONFLICT DO NOTHING
RETURNING hid
$$, _tbl)
USING _nms;
END
$func$;
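A call might look like this, assuming the label table and sample names from the question:
SELECT * FROM insert_label('label', '{how,now,brown,cow}');

-- or, if you really want an array:
SELECT ARRAY(SELECT hid FROM insert_label('label', '{how,now,brown,cow}'));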
If you don't need an array as result, stick with the set (RETURNS TABLE ...). Simpler.
Pass values (_nms) to EXECUTE in a USING clause.
The tablename (_tbl) is type regclass, so the format specifier %I for format() would be wrong. Use %s instead. See:
Table name as a PostgreSQL function parameter
Good day,
I have the following problem:
I maintain a database with a huge table whose contents are split/clustered into daily tables.
To do so, I have a trigger function which inserts the data into the correct table.
This is my trigger:
CREATE OR REPLACE FUNCTION "public"."insert_example"()
RETURNS "pg_catalog"."trigger" AS $BODY$
DECLARE
_tabledate text;
_tablename text;
_start_date text;
_end_date text;
BEGIN
--Takes the current inbound "time" value and determines when midnight is for the given date
_tabledate := to_char(NEW."insert_date", 'YYYYMMDD');
_start_date := to_char(NEW."insert_date", 'YYYY-MM-DD 00:00:00');
_end_date := to_char(NEW."insert_date", 'YYYY-MM-DD 23:59:59');
_tablename := 'zz_example_'||_tabledate;
-- Check if the partition needed for the current record exists
PERFORM 1
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
AND c.relname = _tablename
AND n.nspname = 'public';
-- If the partition needed does not yet exist, then we create it:
-- Note that || is string concatenation (joining two strings to make one)
IF NOT FOUND THEN
EXECUTE '
CREATE TABLE IF NOT EXISTS "'||quote_ident(_tablename)||'" ( LIKE "example_table" INCLUDING ALL )
INHERITS ("example_table");
ALTER TABLE "'||quote_ident(_tablename)||'"
ADD FOREIGN KEY ("log_id") REFERENCES "foreign_example1" ("log_id") ON DELETE SET NULL,
ADD FOREIGN KEY ("detail_log_id") REFERENCES "foreign_example2" ("log_id") ON DELETE SET NULL,
ADD FOREIGN KEY ("image_id") REFERENCES "foreign_example3" ("image_id") ON DELETE SET NULL,
ADD CONSTRAINT date_check CHECK ("insert_date" >= timestamptz '||quote_literal(_start_date)||' and "insert_date" <= timestamptz '||quote_literal(_end_date)||');
CREATE TRIGGER "'||quote_ident(_tablename)||'_create_alarm" AFTER INSERT ON "'||quote_ident(_tablename)||'"
FOR EACH ROW
EXECUTE PROCEDURE "public"."external_trigger_example"();
';
END IF;
--
-- Insert the current record into the correct partition, which we are sure will now exist.
EXECUTE 'INSERT INTO public.' || quote_ident(_tablename) || ' VALUES ($1.*)' USING NEW;
RETURN NULL;
--RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100
This Trigger actually works as expected, but this part is a problem:
RETURN NULL;
--RETURN NEW;
Whatever I return here is written to the table, together with the content written by the contained INSERT query!
So if I returned NEW, I would write duplicates.
This is no problem as long as I keep RETURN NULL, but then I am unable to do any INSERT ... RETURNING queries (as nothing is returned, of course).
So my question is:
How could I return the just-inserted ID without creating duplicates?
Or how could I write this trigger to return something, but not actually insert it (only use the contained INSERT from the trigger)?
Thanks for any help!
Say I have a unique column of VarChar(32).
ex. 13bfa574e23848b68f1b7b5ff6d794e1.
I want to preserve the uniqueness of this while converting the column to int. I figure I can convert all of the letters to their ascii equivalent, while retaining the numbers and character position. To do this, I will use the translate function.
pseudo code: select translate(uid, '[^0-9]', ascii('[^0-9]'))
My issue is finding all of the letters in the VarChar column originally.
I've tried
select uid, substring(uid from '[^0-9]') from test_table;
But it only returns the first letter it encounters. Using the above example, I would be looking for bfaebfbbffde
Any help is appreciated!
First off, I agree with the two commenters who said you should use a UID datatype.
That aside...
Your UID looks like a traditional one, in that it's not alphanumeric, it's hex. If this is the case, you can convert the hex to the numeric value using this solution:
PostgreSQL: convert hex string of a very large number to a NUMERIC
Notice the accepted solution (mine, shame) is not as good as the other solution listed, as mine will not work for hex values this large.
That said, yikes, what a huge number. Holy smokes.
Depending on how many records are in your table and the frequency of insert/update, I would consider a radically different approach. In a nutshell, I would create another column to store your numeric ID whose value would be determined by a sequence.
If you really want to make it bulletproof, you can also create a cross-reference table to store the relationships that would:
Reuse an ID if it ever repeated (I know UIDs don't, but this would cover cases where a record is deleted by mistake, re-appears, and you want to retain the original id)
If UIDs repeat (like this is a child table with multiple records per UID), it would cover that case as well
If neither of these apply, you could dumb it down quite a bit.
The solution would look something like this:
Add an ID column that will be your numeric equivalent to the UID:
alter table test_table
add column id bigint
Create a sequence:
CREATE SEQUENCE test_id
Create a cross-reference table (again, not necessary for the dumbed-down version):
create table test_id_xref (
uid varchar(32) not null,
id bigint not null,
constraint test_id_xref_pk primary key (uid)
)
Then do a one-time update to assign a surrogate ID to each UID for both the cross-reference and actual tables:
insert into test_id_xref
with uids as (
select distinct uid
from test_table
)
select uid, nextval ('test_id')
from uids;
update test_table tt
set id = x.id
from test_id_xref x
where tt.uid = x.uid;
And finally, for all future inserts, create a trigger to assign the next value:
CREATE OR REPLACE FUNCTION test_table_insert_trigger()
RETURNS trigger AS
$BODY$
BEGIN
select t.id
from test_id_xref t
into NEW.id
where t.uid = NEW.uid;
if NEW.id is null then
NEW.id := nextval('test_id');
insert into test_id_xref values (NEW.uid, NEW.id);
end if;
return NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
CREATE TRIGGER insert_test_table_trigger
BEFORE INSERT
ON test_table
FOR EACH ROW
EXECUTE PROCEDURE test_table_insert_trigger();
Create a function which replaces the characters you don't need in the string with blanks:
DELIMITER $$
CREATE FUNCTION replace_char(v_string VARCHAR(32) CHARSET utf8) RETURNS VARCHAR(32)
DETERMINISTIC
BEGIN
DECLARE v_return_string VARCHAR(32) DEFAULT '';
DECLARE v_remove_char VARCHAR(200) DEFAULT '1,2,3,4,5,6,7,8,9,0';
DECLARE v_length, j INT(3) DEFAULT 0;
SET v_length = LENGTH(v_string);
WHILE(j < v_length) DO
IF ( FIND_IN_SET( SUBSTR(v_string, (j+1), 1), v_remove_char ) = 0) THEN
SET v_return_string = CONCAT(v_return_string, SUBSTR(v_string, (j+1), 1) );
END IF;
SET j = j+1;
END WHILE;
RETURN v_return_string;
END$$
DELIMITER ;
Now you just need to call this function in a query:
select uid, replace_char(uid) from test_table;
It will give you the string you need (bfaebfbbffde).
If you want an integer number only, i.e. 13574238486817567941, then change the value of the variable, and also change the column datatype to decimal(50,0); decimal can store large numbers, and with 0 decimal places it will store the integer value as a decimal.
v_remove_char = 'a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z';
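Note that the function above uses MySQL syntax (FIND_IN_SET, DELIMITER and so on). In Postgres, which the question is about, the non-digit characters can be extracted with a single regexp_replace call; a minimal sketch:
select uid, regexp_replace(uid, '[0-9]', '', 'g') as letters_only
from test_table;
-- for '13bfa574e23848b68f1b7b5ff6d794e1' this yields 'bfaebfbbffde'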
I'm using postgres.
Let's say I'm going to create the following trigger:
CREATE OR REPLACE FUNCTION auditlogfunc() RETURNS TRIGGER AS $example_table$
BEGIN
INSERT INTO AUDIT(EMP_ID, ENTRY_DATE) VALUES (new.ID, current_timestamp);
RETURN NEW;
END;
$example_table$ LANGUAGE plpgsql;
I want to use this trigger with many tables, and not every table has a primary key named 'id' (a table may have no 'id' column at all).
So I need to find out how to use the primary key in my trigger function no matter which column name it has.
How can I achieve this?
What's your PostgreSQL version? You must query the PostgreSQL catalog to achieve what you want. See the script below:
I'm assuming that your PostgreSQL has JSONB support (9.4+).
CREATE TABLE public.test_table
(
id BIGINT NOT NULL,
a_column TEXT NOT NULL,
CONSTRAINT test_tablepkey PRIMARY KEY (id)
);
CREATE OR REPLACE FUNCTION auditlogfunc() RETURNS TRIGGER AS $example_table$
DECLARE
reg_id JSONB;
affected_row JSON;
BEGIN
IF TG_OP IN('INSERT', 'UPDATE') THEN
affected_row := row_to_json(NEW);
ELSE
affected_row := row_to_json(OLD);
END IF;
--Get PK columns
--You may want to extract this to a SQL function
WITH pk_columns (attname) AS (
SELECT
CAST(a.attname AS TEXT)
FROM
pg_index i
JOIN pg_attribute a ON a.attrelid = i.indrelid AND a.attnum = ANY(i.indkey)
WHERE
i.indrelid = TG_RELID
AND i.indisprimary
)
SELECT
json_object_agg(key, value) INTO reg_id
FROM
json_each_text(affected_row)
WHERE
key IN(SELECT attname FROM pk_columns);
--Raise collected PK
RAISE INFO 'PK: %', reg_id;
--TODO: your insert into audit table goes here
RETURN NEW;
END;
$example_table$ LANGUAGE plpgsql;
CREATE TRIGGER tg_audit_cadprodu_row AFTER INSERT OR UPDATE OR DELETE
ON public.test_table FOR EACH ROW EXECUTE PROCEDURE public.auditlogfunc();
--A simple test
INSERT INTO test_table VALUES (CAST((random() * 10000) AS INTEGER), 'Test');
I took Michel Milezzi's answer and cleaned it up so it only returns what we need, with better performance in a more concise statement. Tested on Postgres v12.1. I imagine it would work on 9.4+.
CREATE OR REPLACE FUNCTION auditlogfunc() RETURNS TRIGGER AS $example_table$
DECLARE
col TEXT = (
SELECT attname
FROM pg_index
JOIN pg_attribute ON
attrelid = indrelid
AND attnum = ANY(indkey)
WHERE indrelid = TG_RELID AND indisprimary
);
BEGIN
INSERT INTO AUDIT(EMP_ID, ENTRY_DATE) VALUES ((row_to_json(NEW) ->> col), current_timestamp);
RETURN NEW;
END;
$example_table$ LANGUAGE plpgsql;
First I'm querying for the name of the table's primary key column, and storing it in the col variable. I filter the query by the special variable TG_RELID, which is the object ID of the table which caused the trigger. (See the Postgres Docs.)
Then we can insert into the audit table. All I've changed from the question is new.ID. Now that we know the name of the primary key column, we can convert the new row--the row that will be (or has been) inserted, updated, or deleted--to json, and select the value with the col variable. This should work for both BEFORE and AFTER triggers. If triggering on DELETE, change row_to_json(NEW) to row_to_json(OLD).