How to set a value in a variable column name? - sql

How do I set a value in a variable column name? For some context, I am writing a function to be used as a trigger which sets a variable column to a constant value. To be used as follows:
CREATE TRIGGER always_6_trigger
BEFORE INSERT
ON table
FOR EACH ROW
EXECUTE PROCEDURE always_6('col1');
The above would result in the following rows all having a col1 value of 6. So for example:
INSERT INTO table (col1, col2) VALUES (6, 2), (null, 9), (null, 10), (7, 2);
Would result in:
| col1 | col2 |
---------------
| 6 | 2 |
| 6 | 9 |
| 6 | 10 |
| 6 | 2 |
Or if using the following trigger:
CREATE TRIGGER always_6_trigger
BEFORE INSERT
ON table
FOR EACH ROW
EXECUTE PROCEDURE always_6('col2');
And the same insert:
INSERT INTO table (col1, col2) VALUES (6, 2), (null, 9), (null, 10), (7, 2);
The table would look like:
| col1 | col2 |
---------------
| 6 | 6 |
| null | 6 |
| null | 6 |
| 7 | 6 |
How would I write the always_6 function?
Edit: To better explain the use case, the constant value would be current_setting('user_id') (or something similar). And the column name would be things like author_id and user_id. The thinking being that a user could never insert data that was not their own.

You can define your function to produce dynamically generated SQL.
The EXECUTE command takes a string as input and executes it as SQL, so it would look something like this:
EXECUTE format('UPDATE mytable SET %I = %L WHERE condition', colname, constantvalue);
Here the format function prepares a string with the value of colname substituted in where the column name would go (%I quotes it as an identifier, %L quotes constantvalue as a literal). condition would be some valid WHERE clause to select the record to update.
If the value of colname could come from an external source (i.e. user-supplied data) then you would have to be very careful to validate it beforehand, otherwise you might create an SQL injection vector.
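If the goal is just the trigger from the question, dynamic SQL is not strictly required: the column name can be passed as a trigger argument (it arrives in TG_ARGV) and the NEW row rewritten through jsonb. A minimal sketch of that alternative, assuming PostgreSQL 9.5+ (for jsonb_build_object) and that the constant really is just 6:
CREATE OR REPLACE FUNCTION always_6()
RETURNS trigger AS
$$
BEGIN
    -- TG_ARGV[0] is the column name given in
    -- CREATE TRIGGER ... EXECUTE PROCEDURE always_6('col1');
    -- rebuild NEW with that one column forced to 6
    NEW := jsonb_populate_record(NEW, jsonb_build_object(TG_ARGV[0], 6));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;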

You can use a dynamic query for this, with a conditional check on the value passed to the function's input parameter.

I think I oversimplified the task based on the initial description, but would something like this work? A trigger function itself is declared without parameters (arguments given in CREATE TRIGGER only arrive through TG_ARGV), but you mentioned the value came from another function, current_setting('user_id'), so is it possible to roll the two concepts together like this?
CREATE OR REPLACE FUNCTION always_6()
RETURNS trigger AS
$BODY$
DECLARE
current_user_id varchar;
BEGIN
current_user_id := current_setting('user_id');
if current_user_id = 'test1' then
new.col_1 := 6;
elsif current_user_id = 'test2' then
new.col_2 := 6;
end if;
return NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
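Attaching it would then look something like this (mytable standing in for the reserved word table used in the question):
CREATE TRIGGER always_6_trigger
BEFORE INSERT ON mytable
FOR EACH ROW
EXECUTE PROCEDURE always_6();
Note that in most PostgreSQL versions a custom setting read with current_setting() needs a two-part name such as 'myapp.user_id', set beforehand with SET or set_config().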

Related

Copy value from one column into another during insert using postgreSQL

If I have a table like this:
CREATE TABLE mytable
(
id SERIAL,
content TEXT,
copyofid INTEGER
);
Is there a way to copy id into copyofid in a single insert statement?
I tried:
INSERT INTO mytable(content, copyofid) VALUES("test", id);
But that doesn't seem to work.
You can find the sequence behind your serial column using pg_get_serial_sequence() and access it using currval() to get what serial column just got as a result of your INSERT.
CREATE TABLE mytable
( id SERIAL,
content TEXT,
copyofid INTEGER
);
--this works for a single-record insert
INSERT INTO mytable
(content, copyofid)
VALUES
('test', currval(pg_get_serial_sequence('mytable','id')));
--inserting more, you'll have to handle both columns relying on the sequence
INSERT INTO mytable
( id,
content,
copyofid)
VALUES
( nextval(pg_get_serial_sequence('mytable','id')),
'test3',
currval(pg_get_serial_sequence('mytable','id'))),
( nextval(pg_get_serial_sequence('mytable','id')),
'test4',
currval(pg_get_serial_sequence('mytable','id')));
table mytable;
-- id | content | copyofid
------+---------+----------
-- 1 | test | 1
-- 2 | test3 | 2
-- 3 | test4 | 3
--(3 rows)
Fiddle
Edouard makes a fair point that if you can specify the conditions when you want this behaviour, you can add them to the definition:
CREATE TABLE mytable
( id SERIAL,
content TEXT,
copyofid integer
generated always as (
case when content ilike '%requires copying ID%' then id end)
stored
);
insert into mytable (content) values ('abc') returning *;
-- id | content | copyofid
------+---------+----------
-- 1 | abc |
--(1 row)
insert into mytable (content) values ('abc, but requires copying ID') returning *;
-- id | content | copyofid
------+------------------------------+----------
-- 2 | abc, but requires copying ID | 2
--(1 row)
If the conditions vary between inserts:
CREATE TABLE mytable
( id SERIAL,
content TEXT,
copyofid integer
generated always as (
case when should_copy_id then id end)
stored,
should_copy_id boolean default false
);
insert into mytable (content) values ('efg') returning *;
-- id | content | copyofid | should_copy_id
------+---------+----------+----------------
-- 1 | efg | | f
--(1 row)
insert into mytable (content,should_copy_id) values ('klm','today'::date<>'2022-10-28'::date) returning *;
-- id | content | copyofid | should_copy_id
------+---------+----------+----------------
-- 2 | klm | 2 | t
--(1 row)
The trigger will be better if
the check is fairly complex - generated columns are pretty limited in terms of definition complexity. For example, you can't use mutable functions in them - not even STABLE ones are accepted
you want to save the logic and change it later without having to drop the column each time and re-add it with a new definition (the only way to alter a generated column's definition)
as a part of the insert you'll want to do more than just copy the id column
The solution is to create a trigger function which is fired before inserting a new row in table mytable and which copies NEW.id into NEW.copyofid if a condition is true:
CREATE OR REPLACE FUNCTION before_insert_mytable() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
IF condition
THEN NEW.copyofid = NEW.id ;
END IF ;
RETURN NEW ;
END ; $$ ;
CREATE OR REPLACE TRIGGER before_insert_mytable BEFORE INSERT ON mytable
FOR EACH ROW EXECUTE FUNCTION before_insert_mytable() ;
The condition can also be stated directly in the WHEN clause of the trigger instead of in the function:
CREATE OR REPLACE FUNCTION before_insert_mytable() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
NEW.copyofid = NEW.id ;
RETURN NEW ;
END ; $$ ;
CREATE OR REPLACE TRIGGER before_insert_mytable BEFORE INSERT ON mytable
FOR EACH ROW
WHEN (condition)
EXECUTE FUNCTION before_insert_mytable() ;
see the manual

In Oracle, I want to use a sequence and not allow Insert on the column that uses the sequence

I want to make happen the same that happens when I do the following
CREATE TABLE "TEST1"
(
"ID" NUMBER(10,0) GENERATED ALWAYS AS IDENTITY,
"APPCODE" VARCHAR2(1)
);
Table TEST1 created.
INSERT INTO TEST1 (ID, APPCODE) VALUES (1,'A');
Error starting at line : 6 in command -
INSERT INTO TEST1 (ID, APPCODE) VALUES (1,'A')
Error at Command Line : 50 Column : 1
Error report -
SQL Error: ORA-32795: cannot insert into a generated always identity column
INSERT INTO TEST (APPCODE) VALUES ('A');
1 row inserted.
but I want to use named sequences, created by me. I want the same behavior as
when using the "ALWAYS" keyword (as in "GENERATED ALWAYS AS IDENTITY") and at the same time use my own named sequences, but I don't know how.
With named sequences, it seems to be impossible to prevent an INSERT from supplying the ID column. But maybe there is a way? This is the question I'm asking. Below I create a named sequence and show the difference (I can't figure out how to prevent the ID column from being used in the insert).
CREATE SEQUENCE SEQ_TEST2 START WITH 1 INCREMENT BY 1 MINVALUE 1 NOMAXVALUE;
Sequence SEQ_TEST2 created.
INSERT INTO TEST2 (APPCODE) VALUES ('A'); /* This is ok */
1 row inserted.
INSERT INTO TEST2 (ID,APPCODE) VALUES (1928,'A'); /* This is NOT ok */
1 row inserted.
The second insert above is what I want to prevent from happening; it shouldn't be possible to insert into the ID column. I don't care how it is prevented, it doesn't have to work the same way the "ALWAYS" keyword on the TEST1 table does, but I would like to prevent it from happening. Does anyone know how to do it?
When you define a column as an identity column, Oracle automatically creates a sequence; you just don't get to choose the name. You can view the name of the sequence that was created and will be used to populate the identity in the DATA_DEFAULT column of the ALL_TAB_COLS view.
SELECT owner,
table_name,
column_name,
data_default
FROM all_tab_cols
WHERE identity_column = 'YES';
Why do you think that while using IDENTITY you do not use a SEQUENCE?
Check the documentation or the example below
CREATE TABLE "TEST1"
(
"ID" NUMBER(10,0) GENERATED ALWAYS AS IDENTITY,
"APPCODE" VARCHAR2(1)
);
For this table Oracle creates a sequence for you under the cover:
EXPLAIN PLAN SET STATEMENT_ID = 'jara1' into plan_table FOR
insert into TEST1 (APPCODE) values ('x');
---
SELECT * FROM table(DBMS_XPLAN.DISPLAY('plan_table', 'jara1','ALL'));
-----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | 100 | 1 (0)| 00:00:01 |
| 1 | LOAD TABLE CONVENTIONAL | TEST1 | | | | |
| 2 | SEQUENCE | ISEQ$$_75209 | | | | |
-----------------------------------------------------------------------------------------
Or check the dictionary
select SEQUENCE_NAME from USER_TAB_IDENTITY_COLS
where table_name = 'TEST1';
SEQUENCE_NAME
---------------
ISEQ$$_75209
In identity_options you can define the sequence options.
By selecting ALWAYS or BY DEFAULT [ON NULL] you can adjust what is possible / not allowed in an insert (I'm not sure from your description what your aim is).
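For example, a sketch (the start/increment values are arbitrary) that keeps the ALWAYS behaviour while still controlling the settings of the underlying sequence:
CREATE TABLE "TEST3"
(
"ID" NUMBER(10,0) GENERATED ALWAYS AS IDENTITY
(START WITH 100 INCREMENT BY 10 CACHE 20 NOMAXVALUE),
"APPCODE" VARCHAR2(1)
);
With BY DEFAULT ON NULL instead of ALWAYS, an explicit value for ID would be accepted again, which is exactly what the question wants to avoid.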

How to return a custom record using return next

When I try to return a table, my results are showing up like this
generateClassRecord
--------------------
(james, 5)
(bob, 10)
rather than like this
+-------+----------+
| name | new_rank |
+-------+----------+
| james | 5 |
| bob | 10 |
| cole | 54 |
+-------+----------+
I am assuming I am not returning the rows properly. Could someone advise me how I can return my data? I tried using return query, but when I invoke my function it tells me the types do not match.
CREATE TABLE employees (
id integer primary key,
name text,
rank integer
);
create type empRecord as (
name text,
new_rank integer
);
INSERT INTO employees (id, name, rank) VALUES (1, 'james', 5), (2, 'bob', 10), (3, 'Cole', '54');
CREATE OR REPLACE FUNCTION
generateEmpRecord()
RETURNS empRecord
as $$
DECLARE
r empRecord;
BEGIN
FOR r IN
SELECT name, rank
FROM employees
LOOP
return next r;
END LOOP;
END;
$$ language plpgsql;
Your function should be declared as returns setof emprecord as you are returning multiple rows.
But the whole example can be simplified by using returns table - then you don't need the extra "return type". You also don't need PL/pgSQL for this. A (usually more efficient) language sql function is enough:
CREATE OR REPLACE FUNCTION generateemprecord()
RETURNS table(name text, new_rank int)
as $$
SELECT name, rank
FROM employees;
$$
language sql;
Online example
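If you would rather keep the return next loop, a sketch of the setof variant mentioned above (types and columns as in the question):
CREATE OR REPLACE FUNCTION generateEmpRecord()
RETURNS SETOF empRecord
as $$
DECLARE
r empRecord;
BEGIN
FOR r IN
SELECT name, rank
FROM employees
LOOP
return next r;
END LOOP;
RETURN;
END;
$$ language plpgsql;
Either way, invoke it as SELECT * FROM generateemprecord(); rather than SELECT generateemprecord(); -- that is what makes the result come back as separate columns instead of one composite value.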

How can I use data_type in where clause

Is there any way to use column data type in where clause in Oracle SQL? My table contains columns with different data types and, as a result, I want to have only columns with 'char' data type.
I need to check whether at least 1 value in every column is set to 'T'. Is there a better way to check this than using an if statement?
Thank you for help.
EDIT
To be more specific, I add simple table to represent the problem.
+----------+------------+------+------+--------+----------+--------+
| Datatype | Privileges | Open | Edit | Delete | Download | Upload |
+----------+------------+------+------+--------+----------+--------+
| PNG | Default | T | T | T | T | T |
| JPEG | Default | T | T | T | T | T |
| PDF | Default | T | F | F | T | T |
| DOCX | Default | T | T | F | T | T |
| PNG | Test | T | F | F | T | F |
| PDF | Test | T | F | F | T | F |
+----------+------------+------+------+--------+----------+--------+
Moreover, as I said, there are more columns, including some date columns. Let's say that the user has both privileges. I need to check which datatypes the user has access to and what he can do with each. I.e. this user can work with PNG files and execute every kind of operation, because of the Default privileges. The Test privileges offer some operations on PNG files too, but not every one of them. This is like a bitwise OR operation, I assume. If at least one is set to T, then the user can work with the file using that operation. There are more datatypes, privileges and columns; this is just a simple example.
I suppose that there is no fast way to find only CHAR columns, so I have to simply write every column in the SELECT.
Alex Poole has a good comment--if you already know which columns contain the T/F data, there is no need to query by data type, since you already would know which columns to check by name. You can just query by column name.
I'll provide an answer in case you do not already know which columns will need to be checked, but will include far below an example of static SQL as well. I'm not sure exactly what is meant by your second requirement, so I'll add a couple of examples that take different angles. These will use EXECUTE IMMEDIATE, as Gordon Linoff mentioned in a comment.
This first example interprets your requirements as: you don't know the table beforehand (otherwise you could just check its CHAR columns and query those directly), but want to check whether at least one row has a T for each of a given TABLE's CHAR columns (across rows).
The block takes a TABLE_NAME as a parameter, then builds a dynamic query that checks whether each COLUMN has at least one entry in the table with a value of T.
First create a test table with different data types including some CHAR:
CREATE TABLE HETEROGENEOUS (
CHAR_COL_1 CHAR(10),
NUMBER_COL_1 NUMBER,
CHAR_COL_2 CHAR(10),
TIMESTAMP_COL_1 TIMESTAMP,
CHAR_COL_3 CHAR(10)
);
Then add some test data. This first load has two of three columns with at least one T value, so will fail the test.
INSERT INTO HETEROGENEOUS VALUES ('Chewbacca', 1, 'VOLTRON', SYSTIMESTAMP, 'Gundam');
INSERT INTO HETEROGENEOUS VALUES ('T', 1, 'Frodo', SYSTIMESTAMP, 'Starscream');
INSERT INTO HETEROGENEOUS VALUES ('X', 1, 'Bombadil', SYSTIMESTAMP, 'T');
Then run the block. This block counts the number of CHAR columns, then executes a dynamic query to count how many columns have at least one row with a T value in each CHAR column and compares the count of T columns with the count of CHAR columns:
DECLARE
V_TABLE_NAME VARCHAR2(128) := 'HETEROGENEOUS';
V_SQL_TEXT VARCHAR2(32000);
V_REQUIRED_COLUMN_COUNT NUMBER := 0;
V_OK_COLUMN_COUNT NUMBER := 0;
BEGIN
EXECUTE IMMEDIATE
UTL_LMS.FORMAT_MESSAGE('SELECT COUNT(*) FROM USER_TAB_COLUMNS WHERE TABLE_NAME = ''%s'' AND DATA_TYPE = ''CHAR''',V_TABLE_NAME)
INTO V_REQUIRED_COLUMN_COUNT;
SELECT 'SELECT ' ||LISTAGG('(SELECT COALESCE(MIN(1),0) FROM '||V_TABLE_NAME||' WHERE TRIM('||
COLUMN_NAME||') = ''T'' AND ROWNUM = 1)','+')
WITHIN GROUP (ORDER BY COLUMN_ID) || ' FROM DUAL'
INTO V_SQL_TEXT
FROM USER_TAB_COLUMNS
WHERE TABLE_NAME = V_TABLE_NAME
AND DATA_TYPE = 'CHAR' GROUP BY TABLE_NAME;
EXECUTE IMMEDIATE V_SQL_TEXT INTO V_OK_COLUMN_COUNT;
IF V_OK_COLUMN_COUNT < V_REQUIRED_COLUMN_COUNT
THEN
DBMS_OUTPUT.PUT_LINE(UTL_LMS.FORMAT_MESSAGE('Required at least: %s columns to have 1+ T values but only found: %s',TO_CHAR(V_REQUIRED_COLUMN_COUNT),TO_CHAR(V_OK_COLUMN_COUNT)));
ELSE
DBMS_OUTPUT.PUT_LINE(UTL_LMS.FORMAT_MESSAGE('All: %s CHAR columns have at least one T value',TO_CHAR(V_REQUIRED_COLUMN_COUNT)));
END IF;
END;
/
Result:
Required at least: 3 columns to have 1+ T values but only found: 2
Then add another row to get the last required T value:
INSERT INTO HETEROGENEOUS VALUES ('Deckard', 1, 'T', SYSTIMESTAMP, 'Megatron');
And run again:
All: 3 CHAR columns have at least one T value
The static SQL equivalent (if you already know your table/columns) is:
SELECT (SELECT COALESCE(MIN(1), 0) FROM HETEROGENEOUS
WHERE TRIM(CHAR_COL_1) = 'T' AND ROWNUM = 1) +
(SELECT COALESCE(MIN(1), 0) FROM HETEROGENEOUS
WHERE TRIM(CHAR_COL_2) = 'T' AND ROWNUM = 1) +
(SELECT COALESCE(MIN(1), 0) FROM HETEROGENEOUS
WHERE TRIM(CHAR_COL_3) = 'T' AND ROWNUM = 1)
FROM DUAL;
If your requirement instead is to find ROWs where at least one CHAR column has a T value, the approach is the same, but the dynamic query is different.
This second example will find all the rows where at least one CHAR column has a value of T (and just print them):
DECLARE
V_TABLE_NAME VARCHAR2(128) := 'HETEROGENEOUS';
V_SQL_TEXT VARCHAR2(32000);
TYPE REFCURSOR IS REF CURSOR;
V_REFCURSOR REFCURSOR;
V_ROWID VARCHAR2(64);
BEGIN
SELECT 'SELECT ROWID FROM '||V_TABLE_NAME||' WHERE 1 = ANY ( '||LISTAGG('DECODE(TRIM('||COLUMN_NAME||'),''T'',1,0) ',',') WITHIN GROUP (ORDER BY COLUMN_ID)||')'
INTO V_SQL_TEXT
FROM USER_TAB_COLUMNS
WHERE TABLE_NAME = V_TABLE_NAME
AND DATA_TYPE = 'CHAR'
GROUP BY TABLE_NAME;
OPEN V_REFCURSOR FOR V_SQL_TEXT;
LOOP
FETCH V_REFCURSOR INTO V_ROWID;
EXIT WHEN V_REFCURSOR%NOTFOUND;
DBMS_OUTPUT.PUT_LINE(UTL_LMS.FORMAT_MESSAGE('RowId: %s',V_ROWID));
END LOOP;
CLOSE V_REFCURSOR;
END;
/
Running it gives the three rows that have a T in any CHAR column:
RowId: AAGKHPAFJAABL49AAB
RowId: AAGKHPAFJAABL49AAC
RowId: AAGKHPAFJAABL49AAD
Or alternatively get the single row that has NO T values in their CHAR columns, by switching from ANY to ALL:
WHERE 1 = ANY
WHERE 1 <> ALL
Gives one row:
RowId: AAGKHPAFJAABL49AAA
The static equivalent (if you already know your table and don't need to use data type) is:
SELECT ROWID
FROM HETEROGENEOUS
WHERE 1 = ANY (DECODE(TRIM(CHAR_COL_1), 'T', 1, 0),
DECODE(TRIM(CHAR_COL_2), 'T', 1, 0),
DECODE(TRIM(CHAR_COL_3), 'T', 1, 0));

Postgres UPSERT (INSERT or UPDATE) only if value is different

I'm updating a Postgres 8.4 database (from C# code) and the basic task is simple enough: either UPDATE an existing row or INSERT a new one if one doesn't exist yet. Normally I would do this:
UPDATE my_table
SET value1 = :newvalue1, ..., updated_time = now(), updated_username = 'evgeny'
WHERE criteria1 = :criteria1 AND criteria2 = :criteria2
and if 0 rows were affected then do an INSERT:
INSERT INTO my_table(criteria1, criteria2, value1, ...)
VALUES (:criteria1, :criteria2, :newvalue1, ...)
There is a slight twist, though. I don't want to change the updated_time and updated_username columns unless any of the new values are actually different from the existing values to avoid misleading users about when the data was updated.
If I was only doing an UPDATE then I could add WHERE conditions for the values as well, but that won't work here, because if the DB is already up to date the UPDATE will affect 0 rows and then I would try to INSERT.
Can anyone think of an elegant way to do this, other than SELECT, then either UPDATE or INSERT?
Take a look at a BEFORE UPDATE trigger to check and set the correct values:
CREATE OR REPLACE FUNCTION my_trigger() RETURNS TRIGGER LANGUAGE plpgsql AS
$$
BEGIN
IF OLD.content = NEW.content THEN
NEW.updated_time= OLD.updated_time; -- use the old value, not a new one.
ELSE
NEW.updated_time= NOW();
END IF;
RETURN NEW;
END;
$$;
Now you don't even have to mention the field updated_time in your UPDATE query, it will be handled by the trigger.
http://www.postgresql.org/docs/current/interactive/plpgsql-trigger.html
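Attaching it would then be something like this (my_trigger is the function above; the trigger name is arbitrary, and content stands in for whichever value columns you compare):
CREATE TRIGGER my_table_touch
BEFORE UPDATE ON my_table
FOR EACH ROW EXECUTE PROCEDURE my_trigger();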
Two things here.
Firstly, depending on activity levels in your database, you may hit a race condition between checking for a record and inserting it, where another process may create that record in the interim.
The manual contains an example of how to do this
link example
To avoid doing an update there is the suppress_redundant_updates_trigger() procedure. To use this as you wish, you would have to have two before-update triggers: the first calls suppress_redundant_updates_trigger() to abort the update if no change was made, and the second sets the timestamp and username if the update goes ahead. Triggers are fired in alphabetical order.
Doing this would also mean changing the code in the example above to try the insert first before the update.
Example of how suppress update works:
DROP TABLE sru_test;
CREATE TABLE sru_test(id integer not null primary key,
data text,
updated timestamp(3));
CREATE TRIGGER z_min_update
BEFORE UPDATE ON sru_test
FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger();
DROP FUNCTION set_updated();
CREATE FUNCTION set_updated()
RETURNS TRIGGER
AS $$
DECLARE
BEGIN
NEW.updated := now();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER zz_set_updated
BEFORE INSERT OR UPDATE ON sru_test
FOR EACH ROW EXECUTE PROCEDURE set_updated();
insert into sru_test(id,data) VALUES (1,'Data 1');
insert into sru_test(id,data) VALUES (2,'Data 2');
select * from sru_test;
update sru_test set data = 'NEW';
select * from sru_test;
update sru_test set data = 'NEW';
select * from sru_test;
update sru_test set data = 'ALTERED' where id = 1;
select * from sru_test;
update sru_test set data = 'NEW' where id = 2;
select * from sru_test;
Postgres is getting UPSERT support. It has been in the tree since 8 May 2015 (commit):
This feature is often referred to as upsert.
This is implemented using a new infrastructure called "speculative
insertion". It is an optimistic variant of regular insertion that
first does a pre-check for existing tuples and then attempts an
insert. If a violating tuple was inserted concurrently, the
speculatively inserted tuple is deleted and a new attempt is made. If
the pre-check finds a matching tuple the alternative DO NOTHING or DO
UPDATE action is taken. If the insertion succeeds without detecting a
conflict, the tuple is deemed inserted.
A snapshot is available for download. It has not yet made it into a release (it eventually shipped in PostgreSQL 9.5 as INSERT ... ON CONFLICT).
INSERT INTO table_name(column_list) VALUES(value_list)
ON CONFLICT target action;
https://www.postgresqltutorial.com/postgresql-upsert/
Dummy example :
insert into user_profile (user_id, resident_card_no, last_name) values
(103, '14514367', 'joe_inserted' )
on conflict on constraint user_profile_pk do
update set resident_card_no = '14514367', last_name = 'joe_updated';
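To also meet the original requirement of not touching updated_time / updated_username when nothing changed, the DO UPDATE part can carry a WHERE clause. A sketch against the my_table layout from the question, assuming a unique constraint on (criteria1, criteria2):
INSERT INTO my_table (criteria1, criteria2, value1, updated_time, updated_username)
VALUES (:criteria1, :criteria2, :newvalue1, now(), 'evgeny')
ON CONFLICT (criteria1, criteria2) DO UPDATE
SET value1 = EXCLUDED.value1,
updated_time = now(),
updated_username = 'evgeny'
-- only run the update (and bump the timestamp) when the stored value differs
WHERE my_table.value1 IS DISTINCT FROM EXCLUDED.value1;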
The RETURNING clause enables you to chain your queries; the second query uses the results from the first (in this case to avoid re-touching the same rows). RETURNING is available since Postgres 8.4.
Shown here embedded in a function, but it works for plain SQL, too
DROP SCHEMA tmp CASCADE;
CREATE SCHEMA tmp ;
SET search_path=tmp;
CREATE TABLE my_table
( updated_time timestamp NOT NULL DEFAULT now()
, updated_username varchar DEFAULT '_none_'
, criteria1 varchar NOT NULL
, criteria2 varchar NOT NULL
, value1 varchar
, value2 varchar
, PRIMARY KEY (criteria1,criteria2)
);
INSERT INTO my_table (criteria1,criteria2,value1,value2)
SELECT 'C1_' || gs::text
, 'C2_' || gs::text
, 'V1_' || gs::text
, 'V2_' || gs::text
FROM generate_series(1,10) gs
;
SELECT * FROM my_table ;
CREATE function funky(_criteria1 text,_criteria2 text, _newvalue1 text, _newvalue2 text)
RETURNS VOID
AS $funk$
WITH ins AS (
INSERT INTO my_table(criteria1, criteria2, value1, value2, updated_username)
SELECT $1, $2, $3, $4, COALESCE(current_user, 'evgeny' )
WHERE NOT EXISTS (
SELECT * FROM my_table nx
WHERE nx.criteria1 = $1 AND nx.criteria2 = $2
)
RETURNING criteria1 AS criteria1, criteria2 AS criteria2
)
UPDATE my_table upd
SET value1 = $3, value2 = $4
, updated_time = now()
, updated_username = COALESCE(current_user, 'evgeny')
WHERE 1=1
AND criteria1 = $1 AND criteria2 = $2 -- key-condition
AND (value1 <> $3 OR value2 <> $4 ) -- row must have changed
AND NOT EXISTS (
SELECT * FROM ins -- the result from the INSERT
WHERE ins.criteria1 = upd.criteria1
AND ins.criteria2 = upd.criteria2
)
;
$funk$ language sql
;
SELECT funky('AA', 'BB' , 'CC', 'DD' ); -- INSERT
SELECT funky('C1_3', 'C2_3' , 'V1_3', 'V2_3' ); -- (null) UPDATE
SELECT funky('C1_7', 'C2_7' , 'V1_7', 'V2_7777' ); -- (real) UPDATE
SELECT * FROM my_table ;
RESULT:
updated_time | updated_username | criteria1 | criteria2 | value1 | value2
----------------------------+------------------+-----------+-----------+--------+---------
2013-03-13 16:37:55.405267 | _none_ | C1_1 | C2_1 | V1_1 | V2_1
2013-03-13 16:37:55.405267 | _none_ | C1_2 | C2_2 | V1_2 | V2_2
2013-03-13 16:37:55.405267 | _none_ | C1_3 | C2_3 | V1_3 | V2_3
2013-03-13 16:37:55.405267 | _none_ | C1_4 | C2_4 | V1_4 | V2_4
2013-03-13 16:37:55.405267 | _none_ | C1_5 | C2_5 | V1_5 | V2_5
2013-03-13 16:37:55.405267 | _none_ | C1_6 | C2_6 | V1_6 | V2_6
2013-03-13 16:37:55.405267 | _none_ | C1_8 | C2_8 | V1_8 | V2_8
2013-03-13 16:37:55.405267 | _none_ | C1_9 | C2_9 | V1_9 | V2_9
2013-03-13 16:37:55.405267 | _none_ | C1_10 | C2_10 | V1_10 | V2_10
2013-03-13 16:37:55.463651 | postgres | AA | BB | CC | DD
2013-03-13 16:37:55.472783 | postgres | C1_7 | C2_7 | V1_7 | V2_7777
(11 rows)
Start a transaction. Use a SELECT to see if the row you'd be inserting already exists: if it exists with the same values, do nothing; if it exists with different values, update it; if it does not exist, insert it. Finally, commit the transaction.
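A rough sketch of that flow, with the application choosing the branch based on the SELECT result (names as in the question):
BEGIN;
SELECT value1 FROM my_table
WHERE criteria1 = :criteria1 AND criteria2 = :criteria2;
-- no row returned: INSERT INTO my_table (criteria1, criteria2, value1, ...) VALUES (:criteria1, :criteria2, :newvalue1, ...);
-- row returned with different values: run the UPDATE from the question (which also sets updated_time and updated_username);
-- row returned with identical values: do nothing
COMMIT;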