Increment field with not null and unique constraint in PostgreSQL 8.3

I have a table "items" with a column "position". position has a unique and not-null constraint. In order to insert a new row at position x, I first try to increment the positions of the subsequent items:
UPDATE items SET position = position + 1 WHERE position >= x;
This results in a unique constraint violation:
ERROR: duplicate key value violates unique constraint
The problem seems to be the order in which PostgreSQL performs the updates. Unique constraints in PostgreSQL < 9.0 aren't deferrable, and unfortunately upgrading to 9.0 is currently not an option. Also, the UPDATE statement doesn't support an ORDER BY clause, and the following doesn't work either (still a duplicate key violation):
UPDATE items SET position = position + 1 WHERE id IN (
SELECT id FROM items WHERE position >= x ORDER BY position DESC)
Does somebody know a solution that doesn't involve iterating over all items in code?
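A minimal script that reproduces the failure (which row is visited first depends on the scan order, so the error is typical rather than guaranteed):

```sql
CREATE TABLE items (position integer NOT NULL UNIQUE);
INSERT INTO items (position) VALUES (1), (2), (3);

-- Shifting everything up by one typically fails, because uniqueness is
-- checked immediately per row and 1 + 1 collides with the existing 2:
UPDATE items SET position = position + 1 WHERE position >= 1;
-- ERROR: duplicate key value violates unique constraint
```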

Another example, this time with a multi-column unique index:
create table utest(id integer, position integer not null, unique(id, position));
test=# \d utest
     Table "public.utest"
  Column  |  Type   | Modifiers
----------+---------+-----------
 id       | integer |
 position | integer | not null
Indexes:
    "utest_id_key" UNIQUE, btree (id, "position")
Some data:
insert into utest(id, position) select generate_series(1,3), 1;
insert into utest(id, position) select generate_series(1,3), 2;
insert into utest(id, position) select generate_series(1,3), 3;
test=# select * from utest order by id, position;
id | position
----+----------
1 | 1
1 | 2
1 | 3
2 | 1
2 | 2
2 | 3
3 | 1
3 | 2
3 | 3
(9 rows)
I created a procedure that updates position values in the proper order:
create or replace function update_positions(i integer, p integer)
returns void as $$
declare
  temprec record;
begin
  for temprec in
    select *
    from utest u
    where id = i and position >= p
    order by position desc
  loop
    raise notice 'Id = [%], Moving % to %',
      i, temprec.position, temprec.position + 1;
    update utest
    set position = position + 1
    where position = temprec.position and id = i;
  end loop;
end;
$$ language plpgsql;
Some tests:
test=# select * from update_positions(1, 2);
NOTICE: Id = [1], Moving 3 to 4
NOTICE: Id = [1], Moving 2 to 3
update_positions
------------------
(1 row)
test=# select * from utest order by id, position;
id | position
----+----------
1 | 1
1 | 3
1 | 4
2 | 1
2 | 2
2 | 3
3 | 1
3 | 2
3 | 3
(9 rows)
Hope it helps.

As PostgreSQL supports a full set of transactional DDL, you can easily do something like this:
create table utest(id integer unique not null);
insert into utest(id) select generate_series(1,4);
The table now looks like this:
test=# \d utest
   Table "public.utest"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer | not null
Indexes:
    "utest_id_key" UNIQUE, btree (id)
test=# select * from utest;
id
----
1
2
3
4
(4 rows)
And now the whole magic:
begin;
alter table utest drop constraint utest_id_key;
update utest set id = id + 1;
alter table utest add constraint utest_id_key unique(id);
commit;
After that we have:
test=# \d utest
   Table "public.utest"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer | not null
Indexes:
    "utest_id_key" UNIQUE, btree (id)
test=# select * from utest;
id
----
2
3
4
5
(4 rows)
This solution has one drawback: it needs to lock the whole table, but maybe this is not a problem here.

The 'more correct' solution might be to make the constraint DEFERRABLE:
ALTER TABLE channels ADD CONSTRAINT
channels_position_unique unique("position")
DEFERRABLE INITIALLY IMMEDIATE
and then set that constraint to DEFERRED while incrementing, setting it back to IMMEDIATE once you are done.
SET CONSTRAINTS channels_position_unique DEFERRED;
UPDATE channels SET position = position+1
WHERE position BETWEEN 1 AND 10;
SET CONSTRAINTS channels_position_unique IMMEDIATE;
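Putting those pieces together, a sketch of the full flow (note that unique constraints only became deferrable in PostgreSQL 9.0, so this is not available on 8.3; 5 stands in for the target position x):

```sql
BEGIN;
SET CONSTRAINTS channels_position_unique DEFERRED;

-- shift everything at or after the insertion point; no uniqueness
-- check happens yet, because the constraint is deferred
UPDATE channels SET position = position + 1 WHERE position >= 5;

-- switching back to IMMEDIATE (or committing) runs the check once,
-- against the final state, where there are no duplicates
SET CONSTRAINTS channels_position_unique IMMEDIATE;
COMMIT;
```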

A variant that requires neither altering the table nor dropping the constraint:
UPDATE items t1
SET position = t2.position + 1
FROM (SELECT position
      FROM items
      ORDER BY position DESC) t2
WHERE t2.position >= x
  AND t1.position = t2.position;
Online example: http://rextester.com/FAU54991
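Since PostgreSQL does not guarantee that the subquery's ORDER BY controls the order in which rows are updated, a deterministic alternative is to move the affected rows onto values that cannot collide and then flip them back. This is only a sketch, assuming positions are always positive, with 5 standing in for x:

```sql
BEGIN;
-- park the affected rows on unused negative values
UPDATE items SET position = -(position + 1) WHERE position >= 5;
-- flip them back, now shifted up by one
UPDATE items SET position = -position WHERE position < 0;
COMMIT;
```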

Related

Error "duplicate key value violates unique constraint" while updating multiple rows

I created a table in PostgreSQL and Oracle as
CREATE TABLE temp(
seqnr smallint NOT NULL,
defn_id int not null,
attr_id int not null,
input CHAR(50) NOT NULL,
CONSTRAINT pk_id PRIMARY KEY (defn_id, attr_id, seqnr)
);
The temp table's primary key is thus the composite (defn_id, attr_id, seqnr) as a whole.
Then I inserted the record in the temp table as
INSERT INTO temp(seqnr,defn_id,attr_id,input)
VALUES (1,100,100,'test1');
INSERT INTO temp(seqnr,defn_id,attr_id,input)
VALUES (2,100,100,'test2');
INSERT INTO temp(seqnr,defn_id,attr_id,input)
VALUES (3,100,100,'test3');
INSERT INTO temp(seqnr,defn_id,attr_id,input)
VALUES (4,100,100,'test4');
INSERT INTO temp(seqnr,defn_id,attr_id,input)
VALUES (5,100,100,'test5');
in both Oracle and Postgres!
The table now contains:
seqnr | defn_id | attr_id | input
1 | 100 | 100 | test1
2 | 100 | 100 | test2
3 | 100 | 100 | test3
4 | 100 | 100 | test4
5 | 100 | 100 | test5
When I run the command:
UPDATE temp SET seqnr=seqnr+1
WHERE defn_id = 100 AND attr_id = 100 AND seqnr >= 1;
In Oracle it updates 5 rows and the output is:
seqnr | defn_id | attr_id | input
2 | 100 | 100 | test1
3 | 100 | 100 | test2
4 | 100 | 100 | test3
5 | 100 | 100 | test4
6 | 100 | 100 | test5
But PostgreSQL gives an error:
DETAIL: Key (defn_id, attr_id, seqnr)=(100, 100, 2) already exists.
Why does this happen, and how can I replicate the Oracle result in Postgres?
Or how can the same result be achieved in Postgres without any errors?
UNIQUE and PRIMARY KEY constraints are checked immediately (for each row) unless they are defined DEFERRABLE - which is the solution you are asking for.
ALTER TABLE temp
DROP CONSTRAINT pk_id
, ADD CONSTRAINT pk_id PRIMARY KEY (defn_id, attr_id, seqnr) DEFERRABLE
;
Then your UPDATE just works.
This comes at a cost, though. The manual:
Note that deferrable constraints cannot be used as conflict
arbitrators in an INSERT statement that includes an ON CONFLICT DO UPDATE clause.
And for FOREIGN KEY constraints:
The referenced columns must be the columns of a non-deferrable unique
or primary key constraint in the referenced table.
And:
When a UNIQUE or PRIMARY KEY constraint is not deferrable,
PostgreSQL checks for uniqueness immediately whenever a row is
inserted or modified. The SQL standard says that uniqueness should be
enforced only at the end of the statement; this makes a difference
when, for example, a single command updates multiple key values. To
obtain standard-compliant behavior, declare the constraint as
DEFERRABLE but not deferred (i.e., INITIALLY IMMEDIATE). Be aware
that this can be significantly slower than immediate uniqueness
checking.
See:
Constraint defined DEFERRABLE INITIALLY IMMEDIATE is still DEFERRED?
I would avoid a DEFERRABLE PK if at all possible. Maybe you can work around the demonstrated problem? This usually works:
UPDATE temp t
SET seqnr = t.seqnr + 1
FROM (
SELECT defn_id, attr_id, seqnr
FROM temp
WHERE defn_id = 100 AND attr_id = 100 AND seqnr >= 1
ORDER BY defn_id, attr_id, seqnr DESC
) o
WHERE (t.defn_id, t.attr_id, t.seqnr)
= (o.defn_id, o.attr_id, o.seqnr);
But there are no guarantees as ORDER BY is not specified for UPDATE in Postgres.
Related:
UPDATE with ORDER BY

Select count(*) returns 0 but select * returns 2 rows

This is very similar to this question with a full minimal example.
I have a simple select query (from a non-empty table) with only left joins. The last left join happens to be with an empty table.
The query returns 2 non-null rows as it should, but simply changing it to a count(*) query makes it return 0 as the count of rows.
The same SQL works properly on both MySQL and MSSQL (after fixing the PK syntax).
Full (re-runnable if uncommented) example:
-- DROP TABLE first;
-- DROP TABLE second;
-- DROP TABLE empty;
CREATE TABLE first (
pk int,
fk int
);
ALTER TABLE first
ADD CONSTRAINT PK_first PRIMARY KEY (pk);
CREATE TABLE second (
pk int
);
ALTER TABLE second
ADD CONSTRAINT PK_second PRIMARY KEY (pk);
CREATE TABLE empty (
pk int
);
ALTER TABLE first ADD CONSTRAINT FK_first FOREIGN KEY (fk)
REFERENCES second (pk) ENABLE;
INSERT INTO second (pk)
VALUES (5);
INSERT INTO first (pk, fk)
VALUES (1, 5);
INSERT INTO first (pk, fk)
VALUES (2, 5);
SELECT
COUNT(*)
FROM first
LEFT OUTER JOIN second
ON (first.fk = second.pk)
LEFT OUTER JOIN empty
ON (1 = 1);
The last query returns 0 on my machine, but changing the count(*) to just * makes it return 2 rows.
Can anyone reproduce this? My db_version is 11.2.0.2.
Explain plan seems to see the 2 rows that should be returned:
----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 3 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
| 2 | MERGE JOIN CARTESIAN| | 2 | 26 | 3 (0)| 00:00:01 |
| 3 | VIEW | | 1 | | 2 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL | EMPTY | 1 | | 2 (0)| 00:00:01 |
| 5 | BUFFER SORT | | 2 | 26 | 3 (0)| 00:00:01 |
| 6 | INDEX FULL SCAN | PK_FIRST | 2 | 26 | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement (level=2)
I don't know much about dynamic sampling, but if I run alter session set OPTIMIZER_DYNAMIC_SAMPLING=0;, the plan shows 82 rows in each step.
Removing the primary keys fixes the problem on Oracle, but that is hardly a proper solution.
Removing the join to the empty table also fixes the problem, but it is an outer join with a tautological filter, so it should be a no-op.
Is this actually the intended behavior on Oracle for some reason? Or is my server just bugged?
Both MSSQL and MySQL return 2 as the count.
Edit: Round 2
Adding 2 more tables was enough, and the bug also shows in 11.2.0.4. Can anyone reproduce it on more current Oracle versions?
CREATE TABLE first (
pk int,
fk int
);
ALTER TABLE first
ADD CONSTRAINT PK_first PRIMARY KEY (pk);
CREATE TABLE second (
pk int,
fk int
);
ALTER TABLE second
ADD CONSTRAINT PK_second PRIMARY KEY (pk);
CREATE TABLE third (
pk int,
fk int
);
ALTER TABLE third
ADD CONSTRAINT PK_third PRIMARY KEY (pk);
CREATE TABLE fourth (
pk int
);
ALTER TABLE fourth
ADD CONSTRAINT PK_fourth PRIMARY KEY (pk);
CREATE TABLE empty (
pk int
);
ALTER TABLE first ADD CONSTRAINT FK_first FOREIGN KEY (fk)
REFERENCES second (pk) ENABLE;
ALTER TABLE second ADD CONSTRAINT FK_second FOREIGN KEY (fk)
REFERENCES third (pk) ENABLE;
ALTER TABLE third ADD CONSTRAINT FK_third FOREIGN KEY (fk)
REFERENCES fourth (pk) ENABLE;
INSERT INTO fourth (pk)
VALUES (50);
INSERT INTO third (pk, fk)
VALUES (10, 50);
INSERT INTO third (pk, fk)
VALUES (11, 50);
INSERT INTO second (pk, fk)
VALUES (5, 10);
INSERT INTO second (pk, fk)
VALUES (6, 10);
INSERT INTO first (pk, fk)
VALUES (1, 5);
INSERT INTO first (pk, fk)
VALUES (2, 5);
SELECT
COUNT(*)
FROM first
LEFT OUTER JOIN second
ON (first.fk = second.pk)
LEFT OUTER JOIN third
ON (first.pk = third.pk)
LEFT OUTER JOIN fourth
ON (third.fk = fourth.pk)
LEFT OUTER JOIN empty
ON (1 = 1);
Anyway, the consensus seems to be that this is a bug in obsolete Oracle releases.
11.2.0.2 is too old a version (it is already EOL) and it looks like it has never even been patched.
The obvious workaround for your bug is the hint no_query_transformation; try:
SELECT--+ no_query_transformation
COUNT(*)
FROM first
LEFT OUTER JOIN second
ON (first.fk = second.pk)
LEFT OUTER JOIN empty
ON (1 = 1);
Update and addition: you can also simply disable join elimination using the hint NO_ELIMINATE_JOIN:
http://sqlfiddle.com/#!4/9cf338/10
SELECT--+ NO_ELIMINATE_JOIN(second)
COUNT(*)
FROM first
LEFT OUTER JOIN second
ON (first.fk = second.pk)
LEFT OUTER JOIN empty e
ON (1 = 1);
or _optimizer_join_elimination_enabled:
http://sqlfiddle.com/#!4/9cf338/10
SELECT--+ opt_param('_optimizer_join_elimination_enabled' 'false')
COUNT(*)
FROM first
LEFT OUTER JOIN second
ON (first.fk = second.pk)
LEFT OUTER JOIN third
ON (first.pk = third.pk)
LEFT OUTER JOIN fourth
ON (third.fk = fourth.pk)
LEFT OUTER JOIN empty
ON (1 = 1);

How to use a new serial ID for each new batch of inserted rows?

Is it possible to use a sequence for a batch of rows, versus getting a new ID on each insert? I'm keeping track of a set of details, and I want the sequence to apply to the set, not to each individual row. So my data should look like this:
 id | batch_id | name | dept
----+----------+------+-------------
  1 |       99 | John | Engineering
  2 |       99 | Amy  | Humanities
  3 |       99 | Bill | Science
  4 |       99 | Jack | English
It's the batch_id that I want Postgres to issue as a sequence. Is this possible?
Define batch_id as batch_id bigint not null default currval('seqname') and call nextval('seqname') manually before inserting a batch of rows.
Or, for the full automation:
1) Create sequence for the batch id:
create sequence mytable_batch_id;
2) Create your table, declare batch id field as below:
create table mytable (
id bigserial not null primary key,
batch_id bigint not null default currval('mytable_batch_id'),
name text not null);
3) Create statement level trigger to increment the batch id sequence:
create function tgf_mytable_batch_id() returns trigger language plpgsql
as $$
begin
perform nextval('mytable_batch_id');
return null;
end $$;
create trigger tg_mytablebatch_id
before insert on mytable
for each statement execute procedure tgf_mytable_batch_id();
Now each single INSERT statement into the table will be treated as its own batch.
Example:
postgres=# insert into mytable (name) values('John'), ('Amy'), ('Bill');
INSERT 0 3
postgres=# insert into mytable (name) values('Jack');
INSERT 0 1
postgres=# insert into mytable (name) values('Jimmy'), ('Abigail');
INSERT 0 2
postgres=# table mytable;
id | batch_id | name
----+----------+-------------
1 | 1 | John
2 | 1 | Amy
3 | 1 | Bill
4 | 2 | Jack
5 | 3 | Jimmy
6 | 3 | Abigail
(6 rows)
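If you would rather avoid the trigger, a hypothetical alternative is to draw one sequence value per batch yourself and attach it to every row of the INSERT, for example with a CTE (a sketch reusing the mytable and mytable_batch_id names from above):

```sql
-- nextval is evaluated once in the CTE, so every row inserted by
-- this statement shares the same batch_id
WITH b AS (SELECT nextval('mytable_batch_id') AS batch_id)
INSERT INTO mytable (batch_id, name)
SELECT b.batch_id, v.name
FROM b, (VALUES ('John'), ('Amy'), ('Bill')) AS v(name);
```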

why null values are not honored by unique constraint, but honored by SELECT distinct query in POSTGRES?

I am working with Postgres. I have created a sample table whose schema looks like this:
postgres=# \d temp;
                      Table "public.temp"
 Column |  Type   |                     Modifiers
--------+---------+---------------------------------------------------
 id     | integer | not null default nextval('temp_id_seq'::regclass)
 name   | text    |
Indexes:
    "temp_pkey" PRIMARY KEY, btree (id)
    "temp_name_key" UNIQUE CONSTRAINT, btree (name)
It allows me to insert null values multiple times:
postgres=# select * from temp;
id | name
----+------
1 |
(1 row)
postgres=# insert into temp
(name)
values
(null);
INSERT 0 1
postgres=# select * from temp;
id | name
----+------
1 |
2 |
(2 rows)
I know that Postgres doesn't consider null values for unique constraints, as per the SQL standard, but when I use DISTINCT in a select query, it does honor null values.
postgres=# select distinct name from temp;
name
------
(1 row)
So I am not clear why two null values are treated as different in the context of a unique constraint on insert, but as the same in a DISTINCT select.
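The short version: the SQL standard defines a unique constraint as violated only when two rows compare equal, and NULL = NULL yields unknown, not true, so multiple NULLs pass. DISTINCT (like GROUP BY) instead uses the "is not distinct from" notion, under which two NULLs count as the same. Since PostgreSQL 15 you can opt a constraint into that behavior too (a sketch, not available on older versions; constraint names from the \d output above):

```sql
-- PostgreSQL 15+ only: make the constraint treat NULLs as equal
ALTER TABLE temp DROP CONSTRAINT temp_name_key;
ALTER TABLE temp ADD CONSTRAINT temp_name_key
  UNIQUE NULLS NOT DISTINCT (name);
-- inserting a second NULL name now raises a unique violation
```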

Inserting a row at the specific place in SQLite database

I was creating the database in SQLite Manager and by mistake I forgot to add a row.
Now I want to add a row in the middle manually, and below it the rest of the auto-increment keys should automatically be increased by 1. I hope my problem is clear.
Thanks.
You shouldn't care about key values, just append your row at the end.
If you really need to do so, you could probably just update the keys with something like this. Say you want to insert the new row at key 87.
Make room for the key
update mytable
set key = key + 1
where key >= 87
Insert your row
insert into mytable ...
And finally update the key for the new row
update mytable
set key = 87
where key = NEW_ROW_KEY
I would just update the IDs, incrementing them, and then insert the record, setting its ID manually:
CREATE TABLE cats (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name VARCHAR
);
INSERT INTO cats (name) VALUES ('John');
INSERT INTO cats (name) VALUES ('Mark');
SELECT * FROM cats;
| 1 | John |
| 2 | Mark |
UPDATE cats SET ID = ID + 1 WHERE ID >= 2; -- "2" is the ID of forgotten record.
SELECT * FROM cats;
| 1 | John |
| 3 | Mark |
INSERT INTO cats (id, name) VALUES (2, 'SlowCat'); -- "2" is the ID of forgotten record.
SELECT * FROM cats;
| 1 | John |
| 2 | SlowCat |
| 3 | Mark |
The next record, inserted using the AUTOINCREMENT functionality, will get the next ID (4 in our case).
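Note that the shifting UPDATE can hit the same mid-statement UNIQUE failure in SQLite as discussed for PostgreSQL above, depending on the order in which rows are visited (in this small example only one row moves, so it happens to work). A two-step sketch that avoids the collision entirely, assuming IDs are always positive:

```sql
-- park the shifted rows on unused negative ids, then flip back
UPDATE cats SET id = -(id + 1) WHERE id >= 2;
UPDATE cats SET id = -id WHERE id < 0;
INSERT INTO cats (id, name) VALUES (2, 'SlowCat');
```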