Conditional rule on INSERT in PostgreSQL

My question is relative to a postgresql problem I encountered.
I want to modify a database containing a newsgroup-like forum. Its messages describe a tree and are stored in a table as below :
\d messages
                                Table "public.messages"
  Column   |            Type             |                          Modifiers
-----------+-----------------------------+---------------------------------------------------------------
 idmessage | integer                     | not null default nextval('messages_idmessage_seq'::regclass)
 title     | character varying(50)       | not null
 datemsg   | timestamp without time zone | default ('now'::text)::timestamp without time zone
 author    | character varying(10)       | default "current_user"()
 text      | text                        | not null
 idgroup   | integer                     | not null
 msgfather | integer                     |
(I translated the output from French; I hope I didn't introduce any misinterpretation.)
title is the message title, datemsg its timestamp, author and text are self-explanatory, idgroup identifies the discussion group the message belongs to, and msgfather is the message this one answers.
The rule I want to implement on insert must prevent users from posting an answer in a different group from its parent's (if it has one; a new discussion starts with a message posted without a parent).
Now I have this view:
\d newmessage
        View "public.newmessage"
  Column   |         Type          | Modifiers
-----------+-----------------------+-----------
 idmessage | integer               |
 title     | character varying(50) |
 text      | text                  |
 idgroup   | integer               |
 msgfather | integer               |
View definition:
 SELECT messages.idmessage, messages.title, messages.text, messages.idgroup, messages.msgfather
   FROM messages;
Rules:
 ins_message AS
     ON INSERT TO newmessage DO INSTEAD
     INSERT INTO messages (title, text, idgroup, msgfather)
     VALUES (new.title, new.text, new.idgroup, new.msgfather)
I can't change the view or the table, as people already use the database, so I think I have to work with rules or triggers.
I tried adding the rule below to the view, without the expected effect:
CREATE OR REPLACE RULE ins_message_answer AS
    ON INSERT TO newmessage
    WHERE NEW.msgfather IS NOT NULL
    DO INSTEAD
    INSERT INTO messages (title, text, idgroup, msgfather)
    VALUES (NEW.title, NEW.text,
            (SELECT idgroup FROM messages WHERE idmessage = NEW.msgfather),
            NEW.msgfather);
Even with this new rule, people can still answer a message and post the answer in a different group.
I also tried changing the ins_message rule by adding WHERE NEW.msgfather IS NULL, but then inserting into the newmessage view fails with an error saying no unconditional ON INSERT rule is found for the view.
So how can I achieve what I want? With a trigger? (I don't know how to write one in PostgreSQL.)

First create a unique constraint on (idmessage, idgroup):
alter table messages add constraint messages_idmessage_idgroup_key
    unique (idmessage, idgroup);
This constraint is needed for msgfather_idgroup_fkey below, as foreign keys require a matching unique constraint on the referenced columns; without it you'd get ERROR: there is no unique constraint matching given keys for referenced table "messages".
Then add a self-referencing foreign key:
alter table messages add constraint msgfather_idgroup_fkey
    foreign key (msgfather, idgroup) references messages (idmessage, idgroup);
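This pins every reply to its parent's group: with the default MATCH SIMPLE semantics, rows whose msgfather is NULL (new discussions) are not checked at all, while a reply's (msgfather, idgroup) pair must match an existing message. A hypothetical check, assuming a parent message with idmessage 1 that lives in group 1:
-- Reply posted into group 2 while its parent (idmessage 1) sits in group 1:
INSERT INTO messages (title, text, idgroup, msgfather)
VALUES ('Re: hello', 'wrong group', 2, 1);
-- expected to fail with a violation of msgfather_idgroup_fkey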
In passing:
add NOT NULL to datemsg and author columns;
never use rules.
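As for the question's trigger idea: a minimal sketch of a trigger-based alternative might look like this (the function and trigger names are my own invention, not part of the schema above):
-- Hedged sketch: reject replies whose group differs from the parent's group.
CREATE OR REPLACE FUNCTION check_msgfather_group() RETURNS trigger AS $$
BEGIN
    IF NEW.msgfather IS NOT NULL AND NEW.idgroup IS DISTINCT FROM
       (SELECT idgroup FROM messages WHERE idmessage = NEW.msgfather) THEN
        RAISE EXCEPTION 'answer must stay in its parent message''s group';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER messages_group_check
    BEFORE INSERT OR UPDATE ON messages
    FOR EACH ROW EXECUTE PROCEDURE check_msgfather_group();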

Related

Insert multiple values with foreign key Postgresql

I am having trouble figuring out how to insert multiple values into a table after checking that another table contains the needed values. I am currently doing this on a PostgreSQL server, and will later implement it with PreparedStatements in my Java program.
user_id is a foreign key which references the primary key of mock2. I have been trying to check whether mock2 has the values ('foo1', 'bar1') and ('foo2', 'bar2').
After this I am trying to insert new rows into mock1, each with a date and an integer value, referencing the primary key of the matching row in mock2 through mock1's foreign key.
mock1 table looks like this:
==============================
| date | time    | user_id |
| date | integer | integer |
And the table mock2 is:
==============================
| Id      | name | program |
| integer | text | text    |
Id is the primary key for the table and program is UNIQUE (see the create statements below).
I've been playing around with this solution: https://dba.stackexchange.com/questions/46410/how-do-i-insert-a-row-which-contains-a-foreign-key
However, I haven't been able to make it work. Could someone please point out the correct syntax for this? I would really appreciate it.
EDIT:
The create table statements are:
CREATE TABLE mock2 (
    id SERIAL PRIMARY KEY UNIQUE,
    name text NOT NULL,
    program text NOT NULL UNIQUE
);
and
CREATE TABLE mock1 (
    date date,
    time_spent INTEGER,
    user_id integer REFERENCES mock2(Id) NOT NULL
);
OK, so I found an answer to my own question (the tables here are named mock3 and mock4, but they have the same shape as mock2 and mock1 above):
WITH ins (date, time_spent, id) AS (
    VALUES (DATE '2012-08-22',  -- ISO date literal avoids DateStyle ambiguity
            170,
            (SELECT id FROM mock3 WHERE program = 'bar'))
)
INSERT INTO mock4 (date, time_spent, user_id)
SELECT ins.date, ins.time_spent, mock3.id
FROM mock3
JOIN ins ON ins.id = mock3.id;
I was trying to take the two values from the first table, match them, and then insert two new rows into the next table, but I realised that I should be using the primary and foreign keys to my advantage.
I now JOIN on the id and then just select the key I need with (SELECT id FROM mock3 WHERE program = 'bar') in the third row.
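For reference, the same insert can be written without the CTE. A minimal sketch using the question's original mock1/mock2 names (it inserts nothing if no row with program = 'bar' exists):
INSERT INTO mock1 (date, time_spent, user_id)
SELECT DATE '2012-08-22', 170, id
FROM mock2
WHERE program = 'bar';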

Create column with duplicate data in same table psql

Postgres database
I'm trying to find a faster way to create a new column in a table which is a copy of the table's primary key column. So if I have the following columns in a table named students:
student_id Integer Auto-Increment -- Primary key
name varchar
then I would like to create a new column named old_student_id which has all the same values as student_id.
To do this I create the column and then execute the following update statement:
update students set old_student_id = student_id;
This works, but on my biggest table it takes over an hour, and I feel like I should be able to use some alternative approach to get that down to a few minutes; I just don't know what.
So what I want at the end of the day is something that looks like this:
+------------+-----+---------------+
| student_id | name| old_student_id|
+------------+-----+---------------+
| 1 | bob | 1 |
+------------+-----+---------------+
| 2 | tod | 2 |
+------------+-----+---------------+
| 3 | joe | 3 |
+------------+-----+---------------+
| 4 | tim | 4 |
+------------+-----+---------------+
To speed things up a bit before I run the update, I drop all the FKs and indexes on the table, then reapply them when it finishes. I'm also on AWS RDS, so I have set up a parameter group that turns synchronous_commit off, disables backups, and bumps work_mem a bit for the duration of this update.
For context, this is actually happening to every table in the database, across three databases. The old ids are used as references by several external systems, so I need to keep track of them in order to update those systems as well. I have an 8 hour downtime window; currently merging the databases takes ~3 hours, and a whole hour of that is spent creating these ids.
If you will never need to update the old_student_id column, you can use a generated column (PostgreSQL 12+):
CREATE TABLE table2 (
    id serial4 NOT NULL,
    val1 int4 NULL,
    val2 int4 NULL,
    total int4 NULL GENERATED ALWAYS AS (id) STORED
);
During insert, the total field will be set to the same value as the id field. But you cannot update this field yourself, because it is a generated column.
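Applied to the question's students table, the same idea would be something like this (assuming PostgreSQL 12 or later; note that adding a stored generated column still rewrites the table once, much like the bulk UPDATE does):
ALTER TABLE students
    ADD COLUMN old_student_id integer GENERATED ALWAYS AS (student_id) STORED;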
An alternative method is to use triggers; in that case you can still update the field. See this example.
First, create a trigger function that will be called before each insert:
CREATE OR REPLACE FUNCTION table2_insert()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
BEGIN
    new.total = new.val1 * new.val2;
    RETURN new;
END;
$function$;
After that:
CREATE TABLE table2 (
    id serial4 NOT NULL,
    val1 int4 NULL,
    val2 int4 NULL,
    total int4 NULL
);

CREATE TRIGGER my_trigger
    BEFORE INSERT ON table2
    FOR EACH ROW EXECUTE FUNCTION table2_insert();
With either method, the value is filled in as each row is inserted, so you never have to update many records after the fact.
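A quick hypothetical check of the trigger version, assuming table2 starts empty:
INSERT INTO table2 (val1, val2) VALUES (3, 4);
SELECT total FROM table2;  -- 12, computed by the trigger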

Can't update or delete row from table (Postgres)

I have a table with a bytea field. When I try to delete a row from this table, I get this error:
[42704] ERROR: large object 0 does not exist
Can you help me with this situation?
Edit: here is the output of \d photo:
Table "public.photo"
Column | Type | Modifiers
------------+------------------------+-----------
id | character varying(255) | not null
ldap_name | character varying(255) | not null
file_name | character varying(255) | not null
image_data | bytea |
Indexes:
"pk_photo" PRIMARY KEY, btree (id)
"photo_file_name_key" UNIQUE CONSTRAINT, btree (file_name)
"photo_ldap_name" btree (ldap_name)
Triggers:
remove_unused_large_objects BEFORE DELETE OR UPDATE ON photo FOR EACH ROW EXECUTE PROCEDURE lo_manage('image_data')
The lo_manage trigger is meant for columns that hold large-object OIDs (the lo type); on a bytea column it ends up trying to unlink a large object that does not exist. Drop the trigger:
drop trigger remove_unused_large_objects on photo;
Then run the delete, e.g.:
delete from photo where id = '<value you want to delete>';
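For reference, if the trigger is ever wanted back (e.g. after migrating image_data to a real large-object column), it can be recreated from the definition shown by \d photo above:
CREATE TRIGGER remove_unused_large_objects
    BEFORE DELETE OR UPDATE ON photo
    FOR EACH ROW EXECUTE PROCEDURE lo_manage('image_data');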

Postgres start table ID from 1000

Before you mark this as a duplicate: I found this answer on another thread and am having difficulty making it work.
From psql I see my table:
\d people
Table:
 Column | Type    | Modifiers
--------+---------+-----------------------------------------------------
 id     | integer | not null default nextval('people_id_seq'::regclass)
The code I tried, which seems to do nothing:
ALTER SEQUENCE people_id_seq RESTART 1000;
How do I make the primary key start from 1000?
The following query would set the sequence value to 999. The next time the sequence is accessed, you would get 1000.
SELECT setval('people_id_seq', 999);
Reference:
Sequence Manipulation Functions in the PostgreSQL manual
Why are you declaring your id like that? I would do the following:
create table people(
    id serial,
    constraint primaryKeyID primary key(id)
);
And now if you want to start your sequence from 1000, your alter query will work:
alter sequence people_id_seq restart 1000;
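A quick hypothetical check against the minimal table above:
alter sequence people_id_seq restart 1000;
insert into people default values returning id;  -- returns 1000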

Duplicate entry error when trying to import a dump

I have the following table on a server:
CREATE TABLE routes (
    id int(11) unsigned NOT NULL AUTO_INCREMENT,
    from_lat double NOT NULL,
    from_lng double NOT NULL,
    to_lat double NOT NULL,
    to_lng double NOT NULL,
    distance int(11) unsigned NOT NULL,
    drive_time int(11) unsigned NOT NULL,
    PRIMARY KEY (id),
    UNIQUE KEY route (from_lat, from_lng, to_lat, to_lng)
) ENGINE=InnoDB;
We are saving some routing information from point A (from_lat, from_lng) to point B (to_lat, to_lng). There is a unique index on the coordinates.
However, there are two entries in the database that confuse me:
+----+----------+----------+---------+---------+----------+------------+
| id | from_lat | from_lng | to_lat | to_lng | distance | drive_time |
+----+----------+----------+---------+---------+----------+------------+
| 27 | 52.5333 | 13.1667 | 52.5833 | 13.2833 | 13647 | 1125 |
| 28 | 52.5333 | 13.1667 | 52.5833 | 13.2833 | 13647 | 1125 |
+----+----------+----------+---------+---------+----------+------------+
They are exactly the same.
When I now try to export the database using mysqldump and reimport it, I get an error:
ERROR 1062 (23000): Duplicate entry '52.5333-13.1667-52.5833-13.2833' for key 'route'
How can these rows be in the database when there is a unique key on those columns? Shouldn't MySQL have rejected them?
Is it possible that the double values are slightly different, but only beyond the 4th decimal digit?
The dump would then write both rows with the same rounded text, and reimporting those now-identical values produces the unique constraint violation.
Quoting from this MySQL bug report:
When mysqldump dumps a DOUBLE value, it uses insufficient precision to
distinguish between some close values (and, presumably, insufficient
precision to recreate the exact values from the original database). If
the DOUBLE value is a primary key or part of a unique index, restoring
the database from this output fails with a duplicate key error.
Try displaying them with more digits after the decimal point (how to do that depends on your client).
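One possible way to reveal the hidden digits from a MySQL client is to cast to a wide DECIMAL (the precision chosen here is arbitrary):
SELECT id,
       CAST(from_lat AS DECIMAL(25, 20)) AS from_lat_full,
       CAST(from_lng AS DECIMAL(25, 20)) AS from_lng_full,
       CAST(to_lat   AS DECIMAL(25, 20)) AS to_lat_full,
       CAST(to_lng   AS DECIMAL(25, 20)) AS to_lng_full
FROM routes
WHERE id IN (27, 28);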