Error while creating a SQLite trigger - SQL

I have these tables:
+--------------------------+
|          Movies          |
+-----+-------+------------+
| _id | title | language   |
+-----+-------+------------+
         |
         |
+----------------------------------+
|            Movie_Cast            |
+------------+----------+----------+
| id         | actor_id | movie_id |
+------------+----------+----------+
         |
         |
+-------------------+
|       Actors      |
+----------+--------+
| actor_id | name   |
+----------+--------+
What I'm trying to do: when deleting a movies row, also delete the related rows from the junction table (movie_cast), and finally delete from the actors table all the rows that are no longer referenced in movie_cast.
This is the tables' schema:
create table movies (_id INTEGER PRIMARY KEY, title TEXT, language TEXT);
create table movie_cast (id INTEGER PRIMARY KEY,
actor_id INTEGER REFERENCES actors(actor_id) ON DELETE RESTRICT,
movie_id INTEGER REFERENCES movies(_id) ON DELETE CASCADE);
create table actors (actor_id INTEGER PRIMARY KEY, actor TEXT UNIQUE);
Right now, when the user deletes a movies entry, the movie_cast rows referencing that movies._id are also deleted. (I had some trouble with that at first, but then I enabled "PRAGMA foreign_keys = ON;".) So far so good! To delete the actors rows, I thought I could create a trigger that tries to delete actors entries based on the movie_cast.actor_id we just deleted; since I'm using "ON DELETE RESTRICT", the delete would be aborted if the actor is still referenced somewhere.
But I can't even test that, because I'm getting an error when creating the trigger:
CREATE TRIGGER Delete_Actors AFTER DELETE ON movie_cast
FOR EACH ROW
BEGIN
DELETE FROM actors WHERE actors.actor_id = OLD.actor_id;
END;
SQL Error [1]: [SQLITE_ERROR] SQL error or missing database (near "actor_id": syntax error)
It seems it doesn't know what OLD is. What am I doing wrong here?
UPDATE:
It looks like a SQLite configuration problem. I'm using DBeaver with SQLite 3.8.2, and it seems to be a problem with the temporary file, but to be honest I don't know how to fix it even after reading the possible solution:
It's failing to create the temporary file required for a statement journal.
It's likely any statement that uses a temporary file will fail.
http://www.sqlite.org/tempfiles.html
One way around the problem would be to configure SQLite not to use
temp files using "PRAGMA temp_store = memory;".
Or ensure that the environment variable SQLITE_TMPDIR is set to
the path of a writable directory. See also:
http://www.sqlite.org/c3ref/temp_directory.html
So I am going to assume it works and try it directly in my Android app.

It was something really s****p. For DBeaver, trigger creation is a complex statement, and delimiters were not working either, so I had to select the whole statement and press Ctrl+Enter.
Anyway, the statement works. But for better results I got rid of "ON DELETE RESTRICT" on movie_cast.actor_id and created a conditional trigger that executes the delete from the actors table only when there are no more actor_ids equal to the one just deleted (OLD):
CREATE TRIGGER Delete_Actors
AFTER DELETE ON movie_cast
FOR EACH ROW
WHEN (SELECT count(id) FROM movie_cast WHERE actor_id = OLD.actor_id) = 0
BEGIN
DELETE FROM actors WHERE actors.actor_id = OLD.actor_id;
END;
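For anyone who wants to verify the final trigger outside DBeaver, here is a minimal sketch using Python's built-in sqlite3 module; the movie titles and actor names are made-up sample data, and the schema follows the question (minus the RESTRICT clause, as in the final version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # must be enabled per connection
conn.executescript("""
CREATE TABLE movies (_id INTEGER PRIMARY KEY, title TEXT, language TEXT);
CREATE TABLE actors (actor_id INTEGER PRIMARY KEY, actor TEXT UNIQUE);
CREATE TABLE movie_cast (
    id INTEGER PRIMARY KEY,
    actor_id INTEGER REFERENCES actors(actor_id),
    movie_id INTEGER REFERENCES movies(_id) ON DELETE CASCADE);

CREATE TRIGGER Delete_Actors
AFTER DELETE ON movie_cast
FOR EACH ROW
WHEN (SELECT count(id) FROM movie_cast WHERE actor_id = OLD.actor_id) = 0
BEGIN
    DELETE FROM actors WHERE actors.actor_id = OLD.actor_id;
END;

INSERT INTO movies VALUES (1, 'Movie A', 'en'), (2, 'Movie B', 'en');
INSERT INTO actors VALUES (1, 'Actor One'), (2, 'Actor Two');
-- Actor One appears only in movie 1; Actor Two only in movie 2.
INSERT INTO movie_cast VALUES (1, 1, 1), (2, 2, 2);
""")

# Deleting the movie cascades to movie_cast, which fires the trigger,
# which removes the now-orphaned actor.
conn.execute("DELETE FROM movies WHERE _id = 1")
remaining = [r[0] for r in conn.execute("SELECT actor FROM actors")]
print(remaining)  # → ['Actor Two']
```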

Related

Data extract and import from CSV with foreign keys - PostgreSQL

I have a multi-tenant database. My requirement is to extract a single tenant's data from one database and insert it into another database.
I have 2 tables: users and identities.
The users table has a foreign key identity_id referencing the identities table.
There can be many identities and users under a customer.
I am extracting the data to a CSV file and inserting it into the new database from the CSV file.
The primary key is set to auto-increment, so the users and identities tables generate ids while inserting data from the CSV.
Table data from existing database
Users table
| id | identity_id |
| --- | ------------|
| 86 | 70 |
| 193 | 127 |
| 223 | 131 |
Identities table
|id |name |email |
|---|------------|-----------------|
|70 |Alon muscle |muscle#test.com |
|131|james |james#james.com |
|127|watson |watson#watson.com|
Now identity_id is the foreign key in the users table mapping to the identities table.
I am trying to insert the users and identities data into the new database, so the primary keys will be auto-incremented for users and identities.
The problem comes with the foreign key: how can I maintain the foreign-key relationship when I have multiple users and identities records?
Well, you did not actually provide details on your tables, that is, the actual definitions (DDL), nor the CSV contents, which I assume your stage table mirrors. However, with the test data provided and a couple of assumptions, the following demonstrates a method to load your data. The method is to build a procedure which uses the stage table to load the identities table, then selects the generated id via the provided email to populate the users table. Assumptions:
email must be unique in identities (at least in lower case).
the stage table reflects name and email for identities.
Procedure to load identities and users:
create or replace procedure generate_user_idents()
language sql
as $$
insert into identities(name, email)
select name, email
from stage
on conflict (low_email)
do nothing;

insert into users(ident_id)
select ident.ident_id
from identities ident
where ident.low_email in
      ( select lower(email)
        from stage
      );
$$;
Script to clear and repopulate the stage data, then load stage to identities and users:
do $$
begin
   execute 'truncate table stage';
   -- replace the following with your \copy to load stage
   insert into stage(name, email)
   values ( 'Alon muscle', 'muscle#test.com' )
        , ( 'watson', 'watson#watson.com')
        , ( 'james', 'james#james.com' );
   call generate_user_idents();
end;
$$;
See demo here. Since the demo generates the ids, it does not exactly match your provided values, but it is close. As it stands, the procedure would happily generate duplicates should you fail to clear the stage table or re-enter the same values into it. You have to decide how to handle that.
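The same two-step staging pattern can be sketched against a throwaway SQLite database from Python. This is only an illustration of the idea, with simplified stand-ins for the stage, identities and users tables: emails are stored lower-cased so a plain UNIQUE column can absorb duplicates (the `WHERE true` is the documented SQLite workaround for the INSERT ... SELECT ... ON CONFLICT parsing ambiguity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stage      (name TEXT, email TEXT);
CREATE TABLE identities (id INTEGER PRIMARY KEY, name TEXT, email TEXT UNIQUE);
CREATE TABLE users      (id INTEGER PRIMARY KEY,
                         identity_id INTEGER REFERENCES identities(id));

INSERT INTO stage VALUES
    ('Alon muscle', 'muscle@test.com'),
    ('watson',      'watson@watson.com'),
    ('james',       'james@james.com');
""")

# Step 1: load identities; the unique email swallows re-runs of the stage data.
conn.execute("""
    INSERT INTO identities (name, email)
    SELECT name, lower(email) FROM stage WHERE true
    ON CONFLICT (email) DO NOTHING
""")

# Step 2: resolve each staged email to its freshly generated identity id.
conn.execute("""
    INSERT INTO users (identity_id)
    SELECT id FROM identities
    WHERE email IN (SELECT lower(email) FROM stage)
""")

names = sorted(
    n for (n,) in conn.execute(
        "SELECT i.name FROM users u JOIN identities i ON i.id = u.identity_id"))
print(names)  # → ['Alon muscle', 'james', 'watson']
```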

CHECK (table1.integer >= table2.integer)

I need to create a CHECK constraint to verify that the integer entered in a column is greater than or equal to the integer in a column of a different table.
For example, the following tables would be valid:
=# SELECT * FROM table1;
current_project_number
------------------------
12
=# SELECT * FROM table2;
project_name | project_number
--------------+----------------
Schaf | 1
Hase | 8
Hai | 12
And the following tables would NOT be valid:
=# SELECT * FROM table1;
current_project_number
------------------------
12
=# SELECT * FROM table2;
project_name | project_number
--------------+----------------
Schaf | 1
Hase | 8
Hai | 12
Erdmännchen | 71 <-error:table1.current_project_number is NOT >= 71
Please note this CHECK constraint is designed to make sure data like the above cannot be inserted. I'm not looking to SELECT values where current_project_number >= project_number; this is about INSERTing.
What would I need in order for such a CHECK to work? Thanks.
Defining a CHECK constraint that references another table is possible, but a seriously bad idea that will lead to problems in the future.
CHECK constraints are only validated when the table with the constraint on it is modified, not when the other table referenced in the constraint is modified. So it is possible to render the condition invalid with modifications on that second table.
In other words, PostgreSQL will not guarantee that the constraint is always valid. This can and will lead to unpleasant surprises, like a backup taken with pg_dump that can no longer be restored.
Don't go down that road.
If you need functionality like that, define a BEFORE INSERT trigger on table1 that verifies the condition and throws an exception otherwise:
CREATE FUNCTION ins_trig() RETURNS trigger
   LANGUAGE plpgsql AS
$$BEGIN
   IF EXISTS (SELECT 1 FROM table1
              WHERE NEW.project_number > current_project_number)
   THEN
      RAISE EXCEPTION 'project number must be less than or equal to values in table1';
   END IF;
   RETURN NEW;
END;$$;

CREATE TRIGGER ins_trig BEFORE INSERT ON table2
   FOR EACH ROW EXECUTE PROCEDURE ins_trig();
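For comparison, the same guard can be sketched in SQLite (verified here via Python), where the exception is spelled RAISE(ABORT, ...) inside the trigger body rather than plpgsql's RAISE EXCEPTION; the table layout follows the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (current_project_number INTEGER);
CREATE TABLE table2 (project_name TEXT, project_number INTEGER);
INSERT INTO table1 VALUES (12);

CREATE TRIGGER ins_trig BEFORE INSERT ON table2
FOR EACH ROW
WHEN NEW.project_number > (SELECT current_project_number FROM table1)
BEGIN
    SELECT RAISE(ABORT, 'project number must be less than or equal to values in table1');
END;
""")

conn.execute("INSERT INTO table2 VALUES ('Hai', 12)")  # 12 <= 12: accepted
try:
    conn.execute("INSERT INTO table2 VALUES ('Erdmännchen', 71)")  # 71 > 12
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # → True
```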

How to prevent delete query in SQL

I created a database through the Entity Framework Code First approach, and my application is ready and running live. The problem is that I did not set Cascade Delete to false at the time of creating the database.
Now if I delete any record from one table that is referenced by another table through a foreign key, all the records containing the foreign key of the deleted row are deleted from the other table.
Practical demonstration:
Let's say I have a table called Passenger:
ID  Name  CategoryID
1   ABC   1
CategoryID here is a foreign key.
Here is the Category table:
ID  Name
1   Gold
Let's say I run this query on the Category table:
delete from Category where ID = 1
Now all the records in my Passenger table are deleted. I want to restrict that. Is it possible through SQL now?
I suppose this is what you are looking for:
ALTER TRIGGER customers_del_prevent
ON dbo.customers
INSTEAD OF DELETE
AS
BEGIN
    INSERT INTO dbo.log
    VALUES ('DELETE')

    RAISERROR ('Deletions not allowed from this table (source = instead of)', 16, 1)
END
Hope this helps you. :)
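The trigger above is SQL Server syntax. The same "block the delete" idea can be sketched in a quickly testable form with SQLite via Python, using a BEFORE DELETE trigger with RAISE(ABORT, ...) in place of INSTEAD OF DELETE and RAISERROR; table and column names follow the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE Category  (ID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Passenger (ID INTEGER PRIMARY KEY, Name TEXT,
                        CategoryID INTEGER REFERENCES Category(ID) ON DELETE CASCADE);
INSERT INTO Category  VALUES (1, 'Gold');
INSERT INTO Passenger VALUES (1, 'ABC', 1);

-- Block deletes on Category while any Passenger still references the row.
CREATE TRIGGER category_del_prevent
BEFORE DELETE ON Category
FOR EACH ROW
WHEN EXISTS (SELECT 1 FROM Passenger WHERE CategoryID = OLD.ID)
BEGIN
    SELECT RAISE(ABORT, 'Deletions not allowed: category still referenced');
END;
""")

try:
    conn.execute("DELETE FROM Category WHERE ID = 1")
    blocked = False
except sqlite3.IntegrityError:
    blocked = True
print(blocked)  # → True; the Passenger row survives
```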

SQLite unique constraint on multiple column without order

I need help with a problem in my table definitions. I have a table which will be defined, for example, like this:
id, primary key
friend0_id, foreign key on table users
friend1_id, foreign key on table users
The problem is I do not want to have the couple (friend0_id, friend1_id) appear multiple times, whatever order they are in.
I tried to define a UNIQUE constraint on the couple (friend0_id, friend1_id), but the column order defined in the constraint (here friend0_id, THEN friend1_id) matters. So:
| id | friend0_id | friend1_id |
|----|------------|------------|
| 1 | 3 | 4 | -> OK
| 2 | 4 | 3 | -> OK, as the columns order in index matters
| 3 | 3 | 4 | -> Not OK, constraint prevent this
I would like ids 2 and 3 in the example to be disallowed, but I can't figure out how. Do you have some tips for me?
Thank you,
naccyde
As @mu too short mentioned, the way is to use (greatest(friend0_id, friend1_id), least(friend0_id, friend1_id)); so now I have a working, order-free, two-column unique constraint. I did it in SQLite this way (which might not be the best):
Create a trigger which sets min(friend0_id, friend1_id) as friend0_id and max(friend0_id, friend1_id) as friend1_id:
CREATE TRIGGER friend_fixed_id_order_trigger
AFTER INSERT ON friends
BEGIN
    UPDATE friends
    SET friend0_id = min(NEW.friend0_id, NEW.friend1_id),
        friend1_id = max(NEW.friend0_id, NEW.friend1_id)
    WHERE friends.id = NEW.id;
END;
Then, set a unique constraint on the couple (friend0_id, friend1_id):
CREATE UNIQUE INDEX `friends_unique_relation_index`
ON `friends` (`friend0_id`, `friend1_id`)
And it works !
EDIT: If someone needs this tip, do not forget to create an UPDATE trigger too; otherwise an update request could break the mechanism.
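The whole mechanism can be verified end-to-end with Python's sqlite3 module. This sketch uses the trigger and index from the answer (with the index created on friends); the out-of-order duplicate is caught when the AFTER INSERT trigger's normalizing UPDATE collides with the unique index, which aborts the whole INSERT:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE friends (id INTEGER PRIMARY KEY, friend0_id INTEGER, friend1_id INTEGER);

-- Normalize each new pair to (min, max) order.
CREATE TRIGGER friend_fixed_id_order_trigger
AFTER INSERT ON friends
BEGIN
    UPDATE friends
    SET friend0_id = min(NEW.friend0_id, NEW.friend1_id),
        friend1_id = max(NEW.friend0_id, NEW.friend1_id)
    WHERE friends.id = NEW.id;
END;

CREATE UNIQUE INDEX friends_unique_relation_index
ON friends (friend0_id, friend1_id);
""")

conn.execute("INSERT INTO friends (friend0_id, friend1_id) VALUES (3, 4)")
try:
    # Same couple in the opposite order: normalization makes it collide.
    conn.execute("INSERT INTO friends (friend0_id, friend1_id) VALUES (4, 3)")
    duplicate_blocked = False
except sqlite3.IntegrityError:
    duplicate_blocked = True
print(duplicate_blocked)  # → True
```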

Keeping a column in sync with another column in Postgres

I'm wondering if it's possible to have a column always kept in sync with another column in the same table.
Let this table be an example:
+------+-----------+
| name | name_copy |
+------+-----------+
| John | John |
+------+-----------+
| Mary | Mary |
+------+-----------+
I'd like to:
Be able to INSERT into this table providing a value only for the name column; the name_copy column should automatically take the value I used for name.
When UPDATE-ing the name column of a pre-existing row, name_copy should automatically update to match the new name.
Some solutions:
I could do this via application code, but that would be terribly bad, as there's no guarantee the data would always be accessed through my code (what if someone changes the data through a DB client?).
What would be a safe, reliable, and easy way to tackle this in Postgres?
You can create a trigger. Simple trigger function:
create or replace function trigger_on_example()
returns trigger language plpgsql as $$
begin
    new.name_copy := new.name;
    return new;
end
$$;
Then attach it to the table with a BEFORE trigger (table name assumed here):
create trigger trigger_on_example
before insert or update of name on my_table
for each row execute function trigger_on_example();
In Postgres 12+ there is a nice alternative in the form of generated columns.
create table my_table(
id int,
name text,
name_copy text generated always as (name) stored);
Note that a generated column cannot be written to directly.
Test both solutions in db<>fiddle.
Don't put name_copy into the table at all. One method is to compute the column in a view and read through that:
create view v_table as
select t.*, name as name_copy
from t;
That said, I don't really see a use for this.
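For a quick way to experiment with the trigger idea, here is a SQLite sketch via Python. Note that SQLite triggers cannot assign to NEW the way plpgsql can, so this variant (an adaptation, not the Postgres answer above) keeps the copy in sync with AFTER triggers instead:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE my_table (id INTEGER PRIMARY KEY, name TEXT, name_copy TEXT);

-- SQLite cannot rewrite NEW in a BEFORE trigger, so sync with AFTER triggers.
CREATE TRIGGER sync_name_insert AFTER INSERT ON my_table
BEGIN
    UPDATE my_table SET name_copy = NEW.name WHERE id = NEW.id;
END;

-- 'OF name' keeps the trigger from firing on unrelated updates (no recursion:
-- the body only touches name_copy).
CREATE TRIGGER sync_name_update AFTER UPDATE OF name ON my_table
BEGIN
    UPDATE my_table SET name_copy = NEW.name WHERE id = NEW.id;
END;
""")

conn.execute("INSERT INTO my_table (name) VALUES ('John')")
conn.execute("UPDATE my_table SET name = 'Mary' WHERE id = 1")
row = conn.execute("SELECT name, name_copy FROM my_table").fetchone()
print(row)  # → ('Mary', 'Mary')
```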