Trigger "instead of delete", can it work? - sql

ALTER Trigger tr_sal On salaries
Instead of delete
AS
declare @id int
set @id=(select id_teacher from deleted)
if @id in (select ID_teacher from teachers)
BEGIN
print 'not able !!'
END
ELSE
BEGIN
delete from salaries where id_teacher=@id
END
delete from salaries
where id_teacher=???--(write id)
What I wanted to do:
If the id belongs to a teacher, his/her salary cannot be deleted; otherwise it can be deleted.
When I run this I get both of the following:
Not able to delete without allowance!!
(1 row(s) affected)
Is my trigger working or not?

You have two separate DELETE statements in the trigger. The first one (inside the ELSE block) is probably not executing, but the second one, which sits outside any BEGIN/END pair, always runs, and that's what's deleting the row.
Remove that second delete from salaries completely, along with the where clause on the next line.

Your trigger is poorly designed. Never, under any circumstances, assume there is only one record in the deleted (or inserted) pseudotable, and never assign an id from it to a scalar variable. If 20 records were deleted, only one of them would be affected by the rest of the trigger's code. Triggers are set-based in SQL Server; they do not run row by row. This is the first thing you need to fix. Do not put print statements in a trigger either; they are useless. You also need to think through how the trigger should behave when some of the records should be deleted and others should not.
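As a rough illustration only (using the column names from your post), a set-based version might look something like this; it silently skips the protected rows instead of printing:
ALTER TRIGGER tr_sal ON salaries
INSTEAD OF DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Handle however many rows are in the deleted pseudotable at once:
    -- remove only the salary rows whose id_teacher is not found in teachers.
    DELETE s
    FROM salaries s
    INNER JOIN deleted d ON d.id_teacher = s.id_teacher
    WHERE NOT EXISTS (SELECT 1 FROM teachers t WHERE t.ID_teacher = d.id_teacher);
END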
Now to know whether your trigger is working you need to unit test.
Write a delete statement to delete from the table. Make sure it deletes multiple records and that you know from a select statement exactly which records you expect to delete and which you do not.
Run it
Check to see if the action you expected from the trigger happened and if the records were or were not deleted.
You should know how to do this and not have to go to the Internet to figure out if your own code worked.
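For instance, a rough test script might look like this (the id values are placeholders; pick ids where you know which ones belong to teachers):
-- Before: confirm which rows exist and which ids belong to teachers
SELECT id_teacher FROM salaries WHERE id_teacher IN (101, 102, 103);

-- Delete several rows at once; suppose 101 is a teacher and 102, 103 are not
DELETE FROM salaries WHERE id_teacher IN (101, 102, 103);

-- After: 101 should still be there, 102 and 103 should be gone
SELECT id_teacher FROM salaries WHERE id_teacher IN (101, 102, 103);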
Your logic is a bit odd; usually you don't want to delete a parent record while a child record exists, not the other way around (which is what an FK enforces). You need to delete the child records first to be able to delete the parent (unless you use cascade delete, which is a horrible practice; even then you might run into this problem, and you should test your trigger to see what happens when you delete a teacher). So your trigger is likely to cause problems in maintaining the data even if it did what you intended.

Related

On delete cascade with 2 different tables

I have the following case: if I delete a row from DashboardKpi or DashboardGrid, I would like the corresponding record in ComponentProperty to be deleted as well.
Is this possible? So far, I have only been able to do it the other way around: if I delete a ComponentProperty, the corresponding DashboardKpi or DashboardGrid gets deleted, but this is definitely not what I want.
ER Diagram
Any suggestions on how I can do this?
Delete cascade won't help you. As you noted, when you delete a row with cascade, it deletes the rows in the other tables that reference the row being deleted, and only then the original row itself.
The reason for this is that rows in ComponentProperty can exist without DashboardKpi or DashboardGrid, but rows in DashboardKpi or DashboardGrid (if they reference ComponentProperty) cannot, because they depend on ComponentProperty.
You could solve your problem in different ways depending on your DBMS. Common to most of them is to use procedures or triggers. If you use PostgreSQL, you can use an ON DELETE rule as well.
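If you go the trigger route in PostgreSQL, for example, a rough sketch could look something like this (the column name componentproperty_id is an assumption; adapt it to your actual schema and repeat the trigger for DashboardGrid):
CREATE FUNCTION delete_component_property() RETURNS trigger AS $$
BEGIN
    -- Remove the ComponentProperty row the deleted dashboard row pointed to.
    -- If another row still references it, the FK will stop this delete.
    DELETE FROM ComponentProperty WHERE id = OLD.componentproperty_id;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER dashboardkpi_delete_componentproperty
AFTER DELETE ON DashboardKpi
FOR EACH ROW EXECUTE PROCEDURE delete_component_property();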

Ordered DELETE of records in self-referencing table

I need to delete a subset of records from a self referencing table. The subset will always be self contained (that is, records will only have references to other records in the subset being deleted, not to any records that will still exist when the statement is complete).
My understanding is that this might cause an error if one of the records is deleted before the record referencing it is deleted.
First question: does postgres do this operation one-record-at-a-time, or as a whole transaction? Maybe I don't have to worry about this problem?
Second question: is the order of deletion of records consistent or predictable?
I am obviously able to write specific SQL to delete these records without any errors, but my ultimate goal is to write a regression test to show the next person after me why I wrote it that way. I want to set up the test data in such a way that a simplistic delete statement will consistently fail because of the records referencing the same table. That way if someone else messes with the SQL later, they'll get notified by the test suite that I wrote it that way for a reason.
Anyone have any insight?
EDIT: just to clarify, I'm not trying to work out how to delete the records safely (that's simple enough). I'm trying to figure out what set of circumstances will cause such a DELETE statement to consistently fail.
EDIT 2: Abbreviated answer for future readers: this is not a problem. By default, Postgres checks the constraints at the end of each statement (not per record, not per transaction). Confirmed in the docs here: http://www.postgresql.org/docs/current/static/sql-set-constraints.html and by the SQLFiddle here: http://sqlfiddle.com/#!15/11b8d/1
In standard SQL, and I believe PostgreSQL follows this, each statement should be processed "as if" all changes occur at the same time, in parallel.
So the following code works:
CREATE TABLE T (ID1 int not null primary key,ID2 int not null references T(ID1));
INSERT INTO T(ID1,ID2) VALUES (1,2),(2,1),(3,3);
DELETE FROM T WHERE ID2 in (1,2);
Where we've got circular references involved in both the INSERT and the DELETE, and yet it works just fine.
fiddle
A single DELETE with a WHERE clause matching a set of records will delete those records in an implementation-defined order. This order may change based on query planner decisions, statistics, etc. No ordering guarantees are made. Just like SELECT without ORDER BY. The DELETE executes in its own transaction if not wrapped in an explicit transaction, so it'll succeed or fail as a unit.
To force order of deletion in PostgreSQL you must do one DELETE per record. You can wrap them in an explicit transaction to reduce the overhead of doing this and to make sure they all happen or none happen.
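A minimal sketch of that, with made-up table and id values:
BEGIN;
-- Each DELETE runs in the order written; nothing is visible to other
-- sessions until COMMIT, and a failure rolls back all of them.
DELETE FROM my_table WHERE id = 42;   -- child first
DELETE FROM my_table WHERE id = 17;   -- then the row it referenced
COMMIT;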
PostgreSQL can check foreign keys at three different points:
The default, NOT DEFERRABLE: checks for each row as the row is inserted/updated/deleted
DEFERRABLE INITIALLY IMMEDIATE: Same, but it can be switched with SET CONSTRAINTS ... DEFERRED to check at the end of the transaction instead, and back with SET CONSTRAINTS ... IMMEDIATE
DEFERRABLE INITIALLY DEFERRED: checks all rows at the end of the transaction
In your case, I'd define your FOREIGN KEY constraint as DEFERRABLE INITIALLY IMMEDIATE, and do a SET CONSTRAINTS DEFERRED before deleting.
(Actually if I vaguely recall correctly, despite the name IMMEDIATE, DEFERRABLE INITIALLY IMMEDIATE actually runs the check at the end of the statement instead of the default of after each row change. So if you delete the whole set in a single DELETE the checks will then succeed. I'll need to double check).
(The mildly insane meaning of DEFERRABLE is IIRC defined by the SQL standard, along with gems like a TIMESTAMP WITH TIME ZONE that doesn't have a time zone).
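A sketch of that setup, using a made-up self-referencing table:
CREATE TABLE node (
    id        int PRIMARY KEY,
    parent_id int REFERENCES node (id) DEFERRABLE INITIALLY IMMEDIATE
);

BEGIN;
SET CONSTRAINTS ALL DEFERRED;        -- or name the FK constraint explicitly
DELETE FROM node WHERE id IN (1, 2, 3);
COMMIT;                              -- the FK is checked here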
If you issue a single DELETE that affects multiple records (like delete from x where id > 100), it will be handled as a single transaction: either all of them are deleted or none are. If you issue multiple DELETEs, you have to put them in a transaction yourself.
There will be problems. If you have a constraint with DELETE CASCADE, you might delete more than you want with a single DELETE. If you don't, the integrity check might stop you from deleting. Constraints other than NO ACTION are not deferrable, so you'd have to disable the constraint before delete and enable it afterwards (basically drop/create, which might be slow).
If you have multiple DELETEs, then the order is as the DELETE statements are sent. If a single DELETE, the database will delete in the order it happens to find them (index, oids, something else...).
So I would also suggest thinking about the logic and maybe handling the deletes differently. Can you elaborate more on the actual logic? A tree in the database?
1) It will be done as a single transaction if the statements are enclosed within BEGIN/COMMIT; otherwise, in general, no (though a single statement is always its own transaction).
For more see http://www.postgresql.org/docs/current/static/tutorial-transactions.html
In general, the answer to your question depends on how the self-referencing is implemented.
If it is within application logic, it is solely your responsibility to check things yourself.
Otherwise, it is in general possible to restrict or cascade deletes for rows with foreign keys and ON DELETE CASCADE. However, as far as the PG docs go, I understand they are talking about referencing columns in other tables; I am not sure whether same-table foreign keys are supported:
http://www.postgresql.org/docs/current/static/ddl-constraints.html#DDL-CONSTRAINTS-FK
2) In general, the order of deletion will be the order in which you issue the delete statements. If you want them all to be "uninterruptible", with no other statements modifying the table in between, enclose them in a transaction.
As a warning, I may be wrong, but what you seem to be trying to do should not be done. You should not have to rely on some esoteric "order of deletion" or other undocumented and/or implicit features of the database. The underlying logic does not seem sound; there should be another way.

SQL DELETE appears to work but then does not implement changes

I have created a table of Cards that so far contains 10 rows. if I execute:
delete from [Card] where cardnumber = 00010 --this card being held in the last row
I get the resulting message (1 row(s) affected), which I would expect means it worked; however, upon opening the table I find that the row is still there.
Next I used Edit Top 200 Rows in SSMS to select the row manually and delete it, upon which it is removed from the table, but when I execute, the row is back in the table.
Finally, I created a stored procedure that returns a success when executed, but the changes are still not being kept.
What is strange, however, is that I have a CardAmount table (which holds points per card) with a relationship to the Card table, and the cards are being removed from CardAmount.
I have had a look at this question, but I don't think that was ever resolved.
First guess (which turned out to be correct) would be an instead of delete trigger that forgot to actually affect the table it's meant to be performing the delete for.
This is a common error, especially if you're implementing some far-reaching form of cascade (one where you can't just use ON DELETE CASCADE on the foreign key constraint), where you have to use an INSTEAD OF trigger but then neglect to perform the DELETE on the underlying table.
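In outline, the fix is to make the INSTEAD OF trigger perform the delete it intercepted; something along these lines (the trigger name is made up, and cardnumber is used as the join key only because it appears in the question):
ALTER TRIGGER tr_Card_Delete ON [Card]
INSTEAD OF DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- ... any cleanup of related tables such as CardAmount goes here ...
    -- Then actually delete the targeted rows from the base table:
    DELETE c
    FROM [Card] c
    INNER JOIN deleted d ON d.cardnumber = c.cardnumber;
END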

Adding a constraint or otherwise preventing the removal of a record in a SQL DB table

I have a table in a SQL Server database that contains a row I never want deleted. This is a row required by the application to work as designed.
Is there a way to add a constraint to that row that prevents it from being deleted? Or another way to handle this scenario?
Here is an example of using a FOR DELETE trigger to prevent the deletion of a row when a certain condition is satisfied:
CREATE TRIGGER KeepImportantRow ON MyTable
FOR DELETE
AS BEGIN
-- This next line assumes that your important table has a
-- column called id, and your important row has an id of 0.
-- Adjust accordingly for your situation.
IF EXISTS (SELECT * FROM DELETED WHERE id = 0) BEGIN
RAISERROR('Cannot delete important row!', 16, 1)
ROLLBACK TRAN
END
END
If you want to prevent accidental deletes then you could have a dummy table that declares a foreign key into your table with ON DELETE NO ACTION, and add one row to it with the foreign key matching your 'precious' row's primary key. This way, when someone tries to delete the 'parent' row, the engine will refuse and raise an error.
If you want to prevent intentional deletes then you should rely on security (deny DELETE permission on the table). Of course, privileged users that have the required permission can delete the row; there is no way to prevent that, nor should you try. Since SQL Server does not support row-level security, if you need to deny only certain rows then you have to go back to the drawing board and change your table layout so that all rows that must not be deleted are stored in one table, and rows that are allowed to be deleted are stored in a different table.
Other solutions (like triggers) will ultimately be a variation on these themes. What you really must decide is whether you want to prevent accidental deletes (solvable) or intentional deletes (unsolvable; it is their database, not yours).
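A minimal sketch of the dummy-table idea, assuming your table is called MyTable with an int primary key id and the precious row has id = 0:
CREATE TABLE MyTableProtectedRows (
    id int NOT NULL PRIMARY KEY,
    CONSTRAINT FK_ProtectRow FOREIGN KEY (id)
        REFERENCES MyTable (id) ON DELETE NO ACTION
);

INSERT INTO MyTableProtectedRows (id) VALUES (0);
-- From now on, DELETE FROM MyTable WHERE id = 0 fails with an FK violation.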
You could do it in a number of ways, although it depends on the situation.
If the table only contains that row, do not grant deletion / truncate privileges.
If the table contains other rows as well you could use a before deletion trigger.
One issue you will have is that someone with DBA / SA access to the database can get around anything you put in if they desire, so consider what you are trying to protect against: a casual user, or anyone?

How to prevent deletion of the first row in table (PostgreSQL)?

Is it possible to prevent deletion of the first row in table on PostgreSQL side?
I have a category table and I want to prevent deletion of the default category, as it could break the application. Of course I could easily do it in application code, but it would be a lot better to do it in the database.
I think it has something to do with rules on delete statement, but I couldn't find anything remotely close to my problem in documentation.
You were right to think of the rules system. Here is an example matching your problem. It's even simpler than the triggers:
create rule protect_first_entry_update as
on update to your_table
where old.id = your_id
do instead nothing;
create rule protect_first_entry_delete as
on delete to your_table
where old.id = your_id
do instead nothing;
Some answers miss one point: updating of the protected row also has to be restricted. Otherwise one could first update the protected row so that it no longer fulfills the forbidden-delete criterion, and then delete the updated row, as it is no longer protected.
You want to define a BEFORE DELETE trigger on the table. When you attempt to delete the row (either match by PK or have a separate "protect" boolean column), RAISE an exception.
I'm not familiar with PostgreSQL syntax, but it looks like this is how you'd do it:
CREATE FUNCTION check_del_cat() RETURNS trigger AS $check_del_cat$
BEGIN
IF OLD.ID = 1 /*substitute primary key value for your row*/ THEN
RAISE EXCEPTION 'cannot delete default category';
END IF;
RETURN OLD; -- allow the delete for all other rows
END;
$check_del_cat$ LANGUAGE plpgsql;
CREATE TRIGGER check_del_cat BEFORE DELETE ON categories /*table name*/
FOR EACH ROW EXECUTE PROCEDURE check_del_cat();
The best way I see to accomplish this is by creating a delete trigger on this table. Basically, you'll have to write a stored procedure to make sure that the 'default' category always exists, and then enforce it with a trigger on the DELETE event of this table. A good way to do this is to create a per-row trigger that guarantees that, on DELETE events, the 'default' category row is never deleted.
Please check out PostgreSQL's documentation about triggers and stored procedures:
http://www.postgresql.org/docs/8.3/interactive/trigger-definition.html
http://www.postgresql.org/docs/8.3/interactive/plpgsql.html
There are also valuable examples in this wiki:
http://wiki.postgresql.org/wiki/A_Brief_Real-world_Trigger_Example
You could have a row in another table (called defaults) referencing the default category. The FK constraint would not let the deletion of the default category happen.
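For example (the table names and the id of the default category are placeholders):
CREATE TABLE category_defaults (
    category_id int PRIMARY KEY REFERENCES category (id)  -- default action is NO ACTION
);
INSERT INTO category_defaults (category_id) VALUES (1);
-- Deleting category 1 now fails with a foreign key violation.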
Keep in mind how triggers work: they will fire for every row your delete statement deletes. This doesn't mean you shouldn't use triggers; just keep it in mind, and most importantly, test your usage scenarios and make sure performance meets the requirements.
Should I use a rule or a trigger?
From the official docs:
"For the things that can be implemented by both, which is best depends on the usage of the database. A trigger is fired for any affected row once. A rule manipulates the query or generates an additional query. So if many rows are affected in one statement, a rule issuing one extra command is likely to be faster than a trigger that is called for every single row and must execute its operations many times. However, the trigger approach is conceptually far simpler than the rule approach, and is easier for novices to get right."
See the docs for details.
http://www.postgresql.org/docs/8.3/interactive/rules-triggers.html