SQL DELETE appears to work but the changes are not persisted

I have created a table of Cards that so far contains 10 rows. If I execute:
delete from [Card] where cardnumber = 00010 -- this card being held in the last row
I get the message (1 row(s) affected), which suggests the delete worked; however, upon opening the table I find that the row is still there.
Next I used Edit Top 200 Rows in SSMS to select the row manually and delete it, upon which it is removed from the table, but when I execute the query again, the row is back in the table.
Finally, I created a stored procedure that reports success when executed, but the changes are still not being kept.
What is strange, however, is that I have a CardAmount table with a relationship to the Card table that holds points per card, and the cards are being removed from CardAmount.
I have had a look at this question, but I don't think that was ever resolved.

First guess (which turned out to be correct) would be an INSTEAD OF DELETE trigger that forgot to actually delete from the table it's meant to be performing the delete for.
This is a common error, especially if you're implementing some far-reaching form of cascade (and can't just use ON DELETE CASCADE on the foreign key constraint), where you have to use an INSTEAD OF trigger but then neglect to perform the DELETE on the underlying table.
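For illustration, a minimal sketch of that failure mode, using the Card and CardAmount names from the question (the cardnumber column on CardAmount is an assumption); the final DELETE against [Card] is the statement that is easy to forget:

CREATE TRIGGER tr_Card_Delete ON [Card]
INSTEAD OF DELETE
AS
BEGIN
    -- cascade work: remove the dependent rows first
    DELETE FROM CardAmount
    WHERE cardnumber IN (SELECT cardnumber FROM deleted);

    -- without this statement the row in [Card] is never removed, which matches
    -- the symptom in the question: CardAmount rows disappear but Card rows remain
    DELETE FROM [Card]
    WHERE cardnumber IN (SELECT cardnumber FROM deleted);
END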

Related

Ordered DELETE of records in self-referencing table

I need to delete a subset of records from a self-referencing table. The subset will always be self-contained (that is, records will only have references to other records in the subset being deleted, not to any records that will still exist when the statement is complete).
My understanding is that this might cause an error if one of the records is deleted before the record referencing it is deleted.
First question: does postgres do this operation one-record-at-a-time, or as a whole transaction? Maybe I don't have to worry about this problem?
Second question: is the order of deletion of records consistent or predictable?
I am obviously able to write specific SQL to delete these records without any errors, but my ultimate goal is to write a regression test to show the next person after me why I wrote it that way. I want to set up the test data so that a simplistic DELETE statement will consistently fail because of records referencing the same table. That way, if someone else messes with the SQL later, the test suite will tell them it was written that way for a reason.
Anyone have any insight?
EDIT: just to clarify, I'm not trying to work out how to delete the records safely (that's simple enough). I'm trying to figure out what set of circumstances will cause such a DELETE statement to consistently fail.
EDIT 2: Abbreviated answer for future readers: this is not a problem. By default, postgres checks the constraints at the end of each statement (not per-record, not per-transaction). Confirmed in the docs here: http://www.postgresql.org/docs/current/static/sql-set-constraints.html And by the SQLFiddle here: http://sqlfiddle.com/#!15/11b8d/1
In standard SQL, and I believe PostgreSQL follows this, each statement should be processed "as if" all changes occur at the same time, in parallel.
So the following code works:
CREATE TABLE T (ID1 int NOT NULL PRIMARY KEY, ID2 int NOT NULL REFERENCES T(ID1));
INSERT INTO T (ID1, ID2) VALUES (1,2), (2,1), (3,3);
DELETE FROM T WHERE ID2 IN (1,2);
Where we've got circular references involved in both the INSERT and the DELETE, and yet it works just fine.
A single DELETE with a WHERE clause matching a set of records will delete those records in an implementation-defined order. This order may change based on query planner decisions, statistics, etc. No ordering guarantees are made. Just like SELECT without ORDER BY. The DELETE executes in its own transaction if not wrapped in an explicit transaction, so it'll succeed or fail as a unit.
To force order of deletion in PostgreSQL you must do one DELETE per record. You can wrap them in an explicit transaction to reduce the overhead of doing this and to make sure they all happen or none happen.
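A minimal sketch of that pattern, with a hypothetical table and purely illustrative ids, deleting the referencing rows before the rows they point at:

BEGIN;
DELETE FROM t WHERE id = 3;  -- rows that reference others go first
DELETE FROM t WHERE id = 2;
DELETE FROM t WHERE id = 1;  -- the referenced row goes last
COMMIT;                      -- either all of these happen or none do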
PostgreSQL can check foreign keys at three different points:
The default, NOT DEFERRABLE: checks for each row as the row is inserted/updated/deleted
DEFERRABLE INITIALLY IMMEDIATE: the same by default, but SET CONSTRAINTS DEFERRED switches it to checking at the end of the transaction, and SET CONSTRAINTS IMMEDIATE switches it back
DEFERRABLE INITIALLY DEFERRED: checks all rows at the end of the transaction
In your case, I'd define your FOREIGN KEY constraint as DEFERRABLE INITIALLY IMMEDIATE, and do a SET CONSTRAINTS DEFERRED before deleting.
(Actually if I vaguely recall correctly, despite the name IMMEDIATE, DEFERRABLE INITIALLY IMMEDIATE actually runs the check at the end of the statement instead of the default of after each row change. So if you delete the whole set in a single DELETE the checks will then succeed. I'll need to double check).
(The mildly insane meaning of DEFERRABLE is IIRC defined by the SQL standard, along with gems like a TIMESTAMP WITH TIME ZONE that doesn't have a time zone).
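A sketch of the deferrable-constraint suggestion above; the table definition and ids are illustrative only:

CREATE TABLE node (
    id        int PRIMARY KEY,
    parent_id int REFERENCES node(id) DEFERRABLE INITIALLY IMMEDIATE
);

BEGIN;
SET CONSTRAINTS ALL DEFERRED;   -- or name the FK constraint explicitly
DELETE FROM node WHERE id = 1;  -- order within the transaction no longer matters
DELETE FROM node WHERE id = 2;
COMMIT;                         -- the FK is checked here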
If you issue a single DELETE that affects multiple records (like delete from x where id>100), that will be handled as a single transaction and either all will succeed or fail. If multiple DELETEs, you have to put them in a transaction yourself.
There will be problems. If you have a constraint with DELETE CASCADE, you might delete more than you want with a single DELETE. If you don't, the integrity check might stop you from deleting. Constraints other than NO ACTION are not deferrable, so you'd have to disable the constraint before delete and enable it afterwards (basically drop/create, which might be slow).
If you have multiple DELETEs, then the order is as the DELETE statements are sent. If a single DELETE, the database will delete in the order it happens to find them (index, oids, something else...).
So I would also suggest thinking about the logic and maybe handling the deletes differently. Can you elaborate more on the actual logic? A tree in database?
1) It will be handled as a single transaction if enclosed within BEGIN/COMMIT. Otherwise, in general, no.
For more see http://www.postgresql.org/docs/current/static/tutorial-transactions.html
The answer in general to your question depends on how is self-referencing implemented.
If it is within application logic, it is solely your responsibility to check the things yourself.
Otherwise, it is in general possible to restrict or cascade deletes for rows with foreign keys using ON DELETE RESTRICT or ON DELETE CASCADE. However, as far as the PG docs go, I understand we are talking about referencing columns in other tables; I'm not sure if same-table foreign keys are supported:
http://www.postgresql.org/docs/current/static/ddl-constraints.html#DDL-CONSTRAINTS-FK
2) In general, the order of deletion will be the order in which you issue the delete statements. If you want them all to be "uninterruptible", with no other statements modifying the table in between, enclose them in a transaction.
As a warning, I may be wrong, but what you seem to be trying to do should not be done. You should not have to rely on some esoteric "order of deletion" or other undocumented and/or implicit features of the database. The underlying logic does not seem sound; there should be another way.

Trigger "instead of delete", can it work?

ALTER TRIGGER tr_sal ON salaries
INSTEAD OF DELETE
AS
declare @id int
set @id = (select id_teacher from deleted)
if @id in (select ID_teacher from techers)
BEGIN
    print 'not able !!'
END
ELSE
BEGIN
    delete from salaries where id_teacher = @id
END
delete from salaries
where id_teacher = ??? --(write id)
What I wanted to do:
If the id belongs to a teacher, you cannot delete his/her salary; otherwise you can.
I run this and get both:
Not able to delete without allowance!!
(1 row(s) affected)
Is my trigger working or not?
You have two separate DELETE statements. The first one is probably not executing, but the second (the one outside any BEGIN/END pair) is, and that's what's deleting the row.
Remove the second delete from salaries completely, along with the where on the next line.
Your trigger is poorly designed. Never, under any circumstances, assume there is only one record in the deleted (or inserted) pseudo-table. You should never assign an id to a scalar variable in a trigger: if 20 records were deleted, only one of them will be affected by the rest of the trigger's code. Triggers in SQL Server are set-based; they do not run row by row. This is the first thing you need to fix. And do not put PRINT statements in a trigger; they are useless. You also need to work out how the trigger should behave when some of the records should be deleted and others should not.
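For example, a set-based rewrite might look like the sketch below (table names spelled as in the question; whether rows with a matching teacher should block the whole delete or simply be skipped is a design decision the asker still has to make):

ALTER TRIGGER tr_sal ON salaries
INSTEAD OF DELETE
AS
BEGIN
    -- delete only those rows being deleted whose teacher is not present in techers;
    -- rows that do match a teacher are simply skipped
    DELETE s
    FROM salaries AS s
    JOIN deleted AS d ON d.id_teacher = s.id_teacher
    WHERE NOT EXISTS (SELECT 1 FROM techers AS t WHERE t.ID_teacher = s.id_teacher);
    -- note: this keys on id_teacher, as the original trigger does; if salaries
    -- has its own primary key, join deleted to salaries on that key instead
END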
Now to know whether your trigger is working you need to unit test.
Write a delete statement to delete from the table. Make sure it deletes multiple records and that you know, from a select statement, exactly which records you expect to delete and which you do not.
Run it
Check to see if the action you expected from the trigger happened and if the records were or were not deleted.
You should know how to do this and not have to go to the Internet to figure out if your own code worked.
Your logic is a bit odd: usually you don't want to delete a parent record if a child record exists, not the other way around (which is what an FK enforces). You need to delete the child records first to be able to delete the parent (unless you use cascading deletes, which is a horrible practice; even then you might run into this problem, and you should test your trigger to see what happens when you delete a teacher). So your trigger is likely to cause problems in maintaining the data even if it did what you intended.
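A minimal sketch of the usual child-first order, using the table names as spelled in the question and a purely illustrative teacher id:

DELETE FROM salaries WHERE id_teacher = 42;  -- child rows first
DELETE FROM techers  WHERE ID_teacher = 42;  -- then the parent row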

Locking table in postgresql

I have a table named 'games', which contains a column named 'title'. This column is unique. The database is PostgreSQL.
I have a user input form that allows the user to insert a new 'game' into the 'games' table. The function that inserts a new game checks whether a previously entered 'game' with the same 'title' already exists; for this, I get the count of rows with the same game 'title'.
I use transactions for this: the insert function starts with BEGIN, gets the row count, inserts the new row if the count is 0, and COMMITs the changes once the process is completed.
The problem is that two games with the same title, if submitted by users at the same time, could both be inserted, since I only check the row count for duplicate records and the two transactions are isolated from each other.
I thought of locking the tables when getting the row count as:
LOCK TABLE games IN ACCESS EXCLUSIVE MODE;
SELECT count(id) FROM games WHERE games.title = 'new_game_title'
This would lock the table for reading too (which means the other transaction would have to wait until the current one completes successfully). I suspect this would solve the problem, but is there a better way around it (avoiding duplicate games with the same title)?
You should NOT need to lock your tables in this situation.
Instead, you can use one of the following approaches:
Define a UNIQUE index (or constraint) on the column that really must be unique. In this case, the first transaction will succeed and the second will error out (see the sketch after this answer).
Define an AFTER INSERT OR UPDATE OR DELETE trigger that checks your condition and RAISEs an error if it does not hold, which will abort the offending transaction.
In all these cases, your client code should be ready to properly handle possible failures (like failed transactions) that could be returned by executing your statements.
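A minimal sketch of the first suggestion; the constraint name is illustrative and the other columns of games are omitted for brevity:

ALTER TABLE games ADD CONSTRAINT games_title_key UNIQUE (title);

INSERT INTO games (title) VALUES ('new_game_title');  -- first insert succeeds
INSERT INTO games (title) VALUES ('new_game_title');  -- second one fails with a unique_violation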
Using the highest transaction isolation level (Serializable) you can achieve something similar to what you describe. But be aware that this may fail with: ERROR: could not serialize access due to concurrent update
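A sketch of that approach (retry logic on serialization failure is left to the client):

BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT count(id) FROM games WHERE title = 'new_game_title';
-- insert only if the count above was 0
INSERT INTO games (title) VALUES ('new_game_title');
COMMIT;
-- if two sessions run this concurrently, one of them aborts with the
-- serialization error quoted above and must retry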
I do not agree with the constraint approach entirely. You should have a constraint to protect data integrity, but relying on the constraint forces you to identify not only that an error occurred, but which constraint caused it. The trouble is not catching the error, as some have discussed, but identifying what caused it and providing a human-readable reason for the failure. Depending on which language your application is written in, this can be next to impossible, e.g. telling the user "Game title [foo] already exists" instead of "game must have a price" for a separate constraint.
There is a single statement alternative to your two stage approach:
INSERT INTO games ( title, [other columns]... )
SELECT 'new_game_title', [other values]...
WHERE NOT EXISTS ( SELECT 1 FROM games AS g2 WHERE g2.title = 'new_game_title' );
I want to be clear with this... this is not an alternative to having a unique constraint (which requires extra data for the index). You must have one to protect your data from corruption.

Output Parameters used with update CASCADE issue, when using data adapters

I've manually created some data adapters (using the auto-generated ones is not viable due to version incompatibilities) for a dataset made of a number of tables with the usual mixture of PK and FK constraints. Most of it is working pretty smoothly so far, but after modifying the adapters to use DB sequences for the PKs (instead of the temporary ones assigned to the rows in the dataset) when updating the DB with new 'added' rows, I've been hitting problems.
I added the sequences into the insert statements and changed the PK parameter to an output parameter, so that it updates the dataset row and thus updates all the child rows as well (with the UPDATE CASCADE rule). The problem is that the child rows that had a row state of Added before the update are changed to a state of Modified (I don't agree this should even be happening; surely an added row should stay as Added even if it's modified!). Thus, when we get round to updating the child table with the child rows, it fails, as it is expecting rows with the Added state.
What’s the cleanest way I could get around this problem? Potential solutions I can think of:
Turning off UPDATE CASCADE and, after updating the parent's PK, updating each child row manually and changing it back to Added after modifying it.
Creating a copy of all the added rows in all tables of the dataset before starting updates, then after each table's updates, updating the main copy and marking the correct rows back to Added.
Any better ideas?

Adding a constraint or otherwise preventing the removal of a record in a SQL DB table

I have a table in a SQL Server database that contains a row I never want deleted. This is a row required by the application to work as designed.
Is there a way to add a constraint to that row that prevents it from being deleted? Or another way to handle this scenario?
Here is an example of using a FOR DELETE trigger to prevent the deletion of a row when a certain condition is satisfied:
CREATE TRIGGER KeepImportantRow ON MyTable
FOR DELETE
AS BEGIN
    -- This next check assumes that your important table has a
    -- column called id, and your important row has an id of 0.
    -- Adjust accordingly for your situation.
    IF EXISTS (SELECT * FROM deleted WHERE id = 0) BEGIN
        RAISERROR('Cannot delete important row!', 16, 1)
        ROLLBACK TRAN
    END
END
If you want to prevent accidental deletes then you could have a dummy table that declares a foreign key into your table with ON DELETE NO ACTION, and add one row to it with the foreign key matching your 'precious' row's primary key. This way, if the 'parent' row is deleted, the engine will refuse and raise an error.
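A hedged sketch of that dummy-table idea; all names are illustrative, and the 'precious' row is assumed to have id = 0 as in the trigger example above:

CREATE TABLE MyTableProtector (
    id int NOT NULL,
    CONSTRAINT FK_MyTableProtector_MyTable
        FOREIGN KEY (id) REFERENCES MyTable (id) ON DELETE NO ACTION
);

INSERT INTO MyTableProtector (id) VALUES (0);
-- DELETE FROM MyTable WHERE id = 0 now fails with a foreign key violation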
If you want to prevent intentional deletes then you should rely on security (deny DELETE permission on the table). Of course, privileged users that have the required permission can delete the row; there is no way to prevent that, nor should you try. Since SQL Server does not support row-level security, if you need to deny only certain rows then you have to go back to the drawing board and change your table layout so that all rows that must not be deleted are stored in one table, and rows that are allowed to be deleted are stored in a different table.
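A minimal sketch of the permissions approach; the role name is illustrative:

DENY DELETE ON dbo.MyTable TO OrdinaryUsers;
-- members of OrdinaryUsers can no longer delete any rows from MyTable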
Other solutions (like triggers) will ultimately be a variation on these themes; what you really must decide is whether you want to prevent accidental deletes (solvable) or intentional deletes (unsolvable: it is their database, not yours).
You could do it in a number of ways, although it depends on the situation.
If the table only contains that row, do not grant deletion / truncate privileges.
If the table contains other rows as well you could use a before deletion trigger.
One issue you will have is that someone with DBA / SA access to the database can get around anything you put in, if they so desire. So decide what you are trying to protect against: the casual user, or anyone.