I am trying to run a DELETE that triggers a recursive (self-referencing) foreign key, which doesn't have an index. The query is very slow.
I've been searching for a while, and it seems my options are:
add an index on the FK -- not ideal, because write speed for this table is very important and is already not great
disable the trigger for the session -- again not ideal, because the change is exposed to other transactions; I'd prefer something that applies only to an isolated transaction so others are not affected
extend the trigger -- this one I'm curious about. Is it possible to store a local variable with set_config which the trigger then checks, i.e. if the variable is true, run the trigger, otherwise don't? Something like this answer: https://stackoverflow.com/a/62010745/7530306
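To illustrate what I have in mind (the function and setting names are made up, and I realize this would only apply to a trigger I define myself, not to the internal trigger that enforces the FK):

CREATE OR REPLACE FUNCTION history_cleanup() RETURNS trigger AS $$
BEGIN
    -- skip the expensive work when the transaction-local flag is set
    IF current_setting('myapp.skip_cleanup', true) = 'on' THEN
        RETURN OLD;
    END IF;
    -- ... normal trigger body ...
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

-- inside the transaction that should skip it:
-- SELECT set_config('myapp.skip_cleanup', 'on', true);  -- true = local to this transaction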
You can change the parameter session_replication_role to replica, then only replica triggers will fire, and foreign key constraints won't be checked. That requires superuser permissions, because it endangers the integrity of the database.
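If you do go that route, SET LOCAL at least confines the change to a single transaction (table name is just an example):

BEGIN;
SET LOCAL session_replication_role = replica;  -- reverts automatically at COMMIT/ROLLBACK
DELETE FROM some_table WHERE id = 42;          -- FK triggers do not fire during this statement
COMMIT;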
I don't see your point. If you disable the foreign key, why keep it around at all? If you are not ready to pay the price, do without referential integrity.
My advice is:
If you need to delete rows frequently, create the index. The risk of violating the constraint by repeatedly disabling it is too high.
If this is a one-time affair, accept the sequential scan on the referencing table.
I have a fairly simple design, as follows:
What I want to achieve in my grouping_individual_history table is the following:
when a session is deleted, I want to cascade delete the history rows
when a grouping is deleted, I just want the referencing column to be set to NULL
It seems that MSSQL will not allow me to have more than one FK that does anything other than NO ACTION. It complains with:
Introducing FOREIGN KEY constraint 'FK_grouping_individual_history_grouping' on table 'grouping_individual_history' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints.
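For reference, the two constraints look roughly like this (column names simplified from my actual schema):

ALTER TABLE grouping_individual_history
    ADD CONSTRAINT FK_grouping_individual_history_session
    FOREIGN KEY (fkSessionID) REFERENCES [session] (sessionID)
    ON DELETE CASCADE;          -- delete the history when the session goes

ALTER TABLE grouping_individual_history
    ADD CONSTRAINT FK_grouping_individual_history_grouping
    FOREIGN KEY (fkGroupingID) REFERENCES [grouping] (groupingID)
    ON DELETE SET NULL;         -- this is the one SQL Server rejects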
I've already read this post (https://www.mssqltips.com/sqlservertip/2733/solving-the-sql-server-multiple-cascade-path-issue-with-a-trigger/), although it's not quite the same scenario it seems to me.
I've tried doing an INSTEAD OF DELETE trigger on my grouping table, but it won't accept it, because my grouping table in turn has another FK (fkSessionID) that does a cascade delete. So the fix would be to change everything in all affected tables with FKs; the chain is long, though, and we cannot consider it.
For one thing, can someone explain to me why SQL Server is giving me the issue for this very simple scenario in the first place? I just don't understand it.
Is there another workaround I could use (besides just removing the foreign key link from my grouping_individual_history table)?
I have a main table with many associated tables linked to it using an "id" foreign key.
I need to update a row in this main table.
Instead of updating all the fields of the row one by one, it would be easier for me to simply delete the whole row and recreate it with the new values (keeping the original primary key!).
Is there a way, inside a transaction, to delete such a row that has foreign key constraints, if the row is recreated with the same primary key before the transaction is actually committed?
I tried it, and it doesn't seem to work...
Is there something I can do to achieve that other than dropping the constraints before my DELETE operation? Some kind of lock?
No.
Without dropping/disabling the constraint, SQL Server will enforce the relationship and prevent you from deleting the referenced row.
It is possible to disable the constraint, but when you re-enable it you incur the overhead of SQL Server verifying EVERY reference to that key before it will consider the relationship trusted again.
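For completeness, the disable/re-enable cycle looks like this (table and constraint names are placeholders); note the WITH CHECK on re-enable, which is where the full re-validation cost is paid:

ALTER TABLE dbo.ChildTable NOCHECK CONSTRAINT FK_ChildTable_ParentTable;

-- delete and re-insert the parent row here, inside the same transaction

ALTER TABLE dbo.ChildTable WITH CHECK CHECK CONSTRAINT FK_ChildTable_ParentTable;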
You are much better off taking the time to develop a separate update/upsert function than to incur that additional processing overhead every time you need to change a record.
You could alter the foreign key to use a CASCADE DELETE, but that has its own overhead and baggage.
I use foreign keys at work. But we pretty much manage our tables manually, and we always make sure there is a parent entry in the other table for every child entry that references it by its Id. We insert, update and delete the parent and child entities in the same transaction.
So why should we still keep those foreign keys? They slow the database down when inserting new entities in the database and may be one of the reasons we get deadlocks from time to time.
Are they actually used by SQL Server for other things, like gathering better statistics, or is their only purpose to keep data integrity?
You shouldn't drop the constraints or their foreign keys.
Checks at the database level are the last integrity barrier protecting your data.
For performance reasons you might want to remove foreign keys, but you may end up having to maintain a partially corrupted DB, which ends up being a nightmare.
Can foreign keys improve performance?
Foreign key constraints improve performance when reading data, but at the same time they slow down inserts, updates and deletes.
When reading, the optimizer can use foreign key constraints to create more efficient query plans, since the constraints are pre-declared rules. This usually involves skipping some part of the query plan, because, for example, the optimizer can see that thanks to a foreign key constraint it is unnecessary to execute that particular part of the plan.
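A rough illustration of that last point (hypothetical tables; in SQL Server this requires the constraint to be trusted and the FK column to be NOT NULL):

-- Orders.CustomerID references Customers.CustomerID (trusted FK, NOT NULL).
-- No column from Customers is used, so the optimizer can eliminate the join entirely:
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;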
Morning all,
I'm doing a lot of work to drag a database (SQL Server 2005, in 2000 compatibility mode) kicking and screaming towards having a sane design.
At the moment, all the tables' primary keys are nvarchar(32), and are set using uniqId() (oddly, this gets run through a special hashing function, no idea why)
So in several phases, I'm making some fundamental changes:
Introducing ID_int columns to each table, auto increment and primary key
Adding some extra indexing, removing unused indexes, dropping unused columns
This phase has worked well so far, test db seems a bit faster, total index sizes for each table are MUCH smaller.
My problem is with the next phase: foreign keys. I need to be able to set these INT foreign keys on insert in the other tables.
There are several applications pointing at this DB, only one of which I have much control over. It also contains many stored procs and triggers.
I can't physically make all the changes needed in one go.
So what I'd like to be able to do is add the integer FKs to each table and have them automatically set to the right thing on insert.
To illustrate this with an example:
Two tables, Call and POD, linked by POD.Call_ID -> Call.Call_ID. This is an nvarchar(32) field.
I've altered Call such that Call_ID_int is an identity (auto-increment) primary key. I need to add POD.Call_ID_int such that, on insert, it gets the right value from Call.Call_ID_int.
I'm sure I could do this with a BEFORE trigger, but I'd rather avoid this for maintenance and speed reasons.
I thought I could do this with a constraint, but after much research found I can't. I tried this:
alter table POD
add constraint
pf_callIdInt
default([dbo].[map_Call_ID_int](Call_ID))
for Call_ID_int
Where the map_Call_ID_int function takes the Call_ID and returns the right Call_ID_int, but I get this error:
The name "Call_ID" is not permitted in this context. Valid expressions
are constants, constant expressions, and (in some contexts) variables.
Column names are not permitted.
Any ideas how I can achieve this?
Thanks very much in advance!
-Oli
Triggers are the easiest way.
You'll have odd concurrency issues with defaults based on UDFs too (like you would for CHECK constraints).
Another trick is to use views to hide schema changes, still with triggers to intercept DML. Your "old" table then no longer exists except as a view on the "new" table, and a write to the "old" table/view actually happens on the new table.
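If you do go the trigger route on POD, a minimal sketch could look like this (trigger name invented; SQL Server has no BEFORE triggers, so an AFTER trigger that back-fills the new column is the usual substitute):

CREATE TRIGGER trg_POD_SetCallIdInt ON POD
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- back-fill the int key by looking it up from Call via the old nvarchar key
    UPDATE p
    SET p.Call_ID_int = c.Call_ID_int
    FROM POD AS p
    JOIN Call AS c ON c.Call_ID = p.Call_ID
    WHERE p.Call_ID IN (SELECT Call_ID FROM inserted);
END;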
Can anyone provide a clear explanation / example of what these functions do, and when it's appropriate to use them?
Straight from the manual...
We know that the foreign keys disallow creation of orders that do not relate to any products. But what if a product is removed after an order is created that references it? SQL allows you to handle that as well. Intuitively, we have a few options:
Disallow deleting a referenced product
Delete the orders as well
Something else?
CREATE TABLE order_items (
    product_no integer REFERENCES products ON DELETE RESTRICT,
    order_id integer REFERENCES orders ON DELETE CASCADE,
    quantity integer,
    PRIMARY KEY (product_no, order_id)
);
Restricting and cascading deletes are the two most common options. RESTRICT prevents deletion of a referenced row. NO ACTION means that if any referencing rows still exist when the constraint is checked, an error is raised; this is the default behavior if you do not specify anything. (The essential difference between these two choices is that NO ACTION allows the check to be deferred until later in the transaction, whereas RESTRICT does not.) CASCADE specifies that when a referenced row is deleted, row(s) referencing it should be automatically deleted as well. There are two other options: SET NULL and SET DEFAULT. These cause the referencing columns to be set to nulls or default values, respectively, when the referenced row is deleted. Note that these do not excuse you from observing any constraints. For example, if an action specifies SET DEFAULT but the default value would not satisfy the foreign key, the operation will fail.
Analogous to ON DELETE there is also ON UPDATE which is invoked when a referenced column is changed (updated). The possible actions are the same.
edit: You might want to take a look at this related question: When/Why to use Cascading in SQL Server?. The concepts behind the question/answers are the same.
I have a PostgreSQL database, and I use ON DELETE when I delete a user from the database and need to delete their information from other tables. This way I only need to issue one DELETE, and the FKs with ON DELETE remove the related rows from the other tables.
You can do the same with ON UPDATE: if the referenced column is changed and the FK specifies an ON UPDATE action, the change is propagated to the referencing table.
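A minimal sketch of that setup (table and column names invented):

CREATE TABLE users (
    user_id integer PRIMARY KEY,
    name    text
);

CREATE TABLE user_addresses (
    address_id integer PRIMARY KEY,
    user_id    integer NOT NULL
               REFERENCES users
               ON DELETE CASCADE    -- deleting the user removes the addresses too
               ON UPDATE CASCADE    -- renumbering users.user_id propagates here
);

-- one statement now cleans up both tables:
-- DELETE FROM users WHERE user_id = 1;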
What Daok says is true... it can be rather convenient. On the other hand, having things happen automagically in the database can be a real problem, especially when it comes to eliminating data. It's possible that in the future someone will count on the fact that FKs usually prevent deletion of parents when there are children and not realize that your use of On Delete Cascade not only doesn't prevent deletion, it makes huge amounts of data in dozens of other tables go away thanks to a waterfall of cascading deletes.
@Arthur's comment:
The more frequently "hidden" things happen in the database, the less likely it becomes that anyone will ever have a good handle on what is going on. Triggers (and this is essentially a trigger) can cause my simple action of deleting a row to have wide-ranging consequences throughout my database. I issue a DELETE statement and 17 tables are affected by cascades of triggers and constraints, and none of this is immediately apparent to the issuer of the command. OTOH, if I place the deletion of the parent and all its children in a procedure, then it is very easy and clear for anyone to see EXACTLY what is going to happen when I issue the command.
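A sketch of that explicit alternative, reusing the tables from the manual excerpt above (function name invented):

CREATE OR REPLACE FUNCTION delete_order(p_order_id integer) RETURNS void AS $$
BEGIN
    DELETE FROM order_items WHERE order_id = p_order_id;  -- children first
    DELETE FROM orders      WHERE order_id = p_order_id;  -- then the parent
END;
$$ LANGUAGE plpgsql;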
It has absolutely nothing to do with how well I design a database. It has everything to do with the operational issues introduced by triggers.
Instead of writing a method to do all the work of a cascade delete or cascade update, you could simply write a warning message instead. That is a lot easier than reinventing the wheel, and it makes the behavior clear to the client (and to new developers picking up the code).