Using the example from the PostgreSQL documentation, where orders.product_no is a foreign key referencing products.product_no, consider the following updates on the "child table" orders:
1. UPDATE orders SET product_no = 'new_no', quantity = 'new_quantity';
2. UPDATE orders SET quantity = 'new_quantity';
Is the product_no foreign key constraint checked for these two commands? Intuitively, I'd guess the answer is yes for 1 and no for 2. However, I couldn't find documentation that explicitly says so.
The motivation for this question is that we have a large table with some foreign key constraints, and its rows are frequently updated (without touching the constrained columns). We were wondering whether dropping the foreign key constraints would help speed up the updates.
Never, never drop a foreign key for a measly few milliseconds of shorter run time. FKs are how you maintain data integrity, which is vastly more important than a reduced runtime. And in this case it will not make a difference anyway: PostgreSQL only fires the foreign key check when the referencing column is actually modified.
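A quick way to see this for yourself (a sketch against the documentation's schema; orders_product_no_fkey is the default name PostgreSQL would generate for the constraint and is an assumption here): EXPLAIN ANALYZE reports the time spent in the internal foreign key triggers, so a command that fires the check shows a trigger line and one that skips it does not.

EXPLAIN ANALYZE UPDATE orders SET quantity = 5 WHERE order_id = 1;
-- no "Trigger for constraint orders_product_no_fkey" line appears:
-- product_no was not modified, so the check is skipped

EXPLAIN ANALYZE UPDATE orders SET product_no = 2 WHERE order_id = 1;
-- the output now ends with a line like:
--   Trigger for constraint orders_product_no_fkey: time=... calls=1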
Related
Just starting to learn the basics of SQL. Some SQL dialects (Oracle, SQL Server, etc.) have ENABLE/DISABLE constraint keywords. What is the difference between these and the ADD/DROP constraint keywords? Why do we need them?
Constraint validation has a performance penalty when performing a DML operation. It's common to disable a constraint before a bulk insert/import of data (especially if you know that the data is "OK"), and then enable it after the bulk operation is done.
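In Oracle, for example, that workflow looks like this (a sketch; the table and constraint names are illustrative):

ALTER TABLE orders DISABLE CONSTRAINT orders_customer_fk;
-- bulk insert/import runs here without per-row validation
ALTER TABLE orders ENABLE CONSTRAINT orders_customer_fk;
-- ENABLE revalidates the existing rows and fails if any of them violate the constraint

DROP, by contrast, removes the constraint definition entirely, and ADD has to recreate it from scratch; DISABLE/ENABLE keeps the definition in place and only toggles enforcement.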
I use disabled constraints in a special situation. I have an application with many tables (around 1000). The records in these tables have "natural keys", i.e. identifiers and relations given by an external source. Some tables even use different natural keys as foreign key references to different tables.
But I like to use common surrogate keys as primary keys and for foreign key references.
Here is one example:

CREATE TABLE T_BTS (
    OBJ_ID NUMBER CONSTRAINT BTS_PK PRIMARY KEY,
    BTS_ID VARCHAR2(20) CONSTRAINT BTS_UK UNIQUE
    -- some more columns
);

CREATE TABLE T_CELL (
    OBJ_ID        NUMBER CONSTRAINT CELL_PK PRIMARY KEY,
    OBJ_ID_PARENT NUMBER,
    BTS_ID        VARCHAR2(20),
    CELL_ID       VARCHAR2(20),
    -- some more columns
    CONSTRAINT CELL_UK UNIQUE (BTS_ID, CELL_ID)
);

ALTER TABLE T_CELL ADD CONSTRAINT CELL_PARENT_FK
    FOREIGN KEY (OBJ_ID_PARENT)
    REFERENCES T_BTS (OBJ_ID);

ALTER TABLE T_CELL ADD CONSTRAINT CELL_PARENT
    FOREIGN KEY (BTS_ID)
    REFERENCES T_BTS (BTS_ID) DISABLE;
In all my tables the primary key column is always OBJ_ID and the key to the parent table is always OBJ_ID_PARENT, no matter how the natural key is defined. This makes it easier for me to have common PL/SQL procedures and compose dynamic SQL statements.
One example: in order to set OBJ_ID_PARENT after an insert, the following update would be needed:
UPDATE T_CELL cell
SET OBJ_ID_PARENT =
    (SELECT bts.OBJ_ID
     FROM T_BTS bts
     WHERE cell.BTS_ID = bts.BTS_ID);
I am too lazy to write 1000+ such individual statements. By using the data dictionary views USER_CONSTRAINTS and USER_CONS_COLUMNS I am able to link the natural keys and the surrogate keys, and I can execute these updates via dynamic SQL.
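A query along these lines (a sketch; the views and the 'R' type code are Oracle's data dictionary, the aliases are mine) lists each foreign key's referencing column and parent table, which is enough to generate those updates:

SELECT fk.table_name,
       col.column_name,
       parent.table_name AS parent_table
FROM   user_constraints  fk
JOIN   user_cons_columns col    ON col.constraint_name = fk.constraint_name
JOIN   user_constraints  parent ON parent.constraint_name = fk.r_constraint_name
WHERE  fk.constraint_type = 'R'; -- 'R' marks referential (foreign key) constraints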
All my keys and references are purely defined by constraints. I don't need to maintain any extra table where I track relations or column names. The only limitation in my application design is that I have to follow a certain naming convention for the constraints. But in return, almost no maintenance is required to keep the data consistent, and performance is good.
In order to use all of the above, some constraints need to be disabled - even permanently.
I [almost] never disable constraints during the normal operation of the application. The point of the constraints is to preserve data quality.
Now, during maintenance, I can disable them temporarily while adding or removing massive amounts of data. Once the data is loaded, I make sure they are enabled again before restarting the application.
My assumption is that each foreign key added to a table also adds a check, ensuring that values inserted into the foreign key column come from the set of values in the table where that key is the primary key.
This would imply that a table with more foreign keys takes longer to insert a row into. Is this correct?
I am using Microsoft SQL Server 2014.
Yes. Foreign key relationships are checked when data is inserted or modified in the table.
The foreign key must reference a primary key or a unique key. This guarantees that an index is available for the check.
In general, looking up the value in the index should be pretty fast. Faster than the other things that are going on in an insert, such as finding a free page for the data and logging the data.
However, validating the foreign key is going to add some overhead.
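A minimal illustration (SQL Server syntax; the table names are illustrative): the second insert into the child table fails because the lookup in the referenced key's index finds no matching row.

CREATE TABLE dbo.Parent (id INT PRIMARY KEY);
CREATE TABLE dbo.Child (
    id        INT PRIMARY KEY,
    parent_id INT FOREIGN KEY REFERENCES dbo.Parent (id)
);

INSERT INTO dbo.Parent (id) VALUES (1);
INSERT INTO dbo.Child (id, parent_id) VALUES (10, 1); -- succeeds: 1 exists in Parent
INSERT INTO dbo.Child (id, parent_id) VALUES (11, 2); -- fails with a foreign key violation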
Don't mix up foreign keys and checks - they are two different constraint types. A CHECK constraint validates a row against an expression, while a FOREIGN KEY validates a value against the rows of the referenced table (and note that both let NULLs through, unless the column itself is declared NOT NULL).
When rows are inserted or updated, the database executes a series of steps, e.g. checking the existence of tables and columns and verifying privileges. Where you have a foreign key, the engine must also verify the constraint before inserting or updating the data - it's an additional step to execute.
That said, I have never experienced a situation where a foreign key painfully slowed down database operations.
In other words, if I have two foreign key constraints on the same column, will both constraints have to be met, or just one, in order to successfully add the record?
If you have several constraints defined on a table, then when an operation happens on the table, ALL constraints need to be met. Please note that this applies to ALL types of constraints, not only the foreign key constraints you originally asked about:
UNIQUE
NOT NULL
CHECK
FOREIGN KEY
See the SQLite documentation for more information about table and column constraints.
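A small demonstration of the "all must be met" rule (a sketch with illustrative names; note that SQLite only enforces foreign keys with PRAGMA foreign_keys = ON):

PRAGMA foreign_keys = ON;

CREATE TABLE a (id INTEGER PRIMARY KEY);
CREATE TABLE b (id INTEGER PRIMARY KEY);
CREATE TABLE child (
    x INTEGER,
    FOREIGN KEY (x) REFERENCES a (id),
    FOREIGN KEY (x) REFERENCES b (id)
);

INSERT INTO a VALUES (1);
INSERT INTO child VALUES (1); -- fails: 1 exists in a but not yet in b
INSERT INTO b VALUES (1);
INSERT INTO child VALUES (1); -- succeeds: both foreign keys are now satisfied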
Does anyone know if there is a quicker way of editing a record that has foreign keys in a table (in SQL Server)? I will explain: I have approximately 5 tables that each have their own ID but are linked together using a foreign key.
Hence I needed to change the foreign key (the contract number in my case), but I had to copy each record to a new record and edit it that way,
as if I try to edit the contract number directly it gives me the standard error about the value being associated and violating a foreign key.
Surely there must be a better way?
Any ideas?
Are you talking about changing the PK and then updating all the FKs? In that case, enable cascading updates and this will be done automagically.
The same goes for deletes: you enable cascading deletes.
ON DELETE CASCADE
Specifies that if an attempt is made to delete a row with a key referenced by foreign keys in existing rows in other tables, all rows containing those foreign keys are also deleted. If cascading referential actions have also been defined on the target tables, the specified cascading actions are also taken for the rows deleted from those tables.
ON UPDATE CASCADE
Specifies that if an attempt is made to update a key value in a row, where the key value is referenced by foreign keys in existing rows in other tables, all of the foreign key values are also updated to the new value specified for the key. If cascading referential actions have also been defined on the target tables, the specified cascading actions are also taken for the key values updated in those tables.
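Declared on the foreign key, that looks as follows (a sketch; the table and constraint names are illustrative, and an existing constraint would first have to be dropped and re-added with the cascade options):

ALTER TABLE dbo.OrderLine
    ADD CONSTRAINT FK_OrderLine_Contract
    FOREIGN KEY (ContractNumber)
    REFERENCES dbo.Contract (ContractNumber)
    ON UPDATE CASCADE
    ON DELETE CASCADE;

-- this single update now also rewrites ContractNumber in dbo.OrderLine:
UPDATE dbo.Contract SET ContractNumber = 'C-2001' WHERE ContractNumber = 'C-1001';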
I'm not an SQL expert, but can't you set something like ON UPDATE CASCADE to automatically update the foreign key when the primary key is changed?
Or try disabling the integrity constraint, making your changes, and then re-enabling the constraint. If you didn't do it right, you will get an error at that point (you can't enable a constraint that would be violated).
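In SQL Server that is done with NOCHECK and WITH CHECK (a sketch, reusing the illustrative names from above):

ALTER TABLE dbo.OrderLine NOCHECK CONSTRAINT FK_OrderLine_Contract;
-- make the changes to both tables here
ALTER TABLE dbo.OrderLine WITH CHECK CHECK CONSTRAINT FK_OrderLine_Contract;
-- the re-enable revalidates existing rows and fails if any of them violate the constraint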
For instance, suppose I have table A. Then I have tables B-Z that have a foreign key to table A's primary key. Then perhaps there are also some tables that have a foreign key constraint to the primary key of a table in B-Z. Is there any easy way to clear out table A and all of the tables that refer to A (or that refer to a table that refers to A) without having to explicitly delete from each table or add an ON DELETE CASCADE action to each foreign key?
Note that this is mainly for testing purposes, not to be used in production. I would just drop the entire schema and start over again, but that simply isn't feasible for every test (considering how long it takes to build the schema).
I think the most efficient way to do this would be to drop all the FKs, truncate the tables, and then rebuild the FKs.
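Assuming SQL Server, the drop statements don't have to be written by hand; a query like this (a sketch; dbo.A stands in for the root table) generates them from the catalog views:

SELECT 'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(fk.parent_object_id))
     + '.' + QUOTENAME(OBJECT_NAME(fk.parent_object_id))
     + ' DROP CONSTRAINT ' + QUOTENAME(fk.name) + ';'
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID('dbo.A');

Script the matching ADD CONSTRAINT statements before dropping anything, run the drops, TRUNCATE TABLE each table, and then replay the ADDs to rebuild the keys.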