How to configure a reference to be deleted on a parent table update? (SQL Server)

I have two tables:
info: ID, fee_id
and
fee: ID, amount
and a reference between them (SQL Server 2008):
ALTER TABLE info WITH CHECK ADD CONSTRAINT FK_info_fee FOREIGN KEY(fee_id)
REFERENCES fee (ID)
ALTER TABLE info CHECK CONSTRAINT FK_info_fee
GO
How can I configure this reference so that a record in fee is deleted when info.fee_id becomes NULL?
EDIT: or maybe set info.fee_id to NULL when the corresponding record in fee is deleted.
Either way, I can do it manually:
UPDATE info SET fee_id = NULL WHERE ...
DELETE FROM fee WHERE ...
but I'm sure the database can do this by itself.

You probably don't want to do this. What would you expect to happen if multiple info rows referenced the same fee row?
If you really want to do something like this, adding logic to an AFTER UPDATE, DELETE trigger on the info table would probably be the way to go. Check if any other info rows reference that same fee row, and if not, delete the fee row.
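A minimal sketch of that trigger, assuming the ID/fee_id columns from the question (a starting point, not tested against your schema):
CREATE TRIGGER trg_info_fee_cleanup
ON info
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Delete fee rows that the changed info rows used to reference
    -- and that no remaining info row references any more.
    DELETE FROM fee
    WHERE fee.ID IN (SELECT d.fee_id FROM deleted d WHERE d.fee_id IS NOT NULL)
      AND NOT EXISTS (SELECT 1 FROM info i WHERE i.fee_id = fee.ID);
END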

Some thoughts:
If you have a 1:1 reference, can the two tables simply be combined?
Drilling up from child to parent is odd: if it's 1:1, can you reverse the FK direction and simply use ON DELETE SET NULL?
Otherwise you'll have to use a trigger, but assuming 1:1 makes me uneasy...
... unless you have a unique constraint/index on info.fee_id
Like so (with the FK now on fee, pointing at info):
ALTER TABLE fee WITH CHECK ADD
CONSTRAINT FK_fee_info FOREIGN KEY (id) REFERENCES info (fee_id) ON DELETE SET NULL

If you really intend to remove rows when fee_id is set to null, one way is an update trigger. In an update trigger, the deleted table contains the old version of the updated rows, and the inserted table contains the new version. By joining them, you can take action when a fee_id changes to null:
CREATE TRIGGER deleteFee
ON info
FOR UPDATE
AS
DELETE FROM fee
WHERE fee.ID IN (
    SELECT old.fee_id
    FROM deleted old
    JOIN inserted new ON old.id = new.id
    WHERE old.fee_id IS NOT NULL
      AND new.fee_id IS NULL
)
This is tricky when multiple info rows refer to the same fee: the fee is removed as soon as any one of them is set to NULL. A full-sync trigger that deletes every fee no longer referenced by any info row would avoid that:
CREATE TRIGGER deleteFee
ON info
FOR UPDATE
AS
DELETE FROM fee
WHERE NOT EXISTS (
    SELECT *
    FROM info
    WHERE fee.ID = info.fee_id
)
But this can have other unintended consequences, like deleting half the fee table in response to a single update. In this case, as in most cases, triggers add more complexity than they remove. Triggers are evil and should be avoided at almost any cost.

Related

SQL. Deleting data from a table, but maintaining the relationship

I work in PostgreSQL.
I have two tables: product and receipt. They are linked by a foreign key (the product id in the receipt table). I need to delete a product row from the product table but keep the reference intact. My idea was a kind of "virtual" product table that the receipt table would reference after the information is deleted from the main product table.
But I can't figure out how to do it. Can someone show me how?
You can't have a foreign key relationship to a non-existent row. So, do a soft delete. That is, add a column to the products table such as is_deleted.
Then don't actually delete the row; just set the column to true, like so:
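A minimal sketch (42 is just an example id):
ALTER TABLE products ADD COLUMN is_deleted boolean NOT NULL DEFAULT false;

-- "Delete" a product without breaking references from receipts:
UPDATE products SET is_deleted = true WHERE id = 42;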
It can be helpful to have a view for active products:
create view v_products as
select p.*
from products p
where not is_deleted;
EDIT:
If you want to change the referencing row in the other table instead, you can use a cascading option on the foreign key. Use ON DELETE CASCADE to remove the row in receipts, or ON DELETE SET NULL to set the referencing value to NULL. I'm not a fan of these because you lose the original data.
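For reference, those options are declared on the foreign key itself. A sketch, assuming the referencing column in receipts is product_id and you are recreating the constraint:
ALTER TABLE receipts
    DROP CONSTRAINT IF EXISTS receipts_product_fk;

ALTER TABLE receipts
    ADD CONSTRAINT receipts_product_fk
    FOREIGN KEY (product_id) REFERENCES products (id)
    ON DELETE SET NULL;  -- or ON DELETE CASCADE to remove the receipt rows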

What kind of approach is this in SQL, and does it actually exist? Is it viable/good practice?

One of our teachers gave us the following challenge:
"Make a Database schema with the following principle:
you can't change any values on any table, only add new ones."
I came with the following schema:
CREATE TABLE TRANSACTIONS(ID PRIMARY KEY, TRANSACTION_TYPE_FK, DATE);
CREATE TABLE TRANSACTION_TYPE(ID PRIMARY KEY, NAME);
CREATE TABLE PRODUCTS_TRANSACTIONS(ID_PROD_FK, ID_TRANS_FK, MONEY, QTY);
CREATE TABLE PRODUCTS(ID PRIMARY KEY, NAME, PRICE_FK );
CREATE TABLE PRICES(ID PRIMARY KEY, DATE, DETAILS);
It's just a proof of concept. Basically everything is based on transactions.
Transactions can be Entry, Exit and Move for products, and In & Out for money.
I can control my quantities and cash based on transactions.
The MONEY field in PRODUCTS_TRANSACTIONS is used if a transaction involves only money, or if there are "discounts" or "taxes" on the transaction.
The PRODUCTS table has a "child" table called PRICES; it stores all the price changes, and the DETAILS field is for annotations like "Cost Price" etc.
I made it very quickly, so I'm sorry for any inconsistency.
I liked this kind of approach. I'm kind of a newbie with SQL, so I really wanted to know whether this approach has a name, and whether it is viable performance-wise or good practice.
My idea is to make a View and "update" it whenever a new transaction is made; since nothing needs to be "updated", I only need to add new rows.
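For example, a rough sketch of such a view for quantities (only illustrative, since my schema omits types; it assumes QTY is stored signed, negative for exits):
CREATE VIEW PRODUCT_STOCK AS
SELECT p.ID, p.NAME, SUM(pt.QTY) AS QTY_ON_HAND
FROM PRODUCTS p
JOIN PRODUCTS_TRANSACTIONS pt ON pt.ID_PROD_FK = p.ID
GROUP BY p.ID, p.NAME;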
I am currently very sick, so I can't go to college to remedy my doubts.
Thanks in advance for any help
Let's take only one table TRANSACTION_TYPE(ID PRIMARY KEY, NAME) for example:
Now if you want to restrict updates on the table, you can achieve that with the following statements:
GRANT SELECT, INSERT, DELETE ON TRANSACTION_TYPE TO Username;
OR
DENY UPDATE ON TRANSACTION_TYPE TO Username;
Now, to maintain a history of insertions and deletions, you can store them in another table by creating a trigger on TRANSACTION_TYPE as follows (Oracle syntax):
CREATE OR REPLACE TRIGGER my_trigger -- name of trigger
AFTER INSERT OR DELETE
ON TRANSACTION_TYPE
FOR EACH ROW
BEGIN
    IF INSERTING THEN
        INSERT INTO TRANSACTION_INSERT_HISTORY (ID, NAME) -- table that maintains the history of insertions
        VALUES (:new.ID, :new.NAME);
    ELSIF DELETING THEN
        INSERT INTO TRANSACTION_DELETE_HISTORY (ID, NAME) -- table that maintains the history of deleted records
        VALUES (:old.ID, :old.NAME);
    END IF;
END;
/
Before creating this trigger, you first have to create the two tables TRANSACTION_INSERT_HISTORY(ID, NAME) and TRANSACTION_DELETE_HISTORY(ID, NAME).
I created two separate tables for insertion and deletion for simplicity; you could do it with one table too.
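A minimal sketch of those two tables (Oracle syntax to match the trigger; pick types that actually match TRANSACTION_TYPE):
CREATE TABLE TRANSACTION_INSERT_HISTORY (
    ID   NUMBER,
    NAME VARCHAR2(100)
);

CREATE TABLE TRANSACTION_DELETE_HISTORY (
    ID   NUMBER,
    NAME VARCHAR2(100)
);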
Hope it helps.
For the table that holds the information, you could grant only INSERT and SELECT permissions, preventing UPDATE.
https://www.mssqltips.com/sqlservertip/1138/giving-and-removing-permissions-in-sql-server/
GRANT INSERT, SELECT ON TableX TO UserY
In a production system, you'd probably design this with a VIEW that selects only the most recent revision of the audited data, and perhaps another VIEW that exposes the full audit history. You'd probably also use a Stored Procedure for inserting the data and ensuring the history is maintained in the append-only way you suggest.
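As a sketch of that idea (every name here is hypothetical, since the question doesn't fix a schema):
-- Latest revision per business key, assuming an append-only AuditedTable
CREATE VIEW v_CurrentRows AS
SELECT t.ID, t.BusinessKey, t.Payload, t.RevisionDate
FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY BusinessKey
                              ORDER BY RevisionDate DESC) AS rn
    FROM AuditedTable
) t
WHERE t.rn = 1;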

Insert Records with Violations in SQL Server

I want to insert 5000 records in the format below into a particular table.
Insert into #Table
(c1,c2,c3,c4,c5)
Values
(1,2,3,4,5),
(2,2,3,4,5),
(3,2,3,4,5),
(4,2,3,4,5),
(5,2,3,4,5)
....
....
Up to 1000 rows
When I try to execute it, I get a foreign key violation. I know the reason: one of the values does not exist in its corresponding parent table.
A few records cause this violation, and it's very hard to find them among the 1000 rows, so for now I want to insert at least the valid records into my target table and leave the violating rows aside.
I am not sure how to do this. Please suggest any ideas.
If this is a one time thing, then you can do the following:
Drop the FK constraint:
ALTER TABLE MyTable
DROP CONSTRAINT FK_Constraint
GO
Execute the INSERT.
Find the records with no matching parent id:
SELECT * FROM MyTable MT WHERE NOT EXISTS (SELECT 1 FROM ParentTable PT WHERE MT.ParentId = PT.ID)
DELETE those records or do something else with them.
Recreate the FK constraint.
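For that last step, recreating the constraint WITH CHECK validates the rows that remain (names follow the snippet above; ParentId/ID are assumptions):
ALTER TABLE MyTable
WITH CHECK ADD CONSTRAINT FK_Constraint
FOREIGN KEY (ParentId) REFERENCES ParentTable (ID)
GO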
Disable the foreign key, or fix your data.
Finding the bad data is simple: temporarily insert everything into a buffer table and run queries to find which rows have no match in the related table.
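A sketch of that buffer-table route, assuming c1 is the foreign key column and ParentTable is its parent:
-- Stage the rows in a table with no FK constraint
CREATE TABLE #Buffer (c1 INT, c2 INT, c3 INT, c4 INT, c5 INT);

INSERT INTO #Buffer (c1, c2, c3, c4, c5)
VALUES (1, 2, 3, 4, 5),
       (2, 2, 3, 4, 5); -- ... the rest of the rows

-- Move only the rows whose parent exists
INSERT INTO TargetTable (c1, c2, c3, c4, c5)
SELECT b.c1, b.c2, b.c3, b.c4, b.c5
FROM #Buffer b
WHERE EXISTS (SELECT 1 FROM ParentTable p WHERE p.ID = b.c1);

-- Inspect the rejects
SELECT *
FROM #Buffer b
WHERE NOT EXISTS (SELECT 1 FROM ParentTable p WHERE p.ID = b.c1);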

Trying to copy a table from one database to another in SQL Server 2008 R2

I am trying to copy table information from a backup dummy database to our live SQL database (an accident happened in our program, Visma Business, where someone managed to overwrite 1300 customer names), but I am having a hard time figuring out the right code. I've looked around and yes, there are several similar problems, but I just can't get this to work even though I've tried different solutions.
Here is the simple code I used last time. In theory, all I need is the equivalent of MySQL's ON DUPLICATE KEY UPDATE, which would be MERGE in SQL Server? I just don't quite know what to write to make that MERGE work.
INSERT [F0001].[dbo].[Actor]
SELECT * FROM [FDummy].[dbo].[Actor]
The error message I get with this is:
Violation of PRIMARY KEY constraint 'PK__Actor'. Cannot insert duplicate key in object 'dbo.Actor'.
What the error message says is simply "you can't insert a duplicate value into a column with a PK constraint". If you already have all the information in your backup table, what you should do is TRUNCATE TABLE, which removes all rows from a table while the table structure and its columns, constraints, indexes, and so on remain.
After that step you should follow this answer. Alternatively, I recommend a tool called Kettle, which is open source and easy to use for these kinds of data movements. That will save you a lot of work.
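In code, the truncate-and-reload route looks roughly like this (caveats: TRUNCATE fails if dbo.Actor is itself referenced by foreign keys, and SELECT * needs an explicit column list plus SET IDENTITY_INSERT ON if Actor has an identity column):
TRUNCATE TABLE [F0001].[dbo].[Actor]

INSERT INTO [F0001].[dbo].[Actor]
SELECT * FROM [FDummy].[dbo].[Actor]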
Here are the things that can be the reason:
You have multiple rows in [FDummy].[dbo].[Actor] with the same data in the column that is going to be inserted into the primary key column of [F0001].[dbo].[Actor].
There are existing rows in [F0001].[dbo].[Actor] with some value x in the primary key column, and there is/are row(s) in [FDummy].[dbo].[Actor] with the same value x in the column that is going to be inserted into the primary key column.
-- To check the first point; if it returns rows then you have a problem
SELECT ColumnGoingToBeMappedWithPK,
       COUNT(*)
FROM [FDummy].[dbo].[Actor]
GROUP BY ColumnGoingToBeMappedWithPK
HAVING COUNT(*) > 1

-- To check the second point; if the count is greater than 0 then you have a problem
SELECT COUNT(*)
FROM [FDummy].[dbo].[Actor] a
JOIN [F0001].[dbo].[Actor] b
  ON a.ColumnGoingToBeMappedWithPK = b.PrimaryKeyColumn
The MERGE statement will possibly be the best option for you here, unless the primary key of the Actor table is reused after a previous record is deleted (i.e. not autoincremented), so that, say, the record with id 13 in F0001.dbo.Actor is not the same "actor" as the one in FDummy.dbo.Actor.
To use the statement with your code, it will look something like this:
begin transaction
merge [F0001].[dbo].[Actor] as t -- the destination
using [FDummy].[dbo].[Actor] as s -- the source
on (t.[PRIMARYKEY] = s.[PRIMARYKEY]) -- update with your primary keys
when matched then
update set t.columnname1 = s.columnname1,
t.columnname2 = s.columnname2,
t.columnname3 = s.columnname3
-- repeat for all your columns that you want to update
output $action,
Inserted.*,
Deleted.*;
rollback transaction -- change to commit after testing
Further reading can be done at the sources below:
MERGE (Transact-SQL)
Inserting, Updating, and Deleting Data by Using MERGE
Using MERGE in SQL Server to insert, update and delete at the same time

SQL Server concurrency

I asked two questions at once in my last thread, and the first has been answered. I decided to mark the original thread as answered and repost the second question here. Link to original thread if anyone wants it:
Handling SQL Server concurrency issues
Suppose I have a table with a field which holds foreign keys for a second table. Initially records in the first table do not have a corresponding record in the second, so I store NULL in that field. Now at some point a user runs an operation which will generate a record in the second table and have the first table link to it. If two users simultaneously try to generate the record, a single record should be created and linked to, and the other user receives a message saying the record already exists. How do I ensure that duplicates are not created in a concurrent environment?
The steps I need to carry out are:
1) Look up x number of records in table A
2) Perform some business logic that prepares a single row which is inserted into table B
3) Update the records selected in step 1) to point to the newly created record in table B
I can use scope_identity() to retrieve the primary key of the newly created record in table B, so I don't need to worry about the new record being lost due to simultaneous transactions. However I need to eliminate the possibility of concurrently executing processes resulting in a duplicate record in table B being created.
In SQL Server 2008, this can be handled with a filtered unique index:
CREATE UNIQUE INDEX ix_MyIndexName ON MyTable (FKField) WHERE FkField IS NOT NULL
This will require all non-null values be unique, and the database will enforce it for you.
The 2005 way of simulating a unique filtered index for constraint purposes is
CREATE VIEW dbo.EnforceUnique
WITH SCHEMABINDING
AS
SELECT FkField
FROM dbo.TableB
WHERE FkField IS NOT NULL
GO
CREATE UNIQUE CLUSTERED INDEX ix ON dbo.EnforceUnique(FkField)
Connections that update the base table will need to have the correct SET options, but unless you are using non-default options this will be the case anyway in SQL Server 2005 (ARITHABORT used to be the problem one in 2000).
Using a computed column
ALTER TABLE MyTable ADD
OneNonNullOnly AS ISNULL(FkField, -PkField)
CREATE UNIQUE INDEX ix_OneNullOnly ON MyTable (OneNonNullOnly);
Assumes:
FkField is numeric
no clash of FkField and -PkField values
Decided to go with the following:
1) Begin transaction
2) UPDATE tableA SET foreignKey = -1 OUTPUT inserted.id INTO #tempTable
FROM (business logic)
WHERE foreignKey is null
3) If @@ROWCOUNT > 0 then:
3a) Create the record in table B.
3b) Capture the ID of the newly created record using scope_identity().
3c) UPDATE tableA SET foreignKey = IdOfNewRecord FROM tableA INNER JOIN #tempTable ON tableA.id = #tempTable.id
Since I write junk into the foreign key field in step 2), those rows are locked and no concurrent transaction will touch them. The first transaction is then free to create the record. After it commits, the blocked transaction executes its update query but won't capture any of the original rows, because the WHERE clause only considers NULL foreignKey fields. If no rows are affected (@@ROWCOUNT = 0), the current transaction exits without creating the record in table B and returns some sort of error message to the client (e.g. "Error: Record already exists").
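Put together, the whole thing looks roughly like this (tableB's column is a placeholder, since the business logic is elided above):
BEGIN TRANSACTION;

CREATE TABLE #tempTable (id INT);

-- Step 2: claim the rows by writing a junk FK value; this locks them
UPDATE tableA
SET foreignKey = -1
OUTPUT inserted.id INTO #tempTable
WHERE foreignKey IS NULL; -- plus whatever the business logic requires

IF @@ROWCOUNT > 0
BEGIN
    -- Steps 3a/3b: create the single tableB record and capture its id
    INSERT INTO tableB (someColumn) VALUES ('example'); -- hypothetical column
    DECLARE @newId INT = SCOPE_IDENTITY();

    -- Step 3c: point the claimed rows at the new record
    UPDATE a
    SET a.foreignKey = @newId
    FROM tableA a
    INNER JOIN #tempTable t ON a.id = t.id;

    COMMIT TRANSACTION;
END
ELSE
BEGIN
    ROLLBACK TRANSACTION;
    RAISERROR('Record already exists', 16, 1);
END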