I am trying to update one row of the storeID column. The first statement below runs but does not affect any rows. The bottom two produce the following error: "The UPDATE statement conflicted with the REFERENCE constraint "Products_FK". The conflict occurred in database "group7", table "dbo.Products", column 'storeID'."
Could anyone help? Thanks!
[Screenshot of the table omitted]
UPDATE Store
SET storeID='E50'
WHERE storeID='D50'
AND storeID is Null;
UPDATE Store
SET storeID='E50'
WHERE storeID='D50'
UPDATE Store
SET storeID='E50'
WHERE storeName='A Plus Cables'
Above is the code that I have tried; nothing is being updated.
The first one can't possibly match any rows, because a single value can't equal two different things at the same time:
WHERE storeID='D50'
AND storeID is Null
So that one is kind of a moot point. You'd need to update the WHERE clause to target the record(s) you want to target.
For the latter two, the error is telling you what's wrong. You're trying to write this value to a column:
SET storeID='E50'
But the error is telling you:
That column is a foreign key to another table. (Products?)
The value you're writing isn't present in that other table.
So your options are:
Use a value that is present in the other table.
Use NULL (and update the column to allow NULL if necessary).
Remove the foreign key constraint to that other table.
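If the reference actually runs the other way (Products.storeID referencing Store.storeID, which is what the "REFERENCE constraint" wording usually means), then child rows still point at 'D50'. A minimal sketch of the first option under that assumption, with guessed column names, that moves everything over in one transaction:
BEGIN TRANSACTION;
-- Create the new parent key first (storeName is an assumed column)
INSERT INTO Store (storeID, storeName)
SELECT 'E50', storeName FROM Store WHERE storeID = 'D50';
-- Re-point the referencing rows, then remove the old parent row
UPDATE dbo.Products SET storeID = 'E50' WHERE storeID = 'D50';
DELETE FROM Store WHERE storeID = 'D50';
COMMIT;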
I'm running a DataStage job, with input from DB2 and output to DB2. The input side has a query containing joins and functions.
I'm getting the following warning message:
TRN_HEALTH_INSURANCE_DETAIL,
2: STATEMENT
INSERT
INTO
HEALTH_INSURANCE_DETAIL
(
RISK_DETAIL_ID,
RISK_COVER_ID,
RD_POLICY_SYSTEM_NO,
RD_POLICY_END_NO_IDX,
RD_POLICY_ID,
RD_LEVEL1_ID,
RD_SUM_INSURED_AMT_LC,
RD_PREMIUM_AMT_LC,
PREMIUM_AMOUNT_FC,
SUM_INSURED_AMT_FC,
RD_REC_TYPE,
RD_EFFECT_FROM_DT,
RD_EFFECT_TO_DT,
RD_END_EFFECT_FROM_DT,
SEX_MAS_CD,
MARITAL_STATUS_CD,
EMP_CATG,
NO_OF_DEPENDENTS,
EMP_AL_NO,
DOB,
EFF_DATE,
EFF_DATE2,
NAME,
RELATIONSHIP_CD_S,
RELATIONSHIP_CD,
DESIGNATION,
BRANCH,
BANK_ACCOUNT,
BANK_BRANCH_NAME,
PRE_EXISTING_AILMENT,
AUTHORITY_LETTER_NO,
AGE,
REGION,
CNIC,
CO_CODE,
EMP_LOCATION,
SUB_LOCATION,
CLH_SYSTEM_NO,
CTH_SYS_ID,
CTH_POL_SYS_ID,
CTH_END_NO_IDX,
CTH_END_SR_NO,
CTH_CATEGORY,
CLD_SYS_ID,
CLDH_SYS_ID,
CLD_COVER_CD,
CLD_END_IDX,
CLD_COVER_DESC,
CLD_CLM_TYPE_LIMIT,
CLD_CLM_REL,
CLD_CLM_AGE_FROM,
CLD_CLM_AGE_TO,
CLD_CLM_RB_LIMIT,
CLD_CATEGORY_LIMIT_FC,
CLD_CATEGORY_PREM_FC
)
VALUES(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?) failed to run.
I can't see such records in my data, and the data quality is good. So what are these question marks? I searched a bit and found a suggestion to set the array size and row count to 1 instead of the default 2000, but I'm still getting the same warning.
There are a lot of errors following this warning. The next error is also interesting.
TRN_HEALTH_INSURANCE_DETAIL,2: SQLExecute reported: SQLSTATE = 23505: Native Error Code = -803: Msg = [IBM][CLI Driver][DB2/NT64] SQL0803N One or more values in the INSERT statement, UPDATE statement, or foreign key update caused by a DELETE statement are not valid because the primary key, unique constraint or unique index identified by "1" constrains table "DB2ADMIN.HEALTH_INSURANCE_DETAIL" from having duplicate values for the index key. SQLSTATE=23505 (CC_DB2DBStatement::executeInsert, file CC_DB2DBStatement.cpp, line 1,095)
I believe the errors are due to the first warning. Kindly help me out.
Regards, Nuh
Put a Copy stage before the DB2 connector, with one link going to the DB2 connector and the other to a dataset file, so you can inspect the data in a data set. (The question marks in the logged statement are just bind-parameter placeholders, not values from your data.) The problem itself seems to be in the primary key: you have a duplicate primary key or duplicate unique index value. The duplicate can be within the data you want to insert, or the table may already contain a record that you are trying to insert again.
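As a quick check (a sketch only: SRC_HEALTH_DETAIL is a hypothetical table or view holding the rows you are loading, and RISK_DETAIL_ID stands in for whatever columns unique index "1" actually covers), you can look for both kinds of duplicates:
-- Keys duplicated within the source itself
SELECT RISK_DETAIL_ID, COUNT(*) AS cnt
FROM SRC_HEALTH_DETAIL
GROUP BY RISK_DETAIL_ID
HAVING COUNT(*) > 1;
-- Source keys that already exist in the target table
SELECT s.RISK_DETAIL_ID
FROM SRC_HEALTH_DETAIL s
JOIN DB2ADMIN.HEALTH_INSURANCE_DETAIL t
ON t.RISK_DETAIL_ID = s.RISK_DETAIL_ID;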
Looking for a way to write the following LINQ to Entities query as a T-SQL statement.
repository.ProductShells.Where(x => x.ShellMembers.Any(sm => sm.ProductID == pid)).ToList().ForEach(x => repository.ProductShells.Remove(x));
The statement below is obviously not correct, but I need it to delete the respective ProductShell objects where
any ShellMember has a ProductID equal to the passed-in variable pid. I presume this would involve a join to get the relevant ShellMembers.
repository.Database.ExecuteSqlCommand("FROM Shellmembers WHERE ProductID={0} DELETE FK_ProductShell", pid);
I have cascade delete enabled for the FK_ShellMembers_ProductShells foreign key, so when I delete a ProductShell it will delete all the ShellMembers associated with it. I am going to pass this statement to the System.Data.Entity Database.ExecuteSqlCommand method.
You should always show table structures and foreign key linkages.
However, it should look something like this.
This assumes the link between the two tables is productshell.shellid=shellmembers.shellid
delete productshell
where shellid in (
select shellid
from shellmembers
where productid={0}
)
It can also be written as a join:
delete productshell
from productshell
join shellmembers on productshell.shellid=shellmembers.shellid
where shellmembers.productid={0}
Is there a way to overwrite or skip duplicate records?
1062 - Duplicate entry '2' for key 1
Is there a way to add INSERT ... ON DUPLICATE KEY UPDATE to a SQL file that only has plain INSERT statements?
Have a look at 12.2.5.1. INSERT ... SELECT Syntax and 12.2.5. INSERT Syntax, and look for:
"Specify IGNORE to ignore rows that would cause duplicate-key violations."
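A sketch of the IGNORE form, assuming a table t with primary key id (placeholder names):
-- Rows that would violate the key are silently skipped instead of raising error 1062
INSERT IGNORE INTO t (id, name)
VALUES (2, 'example');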
Try INSERT ... ON DUPLICATE KEY UPDATE.
Often, though, people get this error because they forgot to make the id column AUTO_INCREMENT (or they insert an explicit id), so the key collides.
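A sketch of the full syntax, again with placeholder table and column names; on a duplicate key the statement falls back to an UPDATE instead of failing:
INSERT INTO t (id, name)
VALUES (2, 'example')
ON DUPLICATE KEY UPDATE name = VALUES(name);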
Assuming that all foreign keys have the appropriate constraint, is there a simple SQL statement to delete rows not referenced anywhere in the DB?
Something as simple as delete from the_table that simply skips any rows with child records?
I'm trying to avoid manually looping through the table or adding something like where the_SK not in (a,b,c,d).
You might be able to use the extended DELETE statement in 10g that includes error logging.
First, use DBMS_ERRLOG to create a logging table (which is just a copy of the original table with some additional prefix columns: ORA_ERR_MESG$, ..., ORA_ERR_TAG$):
execute dbms_errlog.create_error_log('parent', 'parent_errlog');
Now, you can use the LOG ERRORS clause of the delete statement to capture all rows that have existing integrity constraints:
delete from parent
log errors into parent_errlog ('holding-breath')
reject limit unlimited;
In this case the "holding-breath" comment will go into the ORA_ERR_TAG$ column.
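Afterwards you can query the log table to see which rows were rejected and why; a minimal sketch (parent_id is an assumed key column copied from the parent table):
SELECT parent_id, ora_err_mesg$, ora_err_tag$
FROM parent_errlog;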
You can read the full details in the Oracle documentation for the LOG ERRORS clause of DELETE.
If the parent table is huge and you're only looking to delete a few stray rows, you'll end up with a parent_errlog table that is essentially a duplicate of your parent table. If this isn't ok, you'll have to do it the long way:
Directly reference the child tables (following Tony's solution), or,
Loop through the table in PL/SQL and catch any exceptions (following Confusion's and Bob's solutions).
The easiest way may be to write an application or stored procedure that attempts to delete the rows in the table one by one and simply ignores the failures due to foreign key constraints. Afterwards, all rows not under a foreign key constraint should be removed. Depending on the required/possible performance, this may be an option.
No. Obviously you can do this (but I realise you would rather not):
delete parent
where not exists (select null from child1 where child1.parent_id = parent.parent_id)
and not exists (select null from child2 where child2.parent_id = parent.parent_id)
...
and not exists (select null from childn where childn.parent_id = parent.parent_id);
One way to do this is to write something like the following:
DECLARE
  eForeign_key_violation EXCEPTION;
  PRAGMA EXCEPTION_INIT(eForeign_key_violation, -2292);
BEGIN
  FOR aRow IN (SELECT primary_key_field FROM A_TABLE) LOOP
    BEGIN
      DELETE FROM A_TABLE A
      WHERE A.PRIMARY_KEY_FIELD = aRow.primary_key_field;
    EXCEPTION
      WHEN eForeign_key_violation THEN
        NULL; -- ORA-02292: a child record exists; skip this row
    END;
  END LOOP;
END;
/
If a child row exists the DELETE will fail and no rows will be deleted, and you can proceed to your next key.
Note that if your table is large this may take quite a while.