How to check for a clustered unique key while inserting into a SQL table

I am trying to insert rows from a table in another database into a new database table, and I get the error below if there is no WHERE condition in the query.
Violation of UNIQUE KEY constraint 'NK_LkupxViolations'. Cannot insert duplicate key in object 'dbo.LkupxViolation'. The duplicate key value is (00000000-0000-0000-0000-000000000000, (Not Specified)).
Then I wrote the query below with WHERE conditions added; it worked, but it didn't insert the expected number of rows.
IF EXISTS(SELECT 1 FROM sys.tables WHERE name = 'LkupxViolation')
BEGIN
    INSERT INTO dbo.[LkupxViolation]
    SELECT * FROM [DMO_DB].[dbo].[LkupxViolation]
    WHERE CGRootId NOT IN (SELECT CGRootId FROM dbo.[LkupxViolation])
      AND Name NOT IN (SELECT Name FROM dbo.[LkupxViolation])
END
ELSE
    PRINT 'LkupxViolation table does not exist'
The unique key in the table is created as:
CONSTRAINT [NK_LkupxViolations] UNIQUE CLUSTERED
(
[CGRootId] ASC,
[Name] ASC
)

Try using NOT EXISTS:
INSERT INTO dbo.[LkupxViolation]
SELECT *
FROM [DMO_DB].[dbo].[LkupxViolation] remote_l
WHERE NOT EXISTS (SELECT 1
FROM dbo.[LkupxViolation] local_l
WHERE local_l.Name = remote_l.Name AND
local_l.CGRootId = remote_l.CGRootId
);
This checks both values on the same row. In addition, NOT IN is not NULL-safe: if any value returned by the subquery is NULL, then all rows are filtered out.
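A minimal illustration of the NULL issue, using throwaway inline values rather than the real tables:
-- "x NOT IN (1, NULL)" evaluates to UNKNOWN rather than TRUE, so no rows come back
SELECT v.x
FROM (VALUES (1), (2), (3)) AS v(x)
WHERE v.x NOT IN (SELECT s.n FROM (VALUES (1), (NULL)) AS s(n));
-- returns 0 rows; the NOT EXISTS form would return 2 and 3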

Related

Check for uniqueness of column in postgres table

I need to ensure that the values in a column from a table are unique as part of a larger process.
I'm aware of the UNIQUE constraint, but I'm wondering if there is a better way to do the check.
I'm running the queries using psycopg2 so adding that tag on the off chance there's something in there that can help with this.
If the column is unique I can add a constraint. If the column is not unique adding the constraint will return an error.
If there is already a constraint of the same name, a useful error is returned; in this case I would prefer to just check for the existing constraint.
If the column is the primary key, the unique constraint can be added without error but in this case it would be preferable to just recognize that the column must be unique based on the primary key.
Code examples of this below.
DROP TABLE IF EXISTS unique_test;
CREATE TABLE unique_test (
pkey INT PRIMARY KEY,
unique_yes CHAR(1),
unique_no CHAR(1)
);
INSERT INTO unique_test (pkey, unique_yes, unique_no)
VALUES(1, 'a', 'a'),
(2, 'b', 'a');
CREATE UNIQUE INDEX CONCURRENTLY u_test_1 ON unique_test (unique_yes);
ALTER TABLE unique_test
ADD CONSTRAINT unique_target_1
UNIQUE USING INDEX u_test_1;
-- the above runs no problem
-- check what happens when column is not unique
CREATE UNIQUE INDEX CONCURRENTLY u_test_2 ON unique_test (unique_no);
ALTER TABLE unique_test
ADD CONSTRAINT unique_target_2
UNIQUE USING INDEX u_test_2;
-- returns:
-- SQL Error [23505]: ERROR: could not create unique index "u_test_2"
-- Detail: Key (unique_no)=(a) is duplicated.
CREATE UNIQUE INDEX CONCURRENTLY u_test_1 ON unique_test (unique_yes);
ALTER TABLE unique_test
ADD CONSTRAINT unique_target_1
UNIQUE USING INDEX u_test_1;
-- returns
-- SQL Error [42P07]: ERROR: relation "unique_target_1" already exists
-- test what happens when adding a constraint to the primary key column
CREATE UNIQUE INDEX CONCURRENTLY u_test_pkey ON unique_test (pkey);
ALTER TABLE unique_test
ADD CONSTRAINT unique_target_pkey
UNIQUE USING INDEX u_test_pkey;
-- this runs no problem but is inefficient.
If all you want to do is verify that values are unique, then use a query:
select unique_no, count(*)
from unique_test
group by unique_no
having count(*) > 1;
If it needs to be boolean output:
select not exists (
select unique_no, count(*)
from unique_test
group by unique_no
having count(*) > 1
);
If you just want a flag, you can use:
select count(*) <> count(distinct unique_no) as duplicate_flag
from unique_test;
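If you also want to detect up front whether a unique constraint or primary key already covers the column (as the question mentions), one option is to query the catalogs. A rough sketch against pg_index, assuming the unique_test table above and checking the single column unique_yes:
-- one row per unique or primary-key index whose only column is unique_yes
SELECT i.indexrelid::regclass AS index_name,
       i.indisprimary         AS is_primary_key
FROM pg_index i
JOIN pg_attribute a
  ON a.attrelid = i.indrelid
 AND a.attnum = ANY (i.indkey)
WHERE i.indrelid = 'unique_test'::regclass
  AND (i.indisunique OR i.indisprimary)
  AND i.indnatts = 1          -- single-column indexes only
  AND a.attname = 'unique_yes';
A row with is_primary_key = true tells you the column is already unique by virtue of the primary key, so no extra constraint is needed.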
DELETE FROM zoo x
USING zoo y
WHERE x.animal_id < y.animal_id
  AND x.animal = y.animal;
I think this is simpler for removing the duplicate rows; see https://kb.objectrocket.com/postgresql/delete-duplicate-rows-in-postgresql-762 for reference.

Check constraint to prevent 2 or more rows from having numeric value of 1

I have a SQL table with a column called [applied]. Only one row out of all rows can be applied (have the value 1); all other rows should have the value 0.
Is there a check constraint that I can write to enforce such a case?
If you use null instead of 0, it will be much easier.
Have a CHECK constraint to make sure the (non-null) value = 1. Also have a UNIQUE constraint to only allow a single value 1.
create table testtable (
id int primary key,
applied int,
constraint applied_unique unique (applied),
constraint applied_eq_1 check (applied = 1)
);
Core ANSI SQL, i.e. expected to work with any database.
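A quick illustration of how that behaves, assuming a database where a UNIQUE constraint allows multiple NULLs (PostgreSQL, Oracle, MySQL):
insert into testtable (id, applied) values (1, 1);    -- ok: the single applied row
insert into testtable (id, applied) values (2, null); -- ok: check skips NULL, unique allows repeated NULLs
insert into testtable (id, applied) values (3, null); -- ok
insert into testtable (id, applied) values (4, 1);    -- fails: violates applied_unique
insert into testtable (id, applied) values (5, 0);    -- fails: violates applied_eq_1
The notable exception is SQL Server, where a plain UNIQUE constraint permits only one NULL; there the filtered-index approach below is the better fit.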
Most databases support filtered indexes:
create unique index unq_t_applied on t(applied) where applied = 1;
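For example (a hypothetical table t; the index line repeats the one above so the snippet is self-contained):
create table t (id int primary key, applied int not null);
create unique index unq_t_applied on t(applied) where applied = 1;

insert into t (id, applied) values (1, 0), (2, 0); -- ok: rows with 0 fall outside the filter
insert into t (id, applied) values (3, 1);         -- ok: the first applied row
insert into t (id, applied) values (4, 1);         -- fails: duplicate key in unq_t_applied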
To say exactly how to write a trigger that will help you, we'd need to know which database you're using.
You will need a trigger where this will be your test control:
SELECT COUNT(APPLIED)
FROM TEST
WHERE APPLIED = 1
If it is > 0 then do not allow the insert; otherwise allow it.
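For example, on SQL Server that control could live in an AFTER trigger, which checks after the statement rather than before it (so the threshold becomes > 1 rather than > 0). A sketch only, using the TEST and APPLIED names from above:
CREATE TRIGGER TR_TEST_SingleApplied
ON TEST
AFTER INSERT, UPDATE
AS
BEGIN
    -- if the statement left more than one row with APPLIED = 1, undo it
    IF (SELECT COUNT(*) FROM TEST WHERE APPLIED = 1) > 1
    BEGIN
        ROLLBACK TRANSACTION;
        RAISERROR ('Only one row may have APPLIED = 1.', 16, 1);
    END
END;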
While this can be done with triggers and constraints, they probably require an index. Instead, consider a join table.
create table things_applied (
id smallint primary key default 1,
thing_id bigint references things(id) not null,
check(id = 1)
);
Because the primary key is unique, there can only ever be one row.
The first thing is activated with an insert.
insert into things_applied (thing_id) values (1);
Change it by updating the row.
update things_applied set thing_id = 2;
To deactivate completely, delete the row.
delete from things_applied;
To find the active row, join with the table.
select t.*
from things t
join things_applied ta on ta.thing_id = t.id
To check if it's active at all, count the rows.
select count(id) as active
from things_applied

How to merge rows of one table to another while keeping foreign key constraints on autogenerated columns?

Here are two tables that I have, with Table B referencing Table A:
CREATE TABLE TableA
(
[Id_A] [bigint] IDENTITY(1,1) NOT NULL,
...
CONSTRAINT [PK_TableA_Id_A] PRIMARY KEY CLUSTERED
(
[Id_A] ASC
)
)
CREATE TABLE TableB
(
[Id_B] [bigint] IDENTITY(1,1) NOT NULL,
[RefId_A] [bigint] NOT NULL
...
CONSTRAINT [PK_TableB_Id_B] PRIMARY KEY CLUSTERED
(
[Id_B] ASC
)
)
ALTER TABLE [dbo].[TableB] WITH CHECK ADD CONSTRAINT [FK_Id_A] FOREIGN KEY([RefId_A])
REFERENCES [dbo].[TableA] ([Id_A])
These two tables are part of 2 databases.
Table A and Table B in database 1;
Table A and Table B in database 2.
I need to merge the rows of Table A from database 1 into Table A of database 2 and the rows of Table B from database 1 into Table B of database 2.
I used the SQL Data Import and Export Wizard and checked the Enable Identity Insert option, but it fails:
An OLE DB record is available. Source: "Microsoft SQL Server Native
Client 11.0" Hresult: 0x80004005 Description: "Violation of PRIMARY
KEY constraint 'PK_TableB_Id_B'. Cannot insert duplicate key in object
'dbo.TableB'. The duplicate key value is (1).". (SQL Server Import and
Export Wizard)
Which seems to make sense. There are rows in Table B of database 1 that have the same auto-generated PK as rows of Table B in database 2.
QUESTION
In this scenario, how can I merge the tables content from database 1 to the tables of database 2 while maintaining the foreign key constraints?
You can try something like the following. Here we assume that you need to insert all records as new ones (not compare whether some already exist). I wrapped both operations in a transaction to ensure that either both succeed or neither does.
BEGIN TRY
IF OBJECT_ID('tempdb..#IdentityRelationships') IS NOT NULL
DROP TABLE #IdentityRelationships
CREATE TABLE #IdentityRelationships (
OldIdentity INT,
NewIdentity INT)
BEGIN TRANSACTION
;WITH SourceData AS
(
SELECT
OldIdentity = A.Id_A,
OtherColumn = A.OtherColumn
FROM
Database1.Schema.TableA AS A
)
MERGE INTO
Database2.Schema.TableA AS T
USING
SourceData AS S ON 1 = 0 -- Will always execute the "WHEN NOT MATCHED" operation
WHEN NOT MATCHED THEN
INSERT (
OtherColumn)
VALUES (
S.OtherColumn)
OUTPUT
inserted.Id_A, -- "MERGE" clause can output non-inserted values
S.OldIdentity
INTO
#IdentityRelationships (
NewIdentity,
OldIdentity);
INSERT INTO Database2.Schema.TableB (
RefId_A,
OtherData)
SELECT
RefId_A = I.NewIdentity,
OtherData = T.OtherData
FROM
Database1.Schema.TableB AS T
INNER JOIN #IdentityRelationships AS I ON T.RefID_A = I.OldIdentity
COMMIT
END TRY
BEGIN CATCH
DECLARE @v_ErrorMessage VARCHAR(MAX) = CONVERT(VARCHAR(MAX), ERROR_MESSAGE())
IF @@TRANCOUNT > 0
ROLLBACK
RAISERROR (@v_ErrorMessage, 16, 1)
END CATCH
This is too long for a comment.
There is no simple way to do this. Your primary keys are identity columns that both start at "1", so the relationships are ambiguous.
You have two options:
A composite primary key, identifying the database source of the records.
A new primary key. You can preserve the existing primary key values from one database.
Your question doesn't provide enough information to say which is the better approach: "merge" is not clearly defined.
I might suggest that you just recreate all the tables. Insert all the rows from table A into a new table. Add a new identity primary key. Keep the original primary key and source.
Then bring the data from Table B into a new table, looking up the new primary key in the new Table A. At this point, the new Table B is finished, except for defining the primary key constraint.
Then drop the unnecessary columns in the new table A.
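A rough sketch of that recreate-and-remap approach in T-SQL (all names are illustrative; OtherColumn / OtherData stand in for the real payload columns, and SourceDb records which database each row came from):
-- new Table A: fresh identity, plus the original key and its source
CREATE TABLE dbo.TableA_New
(
    Id_A        bigint IDENTITY(1,1) NOT NULL PRIMARY KEY,
    OldId_A     bigint NOT NULL,
    SourceDb    tinyint NOT NULL,        -- 1 or 2
    OtherColumn nvarchar(100) NULL
);

INSERT INTO dbo.TableA_New (OldId_A, SourceDb, OtherColumn)
SELECT Id_A, 1, OtherColumn FROM Database1.dbo.TableA
UNION ALL
SELECT Id_A, 2, OtherColumn FROM Database2.dbo.TableA;

-- new Table B: resolve RefId_A through the (OldId_A, SourceDb) lookup
CREATE TABLE dbo.TableB_New
(
    Id_B      bigint IDENTITY(1,1) NOT NULL PRIMARY KEY,
    RefId_A   bigint NOT NULL REFERENCES dbo.TableA_New (Id_A),
    OtherData nvarchar(100) NULL
);

INSERT INTO dbo.TableB_New (RefId_A, OtherData)
SELECT a.Id_A, b.OtherData
FROM Database1.dbo.TableB AS b
JOIN dbo.TableA_New AS a ON a.OldId_A = b.RefId_A AND a.SourceDb = 1
UNION ALL
SELECT a.Id_A, b.OtherData
FROM Database2.dbo.TableB AS b
JOIN dbo.TableA_New AS a ON a.OldId_A = b.RefId_A AND a.SourceDb = 2;

-- once everything checks out, the helper columns can be dropped:
-- ALTER TABLE dbo.TableA_New DROP COLUMN OldId_A, SourceDb;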

SQL set check constraint

I have a problem with setting a check constraint. I have a table Policy whose primary key is (Policy_id, History_id), plus additional columns, and a table Report which has Policy_id and some additional columns.
How can I set a check constraint on the Report table to verify that Policy_id exists in the Policy table?
I cannot use a foreign key constraint because Report does not have a History_id column.
Report must not contain a record whose Policy_id does not exist in the Policy table; in that case the insert into Report should not be performed.
If Policy_id and History_id form a composite primary key, then the foreign key on the referencing table must also hold both columns.
If you really need to check just one of them (the Policy_id), I guess you'd have to do it manually, which is not a good idea.
It would be better if the Report table held both columns so the composite primary key could be referenced properly.
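If Report can be given the History_id column, the composite foreign key looks like this (a sketch; column types assumed):
-- if Report already has rows, add the column as NULL first or with a DEFAULT,
-- backfill it, then make it NOT NULL
ALTER TABLE Report ADD History_id int NOT NULL;

ALTER TABLE Report
    ADD CONSTRAINT FK_Report_Policy
    FOREIGN KEY (Policy_id, History_id)
    REFERENCES Policy (Policy_id, History_id);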
You could create a separate table just for the purposes of this foreign key constraint and then use triggers to maintain this data:
CREATE TABLE ExistingPolicies (
PolicyID int not null,
PolicyCount int not null,
constraint PK_ExistingPolicies PRIMARY KEY (PolicyID)
)
And then the triggers:
CREATE TRIGGER T_Policy_I
on Policy
after insert
as
;With totals as (
select PolicyID,COUNT(*) as cnt from inserted
group by PolicyID
)
merge into ExistingPolicies t
using totals s
on
t.PolicyID = s.PolicyID
when matched then update set PolicyCount = PolicyCount + s.cnt
when not matched then insert (PolicyID,PolicyCount) values (s.PolicyID,s.cnt);
go
CREATE TRIGGER T_Policy_D
on Policy
after delete
as
;With totals as (
select PolicyID,COUNT(*) as cnt from deleted
group by PolicyID
)
merge into ExistingPolicies t
using totals s
on
t.PolicyID = s.PolicyID
when matched and t.PolicyCount = s.cnt then delete
when matched then update set PolicyCount = PolicyCount - s.cnt;
go
CREATE TRIGGER T_Policy_U
on Policy
after update
as
;With totals as (
select PolicyID,SUM(cnt) as cnt
from
(select PolicyID,1 as cnt from inserted
union all
select PolicyID,-1 as cnt from deleted
) t
group by PolicyID
)
merge into ExistingPolicies t
using totals s
on
t.PolicyID = s.PolicyID
when matched and t.PolicyCount = -s.cnt then delete
when matched then update set PolicyCount = PolicyCount + s.cnt
when not matched then insert (PolicyID,PolicyCount) values (s.PolicyID,s.cnt);
go
(Code not tested but should be close to correct)
I think using a check constraint is a fine idea here.
Write a function that accepts Policy_id as a parameter, and does a query to check if the policy exists in the Policy table, and returns a simple 1 (exists) or 0 (does not exist).
Then set your check constraint on your Report Table to dbo.MyFunction(Policy_Id)=1
That's it.
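A sketch of that in T-SQL (the function and constraint names here are made up):
CREATE FUNCTION dbo.PolicyExists (@Policy_id int)
RETURNS bit
AS
BEGIN
    -- 1 if the policy exists, 0 otherwise
    RETURN CASE WHEN EXISTS (SELECT 1 FROM dbo.Policy WHERE Policy_id = @Policy_id)
                THEN 1 ELSE 0 END;
END;
GO

ALTER TABLE dbo.Report
    ADD CONSTRAINT CK_Report_PolicyExists
    CHECK (dbo.PolicyExists(Policy_id) = 1);
Keep in mind that, unlike a real foreign key, the check only fires when Report rows are inserted or updated, so deleting the Policy row afterwards is not blocked.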

Updating foreign keys while inserting into new table

I have table A(id).
I need to
create table B(id)
add a foreign key to table A that references B.id
for every row in A, insert a row in B and update A.b_id with the id of the newly inserted row in B
Is it possible to do it without adding a temporary column in B that refers to A? The below does work, but I'd rather not have to make a temporary column.
alter table B add column ref_id integer references A(id);
insert into B (ref_id) select id from A;
update A set b_id = B.id from B where B.ref_id = A.id;
alter table B drop column ref_id;
Assuming that:
1) you're using PostgreSQL 9.1 or later (data-modifying CTEs are needed),
2) B.id is a serial (so actually an int with a default value of nextval('b_id_seq')),
3) when inserting into B, you actually add other fields from A, otherwise the insert is useless,
...I think something like this would work:
with n as (select nextval('b_id_seq') as newbid,a.id as a_id from a),
l as (insert into b(id) select newbid from n returning id as b_id)
update a set b_id=l.b_id from l,n where a.id=n.a_id and l.b_id=n.newbid;
Add the future foreign key column, but without the constraint itself:
ALTER TABLE A ADD b_id integer;
Fill the new column with values:
WITH cte AS (
SELECT
id,
ROW_NUMBER() OVER (ORDER BY id) AS b_ref
FROM A
)
UPDATE A
SET b_id = cte.b_ref
FROM cte
WHERE A.id = cte.id;
Create the other table:
CREATE TABLE B (
id integer CONSTRAINT PK_B PRIMARY KEY
);
Add rows to the new table using the referencing column of the existing one:
INSERT INTO B (id)
SELECT b_id
FROM A;
Add the FOREIGN KEY constraint:
ALTER TABLE A
ADD CONSTRAINT FK_A_B FOREIGN KEY (b_id) REFERENCES B (id);
PostgreSQL dialect.
You might use an anonymous code block like this:
do $$
declare
category_cursor cursor for select category_id from schema1.categories;
r_category bigint;
setting_id bigint;
begin
open category_cursor;
loop fetch category_cursor into r_category;
exit when not found;
insert into schema2.setting(field)
values ('field_value') returning id into setting_id;
update schema1.categories set category_setting_id = setting_id
where category_id = r_category;
end loop;
end; $$
Let's assume we have two tables: categories, and settings which must be applied to these categories.
First step: declare the cursor (to collect the ids from categories) and the variables where we store temporary data.
Loop over the cursor, inserting the value 'field_value' into settings.
Store the returned id in the variable setting_id.
Update the categories table with setting_id.