postgresql: error duplicate key value violates unique constraint - sql

This question has been asked by several people, but my problem seems to be different.
I have to merge identically structured tables from different PostgreSQL databases into a new DB. I connect to the remote DB using dblink, read the table there, and insert its rows into the table in the current DB, like below:
INSERT INTO t_types_of_dementia
SELECT * FROM dblink(
    'host=localhost port=5432 dbname=snap-cadence password=xxxxxx',
    'SELECT dementia_type, snapid FROM t_types_of_dementia'
) AS x(dementia_type VARCHAR(256), snapid integer);
The first time, this query runs fine, but when I run it again, or try to run it with some other remote database's table, it gives me this error:
ERROR: duplicate key value violates unique constraint
"t_types_of_dementia_pkey"
I want this new table to get populated by the entries of the corresponding tables from the other DBs.
Some of the proposed solutions talk about sequences, but I am not using any.
The structure of the table in the current DB is:
CREATE TABLE t_types_of_dementia(
    dementia_type VARCHAR(256),
    snapid integer NOT NULL,
    PRIMARY KEY (dementia_type, snapid)
);
P.S. There is a specific reason why both columns are used as the primary key, which may not be relevant to this discussion, because the same issue happens in other tables where this is not the case.

As the error message tells you: you cannot have two rows with the same values in the columns (dementia_type, snapid), since together they form the primary key and must be unique.
You have to make sure that the two databases do not contain the same values for (dementia_type, snapid).
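If the goal is simply to let the insert skip rows that already exist, and assuming PostgreSQL 9.5 or later, the query from the question could be extended with ON CONFLICT DO NOTHING - a sketch, not a tested solution:
INSERT INTO t_types_of_dementia
SELECT * FROM dblink(
    'host=localhost port=5432 dbname=snap-cadence password=xxxxxx',
    'SELECT dementia_type, snapid FROM t_types_of_dementia'
) AS x(dementia_type VARCHAR(256), snapid integer)
ON CONFLICT (dementia_type, snapid) DO NOTHING;  -- skip rows whose key already exists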
A workaround would be to add a surrogate key column, e.g. ALTER TABLE t_types_of_dementia ADD COLUMN id integer GENERATED ALWAYS AS IDENTITY, and use that as the primary key instead of your current one.
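A minimal sketch of that workaround, assuming PostgreSQL 10 or later for the identity column; the constraint name t_types_of_dementia_pkey is taken from the error message above:
ALTER TABLE t_types_of_dementia
    DROP CONSTRAINT t_types_of_dementia_pkey;  -- drop the composite key
ALTER TABLE t_types_of_dementia
    ADD COLUMN id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY;
This deliberately drops the uniqueness of (dementia_type, snapid), which is what allows the merged rows from several databases to coexist.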

Related

Postgres BIGSERIAL does not share sequence when inserts are made with multiple remote fdw sources

I am trying to insert into a table in a Postgres database from two other Postgres databases using Foreign Data Wrappers. The objective is to have an auto-generated primary key, independent of the source, as there will be more than two.
I first defined the tables like so:
Target database:
create table dummy (
dummy_pk bigserial primary key
-- other fields
);
Sources databases:
create foreign table dummy (
dummy_pk bigserial
-- other fields
) server ... ;
This solution worked fine as long as I inserted from only one source. When I tried to insert from the other one, without specifying dummy_pk, I got this message:
Duplicate key (dummy_pk)=(1)
Because postgres tries to insert an id of 1, I believe the sequence used for each source foreign table is different. I changed the source tables a bit in an attempt to let the target table's sequence do the job for the id:
create foreign table dummy (
dummy_pk bigint
-- other fields
) server ... ;
This time I got a different error:
NULL value violates NOT NULL constraint on column « dummy_pk »
Therefore I believe the source server sends a query to the target where the dummy_pk is null, and the target does not replace it with the default value.
So, is there a way I can force the use of the target's sequence in a query executed on the source? Maybe I have to share that sequence; can I create a foreign sequence? I cannot remove the column from the foreign tables, as I need read access to them.
Thanks!
Remove dummy_pk from the foreign tables, so that the destination table receives neither NULL nor a value and therefore falls back to its DEFAULT (or NULL if no DEFAULT is specified). If you attempt to pass DEFAULT through a foreign table, it will try to use the DEFAULT value of the foreign table instead.
create foreign table dummy (
    /* dummy_pk bigserial, */
    column1 text,
    column2 int2
    -- other fields
) server ... ;
Another way would be to grab sequence values from the destination server using dblink, but I think this is better (if you can afford to have this column removed from the foreign tables).
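For completeness, a rough sketch of that dblink alternative, assuming the target sequence carries the default name dummy_dummy_pk_seq that PostgreSQL generates for a bigserial column (the connection string is a placeholder):
SELECT next_id FROM dblink(
    'host=target-host dbname=target-db',
    'SELECT nextval(''dummy_dummy_pk_seq'')'
) AS t(next_id bigint);
The value fetched this way can then be supplied explicitly as dummy_pk when inserting through the foreign table.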

SQL Server 2012, combined primary key needs to be unique?

Hello, I have a question, I assume mainly about SQL Server functionality. I'm building a test database and I have stumbled on a problem when I try to insert data into my tables.
Picture 1 shows the error message I get when trying to add rows:
https://i.stack.imgur.com/GW3C2.png
Picture 2 shows all relations in the database:
//i.stack.imgur.com/7BhHa.png
Picture 3 shows the table I am currently trying to update:
//i.stack.imgur.com/3JqtA.png
In the table I have a combined primary key ("SDat" and "Kurs"). The error message I get implies that the primary key must be unique, but what I don't understand is: since I have a third column "Elev" which makes the row unique, why won't SQL Server let me insert this row into the table? I have tried making the same database in Access and it works, so I assume the problem is something in SQL Server.
Regards Robert
A primary key by definition means the value must be unique. So if you have a combined primary key on 2 fields, then the combination of values in those 2 fields must be unique, meaning each combination can appear in only 1 row. If you need to enforce unique values on the combination of 3 fields (SDat, Kurs, and Elev), then your PK needs to include all 3 fields.
If you really need to enforce a unique constraint across a lot of fields in a table, I wouldn't use a PK to enforce that, but instead use a UNIQUE constraint.
ALTER TABLE tablename ADD CONSTRAINT constraintname UNIQUE (column1, ..., columnN)
Then you can create a separate column for your primary key, so that as you add columns and need those additional columns to be unique, you don't have to edit your PK and rebuild your table.
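A sketch of that layout for the table in the question; the table name Betyg and the column types are assumptions, since the question does not show the actual definition:
CREATE TABLE Betyg (
    Id INT IDENTITY(1,1) NOT NULL,
    SDat DATE NOT NULL,
    Kurs NVARCHAR(50) NOT NULL,
    Elev NVARCHAR(50) NOT NULL,
    CONSTRAINT PK_Betyg PRIMARY KEY (Id),
    CONSTRAINT UQ_Betyg_SDat_Kurs_Elev UNIQUE (SDat, Kurs, Elev)
);
The surrogate Id stays stable while the UNIQUE constraint enforces the real business rule on the three columns.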

SQL: change data type

There is a column named Line_no (smallint). I want to change this column's data type to bigint, but this column is a primary key, and many tables have foreign key references to it. How do I change it? I need to change it in both the SQL Server and Oracle databases.
First of all, there's no easy way to do that currently, especially in Oracle, where in order to change the data type, all the values of the field should be null. Anyway, the following process works for both Oracle and SQL Server:
1. Take your database offline so that no operation can disturb our process.
2. Add a new field, say line_num, having your new data type (steps 2-8 are sketched for SQL Server after this list).
3. Update the new field with the line_no values for all records.
4. Write a stored procedure to drop all the FKs referencing the current PK, using metadata; while it loops, this SP should write the corresponding ADD FK commands to DBMS output, so that later you can execute them to add these FKs again in step 9.
5. Drop the primary key off the line_no field.
6. Drop the field line_no.
7. Rename the field line_num to line_no.
8. Add the primary key on the new field.
9. Run the commands generated in step 4 to add all the FKs again.
10. Bring your database back online. :)
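A sketch of steps 2 through 8 for SQL Server, assuming a hypothetical table named orders whose primary key constraint is named pk_orders; the FK drop/re-create of steps 4 and 9 is omitted, and the statements must run in separate batches:
ALTER TABLE orders ADD line_num bigint;                             -- step 2: new column
GO
UPDATE orders SET line_num = line_no;                               -- step 3: copy values
GO
ALTER TABLE orders DROP CONSTRAINT pk_orders;                       -- step 5: drop old PK
ALTER TABLE orders DROP COLUMN line_no;                             -- step 6: drop old column
GO
EXEC sp_rename 'orders.line_num', 'line_no', 'COLUMN';              -- step 7: rename
GO
ALTER TABLE orders ALTER COLUMN line_no bigint NOT NULL;            -- PK columns must be NOT NULL
ALTER TABLE orders ADD CONSTRAINT pk_orders PRIMARY KEY (line_no);  -- step 8: new PK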
It depends on your DBMS. You may have to drop the foreign key constraints, alter the columns and re-create the constraints.

SQL Server Does Not Delete Records

I am a newbie to MS SQL Server and don't have any knowledge about it.
I have the below question.
I have added nine records with the same values, as shown in the image below, in SQL Server 2005.
I have not given the table any primary key.
Now when I select one record or multiple records and hit the delete key, it does not delete the records from the table; instead it gives me an error.
You need to add a primary key to uniquely identify each record; otherwise SQL Server has no way of distinguishing the records, and therefore no way of knowing which one to delete, causing an error.
That's because you don't have any primary key and the server doesn't know which row to remove. Clear the table (DELETE FROM dbo.Patient) and create a new Id column as a primary key.
In MSSQL you need to have a primary key for the table. This will uniquely identify each row of that particular table.
For example, in Oracle you don't need this, as there you can use ROWID (meaning every row of every table has a unique ID in the database). Once you know this ID, Oracle knows for sure which table and row it belongs to.
So now you can add a primary key to the table, and you can make it auto-increment, ensuring uniqueness.
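A minimal sketch of adding such a key, reusing the dbo.Patient table name from the answer above (the constraint name PK_Patient is an assumption):
ALTER TABLE dbo.Patient ADD Id INT IDENTITY(1,1) NOT NULL;           -- values are auto-assigned to existing rows
ALTER TABLE dbo.Patient ADD CONSTRAINT PK_Patient PRIMARY KEY (Id);
After this, each row is uniquely addressable and single-row deletes work as expected.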

sqlite and 'constraint failed' error while select and insert at the same time

I'm working on a migration function. It reads data from the old table and inserts it into the new one. All of this runs in a background thread with low priority.
My steps in pseudocode:
sqlite3_prepare_stmt(select statement)
sqlite3_prepare_stmt(insert statement)
while (sqlite3_step(select statement) == SQLITE_ROW)
{
    get data from select row results
    sqlite3_bind select results to insert statement
    sqlite3_step(insert statement)
    sqlite3_reset(insert statement)
}
sqlite3_reset(select statement)
I'm always getting a 'constraint failed' error on sqlite3_step(insert statement). Why does it happen, and how can I fix it?
UPD: As I understand it, this happens because the background thread uses a DB handle opened in the main thread. Checking that guess now.
UPD2:
sqlite> select sql from sqlite_master where tbl_name = 'tiles';
CREATE TABLE tiles('pk' INTEGER PRIMARY KEY, 'data' BLOB, 'x' INTEGER, 'y' INTEGER, 'z' INTEGER, 'importKey' INTEGER)
sqlite> select sql from sqlite_master where tbl_name = 'tiles_v2';
CREATE TABLE tiles_v2 (pk int primary key, x int, y int, z int, layer int, data blob, timestamp real)
It probably means your insert statement is violating a constraint in the new table. Could be a primary key constraint, a unique constraint, a foreign key constraint (if you're using PRAGMA foreign_keys = ON;), and so on.
You fix that either by dropping the constraint, correcting the data, or dropping the data. Dropping the constraint is usually a Bad Thing, but that depends on the application.
Is there a compelling reason to copy data one row at a time instead of as a set?
INSERT INTO new_table
SELECT column_list FROM old_table;
If you need help identifying the constraint, edit your original question, and post the output of these two SQLite queries.
select sql from sqlite_master where tbl_name = 'old_table_name';
select sql from sqlite_master where tbl_name = 'new_table_name';
Update: Based on the output of those two queries, I see only one constraint--the primary key constraint in each table. If you haven't built any triggers on these tables, the only constraint that can fail is the primary key constraint. And the only way that constraint can fail is if you try to insert two rows that have the same value for 'pk'.
I suppose that could happen in a few different ways.
The old table has duplicate values in the 'pk' column.
The code that does your migration alters or injects a duplicate value before inserting data into your new table.
Another process, possibly running on a different computer, is inserting or updating data without your knowledge.
Other reasons I haven't thought of yet. :-)
You can determine whether there are duplicate values of 'pk' in the old table by running this query.
select pk
from old_table_name
group by pk
having count(*) > 1;
You might consider trying to manually migrate the data using INSERT INTO ... SELECT .... If that fails, add a WHERE clause to reduce the size of the set until you isolate the bad data.
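Using the tiles and tiles_v2 tables from the UPD2 schema above, such a set-based copy with a bisecting WHERE clause might look like this (the pk range is an arbitrary starting point):
INSERT INTO tiles_v2 (pk, x, y, z, data)
SELECT pk, x, y, z, data
FROM tiles
WHERE pk < 1000;  -- shrink or shift this range until the failing rows are isolated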
I have found troubleshooting foreign key constraint errors with sqlite to be troublesome, particularly on large data sets. However, the following approach helps identify the offending relation (a consolidated sketch follows the list):
disable foreign key checking: PRAGMA foreign_keys = 0;
execute the statement that causes the error - in my case it was an INSERT of 70,000 rows with 3 different foreign key relations.
re-enable foreign key checking: PRAGMA foreign_keys = 1;
identify the foreign key errors: PRAGMA foreign_key_check(table-name);
In my case, it showed the 13 rows with invalid references.
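Put together, and with hypothetical table names, the whole sequence looks like this:
PRAGMA foreign_keys = 0;                           -- disable FK enforcement
INSERT INTO orders SELECT * FROM orders_staging;   -- the statement that fails
PRAGMA foreign_keys = 1;                           -- re-enable FK enforcement
PRAGMA foreign_key_check(orders);                  -- list rows with invalid references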
Just in case anyone lands here looking for "Constraint failed" error message, make sure your Id column's type is INTEGER, not INTEGER (0, 15) or something.
Background
If your table has a column named Id, with type INTEGER, set as the primary key, SQLite treats it as an alias for the built-in column RowId. This column works like an auto-increment column. In my case, this column was working fine until some table designer (probably the one created by the SQLite guys for Visual Studio) changed the column type from INTEGER to INTEGER (0, 15), and all of a sudden my application started throwing the Constraint failed exception.
Just to make it clearer:
Make sure that you use the correct data type. I was using int instead of integer when creating my tables, like this:
id int primary key not null
and I was stuck on the constraint problem for hours...
Just make sure to enter the data type correctly when creating the database: in SQLite, only the exact type name INTEGER makes a primary key column an alias for the auto-assigned ROWID, while int does not.
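A small demonstration of the difference (the table names are made up):
CREATE TABLE t_good (id INTEGER PRIMARY KEY NOT NULL, name TEXT);
INSERT INTO t_good (name) VALUES ('a');  -- ok: id is a ROWID alias, auto-assigned
CREATE TABLE t_bad (id INT PRIMARY KEY NOT NULL, name TEXT);
INSERT INTO t_bad (name) VALUES ('a');   -- fails: NOT NULL constraint failed: t_bad.id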
One of the reasons this error occurs is inserting duplicate data into a unique column, too. Check all your unique columns and keys for duplicated data.
Today I had a similar error: CHECK constraint failed: profile
First I read here what a constraint is:
Constraints are integrity rules that define conditions restricting a column to contain only valid data while inserting, updating, or deleting. There are two types of constraints: column-level and table-level. Column-level constraints can be applied only to a specific column, whereas table-level constraints apply to the whole table.
Then I figured out that I needed to check how the database was created; since I used SQLAlchemy with the sqlite3 dialect, I had to check the table schema. The schema shows the actual database structure.
sqlite> .schema
CREATE TABLE profile (
    id INTEGER NOT NULL,
    ...
    forum_copy_exist BOOLEAN DEFAULT 'false',
    forum_deleted BOOLEAN DEFAULT 'true',
    PRIMARY KEY (id),
    UNIQUE (id),
    UNIQUE (login_name),
    UNIQUE (forum_name),
    CHECK (email_verified IN (0, 1)),
    UNIQUE (uid),
    CHECK (forum_copy_exist IN (0, 1)),
    CHECK (forum_deleted IN (0, 1))
);
So here I found that the boolean CHECK constraints only accept 0 or 1, while I had set the column defaults to 'false'; each time no forum_copy_exist or forum_deleted value was supplied, 'false' was inserted, and because 'false' is an invalid value, the CHECK failed and the row was not inserted.
So changing database defaults to:
forum_copy_exist BOOLEAN DEFAULT '0',
forum_deleted BOOLEAN DEFAULT '0',
solved the issue.
In PostgreSQL, I think false is a valid value, so it depends on how the database schema is created.
Hope this will help others in the future.