I am loading data from a file into a table using BODI (Business Objects Data Integrator).
Unfortunately the input file has a duplicate record, which causes a unique constraint error when inserting into the table. Is there a way to find that duplicate record?
Import into the table with the constraint disabled or removed, or into a table with a similar structure but without the constraint, and then run this query:
SELECT uniquefield(s) FROM tablename GROUP BY uniquefield(s) HAVING COUNT(*) > 1
Sometimes the error message shows the duplicate key too.
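For instance, a minimal sketch of that approach, assuming the file was loaded into a constraint-free staging table named stage_load whose intended unique key is cust_id (both names are placeholders):
-- list each key value that appears more than once in the staged file
SELECT cust_id, COUNT(*) AS occurrences
FROM stage_load
GROUP BY cust_id
HAVING COUNT(*) > 1;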
You can use a Table Comparison transform and set the "Input Contains Duplicate Keys" option. This will load the record only once even though the source has duplicate records.
I have a database with 10 tables; I'm using a PostgreSQL db on pgAdmin4 (Symfony back-end). I imported the data from 10 different CSV files, and in each data file I added an ID column and set its values, for the simple fact that there are foreign keys: let's just say TABLE_1_ID is a foreign key in TABLE_2, TABLE_2_ID is a foreign key in TABLE_3, TABLE_3_ID is a foreign key in TABLE_C, etc... it goes on like this until the last table.
The data import for the csv files worked. I can display my data in my front-end.
Now that I'm trying to insert new data into any table via the platform's user interface, I'm getting a constraint error:
SQLSTATE[23505]: Unique violation: 7 ERROR: duplicate key value violates unique constraint "theme_pkey" DETAIL: Key (id)=(4) already exists."
Let's just say I'm trying to create a new value in TABLE_2: the Doctrine ORM throws an error saying the id key already exists. It seems like my database didn't update itself to follow the data already in it, as if the ID isn't being incremented.
I looked around, and it seems this is common when you import your database from an outside source, but I can't manage to find something that gets me out of this error message so I can move forward with my dev. Everything I looked at talks about updating a sequence, but nothing seemed to fit my problem.
After importing my data from a CSV file into my PostgreSQL database using pgAdmin4, the data was successfully imported; querying the data worked and each row had an id. In my mind everything was fine, until I tried to insert a new object into my database: it seemed like my database was stuck at primary key id 1, not taking into account the imported data and the fact that the table already showed an object with an id of 1. I didn't fully understand what was going on, because I'm not an expert with SQL and the conventions and restrictions that apply to it; I knew just enough to get things working with an ORM.
After hours of research and reading documentation, I came to the conclusion that a thing called a sequence exists, and that each table has its own sequence: for example, tableA and tableB have a tableA_id_seq and a tableB_id_seq.
All I had to do was get the max id from my table, check the next id the sequence would hand out, and set that next id accordingly.
--- GETTING THE MAX ID VALUE FROM MY TARGETED TABLE
SELECT MAX(id) FROM table;
--- GETTING THE NEXT ID VALUE FROM THAT TARGETED TABLE
SELECT NEXTVAL('table_id_seq');
Let's just say the max id value in my table was 90, so in theory the next value is supposed to be 91; but when I ran the second script, the next value was 1. So I had to set the next value based on my max value + 1:
---- SETTING THE NEXT VALUE TO FOLLOW MY MAX VALUE
SELECT setval('table_id_seq', (SELECT MAX(id) FROM table) + 1);
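As a side note, setval(seq, n) makes the following nextval() call return n + 1, so setting the sequence to just MAX(id) is also enough. A variant sketch that looks the sequence up from the column itself (pg_get_serial_sequence is built into PostgreSQL; mytable and id are placeholder names, and the table is assumed non-empty):
-- resync the sequence behind mytable.id; the next nextval() returns MAX(id) + 1
SELECT setval(pg_get_serial_sequence('mytable', 'id'),
              (SELECT MAX(id) FROM mytable));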
Hope this helps out the next person that runs into a similar issue.
I am trying to copy table information from a backup dummy database to our live SQL database (an accident happened in our program, Visma Business, where someone managed to overwrite 1300 customer names), but I am having a hard time figuring out the right code for this. I've looked around, and yes, there are several similar problems, but I just can't get this to work even though I've tried different solutions.
Here is the simple code I used last time. In theory, all I need is the equivalent of MySQL's ON DUPLICATE KEY UPDATE, which would be MERGE on SQL Server? I just didn't quite know what to write to get that MERGE to work.
INSERT [F0001].[dbo].[Actor]
SELECT * FROM [FDummy].[dbo].[Actor]
The error message I get with this is:
Violation of PRIMARY KEY constraint 'PK__Actor'. Cannot insert duplicate key in object 'dbo.Actor'.
What the error message says is simply "You can't add the same value if an attribute has a PK constraint". If you already have all the information in your backup table, what you should do is TRUNCATE TABLE, which removes all rows from a table while the table structure and its columns, constraints, indexes, and so on remain.
After that step you should follow this answer. Alternatively, I recommend a tool called Kettle, which is open source and easy to use for these kinds of data movements. That will save you a lot of work.
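A minimal sketch of that TRUNCATE-then-reload approach with the tables from the question (note that TRUNCATE TABLE fails if dbo.Actor is referenced by foreign keys; in that case a plain DELETE FROM does the same job here):
-- empty the live table, then copy everything back from the backup
TRUNCATE TABLE [F0001].[dbo].[Actor];
INSERT INTO [F0001].[dbo].[Actor]
SELECT * FROM [FDummy].[dbo].[Actor];
-- if Actor has an IDENTITY column, the INSERT additionally needs
-- SET IDENTITY_INSERT [F0001].[dbo].[Actor] ON/OFF and an explicit column list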
Here are the things which can be the reason:
You have multiple rows in [FDummy].[dbo].[Actor] with the same value in the column which is going to be inserted into the primary key column of [F0001].[dbo].[Actor].
You have row(s) in [FDummy].[dbo].[Actor] with some value x in that column, and there are existing row(s) in [F0001].[dbo].[Actor] with the same value x in the primary key column.
-- to check the first point; if it returns rows then you have a problem
SELECT ColumnGoingToBeMappedWithPK,
Count(*)
FROM [FDummy].[dbo].[Actor]
GROUP BY ColumnGoingToBeMappedWithPK
HAVING Count(*) > 1
-- to check the second point; if the count is greater than 0 then you have a problem
SELECT Count(*)
FROM [FDummy].[dbo].[Actor] a
JOIN [F0001].[dbo].[Actor] b
ON a.ColumnGoingToBeMappedWithPK = b.PrimaryKeyColumn
The MERGE statement will possibly be the best for you here, unless the primary key of the Actor table is reused after a previous record is deleted (i.e. it is not auto-incremented), so that, say, the record with id 13 in F0001.dbo.Actor is not the same "actor" as the one in FDummy.dbo.Actor.
To use the statement with your code, it will look something like this:
begin transaction
merge [F0001].[dbo].[Actor] as t -- the destination
using [FDummy].[dbo].[Actor] as s -- the source
on (t.[PRIMARYKEY] = s.[PRIMARYKEY]) -- update with your primary keys
when matched then
update set t.columnname1 = s.columnname1,
t.columnname2 = s.columnname2,
t.columnname3 = s.columnname3
-- repeat for all your columns that you want to update
output $action,
Inserted.*,
Deleted.*;
rollback transaction -- change to commit after testing
Further reading can be done at the sources below:
MERGE (Transact-SQL)
Inserting, Updating, and Deleting Data by Using MERGE
Using MERGE in SQL Server to insert, update and delete at the same time
I have an Excel file that maps exactly to a table in SQL Server. I have tried to import but I get the UNIQUE KEY error.
How can I overwrite the existing values in the database table with those in the Excel file? I can convert to CSV if this is any help. Is there a statement I can write to do this?
Any guidance would be much appreciated.
Thank you
Create a second table using the same DDL as that table (in the database).
Afterward, import the Excel file into that newly created table.
Delete from the actual table (not the one just created above) all rows where the columns making up the unique key constraint match those of the rows newly inserted into the newly created table.
Then insert into the actual table all rows that exist in the second, newly created and populated table. Because you just deleted the rows with the values you want to overwrite, you will no longer violate the unique constraint.
If you provide the column names of your table and the column(s) that make up the unique constraint I can help you write the query for #3 if needed.
Step 3 would be something like:
delete from table_name
where unique_id in
(select unique_id from newly_created_table_in_step1_and_2)
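Putting all the steps together, a sketch with hypothetical names (a live table dbo.Customers whose unique key is CustomerCode, and a staging table dbo.Customers_Staging created from the same DDL and loaded from the Excel/CSV file):
-- step 3: remove the rows that the import is about to overwrite
DELETE FROM dbo.Customers
WHERE CustomerCode IN (SELECT CustomerCode FROM dbo.Customers_Staging);
-- step 4: copy the imported rows into the real table
INSERT INTO dbo.Customers
SELECT * FROM dbo.Customers_Staging;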
Please post the whole error. If you have a UNIQUE KEY error then you have to create an UPDATE query using the primary key. You might have to write a T-SQL query.
I am pretty new to Microsoft SQL Server, not very experienced with databases, and I have the following problem.
I have to delete all the records inside a table named VulnerabilityReference.
So I executed this statement:
delete from VulnerabilityReference;
But I obtain this error message and no rows are deleted from my table:
Msg 547, Level 16, State 0, Line 1
The DELETE statement conflicted with the REFERENCE constraint "FK_AlertDocument_Reference_Reference". The conflict occurred in database "DB NAME", table "dbo.VulnerabilityAlertDocument_VulnerabilityReference", column 'VulnerabilityReferenceId'.
The statement has been terminated.
What does it exactly mean? Do I have to delete all the records from the VulnerabilityAlertDocument_VulnerabilityReference table before deleting the records in my VulnerabilityReference?
Thanks,
Andrea
The table you are attempting to delete from has its primary key referenced in another table. You must first remove (or set to NULL) the referencing values in the other table BEFORE you can delete. This is called DB referential integrity.
You can disable constraints (if you have adequate permissions) but I would suggest you not do that.
To remove the link for all records in that other table, you could do this:
UPDATE VulnerabilityAlertDocument_VulnerabilityReference
SET VulnerabilityReferenceId = NULL
If that table does NOT allow NULLs for the column, then you'll need to DELETE all the VulnerabilityAlertDocument_VulnerabilityReference records, etc... OR ALTER the table to ALLOW NULL. Make sure you know what you are about to do...
I am assuming the name of the column in the VulnerabilityAlertDocument_VulnerabilityReference table; you obviously need to use the correct table and column names.
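If the rows in the referencing table are disposable, a sketch of the delete-children-first route, using the table names from the error message:
-- remove the referencing (child) rows first...
DELETE FROM VulnerabilityAlertDocument_VulnerabilityReference;
-- ...then the original statement no longer violates the constraint
DELETE FROM VulnerabilityReference;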
This error is because you have an id of a record in VulnerabilityReference related to the AlertDocument table. The error is warning you of this problem: you can't delete such a record without first deleting the related records.
I mean the tables in the relationship named FK_AlertDocument_Reference_Reference.
You have a relationship between VulnerabilityAlertDocument_VulnerabilityReference and another table. You can go to that table and find the relationships; look for the name beginning with FK. In the properties of this relationship you will see the name of the other table and the columns used in the relationship. If you want, you can create a diagram, drop these two tables onto it, and you will see in a visual way how the relationship is built.
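If you prefer a query to clicking through properties, a sketch that looks the relationship up in the SQL Server catalog (sys.foreign_keys is a standard system view):
-- show which table references which through this constraint
SELECT fk.name AS constraint_name,
       OBJECT_NAME(fk.parent_object_id) AS referencing_table,
       OBJECT_NAME(fk.referenced_object_id) AS referenced_table
FROM sys.foreign_keys AS fk
WHERE fk.name = 'FK_AlertDocument_Reference_Reference';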
The CREATE UNIQUE INDEX statement terminated because a duplicate key was found
for the object name 'dbo.tblhm' and the index name 'New_id1'. The duplicate
key value is (45560, 44200).
I want to know how to create a unique key constraint over 2 columns taken together, given that the values already stored in the database do not satisfy it, which is why it is showing me the above error. How can I overcome that, so that everything keeps working and no column value in the database gets deleted?
If I follow you correctly, you have a duplicate key which you want to ignore, but you still want to apply a unique constraint going forward? I don't think this is possible. Either you need to remove the duplicate row (or update it so that it is not a duplicate), move the duplicated data into an archive table without a unique index, or add the index to the existing table without a unique constraint.
I stand to be corrected, but I don't think there is any other way round this.
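For the last of those options, the same index is simply created without the UNIQUE keyword, so the existing duplicates are tolerated (but nothing stops new ones):
-- a non-unique index over the same two columns
CREATE INDEX New_id1 ON dbo.tblhm (Column1, Column2);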
Let's assume that you are creating a unique index on columns column1 and column2 in your table dbo.tblhm.
This requires that no combination of column1, column2 values is repeated across the rows of dbo.tblhm.
As per your error, the combination (45560, 44200) of values for column1, column2 is present in more than 1 row, and hence the constraint fails.
What you need to do is clean up your data first, using an UPDATE statement to change the column1 or column2 values in the rows that are duplicates, BEFORE you try to create the constraint.
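To find the rows that need cleaning up first, a sketch assuming the same placeholder column names column1 and column2:
-- list each (column1, column2) pair that appears in more than one row
SELECT column1, column2, COUNT(*) AS occurrences
FROM dbo.tblhm
GROUP BY column1, column2
HAVING COUNT(*) > 1;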
AFAIK, in Oracle you have the NOVALIDATE keyword, which can be used to achieve what you want without cleaning up the existing data. But at least I am not aware of any way to achieve that in SQL Server without first cleaning up the data.
The error means exactly what it says - there is more than one row with the same key.
i.e. for
CREATE UNIQUE INDEX New_id1 on dbo.tblhm(Column1, Column2)
there is more than one row with the same values for Column1 and Column2
So either
Your data is corrupt (e.g. inserted without checking for duplicates) - you will need to find and merge / delete the duplicate keys before recreating the index (see the sketch after this list).
Or your Index can't be unique (e.g. there is a valid reason why there can be more than one row with this key, e.g. at a business level).
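If the data is indeed corrupt, a common cleanup sketch in SQL Server deletes (rather than merges) all but one arbitrary row per key; test it on a copy of the table first:
-- keep one row per (Column1, Column2) pair and delete the rest
WITH dupes AS (
    SELECT ROW_NUMBER() OVER (PARTITION BY Column1, Column2
                              ORDER BY (SELECT NULL)) AS rn
    FROM dbo.tblhm
)
DELETE FROM dupes
WHERE rn > 1;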