I have an Excel file that maps exactly to a table in SQL Server. I have tried to import it, but I get a UNIQUE KEY error.
How can I overwrite the existing values in the database table with those in the Excel file? I can convert to CSV if this is any help. Is there a statement I can write to do this?
Any guidance would be much appreciated.
Thank you
Create a second table using the same DDL as that table (in the database).
Afterward, import the Excel file into that newly created table.
Delete from the actual table (not the one just created above) all rows where the columns making up the unique key constraint match those of the rows newly inserted into the newly created table.
Then insert into the actual table all rows that exist in the second, newly created and populated table. Because you just deleted the rows with the values you want to overwrite, you will no longer violate the unique constraint.
If you provide the column names of your table and the column(s) that make up the unique constraint, I can help you write the query for step 3 if needed.
Step 3 would be something like:
delete from table_name
where unique_id in
(select unique_id from newly_created_table_in_step1_and_2)
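Putting all four steps together, a minimal T-SQL sketch (MyTable, MyTable_staging, and unique_id are hypothetical names; this also assumes the table has no identity column):
-- 1) Create an empty staging table with the same columns
select * into MyTable_staging from MyTable where 1 = 0;
-- 2) Import the Excel/CSV file into MyTable_staging here
--    (Import Wizard, BULK INSERT, etc.)
-- 3) Delete the rows that would violate the unique key
delete from MyTable
where unique_id in (select unique_id from MyTable_staging);
-- 4) Copy everything over
insert into MyTable
select * from MyTable_staging;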
Please post the whole error message. If you have a UNIQUE KEY error, then you have to create an UPDATE query using the primary key. You might have to write a T-SQL query.
I need to create a stored proc which has to copy existing rows based on a condition into the same table and deactivate the existing records by setting the active flag column value to 0.
I tried insert into [tablename] select * from [tablename] where [condition] and, obviously, I got an error due to the primary key constraint. Listing the columns in the select, excluding the primary key column, will work, but there are multiple tables and some tables have around 300 columns; I don't want to give a long list in the select. As I use SQL Server, there is a solution that I found here on SO: getting the column list from information_schema.columns and preparing a dynamic query. However, I am not satisfied with any of those solutions. Is there any other way that I can do this?
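For what it's worth, on SQL Server 2017+ the dynamic-query approach can be fairly compact. A sketch, where MyTable, Id, and the CategoryId = 42 condition are all hypothetical stand-ins:
DECLARE @cols nvarchar(max), @sql nvarchar(max);
-- Build a column list that excludes the primary key column
SELECT @cols = STRING_AGG(CONVERT(nvarchar(max), QUOTENAME(COLUMN_NAME)), ', ')
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable'
  AND COLUMN_NAME <> 'Id';
-- Copy the rows matching the condition (deactivating the originals
-- would be a separate UPDATE)
SET @sql = N'INSERT INTO dbo.MyTable (' + @cols + N') '
         + N'SELECT ' + @cols + N' FROM dbo.MyTable WHERE CategoryId = @cat;';
EXEC sp_executesql @sql, N'@cat int', @cat = 42;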
What is the statement to alter a table which holds about 10 million rows, adding a GUID column which will hold a unique identifier for each row (without being part of the PK)?
What datatype should the global unique identifier column be?
Is there a procedure which creates it?
How will it be auto-incremented or produced every time a new record is inserted?
Break it down into separate stages.
First, we need a new column:
alter table MyTable
add guid_column raw(32) default sys_guid();
Then update the existing rows:
update MyTable
set guid_column = sys_guid();
Use the identity columns feature of Oracle 12c to add a column to the table which auto-increments as new rows are added.
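On 12c that can be as simple as the sketch below (note that populating the new column on a 10-million-row table may take a while, since every existing row gets a value):
alter table MyTable
add id_column number generated always as identity;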
An ideal way to handle this task is to:
a) CREATE a "new" table with a structure similar to the source table using CREATE TABLE AS (a CTAS statement) with a new "identity column", instead of adding the identity column to the existing table with an ALTER statement.
b) CTAS works faster compared to running ALTER on the existing table.
c) After confirming that the "new" table has all the data from the source table, along with a column containing unique values and all the indexes and constraints, you can drop the original table.
Another way to avoid recreating the constraints and indexes present on the original table is to create an empty table with all constraints, indexes, and the identity column, then let the DBA extract the data from the original table and import it into the "new" table, as sketched below.
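A minimal sketch of that empty-table-then-load variant on 12c, with hypothetical columns col1 and col2 (adjust the types and add your own constraints and indexes):
-- New table with an identity column
create table MyTable_new (
  id   number generated always as identity,
  col1 varchar2(100),
  col2 date
);
-- Bulk-load from the source; the identity column fills itself in
insert /*+ append */ into MyTable_new (col1, col2)
select col1, col2 from MyTable;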
Benefits:
This approach will ensure that none of the objects dependent on the source table become INVALID, which generally hampers some features of the application(s).
I am trying to copy table information from a backup dummy database to our live SQL database (an accident happened in our program, Visma Business, where someone managed to overwrite 1300 customer names), but I am having a hard time figuring out the right code for this. I've looked around, and yes, there are several similar problems, but I just can't get this to work even though I've tried different solutions.
Here is the simple code I used last time. In theory, all I need is the equivalent of MySQL's ON DUPLICATE KEY UPDATE, which would be MERGE on SQL Server? I just didn't quite know what to write to get that MERGE to work.
INSERT [F0001].[dbo].[Actor]
SELECT * FROM [FDummy].[dbo].[Actor]
The error message I get with this is:
Violation of PRIMARY KEY constraint 'PK__Actor'. Cannot insert duplicate key in object 'dbo.Actor'.
What the error message says is simply "You can't add the same value if an attribute has a PK constraint". If you already have all the information in your backup table, what you should do is TRUNCATE TABLE, which removes all rows from a table while the table structure and its columns, constraints, indexes, and so on remain.
After that step you should follow this answer. Or alternatively, I recommend a tool called Kettle, which is open source and easy to use for these kinds of data movements. That will save you a lot of work.
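In its simplest form that restore could look like the sketch below, assuming dbo.Actor has no identity column and no foreign keys pointing at it (either would make the TRUNCATE or the SELECT * fail):
TRUNCATE TABLE [F0001].[dbo].[Actor];
INSERT INTO [F0001].[dbo].[Actor]
SELECT * FROM [FDummy].[dbo].[Actor];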
Here are the things which can be the reason:
You have multiple rows in [FDummy].[dbo].[Actor] with the same data in the column which is going to be inserted into the primary key column of [F0001].[dbo].[Actor].
You have rows in [FDummy].[dbo].[Actor] with some value x in the column which is going to be inserted into the primary key column, and there is/are existing row(s) in [F0001].[dbo].[Actor] with the same value x in the primary key column.
-- to check the first point; if it returns rows then you have some problem
SELECT ColumnGoingToBeMappedWithPK,
       Count(*)
FROM [FDummy].[dbo].[Actor]
GROUP BY ColumnGoingToBeMappedWithPK
HAVING Count(*) > 1

-- to check the second point; if the count is greater than 0 then you have some problem
SELECT Count(*)
FROM [FDummy].[dbo].[Actor] a
JOIN [F0001].[dbo].[Actor] b
  ON a.ColumnGoingToBeMappedWithPK = b.PrimaryKeyColumn
The MERGE statement will possibly be the best for you here, unless the primary key of the Actor table is reused after a previous record is deleted (i.e. not auto-incremented), so that, say, the record with id 13 in F0001.dbo.Actor is not the same "actor" information as in FDummy.dbo.Actor.
To use the statement with your code, it will look something like this:
begin transaction

merge [F0001].[dbo].[Actor] as t      -- the destination
using [FDummy].[dbo].[Actor] as s     -- the source
on (t.[PRIMARYKEY] = s.[PRIMARYKEY])  -- update with your primary keys
when matched then
    update set t.columnname1 = s.columnname1,
               t.columnname2 = s.columnname2,
               t.columnname3 = s.columnname3
               -- repeat for all your columns that you want to update
output $action,
       Inserted.*,
       Deleted.*;

rollback transaction -- change to commit after testing
Further reading can be done at the sources below:
MERGE (Transact-SQL)
Inserting, Updating, and Deleting Data by Using MERGE
Using MERGE in SQL Server to insert, update and delete at the same time
I am pretty new to Microsoft SQL Server, I am not so into databases, and I have the following problem.
I have to delete all the records that are inside a table named VulnerabilityReference
So I executed this statement:
delete from VulnerabilityReference;
But I obtain this error message, and no rows are deleted from my table:
Msg 547, Level 16, State 0, Line 1
The DELETE statement conflicted with the REFERENCE constraint "FK_AlertDocument_Reference_Reference". The conflict occurred in database "DB NAME", table "dbo.VulnerabilityAlertDocument_VulnerabilityReference", column 'VulnerabilityReferenceId'.
The statement has been terminated.
What does it exactly mean? Do I have to delete all the records from the VulnerabilityAlertDocument_VulnerabilityReference table before deleting the records in my VulnerabilityReference?
Tnx
Andrea
The table you are attempting to delete from has its primary key referenced in another table. You must first remove (or set to NULL) the referencing values in the other table BEFORE you can delete. This is called DB referential integrity.
You can disable constraints (if you have adequate permissions) but I would suggest you not do that.
To remove the link for all records in that other table, you could do this:
UPDATE VulnerabilityAlertDocument_VulnerabilityReference
SET VulnerabilityReferenceId = NULL
If the VulnerabilityAlertDocument_VulnerabilityReference table does NOT allow nulls for that column, then you'll need to DELETE all the VulnerabilityAlertDocument_VulnerabilityReference records etc... OR ALTER the table to ALLOW NULL. Make sure you know what you are about to do...
I am assuming the name of the column in the VulnerabilityAlertDocument_VulnerabilityReference table; you obviously need to use the correct table and column names.
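If you do want to remove everything rather than null out the references, delete in child-to-parent order, e.g.:
-- Child rows first, then the rows they reference
DELETE FROM VulnerabilityAlertDocument_VulnerabilityReference;
DELETE FROM VulnerabilityReference;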
This error is because you have an id of a record in VulnerabilityReference related to the AlertDocument table. The error is warning you of this problem: you can't delete this record if you don't first delete the related records.
I mean the tables in the relationship named FK_AlertDocument_Reference_Reference.
You have a relationship between VulnerabilityAlertDocument_VulnerabilityReference and another table. You can go to this table and find the relationships; look for the names beginning with FK. In the properties of this relationship you will see the name of the other table and the columns used in the relationship. If you want, you can create a diagram, then drop these two tables onto it and see in a visual way how the relationship is built.
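If you prefer a query over clicking through the designer, the system views can list the relationships as well (a sketch):
SELECT fk.name                              AS constraint_name,
       OBJECT_NAME(fk.parent_object_id)     AS referencing_table,
       OBJECT_NAME(fk.referenced_object_id) AS referenced_table
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID(N'dbo.VulnerabilityReference');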
I am loading a file data into a table using BODI (Business Objects Data Integrator).
Unfortunately the input file has a duplicate record, which is giving a unique constraint error while inserting into the table. Is there a way to find that duplicate record?
Import into the table with the constraint disabled or removed, or into a table with a similar structure but without the constraint, and run the query:
select uniquefield(s) from tablename group by uniquefield(s) having count(*) > 1
Sometimes the error message shows the duplicated key too.
You can use a Table Comparison transform and set the option "Input Contains Duplicate Keys". This will load the record only once even though the source has duplicate records.