Inserting data into SQL Server database with potentially duplicate data - sql

I have two databases on different servers, each of which has a table called dbo.A. The data in both is largely the same, but I want to make sure the two tables end up with the same data. I've been using SQL Server (June 2016) to export data from one table to the other, but the error I get is:
Violation of PRIMARY KEY constraint ''. Cannot insert duplicate key in object A
The duplicate key value is 'Some text here'
I know I can delete the table and reinsert the rows, but that's cumbersome and pretty bad practice. What would be the best way for me to update the data in the second database?

Add the other server as a linked server and use the statements below.
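If the linked server is not set up yet, here is a minimal sketch using sp_addlinkedserver and sp_addlinkedsrvlogin (the server names below are placeholders; adjust the provider and the security settings for your environment):
EXEC master.dbo.sp_addlinkedserver
    @server = N'LinkedServerB',       -- name you will reference in queries
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'ServerBHostName';    -- network name of the remote instance

EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = N'LinkedServerB',
    @useself = N'True';               -- pass the current login's credentials through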
To add rows to Server A's dbo.A from Server B's dbo.A:
INSERT INTO dbo.A (Col1, Col2, Col3, ....)
SELECT src.Col1, src.Col2, src.Col3, ....
FROM [LinkedServerB].[DBName].[dbo].[A] src
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.A dest
                  WHERE dest.PK_Column = src.PK_Column)
Then run the equivalent query on Server B to pull in the rows it is missing from Server A.
To add rows to Server B's dbo.A from Server A's dbo.A:
INSERT INTO dbo.A (Col1, Col2, Col3, ....)
SELECT src.Col1, src.Col2, src.Col3, ....
FROM [LinkedServerA].[DBName].[dbo].[A] src
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.A dest
                  WHERE dest.PK_Column = src.PK_Column)

OK, if you can't use a linked server, can you copy the data into an empty staging table? Then run a similar INSERT statement, but reference the staging table instead of the linked table, as sketched below.
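A rough sketch of that, assuming the copied rows land in a staging table named dbo.A_Staging with the same columns (the staging table name and PK_Column are placeholders for your real names):
INSERT INTO dbo.A (Col1, Col2, Col3)
SELECT s.Col1, s.Col2, s.Col3
FROM dbo.A_Staging s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.A dest
                  WHERE dest.PK_Column = s.PK_Column);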

Related

Update table (target) from different database (source)

I have a SQL Server instance running on my machine.
It has two databases: SF_PROD and SF_INIT.
SF_PROD and SF_INIT have a common table USER_MASTER with the same structure.
My requirement is that whenever SF_PROD.USER_MASTER gets updated, the same operation should be applied to SF_INIT.USER_MASTER.
Is there any way to accomplish this task?
If both databases are running on the same SQL Server instance, then you can just write a trigger on SF_PROD's USER_MASTER table that inserts the data into the SF_INIT USER_MASTER table.
-- Create this trigger in the SF_PROD database
CREATE TRIGGER SyncUserMasterTrigger ON dbo.USER_MASTER
FOR INSERT
AS
INSERT INTO SF_INIT.dbo.USER_MASTER (col1, col2, col3)
SELECT col1, col2, col3
FROM inserted;
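Since the requirement also mentions updates, here is a sketch of a companion UPDATE trigger, under the assumption that USER_MASTER has a key column (called USER_ID here purely as a placeholder):
-- Also created in the SF_PROD database; USER_ID stands in for the real key column
CREATE TRIGGER SyncUserMasterUpdateTrigger ON dbo.USER_MASTER
FOR UPDATE
AS
UPDATE tgt
SET tgt.col1 = i.col1,
    tgt.col2 = i.col2,
    tgt.col3 = i.col3
FROM SF_INIT.dbo.USER_MASTER tgt
JOIN inserted i ON i.USER_ID = tgt.USER_ID;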

How to copy data from TableA to TableB with new partitions?

I have TableA, which has hundreds of thousands of rows and is still growing. With no partitions, query speed has degraded very noticeably.
So, in Oracle SQL Developer, I created a new table called TableB with columns exactly matching TableA (both names and types). (TableA and TableB are in the same database but are separate tables.) I also created partitions for TableB.
Now, all I want to do is copy all of the data from TableA to TableB so I can compare query speeds between the unpartitioned and partitioned tables.
insert into TableB ( select * from TableA);
What I expected from the statement above was for the data to be copied over, but instead I got this error:
Error starting at line : 1 in command -
insert into TableB ( select * from TableA)
Error at Command Line : 1 Column : 1
Error report -
SQL Error: ORA-54013: INSERT operation disallowed on virtual columns
54013. 0000 - "INSERT operation disallowed on virtual columns"
*Cause: Attempted to insert values into a virtual column
*Action: Re-issue the statment without providing values for a virtual column
I looked up virtual columns and found this description:
"When queried, virtual columns appear to be normal table columns, but their values are derived rather than being stored on disc. The syntax for defining a virtual column is listed below."
However, I do not have any data in TableB whatsoever. TableB only has columns that match TableA, so I am unsure how my columns can be derived when there is nothing to derive.
You can use this query to see which of TABLEA's columns are virtual:
SELECT column_name, virtual_column
FROM user_tab_cols
WHERE table_name = 'TABLEA';
COLUMN_NAME VIRTUAL_COLUMN
----------- --------------
ID NO
COL1 NO
COL2 NO
COL3 YES
Then use
INSERT INTO TABLEB (ID, COL1, COL2) SELECT ID, COL1, COL2 FROM TABLEA;
to leave out the virtual columns; their values are calculated from the other columns' values.
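If the table is wide, a query along these lines (just a sketch; it assumes the table name TABLEA and an Oracle version with LISTAGG, 11gR2 or later) can build the non-virtual column list for you:
SELECT LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id) AS insert_list
FROM   user_tab_cols
WHERE  table_name = 'TABLEA'
AND    virtual_column = 'NO'
AND    hidden_column = 'NO';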
Did you create TableB with the derived columns as well? From your question I presume you created TableB with virtual columns too.
One thing to note: since you have a large volume of records to insert, use bulk (direct-path) mode for a faster operation by adding the APPEND hint, as shown below.
Please note that you need not include the virtual columns in the statement below, as they are calculated on the fly.
insert /*+ APPEND */ into TableB (column1, column2, ..., columnN)
select column1, column2, ..., columnN
from TableA;
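One caveat worth adding (general Oracle behaviour, not specific to these tables): after a direct-path (APPEND) insert you must commit before the same session can query TableB again, otherwise Oracle raises ORA-12838. So the insert should be followed by:
commit;   -- required before TableB can be read again in this session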

postgresql insert into from select

I have two tables table1 and test_table1 which have the same schema.
Both tables have rows/data, with pk IDs starting from 1.
I would like to do:
insert into test_table1 select * from table1;
but this fails because pk values from table1 already exist in test_table1.
A way around it would be to specify the columns and leave the pk column out, but for some reason that's not working either:
e.g.
NOTE - no pk columns in query below
insert into test_table1 (col1, col2,..., coln) select col1,col2,...,coln from table1;
returns
ERROR: duplicate key value violates unique constraint "test_table1_pkey"
DETAIL: Key (id)=(1) already exists.
I know this works in MySQL; is this behaviour specific to PostgreSQL? Any way around it?
EDIT:
Both tables have primary keys and sequence set.
Since it wasn't clear - tables don't have the same data.
I would just like to add rows from table1 to test_table1.
For the answers telling me to exclude the primary key from the query: I already did, as I said above.
Just remove the pk column from the column list of the query:
insert into test_table1 (col2,..., coln) select col2,...,coln from table1;
If it still fails, maybe you do not have a sequence on the pk column.
Create a sequence for the already existing pk column:
create sequence test_table1_seq;
ALTER TABLE test_table1
ALTER COLUMN col1 SET DEFAULT nextval('test_table1_seq'::regclass);
And update the sequence value to the current maximum:
SELECT setval('test_table1_seq', (SELECT MAX(col1) FROM test_table1));
This post helped me solve my problem, not sure what went wrong:
How to fix PostgreSQL error "duplicate key violates unique constraint"
If you get this message when trying to insert data into a PostgreSQL database:
ERROR: duplicate key violates unique constraint
That likely means that the primary key sequence in the table you're working with has somehow become out of sync, likely because of a mass import process (or something along those lines). Call it a "bug by design", but it seems that you have to manually reset the primary key index after restoring from a dump file. At any rate, to see if your values are out of sync, run these two commands:
SELECT MAX(the_primary_key) FROM the_table;
SELECT nextval('the_primary_key_sequence');
If the first value is higher than the second value, your sequence is out of sync. Back up your PG database (just in case), then run this:
SELECT setval('the_primary_key_sequence', (SELECT MAX(the_primary_key) FROM the_table)+1);
That will set the sequence to the next available value that's higher than any existing primary key in the sequence.
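As a side note: if the primary key column was declared as serial (so the sequence is owned by the column), you can let pg_get_serial_sequence look up the sequence name instead of hard-coding it; the_table and the_primary_key are the same placeholders as above:
SELECT setval(pg_get_serial_sequence('the_table', 'the_primary_key'),
              (SELECT MAX(the_primary_key) FROM the_table) + 1);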
You would rather want to do an UPDATE with a join, like:
UPDATE test_table1 AS v
SET col1 = s.col1,
col2 = s.col2,
col3 = s.col3,
.....
colN = s.colN
FROM table1 AS s
WHERE v.id = s.id;
What you want to do is an upsert.
with upsert as (
update test_table1 tt
set col1 = t.col1,
col2 = t.col2,
col3 = t.col3
from table1 t
where t.id = tt.id
returning *
)
insert into test_table1(id, col1, col2, col3)
select id, col1,col2,col3
from table1
where not exists (select 1 from upsert u where u.id = table1.id)
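If you are on PostgreSQL 9.5 or later (an assumption about your version), the same upsert can be written with INSERT ... ON CONFLICT, assuming id is the primary key of test_table1:
insert into test_table1 (id, col1, col2, col3)
select id, col1, col2, col3
from table1
on conflict (id) do update
set col1 = excluded.col1,
    col2 = excluded.col2,
    col3 = excluded.col3;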

How to select data and insert it using a single SQL statement?

I want to select some data using a simple SQL query and insert that data into another table. Both tables are the same: the data types and column names all match, since one is simply a temporary copy of the master table. Using a single SQL statement, I want to insert the selected data into the other table, checking E_ID = ? in the WHERE condition. Another concern is that sometimes there may be no matching rows in the table; in that case, will it throw a SQL exception? There may also be multiple matching rows, i.e. one E_ID may have several rows; for example, my attachment_master and attachments_temp tables have multiple rows for a single ID. How do I handle those cases? I have one more problem: I can insert my master table's data into the temp table using the following code, but I want to keep all the other column values the same and change only one column, because I want to set the temp table's status column.
insert into dates_temp_table SELECT * FROM master_dates_table where e_id=?;
Here all the data is inserted into my dates_temp_table. But I want to copy all the column data and change only the dates_temp_table status column to "Modified". How should I change this code?
You could try this:
insert into table1 ( col1, col2, col3,.... )
SELECT col1, col2, col3, ....
FROM table2 where (you can check any condition here on table1 or table2 or mixed)
For more info have a look here and this similar question
Hope it may help you.
Edit: If I understand your requirement properly, then this may be a helpful solution for you:
insert into table1 ( col-1, col-2, col-3, ...., col-n, <your modification column name here> )
SELECT col-1, col-2, col-3, ...., col-n, 'modified'
FROM table2 where table2.e_id = <your id value here>
As per your comment on the other answer above:
"I send my E_ID. I don't want to matching and get. I send my E_ID and
if that ID available I insert those data into my temp table and change
temp table status as 'Modified' and otherwise don't do anything."
According to your statements above: if the given e_id exists, this will copy all the column values into your table1 and place the value 'modified' in the status column of table1.
For more info look here
You can use a MERGE statement if I understand your requirement correctly.
Documentation
As I do not have your table structure, the statement below is based on assumptions; see whether it caters to your requirement. I am assuming that e_id is the primary key; change this as per your table design.
MERGE INTO dates_temp_table trgt
USING (SELECT * FROM master_dates_table WHERE e_id = 100) src
   ON (trgt.e_id = src.e_id)
WHEN NOT MATCHED THEN
  INSERT (trgt.col1, trgt.col2, trgt.status)
  VALUES (src.col1, src.col2, 'Modified');
More information and examples here
insert into tablename (column1, column2, column3, column4)
SELECT column1, column2, column3, column4
from anothertablename where anothertablename.ID = ?
If multiple rows match, they will all be inserted; if that is not what you want, you will have to narrow your search.

SQL Server Generate Script To Fill Tables With Data From Other Database?

Let's say I have two databases with identical tables, but one database's tables contains data while the other doesn't. Is there a way in SQL Server to generate a script to fill the empty tables with data from the full tables?
If the tables are identical and don't use an IDENTITY column, it is quite easy.
You would do something like this:
INSERT INTO TableB
SELECT * FROM TableA
Again, only for identical table structures, otherwise you have to change the SELECT * to the correct columns and perform any conversions that are necessary.
And, to add to @WilliamD's answer, if there is an IDENTITY column you can use a variation of the INSERT statement.
Assuming you have two columns (Col1 and Col2, with Col1 having IDENTITY property) in the tables, you can do the following:
SET IDENTITY_INSERT TableB ON
INSERT INTO TableB (col1, col2)
SELECT col1, col2 FROM TableA
SET IDENTITY_INSERT TableB OFF
It's necessary to list the columns in this situation.
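Since the question is about two databases on (presumably) the same instance, the same pattern works with three-part names; this is only a sketch with placeholder database names FullDB and EmptyDB:
SET IDENTITY_INSERT EmptyDB.dbo.TableA ON

INSERT INTO EmptyDB.dbo.TableA (col1, col2)
SELECT col1, col2
FROM FullDB.dbo.TableA

SET IDENTITY_INSERT EmptyDB.dbo.TableA OFF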