I have two tables with identical schemas in two databases. The two databases are on different servers at different locations. Data can be inserted and updated in either table. The requirement is to sync the two tables so that both always have the latest information.
The primary key column will always be unique in either database table.
How can this be achieved via SSIS?
Kindly guide.
You can achieve it with 2 Script Tasks. In the first one:
-- what exists in A and not in B
SELECT a.* INTO DB1.temp.TBL_A_except
FROM DB1.schema1.TBL_A AS a
WHERE a.pk IN
(
    SELECT pk FROM DB1.schema1.TBL_A
    EXCEPT
    SELECT pk FROM DB2.schema2.TBL_B
);
-- what exists in B and not in A
SELECT b.* INTO DB2.temp.TBL_B_except
FROM DB2.schema2.TBL_B AS b
WHERE b.pk IN
(
    SELECT pk FROM DB2.schema2.TBL_B
    EXCEPT
    SELECT pk FROM DB1.schema1.TBL_A
);
In the second one:
INSERT INTO DB2.schema2.TBL_B
SELECT * FROM DB1.temp.TBL_A_except;
INSERT INTO DB1.schema1.TBL_A
SELECT * FROM DB2.schema2.TBL_B_except;
DROP TABLE DB1.temp.TBL_A_except;
DROP TABLE DB2.temp.TBL_B_except;
If you really want to achieve this with SSIS transformation techniques, I'd use two data flows with two Cache Connection Managers acting as temp tables 1 and 2: the first data flow saves the data into the caches, the second loads from the caches into the tables.
or
Two data flows. Source -> Lookup -> Destination.
Implement the Lookup to check the second table for the existence of the PK. If a record in Tbl_A has no matching PK in Tbl_B, it has to be inserted into Tbl_B; the No Match Output directs such rows to the Destination.
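In plain SQL terms, the Lookup / No Match Output pattern is an anti-join insert; a minimal sketch, assuming pk is the key column and both tables are reachable from one server (for example via a linked server):
-- Rows of TBL_A whose pk has no match in TBL_B get inserted into TBL_B;
-- run the symmetric statement for the opposite direction.
INSERT INTO DB2.schema2.TBL_B
SELECT a.*
FROM DB1.schema1.TBL_A AS a
WHERE NOT EXISTS (SELECT 1 FROM DB2.schema2.TBL_B AS b WHERE b.pk = a.pk);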
Prolog:
I have two tables in two different databases; one is an updated version of the other. For example, imagine that one year ago I duplicated table 1 in the new db (say, as table 2), and from then on I worked only on table 2, never updating table 1.
I would like to compare the two tables to get the differences that have accumulated over this period (the tables have preserved the same structure, so the comparison is meaningful).
My approach was to create a third table, copy both table 1 and table 2 into it, and then count the number of repetitions of every entry.
In my opinion this, together with a new attribute specifying for every entry the table it came from, would do the job.
Problem:
Copying the two tables into the third table, I get the (obvious) error of having two duplicate key values in a unique or primary key constraint.
How could I bypass the error, or how could I do the same job better? Any idea is appreciated.
Something like this should do what you want if A and B have the same structure; otherwise just select and rename the columns you want to compare....
SELECT
    B.*
FROM
    B
-- correlate the subquery on the key column(s); "id" here stands for your key
WHERE NOT EXISTS (SELECT 1 FROM A WHERE A.id = B.id)
If NOT EXISTS doesn't work in your DBMS, you could also use a left outer join comparing the rows' column values and keeping only the unmatched ones:
SELECT
    A.*
FROM
    A LEFT OUTER JOIN B
        ON A.col = B.col AND ....
WHERE
    B.col IS NULL
I have a table with 5 billion rows (table1) and another table with 3 billion rows (table2). These two tables are related: I have to delete 3 billion rows from table1 and their related rows from table2. Table1 is a child of table2. I tried using the FORALL method from PL/SQL, but it didn't help much. Then I thought of using an Oracle partitioning strategy. Since I am not a DBA, I would like to know whether partitioning an existing table on the primary key column, for a selected set of IDs, is possible. My primary key is a 64-bit auto-generated number.
It is hard to partition the objects online (it can be done using dbms_redefinition), and it is not necessary given the details you gave.
The best idea would be to recreate the objects without the undesired rows.
For example, some simple code would look like this:
create table undesired_data as
  (select * from table1 where ... /* the undesired rows */);

create table table1_new as
  (select * from table1 where key not in (select key from undesired_data));

create table table2_new as
  (select * from table2 where key not in (select key from undesired_data));

rename table1 to table1_old;
rename table2 to table2_old;
rename table1_new to table1;
rename table2_new to table2;

-- recreate constraints
-- check if everything is ok
-- drop table1_old and table2_old
This can be done with consumers offline, but the downtime for them would be very small if the scripts are right (you should test them in a test environment first).
Sounds very dubious.
If it is a real use case then you don't delete; you create another, well-defined (including partitioned) table and you fill it using insert /*+ append */ into MyNewTable select ....
The most common practice is to define partitions on dates (record create date, event date etc.).
Again, if this is a real use case, I strongly recommend that you get real help rather than seeking advice on the internet and doing it yourself.
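For illustration only, a hedged sketch of that approach; the table name, the event_date column, and the partition boundaries are all made up and would have to come from your own schema:
-- Create the replacement table already partitioned (here by a date column),
-- then fill it with only the rows to keep using a direct-path insert.
CREATE TABLE my_new_table (
  id          NUMBER NOT NULL,
  event_date  DATE   NOT NULL
  -- ... the remaining columns of table1 ...
)
PARTITION BY RANGE (event_date)
(
  PARTITION p_2022 VALUES LESS THAN (DATE '2023-01-01'),
  PARTITION p_max  VALUES LESS THAN (MAXVALUE)
);

INSERT /*+ APPEND */ INTO my_new_table (id, event_date /* , ... */)
SELECT id, event_date /* , ... */
FROM   table1
WHERE  /* condition selecting the rows to keep */ 1 = 1;

COMMIT;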
I managed to delete 4,000 rows from a table in my 129,000-row production database (Postgres 9.4 on Heroku), but only identified the problem a few days later.
I have a backup from before the loss, but I only want to selectively restore the missing rows back to the table, preserving their IDs. (A complete restore is not an option, as new data has since been added to the table.)
Into a local testing database I have imported the backed-up table as articles_backups, alongside the actual articles table. I want to find all the rows in articles_backups that are missing from articles and copy them to a new table, articles_restores, which I will then restore to the production database, back into the articles table (preserving record IDs).
This query successfully returns all the IDs of the deleted records:
select articles_backups.id
from articles_backups
left outer join articles on (articles_backups.id = articles.id)
where articles.id is null
But I have not been able to copy the result to a new table. I have unsuccessfully tried:
select *
into articles_restores
from articles_backups
left outer join articles on (articles_backups.id = articles.id)
where articles.id is null;
Which gives:
ERROR: column "id" specified more than once
Basically your query with LEFT JOIN / IS NULL does what you are after:
Select rows which are not present in other table
You get the error because you select all columns from both tables, and there is an id column in both. It's not possible to create a new table with duplicate column names, and it's not what you want to begin with. Only select columns from articles_backups:
CREATE TABLE articles_restores AS
SELECT ab.*
FROM articles_backups ab
LEFT JOIN articles a USING (id)
WHERE a.id IS NULL;
While at it, I simplified your query syntax with table aliases. The USING clause is just for the convenience of shorter code: it folds the two id columns into one, but all other columns are still there twice if you SELECT *.
Use CREATE TABLE AS. SELECT INTO is also defined by the SQL standard and implemented in Postgres, but its use is discouraged. It's used in PL/pgSQL functions for a different purpose. Details:
Creating temporary tables in SQL
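Once articles_restores has been created this way and moved to production, putting the rows back into articles (preserving their IDs, as you intend) could be a plain insert; a minimal sketch, assuming no conflicting ids were added in the meantime:
INSERT INTO articles
SELECT * FROM articles_restores;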
You could use EXCEPT to retrieve all the rows from articles_backups that differ from articles:
(assuming both tables have the same columns in the same order)
You could also create a temp table with this information to make your repair statements easier:
create table temp_articles as
select * from articles_backups
except
select * from articles;
Step 1 - update the rows from articles_backups that are present in articles.
This step needs attention: you will have to establish a rule to choose between the data present in articles and the data present in temp_articles.
UPDATE articles a
SET col1 = b.col1,
    col2 = b.col2
    /* ... other columns ... */
FROM temp_articles AS b
WHERE a.id = b.id
  AND /* your rule for data to be (or not) updated goes here */
Step 2 - insert the rows from articles_backups that are not present in articles (your deleted records):
insert into articles
select * from temp_articles where id not in (select id from articles)
Let us know if you need more help.
I have two tables, A and B. A has a column b_id which acts as a foreign key reference for a many-to-one relationship.
So is there any difference in load on Oracle when executing queries like
select A.* from A, B where A.b_id=B.ID and B.ID=? -- auto-generated by hibernate
and
select * from A where b_id = ? -- Created manually
UPDATE: I only need data from table A.
There will certainly be a difference between the two queries: the first one reads from two tables, while the second one queries just a single table.
Even if you don't return any columns from table B in the first query, its data is still used to evaluate the join condition (which is not the case in the second query).
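If you want to see the difference on your own data, compare the execution plans; a sketch, with :b_id standing in for the ? bind parameter:
EXPLAIN PLAN FOR
  SELECT A.* FROM A, B WHERE A.b_id = B.ID AND B.ID = :b_id;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

EXPLAIN PLAN FOR
  SELECT * FROM A WHERE b_id = :b_id;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
With an index on A.b_id and B.ID as the primary key, the first plan will typically show an extra unique index access on B that the second plan does not need.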
I have a table (A) in a database that doesn't have PKs; it has about 300k records.
I have a subset copy (B) of that table in another database; it has only 50k records and contains a backup for a given time range (July data).
I want to copy the missing records from table B into table A without duplicating existing records, of course. (I can create a database link to make things easier.)
What strategy can I follow to successfully insert the missing rows from B into A?
These are the table columns:
IDLETIME NUMBER
ACTIVITY NUMBER
ROLE NUMBER
DURATION NUMBER
FINISHDATE DATE
USERID NUMBER
.. 40 extra varchar columns here ...
My biggest concern is the lack of a PK. Can I create something like a hash or a PK using all the columns?
What could be a possible way to proceed in this case?
I'm using Oracle 9i for table A and Oracle XE (10g) for B.
The approximate number of elements to copy is 20,000
Thanks in advance.
If the data volumes are small enough, I'd go with the following:
CREATE DATABASE LINK A CONNECT TO ... IDENTIFIED BY ... USING ....;
INSERT INTO COPY
SELECT * FROM table@A
MINUS
SELECT * FROM COPY;
You say there are about 20,000 rows to copy, but not how many are in the entire dataset.
The other option is to delete the current contents of the copy and insert the entire contents of the original table.
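That refresh alternative would just be something like the following (a sketch, reusing the link name A from above):
DELETE FROM COPY;
INSERT INTO COPY SELECT * FROM table@A;
COMMIT;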
If the full datasets are large, you could go with a hash, but I suspect it would still have to drag the entire dataset across the DB link to apply the hash in the local database.
As long as no duplicate rows should exist in the table, you could apply a unique or primary key constraint across all columns. If the overhead of such a key/index would be too much to maintain, you could also query the database from your application to check whether a row already exists, and only perform the insert if it is absent.
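A hedged sketch of the hash idea from the question, using DBMS_UTILITY.GET_HASH_VALUE over a pipe-delimited concatenation of columns (only a few of the question's columns are shown, the table name b_subset is made up, and the base/size arguments are arbitrary choices):
select dbms_utility.get_hash_value(
         to_char(idletime) || '|' ||
         to_char(activity) || '|' ||
         to_char(duration) || '|' ||
         to_char(finishdate, 'YYYYMMDDHH24MISS') || '|' ||
         to_char(userid)
         /* || '|' || ... the remaining columns ... */,
         1, 1073741824) as row_hash
from b_subset;
A hash match is only a candidate for equality (collisions are possible), so you would still compare the full rows before deciding to skip an insert.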