SQL Server replication using triggers

I have two databases (db1, db2) that reside on different servers: db1 is on dbserver1, db2 on dbserver2.
Now I want to replicate data from db1 (old schema) to the new schema in db2 in REAL TIME. What is the best/most efficient approach here?
The first thing that comes to mind is triggers. Is it possible to have a trigger in db1 that inserts/updates records in db2? Is there any other approach? Thanks.
[db1.OldSchema] => [db2.NewSchema]
ADDITIONAL: this is a one-way sync only, because db2 will be used only for reporting.

This question is probably best suited for Database Administrators, but the short answer is that there are a variety of methods you can use:
Scheduled backup/restores (if you're happy to blow away your 2nd DB on each restore)
Log shipping (passes over changes since the last update)
SSIS package (if you need to change the structure of the database, i.e. transform it, then this is a good method; if the structure is the same, use one of the other methods)
Replication (as you seem to want one-way sync, I'd suggest transactional replication; this is the closest you'll probably get to real time, but it should not be entered into lightly as it will have an impact on how you work with both databases)
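If you do go down the trigger route from the question, the usual shape is an AFTER trigger on the db1 table that pushes rows to db2 through a linked server. The sketch below is only illustrative: the linked server name [DBSERVER2] and the table and column names are placeholders for your own setup, and a synchronous cross-server trigger means every write to db1 now depends on db2 (and usually MSDTC) being available.

    -- Minimal sketch: push new rows from db1 to db2 via a linked server.
    -- [DBSERVER2] must already exist as a linked server; all object names are placeholders.
    CREATE TRIGGER trg_Customer_SyncToDb2
    ON dbo.Customer
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Four-part name: LinkedServer.Database.Schema.Table
        INSERT INTO [DBSERVER2].db2.NewSchema.Customer (CustomerId, Name, CreatedAt)
        SELECT CustomerId, Name, CreatedAt
        FROM inserted;   -- 'inserted' holds the rows just written to db1
    END;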

Progress DB: backup restore and query individual tables

Here is the use case: we need to back up some of the tables from a client server, copy them to our servers, restore them, then run some queries using ODBC.
I managed to do this process for the entire database by using probkup for backup, prorest for restore, and proserve to make it accessible for SQL queries.
However, some of the databases are big (> 8 GB), so we are looking for a solution to back up only the tables we need. I didn't find anything in the probkup documentation about how this can be done.
Progress only supports full database backups.
To get the effect that you are looking for you could dump (export) the tables that you want and then load them into an empty database.
"proutil dump" and "proutil load" are where you want to start digging.
The details will vary depending on exactly what you want to do and what resources and capabilities you have available to you.
Another option would be to replicate the tables in question to a partial database. Progress has a product called "pro2" that can help with that. It is usually pointed at SQL targets but you could also point it at a Progress database.
Or, if you have programming skills, you could put together a solution using replication triggers (under the covers that's what pro2 does...)
probkup and prorest are block-level programs and can't do a backup or restore by table.
To do what you're asking for, you'll need to dump the data from the source DB's tables and then load it into the target DB.
If your objective is simply to maintain a copy of the DB, you might also try incremental backups. Depending upon your situation, that might speed things up a bit.
Other options include various forms of DB replication, which allow you to keep real- or near-real-time copies of your database.
OpenEdge Replication. With the correct license, you can do query-only access on the replication target, which is good for reporting and analysis.
Third-party replication products. These can be more flexible in terms of both target DBs and limiting the tables to be replicated.
Home-grown replication (by copying and applying AI files). This is not terribly complicated, but you have to factor in the cost of doing the work and maintaining the system. There are some scripts out there that can get you started.
Or, as Tom said, you can get clever with replication via triggers.

connecting to remote oracle database in SQL

I need to do some data migration between two Oracle databases that are on different servers. I've thought of some ways to do it, like writing a JDBC program, but I think the best way is to do it in SQL itself. I could also copy the entire table over to the database I am migrating to, but these tables are big and that doesn't seem like an "elegant" solution.
Is it possible to open a connection to one DB in SQL Developer, then connect to the other one using SQL and write update/insert statements on tables as if they were both in the same connection?
I have read some examples on creating linked tables, but none seem to be Oracle-specific or tell me how to open the external connection by supplying the server hostname/port/SID/user credentials.
Thanks for the help!
If you create a Database Link, you can just select from a different database by querying TABLENAME@dblink.
You can create such a link using the CREATE DATABASE LINK statement.
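For example, a minimal sketch (the host, port, SID, user and password below are placeholders for your own values):

    -- Create a link to the remote database, supplying host/port/SID and credentials.
    CREATE DATABASE LINK remote_db
      CONNECT TO remote_user IDENTIFIED BY remote_password
      USING '(DESCRIPTION=
                (ADDRESS=(PROTOCOL=TCP)(HOST=dbserver2.example.com)(PORT=1521))
                (CONNECT_DATA=(SID=ORCL2)))';

    -- Then query remote tables as if they were local:
    SELECT * FROM employees@remote_db;

If a TNS alias for the remote database is already defined in tnsnames.ora, you can pass that alias to USING instead of the full connect descriptor.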
It depends on whether it's a one-time thing or a regular process, and whether you need to do ETL (Extract, Transform and Load) or not, but I'll help you out based on what you explained.
From what I can gather from your explanation, what you're attempting to accomplish is to copy a couple of tables from one DB to another. If they can reach one another then it's really simple: you could just create a DBLINK (http://www.dba-oracle.com/t_how_create_database_link.htm) and then do an INSERT ... SELECT from either side, using the DBLINK for one of the tables and the local table as the receiver or sender. It's pretty straightforward.
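A rough illustration of that INSERT ... SELECT over a DBLINK (the link and table names are made up; adjust them to your own schema):

    -- Copy rows from the remote table into a local one over the database link.
    INSERT INTO employees_local (emp_id, emp_name, hire_date)
    SELECT emp_id, emp_name, hire_date
    FROM   employees@remote_db
    WHERE  hire_date >= DATE '2015-01-01';  -- optional filter to limit what is copied

    COMMIT;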
But if it's a one-time thing I would just move the table with expdp and impdp, since that will be a lot faster and put a lot less strain on the DB.
If it's something you need to maintain and keep updated, why not just add the DBLINK and use it on both sides? This will be dependent on network performance, though.
If this is a bit out of your depth, or you can't create dblinks due to restrictions, SQL Developer has had a database copy option for a while and you can go as far as copying individual tables, but it's very heavy on the system where it's being run (http://deepak-sharma.net/2014/01/12/copy-database-objects-between-two-databases-in-oracle-using-sql-developer/).

SQL Server and Oracle terminology

In SQL Server, if I have two applications and want to keep the data completely separate, I could simply create one database for each application, and therefore end up with two databases.
If I wanted to do the same thing in Oracle, what do I need to create?
- Do I create a new "Database", "Instance", "Schema", or "Tablespace" per application?
(Note: these two applications are the same application used by two different companies that do not share data!)
Reference: http://www.codeproject.com/Tips/492342/Concept-mapping-between-SQL-Server-and-Oracle
Having worked with SQL Server a lot in the past, I have sympathy with trying to figure out how Oracle organizes things as I struggled with the same thing. My comments below are from SQL Server 2000 and 2003 so forgive me if things have changed since then.
Previous responders have been helpful. I think one problematic assumption here is that there is an exact "level" equivalency between SQL Server and Oracle. What I mean by "level" is something that occupies the same space in the hierarchies that you have diagrammed above (which, by the way, I think are a good place to start, but might need a bit of editing in a couple of places; for example, I might put "user" and "schema" side by side in the Oracle hierarchy). I do not think these concept "levels" match exactly between the DB platforms.
A schema in Oracle is somewhat equivalent to a separate database in SQL Server but not entirely.
I would say that the "walls" -- not an exact technical term but oh well -- between databases in SQL server are a bit higher than the "walls" between schemas in Oracle. Others might disagree but here is my reasoning:
a. A schema in Oracle is a purely logical construct. It denotes who has ownership of objects. It has nothing to do with the physical location or layout of the objects. A tablespace (an orthogonal concept, as noted by a previous poster) indicates the physical location of objects. A tablespace can hold objects from multiple schemas and vice versa. In SQL Server these two concepts are sort of merged into one: a database is both tablespace and schema, more or less, although in some respects within a DB in SQL Server you then have multiple owners with various object ownership. This can get a bit confusing because, as I remember (it's been a couple of years), if you're not using NT Authentication the logins are defined at the server level and then have to be mapped to users in the individual DBs.
b. I remember finding it easier, or at least a bit simpler, to assure myself that the users of two separate DBs in SQL Server had no access to each other's DB than I have found it to be with two schemas in Oracle.
c. Because a DB in SQL Server represents both physical storage and logical ownership, you can detach the DB, move it to another SQL Server instance, and attach it. You can't do this with a schema in Oracle. I mean, you can Data Pump the data out, or back it up, to another server and another schema, but that all takes at least some scripting, or at least a fair amount of clicking in Enterprise Manager. It doesn't give you the one-click "Detach DB" option that you have in SQL Server, which makes it a lot easier to get the idea that SQL Server DBs are units that you can more or less move back and forth between instances.
To sum things up, I think either option would work. That is, 1) Create two separate instances of Oracle with one schema in each instance for each application, or 2) Create two separate schemas in one Oracle instance.
There are pros and cons for each option. Option 1 is probably going to be more work to set up and configure but will also give you more separation, independence, ability to have separate hardware, etc., for each DB. Option 2 will be quite a bit simpler but gives you less separation between the data and greater risk of configuration screw-ups or other things allowing users of one schema to access the other. It also means you have to be a bit more careful that someone writing a query accessing data in one schema doesn't use all the CPU and IO resources and starve a user on the other schema.
Also, yes, you could use pluggable databases in 12c. However, given that you need to ask these questions (no shame, just pointing out where you're at), I'm hesitant to recommend what can easily be a more complex setup.
TL;DR -- SQL Server isn't Oracle and Oracle isn't SQL Server. Either option works and there are pros and cons to each.
If you're using 12.1 or later with the multitenant option, you could create separate pluggable databases in a single container database. The other option, which works in any version of Oracle, would be to create a separate schema. It would be possible, as well, to create a separate database, though that is generally not the preferred approach unless you have a particular need to do things like upgrade the database that one application is using without affecting the other.
Creating a Database
If you create a separate database, you'd end up with complete separate memory structures (i.e. the SGA and PGA for each database would be separate) as well as a completely separate set of background processes (each database would have its own log writer process(es) for example). That is a very heavyweight option-- you can't have too many databases on a single server before you start having a lot of contention for RAM, for scheduling all the background processes, etc. It does provide for the maximum separation between different applications-- each database can be running a different version of Oracle with a different set of initialization parameters-- but this also tends to increase the complexity of managing the environment. This generally only makes sense when you have third party applications that require a specific version of the database or a specific set of initialization parameters.
Creating a Schema
If you create a separate schema, you still have a single database, so the two schemas share the same memory structures (competing with each other for space in the SGA's buffer cache, for example), initialization parameters, etc. You have to exercise a modicum of planning to ensure that the two don't interfere with each other -- you'd probably want to make sure that neither application creates public synonyms, or at least that they won't want to create the same public synonym as the other application -- but this is generally pretty trivial.
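A minimal sketch of the schema-per-application option (user names, passwords and the tablespace are placeholders; a real setup would also think through profiles, quotas and privileges more carefully):

    -- One schema (user) per application inside a single Oracle database.
    CREATE USER app1_owner IDENTIFIED BY app1_password
      DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;

    CREATE USER app2_owner IDENTIFIED BY app2_password
      DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;

    -- Each application connects as its own user and sees only its own objects
    -- unless privileges are explicitly granted across schemas.
    GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE TO app1_owner;
    GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE TO app2_owner;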
Creating a Pluggable Database
This only works in 12.1 and later, and only if you have the multitenant option. It is the closest match to the SQL Server concept of creating a new database for each application.
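A minimal sketch of that, assuming you are connected to the root of a 12c container database with the multitenant option (the PDB name, admin user and file paths are placeholders):

    -- One pluggable database per application inside a single container database.
    CREATE PLUGGABLE DATABASE app1_pdb
      ADMIN USER app1_admin IDENTIFIED BY app1_password
      FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/', '/u01/oradata/cdb1/app1_pdb/');

    ALTER PLUGGABLE DATABASE app1_pdb OPEN;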
You should create a new schema in the same database; the schema in Oracle plays the role of the SQL Server database here.

Synchronize non-data-related changes between different DB instances

Here's the scenario.
Two identical databases:
One Live database and one Archive database; they're supposed to have the exact same schema (tables, views, indexes, SPs, functions), and the only difference is the data in them. The data in the Live DB is archived according to some business rules, so the data in the Archive DB will differ from the Live DB.
The challenge is that we keep patching changes (SP changes, function changes, data changes, or even table schema changes) to the Live DB in each release. Unfortunately, the corresponding changes required on the Archive DB are forgotten about for a long time, and the issue has just not been addressed yet. One day the out-of-sync DBs will come back and bite us.
Here's what I want to do: I want to synchronize non-data related changes from Live DB to Archive DB. Either automated or manually.
Any idea is welcome. Here are some ideas that have come to my mind:
Replication? I find that replication doesn't fit this scenario very well.
Scripting the SP/function/view changes? I can manually pull out the scripts and combine them. But what about table schema changes? It's difficult for me to track back and find out what has happened to the table schemas.
I know Redgate and other products can do the job, but I'd like to explore the full range of options.
If anybody can point out some feasible way that'd be great.
If you are using SQL Server, Visual Studio Team System Database Edition has a schema comparison and patching tool. Have a look at this article
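If you want to start with something home-grown before reaching for a tool, one simple approach is to compare the definitions of the programmable objects (SPs, functions, views) between the two databases. The sketch below assumes both databases are reachable from one connection and uses the made-up names LiveDB and ArchiveDB; table schema changes would still need a separate comparison (e.g. against INFORMATION_SCHEMA.COLUMNS).

    -- Rough sketch: report modules that are missing or whose definitions differ
    -- between LiveDB and ArchiveDB (database names are placeholders).
    SELECT
        COALESCE(l.name, a.name) AS object_name,
        CASE
            WHEN a.name IS NULL THEN 'Missing in Archive'
            WHEN l.name IS NULL THEN 'Missing in Live'
            ELSE 'Definition differs'
        END AS status
    FROM      (SELECT o.name, m.definition
               FROM LiveDB.sys.objects o
               JOIN LiveDB.sys.sql_modules m ON m.object_id = o.object_id) AS l
    FULL JOIN (SELECT o.name, m.definition
               FROM ArchiveDB.sys.objects o
               JOIN ArchiveDB.sys.sql_modules m ON m.object_id = o.object_id) AS a
        ON l.name = a.name
    WHERE l.name IS NULL
       OR a.name IS NULL
       OR l.definition <> a.definition;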

Cross Database Stored Procedure performance considerations

In my project I have two separate DBs on the same server. The DBs are self-sufficient except for three columns in DB "B" that need to be accessed in DB "A".
Are there performance considerations if I were to have a stored proc in A that accessed three columns from B directly?
Currently, we run a nightly job to import the needed data from table B to table A, so that the stored proc isn't going out of the scope of A.
Is that the best method?
Are cross DB stored procs within best practices?
There should be no problem since the databases are on the same server. Problems usually occur when you do this with linked servers, where you can run into network latency.
To clarify the other posters' comments:
There is no "direct" negative performance impact when using cross-database access via stored procedures. Performance will be determined by the underlying architecture of the individual databases, i.e. indexes available, physical storage locations, etc.
This is actually quite a common practice, and as long as you follow standard query tuning principles you will be just fine.
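For what it's worth, the cross-database access itself is just three-part naming inside the procedure. A minimal sketch (database, table and column names are placeholders):

    -- Procedure lives in database A but reads the three columns from database B
    -- on the same server. All object names are placeholders.
    CREATE PROCEDURE dbo.GetOrderSummary
        @OrderId INT
    AS
    BEGIN
        SET NOCOUNT ON;

        SELECT o.OrderId,
               o.OrderDate,
               c.CustomerName,   -- the columns that actually live in database B
               c.Region,
               c.CreditLimit
        FROM   dbo.Orders AS o
        JOIN   B.dbo.Customers AS c   -- three-part name: Database.Schema.Table
               ON c.CustomerId = o.CustomerId
        WHERE  o.OrderId = @OrderId;
    END;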
Yes, what you're currently doing - i.e. replication - is the "correct" thing to do.
When referencing data in another database, you can't use referential integrity, data constraints and lots of the other good stuff that make an RDBMS a good tool to use.
Accessing the other database directly ties the databases together - they MUST exist on the same server, for all time. You may run into bandwidth issues using linked servers and executing the proc on demand.
Replication gives you far more flexibility.
It all depends on the data you are referencing and whether or not indexes are set up to support this direct access. The best thing I can tell you is to create the query, run it, and see if the performance is good enough.
If the performance is not satisfactory, then run the query again and have your management tool generate an execution plan so you can identify the bottleneck.
We do this all the time. As long as they are on the same server there is no problem. If you have a requirement that the data must be in table A in database A before it can go into table B in database B, you will need to write a trigger to check, as foreign key relationships can't be set across databases.
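A rough sketch of that kind of cross-database check trigger (database, table and column names are made up):

    -- Created in database B. Since a foreign key can't reference a table in another
    -- database, enforce the relationship with a trigger instead.
    CREATE TRIGGER dbo.trg_TableB_CheckParent
    ON dbo.TableB
    AFTER INSERT, UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;

        IF EXISTS (
            SELECT 1
            FROM inserted AS i
            LEFT JOIN DatabaseA.dbo.TableA AS a ON a.KeyId = i.KeyId
            WHERE a.KeyId IS NULL
        )
        BEGIN
            RAISERROR ('Row in TableB has no matching row in DatabaseA.dbo.TableA.', 16, 1);
            ROLLBACK TRANSACTION;
        END
    END;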