Can you SQL replicate tables from a log-shipped secondary DB to a separate third DB?

The idea is to offload some long-running scripts that read from those tables while, if possible, avoiding touching the primary DB.
I tried to research this before jumping straight in, but found very few references on whether this is supported or even possible.
Any advice is appreciated.
Thanks

Related

Can I use postgres_fdw without foreign tables defined?

I have a production database "PRODdb1" with a read-only user account. I need to query (SELECT statement) this database and insert the results into a secondary database named "RPTdb1". I originally planned to just create a temp table in PRODdb1 from my SELECT, but permissions are the issue.
I've read about dblink & postgres_fdw, but is either of these a solution for my issue? I wouldn't be creating foreign tables, because my SELECT joins many tables from PRODdb1, so I'm not sure whether postgres_fdw would still be an option for my use case.
Another option would be any means of getting the results of the SELECT into a .CSV file or something. My main blocker here is that I only have a read-only user to work with, and there is no way around that issue.
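If the CSV route is acceptable, psql's client-side \copy may already be enough, since it only needs SELECT privileges on PRODdb1 and writes the file on your own machine. The query below is a made-up placeholder for your multi-table SELECT.

```sql
-- Run in psql while connected to PRODdb1 as the read-only user.
-- The SELECT executes on the server, but the file is written client-side,
-- so no write permissions are needed on production.
\copy (SELECT o.id, o.total, c.name FROM orders o JOIN customers c ON c.id = o.customer_id) TO 'report.csv' CSV HEADER
```

You could then load report.csv into RPTdb1 with an ordinary COPY or \copy ... FROM.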
The simple answer is no. You cannot use postgres_fdw without defining a foreign table in your RPTdb1. This should not be much of an issue, though, since it is quite easy to create the foreign tables; a sketch follows.
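A minimal setup sketch, run in RPTdb1 (the server name, credentials, and table names are hypothetical):

```sql
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- Point a server object at production.
CREATE SERVER prod_server
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'prod.example.com', port '5432', dbname 'PRODdb1');

-- Map your local user to the read-only production account.
CREATE USER MAPPING FOR CURRENT_USER
    SERVER prod_server
    OPTIONS (user 'readonly_user', password 'secret');

-- Import definitions for every table your SELECT joins;
-- IMPORT FOREIGN SCHEMA (9.5+) saves writing CREATE FOREIGN TABLE by hand.
IMPORT FOREIGN SCHEMA public
    LIMIT TO (orders, customers)
    FROM SERVER prod_server INTO public;
```

After that, the foreign tables can be joined locally like ordinary tables, and the results inserted into RPTdb1 with a normal INSERT ... SELECT.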
I am in much the same boat as you. We use a 3rd party product (based on Postgres 9.3) for our production database and the user roles we have are very restrictive (i.e. read-only access, no replication, no ability to create triggers/functions/tables/etc).
I believe that postgres_fdw has the functionality you are looking for, with one caveat: your local reporting server needs to be running PostgreSQL version 10 (or 9.6 at a minimum). We currently use 9.3 on our local server, and while simple queries work beautifully, anything more complicated takes forever, because the FDW in 9.3 pulls all the data in the table across before it can perform JOINs or even apply the WHERE clause.
version 9.6: Pushes down JOINs to the remote database before returning results.
version 10: Pushes down aggregates such as COUNT and SUM to the remote database before returning results.
(I am not sure which version added the ability to push down WHERE clauses to the remote DB, but I know it was not possible in 9.5.)
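A quick way to see what your version actually pushes down: EXPLAIN VERBOSE prints the "Remote SQL" that postgres_fdw sends to the production server (the foreign tables and columns below are hypothetical).

```sql
-- If the JOIN and WHERE appear in the "Remote SQL" line of the plan,
-- they are being evaluated on PRODdb1 rather than pulled over the wire.
EXPLAIN (VERBOSE, COSTS OFF)
SELECT c.name, o.total
FROM orders AS o                              -- foreign table
JOIN customers AS c ON c.id = o.customer_id   -- foreign table
WHERE o.total > 100;
```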
We are in the process of upgrading our local server to version 10 this week. I can try to keep you updated with our progress; feel free to do the same.

Connecting to a remote Oracle database in SQL

I need to do some data migration between two Oracle databases that are on different servers. I've thought of some ways to do it, like writing a JDBC program, but I think the best way is to do it in SQL itself. I could also copy the entire tables over to the database I am migrating to, but these tables are big and that doesn't seem like an "elegant" solution.
Is it possible to open a connection to one DB in SQL Developer, then connect to the other one using SQL and write update/insert statements on tables as if they were both in the same connection?
I have read some examples on creating linked tables, but none seem to be Oracle-specific or tell me how to open the external connection by supplying the server hostname/port/SID/user credentials.
Thanks for the help!
If you create a database link, you can select from the other database by querying TABLENAME@dblink.
You can create such a link using the CREATE DATABASE LINK statement, which is where you supply the hostname, port, SID, and credentials.
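A minimal sketch (the hostname, service details, and credentials below are placeholders):

```sql
-- Run on the database you are migrating INTO.
CREATE DATABASE LINK prod_link
    CONNECT TO migration_user IDENTIFIED BY secret
    USING '(DESCRIPTION=
              (ADDRESS=(PROTOCOL=TCP)(HOST=remote-host.example.com)(PORT=1521))
              (CONNECT_DATA=(SID=ORCL)))';

-- Query the remote table as if it were local...
SELECT COUNT(*) FROM employees@prod_link;

-- ...or copy rows across in a single statement.
INSERT INTO employees
SELECT * FROM employees@prod_link;
```

The USING clause can also name a tnsnames.ora alias instead of a full connect descriptor.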
It depends on whether it's a one-time thing or a recurring process, and on whether you need to do ETL (Extract, Transform and Load) or not, but I'll help you out based on what you explained.
From what I can gather from your explanation, you are attempting to copy a couple of tables from one DB to another. If they can reach one another, it's really simple: you could just create a DB link (http://www.dba-oracle.com/t_how_create_database_link.htm) and then do an INSERT ... SELECT from either side, using the DB link for one of the tables and the local table as the receiver or sender, as in the sketch above. It's pretty straightforward.
But if it's a one-time thing, I would just move the tables with expdp and impdp, since that will be a lot faster and put a lot less strain on the DB.
If it's something you need to maintain and keep updated, why not just add the DB link and use it on both sides? This will depend on network performance, though.
If this is a bit out of your depth, or you can't create DB links due to restrictions, SQL Developer has had a database copy option for a while, and you can go as far as copying individual tables, but it is very heavy on the system where it is run (http://deepak-sharma.net/2014/01/12/copy-database-objects-between-two-databases-in-oracle-using-sql-developer/).

Can we add comments or a README file to a SQL Server database/table?

These days I am importing quite a lot of databases from my server and working on them locally. In the process I am making a number of changes to the table structure, using some complex SQL statements to add table columns.
Keeping track of everything in a separate file is becoming a pain, and I am wondering whether there is a way to do this directly in SSMS so that I can store the notes along with the database. Is there any way this can be done, or do I have to resort to writing documentation outside SQL Server?
Of course, I could always create a stub table called comments and put everything there, but I was looking for a way to associate comments with a particular database or table. Any suggestions would be greatly appreciated.
SQL Server handles commenting on database objects through extended properties:
http://msdn.microsoft.com/en-us/library/ms190243.aspx
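A short sketch using the documented sp_addextendedproperty procedure (the property values and the dbo.Customers table are made-up examples):

```sql
-- Attach a note to the database itself (no level specifiers).
EXEC sys.sp_addextendedproperty
     @name  = N'Description',
     @value = N'Imported from the main server; table structure modified locally.';

-- Attach a note to a specific table.
EXEC sys.sp_addextendedproperty
     @name  = N'Description',
     @value = N'Columns added via ALTER TABLE; see change notes.',
     @level0type = N'SCHEMA', @level0name = N'dbo',
     @level1type = N'TABLE',  @level1name = N'Customers';

-- Read the notes back.
SELECT class_desc, name, value
FROM sys.extended_properties;
```

SSMS also exposes these through the Extended Properties page of a database's or table's Properties dialog.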

Continuously synchronize tables between two databases

My experience with MS SQL Server dates back some six years, so I have only basic knowledge of its workings now.
The problem I'm posed with is syncing data between two live CRMs (NopCommerce and a Rainbow Portal-based one, if anyone's curious) running on the same DB server. The data I'm interested in is spread across 7 tables in one DB and 5 in the other. The idea is to have two web applications with the same data, with updates in one instantly propagating to the other.
Each database has numerous triggers and stored procedures that are used to keep the data consistent.
I am not aware of all of SQL Server's capabilities, so I am open to suggestions as to the best and quickest way to achieve this. Is it a matter of writing more triggers? Should I create a "watcher" application? Is there some built-in mechanism for this?
Thanks!
You should look at SQL Replication, and/or using SSIS for the integration ETL, scheduling, etc.
Triggers (especially cross-DB, like the sketch below) can be messy to maintain and debug. You might also consider loading data into a separate (third) staging database before propagating the data into your other 2 databases.
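For illustration, the kind of cross-database trigger being warned about might look like this (the database, table, and column names are all hypothetical):

```sql
-- In the NopCommerce database: push each new customer into the other CRM's DB.
-- Every insert now also carries the cost and failure modes of the second write.
CREATE TRIGGER dbo.trg_Customer_SyncToRainbow
ON dbo.Customer
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO RainbowDB.dbo.Users (UserName, Email)
    SELECT i.Name, i.Email
    FROM inserted AS i;
END;
```

Multiply this by 12 tables, two directions, and the triggers that already exist on both sides, and the maintenance burden becomes clear.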
(Other alternatives include synchronous and asynchronous mirroring, which would require the entire DBs to be in sync, and log shipping, which also covers the entire DB and is one-way only, typically for redundancy. Neither is likely to be useful for your purpose, though.)
You might want to look at SQL Server Replication (http://msdn.microsoft.com/en-us/library/bb500346.aspx), in particular Merge Replication.

Cross Database Stored Procedure performance considerations

In my project I have two separate DBs on the same server. The DBs are self-sufficient except for three columns in DB "B" that need to be accessed in DB "A".
Are there performance considerations if I were to have a stored proc in A that accessed three columns from B directly?
Currently, we run a nightly job to import the needed data from table B to table A, so that the stored proc isn't going out of the scope of A.
Is that the best method?
Are cross DB stored procs within best practices?
There should be no problem, since the databases are on the same server. Problems usually occur when you do this with linked servers, where you can run into network latency.
To clarify the other posters' comments:
There is no "direct" negative performance impact when using cross-database access via stored procedures. Performance will be determined by the underlying architecture of the individual databases, i.e. the indexes available, physical storage locations, etc.
This is actually quite a common practice, and as long as you follow standard query-tuning principles you will be just fine.
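As a concrete sketch, a proc in database A can reference B with three-part names (every object name below is made up):

```sql
-- Created in database A; reads the three columns directly from B.
CREATE PROCEDURE dbo.GetOrderSummary
AS
BEGIN
    SET NOCOUNT ON;

    SELECT o.OrderID,
           d.Col1, d.Col2, d.Col3        -- the three columns that live in B
    FROM dbo.Orders AS o
    JOIN B.dbo.OrderDetails AS d         -- three-part name: database.schema.table
        ON d.OrderID = o.OrderID;
END;
```

The optimizer treats B.dbo.OrderDetails like any other table on the same instance, so the usual indexing rules apply.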
Yes, what you're currently doing - i.e. replication - is the "correct" thing to do.
When referencing data in another database, you can't use referential integrity, data constraints, and lots of the other good stuff that makes an RDBMS a good tool to use.
Accessing the other database directly ties the databases together: they MUST exist on the same server, for all time. You may also run into bandwidth issues when using linked servers and executing the proc on demand.
Replication gives you far more flexibility.
It all depends on the data you are referencing and whether or not indexes are set up to support this direct access. The best thing I can tell you is to create the query, run it, and see if the performance is good enough.
If the performance is not satisfactory, then run the query again and have your management tool generate an execution plan so you can identify the bottleneck.
We do this all the time. As long as they are on the same server, there is no problem. If you have a requirement that the data must exist in table A in database A before it can go into table B in database B, you will need to write a trigger to check, since foreign key relationships can't be set up across databases. A sketch of such a trigger follows.
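A minimal version of that checking trigger (the database, table, and column names are hypothetical):

```sql
-- In database B: reject rows whose parent is missing from database A.
CREATE TRIGGER dbo.trg_TableB_CheckParent
ON dbo.TableB
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT 1
               FROM inserted AS i
               WHERE NOT EXISTS (SELECT 1
                                 FROM DatabaseA.dbo.TableA AS a
                                 WHERE a.ID = i.ParentID))
    BEGIN
        RAISERROR('Parent row not found in DatabaseA.dbo.TableA.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;
```

This emulates the foreign key on the child side, but unlike a real constraint it won't protect you against deletes on the parent side unless you add a matching trigger over there.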