If the publisher side of a transactional replication setup goes down, along with its backups, how would I go about making the subscriber database the new publisher with minimal impact and downtime?
As I understand it, at this point the subscriber database comes loaded with the triggers and views that are needed for replication.
TIA
Gopal
Short answer is: you can't easily.
You need expert help (especially since you have no backups), not just the (albeit good) answers you might receive here.
Related
Can someone give me a clear idea of which technique/method is more reliable, less memory-consuming, and faster for replicating data from one database to another in SQL Server 2012, and why? We are in the process of developing a live GPS-based tracking application, and I am unsure which method to proceed with:
Trigger-based replication (live sync)
(OR)
Transactional replication
Thanks in advance ☺
I would recommend using standardised solutions whenever possible. Within the choice given to you, transactional replication should be an obvious favourite, because:
It doesn't require any coding and can be deployed using standard tools. This makes it much faster to deploy and maintain: any proper DBA can do it, some of them even blindfolded.
Actual data transfer is done by replication agents which are separate applications external to the SQL Server process and client connections. Any network issues within the publisher-distributor-subscriber(s) chain will lead to delays in copying the data, but they will not affect the performance of the publisher database itself.
With triggers, you have neither of these advantages: you will have to add a lot of code, and sluggish network will make data-changing queries slower, potentially leading to timeouts.
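To illustrate why, here is a minimal sketch of the trigger-based approach, assuming a hypothetical linked server named REMOTESRV and a dbo.GpsPositions table that exists on both sides. The remote insert runs inside the caller's transaction (and, over a linked server, typically as a distributed transaction requiring MSDTC), so every network hiccup is paid for by the user's original INSERT:

    CREATE TRIGGER trg_GpsPositions_Sync
    ON dbo.GpsPositions
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Runs synchronously inside the original transaction: a slow or
        -- unreachable REMOTESRV makes the user's INSERT slow, or fail.
        INSERT INTO REMOTESRV.TrackingDb.dbo.GpsPositions
            (DeviceId, Latitude, Longitude, RecordedAt)
        SELECT DeviceId, Latitude, Longitude, RecordedAt
        FROM inserted;
    END;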
Of course, there are many more ways to move the data between the databases in SQL Server, such as (in no particular order):
AlwaysOn Availability Groups (Database mirroring);
Log shipping;
CDC (Change Data Capture);
Service Broker.
However, given your needs, transactional replication still looks like your best bet overall.
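For a flavour of how little coding is involved, here is a minimal sketch of creating a transactional publication with the standard stored procedures, assuming a hypothetical database TrackingDb, a table dbo.GpsPositions, and a distributor that is already configured:

    USE TrackingDb;
    -- Enable the database for transactional publishing.
    EXEC sp_replicationdboption
        @dbname = N'TrackingDb',
        @optname = N'publish',
        @value = N'true';

    -- Create the publication.
    EXEC sp_addpublication
        @publication = N'TrackingPub',
        @status = N'active';

    -- Add a table as an article.
    EXEC sp_addarticle
        @publication = N'TrackingPub',
        @article = N'GpsPositions',
        @source_object = N'GpsPositions',
        @source_owner = N'dbo';

In practice you would do this through the New Publication wizard in Management Studio, which generates equivalent calls.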
Please advise what suits my problem better. I have a high-load web app hosted on the same server where SQL Server is hosted. I also have SQL Server Reporting Services running on the same server, generating user reports.
So my server is basically bottlenecked by disk read/write speed. I'm going to get another server and install another SQL Server instance there to host SSRS. My criterion is to get data that is as fresh as possible.
I've looked at a couple of solutions. Currently I make a backup via jobs, copy it to the second server, and restore it there, also via jobs. But that's not the best solution.
All replication mechanisms (transactional, merge, snapshot) affect the publisher database by locking its tables, which is unacceptable for me.
So I wonder: is there any possibility of creating a replica with read-only access that would be synced periodically without affecting the main DB? I would put all report load on that replica and have my primary DB used only by the web app.
What solution might suit my problem? As I'm not a DBA, I'd appreciate a direction to start investigating. Thanks.
Transactional Replication is typically used to off-load reporting to another server/instance and can be near real-time in a best case scenario. The benefit of Transactional Replication is you can place different indexes on the subscriber(s) to optimize reporting. You can also choose to replicate only a portion of the data if only a subset is needed for reporting.
The only time locking occurs with Transactional Replication is when you generate a snapshot. With concurrent snapshot processing, which is the default for Transactional Replication, the shared locks are only held for a short period of time, so users are able to continue working uninterrupted. Either way, this shouldn't be an issue since you'll likely be generating the snapshot during a period of low user activity anyway.
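If you want to be explicit about it, the snapshot behaviour is controlled by the @sync_method argument when the publication is created; 'concurrent' is the default for transactional publications. A sketch, assuming a hypothetical publication name:

    EXEC sp_addpublication
        @publication = N'ReportingPub',
        @sync_method = N'concurrent',  -- don't hold shared locks for the
                                       -- duration of snapshot generation
        @status = N'active';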
I am new to SQL Server. I have one SQL Server and a second SQL Server (a backup server). I want to copy data from the first server to the second, and all my transactions (create, update, delete) must be reflected in the second server immediately, in real time. I have very large data in my table, assuming millions of rows. How can I achieve this? I don't have the privilege to use third-party tools.
You specify that you need data to be reflected on the second server immediately, in real time.
This requirement will add latency to every transaction that occurs on your primary server, since every action will need to be committed on the secondary server before it can be committed on your primary server.
In addition to the latency, this requirement will also likely reduce your availability. If the secondary server is no longer able to successfully commit transactions from the primary, then the primary can no longer commit transactions either, and your system is down.
For more information about these constraints, refer to the extensive discussion around this topic (the CAP theorem).
If you're OK with these restrictions, you might consider using Synchronous Database Mirroring (High-Safety Mode).
If you're not OK with these restrictions, please adjust / clarify your requirements.
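For reference, here is a minimal sketch of establishing high-safety mirroring, assuming a hypothetical database SalesDb, mirroring endpoints already created on port 5022 on both instances, and a full backup restored WITH NORECOVERY on the mirror:

    -- On the mirror server:
    ALTER DATABASE SalesDb
        SET PARTNER = 'TCP://primary.example.com:5022';

    -- On the principal server:
    ALTER DATABASE SalesDb
        SET PARTNER = 'TCP://mirror.example.com:5022';

    -- High-safety (synchronous) mode: commits wait for the mirror.
    ALTER DATABASE SalesDb
        SET PARTNER SAFETY FULL;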
You may try to get some help from these two references:
Replication in MS SQL Server
SQL Server Replication
On a side note, an important thing you need to plan before doing replication is the replication model you will use.
Here is a list of replication models you can choose from:
Peer-to-peer model
Central publisher model
Central publisher with remote distributor model
Central subscriber model
Publishing subscriber model
Each one of the above has its own advantages. Check them as per your need.
Also, to add to it, there are three types of replication:
Transactional replication
Merge replication
Snapshot replication
Check out this tutorial on SQL Server 2008 replication:
Tutorial: Replicating Data Between Continuously Connected Servers
Since you said "... reflected in the 2nd server immediately in real time", that means you want to use transactional replication. You may still choose to replicate only certain tables.
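A sketch of what pointing the second server at such a publication looks like, assuming a hypothetical publication SalesPub already set up on the primary and a subscriber instance named BACKUPSRV:

    -- Run at the publisher, in the published database:
    EXEC sp_addsubscription
        @publication = N'SalesPub',
        @subscriber = N'BACKUPSRV',
        @destination_db = N'SalesDb',
        @subscription_type = N'push';

    -- Create the distribution agent job for the push subscription:
    EXEC sp_addpushsubscription_agent
        @publication = N'SalesPub',
        @subscriber = N'BACKUPSRV',
        @subscriber_db = N'SalesDb';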
Does the "millions of rows" represent some kind of history? If so, consider the risks that Michael mentioned... and whether you need all the history in the 2nd server, or just current / recent activity. If it's just current/recent, it may be safer and less of a system drain, to write something in T-SQL or SSIS, for a job to execute that loops, reads, and copies the data.
That could be done with linked servers and triggers... but the risks Michael mentions, about preventing the primary server from committing transactions, are as much or more a concern with triggers... that you can avoid with your own T-SQL/SSIS + job.
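As a sketch of that job-based approach, assuming a hypothetical linked server BACKUPSRV, a source table dbo.Orders with an ever-increasing OrderID, and a one-row watermark table on the primary; the whole batch would be scheduled as a SQL Agent job:

    DECLARE @last bigint, @max bigint;

    -- Where did the previous run stop?
    SELECT @last = LastCopiedId
    FROM dbo.CopyWatermark
    WHERE TableName = 'Orders';

    SELECT @max = MAX(OrderID) FROM dbo.Orders;

    -- Copy only the new rows across the linked server.
    INSERT INTO BACKUPSRV.BackupDb.dbo.Orders (OrderID, CustomerID, Amount)
    SELECT OrderID, CustomerID, Amount
    FROM dbo.Orders
    WHERE OrderID > @last AND OrderID <= @max;

    UPDATE dbo.CopyWatermark
    SET LastCopiedId = @max
    WHERE TableName = 'Orders';

Note this only catches inserts; updates and deletes would need a modified-date column or some other mechanism.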
Hope that helps...
I wanted to see if others are using SQL Server 2008 Change Data Capture and if so how do you like it? We currently use APEXSQL Audit Triggers for our auditing purposes which seems to work pretty well, but means we have to add triggers to all of our "audited" tables.
Some of the articles I have read have pointed out things like having to create a new capture table when you change a schema then drop the old one, but as far as the general maintenance is concerned it seems to be fairly straight forward.
Any comments/input are greatly appreciated.
--S
How busy is the system, and what is the end goal for the auditing: tracking changes over a short period of time, or auditing changes for the long term?
One of the biggest problems I have with CDC is that it utilizes the log reader and SQL Agent jobs to capture changes, so a busy system can get behind to the point that it will never catch up unless you turn off CDC, leading at worst to a full transaction log, or at best to delayed truncation causing the log to grow in size.
If your intent is to do real auditing, CDC is not built for that; it's more for synchronizing changes than for long-term auditing, unless you set up jobs to pull the data over into audit tables as you would with a triggered solution.
You don't mention the new Server Audit Specifications here, which would be another option to look at, but keep in mind that Server Audit Specifications are used for auditing by inclusion. This is one of the reasons that I still use the old tried-and-true triggers-and-audit-tables method in my SQL Server 2008 Enterprise databases; it's still the easiest solution until the newer features get past being v1.0 features in the product.
If you have a working auditing solution, I wouldn't even try it.
Another problem I noted when looking into this was that you can't add things like the user who made the change to the capture tables (or at least I couldn't figure out how), so your own audit tables may be more flexible than CDC allows.
Finally, CDC capture tables expire after 3 days by default (I think you can change the retention, but you still have to set a specific time frame). We keep our audit records longer than that, so you still need to copy them out of the CDC tables into an audit table.
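For anyone wanting to try it anyway, here is a minimal sketch of enabling CDC and raising the cleanup retention, using the documented procedures and assuming a hypothetical dbo.Customers table:

    -- Enable CDC at the database level, then for one table.
    EXEC sys.sp_cdc_enable_db;

    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name = N'Customers',
        @role_name = NULL;

    -- Default cleanup retention is 4320 minutes (3 days); raise to 7 days.
    EXEC sys.sp_cdc_change_job
        @job_type = N'cleanup',
        @retention = 10080;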
I need to set up this scenario:
A SQL Server 2005 database will create a transactional replication subscription from another database to populate a set of lookup tables. These lookup tables will then be published as a merge replication publication to the client's SQL Server Mobile.
I remember seeing a similar scenario described in SQL Server Books Online somewhere, but I can no longer find the link. I hope someone can help me find it, or otherwise point me to other similar links.
Okay, I managed to get the answers I needed at the MSDN SQL Server Replication forum.
The article I was looking for is called: Republishing Data.
Apparently, it is located within "Advanced Replication Features and Internals" of the "Configuring and Maintaining Replications" section. It's a little non-obvious, so I spent most of my time looking in the "Replicating Data Between a Server and Clients" section instead. Good to know, as there seem to be a number of other special scenarios worth looking at in there.
I don't see the point of using transactional replication to populate lookup tables. Such tables are not meant to be updated from the client side, so why do you want to combine transactional + merge replication when the data isn't modified by the subscribers?
Maybe the original scenario is not clear, so let me clarify.
The database where the original lookup tables are located is either remote, behind a bad network connection, or operating under heavy load. This approach was suggested so that the lookup tables are replicated to another database, from which all merge replication with the clients will be performed.
Of course, it may not be the most appropriate approach to our problem, so if anyone has a better idea, we'd like to hear it.
Still, the main reason for this question is to find an article I previously found (but stupidly did not bookmark) which described our scenario quite well. Any possible leads to this article (title, author, similar topics, etc.) are definitely appreciated.