What is the difference between Replication and Mirroring in SQL server 2005?
In short, mirroring allows you to have a second server be a "hot" stand-by copy of the main server, ready to take over any moment the main server fails. So mirroring offers fail-over and reliability.
Replication, on the other hand, allows two or more servers to stay "in sync" - that means the secondary servers can answer queries and (depending on setup) actually change data (it will be merged in the sync). You can also use it for local caching, load balancing, etc.
Mirroring is a feature that creates a copy of your database at the bit level. Basically you have the same, identical database in two places. You cannot optionally leave out parts of the database. You can have only one mirror, and the mirror is always offline (it cannot be modified). Mirroring works by shipping the database log to the mirror as it is being created and applying (redoing) that log on the mirror. Mirroring is a technology for high availability and disaster recovery.
Replication is a feature that allows 'slices' of a database to be replicated between several sites. A 'slice' can be a set of database objects (i.e. tables), but it can also contain parts of a table, such as only certain rows (horizontal slicing) or only certain columns. You can have multiple replicas, and the replicas are available for querying and can even be updated. Replication works by tracking/detecting changes (either by triggers or by scanning the log) and shipping the changes, as T-SQL statements, to the subscribers (replicas). Replication is a technology for making data available at remote sites and for consolidating data to central sites. Although it is sometimes used for high availability or for disaster recovery, that is an artificial use for a problem that mirroring and log shipping address better.
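For instance, the horizontal slice is typically expressed as a filter clause on the published article when the article is added to a (transactional) publication. A minimal sketch, with hypothetical publication, table and filter names (column-level slicing would be handled separately, e.g. via sp_articlecolumn):

    -- Publish only the EU rows of a hypothetical Orders table.
    EXEC sp_addarticle
        @publication   = N'SalesPub',            -- hypothetical publication
        @article       = N'Orders_EU',
        @source_owner  = N'dbo',
        @source_object = N'Orders',
        @filter_clause = N'Region = ''EU''';     -- horizontal slice: only these rows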
There are several types and flavours of replication (merge, transactional, peer-to-peer etc.), and they differ in how they implement change tracking and update propagation. If you want to know more details, you should read the MSDN documentation on the subject.
Database mirroring is used to increase database uptime and reliability.
Replication is used primarily to distribute portions of your primary database -- the publisher -- to one or more subscriber databases. This is often done to make data available (typically for read only) on remote servers so that remote clients can access the data locally (to them) rather than directly from the publisher across a slower WAN connection. Although, as the previous posts indicate, there are more complex scenarios where updates are permitted on the subscribers. It also can have the benefit of reducing the I/O load on the publisher.
We are currently in the design phase of a product. The idea is that we have a master database (calling it that) which contains all user information - user/registration/role/licence etc. - and a slave database (calling it that) which contains the main application-related data.
Some columns from the master database will be used in the slave database; e.g. userid will be used everywhere in the slave database. There will be multiple versions of the slave in different tiers (depending on the customer's subscription), so for example some customers will have a dedicated slave database for their application.
Also, some data/tables/columns from the slave will be used in the master.
How do we manage this scenario so that we have maximum referential integrity (I know it will not be possible all the time) without using linked servers? (We don't want to use linked servers because, with an improper design, they can be abused and affect performance as a result.)
Or is this a bad idea? Should we just have a single database design (no master/slave) with different nodes, and put customers' data in different nodes depending on their subscription? The problem I see with this is that the registration/user tables are then fragmented across different database nodes; e.g. userA will be in database01, and so on.
Any idea?
I have an application that, for performance reasons, will have completely independent standalone instances in several Azure data centers. The stack of Azure IaaS and PaaS components at each data center will be exactly the same. Primarily, there will be a front end application and a database.
So let's say I have the application hosted in 4 data centers. I would like each Azure SQL database to replicate its data asynchronously to the other 3 databases, in an eventually consistent manner. Each of these databases needs to be updatable.
Does anyone know if Active Geo-Replication can handle this scenario? I know I can do this using a VM and IaaS, but would prefer to use SQL Azure.
Thanks...
Peer-to-peer transactional replication supports what you're asking for, to some extent - I'm assuming that's what you're referring to when you mention setting it up in IaaS - but it seems like it would be self-defeating if you're looking to it for a boost in write performance (and it goes against Microsoft's recommendations):
From https://msdn.microsoft.com/en-us/library/ms151196.aspx
Although peer-to-peer replication enables scaling out of read operations, write performance for the topology is like that for a single node. This is because ultimately all inserts, updates, and deletes are propagated to all nodes. Replication recognizes when a change has been applied to a given node and prevents changes from cycling through the nodes more than one time. We strongly recommend that write operations for each row be performed at only one node, for the following reasons:
If a row is modified at more than one node, it can cause a conflict or even a lost update when the row is propagated to other nodes.
There is always some latency involved when changes are replicated. For applications that require the latest change to be seen immediately, dynamically load balancing the application across multiple nodes can be problematic.
This makes me think that you'd be better off using Active Geo Replication - you get the benefit of PaaS and not having to manage your own VMs, not having to manage TR, which gets messy, and if the application is built to deal with "eventual consistency" in the UI, you might be able to get away with slight delays in the secondaries being up to date.
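For what it's worth, adding a geo-secondary is a one-statement operation. A minimal sketch, with hypothetical server and database names (run against the primary server's master database), bearing in mind that the geo-secondaries are readable but not writable:

    -- Add a readable geo-secondary in another region (names are hypothetical).
    ALTER DATABASE [AppDb]
        ADD SECONDARY ON SERVER [appdb-westeurope]
        WITH (ALLOW_CONNECTIONS = ALL);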
While working on my current development product I have set up SQL Server mirroring between the primary data center and the secondary data center. In the primary data center the SQL .mdf and .ldf files are stored on the SAN.
Now admittedly it should be very unlikely for us to lose the SAN, but suppose, for example, the connection to the SAN was lost and database integrity was lost. Would the mirroring still happen? I.e. would SQL now mirror the broken database so that both copies are equally broken?
From googling it's not clear when mirroring will and will not happen, so I was hoping the community might be able to share some of their experiences.
I also have backup schedules set up which would be a final fail-safe, but realistically I would hope that the mirrored database would be our quickest way to bring everything back online.
In this scenario there is currently no witness server in the mirroring setup, although given the benefits of automatic failover I am thinking of adding one.
Thanks
As far as mirroring corruption between PRIMARY and SECONDARY goes: unfortunately, it depends. If the corruption is immediate and physical, then not normally -- the corruption is typically picked up by checks done at the end of the transaction and rolled back.
However, a database can exist in a corrupted state for some time before anything realises it is corrupted. If the underlying data pages are not touched, the engine never has cause to check them. So it is possible that underlying storage issues may mean that either database can become corrupted, and you won't know until you attempt to access the affected pages. Traditionally, this would be a write operation, since your client connection will only read from the current active database (and not the partner).
This is why it is important to perform regular maintenance checks on your databases (e.g. DBCC CHECKDB). This becomes harder in a mirrored environment because only PRIMARY can typically be checked, so you really have to induce a manual failover to test your SECONDARY (unless you are running Enterprise, where you might be able to snapshot the mirror and check that -- I've not tried).
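If you do want to try the snapshot route on Enterprise, a rough, untested sketch (database, logical file and path names are hypothetical; run on the mirror server) would be:

    -- Snapshot the mirror, check the snapshot, then drop it.
    CREATE DATABASE MyDb_CheckSnap
        ON (NAME = MyDb_Data, FILENAME = N'D:\Snapshots\MyDb_CheckSnap.ss')
        AS SNAPSHOT OF MyDb;

    DBCC CHECKDB (N'MyDb_CheckSnap') WITH NO_INFOMSGS;

    DROP DATABASE MyDb_CheckSnap;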
Starting with SQL Server 2008, the engine will attempt something called Automatic Page Repair, where it tries to automatically recover corrupted pages it encounters during the mirroring process. You should probably keep an eye on sys.dm_db_mirroring_auto_page_repair if this is something you are worried about.
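Keeping an eye on it can be as simple as something like this (the repair history is exposed per page):

    -- Any rows here mean the engine has had to repair pages via the mirroring partner.
    SELECT  DB_NAME(database_id) AS database_name,
            file_id, page_id, error_type, page_status, modification_time
    FROM    sys.dm_db_mirroring_auto_page_repair;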
If it is logical corruption, where the wrong data is entered, this will push across to SECONDARY without any means of stopping it.
However, I should point out that your approach might leave you with other issues. Mirroring isn't backup. And mirroring isn't great over WAN links.
In synchronous mode, it receives the client request, then writes to PRIMARY, then writes to SECONDARY, gets the OK back from SECONDARY and then sends an OK back to the client. If it can't write to SECONDARY, or doesn't get the response from SECONDARY, it rolls back the operation on PRIMARY (even though it was successful) and sends a failure back to the client.
A failing WAN link (even temporarily) can cause PRIMARY to choose not to accept connections (because it can't see SECONDARY). A failover mid-connection can leave you in an invalid logical data state, so make sure your transactions are sound.
With a WITNESS server, this can be a little more robust -- placing the witness server alongside PRIMARY in the same LAN allows WITNESS and PRIMARY to form quorum and agree that PRIMARY is still working, even though it can't see SECONDARY (thus not locking you out of a perfectly functioning database).
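As a rough sketch, the witnessed, synchronous setup described above boils down to something like this (server names and ports are hypothetical, and the endpoint/security plumbing is omitted):

    -- On the mirror server, point at the principal first:
    ALTER DATABASE MyDb SET PARTNER = N'TCP://principal.corp.local:5022';
    -- Then on the principal:
    ALTER DATABASE MyDb SET PARTNER = N'TCP://mirror.dr.corp.local:5022';
    ALTER DATABASE MyDb SET WITNESS = N'TCP://witness.corp.local:5022';
    ALTER DATABASE MyDb SET SAFETY FULL;  -- synchronous mode; enables automatic failover with the witness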
Instead, over my slower site-to-site links, I prefer to use log shipping between PRIMARY and SECONDARY. With a bit of effort I can control the transport between sites so as to rate-limit over the WAN link, and it is possible to keep the log-shipped SECONDARY in single-user standby mode. This allows me to run the standard DBCC CHECKDB commands against SECONDARY, as well as query SECONDARY for data reconciliation purposes. I can also put a delay on the restoration, so I have some leeway to fail over before a major logical data error reaches the SECONDARY (although that really depends on the RDO).
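The standby piece is just the WITH STANDBY option on the log restores; a minimal sketch with hypothetical file paths:

    -- Restore a shipped log on SECONDARY, leaving the database readable (standby) between restores.
    RESTORE LOG MyDb
        FROM DISK = N'D:\LogShip\MyDb_20230101_0100.trn'
        WITH STANDBY = N'D:\LogShip\MyDb_undo.dat';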
If I have a high-availability requirement, I might put in mirroring at the main site only -- i.e. two servers + witness. The relatively quick few-second automatic failover time provided by the witnessed environment has saved me a few late-night calls in the past.
Hope this helps.
J.
We have two systems where system A sends data to system B. It is a requirement that each system can run independently of the other and neither will blow up if the other is down. The question is what is the best way for system A to communicate with system B while meeting the decoupling requirement.
System B currently has a process that polls data in a db table and processes any new rows that have been inserted.
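For illustration, that kind of polling step might look something like this, with a hypothetical Inbox table and Processed flag standing in for whatever the real table looks like:

    -- Claim a batch of unprocessed rows and hand them to the processor in one statement.
    UPDATE TOP (100) dbo.Inbox
    SET    Processed = 1
    OUTPUT inserted.MessageId, inserted.Payload
    WHERE  Processed = 0;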
One proposed design is for system A to just insert data into system B's db table and have system B process the new rows via its existing process. The question is: does this solution meet the requirement of decoupling the two systems? Is the database considered part of system B, so that it might become unavailable and cause system A to blow up?
Another solution is for system A to put data into an MQ queue and have a process that would read from MQ and then insert into system B's database. But is this just extra overhead? Ultimately is an MQ queue any more fault tolerant than a db table?
Generally speaking, database sharing is a close coupling and not to be preferred except possibly for speed purposes. Not only for availability purposes, but also because system A and B will be changed and upgraded at several points in their future, and should have minimal dependencies on each other - message passing is an obvious dependency, whereas shared databases tend to bite you (or your inheritors) on the posterior when least expected. If you go the database sharing route, at least make the sharing interface explicit with dedicated tables or views.
There are four common levels of integration:
Database sharing
File sharing
Remote procedure call
Message passing
which can be applied and combined in various situations, with different availability and maintainability. You have an excellent overview at the enterprise integration patterns site.
As with any central integration infrastructure, MQ should be hosted in an environment with great availability, full failover &c. There are other queue solutions which allow you to distribute the queue coordination.
Use Queues for communication. Do not "pass" data from System A to System B through the database. You're using the database as a giant, expensive, complex message queue.
Use a message queue as a message queue.
This is not "Extra" overhead. This is the best way to decouple systems. It's called Service Oriented Architecture (SOA) and using messages is absolutely central to the design.
An MQ queue is far simpler than a DB table.
Don't compare "fault tolerance" because an RDBMS uses huge (almost unimaginable) overheads to achieve a reasonable level of assurance that your transaction finished properly. Locking. Buffering. Write Queues. Storage Management. Etc. Etc.
A reliable message queue implementation uses some backing store to keep the queue's state. The overhead is much, much less than an RDBMS. The performance is much better. And it's much, much simpler to interact with.
In SQL Server I would do this through an SSIS package or a job (depending on the number of records and the complexity of what I was moving). Other databases also have ETL solutions. I like the ETL solution because I can keep logs of what was changed and what errors were processed, and I can send records which for some reason won't go to the other system (data structures are rarely the same between two databases) to a holding table without killing the rest of the process. I can also make changes to the data as it flows to adjust for database differences (things like lookup table values - say the completed status in db1 is 5 and it is 7 in db2 - or say db2 has a required field that db1 does not, and you have to add a default value to the field if it is null). If one or the other server is down, the job running the SSIS package will fail and neither system will be affected, so it keeps the databases decoupled in a way that using triggers or replication would not.
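The kind of adjustment described above might, for the sake of illustration, look roughly like this as plain T-SQL (hypothetical table and column names, and assuming both databases are reachable from the server running the load):

    -- Map a lookup value between the two databases and default a column that only db2 requires.
    INSERT INTO db2.dbo.Orders (OrderId, StatusId, Region)
    SELECT  s.OrderId,
            CASE s.StatusId WHEN 5 THEN 7 ELSE s.StatusId END,  -- 'completed' is 5 in db1, 7 in db2
            ISNULL(s.Region, N'UNKNOWN')                        -- db2 requires Region, db1 does not
    FROM    dbo.Staging_Orders AS s;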
Our database architecture consists of two Sql Server 2005 servers each with an instance of the same database structure: one for all reads, and one for all writes. We use transactional replication to keep the read database up-to-date.
The two servers are very high-spec indeed (the write server has 32GB of RAM), and are connected via a fibre network.
When deciding upon this architecture we were led to believe that the latency for data to be replicated to the read server would be on the order of a few milliseconds (depending on load, obviously). In practice we are seeing latency of around 2-5 seconds even in the simplest of cases, which is unsatisfactory. By the simplest case, I mean updating a single value in a single row in a single table on the write db and seeing how long it takes to observe the new value in the read database.
What factors should we be looking at to achieve latency below 1 second? Is this even achievable?
Alternatively, is there a different mode of replication we should consider? What is the best practice for the locations of the data and log files?
Edit
Thanks to all for the advice and insight - I believe that the latency periods we are experiencing are normal; we were misled by our db hosting company as to what latency times to expect!
We're using the technique described near the bottom of this MSDN article (under the heading "scaling databases"), and we'd failed to deal properly with this warning:
The consequence of creating such specialized databases is latency: a write is now going to take time to be distributed to the reader databases. But if you can deal with the latency, the scaling potential is huge.
We're now looking at implementing a change to our caching mechanism that enforces reads from the write database when an item of data is considered to be "volatile".
No. It's highly unlikely you could achieve sub-1s latency times with SQL Server transactional replication even with fast hardware.
If you can get 1 - 5 seconds latency then you are doing well.
From here:
Using transactional replication, it is possible for a Subscriber to be a few seconds behind the Publisher. With a latency of only a few seconds, the Subscriber can easily be used as a reporting server, offloading expensive user queries and reporting from the Publisher to the Subscriber.
In the following scenario (using the Customer table shown later in this section) the Subscriber was only four seconds behind the Publisher. Even more impressive, 60 percent of the time it had a latency of two seconds or less. The time is measured from when the record was inserted or updated at the Publisher until it was actually written to the subscribing database.
I would say it's definitely possible.
I would look at:
Your network
Run ping commands between the two servers and see if there are any issues
If the servers are next to each other you should have < 1 ms.
Bottlenecks on the server
This could be network traffic (volume)
e.g. network cards not being configured for 1 Gbit/sec
Anti-virus or other things
Do some analysis on some queries and see if you can identify indexes or locking which might be a problem
See if any of the selects on the read database might be blocking the writes.
Add with (nolock), and see if this makes a difference on one or two queries you're analyzing.
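For example (with a hypothetical table), just to see whether read locking is in the picture:

    -- Does skipping shared locks on the read side change the behaviour?
    SELECT OrderId, Status
    FROM   dbo.Orders WITH (NOLOCK)
    WHERE  CustomerId = 42;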
Essentially you have a complicated system which you have a problem with; you need to determine which component is the problem and fix it.
Transactional replication is probably best if the reports/selects you need to run need to be up to date. If they don't, you could look at log shipping, although that would add some downtime with each import.
For data/log files, make sure they're on separate drives so that performance is maximized.
Something to remember about transactional replication is that a single update now requires several operations to happen for that change to occur.
First you update the source table.
Next the log reader sees the change and writes the change to the distribution database.
Next the distribution agent sees the new entry in the distribution database and reads that change, then runs the correct stored procedure on the subscriber to update the row.
If you monitor the statement run times on the two servers you'll probably see that they are running in just a few milliseconds. However it is the lag time while waiting for the log reader and distribution agent to see that they need to do something which is going to kill you.
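If you want to see that lag directly rather than timing the individual statements, transactional replication's tracer tokens are handy; a sketch with a hypothetical publication name (run at the Publisher):

    -- Drop a tracer token into the replication stream, then read back its latency history.
    EXEC sp_posttracertoken @publication = N'MyPublication';

    -- Later: list the tokens to get an id, then look at the history for one of them.
    EXEC sp_helptracertokens @publication = N'MyPublication';
    EXEC sp_helptracertokenhistory @publication = N'MyPublication', @tracer_id = 1;  -- use an id from the previous call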
If you truly need sub-second processing time then you will want to look into writing your own processing engine to handle data moving from one server to another. I would recommend using SQL Service Broker to handle this, as that way everything is native to SQL Server and no third-party code has to be written.
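For a feel of what that involves, here is a very reduced Service Broker sketch (all names are hypothetical, and a real cross-server setup also needs routes and security configured):

    -- One-off setup: message type, contract, queue and service.
    CREATE MESSAGE TYPE RowChangedMsg VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT RowChangedContract (RowChangedMsg SENT BY INITIATOR);
    CREATE QUEUE dbo.RowChangeQueue;
    CREATE SERVICE RowChangeService ON QUEUE dbo.RowChangeQueue (RowChangedContract);

    -- Sending a change notification (here to the same service, just to keep the sketch self-contained):
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE RowChangeService
        TO SERVICE 'RowChangeService'
        ON CONTRACT RowChangedContract
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @h MESSAGE TYPE RowChangedMsg (N'<row id="42"/>');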