In a passive-replication-based distributed system, if the primary server fails, one of the backups is promoted to primary. However, suppose the original primary server later recovers: how do we switch the primary role back to it from the currently promoted backup?
My thinking was: if the failed primary server recovers, it must be incorporated into the system as a secondary and brought up to date with the most recent state at that point in time. To restore it as the primary, it can be promoted whenever the current primary (which was originally a backup) fails; otherwise, if required, the current primary can be blocked for a while, the original primary promoted back to primary, and the blocked server reintroduced as a backup.
I could not find an answer to this question elsewhere and this is what I feel. Please suggest any better alternatives.
It depends on what system you're looking at. Usually there's no immediate need to demote the promoted backup when the original primary server recovers; if there is, you'd need to synchronize the two and then promote the original primary again.
Distributed synchronization (or consensus) is a hard problem. There's a lot of literature out there and I recommend that you read up. An example of a passively replicated system (with Leaders/Followers/Candidates) is Raft, which you could start with. A good online visualization can be found here, and the paper is here.
ZAB and Paxos are worth a read as well!
I have two nodes in PARTITIONED mode and I use a Continuous Query. When I put a value into the cache, I see that the RemoteFilter runs twice (on the primary node and on the backup node). How can I check inside the filter whether the current node is the primary or a backup?
Well, there are several methods on the Affinity API to help you detect whether a node is the primary or a backup for a given key. However, if the topology changes while you are checking the Affinity API, you may end up on a primary node that has just become a backup, or vice versa.
There is a way to check this deterministically, which is described in the IGNITE-3878 ticket. This should come in the next release.
This morning I ran into an issue where the primary node in my replication group was changed. I still need to investigate why this happened.
The upshot was lots of failures in a Rails application, because it was trying to write to what had been the primary node but had become a read replica.
Is there a URL I can use that basically says "write to the primary node of this replication group, I don't care which node that is"?
Right now I am using something similar to:
name-002.aaaaa.0001.use1.cache.amazonaws.com
My "fix" for now was changing what was name-001 to name-002 but until I know the reason why the primary node was changed I have to assume this will break again.
I think I have answered my own question.
In the admin section for the replication group there is a Primary Endpoint, which seems to do exactly that job: it always resolves to whichever node is currently the primary.
The Premium service tier of Azure SQL Database provides active geo-replication, with which up to 4 readable secondaries can be created. I want to know whether the communication between the primary and secondary databases is secure, and whether there is any chance of the data being intercepted in transit.
For more information: Azure SQL Database Inside#High Availability
First, a transaction is not considered to be committed unless the primary replica and at least one secondary replica can confirm that the transaction log records were successfully written to disk. Second, if both a primary replica and a secondary replica must report success, small failures that might not prevent a transaction from committing but that might point to a growing problem can be detected.
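If you want to see the state of that primary-to-secondary link for yourself, Azure SQL Database exposes a DMV for it. A minimal check, run against the primary (assuming a geo-replication link already exists; the exact column set can vary with the service version), might look like this:

-- One row per geo-replication link for the current database.
SELECT  partner_server,
        partner_database,
        role_desc,                -- PRIMARY or SECONDARY from this side's view
        replication_state_desc,   -- e.g. CATCH_UP once the secondary is in sync
        replication_lag_sec,      -- seconds the secondary trails the primary
        last_replication
FROM    sys.dm_geo_replication_link_status;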
I have been wondering about the uniqueness of GUIDs across SQL Server instances.
I have one central database server and hundreds of client databases (all SQL Server). I have bi-directional merge replication set up to sync the data between the client and master servers. The sync process runs 2-3 times a day.
For each of the tables to be synced I am using a GUID as the primary key, and each table gets new records added locally, with new GUIDs generated locally.
When GUIDs are being created on each client machine as well as on the master DB server, how is it guaranteed that the generated GUIDs are unique across all client and master databases?
How would it keep track of the GUIDs generated on the other client/server databases, so that a GUID is never repeated?
GUIDs are unique (for your purposes)
There are endless debates on the internet - I like this one
I think GUIDs are not truly guaranteed to be unique. Their uniqueness comes from the fact that it is extremely unlikely to generate the same GUID twice at random, but that's all.
For your purpose, though, that should be fine: they will be unique across a distributed system with extremely high probability.
You will have to do more research, but if I remember right, some GUID versions are based on the MAC address and a timestamp, while others are purely random.
http://www.sqlteam.com/article/uniqueidentifier-vs-identity
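To make that concrete, here is a minimal sketch (table and column names are made up) of the kind of table the question describes. Each client just calls NEWID(), which produces a random uniqueidentifier; no server needs to know what the others have generated, the collision probability is simply small enough to ignore:

-- Hypothetical synced table: every client generates its own keys with NEWID().
-- NEWID() returns a random uniqueidentifier, so no coordination between the
-- client databases and the master is needed for uniqueness.
CREATE TABLE dbo.Orders
(
    OrderID   uniqueidentifier NOT NULL ROWGUIDCOL DEFAULT NEWID(),
    OrderDate datetime2        NOT NULL DEFAULT SYSUTCDATETIME(),
    Amount    decimal(10, 2)   NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED (OrderID)
);

Merge replication will in any case want a uniqueidentifier column marked ROWGUIDCOL on each published table (it adds one if it is missing), which is why it is marked here.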
I know some MCMs who have come across a unique key violation on a GUID.
How can this happen? Well, in the virtual world you have virtual network adapters.
If you copy a virtual machine from one host to another, you can end up with two machines that have the same adapter and the same MAC address.
Now, if both images are running at the same time, it is possible to get non-unique GUIDs.
However, the condition is rare. You can always add another field to the key to make it unique.
There is a whole debate about whether or not to use a GUID as a clustered PK. Remember, every other (nonclustered) index carries a copy of the clustering key in its leaf level, which is 16 bytes per record per index.
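As a rough illustration of both points (the numbers and names are invented): a table of 10 million rows with three nonclustered indexes carries 10,000,000 x 3 x 16 bytes, roughly 480 MB, just in copied GUID keys; and the "add another field to the key" workaround for the cloned-VM case could look something like this:

-- Hypothetical table: widening the key so that even a repeated GUID from a
-- cloned VM (same MAC address) cannot violate the primary key on its own.
CREATE TABLE dbo.Events
(
    EventID    uniqueidentifier NOT NULL DEFAULT NEWID(),
    SourceHost sysname          NOT NULL,   -- the extra field added to the key
    EventTime  datetime2        NOT NULL DEFAULT SYSUTCDATETIME(),
    CONSTRAINT PK_Events PRIMARY KEY (EventID, SourceHost)
);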
I hope this helps.
John
You don't need to do anything special to ensure a GUID/Uniqueidentifier is globally unique. That basic guarantee is the motivating requirement for the GUID.
While working on my current development product I have set up SQL Server mirroring between the primary data center and the secondary data center. In the primary data center the SQL .mdf and .ldf files are stored on the SAN.
Admittedly it should be very unlikely for us to lose the SAN, but suppose, for example, that the connection to the SAN was lost and database integrity was compromised. Would the mirroring still happen? I.e. would SQL Server now mirror the broken database so that both copies end up equally broken?
From googling it's not clear when mirroring will and will not happen, so I was hoping the community might be able to share some of their experiences.
I also have backup schedules set up, which would be a final failsafe, but realistically I would hope that the mirrored database would be our quickest way to bring everything back online.
In this scenario there is at present no witness server in the mirroring setup, although given the benefits of automatic failover I am thinking of adding one.
Thanks
As far as mirroring corruption between PRIMARY and SECONDARY goes: unfortunately, it depends. If the corruption is immediate and physical, then it is not normally mirrored -- that kind of corruption is typically picked up by checks done at the end of the transaction, and the transaction is rolled back.
However, a database can exist in a corrupted state for some time before anything realises it is corrupted. If the underlying data pages are not touched, the engine never has cause to check them. So it is possible that underlying storage issues may mean that either database becomes corrupted and you won't know until you attempt to access the affected pages. Traditionally, that would be a write operation, since your client connection will only read from the current active database (and not the partner).
This is why it is important to perform regular maintenance checks on your databases (e.g. DBCC CHECKDB). This becomes harder in a mirrored environment because only PRIMARY can typically be checked, so you really have to induce a manual failover to test your SECONDARY (unless you are running Enterprise, where you might be able to snapshot the mirror and check that -- I've not tried).
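For the Enterprise-edition route mentioned above, the idea (all names and paths below are placeholders, and as I said I've not tried it myself) is to create a database snapshot of the mirror on the SECONDARY instance and run the check against the snapshot instead of failing over:

-- Run on the SECONDARY (mirror) instance. The logical file name must match
-- the mirrored database's data file; 'MyDb' and the paths are placeholders.
CREATE DATABASE MyDb_Check
ON ( NAME = MyDb_Data, FILENAME = N'D:\Snapshots\MyDb_Check.ss' )
AS SNAPSHOT OF MyDb;

DBCC CHECKDB (MyDb_Check) WITH NO_INFOMSGS, ALL_ERRORMSGS;

DROP DATABASE MyDb_Check;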
Starting with SQL Server 2008, the engine will attempt something called Automatic Page Repair, where it tries to automatically recover corrupted pages it encounters during the mirroring process. You should probably keep an eye on sys.dm_db_mirroring_auto_page_repair if this is something you are worried about.
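If you do want to keep an eye on it, a simple query against that DMV is enough:

-- Rows are logged for automatic page-repair attempts on mirrored databases.
SELECT  DB_NAME(database_id) AS database_name,
        file_id,
        page_id,
        error_type,
        page_status,
        modification_time
FROM    sys.dm_db_mirroring_auto_page_repair
ORDER BY modification_time DESC;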
If it is logical corruption, where the wrong data is entered, this will push across to SECONDARY without any means of stopping it.
However, I should point out that your approach might leave you with other issues. Mirroring isn't backup. And mirroring isn't great over WAN links.
In synchronous (high-safety) mode, the principal receives the client request, writes the transaction's log records on PRIMARY and sends them to SECONDARY, and only sends the OK back to the client once SECONDARY confirms it has written those log records to disk. If it can't reach SECONDARY, or doesn't get a response, commits stall until the mirroring session is declared disconnected, after which mirroring is suspended and PRIMARY runs exposed (unmirrored) -- so every WAN hiccup shows up directly in your commit latency.
A failing WAN link (even temporarily) can, in some configurations, cause PRIMARY to stop serving the database because it can't see SECONDARY and can't form quorum. A failover mid-connection can leave you in an invalid logical data state, so make sure your transactions are sound.
With a WITNESS server, this can be a little more robust -- placing the witness server alongside PRIMARY in the same LAN allows WITNESS and PRIMARY to form quorum and agree that PRIMARY is still working, even though it can't see SECONDARY (thus not locking you out of a perfectly functioning database).
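For reference, those settings are plain ALTER DATABASE statements run on the principal once the mirroring endpoints are in place (the database name and witness address below are placeholders):

-- Placeholders: 'MyDb' and the witness endpoint address.
ALTER DATABASE MyDb SET PARTNER SAFETY FULL;   -- synchronous (high-safety) mode
ALTER DATABASE MyDb SET WITNESS = 'TCP://witness.example.local:5022';  -- enables automatic failover
-- And to remove the witness again:
-- ALTER DATABASE MyDb SET WITNESS OFF;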
Instead, over my slower site-to-site links, I prefer to use log shipping between PRIMARY and SECONDARY. With a bit of effort I can control the transport between sites so as to rate-limit it over the WAN link, and it is possible to keep the log-shipped SECONDARY in single-user standby mode. This allows me to run the standard DBCC CHECKDB commands against SECONDARY, and also to query SECONDARY for data reconciliation purposes. I can also put a delay on the restores, which gives me some leeway to stop a major logical data error from reaching SECONDARY (although how much delay you can afford really depends on your recovery point objective).
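A minimal sketch of the restore side of that setup (file paths and the database name are made up): restoring WITH STANDBY leaves SECONDARY readable between restores, so the checks and reconciliation queries can run there instead of on PRIMARY:

-- Run on the SECONDARY: apply the next shipped log backup, keeping the
-- database readable (standby) between restores via an undo file.
RESTORE LOG MyDb
    FROM DISK = N'D:\LogShip\MyDb_latest.trn'
    WITH STANDBY = N'D:\LogShip\MyDb_undo.dat';

-- The database is readable in standby mode, so integrity checks and
-- reconciliation queries can run here.
DBCC CHECKDB (MyDb) WITH NO_INFOMSGS;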
If I have a high-availability requirement, I might put in mirroring at the main site only -- i.e. two servers plus a witness. The relatively quick automatic failover (a few seconds) provided by the witnessed environment has saved me a few late-night calls in the past.
Hope this helps.
J.