I am interested to know whether there is any solution to have two physical machines with Microsoft SQL Server 2012 working on the same database, something like a cluster but with both nodes active... Any ideas?
Microsoft SQL Server does not support a 'real' load balancing scheme out of the box. AFAIK, this is still true with SQL Server 2012. (Someone will enlighten me if I'm wrong.) It doesn't matter if we are talking about database mirroring or AlwaysOn or clusters.
(In order to hammer that point home, MS seems to call SQL Server clusters "SQL Server failover clusters" lately. Pedantic, but accurate.)
If you want to load balance your databases, you have to do the hard work yourself with some sort of sharding, federation or replication. (Note that federation (by views) has been in the product since SQL Server 2000, it just wasn't very popular.) And, of course, that would mean modifying either your databases or the apps themselves, which is almost always either too much work or violates your vendor agreements. With 150 databases, it's just that much more insurmountable.
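To give a flavour of what federation by views looks like, here is a minimal distributed partitioned view sketch; the server, database and table names (SRV2, SalesDB, Customers_1/Customers_2) are assumptions for illustration, not anything from the question:

-- On each member server, a range of the data lives in a local table whose
-- CHECK constraint declares which rows it holds:
CREATE TABLE dbo.Customers_1
(
    CustomerID   int NOT NULL
        CONSTRAINT CK_Customers_1 CHECK (CustomerID BETWEEN 1 AND 499999),
    CustomerName nvarchar(100) NOT NULL,
    CONSTRAINT PK_Customers_1 PRIMARY KEY (CustomerID)
);

-- The distributed partitioned view unions the local member table with the
-- remote one over a linked server, presenting them as a single table:
CREATE VIEW dbo.Customers
AS
SELECT CustomerID, CustomerName FROM dbo.Customers_1
UNION ALL
SELECT CustomerID, CustomerName FROM SRV2.SalesDB.dbo.Customers_2;

Queries against dbo.Customers get routed to the relevant member via the CHECK constraints, which is exactly why retrofitting this onto 150 existing databases is the hard work described above.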
You can have an active-active cluster, but the thing is that you would have to carefully distribute your databases on your nodes to divvy up the load. With 150 databases, this might be more granular than if you just had five databases, but if you have one database that is a ton of load and 149 that are light-weight or rarely used, you might still find one machine bogged down and the other isn't. And, some databases are busy sometimes and hardly busy at other times. Which means that everything might come down to when a user decides to run some heavy process.
Of course, you have to be able to support all of that load on a single node when you fail over, for whatever reason, even if it is something mundane like patching Windows. If you only patch during known slow traffic periods, that's great. If you don't have slow periods, or if the failover occurs because the hardware actually has a fault, the other node might not take the load and your users will be out of luck. If you think about it like that, having the second machine "doing nothing" isn't quite so irritating. At least you know that it will take all of the traffic that the primary usually does.
Yes, you can have two databases active, sharing the same information and replicating it back and forth. This is referred to as "Merge Replication". In this configuration, both nodes can accept read and write transactions.
How Merge Replication Works
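For the curious, a bare-bones sketch of creating a merge publication at one node; the database, publication and article names are assumptions, and the snapshot agent plus subscriber-side setup are omitted:

-- Run at the publisher: enable the database for merge publication,
-- create the publication, and add a table as an article.
EXEC sp_replicationdboption
    @dbname = N'SalesDB', @optname = N'merge publish', @value = N'true';

EXEC sp_addmergepublication
    @publication = N'SalesMergePub', @retention = 14;

EXEC sp_addmergearticle
    @publication = N'SalesMergePub',
    @article = N'Orders',
    @source_object = N'Orders';  -- a uniqueidentifier ROWGUIDCOL is added if the table lacks one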
I have an application that, for performance reasons, will have completely independent standalone instances in several Azure data centers. The stack of Azure IaaS and PaaS components at each data center will be exactly the same. Primarily, there will be a front end application and a database.
So let's say I have the application hosted in 4 data centers. I would like the data coming into each Azure SQL database to replicate asynchronously to the other 3 databases, in an eventually consistent manner. Each of these databases needs to be updatable.
Does anyone know if Active Geo-Replication can handle this scenario? I know I can do this using a VM and IaaS, but would prefer to use SQL Azure.
Thanks...
Peer-to-peer transactional replication supports what you're asking for, to some extent - I'm assuming that's what you're referring to when you mention setting it up in IaaS, but it seems like it would be self-defeating if you're looking to it for a boost in write performance (and it is against Microsoft's recommendations):
From https://msdn.microsoft.com/en-us/library/ms151196.aspx
Although peer-to-peer replication enables scaling out of read operations, write performance for the topology is like that for a single node. This is because ultimately all inserts, updates, and deletes are propagated to all nodes. Replication recognizes when a change has been applied to a given node and prevents changes from cycling through the nodes more than one time. We strongly recommend that write operations for each row be performed at only one node, for the following reasons:
If a row is modified at more than one node, it can cause a conflict or even a lost update when the row is propagated to other nodes.
There is always some latency involved when changes are replicated. For applications that require the latest change to be seen immediately, dynamically load balancing the application across multiple nodes can be problematic.
This makes me think that you'd be better off using Active Geo-Replication - you get the benefit of PaaS, you don't have to manage your own VMs or deal with TR (which gets messy), and if the application is built to deal with "eventual consistency" in the UI, you might be able to get away with slight delays in the secondaries being up to date.
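For reference, adding a geo-secondary is a one-liner run in the master database of the primary server; the server and database names below are assumptions. Note that geo-secondaries are read-only, so this gives you readable, eventually consistent replicas rather than four updatable masters:

-- Run in the master database of the primary Azure SQL server.
ALTER DATABASE [MyAppDb]
    ADD SECONDARY ON SERVER [myserver-eastus]
    WITH (ALLOW_CONNECTIONS = ALL);   -- readable secondary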
I was wondering if it is possible to add/update/delete an SQL Server database table, as well as an Informix database table at the same time.
Both databases will have the same table (data and all), so the query would only change based on which database it is going to. For some reason, we need the data inside both databases, kept up to date in real time.
Is it possible to do this with a SQL Trigger or maybe a SProc?
Any insight of how to do this, or a push in the right direction would be very much appreciated.
Doing a synchronous update, i.e. a distributed transaction by using a linked server, is possible from a trigger, but while technically possible, I would definitely advise against it. Aaron raises the issue of how reliable XA is in general, but my point is different: availability. Your update in SQL Server will fail if it cannot connect and update in Informix. Downtime (patching, maintenance, not to mention disasters) of the Informix site will imply downtime of the SQL Server site, driving your five 9's toward nine 5's quite fast... This is why I strongly advocate decoupling the application of updates. Transactional Replication is such an example of decoupling, and it supports heterogeneous environments (i.e. an Informix client downstream to accept the changes).
You will have a delay of update visibility (the state in SQL Server will be reflected in Informix after a delay that can be milliseconds, seconds, minutes, even hours on a bad day). And the updates are one way; nothing flows back from Informix to SQL Server. But doing master-master replication in a heterogeneous environment is something that not even Chuck Norris would attempt, just saying.
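To make the availability coupling concrete, here is a sketch of the kind of trigger being advised against; the linked server name INFORMIX and the remote catalog/table names are hypothetical:

CREATE TRIGGER trg_Orders_MirrorToInformix
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET XACT_ABORT ON;  -- required when a trigger enlists a distributed transaction
    -- This insert promotes the local transaction to a distributed (MSDTC)
    -- transaction; if the Informix side is unreachable, the original
    -- SQL Server insert rolls back too.
    INSERT INTO INFORMIX.stores_demo.informix.orders (order_id, amount)
    SELECT OrderID, Amount FROM inserted;
END;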
Maintaining two different DBMS with a single transaction requires a transaction monitor such as the XA system to coordinate the transactions. There are such systems. The XA specification is typically the underlying standard. Both Microsoft's SQL Server and IBM's Informix work with such systems, and it is possible to have SQL Server and Informix controlled by the same transaction monitor. I have fewer qualms about the technical competency of such systems than the others who've answered; I share their concerns about whether it is appropriate for you.
Such systems are very heavyweight. If you want consistency, all transactions that modify the single table described in the question will need to use the same XA services (plural; likely one for insert, one for update, one for delete) to do so. Further, if the same transactions need to manage any other tables too, then you need to add and use services for those tables as well. It is this aspect that tends to make such systems difficult to manage.
Using a replication system with the potential for delay before the sites are consistent is probably better than trying for absolute synchronicity, unless there are cogent demands for such synchronicity.
If there really is a demand for absolute synchronicity, then use a transaction monitor.
Do not roll your own.
They are hard to get right. Handling all the special cases is tricky. And (under the hypothesis that you need absolute synchronicity) doing it wrong is costly but easy.
That depends on your definition of "possible". Technically, you can use a technique called "two-phase commit."
The idea is you send the data to both databases and then a "prepare commit" command which does everything necessary to commit the data except for committing it. If the prepare fails, the commit would fail too. If prepare succeeds, then commit must succeed.
Brilliant idea, doesn't work in practice. One common case is that you send the commit to both databases and one of them gets lost on the way (network outage). Happens rarely but when it happens, you have an inconsistent state and, since this step must not fail, no good way to clean up.
So my solution works like this:
You load the data into a new table which has two extra columns where you can record that "server X has seen this record".
You add a job which copies all pending rows for server X to server X and updates the respective column. Write the job in such a way that it can be aborted and restarted at any time (i.e. it must be able to cope with cases where data already exists on the target side).
That way, you can distribute the data to any number of servers in a consistent, fault-tolerant way.
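A sketch of that approach under assumed names (OutboundRows, linked server SRVA); the target copy of the table declares RowID as a plain int primary key, not IDENTITY, so the explicit values insert cleanly:

CREATE TABLE dbo.OutboundRows
(
    RowID         int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Payload       nvarchar(max) NOT NULL,
    SentToServerA bit NOT NULL DEFAULT 0,
    SentToServerB bit NOT NULL DEFAULT 0
);

-- The job for server A: copy anything not yet marked, skipping rows that
-- already arrived, so an aborted run can simply be restarted.
INSERT INTO SRVA.TargetDb.dbo.OutboundRows (RowID, Payload)
SELECT s.RowID, s.Payload
FROM dbo.OutboundRows AS s
WHERE s.SentToServerA = 0
  AND NOT EXISTS (SELECT 1
                  FROM SRVA.TargetDb.dbo.OutboundRows AS t
                  WHERE t.RowID = s.RowID);

-- Flag only the rows confirmed present on the target, which keeps the job
-- safe even if it was killed between the insert and the update.
UPDATE s
SET s.SentToServerA = 1
FROM dbo.OutboundRows AS s
WHERE s.SentToServerA = 0
  AND EXISTS (SELECT 1
              FROM SRVA.TargetDb.dbo.OutboundRows AS t
              WHERE t.RowID = s.RowID);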
At my current workplace, the production SQL server and web servers are also used as development and test servers. I've asked for dedicated servers, but been refused as I can't justify it to satisfaction (the reasons against being cost of software, software licenses and hardware resources).
So, what justifications are there for a dedicated test/development server (a combined server at the moment - I don't want to push my luck and ask for 6 servers!)?
Summarised list:
- Resource usage
- Prevention of errors
- DR purposes
The list doesn't seem as extensive as I'd hoped.
Consider using Virtual Machines to reduce costs.
Well, for starters, the resources available to the production database are restricted.
Also, rogue/accidental developer SQL scripts could play havoc with the production data.
Could there be issues with production data sensitivity? (eg personal data)
just a few to get started :)
Try to calculate the cost of downtime if you take the production system down due to a mistake in development.
Try also to calculate the cost of slow response times in production if/when you are doing performance testing.
As a cost benefit the test/dev hardware can be used as a spare if something bad happens to the production hardware.
Explain how often developers have fat-handed moments and hit Enter too soon while editing statements starting...
drop table...
UPDATE veryImportantTable SET veryImportantField = '' WHERE 1 = 1 --TODO: make proper condition
This'd be reason enough for me. :)
I hope you have at least separate databases and are not developing on production data.
Check the data protection act, and also look into PCI-DSS if you want to be really secure (Payment Card Industry Data Security Standard).
I think it's livable to have a test database on the same physical machine as your production DB. Performance is often not an issue (and assuming it's a multicore machine with plenty of memory, even if you do a heavy query on test, production will often not noticeably slow down), and so long as the DB connections are separate, the chance of accidental damage is very, very low.
As for a web server, almost any machine can run one of those (Apache is free, and even IIS is free for 10 simultaneous connections or fewer) - you could install a test web server on any old machine, configure it to use your test DB, and have a decent, low-cost solution.
'course a separate machine is "cleaner" - but the difference isn't huge.
One strong argument is availability / reduce downtime / disaster recovery.
i.e. to have another machine on standby to replace the production machine should anything bad happen to it hardware-wise (e.g. disk controllers or motherboards or power supplies dying).
Ideally the additional machine should be identical to the production one so it can be swapped directly, or individual parts swapped in as required. They can also back each other up or have a local copy of their counterparts last backup so they can be restored from quickly.
Of course, it depends on how critical uptime is to the business as to how much value they'll see in this. If you're able to roughly work out how much they'd lose in $ due to lost business with and without a 'hot spare' server, and present your case from a $-saved viewpoint (hopefully a lot more than the cost of the server), they might go for it.
I need SQL Server to be running in 2 data centers (DC), active/active.
There are tonnes of challenges here, and below are my requirements.
Data synchronization must be async (for higher performance).
I need to be able to read/write on both DC
When Site1 goes down, all the traffic will be routed to Site2 and when Site1 comes back live, traffic will be shared again. In this case, data must sync back within 1-2 hours (based on down time obviously)
SQL transactional replication and the other SQL replication flavours don't seem like a good option, for the following reasons:
a. If replication breaks, rebuilding it will require 500 GB to be transferred to the other site.
b. We need to break the replication sometimes to make changes such as adding new tables or changing primary keys.
c. Sometimes, for whatever reason, the replication breaks by itself and even MS cannot find the cause.
d. I am not sure if peer-to-peer replication will resolve this.
e. Merge replication seems scary; we don't know its implications that well and we don't want to carry extra GUIDs.
Today it will be 2 DCs, but tomorrow we will add some more DCs, with the possibility of having one in Europe and one in Asia.
Desirable latency in replication is MAX 15 mins.
Most of all, I need a solution without a headache, or at least with a minimal one.
We are getting EMC RecoverPoint, but that does not help me in an ACTIVE/ACTIVE scenario as it is only for DR.
I've evaluated the following products and none of them provides a workable DB on the other end.
I will appreciate your help on this issue.
Thanks in advance
The only active/active option for SQL Server is replication; as of this writing, there is no other.
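For completeness, a minimal sketch of enabling a peer-to-peer publication (SQL Server 2005 SP2 Enterprise and later); the database and publication names are assumptions, and the same publication must be configured on every peer:

EXEC sp_replicationdboption
    @dbname = N'AppDb', @optname = N'publish', @value = N'true';

-- The @enabled_for_p2p flag is what turns an ordinary transactional
-- publication into a peer-to-peer one.
EXEC sp_addpublication
    @publication = N'P2P_AppDb',
    @enabled_for_p2p = N'true',
    @allow_initialize_from_backup = N'true',
    @repl_freq = N'continuous';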
We are using replication and seem to be having endless problems with it. It seems to shut down for unknown reasons. It needs to be shut down to remove a column, and it only starts back up half the time. Does anyone have any advice on how to properly use replication, or some alternatives to it?
Edit:
We are using SQL Server 2005. We cannot use database mirroring as we use the other database for reporting. As far as I am aware, you cannot query a mirrored database.
If you just need a couple of tables from your DB for reports, replication is more useful, but you can also set up log shipping with the secondary server in STANDBY mode (especially if you need a significant part of your data for reports); then you can run reports on the secondary server. You just have to remember that log shipping will interfere with transaction log backups, so you have to use the same folder of log backup files for both processes.
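Under the hood, the secondary stays readable because each log backup is applied WITH STANDBY rather than NORECOVERY; a sketch with assumed database names and file paths:

-- Applied on the secondary between reporting windows; users must be
-- disconnected while the restore runs.
RESTORE LOG ReportsDb
FROM DISK = N'\\backupshare\logship\ReportsDb_tlog.trn'
WITH STANDBY = N'D:\logship\ReportsDb_undo.dat';
-- STANDBY keeps the database read-only and consistent between restores,
-- parking uncommitted work in the undo file so the next restore can resume.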
I would think the combination of database mirroring and database snapshots will solve your issues.
First, database mirroring is very easy to set up and I have never had any problems with it (I've been using it for the past 4+ years).
Second, creating a database snapshot on your failover server will allow you to run reports. You can set up a SQL Agent job to drop and re-create the snapshot at whatever acceptable interval you like.
Of course this is all dependent on if you need your reports to run on real-time data or if they can be delayed somewhat.
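A sketch of what that agent job might run; the logical file name, path, and database names are assumptions:

-- Drop the previous snapshot if it exists, then re-create it against the
-- mirror (a snapshot is the only way to read a mirrored database).
IF DB_ID(N'SalesDb_Reporting') IS NOT NULL
    DROP DATABASE SalesDb_Reporting;

CREATE DATABASE SalesDb_Reporting
ON ( NAME = SalesDb_Data,                          -- logical data file of the source
     FILENAME = N'D:\Snapshots\SalesDb_Reporting.ss' )
AS SNAPSHOT OF SalesDb;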
Here is a list of the problems that I have had to resolve to get replication working:
1) The replication sometimes lies to me and reports the following, even when it's working fine:
"The server 'Bob' is not a Subscriber. (.Net SqlClient Data Provider)" I have tried to re-initialise it thinking that it was broken, and it never was...
2) It can take a little while to restart itself, especially if your remote DB is on the other side of the planet, which it is in my case. If you are on a slow network connection, or it is not 100% reliable, then you can have problems. Also, the jobs which restart the process can sometimes take a while to run, which also delays things further.
3) Some changes require full re-initialisation, which involves sending a new snapshot out. If you don't have your permissions quite right, you can re-initialise manually but it won't happen automatically, and this can be another reason for problems.
We have a SQL transactional replication which runs perfectly happily. You seem to say that it is when you are making schema changes to the publisher that you get problems. Each time we do a schema change we drop the publication, subscription and the subscription database, do the change, then re-build it all. We can do this because we can tolerate the time it takes to re-apply the snapshot. There are ways to apply schema changes to the publication and have them propagate to the subscriber. Take a look at sp_register_custom_scripting. We have made this work once, so I can give some more information about it if you need.
As @Jason says, you can report from a mirrored database by using a snapshot. Beware that the snapshot will take up space and cause more work for the mirror server, although how much space will depend on how much data is changing and how big your original database is. We do use a snapshot on a mirrored database for occasional reports because our entire database is not replicated.
Log shipping: http://msdn.microsoft.com/en-us/library/ms187103.aspx
What version of SQL Server are you using?
We're using replication now for a particular solution, and it seems to just work, day in, day out.
I would examine your event logs and SQL Server logs to see if you can determine why it is shutting down and why it doesn't start up.
Are you possibly patching the servers, or are you having network errors?
The alternatives to replication are log shipping, or database mirroring.
I personally prefer Database Mirroring, but it really depends what you're trying to do, as some of these aren't appropriate for certain situations.
We also used SQL transactional replication. We had the same pains with updating schema, which required dropping the publication on all servers, performing the updates, and then reinitializing replication and hoping for the best. Sometimes it would not initialize, or a node would fall behind and we'd get little warning of it. A few times we even lost all the stored procedure execute permissions, causing pretty much total failure on the websites.
We have a rather large database so reinitialization could take quite some time, meaning all updates had to be done at 2am on Sunday - not exactly when we're awake and alert and able to use all our faculties to deal with a problem that might arise.
We are ditching replication in favor of failover clustering on SQL 2008, but it can still be done all the way back to SQL 2000.
http://technet.microsoft.com/en-us/library/cc917693.aspx