What happens when a SQL Server AG failover occurs? - sql-server-2012

While analyzing Dynamic Management Views captured against the failover server, I observed that the DMV is getting flushed, or that the SQL engine is resetting the statistics.
In the production environment we are not allowed to flush/clear the DMVs, so I am calculating the delta between captures. While calculating the delta I found that the previous value is often greater than the current value.
My question is: say database A is configured in availability group AG1 with two servers in a primary-secondary setup. When it switches from primary to secondary, will the primary server's stats be reset, and what other reasons could cause the DMV to be reset?
Also, what happens when that particular procedure is recompiled: does the recompilation reset its DMV stats?

When the failover occurs, you're moving from one server to another. sys.dm_exec_procedure_stats provides information about procedures whose plans are currently in cache. Since you changed servers, there is nothing in cache for that database after the failover. Therefore, you're going to see radical differences from one server to another after a failover.
It's not a reset of the information. It's simply that the information in the procedure cache of one server is not the same as the information in the procedure cache of another server.
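For reference, here is a minimal query sketch against that DMV; the columns are standard sys.dm_exec_procedure_stats columns, and the TOP/ordering are just illustrative choices, not anything specific to the question:
-- Procedure-level stats for plans currently in cache. Rows disappear whenever
-- a plan leaves the cache (restart, failover, memory pressure, recompilation).
SELECT TOP (20)
       DB_NAME(ps.database_id) AS database_name,
       OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
       ps.cached_time,
       ps.execution_count,
       ps.total_worker_time,
       ps.total_elapsed_time
FROM sys.dm_exec_procedure_stats AS ps
ORDER BY ps.total_worker_time DESC;
If you capture this periodically and diff the captures, a "previous value greater than current value" simply means the plan was evicted and re-cached between captures, so the counters started over.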

Related

SQL Server full copy of database for read operations

Please advise what suits my problem best. I have a high-load web app hosted on the same server where SQL Server is hosted. I also have SQL Server Reporting Services running on that server, generating user reports.
So my server is essentially limited by disk read/write speed. I'm going to get another server and install a second SQL Server instance on it to host SSRS. My main criterion is getting data that is as fresh as possible.
I've looked at a couple of solutions. Currently I take a backup via a job, copy it to the second server, and restore it there, also via a job. But that's not the best solution.
As far as I can tell, every replication mechanism (transactional, merge, snapshot) affects the publisher database by locking its tables, which is unacceptable for me.
So I wonder: is there any way to create a read-only replica that is synced periodically without affecting the main database? I would put all reporting load on that replica and have the primary database used only by the web app.
What solution might suit my problem? I'm not a DBA, so I'd just like a direction to start investigating. Thanks.
Transactional Replication is typically used to off-load reporting to another server/instance and can be near real-time in a best case scenario. The benefit of Transactional Replication is you can place different indexes on the subscriber(s) to optimize reporting. You can also choose to replicate only a portion of the data if only a subset is needed for reporting.
The only time locking occurs with Transactional Replication is when you generate a snapshot. With concurrent snapshot processing, which is the default for Transactional Replication, the shared locks are only held for a short period of time, so users are able to continue working uninterrupted. Either way, this shouldn't be an issue since you'll likely be generating the snapshot during a period of low user activity anyway.
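If you do go that route, the setup is usually done through the New Publication wizard in Management Studio, but for orientation here is a rough T-SQL sketch of the publisher side. The publication, table, server, and database names (ReportingPub, dbo.Orders, SSRSSERVER, ReportingCopy, WebAppDb) are placeholders, and the distributor configuration and snapshot agent job are omitted:
-- Run on the publisher, in the published database (distributor already configured).
EXEC sp_replicationdboption
     @dbname = N'WebAppDb', @optname = N'publish', @value = N'true';

EXEC sp_addpublication
     @publication = N'ReportingPub',
     @status = N'active',
     @sync_method = N'concurrent';   -- concurrent snapshot: shared locks held only briefly

EXEC sp_addarticle
     @publication = N'ReportingPub',
     @article = N'Orders',
     @source_owner = N'dbo',
     @source_object = N'Orders';     -- publish only the tables reporting needs

EXEC sp_addsubscription
     @publication = N'ReportingPub',
     @subscriber = N'SSRSSERVER',
     @destination_db = N'ReportingCopy',
     @subscription_type = N'Push';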

Replicating data in Microsoft SQL Server

I am new to SQL Server. I have one SQL Server instance and a second SQL Server instance (a backup server). I want to copy data from the first server to the second, and all my transactions (create, update, delete) must be reflected on the second server immediately, in real time. I have very large tables, on the order of millions of rows. How can I achieve this? I don't have the privilege to use third-party tools.
You specify that you need data to be reflected on the second server immediately, in real time.
This requirement will add latency to every transaction that occurs on your primary server, since every action will need to be committed on the secondary server before it can be committed on your primary server.
In addition to the latency, this requirement will also likely reduce your availability. If the secondary server is no longer able to successfully commit transactions from the primary, then the primary can no longer commit transactions either, and your system is down.
For more information about these constraints, refer to the extensive discussion around this topic (the CAP theorem).
If you're OK with these restrictions, you might consider using Synchronous Database Mirroring (High-Safety Mode).
If you're not OK with these restrictions, please adjust / clarify your requirements.
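For illustration only, here is a bare-bones sketch of setting up synchronous (high-safety) database mirroring. The server names, port, and database name are placeholders, and the full backup/log restore needed to initialize the mirror is omitted:
-- On each instance: create a mirroring endpoint (port is a placeholder).
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);

-- On the mirror server (after restoring the database WITH NORECOVERY):
ALTER DATABASE MyDb SET PARTNER = 'TCP://PrincipalServer.domain.local:5022';

-- On the principal server:
ALTER DATABASE MyDb SET PARTNER = 'TCP://MirrorServer.domain.local:5022';
ALTER DATABASE MyDb SET PARTNER SAFETY FULL;   -- high-safety (synchronous) mode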
You may find some help in these two references:
Replication in MS SQL Server
SQL Server Replication
On a side note, an important thing to plan before setting up replication is the replication model you will use.
Here is a list of replication models you can choose from:
Peer-to-peer model
Central publisher model
Central publisher with remote distributor model
Central subscriber model
Publishing subscriber model
Each of the above has its own advantages; evaluate them against your needs.
Also, to add to that, there are three types of replication:
Transactional replication
Merge replication
Snapshot replication
Check out this tutorial on SQL Server 2008 replication:
Tutorial: Replicating Data Between Continuously Connected Servers
Since you said "... reflected in the 2nd server immediately in real time", that means you want to use Transactional Replication. You may still choose to replicate only certain tables.
Does the "millions of rows" represent some kind of history? If so, consider the risks that Michael mentioned... and whether you need all the history in the 2nd server, or just current / recent activity. If it's just current/recent, it may be safer and less of a system drain, to write something in T-SQL or SSIS, for a job to execute that loops, reads, and copies the data.
That could be done with linked servers and triggers... but the risks Michael mentions, about preventing the primary server from committing transactions, are as much or more a concern with triggers... that you can avoid with your own T-SQL/SSIS + job.
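To make that concrete, here is a hedged sketch of the kind of T-SQL such a job might run. It assumes a linked server named BACKUPSRV, a table dbo.Orders, a ModifiedDate column to use as a watermark, and a one-row control table dbo.SyncWatermark; none of those names come from the question:
-- Copy rows changed since the last run from the primary to the backup server.
-- Handles new rows only; updates/deletes would need a MERGE or a delete pass.
DECLARE @since datetime;

SELECT @since = LastCopied
FROM dbo.SyncWatermark;   -- one-row control table on the primary

INSERT INTO BACKUPSRV.BackupDb.dbo.Orders (OrderID, CustomerID, Amount, ModifiedDate)
SELECT o.OrderID, o.CustomerID, o.Amount, o.ModifiedDate
FROM dbo.Orders AS o
WHERE o.ModifiedDate > @since;

UPDATE dbo.SyncWatermark
SET LastCopied = GETDATE();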
Hope that helps...

MS SQL: value not updated

In our company we use many DB servers in different cities. Sometimes data on one server has to be synchronized with another. For example, in the table "Monitor" the values "status" and "date" may be updated very often. My problem is that when these values are updated on server A, they should also be updated on server B:
UPDATE Monitor SET [date] = '2013-06-13'
and then
UPDATE Monitor SET [status] = 4
On server A both updates succeed, but on server B (usually the one under the highest load), sometimes, in approximately 0.03% of cases, only the date value is updated and the status is still the old one. Can anybody explain whether this is possible on a DB server under high load?
It's hard to explain without looking at the boxes, the logs, and the workload each is doing; there are a thousand things that could cause server "B" to miss data, including table and row locks, requests dropped by the network, unfinished transactions and the like. To find out exactly, you'd have to turn on logging and compare the requests on "A" versus "B". The first thing I'd do, however, would be to look for errors in the SQL logs.
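As a quick sketch of that first check, the (undocumented but widely used) xp_readerrorlog procedure can filter the error log; the search string here is just an example:
-- Read the current SQL Server error log (0 = current log, 1 = SQL Server log),
-- returning only entries that contain the string 'error'.
EXEC master.dbo.xp_readerrorlog 0, 1, N'error';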
But in general, keeping databases synchronized across regions is doable using existing technologies available from Microsoft and Oracle. One scenario involves using a master, central DB to receive all requests; it then distributes inserts, updates, deletes, and queries out to the regional DBs using SSIS or regular DB connectivity over a WAN.
Here's a high-level guide to the technology solution available in SQL Server.
http://msdn.microsoft.com/en-us/library/hh868047.aspx
You were probably looking for a simple answer, but I don't think there is one.

SQL Server 2005 won't update rows

I currently have a SQL Server 2005 instance set up, and it has been running successfully for quite a long period of time without any issues.
As of this morning our website applications have been attempting to perform updates on various rows. However, every time an update is attempted, the data never actually gets updated in the database.
Our application's code hasn't been changed in any way, and there appears to be no errors of any kind.
Is there anything in SQL Server that can prevent updates from being performed on a database? Can the size of transaction logs prevent data from being updated on a SQL Server database? Or anything at all that can cause this strange behaviour?
We had similar behaviour on one of our servers and it was due to the log file being on a hard drive that had run out of disk space - so worth checking that.
Also check that the autogrowth limits on the data and log files haven't been reached.
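As a sketch, two quick checks that work on SQL Server 2005 (run the second one in the database your application uses):
-- Percentage of each transaction log currently in use, instance-wide.
DBCC SQLPERF (LOGSPACE);

-- Size, growth setting, and maximum size of each file in the current database.
-- Sizes are in 8 KB pages; max_size = -1 means the file can grow without limit.
SELECT name, type_desc, size, growth, is_percent_growth, max_size
FROM sys.database_files;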

Mirroring vs. Log Shipping in SQL Server 2005

I'm interested in hearing people's thoughts about the pros and cons of database mirroring vs. log shipping in this scenario: we need to set up a database backup situation with exactly one secondary server, which does not need to take over automatically when the primary fails. Recovering and starting up on the secondary should not take too long, though.
Mirroring
Database mirroring is limited to only two servers.
Mirroring with a Witness Server allows for High Availability and automatic failover.
You can configure your DSN string to have both mirrored servers in it so that when they switch you notice nothing.
While mirrored, your Mirrored Database cannot be accessed. It is in Synchronizing/Restoring mode.
Mirroring with SQL Server 2005 standard edition is not good for load balancing (see sentence above)
Log Shipping
You can log ship to multiple servers.
Log shipping is only as current as how often the job runs. If you ship logs every 15 minutes, the secondary server could be as much as 15 minutes behind, making it more of a warm standby (see the query sketch after this list for checking the current latency).
You can leave the database in read-only (standby) mode while it is being kept up to date. Good for reporting servers.
Good for disaster recovery
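As referenced in the log shipping list above, here is a sketch of a query for checking how far behind a log shipped secondary is, using the msdb monitoring tables (run it where the monitoring data lives, typically the secondary or the monitor server):
-- Copy and restore lag for each log shipped secondary database.
SELECT secondary_server,
       secondary_database,
       last_copied_date,
       last_restored_date,
       last_restored_latency   -- minutes of latency measured at the last restore
FROM msdb.dbo.log_shipping_monitor_secondary;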
For backup purposes I would recommend mirroring: it keeps an always up-to-date copy of your database with no hassle. If you don't need automatic failover, you need just two servers/instances. Note that High Performance (asynchronous) mode is only available in the Enterprise edition!
Switching to the secondary database does take longer with log shipping, but it's not too bad. You'll have to manually copy any uncopied backup files, apply the transaction log backups to the secondary database, recover the secondary database, and change its role to primary. If the old primary database is accessible, you should back up its transaction log before beginning. Failing over with mirroring is somewhat simpler, and can be done automatically if you are using High Availability mode. Even when using High Performance mode, it's still a one-statement operation.
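For illustration, here is a sketch of the final statements involved in each case; the database name is a placeholder, and the log shipping example assumes any remaining log backups have already been restored WITH NORECOVERY:
-- Log shipping: bring the secondary database online as the new primary
-- after the last transaction log backup has been applied.
RESTORE DATABASE MyDb WITH RECOVERY;

-- Mirroring, planned failover (run on the principal while the partners are synchronized):
ALTER DATABASE MyDb SET PARTNER FAILOVER;

-- Mirroring, forced service when the principal is lost (run on the mirror;
-- possible data loss, particularly in High Performance mode):
ALTER DATABASE MyDb SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;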