I am thinking this is impossible but I wanted to make sure.
Is there a way for me to know when a table was locked and maybe for how long? I know that I can see whether a table is currently locked, but I would like to have a "history" of locks.
A "free" alternative to the RedGate tool mentioned in other response, is the MS-SQL Server Profiler (see in Tools menu from SQL Server Management Studio). This tool lets you define "traces" by specifying the type of event you wish to monitor and/or record. There's even [in SQL2008, maybe also in older versions] a default template for locking issues: TSQL_Locks.
Beware that analyzing the profiler's logs may require a bit of work/figuring out. It is possible to filter events based on a particular set of criteria (as well as filtering these at the source, i.e. excluding these from the log in the first place), but third party products such as RedGate's are likely to offer more ease-of-use, better aggregation features etc.
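If you are on SQL Server 2012 or later, an Extended Events session is another way to record lock events ahead of time, with less overhead than the Profiler GUI. This is only a rough sketch; the session name, file path and database filter are placeholders, and lock_acquired is an extremely high-volume event, so filter aggressively and keep such a session short-lived.

```sql
-- Rough sketch: capture lock acquisitions for one database to a file target.
-- Names and paths are placeholders; adjust the filter before running this anywhere real.
CREATE EVENT SESSION LockHistory ON SERVER
ADD EVENT sqlserver.lock_acquired
(
    ACTION (sqlserver.session_id, sqlserver.sql_text)
    WHERE sqlserver.database_name = N'YourDatabase'
)
ADD TARGET package0.event_file (SET filename = N'C:\Traces\LockHistory.xel');
GO

ALTER EVENT SESSION LockHistory ON SERVER STATE = START;
```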
Edit: (following Metju's remark)
The solution suggested above implies that one would start recording lock-related events in the Profiler tool ahead of time. There is nothing, at least nothing publicly documented, in MS-SQL 2005 which would provide access to complete historical information about locks "post facto". However, depending on one's needs, enough insight may sometimes be gathered from the SQL Activity Monitor (under 'Management' in the object tree on the left in Management Studio), in particular the "Locks by object" view. This information, which is implicitly available (no need for any setup), can often be sufficient to identify the origins of deadlocks and similar issues.
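For what it's worth, the same "which objects are locked right now" information that Activity Monitor shows can also be pulled straight from the DMVs. A minimal sketch (SQL 2005 and later; this is a snapshot of the current moment, not a history):

```sql
-- Current lock requests on objects in the current database.
SELECT  tl.request_session_id,
        tl.resource_type,
        tl.request_mode,
        tl.request_status,
        OBJECT_NAME(tl.resource_associated_entity_id) AS locked_object
FROM    sys.dm_tran_locks AS tl
WHERE   tl.resource_database_id = DB_ID()
AND     tl.resource_type = 'OBJECT';
```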
Check out this tool from Red-Gate. I use it an awful lot for exactly this kind of thing. Plus, it lets me check out long-running queries and a host of other useful information. There are filters for the last hour, the last day, forever, etc.
RedGate SQL Response
Without running a monitoring tool (roll your own, use a third-party product, or process the output of SQL Profiler), no, there isn't.
I am interested in whether there is any solution to have two physical machines, with Microsoft SQL Server 2012, working on the same database; something like a cluster, but with both nodes active... Any ideas?
Microsoft SQL Server does not support a 'real' load balancing scheme out of the box. AFAIK, this is still true with SQL Server 2012. (Someone will enlighten me if I'm wrong.) It doesn't matter if we are talking about database mirroring or AlwaysOn or clusters.
(In order to hammer that point home, MS seems to call SQL Server clusters "SQL Server failover clusters" lately. Pedantics.)
If you want to load balance your databases, you have to do the hard work yourself with some sort of sharding, federation or replication. (Note that federation by views has been in the product since SQL Server 2000; it just wasn't very popular.) And, of course, that would mean modifying either your databases or the apps themselves, which is almost always either too much work or a violation of your vendor agreements. With 150 databases, it's just that much more insurmountable.
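For completeness, "federation by views" boils down to a distributed partitioned view over linked servers. A minimal sketch, assuming linked servers Node1 and Node2 each hold one range of the data (all names are made up):

```sql
-- Each member table lives on a different server and has a CHECK constraint on
-- CustomerId defining its range; the view stitches the ranges back together.
CREATE VIEW dbo.Customers
AS
SELECT * FROM Node1.SalesDb.dbo.Customers_Low    -- CustomerId < 500000
UNION ALL
SELECT * FROM Node2.SalesDb.dbo.Customers_High;  -- CustomerId >= 500000
```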
You can have an active-active cluster, but the thing is that you would have to carefully distribute your databases on your nodes to divvy up the load. With 150 databases, this might be more granular than if you just had five databases, but if you have one database that is a ton of load and 149 that are light-weight or rarely used, you might still find one machine bogged down and the other isn't. And, some databases are busy sometimes and hardly busy at other times. Which means that everything might come down to when a user decides to run some heavy process.
Of course, you have to be able to support all of that load on a single node when you fail over, for whatever reason, even if it is something mundane like patching Windows. If you only patch during known slow traffic periods, that's great. If you don't have slow periods, or if the failover occurs because the hardware actually has a fault, the other node might not take the load and your users will be out of luck. If you think about it like that, having the second machine "doing nothing" isn't quite so irritating. At least you know that it will take all of the traffic that the primary usually does.
Yes, you can have two databases active, sharing the same information and replicating it back and forth. This is referred to as "Merge Replication". In this configuration, both nodes can accept read and write transactions.
How Merge Replication Works
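To give a rough idea of what setting that up involves, the publishing side uses the replication stored procedures. This is only a hedged, partial sketch: the database, publication and table names are placeholders, it assumes a distributor is already configured, and the snapshot/merge agents and subscriptions still have to be added afterwards.

```sql
USE SalesDb;
GO
-- Enable the database for merge publishing.
EXEC sp_replicationdboption
     @dbname  = N'SalesDb',
     @optname = N'merge publish',
     @value   = N'true';

-- Create the merge publication and add one article (table) to it.
EXEC sp_addmergepublication @publication = N'SalesDb_Merge';
EXEC sp_addmergearticle
     @publication   = N'SalesDb_Merge',
     @article       = N'Customers',
     @source_owner  = N'dbo',
     @source_object = N'Customers';
```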
I am developing a VB.Net application that will be used on a LAN, with MS Access as the back end. I have developed many single-user applications, but I don't know about multi-user applications, LANs, managing the DB, etc. How do I make the program multi-user on a LAN, where data will be accessed at the same time? How do I manage such things?
Please give me some help and guidance.
Thanks
Your VB application does not care how many people run it.
Your database, with MS Access, has some serious issues with multiple users. Get away from it if you can. SQL Server has a free version called SQL Express. If you only plan on 2 people, you might be OK with Access for a while but be prepared to support it more.
That was all the easy stuff, now you have to think about how you are going to handle multiple users trying to access and update the same data (concurrency).
Imagine this: you are a user looking at employee record 1, and so is someone else. You change the birthday and save. Then the other user changes their supervisor and saves. How do you know something changed? What do you do if something changed? These are questions I cannot answer for you; you must decide based on your situation.
There are 2 main types of concurrency, optimistic and pessimistic. See this link for a great explanation and discussion of them: optimistic-vs-pessimistic-locking
You can look at this on a table-by-table basis.
If a table is never updated, you don't have to worry about concurrency.
If a table is rarely updated, like a table of states, you can decide whether it is worth the extra effort to add concurrency checking.
Pretty much everything else should have some type of concurrency checking.
Now, the million dollar question, how?
You will find as many ways to handle concurrency as you will find colors in the rainbow. Here are some of the ones I like:
1. A simple number that you increment with each save. Small and easy. (A sketch of this approach follows the list.)
2. DateTime stamp - as long as you don't expect two people to ever save the same record during the same second, this is easy. (I personally don't like it by itself.)
3. User name - pretty simple; it gives a little bit of an audit by recording who last inserted/edited the record, but it doesn't handle an issue I have seen too often. Imagine the same scenario as above, but you had 2 instances of record 1 open. Now you change the data again, maybe the supervisor, and when you save, you overwrite the changes from your first save with those of the second save.
4. Guid - VB can create a GUID, SQL Server can create a GUID and so can Access. It is nice and unique and, most importantly, you can create it on the client, so you don't have to requery the database after you save the record to get a refreshed record.
Combination of these. I like 2 and 3 myself. Gives a mini audit and is unique to the user.
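As a concrete illustration of option 1 (the incrementing version number), the check ends up as a single extra condition on the UPDATE. This is only a sketch: the Employee table, the RecordVersion column and the parameters are assumptions, with the parameter values supplied by the VB.Net code.

```sql
-- Optimistic concurrency with an incrementing version number: the UPDATE only
-- succeeds if nobody has bumped RecordVersion since this client read the row.
-- @EmployeeId, @NewBirthday and @OriginalVersion arrive as command parameters.
UPDATE dbo.Employee
SET    Birthday      = @NewBirthday,
       RecordVersion = RecordVersion + 1
WHERE  EmployeeId    = @EmployeeId
  AND  RecordVersion = @OriginalVersion;

IF @@ROWCOUNT = 0
    RAISERROR('This record was changed by another user since you loaded it.', 16, 1);
```

If the row count comes back as zero, the application knows the record changed underneath it and can decide whether to reload, merge or overwrite.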
If you use a DataAdapter, by default MS will assume concurrency checking means comparing EVERY field to make sure it did not change. This works, but it is completely unscalable and should not be done.
All of this depends on the size of your application and how you see it being used. Definitely do some more research before you settle on a decision.
There are a number of solutions here.
If I may suggest a drastic alternative, have you considered pairing the client running on the user's computer with a server component (through a web service)? A simpler alternative would be for the client to talk directly to a SQL Server (or other database) instance over the network.*
*I'm not a fan of having client-side apps talk directly to the database. It will mean maintenance headaches in the future, but I included it to give you options.
I found this random example via Google so YMMV.
The company I work for would like to use the AlwaysOn Availability Groups architecture in our SQL Server-based application. We have 3 databases straight off of installation, one of which is partitioned by configuration. We currently use MSDTC to coordinate transactions between the three, i.e. if we are committing to databases A and B and the commit on A succeeds, a failure on B means a rollback of both A and B, as opposed to just B.
We ran into an issue when we saw this article
From my understanding, this basically means MSDTC is not supported in AlwaysOn Availability Group mode.
I could not find a replacement for this in SQL Server 2012.
So my questions are:
What options do we have (Shelve or open source Product/Code change)?
What is specifically the impact of running MSDTC in this setting (complete crash/missing transactions)?
Thanks in advance, your help is greatly appreciated.
Dor
I recently asked a similar question at: https://dba.stackexchange.com/questions/47108/alwayson-ag-dtc-with-failover
> What options do we have (Shelve or open source Product/Code change)?
I think you have two options:
Change your app so that it does not need DTC.
Change your database setup so that it does not use Availability Groups.
In my circumstance, we're using a commercial app so option 1 is not viable. We are currently using database mirroring and based on recent research I now understand that that is also not supported.
My takeaway is that it is possible to make it work. But it's not simple to do and it puts you in an unsupported situation, which is not acceptable for us. Therefore, I plan to look at utilizing log shipping and changing from a hot standby (with mirroring) to a warm standby (with log shipping).
> What is specifically the impact of running MSDTC in this setting (complete crash/missing transactions)?
If you do decide to make use of DTC with Availability Groups or mirroring, you run the risk of having corrupted/inconsistent data in a failover scenario. The article you cited gives a good example of how that can happen.
Admittedly, with Log Shipping the same issue can occur. The argument I plan to make is that with log shipping we'll have the ability to roll to a specific point in time and we can make sure we only move to a point in time where we know everything is consistent.
The commercial app we are using does not support high availability. This is our attempt at making it highly available.
I was wondering if it is possible to insert into, update and delete from a SQL Server database table and an Informix database table at the same time.
Both databases will have the same table (data and all), so the query would only change based on which database it is going to. For some reason, we need the data inside both databases, kept up to date in real time.
Is it possible to do this with a SQL Trigger or maybe a SProc?
Any insight of how to do this, or a push in the right direction would be very much appreciated.
Doing a synchronous update (i.e. a distributed transaction using a linked server, which is possible from a trigger) is technically feasible, but I would definitely advise against it. Aaron brings up the issue of how reliable XA is in general, but my point is different: availability. Your update in SQL Server will fail if it cannot connect to and update Informix. Downtime (patching, maintenance, not to mention disasters) on the Informix site will imply downtime on the SQL Server site, driving your five 9's toward nine 5's quite fast... This is why I strongly advocate decoupling the application of updates. Transactional replication is such an example of decoupling, and it supports heterogeneous environments (i.e. an Informix client downstream to accept the changes).
You will have a delay in update visibility (the state in SQL Server will be reflected in Informix after a delay that can be milliseconds, seconds, minutes, even hours on a bad day). And the updates are one-way; nothing flows back from Informix to SQL Server. But doing master-master replication in a heterogeneous environment is something that not even Chuck Norris would attempt, just saying.
Maintaining two different DBMS with a single transaction requires a transaction monitor such as the XA system to coordinate the transactions. There are such systems. The XA specification is typically the underlying standard. Both Microsoft's SQL Server and IBM's Informix work with such systems, and it is possible to have SQL Server and Informix controlled by the same transaction monitor. I have fewer qualms about the technical competency of such systems than the others who've answered; I share their concerns about whether it is appropriate for you.
Such systems are very heavyweight. If you want consistency, all transactions that modify the single table described in the question will need to use the same XA services (plural; likely one for insert, one for update, one for delete) to do so. Further, if the same transactions need to manage any other tables too, then you need to add and use services for those tables as well. It is this aspect that tends to make such systems difficult to manage.
Using a replication system with the potential for delay before the sites are consistent is probably better than trying for absolute synchronicity, unless there are cogent demands for such synchronicity.
If there really is a demand for absolute synchronicity, then use a transaction monitor.
Do not roll your own.
They are hard to get right. Handling all the special cases is tricky. And (under the hypothesis that you need absolute synchronicity) doing it wrong is costly but easy.
That depends on your definition of "possible". Technically, you can use a technique called "two-phase commit."
The idea is you send the data to both databases and then a "prepare commit" command which does everything necessary to commit the data except for committing it. If the prepare fails, the commit would fail too. If prepare succeeds, then commit must succeed.
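For illustration, this is roughly what a two-phase-commit (distributed) transaction looks like from the SQL Server side, assuming a linked server to the Informix box has been configured. The linked server name and four-part table names are placeholders, and depending on the OLE DB provider you may need OPENQUERY instead of four-part names.

```sql
-- Both updates either commit on both sides or roll back on both sides;
-- MSDTC runs the prepare phase against each participant before the real commit.
BEGIN DISTRIBUTED TRANSACTION;

UPDATE dbo.Customers
SET    Balance = Balance - 100
WHERE  CustomerId = 42;

UPDATE INFORMIX_LNK.stores_demo.informix.customer   -- placeholder four-part name
SET    balance = balance - 100
WHERE  customer_num = 42;

COMMIT TRANSACTION;
```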
Brilliant idea, but it doesn't work in practice. One common case is that you send the commit to both databases and one of them gets lost on the way (network outage). It happens rarely, but when it happens you have an inconsistent state and, since this step must not fail, there is no good way to clean up.
So my solution works like this:
You load the data into a new table which has two extra columns where you can record "server X has seen this record".
You add a job which copies all records not yet seen by server X to server X and updates the respective column. Write the job in such a way that it can be aborted and restarted at any time (i.e. it must be able to cope with cases where data already exists on the target side).
That way, you can distribute the data to any number of servers in a consistent, fault tolerant way.
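A rough sketch of such a job for one target server, assuming a staging table dbo.OrderStaging with a SentToServerX flag column and a linked server named SERVER_X (all names are invented for the example):

```sql
-- Re-runnable copy step: only rows not yet flagged are considered, and rows that
-- already made it to the target (e.g. from an aborted earlier run) are skipped.
INSERT INTO SERVER_X.TargetDb.dbo.Orders (OrderId, Amount)
SELECT s.OrderId, s.Amount
FROM   dbo.OrderStaging AS s
WHERE  s.SentToServerX = 0
  AND  NOT EXISTS (SELECT 1
                   FROM SERVER_X.TargetDb.dbo.Orders AS t
                   WHERE t.OrderId = s.OrderId);

-- Flag only the rows that are now confirmed to exist on the target.
UPDATE s
SET    s.SentToServerX = 1
FROM   dbo.OrderStaging AS s
WHERE  s.SentToServerX = 0
  AND  EXISTS (SELECT 1
               FROM SERVER_X.TargetDb.dbo.Orders AS t
               WHERE t.OrderId = s.OrderId);
```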
I'm looking to apply continuous delivery concepts to a web app we are building, and I'm wondering if there is any solution for protecting the database from an accidental erroneous commit. For example, a bug that erases a whole table instead of a single record.
How can the impact of such an issue be limited according to continuous delivery doctrine, where the application is deployed gradually over segments of the infrastructure?
Any ideas?
Well, first, you cannot tell just from looking at it whether an SQL statement is bad. You might have wanted to delete the entire contents of the table. Therefore it is not physically possible to have an automated tool that detects intent.
So to protect your database, first make sure you are in full recovery (not simple) mode and have full backups nightly and transaction log backups every 15 minutes or so. Now you cannot lose much information no matter how badly the process breaks. Your dbas should be trained to be able to recover to a point in time. If you don't have any dbas, I'd suggest the best thing you can do to protect your data is hire some. This is a non-negotiable in any non-trivial database environment and it is terribly risky not to have trained, experienced dbas if your data is critical to the business.
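In T-SQL terms that boils down to something like the following (the database name and paths are placeholders; the log backup would normally run from a SQL Agent job on its 15-minute schedule):

```sql
-- Make sure point-in-time recovery is possible at all.
ALTER DATABASE MyAppDb SET RECOVERY FULL;

-- Nightly full backup plus frequent log backups.
BACKUP DATABASE MyAppDb TO DISK = N'D:\Backups\MyAppDb_full.bak' WITH INIT;
BACKUP LOG      MyAppDb TO DISK = N'D:\Backups\MyAppDb_log.trn';
```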
Next, you need to treat SQL like any other code: it should be in source control, in scripts. If you are terribly concerned about accidental deletions, then write the scripts for deletes so that they copy all deleted rows to a staging table, and clear out the staging table once a week or so. Enforce this convention in code reviews. Or, better yet, set up an auditing process that runs through triggers. Once all records are audited, it is much easier to get back the 150 accidental deletions without having to restore a database. I would never consider having any enterprise application without auditing.
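A minimal sketch of the "copy deletes to a staging table" idea, assuming an Orders table and an Orders_Deleted table with matching columns plus DeletedAt/DeletedBy (all names are made up):

```sql
CREATE TRIGGER trg_Orders_AuditDelete
ON dbo.Orders
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Keep a copy of every deleted row, plus when and by whom it was deleted.
    INSERT INTO dbo.Orders_Deleted (OrderId, CustomerId, Amount, DeletedAt, DeletedBy)
    SELECT d.OrderId, d.CustomerId, d.Amount, GETDATE(), SUSER_SNAME()
    FROM   deleted AS d;
END;
```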
All SQL scripts, without exception, should be code-reviewed just like other code. All SQL scripts should be tested on QA and passed before moving to production. This will greatly reduce the possibility of error. No developer should have write rights to production; only dbas should have that. Therefore each script should be written so that it can just be run, not run one chunk at a time where you could accidentally forget to highlight the WHERE clause. Train your developers to use transactions correctly in the scripts as well.
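"Using transactions correctly" in a hand-written production script usually means something along these lines (a sketch only; the table and values are invented):

```sql
-- The whole change either happens or it doesn't, and an unexpected row count
-- causes a rollback instead of a silent disaster.
BEGIN TRANSACTION;

DELETE FROM dbo.Customers
WHERE  CustomerId = 12345;

IF @@ROWCOUNT <> 1
BEGIN
    ROLLBACK TRANSACTION;
    RAISERROR('Unexpected number of rows affected; change rolled back.', 16, 1);
END
ELSE
    COMMIT TRANSACTION;
```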
Your concern is bad data happening to the database. The solution is to use full logging of all transactions so you can back out of transactions that you want to. This would usually be used in a context of full backups/incremental backups/full logging.
SQL Server, for instance, allows you to restore to a point in time (http://msdn.microsoft.com/en-us/library/ms190982(v=sql.105).aspx), assuming you have full logging.
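For reference, a point-in-time restore is roughly the following (names, paths and the timestamp are placeholders, and it assumes an unbroken chain of log backups):

```sql
-- Restore the last full backup without recovering, then roll the log forward
-- to just before the bad statement ran.
RESTORE DATABASE MyAppDb
    FROM DISK = N'D:\Backups\MyAppDb_full.bak'
    WITH NORECOVERY, REPLACE;

RESTORE LOG MyAppDb
    FROM DISK = N'D:\Backups\MyAppDb_log.trn'
    WITH STOPAT = '2013-06-01T14:30:00', RECOVERY;
```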
If you are creating and dropping tables, this could be an expensive solution, in terms of space needed for the log. However, it might meet your needs for development.
You may find that full-logging is too expensive for such an application. In that case, you might want to make periodic backups (daily? hourly?) and just keep these around. For this purpose, I've found LightSpeed to be a good product for fast and efficient backups.
One strategy that is commonly adopted is to log the incremental SQL statements rather than a collective schema generation, so you can control the change at a much more granular level:
For example:

change 1:
    UP:   Add column
    DOWN: Remove column

change 2:
    UP:   Add trigger
    DOWN: Remove trigger
Once the changes are incrementally captured like this, you can have a simple but efficient script to upgrade (UP) from any version to any version without having to worry about the individual changes involved. When the change numbers are linked to builds, it becomes even more effective: when you deploy a build, the database is also automatically upgraded (UP) or downgraded (DOWN) to match that specific build.
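For instance, "change 1" above might be captured as a pair of scripts like the following, together with a table that records which change the database is currently at (all names are made up for the illustration):

```sql
-- UP (change 1): add the column and record that the change was applied.
ALTER TABLE dbo.Customers
    ADD LoyaltyPoints int NOT NULL
        CONSTRAINT DF_Customers_LoyaltyPoints DEFAULT (0);
INSERT INTO dbo.SchemaChangeLog (ChangeNumber, AppliedAt) VALUES (1, GETDATE());

-- DOWN (change 1): undo it in reverse order.
ALTER TABLE dbo.Customers DROP CONSTRAINT DF_Customers_LoyaltyPoints;
ALTER TABLE dbo.Customers DROP COLUMN LoyaltyPoints;
DELETE FROM dbo.SchemaChangeLog WHERE ChangeNumber = 1;
```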
We have a pipeline app at CloudMunch which does exactly that.