MS SQL Concurrency, excess Locks - sql

I have a database on MS SQL Server 2000 that is being hit by hundreds of users at a time. There are also intensive reports built with Reporting Services 2005 hitting the same database.
When many reports are running and people are using the database concurrently, we see blocking to the point where the system eventually starts to time out any new transaction.
Is there a global way to minimize blocking so that transactions can continue to flow?

Use optimistic locking if updates are not happening often and the database is mainly used for reporting.
SQL Server has quite a pessimistic locking default.
A look into SQL Server Table Hints might get you started.

The reports can use WITH(NOLOCK).
Other possibilities are having the reports run off a read-only replica of the database, or running them off a data warehouse version of the database that is optimized for the reporting needs.
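As a rough sketch, a report query under that approach could look like this (table and column names are purely illustrative; keep in mind that NOLOCK / READ UNCOMMITTED reads can return uncommitted, in-flight data, which is usually tolerable for reporting):

-- Hint a single table:
SELECT OrderID, OrderDate, TotalAmount
FROM dbo.Orders WITH (NOLOCK)            -- illustrative table
WHERE OrderDate >= '20090101';

-- Or set it once for the whole report batch/connection:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT OrderID, OrderDate, TotalAmount
FROM dbo.Orders
WHERE OrderDate >= '20090101';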

Since you are already using NOLOCK hints and READ UNCOMMITTED isolation level for your reports, the investigation needs to turn to the transactional queries coming in. This may get deep. Perhaps applications are keeping transactions open too long. It may also be the case that you have a lot of table scans or range scans in some of the other query volume, and those may be holding shared locks for long-running transactions. Those shared locks will block your writers.
You need to start looking at sp_lock, and seeing what kinds of locks are outstanding, see what locks the blocked queries are trying to obtain, and then examine the queries that are blocking the requestors.
This will help you if you are unfamiliar with SQL Server locking:
Understanding SQL Server 2000 Locking
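A rough sketch of that investigation on SQL Server 2000 (the spid value is a placeholder):

-- Which sessions are blocked, and by whom (see the BlkBy column):
EXEC sp_who2;

-- What locks are currently held or being waited on:
EXEC sp_lock;

-- What statement the blocking session last submitted:
DBCC INPUTBUFFER(53);   -- replace 53 with the blocking spid reported by sp_who2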
Also, perhaps you could describe your disk subsystem. It may be undersized.

Thanks everyone for your support. What we did to mitigate the problem was to create a new database with a log shipping procedure running every hour to keep it in sync with the real one. The reports that do not need real-time data were pointed to that database, and the ones that need real-time data were restricted so only a few people can access them. The drawbacks of this method are that the data can be up to one hour out of sync, and that we needed to set up a new server for that purpose only. Also, when the log shipping procedure runs, every connection is dropped for a very short period of time, which can be a problem for really long procedures or reports. After this I will review the queries from the reports so I can understand what can be optimized. Thanks, and I will recommend the site to the whole IT department.
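For reference, the core of that hourly log-shipping step is roughly the following (database names, share, and file paths are illustrative); the WITH STANDBY option is what keeps the reporting copy readable between restores, and the restore itself is why existing connections get dropped briefly:

-- On the production server: back up the transaction log.
BACKUP LOG ProductionDB
    TO DISK = '\\ReportServer\LogShip\ProductionDB.trn';

-- On the reporting server: apply it to the read-only copy.
RESTORE LOG ProductionDB_Reports
    FROM DISK = '\\ReportServer\LogShip\ProductionDB.trn'
    WITH STANDBY = 'D:\MSSQL\Undo_ProductionDB_Reports.dat';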

Related

SQL Server - Avoiding write timeouts on logging table due to reporting queries

I have two very busy tables in an email dispatch system. One is for batching mail for dispatch, the other is used for logging. Expensive queries are run against both of these tables to produce stats for a UI. I would like to remove the reporting overhead on these tables, as I am seeing timeouts during report generation.
My question is: what are my options for reducing the query overhead on these two tables while generating the report data?
I've considered using triggers to create exact copies of the tables. Is there any built-in functionality in SQL Server for mirroring data within a database? If I can avoid growing the database unnecessarily, though, that would be an advantage. It doesn't matter if the stats are not real time.
There is built-in functionality for this scenario: it's known as a Database Snapshot.
If you run a query against a DB snapshot table, no shared locks should be created on the original database.
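Creating one is a single statement; a minimal sketch, assuming the source database's logical data file name and a path for the sparse file (all names here are illustrative):

CREATE DATABASE MailSystem_ReportSnapshot
ON ( NAME = MailSystem_Data,                           -- logical data file name of the source DB
     FILENAME = 'D:\Snapshots\MailSystem_ReportSnapshot.ss' )
AS SNAPSHOT OF MailSystem;

-- Report queries then target the snapshot instead of the live database:
-- SELECT COUNT(*) FROM MailSystem_ReportSnapshot.dbo.MailLog;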
You can use Resource Governor for SQL Server. Unfortunately, I have only read about it and haven't used it yet. It is used to isolate workloads on SQL Server.
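A rough sketch of what that isolation could look like on SQL Server 2008+ (pool, group, and application names are illustrative):

USE master;
GO
CREATE RESOURCE POOL ReportingPool WITH (MAX_CPU_PERCENT = 30);
CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool;
GO
-- Classifier function: routes the reporting application into the limited pool.
CREATE FUNCTION dbo.fn_ClassifyWorkload() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname;
    SET @grp = N'default';
    IF APP_NAME() = N'ReportGenerator' SET @grp = N'ReportingGroup';
    RETURN @grp;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_ClassifyWorkload);
ALTER RESOURCE GOVERNOR RECONFIGURE;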
Please try and let us know if it helps.
Some helpful links: MSDN SQLBlog technet
Kind Regards,
Sumit

SQL Server full copy of database for read operations

Please advise what suits my problem best. I have a high-load web app hosted on the same server where SQL Server is hosted. I also have SQL Server Reporting Services running on the same server, generating user reports.
So my server is basically limited by disk read/write speed. I'm going to get another server and install another SQL Server instance on it in order to host SSRS there. My main criterion is to get data that is as fresh as possible.
I've looked at a couple of solutions. Currently I take a backup via a job, copy it to the second server, and restore it there, also via a job. But that's not the best solution.
All the replication mechanisms (transactional, merge, snapshot) affect the publisher database by locking its tables, which is unacceptable for me.
So I wonder: is there any way to create a read-only replica that would be synced periodically without affecting the main database? I would put all the report load on that replica and have the primary database used only by the web app.
What solution might suit my problem? I'm not a DBA, so I'd like a direction to start investigating. Thanks.
Transactional Replication is typically used to off-load reporting to another server/instance and can be near real-time in a best case scenario. The benefit of Transactional Replication is you can place different indexes on the subscriber(s) to optimize reporting. You can also choose to replicate only a portion of the data if only a subset is needed for reporting.
The only time locking occurs with Transactional Replication is when you generate a snapshot. With concurrent snapshot processing, which is the default for Transactional Replication, the shared locks are only held for a short period of time, so users are able to continue working uninterrupted. Either way, this shouldn't be an issue since you'll likely be generating the snapshot during a period of low user activity anyway.
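If it helps as a starting point, the concurrent snapshot behavior is just a parameter when the publication is created; a minimal sketch, assuming the distributor and the publication database have already been enabled for replication (the publication name is illustrative):

-- Run in the publication database.
EXEC sp_addpublication
    @publication = N'WebAppDB_Reporting',
    @sync_method = N'concurrent',    -- concurrent snapshot processing: shared locks held only briefly
    @repl_freq   = N'continuous',
    @status      = N'active';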

Setting Locking Granularity for All Statements in a Stored Procedure - Microsoft SQL Server 2000

I'm developing an ETL application with batch processing. There is low (i.e. no) concurrency for updates. I'd like to avoid the overhead of granular locks and lock escalation by merely locking the entire table.
I'd like to avoid having to specify TABLOCK in every statement. Is there any way to set the locking granularity at the top of a stored procedure such that every statement automatically gets table locks on every table used? Shared or exclusive doesn't matter, though shared is preferred; the ETL will run overnight with no ad hoc query load and prior to a batch of reports triggered when the ETL is complete.
Thanks!
You will need to take a look at Transaction Isolation Levels
To be honest though, I can't see why you need to be doing anything. I would have thought SQL Server would do a good enough job of locking by itself.
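For what it's worth, the isolation level can be set once at the top of the procedure and it then applies to every statement in that scope; a minimal sketch (procedure and table names are hypothetical, and note that this changes lock behavior and duration but does not by itself force table-level granularity the way a TABLOCK hint does):

CREATE PROCEDURE dbo.usp_NightlyEtlLoad    -- hypothetical procedure name
AS
BEGIN
    -- Applies to every statement executed in this procedure's scope,
    -- so it does not have to be repeated per statement.
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

    BEGIN TRANSACTION;
        INSERT INTO dbo.FactSales (SaleDate, Amount)    -- illustrative ETL step
        SELECT SaleDate, Amount
        FROM dbo.StagingSales;
    COMMIT TRANSACTION;
END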

Really slow schema information queries on SQL Server 2005

I have a database with a rather large number of tables, about 3500, and an application that needs to access a table list.
On a particular server this takes over 2.5 min to return.
EXEC sp_tables @table_type = "'TABLE'"
I know there are faster ways to do that but sadly I'm not in a position to modify the application and need to find a way to push it below 30 seconds so the application doesn't throw timeout errors.
So. What, if anything, can I do to improve the performance of this sp within sql server?
I have seen these stored procedures run slowly if you do not have the VIEW DEFINITION permission granted to your user account. From what I read, this causes an extra security check to occur, slowing down the query.
Maybe a SQL guru can comment on why, if this does help your problem.
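If you want to test that theory, granting the permission at the database level is a one-liner (database and user names are illustrative):

USE YourDatabase;
GRANT VIEW DEFINITION TO [AppReportUser];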
Well, sp_tables is system code and can't be changed (you could work around that in SQL Server 2000, but not in SQL Server 2005+).
Your options are
Change the SQL
Change command timeout
Bigger server
You've already said "no" to the obvious solutions...
You need to approach this just like any other performance problem. Why is it slow? Namely, where does it block? Disk IO? CPU? Network? Lock contention? The scientific method is to use a methodology like Waits and Queues, or its newer SQL 2008 equivalent Troubleshooting Performance Problems in SQL Server 2008. The lazy way is to simply check the wait_type, wait_time and wait_resource columns in sys.dm_exec_requests for the session executing the sp_tables call. Once you find out what is blocking the execution, you can proceed accordingly.
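A quick sketch of that lazy check (replace the session_id value with the spid that is executing sp_tables):

SELECT session_id, status, command, wait_type, wait_time, wait_resource, blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id = 53;   -- the session running sp_tables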
If I were to venture a guess, you'll discover contention as the cause: other sessions are locking table metadata exclusively and thus block the execution of sp_tables, which has to wait until all the operations in front of it finish.

should i advocate migrating from access to (my)sql

We have a Windows MFC app that is written against an Access database on a company server. The db is not that big: 19 MB. There are at most 2-3 users accessing it at any one time. It is used in a factory environment where access speed (or lack thereof) over the intranet becomes noticeable, as it is part of the manufacturing time for our widgets.
The scenario is this: as each widget is completed, it gets a record in the db. By the end of the year, the db is larger and searching for a record takes longer and longer. The solution so far has been to manually move older records to an archival table about once a year.
We are reworking other portions of this app right now, and it would be a good time to move to another db if we are going to do it.
It is my understanding that if we were using SQL, the search time would not go up as the table gets bigger because the entire .mdb does not have to be sent over the network each time. Is this correct? Does anyone have any insight about whether it could be worth it to go to the trouble (time and money) of migrating to a new db, or should I just add more functionality to the application we have now, maybe automatically purge the older records from time to time, and add additional facilities to the app to get at the older records when needed?
Thanks for any wisdom you can share.
Since your database is small and has very few users, I could not make a solid case for migration. I would definitely set up a script to archive old records on a more frequent basis (don't archive into the same db; that would somewhat defeat the purpose).
But also make sure two things are in order.
Indexes. If your queries start slowing down, make sure you have proper indexes (see the sketch below this list).
http://support.microsoft.com/kb/304272
Your network connection between computers is fast. Maybe upgrade to gigabit cards and a router? Possibly put the db on a SCSI drive (RAID 10 for speed and redundancy).
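A minimal sketch of adding such an index on the field the searches filter by (table and column names are illustrative; Access accepts this DDL through the query designer's SQL view, a DAO Execute call, or the ODBC driver):

CREATE INDEX idx_Widgets_CompletedDate
ON Widgets (CompletedDate);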
Throwing advanced technology at simple problems is an expensive way to go and not always the answer!
First of all, the idea that the whole table or the whole database is transferred across the network is simply incorrect. If the queries are indexed, then the search times should not go up that much over time.
As others have mentioned, spending the time and money to set up a database server, and then having someone maintain, manage, and support it, is certainly a possibility here. However, keep in mind that an application simply migrated from JET to SQL Server will in many cases run slower; in fact, SQL Server is slower than JET when no network is involved.
So, I would take some time to ascertain why things slow down so much, and also check into how indexing is set up.
So, just keep in mind that it is pure folklore and myth that whole tables or the whole database are transferred over the network. This idea persists ONLY because most people have no computer training and do not know or understand how the JET data engine works.
I would probably move to either Microsoft SQL Server 2008 R2 Express Edition (free) or MySQL (free) if there is both funding and time to put in a data access layer. Because you will be making requests of a remote server and not operating on data at the local workstation, this move is very involved from a development standpoint.
However, you should analyze whether or not it's more cost-effective to perform your archival process quarterly or monthly, and just move the archive database to SQL Server 2008 R2 Express Edition. (You can install the Microsoft SQL Server Management Studio client tools on workstations and query the archival database for faster reports on historical data without rewriting your entire production application; similar solutions exist for MySQL or other OSS/free RDBMSs.)
I have clients with 300 MB databases, although they should be upsizing to SQL Server for other reasons. 19 MB is relatively small. If performance is bad enough that archiving speeds things up, then check the indexes on the tables for all your sorting and selection fields. Albert gave you a good URL there to check.
Entire MDB files do not go down the wire. Unless you are missing indexes.
Instead of shipping the DB over the network to the client and then performing queries, you could instead write a small wrapper on the server that handles requests, looks up the result in the Access DB (using SQL + the Access ODBC driver), and returns the result. This avoids the overhead of a large migration you might not need and still gets rid of the basic problem the users are experiencing.
Moving to a "proper" database solution is the best long term solution, but if your needs scale linearly and slowly over the next 30 years, it's hard to justify an expensive migration. That said, if you expect to really ramp up, or want to be more "future-proof", migrating now will likely save money/time.
It is my understanding that if we were using SQL, the search time would not go up as the table gets bigger because the entire .mdb does not have to be sent over the network each time. Is this correct?
This general idea is true for almost all databases. The idea of a database is to separate your application from the actual data. The data resides in a database server. Your application doesn't.
Does anyone have any insight about whether it could be worth it to go to the trouble (time and money) of migrating to a new db
Yes. I have proposed this many times. It's expensive. It's complicated. Your MS Access database will never get better or faster.
Other database servers will (and can) get faster and more sophisticated. After all, you're not sending .MDB files over a network anymore. The limitations are reduced. You're working with standard SQL through ODBC. Any database will work at the other end of ODBC. You can fire vendors to find better, faster, cheaper products. Once you stop using Access you have choices.
Either stop using Access now or plan to suffer with it forever. And remake this decision every year until the end of time.