SQL 2005 tempdb growth and DTA syntax errors - sql

I arrived at work today to discover that one of our SQL 2005 servers had run out of disk space.
On examination, the database causing the problem was tempdb. It seems to have grown from around 8 MB to 16 GB, which caused me some concern. After kicking everyone out of the server and restarting it, tempdb is now back to its original size, so that part is no longer a problem.
So I decided to try to trace the query (or queries) causing tempdb to grow. There are only two active databases on the server, so I launched SQL Server Profiler and ran it using the "blank" template with the following events selected:
All of Errors / Warnings
T-SQL
Stored Procedures
I then fed this trace into the Database Engine Tuning Advisor, which is now reporting that "67% of the consumed workload has syntax errors".
Question 1) Should I be worried about such a high level of syntax errors? The errors are coming from a very well-known supplier of project management software; should I be contacting them about these errors?
Question 2) Are the events I selected likely to discover the root cause of my tempdb growth?
Apologies for the long questions; I am trying to include as much detail as I can.
Thanks in advance for any advice I receive.

I have used this article, Properly Sizing the SQL Server TempDB Database, to monitor the growth. Hope this helps.
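If it is useful, a minimal sketch along these lines (using the SQL Server 2005+ space-usage DMVs; page counts are 8 KB pages) can show which sessions are allocating the most tempdb space while the growth is happening:

    -- Sketch: top tempdb consumers by session (SQL Server 2005+).
    -- Page counts are 8 KB pages; multiply by 8 for KB.
    SELECT su.session_id,
           su.user_objects_alloc_page_count,
           su.internal_objects_alloc_page_count,
           es.login_name,
           es.program_name
    FROM tempdb.sys.dm_db_session_space_usage AS su
    JOIN sys.dm_exec_sessions AS es
        ON es.session_id = su.session_id
    ORDER BY su.user_objects_alloc_page_count
           + su.internal_objects_alloc_page_count DESC;

Cross-referencing the heaviest session_id with your Profiler trace should narrow down which query is responsible.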

Related

SQL Server Management Studio : execution timeout expired

I keep on getting a timeout in my Microsoft SQL Server Management Studio. I'm not running any code. All I am doing is trying to look at the tables within a database in the Object Explorer. I always get:
Execution Timeout Expired
I've looked at some of my settings and the execution timeout is set to 0, which should mean unlimited time. I'm not even running any code, just trying to understand what's in my database by going through the Object Explorer.
Thanks!
It depends on your work environment, but in all cases I suspect it is related to the database rather than to Management Studio itself.
If you are working on a server that is reached over the network by many other clients, then it could be:
A transient network problem,
A high load of requests on the server,
A deadlock or other problem related to multiprocess contention.
I suggest you troubleshoot the server during idle time and, if possible, detach the databases one by one to see which database is causing the problem. For that database, go through its stored procedures and functions and try to improve their performance.
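Before detaching anything, it may also be worth checking for blocking directly while a timeout is happening; a sketch like this (SQL Server 2005+ dynamic management views) lists blocked sessions and their blockers:

    -- Sketch: requests that are currently blocked, who blocks them, and what they run.
    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.wait_time,
           t.text AS current_statement
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id > 0;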

Map out SQL Server 2008 usage? (from logs)

All-
I'm trying to determine which SQL databases are currently being used the most (as well as what applications are requesting information from them).
Is there a log analyzing tool? Or something built into SQL server that could help me achieve this?
Ideally I'd like to show a map of server usage and understand which applications are actually hitting them.
Thanks!
sys.dm_db_index_usage_stats shows exactly how many times each index/table was read/scanned/updated since the server started up. This is the most important piece of information, since everything else (IO, RAM, CPU) can ultimately be traced back to these operations. The one piece of information not revealed here is blocking and contention, for which a good starting point is sys.dm_os_wait_stats. And finally there is sys.dm_exec_query_stats, which drills down to individual query CPU and execution times.
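As a rough illustration, a sketch like this rolls that DMV up per database to show which ones see the most reads and writes since the last restart:

    -- Sketch: aggregate index usage per database (counters reset on server restart).
    SELECT DB_NAME(us.database_id) AS database_name,
           SUM(us.user_seeks + us.user_scans + us.user_lookups) AS total_reads,
           SUM(us.user_updates) AS total_writes
    FROM sys.dm_db_index_usage_stats AS us
    GROUP BY us.database_id
    ORDER BY total_reads DESC;

It will not tell you which applications are calling in; for that, sys.dm_exec_sessions (program_name, host_name) or a Profiler trace is still needed.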
If you right-click on the server in Management Studio you will see a 'Reports' option. There are a lot of built in reports which might give you what you need (the 'Server Dashboard' report in particular shows which databases are consuming the most CPU and I/O).
Alternatively the Profiler provides a lot of (perhaps too much) valuable data.

What does it mean if the database always keeps going into RECOVERY?

Every time I run a query, my database does not respond to an immediate second query and complains that it is in recovery mode (though it does not show anything besides the database name). This happens for about 5-10 minutes, after which everything goes back to normal.
I am expecting a major crash, so I am copying the tables into a different database, but does anyone know why this could happen or whether there is a permanent fix?
Normally, a database is only in "Recovery" mode during startup - when SQL Server starts up the database. If your database goes into Recovery mode because of a SQL statement, you almost definitely have some sort of corruption.
This corruption can take one of many forms and can be difficult to diagnose. Before you do anything, you need to check a few things.
1. Make sure you have good backups of your database, copied onto a separate file system/server.
2. Check the Windows Event Log and look for errors. If any critical errors are found, contact Microsoft.
3. Check the SQL Server ERRORLOG and look for errors. If any critical errors are found, contact Microsoft.
4. Run chkdsk on all the hard drives on the server.
5. Run DBCC CHECKDB against your database (see the sketch after this list). If any errors are found, you can attempt to fix the database with the REPAIR_REBUILD option. If any errors cannot be fixed, contact Microsoft.
6. Restore a backup copy of your database onto a different server. This will confirm whether it is a problem within your database or with the SQL Server instance/machine.
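As a rough illustration of step 5 (YourDatabase is a placeholder; REPAIR_REBUILD requires single-user mode and should only be attempted after a backup):

    -- Check integrity first.
    DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;

    -- Only if CHECKDB reports errors that REPAIR_REBUILD can repair:
    ALTER DATABASE YourDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DBCC CHECKDB ('YourDatabase', REPAIR_REBUILD);
    ALTER DATABASE YourDatabase SET MULTI_USER;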
After steps 4, 5, and 6, run your queries again to see whether you can still force the database into Recovery mode. Corruption can occur for an untold number of reasons, but more important than anything is the data: restoring to a different server confirms whether the problem lies in your data or elsewhere. As long as you have backups that can be restored to a different SQL Server, and the restored copy does not continually go into Recovery mode, you don't have to worry too much.
I always put step 6 last because setting up a separate server with SQL Server and moving/restoring a large database can take an extensive amount of time; but if you already have a backup/test server in place, this might be a good first option. After all, it won't cause any downtime on your live server.
Finally, don't be afraid to contact Microsoft over this. Databases are often mission-critical, and Microsoft has plenty of tools at their disposal to diagnose problems just like this.
Late answer...
Does your database have AUTO_CLOSE set to true? When it is set, the DBMS has to bring the database back online each time, which may account for your symptoms.
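A quick way to check and, if needed, change that setting (YourDatabase is a placeholder):

    -- Sketch: see which databases have AUTO_CLOSE enabled, then turn it off.
    SELECT name, is_auto_close_on FROM sys.databases;
    ALTER DATABASE YourDatabase SET AUTO_CLOSE OFF;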
This can happen when the SQL Server service has gone down hard in the middle of write operations, and sometimes during server startup. Follow the query in this link to monitor recovery progress:
http://errorbank.blogspot.com/2012/09/mssql-server-database-in-recovery.html
I've only had this happen when the SQL Server service (or the server itself) has gone down hard in the middle of write operations. Once it came back, everything was fine.
However, if this is happening often, then I would suspect a disk-level failure of some sort. I would make sure the database is fully backed up and move it to another server while you run diagnostics on / rebuild the problem server.

Really slow schema information queries on SQL Server 2005

I have a database with a rather large number of tables, about 3500, and an application that needs to access a table list.
On a particular server this takes over 2.5 min to return.
EXEC sp_tables @table_type = "'TABLE'"
I know there are faster ways to do that but sadly I'm not in a position to modify the application and need to find a way to push it below 30 seconds so the application doesn't throw timeout errors.
So: what, if anything, can I do to improve the performance of this stored procedure within SQL Server?
I have seen these stored procedures run slowly if you do not have the VIEW DEFINITION permission granted on your user account. From what I read, this causes an extra security check to occur, slowing down the query.
Maybe a SQL guru can comment on why, if this does help your problem.
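If you want to test that theory, the grant itself is a one-liner; something like this sketch, where AppUser stands in for whatever account the application connects with:

    -- Sketch: grant database-level metadata visibility to the application's user.
    GRANT VIEW DEFINITION TO AppUser;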
Well, sp_tables is system code and can't be changed (you could work around that in SQL Server 2000, but not in SQL Server 2005+).
Your options are
Change the SQL
Change command timeout
Bigger server
You've already said "no" to the obvious solutions...
You need to approach this just like any other performance problem. Why is it slow? Namely, where does it block? Disk IO? CPU? Network? Lock contention? The scientific method is to use a methodology like Waits and Queues, or its newer SQL 2008 equivalent Troubleshooting Performance Problems in SQL Server 2008. The lazy way is to simply check the wait_type, wait_time and wait_resource columns in sys.dm_exec_requests for the session executing the sp_tables call. Once you find out what is blocking the execution, you can proceed accordingly.
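As a concrete illustration of that lazy check, something along these lines (53 is a placeholder; substitute the session_id running sp_tables):

    -- Sketch: see what the sp_tables call is waiting on while it runs.
    SELECT session_id, status, command, wait_type, wait_time, wait_resource, blocking_session_id
    FROM sys.dm_exec_requests
    WHERE session_id = 53;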
If I'd venture a guess, you'll discover contention as the cause: other sessions are locking table's metadata exclusively and thus block the execution of sp_tables, which has to wait until all operations in front of it finish.

MS SQL Concurrency, excess Locks

I have a database on MS SQL 2000 that is being hit by hundreds of users at a time. There are also intensive reports, built with Reporting Services 2005, hitting the same database.
When there are lots of reports running and people are using the database concurrently, we see blocking processes to the point that the system starts timing out any transaction attempted after some time in that situation.
Is there a global way to minimize blocking so that transactions can continue to flow?
Use optimistic locking, if updates are not happening often and the database is mainly used for reporting.
SQL Server has quite a pessimistic locking default.
A look into SQL Server Table Hints might get you started.
The reports can use WITH(NOLOCK).
Other possibilities are having the reports run off a read-only replica of the database or running off a datawarehouse version of the database which is optimized for the reporting needs.
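As an illustration of the hint mentioned above, a report query might look like this sketch (dbo.Orders and its columns are placeholders):

    -- Sketch: report query reading past locks with NOLOCK.
    -- Note: NOLOCK / READ UNCOMMITTED can return dirty, uncommitted rows.
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders WITH (NOLOCK)
    WHERE OrderDate >= '20240101';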
Since you are already using NOLOCK hints and READ UNCOMMITTED isolation level for your reports, the investigation needs to turn to the transactional queries coming in. This may get deep. Perhaps applications are keeping transactions open too long. It may also be the case that you have a lot of table scans or range scans in some of the other query volume, and those may be holding shared locks for long-running transactions. Those shared locks will block your writers.
You need to start looking at sp_lock, and seeing what kinds of locks are outstanding, see what locks the blocked queries are trying to obtain, and then examine the queries that are blocking the requestors.
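On SQL Server 2000, that investigation can start with something as simple as this sketch (the SPID passed to DBCC INPUTBUFFER is a placeholder taken from the BlkBy column of sp_who2):

    -- Sketch: list outstanding locks and sessions, then inspect a blocking SPID.
    EXEC sp_lock;
    EXEC sp_who2;
    DBCC INPUTBUFFER (53);  -- 53 is a placeholder for the blocking SPID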
This will help you if you are unfamiliar with SQL Server locking:
Understanding SQL Server 2000 Locking
Also, perhaps you could describe your disk subsystem. It may be undersized.
Thanks everyone for your support. What we did to mitigate the problem was to create a new database with a log shipping procedure that runs every hour to keep it in sync with the real one. The reports that do not need real-time data were pointed at that database, and the ones that need real-time data were restricted so only a few people can access them. The drawbacks of this method are that the data can be up to one hour out of sync and that we needed to set up a new server for that purpose only. Also, when the log shipping procedure runs, every connection is dropped for a very short period of time, which can be a problem for really long procedures or reports. After this I will review the queries from the reports so I can understand what can be optimized. Thanks, and I will recommend the site to the whole IT department.