I have a big stored procedure on a SQL Server 2008 Express SP2 database that gets run about every 200 ms. Normal execution time is about 50 ms. What I am seeing is large inconsistency in this run time. It will execute for a while, say 50-100 times at 40-60 ms, which is expected; then, seemingly at random, the same stored procedure will take way longer, say 900 ms or 1.5 seconds, to run. Sometimes more than one call of the same procedure in a row will take longer too.
It appears that something is causing SQL Server to slow down dramatically every minute or so, but I can't figure out what. There is no timing pattern between the occurrences.
I have the same setup on two different computers, one of which is a clean XP Pro load with no virus checking and nothing installed except SQL server.
Also, the recovery model for all the databases is set to "Simple".
I would suggest breaking out applicable sections into their own stored procedures; there is only one query plan cached per batch.
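A minimal sketch of that split; the procedure names and bodies below are hypothetical placeholders, not your actual code:

    -- Each branch becomes its own procedure, so each gets its own cached plan.
    CREATE PROCEDURE dbo.usp_ProcessOrder_Insert @OrderId INT
    AS
    BEGIN
        SET NOCOUNT ON;
        -- ... insert logic here ...
    END
    GO

    CREATE PROCEDURE dbo.usp_ProcessOrder_Update @OrderId INT
    AS
    BEGIN
        SET NOCOUNT ON;
        -- ... update logic here ...
    END
    GO

    -- The outer procedure only dispatches; the sub-procedures compile
    -- and cache independently of each other.
    CREATE PROCEDURE dbo.usp_ProcessOrder @OrderId INT, @IsNew BIT
    AS
    BEGIN
        IF @IsNew = 1
            EXEC dbo.usp_ProcessOrder_Insert @OrderId;
        ELSE
            EXEC dbo.usp_ProcessOrder_Update @OrderId;
    END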
It looks like my slowdowns coincide with the SQL Server Plan Cache Object Counts counter hitting 999 and resetting.
Well, I have a stored procedure on SQL Server 2012. When I execute it with the same parameters from SSMS, it takes a different amount of time to return results every run. I have observed waits anywhere from 10 seconds to 10 minutes. What could be the reason? Where should I start digging? I cannot post the code here because it's too large, but I think some common recommendations might apply.
Well, the time difference between runs is rather large, so it could be that the system is under load when you run these queries. Is this in a production environment?
To troubleshoot:
Turn on Actual Query Plan and execute the query.
Check that other queries are not blocking your query (sp_who2; see the sketch after this list).
You can also run SQL Profiler when you run your query.
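For the blocking check, a minimal sketch (no application-specific names assumed):

    -- Quick look for blocking while the slow query runs:
    EXEC sp_who2;   -- check the BlkBy column for blocking session ids

    -- Or query the DMV directly for sessions that are currently blocked:
    SELECT session_id, blocking_session_id, wait_type, wait_time, command
    FROM sys.dm_exec_requests
    WHERE blocking_session_id <> 0;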
We have an ASP.NET MVC website running on a Webecs.com virtual dedicated server with 1 GB CPU and 3 GB of RAM, using a SQL Server Express database on the same server. Now and then, the database gives a timeout error that is fixed temporarily by executing the sp_updatestats stored procedure.
Initially, we thought it was a RAM problem, and we raised the RAM in the server to the current 3 GB. Even though the problem is less frequent now, it still happens when traffic on the website increases and more queries are executed. We have been monitoring CPU and RAM usage and neither seems to be the problem: CPU is around 30% with some peaks up to 90%, and RAM is around 80%.
We have the exact same website running on a different, more powerful server with SQL Server 2008 R2, and it works without problems.
Any ideas what’s happening here?
Edit
Queries are of a normal size, nothing too big.
We have profiled it with Glimpse.
There are no N+1 queries; there is an average of 10 queries per page, and sometimes the timeout happens on the login page, where there is only one query.
The database isn't too big either.
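Since sp_updatestats temporarily fixes it, stale statistics are a plausible suspect. A hedged sketch of how one might check when statistics were last updated (YourDatabase is a placeholder; weigh the async option against your workload before enabling it):

    -- When was each statistic last updated?
    SELECT OBJECT_NAME(s.object_id) AS table_name,
           s.name AS stats_name,
           STATS_DATE(s.object_id, s.stats_id) AS last_updated
    FROM sys.stats AS s
    WHERE OBJECT_SCHEMA_NAME(s.object_id) <> 'sys'
    ORDER BY last_updated;

    -- If queries stall while statistics rebuild synchronously,
    -- asynchronous updates can smooth out the spikes:
    ALTER DATABASE YourDatabase SET AUTO_UPDATE_STATISTICS_ASYNC ON;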
I'm developing an application which queries data from the host and then inserts it into a SQL database. My application runs at random times on our PCs (we have 600-700 PCs) from a server location, so the application isn't installed on every PC; it is only located on the server. So it is possible that sometimes it runs on 50 PCs at once.
I keep getting two types of SQL error messages. The first is a timeout, and the other is an SSL security error. Right now my application executes about 5-10 SQL commands. So I'm thinking of rewriting my code to call stored procedures, which would reduce the number of SQL calls. The only question is: is it worth it? Of course it has its advantages, because if I have to change something, it is enough to change the stored procedure and I don't have to recompile my application. But won't that cause trouble for the SQL server? I mean, isn't it a problem when the same stored procedure is executed 50 times at the same time?
So which way is better: using stored procedures, or using SQL commands in my code?
Thanks!
If you often call several identical SQL statements in a row, the answer is clear-cut: use SPs to reduce the number of round trips. The load on your SQL Server is not going to change, but the network latency will go down due to the reduced number of round trips. As an added bonus, your system architecture may become easier to understand and maintain, because the complex SQL logic would be encapsulated in your data access layer.
This may not have anything to do with your SSL errors, but the situation with timeouts has a decent chance of improving.
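For illustration, a minimal sketch of the idea; every object name below is hypothetical, not from your application:

    -- Three client-side statements collapsed into one procedure,
    -- so each PC makes one round trip instead of three.
    CREATE PROCEDURE dbo.usp_SaveHostReading
        @MachineName SYSNAME,
        @ReadingValue INT
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Register the machine if we have not seen it before.
        INSERT INTO dbo.Machines (MachineName)
        SELECT @MachineName
        WHERE NOT EXISTS (SELECT 1 FROM dbo.Machines
                          WHERE MachineName = @MachineName);

        -- Store the reading queried from the host.
        INSERT INTO dbo.Readings (MachineName, ReadingValue, ReadAt)
        VALUES (@MachineName, @ReadingValue, GETDATE());

        -- Track when the machine last checked in.
        UPDATE dbo.Machines
        SET LastSeen = GETDATE()
        WHERE MachineName = @MachineName;
    END

Fifty concurrent executions of the same procedure are not a problem in themselves; SQL Server compiles the plan once and the concurrent executions share it.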
I am facing some issues with my stored procedures.
I have one stored procedure for a stacked bar graph, showing one month's data.
Earlier, on my local server, it was taking more than 40 seconds, so I made some changes and now it takes 4 seconds. The same query, when I run it on my live server, takes more than 40 seconds.
The record counts are the same as on my local server.
Can anybody tell me what I should do to make it faster on my live server?
A good start is to run SQL Server Management Studio (SSMS), load up the query, and switch on 'Display Actual Execution Plan'; this will show you exactly what SQL Server is doing with your query. It will also show a relative '% cost' for each step in the query, which helps identify which table/join/aggregate is causing the query to take so long.
I also believe that the latest version of SSMS advises which indexes should be added.
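Outside of SSMS, the missing-index DMVs expose similar suggestions; treat the output as hints to evaluate, not a prescription:

    -- Tables where SQL Server believes an index would have helped,
    -- roughly ranked by how often and how much it would have helped.
    SELECT TOP 10
           d.statement AS table_name,
           d.equality_columns,
           d.inequality_columns,
           d.included_columns,
           s.user_seeks,
           s.avg_user_impact
    FROM sys.dm_db_missing_index_details AS d
    JOIN sys.dm_db_missing_index_groups AS g
        ON d.index_handle = g.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS s
        ON g.index_group_handle = s.group_handle
    ORDER BY s.user_seeks * s.avg_user_impact DESC;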
Hope this helps.
Rich.
It's a complicated question. A lot of parameters can change the execution time:
CPU speed,
RAM,
Indexes
Assuming the live server is more powerful in terms of processing power and RAM, indexes are what you would want to look into.
Use indexes on your MySQL tables. It may also be down to your hosting server's performance; the server may be experiencing downtime.
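If indexing really is the gap between the two servers, a minimal example with hypothetical table and column names (the syntax works in both MySQL and SQL Server):

    -- Index the columns the graph query filters and groups on,
    -- so the month's data is read with a seek instead of a full scan.
    CREATE INDEX IX_Sales_MonthYear ON Sales (SaleMonth, SaleYear);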
I have a database with a rather large number of tables, about 3500, and an application that needs to access a table list.
On a particular server this takes over 2.5 minutes to return.
EXEC sp_tables @table_type = "'TABLE'"
I know there are faster ways to do that but sadly I'm not in a position to modify the application and need to find a way to push it below 30 seconds so the application doesn't throw timeout errors.
So. What, if anything, can I do to improve the performance of this sp within sql server?
I have seen these stored procedures run slowly if you do not have the VIEW DEFINITION permission granted to your user account. From what I read, this causes a security check to occur, slowing down the query.
Maybe a SQL guru can comment on why, if this does help your problem.
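If you want to test that theory, granting the permission at the database level is a one-liner (app_user is a placeholder for the account the application uses):

    -- Allow the account to see object definitions/metadata in this database.
    GRANT VIEW DEFINITION TO app_user;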
Well, sp_tables is system code and can't be changed (you could work around this in SQL Server 2000, but not in SQL Server 2005+).
Your options are
Change the SQL
Change command timeout
Bigger server
You've already said "no" to the obvious solutions...
You need to approach this just like any other performance problem. Why is it slow? Namely, where does it block? Disk IO? CPU? Network? Lock contention? The scientific method is to use a methodology like Waits and Queues, or its newer SQL 2008 equivalent, Troubleshooting Performance Problems in SQL Server 2008. The lazy way is to simply check the wait_type, wait_time and wait_resource columns in sys.dm_exec_requests for the session executing the sp_tables call. Once you find out what is blocking the execution, you can proceed accordingly.
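For illustration, the lazy way as a query; the session id (53 here) is a placeholder for whichever session is running sp_tables:

    -- What is the sp_tables session currently waiting on?
    SELECT session_id, status, command,
           wait_type, wait_time, wait_resource,
           blocking_session_id
    FROM sys.dm_exec_requests
    WHERE session_id = 53;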
If I were to venture a guess, you'll discover contention as the cause: other sessions are locking table metadata exclusively and thus block the execution of sp_tables, which has to wait until all operations ahead of it finish.