My Entity Framework is set up to emit the SQL generated, followed by the time taken to run the query, to the Output pane.
When I run it locally, the EF query takes 0.064s (as can be seen in the Output pane), and the SQL (when run by itself in Management Studio) takes about the same. In production the EF query takes 0.660s, yet the generated SQL takes only 0.157s.
There are about 50 rows returned. All the other EF queries are running at the expected speed.
What can cause the EF to take so much longer to run than the SQL it generates?
Thank you for any ideas.
The simplest way to figure out why queries run differently when called from different locations is to check the execution plans for both of them. From SSMS it's simply a case of including the actual execution plan in the output. For a live SQL Server you can capture the plan with SQL Profiler.
When you have both plans, compare them and figure out the differences.
One common reason for a query running differently is the ARITHABORT setting. Your app probably connects to SQL Server with it turned OFF, whereas SSMS turns it ON by default; because SET options are part of the plan-cache lookup, this can cause SQL Server to compile and cache a different execution plan for each.
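If SET options do turn out to be the difference, you can align the app's session with SSMS. A minimal sketch, assuming an EF 5+ DbContext; MyContext and Widgets are placeholder names:

    // Open the connection ourselves so EF keeps reusing this one session,
    // then match the session's ARITHABORT setting to the SSMS default.
    using (var context = new MyContext())
    {
        context.Database.Connection.Open();
        context.Database.ExecuteSqlCommand("SET ARITHABORT ON;");

        // Queries issued while the connection stays open now compile under
        // the same SET options SSMS uses, so the plans should match.
        var rows = context.Widgets.Where(w => w.IsActive).ToList();
    }

Compare the two plans first, though; forcing SET options only helps when the plans genuinely differ because of them.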
Related
When I troubleshoot a large .NET app that uses only stored procedures, I capture the SQL (which includes the SP name) from SQL Server Profiler, and then it's easy to do a global search for the SP in the source files and find the exact line that produced the SQL.
When using Entity Framework this is not possible, because the SQL statements are generated dynamically. However, there are times when I capture some problematic SQL statements from production and want to know where in the code they were generated.
I know EF can produce logs and tracing on demand, but that would probably be taxing for a busy server and produce too many logs. I have read a bit about MiniProfiler, but I'm not sure it fits my needs since I don't have access to the production server. I do, however, have access to attach SQL Server Profiler to the database server.
My idea is to have EF attach/inject a unique code into the generated SQL without affecting the outcome of the statement. I could then cross-reference that code with the line of code that produced it. The code would be static, meaning one unique, fixed code per EF LINQ statement, perhaps sent as a dummy SQL statement or as a comment alongside the real one.
I know this will add some extra traffic, but in my case it would add a lot of flexibility and cut a lot of troubleshooting time.
Any ideas of how to do this or any alternatives?
One very simple approach would be to execute something via ExecuteStoreCommand() (see: Refresh data from stored procedure). I'm not sure if you can "execute" just a comment, but at the very least you should be able to do something like:
ExecuteStoreCommand("DECLARE #MyTag VARCHAR(100) = 'some_unique_id';");
This is very simple, but you would have to find the association in two steps:
Get the SessionID (i.e. SPID) of the poorly performing query in SQL Server Profiler
Search the Profiler entries for the prior SQL statement for that same SPID
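As a rough sketch of how that plays out (objectContext, the entity set, and the tag text are all placeholders), issue the tag immediately before the query so the trace shows the two statements back to back:

    // The marker usually lands on the same pooled connection/SPID as the
    // query that follows; opening the connection explicitly and keeping it
    // open makes that deterministic rather than pooling luck.
    objectContext.ExecuteStoreCommand(
        "DECLARE @MyTag VARCHAR(100) = 'OrdersPage.LoadOverdueOrders';");

    var overdue = objectContext.Orders
        .Where(o => o.DueDate < DateTime.UtcNow)
        .ToList();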
Another option that might be a little more complicated, but would remove that additional step of making the association, is to "intercept" the commands before they get executed and inject a comment with your unique ID. Please see the following S.O. answer for details. You shouldn't need the full extent of what they did, but even if you do, it seems like all of the relevant code is there:
Adding a query hint when calling Table-Valued Function
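If upgrading to EF 6 is an option, its command interception API makes the injection straightforward (EF 5 and earlier have no such hook, which is why the linked answer wraps the provider instead). A minimal sketch; the tag text is a made-up example and you would substitute your own per-call-site codes:

    using System.Data.Common;
    using System.Data.Entity.Infrastructure.Interception;

    // Prepends a harmless comment to every command EF executes so the tag
    // shows up in SQL Server Profiler right inside the captured SQL text.
    public class TagCommandInterceptor : IDbCommandInterceptor
    {
        private static void Tag(DbCommand command)
        {
            // Guard against tagging the same command twice on retries.
            if (!command.CommandText.StartsWith("--"))
                command.CommandText = "-- tag: unique_code_here\r\n" + command.CommandText;
        }

        public void ReaderExecuting(DbCommand command,
            DbCommandInterceptionContext<DbDataReader> interceptionContext)
        { Tag(command); }

        public void NonQueryExecuting(DbCommand command,
            DbCommandInterceptionContext<int> interceptionContext)
        { Tag(command); }

        public void ScalarExecuting(DbCommand command,
            DbCommandInterceptionContext<object> interceptionContext)
        { Tag(command); }

        // The *Executed callbacks are required by the interface, but nothing
        // needs to happen after the command has run.
        public void ReaderExecuted(DbCommand command,
            DbCommandInterceptionContext<DbDataReader> interceptionContext) { }
        public void NonQueryExecuted(DbCommand command,
            DbCommandInterceptionContext<int> interceptionContext) { }
        public void ScalarExecuted(DbCommand command,
            DbCommandInterceptionContext<object> interceptionContext) { }
    }

    // Registered once at application startup:
    // DbInterception.Add(new TagCommandInterceptor());

A single fixed tag is shown; varying it per call site means flowing the code's identity into the interceptor yourself (for example through a thread-local field), which is left out of this sketch.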
By the way, this situation is a point in favor of using stored procedures instead of an ORM. And what do you expect to be able to do in terms of performance tuning once you do find the offending app code? (Another point in favor of stored procedures ;-).
Well, I have a stored procedure on SQL Server 2012. When I execute it with the same parameters from SSMS, the time it takes to return results varies wildly; I have observed waits anywhere from 10 seconds to 10 minutes. What could be the reason? Where should I start digging? I cannot post the code here because it's too large, but I think some common recommendations might apply.
Well, the time difference between runs is rather large, so it could be that the system is under load when you run these queries. Is this a production environment?
To troubleshoot:
Turn on the actual execution plan and execute the query.
Check that other queries are not blocking your query (sp_who2)
You can also run SQL Profiler when you run your query.
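If you can't sit in SSMS while the problem is happening, the sp_who2 check can also be scripted. A rough sketch (the connection string is a placeholder); sp_who2 reports the blocking session in its BlkBy column:

    using System;
    using System.Data.SqlClient;

    public static class BlockingCheck
    {
        // Runs sp_who2 and prints every session that is blocked by another.
        public static void Run(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("EXEC sp_who2;", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // sp_who2 shows "  ." in BlkBy when not blocked.
                        var blockedBy = reader["BlkBy"].ToString().Trim();
                        if (blockedBy.Length > 0 && blockedBy != ".")
                            Console.WriteLine("SPID {0} is blocked by {1}",
                                reader["SPID"], blockedBy);
                    }
                }
            }
        }
    }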
I have a rather complex Linq-to-SQL query which when run locally or from the AAT environment takes about 2-3 seconds. That's ok-ish. The problem I've got is on the QA and Production servers where the query takes 2-5 minutes(!). When profiling it looks like EF CF generates different SQL in these environments.
We are targeting .NET 4.0 using EF 5 (4.4).
A quick recap:
Operation takes 2-3 sec in local/AAT
Operation takes 2-5 minutes in QA/Prod
Databases have the same data
It does not matter which database I connect to or which environment I connect from, the performance issues stay the same.
Catching the generated SQL and running it through SQL Management studio gives the same execution times.
It seems like the biggest difference in the queries is that one of them uses more outer joins and the other uses more inner joins.
Does anybody have a clue to what's going on?
EDIT: Queries generated locally (22 sec) and on server (3 min 45 sec)
I have Windows Server 2008 R2 with Microsoft SQL Server installed.
In my application I am currently designing a tool for my users that queries the database to see if a user has any notifications. Since my users can access the application multiple times in a short timespan, I was thinking about putting some kind of cache in front of my query logic. But then I thought that SQL Server probably already does that for me. Am I right? Or do I need to configure something to make it happen? If it does, for how long does it keep the cache up?
It's safe to assume that MSSQL has the caching worked out pretty well =)
Don't bother trying to build anything yourself on top of it; simply make sure that the method you use to query for changes is efficient (e.g. don't query on non-indexed columns).
PS: wouldn't caching locally defeat the whole purpose of checking for changes on the database?
Internally the database does all sorts of things, including 'caching', but at all times it works incredibly hard to make sure your users see up-to-date data. So it has to do some work each time your application makes a request.
If you want to reduce the workload by keeping static data in your application then you have to implement it yourself.
The later versions of the .NET Framework have caching features built in, so you should take a look at those (building your own caching can get very complex).
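As an example, a minimal sketch using System.Runtime.Caching from .NET 4 (the class, key format, and 30-second window are all placeholder choices): keep the per-user notification count for a short window so repeated page hits within it skip the database entirely.

    using System;
    using System.Runtime.Caching;

    public static class NotificationCache
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        public static int GetCount(int userId, Func<int, int> loadFromDatabase)
        {
            string key = "notifications:" + userId;

            object cached = Cache.Get(key);
            if (cached != null)
                return (int)cached;

            int count = loadFromDatabase(userId);

            // The value can be stale for at most 30 seconds; pick a window
            // that matches how fresh notifications really need to be.
            Cache.Set(key, count, DateTimeOffset.UtcNow.AddSeconds(30));
            return count;
        }
    }

Note the trade-off the answers above point at: the database always returns current data, a local cache by definition does not.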
SQL Server will handle caching for you, yes. When you run a query or a stored procedure, SQL Server caches the execution plan and reuses it accordingly. From MSDN:
SQL Server execution plans have the following main components:

Query Plan: The bulk of the execution plan is a re-entrant, read-only data structure used by any number of users. This is referred to as the query plan. No user context is stored in the query plan. There are never more than one or two copies of the query plan in memory: one copy for all serial executions and another for all parallel executions. The parallel copy covers all parallel executions, regardless of their degree of parallelism.

Execution Context: Each user that is currently executing the query has a data structure that holds the data specific to their execution, such as parameter values. This data structure is referred to as the execution context. The execution context data structures are reused. If a user executes a query and one of the structures is not being used, it is reinitialized with the context for the new user.
If you wish to clear this cache you can execute sp_recompile or DBCC FREEPROCCACHE.
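For example, a small sketch (procedure name and connection string are placeholders) that clears the cached plan for a single object instead of flushing the whole server with DBCC FREEPROCCACHE:

    using System.Data.SqlClient;

    // Marks dbo.MyProcedure for recompilation; its next execution compiles
    // a fresh plan instead of reusing the cached one.
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("EXEC sp_recompile 'dbo.MyProcedure';", conn))
    {
        conn.Open();
        cmd.ExecuteNonQuery();
    }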
I am facing some issues with my stored procedures.
I have one stored procedure for a stacked bar graph, showing one month's data.
Earlier, on my local server, it took more than 40 seconds, so I made some changes and now it takes 4 seconds. The same query run on my live server takes more than 40 seconds.
The record counts are the same as on my local server.
Can anybody tell me what I should do to make it faster on my live server?
A good start is to run SQL Server Management Studio (SSMS), load up the query, and switch on 'Display Actual Execution Plan'; this will show you exactly what SQL Server is doing with your query. It will also show a relative '% cost' for each step in the query, which helps identify which table/join/aggregate is causing the query to take so long.
I also believe that the latest version of SSMS advises which indexes should be added.
Hope this helps.
Rich.
It's a complicated question; a lot of parameters can change the execution time:
CPU speed
RAM
Indexes
Assuming the server is more powerful in terms of processing power and RAM, indexes are what you would want to look into.
Use indexes on your tables. It may also be because of your hosting server's performance; the server may be experiencing downtime.