Our application issues an NHibernate-generated SQL query. At application runtime, the query takes about 12 seconds to run against a SQL Server database. SQL Profiler shows over 500,000 reads.
However, if I capture the exact query text using SQL Profiler and run it again from SQL Server Management Studio, it takes 5 seconds and shows fewer than 4,600 reads.
The query uses a couple of parameters whose values are supplied at the end of the SQL text. I'd read a little about parameter sniffing and inefficient query plans, but I had thought that applied only to stored procedures. Maybe NHibernate holds the result set open while it instantiates its entities, which could explain the longer duration, but what could explain the extra 494,000 "reads" for the same query when performed by NHibernate? (No additional queries appear in the SQL Profiler trace.)
The query is specified as a LINQ query using NHibernate 3.1's LINQ facility. I didn't include the query itself because this seems like a more general question: what could explain such a dramatic difference?
In case it's pertinent, there also happens to be a varbinary(max) column in the results, but in our situation it always contains null.
Any insight is much appreciated!
Be sure to read: http://www.sommarskog.se/query-plan-mysteries.html
The same rules apply for procs and sp_executesql. A huge reason for shoddy plans is passing in an nvarchar param for a varchar field; the implicit conversion causes index scans as opposed to seeks.
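For illustration, a minimal sketch of that mismatch (the Customers table, its varchar LastName column, and the index on it are all hypothetical):

    -- Hypothetical schema: dbo.Customers(LastName varchar(50)) with an index on LastName.

    -- varchar parameter matches the column type: the optimizer can do an index seek.
    EXEC sp_executesql
        N'SELECT * FROM dbo.Customers WHERE LastName = @name',
        N'@name varchar(50)',
        @name = 'Smith';

    -- nvarchar parameter forces an implicit conversion of the column
    -- (nvarchar outranks varchar in type precedence), which typically
    -- degrades the index seek into a scan.
    EXEC sp_executesql
        N'SELECT * FROM dbo.Customers WHERE LastName = @name',
        N'@name nvarchar(50)',
        @name = N'Smith';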
I very much doubt the output is affecting the perf here; it is more likely an issue with one of the params sent in, or with the selectivity of the underlying tables.
When testing your output from Profiler, be sure to include the sp_executesql wrapper and make sure your session settings match (things like SET ARITHABORT); otherwise you will cause a new plan to be generated.
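For example, SSMS runs with SET ARITHABORT ON by default while ADO.NET clients typically do not, and the SET options in effect are part of the plan cache key. A sketch of how to check (the text filter is a placeholder):

    -- To reproduce the application's plan in SSMS:
    SET ARITHABORT OFF;

    -- Inspect the SET options recorded with each cached plan:
    SELECT st.text, pa.value AS set_options
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
    WHERE pa.attribute = 'set_options'
      AND st.text LIKE '%distinctive fragment of your query%';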
You can always dig up the shoddy plan from the execution cache via sys.dm_exec_query_stats.
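A sketch of that lookup (the WHERE filter is a placeholder for a distinctive fragment of your query text):

    -- Locate the expensive plan and its I/O statistics by query text:
    SELECT TOP (10)
        qs.total_logical_reads,
        qs.execution_count,
        st.text,
        qp.query_plan
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    WHERE st.text LIKE '%distinctive fragment of your query%'
    ORDER BY qs.total_logical_reads DESC;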
A program generates and sends queries to SQL Server (a high-load production system). I want to get the execution plan for one specific query against one specific table. I started Profiler with the "Showplan XML" event and set filters on TextData (LIKE %MyTable%) and DatabaseName. It shows rows whose TextData contains XML describing the execution plans (for all queries against my table). But I know there are 5 different SQL queries hitting this table.
How can I match a specific query with its corresponding plan, without using statistics?
Is there a reason this has to be done on the production environment? Most really bad execution plans (missing indexes causing table scans etc.) will be obvious enough on a dev environment where you can use all the diagnostics you want.
Otherwise running the SQL on the query cache (as in the linked question someone else mentioned) will probably have the lowest impact as it just queries a system table rather than adding diagnostics to every query.
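For example, something along these lines pairs each distinct query text touching MyTable with its cached plan (assuming the queries are still in the cache):

    SELECT st.text, qp.query_plan, cp.usecounts
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    WHERE st.text LIKE '%MyTable%'
      AND st.text NOT LIKE '%dm_exec_cached_plans%'; -- exclude this query itself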
So I have a summary I need to return to the end-user application.
It should accept 3 parameters: DateType, StartDate, and EndDate.
DateType determines which date field I use to filter the data.
The way I accomplished this was putting all the IDs of the records for a given DateType into a TEMP table and then joining my summary to the list of IDs, as in the sketch below.
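A minimal sketch of that pattern (table and column names are hypothetical; @DateType, @StartDate, and @EndDate are the procedure's parameters):

    CREATE TABLE #FilteredIds (Id int PRIMARY KEY);

    IF @DateType = 'Created'
        INSERT INTO #FilteredIds (Id)
        SELECT Id FROM dbo.Orders WHERE CreatedDate BETWEEN @StartDate AND @EndDate;
    ELSE IF @DateType = 'Shipped'
        INSERT INTO #FilteredIds (Id)
        SELECT Id FROM dbo.Orders WHERE ShippedDate BETWEEN @StartDate AND @EndDate;

    -- The summary then joins to the pre-filtered list of IDs.
    SELECT s.*
    FROM dbo.OrderSummary AS s
    JOIN #FilteredIds AS f ON f.Id = s.OrderId;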
This worked fine when running the query directly on the SQL Server that houses the data.
However, that is a replicated server, so when I compiled it into a stored procedure on the server with the rest of the application data, it slowed the query down, i.e., 2 seconds vs. 50 seconds.
I think the cross-server join from the temp table, which is created on the application's SQL Server, to the tables on the replication server is causing the slowdown.
Are there any methods or techniques that I can use to get around this and build this all in one stored procedure?
If I create 3 stored procedures, each hard-coded to its own date field, then they are fast again. However, this means maintaining multiple stored procs for the same thing.
First off, if you are running a version of SQL Server older than 2012 SP1, one problem is that users who aren't allowed to run DBCC SHOW_STATISTICS (which is most users who aren't sysadmins, see the "Permissions" section in the documentation) don't get access to statistics on remote tables. This can severely cripple the optimizer's ability to generate a good execution plan. Upgrading SQL Server or granting more permissions can help there.
If your query involves filtering or joining on a character column, make sure the remote server is flagged in the linked server options as "collation compatible". If this option is off, SQL Server can't assume strings can be compared across the servers and it will start pumping entire tables up and down just to make sure the data ends up where the comparison has to be made.
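For example (the linked server name REMOTESRV is hypothetical; only enable this if the collations really are compatible):

    EXEC sp_serveroption
        @server   = 'REMOTESRV',
        @optname  = 'collation compatible',
        @optvalue = 'true';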
If the execution plan is as good as it gets and it's still not good enough, one general (lame) technique is to transfer all data locally first (SELECT * INTO #localtable FROM remote.db.schema.table), then run the query as a non-distributed query. Obviously, in order for this to work, the remote table cannot be "too big" and in some cases this actually has worse performance, depending on how many rows are involved. But it's always worth considering, because the optimizer does a better job with local tables.
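A minimal sketch, with hypothetical server and table names:

    -- Copy the remote table locally, then query it as an ordinary local table:
    SELECT *
    INTO #localtable
    FROM REMOTESRV.SomeDb.dbo.SomeTable;

    SELECT t.Id, t.Amount          -- the original query, now entirely local
    FROM #localtable AS t
    JOIN dbo.LocalTable AS l ON l.Id = t.Id;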
Another approach that avoids pulling tables together across servers is packing up data in parameters to remote stored procedure calls. Entire tables can be passed as XML through an NVARCHAR(MAX) parameter, since neither XML columns nor table-valued parameters are supported in distributed queries. The basic idea is the same: avoid the need for the optimizer to figure out an efficient distributed query. The best approach greatly depends on your data and your query, obviously.
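A sketch of that pattern, with hypothetical names throughout:

    -- Caller: serialize the rows to XML and ship them as nvarchar(max).
    DECLARE @payload nvarchar(max) = CONVERT(nvarchar(max),
        (SELECT Id FROM #FilteredIds FOR XML PATH('row'), ROOT('rows')));

    EXEC REMOTESRV.SomeDb.dbo.usp_Summarize @idsXml = @payload;

    -- Inside the remote procedure, shred the XML back into rows:
    -- DECLARE @x xml = @idsXml;
    -- SELECT r.value('(Id)[1]', 'int') AS Id
    -- FROM @x.nodes('/rows/row') AS n(r);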
I have inherited an existing system and am trying to figure out a few things.
The system does a
SELECT * FROM v_myView WHERE myViewCol = 'someValue'
and v_myView performs summation of Table1 based on myViewCol
Does SQL Server 2005 optimize the query or will summation always occur across the entire Table1?
I understand that I could use a parameterized view but don't want to go changing things unnecessarily.
Cheers
Geoff
Views have no runtime cost at all. They are always inlined into the surrounding query as if you had pasted the view definition as text. They would be impractical to use otherwise.
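A small illustration of what inlining means, reusing the names from the question (the Amount column is an assumption):

    CREATE VIEW dbo.v_myView
    AS
    SELECT myViewCol, SUM(Amount) AS Total
    FROM dbo.Table1
    GROUP BY myViewCol;
    GO

    -- This query...
    SELECT * FROM dbo.v_myView WHERE myViewCol = 'someValue';

    -- ...is optimized as if you had written the expanded form directly,
    -- so the filter is applied before the summation:
    SELECT myViewCol, SUM(Amount) AS Total
    FROM dbo.Table1
    WHERE myViewCol = 'someValue'
    GROUP BY myViewCol;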
Does SQL Server (2005) optimize the query or will summation always occur across the entire Table1.
It will be optimized.
This is a complicated question. I think the best explanation is here. I do wish Microsoft documentation were a little clearer on this point.
When a view is created, the query is parsed. This ensures that it is correct.
The execution plan is determined the first time the query is run (to a close approximation). This execution plan then remains in the plan cache for subsequent calls. So, if you have an index on the appropriate columns and the first execution has a where clause that would use the index, then subsequent calls will also use the index.
I say to a close approximation, because it is really the first time that a view is called when the plan is not in the plan cache. Certain changes to the database will flush the plan, as will restarting the server.
So, if you only access the view with the where clause, then subsequent uses of the view will be optimized for that purpose.
SQL Server 2005 will optimize the view each time it is referenced in a query: http://technet.microsoft.com/en-us/library/cc917715.aspx
"After view expansion, the SQL Server query optimizer compiles a single execution plan for the executing query."
I don't have 2005 installed, but it will operate similarly to 2008 R2. To view the query plan, right-click in the query window and select "Display Estimated Execution Plan" for more info and to spot any bottlenecks.
In the Query menu, there is also "Analyze Query in Database Engine Tuning Advisor", which may be of benefit to you.
I'm trying to optimize a SQL Server instance. I have some experience with MySQL, and one of the things that usually helps is enabling the query cache, which caches query results as long as you keep running the same query.
Is there something similar to this on SQL Server? Could you please point what is the name of this feature?
Thanks!
SQL Server doesn't cache result sets per se, but it does cache data pages which have been read, in addition to caching query execution plans. This means that if it has to read the same data pages again to answer a query, it will be faster since there are fewer physical reads (from disk), but you should still see the same number of logical reads.
Here is an article with more details.
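You can observe this yourself with SET STATISTICS IO (the table name below is a placeholder):

    SET STATISTICS IO ON;

    SELECT * FROM dbo.SomeBigTable;  -- first run: physical reads hit disk
    SELECT * FROM dbo.SomeBigTable;  -- second run: same logical reads,
                                     -- physical reads drop to (near) zero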
SQL Server will, in effect, cache query results, though not as literal result sets the way MySQL's query cache does, and it's a little more complicated than in MySQL's case (since SQL Server provides ACID guarantees that MySQL does not - at least, not with MyISAM). But you'll definitely find that the second time you execute a query on SQL Server, it'll be faster than the first time (as long as no other changes have happened).
There's no specific name for this feature, that I'm aware of. It's more a combination of caches...
Do LINQ-generated queries get cached effectively by SQL Server 2008?
Or is it better to use stored procedures with LINQ? And what about a view, then using compiled LINQ queries against it... opinions?
cheers
The emphasis here is on "effectively", and on whether one option is better than the others - e.g., views are cached well by SQL Server, so what about using LINQ on top of a view?
On top of the answers already given: according to Damien Guard, there's a glitch in the LINQ to SQL and EF LINQ providers that fails to set variable lengths consistently for queries involving string parameters.
http://damieng.com/blog/2009/12/13/sql-server-query-plan-cache
Apparently it's fixed in .NET 4.0.
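The symptom looks something like this (the Customers table is hypothetical): two statements that differ only in the declared parameter length each get their own plan cache entry.

    -- Only the declared parameter length differs, yet each statement
    -- is a distinct batch text and gets its own cached plan:
    EXEC sp_executesql
        N'SELECT * FROM dbo.Customers WHERE LastName = @p0',
        N'@p0 nvarchar(5)', @p0 = N'Smith';

    EXEC sp_executesql
        N'SELECT * FROM dbo.Customers WHERE LastName = @p0',
        N'@p0 nvarchar(8)', @p0 = N'Smithers';
    -- Declaring a fixed length such as nvarchar(4000) avoids the duplication.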
In the past I've written stored procs in place of LINQ queries, mainly for complex reporting-like queries rather than simple CRUD, but only after profiling my application.
L2S simply passes queries on to SQL Server 2008. So they will get cached, or not cached, like any other query submitted by any other process. The fact that a Linq query is compiled has no impact on how SQL Server processes the query.
Queries that LINQ generates are normal SQL queries, just like your own hand-crafted SQL queries, and they follow the same rules: if the query text is identical (down to the last comma and whitespace) to a query text seen before, chances are its execution plan has been cached and can be reused.
The point is: the query text has to be absolutely identical - if even a single whitespace is different, SQL Server considers it a new query and thus will go through the full process of parsing, analysing, finding a query plan and executing it.
But basically, yes - queries sent off by LINQ will be cached and reused - if they meet those criteria!
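A quick way to see the whitespace sensitivity for yourself (the Customers table is hypothetical; note that simple parameterization or the "optimize for ad hoc workloads" setting can change the details of what you see):

    -- Two semantically identical queries that differ only in whitespace:
    SELECT * FROM dbo.Customers WHERE Id = 1;
    SELECT *  FROM dbo.Customers WHERE Id = 1;  -- note the extra space

    -- The cache now holds a separate entry for each text:
    SELECT st.text, cp.usecounts
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE st.text LIKE '%dbo.Customers WHERE Id = 1%'
      AND st.text NOT LIKE '%dm_exec_cached_plans%'; -- exclude this query itself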