SQL View optimisation

I have inherited an existing system and am trying to figure out a few things.
The system does a
SELECT * FROM v_myView WHERE myViewCol = 'someValue'
and v_myView performs a summation over Table1 grouped by myViewCol.
Does SQL Server 2005 optimize the query or will summation always occur across the entire Table1?
I understand that I could use a parameterized view but don't want to go changing things unnecessarily.
Cheers
Geoff

Views have no runtime cost at all. They are always inlined into the surrounding query as if you had pasted the view definition as text. They would be impractical to use otherwise.
Does SQL Server (2005) optimize the query or will summation always occur across the entire Table1?
It will be optimized.
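To make that concrete, here is a minimal sketch; the view body and the amount column are assumptions, since the real definition wasn't posted:

-- Hypothetical definition of v_myView:
CREATE VIEW v_myView AS
SELECT myViewCol, SUM(amount) AS total
FROM Table1
GROUP BY myViewCol;

-- The outer query...
SELECT * FROM v_myView WHERE myViewCol = 'someValue';

-- ...is expanded by the optimizer as if you had written:
SELECT myViewCol, SUM(amount) AS total
FROM Table1
WHERE myViewCol = 'someValue'
GROUP BY myViewCol;

Because the filter is on the grouping column, the predicate can be pushed below the aggregate, so only the matching rows of Table1 are summed (and an index on myViewCol can be used).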

This is a complicated question. I think the best explanation is here. I do wish Microsoft documentation were a little clearer on this point.
When a view is created, the query is parsed. This ensures that it is correct.
The execution plan is determined the first time the query is run (to a close approximation). This execution plan then remains in the plan cache for subsequent calls. So, if you have an index on the appropriate columns and the first execution has a where clause that would use the index, then subsequent calls will also use the index.
I say "to a close approximation" because the plan is really determined the first time the view is called when the plan is not in the plan cache. Certain changes to the database will flush the plan, as will restarting the server.
So, if you only access the view with the where clause, then subsequent uses of the view will be optimized for that purpose.
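If you want to verify the reuse yourself, you can peek at the plan cache. A minimal sketch using the standard DMVs (sys.dm_exec_cached_plans and sys.dm_exec_sql_text, both available from SQL Server 2005 onwards):

-- Shows cached plans touching the view and how often each has been reused.
SELECT cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%v_myView%';

A rising usecounts value on the same plan confirms that subsequent calls are reusing it rather than recompiling.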

SQL Server 2005 will optimize the view each time it is referenced in a query: http://technet.microsoft.com/en-us/library/cc917715.aspx
"After view expansion, the SQL Server query optimizer compiles a single execution plan for the executing query."
I don't have 2005 installed, but it will operate similarly to 2008 R2. To view the query optimization plan, right-click in the query window and select "Display Estimated Execution Plan" for more info and to spot any bottlenecks.
In the Query menu, there is also "Analyse Query in Database Tuning Advisor", which may be of benefit to you.

Related

How can I find the SQL query for an execution plan?

A program generates and sends queries to SQL Server (a high-load production system). I want to get the plan for a specific query against a specific table. I started Profiler with the "Showplan XML" event and set filters on TextData (LIKE %MyTable%) and DatabaseName. It shows rows with XML in TextData that describe execution plans (for all queries against my table). But I know there are 5 different SQL queries for this table.
How can I match a specific query with its corresponding plan, without using statistics?
Is there a reason this has to be done on the production environment? Most really bad execution plans (missing indexes causing table scans etc.) will be obvious enough on a dev environment where you can use all the diagnostics you want.
Otherwise, querying the plan cache (as in the linked question someone else mentioned) will probably have the lowest impact, as it just reads system views rather than adding diagnostics to every query.
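As a sketch of that approach (MyTable is your table name; note the LIKE filter will also match this query itself):

-- One row per cached statement touching MyTable, with its XML plan alongside.
SELECT st.text, qp.query_plan, qs.execution_count, qs.total_logical_reads
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE '%MyTable%'
ORDER BY qs.total_logical_reads DESC;

Clicking the query_plan XML in Management Studio opens the graphical plan, so you can match each of your 5 query texts to its plan directly.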

Poor SQL performance after server transfer

We had a SQL Server 2005 box running FOR XML EXPLICIT queries quite happily with no performance issues. The machine (a Windows 2003 server) has unfortunately died, so I've had to do an emergency provision of a Windows 2012 box. The database files have been reattached to a SQL Server 2008 R2 instance and "work". However, the queries are horrendously slow: 5 seconds per query when previously they took fractions of a second. This makes the websites that they power unusable.
I've rebuilt all the indexes and run DBCC FREEPROCCACHE on all machines, but this has had no noticeable effect. What else can I look at? I can't run them on the 2016 SQL instance on the box because some of the queries use non-ANSI *= joins (I said it was old!).
If your query was running fine before, consider what else has changed; the query planner and the actual execution plan might help to pinpoint this.
When you say you are joining, have you considered how much you join? If the new machine has more data in the database, a join can quickly become prohibitively expensive. Reduce the data you need wherever you can; less data handling means less work.
Is there something you can pre-calculate before you run your query, or otherwise change to make it run faster?
I assume you are doing a SELECT, but if you UPDATE or DELETE data, the indexes also have to be maintained, which takes time (in that case, disable the index, load the needed data, then rebuild the index; a sketch follows).
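For the disable/load/rebuild pattern, something like this (table and index names are placeholders):

-- Disable a nonclustered index before a bulk load.
-- (Don't disable the clustered index; that makes the table unreadable.)
ALTER INDEX IX_MyTable_MyCol ON dbo.MyTable DISABLE;

-- ...load the data here...

-- Rebuilding recalculates the index once and brings it back online.
ALTER INDEX IX_MyTable_MyCol ON dbo.MyTable REBUILD;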
You don't mention any XML handling, but you have tagged the question for-xml. If your join is performed on XML data, using XQuery to extract the values first might also give a boost to performance.
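For example, shredding the XML into a relational shape before joining on it (the column, node, and attribute names here are assumptions):

-- Extract a scalar key from an xml column once, then join on the result.
SELECT t.Id,
       x.n.value('(@CustomerId)[1]', 'int') AS CustomerId
FROM dbo.MyTable AS t
CROSS APPLY t.XmlCol.nodes('/Order') AS x(n);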

Measuring the Performance of SQL Queries

Let me say ahead of time, that I have very little understanding of the algorithms that SQL queries go through, so please excuse my ignorance.
My question is: How do you go about evaluating the performance of a particular SQL query? And what metrics are considered?
For example,
SELECT * FROM MyTable;
and
SELECT * FROM MyTable UNION SELECT * FROM MyTable;
I'm guessing the second one is a lot slower even though both queries return the same results. But, how could someone evaluate the two and decide which one is better and why?
Are there tools to do this? Is there any type of SQL stack trace? Etc...
Thanks.
Assuming you're talking about SQL Server (you didn't specify...):
You need to look into SQL Server Profiler, and the best intro around is a six-part webcast series called Mastering SQL Server Profiler, in which Brad MacGehee walks you through how to start using Profiler and what to get from it.
Red-Gate Software also publishes a free e-book on Mastering SQL Server Profiler (also by Brad)
Also assuming you are talking about SQL Server: if you are using SQL Server Management Studio, you can try 'Display Estimated Execution Plan' and/or 'Include Actual Execution Plan' (from the Query menu).
The difference is that the first one doesn't execute the query, while the second does. So the second is more accurate, but the first is useful when you don't want to pay the cost of actually running a heavy query.
Both of them display the execution tree. You can hover over each node and see statistics.
The one to use for comparison is 'Estimated Subtree Cost' (the higher, the worse).
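For a quick numeric comparison of your two examples, you can also turn on I/O and timing statistics; a minimal sketch:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT * FROM MyTable;
SELECT * FROM MyTable UNION SELECT * FROM MyTable;

-- The Messages tab now shows logical reads and CPU time per statement;
-- the UNION version pays for a second scan plus the duplicate-removing sort.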
Hope this was helpful.

Same SQL Query Slower from NHibernate Application than SQL Studio?

Our application issues an NHibernate-generated SQL query. At application runtime, the query takes about 12 seconds to run against a SQL Server database. SQL Profiler shows over 500,000 reads.
However, if I capture the exact query text using SQL Profiler and run it again from SQL Studio, it takes 5 seconds and shows fewer than 4,600 reads.
The query uses a couple of parameters whose values are supplied at the end of the SQL text, and I'd read a little about parameter sniffing and inefficient query plans, but I had thought that related to stored procedures. Maybe NHibernate holds the resultset open while it instantiates its entities, which could explain the longer duration, but what could explain the extra 494,000 "reads" for the same query as performed by NHibernate? (No additional queries appear in the SQL Profiler trace.)
The query is specified as a LINQ query using NHibernate 3.1's LINQ facility. I didn't include the query itself because it seems like a basic question of philosophy: what could explain such a dramatic difference?
In case it's pertinent, there also happens to be a varbinary(max) column in the results, but in our situation it always contains null.
Any insight is much appreciated!
Be sure to read: http://www.sommarskog.se/query-plan-mysteries.html
Same rules apply for procs and sp_executesql. A huge reason for shoddy plans can be passing an nvarchar parameter for a varchar field; it causes index scans as opposed to seeks.
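As an illustration (table and column are hypothetical; assume OrderRef is varchar(20) and indexed):

-- nvarchar parameter: the varchar column is implicitly converted, forcing a scan.
EXEC sp_executesql
    N'SELECT * FROM dbo.Orders WHERE OrderRef = @ref',
    N'@ref nvarchar(20)',
    @ref = N'ABC123';

-- Parameter typed to match the column: an index seek is possible.
EXEC sp_executesql
    N'SELECT * FROM dbo.Orders WHERE OrderRef = @ref',
    N'@ref varchar(20)',
    @ref = 'ABC123';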
I very much doubt the output is affecting the perf here, it is likely to be an issue with one of the params sent in, or selectivity of underlying tables.
When testing your output from Profiler, be sure to include sp_executesql and make sure your session settings match (things like SET ARITHABORT); otherwise you will cause a new plan to be generated.
You can always dig the shoddy plan up from the execution cache via sys.dm_exec_query_stats.

Prepared SQL query time vs regular query time

I know, from what I've read, that prepared statements are faster since a pre-compiled cached version is used for recurring queries. My question is: exactly where is the time saved? As far as I can see, only the time taken to prepare the query is saved. Even prepared statements have to search the database, so no time is saved there. Am I wrong?
That's correct. The time saved by using prepared statements is generally in the database engine planning/compiling the query.
The part of a "prepared query" that is prepared is the execution plan. The plan tells the database how to execute the query: which indexes to use, and in which order. The execution plan also resolves any access rights.
Time is saved by building the execution plan once instead of for every query.
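In SQL Server terms, a minimal sketch (the table name is hypothetical):

-- First call: the plan is compiled and cached.
EXEC sp_executesql
    N'SELECT OrderId, Total FROM dbo.Orders WHERE CustomerId = @cust',
    N'@cust int', @cust = 42;

-- Second call: same SQL text and parameter definition, so the cached
-- plan is reused; only the compile step is skipped, not the data access.
EXEC sp_executesql
    N'SELECT OrderId, Total FROM dbo.Orders WHERE CustomerId = @cust',
    N'@cust int', @cust = 99;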