Prepared SQL query time vs regular query time

From what I've read, prepared statements are faster because a pre-compiled, cached version is used for recurring queries. My doubt is: where exactly is the time saved? As far as I can see, only the time taken to prepare the query can be saved. A prepared statement still has to perform the database search, so no time is saved there. Am I wrong?

That's correct. The time saved by using prepared statements is generally in the database engine planning/compiling the query.

The part of a "prepared query" that is prepared is the execution plan. The plan tells the database how to execute the query: which indexes to use, and in which order. The execution plan also resolves any access rights.
Time is saved by building the execution plan once instead of for every query.
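As a minimal sketch (PostgreSQL syntax; the users table is a made-up example), the statement below is parsed and rewritten once at PREPARE time, and the server can then cache and reuse its plan, so each EXECUTE only performs the actual lookup:

-- Parsed once; the plan can be cached and reused for later executions.
PREPARE find_user (int) AS
    SELECT id, name FROM users WHERE id = $1;

EXECUTE find_user(42);   -- no re-parse, and typically no re-plan
EXECUTE find_user(99);

DEALLOCATE find_user;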

Related

Hasura querying becomes slower for tracked SQL function

For a tracked SQL function, the Hasura query takes a lot of time, but when it is executed directly in SQL it returns the data in only a few milliseconds. We are not able to figure out what the actual problem is. We are using a PostgreSQL database.
We followed some steps to reduce the response time:
Applying indexes on the DB
Analysing the query plan to reduce the cost
Querying only a limited set of data to reduce the response size
We tried to run the query directly in SQL, which only took a few milliseconds, but when we run it as a Hasura query with the same parameters it takes a lot of time.
I suspect that it is probably due to permissions that are being evaluated when you run the function through Hasura.
When you were analysing the query plan, did you also make sure you were passing in the roles, so that the plan captures any additional changes Hasura makes to the query in order to evaluate the permissions?
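One hedged way to check that from psql (the role name, session variable value and function name below are assumptions, not taken from the question) is to set the same role and session variables Hasura would use before inspecting the plan, so any permission filters are included:

BEGIN;
SET LOCAL ROLE hasura_user_role;                         -- hypothetical Hasura role
SET LOCAL hasura.user = '{"x-hasura-user-id": "42"}';    -- session variables Hasura injects
EXPLAIN ANALYZE
SELECT * FROM my_tracked_function('some-arg');           -- hypothetical tracked function
ROLLBACK;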

How can I find the SQL query for an execution plan?

A program generates and sends queries to SQL Server (on a high-load production system). I want to get the plan for a specific query against a specific table. I started Profiler with "Showplan XML" and set a filter on TextData (LIKE %MyTable%) and DatabaseName. It shows rows with XML in TextData that describe execution plans (for all queries against my table). But I know that there are 5 different SQL queries for this table.
How can I match a specific query with its corresponding plan, without using statistics?
Is there a reason this has to be done on the production environment? Most really bad execution plans (missing indexes causing table scans etc.) will be obvious enough on a dev environment where you can use all the diagnostics you want.
Otherwise running the SQL on the query cache (as in the linked question someone else mentioned) will probably have the lowest impact as it just queries a system table rather than adding diagnostics to every query.
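A minimal sketch of that approach (the %MyTable% filter comes from the question; the rest is standard DMV usage) pulls each cached statement together with its plan, so the five queries can be told apart:

SELECT TOP (50)
       st.text             AS query_text,
       qp.query_plan       AS plan_xml,
       qs.execution_count,
       qs.total_logical_reads
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE '%MyTable%'
ORDER BY qs.total_logical_reads DESC;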

SQL View optimisation

I have inherited an existing system and am trying to figure out a few things.
The system does a
SELECT * FROM v_myView WHERE myViewCol = 'someValue'
and v_myView performs a summation of Table1 based on myViewCol.
Does SQL Server 2005 optimize the query or will summation always occur across the entire Table1?
I understand that I could use a parameterized view but don't want to go changing things unnecessarily.
Cheers
Geoff
Views have no runtime cost at all. They are always inlined into the surrounding query as if you had pasted the view definition as text. They would be impractical to use otherwise.
Does SQL Server (2005) optimize the query or will summation always occur across the entire Table1?
It will be optimized.
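As a rough illustration (the view body and the amount column are assumptions based on the question), the view is expanded into the outer query, so with an index on Table1(myViewCol) the WHERE clause can drive a seek instead of summing the whole table:

CREATE VIEW v_myView
AS
SELECT myViewCol, SUM(amount) AS total    -- 'amount' is a placeholder column
FROM   Table1
GROUP BY myViewCol;
GO

-- The view is inlined here and the predicate is applied before the aggregation,
-- so only the matching rows of Table1 are summed.
SELECT * FROM v_myView WHERE myViewCol = 'someValue';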
This is a complicated question. I think the best explanation is here. I do wish Microsoft documentation were a little clearer on this point.
When a view is created, the query is parsed. This ensures that it is correct.
The execution plan is determined the first time the query is run (to a close approximation). This execution plan then remains in the plan cache for subsequent calls. So, if you have an index on the appropriate columns and the first execution has a where clause that would use the index, then subsequent calls will also use the index.
I say to a close approximation, because it is really the first time that a view is called when the plan is not in the plan cache. Certain changes to the database will flush the plan, as will restarting the server.
So, if you only access the view with the where clause, then subsequent uses of the view will be optimized for that purpose.
SQL Server 2005 will optimize the view each time it is referenced in a query: http://technet.microsoft.com/en-us/library/cc917715.aspx
"After view expansion, the SQL Server query optimizer compiles a single execution plan for the executing query."
I don't have 2005 installed, but it will operate similarly to 2008 R2. To view the query optimization plan, right-click in the query window and select "Display Estimated Execution Plan" for more info and to spot any bottlenecks.
In the Query menu there is also "Analyse Query in Database Tuning Advisor", which may be of benefit to you.

Same SQL Query Slower from NHibernate Application than SQL Studio?

Our application issues an NHibernate-generated SQL query. At application runtime, the query takes about 12 seconds to run against a SQL Server database. SQL Profiler shows over 500,000 reads.
However, if I capture the exact query text using SQL Profiler, and run it again from SQL Studio, it takes 5 seconds and shows less than 4,600 reads.
The query uses a couple of parameters whose values are supplied at the end of the SQL text, and I'd read a little about parameter sniffing and inefficient query plans, but I had thought that related to stored procedures. Maybe NHibernate holds the resultset open while it instantiates its entities, which could explain the longer duration, but what could explain the extra 494,000 "reads" for the same query as performed by NHibernate? (No additional queries appear in the SQL Profiler trace.)
The query is specified as a LINQ query using NHibernate 3.1's LINQ facility. I didn't include the query itself because it seems like a basic question of philosophy: what could explain such a dramatic difference?
In case it's pertinent, there also happens to be a varbinary(max) column in the results, but in our situation it always contains null.
Any insight is much appreciated!
Be sure to read: http://www.sommarskog.se/query-plan-mysteries.html
Same rules apply for procs and sp_executesql. A huge reason for shoddy plans can be passing in an nvarchar parameter for a varchar field; it causes index scans as opposed to seeks.
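A hedged illustration of that mismatch (the Orders table and OrderCode column are made up; the point is the implicit conversion):

DECLARE @p nvarchar(50) = N'ABC123';
-- The varchar column is implicitly converted to nvarchar, which typically
-- turns an index seek on Orders(OrderCode) into a scan.
SELECT * FROM Orders WHERE OrderCode = @p;

DECLARE @p2 varchar(50) = 'ABC123';
-- Matching types: the predicate stays sargable and can seek the index.
SELECT * FROM Orders WHERE OrderCode = @p2;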
I very much doubt the output is affecting the perf here, it is likely to be an issue with one of the params sent in, or selectivity of underlying tables.
When testing your output from profiler, be sure to include sp_executesql and make sure your settings match (stuff like SET ARITHABORT), otherwise you will cause a new plan to be generated.
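For example (the statement below is a placeholder, not the query from the question), you can roughly match the application's environment in Management Studio before replaying the captured call, since ADO.NET connections typically run with ARITHABORT OFF while SSMS defaults it to ON:

SET ARITHABORT OFF;   -- match the application's connection settings

EXEC sp_executesql
     N'SELECT * FROM Orders WHERE CustomerId = @id',   -- placeholder statement
     N'@id int',
     @id = 42;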
You can always dig up the shoddy plan from the execution cache via sys.dm_exec_query_stats.

First query slow on Firebird

The first query run on a large dataset on a Firebird database after starting our application is always very slow. Subsequent calls to the same query (it is a stored procedure) are fine. I assume this is to do with something being loaded into memory, but I could do with an explanation of what, and whether there is anything that can be done to get around the issue.
If it is a stored procedure, then on the first query the stored procedure is compiled, the buffers are fetched, and the result is cached.
On the second query the procedure is not compiled again (it is already cached) and the results are almost instant (on some operating systems the fetched pages are also still in memory, so no disk I/O is needed).
One way to improve this is to optimise the stored procedure or the tables. How large are they (number of records for each table)?
Another simple way is to set up a cron script that runs once per day/hour to pre-fill the caches, so the first real call to the stored procedure is fast.
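A minimal sketch of such a warm-up script (MY_LARGE_REPORT is an assumed procedure name; a scheduled job would run this file with isql or similar):

-- warm_cache.sql: executed hourly to keep the compiled procedure
-- and its data pages in Firebird's caches.
EXECUTE PROCEDURE MY_LARGE_REPORT;
-- For a selectable procedure, a cheap pass over its output works too:
-- SELECT COUNT(*) FROM MY_LARGE_REPORT;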
Maybe it's not about the query, but rather the connection time (delay) that is long? There was such a problem with [old] Firebird/InterBase engines.
You didn't explain which Firebird version you are using, but in version 2.5 there is a bug (CORE-3227, slow compilation of stored procedures) that could be the cause of your problem. More details:
http://www.firebirdnews.org/?p=5282&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+FirebirdNews+%28Firebird+News%29