My query takes about 3-5 seconds to run, but when I run the report (a simple summary of a few columns) it takes 25-30 minutes! It is a Group Left report. I've tried playing around with the query, and I've tried handling the grouping in the query, with no luck. Any ideas what might be causing this?
Is the query being executed within a stored procedure? If so, try executing the SQL directly, without passing the variables through the stored procedure.
If there is a difference in execution time, try some optimizations, for example mitigating parameter sniffing (create local variables within the stored procedure that hold copies of the parameter values passed in). This can give you an indication of whether the query itself requires optimization.
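For example, a minimal sketch of the local-variable workaround (procedure, table, and column names are invented for illustration):

CREATE PROCEDURE dbo.usp_GetOrders
    @CustomerId INT
AS
BEGIN
    -- Copying the parameter into a local variable stops SQL Server from
    -- sniffing the passed-in value when it compiles the plan.
    DECLARE @LocalCustomerId INT = @CustomerId;

    SELECT OrderId, OrderDate, Total
    FROM dbo.Orders
    WHERE CustomerId = @LocalCustomerId;
END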
In my experience, queries that return a lot of data can appear to run fast from a tool like Toad or SQL Developer, because those tools only fetch the first few rows by default; when you fetch all the rows, you hit the real overall performance of the query.
So perhaps your query returns a lot of rows, and all that time is being spent doing the I/O.
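One way to check is to force the server to materialise every row, for example by wrapping the report query in a COUNT (a sketch; the inner SELECT is a stand-in for your actual query):

-- Forces the full resultset to be produced server-side, so the timing reflects
-- the real work rather than the first page of rows the tool fetches.
SELECT COUNT(*)
FROM (
    SELECT col1, col2, col3
    FROM some_large_table
) t;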
For a tracked SQL function, the Hasura query takes a long time, but when it is executed directly in SQL it returns the data in only a few milliseconds. We are using a PostgreSQL database and are not able to figure out what the actual problem is.
We followed some steps to reduce the response time (sketched below):
Applying indexes on the DB
Analysing the query plan to reduce the cost
Querying only a limited set of data to reduce the response size
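Concretely, the steps looked roughly like this (identifiers are placeholders, not our real schema; get_customer_orders stands in for the tracked function):

-- Step 1: index the column the function filters on
CREATE INDEX idx_orders_customer ON orders (customer_id);

-- Step 2: inspect the query plan to find the expensive nodes
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM get_customer_orders(42);

-- Step 3: fetch only a limited set of data to reduce the response size
SELECT * FROM get_customer_orders(42) LIMIT 100;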
We tried running the query directly in SQL, which took only a few milliseconds, but when we ran it through Hasura with the same parameters it took a lot of time.
I suspect this is due to the permissions that are evaluated when you run the function through Hasura.
When you were analysing the query plan, did you also make sure you were passing in the roles, so that the plan captures any additional changes Hasura makes to the query in order to evaluate the permissions?
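For example, if the role's select permission adds a filter such as user_id = X-Hasura-User-Id, the plan worth analysing is the filtered one, not the bare function call (a sketch with invented names; get_customer_orders and user_id are placeholders):

-- The permission filter is applied on top of the tracked function's output,
-- so EXPLAIN the wrapped form that Hasura effectively executes.
EXPLAIN ANALYZE
SELECT *
FROM get_customer_orders(42) AS o
WHERE o.user_id = 7;  -- stand-in for the role's permission filter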
In SQL Server Management Studio, when running a query that produces a very large resultset, it sometimes appears to display rows as it loads them, rather than all at once.
My normal assumption would be that it's simply SSMS populating the grid(s) with the results of a query that has already finished.
However, if I run the following query:
SELECT 1
SELECT * FROM EnormousTable
INSERT INTO SomeOtherTable([Column1]) SELECT 'Test3'
That last INSERT does not occur until after the results from the larger resultset have been fully returned.
I have two main questions:
1. What is happening here? Is SSMS breaking down the query into separate batches even without GO statements? Please note that I'm not a DBA, so if there's some fundamental reason for this behaviour that 'any DBA would know', there's a good chance I don't know it.
2. Is there a way to attain similar functionality in .NET? That is, when running a set of queries that will produce multiple resultsets, is it possible to have a DataSet populated with the results of each successive query as it finishes (without waiting for all the queries to finish), without my having to manually break down the query (unless that's what SSMS is actually doing under the hood)?
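For reference, my understanding is that GO is a client-side batch separator recognised by SSMS and sqlcmd rather than part of T-SQL itself, so the explicitly batched version of the same script would be:

SELECT 1
GO
SELECT * FROM EnormousTable
GO
INSERT INTO SomeOtherTable([Column1]) SELECT 'Test3'
GO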
I have an Excel file that will select roughly 1100 rows with 5 columns of data. Most columns are integers, 5 digits long. I am using a macro to connect to a SQL Server database and insert these rows into one, maybe two, tables. This is all it's doing, and then it closes the connection. So the user opens an Excel file that has the rows, clicks a button, and it executes the macro.
My question is: should the query be written in Excel, since it's simple and merely inserts the data into a few tables? Or is it more efficient to call a stored procedure, pass all of the values to it, and have it allocate the values to the different tables? By efficient, I mean which is quickest? I know this will probably take only a few seconds to complete. I just feel that going through a stored procedure is an extra point along the path the data has to travel before it reaches the tables. Am I wrong? Any thoughts?
There are some advantages to using stored procedures in SQL Server. One is that SQL Server precompiles and saves the query execution plan, which increases performance. With your current method, SQL Server will generally need to generate the execution plan each time. Stored procedures can also reduce client/server network traffic.
So, even though it may seem like an extra point along the path, it actually can be faster.
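As a rough sketch, such a procedure could look like this (table, column, and parameter names are invented; adjust to your schema):

-- Receives one Excel row and decides which tables the values belong in,
-- so the client makes a single call per row.
CREATE PROCEDURE dbo.usp_InsertExcelRow
    @Val1 INT, @Val2 INT, @Val3 INT, @Val4 INT, @Val5 INT
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.TableOne (ColA, ColB, ColC)
    VALUES (@Val1, @Val2, @Val3);

    INSERT INTO dbo.TableTwo (ColD, ColE)
    VALUES (@Val4, @Val5);
END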
In addition to #mark d.'s answer, another reason for using a stored procedure is security.
Your comment says that a customer is entering the data into Excel, so if you put direct SQL into your spreadsheet, there is a risk that someone will open the spreadsheet and learn details about your database. If you use a stored procedure, far less can be learned.
Either way, make sure that you aren't hardcoding any connection string/account credentials into the spreadsheet.
Our application issues an NHibernate-generated SQL query. At application runtime, the query takes about 12 seconds to run against a SQL Server database. SQL Profiler shows over 500,000 reads.
However, if I capture the exact query text using SQL Profiler, and run it again from SQL Studio, it takes 5 seconds and shows less than 4,600 reads.
The query uses a couple of parameters whose values are supplied at the end of the SQL text, and I'd read a little about parameter sniffing and inefficient query plans, but I had thought that only related to stored procedures. Maybe NHibernate holds the resultset open while it instantiates its entities, which could explain the longer duration, but what could explain the extra 494,000 "reads" for the same query as performed by NHibernate? (No additional queries appear in the SQL Profiler trace.)
The query is specified as a LINQ query using NHibernate 3.1's LINQ facility. I didn't include the query itself because it seems like a basic question of philosophy: what could explain such a dramatic difference?
In case it's pertinent, there also happens to be a varbinary(max) column in the results, but in our situation it always contains null.
Any insight is much appreciated!
Be sure to read: http://www.sommarskog.se/query-plan-mysteries.html
The same rules apply for procs and sp_executesql. A huge cause of shoddy plans is passing in an nvarchar parameter for a varchar column: the implicit conversion causes index scans as opposed to seeks.
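A sketch of the mismatch (invented table; assume LastName is declared varchar(50) and indexed):

-- Mismatched: the nvarchar parameter forces an implicit conversion of the
-- varchar column, which typically turns an index seek into a scan.
EXEC sp_executesql
    N'SELECT CustomerId FROM dbo.Customers WHERE LastName = @name',
    N'@name nvarchar(50)',
    @name = N'Smith';

-- Matched: declaring the parameter as varchar, like the column, allows a seek.
EXEC sp_executesql
    N'SELECT CustomerId FROM dbo.Customers WHERE LastName = @name',
    N'@name varchar(50)',
    @name = 'Smith';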
I very much doubt the output is affecting the performance here; it is more likely an issue with one of the parameters sent in, or the selectivity of the underlying tables.
When replaying your captured output from Profiler, be sure to include the sp_executesql wrapper, and make sure your session settings match (things like SET ARITHABORT); otherwise you will cause a new plan to be generated.
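For example, before replaying the captured statement in SSMS, set the session options to match what Profiler shows for the application connection (ARITHABORT is the usual culprit, since SSMS defaults it ON while most client libraries default it OFF). A sketch, with the statement text as a placeholder:

-- Replay under the application's session settings so you hit (or reproduce)
-- the same cached plan instead of compiling a fresh one.
SET ARITHABORT OFF;

EXEC sp_executesql
    N'SELECT OrderId FROM dbo.Orders WHERE CustomerId = @p0',  -- the captured statement
    N'@p0 int',
    @p0 = 42;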
You can always dig the shoddy plan up from the plan cache via sys.dm_exec_query_stats.
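A sketch of one way to pull it out with the standard DMVs (the LIKE filter is a placeholder for something identifying your query):

-- Cached statements with their accumulated stats and plans, worst readers first.
SELECT TOP (20)
    qs.execution_count,
    qs.total_logical_reads,
    st.text AS statement_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE '%Orders%'
ORDER BY qs.total_logical_reads DESC;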
The first query run on a large dataset on a Firebird database after starting our application is always very slow. Subsequent calls to the same query (it is a stored procedure) are fine. I assume this is to do with something being loaded into memory, but I could do with an explanation of what, and whether there is anything that can be done to get around the issue.
If it is a stored procedure, the first call compiles the stored procedure; it also fetches the pages from disk and caches the results.
On the second call the procedure is not compiled again (it is already cached) and the results are nearly instant (on some operating systems the fetched pages are also still in memory, so no disk I/O is needed).
One way is to optimize the stored procedure or the tables.
How large are they (the number of records in each table)?
One simple way to work around it is a cron script that runs once per day/hour to prefill the caches, so the stored procedure is fast when users hit it; see the sketch below.
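A minimal sketch, assuming a Unix host, Firebird's isql tool, and an executable procedure (for a selectable procedure you would SELECT from it instead; all names and credentials here are invented):

-- warmup.sql: run hourly by cron, e.g.
--   0 * * * * isql -user SYSDBA -password secret /data/mydb.fdb -i /opt/warmup.sql
-- Executing the procedure compiles it and pulls its pages into the cache,
-- so the first real user call is fast.
EXECUTE PROCEDURE big_report_proc(2024);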
Maybe it's not about the query; maybe the connection time (delay) is long? There was such a problem with old Firebird/InterBase engines.
You didn't say which Firebird version you are using, but version 2.5.0 has a bug (CORE-3227, slow compilation of stored procedures) that could be the cause of your problem. More details:
http://www.firebirdnews.org/?p=5282