I have a SQL script and I want it to output progress messages as it runs. Having it output messages between SQL statements is easy; however, I have some very long-running INSERT ... SELECT statements. Is there a way to have a SELECT statement output messages as it goes, for example after every 1000 rows, or every 5 seconds?
Note: This is for SQL Anywhere, but answers in any SQL dialect will be fine.
There's no way to retrieve the execution status of a single query. None of the mainstream database engines provide this functionality.
Furthermore, any progress-reporting implementation would add measurable overhead; if a query already runs long enough that you want to show progress, slowing it down further in order to display that progress may not be an acceptable trade-off.
You may find this article on estimating SQL execution progress helpful, though its practical implications are limited.
SQL itself has no provision for this kind of thing. Any way of doing this would involve talking directly to the database engine, and would not be standard across databases.
Really, the idea of progress with set-based operations (which is what a relational database uses) wouldn't be too helpful, at least not displayed as a progress bar (percent done vs. total). By the time the optimizer has figured out what it needs to do and really understands the full cost of the operation, a significant portion of the work is already done. Progress displays are really meant for iterative operations rather than set operations.
That covers general SELECT statement execution. For inserts that are issued as separate statements, there are all kinds of ways for the submitter to track progress by monitoring how quickly the statements are consumed. If they are bulk inserts (SELECT INTO, INSERT ... SELECT, and the like) then you really have the same problem I described above: set operations are batched in a way that makes a progress-bar type of display somewhat meaningless.
I am on the SQL Anywhere engine development team and there is currently no way to do this. I can't promise anything, but we are considering adding this type of functionality to a future release.
There's certainly no SQL-standard solution to this. Sorry to be doom-laden, but I haven't seen anything that can do this in Oracle, SQL Server, Sybase or MySQL, so I wouldn't be too hopeful for SQL Anywhere.
I agree that SQL does not have a way to do this directly. One way might be to only insert the TOP 1000 at a time and then print your status message. Then keep repeating this as needed (in a loop of some kind). The downside is that you would then need a way to keep track of where you are.
I should note that this approach will not be as efficient as just doing one big INSERT.
Here's what I would do (Sybase / SQL Server syntax):
DECLARE @total_rows int

SELECT @total_rows = count(*)
FROM Source_Table

WHILE @total_rows > (SELECT count(*) FROM Target_Table)
BEGIN
    SET rowcount 1000

    print 'inserting 1000 rows'

    INSERT Target_Table
    SELECT *
    FROM Source_Table s
    WHERE NOT EXISTS( SELECT 1
                      FROM Target_Table t
                      WHERE t.id = s.id )
END

set rowcount 0

print 'done'
Or you could do it based on IDs (assumes Id is a number):
DECLARE @min_id   int,
        @max_id   int,
        @start_id int,
        @end_id   int

SELECT @min_id = min(id),
       @max_id = max(id)
FROM Source_Table

SELECT @start_id = @min_id,
       @end_id   = @min_id + 1000

-- loop on the start of the range so the last partial range is not skipped
WHILE @start_id <= @max_id
BEGIN
    print 'inserting id range: ' + convert(varchar, @start_id) + ' to ' + convert(varchar, @end_id)

    INSERT Target_Table
    SELECT *
    FROM Source_Table s
    WHERE id BETWEEN @start_id AND @end_id

    SELECT @start_id = @end_id + 1,
           @end_id   = @end_id + 1000
END

set rowcount 0

print 'done'
One thought might be to have a separate process count the number of rows in the target table to determine what percentage of them are there already. This of course requires that you know the total you expect at the end, and it is probably only okay if you're not too worried about the extra server load.
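As a rough sketch of that monitoring idea (reusing the Source_Table / Target_Table placeholder names from the examples above; depending on isolation level, uncommitted rows may not be visible until the big INSERT commits):

-- Polled from a separate connection while the long INSERT runs
SELECT (SELECT COUNT(*) FROM Target_Table) AS rows_loaded,
       (SELECT COUNT(*) FROM Source_Table) AS rows_expected,
       100.0 * (SELECT COUNT(*) FROM Target_Table)
             / NULLIF((SELECT COUNT(*) FROM Source_Table), 0) AS pct_complete;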
On the off chance you're using Toad, you can generate a set of INSERT statements from a table and configure it to commit at a user-specified frequency. You could modify your scripts a little and then see how much of the new data has been committed as you go.
You can simulate the effect for your users by timing several runs, then having a progress bar advance at the average records / second rate.
The only other options are:
1 - Refer to the API of your database engine to see if it makes any provision for this; or
2 - Break your INSERT into many smaller statements and report on them as you go, which will have a significant negative performance impact.
If you absolutely need it, then for INSERT, UPDATE and DELETE you can use some trigger logic together with database variables, and from time to time run a query that reads the variable data and displays some progress to the user. A rough sketch of the idea is shown below.
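A minimal sketch of that idea, assuming SQL Anywhere's Watcom-SQL dialect; the table, trigger, and option names are illustrative assumptions, and the polling connection needs a dirty-read isolation level to see the counter while the big INSERT is still uncommitted:

-- Small progress table polled by a second connection
CREATE TABLE load_progress (
    task_name VARCHAR(64) NOT NULL PRIMARY KEY,
    rows_done BIGINT NOT NULL DEFAULT 0
);

INSERT INTO load_progress (task_name, rows_done) VALUES ('target_load', 0);

-- Row-level trigger bumps the counter for every row the big INSERT adds
-- (note this adds per-row overhead to the load itself)
CREATE TRIGGER trg_target_progress
AFTER INSERT ON Target_Table
REFERENCING NEW AS new_row
FOR EACH ROW
BEGIN
    UPDATE load_progress
       SET rows_done = rows_done + 1
     WHERE task_name = 'target_load';
END;

-- From another connection, poll with dirty reads so the uncommitted
-- counter updates are visible while the INSERT is still running:
-- SET TEMPORARY OPTION isolation_level = 0;
-- SELECT rows_done FROM load_progress WHERE task_name = 'target_load';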
Stumbled upon this old thread looking for something else. I disagree with the idea that we don't want progress information just because it's a set operation. Users will often tolerate even a long wait if they know how long it is.
Here's what I suggest:
Each time this runs, log the number of rows inserted and the total time, then add a step at the beginning of the process that queries that log and calculates an estimated total time. If you base the estimate on the last few runs, you should be able to present an acceptably good guess at how long the wait will be.
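For illustration, a minimal sketch of that logging idea in SQL Server syntax (the table, columns, and variables are made up for this example):

-- Hypothetical run log
CREATE TABLE load_run_log (
    run_id      INT IDENTITY(1,1) PRIMARY KEY,
    started_at  DATETIME NOT NULL,
    finished_at DATETIME NOT NULL,
    rows_copied INT NOT NULL
);

-- At the end of each load, record how it went, e.g.:
-- INSERT INTO load_run_log (started_at, finished_at, rows_copied)
-- VALUES (@start_time, @end_time, @rows);

-- Before the next run, estimate the duration from the last few runs
SELECT AVG(DATEDIFF(second, started_at, finished_at)) AS avg_seconds,
       AVG(rows_copied)                               AS avg_rows
FROM (SELECT TOP 5 started_at, finished_at, rows_copied
      FROM load_run_log
      ORDER BY run_id DESC) AS recent;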
Related
I'm just getting started with SQL Server's Change Data Capture functionality. I'd like to be able to pick out entries from the change table based on which transactions they were part of. Is there a way to do this? The transaction ID field in those tables doesn't seem to correspond to anything meaningful.
I realize that there are the LSN's and you can look for general start and end times, but is it possible to account for multiple transactions which run at the same time, each affecting different records? (with potentially interleaving start and end times for different operations)
I ran into this same problem a few minutes ago. I did some Googling and it appears there's a cdc.lsn_time_mapping table that may do this for you.
Here's how I used it (not sure how reliable this is, what would happen if you truncate the transaction log, etc.):
DECLARE @from_lsn binary(10), @to_lsn binary(10);

SET @from_lsn = sys.fn_cdc_get_min_lsn('Application_Person');
SET @to_lsn   = sys.fn_cdc_get_max_lsn();

SELECT m.*, c.*
FROM cdc.fn_cdc_get_all_changes_Application_Person(@from_lsn, @to_lsn, N'all') c
INNER JOIN cdc.lsn_time_mapping m ON c.__$start_lsn = m.start_lsn
Other useful links for future readers:
https://msdn.microsoft.com/en-us/library/bb510494.aspx (how to set up CDC)
https://msdn.microsoft.com/en-us/library/bb510627.aspx (how to query changes)
I am a beginner with Oracle DB.
I want to know the execution time for a query. This query returns around 20,000 records.
In SQL Developer it shows only 50 rows, tweakable to a maximum of 500, and up to 5000 when run as a script with F5.
I would have done this by making changes in the application, but redeployment is not possible as it is running in production.
So I am limited to using only SQL Developer, and I am not sure how to get the number of seconds the query takes to execute.
Any ideas on this will help me. Thank you.
If you scroll down past the 50 rows initially returned, it fetches more. When I want all of them, I just click on the first of the 50 and then press Ctrl+End to scroll all the way to the bottom.
This will update the display of the time that was used (just above the results it will say something like "All Rows Fetched: 20000 in 3.606 seconds") giving you an accurate time for the complete query.
If your statement is part of an already deployed application and if you have rights to access the view V$SQLAREA, you could check for number of EXECUTIONS and CPU_TIME. You can search for the statement using SQL_TEXT:
SELECT CPU_TIME, EXECUTIONS
FROM V$SQLAREA
WHERE UPPER (SQL_TEXT) LIKE 'SELECT ... FROM ... %';
This is the most precise way to determine the actual run time. The view V$SESSION_LONGOPS might also be interesting for you.
If you don't have access to those views you could also use a cursor loop for running through all records, e.g.
CREATE OR REPLACE PROCEDURE speedtest AS
  l_count NUMBER := 0;
  l_start TIMESTAMP;
  l_end   TIMESTAMP;
  CURSOR c_cursor IS
    SELECT ...;  -- your query goes here
BEGIN
  l_start := SYSTIMESTAMP;  -- start time stamp
  FOR rec IN c_cursor
  LOOP
    l_count := l_count + 1;
  END LOOP;
  l_end := SYSTIMESTAMP;    -- end time stamp
  DBMS_OUTPUT.PUT_LINE(l_count || ' rows fetched in ' || (l_end - l_start));
END;
Depending on the architecture this might be more or less accurate, because data might need to be transmitted to the system your SQL is running on.
You can change those limits, but you'll be spending some time on the data transfer between the DB and the client, and possibly on the display; that in turn is affected by the number of rows pulled in each fetch. Those things affect your application as well, though, so looking at the raw execution time might not tell you the whole story anyway.
To change the worksheet (F5) limit, go to Tools->Preferences->Database->Worksheet, and increase the 'Max rows to print in a script' value (and maybe 'Max lines in Script output'). To change the fetch size go to the Database->Advanced panel in the preferences; maybe to match your application's value.
This isn't perfect, but if you don't want to see the actual data and just want the time the query takes to run in the DB, you can wrap it so it returns a single row:
select count(*) from (
    <your original query>
);
It will normally execute the entire original query and then count the results, which won't add anything significant to the time. (It's feasible it might rewrite the query internally I suppose, but I think that's unlikely, and you could use hints to avoid it if needed).
OK, I am going to do my best describing this. I have a stored procedure that takes XML as input and updates and inserts into another table. This was working yesterday. All I changed today was loading the temp table with OPENXML instead of xml.nodes. I even changed it back, and I am still getting this strange issue. I have an update and an insert in the same transaction. The update works, and then the insert hangs: no error, nothing, going on 9 minutes now, when it normally takes 10 seconds. There are no blocking processes according to master.sys.sysprocesses. The funny thing is that the SELECT part of the INSERT returns no rows, because they are already in the database. The update affects 72438 rows in:
SQL Server Execution Times:
CPU time = 1359 ms, elapsed time = 7955 ms.
ROWS AFFECTED(72438)
I am out of ideas as to what could be causing this. Permissions? I don't think so. Space? I don't think so either, because an error would be returned.
queries:
UPDATE [Sales].[dbo].[WeeklySummary]
SET [CountryId] = I.CountryId
,[CurrencyId] = I.CurrencyId
,[WeeklySummaryType] = @WeeklySummaryTypeId
,[WeeklyBalanceAmt] = M.WeeklyBalanceAmt + I.WeeklyBalanceAmt
,[CurrencyFactor] = I.CurrencyFactor
,[Comment] = I.Comment
,[UserStamp] = I.UserStamp
,[DateTimeStamp] = I.DateTimeStamp
FROM
[Sales].[dbo].[WeeklySummary] M
INNER JOIN
#WeeklySummaryInserts I
ON M.EntityId = I.EntityId
AND M.EntityType = I.EntityType
AND M.WeekEndingDate = I.WeekEndingDate
AND M.BalanceId = I.BalanceId
AND M.ItemType = I.ItemType
AND M.AccountType = I.AccountType
and
INSERT INTO [Sales].[dbo].[WeeklySummary]
([EntityId]
,[EntityType]
,[WeekEndingDate]
,[BalanceId]
,[CountryId]
,[CurrencyId]
,[WeeklySummaryType]
,[ItemType]
,[AccountType]
,[WeeklyBalanceAmt]
,[CurrencyFactor]
,[Comment]
,[UserStamp]
,[DateTimeStamp])
SELECT
I.[EntityId]
, I.[EntityType]
, I.[WeekEndingDate]
, I.[BalanceId]
, I.[CountryId]
, I.[CurrencyId]
, @WeeklySummaryTypeId
, I.[ItemType]
, I.[AccountType]
, I.[WeeklyBalanceAmt]
, I.[CurrencyFactor]
, I.[Comment]
, I.[UserStamp]
, I.[DateTimeStamp]
FROM
#WeeklySummaryInserts I
LEFT OUTER JOIN
[Sales].[dbo].[WeeklySummary] M
ON I.EntityId = M.EntityId
AND I.EntityType = M.EntityType
AND I.WeekEndingDate = M.WeekEndingDate
AND I.BalanceId = M.BalanceId
AND I.ItemType = M.ItemType
AND I.AccountType = M.AccountType
WHERE M.WeeklySummaryId IS NULL
UPDATE:
Trying the advice here worked for a while. I now run the following before my stored procedure call:
UPDATE STATISTICS Sales.dbo.WeeklySummary;
UPDATE STATISTICS Sales.dbo.ARSubLedger;
UPDATE STATISTICS dbo.AccountBalance;
UPDATE STATISTICS dbo.InvoiceUnposted;
UPDATE STATISTICS dbo.InvoiceItemUnposted;
UPDATE STATISTICS dbo.InvoiceItemUnpostedHistory;
UPDATE STATISTICS dbo.InvoiceUnpostedHistory;
EXEC sp_recompile N'dbo.proc_ChargeRegister'
Still stalling at the Insert Statement, which again inserts 0 rows.
There are really only a few things that can be going on, and the trick here is to eliminate them in order, from simplest to most complex.
STEP 1: Hand craft a set of XML to run that will produce exactly one insert and no updates, so you can go "back to basics" as it were and establish that the code is still doing what you expect, and the result is exactly what you expect. This may seem silly or unnecessary but you really need this reality check to start.
STEP 2: Hand craft a set of XML that will produce a medium-sized set of inserts, still with no updates. Based on your experience with the routine, try to find something that will run in 3-4 seconds. Perhaps 5000 rows. Does it continue to behave as expected?
STEP 3: Assuming steps 1 and 2 pass easily, the next most likely problem is TRANSACTION SIZE. If your update hits 74,000 rows in a single statement, then SQL Server must allocate resources to be able to roll back all 74,000 rows in case of an abort. Generally you should assume that the resources (and time) required to maintain a transaction grow steeply as the row count goes up. So hand-craft one more set of inserts that contains 50,000 rows. You should find it takes dramatically more time. Let it finish. Is it 10 minutes, an hour? If it takes a long time but finishes, you have an issue with TRANSACTION SIZE: the server is choking trying to keep track of everything required to roll back the insert in the event of failure.
STEP 4: Determine whether your entire stored procedure is operating within a single implied transaction. If it is, the matter is even worse, because SQL Server is tracking together everything required to roll back both the 74,000 updates and the ??? inserts in a single transaction. See this page:
http://msdn.microsoft.com/en-us/library/ms687099(v=vs.85).aspx
STEP 5: If you've got a single implicit transaction, you can either: A) turn that off, which may help somewhat but will not entirely fix the problem, or B) break the sproc into two separate calls, one for updates and one for inserts, so that at least the two are in separate transactions.
STEP 6: Consider "chunking". This is a technique for avoiding exploding transaction costs. Considering just the INSERT to get us started, you wrap the insert into a loop that begins and commits a transaction inside each iteration, and exits when affected rows is zero. The INSERT is modified so that you pull only the first 1000 rows from the source and insert them (that 1000 number is kind of arbitrary, you may find 5000 produces better performance, you have to experiment a bit). Once the INSERT affects zero rows, there are no more rows to handle and the loop exits.
QUICK EDIT: The "chunking" system works because the total cost as a function of batch size looks something like a U-shaped (roughly quadratic) curve. If you execute a single INSERT that affects a huge number of rows, the total time explodes because of the transaction bookkeeping. If, on the other hand, you break it up and go row by row, the overhead of opening and committing each statement causes the total time to explode instead. Somewhere in the middle, when you've "chunked" out around 1k rows per statement, the transaction requirements are at their minimum, the overhead of opening and committing each statement is negligible, and the total time for all rows is at its minimum.
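To make the chunking idea concrete, here is a minimal sketch in T-SQL; Source_Table, Target_Table, and their columns are illustrative placeholders rather than the poster's actual schema:

SET NOCOUNT ON;

DECLARE @batch_size INT, @rows_copied INT;
SET @batch_size  = 1000;   -- experiment; somewhere between 1000 and 5000 often works well
SET @rows_copied = 1;

WHILE @rows_copied > 0
BEGIN
    BEGIN TRANSACTION;

    INSERT INTO Target_Table (id, payload)
    SELECT TOP (@batch_size) s.id, s.payload
    FROM Source_Table s
    WHERE NOT EXISTS (SELECT 1 FROM Target_Table t WHERE t.id = s.id);

    SET @rows_copied = @@ROWCOUNT;   -- zero means nothing left to copy

    COMMIT TRANSACTION;              -- keep each transaction small
END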
I had a problem where the stored proc was actually getting recompiled in the middle of running because it was deleting rows from a temp table. My situation doesn't look like yours, but mine was so odd that reading about it might give you some ideas.
Unexplained SQL Server Timeouts and Intermittent Blocking
I think you should post the full stored proc because the problem doesn't look to be where you think it is.
If a query in Oracle takes 11 minutes the first time it is executed, and the same query takes 25 seconds the next time, even with the buffers being flushed in between, what is the possible cause? Could it be that the query is written in a bad way?
set timing on;
set echo on
set lines 999;
insert into elegrouptmp select idcll,idgrpl,0 from elegroup where idgrpl = 109999990;
insert into SLIMONTMP (idpartes, indi, grecptseqs, devs, idcll, idclrelpayl)
select rel.idpartes, rel.indi, rel.idgres,rel.iddevs,vpers.idcll,nvl(cdsptc.idcll,vpers.idcll)
from
relbqe rel,
elegrouptmp ele,
vrdlpers vpers
left join cdsptc cdsptc on
(cdsptc.idclptcl = vpers.idcll and
cdsptc.cdptcs = 'NOS')
where
rel.idtits = '10BCPGE ' and
vpers.idbqes = rel.idpartes and
vpers.cdqltptfc = 'N' and
vpers.idcll = ele.idelegrpl and
ele.idgrpl = 109999990;
alter system flush shared_pool;
alter system flush buffer_cache;
alter system flush global context;
select /* original */ mvtcta_part_SLIMONtmp.idpartes,mvtcta_part_SLIMONtmp.indi,mvtcta_part_SLIMONtmp.grecptseqs,mvtcta_part_SLIMONtmp.devs,
mvtcta_part_SLIMONtmp.idcll,mvtcta_part_SLIMONtmp.idclrelpayl,mvtcta_part_vrdlpers1.idcll,mvtcta_part_vrdlpers1.shnas,mvtcta_part_vrdlpers1.cdqltptfc,
mvtcta_part_vrdlpers1.idbqes,mvtcta_part_compte1.idcll,mvtcta_part_compte1.grecpts,mvtcta_part_compte1.seqc,mvtcta_part_compte1.devs,mvtcta_part_compte1.sldminud,
mvtcta.idcll,mvtcta.grecptseqs,mvtcta.devs,mvtcta.termel,mvtcta.dtcptl,mvtcta.nusesi,mvtcta.fiches,mvtcta.indl,mvtcta.nuecrs,mvtcta.dtexel,mvtcta.dtvall,
mvtcta.dtpayl,mvtcta.ioi,mvtcta.mtd,mvtcta.cdlibs,mvtcta.libcps,mvtcta.sldinitd,mvtcta.flagtypei,mvtcta.flagetati,mvtcta.flagwarnl,mvtcta.flagdonei,mvtcta.oriindl,
mvtcta.idportfl,mvtcta.extnuecrs
from SLIMONtmp mvtcta_part_SLIMONtmp
left join vrdlpers mvtcta_part_vrdlpers1 on
(
mvtcta_part_vrdlpers1.idbqes = mvtcta_part_SLIMONtmp.idpartes
and mvtcta_part_vrdlpers1.cdqltptfc = 'N'
and mvtcta_part_vrdlpers1.idcll = mvtcta_part_SLIMONtmp.idcll
)
left join compte mvtcta_part_compte1 on
(
mvtcta_part_compte1.idcll = mvtcta_part_vrdlpers1.idcll
and mvtcta_part_compte1.grecpts = substr (mvtcta_part_SLIMONtmp.grecptseqs, 1, 2 )
and mvtcta_part_compte1.seqc = substr (mvtcta_part_SLIMONtmp.grecptseqs, -1 )
and mvtcta_part_compte1.devs = mvtcta_part_SLIMONtmp.devs
and (mvtcta_part_compte1.devs = ' ' or ' ' = ' ')
and mvtcta_part_compte1.cdpartc not in ( 'L' , 'R' )
)
left join mvtcta mvtcta on
(
mvtcta.idcll = mvtcta_part_SLIMONtmp.idclrelpayl
and mvtcta.devs = mvtcta_part_SLIMONtmp.devs
and mvtcta.grecptseqs = mvtcta_part_SLIMONtmp.grecptseqs
and mvtcta.flagdonei <> 0
and mvtcta.devs = mvtcta_part_compte1.devs
and mvtcta.dtvall > 20101206
)
where 1=1
order by mvtcta_part_compte1.devs,
mvtcta_part_SLIMONtmp.idpartes,
mvtcta_part_SLIMONtmp.idclrelpayl,
mvtcta_part_SLIMONtmp.grecptseqs,
mvtcta.dtvall;
"if a query in oracle takes the first
time it is executed 11 minutes, and
the next time, the same query 25
seconds, with the buffer being
flushed, what is the possible cause?"
The thing is, flushing the DB Buffers, like this ...
alter system flush shared_pool
/
... wipes Oracle's own caches, but there are other places where data gets cached. For instance, the chances are your OS caches its file reads.
EXPLAIN PLAN is good as a general guide to how the database thinks it will execute a query, but it is only a prediction. It can be thrown out by poor statistics or ambient conditions. It is not good at explaining why a specific instance of a query took as much time as it did.
So, if you really want to understand what occurs when the database executes a specific query you need to get down and dirty, and learn how to use the Wait Interface. This is a very powerful tracing mechanism, which allows us to see the individual events that happen over the course of a single query execution. Each version of Oracle has extended the utility and richness of the Wait Interface, but it has been essential to proper tuning since Oracle 9i (if not earlier).
Find out more by reading Roger Schrag's very good overview.
In your case you'll want to run the trace multiple times. In order to make it easier to compare results you should use a separate session for each execution, setting the 10046 event each time.
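For reference, enabling and disabling a 10046 trace in a session typically looks like the following; the tracefile identifier is just an arbitrary label, and the tkprof options shown are one common choice:

-- Run in a fresh session, immediately before the statement you want to trace
ALTER SESSION SET tracefile_identifier = 'slim_run1';
ALTER SESSION SET events '10046 trace name context forever, level 12';

-- ... execute the query under investigation here ...

ALTER SESSION SET events '10046 trace name context off';

-- Then find the trace file under the user dump / diag destination and
-- summarize it, for example:
--   tkprof <tracefile>.trc run1_summary.txt sys=no sort=exeela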
What else is happening on the box when you ran these? You can get different timings based on other processes chewing resources. Also, with a lot of joins, performance will depend on memory usage (hash_area_size, sort_area_size, etc) and availability, so perhaps you are paging (check temp space size/usage also). In short, try sql_trace and tkprof to analyze deeper
Sometimes a block is written to the file system before it is committed (a dirty block). When that block is read later, Oracle sees that it was uncommitted. It checks the open transaction and, if the transaction isn't still there, it knows the change was committed. Therefore it writes the block back as a clean block. It is called delayed block cleanout.
That is one possible reason why reading blocks for the first time can be slower than a subsequent re-read.
It could be that the second time, the execution plan is already known. Maybe the optimizer has a very hard time finding an execution plan for some reason.
Try setting
alter session set optimizer_max_permutations=100;
and rerun the query. See if that makes any difference.
could it be that the query is written in a bad way?
"bad" is a rather emotional expression - but broadly speaking, yes, if a query performs significantly faster the second time it's run, it usually means there are ways to optimize the query.
Actually optimizing the query is - as APC says - rather a question of getting "down and dirty". An obvious candidate in your example might be the substring: if the table is huge and the substring misses the index, I'd imagine that would take a bit of time, and I'd imagine the results of all those substring operations are cached somewhere.
Here's Tom Kyte's take on flushing Oracle buffers as a testing practice. Suffice it to say he's not a fan. He favors the approach of attempting to emulate your production load with your test data ("real life"), and tossing out the first and last runs. @APC's point about OS caching is Tom's point - to get rid of that (non-trivial!) effect you'd need to bounce the server, not just the database.
Just speaking to a colleague of mine. He was walking with a hop in his step, on the way to the coffee machine.
I asked him "what's with the 'swarmy' walk?", he said, "I just reduced a two hour long query down to 40 seconds! It feels so good".
He had altered a stored procedure that was using cursors and introduced a temp table refactored from the original dataset - I will email him soon to get more info on the actual implementation.
But ultimately, he was buzzing.
Question is, what SQL optimisation sticks in your mind and made you buzz while speeding up slow-performing queries?
I have to say it was when I learned how to create and use covering indexes. Now, THAT was a performance booster.
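For example, a covering index in SQL Server 2005+ syntax looks something like this (the table and column names are purely illustrative):

-- The WHERE and SELECT columns are all carried by the index, so this
-- query can be answered without touching the base table at all.
CREATE NONCLUSTERED INDEX ix_Orders_Customer_Date
ON dbo.Orders (CustomerId, OrderDate)
INCLUDE (TotalAmount);

SELECT CustomerId, OrderDate, TotalAmount
FROM dbo.Orders
WHERE CustomerId = 42
ORDER BY OrderDate;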
Using SQL Server's bulk import (BULK INSERT) to reduce several hours of inherited INSERT code to less than a minute.
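A hedged sketch of what that can look like with BULK INSERT; the table name, file path, and format options here are assumptions for illustration:

BULK INSERT dbo.StagingOrders
FROM 'C:\data\orders.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    FIRSTROW        = 2,    -- skip the header row
    TABLOCK                 -- table lock; can enable minimal logging for big loads
);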
It's always nice to take a poorly written, cursor-laden query and eliminate cursors, cut the code by half, and improve performance many-fold.
Some of the best improvements are in clarity (and often result in nice performance boosts, too).
Sorry, I don't tend to get a buzz from that sort of thing but most situations have been pretty basic, monitoring performance of queries and adding indexes to speed them up.
Now, increasing the speed of "real" code that I've written by changing data structures and algorithms within the class - that's where I get my buzz (and my reputation as the go-to man for performance fixes at work).
Hey, on the iPhone, which uses SQLite, I straight away reduced my database processing time from 40 seconds to 2 seconds with the use of exclusive write transactions... I was super happy doing this,
as this was my first experience of SQL on an embedded device - quite different from the usual server-related stuff (indexes, normalization, etc.).
As far as servers go, indexes are a real blessing. Also, if you take a bit of pain and get rid of as many NULLs as you can in your tables, you would be surprised at the performance gains - not many developers focus on NULLs; they usually go with indexes and other documented stuff.
A few other lesser-exploited ways: using XML to process multiple batch inserts/updates/deletes in one go instead of doing one insert at a time - in SQL 2005 this can be super cool.
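A small sketch of that XML batching technique in SQL Server 2005 syntax; the table, element, and attribute names are made up for illustration:

DECLARE @rows XML;
SET @rows = N'<rows><r id="1" name="alpha"/><r id="2" name="beta"/></rows>';

-- One statement inserts every row carried in the XML parameter
INSERT INTO dbo.SomeTable (Id, Name)
SELECT x.r.value('@id',   'int'),
       x.r.value('@name', 'nvarchar(50)')
FROM @rows.nodes('/rows/r') AS x(r);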
It's all about indexes. And avoiding stupid things that make them useless.
Changing the order of conditions inside the WHERE clause so it filters on the most discriminating condition first (while at the same time removing indexes on non-discriminating columns like gender).
Back in the day, I worked on a CICS/DB2 system, written in COBOL. A lot of our queries were doing full table scans (and slow) even though we had all the proper indexes and WHERE clauses.
It turned out (and I may have this backwards, it's been 15 years) that the problem was that we were using PIC S9(n) COMP in WORKING STORAGE for the query parameters, but DB2 wanted PIC S9(n) COMP-3. By using the wrong data type, DB2 had to do a full table scan in order to convert the values in the database to the value we were passing in. We changed our variable definitions and the queries were able to use the indexes now, which dramatically improved our performance.
I had a query that was originally written for SQL Server 6.5, which did not support the SQL 92 join syntax, i.e.
select foo.baz
from foo
left outer join bar
on foo.a = bar.a
was instead written as
select foo.baz
from foo, bar
where foo.a *= bar.a
The query had been around for a while, and the relevant data had accumulated to make the query run too slow, about 90 seconds to complete. By the time this problem arose, we had upgraded to SQL Server 7.
After mucking about with indexes and other Easter-egging, I changed the join syntax to be SQL 92 compliant. The query time dropped to 3 seconds.
I don't think I'll ever have that feeling again. I was a f%$^ing hero.
I answered this on a previous question ("Biggest performance improvement you’ve had with the smallest change?"), however, it's such a simple improvement, yet one that is and can be so often overlooked, that it bears repeating:
Indexes!
Well, we had a similar thing where we had a slow query on an Open Freeway site. The answer wasn't so much optimising the query as optimising the server it was on. We increased the cache limit and cache size so that the server would not run the query so often.
This has massively increased the speed of the system and ultimately made the client happy! :)
Not quite the calibre of the original poster's optimisation skills, but it definitely made us buzz!
Splitting one ridiculously long stored procedure, which did a great deal of "if it's after 5 pm, return this bit of SQL" and took in excess of 20 seconds to run, into a set of stored procedures called by one controlling sp, got the times down to sub-second responses.
One word: dynamic queries.
If you are searching with large numbers of optional parameters, you can leave the unused ones out of the SQL string. This has sped up my queries dramatically and with relative ease.
CREATE PROCEDURE dbo.qryDynamic
(
    @txtParameter1 nvarchar(255),
    @txtParameter2 nvarchar(255)
)
AS
BEGIN
    DECLARE @SQL nvarchar(2500)
    DECLARE @txtJoin nvarchar(50)

    SET @txtJoin = ' WHERE '

    SET @SQL = 'SELECT qry_DataFromAView.*
    FROM qry_DataFromAView'

    IF @txtParameter1 IS NOT NULL
    BEGIN
        SET @SQL = @SQL + @txtJoin + ' (Field1 LIKE N''%'' + @dynParameter1 + N''%'') '
        SET @txtJoin = ' AND '
    END

    IF @txtParameter2 IS NOT NULL
    BEGIN
        SET @SQL = @SQL + @txtJoin + ' (Field2 LIKE N''%'' + @dynParameter2 + N''%'') '
        SET @txtJoin = ' AND '
    END

    SET @SQL = @SQL + ' ORDER BY Field2'

    EXEC sp_executesql @SQL,
         N'@dynParameter1 nvarchar(255), @dynParameter2 nvarchar(255)',
         @dynParameter1 = @txtParameter1,
         @dynParameter2 = @txtParameter2
END
GO
I had a warm glow after being able to use a Cross Tab query to scrap oodles (technical term) of processing and lookups...
Usually it's simple things like adding indexes or only getting the data you need, but when you find a problem that fits an answer you've seen before... good times!
(Halfway off topic)
I rewrote a 3000 line stored procedure into LINQ2SQL/C#.
The stored procedure juggled lots of data between a bunch of unindexed temp tables.
The LINQ2SQL version read the data into a couple of Dictionaries and ILookups and then I joined the data manually with plain old C# code.
The stored procedure took about 20 seconds and the LINQ2SQL/C# version took 0.2 seconds.