Reverse Script Execution - sql

Is it possible to reverse the execution order of a script/stored proc?
For example:
SELECT 1
SELECT 2
SELECT 3
Returns:
3
2
1
I am open to workarounds, creative ideas, magic from Mordor and 'not possible' (though I'd prefer Gandalf to step in).
Background:
I am writing a number of scripts to identify where problems have occurred in a number of stored procedures. (They are effectively just SELECT statements.) Ideally, I would like to write these checks in the order in which they appear in the stored procedure (for readability), but I would want them to be executed in reverse order.
EDIT:
The types of operations these stored procedures are performing are INSERTs and UPDATEs (they are part of a larger ETL procedure). So when a problem occurs, I would like to check how far down the stored proc it got by checking how many records are still left to be updated or inserted; once I know where it stopped, I can remove the records already inserted and insert the rest. (This is less important for an update, since I can generally just run it again.)
Effectively I want my script to execute queries in a LIFO fashion.
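One possible workaround (a minimal sketch, not a confirmed solution from this thread): keep the checks in readable, top-to-bottom order as text in a table variable, then execute them in reverse with dynamic SQL. The check texts below are just placeholders.
DECLARE @checks TABLE (seq int IDENTITY(1,1), check_sql nvarchar(max));

INSERT INTO @checks (check_sql) VALUES
    (N'SELECT 1'),   -- first check as written
    (N'SELECT 2'),
    (N'SELECT 3');   -- last check as written, executed first

DECLARE @seq int, @sql nvarchar(max);
SELECT @seq = MAX(seq) FROM @checks;

WHILE @seq IS NOT NULL
BEGIN
    SELECT @sql = check_sql FROM @checks WHERE seq = @seq;
    EXEC sp_executesql @sql;                                -- run the current check
    SELECT @seq = MAX(seq) FROM @checks WHERE seq < @seq;   -- step backwards through the list
END
The downside is that the checks live in strings, so you lose syntax checking and highlighting while writing them.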

Related

Pre-execute a query when any Stored Procedure is called

Our enterprise's database is 20+ years old, and it's filled with junk, so we're planning to start deleting tables and Stored Procedures. The problem is that we don't know exactly which of those are unused, so we thought about doing some research to spot them.
I tried this answer's solution, but I think the queries it returns are only the ones in the system cache.
I have an idea of how to do it, but I don't know if it's possible:
- Create a system table with 3 columns: Stored Procedure name, number of executions, and date of last call
- The tricky part: every time a Stored Procedure is executed, perform a query to insert into/update that table.
To avoid having to modify ALL our Stored Procedures (there are easily 600+ of them), I thought of adding a Database Trigger, but it turns out it's only possible to attach them to tables, not Stored Procedures.
My question is, is there any way to pre-execute a query when ANY Stored Procedure is called?
EDIT: Our database is SQL Server.
I'm aware that I asked this question a while ago, but I'll post what I've found, so anyone who stumbles with it can use it.
When the question was asked, my goal was to retrieve the number of times all Stored Procedures were executed, to try to get rid of the unused ones.
While this is not perfect, as it doesn't show the date of last execution, I found this query, which retrieves all Stored Procedures in all databases and displays the number of times each has been executed since its creation:
SELECT
    DB_NAME(st.dbid) [Base de Datos],
    OBJECT_SCHEMA_NAME(st.objectid, st.dbid) [Schema],
    OBJECT_NAME(st.objectid, st.dbid) [USP],
    MAX(cp.usecounts) [Total Ejecuciones]
FROM
    sys.dm_exec_cached_plans cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE
    DB_NAME(st.dbid) IS NOT NULL
    AND cp.objtype = 'proc'
GROUP BY
    cp.plan_handle,
    DB_NAME(st.dbid),
    OBJECT_SCHEMA_NAME(objectid, st.dbid),
    OBJECT_NAME(objectid, st.dbid)
ORDER BY
    MAX(cp.usecounts)
I found this script on this webpage (it's in Spanish). It also has 2 more useful scripts on similar topics.
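If the date of the last execution matters, a related cache-based DMV, sys.dm_exec_procedure_stats (SQL Server 2008 and later), exposes both an execution count and a last execution time. Like the query above, it only covers procedures whose plans are still in the cache, so treat the numbers as a lower bound. A minimal sketch:
SELECT
    DB_NAME(ps.database_id) [Database],
    OBJECT_SCHEMA_NAME(ps.object_id, ps.database_id) [Schema],
    OBJECT_NAME(ps.object_id, ps.database_id) [Procedure],
    ps.execution_count [Executions],
    ps.cached_time [Plan Cached],
    ps.last_execution_time [Last Execution]
FROM
    sys.dm_exec_procedure_stats ps
ORDER BY
    ps.last_execution_time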
I used this script (subsequently improved)
https://chocosmith.wordpress.com/2012/12/07/tsql-recompile-all-views-and-stored-proceedures-and-check-for-error/#more-571
To run through all of your objects and find the ones that are no longer valid.
If you want I will post my enhanced version which fixes a few things.
Then create a new schema (I call mine recycle) and move those invalid objects in there.
Now run it again.
You may end up moving a whole bunch of non-functional objects out.
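For reference, the "move it into a recycle schema" step can be done with ALTER SCHEMA ... TRANSFER; the object names below are only examples:
-- Create the holding schema once.
CREATE SCHEMA recycle;
GO
-- Park a suspect object there instead of dropping it outright.
ALTER SCHEMA recycle TRANSFER dbo.usp_SomeOldProcedure;
GO
-- If nothing complains after a while, drop it for good.
DROP PROCEDURE recycle.usp_SomeOldProcedure;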

Can you run a portion of a script in parallel, based off the results of a select statement?

I have a portion of code which, when simplified, looks something like this:
select @mainlooptableid = min(uid)
from queueofids with (nolock)
while (@mainlooptableid is not null)
begin
-- A large block of code that does several things depending on the nature of @mainlooptableid
-- .
-- .
-- .
-- End of this block's main logic
delete from queueofids where uid = @mainlooptableid
select @mainlooptableid = min(uid)
from queueofids with (nolock)
end
I would like to be able to run the segment of code that's inside the while loop in parallel for all uids inside the queueofids table. Based on what happens inside the loop, I can guarantee that they will not interfere with each other in any way if they were to run concurrently, so logically it seems perfectly safe for it to run like this. The real question is whether there is any way to get SQL Server to run a portion of code for all values in there?
NOTE: I did think about generating a temp table with a series of created SQL statements stored as strings, where each one is identical except for the @mainlooptableid value. But even if I have this table of SQL statements ready to execute, I'm not sure how I would get all of these statements to execute concurrently.
I can't think of a way to do this within a single SQL script; scripts are procedural. If you want to explore this idea, you'd probably need to involve some form of multi-threaded application which would handle the looping aspect, and open a thread to hand off the parallelized portion of your current script. Not impossible, but it does introduce some complexity.
If you want to do this all in SQL, then you'll have to rewrite the code to eliminate the loop. As noted in the comments above, SQL Server is set-based, which means that it handles a certain amount of parallelization by doing work "all at once" against a set.
No, there is no way to get SQL statements in the same script to run in parallel.
The closest thing to it is to try to create a set-based way of handling them, instead of running them in a loop.
Be aware that running in parallel will not necessarily make it faster if the threads are competing for the same resources.
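To make the set-based idea above concrete, here is a purely illustrative sketch. The real loop body wasn't shown, so assume, hypothetically, that the per-uid work is an insert into some target table; the table and column names are made up.
-- One statement processes every queued uid at once instead of looping.
INSERT INTO sometargettable (uid, processed_on)
SELECT q.uid, GETDATE()
FROM queueofids q;

-- Then clear the queue in a single set-based delete.
DELETE FROM queueofids;
SQL Server can parallelize within each of these statements on its own if the plan warrants it.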
I don't think SQL Server will run separate statements in parallel, but it will parallelize execution within a single statement.
Most programming frameworks have parallel constructs. In .NET, for example, this would be rather straightforward: create a procedure that takes @mainlooptableid as a parameter and call it in parallel.

How can a stored proc have multiple execution plans?

I am working with MS SQL Server 2008 R2. I have a stored procedure named rpt_getWeeklyScheduleData. This is the query I used to look up its execution plan in a specific database:
select
*
from
sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
where
OBJECT_NAME(st.objectid, st.dbid) = 'rpt_getWeeklyScheduleData' and
st.dbid = DB_ID()
The above query returns me 9 rows. I was expecting 1 row.
This stored procedure has been modified multiple times, so I believe SQL Server has been building a new execution plan for it whenever it was modified and run. Is that the correct explanation? If not, then how can you explain this?
Also, is it possible to see when each plan was created? If yes, then how?
UPDATE:
This is the stored proc's signature:
CREATE procedure [dbo].[rpt_getWeeklyScheduleData]
(
@a_paaipk int,
@a_location_code int,
@a_department_code int,
@a_week_start_date varchar(12),
@a_week_end_date varchar(12),
@a_language_code int,
@a_flag int
)
as
begin
...
end
The stored proc is long; it has only 2 if conditions, both on the @a_flag parameter.
if @a_flag = 0
begin
...
end
if @a_flag = 1
begin
...
end
Depending on the nature of the stored procedure (which wasn't provided) this is very possible for any number of reasons (most likely not limited to below):
Does the proc use a lot of "if this, then this select; else this other select/update" branching?
Does the proc contain dynamic sql?
Are you executing the SP from both the web and SSMS? Then you're likely executing the SP with different connection settings (see the sketch after this list).
Does the stored proc have parameters? Sometimes a difference in parameters can cause one execution plan to be terrible for a specific set, so a different plan is used.
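One quick way to check the "different connection settings" case is to compare the set_options plan attribute of each cached plan; differing values mean the plans were compiled under different SET options. A sketch using the same DMVs as in the question:
SELECT cp.plan_handle,
       pa.value AS set_options          -- differing values => different connection settings
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) pa
WHERE OBJECT_NAME(st.objectid, st.dbid) = 'rpt_getWeeklyScheduleData'
  AND st.dbid = DB_ID()
  AND pa.attribute = 'set_options'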
Going to try an analogy which might help... maybe...
Say you have a stored procedure for your weekend shopping.
You typically need to get groceries, sometimes an air filter, and even less often a big pack of something that needs replacing 4 times a year.
The grocery store can handle groceries, and is the closest to your house (5 minutes).
Target can handle the air filter and groceries, but add 25 minutes travel time.
"Big place of everything" has everything you'd possibly need, but is an hours drive away.
So here, depending on your parameters #needsAirFilter and #needsBigPackOfSomething could vastly change your "execution plan" of your stored procedure of "shopping".
If #needsAirFilter and #needsBigPackOfSomething is false, there's no reason to make the 30 minute or hour drive, as everything you need is at the grocery store.
One a month, #needsAirFilter is true, in that case we need to go to Target, as the grocery store's execution plan is insufficient.
4 times a year #needsBigPackOfSomething is true, and we need to make the hour drive to get the big pack of something, while grabbing groceries, and airfilter since we're there.
Sure... we could make the hour drive every time to get groceries, and the other things when needed (imagine single execution plan). But this is in no way the most efficient way to do it. In instances like this, we have different execution plans for what information/goods are actually needed.
No idea if that helps... but I had fun :D
Typically SQL Server will generate a new query plan depending on the values of the parameters being passed in (this can determine what indexes, if any, it will use). If indexes are added, changed or updated on the tables/views used in the proc, SQL Server may decide that it is more effective to use one or more indexes that it previously ignored. The more involved the SQL in the proc, the more work SQL Server does as it attempts to optimize the query. If the data changes (suddenly you have many more customers in NJ and there is a query and index on state), it may decide it's going to use that index, and the query plan changes. Any schema change to the tables or views involved in the query will also invalidate an existing plan and result in a new plan being generated.
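As for seeing when each plan was created: the plan cache records this per plan. A sketch (assuming the statement-level stats are still in cache), joining the DMVs from the question to sys.dm_exec_query_stats:
SELECT cp.plan_handle,
       MIN(qs.creation_time)       AS plan_created,
       MAX(qs.last_execution_time) AS last_executed
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
INNER JOIN sys.dm_exec_query_stats qs ON qs.plan_handle = cp.plan_handle
WHERE OBJECT_NAME(st.objectid, st.dbid) = 'rpt_getWeeklyScheduleData'
  AND st.dbid = DB_ID()
GROUP BY cp.plan_handle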

Debug Insert and temporal tables in SQL 2012

I'm using SQL Server 2012, and I'm debugging a stored procedure that does some INSERT INTO #temporal (a temp table) ... SELECT statements.
Is there any way to view the data selected by the command (the subquery of the INSERT INTO)?
Is there any way to view the data inserted and/or the temp table where the insert made the changes?
It doesn't matter if it's the total set of rows rather than one by one.
UPDATE:
Requirements from AT Compliance and Company Policy require that any modification go through the test process, and it's probable this will be managed by another team. Is there any way to avoid any change to the script?
The main idea is that the AT user checks the outputs on their own desktop and copies and pastes them, without making any change to the environment or the product.
Thanks and kind regards.
If I understand your question correctly, then take a look at the OUTPUT clause:
Returns information from, or expressions based on, each row affected by an INSERT, UPDATE, DELETE, or MERGE statement. These results can be returned to the processing application for use in such things as confirmation messages, archiving, and other such application requirements.
For instance:
INSERT INTO #temporaltable
OUTPUT inserted.*
SELECT *
FROM ...
Will give you all the rows from the INSERT statement that was inserted into the temporal table, which were selected from the other table.
Is there any reason you can't just do this: SELECT * FROM #temporal? (And debug it in SQL Server Management Studio, passing in the same parameters your application is passing in).
It's a quick and dirty way of doing it, but one reason you might want to do it this way over the other (cleaner/better) answer, is that you get a bit more control here. And, if you're in a situation where you have multiple inserts to your temp table (hopefully you aren't), you can just do a single select to see all of the inserted rows at once.
I would still probably do it the other way though (now I know about it).
I know of no way to do this without changing the script. However, for the future, you should never write a complex stored proc or script without a debug parameter that allows you to put in the data tests you will want. Make it the last parameter with a default value of 0 and you won't even have to change your current code that calls the proc.
Then you can add statements like the one below everywhere you want to check intermediate results. Further, in debug mode you might always roll back any transactions so that a bug will not affect the data.
IF @debug = 1
BEGIN
SELECT * FROM #temp
END
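Pulled together, the pattern described above might look like this minimal sketch; the procedure name, parameter, and table are hypothetical, not taken from the question:
CREATE PROCEDURE dbo.usp_LoadExample
    @a_location_code int,
    @debug bit = 0          -- last parameter, default 0: existing callers need no change
AS
BEGIN
    CREATE TABLE #temporal (location_code int);

    BEGIN TRANSACTION;

    INSERT INTO #temporal (location_code)
    SELECT @a_location_code;

    IF @debug = 1
        SELECT * FROM #temporal;   -- intermediate results, only shown in debug mode

    IF @debug = 1
        ROLLBACK TRANSACTION;      -- in debug mode, leave the data untouched
    ELSE
        COMMIT TRANSACTION;
END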

MS SQL Server 2005 - Stored Procedure "Spontaneously Breaks"

A client has reported repeated instances of very strange behaviour when executing a stored procedure.
They have code which runs off a cached transposition of a volatile dataset. A stored proc was written to reprocess the dataset on demand if:
1. The dataset had changed since the last reprocessing
2. The dataset has been unchanged for 5 minutes
(The second condition stops massive repeated recalculation during times of change.)
This worked fine for a couple of weeks, the SP was taking 1-2 seconds to complete the re-processing, and it only did it when required. Then...
The SP suddenly "stopped working" (it just kept running and never returned)
We changed the SP in a subtle way and it worked again
A few days later it stopped working again
Someone then said "we've seen this before, just recompile the SP"
With no change to the code we recompiled the SP, and it worked
A few days later it stopped working again
This has now repeated many, many times. The SP suddenly "stops working", never returning and the client times out. (We tried running it through management studio and cancelled the query after 15 minutes.)
Yet every time we recompile the SP, it suddenly works again.
I haven't yet tried WITH RECOMPILE on the appropriate EXEC statements, but I don't particularly want to do that anyway. It gets called hundreds of times an hour and normally does nothing (it only reprocesses the data a few times a day). If possible I want to avoid the overhead of recompiling what is a relatively complicated SP just to avoid something which "shouldn't" happen...
Has anyone experienced this before?
Does anyone have any suggestions on how to overcome it?
Cheers,
Dems.
EDIT:
The pseudo-code would be as follows:
read "a" from table_x
read "b" from table_x
If (a < b) return
BEGIN TRANSACTION
DELETE table_y
INSERT INTO table_y <3 selects unioned together>
UPDATE table_x
COMMIT TRANSACTION
The selects are "not pretty", but when executed in-line they execute in no time. Including when the SP refuses to complete. And the profiler shows it is the INSERT at which the SP "stalls"
There are no parameters to the SP, and sp_lock shows nothing blocking the process.
This is the footprint of parameter-sniffing. Yes, first step is to try RECOMPILE, though it doesn't always work the way that you want it to on 2005.
Update:
I would try statement-level RECOMPILE on the INSERT anyway, as this might be a statistics problem (oh yeah, check that automatic statistics updating is on).
If this does not seem to fit parameter sniffing, then compare the actual query plan from when it works correctly and from when it is running forever (use the estimated plan if you cannot get the actual, though actual is better). You are looking to see if the plan changes or not.
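For reference, a statement-level recompile is just an OPTION (RECOMPILE) hint on the insert itself. A rough sketch based on the pseudo-code above; the source tables and column are placeholders:
INSERT INTO table_y (some_column)
SELECT some_column FROM table_a
UNION ALL
SELECT some_column FROM table_b
UNION ALL
SELECT some_column FROM table_c
OPTION (RECOMPILE);   -- compile a fresh plan for this statement on every run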
I totally agree with the parameter sniffing diagnosis. If you have input parameters to the SP which are varying (or even if they aren't varying) - be sure to mask them with a local variable and use the local variable in the SP.
You can also use the WITH RECOMPILE if the set is changing but the query plan is no longer any good.
In SQL Server 2008, you can use the OPTIMIZE FOR UNKNOWN feature.
Also, if your process involves populating a table and then using that table in another operation, I recommend breaking the process up into separate SPs and calling them individually WITH RECOMPILE. I think the plans generated at the outset of the process can sometimes be very poor (so poor as not to complete) when you populate a table and then use the results of that table to carry out an operation. Because at the time of the initial plan, the table was a lot different than after the initial insert.
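The two techniques mentioned in this answer look roughly like the sketch below. Note that the procedure in the question has no parameters, so this is a general-purpose illustration only; the procedure, parameter, and table names are hypothetical.
CREATE PROCEDURE dbo.usp_CustomersByState
    @p_state char(2)
AS
BEGIN
    -- 1) Mask the parameter with a local variable so the optimizer
    --    cannot sniff the value passed in when the plan is compiled.
    DECLARE @state char(2);
    SET @state = @p_state;

    SELECT *
    FROM dbo.customers
    WHERE state = @state;

    -- 2) Or, on SQL Server 2008 and later, keep the parameter and ask
    --    for a plan based on average density instead of the sniffed value.
    SELECT *
    FROM dbo.customers
    WHERE state = @p_state
    OPTION (OPTIMIZE FOR UNKNOWN);
END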
As others have said, something about the way the data or the source table statistics are changing is causing the cached query plan to go stale.
WITH RECOMPILE will probably be the quickest fix - use SET STATISTICS TIME ON to find out what the recompilation cost actually is before dismissing it out of hand.
If that's still not an acceptable solution, the best option is probably to try to refactor the insert statement.
You don't say whether you're using UNION or UNION ALL in your insert statement. I've seen INSERT INTO with UNION produce some bizarre query plans, particularly on pre-SP2 versions of SQL 2005.
- Raj's suggestion of dropping and recreating the target table with SELECT INTO is one way to go.
- You could also try selecting each of the three source queries into its own temporary table, then UNION those temp tables together in the insert.
- Alternatively, you could try a combination of these suggestions: put the results of the union into a temporary table with SELECT INTO, then insert from that into the target table.
I've seen all of these approaches resolve performance problems in similar scenarios; testing will reveal which gives the best results with the data you have.
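As a concrete illustration of the "stage it in a temp table first" variant described above, a sketch with placeholder source tables and column:
-- Stage the union in a temp table (SELECT ... INTO applies to the whole union).
SELECT some_column
INTO #staging
FROM table_a
UNION ALL
SELECT some_column FROM table_b
UNION ALL
SELECT some_column FROM table_c;

-- Then load the target from the staged rows.
INSERT INTO table_y (some_column)
SELECT some_column FROM #staging;

DROP TABLE #staging;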
Obviously changing the stored procedure (by recompiling) changes the circumstances that led to the lock.
Try to log the progress of your SP as described here or here.
I would agree with the answer given above in a comment: this sounds like an unclosed transaction, particularly if you are still able to run the select statements from query analyser.
Sounds very much like there is an open transaction with a pending delete for table_y and the insert can't happen at this point.
When your SP locks up, can you perform an insert into table_y?
Do you have an index maintenance job?
Are your statistics up to date? One way to tell is examine the estimated and actual query plans for large variations.
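A quick way to see whether statistics are stale is to check when they were last updated on the tables involved; substitute the real table names for table_x and table_y:
SELECT OBJECT_NAME(s.object_id)             AS table_name,
       s.name                               AS stats_name,
       STATS_DATE(s.object_id, s.stats_id)  AS last_updated
FROM sys.stats s
WHERE s.object_id IN (OBJECT_ID('table_x'), OBJECT_ID('table_y'));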
As others have said, this sounds very likely to be an uncommitted transaction.
My best guess:
You'll want to make sure that table_y can be deleted completely and quickly.
If there are other stored procedures or external pieces of code that ever hold transactions on this table, you may be waiting forever. (They may error out and never close the transaction)
Another note: try using TRUNCATE if possible. It uses fewer resources than a DELETE with no WHERE clause:
truncate table table_y
Also, once an error happens within your OWN transaction, it will cause all following calls (every 5 minutes apparently) to "hang", unless you handle your error:
begin tran
begin try
-- do normal stuff
commit
end try
begin catch
rollback
end catch
The very first error is what will give you information about the actual error. Seeing it hang in your own subsequent tests is just a secondary effect.
If you are doing these steps:
DELETE table_y
INSERT INTO table_y <3 selects unioned together>
You might want to try this instead
DROP TABLE table_y
SELECT INTO table_y <3 selects unioned together>