Is there an 'Are you sure you want to continue?' SQL command? - sql

We have many SQL Server scripts. But there are a few critical scripts that should only be run at certain times under certain conditions. Is there a way to protect us from ourselves with some kind of popup warning?
i.e. when these critical scripts are run, is there a command to ask the user whether they want to continue?
(We've already made some rollback scripts to handle these, but it's better if they are not accidentally run at all.)

No, there is no such thing.
You can write an application (a Windows service?) that will only run the scripts as and when they should be run.
The fact that you are even asking the question shows that this is something that should be automated, the sooner the better.
You can mitigate the problem in the meantime by using IF to test for these conditions and only executing if they are met. If this is a series of scripts, you should wrap them in transactions to boot.
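For example, a guard of roughly this shape (the control table and condition are invented for illustration) makes the script bail out unless the preconditions hold, and the transaction keeps a partial run from committing:

```sql
-- Hypothetical precondition check: only run inside a maintenance window
-- recorded in a control table.
IF NOT EXISTS (SELECT 1 FROM dbo.MaintenanceWindow
               WHERE GETDATE() BETWEEN StartTime AND EndTime)
BEGIN
    RAISERROR('Preconditions not met; refusing to run.', 16, 1);
    RETURN;  -- exits the batch before any dangerous statement runs
END;

BEGIN TRANSACTION;
    -- ... the critical work goes here ...
COMMIT TRANSACTION;
```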

One work-around you can use is the following, which would require you to update a value in another table:
CREATE PROC dbo.MyProc
AS
WHILE (SELECT GoBit FROM dbo.OKToRun) = 0
BEGIN
    RAISERROR('Waiting for GoBit to be set!', 0, 1) WITH NOWAIT
    WAITFOR DELAY '00:00:10'
END
UPDATE dbo.OKToRun
SET GoBit = 0
... DO STUFF ...
This will require you to, in another spid or session, update that table manually before it'll proceed.
This gets a lot more complicated with multiple procedures, so it will only work as a very short-term workaround.

SQL is a query language; it does not have the ability to accept user input.
The only thing I can think of would be to have it @variable driven. The first part should set @shouldRunSecond = 1, and the second part should be wrapped in a
IF @shouldRunSecond = 1
BEGIN
    ...
END
The second portion will be skipped if not desired.

The question is: where are these scripts located?
If you have them as .sql files that you open every time before you run them, then you can simply add some "magic numbers" at the beginning of the script that you will have to work out every time before you run it. In the example below, each time before you run your script you have to put the correct day and minute into the IF condition; otherwise the script will not run.
IF DATEPART(dd, GETDATE()) != 5 OR DATEPART(mi, GETDATE()) != 43
BEGIN
    RAISERROR ('You have accidentally tried to run your dangerous script!!!', 16, 1);
    RETURN;
END
--Some dangerous actions
DROP DATABASE MostImportantCustomer;
UPDATE Personal SET Bonus = 0 WHERE UserName = SUSER_SNAME();
If your scripts reside in stored procedures, you can add some kind of "I am sure, I know what I am doing" parameter, where you always have to pass, for example, the minute multiplied by the day.
Hope it helps

I have seen batch scripts containing SQLCMD ... commands, so instead of running the .sql script from code or Management Studio, you could add a prompt to the wrapping batch script.

I have (on limited occasion) created an @AreYouSure parameter that must be passed into a stored procedure, then put comments next to the declaration in the stored procedure explaining the danger of running said procedure.
At least that way, no RANDOs will wander into your environment and kick off stored procedures when they don't understand the consequences. The parameter could be worked into an IF statement that checks its value, or it doesn't really have to be used at all, but if it must be passed, then they have to at least figure out what to pass.
If you use this too much, though, others may just start passing a 'Y' or a 1 into every stored procedure without reading the comments. You could switch up the datatypes, but at some point it becomes more work to maintain this scheme than it is worth. That is why I use it on limited occasion.
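A minimal sketch of this pattern (the procedure name, parameter value, and message are all made up for illustration):

```sql
-- Hypothetical dangerous procedure: the caller must pass a specific value
-- to prove they have read the warning below.
CREATE PROCEDURE dbo.PurgeOldOrders
    @AreYouSure VARCHAR(50) = NULL  -- pass 'YES-I-HAVE-READ-THE-COMMENTS' to run
AS
BEGIN
    -- WARNING: this permanently deletes order history older than the retention window.
    IF ISNULL(@AreYouSure, '') <> 'YES-I-HAVE-READ-THE-COMMENTS'
    BEGIN
        RAISERROR('Refusing to run: read the header comments and pass @AreYouSure.', 16, 1);
        RETURN;
    END;
    -- ... the dangerous work goes here ...
END;
```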

Related

Calling a series of stored procedures sequentially SQL

Is it possible (using only T-SQL, no C# code) to write a stored procedure that executes a series of other stored procedures without passing them any parameters?
What I mean is, for example, when I want to update a row in a table that has a lot of required columns: I want to run the first stored procedure to check whether the ID exists or not; if it does, I want to call the update stored procedure and pass the ID, but (using the window that SQL Server Management Studio shows after executing each stored procedure) get the rest of the values from the user.
When I'm using the EXEC command, I need to pass all the parameters, but is there any other way to call the stored procedure without passing those parameters? (It's easy to do in C# or VB; I mean using only SQL syntax.)
I think you are asking "can you prompt for user input in a sql script?". No not really.
You could actually do it with seriously hack-laden calls to the Windows API. And it would almost certainly have serious security problems.
But just don't do this. Write a program in C#, VB, Access, PowerScript, Python or whatever makes you happy. Use a tool appropriate to the task.
-- ADDED
Just so you know how ugly this would be. Imagine using the Flash component as an ActiveX object and using Flash to collect input from the user -- now you are talking about the kind of hacking it would be. Writing CLR procs, etc. would be just as big of a hack.
You should be cringing right now. But it gets worse: if the T-SQL is running on the SQL Server, it would likely prompt or crash on the server console instead of running on your workstation. You should definitely be cringing by now.
If you are coming from Oracle's ACCEPT command, the equivalent is just not available in T-SQL -- nor should it be, and may it never be.
Right after reading your comment, now I can understand what you are trying to do. You want to make a call to the procedure and then ask the end user to pass values for the parameters.
This is a very, very bad approach, especially since you have mentioned you will be making changes to the database with this SP.
You should get all the values from your end users before you make a call to your database (execute the procedure). Only then make the call: open a transaction, commit or roll back as soon as possible, and get out of there, since it will be holding locks on your resources.
Imagine you make a call to the database (execute the SP), the SP opens a transaction and now waits for the end user to pass values, and your end user decides to go away for a cig. This will leave your resources locked, and you will have to go in and kill the process yourself in order to let other users get to the database/rows.
Solution
At the application level (C#, VB), get all the values from the end users, and only when you have all the required information pass these values to the SP, execute it, and get out of there ASAP.
You can specify the parameters by prefixing the name of the parameter with the @ sign. For example, you can call an SP like this:
EXEC MyProc @Param1='This is a test'
But, if you are asking whether you can get away with NOT providing required parameters, the answer is NO. Required is required. You can make them optional by providing a default value in the declaration of the SP. Then you can either not pass the value or call it like this:
EXEC MyProc @Param1=DEFAULT
--OR
EXEC MyProc DEFAULT

Can the use or lack of use of "GO" in T-SQL scripts affect the outcome?

We have an SSIS package that ran in production on a SQL 2008 box with a 2005 compatibility setting. The package contains a SQL Task and it appears as though the SQL at the end of the script did not run.
The person who worked on that package noted before leaving the company that the package needed "GO"s between the individual SQL commands to correct the issue. However, when testing in development on SQL Server 2008 with 2008 compatibility, the package worked fine.
From what I know, GO places commands in batches, where commands are sent to the database provider batch by batch, for efficiency's sake. I am thinking that the only way GO should affect the outcome is if there was an error somewhere in that script above it. I can imagine GO in that case, and only that case, affecting the outcome. However, we have seen no evidence of any errors logged.
Can someone suggest to me whether or not GO is even likely related to the problem? Assuming no error was encountered, my understanding of the "GO" command suggests that its use or lack of use is most likely unrelated to the problem.
The GO keyword is, as you say, a batch separator that is used by the SQL Server management tools. It's important to note, though, that the keyword itself is parsed by the client, not the server.
Depending on the version of SQL Server in question, some things do need to be placed into distinct batches, such as creating and using a database. There are also some operations that must take place at the beginning of a batch (like the use statement), so using these keywords means that you'll have to break the script up into batches.
A couple of things to keep in mind about breaking a script up into multiple batches:
When an error is encountered within a batch, execution of that batch stops. However, if your script has multiple batches, an error in one batch will only stop that batch from executing; subsequent batches will still execute
Variables declared within a batch are available to that batch only; they cannot be used in other batches
If the script is performing nothing but CRUD operations, then there's no need to break it up into multiple batches unless any of the above behavioral differences is desired.
All of your assumptions are correct.
One thing that I've experienced is that if you have a batch of statements that is a pre-requisite for another batch, you may need to separate them with a GO. One example may be if you add a column to a table and then update that column (I think...). But if it's just a series of DML queries, then the absence or presence of GO shouldn't matter.
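A concrete case where this matters (table and column names invented): referencing a column that the same batch just added fails when the batch is compiled, while splitting the two statements with GO works:

```sql
-- Fails: the whole batch is compiled before it runs, and NewCol
-- does not exist yet when the UPDATE is compiled.
ALTER TABLE dbo.Widget ADD NewCol INT;
UPDATE dbo.Widget SET NewCol = 0;   -- error: Invalid column name 'NewCol'
GO

-- Works: each batch is compiled separately, so the second batch
-- sees the column added by the first.
ALTER TABLE dbo.Widget ADD NewCol INT;
GO
UPDATE dbo.Widget SET NewCol = 0;
GO
```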
I've noticed that if you set up any variables in the script their state (and maybe the variables themselves) are wiped after a 'GO' statement so they can't be reused. This was certainly the case on SQL Server 2000 and I presume it will be the case on 2005 and 2008 as well.
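That scoping behavior is easy to demonstrate; local variables simply do not survive the batch boundary:

```sql
DECLARE @counter INT;
SET @counter = 1;
PRINT @counter;   -- works: same batch as the declaration
GO
PRINT @counter;   -- error: Must declare the scalar variable "@counter"
```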
Yes, GO can affect outcome.
GO between statements will allow execution to continue if there is an error in between. For example, compare the output of these two scripts:
SELECT * FROM table_does_not_exist;
SELECT * FROM sys.objects;
...
SELECT * FROM table_does_not_exist;
GO
SELECT * FROM sys.objects;
As others identified, you may need to issue GO if you need changes applied before you work on them (e.g. a new column) but you can't persist local or table variables across GO...
Finally, note that GO is not a T-SQL keyword, it is a batch separator. This is why you can't put GO in the middle of a stored procedure, for example ... SQL Server itself has no idea what GO means.
EDIT: however, one answer stated that transactions cannot span batches, which I disagree with:
CREATE TABLE #foo(id INT);
GO
BEGIN TRANSACTION;
GO
INSERT #foo(id) SELECT 1;
GO
SELECT @@TRANCOUNT; -- 1
GO
COMMIT TRANSACTION;
GO
DROP TABLE #foo;
GO
SELECT @@TRANCOUNT; -- 0

How can I tell what the parameter values are for a problem stored procedure?

I have a stored procedure that causes blocking on my SQL server database. Whenever it does block for more than X amount of seconds we get notified with what query is being run, and it looks similar to below.
CREATE PROC [dbo].[sp_problemprocedure] (
@orderid INT
--procedure code
How can I tell what the value is for @orderid? I'd like to know the value because this procedure will run 100+ times a day but only causes blocking a handful of times, and if we can find some sort of pattern between the order IDs, maybe I'd be able to track down the problem.
The procedure is being called from a .NET application if that helps.
Have you tried printing it from inside the procedure?
http://msdn.microsoft.com/en-us/library/ms176047.aspx
If it's being called from a .NET application you could easily log the parameter being passed from the .NET app, but if you don't have access, you can also use SQL Server Profiler. Filters can be set on the command type (i.e. procs only) as well as the database that is being hit; otherwise you will be overwhelmed with all the information a trace can produce.
Link: Using SQL Server Profiler
Rename the procedure.
Create a logging table.
Create a new procedure (same signature/params) which calls the original, but first logs the params and a starting timestamp, and after the call finishes logs the end timestamp.
Create a synonym for this new proc with the name of the original.
Now you have a log for all calls made by whatever app...
You can disable/enable the logging anytime by simply redefining the synonym to point to the logging wrapper or to the original...
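The steps above can be sketched roughly as follows (all object names here are invented; assume the original proc has already been renamed to dbo.MyProc_Orig):

```sql
-- Step 2: a logging table for parameters and timings.
CREATE TABLE dbo.ProcCallLog (
    OrderId   INT,
    StartedAt DATETIME,
    EndedAt   DATETIME
);
GO
-- Step 3: a wrapper with the same signature that logs around the real call.
CREATE PROCEDURE dbo.MyProc_Logged
    @orderid INT
AS
BEGIN
    DECLARE @start DATETIME = GETDATE();
    EXEC dbo.MyProc_Orig @orderid = @orderid;   -- the renamed original
    INSERT dbo.ProcCallLog (OrderId, StartedAt, EndedAt)
    VALUES (@orderid, @start, GETDATE());
END;
GO
-- Step 4: callers keep using the original name, unaware of the wrapper.
CREATE SYNONYM dbo.MyProc FOR dbo.MyProc_Logged;
```

To turn logging off, drop and recreate the synonym pointing at dbo.MyProc_Orig instead.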
The easiest way would be to run a profiler trace. You'll want to capture calls to the stored procedure.
Really though, that is only going to tell you part of the story. Personally I would start with the code. Try to batch big updates into smaller batches. Try to avoid long-running explicit transactions if they're not necessary. Look at your triggers (if any) and cascading foreign keys and make sure those are efficient.
The easiest way is to do the following:
1) in .NET, grab the date-time just before running the procedure
2) in .NET, after the procedure is complete, grab the date-time
3) in .NET, do some date-time math, and if it is "slow", write to a file (log) those start and end date-times, user info, all the parameters, etc.

How to troubleshoot a stored procedure?

What is the best way of troubleshooting a stored procedure in SQL Server? I mean, where do you start, etc.?
Test each SELECT statement (if any) outside of your stored procedure to see whether it returns the expected results;
Make INSERT and UPDATE statements as simple as possible;
Try to test INSERTs and UPDATEs outside of your SP so that you can check they give the expected results;
Use the debugger provided with SSMS Express 2008.
Visual Studio 2008 / 2010 has a debug facility. Simply connect to your SQL Server instance in 'Server Explorer' and browse to your stored procedure.
Visual Studio 'Test Edition' also can generate Unit Tests around your stored procedures.
Troubleshooting a complex stored proc is far more than just determining whether you can get it to run or not and finding the step which won't run. What is most critical is whether it actually returns the correct results or performs the correct actions.
There are two kinds of stored procs that need extensive abilities to troubleshoot. First is the proc which creates dynamic SQL. I never create one of these without an input parameter of @debug. When this parameter is set, I have the proc print the SQL statement as it would have run, and not run it. Almost every time, this leads you straight to the problem, as you can then see the syntax error in the generated SQL code. You can also run this SQL code to see if it is returning the records you expect.
Now with complex procs that have many steps that affect data, I always use an @test input parameter. There are two things I do with the @test parameter: first, I make it roll back the actions so that a mistake in development won't mess up the data. Second, I have it display the data before it rolls back, to see what the results would have been. (These actually appear in the reverse order in the proc; I just think of them in this order.)
Now I can see what would have gone into the table or been deleted from the tables without affecting the data permanently. Sometimes I might start with a select of the data as it was before any actions and then compare it to a select run afterwards.
Finally, I often want to log the actions of a complex proc and see exactly what steps happened. I don't want those logs to get rolled back if the proc hits an error, so I set up a table variable for the logging information I want at the start of the proc. After each step (or after an error, depending on what I want to log), I insert into this table variable. After the rollback or commit statement, I select the results of the table variable or use those results to log to a permanent logging table. This can be especially nice if you are using dynamic SQL, because you can log the SQL that was run; then when something strange fails on prod, you have a record of which statement was run when it failed. You do this in a table variable because those do not go out of scope in a rollback.
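A rough skeleton of the @debug / @test pattern described above (structure only; the table, column, and key values are placeholders):

```sql
CREATE PROCEDURE dbo.ComplexUpdate
    @debug BIT = 0,   -- 1 = print the generated SQL instead of running it
    @test  BIT = 0    -- 1 = show results but roll back, leaving data untouched
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX) =
        N'UPDATE dbo.SomeTable SET SomeCol = 1 WHERE SomeKey = 42;';

    IF @debug = 1
    BEGIN
        PRINT @sql;   -- inspect the dynamic SQL without executing it
        RETURN;
    END;

    BEGIN TRANSACTION;
        EXEC sys.sp_executesql @sql;
        -- Display what the data would look like before deciding to keep it.
        SELECT * FROM dbo.SomeTable WHERE SomeKey = 42;
    IF @test = 1
        ROLLBACK TRANSACTION;
    ELSE
        COMMIT TRANSACTION;
END;
```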
In SSMS, you can simply start by opening the proc., and clicking on the check mark button (Parse) next to the Execute button on the menu bar. It reports any errors it finds.
If there are no errors there and your stored procedure is harmless to run (you're not inserting into tables, just creating a temp table for example), then comment out the CREATE PROCEDURE x (or ALTER PROCEDURE x), declare all the parameters by copying that part, then define them with valid values. Then run it to see what happens.
Maybe this is simple, but it's a place to start.

Why is this sql script so slow in SQL Server Management Studio?

I have a SQL Script that inserts about 8000 rows into a TABLE variable.
After inserting into that variable, I use a WHILE loop to loop over that table and perform other operations. That loop is perhaps 60 lines of code.
If I run the TABLE variable insert part of the script, without the while loop, it takes about 5 seconds. That's great.
However, if I run the entire script, it takes about 15 minutes.
Here's what is interesting and what I can't figure out:
When I run the entire script, I don't see any print statements until many minutes into the script.
Then, once it figures out what to do (presumably), it runs the inserts into the table var, does the loop, and that all goes rather fast.
Then, toward the end of the loop, or even after it, it sits and hangs for many more minutes. Finally, it chugs through the last few lines or so of the script that come after the loop.
I can account for all the time taken during the insert, and then all the time taken in the loop. But I can't figure out why it appears to be hanging for so many minutes before and at the end of the script.
For kicks, I added a GO statement after the insert into the table variable, and everything up to that point ran as you'd expect; however, I can't do that, because I need that variable, and the GO statement obviously kills it.
I believe I'm going to stop using the table variable and go with a real table so that I can issue the GO, but I would really like to know what's going on here.
Any thoughts on what SQL Server is up to during that time?
Thanks!
You can always check what a script is doing from the Activity Monitor or from the sys.dm_exec_requests view. The script will be blocked by something, and you'll be able to see what it is that is blocking it in the wait_type and wait_resource columns.
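For example, from another session you could run something like this (the session_id value is just an example; substitute the SPID of the slow script):

```sql
-- What is the slow script waiting on right now?
SELECT session_id, status, command,
       wait_type, wait_resource, blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id = 53;   -- hypothetical SPID of the script's session
```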
There are several likely culprits, like waiting on row locks or table locks, but from the description of the problem I suspect it is a database or log growth event. Those tend to be very expensive once the database is big enough that the default 10% increase means growth of GBs. If that's the case, try to pre-size the database at the required size, and make sure Instant File Initialization is enabled for data files.
PRINTs are buffered, so you can't judge performance from them.
Use RAISERROR ('Message', 0, 1) WITH NOWAIT to see the output immediately.
To understand what the process is doing, I'd begin by calling sp_who2 a few times and looking at the values for the process of interest: whether it is being blocked, what the wait types are, if any, and so on. Also, just looking at the server hardware load (CPU, disk activity) might help (unless there are other active processes).
And please post some code. Table var definition and the loop will be enough, I believe, no need for INSERT stuff.
If you are using the table variable, can you try substituting it with a temp table and see if there is any change in performance?
And if possible, please post the code so that it can be analysed for possible area of interest.
From the wording of your question, it sounds like you're using a cursor to loop through the table. If this is the case, issuing a "SET NOCOUNT ON" command before starting the loop will help.
The table variable was mentioned in a previous answer, but as a general rule, you should really use a temp table if you have more than a few rows.