SQL Server: Terminate SQL From Running

Is there a way to stop a script from running when an IF statement is true, even with "GO"s in the script?
For example I want to do something similar to the following:
insert into table1 (col1, col2) values ('1', '2')
GO
if exists(select * from table1 where col1 = '1')
BEGIN
--Cause Script to fail
END
GO
insert into table1 (col1, col2) values ('1', '2') --Won't run
The actual purpose of doing this is to prevent table-create scripts/inserts/deletes/updates from running more than once when we drop off packages for the DBAs to run.

GO is not a Transact-SQL keyword - it's actually a batch terminator understood by common SQL Server tools. If you use it in your application, your app will fail.
Why wouldn't you do something like this?
IF NOT EXISTS (select * from table1 where col1 = '1')
BEGIN
--Do Some Stuff
END
Rather than abort the script if the condition is met, only run the script if the condition isn't met.
Alternatively, you could wrap the code in a proc and use RETURN to exit from the proc.
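For example, a minimal sketch of that approach (the table and columns come from the question; the procedure name is made up for illustration):
CREATE PROCEDURE dbo.LoadTable1
AS
BEGIN
    IF EXISTS (SELECT * FROM table1 WHERE col1 = '1')
        RETURN;  -- already loaded, so exit without doing anything

    INSERT INTO table1 (col1, col2) VALUES ('1', '2');
END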

According to the documentation, certain severity values passed to RAISERROR() can cause varying levels of termination.
The ones of most interest (if you are running a script through SQL Server Management Studio or similar, and want to prevent any attempt to run any subsequent commands in the file) may be:
Severity levels from 20 through 25 are considered fatal. If a fatal severity level is encountered, the client connection is terminated after receiving the message, and the error is logged in the error and application logs.
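For example, one way this could look (note that severities of 19 and above require the WITH LOG option and sysadmin-level permissions, so this is rarely appropriate in routine deployment scripts):
IF EXISTS (SELECT * FROM table1 WHERE col1 = '1')
    RAISERROR('Script already applied - terminating the connection.', 20, 1) WITH LOG;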

You did not specify what tool you use to run the script in question. The sqlcmd and osql tools have the -b parameter ('On error batch abort'), which does exactly what you're asking. Simply replace '--Cause Script to fail' with a RAISERROR('failure', 16, 1).
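In practice that might look like the following script, run with something like sqlcmd -S yourserver -b -i deploy.sql (the server and file names are placeholders); with -b, the RAISERROR makes sqlcmd exit and skip the remaining batches:
if exists(select * from table1 where col1 = '1')
BEGIN
    RAISERROR('failure', 16, 1)
END
GO
insert into table1 (col1, col2) values ('1', '2')
GO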
@Jason I'd highly recommend against using error levels higher than 16 in your scripts. For one, levels above 18 require sysadmin privileges to start with. Second, they may trigger all sorts of automated responses from the administrative monitoring in place, and even internally by the engine, including drastic measures like initiating a failover or taking the database offline. And last but not least, they will cause a lot of wasted time if someone is investigating a real incident: 'a hardware failure one hour before the crash, hmmm, let me look into this'.

Use goto. It sounds ugly, but it works perfectly for this.
Edit - nm, this doesn't work. It must have been a dream when I thought I saw it work


Have SQL UPDATE throw error message on record lock like native IO does

I'm somewhat new to the IBM i and I come from a Java / C# background. So if I use the wrong jargon please correct me. We are currently on IBM i 7.1 and will be shifting to 7.3 within the next year.
I've been tasked with updating records in a file. I've written two RPGLE programs here that do the same thing.
Native I/O:
**free
dcl-f table1 usage(*update) keyed;
setll ('001': 'Y') table1;
reade ('001': 'Y') table1;
dow not %eof(table1);
letter = 'X';
update table1;
reade ('001': 'Y') table1;
enddo;
*inlr = *ON;
return;
Embedded SQL:
**free
exec sql UPDATE table1
SET letter = 'X'
WHERE GroupCode = '001' AND Newbie = 'Y';
*inlr = *ON;
return;
In my mind, this is one of the times where SQL has a powerful edge over native I/O with how succinct it is.
Here's my problem. My company relies on CPF messages. We, like many companies, are not perfect and have legacy code. There is quite a bit of it and sometimes we get record locks. When this happens, an error message is sent to the system and it waits for someone to log on and answer it.
This results in a call to our internal support and usually by the time someone gets to it, the program that had a lock on the file is done and we just need to answer R to retry and the update continues where it left off. This doesn't happen often enough that we can spend time working on fixing the scheduling process and all of the programs out of record locks, but often enough that not having this ability would cause us great amounts of pain.
With native I/O these messages are thrown. With embedded SQL the SQLSTATE and SQLCODE variables are set and the program continues on, without sending any message to the system. I could check this variable, write my own message, and send a message to the system if something goes wrong. However, this is less than ideal. If I manually give the option to retry, there is no way to resume the update from where it left off. I would have to run the whole update statement again.
I have looked at compiler options, control options, sql options and sql commands and none that I have seen give me the ability I'm looking for.
In short, I would like to know, is there a way that I can get embedded SQL to behave like native I/O upon an unmonitored error message?
In short...no.
SQL is a different paradigm. SQL error handling in DB2 for i works just like error handling in SQL in any other language...
You probably want to run your UPDATE under commitment control. Then check sqlstate. If there was no error, COMMIT the transaction.
Otherwise, ROLLBACK the transaction, and display your message. At this point an option to retry would loop back and reissue the SQL UPDATE.
Editorial
And after you add all that, your SQL version isn't looking so succinct, is it? Heck, the RPG version could actually be cut down to 5 or 6 lines... if you wanted to go old school and use an input primary file. Not that I'd recommend it nowadays.
RPG's close integration with the DB does a lot for you. SQL's great and I use it all the time but sometimes a little more control is better. Thankfully, on the IBM i there's an alternative to SQL.
A better solution is to catch the record-locked error and automatically retry the DB update operation until the lock is gone, or bail out after a specific time-out interval. This can be easily implemented either with native I/O or with SQL and should cut down on a significant percentage of support incidents.
I see this in the wild all the time: not using optimistic code. With optimistic coding, that failure you got might have been avoided even with legacy programs running.
There is still the question of how to redo the update when the row is locked. The old-timers who caused the problem with outdated coding techniques would block the process with a message, because that's what they do: they write code that depends on human intervention. Now anything else in the queue is blocked, and more things will fail while waiting for the human operator to answer a message.
A modern program will be able to reschedule itself for the future when it fails, and perhaps send an email when the process reschedules - like a runnable scheduled in the future.
exec sql UPDATE table1
SET letter = 'X'
WHERE GroupCode = '001' AND Newbie = 'Y'
and letter <> 'X'; /* and letter <> 'X' is the optimistic part */
*inlr = *ON;
return;
If the called program failed, then reschedule it to run in the future and send an email to support if desired.
If you are open to changing the approach altogether, there can be many ways; focusing on only one particular path (native only, SQL only, etc.) narrows down the options.
I tried to look at the different answers above; here is my perspective:
- With danny117's approach, if there is no dependency on the "time" at which the file field 'letter' can be updated to 'X', then putting the update SQL inside a 'monitor' block and letting the entire program run on multiple schedules with the filter on letter = 'X' can be an option.
- jmarkmurphy's approach works, but the wait depends on the file attribute and can be a problem if the file stays locked for a longer time elsewhere.
- Even Charles' option does not help with the occasional lock and release.
- I was thinking of another option: if the update can run faster than the time for which the other program locks the file, "allocate" the file exclusively for this update and release it after the successful update. During the allocate, loop on that step with a delay until you get the exclusive lock on the file, then update.
In this case you could just:
**free
dou sqlstate = '00000' or sqlstate = '02000';
exec sql UPDATE table1
SET letter = 'X'
WHERE GroupCode = '001'
AND Newbie = 'Y'
and letter <> 'X';
enddo;
*inlr = *ON;
return;
The lock will still cause a wait in SQL for 60 seconds or so depending on the WAITRCD attribute of the file or override, so this won't cause too great an issue.
You could also create a procedure to send a message in the case of a lock and implement that as:
**free
dou sqlstate = '00000' or sqlstate = '02000';
exec sql UPDATE table1
SET letter = 'X'
WHERE GroupCode = '001'
AND Newbie = 'Y'
and letter <> 'X';
if sqlstate = '57033';
SendSqlMsg('SQL0913');
endif;
enddo;
*inlr = *ON;
return;
You could even make that procedure send an inquiry message and retrieve the reply. Then handle the reply as necessary. I will leave that as an exercise for you.

How to test your query first before running it in SQL Server

I once made a silly mistake at work on one of our in-house test databases. I was updating a record I had just added because I made a typo, but it resulted in many records being updated, because in the WHERE clause I used the foreign key instead of the unique id of the particular record I had just added.
One of our senior developers told me to do a SELECT first to test which rows will be affected before actually editing them. Besides this, is there a way to execute a query and see the results, but not have it commit to the db until I tell it to do so? Next time I might not be so lucky. It's a good job only senior developers can do live updates!
It seems to me that you just need to get into the habit of opening a transaction:
BEGIN TRANSACTION;
UPDATE [TABLENAME]
SET [Col1] = 'something', [Col2] = '..'
OUTPUT DELETED.*, INSERTED.* -- So you can see what your update did
WHERE ....;
ROLLBACK;
Then you just run it again after seeing the results, changing ROLLBACK to COMMIT, and you are done!
If you are using Microsoft SQL Server Management Studio you can go to Tools > Options... > Query Execution > ANSI > SET IMPLICIT_TRANSACTIONS and SSMS will open the transaction automatically for you. Just don't forget to commit when you must, and remember that you may be blocking other connections until you commit, roll back, or close the connection.
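If you would rather not change the SSMS option, the same behavior can be turned on inside a script; a rough sketch (the table, column, and values are hypothetical):
SET IMPLICIT_TRANSACTIONS ON;

UPDATE dbo.Customer               -- hypothetical table
SET LastName = 'Smith'
WHERE CustomerId = 42;            -- the UPDATE opens a transaction automatically

SELECT * FROM dbo.Customer WHERE CustomerId = 42;   -- inspect the change

ROLLBACK;                         -- or COMMIT once you are happy with it
SET IMPLICIT_TRANSACTIONS OFF;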
First, assume you will make a mistake when updating a db, so never do it unless you know how to recover; if you don't know how, don't run the code until you do.
The most important idea is that it is a dev database - expect it to be messed up - so make sure you have a quick way to reload it.
Doing a SELECT first is always a good idea to see which rows are affected.
However, for a quicker way back to a good state of the database (which I would do anyway):
For a simple update etc., use transactions.
Do a BEGIN TRANSACTION, then do all the updates etc., and then SELECT to check the data.
The database will not be affected, as far as others can see, until you do a final COMMIT, which you only do when you are sure all is correct, or a ROLLBACK to get back to the state at the beginning.
If you must test in a production database and you have the requisite permissions, then write your queries to create and use temporary tables that are similar in name to the production tables and whose schema, other than index names, is identical. Index names are unique across a database, at least on Informix.
Then run your queries and look at the data.
Other than that, IMHO you need a development database, and perhaps even a development server with a development instance. That's paranoid advice, but you'd have to be very careful, even if you were allowed -- MS SQLSERVER lingo here -- a second instance on the same server.
I can reload our test database at will, and that's why we have a test system. Our production system contains citizens' tax payments and other information that cannot be harmed, "or else".
For our production data changes, we always ensure that we use a BEGIN TRAN and a ROLLBACK TRAN, and then all statements have an OUTPUT clause. This way we can run the script (usually against a copy of the PRODUCTION db first) and see what is affected before changing the ROLLBACK TRAN to COMMIT TRAN.
Have you considered explain?
If there is a mistake in the command, it will report it as with usual commands.
But if there are no mistakes it will not run the command, it will just explain it.
Example of a "passed" test:
testdb=# explain select * from sometable ;
QUERY PLAN
------------------------------------------------------------
Seq Scan on sometable (cost=0.00..12.60 rows=260 width=278)
(1 row)
Example of a "failed" test:
testdb=# explain select * from sometaaable ;
ERROR: relation "sometaaable" does not exist
LINE 1: explain select * from sometaaable ;
It also works with insert, update and delete (i.e. the "dangerous" ones)
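For what it's worth, explain as shown above is PostgreSQL syntax; on SQL Server the nearest equivalent is a showplan setting, which returns the estimated plan without executing the statement. A minimal sketch (the table and column names are placeholders):
SET SHOWPLAN_ALL ON;
GO
UPDATE dbo.Customer SET LastName = 'Smith' WHERE CustomerId = 42;  -- not executed, only the plan is returned
GO
SET SHOWPLAN_ALL OFF;
GO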

Is there an 'Are you sure you want to continue?' SQL command?

We have many SQL Server scripts. But there are a few critical scripts that should only be run at certain times under certain conditions. Is there a way to protect us from ourselves with some kind of popup warning?
i.e. When these critical scripts are run, is there a command to ask the user if they want to continue?
(We've already made some rollback scripts to handle these, but it's better if they are not accidentally run at all.)
No, there is no such thing.
You can write an application (windows service?) that will only run the scripts as and when they should be.
The fact that you are even asking the question shows that this is something that should be automated, the sooner the better.
You can mitigate the problem in the meantime by using IF to test for these conditions and only executing if they are met. If this is a series of scripts, you should wrap them in transactions to boot.
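A minimal sketch of that kind of guard (the table, script name, and condition are placeholders for whatever "certain conditions" means in your environment):
IF NOT EXISTS (SELECT * FROM dbo.DeploymentLog WHERE ScriptName = 'critical_script_01')  -- hypothetical check
BEGIN
    BEGIN TRANSACTION;
    -- ...the critical statements go here...
    INSERT INTO dbo.DeploymentLog (ScriptName, RunAt)
    VALUES ('critical_script_01', GETDATE());
    COMMIT TRANSACTION;
END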
One work-around you can use is the following, which would require you to update a value in another table:
CREATE PROC dbo.MyProc
AS
WHILE (SELECT GoBit FROM dbo.OKToRun) = 0
BEGIN
RAISERROR('Waiting for GoBit to be set!', 0,1)
WAITFOR DELAY '00:00:10'
END
UPDATE dbo.OKtoRun
SET GoBit = 0
... DO STUFF ...
This will require you to, in another spid or session, update that table manually before it'll proceed.
This gets a lot more complicated with multiple procedures, so it will only work as a very short-term workaround.
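To release the procedure, another session would then flip the flag it is polling (table and column names taken from the sketch above):
UPDATE dbo.OKToRun SET GoBit = 1;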
SQL is a query language; it does not have the ability to accept user input.
The only thing I can think of would be to make it @variable driven. The first part should set @shouldRunSecond = 1, and the second part should be wrapped in:
if @shouldRunSecond = 1
begin
...
end
The second portion will be skipped if it is not desired.
The question is - where are these scripts located?
If you have them as a .sql file that you open every time before you run it, then you can simply add some "magic numbers" at the beginning of the script that you have to recalculate each time before running. In the example below, each time before you run your script you have to put the correct day and minute into the IF condition, otherwise the script will not run:
IF DATEPART(dd,GETDATE())!=5 or DATEPART(mi,(GETDATE()))!=43
BEGIN
RAISERROR ('You have tried occasionally to run your dangerous script !!!',16,1);
RETURN
END
--Some dangerous actions
drop database MostImportantCustomer
update Personal set Bonus=0 where UserName=SUSER_SNAME()
If your scripts reside in a stored procedure, you can add some kind of "I am sure, I know what I do" parameter to which you always have to pass, for example, the minute multiplied by the day.
Hope it helps.
I have seen batch scripts containing SQLCMD ..., so instead of running the .sql script from code or Management Studio, you could add a prompt to the calling batch script.
I have (on limited occasion) created an #AreYouSure parameter that must be passed into a stored procedure, then put comments next to the declaration in the stored procedure explaining the danger of running said procedure.
At least that way, no randos will wander into your environment and kick off stored procedures when they don't understand the consequences. The parameter could be worked into an IF statement that checks its value, or it doesn't really have to be used at all, but if it must be passed, then they have to at least figure out what to pass.
If you use this too much, though, others may just start passing a 'Y' or a 1 into every stored procedure without reading the comments. You could switch up the datatypes, but at some point it becomes more work to maintain this scheme than it is worth. That is why I use it on limited occasion.
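A minimal sketch of that pattern (the procedure name, parameter, and warning comment are made up for illustration):
-- WARNING: rebuilds the billing summary from scratch; do not run during business hours.
CREATE PROCEDURE dbo.RebuildBillingSummary
    @AreYouSure CHAR(1)    -- pass 'Y' only after reading the warning above
AS
BEGIN
    IF @AreYouSure IS NULL OR @AreYouSure <> 'Y'
    BEGIN
        RAISERROR('Pass @AreYouSure = ''Y'' only if you understand the consequences.', 16, 1);
        RETURN;
    END

    -- ...the dangerous work goes here...
END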

Can the use or lack of use of "GO" in T-SQL scripts affect the outcome?

We have an SSIS package that ran in production on a SQL 2008 box with a 2005 compatibility setting. The package contains a SQL Task and it appears as though the SQL at the end of the script did not run.
The person who worked on that package noted, before leaving the company, that the package needed "GO"s between the individual SQL commands to correct the issue. However, when testing in development on SQL Server 2008 with 2008 compatibility, the package worked fine.
From what I know, GO places commands into batches, and batches are sent to the database provider one at a time, for efficiency's sake. I am thinking that the only way GO should affect the outcome is if there was an error in that script somewhere above it. I can imagine GO in that case, and only that case, affecting the outcome. However, we have seen no evidence of any errors logged.
Can someone suggest whether or not GO is even likely to be related to the problem? Assuming no error was encountered, my understanding of the "GO" command suggests that its use or lack of use is most likely unrelated to the problem.
The GO keyword is, as you say, a batch separator that is used by the SQL Server management tools. It's important to note, though, that the keyword itself is parsed by the client, not the server.
Depending on the version of SQL Server in question, some things do need to be placed into distinct batches, such as creating and using a database. There are also some operations that must take place at the beginning of a batch (like the use statement), so using these keywords means that you'll have to break the script up into batches.
A couple of things to keep in mind about breaking a script up into multiple batches:
When an error is encountered within a batch, execution of that batch stops. However, if your script has multiple batches, an error in one batch will only stop that batch from executing; subsequent batches will still execute
Variables declared within a batch are available to that batch only; they cannot be used in other batches
If the script is performing nothing but CRUD operations, then there's no need to break it up into multiple batches unless any of the above behavioral differences is desired.
All of your assumptions are correct.
One thing that I've experienced is that if you have a batch of statements that is a prerequisite for another batch, you may need to separate them with a GO. One example may be if you add a column to a table and then update that column (I think...). But if it's just a series of DML queries, then the absence or presence of GO shouldn't matter.
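That column example would look something like this (the table and column names are made up); without the GO the whole batch can fail to parse, because the new column does not exist yet when the batch is compiled:
ALTER TABLE dbo.Orders ADD IsArchived BIT;
GO  -- separates the DDL from the statement that depends on it
UPDATE dbo.Orders SET IsArchived = 0;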
I've noticed that if you set up any variables in the script, their state (and maybe the variables themselves) is wiped after a 'GO' statement, so they can't be reused. This was certainly the case on SQL Server 2000 and I presume it will be the case on 2005 and 2008 as well.
Yes, GO can affect outcome.
GO between statements will allow execution to continue if there is an error in between. For example, compare the output of these two scripts:
SELECT * FROM table_does_not_exist;
SELECT * FROM sys.objects;
...
SELECT * FROM table_does_not_exist;
GO
SELECT * FROM sys.objects;
As others identified, you may need to issue GO if you need changes applied before you work on them (e.g. a new column) but you can't persist local or table variables across GO...
Finally, note that GO is not a T-SQL keyword, it is a batch separator. This is why you can't put GO in the middle of a stored procedure, for example ... SQL Server itself has no idea what GO means.
EDIT: however, one answer stated that transactions cannot span batches, which I disagree with:
CREATE TABLE #foo(id INT);
GO
BEGIN TRANSACTION;
GO
INSERT #foo(id) SELECT 1;
GO
SELECT @@TRANCOUNT; -- 1
GO
COMMIT TRANSACTION;
GO
DROP TABLE #foo;
GO
SELECT @@TRANCOUNT; -- 0

How can a SQL Server T-SQL script tell what security permissions it has?

I have a T-SQL script that is used to set up a database as part of my product's installation. It takes a number of steps which all together take five minutes or so. Sometimes this script fails on the last step because the user running the script does not have sufficient rights to the database. In this case I would like the script to fail straight away. To do this I want the script to test what rights it has up front. Can anyone point me at a general-purpose way of testing whether the script is running with a particular security permission?
Edit: In the particular case I am looking at, it is trying to do a backup, but I have had other things go wrong and was hoping for a general-purpose solution.
select * from fn_my_permissions(NULL, 'SERVER')
This gives you a list of permissions the current session has on the server
select * from fn_my_permissions(NULL, 'DATABASE')
This gives you a list of permissions for the current session on the current database.
See here for more information.
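If you are after one specific permission rather than the whole list (the edit mentions a backup), you can also ask for it directly with the built-in HAS_PERMS_BY_NAME function; a minimal sketch (the error message is made up):
IF HAS_PERMS_BY_NAME(DB_NAME(), 'DATABASE', 'BACKUP DATABASE') = 0
BEGIN
    RAISERROR('The current login cannot BACKUP DATABASE - aborting the setup script.', 16, 1);
    RETURN;
END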
I assume it is failing on an update or insert after a long series of selects.
Just try a simple update or insert inside a transaction. Hard-code the row id, or whatever to make it simple and fast.
Don't commit the transaction--instead roll it back.
If you don't have rights to do the insert or update, this should fail. If you DO, it will roll back and not cause a permanent change.
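A sketch of that probe (the table, column, and row id are placeholders; the point is only to trigger the permission check without leaving a permanent change):
BEGIN TRANSACTION;

UPDATE dbo.Settings              -- hypothetical table touched by the failing step
SET LastUpdated = GETDATE()
WHERE SettingId = 1;             -- hard-coded row id to keep it simple and fast

ROLLBACK TRANSACTION;            -- undo it; a rights problem would have raised an error above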
Try the last insert/update up front with a WHERE condition that matches no rows, for example:
update YourTable            -- placeholder: a table your final step writes to
set SomeCol = SomeCol
where 1 = 2                 -- matches no rows, but still requires UPDATE permission
if (@@error <> 0)
    raiserror('no permissions', 16, 1)
This would not cause any harm but would raise a flag up front about the lack of rights.