I have a problem: all the triggers in an MS SQL database contain the old SQL 2000 syntax, like:
raiserror 99999 'ErrorMsg'
This causes an error. The correct syntax should be:
raiserror ('ErrorMsg',16,1)
The text 'ErrorMsg' differs between triggers (about 45 variants). The thing is, there are 123 triggers and 185 occurrences that need to be corrected in the database.
It is possible to script out each trigger, delete it, correct the syntax and re-create it. 123 times :-(
So - I am looking for a script that can loop through the triggers and correct all occurrences of raiserror 99999. The script should be smart enough to keep the 'ErrorMsg' for each trigger.
In SSMS, generate a single script for all the triggers, then use Find and Replace in the SSMS query editor to make the changes.
You can spend time writing clever regular expressions with Find and Replace, or, since there are only "185 occurrences", just jump down through the script and change each one by hand. Sometimes you'll end up doing some combination of the two, e.g. a regular expression to find and replace the common cases, and manual changes for the rest.
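If you do want to automate it, something along these lines could work. This is a hedged sketch, not a tested solution: it assumes SQL Server 2005 or later (for sys.sql_modules), assumes every occurrence matches the exact pattern raiserror 99999 'SomeMessage' with no embedded quotes in the message, and only PRINTs ALTER TRIGGER scripts for manual review (note that PRINT truncates output at 4,000 characters, so very long triggers need a different output method):

DECLARE @def nvarchar(max), @pos int, @msgStart int, @msgEnd int, @msg nvarchar(400);

DECLARE trg CURSOR FOR
    SELECT m.definition
    FROM sys.triggers t
    JOIN sys.sql_modules m ON m.object_id = t.object_id
    WHERE m.definition LIKE '%raiserror 99999%';

OPEN trg;
FETCH NEXT FROM trg INTO @def;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @pos = PATINDEX('%raiserror 99999 ''%', @def);
    WHILE @pos > 0
    BEGIN
        SET @msgStart = @pos + LEN('raiserror 99999 ''');   -- first character of the message
        SET @msgEnd   = CHARINDEX('''', @def, @msgStart);   -- closing quote of the message
        SET @msg      = SUBSTRING(@def, @msgStart, @msgEnd - @msgStart);
        -- swap the old-style call for the new syntax, keeping the message text
        SET @def = STUFF(@def, @pos, @msgEnd - @pos + 1,
                         'raiserror (''' + @msg + ''', 16, 1)');
        SET @pos = PATINDEX('%raiserror 99999 ''%', @def);
    END;
    PRINT STUFF(@def, CHARINDEX('CREATE', @def), 6, 'ALTER');  -- CREATE TRIGGER -> ALTER TRIGGER
    PRINT 'GO';
    FETCH NEXT FROM trg INTO @def;
END;
CLOSE trg;
DEALLOCATE trg;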
I have a problem to solve which requires undoing each executed SQL file in an Oracle database.
I execute them from an XML file with MSBuild, using an Exec command that runs sqlplus with the login and @*.sql.
Obviously ROLLBACK won't do, because it can't roll back an already committed transaction.
I have been searching for several days and still can't find the answer. What I have learned about is Oracle Flashback and Point in Time Recovery. The problem is that I want the changes to be undone only for the current user, i.e. if another user makes some changes at the same time, then my solution should undo only the changes of user 'X', not of user 'Y'.
I found the start_scn and commit_scn columns in flashback_transaction_query. But do they identify only one user? What if I flash back to a given SCN? Will that undo changes only for me, or for other users as well? I have pulled out
select start_scn from flashback_transaction_query WHERE logon_user='MY_USER_NAME'
and
WHERE table_name = 'MY_TABLE_NAME'
and performed
FLASHBACK TO SCN <scn_number>
on a chosen operation's SCN. Will that work for me?
I also found out about Point in Time Recovery, but from what I read it makes the whole database unavailable, so other users would be unable to work with it.
So I need something that will undo a whole *.sql file.
This is possible, but maybe not with the tools that you use. sqlplus can roll back your transaction; you just have to make sure autocommit isn't enabled and that your scripts contain only a single COMMIT, right before you end the sqlplus session (if you don't commit at all, sqlplus will always roll back all changes when it exits).
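For example, a wrapper script along these lines (the script names are placeholders) keeps everything in one transaction and commits only at the very end:

-- run_all.sql: a minimal sketch, assuming the individual scripts contain
-- no COMMITs of their own. Exiting on error rolls everything back.
SET AUTOCOMMIT OFF
WHENEVER SQLERROR EXIT ROLLBACK

@script1.sql
@script2.sql

COMMIT;
EXIT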
The problems start when you have several scripts and you want, for example, to rollback a script that you ran yesterday. This is a whole new can of worms and there is no general solution that will always work (it's part of the "merge problem" group of problems, i.e. how can you merge transactions by different users when everyone can keep transactions open for as long as they like).
It can be done but you need to carefully design your database for it, the business rules must be OK with it, etc.
The general approach would be to have a table which records which rows were modified (created, updated, or deleted) by each script, plus the script name and the time when it was executed.
With this information, you can generate SQL which undoes the changes made by a script. To fill such a table, use triggers, or generate your scripts in such a way that they write this information as well (note: this is probably beyond a "simple" sqlplus solution; you will have to write your own data loader for this).
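A sketch of what such a table could look like (all names here are illustrative, not a prescribed schema):

CREATE TABLE script_audit (
    script_name  VARCHAR2(200)  NOT NULL,             -- which script made the change
    executed_at  TIMESTAMP      DEFAULT SYSTIMESTAMP,
    table_name   VARCHAR2(128)  NOT NULL,
    row_key      VARCHAR2(4000) NOT NULL,             -- ROWID or primary key value
    operation    VARCHAR2(10)   NOT NULL,             -- INSERT / UPDATE / DELETE
    undo_sql     CLOB                                 -- statement that reverses the change
);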
OK, I solved the problem by creating a DDL and a DML trigger. The first one takes the "extra" column (which contains the DDL statement you have just entered) from v$open_cursor and inserts it into my table. The second gets "undo_sql" from flashback_transaction_query, which is the opposite of your DML action: if you INSERT, the undo_sql is a DELETE with all the necessary data.
The triggers fire before DELETE and INSERT (DML) on a specific table, and before ALTER, DROP, and CREATE (DDL) on a specific schema or view.
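For reference, a hedged sketch of what the DDL half of such a solution might look like (the audit table name is made up; ora_sql_txt and ora_login_user are Oracle's standard event attribute functions):

CREATE OR REPLACE TRIGGER trg_capture_ddl
BEFORE CREATE OR ALTER OR DROP ON SCHEMA
DECLARE
    l_lines ora_name_list_t;
    l_count PLS_INTEGER;
    l_stmt  VARCHAR2(4000);   -- long statements may exceed this; a CLOB would be safer
BEGIN
    l_count := ora_sql_txt(l_lines);   -- lines of the triggering DDL statement
    FOR i IN 1 .. l_count LOOP
        l_stmt := l_stmt || l_lines(i);
    END LOOP;
    INSERT INTO my_ddl_audit (executed_at, username, ddl_text)
    VALUES (SYSTIMESTAMP, ora_login_user, l_stmt);
END;
/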
We have many SQL Server scripts. But there are a few critical scripts that should only be run at certain times under certain conditions. Is there a way to protect us from ourselves with some kind of popup warning?
i.e. When these critical scripts are run, is there a command to ask the user if they want to continue?
(We've already made some rollback scripts to handle these, but it would be better if they were never accidentally run at all.)
No, there is no such thing.
You can write an application (windows service?) that will only run the scripts as and when they should be.
The fact that you are even asking the question shows that this is something that should be automated, the sooner the better.
You can mitigate the problem in the meantime by using IF to test for these conditions and only executing if they are met. If this is a series of scripts, you should wrap them in transactions to boot.
One work-around you can use is the following, which would require you to update a value in another table:
CREATE PROC dbo.MyProc
AS
    -- poll until another session sets the flag
    WHILE (SELECT GoBit FROM dbo.OKToRun) = 0
    BEGIN
        -- WITH NOWAIT flushes the message to the client immediately
        RAISERROR('Waiting for GoBit to be set!', 0, 1) WITH NOWAIT
        WAITFOR DELAY '00:00:10'
    END

    -- reset the flag so the next run must be approved again
    UPDATE dbo.OKToRun
    SET GoBit = 0

    ... DO STUFF ...
This will require you, in another SPID or session, to update that table manually before it will proceed.
This gets a lot more complicated with multiple procedures, so it will only work as a very short-term workaround.
SQL is a query language; it has no built-in way to accept user input.
The only thing I can think of would be to make it @variable-driven. The first part should set @shouldRunSecond = 1, and the second part should be wrapped in a
IF @shouldRunSecond = 1
BEGIN
    ...
END
The second portion will be skipped if that is not desired.
The question is: where are these scripts located?
If you have them as .sql files that you open each time before running, then you can simply add a "magic number" check at the beginning of the script that you have to work out every time before you run it. In the example below, each time before you run your script you have to put the current day and minute into the IF condition, otherwise the script will not run:
IF DATEPART(dd, GETDATE()) != 5 OR DATEPART(mi, GETDATE()) != 43
BEGIN
    RAISERROR ('You have accidentally tried to run your dangerous script!!!', 16, 1);
    RETURN
END
--Some dangerous actions
drop database MostImportantCustomer
update Personal set Bonus=0 where UserName=SUSER_SNAME()
If your scripts reside in stored procedures, you can add some kind of "I am sure, I know what I am doing" parameter, into which you always have to pass, for example, the minute multiplied by the day.
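For example, a sketch of that idea (the procedure name and its action are made up):

CREATE PROC dbo.DangerousCleanup
    @Confirmation int   -- caller must pass current minute * current day
AS
    IF @Confirmation != DATEPART(mi, GETDATE()) * DATEPART(dd, GETDATE())
    BEGIN
        RAISERROR ('Confirmation value is wrong - refusing to run.', 16, 1);
        RETURN
    END
    -- ...dangerous actions go here...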
Hope it helps.
I have seen batch scripts containing SQLCMD ..., so instead of running the .sql script from code or Management Studio, you could add a prompt in the batch script.
I have (on limited occasions) created an @AreYouSure parameter that must be passed into a stored procedure, then put comments next to the declaration in the stored procedure explaining the danger of running said procedure.
At least that way, no randos will wander into your environment and kick off stored procedures when they don't understand the consequences. The parameter could be worked into an IF statement that checks its value, or it doesn't really have to be used at all; but if it must be passed, then they at least have to figure out what to pass.
If you use this too much, though, others may just start passing a 'Y' or a 1 into every stored procedure without reading the comments. You could switch up the datatypes, but at some point it becomes more work to maintain this scheme than it is worth. That is why I use it only on limited occasions.
We have an SSIS package that ran in production on a SQL 2008 box with a 2005 compatibility setting. The package contains a SQL Task and it appears as though the SQL at the end of the script did not run.
The person who worked on that package noted, before leaving the company, that the package needed "GOs" between the individual SQL commands to correct the issue. However, when testing in development on SQL Server 2008 with 2008 compatibility, the package worked fine.
From what I know, GOs place commands in batches, where commands are sent to the database provider in a batch for efficiency's sake. I am thinking that the only way GO should affect the outcome is if there was an error somewhere in the script above it. I can imagine GO in that case, and only that case, affecting the outcome. However, we have seen no evidence of any errors logged.
Can someone suggest to me whether or not GO is even likely related to the problem? Assuming no error was encountered, my understanding of the GO command suggests that its use or lack of use is most likely unrelated to the problem.
The GO keyword is, as you say, a batch separator that is used by the SQL Server management tools. It's important to note, though, that the keyword itself is parsed by the client, not the server.
Depending on the version of SQL Server in question, some things do need to be placed into distinct batches, such as creating and using a database. There are also some operations that must take place at the beginning of a batch (like the use statement), so using these keywords means that you'll have to break the script up into batches.
A couple of things to keep in mind about breaking a script up into multiple batches:
When an error is encountered within a batch, execution of that batch stops. However, if your script has multiple batches, an error in one batch will only stop that batch from executing; subsequent batches will still execute
Variables declared within a batch are available to that batch only; they cannot be used in other batches
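For example, the variable-scoping point is easy to demonstrate in SSMS:

DECLARE @x int;
SET @x = 1;
GO
PRINT @x;   -- fails: Must declare the scalar variable "@x".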
If the script is performing nothing but CRUD operations, then there's no need to break it up into multiple batches unless any of the above behavioral differences is desired.
All of your assumptions are correct.
One thing that I've experienced is that if you have a batch of statements that is a pre-requisite for another batch, you may need to separate them with a GO. One example may be if you add a column to a table and then update that column (I think...). But if it's just a series of DML queries, then the absence or presence of GO shouldn't matter.
I've noticed that if you set up any variables in the script their state (and maybe the variables themselves) are wiped after a 'GO' statement so they can't be reused. This was certainly the case on SQL Server 2000 and I presume it will be the case on 2005 and 2008 as well.
Yes, GO can affect outcome.
GO between statements will allow execution to continue if there is an error in between. For example, compare the output of these two scripts:
SELECT * FROM table_does_not_exist;
SELECT * FROM sys.objects;
...
SELECT * FROM table_does_not_exist;
GO
SELECT * FROM sys.objects;
As others identified, you may need to issue GO if you need changes applied before you work on them (e.g. a new column) but you can't persist local or table variables across GO...
Finally, note that GO is not a T-SQL keyword, it is a batch separator. This is why you can't put GO in the middle of a stored procedure, for example ... SQL Server itself has no idea what GO means.
EDIT: however, one answer stated that transactions cannot span batches, which I disagree with:
CREATE TABLE #foo(id INT);
GO
BEGIN TRANSACTION;
GO
INSERT #foo(id) SELECT 1;
GO
SELECT @@TRANCOUNT; -- 1
GO
COMMIT TRANSACTION;
GO
DROP TABLE #foo;
GO
SELECT @@TRANCOUNT; -- 0
I've always been confused about when I should use the GO keyword after commands and whether a semi-colon is required at the end of commands. What are the differences, and why/when should I use them?
When I run Generate Scripts in SQL Server Management Studio, it seems to use GO all over the place, but not the semi-colon.
GO only relates to SSMS - it isn't actual Transact-SQL; it just tells SSMS to send the SQL statements between each GO to the server in individual batches, sequentially.
The ; is a SQL statement delimiter, but for the most part the engine can interpret where your statements are broken up.
The main exception, and the place where the ; is used most often, is before a common table expression (CTE) statement.
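A short illustration (the table and column names here are hypothetical):

UPDATE dbo.Orders SET Processed = 1 WHERE Processed = 0   -- unterminated statement...
;WITH RecentOrders AS (                                   -- ...hence the leading semicolon
    SELECT TOP (10) OrderId FROM dbo.Orders ORDER BY OrderDate DESC
)
SELECT OrderId FROM RecentOrders;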
The reason you see so many GOs in generated DDL scripts is the following rule about batches:
CREATE DEFAULT, CREATE FUNCTION, CREATE PROCEDURE, CREATE RULE, CREATE TRIGGER, and CREATE VIEW statements cannot be combined with other statements in a batch. The CREATE statement must begin the batch. All other statements that follow in that batch will be interpreted as part of the definition of the first CREATE statement.
One of the use cases for generated DDL is to create multiple objects in a single file. Because of this, a DDL generator must be able to generate batches. As others have said, the GO statement ends the batch.
GO
GO is a batch separator. This means that everything in that batch is local to that particular batch.
Any declarations of variables, table variables, etc. do not carry across GO statements.
#Temp tables are local to a connection, so they do span GO statements.
Semicolon
A Semicolon is a statement terminator. This is purely used to identify that a particular statement has ended.
In most cases, the statement syntax itself is enough to determine the end of a statement.
CTEs, however, demand that the WITH begin a new statement, so you need a semicolon before the WITH whenever it follows another statement.
You should use a semi-colon to terminate every SQL statement. This is defined in the SQL standards.
Sure, more often than not SQL Server allows you to omit the statement terminator but why get into bad habits?
As others have pointed out, the statement preceding a common table expression (CTE) must be terminated with a semi-colon. As a consequence, from folk who have not fully embraced the semi-colon terminator, we see this:
;WITH ...
which I think looks really odd. I suppose it makes sense on an online forum, when you can't tell the quality of the code it will be pasted into.
Additionally, a MERGE statement must be terminated by a semi-colon. Do you see a pattern here? These are a couple of the newer additions to TSQL which closely follow SQL Standards. Looks like the SQL Server team are going down the road of mandating the use of the semi-colon terminator.
GO is a batch terminator, a semi-colon is a statement terminator.
You will use GO when you want to have multiple CREATE PROC statements in one script, because CREATE PROC has to be the first statement in a batch. If you use common table expressions, then the statement before the CTE needs to be terminated with a semi-colon.
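For example (the procedure names are placeholders):

CREATE PROC dbo.ProcOne AS SELECT 1;
GO
-- without the GO above, ProcTwo would be compiled into ProcOne's body
CREATE PROC dbo.ProcTwo AS SELECT 2;
GO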
This is a pretty specific question, albeit possibly subjective, but I've been using this pattern very frequently while not seeing others use it very often. Am I missing out on something or being too paranoid?
I wrap all my UPDATE, DELETE, and INSERT operations in stored procedures, and give my application only EXECUTE on my package and SELECT on my tables. For the UPDATE and DELETE procedures, I have an IF statement at the end in which I do the following:
IF SQL%ROWCOUNT <> 1 THEN
RAISE_APPLICATION_ERROR(-20001, 'Invalid number of rows affected: ' || SQL%ROWCOUNT);
END IF;
One could also do this check in the application code, as the number of rows affected is usually available after a SQL statement is executed.
So am I missing something or is this not the safest way to ensure you're updating or deleting exactly what you want to, nothing more, nothing less?
I think this is a fine way to go. If the PL/SQL proc is expected to always update/delete/insert a row, and it's considered an error otherwise, then what better place to put this check than in the PL/SQL proc itself? That way, no matter what client-side code (C#, Java, PHP, Rails, etc.) uses the proc, you have this error check centralized in one place.
However, I'm not sure you need the check for an insert. If the insert fails, you should get some sort of DB exception, so no need to check for it explicitly unless you are wrapping the error in some other error message and re-raising it.
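For illustration, here is a minimal sketch of the pattern under discussion (the table and column names are hypothetical):

CREATE OR REPLACE PROCEDURE update_employee_salary (
    p_emp_id IN employees.employee_id%TYPE,
    p_salary IN employees.salary%TYPE
) AS
BEGIN
    UPDATE employees
       SET salary = p_salary
     WHERE employee_id = p_emp_id;

    -- exactly one row must have been touched, or the whole call fails
    IF SQL%ROWCOUNT <> 1 THEN
        RAISE_APPLICATION_ERROR(-20001,
            'Invalid number of rows affected: ' || SQL%ROWCOUNT);
    END IF;
END update_employee_salary;
/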
In most cases I'd use an ORM like Hibernate, which does a similar thing in order to handle optimistic locking. It will also use the PK in the WHERE clause.
So I would consider this kind of stored procedure a waste of time:
- A lot of effort for minimal benefit
- It makes it harder to use tools like ORMs, which solve more and more important problems.