I run a lot of queries that perform INSERTs, INSERT ... SELECTs, UPDATEs and ALTERs on tables, and while developing these queries, the intermediate steps I run to test that various parts of the query work can change the table or the data within it.
Is it possible to do a dry run of a query and have SQL Server Management Studio show me what the results would be, without actually modifying the data or the table structure?
At the moment I have to back up the database and run the query. If it works, good; if it doesn't, I have to restore the database (which can take around an hour), and I'm trying to avoid wasting all this time restoring databases.
Use an SQL transaction to make your changes then back them out.
Before you execute your script:
BEGIN TRANSACTION;
After you execute your script and have done your checking:
ROLLBACK TRANSACTION;
Every change in your script will then be undone.
Note: Make sure you don't have a COMMIT in your script!
Begin the transaction, perform the table operations, and rollback as shown below:
BEGIN TRAN
UPDATE C
SET column1 = 'XXX'
FROM table1 C
SELECT *
FROM table1
WHERE column1 = 'XXX'
ROLLBACK TRAN
This will roll back all the operations performed since the beginning of the transaction.
Related
I once made a silly mistake at work on one of our in-house test databases. I was updating a record I had just added because I made a typo, but it ended up updating many records, because in the WHERE clause I used the foreign key instead of the unique id of the particular record I had just added.
One of our senior developers told me to do a SELECT first to test which rows will be affected before actually editing them. Besides this, is there a way to execute my query and see the results, but not have it committed to the db until I tell it to do so? Next time I might not be so lucky. It's a good job only senior developers can do live updates!
It seems to me that you just need to get into the habit of opening a transaction:
BEGIN TRANSACTION;
UPDATE [TABLENAME]
SET [Col1] = 'something', [Col2] = '..'
OUTPUT DELETED.*, INSERTED.* -- So you can see what your update did
WHERE ....;
ROLLBACK;
Then, after checking the results, you just run it again with ROLLBACK changed to COMMIT, and you are done!
If you are using Microsoft SQL Server Management Studio, you can go to Tools > Options... > Query Execution > ANSI > SET IMPLICIT_TRANSACTIONS and SSMS will open the transaction automatically for you. Just don't forget to commit when you must, and be aware that you may be blocking other connections for as long as you don't commit, roll back, or close the connection.
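For illustration, a minimal T-SQL sketch of the same idea without the options dialog (the table and column names are made up):
SET IMPLICIT_TRANSACTIONS ON;
-- The first data-modifying statement now opens a transaction automatically
UPDATE dbo.SomeTable            -- hypothetical table
SET SomeColumn = 'new value'
WHERE SomeKey = 42;
-- Inspect the results, then close the implicit transaction one way or the other
-- COMMIT;    -- keep the change
ROLLBACK;     -- or undo it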
First, assume you will make a mistake when updating a db, so never do it unless you know how to recover; if you don't, don't run the code until you do.
The most important idea is that it is a dev database, so expect it to get messed up and make sure you have a quick way to reload it.
Doing a SELECT first is always a good idea to see which rows will be affected.
However, for a quicker way back to a good state of the database, which I would do anyway for a simple update etc.:
Use transactions.
Do a BEGIN TRANSACTION, then do all the updates etc., and then SELECT to check the data.
The database will not be affected, as far as others can see, until you do a final COMMIT, which you only do once you are sure everything is correct, or a ROLLBACK to get back to the state at the beginning.
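As a rough sketch of that workflow (the table and column names are invented for the example):
-- see which rows would be touched before changing anything
SELECT *
FROM dbo.Orders                  -- hypothetical table
WHERE Status = 'PENDING';
BEGIN TRANSACTION;
UPDATE dbo.Orders
SET Status = 'CANCELLED'
WHERE Status = 'PENDING';
-- check the data while the transaction is still open
SELECT *
FROM dbo.Orders
WHERE Status = 'CANCELLED';
-- COMMIT;     -- only when you are sure everything is correct
ROLLBACK;      -- otherwise, back to the state at the beginning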
If you must test in a production database and you have the requisite permissions, then write your queries to create and use temporary tables whose names are similar to the production tables and whose schema, apart from index names, is identical. Index names are unique across a database, at least on Informix.
Then run your queries and look at the data.
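As a hedged T-SQL sketch of that approach (the object names are assumptions, and on Informix the syntax for temporary tables differs):
-- copy the production table's structure and data into a work table
SELECT *
INTO dbo.Customers_work          -- hypothetical copy of dbo.Customers
FROM dbo.Customers;
-- experiment against the copy instead of the real table
UPDATE dbo.Customers_work
SET Region = 'EU'
WHERE Country = 'France';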
Other than that, IMHO you need a development database, and perhaps even a development server with a development instance. That's paranoid advice, but you'd have to be very careful, even if you were allowed -- MS SQLSERVER lingo here -- a second instance on the same server.
I can reload our test database at will, and that's why we have a test system. Our production system contains citizens' tax payments and other information that cannot be harmed, "or else".
For our production data changes, we always ensure that we use a BEGIN TRAN and a ROLLBACK TRAN, and that all statements have an OUTPUT clause. This way we can run the script first (usually in a copy of the PRODUCTION db) and see what is affected before changing the ROLLBACK TRAN to a COMMIT TRAN.
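A minimal sketch of that pattern, with invented table and column names:
BEGIN TRAN;
DELETE FROM dbo.Payments         -- hypothetical table
OUTPUT DELETED.*                 -- shows exactly which rows the DELETE removed
WHERE PaymentDate < '2010-01-01';
ROLLBACK TRAN;                   -- change to COMMIT TRAN once the OUTPUT looks right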
Have you considered EXPLAIN?
If there is a mistake in the command, it will report it as with usual commands.
But if there are no mistakes it will not run the command, it will just explain it.
Example of a "passed" test:
testdb=# explain select * from sometable ;
QUERY PLAN
------------------------------------------------------------
Seq Scan on sometable (cost=0.00..12.60 rows=260 width=278)
(1 row)
Example of a "failed" test:
testdb=# explain select * from sometaaable ;
ERROR: relation "sometaaable" does not exist
LINE 1: explain select * from sometaaable ;
It also works with INSERT, UPDATE and DELETE (i.e. the "dangerous" ones).
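For example (still in PostgreSQL; the column name and values are made up):
-- plain EXPLAIN only plans the statement, it does not execute it,
-- so no rows are actually changed
EXPLAIN UPDATE sometable SET column1 = 'XXX' WHERE column1 = 'YYY';
-- EXPLAIN ANALYZE, on the other hand, really runs the statement,
-- so wrap it in a transaction if you use it on INSERT/UPDATE/DELETE
BEGIN;
EXPLAIN ANALYZE DELETE FROM sometable WHERE column1 = 'XXX';
ROLLBACK;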
I just discovered the idea of testing a stored proc by calling it from within a BEGIN TRAN t1 ROLLBACK TRAN t1 pair.
I am a bit afraid of this. Is that a common practice? Is it reliable?
My goal here is to quickly test a stored proc that reads and updates 2 databases (on the same server). The SP does not do any TRUNCATE, but uses a table variable combined with an INSERT ... OUTPUT statement.
The volume will be low (less than 1000 lines affected).
Thanks
There are a few things that can go wrong:
The proc could do its own transaction management
It could execute non-transactable statements like CREATE DATABASE
It could have an error, causing the transaction to roll back automatically. If the proc then continues to run in some way, it might write data outside of a transaction
XACT_ABORT might be used inconsistently, causing the previously mentioned effect
In general, this is a good technique, though.
Truncate is transacted, btw.
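In case it helps, a minimal sketch of the technique under discussion (the procedure and parameter names are assumptions):
SET XACT_ABORT ON;                    -- make most errors roll back the whole transaction
BEGIN TRAN;
EXEC dbo.usp_MyProc @SomeParam = 1;   -- hypothetical stored procedure
-- inspect the tables the proc touched while the transaction is still open
ROLLBACK TRAN;
If the proc manages its own transactions internally, this pattern can fail with a transaction count mismatch error, which is exactly the first pitfall listed above.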
I ran a SQL Server command (an UPDATE command).
The command completed successfully and the table has been updated.
Is there any way to take back (undo) that command?
Note: no backup was taken.
If you had originally asked how to do an UPDATE with the possibility of a ROLLBACK, I would tell you to do your ad-hoc updates like this:
BEGIN TRANSACTION
UPDATE blah
SET value = newvalue
WHERE condition = someothervalue
--COMMIT TRANSACTION
Then, if the results are as expected, run the COMMIT TRANSACTION. If they are not, you can do a ROLLBACK TRANSACTION. However, since you already did the update and have no backups or recovery plan, you are pretty much out of luck.
After you have already executed an update command, the only way back is to restore a backup.
Something I do when writing any modification script is to wrap the command in a transaction and then run either a rollback or a commit, depending on whether the query performed as expected.
Example:
-- Start the transaction: execute only the first three lines, which leaves the transaction open
BEGIN TRANSACTION
UPDATE TABLEA
SET COL1 = 'newValue'
-- Examine the data and, based on the results, run one of these two lines
ROLLBACK TRANSACTION
COMMIT TRANSACTION
I need to run a test on a stored procedure in a client's database. Is there any way to test the stored procedure without affecting the data in the database?
For example, there is an INSERT query in the SP, which will change the data in the database.
Is there any way to solve this problem?
You could run the stored procedure in a transaction. Use this script by placing your statements between the comment lines. Run the whole script, your transaction will be in an uncommitted state. Then, highlight the line ROLLBACK or COMMIT and execute either accordingly to finish.
Always have backups.
If possible, work in a sandbox away from your client's data as a matter of principle.
Be aware that you could be locking data, which could hold up your client's other SQL statements while you are deciding whether to commit or roll back.
BEGIN TRANSACTION MyTransaction
GO
-- INSERT SQL BELOW
-- INSERT SQL ABOVE
GO
IF @@ERROR != 0
BEGIN
PRINT '--------- ERROR - ROLLED BACK ---------'
ROLLBACK TRANSACTION MyTransaction
END
ELSE
BEGIN
PRINT '--------- SCRIPT EXECUTE VALID ---------'
PRINT '--------- COMPLETE WITH ROLLBACK OR COMMIT NOW! ---------'
--ROLLBACK TRANSACTION MyTransaction
--COMMIT TRANSACTION MyTransaction
END
If the SP is meant to change data, and if you don't permit the data to change, then how will you "test" the SP? Will you just make sure it doesn't die? What if it returns no errors, but inserts no data?
You can follow a similar path to what Valamas suggested, but you will also need to actually test the SP. For instance, if particular data are meant to be inserted based on particular parameter values, then you'll have to:
Start a transaction
Create any test data in the database
Call the SP with the particular parameter values
Still within the transaction, check the database to see if the correct rows were inserted
Roll back the transaction
I can't show you the code, but I have had success in doing the above in code in .NET, using the Visual Studio unit test framework. One could do the same with NUnit or any other unit test framework. I did not use the Database Unit Test feature of Visual Studio Database Projects. I simply did the steps above in code, using ADO.NET and the SqlTransaction class to control the transaction.
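If you want to stay in T-SQL, a hedged sketch of the same steps (every object and parameter name here is invented):
BEGIN TRAN;
-- 1. create the test data the proc depends on
INSERT INTO dbo.Customers (CustomerId, Name)
VALUES (9999, 'Test customer');
-- 2. call the SP with the particular parameter values
EXEC dbo.usp_AddOrder @CustomerId = 9999, @Amount = 10.00;
-- 3. still inside the transaction, check that the expected rows were inserted
SELECT *
FROM dbo.Orders
WHERE CustomerId = 9999;
-- 4. roll everything back
ROLLBACK TRAN;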
I need to modify approx. 24 huge UDPs (user-defined procedures), and for production deployment I need to use a BEGIN TRANSACTION / ROLLBACK / COMMIT process.
How can I add the ALTER PROCEDURE my_proc between BEGIN TRANSACTION and COMMIT or ROLLBACK?
Note: EXEC('ALTER PROCEDURE..') can NOT be implemented.
Thanks
Update: is there a way to alter a procedure and roll back if it fails?
Why can't you do it the regular way?
BEGIN TRANSACTION
GO
CREATE PROCEDURE testProcedure
AS
SELECT 1
GO
SELECT OBJECT_ID('testProcedure') ObjectID --this will return the object ID
GO
rollback TRANSACTION
SELECT OBJECT_ID('testProcedure') ObjectID --this will return NULL because the proc creation was rolled back
GO
You cannot have BEGIN TRY and BEGIN CATCH around batches. However, you can use the last batch to check that all previous steps have succeeded (by examining catalog views like sys.objects, for example). Then you can decide whether all the batches succeeded and either commit or roll back.
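As a rough sketch of that idea, reusing the CREATE PROCEDURE example from the previous answer (the check is illustrative only):
BEGIN TRANSACTION
GO
CREATE PROCEDURE testProcedure
AS
SELECT 1
GO
-- last batch: examine the catalog, then decide
IF XACT_STATE() = 1 AND OBJECT_ID('testProcedure', 'P') IS NOT NULL
    COMMIT TRANSACTION;           -- everything we expected is there
ELSE IF XACT_STATE() <> 0
    ROLLBACK TRANSACTION;         -- something is missing or the transaction is doomed
GO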
(Leandro, I'm adding a new answer because it would be too long for a comment.)
I've been thinking. I don't think this is a solution I would ever implement, but based on your requirements (and especially your restrictions), here is an idea that would work:
There is a modify_date column in the sys.objects catalog view, so why don't you store the dates of all your objects before you run your updates and compare them with the dates afterwards? If ALL the dates are different, it means that all of the objects were updated correctly; if one of the dates is unchanged, it means that one failed, and then you run a rollback script (you will need the rollback code; it won't be as easy as just typing ROLLBACK).
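A rough T-SQL sketch of that idea (purely illustrative; in practice you would filter it down to just the procedures being deployed):
-- before the deployment: snapshot the current modification dates
SELECT o.object_id, o.name, o.modify_date
INTO #before
FROM sys.objects AS o
WHERE o.type = 'P';              -- stored procedures only
-- ... run the ALTER PROCEDURE scripts here ...
-- after the deployment: any procedure listed below still has its old
-- modify_date, i.e. its ALTER never took effect
SELECT b.name
FROM #before AS b
JOIN sys.objects AS o
    ON o.object_id = b.object_id
WHERE o.modify_date = b.modify_date;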