ALTER PROCEDURE with TRANSACTION - sql

I need to modify approximately 24 huge stored procedures, and for the production deployment I need to wrap the changes in a BEGIN TRANSACTION / COMMIT / ROLLBACK process.
How can I put the ALTER PROCEDURE my_proc statements between BEGIN TRANSACTION and COMMIT or ROLLBACK?
Note: EXEC('ALTER PROCEDURE ...') cannot be used.
Thanks
Update: is there a way to alter a procedure and roll back if it fails?

Why can't you do it the regular way?
BEGIN TRANSACTION
GO
CREATE PROCEDURE testProcedure
AS
SELECT 1
GO
SELECT OBJECT_ID('testProcedure') ObjectID --this will return the object ID
GO
ROLLBACK TRANSACTION
SELECT OBJECT_ID('testProcedure') ObjectID --this will return NULL because the proc creation was rolled back
GO
You cannot have BEGIN TRY and BEGIN CATCH around batches. However, you can use the last batch to check that all previous steps succeeded (by examining catalog views such as sys.objects). Then you can decide whether everything succeeded and either commit or roll back.
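For example, a minimal sketch of that pattern, extending the batches above (you would add a check per object for each of your procedures):
BEGIN TRANSACTION
GO
CREATE PROCEDURE testProcedure
AS
SELECT 1
GO
-- Final batch: verify the earlier batches succeeded, then decide.
IF OBJECT_ID('testProcedure') IS NOT NULL
COMMIT TRANSACTION
ELSE IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION
GO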

(Leandro, I’m adding a new answer because this would be too long for a comment.)
I’ve been thinking. I don’t think this is a solution I would ever implement, but based on your requirements (and especially your restrictions), here is an idea that would work:
There is a modify_date column in the sys.objects catalog view, so why not store the dates of all your objects before you run your updates and compare them with the dates after you ran them? If ALL the dates are different, all of the objects were updated correctly; if any date is unchanged, that update failed and you run a rollback script (you will need to write the rollback code; it won't be as easy as just typing ROLLBACK).
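A minimal sketch of that check (the #before temp table and procedure names are placeholders):
-- Snapshot the current modify_date of every target procedure.
SELECT o.object_id, o.name, o.modify_date
INTO #before
FROM sys.objects o
WHERE o.type = 'P'
AND o.name IN ('my_proc1', 'my_proc2' /* ...and the rest */)
-- ...run all of the ALTER PROCEDURE scripts here...
-- Any procedure whose modify_date is unchanged was not updated.
SELECT o.name
FROM sys.objects o
JOIN #before b ON b.object_id = o.object_id
WHERE o.modify_date = b.modify_date
-- If this returns rows, run your hand-written rollback script.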


Should I use sp_getapplock to prevent multiple instances of a stored procedure that conditionally inserts?

Hear me out! I know this use case sounds suspect, but...
I have a stored procedure which checks a table (effectively a cache) for data for a given requested ID. If it doesn't find any data for that ID, or deems it out of date, it executes a second stored procedure which will pull data from a separate database (using dynamic SQL, source DB name is based on the requested ID) and insert it into the local table. It then selects from this table.
If the data is in the table, everything returns quickly (ms), but if it needs to be brought in from the other database, it takes about 10 seconds. We're seeing race conditions where two concurrent instances check the local cache, see something is missing, and queue up sequential ingestions of the remote data into the cache. To avoid double-insertion, the cache-populating procedure clears whatever is already there for the ID, but this just means the first instance of the procedure can end up selecting no rows, because the second instance deleted the just-inserted records before re-inserting them itself.
I think I want to put a lock around the entire procedure (checking the cache, potentially populating the cache, selecting from the cache) - although I'm open to other solutions. I think the overall caching approach has to remain on-demand though, the remote databases come and go by the hundreds, and we only want to cache the ones actually requested by reporting as-needed.
BEGIN TRANSACTION;
BEGIN TRY
-- Take out a lock intended to prevent anyone else modifying the cache while we're reading and potentially modifying it
EXEC sp_getapplock @Resource = '[private].[cache_entries]', @LockOwner = 'Transaction', @LockMode = 'Exclusive', @LockTimeout = 120000;
-- Invoke a stored procedure that ingests any required data that is not already cached
EXEC [private].populate_cache @required_dbs;
-- CALCULATIONS
-- ... SELECT FROM [private].cache_entries
COMMIT TRANSACTION; -- Free the lock
END TRY
BEGIN CATCH -- Ensure we release our lock on failure
ROLLBACK TRANSACTION;
THROW;
END CATCH;
The alternative to sp_getapplock is to use locking hints within your transaction. Both are reasonable approaches. Locking hints can be complex, but they protect the target object itself rather than a single code path, so they are sometimes necessary. sp_getapplock (with Transaction as the owner) is simple and reliable.
You can do this without sp_getapplock, which tends to inhibit concurrency a lot.
The way to do this is to keep doing your checks within a transaction, but to apply a HOLDLOCK hint as well as an UPDLOCK hint.
HOLDLOCK, the hint equivalent of the SERIALIZABLE isolation level, will place a lock not only on the ID you specify, but also on the absence of such data; in other words, it will prevent anyone else from inserting that ID.
You must use both of these hints and have an index that matches that SELECT; otherwise you could run into major blocking and deadlocking problems due to full table scans.
Also, you don't need a CATCH and ROLLBACK. Just use SET XACT_ABORT ON;, which ensures a rollback on any error.
SET XACT_ABORT ON; -- always have this set
BEGIN TRANSACTION;
DECLARE @SomeData nvarchar(100) = (
SELECT ce.SomeColumn
FROM [private].cache_entries ce WITH (HOLDLOCK, UPDLOCK)
WHERE ce.SomeCondition = 1
);
IF @SomeData IS NULL
BEGIN
-- Invoke a stored procedure that ingests any required data that is not already cached
EXEC [private].populate_cache @required_dbs;
END
-- CALCULATIONS
-- ... SELECT FROM [private].cache_entries
COMMIT TRANSACTION; -- Free the lock
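To satisfy the index requirement mentioned above, something like this (column names are assumptions matching the sketch) would let the HOLDLOCK/UPDLOCK read take key-range locks instead of scanning the table:
CREATE INDEX IX_cache_entries_SomeCondition
ON [private].cache_entries (SomeCondition)
INCLUDE (SomeColumn);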

Trigger to detect whether DELETE or UPDATE is called by stored proc

I have a scenario where certain users must have the rights to update or delete certain records in production. What I want to do is put in a safeguard to make sure they do not accidentally update or delete a large portion (or the entirety) of the table, only the few records their task requires. So I wrote a simple trigger to accomplish this.
CREATE TRIGGER delete_check
ON dbo.My_table
AFTER UPDATE,DELETE AS
BEGIN
IF (SELECT COUNT(*) FROM Deleted) > 15
BEGIN
RAISERROR ('Bulk deletes from this table are not allowed', 16, 1)
ROLLBACK
END
END --end trigger
But here is the problem: there is a stored procedure that can do bulk updates to the table. The users can and should be allowed to call the stored procedure, as its scope is more constrained. So my trigger would unfortunately prevent them from calling the stored proc when they need to.
The only solution I have thought of is to run the stored proc as an impersonated user, then modify the trigger to exclude that user from the logic. But that will bring up other issues in my environment. Not insurmountable, but annoying. Nevertheless, this seems the only viable option.
Am I thinking about this the right way, or is there a better approach?
You can add a check of @@NESTLEVEL in the trigger. The value will be 1 for an ad-hoc statement or 2 when called from the stored procedure.
CREATE TRIGGER delete_check
ON dbo.My_table
AFTER UPDATE,DELETE AS
BEGIN
IF (SELECT COUNT(*) FROM Deleted) > 15
AND @@NESTLEVEL = 1 --ad-hoc delete
BEGIN
RAISERROR ('Bulk deletes from this table are not allowed', 16, 1);
ROLLBACK;
END;
END;
I usually handle this with CONTEXT_INFO(). It gives you better control than @@NESTLEVEL because you can identify the specific stored procedure doing the calling and handle each one individually if required. You do this as follows:
Add the procedure name to CONTEXT_INFO() e.g.
-- START OF STORED PROCEDURE
-- Tell the trigger who we are, and that we can be trusted.
declare @OldContext char(128), @NewContext varbinary(128);
-- Get existing context_info()
set @OldContext = coalesce(convert(char(128), context_info()), '');
-- Add new info to context_info
set @NewContext = convert(varbinary(128), convert(char(128), 'dbo.MyProcedureName'));
-- Store new context info
set context_info @NewContext;
-- STORED PROCEDURE CONTENT
-- END OF STORED PROCEDURE
-- Restore context_info
set @NewContext = convert(varbinary(128), @OldContext);
set context_info @NewContext;
In the trigger return early if the CONTEXT_INFO() is from a trusted source e.g.
-- START OF TRIGGER
declare @NewContext char(128) = coalesce(convert(char(128), context_info()), '');
if @NewContext in ('dbo.MyProcedureName') begin
return;
end;
Another advantage of this approach (for other trigger uses) is that you can skip carrying out logic in a trigger when it is called from an SP. Often you put logic in a trigger to ensure it happens regardless of how the insert/update/delete occurs; but when the work is done in an SP, you can ensure the required logic is carried out within the SP itself, which avoids having to repeat it in the trigger. This is especially useful if you run into performance issues from too much logic in the trigger.
Note: For SQL Server 2016+ you can use SESSION_CONTEXT() in a similar way.
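A minimal sketch of the SESSION_CONTEXT() variant (the key name calling_proc is arbitrary):
-- In the stored procedure: mark this session as a trusted caller.
EXEC sp_set_session_context @key = N'calling_proc', @value = N'dbo.MyProcedureName';
-- ... stored procedure content ...
-- Clear the marker before exiting.
EXEC sp_set_session_context @key = N'calling_proc', @value = NULL;
-- In the trigger: return early for trusted callers.
IF CONVERT(sysname, SESSION_CONTEXT(N'calling_proc')) = N'dbo.MyProcedureName'
RETURN;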
One way of doing this would be:
Create a stored procedure to perform the desired work
If the criteria varies greatly, create several procedures for each "kind" of work
Grant EXECUTE permissions to the desired users on those procedures ("kind of work") that they are permitted to do. (You are using Database Roles and Domain Groups, right?)
Revoke permissions to make ad hoc data modifications on the table
It can be fussy to set up, but it supports the "Principle of Least Privilege", permitting users to do only what they are supposed to do.
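A sketch of that setup (role, group, and procedure names are illustrative). Ownership chaining is what lets the procedure modify the table even though direct DML is denied, provided the procedure and table share an owner:
-- Role for the users who perform this kind of work.
CREATE ROLE data_fixers;
ALTER ROLE data_fixers ADD MEMBER [DOMAIN\DataFixersGroup];
-- They may run only the constrained procedure...
GRANT EXECUTE ON dbo.targeted_bulk_update TO data_fixers;
-- ...and lose the ability to modify the table directly.
DENY UPDATE, DELETE ON dbo.My_table TO data_fixers;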

Is it possible to release Transaction-log locks from within an active (massive data movement) stored procedure?

edited terminology for accuracy:
We have large daily flows of data within our data mart. Some of the largest, run as stored procedures managed by SSIS, take several hours. These long-running stored procedures prevent the transaction log from clearing (which compounds the issue because we have numerous SPs running at once, all writing to the t-log with no truncation). Eventually this breaks our database and we're forced to recover from the morning snapshot.
We have explored doing "sub"-commits within the SP, but as I understand it you can't fully release the transaction log within an active stored procedure, because it is itself a transaction.
Without refactoring our large SP's to run in batches, or something to that effect, is it possible to commit to the transaction log periodically within an active SP, so that we release the lock on the transaction log?
edit / extension:
Perhaps I was wrong above:
Will committing intermittently within the SP allow the transaction-log to truncate?
If the client starts a transaction, it's not recommended to COMMIT that transaction inside a stored procedure. You're not allowed to exit a stored procedure with a different @@TRANCOUNT than it was entered with.
The following pattern is technically allowed, although I have never seen it used in the real world:
use tempdb
if @@trancount > 0 rollback
go
drop table if exists T
create table T(id int identity)
go
create or alter procedure tranTest
as
begin
insert into T default values
commit transaction
begin transaction
end
go
begin transaction
exec tranTest
select * from T
rollback
go 5
It would be deeply confusing for client code to roll back a transaction and not have the stored procedure's work rolled back.
If the client doesn't start a transaction, you can have multiple transactions inside a stored procedure, but the smallest granularity for a transaction is a single DML statement: each INSERT, UPDATE, DELETE, or MERGE runs in its own transaction.
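For example, when called outside any transaction, a procedure like this (an illustrative sketch reusing the T table from above) commits each unit independently; a failure in the second block would not undo the first:
create or alter procedure multiTran
as
begin
begin transaction
insert into T default values
commit transaction
begin transaction
insert into T default values
commit transaction
end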
The practical solutions to this are, in descending order of goodness:
1) Increase the storage available to the log file to accommodate the transactions.
2) Refactor the ETL to use shorter transactions, possibly readying data in staging tables and loading or switching it in with a single, final transaction.
3) Refactor the ETL to run in smaller batches (see the sketch below).
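A rough sketch of the batching approach (table and column names are hypothetical). Each iteration commits its own transaction, so the log can truncate (in SIMPLE recovery) or be backed up (in FULL) between batches:
declare @rows int = 1
while @rows > 0
begin
begin transaction
-- Move one batch; DELETE ... OUTPUT makes each batch an atomic move.
delete top (50000)
from dbo.staging_source
output deleted.id, deleted.payload into dbo.final_target (id, payload)
set @rows = @@ROWCOUNT
commit transaction
end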

Using a rolled-back transaction for testing

I just discovered the idea of testing a stored proc by calling it from within a BEGIN TRAN t1 ROLLBACK TRAN t1 pair.
I am a bit afraid of this. Is that a common practice? Is it reliable?
My goal here is to quickly test a stored proc that reads and updates 2 databases (same server). The SP does not do any truncate, but uses a table variable combined with an INSERT .. OUTPUT statement.
The volume will be low (less than 1000 lines affected).
Thanks
There are a few things that can go wrong:
The proc could do its own transaction management
It could execute statements that can't be transacted, like CREATE DATABASE
It could hit an error, causing the transaction to automatically roll back. If the proc then continues to run in some way, it might write data outside of a transaction
XACT_ABORT might be used inconsistently, causing the previously mentioned effect
In general, this is a good technique, though.
TRUNCATE is transacted, btw.
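A minimal harness for that technique (procedure and table names are hypothetical):
BEGIN TRAN t1
-- Exercise the proc under test.
EXEC dbo.my_proc_under_test
-- Inspect its effects while the transaction is still open.
SELECT TOP (100) * FROM dbo.affected_table
-- Undo everything.
ROLLBACK TRAN t1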

Auto update with script file with transaction

I need to provide an auto-update feature for my application.
I am having a problem applying the SQL updates. I have the update SQL statements in a .sql file, and what I want to achieve is that if one statement fails, the entire script file must be rolled back.
Ex.
create procedure [dbo].[test1]
@P1 varchar(200),
@C1 int
as
begin
Select 1
end
GO
Insert into test (name) values ('vv')
GO
alter procedure [dbo].[test2]
@P1 varchar(200),
@C1 int
as
begin
Select 1
end
GO
Now, in the above example, if I get an error in the third statement ("alter procedure [dbo].[test2]"), I want to roll back the first two changes as well: creating the test1 procedure and inserting data into the test table.
How should I approach this task? Any help will be much appreciated.
If you need any more info, let me know.
Normally, you would want to add a BEGIN TRAN at the beginning, remove the GO statements, and then handle the ROLLBACK TRAN/COMMIT TRAN with a TRY..CATCH block.
When dealing with DDL, though, there are often statements that have to be at the start of a batch, so you can't wrap them in a TRY..CATCH block. In that case you need to put together a system that knows how to roll itself back.
A simple approach would be to back up the database at the start and restore it if anything fails (assuming you are the only one accessing the database the whole time). Another would be to log each batch that runs successfully and have corresponding rollback scripts that you can run to put everything back should a later batch fail (one possible shape is sketched below). This obviously requires much more work (writing an undo script for every script, plus fully testing the rollbacks) and can also be a problem if people are still accessing the database while the upgrade is happening.
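A hypothetical shape for that logging system (table and script names are illustrative):
-- Records each successfully applied batch and the script that undoes it.
CREATE TABLE dbo.upgrade_log (
batch_name sysname NOT NULL PRIMARY KEY,
applied_at datetime2 NOT NULL DEFAULT SYSDATETIME(),
undo_script nvarchar(260) NOT NULL -- path to the matching undo .sql file
);
-- At the end of each successful batch:
INSERT INTO dbo.upgrade_log (batch_name, undo_script)
VALUES (N'001_create_test1', N'001_create_test1_undo.sql');
-- If a later batch fails, replay the undo scripts in reverse applied_at order.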
EDIT:
Here's an example of a simple TRY..CATCH block with transaction handling:
BEGIN TRY
BEGIN TRANSACTION
-- All of your code here, with `RAISERROR` used for any of your own error conditions
COMMIT TRANSACTION
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
END CATCH
However, the TRY..CATCH block cannot span batches (maybe that's what I was thinking of when I said transactions couldn't), so in your case it would probably be something more like:
IF (OBJECT_ID('dbo.Error_Happened') IS NOT NULL)
DROP TABLE dbo.Error_Happened
GO
BEGIN TRANSACTION
<Some line of code>
IF (@@ERROR <> 0)
CREATE TABLE dbo.Error_Happened (my_id INT)
IF (OBJECT_ID('dbo.Error_Happened') IS NULL)
BEGIN
<Another line of code>
IF (@@ERROR <> 0)
CREATE TABLE dbo.Error_Happened (my_id INT)
END
...
IF (OBJECT_ID('dbo.Error_Happened') IS NOT NULL)
BEGIN
ROLLBACK TRANSACTION -- this also undoes the CREATE TABLE, so no explicit DROP is needed
END
ELSE
COMMIT TRANSACTION
Unfortunately, because the GO statements split the script into separate batches, you can't use GOTO, you can't use TRY..CATCH, and you can't persist a variable across the batches. This is why I used the very kludgy trick of creating a table to indicate an error.
A better way would be to simply have a permanent error table and look for rows in it. Just keep in mind that your ROLLBACK will remove those rows at the end as well, so check for them before rolling back.
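For example (a sketch; dbo.upgrade_errors is a hypothetical permanent table created before the upgrade runs):
BEGIN TRANSACTION
GO
Insert into test (name) values ('vv')
IF (@@ERROR <> 0)
INSERT INTO dbo.upgrade_errors (msg) VALUES (N'insert into test failed')
GO
-- Final batch: check for error rows BEFORE rolling back, since the rollback removes them too.
IF EXISTS (SELECT 1 FROM dbo.upgrade_errors)
ROLLBACK TRANSACTION
ELSE
COMMIT TRANSACTION
GO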