SQL Server Multiple DDLs Ignoring Order And in a Single Transaction - sql

I'm trying to run multiple DDL scripts (around 90) on a SQL Server database.
The DDLs don't contain any changes to tables, only views, stored procedures, and functions. The DDLs might have inter-dependencies between them, for example one stored procedure that calls another.
I don't want to start organizing the files into the correct order, because it would take too long, and I want the entire operation to fail if any one of the scripts has an error.
How can I achieve this?
My idea so far is to start a transaction, tell SQL Server to ignore errors (which I don't know how to do), run all the scripts once, tell SQL Server to start raising errors again, run all the scripts a second time, and then commit if everything succeeds.
Is this a good idea?
How do I CREATE/ALTER a stored procedure or view even though it has errors?
To clarify and address some concerns...
This is not intended for production. I just don't want to leave the DB I'm testing on in a broken state.
What I would like to achieve is this: run a big group of scripts on the server without taking the time to order them, but if any of the scripts has an error in it, roll back the entire operation.
I don't care about isolation; I only want the operation to happen as a single transaction.

Organize the files in the correct order, test the procedure in a test environment, have validation and acceptance tests, then run it in production.
While running DDL in a transaction may seem possible, in practice it is not. There are many DDL statements that don't mix well with transactions. You must take the application offline, take a database backup (or create a snapshot) before the schema changes, run the tested and verified upgrade procedure (your scripts), validate the result with acceptance tests, and then bring the application back online. If something fails, revert to the backup created initially (with all the implications vis-à-vis any downstream log consumer like replication, log shipping or mirroring).
This is the correct way, and as far as I'm concerned the only way. I know you'll find plenty of advice on how to do this the wrong way.
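For the snapshot route, a minimal sketch of what that guard could look like from .NET (all names, paths and the connection string below are placeholders, not from the answer; database snapshots also require an edition that supports them):

' Sketch only: take a database snapshot before running the scripts and revert to it on failure.
' MyDb, MyDb_Data, the snapshot path and the connection string are all made-up placeholders.
Imports System.Data.SqlClient

Module SnapshotUpgradeSketch
    Sub UpgradeWithSnapshot()
        Using conn As New SqlConnection("Server=.;Database=master;Integrated Security=True")
            conn.Open()

            ' Create the snapshot (one sparse file per data file of the source database).
            Dim createSnap As New SqlCommand( _
                "CREATE DATABASE MyDb_Before ON (NAME = MyDb_Data, FILENAME = 'C:\Snapshots\MyDb_Before.ss') " & _
                "AS SNAPSHOT OF MyDb", conn)
            createSnap.ExecuteNonQuery()

            Try
                ' ... run the tested and verified upgrade scripts here ...
            Catch ex As Exception
                ' Revert the whole database to the state captured by the snapshot.
                Dim revert As New SqlCommand( _
                    "ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE; " & _
                    "RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = 'MyDb_Before'; " & _
                    "ALTER DATABASE MyDb SET MULTI_USER;", conn)
                revert.ExecuteNonQuery()
                Throw
            End Try
        End Using
    End Sub
End Module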

We actually do something like this to deploy our database scripts to production. We do it from an application that connects to our databases. To add to the complication, we also have 600 databases that are supposed to have the same schema, but don't really. Here's our approach:
Merge all the scripts into one big file, injecting GO in between every single file. This makes it look like there is one very long script. We do a simple ordering based on what the coders requested.
Split everything into "GO blocks". Since GO isn't legal T-SQL (it's a batch separator understood by client tools), we split the script into multiple blocks that get executed one at a time (see the splitting sketch just before the execution code below).
Open a database connection.
Start a transaction.
For each GO block:
Make sure the transaction is still active. (This is VERY important. I'll explain why in a bit.)
Run the code, recording any errors.
If there were any errors, roll back. Otherwise, commit.
In our multi-database setup, we do this whole thing twice: run through every database once, "testing" the code to make sure there are no errors on any database, and then go back and run it again "for real".
Now on to why you need to make sure the transaction is still active: there are some commands that will roll back your transaction on error! Imagine our surprise the first time we found this out... everything before the error was rolled back, but everything after was committed. If there is an error, however, nothing in that same block gets committed, so it's all good.
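For the "split into GO blocks" step, something along these lines works (only a sketch, not our actual splitter; it assumes Imports System.Text.RegularExpressions):

' Split the combined script on lines that contain only GO.
' GO is a batch separator understood by client tools, not by the server itself.
Dim scriptBlocks As New List(Of String)
For Each b As String In Regex.Split(bigScript, "^\s*GO\s*$", RegexOptions.Multiline Or RegexOptions.IgnoreCase)
    If b.Trim().Length > 0 Then scriptBlocks.Add(b)
Next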
Below is the core of our execution code. We use a wrapper around SqlClient, but it should look very similar if you use SqlClient directly.
Dim T = New DBTransaction(client)

For Each block In scriptBlocks
    If Not T.RestartIfNecessary Then
        exceptionCount += 1
        Log("Could not (re)start the transaction for {0}. Not executing the rest of the script.", scriptName)
        Exit For
    End If
    Debug.Assert(T.IsInTransaction)

    Try
        client.Text = block
        client.ExecNonQuery()
    Catch ex As Exception
        exceptionCount += 1
        Log(ex.Message + " on {0} executing: '{1}'", client.Connection.Database, block.Replace(vbNewLine, ""))
    End Try
Next

If exceptionCount > 0 Then Log("There were {0} exceptions while executing {1}.", exceptionCount, scriptName)

If testing OrElse exceptionCount > 0 Then
    Try
        T.Rollback()
        Log("Rolled back all changes for {0} on {1}.", scriptName, client.Connection.Database)
    Catch ex As Exception
        Log("Could not roll back {0} on {1}: {2}", scriptName, client.Connection.Database, ex.Message)
        If Debugger.IsAttached Then
            Debugger.Break()
        End If
    End Try
Else
    T.Commit()
    Log("Successfully committed all changes for {0} on {1}.", scriptName, client.Connection.Database)
End If

Return exceptionCount
Class DBTransaction
    Private _tName As String
    Public ReadOnly Property name() As String
        Get
            Return _tName
        End Get
    End Property

    Private _client As OB.Core2.DB.Client

    Public Sub New(client As OB.Core2.DB.Client, Optional name As String = Nothing)
        If name Is Nothing Then
            name = "T" & Guid.NewGuid.ToString.Replace("-", "").Substring(0, 30)
        End If
        _tName = name
        _client = client
    End Sub

    Public Function Begin() As Boolean
        Return RestartIfNecessary()
    End Function

    Public Function RestartIfNecessary() As Boolean
        Try
            _client.Text = "IF NOT EXISTS (Select transaction_id From sys.dm_tran_active_transactions where name = '" & name & "') BEGIN BEGIN TRANSACTION " & name & " END"
            _client.ExecNonQuery()
            Return IsInTransaction()
        Catch ex As Exception
            Return False
        End Try
    End Function

    Public Function IsInTransaction() As Boolean
        _client.Text = "Select transaction_id From sys.dm_tran_active_transactions where name = '" & name & "'"
        Dim scalar As String = _client.ExecScalar
        Return scalar <> ""
    End Function

    Public Sub Rollback()
        _client.Text = "ROLLBACK TRANSACTION " & name
        _client.ExecNonQuery()
    End Sub

    Public Sub Commit()
        _client.Text = "COMMIT TRANSACTION " & name
        _client.ExecNonQuery()
    End Sub
End Class

You already have a good answer; here is the "hack" answer, for the case of "you cannot do this, but if you want it very much, then go on". I'm quite confident that you will not achieve what you are thinking of, therefore:
DO A FULL BACKUP!
Assuming there are no COMMIT or GO statements (explicit or implicit!) in any of these files, the only thing you need to do is run them in a single transaction: combine them into one file, wrap it in a transaction, and run it.
How to combine 90 files into one file:
If sorting by name brings them into the right order, run this from the folder containing the files at a command prompt:
FOR /F "tokens=1" %G IN ('dir /b /-d /o:n *.sql') DO (
type %G >> Big_SQL_Script.sql && echo. >> Big_SQL_Script.sql
)
If the order is random, create a list of files with dir /b /-d *.sql > File_Name_List.txt and order it manually. Then run:
FOR /F "tokens=1" %G IN (File_Name_List.txt) DO (
type %G >> Big_SQL_Script.sql && echo. >> Big_SQL_Script.sql
)
This way you can concatenate the 90 files in an automated way. Wrap the result in a transaction as sketched below, run it, and see what happens.
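A rough sketch of running the combined file from .NET under a single transaction (the connection string and file name are placeholders; this relies on the same assumption as above, that the files contain no GO or COMMIT statements):

' Sketch only: execute the concatenated script inside one transaction.
' SET XACT_ABORT ON makes any run-time error doom and roll back the whole transaction.
Imports System.Data.SqlClient
Imports System.IO

Module RunBigScriptSketch
    Sub Main()
        Dim script As String = File.ReadAllText("Big_SQL_Script.sql")

        Using conn As New SqlConnection("Server=.;Database=MyTestDb;Integrated Security=True")
            conn.Open()
            Using tran As SqlTransaction = conn.BeginTransaction()
                Try
                    Using cmd As New SqlCommand("SET XACT_ABORT ON;" & vbCrLf & script, conn, tran)
                        cmd.CommandTimeout = 0 ' 90 scripts can take a while
                        cmd.ExecuteNonQuery()
                    End Using
                    tran.Commit()
                Catch ex As Exception
                    Try
                        tran.Rollback() ' may already have been rolled back by XACT_ABORT
                    Catch
                    End Try
                    Throw
                End Try
            End Using
        End Using
    End Sub
End Module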
Good luck!

Related

SQLite Transaction fails with .NET

I am trying to implement SQLite in our backend and have run into a problem: I get a timeout and an exception that the database is locked in this simple code:
Try
    lDb.ConnectionString = String.Format("Data Source={0};Version=3;Pooling=True;Max Pool Size=100;", TextBoxSqlite.Text)
    lDb.Connection = New SQLiteConnection(lDb.ConnectionString)
    lDb.Connection.Open()
    lDb.Connection.BeginTransaction()
    For lIndex As Integer = 1 To 100
        lQuery = String.Format("INSERT INTO [TableTest] VALUES ('{0}','{1}','{2}') ", lIndex, lIndex, lIndex)
        lCommand = New SQLiteCommand()
        lCommand.CommandText = lQuery
        lCommand.Connection = lDb.Connection
        lDb.ExecuteCommand(lCommand)
    Next
    ' if ok
    lDb.CommitTransaction()
Catch ex As Exception
    ' if failed, rollback
    lDb.RollbackTransaction()
End Try
It runs once; on the second run it hangs for a couple of seconds and throws the exception.
The first insert is (rightfully) not inserted.
If I remove the BeginTransaction line, it works as advertised.
I use the .NET SQLite provider (System.Data.SQLite), latest 3.12 version.
Any idea what it might be?
Thanks in advance!
The documentation says:
Follow these steps to perform a transaction.
Call the BeginTransaction method of the SqlConnection object to mark the start of the transaction.
[…]
Execute the required commands.
Call the Commit method of the SqlTransaction object to complete the transaction, or call the Rollback method to end the transaction.
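Applied to System.Data.SQLite, those steps could look roughly like the sketch below (not your wrapper class; the parameterized insert instead of String.Format is my own choice):

' Sketch: keep a reference to the transaction object and commit/roll back on it.
Imports System.Data.SQLite

Module SqliteTransactionSketch
    Sub InsertRows(databasePath As String)
        Using conn As New SQLiteConnection(String.Format("Data Source={0};Version=3;", databasePath))
            conn.Open()
            Using tran As SQLiteTransaction = conn.BeginTransaction()
                Try
                    For i As Integer = 1 To 100
                        Using cmd As New SQLiteCommand("INSERT INTO TableTest VALUES (@a, @b, @c)", conn, tran)
                            cmd.Parameters.AddWithValue("@a", i)
                            cmd.Parameters.AddWithValue("@b", i)
                            cmd.Parameters.AddWithValue("@c", i)
                            cmd.ExecuteNonQuery()
                        End Using
                    Next
                    tran.Commit() ' commit on the transaction object returned by BeginTransaction
                Catch ex As Exception
                    tran.Rollback()
                    Throw
                End Try
            End Using
        End Using
    End Sub
End Module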
I've found the problem, so I'm posting the answer for reference.
We had a TransactionScope open in our wrapper DB class. If I remove the TransactionScope definition, it works as advertised.
So the bug was on my side; apparently TransactionScope does not work well with System.Data.SQLite (it also hangs when I do not try to start another transaction).

Transaction inside of code

I'm having an issue that I'm not sure how to resolve, and I want to know the best approach to take in order to achieve this task.
We are developing an application in VB.NET 2.0 against SQL Server 2005. Users are allowed to cancel a reception based on a purchase, which may contain many received goods. During the cancellation process, questions are asked of the user, such as "Do you want to cancel Good #1?". If yes, delete it. Then "Do you want to cancel Good #2?"; if no, do not delete it, and so on with one other question (if a received item has been issued, a process must be carried out manually by the user). At the end, if all goods were successfully cancelled, we have to cancel the reception itself. But sometimes, if an error occurs or certain conditions arise while the user is being asked these questions, we want to undo every action made from the beginning and return everything to its original state. So I thought about transactions.
I know there are SQL transactions, and I know well enough how to use them, but I can't really use them here, because the user must perform actions in between that could cancel the transaction.
I also remembered TransactionScope from .NET 2.x and later, which can achieve something similar, and I also know how to use it. The problem comes with TransactionScope and MSDTC. When using it, we still get an error which says:
Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool.
I've tried what is described in another Stack Overflow post and it works great... until users restart their computers. EVERY time users restart their computers, the value must be set back. Plus, by default, no computer has this value set to On; of the 10 or so computers I checked, none had it enabled. The program is installed on something like 300 computers, so this is clearly not a good option either.
So does anyone have an idea of how I can achieve this? Is there any other way of doing transactions in code that I can use?
NOTE 1: I know some would say: first ask the user all the questions and keep the answers in memory; once done, if everything went well, go ahead with the deletes. But what if an error occurs when deleting, let's say, good #4? And how can I pass a stored procedure a dynamic list of goods to be deleted?
NOTE 2: Sorry for my English, I usually speak French.
NOTE 3: Any example in C# is fine as well, as I know both VB and C#.
Assuming you already have a stored procedure similar to this to manage the cancellation:
create proc CancelGood (@GoodID int)
as
SET NOCOUNT ON
SET XACT_ABORT ON

begin transaction

update table1 set canceled = 1
where GoodID = @GoodID

update table2 set on_stock = on_stock + 1
where GoodID = @GoodID

commit transaction
The VB code adds a string to a canceledGoods list if the user selects 'Oui'. I'm not familiar with VB.NET; in C# it would look like:
canceledGoods.Add(string.Format("exec dbo.CancelGood {0}", goodID));
Then, if there is at least one string in canceledGoods, build and execute the batch:
batch = "BEGIN TRANSACTION" +
" BEGIN TRY " +
string.Join (Environment.NewLine, canceledGoods.ToArray()) +
" END TRY" +
" BEGIN CATCH " +
" -- CODE TO CALL IF THERE WAS AN ERROR" +
" ROLLBACK TRANSACTION" +
" RETURN" +
" END CATCH" +
" -- CODE TO CALL AFTER SUCCESSFULL CANCELATION OF ALL GOODS" +
" COMMIT TRANSACTION"
conn.ExecuteNonQuery (batch);

Linq to SQL Transaction Insert then Select really, really slow

I'm developing a piece of a system that basically migrates data from one set of tables to another set. Everything works fine, but I've decided to employ transactions instead of just failing on things that are partially completed. (That is, if some exception occurs, I want to rollback instead of having partial data migrated.)
I have a service (in the 3-tier architecture way, not web) which begins a transaction on the data access layer. The data context is shared in the data access class which contains many methods. Those methods use various LINQ-to-SQL techniques to update/insert/delete. All the LINQ-to-SQL "selects" are within CompiledQueries.
The "BeginTransaction" method starts a transaction like this:
Public Sub BeginTransaction() Implements ITransactionalQueriesBase.BeginTransaction
    Me.Context.Connection.Open()
    Me.Context.Transaction = Context.Connection.BeginTransaction()
    IsInTransaction = True
End Sub
Basically, I have written a test which starts a transaction, inserts into a table, and then attempts to retrieve the value that was just inserted, all during the transaction. I did this because I wanted to assert that the insert method actually tries to insert. Then, during the test, I roll back and check that the newly inserted value was not actually committed to the table. The test looks something like this:
<TestMethod()>
Public Sub FacilityService_Can_Rollback_A_Transaction()
    faciService.BeginTransaction()

    Dim devApp = UnitTestHelper.CreateDevelopmentApplication(devService.GetDevelopmentType("NEWFACI").ID, 1, 1, 1, 1)
    Dim devInsertRes = devService.InsertDevelopmentApplication(devApp)
    Assert.IsTrue(devInsertRes.ReturnValue > 0)
    For Each dir1 In devInsertRes.Messages
        Assert.Fail(dir1)
    Next

    Dim migrationResult = faciService.ProcessNewFacilityDevelopment(devInsertRes.ReturnValue)
    Assert.IsTrue(migrationResult.ReturnValue.InsertResult)

    Dim faciRetrieval1 = faciService.GetFacilityByID(migrationResult.ReturnValue.FacilityID)
    Assert.IsNotNull(faciRetrieval1.ReturnValue)

    faciService.Rollback()

    Dim faciRetrieval2 = faciService.GetFacilityByID(migrationResult.ReturnValue.FacilityID)
    Assert.IsNull(faciRetrieval2.ReturnValue)
End Sub
So, to my problem...
When the test gets to the "faciRetrieval1" step, it stays there for about 30-60 seconds before moving on. I'm not sure why this is happening. If I run the same queries in a transaction within SSMS it happens instantly. Does anyone have any ideas? The database is a SQL Server 2008 SP1 (R2?).
I figured out that if one data context is using a transaction, another data context of the same type appears to be unable to select from the affected tables; it blocks on the locks held by the uncommitted transaction.
I ended up fixing it by using the same context for every select/update/delete while a transaction was in progress.
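A rough sketch of that shared-context idea (the context type, table and entity names are invented for illustration):

' Sketch: one DataContext, and therefore one connection and one transaction,
' is shared by the insert and by every select made while the transaction is open.
Using context As New MyDataContext()
    context.Connection.Open()
    context.Transaction = context.Connection.BeginTransaction()
    Try
        Dim facility As New Facility With {.Name = "Test facility"}
        context.Facilities.InsertOnSubmit(facility)
        context.SubmitChanges()

        ' Reading back through the SAME context sees the uncommitted row
        ' and is not blocked by the locks this transaction holds.
        Dim found = context.Facilities.SingleOrDefault(Function(f) f.ID = facility.ID)

        context.Transaction.Rollback() ' or Commit(), depending on the test
    Finally
        context.Connection.Close()
    End Try
End Using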

File.Copy FileNotFoundException reported randomly when it's never true

The code is very simple.
If File.Exists(strFileMovingTo) Then File.Delete(strFileMovingTo)

If File.Exists(strFileMovingTo) Then
    Call SendEmail(Globals.EmailInternetTeam, "dev-sql@fad.co.uk", "Display Jpg Problem", "The file " & strFileMovingTo & " cannot be removed by the file mover (to allow a new file to be moved over)")
    Return False
Else
    If File.Exists(strFileMovingFrom) Then
        File.Copy(strFileMovingFrom, strFileMovingTo, True)
        If File.Exists(strFileMovingTo) = False Then
            ' tried to copy file over but must have failed ... send email
            Call SendEmail(Globals.EmailInternetTeam, "dev-sql@friday-ad.co.uk", "Display Jpg Problem", "The file cannot be moved by the file mover from " & strFileMovingFrom & " to " & strFileMovingTo & ". Please have a look at why.")
            Return False
        Else
            Return True
        End If
    End If
    Return False
    ' make sure this file exists on fad dev
End If
However, a FileNotFoundException is thrown during File.Copy even though it's wrapped in an If File.Exists ... End If block to check its existence.
The great thing is that if you run this through the debugger it nearly always works; when released as an app it almost never works.
Scarily, the file always exists.
Anyone know what's going on?
There's probably something else deleting the file, and there's a race condition between the call to File.Exists and File.Copy.
I agree with Dave's answer that this looks like a timing issue. Also, if a file can't be deleted for any reason, File.Delete will usually throw an exception. Perhaps you should be catching that instead and reworking your logic.
There are many race conditions here; you shouldn't blindly rely on File.Exists before other file operations. Any other thread or process can delete or add a file with the same name between the two calls.
If File.Exists(strFileMovingFrom) Then
    ' AT THIS TIME, another thread or another process might run
    ' the equivalent of File.Delete(strFileMovingFrom)
    File.Copy(strFileMovingFrom, strFileMovingTo, True) ' can throw!
The fact that it works in debug tells me it's a timing problem. You're not waiting long enough for the deletes or other file system changes to happen.
Build in a wait of one to two seconds after making file system changes.
UPDATE:
How about this: create a shared dictionary of the file moves you want to perform and use a FileSystemWatcher to carry out the copy action.
If File.Exists(strFileMovingTo) Then File.Delete(strFileMovingTo)
Thread.Sleep(1000) ' add wait

If File.Exists(strFileMovingTo) Then
    Call SendEmail(Globals.EmailInternetTeam, "dev-sql@fad.co.uk", "Display Jpg Problem", "The file " & strFileMovingTo & " cannot be removed by the file mover (to allow a new file to be moved over)")
    Return False
Else
    If File.Exists(strFileMovingFrom) Then
        File.Copy(strFileMovingFrom, strFileMovingTo, True)
        Thread.Sleep(1000) ' add wait
        If File.Exists(strFileMovingTo) = False Then
            ' tried to copy file over but must have failed ... send email
            Call SendEmail(Globals.EmailInternetTeam, "dev-sql@friday-ad.co.uk", "Display Jpg Problem", "The file cannot be moved by the file mover from " & strFileMovingFrom & " to " & strFileMovingTo & ". Please have a look at why.")
            Return False
        Else
            Return True
        End If
    End If
    Return False
    ' make sure this file exists on fad dev
End If
When working with some of the file functions of the Windows API (and the same should be true for .NET), you should always be aware of the asynchronous nature of file system functions. Asynchronous means that there is a non-zero, unpredictable, non-guaranteed delay between a call to an API affecting the file system and the next successful call to the same API relating to that file or directory.
With non-transactional APIs it is a common mistake to call something like "create file" and then immediately try to "find first" and fail. Just treat the file system as a messaging system with unpredictable delays and develop a sort of "protocol" with repeated polling, sleeps and timeouts, or event notifications and callbacks.
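For example, a crude retry helper along those lines might look like the following sketch (the retry count and delay are arbitrary, and the lambda in the usage example needs VB 10 or later):

Imports System.IO
Imports System.Threading

Module FileRetrySketch
    ' Try an operation a few times instead of trusting a single File.Exists check.
    Function TryFileOperation(operation As Action, Optional retries As Integer = 5, Optional delayMs As Integer = 200) As Boolean
        For attempt As Integer = 1 To retries
            Try
                operation()
                Return True
            Catch ex As IOException
                ' File still locked, not visible yet, or deleted by someone else: wait and retry.
                Thread.Sleep(delayMs)
            End Try
        Next
        Return False
    End Function
End Module

' Usage:
' If Not TryFileOperation(Sub() File.Copy(strFileMovingFrom, strFileMovingTo, True)) Then
'     ' give up and send the alert email
' End If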
However, since the introduction of Vista there is a different set of guarantees and expectations, as applications can use the so-called "transactional" file API.

Log every SQL query to database in Rails

I want to save to a log file some of the SQL queries Rails performs (namely the CREATE, UPDATE and DELETE ones);
therefore I need to intercept all queries, then filter them, maybe with some regexp, and log them as needed.
Where would I put such a thing in the Rails code?
Here is a simplified version of what c0r0ner linked to, to show it better:
connection = ActiveRecord::Base.connection
class << connection
  alias :original_exec :execute

  def execute(sql, *name)
    # try to log the sql command, but ignore any errors that occur in this block;
    # we log before executing, in case the execution raises an error
    begin
      File.open(Rails.root.join("log/sql.txt"), 'a') { |f| f.puts Time.now.to_s + ": " + sql }
    rescue Exception => e
      # ignore logging errors
    end
    # execute the original statement
    original_exec(sql, *name)
  end
end
SQL logging in Rails -
In brief, you need to override the ActiveRecord execute method. There you can add any logic for logging.
As a note for followers, you can "log all queries" as described in "Rails - See generated SQL queries in log files" and then grep the files for the ones you want, if desired.
If you are using MySQL, I would look into mysqlbinlog. It is going to track everything that potentially updates data; you can grep out whatever you need from that log easily.
http://dev.mysql.com/doc/refman/5.0/en/mysqlbinlog.html
http://dev.mysql.com/doc/refman/5.0/en/binary-log.html
SQL Server? If so...
Actually, I'd do this at the SQL end. You could set up a trace and collect every query that comes through a connection with a particular Application Name. If you save it to a table, you can easily query that table later.
Slightly updated version of @luca's answer, for at least Rails 4 (and probably Rails 5).
Place this in config/initializers/sql_logger.rb:
connection = ActiveRecord::Base.connection
class << connection
  alias :original_exec :execute

  def execute(sql, *name)
    # try to log the sql command, but ignore any errors that occur in this block;
    # we log before executing, in case the execution raises an error
    begin
      File.open(Rails.root.join("log/sql.log"), 'a') do |file|
        file.puts Time.now.to_s + ": " + sql
      end
    rescue Exception => e
      Rails.logger.warn("Error logging SQL: #{e}")
    end
    # execute the original statement
    original_exec(sql, *name)
  end
end