Isolated committing of a transaction in SQL

I have an n-tier C# ASP.NET application server which uses stored procedures to communicate with the database.
I have a service layer which rolls back all ADO.NET transactions if an exception is thrown, using TransactionScopeOption.RequiresNew.
In my stored procedure, I want to track login attempt numbers, so we want to keep the transaction framework as is, but we also want an isolated transaction which we commit.
How do I do this?
I have tried using a new TransactionScope with TransactionScopeOption.RequiresNew in our data layer, but this has no effect.

Strange - RequiresNew on the inner (logging) TransactionScope should work.
In the nested transaction below, both TransactionScopeOption.Suppress and TransactionScopeOption.RequiresNew work for me - the inner transaction (DAL2.Txn2) is committed, and the outer one (DAL1.Txn1) is aborted.
try
{
    using (TransactionScope tsOuter = new TransactionScope(TransactionScopeOption.Required))
    {
        DAL1.Txn1();
        using (TransactionScope tsLogging = new TransactionScope(TransactionScopeOption.Suppress))
        {
            DAL2.Txn2();
            tsLogging.Complete();
        }
        throw new Exception("Big Hairy Exception");
    }
}
catch (Exception ex)
{
    MessageBox.Show(ex.Message);
}
Edit: Mixing TransactionScope and explicit T-SQL transactions is to be avoided - this is stated in the same link you've referenced, viz. http://msdn.microsoft.com/en-us/library/ms973865.aspx, quoted below.
TransactionScopes manage transaction escalation quite intelligently - they escalate only when necessary (e.g. DTC will only be used if the transaction spans multiple databases or resources, such as SQL Server and MSMQ). They also work with the SQL 2005+ lightweight transactions, so multiple connections to the same database will be managed within a transaction without the overhead of DTC.
IMHO the decision as to whether to use Suppress vs RequiresNew will depend on whether you need to do your auditing within a transaction at all - RequiresNew for an isolated transaction, vs Suppress for none.
When using System.Transactions, applications should not directly utilize transactional programming interfaces on resource managers - for example the T-SQL BEGIN TRANSACTION or COMMIT TRANSACTION verbs, or the MessageQueueTransaction() object in the System.Messaging namespace, when dealing with MSMQ. Those mechanisms would bypass the distributed transaction management handled by System.Transactions, and combining the use of System.Transactions with these resource manager "internal" transactions will lead to inconsistent results .... Never mix the two
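For the original question (persisting a login-attempt count even when the outer business transaction rolls back), the only change needed to the sample above is the inner scope option. A minimal sketch, reusing the same hypothetical DAL calls:

using (TransactionScope tsOuter = new TransactionScope(TransactionScopeOption.Required))
{
    DAL1.Txn1(); // business work - rolled back when the exception propagates

    // RequiresNew suspends the ambient transaction and starts an independent one,
    // so the logging/audit write commits on its own.
    using (TransactionScope tsLogging = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        DAL2.Txn2(); // e.g. increment the login-attempt counter
        tsLogging.Complete(); // commits regardless of what happens to tsOuter
    }

    throw new Exception("Big Hairy Exception"); // tsOuter is never completed, so it rolls back
}

The difference from Suppress is that the logging call still runs inside a transaction of its own, rather than outside any transaction.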

Related

sqlps transaction issue

I wrote a script that uses sqlps to deploy an SSAS tabular cube. After the deployment, I need to perform a couple of actions on the cube, but if I try to access it, I get a message saying that the cube doesn't exist. Yet it does exist: if I split the actions into two scripts (deploy -> exit sqlps -> new sqlps session), it works (for reasons that don't matter now, I can't do that).
It seems that the sqlps session doesn't see the cube it just deployed. I'm wondering if there is a refresh command I can run, or if I can run sqlps in a "read uncommitted" state.
Do you use the PowerShell transaction-supporting cmdlets? It looks like the transaction is only being committed after your script finishes.
Try splitting the deployment logic into two transactions: creating the cube, and the other actions you need to perform. Each part should use its own transaction, like this:
Start-PSTransaction
# logic
Complete-PSTransaction
If you need to access the transactions programmatically, you should use the PowerShell analogue of TransactionScope, the cmdlet's CurrentPsTransaction property, like this:
using (CurrentPsTransaction)
{
    ... // Perform transactional work here
}
... // Perform non-transacted work here
using (CurrentPsTransaction)
{
    ... // Perform more transactional work here
}
Update:
Maybe you can turn on transaction support using the declaration?
Developing for transactions – Cmdlet and Provider declarations
Cmdlets declare their support for transactions in a similar way that they declare their support for ShouldProcess:
[Cmdlet(“Get”, “Process”, SupportsTransactions=True)]
Providers declare their support for transactions through the ProviderCapabilities flag:
[CmdletProvider(“Registry”, ProviderCapabilities.Transactions)]
More about isolation levels in PowerShell:
TransactionScope with IsolationLevel set to Serializable is locking all SQL SELECTs

When will a commit actually affect tables during a procedure call?

I am working with MS SQL with the Struts framework.
While calling the procedure I set autocommit to false in the program.
When the procedure runs I have to commit one separate transaction, and it must affect the table externally (independently of the program's transaction).
But nothing is saved until the conn.commit() statement executes in the program.
Is there any other way to commit the transaction in the procedure itself, so that the table is affected at the end of the single transaction in the procedure?
Please tell me if you know.
T.Saravanan
You should start and commit/rollback a transaction at the same level; otherwise you are introducing a lot of unpredictable paths - and, frankly, some bad design. So: if you need to commit at the server, use BEGIN TRAN / COMMIT TRAN in the T-SQL to handle the transaction locally.
Note, though, that T-SQL exception/error handling is not as rich as handling errors at a caller such as Java/C#. If the problem is that you want to disassociate this work from another unrelated transaction, then it depends on how your calling code works:
if it is using connection-level transactions, then you will need to use a separate connection; just run the isolated work on a different connection using the Java/C#/whatever transaction API (i.e. the same as your existing code, by the sound of it, but on a different connection)
if it is using scope-based transactions (TransactionScope in C#; not sure about the Java equivalent, but this is an LTM or DTC transaction), then you can explicitly create a new scope that is bound either to a new (isolated) transaction or to the nil transaction (i.e. the inner scope is not enlisted) - see the sketch below
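To illustrate the second option, a minimal C# sketch that writes audit data in its own transaction, independent of any ambient transaction started by the caller. The stored procedure name, connection string and parameter are hypothetical:

// requires: using System; using System.Data; using System.Data.SqlClient; using System.Transactions;
public void SaveLoginAttempt(string userName, string connectionString)
{
    // RequiresNew: commit this work in its own isolated transaction.
    // (Use TransactionScopeOption.Suppress instead if no transaction is needed at all.)
    using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew))
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.LogLoginAttempt", conn)) // hypothetical stored procedure
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@userName", userName);
        conn.Open();
        cmd.ExecuteNonQuery();
        scope.Complete(); // commits even if the caller's transaction later rolls back
    }
}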
As for affecting the tables... SQL Server generally applies changes optimistically, i.e. yes, the changes are applied immediately (so that commit is cheap and rollback is more expensive); however, the isolation level will generally prevent other SPIDs from seeing the data. A competing SPID with a low isolation level (or using the NOLOCK hint) will see the uncommitted data, but this may be a phantom/non-repeatable read if the data eventually gets rolled back.

NHibernate: one ISession, same IDbConnection

I have some code that calls session.Get(id) twice on the same ISession. I can see that the ISession creates two IDbConnections. I guess this is because of some kind of configuration. I would like it to do both fetches on the same IDbConnection. How?
If both Get operations are in the same transaction, they will share the same IDbConnection. Otherwise you end up with implicit transactions and NHibernate will open and close an IDbConnection for each query. In general, you should try to do something like:
using (var tx = session.BeginTransaction())
{
    var customer = session.Get<Customer>(123);
    var order = session.Get<Order>(456);
    // do stuff
    tx.Commit();
}
Use of implicit transactions is discouraged:
When we don't define our own transactions, it falls back into implicit transaction mode, where every statement to the database runs in its own transaction, resulting in a large performance cost (database time to build and tear down transactions), and reduced consistency.
Even if we are only reading data, we should use a transaction, because using transactions ensures that we get consistent results from the database. NHibernate assumes that all access to the database is done under a transaction, and strongly discourages any use of the session without a transaction.

Handle NHibernate Transaction Errors

Our application (which uses NHibernate and ASP.NET MVC), when put under stress tests, throws a lot of NHibernate transaction errors. The major types are:
Transaction not connected, or was disconnected
Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect)
Transaction (Process ID 177) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Can someone help me in identifying the reason for Exception 1?
I know I have to handle the other exceptions in my code. Can someone point me to resources which can help me handle these errors in an efficient manner?
Q. How do we manage Sessions and Transactions?
A. We are using Autofac. For every server request, we create a new request container which has the session in the container lifetime scope. On activating the session we begin the transaction. When the request completes, we commit the transaction. In some cases, the transaction can be huge. To simplify, every server request is contained in a transaction.
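A rough sketch of the session-per-request pattern described above, assuming Autofac and NHibernate; the module, helper and lifecycle names are illustrative, not the actual application code:

using Autofac;
using NHibernate;
using NHibernate.Cfg;

public class NHibernateModule : Autofac.Module
{
    protected override void Load(ContainerBuilder builder)
    {
        // One session factory for the whole application
        builder.Register(c => CreateSessionFactory())
               .As<ISessionFactory>()
               .SingleInstance();

        // One session (and one transaction) per request lifetime scope
        builder.Register(c =>
                {
                    var session = c.Resolve<ISessionFactory>().OpenSession();
                    session.BeginTransaction(); // transaction begins on activation
                    return session;
                })
               .As<ISession>()
               .InstancePerLifetimeScope();
    }

    private static ISessionFactory CreateSessionFactory()
    {
        // hypothetical: build the NHibernate configuration from hibernate.cfg.xml
        return new Configuration().Configure().BuildSessionFactory();
    }
}

public static class RequestLifecycle
{
    // Called when the web request completes: commit and dispose.
    public static void End(ISession session)
    {
        if (session.Transaction.IsActive)
            session.Transaction.Commit();
        session.Dispose();
    }
}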
Have a look at this thread:
http://n2cms.codeplex.com/Thread/View.aspx?ThreadId=85016
Basically what it says as a possible cause of this exception:
2010-02-17 21:01:41,204 1 WARN NHibernate.Util.ADOExceptionReporter - System.Data.SqlClient.SqlException: The transaction log for database 'databasename' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
As the transaction log's size is proportional to the amount of work done during the transaction, perhaps you ought to look into putting your transactional boundaries around the command handlers' handling of commands on the write side. You would then, with a session #X, load the state you wish to mutate, mutate it and commit it, all as one unit of work in #X.
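For example, a minimal sketch of a command handler that owns its own small unit of work; the DisableUserCommand, DisableUserHandler and User types are illustrative:

using NHibernate;

public class User
{
    public virtual int Id { get; set; }
    public virtual bool IsActive { get; set; }
}

public class DisableUserCommand
{
    public int UserId { get; set; }
}

public class DisableUserHandler
{
    private readonly ISessionFactory _sessionFactory;

    public DisableUserHandler(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    public void Handle(DisableUserCommand command)
    {
        // Session #X: load, mutate and commit as one short-lived unit of work
        using (var session = _sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            var user = session.Get<User>(command.UserId);
            user.IsActive = false;
            tx.Commit(); // only the rows touched here take part in the transaction
        }
    }
}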
With regard to the read side of things, you might then have another ISession #Y that reads data; this session could batch reads (e.g. at RepeatableRead or something similar) with the Futures feature, as in the sketch below, or could simply read from a cache (albeit that is a crutch). Doing it this way might help you recover from "errors" that aren't really errors: livelocks, deadlocks and victim transactions.
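A small sketch of batching reads with NHibernate Futures on a dedicated read session; sessionFactory and the Customer/Order entities are the same illustrative names used earlier, and both queries go to the database in a single round trip when the first result is consumed:

// requires: using System.Data; using NHibernate; using NHibernate.Criterion;
using (var readSession = sessionFactory.OpenSession())
using (var tx = readSession.BeginTransaction(IsolationLevel.RepeatableRead))
{
    // .Future()/.FutureValue() defer execution so the queries are batched together
    var customers = readSession.QueryOver<Customer>()
                               .Take(10)
                               .Future();
    var orderCount = readSession.QueryOver<Order>()
                                .ToRowCountQuery()
                                .FutureValue<int>();

    // First enumeration/access sends the whole batch in one round trip
    foreach (var customer in customers) { /* ... */ }
    int total = orderCount.Value;

    tx.Commit();
}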
The problem with using a transaction per request is that your ISession acquires a lot of bookkeeping data while you are working, all of which is part of the transaction. The database therefore marks the data (rows, columns, tables, etc.) as taking part in the transaction, causing the wait graph to span 'entities' (in the database sense, not the DDD sense) which are not actually part of the transactional boundary of the command your application handled.
For the record (other people googling this), Fabio had a post about dealing with exceptions from the data layer. Quoting some of his code:
public class MsSqlExceptionConverterExample : ISQLExceptionConverter
{
    public Exception Convert(AdoExceptionContextInfo exInfo)
    {
        var sqle = ADOExceptionHelper.ExtractDbException(exInfo.SqlException) as SqlException;
        if (sqle != null)
        {
            switch (sqle.Number)
            {
                case 547:
                    return new ConstraintViolationException(exInfo.Message,
                        sqle.InnerException, exInfo.Sql, null);
                case 208:
                    return new SQLGrammarException(exInfo.Message,
                        sqle.InnerException, exInfo.Sql);
                case 3960:
                    return new StaleObjectStateException(exInfo.EntityName, exInfo.EntityId);
            }
        }
        return SQLStateConverter.HandledNonSpecificException(exInfo.SqlException,
            exInfo.Message, exInfo.Sql);
    }
}
547 is the exception number for constraint conflict.
208 is the exception number for an invalid object name in the SQL.
3960 is the exception number for Snapshot isolation transaction aborted due to update conflict.
So if you are running into concurrency issues like those you describe, remember that they will invalidate your ISession and that you'd have to handle them like the above - for example by discarding the session and retrying, as sketched below.
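A minimal sketch, under the assumption that the failed operation is safe to retry, of throwing away the invalidated session and re-running the unit of work in a fresh one; the helper name and maxAttempts are illustrative:

using System;
using NHibernate;

public static class OptimisticRetry
{
    public static void ExecuteWithRetry(ISessionFactory sessionFactory,
                                        Action<ISession> unitOfWork,
                                        int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            using (var session = sessionFactory.OpenSession())
            using (var tx = session.BeginTransaction())
            {
                try
                {
                    unitOfWork(session);
                    tx.Commit();
                    return;
                }
                catch (StaleObjectStateException) when (attempt < maxAttempts)
                {
                    // The session is now invalid; fall through so it is disposed,
                    // then retry with a brand new session on the next iteration.
                }
            }
        }
    }
}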
Part of what you might be looking for is CQRS, where you have separate read and write sides. This might help: http://abdullin.com/cqrs/, http://cqrsinfo.com.
So to summarize: your problems might be related to the way you handle your transactions. Also, try running select log_reuse_wait_desc from sys.databases where name = 'MyDBName' and see what it gives you.
This thread has an explanation:
http://groups.google.com/group/nhusers/browse_thread/thread/7f5fb68a00829d13
In short, the database probably rolls back the transaction by itself due to some error, so that when you try to roll back the transaction later it is already rolled back and in a zombie state. This tends to hide the actual reason for the rollback, since all you see is a TransactionException instead of the exception that actually triggered the rollback in the first place.
I don't think there is much you can do about it beyond logging it and trying to figure out what is causing the underlying error.
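One way to at least preserve the underlying error, sketched here on plain ADO.NET for illustration: never let the rollback's own exception replace the exception that doomed the transaction. The RunInTransaction helper name is illustrative.

using System;
using System.Data.SqlClient;
using System.Diagnostics;

static void RunInTransaction(SqlConnection conn, Action<SqlTransaction> work)
{
    using (SqlTransaction tx = conn.BeginTransaction())
    {
        try
        {
            work(tx);
            tx.Commit();
        }
        catch
        {
            try
            {
                tx.Rollback();
            }
            catch (Exception rollbackEx)
            {
                // The server may already have rolled the transaction back (zombie state).
                // Log it, but don't throw - that would mask the original error.
                Debug.WriteLine("Rollback failed: " + rollbackEx);
            }
            throw; // rethrow the exception that actually caused the failure
        }
    }
}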
I know this post was a while back and I assume you fixed it, but it seems like you have thread-sharing issues with the NHibernate ISession, which is not thread-safe. Basically one thread is starting a transaction and another is attempting to close it, causing all sorts of chaos.

Asynchronous Triggers in SQL Server 2005/2008

I have triggers that manipulate and insert a lot of data into a Change tracking table for audit purposes on every insert, update and delete.
This trigger does its job very well, by using it we are able to log the desired oldvalues/newvalues as per the business requirements for every transaction.
However, in some cases where the source table has a lot of columns, it can take up to 30 seconds for the transaction to complete, which is unacceptable.
Is there a way to make the trigger run asynchronously? Any examples.
You can't make the trigger run asynchronously, but you could have the trigger synchronously send a message to a SQL Service Broker queue. The queue can then be processed asynchronously by a stored procedure.
These articles show how to use Service Broker for asynchronous auditing and should be useful:
Centralized Asynchronous Auditing with Service Broker
Service Broker goodies: Cross Server Many to One (One to Many) scenario and How to troubleshoot it
SQL Server 2014 introduced a very interesting feature called Delayed Durability. If you can tolerate losing a few rows in case of a catastrophic event, like a server crash, you could really boost your performance in scenarios like yours.
Delayed transaction durability is accomplished using asynchronous log writes to disk. Transaction log records are kept in a buffer and written to disk when the buffer fills or a buffer flushing event takes place. Delayed transaction durability reduces both latency and contention within the system
The database containing the table must first be altered to allow delayed durability.
ALTER DATABASE dbname SET DELAYED_DURABILITY = ALLOWED
Then you could control the durability on a per-transaction basis.
begin tran
insert into ChangeTrackingTable select * from inserted
commit with(DELAYED_DURABILITY=ON)
The transaction will be committed as fully durable if it is cross-database, so this will only work if your audit table is located in the same database as the trigger.
It is also possible to alter the database to FORCED instead of ALLOWED. This causes all transactions in the database to become delayed durable.
ALTER DATABASE dbname SET DELAYED_DURABILITY = FORCED
For delayed durability, there is no difference between an unexpected shutdown and an expected shutdown/restart of SQL Server. Like catastrophic events, you should plan for data loss. In a planned shutdown/restart some transactions that have not been written to disk may first be saved to disk, but you should not plan on it. Plan as though a shutdown/restart, whether planned or unplanned, loses the data the same as a catastrophic event.
This strange defect will hopefully be addressed in a future release, but until then it may be wise to make sure the 'sp_flush_log' procedure is executed automatically when SQL Server is restarting or shutting down.
To perform asynchronous processing you can use Service Broker, but it isn't the only option; you can also use CLR objects.
The following is an example of a stored procedure (AsyncProcedure) that asynchronously calls another procedure (SyncProcedure):
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Runtime.Remoting.Messaging;
using System.Diagnostics;

public delegate void AsyncMethodCaller(string data, string server, string dbName);

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void AsyncProcedure(SqlXml data)
    {
        AsyncMethodCaller methodCaller = new AsyncMethodCaller(ExecuteAsync);
        string server = null;
        string dbName = null;
        using (SqlConnection cn = new SqlConnection("context connection=true"))
        using (SqlCommand cmd = new SqlCommand("SELECT @@SERVERNAME AS [Server], DB_NAME() AS DbName", cn))
        {
            cn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                reader.Read();
                server = reader.GetString(0);
                dbName = reader.GetString(1);
            }
        }
        methodCaller.BeginInvoke(data.Value, server, dbName, new AsyncCallback(Callback), null);
        //methodCaller.BeginInvoke(data.Value, server, dbName, null, null);
    }

    private static void ExecuteAsync(string data, string server, string dbName)
    {
        string connectionString = string.Format("Data Source={0};Initial Catalog={1};Integrated Security=SSPI", server, dbName);
        using (SqlConnection cn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("SyncProcedure", cn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@data", SqlDbType.Xml).Value = data;
            cn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    private static void Callback(IAsyncResult ar)
    {
        AsyncResult result = (AsyncResult)ar;
        AsyncMethodCaller caller = (AsyncMethodCaller)result.AsyncDelegate;
        try
        {
            caller.EndInvoke(ar);
        }
        catch (Exception ex)
        {
            // handle the exception
            //Debug.WriteLine(ex.ToString());
        }
    }
}
It uses asynchronous delegates to call SyncProcedure:
CREATE PROCEDURE SyncProcedure(@data xml)
AS
INSERT INTO T(Data) VALUES (@data)
Example of calling AsyncProcedure:
EXEC dbo.AsyncProcedure N'<doc><id>1</id></doc>'
Unfortunately, the assembly requires UNSAFE permission.
I wonder if you could tag a record for the change tracking by inserting into a "to process" table, including who made the change, etc.
Then another process could come along and copy the rest of the data on a regular basis, as sketched below.
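A rough sketch of that second process; the AuditQueue and AuditLog tables, column names and connection string are hypothetical. A scheduled job or background service drains the queue and copies the full audit data outside the original transaction path:

using System;
using System.Data.SqlClient;
using System.Threading;

class AuditCopyWorker
{
    static void Main()
    {
        const string connectionString = "Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI";
        while (true)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(@"
                -- Atomically move queued rows into the full audit log
                DELETE FROM dbo.AuditQueue
                OUTPUT deleted.TableName, deleted.KeyValue, deleted.ChangedBy, deleted.ChangedAt, deleted.Payload
                INTO dbo.AuditLog (TableName, KeyValue, ChangedBy, ChangedAt, Payload);", conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
            Thread.Sleep(TimeSpan.FromSeconds(30)); // poll interval
        }
    }
}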
There's a basic conflict between "does its job very well" and "unacceptable", obviously.
It sounds to me like you're trying to use triggers the same way you would use events in an OO application, which IMHO doesn't map well.
I would call any trigger logic that takes 30 seconds - no, more than 0.1 second - dysfunctional. I think you really need to redesign your functionality and do it some other way. I'd say "if you want to make it asynchronous", but I don't think this design makes sense in any form.
As far as "asynchronous triggers" go, the fundamental conflict is that you could never include such a thing between BEGIN TRAN and COMMIT TRAN statements, because you've lost track of whether it succeeded or not.
Create history table(s). While updating (or deleting/inserting) the main table, insert the old values of the record (the deleted pseudo-table in the trigger) into the history table; some additional info is needed too (timestamp, operation type, maybe user context). New values are kept in the live table anyway.
This way triggers run fast(er), and you can shift the slow operations to a log viewer (procedure).
From SQL Server 2008 you can use the Change Data Capture (CDC) feature for automatically logging changes, which is purely asynchronous. Find more details here.
Not that I know of, but are you inserting values into the audit table that also exist in the base table? If so, you could consider tracking just the changes. An insert would then track the change time, user, etc. and a bunch of NULLs (in effect the before values). An update would have the change time, user, etc. and the before value of the changed column only. A delete has the change time, etc. and all the values.
Also, do you have an audit table per base table or one audit table for the whole DB? Of course the latter can more easily result in waits, as each transaction tries to write to the one table.
I suspect that your trigger is one of those generic CSV/text-generating triggers designed to log all changes for all tables in one place. Good in theory (perhaps...), but difficult to maintain and use in practice.
If you ran it asynchronously (which would still require storing the data somewhere for logging again later), then you are not auditing, nor do you have a history you can rely on.
Perhaps you could look at the trigger execution plan and see what bit is taking the longest?
Can you change how you audit, say, to per table? You could split the current log data into the relevant tables.