Asynchronous Triggers in SQL Server 2005/2008 - sql

I have triggers that manipulate and insert a lot of data into a Change tracking table for audit purposes on every insert, update and delete.
The trigger does its job very well: by using it we are able to log the desired old values/new values for every transaction, as the business requires.
However, in some cases where the source table has a lot of columns, it can take up to 30 seconds for the transaction to complete, which is unacceptable.
Is there a way to make the trigger run asynchronously? Any examples?

You can't make the trigger run asynchronously, but you could have the trigger synchronously send a message to a SQL Service Broker queue. The queue can then be processed asynchronously by a stored procedure.
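A rough sketch of that pattern, with placeholder object names (the exact setup depends on your schema): the trigger just SENDs the change set as XML and returns, while an activation procedure attached to the queue writes the audit rows in the background.
-- One-time setup (names are placeholders):
CREATE MESSAGE TYPE AuditMessage VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT AuditContract (AuditMessage SENT BY INITIATOR);
CREATE QUEUE AuditQueue;
CREATE SERVICE AuditService ON QUEUE AuditQueue (AuditContract);

-- Inside the trigger, instead of writing the audit rows directly:
DECLARE @handle UNIQUEIDENTIFIER;
DECLARE @payload XML;

SET @payload = (SELECT * FROM inserted FOR XML PATH('row'), ROOT('rows'), TYPE);

BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE AuditService
    TO SERVICE 'AuditService'
    ON CONTRACT AuditContract
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @handle
    MESSAGE TYPE AuditMessage (@payload);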

These articles show how to use Service Broker for asynchronous auditing and should be useful:
Centralized Asynchronous Auditing with Service Broker
Service Broker goodies: Cross Server Many to One (One to Many) scenario and How to troubleshoot it

SQL Server 2014 introduced a very interesting feature called Delayed Durability. If you can tolerate losing a few rows in case of a catastrophic event, like a server crash, you could really boost your performance in scenarios like yours.
Delayed transaction durability is accomplished using asynchronous log writes to disk. Transaction log records are kept in a buffer and written to disk when the buffer fills or a buffer flushing event takes place. Delayed transaction durability reduces both latency and contention within the system.
The database containing the table must first be altered to allow delayed durability.
ALTER DATABASE dbname SET DELAYED_DURABILITY = ALLOWED
Then you could control the durability on a per-transaction basis.
begin tran
insert into ChangeTrackingTable select * from inserted
commit with(DELAYED_DURABILITY=ON)
The transaction will be committed as durable if the transaction is cross-database, so this will only work if your audit table is located in the same database as the trigger.
You can also alter the database to FORCED instead of ALLOWED. This causes all transactions in the database to become delayed durable.
ALTER DATABASE dbname SET DELAYED_DURABILITY = FORCED
For delayed durability, there is no difference between an unexpected shutdown and an expected shutdown/restart of SQL Server. Like catastrophic events, you should plan for data loss. In a planned shutdown/restart some transactions that have not been written to disk may first be saved to disk, but you should not plan on it. Plan as though a shutdown/restart, whether planned or unplanned, loses the data the same as a catastrophic event.
This strange defect will hopefully be addressed in a future release, but until then it may be wise to automatically execute the sys.sp_flush_log procedure whenever SQL Server is restarting or shutting down.
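For reference, the flush can be issued on demand like this (SQL Server 2014 and later):
-- Force any delayed durable log records still in memory to be hardened to disk.
EXEC sys.sp_flush_log;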

To perform asynchronous processing you can use Service Broker, but it isn't the only option; you can also use CLR objects.
The following is an example of a stored procedure (AsyncProcedure) that asynchronously calls another procedure (SyncProcedure):
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Runtime.Remoting.Messaging;
using System.Diagnostics;

public delegate void AsyncMethodCaller(string data, string server, string dbName);

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void AsyncProcedure(SqlXml data)
    {
        AsyncMethodCaller methodCaller = new AsyncMethodCaller(ExecuteAsync);
        string server = null;
        string dbName = null;

        // Read the server and database names over the context connection
        // so the background call can open a regular connection back in.
        using (SqlConnection cn = new SqlConnection("context connection=true"))
        using (SqlCommand cmd = new SqlCommand("SELECT @@SERVERNAME AS [Server], DB_NAME() AS DbName", cn))
        {
            cn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                reader.Read();
                server = reader.GetString(0);
                dbName = reader.GetString(1);
            }
        }

        methodCaller.BeginInvoke(data.Value, server, dbName, new AsyncCallback(Callback), null);
        //methodCaller.BeginInvoke(data.Value, server, dbName, null, null);
    }

    private static void ExecuteAsync(string data, string server, string dbName)
    {
        string connectionString = string.Format("Data Source={0};Initial Catalog={1};Integrated Security=SSPI", server, dbName);
        using (SqlConnection cn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("SyncProcedure", cn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@data", SqlDbType.Xml).Value = data;
            cn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    private static void Callback(IAsyncResult ar)
    {
        AsyncResult result = (AsyncResult)ar;
        AsyncMethodCaller caller = (AsyncMethodCaller)result.AsyncDelegate;
        try
        {
            caller.EndInvoke(ar);
        }
        catch (Exception ex)
        {
            // handle the exception
            //Debug.WriteLine(ex.ToString());
        }
    }
}
It uses asynchronous delegates to call SyncProcedure:
CREATE PROCEDURE SyncProcedure(@data xml)
AS
INSERT INTO T(Data) VALUES (@data)
Example of calling AsyncProcedure:
EXEC dbo.AsyncProcedure N'<doc><id>1</id></doc>'
Unfortunately, the assembly requires UNSAFE permission.
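Deployment then looks roughly like this (the assembly name and path are placeholders); the database must either be marked TRUSTWORTHY or the assembly signed with a key that has been granted UNSAFE ASSEMBLY:
ALTER DATABASE MyDb SET TRUSTWORTHY ON;  -- or sign the assembly instead

CREATE ASSEMBLY AsyncProcedures
FROM 'C:\Assemblies\AsyncProcedures.dll'
WITH PERMISSION_SET = UNSAFE;

CREATE PROCEDURE dbo.AsyncProcedure
    @data xml
AS EXTERNAL NAME AsyncProcedures.StoredProcedures.AsyncProcedure;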

I wonder if you could tag a record for the change tracking by inserting into a "to process" table, including who made the change and so on.
Then another process could come along and copy the rest of the data on a regular basis; a rough sketch follows.
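A sketch of that idea (all names are placeholders): the trigger does one narrow, cheap insert per changed row, and a scheduled job later joins this queue table back to the source data to fill in the full audit record.
CREATE TABLE dbo.ChangesToProcess
(
    Id        INT IDENTITY(1,1) PRIMARY KEY,
    SourceKey INT      NOT NULL,              -- key of the changed row
    Operation CHAR(1)  NOT NULL,              -- 'I', 'U' or 'D'
    ChangedBy sysname  NOT NULL DEFAULT SUSER_SNAME(),
    ChangedAt DATETIME NOT NULL DEFAULT GETDATE(),
    Processed BIT      NOT NULL DEFAULT 0
);

-- Inside the (now much cheaper) trigger, e.g. for updates:
INSERT INTO dbo.ChangesToProcess (SourceKey, Operation)
SELECT d.SourceKey, 'U'
FROM deleted AS d;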

There's a basic conflict between "does its job very well" and "unacceptable", obviously.
It sounds to me like you're trying to use triggers the same way you would use events in an OO procedural application, which IMHO doesn't map.
I would call any trigger logic that takes 30 seconds - no, more than 0.1 second - dysfunctional. I think you really need to redesign your functionality and do it some other way. I'd say "if you want to make it asynchronous", but I don't think this design makes sense in any form.
As far as "asynchronous triggers" go, the fundamental conflict is that you could never include such a thing between BEGIN TRAN and COMMIT TRAN statements, because you would lose track of whether it succeeded or not.

Create history table(s). While updating (/deleting/inserting) the main table, insert the old values of the record (the deleted pseudo-table in the trigger) into the history table; some additional info is needed too (timestamp, operation type, maybe user context). The new values are kept in the live table anyway.
This way triggers run fast(er) and you can shift the slow operations to the log viewer (procedure). A sketch of such a trigger is shown below.
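A minimal sketch, assuming a hypothetical source table dbo.Orders (OrderId, CustomerId, Amount) and a history table dbo.Orders_History with the same columns plus the audit columns:
CREATE TRIGGER trg_Orders_History ON dbo.Orders
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Old values come straight from the deleted pseudo-table;
    -- new values remain in the live table, so nothing else needs copying.
    INSERT INTO dbo.Orders_History (OrderId, CustomerId, Amount, AuditAction, AuditUser, AuditDate)
    SELECT d.OrderId, d.CustomerId, d.Amount,
           CASE WHEN EXISTS (SELECT 1 FROM inserted) THEN 'U' ELSE 'D' END,
           SUSER_SNAME(),
           GETDATE()
    FROM deleted AS d;
END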

From SQL Server 2008 onwards you can use the Change Data Capture (CDC) feature to log changes automatically, which is purely asynchronous. Find more details here.
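Enabling it looks roughly like this (database and table names are placeholders); CDC reads the changes from the transaction log asynchronously via a SQL Server Agent job, so the writing transaction is not slowed down by triggers:
-- Enable CDC on the database, then on the table to track.
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'MySourceTable',   -- placeholder table name
     @role_name     = NULL;               -- no gating role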

Not that I know of, but are you inserting values into the audit table that also exist in the base table? If so, you could consider tracking just the changes. An insert would then record the change time, user, etc. and a bunch of NULLs (in effect, the before values). An update would record the change time, user, etc. and the before value of the changed columns only. A delete would record the change time, etc. and all of the before values. A sketch of this is shown below.
Also, do you have an audit table per base table or one audit table for the whole DB? Of course the latter can more easily result in waits, as each transaction tries to write to the one table.
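A minimal sketch of the changes-only idea for a single column, assuming a hypothetical base table dbo.Customer(CustomerId, Email) and a per-table audit table dbo.Customer_Audit; note that UPDATE(Email) only tells you the column appeared in the SET list, so a stricter version would also compare inserted and deleted values:
CREATE TRIGGER trg_Customer_Audit_U ON dbo.Customer
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.Customer_Audit (CustomerId, ChangedAt, ChangedBy, OldEmail)
    SELECT d.CustomerId,
           GETDATE(),
           SUSER_SNAME(),
           CASE WHEN UPDATE(Email) THEN d.Email END  -- NULL when the column was not part of the update
    FROM deleted AS d;
END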

I suspect that your trigger is one of those generic CSV/text-generating triggers designed to log all changes for all tables in one place. Good in theory (perhaps...), but difficult to maintain and use in practice.
If you ran it asynchronously (which would still require storing the data somewhere for logging again later), then you are not really auditing, and you do not have a history to use either.
Perhaps you could look at the trigger's execution plan and see which part is taking the longest?
Can you change how you audit, say, to one audit table per source table? You could split the current log data into the relevant tables.

Related

NHibernate generates INSERT and UPDATE for new entity

I have an entity with an Id column generated using Hilo.
Within a transaction I create a new entity and call SaveOrUpdate() in order to get the Hilo-generated Id of the entity (I need to write that Id to another DB).
Later on, within the same transaction, I update the new entity - just a simple update of a simple property - and at the end I call SaveOrUpdate() again.
I see that the SQL commands generated are first an INSERT and then an UPDATE, but what I want is just an INSERT with the final details of the entity. Is that possible? Am I doing something wrong?
EDIT: added code sample
Here's a very simplified example in pseudo-code:
Person newPerson = new Person(); // Person is a mapped entity
newPerson.Name = "foo";
_session.SaveOrUpdate(newPerson); // generates INSERT statement
newPerson.BirthDate = DateTime.Now;
_session.SaveOrUpdate(newPerson); // generates UPDATE statement
// assume session transaction was opened before and disposed correctly for sake of simplicity
_session.Transaction.Commit();
The point is that with ORM tools like NHibernate, we work differently than we did with ADO.NET.
While ADO.NET commands and their Execute() family of methods cause immediate SQL statement execution on the DB server... with NHibernate it is dramatically different.
We are working with an ISession. The session can be thought of as a C# collection in memory. All the Save(), SaveOrUpdate(), Update(), Delete()... calls are executed against that object representation. NO SQL command is executed when calling these methods; there are no low-level ADO.NET calls at that moment.
That abstraction allows NHibernate to optimize the final SQL statement batch based on all the information gathered in the ISession. And that's why you will never see the INSERT or UPDATE while working with one session, unless you explicitly call the magical Flush() or change the FlushMode.
In that case (calling Flush()), we are trying to say: NHibernate, we are smart enough to know that now is the time to execute the commands. In other scenarios, it is usually good enough to leave it up to NHibernate...
See here:
- 9.6. Flush

How to perform batch operations with Dapper ORM?

I have Dapper ORM in my project and I have to save a lot of data (1,200,000 rows) to the database, but inserting within a transaction with Dapper is very slow and I want it to be fast. With NHibernate (stateless session) it is also slow.
I think Dapper is fast in general, because it fetches 700,000 rows in 9 seconds where NHibernate takes 33 seconds.
How can this problem be solved?
My code is:
IDbTransaction trans = connection.BeginTransaction();
connection.Execute(@"
    insert DailyResult(Id, PersonId, DateTaradod, DailyTaradods)
    values(@Id, @PersonId, @DateTaradod, @DailyTaradods)", entity, trans);
trans.Commit();
There is no mechanism to make inserting 1200000 rows in a transaction instant, via any regular ADO.NET API. That simply isn't what the intent of that API is.
For what you want, it sounds like you should be using SqlBulkCopy. This supports transactions, and you can use FastMember to help here; for example:
IEnumerable<YourEntity> source = ...
using(var bcp = new SqlBulkCopy(
    connection, SqlBulkCopyOptions.UseInternalTransaction))
using(var reader = ObjectReader.Create(source,
    "Id", "PersonId", "DateTaradod", "DailyTaradods"))
{
    bcp.DestinationTableName = "DailyResult";
    bcp.WriteToServer(reader);
}
It also supports external transactions, but if you are going to "create tran, push, commit tran" you might as well use the internal transaction.
If you don't want to use SqlBulkCopy, you can also look at table-valued-parameter approaches, but SqlBulkCopy would be my recommended API when dealing with this volume.
Note: if the table has more columns than Id, PersonId, DateTaradod and DailyTaradods, you can specify explicit bcp.ColumnMappings to tweak how the insert behaves.

Execute multiple stored procedures in one transaction from WCF

This is my first post on here..
I'm writing a program in MVC3 that has a WCF service which acts as the Data Access Layer. In my DAL, I have to do some sort of 'batch' inserts and updates.. particularly with orders for example.. let's say one order has several items and could have several payment methods etc.. so when I insert a new order I'll need to insert all items related to that order and so on..
Therefore, what I'm looking for is the better way and feasible method to be able to run several stored procedures, e.g one which will insert the order, another which will insert its items, etc..
The tables Order and Item are linked together with a third table called Order_Items, which will have (fk) order_id, (fk) item_id, qty, price..
I know I can run multiple commands by changing the command text and executing a non-query within a transaction.. but I would like to run stored procedures instead of hardcoding text commands.. or I can run the procedures by making the command text something like
cmd.CommandText = "exec sp_insert_order @order_number, @order_date ...";
cmd.ExecuteNonQuery();
and then loop the items say
foreach (string s in insert_items)
{
    cmd.CommandText = s;
    cmd.ExecuteNonQuery();
}
all this within a transaction and then do a commit.. but I don't feel this is such a clean way of doing things.. can someone please share their opinion.
If you're using stored procedure, you should change the way you call them - I would recommend using this approach:
// define your stored procedure name, and the type
cmd.CommandText = "dbo.sp_insert_order";
cmd.CommandType = CommandType.StoredProcedure;
// define and fill your parameters
cmd.Parameters.Add("@order_number", SqlDbType.Int).Value = order_nr;
cmd.Parameters.Add("@order_date", SqlDbType.DateTime).Value = ......;
cmd.ExecuteNonQuery();
Basically, you'd have to do this for each stored procedure you want to call, and you could wrap all of those in a single transaction without any problems:
using(SqlConnection connection = new SqlConnection("your-connection-string-here"))
{
    connection.Open();
    SqlTransaction transaction = connection.BeginTransaction();
    try
    {
        // call all stored procedures here - remember to assign the
        // transaction to the SqlCommand!!
        ....
        transaction.Commit();
    }
    catch(Exception exc)
    {
        transaction.Rollback();
    }
}
You can also use a TransactionScope (or the WCF transaction attributes on your service methods) to enclose all of the work in one transaction, whether it runs stored procedures or text commands.
You may also be interested in the Transaction Propagation functionality built in to WCF. It can be configured in such a way that each web service call to WCF automatically creates, and commits or rolls-back transactions for you, basically wrapping the entire service method call in a transaction.
There is a good MSDN writeup on it here.
It is a bit of an advanced topic and may be overkill for what you need, but something to keep in mind.

ADO.NET Deadlock

I'm experiencing an intermittent deadlock situation with following (simplified) code.
DataSet my_dataset = new DataSet();
SqlCommand sql_command = new SqlCommand();
sql_command.Connection = <valid connection>;
sql_command.CommandType = CommandType.Text;
sql_command.CommandText = "SELECT * FROM MyView ORDER BY 1";
SqlDataAdapter data_adapter = new SqlDataAdapter(sql_command);
sql_command.Connection.Open();
data_adapter.Fill(my_dataset);
sql_command.Connection.Close();
The error I get is:
Transaction (Process ID 269) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
As I understand it, simply filling a DataSet via the ADO.Net .Fill() command shouldn't create a lock on the database.
And, it would appear from the error message that the lock is owned by another process.
The View I'm querying against has select statements only, but it does join a few table together.
Can a view that is only doing a select statement be affected by locked records?
Can/Does ADO.Net .Fill() Lock Records?
Assuming I need to fill a DataSet, is there a way to do so that would avoid potential data locks?
SQL Server 2005 (9.0.4035)
A select query with joins can indeed cause a deadlock. One way to deal with this is to do the query in a SqlTransaction using Snapshot Isolation.
using(SqlTransaction sqlTran = connection.BeginTransaction(IsolationLevel.Snapshot))
{
    // Query goes here.
}
A deadlock can occur because the query locks each table being joined, one after another, before performing the join. If one query has a lock on a table that another query needs to lock, and vice versa, there is a deadlock. With snapshot isolation, queries that just read from tables do not lock them. Integrity is maintained because the read is actually done from a snapshot of the data as of the time the transaction started.
This can have a negative impact on performance, though, because of the overhead of producing the snapshots. Depending on the application, it may be better not to use snapshot isolation and instead, if a query fails due to a deadlock, wait a little while and try again.
It might also be better to try to find out why the deadlocks are occurring and change the structure of the database and/or modify the application to prevent deadlocks. This article has more information.
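Note that snapshot isolation has to be enabled at the database level before IsolationLevel.Snapshot can be requested from ADO.NET; a minimal sketch, with a placeholder database name:
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;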
You may try this:
Lower the transaction isolation level for that query (for instance, IsolationLevel.ReadUncommitted).
Use the NOLOCK hint on your query (a sketch follows below).
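A sketch of the second option against the query from the question; the hint is propagated to the view's underlying tables, at the cost of possibly reading uncommitted data:
SELECT * FROM MyView WITH (NOLOCK) ORDER BY 1;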
It might be far off and not the solution to your problem - check other solutions first - but we had a similar problem (a select that locks records!) that after much effort we tracked down to the file/SMB layer. It seemed that under heavy load, reading files from the networked drive (SAN) got held up, creating a waiting read lock on the actual database files. This expressed itself as a lock on the records they contained.
But this was a race condition and not reproducible without load on the drives. Oh, and it was SQL Server 2005, too.
You should be able to determine, using the tools included with SQL Server, which transactions are deadlocking each other.

WCF Transaction Scope SQL insert table lock

I have two services talking to two different data stores (i.e. SQL). I am using TransactionScope:
eg:
using(TransactionScope scope = new TransactionScope())
{
    service1.InsertUser(user);     //Insert to SQL Service 1 table User
    service2.SavePayment(payment); //Save payment SQL Service 2 table payment
    scope.Complete();
}
Service1 is locking the table (User) until the transaction completes, making subsequent transactions against that table sequential. Is there a way to overcome the lock, so that I can have more than one concurrent call to the SQL Service1 table while the above code is executing?
I would appreciate any input.
Thanks in Advance.
Lihnid
I would guess that you may have triggers on your user or payment table that update the other one.
The most likely scenario is that your save call does some selects, which you cannot do in the same proc where you are inserting; this causes too much locking. Determine whether you need to do the insert as a separate call to the db, using a new TransactionScope with the Suppress option inside, so that the select is removed from the transaction. I know your select would have the NOLOCK hint, but it appears to be ignored in SQL 2005 compared with older versions. I've had this same problem.
Make all your calls as simple as possible.