IDbCommands not being properly enlisted in NHibernate transaction - nhibernate

I have two IDbCommand objects that are created from an NHibernate session, and they are enlisted in a transaction via the NHibernate session. The first database command inserts a value into an Oracle global temporary table and the second command reads values from the table. With an Oracle GTT, a transaction is needed for both commands in order to preserve the data in the GTT.
The strange thing is that the second command reads values from the GTT, as expected, when it's run on one server, but the exact same code doesn't work on the other server. What's even stranger is that the first request on the non-working server works if it happens immediately after the IIS worker processes have been recycled. Each request after that does not work - specifically, the values in the GTT are not maintained after being inserted.
ISession session = sessionFactory.OpenSession();
ITransaction transaction = session.BeginTransaction();

IDbCommand cmdInsert = session.Connection.CreateCommand();
transaction.Enlist(cmdInsert);
cmdInsert.CommandText = "insert into TEMP_TABLE values (1)";
cmdInsert.ExecuteNonQuery();

IDbCommand cmdRead = session.Connection.CreateCommand();
transaction.Enlist(cmdRead);
cmdRead.CommandText = "select * from TEMP_TABLE";
// Nothing is returned here after the second request
IDataReader reader = cmdRead.ExecuteReader();
transaction.Commit();
Why would the transaction that is created from an NHibernate session not properly enlist IDbCommands after the first request to an IIS server?

We ended up using the Oracle Data Provider for .NET (ODP.NET) driver and replacing the deprecated Microsoft System.Data.OracleClient driver. This fixed the transaction support. Not sure why the deprecated driver worked on one server and not the other, but I guess it's deprecated, so I'm not going to investigate it further.
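For reference, a minimal sketch of what the driver switch amounts to when configuring NHibernate in code (the hibernate.cfg.xml equivalent is the connection.driver_class property); this is not the exact code from our project, and the dialect and connection string still need to be configured as before:
using NHibernate.Cfg;

// Point NHibernate at the ODP.NET driver instead of the deprecated
// System.Data.OracleClient one. Only the driver property is shown here.
var cfg = new Configuration();
cfg.SetProperty(NHibernate.Cfg.Environment.ConnectionDriver,
                "NHibernate.Driver.OracleDataClientDriver");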

Related

How to execute two sessions against different SQL databases under a single transaction scope using NHibernate

My requirement is this: I am using two different SQL databases for insert, update, and delete operations, and I want to run the two sessions under one transaction, so that if any exception occurs, the changes to both databases' tables are rolled back.
How can I solve this using NHibernate?
Does NHibernate provide such a facility?
Also, is there any way to read two connection strings using the hibernate.cfg config file?
For example:
ISession s1; // session for db1
ISession s2; // session for db2
s1.Save(obj);
s2.Update(obj);
The goal is to run the two operations above in one transaction.
You can provide your own connection. So each session could be given a connection to a different database. Whether those connections would be enlisted in the ambient TransactionScope is something you would have to experiment with. If you enable sufficient logging, you'll be able to see this process happening via log output.
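As a rough sketch only (the connection strings, factory names, and entities are placeholders, and the OpenSession(IDbConnection) overload depends on your NHibernate version), it could look something like this:
using System.Data.SqlClient;
using System.Transactions;
using NHibernate;

public static class TwoDatabaseExample
{
    // Hypothetical helper: writes to db1 and db2 under one ambient transaction,
    // so either both commits happen or neither does.
    public static void SaveToBothDatabases(
        ISessionFactory db1Factory, ISessionFactory db2Factory,
        object toSave, object toUpdate)
    {
        using (var scope = new TransactionScope())
        using (var conn1 = new SqlConnection("Data Source=server1;Initial Catalog=Db1;Integrated Security=SSPI"))
        using (var conn2 = new SqlConnection("Data Source=server2;Initial Catalog=Db2;Integrated Security=SSPI"))
        {
            // Opening the connections inside the scope lets them auto-enlist
            // in the ambient transaction (this may escalate to MSDTC).
            conn1.Open();
            conn2.Open();

            using (var s1 = db1Factory.OpenSession(conn1))
            using (var s2 = db2Factory.OpenSession(conn2))
            {
                s1.Save(toSave);
                s2.Update(toUpdate);
                s1.Flush();
                s2.Flush();
            }

            scope.Complete(); // both databases commit, or neither does
        }
    }
}
Check the log output to confirm that both connections really enlisted and whether the transaction escalated to a distributed one.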

JOOQ: Is a single statement implicitly transactional, or do I still have to wrap it in a transactional block?

Say we have the following scenarios when writing a jOOQ query:
insertInto(TABLE)
    .set(TABLE.NAME, "...")
    .set(TABLE.FK, null) // breaks FK constraint!
    //...
    .execute();
And:
transactional(configuration -> {
    insertInto(TABLE)
        // list of sets that are OK
        .execute();
    throw new RuntimeException();
});
Are the following statements:
The first query will fail (at the latest) with a DataAccessException when executing the statement in the Database and roll back the entire statement (no insert is committed).
The second query, although already executed without error, will be rolled back upon the exception being thrown.
correct?
And finally, in the following case:
{ // non-transactional code block
    insertInto(TABLE)
        // list of sets that are OK
        .execute();
    throw new RuntimeException();
}
the insert will be performed on the Database but will not be rolled back when the exception is thrown, because it is not in a transactional context.
Is this all correct, or have I misunderstood something?
The first query will fail (at the latest) with a DataAccessException when executing the statement in the Database
Correct
and roll back the entire statement
Well, there is no "roll back" as in a transaction rollback. The statement just fails.
(no insert is committed).
Correct (in any case, nothing is committed from a statement by itself)
The second query, although already executed without error, will be rolled back upon the exception being thrown.
Correct.
the insert will be performed on the Database but will not be rolled back when the exception is thrown, because it is not in a transactional context.
Correct.
have I misunderstood something?
Yes. This isn't strictly related to jOOQ but to SQL statements in general. First off, JDBC has an auto-commit mode, which is active by default on a plain JDBC connection (connection pools and frameworks may change that).
With JDBC auto commit
When it is active, then:
Every statement is always committed.
Which means that if it succeeds, the statement's changes will be applied to the database by the commit.
Which also means that if it fails, the statement has no effect; there is nothing to undo. Yet, there is still a commit.
Note that auto-committing is a JDBC feature, not a jOOQ feature. So jOOQ inherits the JDBC setting here.
Without JDBC auto commit
If auto commit is inactive, then every new statement will start a transaction in JDBC. When you use jOOQ's transaction() API, then jOOQ will override the JDBC auto commit flag to become inactive and start a transaction for you. Now,
A commit is issued if your Transactional block / lambda succeeds. All your changes are written to the database by the commit.
A rollback is issued if your Transactional block / lambda fails with an exception. All your changes are reverted by the rollback.
A statement still needs to succeed to have any effect when committing.

NHibernate generates INSERT and UPDATE for new entity

I have an entity with Id column generated using Hilo.
In a transaction, I create a new entity and call SaveOrUpdate() in order to get the Hilo-generated Id of the entity (I need to write that Id to another DB).
Later on, within the same transaction, I update the new entity, just a simple update of a simple property, and in the end I call SaveOrUpdate() again.
I see that the SQL commands generated are first an INSERT and then an UPDATE, but what I want is just an INSERT with the final details of the entity. Is that possible? Am I doing something wrong?
EDIT: added code sample
Here's a very simplified example of pseudo-code:
Person newPerson = new Person(); // Person is a mapped entity
newPerson.Name = "foo";
_session.SaveOrUpdate(newPerson); // generates INSERT statement
newPerson.BirthDate = DateTime.Now;
_session.SaveOrUpdate(newPerson); // generates UPDATE statement
// assume session transaction was opened before and disposed correctly for sake of simplicity
_session.Transaction.Commit();
The point is that with ORM tools like NHibernate, we work in a different way than we did with ADO.NET.
Whereas ADO.NET commands and their Execute() method family cause immediate SQL statement execution on the DB server, with NHibernate it is dramatically different.
We work with an ISession. The session can be thought of as a C# collection in memory. All the Save(), SaveOrUpdate(), Update(), Delete() ... calls are executed against that object representation. NO SQL command is executed when calling these methods; there are no low-level ADO.NET calls at that moment.
That abstraction allows NHibernate to optimize the final batch of SQL statements based on all the information gathered in the ISession. And that's why you will never see the INSERT and UPDATE when working with one session, unless you explicitly call the magical Flush() or change the FlushMode.
In that case (calling Flush()), we are saying: NHibernate, we are smart enough to decide that now is the time to execute the commands. In other scenarios, it is usually good enough to leave it to NHibernate...
See here:
- 9.6. Flush
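To illustrate with the question's own example, a sketch (assuming the Person mapping from the question; FlushMode.Commit just makes the deferral explicit, and an intermediate query against Person could still force an earlier flush):
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    // Defer all SQL until the transaction commits.
    session.FlushMode = FlushMode.Commit;

    var newPerson = new Person { Name = "foo" };
    session.SaveOrUpdate(newPerson);     // the hilo generator assigns the Id without inserting the row

    newPerson.BirthDate = DateTime.Now;  // just another change tracked by the session

    tx.Commit();                         // one INSERT carrying the final state of the entity
}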

SQLite with TransactionScope and Increment Identity Generator

When trying to use SQLite with a System.Transactions TransactionScope and the identity generator set to Increment, I noticed that I was getting an exception (given below along with the code) when NHibernate was trying to retrieve the next identity number.
This seems to be because the new SQLite connection auto-enlists in the current transaction. From what I have heard, SQLite only supports a single write transaction but should support multiple reads, so I am surprised that I am getting a "database is locked" exception for a read operation. Has anybody used SQLite with TransactionScope in this manner?
The same code works fine if I use an NHibernate transaction instead of TransactionScope.
Code Block:
using (var scope = new TransactionScope())
{
    var userRepository = container.GetInstance<IUserRepository>();
    var user = new User();
    userRepository.SaveOrUpdate(user);
    scope.Complete();
}
Exception:
19:34:19,126 ERROR [ 7] IncrementGenerator [(null)]- could not get
increment value
System.Data.SQLite.SQLiteException: The database file is locked
database is locked
at System.Data.SQLite.SQLite3.Step(SQLiteStatement stmt)
at System.Data.SQLite.SQLiteCommand.ExecuteNonQuery()
at System.Data.SQLite.SQLiteTransaction..ctor(SQLiteConnection
connection, Boolean deferredLock)
at System.Data.SQLite.SQLiteConnection.BeginTransaction(Boolean
deferredLock)
at System.Data.SQLite.SQLiteConnection.BeginTransaction()
at System.Data.SQLite.SQLiteEnlistment..ctor(SQLiteConnection cnn,
Transaction scope)
at
System.Data.SQLite.SQLiteConnection.EnlistTransaction(Transaction
transaction)
at System.Data.SQLite.SQLiteConnection.Open()
at NHibernate.Connection.DriverConnectionProvider.GetConnection()
at NHibernate.Id.IncrementGenerator.GetNext(ISessionImplementor
session)
19:34:20,063 ERROR [ 7] ADOExceptionReporter [(null)]- The database
file is locked
database is locked
There are two things at play here.
As you mentioned, System.Data.SQLite will auto-enlist in distributed transactions. This is on by default and can be turned off by adding Enlist=no to the connection string.
The second is that System.Data.SQLite by default creates transactions with an automatic write lock. This is done based on the assumption that if a transaction is started, writes will be done. This can be overridden by starting the transactions with IsolationLevel.ReadCommitted.
The default can also be specified in the connection string using the DefaultIsolationLevel key. Valid values are ReadCommitted and Serializable only. Other isolation levels are not supported by SQLite. ReadCommitted defers the write lock whereas Serializable obtains the write lock immediately.
Unspecified will use the default isolation level specified in the connection string. If no isolation level is specified in the connection string, Serializable is used. Serializable transactions are the default. In this mode, the engine gets an immediate lock on the database, and no other threads may begin a transaction. Other threads may read from the database, but not write.
With a ReadCommitted isolation level, locks are deferred and elevated as needed. It is possible for multiple threads to start a transaction in ReadCommitted mode, but if a thread attempts to commit a transaction while another thread holds a ReadCommitted lock, it may time out or cause a deadlock on both threads until both threads' CommandTimeouts are reached.
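As a sketch, the two connection-string options described above look like this (the database file name is a placeholder):
using System.Data.SQLite;

// Option 1: don't auto-enlist in the ambient TransactionScope at all.
var notEnlisted = new SQLiteConnection("Data Source=app.db;Enlist=no");

// Option 2: stay enlisted, but defer the write lock until an actual write
// happens instead of taking it as soon as the transaction starts.
var deferredLock = new SQLiteConnection(
    "Data Source=app.db;DefaultIsolationLevel=ReadCommitted");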

Asynchronous Triggers in SQL Server 2005/2008

I have triggers that manipulate and insert a lot of data into a Change tracking table for audit purposes on every insert, update and delete.
This trigger does its job very well; by using it we are able to log the desired old values/new values as per the business requirements for every transaction.
However, in some cases where the source table has a lot of columns, it can take up to 30 seconds for the transaction to complete, which is unacceptable.
Is there a way to make the trigger run asynchronously? Any examples?
You can't make the trigger run asynchronously, but you could have the trigger synchronously send a message to a SQL Service Broker queue. The queue can then be processed asynchronously by a stored procedure.
These articles show how to use Service Broker for asynchronous auditing and should be useful:
Centralized Asynchronous Auditing with Service Broker
Service Broker goodies: Cross Server Many to One (One to Many) scenario and How to troubleshoot it
SQL Server 2014 introduced a very interesting feature called Delayed Durability. If you can tolerate losing a few rows in case of a catastrophic event, like a server crash, you could really boost your performance in scenarios like yours.
Delayed transaction durability is accomplished using asynchronous log writes to disk. Transaction log records are kept in a buffer and written to disk when the buffer fills or a buffer flushing event takes place. Delayed transaction durability reduces both latency and contention within the system.
The database containing the table must first be altered to allow delayed durability.
ALTER DATABASE dbname SET DELAYED_DURABILITY = ALLOWED
Then you could control the durability on a per-transaction basis.
begin tran
insert into ChangeTrackingTable select * from inserted
commit with(DELAYED_DURABILITY=ON)
The transaction will be committed as durable if the transaction is cross-database, so this will only work if your audit table is located in the same database as the trigger.
There is also the possibility to alter the database to FORCED instead of ALLOWED. This causes all transactions in the database to become delayed durable.
ALTER DATABASE dbname SET DELAYED_DURABILITY = FORCED
For delayed durability, there is no difference between an unexpected shutdown and an expected shutdown/restart of SQL Server. Like catastrophic events, you should plan for data loss. In a planned shutdown/restart some transactions that have not been written to disk may first be saved to disk, but you should not plan on it. Plan as though a shutdown/restart, whether planned or unplanned, loses the data the same as a catastrophic event.
This strange defect will hopefully be addressed in a future release, but until then it may be wise to make sure the sp_flush_log procedure is executed automatically when SQL Server is restarting or shutting down.
To perform asynchronous processing you can use Service Broker, but it isn't the only option; you can also use CLR objects.
The following is an example of a stored procedure (AsyncProcedure) that asynchronously calls another procedure (SyncProcedure):
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Runtime.Remoting.Messaging;
using System.Diagnostics;
public delegate void AsyncMethodCaller(string data, string server, string dbName);
public partial class StoredProcedures
{
[Microsoft.SqlServer.Server.SqlProcedure]
public static void AsyncProcedure(SqlXml data)
{
AsyncMethodCaller methodCaller = new AsyncMethodCaller(ExecuteAsync);
string server = null;
string dbName = null;
using (SqlConnection cn = new SqlConnection("context connection=true"))
using (SqlCommand cmd = new SqlCommand("SELECT ##SERVERNAME AS [Server], DB_NAME() AS DbName", cn))
{
cn.Open();
using (SqlDataReader reader = cmd.ExecuteReader())
{
reader.Read();
server = reader.GetString(0);
dbName = reader.GetString(1);
}
}
methodCaller.BeginInvoke(data.Value, server, dbName, new AsyncCallback(Callback), null);
//methodCaller.BeginInvoke(data.Value, server, dbName, null, null);
}
private static void ExecuteAsync(string data, string server, string dbName)
{
string connectionString = string.Format("Data Source={0};Initial Catalog={1};Integrated Security=SSPI", server, dbName);
using (SqlConnection cn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("SyncProcedure", cn))
{
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add("#data", SqlDbType.Xml).Value = data;
cn.Open();
cmd.ExecuteNonQuery();
}
}
private static void Callback(IAsyncResult ar)
{
AsyncResult result = (AsyncResult)ar;
AsyncMethodCaller caller = (AsyncMethodCaller)result.AsyncDelegate;
try
{
caller.EndInvoke(ar);
}
catch (Exception ex)
{
// handle the exception
//Debug.WriteLine(ex.ToString());
}
}
}
It uses asynchronous delegates to call SyncProcedure:
CREATE PROCEDURE SyncProcedure(@data xml)
AS
INSERT INTO T(Data) VALUES (@data)
Example of calling AsyncProcedure:
EXEC dbo.AsyncProcedure N'<doc><id>1</id></doc>'
Unfortunately, the assembly requires UNSAFE permission.
I wonder if you could tag a record for the change tracking by inserting into a "to process" table, including who made the change, etc.
Then another process could come along and copy the rest of the data on a regular basis.
There's a basic conflict between "does its job very well" and "unacceptable", obviously.
It sounds to me that you're trying to use triggers the same way you would use events in an OO procedural application, which IMHO doesn't map.
I would call any trigger logic that takes 30 seconds - no, more than 0.1 second - dysfunctional. I think you really need to redesign your functionality and do it some other way. I'd say "if you want to make it asynchronous", but I don't think this design makes sense in any form.
As far as "asynchronous triggers", the basic fundamental conflict is that you could never include such a thing between BEGIN TRAN and COMMIT TRAN statements because you've lost track of whether it succeeded or not.
Create history table(s). While updating (or deleting/inserting) the main table, insert the old values of the record (the deleted pseudo-table in the trigger) into the history table; some additional info is needed too (timestamp, operation type, maybe user context). The new values are kept in the live table anyway.
This way triggers run fast(er) and you can shift the slow operations to a log viewer (procedure).
From SQL Server 2008 onwards you can use the CDC (Change Data Capture) feature for automatically logging changes, which is purely asynchronous. Find more details here.
Not that I know of, but are you inserting values into the audit table that also exist in the base table? If so, you could consider tracking just the changes. An insert would then track the change time, user, etc. and a bunch of NULLs (in effect, the before values). An update would have the change time, user, etc. and the before value of the changed column only. A delete would have the change time, etc. and all the values.
Also, do you have an audit table per base table or one audit table for the DB? Of course the latter can more easily result in waits as each transaction tries to write to the one table.
I suspect that your trigger is one of these generic CSV/text-generating triggers designed to log all changes for all tables in one place. Good in theory (perhaps...), but difficult to maintain and use in practice.
If you ran it asynchronously (which would still require storing the data somewhere for logging again later), then you are not auditing, and nor do you have a history to use.
Perhaps you could look at the trigger execution plan and see what bit is taking the longest?
Can you change how you audit, say, to per table? You could split the current log data into the relevant tables.