I am kind of confused about how Flush (and NHibernate.ISession in general) works in NHibernate.
From my code, it seems that when I save an object using ISession.Save(entity), the object is saved directly to the database.
However, when I update an object using ISession.SaveOrUpdate(entity) or ISession.Update(entity), the object in the database is not updated; I need to call ISession.Flush in order to update it.
The procedure on how I update the object is as follows:
Obtain the object from the database by using ISession.Get(typeof(T), id)
Change the object property, for example, myCar.Color="Green"
Commit it back to the database by using ISession.Update(myCar)
myCar is not updated in the database. However, if I call ISession.Flush afterwards, then it is updated.
When to use Flush, and when not to use it?
In many cases you don't have to care when NHibernate flushes.
You only need to call flush if you created your own connection because NHibernate doesn't know when you commit on it.
What is really important for you is the transaction. During the transaction you are isolated from other transactions: you always see your own changes when you read from the database, and you don't see others' changes (unless they are committed). So you don't have to care about when NHibernate updates data in the database; until it is committed, it is not visible to anyone else anyway.
NHibernate flushes:
when you call Commit
before queries, to ensure that the query filters by the actual state in memory
when you call Flush
Example:
using (var session = factory.OpenSession())
using (var transaction = session.BeginTransaction())
{
    var entity = session.Get<Entity>(2);
    entity.Name = "new name";

    // no Update() call is needed; NHibernate flushes the change on commit
    transaction.Commit();
}
The entity is updated on commit. NHibernate sees that your session is dirty and flushes the changes to the database. You need Update and Save only if you made the changes outside of the session, that is, on a detached entity: an entity that is not known by the session.
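For example, a rough sketch of the detached case where Update is actually needed (the Car class name and the factory variable are illustrative; the Color property comes from the question above):

Car myCar;
using (var session1 = factory.OpenSession())
{
    // loaded here, and detached as soon as session1 is disposed
    myCar = session1.Get<Car>(1);
}

myCar.Color = "Green";   // changed outside of any session

using (var session2 = factory.OpenSession())
using (var tx = session2.BeginTransaction())
{
    session2.Update(myCar);   // re-attach the detached instance to the new session
    tx.Commit();              // the UPDATE is flushed here
}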
Notes on performance: Flush not only performs the required SQL statements to update the database. It also searches for changes in memory. Since there is no dirty flag on POCOs, it needs to compare every property of every object in the session to its first level cache. This may become a performance problem when it is done too often. There are a couple of things you can do to avoid performance problems:
Do not flush in loops
Avoid serialized objects (serialization is required to check for changes)
Use read-only entities when appropriate
Set mutable = false when appropriate
When using custom types in properties, implement efficient Equals methods
Carefully disable auto-flush only when you are sure that you know what you are doing (a short sketch of the read-only and auto-flush points follows below).
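As a rough illustration of the read-only and auto-flush points (assuming an already opened ISession named session and the mapped Car class from above):

// flush only when the transaction commits, instead of automatically before queries
session.FlushMode = FlushMode.Commit;

// tell NHibernate not to dirty-check this particular instance at all
var car = session.Get<Car>(1);
session.SetReadOnly(car, true);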
NHibernate will only perform SQL statements when it is necessary. It will postpone the execution of SQL statements as long as possible.
For instance, when you save an entity which has an assigned id, it will likely postpone the execution of the INSERT statement.
However, when you insert an entity which has an auto-increment (identity) id, NHibernate needs to execute the INSERT immediately, since it has to know the id that the database will assign to this entity.
When you explicitly call flush, then NHibernate will execute the SQL statements that are necessary for objects that have been changed / created / deleted in that session.
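For instance, a hedged sketch of the difference (Invoice with an identity-style id and Customer with an assigned id are made-up mappings, only for illustration):

using (var session = factory.OpenSession())
using (var tx = session.BeginTransaction())
{
    // identity id: NHibernate must ask the database for the generated id,
    // so the INSERT is sent as soon as Save is called
    session.Save(new Invoice { Number = "A-1" });

    // assigned (or hilo) id: the id is known without a database round trip,
    // so the INSERT can wait until the session is flushed, here at Commit
    session.Save(new Customer { Id = 42, Name = "Acme" });

    tx.Commit();
}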
Related
I'm trying to commit a DML update to a database table while the main program is still running, without committing the main transaction, because errors may occur later and the main transaction might need to be rolled back, while the internal (saved) updates should stay.
Similar to Oracle's autonomous transactions.
CALL FUNCTION ... STARTING NEW TASK ... and SUBMIT ... AND RETURN don't work, as they affect the main transaction.
Is there a way to start a nested database LUW and commit it without interrupting the main LUW?
I am not aware of a way to do this with OpenSQL. But when you are using the ADBC framework, then each instance of the class CL_SQL_CONNECTION operates within a separate database LUW.
I would generally not recommend using ADBC unless you have to, because:
You are now writing SQL statements as strings, which means you don't have compile-time syntax checking.
You can't put variables into SQL code anymore. (OK, you can, but you shouldn't, because you are probably creating SQL injection vulnerabilities that way). You need to pass all the variables using statement->set_param.
You are now writing Native SQL, which means that you might inadvertently write SQL which can't be ported to other database backends.
You can create a separate function module for saving your changes and call it with the STARTING NEW TASK addition, like below.
CALL FUNCTION 'ZFUNCTION' STARTING NEW TASK 'SAVECHANGES'
  EXPORTING
    param = value.
I have an entity with Id column generated using Hilo.
I have a transaction in which I create a new entity and call SaveOrUpdate() in order to get the Hilo-generated Id of the entity (I need to write that Id to another DB).
Later on, within the same transaction, I update the new entity (just a simple update of a simple property), and in the end I call SaveOrUpdate() again.
I see that the SQL commands generated are first an INSERT and then an UPDATE, but what I want is just a single INSERT with the final details of the entity. Is that possible? Am I doing something wrong?
EDIT: added code sample
Here's a very simplified example in pseudo-code:
Person newPerson = new Person(); // Person is a mapped entity
newPerson.Name = "foo";
_session.SaveOrUpdate(newPerson); // generates INSERT statement
newPerson.BirthDate = DateTime.Now;
_session.SaveOrUpdate(newPerson); // generates UPDATE statement
// assume session transaction was opened before and disposed correctly for sake of simplicity
_session.Transaction.Commit();
The point is that with ORM tools like NHibernate, we work in a different way than we did with ADO.NET.
While ADO.NET commands and their Execute() method family cause immediate SQL statement execution on the DB server, with NHibernate it is dramatically different.
We are working with an ISession. The session can be thought of as an in-memory C# collection. All the Save(), SaveOrUpdate(), Update(), Delete() ... calls are executed against that object representation; no SQL command is executed and no low-level ADO.NET calls happen at that moment.
That abstraction allows NHibernate to optimize the final SQL statement batch based on all the information gathered in the ISession. And that's why you will not see the INSERT and UPDATE statements issued while working with the session, unless you explicitly call the magical Flush() or change the FlushMode.
In that case (calling Flush()), we are telling NHibernate: we know best, now is the time to execute the commands. In other scenarios it is usually good enough to leave it to NHibernate...
See here:
- 9.6. Flush
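Applied to the question above, a hedged sketch of how this can collapse into a single INSERT, assuming Person is mapped with a hilo generator, the Id property name is illustrative, and nothing in between forces an early flush (no explicit Flush() call and no query that triggers auto-flush):

var newPerson = new Person { Name = "foo" };
_session.SaveOrUpdate(newPerson);     // hilo assigns the id in memory (it may read a hi value from its own table); no INSERT for Person yet
var generatedId = newPerson.Id;       // the id can already be copied to the other DB

newPerson.BirthDate = DateTime.Now;   // still only a change tracked by the session

_session.Transaction.Commit();        // one INSERT containing Name and BirthDate is executed here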
I am working with MS SQL Server and the Struts framework.
When calling a procedure, I set autocommit to false in the program.
When the procedure runs, I have to commit one separate transaction, and it must affect the table externally.
But nothing is saved until the conn.commit() statement executes in the program.
Is there any other way to commit the transaction in the procedure itself, so that the table is affected at the end of that single transaction inside the procedure?
Please tell me if you know.
T.Saravanan
You should start and commit/rollback a transaction at the same level, otherwise you are introducing a lot of unpredictable paths - and frankly some bad design. So: if you need to commit at the server, use BEGIN TRAN / COMMIT TRAN in the TSQL to handle the transaction locally.
Note, though, that TSQL exception/error handling is not as rich as handling errors at a caller such as java/C#. If the problem is that you want to disassociate this work from another unrelated transaction, then it depends on how your calling code works:
if it is using connection-level transactions, then you will need to use a separate connection; just run the transaction on a different connection using the java/C#/whatever transaction API (i.e. the same as your existing code, by the sound of it, but on a different connection)
if it is using things like scope-based transactions (TransactionScope in C#; not sure about java etc - but this is an LTM or DTC transaction) then you can explicitly create a new scope that is bound to either a new (isolated) transaction, or the nil-transaction (i.e. the inner scope is not enlisted)
As for affecting the tables... SQL Server generally does optimistic changes, i.e. yes the changes are applied immediately (so that commit is cheap, and rollback is more expensive) - however, the isolation level will generally prevent other SPIDs from seeing the data. A competing SPID with a low isolation level (or using the NOLOCK hint) will see the uncommitted data, but this may be a phantom/non-repeatable read if the data eventually gets rolled back.
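To illustrate the scope-based option, a minimal C# sketch using System.Transactions (illustrative only; the work inside the scope stands in for your actual commands):

using System.Transactions;

// commits independently of any ambient/outer transaction
using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew))
{
    // ... run the statements that must persist even if the outer transaction rolls back ...
    scope.Complete();   // marks this inner, independent transaction as complete
}

// Alternatively, TransactionScopeOption.Suppress runs the block with no ambient
// transaction at all, so the work commits as it executes.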
Our application (which uses NHibernate and ASP.NET MVC) throws a lot of NHibernate transaction errors when put under stress tests. The major types are:
Transaction not connected, or was disconnected
Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect)
Transaction (Process ID 177) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Can someone help me in identifying the reason for Exception 1?
I know I have to handle the other exceptions in my code. Can someone point me to resources which can help me handle these errors in an efficient manner?
Q. How do we manage Sessions and Transactions?
A. We are using Autofac. For every server request, we create a new request container which has the session in the container lifetime scope. On activating the session we begin the transaction. When the request completes, we commit the transaction. In some cases, the transaction can be huge. To simplify, every server request is contained in a transaction.
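To make that concrete, here is roughly what such a setup might look like with Autofac (illustrative only, not our actual code; sessionFactory is an ISessionFactory built elsewhere):

var builder = new ContainerBuilder();

builder.Register(c => sessionFactory.OpenSession())
       .As<ISession>()
       .InstancePerLifetimeScope()                        // one session per request-scoped container
       .OnActivated(e => e.Instance.BeginTransaction());  // begin the transaction when the session is resolved

// at the end of the request we commit the session's transaction (or roll back on error),
// and disposing the lifetime scope disposes the session.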
Have a look at this thread:
http://n2cms.codeplex.com/Thread/View.aspx?ThreadId=85016
Basically what it says as a possible cause of this exception:
2010-02-17 21:01:41,204 1 WARN NHibernate.Util.ADOExceptionReporter - System.Data.SqlClient.SqlException: The transaction log for database 'databasename' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
As the transaction log's size is proportional to the amount of work done during the transaction, perhaps you ought to look into putting your transaction boundaries around the command handlers' handling of commands on the write side. You would then, with a session #X, load the state you wish to mutate, mutate it and commit it, all as one unit of work in #X.
With regard to the read side of things, you might then have another ISession #Y that reads data; this ISession could be used to batch reads, e.g. within RepeatableRead or something similar with the Futures feature, and could simply be reading from a cache (albeit that being a crutch indeed). Doing it this way might help you recover from "errors" that aren't really errors: livelocks, deadlocks and victim transactions.
The problem with using a transaction per request is that your ISession acquires a lot of bookkeeping data while you are working, all of which is part of the transaction. Hence the database marks the data (rows, columns, tables, etc.) as partaking in the transaction, causing the wait graph to span 'entities' (in the database sense, not the DDD sense) which are not actually part of the transactional boundary of the command your application handled.
For the record (for other people googling this), Fabio had a post about dealing with exceptions from the data layer. Quoting some of his code:
public class MsSqlExceptionConverterExample : ISQLExceptionConverter
{
    public Exception Convert(AdoExceptionContextInfo exInfo)
    {
        var sqle = ADOExceptionHelper.ExtractDbException(exInfo.SqlException) as SqlException;
        if (sqle != null)
        {
            switch (sqle.Number)
            {
                case 547:
                    return new ConstraintViolationException(exInfo.Message,
                        sqle.InnerException, exInfo.Sql, null);
                case 208:
                    return new SQLGrammarException(exInfo.Message,
                        sqle.InnerException, exInfo.Sql);
                case 3960:
                    return new StaleObjectStateException(exInfo.EntityName, exInfo.EntityId);
            }
        }
        return SQLStateConverter.HandledNonSpecificException(exInfo.SqlException,
            exInfo.Message, exInfo.Sql);
    }
}
547 is the exception number for constraint conflict.
208 is the exception number for an invalid object name in the SQL.
3960 is the exception number for Snapshot isolation transaction aborted due to update conflict.
So if you are running into concurrency issues like the ones you describe, remember that they will invalidate your ISession and that you'd have to handle them like the above.
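To actually make NHibernate use a converter like this, it is registered through the sql_exception_converter configuration property; something along these lines should work (hedged, please check the property name against your NHibernate version):

var cfg = new NHibernate.Cfg.Configuration();
cfg.SetProperty(NHibernate.Cfg.Environment.SqlExceptionConverter,   // "sql_exception_converter"
                typeof(MsSqlExceptionConverterExample).AssemblyQualifiedName);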
Part of what you might be looking for is CQRS, where you have separate read and write-sides. This might help: http://abdullin.com/cqrs/, http://cqrsinfo.com.
So to summarize: your problems might be related to the way you handle your transactions. Also, try running select log_reuse_wait_desc from sys.databases where name='MyDBName' and see what it gives you.
This thread has an explanation:
http://groups.google.com/group/nhusers/browse_thread/thread/7f5fb68a00829d13
In short, the database probably rolls back the transaction by itself due to some error, so that when you try to rollback the transaction later it is already rolled back and in a zombie state. This tends to hide the actual reason for the rollback since all you see is a TransactionException instead of the exception that actually triggered the rollback in the first place.
I don't think there is much you can do about it beyond logging it and trying to figure out what is causing the underlying error.
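If you want to at least capture the root cause, a minimal sketch of that logging approach (log stands for whatever logger you already use, e.g. a log4net ILog; sessionFactory is your configured ISessionFactory):

using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    try
    {
        // ... work with the session ...
        tx.Commit();
    }
    catch (Exception ex)
    {
        log.Error("Commit failed", ex);   // log the original error first, before it gets masked
        try
        {
            tx.Rollback();
        }
        catch (NHibernate.TransactionException tex)
        {
            // the database may already have rolled the transaction back (zombie state)
            log.Warn("Rollback failed; the transaction was probably already rolled back", tex);
        }
        throw;
    }
}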
I know this post was a while back and I assume you fixed it, but it seems like you have thread-sharing issues with the NHibernate ISession, which is not thread-safe. Basically one thread is starting a transaction and another is attempting to close it, causing all sorts of chaos.
In NHibernate, I want to retrieve an instance and put an exclusive lock on the database record that represents the retrieved entity.
Right now, I have this code:
With.Transaction (session, IsolationLevel.Serializable, delegate
{
ICriteria crit = session.CreateCriteria (typeof (TarificationProfile));
crit.SetLockMode (LockMode.Upgrade);
crit.Add (Expression.Eq ("Id", tarificationProfileId));
TarificationProfile profile = crit.UniqueResult<TarificationProfile> ();
nextNumber = profile.AttestCounter;
profile.AttestCounter++;
session.SaveOrUpdate (profile);
});
As you can see, I set the LockMode for this Criteria to 'Upgrade'.
This issues an SQL statement for SQL Server which uses the updlock and rowlock locking hints:
SELECT ... FROM MyTable with (updlock, rowlock)
However, I want to be able to use a real exclusive lock. That is, prevent others from reading this very same record until I have released the lock.
In other words, I want to be able to use an xlock locking hint, instead of an updlock.
I don't know how (or even if) I can achieve that .... Maybe somebody can give me some hints about this :)
If it is really necessary, I can use the SQLQuery functionality of NHibernate, and write my own SQL Query, but, I'd like to avoid that as much as possible.
A HQL DML query will accomplish your update without needing a lock.
This is available in NHibernate 2.1, but is not yet in the reference documentation. The Java hibernate documentation is very close to the NHibernate implementation.
Assuming you are using ReadCommitted Isolation, you can then safely read your value back inside the transaction.
With.Transaction (session, IsolationLevel.Serializable, delegate
{
    session.CreateQuery( "update TarificationProfile t set t.AttestCounter = 1 + t.AttestCounter where t.id=:id" )
        .SetInt32("id", tarificationProfileId)
        .ExecuteUpdate();

    nextNumber = session.CreateQuery( "select AttestCounter from TarificationProfile where Id=:id" )
        .SetInt32("id", tarificationProfileId)
        .UniqueResult<int>();
});
Depending on your table and column names, the generated SQL will be:
update TarificationProfile
set AttestCounter = 1 + AttestCounter
where Id = 1 /* #p0 */
select tarificati0_.AttestCounter as col_0_0_
from TarificationProfile tarificati0_
where tarificati0_.Id = 1 /* #p0 */
I doubt it can be done from NHibernate. Personally, I would use a stored procedure to do what you're trying to accomplish.
Update: Given the continued downvotes I'll expand on this. Frederick is asking how to use locking hints, which are syntax- and implementation-specific details of his underlying database engine, from his ORM layer. This is the wrong level to attempt to perform such an operation - even if it was possible (it isn't), the likelihood it would ever work consistently across all NHibernate-supported databases is vanishingly low.
It's great Frederick's eventual solution didn't require pre-emptive exclusive locks (which kill performance and are generally a bad idea unless you know what you're doing), but my answer is valid. Anyone who stumbles across this question and wants to do exclusive-lock-on-read from NHibernate - firstly: don't, secondly: if you have to, use a stored procedure or a SQLQuery.
If all your reads are done with an IsolationLevel of Serializable and all your writes are also done with an IsolationLevel of Serializable, I don't see why you need to do any locking of database rows yourself.
So the serializable isolation keeps the data safe; now we still have the problem of possible deadlocks....
If the deadlocks are not common, just putting the [start transaction, read, update, save] sequence in a retry loop when you get a deadlock may be good enough (a rough sketch follows at the end of this answer).
Otherwise, a simple "select for update" statement issued directly (i.e. not through NHibernate) could be used to stop another transaction from reading the row before it is changed.
However, I keep thinking that if the update rate is high enough to produce lots of deadlocks, an ORM may not be the correct tool for that update, or the database schema may need redesigning to avoid the value that has to be read/written (e.g. calculating it when reading the data).
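A rough sketch of the retry-loop idea mentioned above (illustrative only; sessionFactory is assumed, TarificationProfile, tarificationProfileId and nextNumber come from the question's code, and 1205 is SQL Server's deadlock-victim error number):

const int DeadlockErrorNumber = 1205;
for (var attempt = 1; ; attempt++)
{
    try
    {
        using (var session = sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction(IsolationLevel.Serializable))
        {
            var profile = session.Get<TarificationProfile>(tarificationProfileId);
            nextNumber = profile.AttestCounter;
            profile.AttestCounter++;
            tx.Commit();
        }
        break;   // success, stop retrying
    }
    catch (Exception ex)
    {
        // walk the exception chain looking for a SQL Server deadlock
        var sqlEx = ex as SqlException;
        for (var inner = ex.InnerException; sqlEx == null && inner != null; inner = inner.InnerException)
            sqlEx = inner as SqlException;

        if (sqlEx == null || sqlEx.Number != DeadlockErrorNumber || attempt >= 3)
            throw;   // not a deadlock, or retried too often: give up

        Thread.Sleep(100 * attempt);   // back off a little, then retry the whole transaction
    }
}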
You could use isolation level "repeatable read" if you want to make sure that values you read from the database don't change during the transaction. But you have to do this in all critical transactions. Or you lock it in the critical reading transaction with an upgrade lock.