I'm having a problem in my application when saving an entity. Occasionally I get a NonUniqueObjectException on that entity from NHibernate. Now, I know what causes these kinds of exceptions and how to deal with them, but since the application codebase is rather large (200K LOC), it's very hard to pinpoint exactly which object caused the error.
What I'd like to do is query or somehow extract all the objects that NHibernate keeps in the session-scoped cache, so I'd have a better idea of what exactly caused that exception.
Is there a way to do something like that?
As far as I know there is nothing in ISession to "list" its contents. You could use interceptors or event listeners to track and log your operations though.
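For instance, a minimal logging interceptor could look like this (a sketch; EmptyInterceptor is NHibernate's no-op base class, and Console stands in for whatever logger you use):

using System;
using NHibernate;
using NHibernate.Type;

public class LoggingInterceptor : EmptyInterceptor
{
    // Called when an entity is saved for the first time.
    public override bool OnSave(object entity, object id, object[] state,
                                string[] propertyNames, IType[] types)
    {
        Console.WriteLine("OnSave: {0}#{1}", entity.GetType().Name, id);
        return base.OnSave(entity, id, state, propertyNames, types);
    }

    // Called for each dirty entity when the session flushes.
    public override bool OnFlushDirty(object entity, object id,
                                      object[] currentState, object[] previousState,
                                      string[] propertyNames, IType[] types)
    {
        Console.WriteLine("OnFlushDirty: {0}#{1}", entity.GetType().Name, id);
        return base.OnFlushDirty(entity, id, currentState, previousState,
                                 propertyNames, types);
    }
}

Open your sessions with sessionFactory.OpenSession(new LoggingInterceptor()) and the log will show the type and id of every entity being saved or flushed, which should narrow down the one causing the exception.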
So the question is mostly in the title, but after some research I can't really find any deeper information about this. Mostly I want to know: if a deadlock situation occurs, does Breeze automatically reattempt the commit, or does it just return an error to the front end so it can try saving again? Any documentation or articles going deeper into this would be appreciated!
To a certain extent this depends on the server backend you are using, but in general Breeze will NOT attempt to retry after a deadlock failure; it will instead return an exception indicating that a deadlock occurred. You can then retry the save yourself by handling the client-side exception and re-executing the save call.
Note that because most Breeze servers automatically toposort the entities in a save request, deadlocks are much less likely than if no such sort were performed. The idea is that by ensuring that multiple instances of a program use the same ordering when updating the same set of tables, we reduce the possibility of a deadlock.
This toposorting is part of any Entity Framework based backend as well as the Breeze Node/Sequelize (MySQL, Postgres) provider, and is likely to be added to the Breeze NHibernate and MongoDB providers in the near future.
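If you do roll your own retry, a generic wrapper could look like this (a sketch, not Breeze API; it assumes the deadlock ultimately surfaces as a SqlException with error number 1205 somewhere in the exception chain, which you should verify for your backend):

using System;
using System.Data.SqlClient;
using System.Threading.Tasks;

public static class DeadlockRetry
{
    // Replays `save` when a SQL Server deadlock (error 1205) is detected;
    // any other failure is rethrown immediately.
    public static async Task<T> RunAsync<T>(Func<Task<T>> save, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return await save();
            }
            catch (Exception ex) when (IsDeadlock(ex) && attempt < maxAttempts)
            {
                // Back off briefly before replaying the save.
                await Task.Delay(TimeSpan.FromMilliseconds(100 * attempt));
            }
        }
    }

    private static bool IsDeadlock(Exception ex)
    {
        for (; ex != null; ex = ex.InnerException)
        {
            if (ex is SqlException sql && sql.Number == 1205)
                return true;
        }
        return false;
    }
}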
I am using NHibernate to save objects that require one of their properties to be unique. The design of the system is such that an attempt may occasionally be made to save the same object twice. This, of course, violates the uniqueness constraint and NHibernate throws an exception. The exception happens at the time I attempt to save the object, rather than at Transaction.Commit() time. When this happens I want to simply catch the exception, discard the object, and continue saving other similar objects. However, I have not found a way to make this work: once that exception has occurred, I cannot carry on saving other objects and commit the transaction.
The only workaround I have found is to always check first whether the object exists, by running a query on the unique property. That works, but it seems unnecessarily expensive and I would like to avoid that extra hit to the DB. Is there a way to do this?
Thanks
The issue you've described must be solved at a higher level than the NHibernate session. Take a look at 9.8. Exception handling; an extract:
If the ISession throws an exception you should immediately rollback the transaction, call ISession.Close() and discard the ISession instance. Certain methods of ISession will not leave the session in a consistent state.
So what I would suggest is to wrap the call to your data layer (DL) with some validation, and place the if/try logic outside of the session.
Even if we use versioning (see 5.1.7. version), a very powerful way to handle concurrency, we are still faced with StaleObjectStateExceptions and have to handle them outside of the DL.
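One way to realize that wrapping is to give each object its own short session and transaction, so a uniqueness violation only poisons that one session. A sketch, assuming an illustrative Order entity and an injected ISessionFactory:

using System.Collections.Generic;
using NHibernate;

public class OrderWriter
{
    private readonly ISessionFactory sessionFactory;

    public OrderWriter(ISessionFactory sessionFactory)
    {
        this.sessionFactory = sessionFactory;
    }

    public void SaveIgnoringDuplicates(IEnumerable<Order> orders)
    {
        foreach (var order in orders)
        {
            using (var session = sessionFactory.OpenSession())
            using (var tx = session.BeginTransaction())
            {
                try
                {
                    session.Save(order);
                    tx.Commit();
                }
                catch (HibernateException)
                {
                    // Uniqueness violation (or any other failure): roll back
                    // and let `using` dispose the now-undefined session, then
                    // continue with the next object in a fresh session.
                    tx.Rollback();
                }
            }
        }
    }
}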
We are trying to implement retry logic to recover from transient errors in an Azure environment.
We use long-running sessions to track changes and commit them all at the end of an application transaction (which may span several web requests). Along the way we need to fetch additional data from the database. Our main problem is that we can't easily recover from a DB error, because we can't "replay" all the user's actions.
So far we have used a straightforward recovery algorithm:
Try to perform operation in long-running session
In case of error, close the session, open a new one and merge entities into it
Retry the operation
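In code, that recovery looks roughly like this (a sketch; sessionFactory and the collection of tracked root entities are assumed to be at hand):

using System.Collections.Generic;
using NHibernate;

public ISession RecoverSession(ISession broken, IEnumerable<object> trackedRoots)
{
    // Step 2: the failed session is unusable; discard it entirely.
    broken.Dispose();

    var fresh = sessionFactory.OpenSession();
    foreach (var root in trackedRoots)
    {
        // Merge copies the detached state onto instances managed by the
        // new session; this is the expensive part for big hierarchies.
        fresh.Merge(root);
    }

    // Step 3: the operation can now be retried against `fresh`.
    return fresh;
}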
This approach is very expensive in terms of time (merging really takes long for big entity hierarchies), so we'd like to optimize things a little.
We'd like to run query operations in a separate session (to keep the long-running one untouched and safe) and, on success, merge the results back into the long-running session. Retry is relatively simple here: we just open a new session and run the query once more. However, with this approach we have an issue with initializing lazy properties/collections:
If we do this in a separate session, we need to merge the results back (a lot of entities), but the merge could fail and break the long-running session
We tried different ways of "moving" the original entity to a different session, loading details and returning it back (Evict, Replicate, etc.), but without success
There is a known recommendation that the session should be discarded in case of an exception. However, the example shows a write operation. Is this still true for reads? I mean, if I guarantee that no data is written back to the database, can I reuse the same session to run the query again?
Do you have any other suggestions about retry logic with long-running sessions?
IMO there's no way to solve your issue: it's going to take a lot of time to commit everything, or you'll have to do a lot of work to break it up into smaller sessions and handle every error that can occur while merging.
To answer your question about using the session after an exception: you cannot trust ANYTHING anymore inside this session, not even loaded entities.
Read this paragraph from Ayende's article about building a simple todo app with a recovery plan in case of an exception in the session:
Then there is the problem of error handling. If you get an exception (such as StaleObjectStateException, because of concurrency conflict), your session and its loaded entities are toast, because with NHibernate, an exception thrown from a session moves that session into an undefined state. You can no longer use that session or any loaded entities. If you have only a single global session, it means that you probably need to restart the application, which is probably not a good idea.
When doing a criteria query with NHibernate, I want to get fresh results and not old ones from a cache.
The process is basically:
Query persistent objects in the NHibernate application.
Change database entries externally (another program, manual edit in SSMS / MSSQL, etc.).
Query the persistent objects again (with the same query code); previously loaded objects shall be refreshed from the database.
Here's the code (slightly changed object names):
public IOrder GetOrderByOrderId(int orderId)
{
    ...
    IList result;
    var query =
        session.CreateCriteria(typeof(Order))
            .SetFetchMode("Products", FetchMode.Eager)
            .SetFetchMode("Customer", FetchMode.Eager)
            .SetFetchMode("OrderItems", FetchMode.Eager)
            .Add(Restrictions.Eq("OrderId", orderId));
    query.SetCacheMode(CacheMode.Ignore);
    query.SetCacheable(false);
    result = query.List();
    ...
}
The SetCacheMode and SetCacheable calls were added by me to disable caching. Also, the NHibernate factory is set up with the config parameter UseQueryCache=false:
Cfg.SetProperty(NHibernate.Cfg.Environment.UseQueryCache, "false");
No matter what I do, including Put/Refresh cache modes for the query or session, NHibernate keeps returning outdated objects the second time the query is called, without the externally committed changes. (FYI: the outdated value in this case is the value of a Version column, used to test whether a stale object state can be detected before saving.) But I need fresh query results for multiple reasons!
NHibernate even generates an SQL query, but it is never used for the values returned.
Keeping the sessions open is necessary to do dynamic updates on dirty columns only (stateless sessions are therefore not a solution either!). I don't want to sprinkle Clear() or Evict() calls all over the code, especially since the query sits at a lower level and doesn't know which objects were previously loaded. Pessimistic locking would kill performance (multi-user environment!).
Is there any way to force NHibernate, by configuration, to send queries directly to the DB and return fresh results, instead of using these unwanted caching functions?
First of all: this doesn't have anything to do with second-level caching (which is what SetCacheMode and SetCacheable control). Even if it did, those control caching of the query, not caching of the returned entities.
When an object has already been loaded into the current session (also called "first-level cache" by some people, although it's not a cache but an Identity Map), querying it again from the DB using any method will never override its value.
This is by design and there are good reasons for it behaving this way.
If you need to update potentially changed values in multiple records with a query, you will have to Evict them first.
Alternatively, you might want to read about Stateless Sessions.
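Using the question's own criteria query, the Evict route would look something like this sketch (inside the same method, with the same session; session.Refresh(entity) is an alternative when you just want a single instance's state re-read):

// Evict the previously loaded instances from the identity map...
foreach (Order order in session.CreateCriteria(typeof(Order))
    .Add(Restrictions.Eq("OrderId", orderId))
    .List<Order>())
{
    session.Evict(order);
}

// ...so that re-running the query hydrates fresh values from the database.
var fresh = session.CreateCriteria(typeof(Order))
    .Add(Restrictions.Eq("OrderId", orderId))
    .List<Order>();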
Is this code running in a transaction? Or is the external process running in a transaction? If either of the two is still inside an open transaction, you will not see any updates.
If that is not the case, you might be able to find the problem in the log messages that NHibernate is creating. These are very informative and will always tell you exactly what it is doing.
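For example, echoing every generated SQL statement can be switched on in the same way as the UseQueryCache property shown above:

Cfg.SetProperty(NHibernate.Cfg.Environment.ShowSql, "true");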
Keeping the sessions open is necessary to do dynamic updates on dirty columns only
This is either the problem now, or it will become a problem in the future. NHibernate does all it can to make your life easier, but you are doing as much as possible to prevent it from doing its job properly.
If you want NHibernate to update only the dirty columns, have a look at the dynamic-update attribute in your class mapping file.
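In the hbm.xml mapping that is a single attribute on the class element. A sketch (table and property names are illustrative; the version element matches the Version column mentioned in the question):

<class name="Order" table="Orders" dynamic-update="true">
  <id name="OrderId">
    <generator class="native" />
  </id>
  <version name="Version" />
  <property name="Status" />
</class>

With dynamic-update="true", NHibernate generates UPDATE statements containing only the columns that actually changed.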
We use one (read-only) session, which we disconnect as soon as we have retrieved the data from the database. The retrieved data often has lazy-loaded properties that are not initialized yet.
When we try to access the properties, the following exception gets thrown:
NHibernate.LazyInitializationException
Initializing[NHibernateTest.AppUser#16]-failed to lazily initialize a collection of role: NHibernateTest.AppUser.Permissions, session is disconnected
Is there a way (an interceptor, perhaps) to automatically detect that the application is trying to access an uninitialized property, so that the connection can be opened just long enough to fetch it and closed again after the unit of work?
Fetching everything at once would defeat the purpose of lazy loading.
There is no efficient way to do that. The idea is that you keep the session open until you're done with it; there should be one session per unit of work (a session actually is a kind of unit of work).
Fetching everything you need in one query is more efficient than fetching it in multiple queries, so I don't agree with your last statement. Lazy loading is useful for lazy programmers (like me) but is never more efficient than eager loading. Lazy loading can save you some programming time, but you still have to watch out for too many queries being executed (the SELECT N+1 problem).
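For example, using the entity from the exception above, join-fetching the collection in the original query leaves nothing lazy to trigger after the session is disconnected (a sketch; the id 16 is taken from the error message, and FetchMode.Join is the current spelling of the FetchMode.Eager used earlier in this thread):

var user = session.CreateCriteria(typeof(AppUser))
    .Add(Restrictions.IdEq(16))
    .SetFetchMode("Permissions", FetchMode.Join)
    .UniqueResult<AppUser>();

// user.Permissions is already initialized, so accessing it after
// Disconnect() no longer throws LazyInitializationException.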