So I'm using the NHibernate.Linq API to run a query. The query itself only takes about 50ms, but the total time NHibernate spends returning the result is around 500ms.
The query is something like the following: session.Query<T>().Where(i => i.ForeignKey == someValue).Take(5).ToList().AsQueryable(). According to the debug log, this query executes in less than ~50ms, from creating the HQLQueryPlan, through setting up parameters and opening the connection, to hydrating the C# objects and finishing.
However, before it even got that far, the debug log shows that NHibernate spent ~400ms cascading saves and updates, and then reported a bundle of "Collection found:" logs which appear to be collection properties of the objects I'm asking for. I should add that the ISession being used is read-only and has definitely not modified or created any entity. After spending these 400ms, it then logs that it flushed 0 changes.
Why is NHibernate cascading a load of save/update commands during execution of a .Query<T>()? Is it trying to make sure that the correct data is retrieved? Is this the result of some configuration that I have set?
Such behavior is possible only for queries executed inside a transaction, and when either:
You have set FlushMode to FlushMode.Always
Or NHibernate detected that your query involves tables that were modified in the current session. This behaviour can be disabled by setting FlushMode to a value lower than FlushMode.Auto (the default), i.e. FlushMode.Commit or FlushMode.Manual.
You can change the FlushMode for your session, or specify the default flush mode in the default_flush_mode configuration setting. See the documentation for details.
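A minimal sketch of both options (sessionFactory, cfg, Order and someValue are assumed names, not from the question; session.Query needs the NHibernate.Linq namespace):

using System.Linq;
using NHibernate;
using NHibernate.Linq;

// Option 1: per session. FlushMode.Manual (named FlushMode.Never in older
// versions) suppresses the automatic flush that normally runs before queries.
using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    session.FlushMode = FlushMode.Manual; // or FlushMode.Commit

    var orders = session.Query<Order>()
        .Where(o => o.ForeignKey == someValue)
        .Take(5)
        .ToList();

    tx.Commit(); // in Manual mode nothing is flushed unless session.Flush() is called
}

// Option 2: a factory-wide default (NHibernate 5.0+), set at configuration time.
cfg.SetProperty("default_flush_mode", "Commit");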
We are trying to implement retry logic to recover from transient errors in an Azure environment.
We are using long-running sessions to track changes and commit them all at the end of an application transaction (which may span several web requests). Along the way we need to get additional data from the database. Our main problem is that we can't easily recover from a DB error, because we can't "replay" all the user actions.
So far we have used a straightforward recovery algorithm:
Try to perform the operation in the long-running session
In case of an error, close the session, open a new one and merge the entities into it
Retry the operation
This is a very expensive approach in terms of time (merging is really slow for big entity hierarchies), so we'd like to optimize things a little.
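For context, a hypothetical sketch of that algorithm (sessionFactory, trackedEntities and operation are assumed names, not from the question):

ISession session = sessionFactory.OpenSession();
try
{
    operation(session);
}
catch (Exception)
{
    // A session that has thrown cannot be trusted any more; replace it.
    session.Dispose();
    session = sessionFactory.OpenSession();

    // Merge returns freshly attached instances; keep those, not the stale ones.
    for (int i = 0; i < trackedEntities.Count; i++)
        trackedEntities[i] = session.Merge(trackedEntities[i]);

    operation(session); // retry
}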
We'd like to perform query operations in a separate session (to keep the long-running one untouched and safe) and, on success, merge the results back into the long-running session. Retry is relatively simple here - we just need to open a new session and run the query once more. However, with this approach we have an issue with initializing lazy properties/collections:
If we do this in a separate session, we need to merge the results back (a lot of entities), but the merge could fail and break the long-running session
We tried different ways of "moving" the original entity to a different session, loading the details and returning it back (Evict, Replicate, etc.), but without success
There is a well-known statement that a session should be discarded in case of an exception. However, the example shows a write operation. Is this still true for reads? That is, if I guarantee that no data is written back to the database, can I reuse the same session to run the query again?
Do you have any other suggestions about retry logic with long-running sessions?
IMO there's no way to solve your issue. It's going to take a lot of time to commit everything, or you will have to do a lot of work to break it up into smaller sessions and handle every error that can occur while merging.
To answer your question about using the session after an exception: you cannot trust ANYTHING anymore inside this session, not even loaded entities.
Read this paragraph from Ayende's article about building a simple todo app with a recovery plan in case of an exception in the session:
Then there is the problem of error handling. If you get an exception (such as StaleObjectStateException, because of concurrency conflict), your session and its loaded entities are toast, because with NHibernate, an exception thrown from a session moves that session into an undefined state. You can no longer use that session or any loaded entities. If you have only a single global session, it means that you probably need to restart the application, which is probably not a good idea.
When doing a criteria query with NHibernate, I want to get fresh results and not old ones from a cache.
The process is basically:
Load persistent objects into the NHibernate application.
Change database entries externally (another program, a manual edit in SSMS / MSSQL, etc.).
Query the persistent objects again (with the same query code); previously loaded objects should be refreshed from the database.
Here's the code (slightly changed object names):
public IOrder GetOrderByOrderId(int orderId)
{
...
IList result;
var query =
session.CreateCriteria(typeof(Order))
.SetFetchMode("Products", FetchMode.Eager)
.SetFetchMode("Customer", FetchMode.Eager)
.SetFetchMode("OrderItems", FetchMode.Eager)
.Add(Restrictions.Eq("OrderId", orderId));
query.SetCacheMode(CacheMode.Ignore);
query.SetCacheable(false);
result = query.List();
...
}
The SetCacheMode and SetCacheable calls were added by me to disable the cache. Also, the NHibernate factory is set up with the config parameter UseQueryCache=false:
Cfg.SetProperty(NHibernate.Cfg.Environment.UseQueryCache, "false");
No matter what I do, including Put/Refresh cache modes for the query or the session, NHibernate keeps returning outdated objects the second time the query is called, without the externally committed changes. For the record: the outdated value in this case is the value of a Version column (used to test whether a stale object state can be detected before saving). But I need fresh query results for multiple reasons!
NHibernate even generates an SQL query, but its results are never used for the values returned.
Keeping the sessions open is necessary to do dynamic updates on dirty columns only (so stateless sessions are not a solution either!); I don't want to add Clear(), Evict() or the like everywhere in the code, especially since the query sits at a lower level and doesn't know about the objects previously loaded. Pessimistic locking would kill performance (multi-user environment!).
Is there any way to force NHibernate, by configuration, to send queries directly to the DB and get fresh results, not using unwanted caching functions?
First of all: this doesn't have anything to do with second-level caching (which is what SetCacheMode and SetCacheable control). Even if it did, those control caching of the query, not caching of the returned entities.
When an object has already been loaded into the current session (also called the "first-level cache" by some people, although it's not a cache but an identity map), querying it again from the DB using any method will never overwrite its values.
This is by design and there are good reasons for it behaving this way.
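A minimal illustration of that behaviour (Order and orderId are assumed names; Restrictions lives in NHibernate.Criterion):

// The first load puts the instance into the session's identity map.
var first = session.Get<Order>(orderId);

// This query runs real SQL, but for any row whose id is already in the
// identity map the session returns the instance it already holds; the
// freshly read column values are discarded.
var second = session.CreateCriteria(typeof(Order))
    .Add(Restrictions.Eq("OrderId", orderId))
    .UniqueResult<Order>();

System.Diagnostics.Debug.Assert(ReferenceEquals(first, second)); // same object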
If you need to update potentially changed values in multiple records with a query, you will have to Evict them first.
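Continuing the sketch above, evicting the stale instance first lets the next query hydrate a fresh object:

// Remove the instance from the identity map ...
session.Evict(first);

// ... so this query materializes a new object with the current DB values.
var fresh = session.CreateCriteria(typeof(Order))
    .Add(Restrictions.Eq("OrderId", orderId))
    .UniqueResult<Order>();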
Alternatively, you might want to read about Stateless Sessions.
Is this code running in a transaction? Or is that external process running in a transaction? If one of those two is still in a transaction, you will not see any updates.
If that is not the case, you might be able to find the problem in the log messages that NHibernate is creating. These are very informative and will always tell you exactly what it is doing.
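For a quick look at the SQL, you can also enable statement echoing at configuration time (show_sql and format_sql are standard NHibernate settings; cfg stands for your Configuration object, and full logging is usually wired up through log4net):

cfg.SetProperty(NHibernate.Cfg.Environment.ShowSql, "true");
cfg.SetProperty(NHibernate.Cfg.Environment.FormatSql, "true");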
Keeping the sessions open is necessary to do dynamic updates on dirty columns only
This is either the problem, or it will become a problem in the future. NHibernate is doing all it can to make your life better, but you are doing as much as possible to prevent NHibernate from doing its job properly.
If you want NHibernate to update only the dirty columns, you could look at the dynamic-update attribute in your class mapping file.
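In hbm.xml it is a single attribute on the class element. A minimal sketch, with placeholder class and property names:

<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <!-- dynamic-update="true" makes UPDATE statements list only the changed columns -->
  <class name="Order" table="Orders" dynamic-update="true">
    <id name="OrderId">
      <generator class="native" />
    </id>
    <property name="Status" />
  </class>
</hibernate-mapping>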
I am using NHibernate to create a collection of immutable domain objects from a legacy Oracle DB. Some simple lookups using the Criteria API take over 60 seconds. Subsequent runs of the same lookup are very fast, usually less than 300ms (100ms in the DB and the rest in NHibernate; I don't have the second-level cache or query cache enabled, and all queries do go to the DB, which I checked using NHibernate Profiler). However, if I leave the app idle for a couple of minutes and run the lookup again, it takes 50-60 seconds.
I have used NHibernate Profiler, and in every case it clearly shows that at most 100ms is spent in the database. I figure the rest of the time must be taken by NHibernate, but I can't understand why.
Some background info :
I am using dynamic-component in the mapping to map 20 columns into key/value pairs.
Using NHibernate 2.1.
Once retrieved, the data is never modified; in the mapping I use the mutable="false" flag.
It's a legacy DB, so I am using a composite key in the mapping.
I am only retrieving around 50 objects in each lookup.
When I open a session, I set FlushMode=Never.
I also tried a stateless session (the initial lookup is still slow).
I do not define or use any custom user types in the mapping.
I am clearly doing something wrong or have missed something. Any ideas?
I suggest downloading a C# performance profiler such as dotTrace. You will be able to quickly get a more accurate understanding of where your performance problem is. I'm pretty sure it is not an NHibernate mapping issue.
How is the lifetime of your SessionFactory being managed? Is it possible that your SessionFactory is being disposed of after some period of inactivity?
It is most likely not an NHibernate issue.
Use the code below to measure the amount of time it takes to get your data back (DB + network latency + NHibernate execution).
Once you are positive that there is no app-related latency involved, check the database by looking at query plan caching and query result caching. The first time the query runs (a cache miss), your DB performs time-consuming, intensive operations to generate the result set.
If 1 and 2 don't yield any useful information, check your network. Maybe some network pressure is causing heavy latency.
As mentioned by JeffreyABecker, study how your session factories get created and disposed. Find usages of ISessionFactory.Dispose() or configuration.BuildSessionFactory(). Building an ISessionFactory is an expensive operation, and typically you should create it on application start and dispose of it on application stop/shutdown. A 60+ second delay is still a plausible figure for ISessionFactory instantiation.
using System;
using System.Diagnostics;

Stopwatch stopwatch = new Stopwatch();

// Begin timing
stopwatch.Start();

// NHibernate-specific work ONLY in here.
// Depending on your setup, call session.Flush() here if possible.

// End timing
stopwatch.Stop();

// Write the result - console/log4net/Diagnostics.Debug/etc.
Console.WriteLine("Time elapsed: {0}", stopwatch.Elapsed);
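On the session-factory point, a sketch of the usual pattern (NHibernateHelper is an assumed name; Configuration.Configure and BuildSessionFactory are standard NHibernate APIs):

using System;
using NHibernate;
using NHibernate.Cfg;

// Build the ISessionFactory once per application and reuse it everywhere;
// rebuilding it on demand (e.g. after idle recycling) easily costs tens of seconds.
public static class NHibernateHelper
{
    private static readonly Lazy<ISessionFactory> Factory =
        new Lazy<ISessionFactory>(() =>
            new Configuration().Configure().BuildSessionFactory());

    public static ISessionFactory SessionFactory => Factory.Value;
}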
I have two applications running on a machine, where NHibernate is used as an ORM. One app is managing objects (CRUD operations), while the other is processing the objects (get, process, set status and save).
First I let the processing app process an object and set the status to processed. Then I change a text property manually in the database and reset the status (to make it process it again). The manual DB edit is to simulate the managing app. Then I start to see problems:
The read object still has the old text property, even though I've changed it in the DB. I guess NHibernate caching is the problem here.
When I set the object's status to processed, it uses all properties in the where clause when updating, which means it doesn't get updated in the database. This is because it has the wrong text in a property. I would guess this also has to do with caching.
The consequence of the status not being updated is that the same object (with wrong text) is processed over and over and over...
Anyone out there who can help me with how I should set up NHibernate to make this problem disappear?
Better to call the Refresh method on the object you want, because Flush can have unwanted side effects.
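A minimal sketch, where obj stands for the stale entity already loaded in the session:

// Re-read the entity's state from the database, overwriting the stale
// values the session is holding.
session.Refresh(obj);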
This is a common question, but the explanations found so far and the behaviour we observe are some way apart.
We want the following NHibernate strategy in our MVC website:
A SessionScope for the request (to track changes)
An ActiveRecord.TransactionScope to wrap our inserts only (to enable rollback/commit of the batch)
Selects to be outside a Transaction (to reduce the extent of locks)
Delayed Flush of inserts (so that our inserts/updates occur as a UoW at the end of the session)
Now currently we:
Don't get the implied transaction from the SessionScope (with FlushAction Auto or Never)
If we use ActiveRecord.TransactionScope there is no delayed flush and any contained selects are also caught up in a long-running transaction.
I'm wondering if it's because we have an old version of NHibernate (it was from trunk, very near 2.0).
We just can't get the expected nHibernate behaviour, and performance sucks (using NHProf and SqlProfiler to monitor db locks).
Here's what we have tried since:
Written our own TransactionScope (inherits from ITransactionScope) that:
Opens an ActiveRecord.TransactionScope on Commit, not in the ctor (delays the transaction until needed)
Opens a 'SessionScope' in the ctor if none is available (as a guard)
Converted our ids to Guid from identity
This stopped the auto flush of insert/update outside of the Transaction (!)
Now we have the following application behaviour:
Request from MVC
SELECTs needed by services are fired, all outside a transaction
Repository.Add calls do not hit the db until scope.Commit is called in our Controllers
All INSERTs / UPDATEs occur wrapped inside a transaction as an atomic unit, with no SELECTs contained.
... But for some reason NHProf now != SqlProfiler (selects seem to happen in the DB before NHProf reports them).
NOTE
Before I get flamed: I realise the issues here, and know that the SELECTs aren't in the Transaction. That's the design. Some of our operations will contain the SELECTs (we now have a couple of our own TransactionScope implementations) in serialised transactions. The vast majority of our code does not need up-to-the-minute live data, and we have serialised workloads with individual operators.
ALSO
If anyone knows how to get an identity column (non-PK) refreshed post-insert without having to manually refresh the entity, in particular using ActiveRecord markup (I think it's possible in NHibernate mapping files using a 'generated' attribute), please let me know!!
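For reference, the plain NHibernate mapping-file form looks like this. A hedged sketch, where LegacyNumber is a hypothetical non-PK identity column; generated="insert" tells NHibernate to read the column back right after the INSERT, and such a property must also be marked non-insertable (and here non-updatable, since the database owns the value):

<property name="LegacyNumber"
          generated="insert"
          insert="false"
          update="false" />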