Is it possible to create a read-only connection in NHibernate?
Read-only: where NHibernate will not flush any changes to the underlying database, either implicitly or explicitly.
When an NHibernate session is closed, it automatically flushes changes made to the persistent objects.
Setting the flush mode to Never is one way, but it is reversible (i.e. some code can reset the flush mode).
I think you've already found the solution: setting the flush mode to Never. Yes, it is changeable, but even if it weren't, code could simply create another session with a different flush mode.
I think the appropriate solution is to signal read-only intent with session.FlushMode = FlushMode.Never and to enforce it by using a database connection that only has SELECT permissions (or whatever is appropriate for your situation). Maintaining separate ISessionFactory instances might help, by allowing something like ReadOnlySessionFactory.Create().
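A minimal sketch of that idea in C# with NHibernate; the ReadOnlySessionFactory class and the BuildReadOnlyFactory helper are hypothetical names, while FlushMode.Never and ISessionFactory.OpenSession are standard NHibernate APIs:

```csharp
using NHibernate;

// Hypothetical wrapper around a session factory that was configured
// with a connection string whose login only has SELECT permissions.
public static class ReadOnlySessionFactory
{
    private static readonly ISessionFactory factory = BuildReadOnlyFactory();

    public static ISession Create()
    {
        ISession session = factory.OpenSession();
        session.FlushMode = FlushMode.Never; // no implicit flushes
        return session;
    }

    private static ISessionFactory BuildReadOnlyFactory()
    {
        // NHibernate configuration with the SELECT-only connection
        // string would go here.
        throw new System.NotImplementedException();
    }
}
```

Even if some code resets the FlushMode, the restricted database login still rejects INSERT/UPDATE/DELETE statements.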
Take a look at read-only entities, which became available in NHibernate 3.1 GA:
https://nhibernate.jira.com/browse/NH-908
There is a newer read-only feature in NHibernate (I don't know which version introduced it, but it's in 3.3.0 for sure). You can set the session to read-only like this:
session.DefaultReadOnly = true;
It disables the session's snapshot of old entity values, which improves performance and reduces memory consumption.
There is a chapter about read-only entities in the NHibernate reference documentation.
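For example, a sketch of its use (the Customer entity and the sessionFactory variable are placeholders; DefaultReadOnly and SetReadOnly are the documented session members):

```csharp
using (ISession session = sessionFactory.OpenSession())
{
    // Every entity loaded from now on is read-only: NHibernate keeps
    // no snapshot of old values, so changes are never dirty-checked.
    session.DefaultReadOnly = true;

    var customer = session.Get<Customer>(42);
    customer.Name = "New name"; // silently ignored on flush/commit

    // The status can still be toggled per entity if needed:
    session.SetReadOnly(customer, false);
}
```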
Accumulating updates and simply never flushing seems like a bad solution to me.
I posted a similar question. The solution provided there uses a different approach: all the events are set to empty and are thus ignored. My feeling is that it's a better approach.
I am surprised that this is not easier to do. I like Entity Framework's approach of using the extension method .AsNoTracking(), which ensures that read-only queries remain that way.
How to create an NHibernate read-only session with Fluent NHibernate that doesn't accumulate updates?
Related
We're building a large application with NHibernate as the ORM layer. We've tried to apply as many best practices as possible, among them setting FlushMode to Never. However, this is causing us pain. For example, consider the following scenario:
There is a table with an end date column. From this table, we delete the last (by end date) record:
The record is deleted;
After the delete, we do a (repository) query for the last record (by end date);
This last record is updated because it's the new active record.
This is a very simple scenario, and many like it exist. The problem here is that when we do the query, we get the deleted record back, which of course is not correct. This roughly means that we cannot run queries in business logic that may touch an entity being inserted or deleted, because it is, respectively, not there yet or still there.
How can I work with this scenario? Are there ways to work around this without reverting the FlushMode setting, or should I just give up on the FlushMode setting altogether?
How can I work with this scenario? Are there ways to work around this
without reverting the FlushMode setting
FlushMode.Never does not prevent you from manually calling Flush() when you need to deal with up-to-date data. I guess that is the way to handle this scenario without changing the FlushMode.
or should I just give up on the FlushMode setting all together?
Could you provide some reference for FlushMode.Never being a good practice in the general case? It seems like FlushMode.Never is a fit when dealing with large, mostly read-only sets of objects.
http://jroller.com/tfenne/entry/hibernate_understand_flushmode_never
FlushMode.Never is a best practice only when you absolutely require fine-grained control. FlushMode.Auto will cover 99.99% of cases without a problem. That said, decorating your CUD operations with an ISession.Flush() will not hurt, as it only involves a database round-trip if there are any CUD actions in the internal action queue.
Flush mode Never means NHibernate will never flush the session; it's up to you to do that. So session.Delete() will not actually delete the record from the database, it will just mark the object for deletion in the session's cache. You can force a flush by calling session.Flush() after calling session.Delete().
I think Auto is the better option: with Auto, NHibernate will flush the session automatically before querying for data.
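Applied to the end-date scenario above, that might look roughly like this (the Record entity, its EndDate property, and the lastRecord variable are illustrative):

```csharp
// With FlushMode.Never, Delete() only queues the action in the session.
session.Delete(lastRecord);

// Push the pending delete to the database explicitly, so the
// follow-up query cannot return the deleted row.
session.Flush();

// Now fetch the new last record (by end date) and update it.
var newLast = session.QueryOver<Record>()
    .OrderBy(r => r.EndDate).Desc
    .Take(1)
    .SingleOrDefault();
```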
I'm considering Flyway as a DB versioning tool, but I have a use case in mind that I've not seen discussed.
How does one manage a cache layer after a DB migration? That is, if/when a migration happens, how can I notify an external tool to flush the cache (a memcached cluster, for example)?
More specifically, how can I tell Hibernate that Flyway has performed a migration, causing data/schema changes to the underlying DB (so that I may manage the cache appropriately)?
I can safely say RTFM is appropriate here! :)
The migrate() method returns an integer corresponding to the number of successfully applied migrations... so, if
migrate() > 0
then I do whatever I need to do some other way (trigger a cache flush, etc.).
Thanks SO! Sometimes the best answer is no answer. :)
I am dealing with a strange issue related to NHibernate and distributed transactions in a WCF service. See Deadlocks causing 'Server failed to resume the transaction' with NHibernate and distributed transactions for more details.
One thing that seems to solve my problem is using NHibernate's AdoNetTransactionFactory, instead of AdoNetWithDistributedTransactionsFactory.
I believe that the AdoNetWithDistributedTransactionsFactory is involved with making NHibernate's second-level caching mechanism work right, but we're not using that. What (if any) other problems exist with using AdoNetTransactionFactory with distributed transactions?
Thanks for your time!
I notice that you mentioned from your other question/answer:
SqlConnection class is not thread-safe, and that includes closing the connection
on a separate thread. Based on this response we have filed a
bug report for NHibernate.
However, from NHibernate's documentation:
11.2. Threads and connections
You should observe the following practices when creating NHibernate Sessions:
Never create more than one concurrent ISession or ITransaction instance per database connection.
Be extremely careful when creating more than one ISession per database per transaction. The ISession itself keeps track of updates made to loaded objects, so a different ISession might see stale data.
The ISession is not threadsafe! Never access the same ISession in two concurrent threads. An ISession is usually only a single unit-of-work!
If you are trying to multi-thread the connection with NHibernate perhaps it is just not going to work. Have you considered a different ORM such as Entity Framework?
No matter what ORM you choose though, the database connection will not be thread safe. This is universal.
"many DB drivers are not thread safe. Using a singleton means that if you have many threads, they will all share the same connection. The singleton pattern does not give you thread saftey. It merely allows many threads to easily share a "global" instance." - https://stackoverflow.com/a/6507820/1026459
Using AdoNetTransactionFactory with distributed system transactions will cause those transactions to be ignored by NHibernate, which has the following consequences:
ConnectionReleaseMode.AfterTransaction will not be honored. Instead, NHibernate will release the connection after each statement, and so will re-acquire a connection from the pool for the next one. Depending on your data provider, this may trigger escalation of the transaction to distributed.
FlushMode.Commit will not be honored. Explicit flushes will be required instead. (Auto-flushes before queries may still occur.)
Work that needs to be isolated from the current system transaction will still be included inside it. (Unless the connection string's Enlist property is false.) Such work may include id-generator queries, such as retrieving the next high value for a table hilo generator. If the transaction gets rolled back, NHibernate may then use conflicting ids.
The NHibernate session will not be able to correctly track the locks it holds on entities. Considering itself outside of a transaction, it will assume it has no locks on them. So it may try (on user code request, for example) to re-lock them with a lower lock level than the one the transaction already holds on them in the database. I'm not sure what outcome could result from that. (At best ignored, at worst...)
The second-level cache will be disabled as soon as you start modifying data. NHibernate sort of "invalidates" cache entries in such situations and re-enables them, updated, only on transaction completion. But since it will not be aware of transactions...
Some extensions (maybe Envers) may rely on NHibernate transaction events and will no longer work as expected.
I strongly recommend upgrading to NHibernate 3.2 (or a version close to it). Why? Since 2.1, there have been significant improvements (read: a rewrite) to the AdoNetWithDistributedTransactionFactory. As a matter of fact, it now handles TransactionScopes/ambient transactions and the like correctly. When we ran 2.1 in production we encountered many issues related to distributed transactions. We pretty much had to fix a ton of stuff ourselves and recompile NHibernate. 3.2 seems to have fixed many issues around the subject.
I don't have the source near me, but, if memory doesn't fail me, the AdoNetTransactionFactory doesn't check/handle ambient transactions. So you are down to NHibernate starting transactions when one is not present in the session (by means of ISession.BeginTransaction()).
I saw that I can use SessionScope and have inserts inside the scope of the SessionScope, with a flush at the end of the scope.
My question is whether I can specify in some way that after, let's say, every 10 insertions/saves of objects, they will automatically be flushed to the DB.
In other words, I want to be able to configure the way flushing works with Castle ActiveRecord.
P.S.: Is there any way to configure cascading behavior for objects, like in NHibernate?
You could hook up your own IPostInsertEventListener where you keep track of insertion count and flush accordingly. But I recommend against this unless you have some very good reasons to do so.
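For completeness, a sketch of what such a listener could look like; the class name and the fixed threshold of 10 are illustrative, and the simple counter below is neither per-session nor thread-safe, which hints at why this approach needs care:

```csharp
using NHibernate.Event;

// Illustrative listener that flushes after every tenth insert.
public class BatchFlushListener : IPostInsertEventListener
{
    private int insertCount;

    public void OnPostInsert(PostInsertEvent @event)
    {
        insertCount++;
        if (insertCount % 10 == 0)
        {
            // The event carries the session that performed the insert.
            @event.Session.Flush();
        }
    }
}
```

The listener would still need to be registered in the NHibernate configuration used by ActiveRecord.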
The relevant attributes have a Cascade property to set cascading behavior. See for example HasMany.
What are your experiences with the latest version of NHibernate (2.0.1 GA) regarding disconnected scenarios?
A disconnected scenario is where I fetch some object graph from NHibernate, disconnect from the session (and database connection), do some changes in the object graph (deleting in collections, adding entities, updating entities) and then reconnect and save....
We tried this in a client-server architecture. Now we are moving to DTO (data transfer objects). This means, the detached entities are not directly sent to the client anymore, but specialized objects.
The main reason to move in this direction is not NHibernate, it is actually the serialization needed to send entities to the client. While you could use lazy-loading (and you will!) while you are attached to the session, you need to get all references from the database to serialize it.
We had lots of Guids instead of references and lots of properties which are mapped but not serialized ... and it became a pain. So it's much easier to copy the stuff you really want to serialize to its own structure.
Besides that, working detached can work well.
Be careful with lazy loading, which will cause exceptions to be thrown when accessing non-loaded objects on a detached instance.
Be careful with concurrency; the chance that entities have changed while they were detached is high.
Be careful if you need some sort of security, or if you want your server alone to make some data changes. The detached objects could potentially come back in any state.
You may take a look at the session methods SaveOrUpdateCopy and Merge.
Here is an article which gives you more details:
NHibernate feature: SaveOrUpdateCopy & Merge
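As a short sketch of the Merge approach (the Customer entity, the sessionFactory, and the detached variable are placeholders):

```csharp
// 'detached' was loaded in an earlier session and modified afterwards.
using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    // Merge copies the detached state onto a persistent instance
    // (loading it if necessary) and returns that persistent instance;
    // the detached object itself stays detached.
    var persistent = session.Merge(detached);
    tx.Commit();
}
```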