How to turn off NHibernate's automatic (dirty checking) update behaviour? - nhibernate

I've just discovered that if I get an object from an NHibernate session and change a property on the object, NHibernate will automatically update the object on commit without me calling Session.Update(myObj)!
I can see how this could be helpful, but as default behaviour it seems crazy!
Update: I now understand persistence ignorance, so this behaviour is clearly the preferred option. I'll leave this now-embarrassing question here in the hope that it helps other newcomers.
How can I stop this happening? Is this default NHibernate behaviour or something coming from Fluent NHibernate's AutoPersistenceModel?
If there's no way to stop this, what do I do? Unless I'm missing the point this behaviour seems to create a right mess.
I'm using NHibernate 2.0.1.4 and a Fluent NHibernate build from 18/3/2009
Is this guy right with his answer?
I've also read that overriding an Event Listener could be a solution to this. However, IDirtyCheckEventListener.OnDirtyCheck isn't called in this situation. Does anyone know which listener I need to override?

You can set Session.FlushMode to FlushMode.Never. This makes your operations explicit: nothing is written until you call tx.Commit() or session.Flush() yourself. Of course, a commit/flush will still write any pending changes to the database. If you do not want that either, call session.Evict(yourObj); the object becomes detached and NHibernate will not issue any db commands for it.
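A minimal sketch of that idea, assuming a hypothetical Customer entity and an already-built session factory (with FlushMode.Never, renamed FlushMode.Manual in later versions, nothing is flushed unless you do it yourself):
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    session.FlushMode = FlushMode.Never;   // no implicit flushing

    var customer = session.Get<Customer>(42);
    customer.Name = "Changed";             // entity is now dirty, but nothing has been written

    // If you want NHibernate to forget about the entity entirely,
    // detach it so no SQL is ever issued for it:
    session.Evict(customer);

    tx.Commit();                           // with FlushMode.Never, no automatic flush happens here
}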
Response to your edit: Yes, that guy gives you more options on how to control it.

My solution:
In your initial ISession creation (somewhere inside your injection framework registrations), set DefaultReadOnly to true.
In your IRepository implementation that wraps NHibernate and manages the ISession, have the Insert, Update, InsertUpdate and Delete (or similar) methods that call ISession.Save, Update, SaveOrUpdate, etc. also call SetReadOnly on the entity with the flag set to false.
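A minimal sketch of that shape, with illustrative names for the repository and entity (note that ISession.DefaultReadOnly requires a newer NHibernate than the 2.0.1 build mentioned in the question):
public class NHibernateRepository<T> where T : class
{
    private readonly ISession session;

    public NHibernateRepository(ISessionFactory sessionFactory)
    {
        session = sessionFactory.OpenSession();
        session.DefaultReadOnly = true;        // everything loads read-only: no dirty checking
    }

    public T Get(object id)
    {
        return session.Get<T>(id);             // returned entity is read-only
    }

    public void Update(T entity)
    {
        session.SetReadOnly(entity, false);    // opt this entity back in to change tracking
        session.Update(entity);
    }
}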

Calling SaveOrUpdate() or Save() makes an object persistent. If you've retrieved it using an ISession or reached it from a reference on a persistent object, then it is already persistent and flushing the session will save the changes. You can prevent this behavior by calling Evict() on the object, which detaches it from the session.
Edited to add: I generally consider an ISession to be a unit of work. This is easily implemented in a web app using session-per-request, but requires more control in WinForms.

We did this by using Event Listeners with NHibernate (this isn't my own work, but I can't find the link to where I got it from...).
We have an EventListener that marks entities as ReadOnly when they are read in, and another for Save (and SaveOrUpdate) that sets them back to Loaded, so an object is only persisted when we manually call Save() on it.
That, or you could use an IStatelessSession, which has no state/change tracking.
This sets the entity/item as ReadOnly immediately on loading.
I've only included some of the listeners here, but my config code below references all of them.
/// <summary>
/// A listener that, once an object is loaded, changes its status to ReadOnly so that
/// it will not be automatically saved by NH.
/// </summary>
/// <remarks>
/// For this object to then be saved, the SaveUpdateEventListener is to be used.
/// </remarks>
public class PostLoadEventListener : IPostLoadEventListener
{
    public void OnPostLoad(PostLoadEvent @event)
    {
        EntityEntry entry = @event.Session.PersistenceContext.GetEntry(@event.Entity);
        entry.BackSetStatus(Status.ReadOnly);
    }
}
On saving the object, we call this to set it back to Loaded (meaning it will now persist):
public class SaveUpdateEventListener : ISaveOrUpdateEventListener
{
    public static readonly CascadingAction ResetReadOnly = new ResetReadOnlyCascadeAction();

    /// <summary>
    /// Changes the status of a ReadOnly entity back to Loaded so that NH will persist it.
    /// </summary>
    /// <remarks>
    /// The change is cascaded to associated entities via the ResetReadOnly cascading action.
    /// </remarks>
    public void OnSaveOrUpdate(SaveOrUpdateEvent @event)
    {
        var session = @event.Session;
        EntityEntry entry = session.PersistenceContext.GetEntry(@event.Entity);
        if (entry != null && entry.Persister.IsMutable && entry.Status == Status.ReadOnly)
        {
            entry.BackSetStatus(Status.Loaded);
            CascadeOnUpdate(@event, entry.Persister, @event.Entry);
        }
    }

    private static void CascadeOnUpdate(SaveOrUpdateEvent @event, IEntityPersister entityPersister,
        object entityEntry)
    {
        IEventSource source = @event.Session;
        source.PersistenceContext.IncrementCascadeLevel();
        try
        {
            new Cascade(ResetReadOnly, CascadePoint.BeforeFlush, source).CascadeOn(entityPersister, entityEntry);
        }
        finally
        {
            source.PersistenceContext.DecrementCascadeLevel();
        }
    }
}
And we wire it into NHibernate like so:
public static ISessionFactory CreateSessionFactory(IPersistenceConfigurer dbConfig, Action<MappingConfiguration> mappingConfig, bool enabledChangeTracking, bool enabledAuditing, int queryTimeout)
{
    return Fluently.Configure()
        .Database(dbConfig)
        .Mappings(mappingConfig)
        .Mappings(x => x.FluentMappings.AddFromAssemblyOf<__AuditEntity>())
        .ExposeConfiguration(x => Configure(x, enabledChangeTracking, enabledAuditing, queryTimeout))
        .BuildSessionFactory();
}

/// <summary>
/// Configures the specified config.
/// </summary>
/// <param name="config">The config.</param>
/// <param name="enableChangeTracking">if set to <c>true</c> [enable change tracking].</param>
/// <param name="enableAuditing">if set to <c>true</c> [enable auditing].</param>
/// <param name="queryTimeOut">The query time out in minutes.</param>
private static void Configure(NHibernate.Cfg.Configuration config, bool enableChangeTracking, bool enableAuditing, int queryTimeOut)
{
    config.SetProperty(NHibernate.Cfg.Environment.Hbm2ddlKeyWords, "none");
    if (queryTimeOut > 0)
    {
        config.SetProperty("command_timeout", TimeSpan.FromMinutes(queryTimeOut).TotalSeconds.ToString());
    }
    if (!enableChangeTracking)
    {
        config.AppendListeners(NHibernate.Event.ListenerType.PostLoad, new[] { new Enact.Core.DB.NHib.Listeners.PostLoadEventListener() });
        config.AppendListeners(NHibernate.Event.ListenerType.SaveUpdate, new[] { new Enact.Core.DB.NHib.Listeners.SaveUpdateEventListener() });
        config.AppendListeners(NHibernate.Event.ListenerType.PostUpdate, new[] { new Enact.Core.DB.NHib.Listeners.PostUpdateEventListener() });
        config.AppendListeners(NHibernate.Event.ListenerType.PostInsert, new[] { new Enact.Core.DB.NHib.Listeners.PostInsertEventListener() });
    }
}
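For context, a hypothetical call site for the factory above, just to show how the change-tracking flag wires the listeners in (the connection string, entity type and flag values are placeholders):
var sessionFactory = CreateSessionFactory(
    MsSqlConfiguration.MsSql2008.ConnectionString("..."),     // dbConfig
    m => m.FluentMappings.AddFromAssemblyOf<SomeEntity>(),     // mappingConfig
    enabledChangeTracking: false,   // false => the read-only listeners above are appended
    enabledAuditing: false,
    queryTimeout: 5);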

Related

Fluent NHibernate Conventions : OptimisticLock.Is(x => x.Version()) doesn't work

I am having problems with using OptimisticLock as a Convention.
However, using OptimisticLock within individual ClassMaps works fine: it throws StaleObjectStateException when there is a conflict, as expected.
Each Class corresponding to a Table in the database has a property (which corresponds to a Column in the Table) of type DateTime which I am trying to use for Locking using OptimisticLock.Version().
It only works when I use it within every ClassMap. I don't want to write so many ClassMaps; I want to use Auto Mapping instead.
It WORKS like this within the Class Map
Version(x => x.UpdTs).Column("UPD_TS");
OptimisticLock.Version();
So, I started using Convention below, but it DOESN'T WORK.
OptimisticLock.IsAny(x => x.Version());
I tried setting the DynamicUpdate, etc. Nothing seems to work for me.
Please help!
Here's what I did to get it to work using a Convention:
/// <summary>
/// Class represents the Convention which defines which Property/Column serves as a part of the Optimistic Locking Mechanism.
/// </summary>
public class VersionConvention : IVersionConvention, IVersionConventionAcceptance
{
    public void Accept(IAcceptanceCriteria<IVersionInspector> criteria)
    {
        criteria.Expect(x => x.Name == "%COLUMN_NAME%");
    }

    /// <summary>
    /// Method applies additional overrides to the <see cref="IVersionInstance"/>
    /// </summary>
    /// <param name="instance"><see cref="IVersionInstance"/></param>
    public void Apply(IVersionInstance instance)
    {
        instance.Column("%COLUMN_NAME%");
    }
}
%COLUMN_NAME% above is the Property being used for Locking using Version.
Then I specified that the Version should be used for Optimistic Locking when creating the FluentConfiguration object, like this:
OptimisticLock.Is(x => x.Version());
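For reference, a rough sketch of where both pieces end up in the auto-mapping setup; the entity type and connection string here are placeholders, not from the original post:
var model = AutoMap.AssemblyOf<SomeEntity>()
    .Conventions.Add(new VersionConvention())                   // picks the version column
    .Conventions.Add(OptimisticLock.Is(x => x.Version()));      // use the version for optimistic locking

var sessionFactory = Fluently.Configure()
    .Database(MsSqlConfiguration.MsSql2008.ConnectionString("..."))   // placeholder database config
    .Mappings(m => m.AutoMappings.Add(model))
    .BuildSessionFactory();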

Autofac.Multitenant in an aspnet core application does not seem to resolve tenant scoped dependencies correctly

I'm in the process of upgrading a Multitenant dotnet core solution which utilises the Autofac.Multitenant framework. I'm not having a lot of luck getting tenancy resolution working correctly. I've created a simple demonstration of the problem here: https://github.com/SaltyDH/AutofacMultitenancy1
This repo demonstrates registering a InstancePerTenant scoped dependency TestMultitenancyContext which is resolved in the Home Controller. Due to issues with using IHttpContextAccessor, I'm using a custom RequestMiddleware class to capture the current HttpContext object so that I can perform logic on the current HttpContext request object in the MultitenantIdentificationStrategy.
Finally, TestFixture provides a simple xUnit test which, at least on my machine, returns "tenant1" for both tenants.
Is there something I've missed here or is this just not currently working?
UPDATE 10/6/2017: We released Autofac.AspNetCore.Multitenant to wrap up the solution to this in a more easy to consume package. I'll leave the original answer/explanation here for posterity, but if you're hitting this you can go grab that package and move on.
I think you're running into a timing issue.
If you pop open the debugger on the HttpContext in the middleware you can see that there's a RequestServicesFeature object on a property called ServiceProvidersFeature. That's what's responsible for creating the per-request scope. The scope gets created the first time it's accessed.
It appears that the order goes roughly like this:
The WebHostBuilder adds a startup filter to enable request services to be added to the pipeline.
The startup filter, AutoRequestServicesStartupFilter, adds middleware to the very beginning of the pipeline to trigger the creation of request services.
The middleware that gets added, RequestServicesContainerMiddleware, basically just invokes the RequestServices property from the ServiceProvidersFeature to trigger creation of the per-request lifetime scope. However, in its constructor is where it gets the IServiceScopeFactory that it uses to create the request scope, which isn't so great because it'll be created from the root container before a tenant can be established.
All that yields a situation where the per-request scope has already been determined to be for the default tenant and you can't really change it.
To work around this, you need to set up request services yourself such that they account for multitenancy.
It sounds worse than it is.
First, we need a reference to the application container. We need the ability to resolve something from application-level services rather than request services. I did that by adding a static property to your Startup class and keeping the container there.
public static IContainer ApplicationContainer { get; private set; }
Next, we're going to change your middleware to look more like the RequestServicesContainerMiddleware. You need to set the HttpContext first so your tenant ID strategy works. After that, you can get an IServiceScopeFactory and follow the same pattern they do in RequestServicesContainerMiddleware.
public class RequestMiddleware
{
    private static readonly AsyncLocal<HttpContext> _context = new AsyncLocal<HttpContext>();
    private readonly RequestDelegate _next;

    public RequestMiddleware(RequestDelegate next)
    {
        this._next = next;
    }

    public static HttpContext Context => _context.Value;

    public async Task Invoke(HttpContext context)
    {
        _context.Value = context;
        var existingFeature = context.Features.Get<IServiceProvidersFeature>();
        using (var feature = new RequestServicesFeature(Startup.ApplicationContainer.Resolve<IServiceScopeFactory>()))
        {
            try
            {
                context.Features.Set<IServiceProvidersFeature>(feature);
                await this._next.Invoke(context);
            }
            finally
            {
                context.Features.Set(existingFeature);
                _context.Value = null;
            }
        }
    }
}
Now you need a startup filter to get your middleware in there. You need a startup filter because otherwise the RequestServicesContainerMiddleware will run too early in the pipeline and things will already start resolving from the wrong tenant scope.
public class RequestStartupFilter : IStartupFilter
{
    public Action<IApplicationBuilder> Configure(Action<IApplicationBuilder> next)
    {
        return builder =>
        {
            builder.UseMiddleware<RequestMiddleware>();
            next(builder);
        };
    }
}
Add the startup filter to the very start of the services collection. You need your startup filter to run before AutoRequestServicesStartupFilter.
The ConfigureServices ends up looking like this:
public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.Insert(0, new ServiceDescriptor(typeof(IStartupFilter), typeof(RequestStartupFilter), ServiceLifetime.Transient));
    services.AddMvc();
    var builder = new ContainerBuilder();
    builder.RegisterType<TestMultitenancyContext>().InstancePerTenant();
    builder.Populate(services);
    var container = new MultitenantContainer(new MultitenantIdentificationStrategy(), builder.Build());
    ApplicationContainer = container;
    return new AutofacServiceProvider(container);
}
Note the Insert call in there to jam your service registration at the top, before their startup filter.
The new order of operations will be:
At app startup...
Your startup filter will add your custom request services middleware to the pipeline.
The AutoRequestServicesStartupFilter will add the RequestServicesContainerMiddleware to the pipeline.
During a request...
Your custom request middleware will set up request services based on the inbound request information.
The RequestServicesContainerMiddleware will see that request services are already set up and will do nothing.
When services are resolved, the request service scope will already be the tenant scope as set up by your custom request middleware and the correct thing will show up.
I tested this locally by switching the tenant ID to come from querystring rather than host name (so I didn't have to set up hosts file entries and all that jazz) and I was able to switch tenant by switching querystring parameters.
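For reference, a hypothetical querystring-based variant of the tenant ID strategy used for that test (the sample repo's strategy reads the host name instead):
public class QueryStringTenantIdentificationStrategy : ITenantIdentificationStrategy
{
    public bool TryIdentifyTenant(out object tenantId)
    {
        // RequestMiddleware.Context is the AsyncLocal-captured HttpContext from the middleware above.
        var context = RequestMiddleware.Context;
        tenantId = context?.Request.Query["tenant"].ToString();
        return !string.IsNullOrEmpty(tenantId as string);
    }
}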
Now, you may be able to simplify this a bit. For example, you may be able to get away without a startup filter by doing something directly to the web host builder in the Program class. You may be able to register your startup filter right with the ContainerBuilder before calling builder.Populate and skip that Insert call. You may be able to store the IServiceProvider in the Startup class property if you don't like having Autofac spread through the system. You may be able to get away without a static container property if you create the middleware instance and pass the container in as a constructor parameter yourself. Unfortunately, I already spent a loooot of time trying to figure out the workaround so I'm going to have to leave "optimize it" as an exercise for the reader.
Again, sorry this wasn't clear. I've filed an issue on your behalf to get the docs updated and maybe figure out a better way to do this that's a little more straightforward.
I have an alternate solution, related to work I've done on a pending PR on the Autofac DI extension. The solution there can't be used exactly, because it depends on classes that are (rightly) internal. It can be adapted by providing shims that reproduce the functionality in those classes. Since they are compact, this doesn't require the addition of a lot of code. Until the functionality is fixed, this is the solution I'm using.
The other aspect of the solution is to eschew the custom middleware and instead make the ITenantIdentificationStrategy a service that can take any dependency required to do what it needs to.
Fixing the DI
The "DI" side of the problem is that the Autofac DI extension uses resolution to supply IServiceProvider and IServiceScopeFactory implementations. This is possible, because under the hood these are IComponentContext and ILifetimeScope (which are themselves different interfaces for the same thing). In most cases this works fine, but ASP.NET Core proceeds by resolving a singleton IServiceScopeFactory very early in the application cycle. In a multi-tenant scenario this resolution will return the ILifetimeScope for either the first tenant requested, or for the "default" tenant, and that will be the root scope (as far as MS DI is concerned) for the application lifetime. (See the PR for further discussion.)
The classes below implement an alternate behavior: instead of resolving the DI interfaces, it builds (news-up) the initially-requested ones from the IContainer directly. With the initial IServiceScopeFactory based directly on IContainer, further scope requests will resolve correctly.
public class ContainerServiceProvider : IServiceProvider, ISupportRequiredService
{
    private readonly IContainer container;

    public ContainerServiceProvider(IContainer container)
    {
        this.container = container;
    }

    public object GetRequiredService(Type serviceType)
    {
        if (TryGetContainer(serviceType, out object containerSvc)) return containerSvc;
        else return container.Resolve(serviceType);
    }

    public object GetService(Type serviceType)
    {
        if (TryGetContainer(serviceType, out object containerSvc)) return containerSvc;
        else return container.ResolveOptional(serviceType);
    }

    bool TryGetContainer(Type serviceType, out object containerSvc)
    {
        if (serviceType == typeof(IServiceProvider)) { containerSvc = this; return true; }
        if (serviceType == typeof(IServiceScopeFactory)) { containerSvc = new ContainerServiceScopeFactory(container); return true; }
        else { containerSvc = null; return false; }
    }
}
// uses IContainer, but could use copy of AutofacServiceScopeFactory
internal class ContainerServiceScopeFactory : IServiceScopeFactory
{
    private IContainer container;

    public ContainerServiceScopeFactory(IContainer container)
    {
        this.container = container;
    }

    public IServiceScope CreateScope()
    {
        return new BecauseAutofacsIsInternalServiceScope(container.BeginLifetimeScope());
    }
}
// direct copy of AutofacServiceScope
internal class BecauseAutofacsIsInternalServiceScope : IServiceScope
{
    private readonly ILifetimeScope _lifetimeScope;

    /// <summary>
    /// Initializes a new instance of the <see cref="AutofacServiceScope"/> class.
    /// </summary>
    /// <param name="lifetimeScope">
    /// The lifetime scope from which services should be resolved for this service scope.
    /// </param>
    public BecauseAutofacsIsInternalServiceScope(ILifetimeScope lifetimeScope)
    {
        this._lifetimeScope = lifetimeScope;
        this.ServiceProvider = this._lifetimeScope.Resolve<IServiceProvider>();
    }

    /// <summary>
    /// Gets an <see cref="IServiceProvider" /> corresponding to this service scope.
    /// </summary>
    /// <value>
    /// An <see cref="IServiceProvider" /> that can be used to resolve dependencies from the scope.
    /// </value>
    public IServiceProvider ServiceProvider { get; }

    /// <summary>
    /// Disposes of the lifetime scope and resolved disposable services.
    /// </summary>
    public void Dispose()
    {
        this._lifetimeScope.Dispose();
    }
}
Fixing Identification Strategy
As for making the identification-strategy a service, I would rework your implementation like so:
public class MultitenantIdentificationStrategy : ITenantIdentificationStrategy
{
    public const string DefaultTenantId = null;
    private readonly IHttpContextAccessor contextaccessor;

    public MultitenantIdentificationStrategy(IHttpContextAccessor contextaccessor)
    {
        this.contextaccessor = contextaccessor;
    }

    public bool TryIdentifyTenant(out object tenantId)
    {
        var context = contextaccessor.HttpContext;
        // after this is unchanged
        // ...
    }

    // ...
}
Use in Startup.ConfigureServices
This fragment shows how these last few pieces are registered and fed to MS DI for ASP.NET Core.
// ...
builder.RegisterType<MultitenantIdentificationStrategy>().AsImplementedInterfaces(); // tenant identification

// wire up the Autofac DI integration
builder.Populate(services);
var underlyingContainer = builder.Build();
ApplicationContainer = new MultitenantContainer(underlyingContainer.Resolve<ITenantIdentificationStrategy>(), underlyingContainer);
return new ContainerServiceProvider(ApplicationContainer);
If you find this solution workable, please give a thumbs up to DI PR 10, or to PR 11 if after reviewing you think that is the better/more elegant solution. Either will save having to add the "shim" code above.

Web Service nHibernate SessionFactory issue

I have a C# .Net Web Service. I am calling a dll (C# .Net) that uses nHibernate to connect to my database. When I call the dll, it executes a query to the db and loads the parent Object "Task". However, when the dll tries to access the child objects "Task.SubTasks", it throws the following error:
NHibernate.HibernateException failed to lazily initialize a collection of role: SubTasks no session or session was closed
I'm new to NHibernate, so I'm not sure what piece of code I'm missing.
Do I need to start a Factory session in my web service before calling the dll? If so, how do I do that?
EDIT: Added the web service code and the CreateContainer() method code. This code gets called just prior to calling the dll
[WebMethod]
public byte[] GetTaskSubtask(string subtaskId)
{
    var container = CreateContainer(windsorPath);
    IoC.Initialize(container);

    //DLL CALL
    byte[] theDoc = CommonExport.GetSubtaskDocument(subtaskId);
    return theDoc;
}

/// <summary>
/// Register the IoC container.
/// </summary>
/// <param name="aWindsorConfig">The path to the windsor configuration
/// file.</param>
/// <returns>An initialized container.</returns>
protected override IWindsorContainer CreateContainer(string aWindsorConfig)
{
    //This method is a workaround. This method should not be overridden.
    //This method is overridden because the CreateContainer(string) method
    //in UnitOfWorkApplication instantiates a RhinoContainer instance that
    //has a dependency on Binsor. At the time of writing this the Mammoth
    //application did not have the libraries needed to resolve the Binsor
    //dependency.
    IWindsorContainer container = new RhinoContainer();
    container.Register(
        Component.For<IUnitOfWorkFactory>().ImplementedBy<NHibernateUnitOfWorkFactory>());
    return container;
}
EDIT: Adding DLL code and repository code...
DLL Code
public static byte[] GetSubtaskDocument(string subtaskId)
{
    BOESubtask task = taskRepo.FindBOESubtaskById(Guid.Parse(subtaskId));
    foreach (Subtask st in task.Subtasks)   // <-- this is the line that throws the error
    {
        //do some work
    }
}
Repository for task
/// <summary>
/// Queries the database for the Task whose ID matches the
/// passed in ID.
/// </summary>
/// <param name="aTaskId">The ID to find the matching Task
/// for.</param>
/// <returns>The Task whose ID matches the passed in
/// ID (or null).</returns>
public Task FindTaskById(Guid aTaskId)
{
    var task = new Task();
    using (UnitOfWork.Start())
    {
        task = FindOne(DetachedCriteria.For<Task>()
            .Add(Restrictions.Eq("Id", aTaskId)));
        UnitOfWork.Current.Flush();
    }
    return task;
}
Repository for subtask
/// <summary>
/// Queries the database for the Subtask whose ID matches the
/// passed in ID.
/// </summary>
/// <param name="aSubtaskId">The ID to find the matching Subtask
/// for.</param>
/// <returns>The Subtask whose ID matches the passed in
/// ID (or null).</returns>
public Subtask FindBOESubtaskById(Guid aSubtaskId)
{
    var subtask = new Subtask();
    using (UnitOfWork.Start())
    {
        subtask = FindOne(DetachedCriteria.For<Subtask>()
            .Add(Restrictions.Eq("Id", aSubtaskId)));
        UnitOfWork.Current.Flush();
    }
    return subtask;
}
You have apparently mapped a collection in one of your NHibernate data classes with lazy loading enabled (or better: not disabled, as it is the default behavior). NHibernate loads the entity and creates a proxy for the mapped collections. As soon as they are accessed, NHibernate attempts to load the items for that collection. But if you close your NHibernate session before that happens, the error you received will occur. You are probably exposing your data object through your web service to the web service client. During the serialization process, the XmlSerializer tries to serialize the collection which prompts NHibernate to populate it. As the session is closed, the error occurs.
Two ways to prevent this:
close the session after the response has been sent
or
disable lazy loading for your collections so that they are loaded instantly
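As a sketch of the second option, the collection from the question could be mapped as not lazy. Shown here in Fluent NHibernate ClassMap form with property names assumed from the error message; the hbm.xml equivalent is lazy="false" on the collection element:
public class TaskMap : ClassMap<Task>
{
    public TaskMap()
    {
        Id(x => x.Id);
        HasMany(x => x.SubTasks)
            .Not.LazyLoad();   // load the child collection together with the parent
    }
}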
Addition after the above edits:
In your repository, you start UnitsOfWork within a using statement. They are disposed as soon as the block completes. I don't know the implementation of UnitOfWork, but I assume it controls the lifetime of the NHibernate session. By disposing the UnitOfWork, you are probably closing the NHibernate session. As your mapping initializes collections as lazy-loaded, these collections are not yet populated when the session closes, and the error occurs. NHibernate needs the exact session instance that loaded an entity in order to populate its lazily initialized collections.
You will run into problems like this if you use lazy loading and have a repository that closes the session before the response is complete. One option would be to initialize the UnitOfWork at the start of the request and close it after the response is complete (for instance in Application_BeginRequest, Application_EndRequest in Global.asax.cs). That would of course mean a close integration of your repository into the web service.
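A rough sketch of that option, assuming the UnitOfWork API used elsewhere in this thread (the exact members of your UnitOfWork implementation may differ):
public class Global : System.Web.HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // start a unit of work (and thus an NHibernate session) for this request
        HttpContext.Current.Items["uow"] = UnitOfWork.Start();
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        // dispose it only after the response (including serialization) is complete
        (HttpContext.Current.Items["uow"] as IDisposable)?.Dispose();
    }
}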
In any case, creating a Session for a single request in combination with lazy loading is a bad idea and is very likely to create similar problems in the future. If you can't change the repository implementation you might probably have to disable lazy loading.
Using Garland's feedback I resolved the issue. I removed the UnitOfWork(s) code from the repository in the DLL and wrapped the Web Service call to the DLL in a UnitOfWork. See the code modifications below:
Web Service
[WebMethod]
public byte[] GetSubtaskDocument(string subtaskId)
{
    var container = CreateContainer(windsorString);
    IoC.Initialize(container);

    byte[] theDoc;
    using (UnitOfWork.Start())
    {
        //DLL call
        theDoc = CommonExport.GetSubtaskDocument(subtaskId);
        UnitOfWork.Current.Flush();
    }
    return theDoc;
}
Repository call in the DLL
public Subtask FindSubtaskById(Guid aSubtaskId)
{
    return FindOne(DetachedCriteria.For<Subtask>()
        .Add(Restrictions.Eq("Id", aSubtaskId)));
}

Repository Pattern with NHibernate?

I am wondering how much my service layer should know about my repository. In past projects I always returned lists and had a method for each thing I needed.
So if I needed to return all rows that had an Id of 5, that would be a method. I do have a generic repository for create, update, delete and other NHibernate operations, but not for querying.
Now I am starting to use IQueryable more, because I kept running into the problem of having so many methods, one for each case.
Say I needed to return everything with a certain Id, with 3 tables eager loaded; that would be a new method. If I needed a certain Id with no eager loading, that would be a separate method.
So now I am thinking that if I have a method that does the where-clause part and returns IQueryable, I can then build on the result (e.g. add eager loading where I need it).
At the same time, though, this makes the service layer more aware of the repository layer, and I can no longer switch out the repository as easily, since I now have NHibernate-specific code in the service layer.
I am also not sure how that would affect mocking.
So now I am wondering, if I go down this route, whether the repository is needed at all, as it now seems like the two have been blended together.
Edit
If I get rid of my repository and just have the session in my service layer, is there a point to having a unit of work class then?
public class UnitOfWork : IUnitOfWork, IDisposable
{
    private ITransaction transaction;
    private readonly ISession session;

    public UnitOfWork(ISession session)
    {
        this.session = session;
        session.FlushMode = FlushMode.Auto;
    }

    /// <summary>
    /// Starts a transaction with the database. Uses IsolationLevel.ReadCommitted
    /// </summary>
    public void BeginTransaction()
    {
        transaction = session.BeginTransaction(IsolationLevel.ReadCommitted);
    }

    /// <summary>
    /// Starts a transaction with the database.
    /// </summary>
    /// <param name="level">IsolationLevel the transaction should run in.</param>
    public void BeginTransaction(IsolationLevel level)
    {
        transaction = session.BeginTransaction(level);
    }

    private bool IsTransactionActive()
    {
        return transaction.IsActive;
    }

    /// <summary>
    /// Commits the transaction and writes to the database.
    /// </summary>
    public void Commit()
    {
        // make sure a transaction was started before we try to commit.
        if (!IsTransactionActive())
        {
            throw new InvalidOperationException("Oops! We don't have an active transaction. Did a rollback occur before this commit was triggered: "
                + transaction.WasRolledBack + " did a commit happen before this commit: " + transaction.WasCommitted);
        }
        transaction.Commit();
    }

    /// <summary>
    /// Rollback any writes to the databases.
    /// </summary>
    public void Rollback()
    {
        if (IsTransactionActive())
        {
            transaction.Rollback();
        }
    }

    public void Dispose() // don't know where to call this to see if it will solve my problem
    {
        if (session.IsOpen)
        {
            session.Close();
        }
    }
}
Everyone has an opinion on how to use the repository, what to abstract, etc. Ayende Rahien has a few good posts about the issue: Architecting in the pit of doom: The evils of the repository abstraction layer and Repository is the new Singleton. Those give you some pretty good reasons why you shouldn't try to create yet another abstraction on top of NHibernate's ISession.
The thing about NHibernate is that it gives you the most if you don't try to abstract it away. Making your service layer depend on NHibernate is not necessarily a bad thing. It gives you control over sessions, caching and other NHibernate features, and thus enables you to improve performance, not to mention saving you from all the redundant wrapping code that you've mentioned.
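As a rough sketch of what that can look like, here is a service that takes ISession directly and composes queries with the NHibernate LINQ provider; the Order, Customer and Lines names are illustrative, not from the question:
using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

public class OrderService
{
    private readonly ISession session;

    public OrderService(ISession session)
    {
        this.session = session;
    }

    public IList<Order> OrdersForCustomer(int customerId, bool includeLines = false)
    {
        IQueryable<Order> query = session.Query<Order>()
            .Where(o => o.Customer.Id == customerId);

        if (includeLines)
            query = query.Fetch(o => o.Lines);   // eager load only when the caller asks for it

        return query.ToList();
    }
}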

Iterate through IPersistentCollection items

I am listening to audit events in NHibernate, specifically to OnPostUpdateCollection(PostCollectionUpdateEvent @event).
I want to iterate through the @event.Collection elements.
The @event.Collection is an IPersistentCollection, which does not implement IEnumerable. There is the Entries method that returns an IEnumerable, but it requires an ICollectionPersister, and I have no idea where to get one.
The question has already been asked here: http://osdir.com/ml/nhusers/2010-02/msg00472.html, but there was no conclusive answer.
Pedro,
Searching the NHibernate code, I found the following doc for the GetValue method of IPersistentCollection (@event.Collection):
/// <summary>
/// Return the user-visible collection (or array) instance
/// </summary>
/// <returns>
/// By default, the NHibernate wrapper is an acceptable collection for
/// the end user code to work with because it is interface compatible.
/// An NHibernate PersistentList is an IList, an NHibernate PersistentMap is an IDictionary
/// and those are the types user code is expecting.
/// </returns>
object GetValue();
With that, we can conclude that you can cast your collection to an IEnumerable and things will work fine.
I've built a little sample mapping a bag, and it ended up like this:
public void OnPostUpdateCollection(PostCollectionUpdateEvent @event)
{
    foreach (var item in (IEnumerable)@event.Collection.GetValue())
    {
        // do whatever you need with each item
    }
}
Hope this helps!
Filipe
If you need to do more complex operations with the collection, you are probably going to need the collection persister, which you can get with the following extension method (essentially, you need to work around the visibility of the AbstractCollectionEvent.GetLoadedCollectionPersister method):
public static class CollectionEventExtensions
{
    private class Helper : AbstractCollectionEvent
    {
        public Helper(ICollectionPersister collectionPersister, IPersistentCollection collection, IEventSource source, object affectedOwner, object affectedOwnerId)
            : base(collectionPersister, collection, source, affectedOwner, affectedOwnerId)
        {
        }

        public static ICollectionPersister GetCollectionPersister(AbstractCollectionEvent collectionEvent)
        {
            return GetLoadedCollectionPersister(collectionEvent.Collection, collectionEvent.Session);
        }
    }

    public static ICollectionPersister GetCollectionPersister(this AbstractCollectionEvent collectionEvent)
    {
        return Helper.GetCollectionPersister(collectionEvent);
    }
}
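With that in place, the listener from earlier could use the persister-based Entries overload mentioned in the question, roughly like this:
public void OnPostUpdateCollection(PostCollectionUpdateEvent @event)
{
    var persister = @event.GetCollectionPersister();
    foreach (var entry in @event.Collection.Entries(persister))
    {
        // inspect or audit each collection entry here
    }
}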
Hope it helps!
Best Regards,
Oliver Hanappi