With the repository pattern, should I check whether the queried object exists in memory before making an actual database call?

I have a repository, for example, UserRepository, which returns a user for a given userId. I work on a web application, so objects are loaded into memory, used, and disposed when the request ends.
So far, when I write a repository, I simply retrieve the data from the database. I don't store the retrieved User object in memory (that is, in a collection inside the repository), and when the repository's GetById() method is called, I don't check whether the object is already in a collection; I simply query the database.
My questions are:
Should I store retrieved objects in memory, and when a repository's Get method is called, check whether the object exists in memory before making any database call?
Or is the in-memory collection unnecessary, since a web request is a short-lived session and all objects are disposed afterward?

1) Should I store retrieved objects in memory, and when a repository's Get method is called, should I check whether the object exists in memory before making any database call?
Since your repository should be abstracted enough to behave like an in-memory collection, I think it is really up to you and your use case.
If you store your objects after they are retrieved from the database, you will probably end up with an implementation of the so-called Identity Map. Depending on your domain, this can get quite complicated.
Depending on the infrastructure layer you rely on, you may be able to use the Identity Map provided by your ORM, if it has one.
But the real question is: is it worth implementing an Identity Map yourself?
We can agree that repeating a query may be wrong for two reasons, performance and integrity. Here is a quote from Martin Fowler:
An old proverb says that a man with two watches never knows what time it is. If two watches are confusing, you can get in an even bigger mess with loading objects from a database.
But sometimes you need to be pragmatic and just load them every time you need them.
2) Or is the in-memory collection unnecessary, since a web request is a short-lived session and all objects are disposed afterward?
It depends™. For example, you may need to work with the same object in several places within a request; in that case it may be worth it. But say you need to refresh your user session identity by loading the user from the database: there are cases where you only do that once within the whole request.

As usual, I don't think there is going to be a one-size-fits-all answer.
There may be situations where it makes sense to implement a form of caching on a repository: when the data is retrieved often, does not go stale too quickly, or simply for efficiency.
However, rather than baking this into every repository, you could implement a generic cache decorator that wraps a repository when you do need this (see the sketch below).
So take each use case on its merits.
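As an illustration, here is a minimal sketch of such a decorator. The IRepository<T> interface and the int key are assumptions for the example; a real implementation would also need an invalidation strategy:

using System.Collections.Generic;

public interface IRepository<T>
{
    T GetById(int id);
}

// Wraps any repository and serves repeated reads from a per-instance cache.
// Scoped to a single web request, this doubles as a crude identity map.
public class CachingRepository<T> : IRepository<T>
{
    private readonly IRepository<T> _inner;
    private readonly Dictionary<int, T> _cache = new Dictionary<int, T>();

    public CachingRepository(IRepository<T> inner)
    {
        _inner = inner;
    }

    public T GetById(int id)
    {
        T entity;
        if (!_cache.TryGetValue(id, out entity))
        {
            entity = _inner.GetById(id); // hit the database only once per id
            _cache[id] = entity;
        }
        return entity;
    }
}

Callers keep depending on IRepository<T> and never know the cache exists; composition is just new CachingRepository<User>(new UserRepository()).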

When you're using an ORM like Entity Framework or NHibernate, this is already taken care of: all read entities are tracked via the identity map mechanism, and looking an entity up by key (DbSet.Find in EF) won't even hit the database if the entity is already loaded.
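For example, a minimal sketch assuming a MyDbContext with a Users DbSet:

using (var context = new MyDbContext())
{
    var first = context.Users.Find(1);   // queries the database
    var second = context.Users.Find(1);  // served from the identity map, no SQL issued
    bool same = ReferenceEquals(first, second); // true: one tracked instance per key
}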
If you're using direct database access or a micro-ORM as the base for your repository, you should be careful: without an identity map you're essentially working with value objects:
using System;
using System.Collections.Generic;
using System.Linq;

namespace test
{
    internal class Program
    {
        static void Main()
        {
            Console.WriteLine("Identity map");
            var artrepo1 = new ArticleIMRepository();
            var o1 = new Order();
            // Both lines for article 1 get the same instance back.
            o1.OrderLines.Add(new OrderLine { Article = artrepo1.GetById(1, "a1", 100), Quantity = 50 });
            o1.OrderLines.Add(new OrderLine { Article = artrepo1.GetById(1, "a1", 100), Quantity = 30 });
            o1.OrderLines.Add(new OrderLine { Article = artrepo1.GetById(2, "a2", 100), Quantity = 20 });
            o1.ConfirmOrder();
            o1.PrintChangedStock();
            /*
            Art. 1/a1, Stock: 20
            Art. 2/a2, Stock: 80
            */

            Console.WriteLine("Value objects");
            var artrepo2 = new ArticleVORepository();
            var o2 = new Order();
            // Each call returns a fresh instance, so the stock changes diverge.
            o2.OrderLines.Add(new OrderLine { Article = artrepo2.GetById(1, "a1", 100), Quantity = 50 });
            o2.OrderLines.Add(new OrderLine { Article = artrepo2.GetById(1, "a1", 100), Quantity = 30 });
            o2.OrderLines.Add(new OrderLine { Article = artrepo2.GetById(2, "a2", 100), Quantity = 20 });
            o2.ConfirmOrder();
            o2.PrintChangedStock();
            /*
            Art. 1/a1, Stock: 50
            Art. 1/a1, Stock: 70
            Art. 2/a2, Stock: 80
            */
            Console.ReadLine();
        }

        #region "Domain Model"
        public class Order
        {
            public List<OrderLine> OrderLines = new List<OrderLine>();

            public void ConfirmOrder()
            {
                foreach (OrderLine line in OrderLines)
                {
                    line.Article.Stock -= line.Quantity;
                }
            }

            public void PrintChangedStock()
            {
                // Distinct() compares by reference here, which is the point of the demo.
                foreach (var a in OrderLines.Select(x => x.Article).Distinct())
                {
                    Console.WriteLine("Art. {0}/{1}, Stock: {2}", a.Id, a.Name, a.Stock);
                }
            }
        }

        public class OrderLine
        {
            public Article Article;
            public int Quantity;
        }

        public class Article
        {
            public int Id;
            public string Name;
            public int Stock;
        }
        #endregion

        #region Repositories
        // Simulated identity map: every id maps to exactly one Article instance.
        public class ArticleIMRepository
        {
            private static readonly Dictionary<int, Article> Articles = new Dictionary<int, Article>();

            public Article GetById(int id, string name, int stock)
            {
                if (!Articles.ContainsKey(id))
                    Articles.Add(id, new Article { Id = id, Name = name, Stock = stock });
                return Articles[id];
            }
        }

        // No identity map: every call materializes a new Article ("value object" behavior).
        public class ArticleVORepository
        {
            public Article GetById(int id, string name, int stock)
            {
                return new Article { Id = id, Name = name, Stock = stock };
            }
        }
        #endregion
    }
}

Related

EF Core include related ids but not related entities

Before I go creating my own SQL scripts by hand for this, I have a scenario where I want to get the ids of related entities via a foreign key, but not the entire related entities, using EF Core.
Right now, I'm getting the ids manually by looping through the related entities and extracting the ids one at a time, like so:
List<int> ClientIds = new List<int>();
for (var i = 0; i < Clients.Count; i++)
{
    ClientIds.Add(Clients.ElementAt(i).Id);
}
To my understanding, this will either return much more data than needed (my entity plus every related entity) or run a completely separate query for each related entity I access, which I obviously want to avoid if I can.
Is there a straightforward way to accomplish this in EF Core, or do I need to head over to the SQL side and handle it myself?
Model:
public class UserViewModel
{
    public UserViewModel(UserModel userModel)
    {
        ClientIds = new List<int>();
        for (var i = 0; i < userModel.Clients.Count; i++)
        {
            ClientIds.Add(userModel.Clients.ElementAt(i).Id);
        }
        //...all the other class assignments
    }

    public IEnumerable<int> ClientIds { get; set; }
    //...all the other irrelevant properties
}
Basically, I need my front-end to know which Client to ask for later.
It looks like you are trying to query this from within the parent entity, i.e.:
public class Parent
{
    public virtual ICollection<Client> Clients { get; set; }

    public void SomeMethod()
    {
        // ...
        List<int> ClientIds = new List<int>();
        for (var i = 0; i < Clients.Count; i++)
        {
            ClientIds.Add(Clients.ElementAt(i).Id);
        }
        // ...
    }
}
This is not ideal, because unless the Clients were eager loaded when the Parent was loaded, this triggers a lazy load that fetches all of the Clients' data when all you want is the IDs. Still, it's not terrible, as it results in only one DB call to load the clients.
If they are already loaded, there is a more succinct way to get the IDs:
List<int> ClientIds = Clients.Select(x => x.Id).ToList();
Otherwise, if you have business logic involving the Parent and Clients whereby you want to be more selective about when and how the data is loaded, it is better to leave the entity definition to just represent the data state and basic rules about the data, and to move the selective business logic out of the entity into a business logic container that scopes the DbContext and queries against the entities to fetch what it needs.
For instance, if the calling code went and did this:
var parent = _context.Parents.Single(x => x.ParentId == parentId);
parent.SomeMethod(); // which resulted in checking the Client IDs...
The simplest way to avoid the extra DB call is to ensure the related entities are eager loaded.
var parent = _context.Parents
    .Include(x => x.Clients)
    .Single(x => x.ParentId == parentId);
parent.SomeMethod(); // which resulted in checking the Client IDs...
The problem with this approach is that it still loads all details about all of the Clients, and you end up defaulting to eager loading everything all of the time because the code might call something like SomeMethod() which expects the related entity details to be present. That is the use case for lazy loading, but lazy loading has the performance overhead of ad-hoc DB hits and requires the entity's DbContext to always be available to perform the read when necessary.
Instead, move the logic out of the entity and into the caller, or into another container that can take the relevant details, so that the caller projects down just the data it needs from the entities in one efficient query:
var parentDetails = _context.Parents
    .Where(x => x.ParentId == parentId)
    .Select(x => new
    {
        x.ParentId,
        // other details from parent or related entities...
        ClientIds = x.Clients.Select(c => c.Id).ToList()
    }).Single();
// Do logic that SomeMethod() would have done here, or pass these
// loaded details to a method / service to do the work rather than
// embedding it in the Entity.
This doesn't load a Parent entity; rather, it executes a query that loads just the details about the parent and related entities that we need. In this example the result is projected into an anonymous type to hold the information we consume later, but if you are querying the data to send to a view you can project it directly into a view model or DTO class to serialize and send.
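For instance, a minimal sketch of the same query projected into a hypothetical DTO class:

public class ParentDetailsDto
{
    public int ParentId { get; set; }
    public List<int> ClientIds { get; set; }
}

var dto = _context.Parents
    .Where(x => x.ParentId == parentId)
    .Select(x => new ParentDetailsDto
    {
        ParentId = x.ParentId,
        ClientIds = x.Clients.Select(c => c.Id).ToList()
    }).Single();
// dto can be returned from a controller action and serialized directly.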

Save complex object to session in ASP.NET Core 2.0

I am quite new to ASP.NET Core, so please help. I would like to avoid a database round trip on every request in my ASP.NET Core application. I have functionality to dynamically add columns to a data grid; the column settings (visibility, enabled state, width, caption) are stored in the DB.
So I would like to store a List<PersonColumns> on the server for the current session only, but I am not able to do this. I already use the JsonConvert methods to serialize and deserialize objects to/from the session. This works for a List<Int32> or for objects with simple properties, but not for a complex object with nested properties.
The object I want to store in the session looks like this:
[Serializable]
public class PersonColumns
{
    public Int64 PersonId { get; set; }

    List<ViewPersonColumns> PersonCols { get; set; }

    public PersonColumns(Int64 personId)
    {
        this.PersonId = personId;
    }

    public void LoadPersonColumns(dbContext dbContext)
    {
        LoadPersonColumns(dbContext, null);
    }

    public void LoadPersonColumns(dbContext dbContext, string code)
    {
        PersonCols = ViewPersonColumns.GetPersonColumns(dbContext, code, PersonId);
    }

    public static List<ViewPersonColumns> GetFormViewColumns(SatisDbContext dbContext, string code, Int64 formId, string viewName, Int64 personId)
    {
        var columns = ViewPersonColumns.GetPersonColumns(dbContext, code, personId);
        return columns.Where(p => p.FormId == formId && p.ObjectName == viewName).ToList();
    }
}
I would also like to ask whether my approach of saving a list of 600 records to the session is bad. Is it better to hit the DB and load the columns each time the user wants to display the grid?
Any advice appreciated.
Thanks.
EDIT: I have tested storing a List<ViewPersonColumns> in the session directly, and it is saved correctly. When I save an object where the List<ViewPersonColumns> is a property, only the built-in types are saved and the List property is null.
The object I want to save in the session:
[Serializable]
public class UserManagement
{
    public String PersonUserName { get; set; }
    public Int64 PersonId { get; set; }
    public List<ViewPersonColumns> PersonColumns { get; set; } //not saved to session??

    public UserManagement() { }

    public UserManagement(DbContext dbContext, string userName)
    {
        var person = dbContext.Person.Single(p => p.UserName == userName);
        PersonUserName = person.UserName;
        PersonId = person.Id;
    }

    /*public void PrepareUserData(DbContext dbContext)
    {
        LoadPersonColumns(dbContext);
    }*/

    public void LoadPersonColumns(DbContext dbContext)
    {
        LoadPersonColumns(dbContext, null);
    }

    public void LoadPersonColumns(DbContext dbContext, string code)
    {
        PersonColumns = ViewPersonColumns.GetPersonColumns(dbContext, code, PersonId);
    }

    public List<ViewPersonColumns> GetFormViewColumns(Int64 formId, string viewName)
    {
        if (PersonColumns == null)
            return null;
        return PersonColumns.Where(p => p.FormId == formId && p.ObjectName == viewName).ToList();
    }
}
Save the columns to the session:
UserManagement userManagement = new UserManagement(_context, user.UserName);
userManagement.LoadPersonColumns(_context);
HttpContext.Session.SetObject("ActualPersonContext", userManagement);
HttpContext.Session.SetObject("ActualPersonColumns", userManagement.PersonColumns);
Load the columns from the session:
//userManagement built-in types are set, but PersonColumns is null - not correct
UserManagement userManagement = session.GetObject<UserManagement>("ActualPersonContext");
//cols is filled from the session with 600 records - correct
List<ViewPersonColumns> cols = session.GetObject<List<ViewPersonColumns>>("ActualPersonColumns");
Using an in-memory list for the columns is better than hitting the database each time.
Note that you can't create and store sessions in .NET Core the way you did in .NET Framework 4.0.
Try it like this:
Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    //services.AddDbContext<GeneralDBContext>(options => options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
    services.AddMvc().AddSessionStateTempDataProvider();
    services.AddSession();
    // Note: you also need app.UseSession() in Configure() before app.UseMvc().
}
Common/SessionExtensions.cs
using Microsoft.AspNetCore.Http;
using Newtonsoft.Json;

namespace IMAPApplication.Common
{
    public static class SessionExtensions
    {
        public static T GetComplexData<T>(this ISession session, string key)
        {
            var data = session.GetString(key);
            if (data == null)
            {
                return default(T);
            }
            return JsonConvert.DeserializeObject<T>(data);
        }

        public static void SetComplexData(this ISession session, string key, object value)
        {
            session.SetString(key, JsonConvert.SerializeObject(value));
        }
    }
}
Usage
==> Create session:
public IActionResult Login([FromBody]LoginViewModel model)
{
    LoggedUserVM user = GetUserDataById(model.userId);

    //Create session with a complex object
    HttpContext.Session.SetComplexData("loggerUser", user);

    // 'result' here comes from your own authentication logic
    return Json(new { status = result.Status, message = result.Message });
}
==> Get session data:
public IActionResult Index()
{
    //Get session data
    LoggedUserVM loggedUser = HttpContext.Session.GetComplexData<LoggedUserVM>("loggerUser");
    return View(loggedUser);
}
Hope this is helpful. Good luck.
This is an evergreen post, and even though Microsoft has recommended serialization to store objects in session, it is not a correct solution unless your object is read-only. I have a blog post explaining all the scenarios, and I have pointed out the issues in the ASP.NET Core GitHub repository in issue id 18159.
A synopsis of the problems:
A. A serialized copy is not the same as the object itself. True, serialization helps in a distributed-server scenario, but it comes with a caveat that Microsoft has failed to highlight: it works without unpredictable failures only when the object is meant to be read, not written back.
B. If you want a read-write object in the session, then every time you change the object read from the session after deserialization, it must be written back by serializing it again. This alone leads to complexity, since you either have to track changes or write back to the session after every property change. Within a single request, the object may be written back several times before the response is sent.
C. For a read-write object, this fails even on a single server: the user's actions can trigger multiple rapid requests, and more often than not the object will be serialized or deserialized by one thread while being edited and written back by another. You end up with threads overwriting each other's object state, and even locks won't help much, since each deserialization produces a temporary object rather than one real shared object.
D. There are issues with serializing complex objects: it is not just a performance hit, it can outright fail in certain scenarios, especially with deeply nested objects that sometimes refer back to themselves.
The synopsis of the solution is below; the full implementation with code is in the blog post:
First, implement this as a cache: create one entry in IMemoryCache for each unique session.
Second, keep the cache entry in sliding-expiration mode, so that every read revives the expiry time, keeping the object cached as long as the session is active.
The second point alone is not enough: you also need a heartbeat, triggering a call to the session from JavaScript every T minus 1 minute or so. (We used to do this anyway to keep the session alive while the user is working in the browser, so it is nothing new.)
Additional recommendations:
A. Create an object called SessionManager, so that all your session read/write code sits in one place; a minimal sketch follows below.
B. Do not use a very high session timeout: if you implement the heartbeat technique, even a 3-minute timeout is enough.
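As an illustration only (the blog post has the full implementation), here is a minimal sketch of such a SessionManager. It assumes IMemoryCache is registered via services.AddMemoryCache() and that the visitor has a session id (ISession.Id is stable once something has been written to the session):

using System;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Caching.Memory;

// Keeps one live object per session in IMemoryCache, avoiding serialization entirely.
public class SessionManager
{
    private readonly IMemoryCache _cache;
    private readonly IHttpContextAccessor _accessor;

    public SessionManager(IMemoryCache cache, IHttpContextAccessor accessor)
    {
        _cache = cache;
        _accessor = accessor;
    }

    private string Key(string name)
    {
        return _accessor.HttpContext.Session.Id + ":" + name;
    }

    public T GetOrCreate<T>(string name, Func<T> factory)
    {
        // Sliding expiration: every read pushes the expiry forward, so the
        // entry lives exactly as long as the session (plus heartbeat) is active.
        return _cache.GetOrCreate(Key(name), entry =>
        {
            entry.SlidingExpiration = TimeSpan.FromMinutes(3);
            return factory();
        });
    }

    public void Remove(string name)
    {
        _cache.Remove(Key(name));
    }
}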

LightSwitch - bulk-loading all requests into one using a domain service

I need to group some data from a SQL Server database, and since LightSwitch doesn't support that out of the box, I use a domain service according to Eric Erhardt's guide.
However, my table contains several foreign keys, and of course I want the correct related data to be shown in the table (doing it just like the guide makes only the key values show). I solved this by adding a relationship to my newly created entity like this:
And my Domain Service class looks like this:
public class AzureDbTestReportData : DomainService
{
    private CountryLawDataDataObjectContext context;

    public CountryLawDataDataObjectContext Context
    {
        get
        {
            if (this.context == null)
            {
                EntityConnectionStringBuilder builder = new EntityConnectionStringBuilder();
                builder.Metadata =
                    "res://*/CountryLawDataData.csdl|res://*/CountryLawDataData.ssdl|res://*/CountryLawDataData.msl";
                builder.Provider = "System.Data.SqlClient";
                builder.ProviderConnectionString =
                    WebConfigurationManager.ConnectionStrings["CountryLawDataData"].ConnectionString;
                this.context = new CountryLawDataDataObjectContext(builder.ConnectionString);
            }
            return this.context;
        }
    }

    /// <summary>
    /// Override the Count method in order for paging to work correctly
    /// </summary>
    protected override int Count<T>(IQueryable<T> query)
    {
        return query.Count();
    }

    [Query(IsDefault = true)]
    public IQueryable<RuleEntryTest> GetRuleEntryTest()
    {
        return this.Context.RuleEntries
            .Select(g =>
                new RuleEntryTest()
                {
                    Id = g.Id,
                    Country = g.Country,
                    BaseField = g.BaseField
                });
    }
}

public class RuleEntryTest
{
    [Key]
    public int Id { get; set; }
    public string Country { get; set; }
    public int BaseField { get; set; }
}
It works and all that: both the Country name and the BaseField load with autocomplete boxes as they should, but it takes a VERY long time. With two columns it takes 5-10 seconds to load one page, and I have 10 more columns I haven't implemented yet.
The reason it takes so long is that each piece of related data (each Country and BaseField) requires its own request. Loading a page looks like this in Fiddler:
This isn't acceptable at all. There should be a way of combining all those calls into one, just as happens when the same table is loaded without going through the domain service.
So, that was a lot of explaining. My question is: is there any way I can make all the related data load at once, or otherwise improve the performance? It should not take 10+ seconds to load a screen.
Thanks for any help or input!
My RIA service queries are extremely fast compared to not using them, even when I'm doing aggregation. It might be the fact that you're using "virtual relationships" (which you can tell by the dotted lines between the tables) that you've created with your RuleEntryTest entity.
Why is your original RuleEntry entity not related to both Country and BaseUnit in LightSwitch BEFORE you start creating your RIA entity?
I haven't used Fiddler to see what's happening, but I'd try creating "real" relationships instead of "virtual" ones, and see if that helps your RIA entity's performance.

LINQ to SQL DataContext mapping to WCF DataContract

I'm writing a simple movie database with a three-tier stack: service layer, data access layer, and SQL DB.
I'm using LINQ to SQL to access the DB and return a Film from the Film table. This is then returned as a Film object from the DataContracts in the service layer.
I thought this would work fine, but it has resulted in some awkward code that doesn't look right. Can someone sanity-check this, please?
Is it best practice to map every LINQ result to its DataContract?
public static class DBConnection
{
    private static RMDB_LINQDataContext _db;

    static DBConnection()
    {
        _db = new RMDB_LINQDataContext();
    }

    public static RMDB.DTO.Film GetFilm(string name)
    {
        var LINQ_film = from film in _db.GetTable<Film>()
                        where film.name == name
                        select film;

        if (LINQ_film.ToList().Count != 1)
        {
            // TODO - faultException
        }
        else
        {
            foreach (Film f in LINQ_film.ToList())
            {
                // Yuck
                return new RMDB.DTO.Film(f.name,
                    f.releaseDate.GetValueOrDefault(), "foo", f.rating.GetValueOrDefault());
            }
        }
        return null;
    }
}
This is a bit neater and more efficient, as your formulation performs the query twice:
public static List<RMDB.DTO.Film> GetFilm(string name)
{
    var LINQ_film = from film in _db.GetTable<Film>()
                    where film.name == name
                    select new RMDB.DTO.Film(film.name,
                        film.releaseDate.GetValueOrDefault(),
                        "foo",
                        film.rating.GetValueOrDefault());

    var list = LINQ_film.ToList();
    if (list.Count != 1)
    {
        // TODO - faultException
    }
    return list;
}
Copying one instance of a type to another is standard practice when mapping from a domain object to a DTO. It may seem like extra work, but it is a purely in-memory mapping and provides a layer of isolation between your business layer and your clients. Without it, clients would be impacted by every refactoring of the business layer.
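If the inline constructor call bothers you, a common pattern is to centralize the mapping in one helper; a sketch, assuming the same Film table entity and RMDB.DTO.Film DTO as above:

public static class FilmMapper
{
    // The single place that knows how to turn a LINQ to SQL Film row into the DTO.
    public static RMDB.DTO.Film ToDto(Film entity)
    {
        return new RMDB.DTO.Film(entity.name,
            entity.releaseDate.GetValueOrDefault(),
            "foo",
            entity.rating.GetValueOrDefault());
    }
}

Since LINQ to SQL cannot translate this method call into SQL, apply it after materializing: LINQ_film.ToList().Select(FilmMapper.ToDto).ToList().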

Does NHibernate really deliver transparent persistence

Having started to use NHibernate for persistence, seduced by the promise that it respects your domain model, I tried to implement a relation manager for my domain objects. Basically, to DRY up my code for managing bidirectional one-to-many and many-to-many relations, I decided to have those relations managed by a separate class. When a one-to-many or many-to-one property is set, an entry for the two objects is made in a dictionary: the key is either the one side, with a collection value holding the many sides, or a many side, with the one side as its value.
A one-to-many relation for a specific combination of types looks as follows:
public class OneToManyRelation<TOnePart, TManyPart> : IRelation<IRelationPart, IRelationPart>
    where TOnePart : class, IRelationPart
    where TManyPart : class, IRelationPart
{
    private readonly IDictionary<TOnePart, Iesi.Collections.Generic.ISet<TManyPart>> _oneToMany;
    private readonly IDictionary<TManyPart, TOnePart> _manyToOne;

    public OneToManyRelation()
    {
        _manyToOne = new ConcurrentDictionary<TManyPart, TOnePart>();
        _oneToMany = new ConcurrentDictionary<TOnePart, Iesi.Collections.Generic.ISet<TManyPart>>();
    }

    public void Set(TOnePart onePart, TManyPart manyPart)
    {
        if (onePart == null || manyPart == null) return;
        if (!_manyToOne.ContainsKey(manyPart)) _manyToOne.Add(manyPart, onePart);
        else _manyToOne[manyPart] = onePart;
    }

    public void Add(TOnePart onePart, TManyPart manyPart)
    {
        if (onePart == null || manyPart == null) return;
        if (!_manyToOne.ContainsKey(manyPart)) _manyToOne.Add(manyPart, onePart);
        else _manyToOne[manyPart] = onePart;
        if (!_oneToMany.ContainsKey(onePart)) _oneToMany.Add(onePart, new HashedSet<TManyPart>());
        _oneToMany[onePart].Add(manyPart);
    }

    public Iesi.Collections.Generic.ISet<TManyPart> GetManyPart(TOnePart onePart)
    {
        if (!_oneToMany.ContainsKey(onePart)) _oneToMany[onePart] = new HashedSet<TManyPart>();
        return _oneToMany[onePart];
    }

    public TOnePart GetOnePart(TManyPart manyPart)
    {
        if (!_manyToOne.ContainsKey(manyPart)) _manyToOne[manyPart] = default(TOnePart);
        return _manyToOne[manyPart];
    }

    public void Remove(TOnePart onePart, TManyPart manyPart)
    {
        _manyToOne.Remove(manyPart);
        _oneToMany[onePart].Remove(manyPart);
    }

    public void Set(TOnePart onePart, Iesi.Collections.Generic.ISet<TManyPart> manyPart)
    {
        if (onePart == null) return;
        if (!_oneToMany.ContainsKey(onePart)) _oneToMany.Add(onePart, manyPart);
        else _oneToMany[onePart] = manyPart;
    }

    public void Clear(TOnePart onePart)
    {
        var list = new HashedSet<TManyPart>(_oneToMany[onePart]);
        foreach (var manyPart in list)
        {
            _manyToOne.Remove(manyPart);
        }
        _oneToMany.Remove(onePart);
    }

    public void Clear(TManyPart manyPart)
    {
        if (!_manyToOne.ContainsKey(manyPart)) return;
        if (_manyToOne[manyPart] == null) return;
        _oneToMany[_manyToOne[manyPart]].Remove(manyPart);
        _manyToOne.Remove(manyPart);
    }
}
On the many side, a code snippet looks like:
public virtual SubstanceGroup SubstanceGroup
{
    get { return RelationProvider.SubstanceGroupSubstance.GetOnePart(this); }
    protected set { RelationProvider.SubstanceGroupSubstance.Set(value, this); }
}
On the one side, in this case the SubstanceGroup, the snippet looks like:
public virtual ISet<Substance> Substances
{
    get { return RelationProvider.SubstanceGroupSubstance.GetManyPart(this); }
    protected set { RelationProvider.SubstanceGroupSubstance.Set(this, value); }
}
Using just my domain objects, this works excellently. In a domain object I only have to reference an abstract factory that retrieves the appropriate relation, and I can set the relation from one side, which automatically makes it bidirectional.
However, when NH kicks in, the problem is that I get duplicate keys in my dictionaries. Somehow NH sets a relation property with a null value(!) on a new copy(?) of a domain object. So when the domain object gets saved, I have two entries for that domain object in, for example, the many side of the relation, i.e. the _manyToOne dictionary.
This problem is making me lose my hair. I do not get what is happening here.
To answer your first, very general question, "Does NHibernate really deliver transparent persistence?": nothing is perfect. NH tries its best to be as transparent as possible while keeping its complexity as low as possible.
There are some assumptions, particularly regarding collections: collections and their implementations are not considered part of your domain model, and NH provides its own collection implementations. Not only are you expected to use interfaces like ISet and IList; you should also take the instances given to you by NH when an object is read from the database, and never replace them with your own. (I don't know what your relation class is actually used for, so I can't tell whether this is the problem here.)
Domain objects are unique within the same session instance. If you get new instances of domain objects each time, you have probably implemented the "session-per-call" anti-pattern, which creates a new session for each database interaction (sketched below).
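To illustrate the difference (a hedged sketch; sessionFactory and the Substance entity stand in for your own setup):

// Session-per-call (anti-pattern): two sessions, two distinct instances of the same row.
Substance a, b;
using (var s1 = sessionFactory.OpenSession())
    a = s1.Get<Substance>(42);
using (var s2 = sessionFactory.OpenSession())
    b = s2.Get<Substance>(42);
// ReferenceEquals(a, b) == false, which breaks dictionaries keyed by instance.

// Session-per-request: one session spans the whole unit of work.
using (var session = sessionFactory.OpenSession())
{
    var x = session.Get<Substance>(42);
    var y = session.Get<Substance>(42);
    // Same first-level-cache instance: ReferenceEquals(x, y) == true.
}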
I don't have a clue what you are actually doing. What is this OneToManyRelation used for? What exactly are you doing when NH doesn't behave as expected? This is a problem very specific to your implementation.
Besides the comments on 'convoluted code' and 'what the heck are you doing': the problem was that I was replacing NH's persistent collections, as in the code snippet below:
public void Add(TOnePart onePart, TManyPart manyPart)
{
    if (onePart == null || manyPart == null) return;
    if (!_manyToOne.ContainsKey(manyPart)) _manyToOne.Add(manyPart, onePart);
    else _manyToOne[manyPart] = onePart;
    if (!_oneToMany.ContainsKey(onePart)) _oneToMany.Add(onePart, new HashedSet<TManyPart>());
    _oneToMany[onePart].Add(manyPart);
}
I create a new HashedSet for the many part, and that was the problem. If I had just stored the collection coming in (in this case NH's own persistent collection implementation), it would have worked.
As an NH newbie, this replacing of NH's special collection implementations has been an important source of errors. Consider this a warning to other NH newbies.
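For illustration, a hedged sketch of the corrected behavior, following the relation class above: keep whatever collection instance NH hands you instead of copying it into a new HashedSet:

public void Set(TOnePart onePart, Iesi.Collections.Generic.ISet<TManyPart> manyPart)
{
    if (onePart == null) return;
    // Store NH's persistent collection instance as-is; never wrap or copy it,
    // or NH loses track of the collection it manages.
    _oneToMany[onePart] = manyPart;
}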