NHibernate has a nice feature which I discovered by coincidence:
public interface IInterface {}
public class Impl1 : IInterface {}
public class Impl2 : IInterface {}
ISession session = sf.OpenSession();
session.QueryOver<IInterface>().List();
This will fetch all Impl1 and Impl2 objects (provided those classes are mapped). They need not be mapped as SubclassMaps, which leads me to the conclusion that NHibernate resolves the implementing classes all by itself.
Can anyone point me to the documentation for this? I know neither the name nor the technical background of this feature...
Thanks in advance!
Actually, this is just the way NHibernate does inheritance mapping.
In addition to the usage you described, you can also, for example, define a child collection on an object using the base type and put objects of any inherited type into that collection. For instance, another entity could contain a collection of your IInterface objects:
public class MyEntity
{
public IList<IInterface> MyCollection { get; set; }
}
Now you could put any objects implementing IInterface into MyCollection, and NHibernate will persist them (if the mapping is correct):
Impl1 i1 = new Impl1();
Impl2 i2 = new Impl2();
MyEntity entity = new MyEntity();
entity.MyCollection = new List<IInterface>(); // initialize the collection before adding to it
entity.MyCollection.Add(i1);
entity.MyCollection.Add(i2);
session.Save(entity);
However, the actual database usage (the generated SQL) depends on the inheritance mapping strategy you have defined, so get familiar with the strategies first. You can read more in the official documentation.
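For illustration, here is a minimal Fluent NHibernate sketch of one such strategy, table-per-class-hierarchy with a discriminator column. The Vehicle/Sedan/Truck types and the column name are made up for this sketch and are not part of the question:
using FluentNHibernate.Mapping;

// Hypothetical hierarchy used only for this sketch.
public abstract class Vehicle { public virtual int Id { get; set; } }
public class Sedan : Vehicle { public virtual int Doors { get; set; } }
public class Truck : Vehicle { public virtual int Axles { get; set; } }

// Table-per-class-hierarchy: one table, one discriminator column.
public class VehicleMap : ClassMap<Vehicle>
{
    public VehicleMap()
    {
        Id(x => x.Id);
        DiscriminateSubClassesOnColumn("VehicleType");
    }
}

public class SedanMap : SubclassMap<Sedan>
{
    public SedanMap() { Map(x => x.Doors); }
}

public class TruckMap : SubclassMap<Truck>
{
    public TruckMap() { Map(x => x.Axles); }
}

// With this mapping, session.QueryOver<Vehicle>().List() returns Sedans and Trucks alike,
// and a collection typed as IList<Vehicle> can hold and persist either subclass.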
In DDD, Value Object and Enumeration are quite elegant, so I want to use both in everyday program logic, not only in domain logic. When using customized value objects and enumerations, a serialization problem comes up: should I implement a System.Text.Json.JsonConverter<T> for every value object and enumeration, or is there a better way to handle serialization and deserialization?
Update:
To make it clear, an Enumeration demo is shown below (the ValueObject-derived classes look similar):
[JsonConverter(typeof(CustomizedConverter))]
public class CustomizedEnumeration1 : Enumeration
{
    public string Customized { get; protected set; }
    // ... some other customized properties or classes
    public CustomizedEnumeration1(int id, string name, string customized) : base(id, name)
    {
        Customized = customized;
    }
}
public class Customized2 : Enumeration
{ ... }
public class OtherCustomized: Enumeration
{ ... }
In DDD, properties are sometimes sealed with a protected/private setter, so deserialization has no way to set their values. Many derived classes can't be deserialized as expected, so we have to write a System.Text.Json.JsonConverter<T> for each one. Writing a converter for every derived Enumeration/ValueObject is not good; can anyone point out an easy abstraction for that?
You can achieve your desired result, but you need to switch to Newtonsoft.Json serialization.
Call this in Startup.cs in the ConfigureServices method:
services.AddControllers().AddNewtonsoftJson();
After this, your constructor will be called during deserialization for classes with private setters.
There is no need for custom converters.
For reference, I am using ASP.NET Core 3.1.
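To illustrate, here is a minimal sketch assuming the Newtonsoft.Json package is referenced; OrderStatus is a hypothetical Enumeration-like class with no public setters. Json.NET matches constructor parameter names to JSON property names, so no custom converter is needed:
using System;
using Newtonsoft.Json;

public class OrderStatus
{
    public int Id { get; private set; }
    public string Name { get; private set; }

    // Json.NET calls this constructor during deserialization,
    // matching "Id" and "Name" in the JSON to the parameters.
    public OrderStatus(int id, string name)
    {
        Id = id;
        Name = name;
    }
}

public static class Demo
{
    public static void Main()
    {
        string json = JsonConvert.SerializeObject(new OrderStatus(1, "Pending"));
        OrderStatus status = JsonConvert.DeserializeObject<OrderStatus>(json);
        Console.WriteLine(status.Id + ": " + status.Name); // 1: Pending
    }
}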
I have an application that gets a car entity from a third-party database. I call the entity ThirdPartyCar. My application needs to create a Car entity using data from a ThirdPartyCar. However, the Car entity must also derive some of its data from my application's database. For example, a status of a ThirdPartyCar might be _BOUGHT, and through a database lookup my application must transform it to Sold.
I currently have a Car constructor that takes a ThirdPartyCar argument. But the Car constructor cannot populate the lookup data, since Car is an entity and entities should not have references to repositories. So I also have a service to populate the remaining data:
public class ThirdPartyCar {
@Id
private Long id;
private String vin;
private String status;
// more props + default constructor
}
public class Car {
@Id
private Long id;
private String vin;
private CarStatus status;
// more props (some different than ThirdPartyCar) + default constructor
public Car(ThirdPartyCar thirdPartyCar) {
this.vin = thirdPartyCar.getVin();
// more props set based on thirdPartyCar
// but props leveraging database not set here
}
}
public class CarStatus {
@Id
private Long id;
private String status;
}
public class CarBuilderService {
private final CarStatusMappingRepository repo;
public Car buildFrom(ThirdPartyCar thirdPartyCar) {
Car car = new Car(thirdPartyCar);
CarStatus status = repo.findByThirdPartyCarStatus(thirdPartyCar.getStatus());
car.setStatus(status);
// set other props (including nested props) that depend on repos
return car;
}
}
The logical place to create a Car based on a ThirdPartyCar seems to be the constructor. But I have a disjointed approach because of the need for a repo. What pattern can I apply so that all data is set in the constructor, while still not making the entity aware of repositories?
You should avoid linking two POJO classes from different domains in a constructor. These two classes should not know anything about each other; maybe they represent the same concept in two different systems, but they are not the same.
A good approach is to create an Abstract Factory interface which is used everywhere a Car should be created from a ThirdPartyCar:
interface ThirdPartyCarFactory {
Car createNewBasedOn(ThirdPartyCar source);
}
and one implementation could be your RepositoryThirdPartyCarFactory:
class RepositoryThirdPartyCarFactory implements ThirdPartyCarFactory {
private CarStatusMappingRepository repo;
private CarMapper carMapper;
public Car createNewBasedOn(ThirdPartyCar thirdPartyCar) {
Car car = new Car();
carMapper.map(thirdPartyCar, car);
CarStatus status = repo.findByThirdPartyCarStatus(thirdPartyCar.getStatus());
car.setStatus(status);
// set other props (including nested props) that depend on repos
return car;
}
}
In the above implementation you can find CarMapper, which knows how to map a ThirdPartyCar to a Car. To implement this mapper you can use Dozer, Orika, MapStruct or a custom implementation.
Another question is how you get the ThirdPartyCar object. If you load it by ID from a ThirdPartyRepository, you can change your abstract factory to:
interface CarFactory {
Car createNew(String id);
}
and the given implementation loads the ThirdPartyCar by ID and maps it to a Car. Everything is hidden behind the factory, which you can easily exchange.
See also:
Performance of Java Mapping Frameworks
I have a pretty standard repository interface:
public interface IRepository<TDomainEntity>
where TDomainEntity : DomainEntity, IAggregateRoot
{
TDomainEntity Find(Guid id);
void Add(TDomainEntity entity);
void Update(TDomainEntity entity);
}
We can use various infrastructure implementations to provide default functionality (e.g. Entity Framework, DocumentDb, Table Storage, etc.). This is what the Entity Framework implementation looks like (without any actual EF code, for simplicity's sake):
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IRepository<TDomainEntity>
where TDomainEntity : DomainEntity, IAggregateRoot
where TDataEntity : class, IDataEntity
{
protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }
public TDomainEntity Find(Guid id)
{
// Find, map and return entity using Entity Framework
}
public void Add(TDomainEntity item)
{
var entity = EntityMapper.CreateFrom(item);
// Insert entity using Entity Framework
}
public void Update(TDomainEntity item)
{
var entity = EntityMapper.CreateFrom(item);
// Update entity using Entity Framework
}
}
There is a mapping between the TDomainEntity domain entity (aggregate) and the TDataEntity Entity Framework data entity (database table). I will not go into detail as to why there are separate domain and data entities. This is a philosophy of Domain Driven Design (read about aggregates). What's important to understand here is that the repository will only ever expose the domain entity.
To make a new repository for, let's say, "users", I could define the interface like this:
public interface IUserRepository : IRepository<User>
{
// I can add more methods over and above those in IRepository
}
And then use the Entity Framework implementation to provide the basic Find, Add and Update functionality for the aggregate:
public class UserRepository : EntityFrameworkRepository<User, UserEntity>, IUserRepository
{
// I can implement more methods over and above those in IUserRepository
}
The above solution has worked great. But now we want to implement deletion functionality. I have proposed the following interface (which is an IRepository):
public interface IDeleteableRepository<TDomainEntity>
: IRepository<TDomainEntity>
{
void Delete(TDomainEntity item);
}
The Entity Framework implementation class would now look something like this:
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IDeleteableRepository<TDomainEntity>
where TDomainEntity : DomainEntity, IAggregateRoot
where TDataEntity : class, IDataEntity, IDeleteableDataEntity
{
protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }
// Find(), Add() and Update() ...
public void Delete(TDomainEntity item)
{
var entity = EntityMapper.CreateFrom(item);
entity.IsDeleted = true;
entity.DeletedDate = DateTime.UtcNow;
// Update entity using Entity Framework
// ...
}
}
As defined in the class above, the TDataEntity generic now also needs to be of type IDeleteableDataEntity, which requires the following properties:
public interface IDeleteableDataEntity
{
bool IsDeleted { get; set; }
DateTime DeletedDate { get; set; }
}
These properties are set accordingly in the Delete() implementation.
This means that, IF required, I can define IUserRepository with "deletion" capabilities which would inherently be taken care of by the relevant implementation:
public interface IUserRepository : IDeleteableRepository<User>
{
}
Provided that the relevant Entity Framework data entity is an IDeleteableDataEntity, this would not be an issue.
The great thing about this design is that I can start granularising the repository model even further (IUpdateableRepository, IFindableRepository, IDeleteableRepository, IInsertableRepository), and aggregate repositories can now expose only the relevant functionality as per our specification (perhaps you should be allowed to insert into a UserRepository but NOT into a ClientRepository). Further to this, it specifies a standardised way in which certain repository actions are done (i.e. the updating of the IsDeleted and DeletedDate columns is universal and not left in the hands of the developer).
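For illustration, granular interfaces along those lines might look like this (a sketch only; the names follow the list above and the constraints mirror IRepository):
public interface IFindableRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
{
    TDomainEntity Find(Guid id);
}

public interface IInsertableRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
{
    void Add(TDomainEntity entity);
}

public interface IUpdateableRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
{
    void Update(TDomainEntity entity);
}

// An aggregate repository then composes only the capabilities it should expose,
// e.g. a read-only repository for some aggregate:
public interface IUserReadOnlyRepository : IFindableRepository<User>
{
}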
PROBLEM
A problem with the above design arises when I want to create a repository for some aggregate WITHOUT deletion capabilities, e.g:
public interface IClientRepository : IRepository<Client>
{
}
The EntityFrameworkRepository implementation still requires TDataEntity to be of type IDeleteableDataEntity.
I can ensure that the client data entity model does implement IDeleteableDataEntity, but this is misleading and incorrect. There will be additional fields that are never updated.
The only solution I can think of is to remove the IDeleteableDataEntity generic condition from TDataEntity and then cast to the relevant type in the Delete() method:
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IDeleteableRepository<TDomainEntity>
where TDomainEntity : DomainEntity, IAggregateRoot
where TDataEntity : class, IDataEntity
{
protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }
// Find() and Update() ...
public void Delete(TDomainEntity item)
{
var entity = EntityMapper.CreateFrom(item);
var deleteableEntity = entity as IDeleteableDataEntity;
if(deleteableEntity != null)
{
deleteableEntity.IsDeleted = true;
deleteableEntity.DeletedDate = DateTime.UtcNow;
}
// Update entity using Entity Framework
// ...
}
}
Because ClientRepository does not implement IDeleteableRepository, there will be no Delete() method exposed, which is good.
QUESTION
Can anyone advise on a better architecture which leverages the C# type system and does not involve the hacky cast?
Interestingly enough, I could do this if C# supported multiple inheritance (with separate concrete implementations for finding, adding, deleting and updating).
I do think that you're complicating things a bit too much trying to get the most generic solution of them all; however, I think there's a pretty easy solution to your current problem.
TDataEntity is a persistence data structure: it has no domain value and it's not known outside the persistence layer. So it can have fields it won't ever use; the repository is the only one who knows that, it's a persistence detail. You can afford to be 'sloppy' here, things aren't that important at this level.
Even the 'hacky' cast is a good solution because it's in one place and a private detail.
It's good to have clean and maintainable code everywhere, however we can't afford to waste time coming up with 'perfect' solutions at every layer. Personally, for view and persistence models I prefer the quickest and simplest solutions even if they're a bit smelly.
P.S: As a rule of thumb, generic repository interfaces are good; generic abstract repositories not so much (you need to be careful), unless you're serializing things or using a doc db.
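To sketch that rule of thumb against the question's own interfaces (an illustration, not the one true design): keep IRepository<T> and IDeleteableRepository<T> generic, but implement a concrete repository per aggregate, so Delete() simply never appears on aggregates that don't support it and no cast is needed.
// Concrete, per-aggregate repositories implementing the question's generic interfaces.
// ClientRepository never offers Delete(); UserRepository does, and only its data
// entity needs the soft-delete columns, so no cast is required anywhere.
public class ClientRepository : IRepository<Client>
{
    public Client Find(Guid id) { /* EF query + mapping */ return null; }
    public void Add(Client entity) { /* EF insert */ }
    public void Update(Client entity) { /* EF update */ }
}

public class UserRepository : IDeleteableRepository<User>
{
    public User Find(Guid id) { /* EF query + mapping */ return null; }
    public void Add(User entity) { /* EF insert */ }
    public void Update(User entity) { /* EF update */ }

    public void Delete(User entity)
    {
        // Map to the user data entity and set its IsDeleted / DeletedDate columns here.
    }
}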
We are building an ASP.NET MVC application utilizing NH for data access. Using NH Profiler I see a lot of warnings like "WARN: Narrowing proxy to Domain.CaseTask - this operation breaks ==". I get these very often when executing queries for classes which are mapped in a table per subclass, for example, using the NH Linq provider:
Query<ICaseTask>().Where(c => c.Assignee == Of || c.Operator == Of)
(where the class CaseTask inherits from Task) triggers the warning.
Information about the warning on the internet is scarce and mostly hints that it is something to be ignored... What exactly does this warning warn about? Is this something I should seek to correct?
The reality is more complicated. When you load an entity using session.Load, or you access a property that is lazy-loaded, NHibernate returns a proxy object. That proxy object will be hydrated (its data loaded from the DB) when you access any of its properties for the first time. To achieve this, NHibernate generates a proxy class that extends the entity class and overrides all property getters and setters. This works perfectly when inheritance is not used, since you have no way to differentiate between the proxy and the entity class (the proxy's base class); e.g. a simple proxy is MyEntity test will always work.
Now imagine that we have a Person entity:
class Person {
// lazy-loaded
public Animal Pet { get; set; }
}
And we also have Animal class hierarchy:
public abstract class Animal { ... }
public class Cat : Animal { ... }
public class Dog : Animal { ... }
Now assume that the Pet property is lazy-loaded; when you ask NHibernate for a person's pet you will get a proxy object:
var pet = somePerson.Pet; // pet will be a proxy
But since Pet is a lazy-loaded property, NH does not know whether it will be an instance of a Cat or a Dog, so it does its best and creates a proxy that extends Animal. The proxy will pass the pet is Animal test but will fail both pet is Cat and pet is Dog.
Now assume that you access some property of the pet object, forcing NH to load data from the DB. Now NH knows that your pet is, say, a Cat, but the proxy has already been generated and cannot be changed.
This forces NHibernate to issue a warning that the original proxy for pet, which extends Animal, is being narrowed to type Cat. From now on, the proxy object for the animal with pet.Id that you create using session.Load<Animal>(pet.Id) will extend Cat. It also means that, since the Cat proxy is now stored as part of the session, if we load a second person that shares the cat with the first, NH will use the already available Cat proxy instance to populate the lazy-loaded property.
One of the consequences is that the object reference to pet will be different from the reference obtained by session.Load<Animal>(pet.Id) (in the object.ReferenceEquals sense).
// example - say parent and child share *the same* pet
var pet = child.Pet; // NH will return proxy that extends Animal
pet.DoStuff(); // NH loads data from DB
var parent = child.Parent; // lazy-loaded property
var pet2 = parent.Pet; // NH will return proxy that extends Cat
Assert.NotSame(pet, pet2);
Now, when might this cause you harm:
When you put your entities into Sets or Dictionaries, or use any other structure that requires the Equals/GetHashCode pair to work. This can easily be fixed by providing a custom Equals/GetHashCode implementation (a sketch follows this list; see also: http://www.onjava.com/pub/a/onjava/2006/09/13/dont-let-hibernate-steal-your-identity.html?page=1)
When you try to cast your proxy object to the target type, e.g. (Cat)pet; but again, there are known solutions (e.g. Getting proxies of the correct type in NHibernate)
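A minimal sketch of such an identifier-based Equals/GetHashCode, along the lines of the linked article (the Entity base class and the Guid identifier are assumptions for illustration):
public abstract class Entity
{
    public virtual Guid Id { get; protected set; }

    public override bool Equals(object obj)
    {
        var other = obj as Entity;
        if (other == null) return false;
        if (ReferenceEquals(this, other)) return true;
        // Transient (unsaved) entities are equal only by reference;
        // persisted entities compare by identifier, so a proxy and the
        // real object with the same Id are considered equal.
        if (Id == Guid.Empty || other.Id == Guid.Empty) return false;
        return Id == other.Id;
    }

    public override int GetHashCode()
    {
        // Based on the identifier only, so it does not change when the
        // proxy is hydrated; transient entities fall back to the default.
        return Id == Guid.Empty ? base.GetHashCode() : Id.GetHashCode();
    }
}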
So the moral is to avoid inheritance in your domain model as much as possible.
This warning is about classes having properties or fields whose type is a subclass. For example:
public class Animal
{
public int Id {get;set;}
}
public class Cat : Animal
{
public int Weight {get;set;}
}
public class Person
{
public Cat Pet {get;set;}
}
NHibernate gets upset when it loads the Person entity because it doesn't want to do the cast for you; the behavior would become unpredictable. Unless you tell NHibernate how to deal with Equals (among other logic), it won't know how to do that comparison on its own.
The basic idea to correct this is to let NHibernate put the base class object into the graph and then deal with the casting yourself (note that this setup would use slightly different mappings; I've done this to simplify the code, but it can obviously be done by keeping the properties as full getters/setters):
public class Animal
{
public int Id {get;set;}
}
public class Cat : Animal
{
public int Weight {get;set;}
}
public class Person
{
private Animal _pet;
public Cat Pet {
get{return _pet as Cat;}
}
}
I know that a private parameterless constructor works but what about an object with no parameterless constructors?
I would like to expose types from a third party library so I have no control over the type definitions.
If there is a way, what is the easiest? E.g. I don't want to have to create a sub type.
Edit:
What I'm looking for is something like the level of customization shown here: http://msdn.microsoft.com/en-us/magazine/cc163902.aspx
although I don't want to have to resort to streams to serialize/deserialize.
You can't really make arbitrary types serializable; in some cases (XmlSerializer, for example) the runtime exposes options to spoof the attributes. But DataContractSerializer doesn't allow this. Feasible options:
hide the classes behind your own types that are serializable (lots of work)
provide binary formatter surrogates (yeuch)
write your own serialization core (a lot of work to get right)
Essentially, if something isn't designed for serialization, very little of the framework will let you serialize it.
I just ran a little test, using a WCF service that returns a basic object that does not have a default constructor.
//[DataContract]
//[Serializable]
public class MyObject
{
public MyObject(string _name)
{
Name = _name;
}
//[DataMember]
public string Name { get; set; }
//[DataMember]
public string Address { get; set; }
}
Here is what the service looks like:
public class MyService : IMyService
{
#region IMyService Members
public MyObject GetByName(string _name)
{
return new MyObject(_name) { Address = "Test Address" };
}
#endregion
}
This actually works, as long as MyObject is marked either [DataContract] or [Serializable]. Interestingly, it doesn't seem to need the default constructor on the client side. There is a related post here:
How does WCF deserialization instantiate objects without calling a constructor?
I am not a WCF expert, but it is unlikely that it supports deserialization through constructors with arbitrary parameters. What would it pass in for the values? It could pass null for reference types and empty values for structs, but what good would a type be that could be constructed from completely empty data?
I think you are stuck with one of two options:
Subclass the type in question and pass appropriate default values to the non-parameterless constructor
Create a type that exists solely for serialization. Once deserialization completes, it can create an instance of the original type that you are interested in. It is a bridge of sorts.
Personally I would go for #2. Make the class a data-only structure and optimize it for serialization and factory purposes.
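A minimal sketch of that bridge idea, assuming a hypothetical ThirdPartyThing type from the library with no parameterless constructor; only the DTO is ever handed to WCF:
using System.Runtime.Serialization;

// Imagine this type comes from the third-party library and cannot be changed.
public sealed class ThirdPartyThing
{
    private readonly string _name;
    public ThirdPartyThing(string name) { _name = name; }
    public string Name { get { return _name; } }
}

// Data-only bridge type that WCF serializes instead of the third-party type.
[DataContract]
public class ThirdPartyThingDto
{
    [DataMember]
    public string Name { get; set; }

    public static ThirdPartyThingDto FromThirdParty(ThirdPartyThing source)
    {
        return new ThirdPartyThingDto { Name = source.Name };
    }

    // Factory method recreating the real type after deserialization.
    public ThirdPartyThing ToThirdParty()
    {
        return new ThirdPartyThing(Name);
    }
}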