Create object of one type from object of another type with database lookups - oop

I have an application that gets a car entity from a third-party database. I call that entity ThirdPartyCar. My application needs to create a Car entity using data from a ThirdPartyCar. However, the Car entity must also derive some of its data from my application's own database. For example, a ThirdPartyCar status might be _BOUGHT, and through a database lookup my application must transform it to Sold.
I currently have a Car constructor that takes a ThirdPartyCar argument. But the Car constructor cannot populate the lookup data, since Car is an entity and entities should not hold references to repositories. So I also have a service to populate the remaining data:
public class ThirdPartyCar {
@Id
private Long id;
private String vin;
private String status;
// more props + default constructor
}
public class Car {
@Id
private Long id;
private String vin;
private CarStatus status;
// more props (some different than ThirdPartyCar) + default constructor
public Car(ThirdPartyCar thirdPartyCar) {
this.vin = thirdPartyCar.getVin();
// more props set based on thirdPartyCar
// but props leveraging database not set here
}
}
public class CarStatus {
@Id
private Long id;
private String status;
}
public class CarBuilderService {
private final CarStatusMappingRepository repo;
public Car buildFrom(ThirdPartyCar thirdPartyCar) {
Car car = new Car(thirdPartyCar);
CarStatus status = repo.findByThirdPartyCarStatus(thirdPartyCar.getStatus());
car.setStatus(status);
// set other props (including nested props) that depend on repos
return car;
}
}
The logical place to create a Car based on a ThirdPartyCar seems to be the constructor. But I have a disjointed approach because of the need for a repo. What pattern can I apply so that all data is created in the constructor, while still not making the entity aware of repositories?

You should avoid linking two POJO classes from different domains in a constructor. These two classes should not know anything about each other. They may represent the same concept in two different systems, but they are not the same.
A good approach is to create an Abstract Factory interface, which is used everywhere a Car should be created from a ThirdPartyCar:
interface ThirdPartyCarFactory {
Car createNewBasedOn(ThirdPartyCar source);
}
and one implementation could be your RepositoryThirdPartyCarFactory:
class RepositoryThirdPartyCarFactory implements ThirdPartyCarFactory {
private CarStatusMappingRepository repo;
private CarMapper carMapper;
public Car createNewBasedOn(ThirdPartyCar thirdPartyCar) {
Car car = new Car();
carMapper.map(thirdPartyCar, car);
CarStatus status = repo.findByThirdPartyCarStatus(thirdPartyCar.getStatus());
car.setStatus(status);
// set other props (including nested props) that depend on repos
return car;
}
}
In the above implementation you can find a CarMapper, which knows how to map a ThirdPartyCar to a Car. To implement this mapper you can use Dozer, Orika, MapStruct, or your own custom implementation.
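For illustration, a minimal hand-written mapper could look like this (the copied fields are only a guess based on the question, and the setters on Car are assumed to exist):
// Minimal hand-written mapper (sketch). Only simple fields are copied;
// repository-backed properties such as status stay in the factory.
class CarMapper {
    public void map(ThirdPartyCar source, Car target) {
        target.setVin(source.getVin()); // assumes a setVin(...) setter on Car
        // copy the remaining simple properties here
    }
}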
Another question is how you obtain the ThirdPartyCar object. If you load it by ID from a ThirdPartyRepository, you can change your abstract factory to:
interface CarFactory {
Car createNew(String id);
}
and the given implementation loads the ThirdPartyCar by ID and maps it to a Car. Everything is hidden behind the factory, which you can easily exchange.
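A sketch of what that implementation might look like, assuming a ThirdPartyRepository with a findById method (both names are illustrative):
class RepositoryCarFactory implements CarFactory {
    private final ThirdPartyRepository thirdPartyRepository;   // assumed lookup repository
    private final ThirdPartyCarFactory thirdPartyCarFactory;   // factory shown above

    RepositoryCarFactory(ThirdPartyRepository thirdPartyRepository, ThirdPartyCarFactory thirdPartyCarFactory) {
        this.thirdPartyRepository = thirdPartyRepository;
        this.thirdPartyCarFactory = thirdPartyCarFactory;
    }

    public Car createNew(String id) {
        ThirdPartyCar source = thirdPartyRepository.findById(id); // hypothetical method
        return thirdPartyCarFactory.createNewBasedOn(source);
    }
}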
See also:
Performance of Java Mapping Frameworks

Problem with Include() EntityFramework Core with blazor server side [duplicate]

I have seen some books (e.g. Programming Entity Framework: Code First by Julia Lerman) define their domain classes (POCOs) with no initialization of the navigation properties, like:
public class User
{
public int Id { get; set; }
public string UserName { get; set; }
public virtual ICollection<Address> Addresses { get; set; }
public virtual License License { get; set; }
}
Some other books or tools (e.g. Entity Framework Power Tools) initialize the navigation properties of the class when generating POCOs, like:
public class User
{
public User()
{
this.Addresses = new List<Address>();
this.License = new License();
}
public int Id { get; set; }
public string UserName { get; set; }
public virtual ICollection<Address> Addresses { get; set; }
public virtual License License { get; set; }
}
Q1: Which one is better? why? Pros and Cons?
Edit:
public class License
{
public License()
{
this.User = new User();
}
public int Id { get; set; }
public string Key { get; set; }
public DateTime Expiration { get; set; }
public virtual User User { get; set; }
}
Q2: In the second approach there would be a stack overflow if the `License` class has a reference to the `User` class too. It means we should have a one-way reference. (?) How should we decide which one of the navigation properties should be removed?
Collections: It doesn't matter.
There is a distinct difference between collections and references as navigation properties. A reference is an entity. A collection contains entities. This means that initializing a collection is meaningless in terms of business logic: it does not define an association between entities. Setting a reference does.
So it's purely a matter of preference whether or not, or how, you initialize embedded lists.
As for the "how", some people prefer lazy initialization:
private ICollection<Address> _addresses;
public virtual ICollection<Address> Addresses
{
get { return this._addresses ?? (this._addresses = new HashSet<Address>()); }
}
It prevents null reference exceptions, so it facilitates unit testing and manipulating the collection, but it also prevents unnecessary initialization. The latter may make a difference when a class has relatively many collections. The downside is that it takes a relatively large amount of plumbing, especially when compared to auto properties without initialization. Also, the advent of the null-propagation operator in C# has made it less urgent to initialize collection properties.
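For example, with the null-propagation operator a consumer can read a possibly-null collection without initializing it (a minimal sketch):
// Returns 0 when Addresses is still null instead of throwing a NullReferenceException
var addressCount = user.Addresses?.Count ?? 0;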
...unless explicit loading is applied
The only thing is that initializing collections makes it hard to check whether or not a collection was loaded by Entity Framework. If a collection is initialized, a statement like...
var users = context.Users.ToList();
...will create User objects having empty, not-null Addresses collections (lazy loading aside). Checking whether the collection is loaded requires code like...
var user = users.First();
var isLoaded = context.Entry(user).Collection(c => c.Addresses).IsLoaded;
If the collection is not initialized a simple null check will do. So when selective explicit loading is an important part of your coding practice, i.e. ...
if (/*check collection isn't loaded*/)
context.Entry(user).Collection(c => c.Addresses).Load();
...it may be more convenient not to initialize collection properties.
Reference properties: Don't
Reference properties are entities, so assigning an empty object to them is meaningful: it expresses an association with a new, empty entity.
Worse, if you initialize them in the constructor, EF won't overwrite them when materializing your object or by lazy loading. They will always have their initial values until you actively replace them. Worse still, you may even end up saving empty entities in the database!
And there's another effect: relationship fixup won't occur. Relationship fixup is the process by which EF connects all entities in the context by their navigation properties. When a User and a License are loaded separately, User.License will still be populated, and vice versa. Unless, of course, License was initialized in the constructor. This is also true for 1:n associations. If Address initialized a User in its constructor, User.Addresses would not be populated!
Entity Framework core
Relationship fixup in Entity Framework core (2.1 at the time of writing) isn't affected by initialized reference navigation properties in constructors. That is, when users and addresses are pulled from the database separately, the navigation properties are populated.
However, lazy loading does not overwrite initialized reference navigation properties.
In EF-core 3, initializing a reference navigation property prevents Include from working properly.
So, in conclusion, also in EF-core, initializing reference navigation properties in constructors may cause trouble. Don't do it. It doesn't make sense anyway.
In all my projects I follow the rule - "Collections should not be null. They are either empty or have values."
The first example is acceptable when creation of these entities is the responsibility of third-party code (e.g. an ORM) and you are working on a short-lived project.
The second example is better, since:
you are sure that the entity has all properties set
you avoid silly NullReferenceExceptions
you make consumers of your code happier
People who practice Domain-Driven Design expose collections as read-only and avoid setters on them (see What is the best practice for readonly lists in NHibernate).
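In C# that usually looks something like the following sketch (how to map the backing field in EF is a separate concern):
private readonly List<Address> _addresses = new List<Address>();
public IReadOnlyCollection<Address> Addresses => _addresses.AsReadOnly();
public void AddAddress(Address address) => _addresses.Add(address);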
Q1: Which one is better? why? Pros and Cons?
It is better to expose not-null collections, since you avoid additional checks in your code (e.g. Addresses). It is a good contract to have in your codebase. But it is OK for me to expose a nullable reference to a single entity (e.g. License).
Q2: In second approach there would be stack overflow if the License class has a reference to User class too. It means we should have one-way reference.(?) How we should decide which one of the navigation properties should be removed?
When I developed a data mapper pattern myself, I tried to avoid bidirectional references and rarely had a reference from child to parent.
When I use ORMs it is easy to have bidirectional references.
When I need to build a test entity with a bidirectional reference set for my unit tests, I follow these steps:
I build the parent entity with an empty children collection.
Then I add every child, with a reference to the parent entity, into the children collection.
Instead of having a parameterless constructor in the License type, I would make the user property required.
public class License
{
public License(User user)
{
this.User = user;
}
public int Id { get; set; }
public string Key { get; set; }
public DateTime Expiration { get; set; }
public virtual User User { get; set; }
}
It's redundant to new up the list, since your POCO depends on lazy loading.
Lazy loading is the process whereby an entity or collection of entities is automatically loaded from the database the first time that a property referring to the entity/entities is accessed. When using POCO entity types, lazy loading is achieved by creating instances of derived proxy types and then overriding virtual properties to add the loading hook.
If you removed the virtual modifier, you would turn off lazy loading, and in that case your code would no longer work (because nothing would initialize the list).
Note that lazy loading is a feature supported by Entity Framework; if you create the class outside the context of a DbContext, the depending code would obviously suffer from a NullReferenceException.
HTH
The other answers fully answer the question, but I'd like to add something since this question is still relevant and comes up in google searches.
When you use the "code first model from database" wizard in Visual Studio all collections are initialized like so:
public partial class SomeEntity
{
[System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Usage", "CA2214:DoNotCallOverridableMethodsInConstructors")]
public SomeEntity()
{
OtherEntities = new HashSet<OtherEntity>();
}
public int Id { get; set; }
[System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Usage", "CA2227:CollectionPropertiesShouldBeReadOnly")]
public virtual ICollection<OtherEntity> OtherEntities { get; set; }
}
I tend to take wizard output as basically being an official recommendation from Microsoft, which is why I'm adding to this five-year-old question. Therefore, I'd initialize all collections as HashSets.
And personally, I think it'd be pretty slick to tweak the above to take advantage of C# 6.0's auto-property initializers:
public virtual ICollection<OtherEntity> OtherEntities { get; set; } = new HashSet<OtherEntity>();
Q1: Which one is better? why? Pros and Cons?
The second variant, where virtual properties are set inside an entity constructor, has a definite problem known as "virtual member call in a constructor".
As for the first variant with no initialization of navigation properties, there are 2 situations depending on who / what creates an object:
Entity framework creates an object
Code consumer creates an object
The first variant is perfectly valid when Entity Framework creates an object,
but can fail when a code consumer creates an object.
The solution to ensure a code consumer always creates a valid object is to use a static factory method:
Make the default constructor protected. Entity Framework is fine working with protected constructors.
Add a static factory method that creates an empty object, e.g. a User object, sets all properties (e.g. Addresses and License) after creation, and returns a fully constructed User object.
This way Entity Framework uses the protected default constructor to create a valid object from data obtained from some data source, and a code consumer uses the static factory method to create a valid object.
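A minimal sketch of that approach, using the User class from the question (the factory's parameters are only an assumption):
public class User
{
    // EF can use a protected parameterless constructor when materializing entities
    protected User() { }

    public int Id { get; set; }
    public string UserName { get; set; }
    public virtual ICollection<Address> Addresses { get; set; }
    public virtual License License { get; set; }

    // Code consumers always get a fully initialized object through the factory
    public static User Create(string userName, License license)
    {
        return new User
        {
            UserName = userName,
            Addresses = new HashSet<Address>(),
            License = license
        };
    }
}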
I use the answer from this: Why is my Entity Framework Code First proxy collection null and why can't I set it?
I had problems with constructor initialization. The only reason I do this is to make test code easier. Making sure the collection is never null saves me from constantly initialising it in tests, etc.

Multiple MemoryCache in ASP.NET Core Web API

I have an ASP.NET Core 2.2 Web API. I'd like to improve performance by using MemoryCache. However, I need to cache 2 different types, both of which use integer keys. One type is a list of users and the other is a list of groups.
Now, I'm adding the MemoryCache service in the Startup.cs file:
services.AddMemoryCache();
and then I'm using dependency injection to access this cache in two different places (in Middleware and in a service I wrote).
From what I understand, both these caches are the same instance. So when I add my various users and groups to it, since they both have integer keys, there will be conflicts. How can I handle this? I thought about using two caches - one for each type - but (a) I'm not sure how to do this and (b) I've read somewhere that it's not recommended to use multiple caches. Any ideas?
Yeah, I've had the same issue before and resorted to creating an extended version of the MemoryCache that allows me to plug in different "stores". You can do it simply by wrapping the data you're sticking into the cache in a "metadata" type class. I suppose it's similar to how the ServiceDescriptors wrap your service registrations in the DI container?
Also, in specific answer to the point "I thought about using two caches - one for each type": this is where the problem will arise, because I believe IMemoryCache gets registered as a singleton by default.
I ran into this problem myself. One solution I thought of was to just instantiate two separate memory caches in a wrapper class and register the wrapper class as a singleton instance. However, this only makes sense if you have different requirements for each memory cache and/or you expect to store a massive amount of data in each memory cache (at that point, an in-memory cache may not be what you want).
Here are some example classes I want to cache.
// If using a record, GetHashCode is already implemented through each member already
public record Person(string Name);
// If using a class, ensure that Equals/GetHashCode is overridden
public class Car
{
public string Model { get; }
public Car(string model)
{
Model = model;
}
public override bool Equals(object? obj)
{
return obj is Car car &&
Model == car.Model;
}
public override int GetHashCode()
{
return HashCode.Combine(Model);
}
}
Here is a dual MemoryCache implementation.
public class CustomCache : ICustomCache // Expose what you need and register it as singleton instance
{
private readonly MemoryCache personCache;
private readonly MemoryCache carCache;
public CustomCache(IOptions<MemoryCacheOptions> personCacheOptions, IOptions<MemoryCacheOptions> carCacheOptions)
{
personCache = new MemoryCache(personCacheOptions);
carCache = new MemoryCache(carCacheOptions);
}
public void CreatePersonEntry(Person person)
{
_ = personCache.Set(person, person, TimeSpan.FromHours(1));
}
public void CreateCarEntry(Car car)
{
_ = carCache.Set(car, car, TimeSpan.FromHours(12));
}
}
If you don't have the above requirements, then you could just do what juunas mentioned and create an easy wrapper with a composite key. You still need to ensure GetHashCode is properly implemented for each class you want to store. Here, my composite key is just an integer (I used prime numbers, no specific reason) paired with an object. I didn't use a struct for the key as the MemoryCache uses a Dictionary<object, CacheEntry>, so I don't want to box/unbox the key.
public class CustomCache : ICustomCache // Expose what you need
{
private readonly IMemoryCache cache;
public CustomCache(IMemoryCache cache)
{
this.cache = cache;
}
public void CreatePersonEntry(Person person)
{
_ = cache.Set(CompositeKey.Person(person), person, TimeSpan.FromHours(1));
}
public void CreateCarEntry(Car car)
{
_ = cache.Set(CompositeKey.Car(car), car, TimeSpan.FromHours(12));
}
private record CompositeKey(int Key, object Value)
{
public static CompositeKey Person(Person value) => new(PERSON_KEY, value);
public static CompositeKey Car(Car value) => new(CAR_KEY, value);
private const int PERSON_KEY = 1123322689;
private const int CAR_KEY = 262376431;
}
}
Let me know if you see anything wrong, or if there is a better solution.

Repository OO Design - Multiple Specifications

I have a pretty standard repository interface:
public interface IRepository<TDomainEntity>
where TDomainEntity : DomainEntity, IAggregateRoot
{
TDomainEntity Find(Guid id);
void Add(TDomainEntity entity);
void Update(TDomainEntity entity);
}
We can use various infrastructure implementations in order to provide default functionality (e.g. Entity Framework, DocumentDb, Table Storage, etc.). This is what the Entity Framework implementation looks like (without any actual EF code, for simplicity's sake):
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IRepository<TDomainEntity>
where TDomainEntity : DomainEntity, IAggregateRoot
where TDataEntity : class, IDataEntity
{
protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }
public TDomainEntity Find(Guid id)
{
// Find, map and return entity using Entity Framework
}
public void Add(TDomainEntity item)
{
var entity = EntityMapper.CreateFrom(item);
// Insert entity using Entity Framework
}
public void Update(TDomainEntity item)
{
var entity = EntityMapper.CreateFrom(item);
// Update entity using Entity Framework
}
}
There is a mapping between the TDomainEntity domain entity (aggregate) and the TDataEntity Entity Framework data entity (database table). I will not go into detail as to why there are separate domain and data entities. This is a philosophy of Domain Driven Design (read about aggregates). What's important to understand here is that the repository will only ever expose the domain entity.
To make a new repository for, let's say, "users", I could define the interface like this:
public interface IUserRepository : IRepository<User>
{
// I can add more methods over and above those in IRepository
}
And then use the Entity Framework implementation to provide the basic Find, Add and Update functionality for the aggregate:
public class UserRepository : EntityFrameworkRepository<User, UserEntity>, IUserRepository
{
// I can implement more methods over and above those in IUserRepository
}
The above solution has worked great. But now we want to implement deletion functionality. I have proposed the following interface (which is an IRepository):
public interface IDeleteableRepository<TDomainEntity>
: IRepository<TDomainEntity>
{
void Delete(TDomainEntity item);
}
The Entity Framework implementation class would now look something like this:
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IDeleteableRepository<TDomainEntity>
where TDomainEntity : DomainEntity, IAggregateRoot
where TDataEntity : class, IDataEntity, IDeleteableDataEntity
{
protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }
// Find(), Add() and Update() ...
public void Delete(TDomainEntity item)
{
var entity = EntityMapper.CreateFrom(item);
entity.IsDeleted = true;
entity.DeletedDate = DateTime.UtcNow;
// Update entity using Entity Framework
// ...
}
}
As defined in the class above, the TDataEntity generic now also needs to be of type IDeleteableDataEntity, which requires the following properties:
public interface IDeleteableDataEntity
{
bool IsDeleted { get; set; }
DateTime DeletedDate { get; set; }
}
These properties are set accordingly in the Delete() implementation.
This means that, IF required, I can define IUserRepository with "deletion" capabilities which would inherently be taken care of by the relevant implementation:
public interface IUserRepository : IDeleteableRepository<User>
{
}
Provided that the relevant Entity Framework data entity is an IDeleteableDataEntity, this would not be an issue.
The great thing about this design is that I can start granularising the repository model even further (IUpdateableRepository, IFindableRepository, IDeleteableRepository, IInsertableRepository) and aggregate repositories can now expose only the relevant functionality as per our specification (perhaps you should be allowed to insert into a UserRepository but NOT into a ClientRepository). Further to this, it specifies a standardised way in which certain repository actions are done (i.e. the updating of the IsDeleted and DeletedDate columns will be universal and is not left to the individual developer).
PROBLEM
A problem with the above design arises when I want to create a repository for some aggregate WITHOUT deletion capabilities, e.g:
public interface IClientRepository : IRepository<Client>
{
}
The EntityFrameworkRepository implementation still requires TDataEntity to be of type IDeleteableDataEntity.
I can ensure that the client data entity model does implement IDeleteableDataEntity, but this is misleading and incorrect. There will be additional fields that are never updated.
The only solution I can think of is to remove the IDeleteableDataEntity generic condition from TDataEntity and then cast to the relevant type in the Delete() method:
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IDeleteableRepository<TDomainEntity>
where TDomainEntity : DomainEntity, IAggregateRoot
where TDataEntity : class, IDataEntity
{
protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }
// Find() and Update() ...
public void Delete(TDomainEntity item)
{
var entity = EntityMapper.CreateFrom(item);
var deleteableEntity = entity as IDeleteableDataEntity;
if(deleteableEntity != null)
{
deleteableEntity.IsDeleted = true;
deleteableEntity.DeletedDate = DateTime.UtcNow;
entity = deleteableEntity;
}
// Update entity using Entity Framework
// ...
}
}
Because ClientRepository does not implement IDeleteableRepository, there will be no Delete() method exposed, which is good.
QUESTION
Can anyone advise of a better architecture which leverages the C# typing system and does not involve the hacky cast?
Interestingly enough, I could do this if C# supported multiple inheritance (with separate concrete implementations for finding, adding, deleting, and updating).
I do think that you're complicating things a bit too much trying to get the most generic solution of them all; however, I think there's a pretty easy solution to your current problem.
TDataEntity is a persistence data structure; it has no domain value and it's not known outside the persistence layer. So it can have fields it won't ever use, the repository is the only one that knows about that, and it's a persistence detail. You can afford to be 'sloppy' here; things aren't that important at this level.
Even the 'hacky' cast is a good solution because it's in one place and a private detail.
It's good to have clean and maintainable code everywhere; however, we can't afford to waste time coming up with 'perfect' solutions at every layer. Personally, for view and persistence models I prefer the quickest and simplest solutions, even if they're a bit smelly.
P.S.: As a rule of thumb, generic repository interfaces are good; generic abstract repositories not so much (you need to be careful), unless you're serializing things or using a doc db.

How to easily access widely different subsets of fields of related objects/DB tables?

Imagine we have a number of related objects (equivalently DB tables), for example:
public class Person {
private String name;
private Date birthday;
private int height;
private Job job;
private House house;
..
}
public class Job {
private String company;
private int salary;
..
}
public class House {
private Address address;
private int age;
private int numRooms;
..
}
public class Address {
private String town;
private String street;
..
}
How to best design a system for easily defining and accessing widely varying subsets of data on these objects/tables? Design patterns, pros and cons, are very welcome. I'm using Java, but this is a more general problem.
For example, I want to easily say:
I'd like some object with (Person.name, Person.height, Job.company, Address.street)
I'd like some object with (Job.company, House.numRooms, Address.town)
Etc.
Other assumptions:
We can assume that we're always getting a known structure of objects on the input, e.g. a Person with its Job, House, and Address.
The resulting object doesn't necessarily need to know the names of the fields it was constructed from, i.e. for subset defined as (Person.name, Person.height, Job.company, Address.street) it can be the array of Objects {"Joe Doe", 180, "ACompany Inc.", "Main Street"}.
The object/table hierarchy is complex, so there are hundreds of data fields.
There may be hundreds of subsets that need to be defined.
A minority of fields to obtain may be computed from actual fields, e.g. I may want to get a person's age, computed as (now().getYear() - Person.birthday.getYear()).
Here are some options I see:
A SQL view for each subset.
Minuses:
They will be almost the same for similar subsets. This is OK just for field names, but not great for the joins part, which could ideally be refactored out to a common place.
Less testable than a solution in code.
Using a DTO assembler, e.g. http://www.genericdtoassembler.org/
This could be used to flatten the complex structure of input objects into a single DTO.
Minuses:
I'm not sure how I'd then proceed to easily define subsets of fields on this DTO. Perhaps if I could somehow set the ones irrelevant to the current subset to null? Not sure how.
Not sure if I can do computed fields easily in this way.
A custom mapper I came up with.
Relevant code:
// The enum has a value for each field in the Person objects hierarchy
// that we may be interested in.
public enum DataField {
PERSON_NAME(new PersonNameExtractor()),
..
PERSON_AGE(new PersonAgeExtractor()),
..
COMPANY(new CompanyExtractor()),
..
}
// This is the container for field-value pairs from a given instance of
// the object hierarchy.
public class Vector {
private Map<DataField, Object> fields;
..
}
// Extractors know how to get the value for a given DataField
// from the object hierarchy. There's one extractor per each field.
public interface Extractor<T> {
public T extract(Person person);
}
public class PersonNameExtractor implements Extractor<String> {
public String extract(Person person) {
return person.getName();
}
}
public class PersonAgeExtractor implements Extractor<Integer> {
public int extract(Person person) {
return now().getYear() - person.getBirthday().getYear();
}
}
public class CompanyExtractor implements Extractor<String> {
public String extract(Person person) {
return person.getJob().getCompany();
}
}
// Building the Vector using all the fields from the DataField enum
// and the extractors.
public class FullVectorBuilder {
public Vector buildVector(Person person) {
Vector vector = new Vector();
for (DataField field : DataField.values()) {
vector.addField(field, field.getExtractor().extract(person));
}
return vector;
}
}
// Definition of a subset of fields on the Vector.
public interface Selector {
public List<DataField> getFields();
}
public class SampleSubsetSelector implements Selector {
private List<DataField> fields = ImmutableList.of(PERSON_NAME, COMPANY);
...
}
// Finally, a builder for the subset Vector, choosing only
// fields pointed to by the selector.
public class SubsetVectorBuilder {
public Vector buildSubsetVector(Vector fullVector, Selector selector) {
Vector subsetVector = new Vector();
for (DataField field : selector.getFields()) {
subsetVector.addField(field, fullVector.getValue(field));
}
return subsetVector;
}
}
Minuses:
Need to create a tiny Extractor class for each of hundreds of data fields.
This is a custom solution that I came up with; it seems to work and I like it, but I feel this problem must have been encountered and solved before, likely in a better way. Has it?
Edit
Each object knows how to turn itself into a Map of fields, keyed on an enum of all fields.
E.g.
public enum DataField {
PERSON_NAME,
..
PERSON_AGE,
..
COMPANY,
..
}
public class Person {
private String name;
private Date birthday;
private int height;
private Job job;
private House house;
..
public Map<DataField, Object> toMap() {
return ImmutableMap.<DataField, Object>builder()
.put(DataField.PERSON_NAME, name)
.put(DataField.BIRTHDAY, birthday)
.put(DataField.HEIGHT, height)
.put(DataField.AGE, now().getYear() - birthday.getYear())
.build();
}
}
Then, I could build a Vector combining all the Maps, and select subsets from it like in 3.
Minuses:
Enum name clashes, e.g. if Job has an Address and House has an Address, then I want to be able to specify a subset taking street name of both. But how do I then define the toMap() method in the Address class?
No obvious place to put code doing computed fields requiring data from more than one object, e.g. physical distance from Address of House to Address of Company.
Many thanks!
Over in-memory object mapping in the application, I would favor processing the data in the database for better performance. Views, or more elaborate OLAP/data warehouse tooling, could do the trick. If the calculated fields remain basic, as in "age = now - birth", I see nothing wrong with having that logic in the DB.
On the code side, given the large number of DTOs you have to deal with, you could use classless dynamic objects (available in some JVM languages) or JSON objects. The idea is that when a data structure changes, you only need to modify the DB and the UI, saving you the cost of changing a whole bunch of classes in between.
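As a rough illustration of the JSON-object idea, a subset could be assembled into an ad-hoc Jackson ObjectNode instead of a dedicated class (this assumes getters on the domain objects, which the question elides):
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class SubsetProjector {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Builds a "classless" object holding only the fields of one particular subset.
    public static ObjectNode nameCompanyStreet(Person person) {
        ObjectNode node = MAPPER.createObjectNode();
        node.put("name", person.getName());                             // assumed getter
        node.put("company", person.getJob().getCompany());              // assumed getters
        node.put("street", person.getHouse().getAddress().getStreet()); // assumed getters
        return node;
    }
}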

NHibernate narrowing proxy warning

We are building an ASP.NET MVC application utilizing NH for data access. Using NH Profiler I see a lot of warnings like "WARN: Narrowing proxy to Domain.CaseTask - this operation breaks ==". I get these very often when executing queries for classes which are mapped in a table per subclass, for example, using the NH Linq provider:
Query<ICaseTask>().Where(c => c.Assignee == Of || c.Operator == Of)
where the class CaseTask inherits from Task, triggers the warning.
Information about the warning on the internet is scarce and mostly hints that this is something to be ignored... What exactly does this warning warn about? Is it something I should seek to correct?
The reality is more complicated. When you load an entity using session.Load, or you access a property that is lazy-loaded, NHibernate returns a proxy object. That proxy object will be hydrated (data will be loaded from the DB) when you access any of its properties for the first time. To achieve this, NHibernate generates a proxy class that extends the entity class and overrides all property getters and setters. This works perfectly when inheritance is not used, since you have no way to differentiate between the proxy and the entity class (the proxy's base class), e.g. a simple test like proxy is MyEntity will always work.
Now imagine that we have a Person entity:
class Person {
// lazy-loaded (virtual so NHibernate can proxy it)
public virtual Animal Pet { get; set; }
}
And we also have Animal class hierarchy:
public abstract class Animal { ... }
public class Cat : Animal { ... }
public class Dog : Animal { ... }
Now, since the Pet property is lazy-loaded, when you ask NHibernate for a person's pet you will get a proxy object:
var pet = somePerson.Pet; // pet will be a proxy
But since Pet is a lazy-loaded property, NH does not know whether it will be an instance of a Cat or a Dog, so it does its best and creates a proxy that extends Animal. The proxy will pass the test pet is Animal but will fail both pet is Cat and pet is Dog.
Now assume that you access some property of the pet object, forcing NH to load data from the DB. NH now knows that your pet is, e.g., a Cat, but the proxy is already generated and cannot be changed.
This forces NHibernate to issue a warning that the original proxy for pet, which extends Animal, will be narrowed to type Cat. This means that from now on the proxy object for the animal with pet.Id that you create using session.Load<Animal>(pet.Id) will extend Cat. It also means that, since the Cat proxy is now stored as part of the session, if we load a second person that shares the cat with the first, NH will use the already available Cat proxy instance to populate the lazy-loaded property.
One of the consequences is that the object reference to pet will be different from the reference obtained by session.Load<Animal>(pet.Id) (in the object.ReferenceEquals sense).
// example - say parent and child share *the same* pet
var pet = child.Pet; // NH will return proxy that extends Animal
pet.DoStuff(); // NH loads data from DB
var parent = child.Parent; // lazy-loaded property
var pet2 = parent.Pet; // NH will return proxy that extends Cat
Assert.NotSame(pet, pet2);
Now, when may this cause harm to you?
When you put your entities into Sets or Dictionaries in your code, or if you use any other structure that requires the Equals/GetHashCode pair to work. This can easily be fixed by providing a custom Equals/GetHashCode implementation (see: http://www.onjava.com/pub/a/onjava/2006/09/13/dont-let-hibernate-steal-your-identity.html?page=1); a sketch follows below.
When you try to cast your proxy object to the target type, e.g. (Cat)pet, but again there are known solutions (e.g. Getting proxies of the correct type in NHibernate).
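A minimal identifier-based sketch of such an implementation (assuming an integer Id; transient entities whose Id has not been assigned yet need extra care):
public class Entity
{
    public virtual int Id { get; set; }

    public override bool Equals(object obj)
    {
        var other = obj as Entity;              // also matches NHibernate proxies, which extend the entity
        if (other == null) return false;
        if (ReferenceEquals(this, other)) return true;
        return Id != 0 && Id == other.Id;       // unsaved objects are only equal by reference
    }

    public override int GetHashCode()
    {
        return Id.GetHashCode();
    }
}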
So the moral is to avoid inheritance in your domain model as much as possible.
This warning is about classes having properties or fields whose type is a subclass. I.e.:
public class Animal
{
public int Id {get;set;}
}
public class Cat : Animal
{
public int Weight {get;set;}
}
public class Person
{
public Cat Pet {get;set;}
}
NHibernate gets upset when it loads the person entity because it doesn't want to cast for you, since the behavior becomes unpredictable. Unless you tell NHibernate how to deal with Equals (among other logic), it won't know how to do that comparison on its own.
The basic idea to correct this is to let NHibernate put the base class object into the graph and then deal with the casting yourself (note that this setup would use some slightly different mappings - I've done this to simplify the code, but it can obviously be done by keeping the properties as full getters/setters):
public class Animal
{
public int Id {get;set;}
}
public class Cat : Animal
{
public int Weight {get;set;}
}
public class Person
{
private Animal _pet;
public Cat Pet {
get{return _pet as Cat;}
}
}