Business rule split between two classes - OOP

I have a project allocation domain with the following business rules:
When a new employee is allocated to a project, the total expenditure should not exceed the budget amount.
For an employee, the total allocation percentage should not exceed 100%.
I have created the entities in C# as shown below.
QUESTION
The Allocate logic is split across two classes – Project and Employee. The List<Allocation> is passed as a parameter to the Allocate method rather than being added as a property of the class. Is this the correct approach, or do I need to add List<Allocation> as a property in these two classes?
Note:
Database
Entities
Code
Project
public class Project
{
    public int ProjectID { get; set; }
    public int BudgetAmount { get; set; }
    public string ProjectName { get; set; }

    public void Allocate(Role newRole, int newPercentage, Employee newEmployee, List<Allocation> existingAllocationsInProject)
    {
        int currentTotalExpenditure = 0;
        if (existingAllocationsInProject != null)
        {
            foreach (Allocation alloc in existingAllocationsInProject)
            {
                int allocationExpenditure = alloc.Role.BillRate * alloc.PercentageAllocation / 100;
                currentTotalExpenditure = currentTotalExpenditure + allocationExpenditure;
            }
        }

        int newAllocationExpenditure = newRole.BillRate * newPercentage / 100;
        if (currentTotalExpenditure + newAllocationExpenditure <= BudgetAmount)
        {
            List<Allocation> existingAllocationsOfEmployee = GetAllocationsForEmployee(newEmployee.EmployeeID);
            bool isValidAllocation = newEmployee.Allocate(newRole, newPercentage, existingAllocationsOfEmployee);
            if (isValidAllocation)
            {
                // Do allocation
            }
            else
            {
                throw new Exception("Employee is not available for allocation");
            }
        }
        else
        {
            throw new Exception("Budget Exceeded");
        }
    }
}
Employee
public class Employee
{
    public int EmployeeID { get; set; }
    public string EmployeeName { get; set; }

    public bool Allocate(Role newRole, int newPercentage, List<Allocation> existingAllocationsOfEmployee)
    {
        int currentTotalAllocation = 0;
        if (existingAllocationsOfEmployee != null)
        {
            foreach (Allocation alloc in existingAllocationsOfEmployee)
            {
                currentTotalAllocation = currentTotalAllocation + alloc.PercentageAllocation;
            }
        }

        if (currentTotalAllocation + newPercentage <= 100)
        {
            return true;
        }
        return false;
    }
}
References
The following is from Repository Pattern without an ORM:
What behaviour is there that requires the customer to have a list of orders? When you give more thought to the behaviour of your domain (i.e. what data is required at what point) you can model your aggregates based around use cases and things become much clearer and much easier as you are only change tracking for a small set of objects in the aggregate boundary.
I suspect that Customer should be a separate aggregate without a list of orders, and Order should be an aggregate with a list of order lines. If you need to perform operations on each order for a customer then use orderRepository.GetOrdersForCustomer(customerID); make your changes then use orderRepository.Save(order);

I have a few comments:
Separating the allocate logic is the right thing to do
Consider moving the allocate logic to service classes, e.g. ProjectService and EmployeeService, so the domain models can stay logic free
Consider adding a new AllocationService class to manipulate the allocations, e.g.:
public void Allocate(Project project, Role role, Employee employee, int percentage)
{
    // Fetch allocations from the allocation repository
    var allocations = _allocationRepository.GetAllocationsByProject(project);

    // project allocation logic
    if (!_projectService.Allocate(project, role, percentage))
    {
        // throw exception
    }

    // allocate to employee
    if (!_employeeService.Allocate(employee, role, percentage))
    {
        // throw exception
    }

    // create new allocation
    _allocationRepository.Add(new Allocation
    {
        // ...
    });
}
The allocation repository and the services can be injected via the constructor, e.g.
public interface IAllocationRepository
{
    IEnumerable<Allocation> GetAllocationsByProject(Project project);
    IEnumerable<Allocation> GetAllocationsByEmployee(Employee employee);
    void Add(Allocation allocation);
}
The IAllocationRepository can be injected into EmployeeService and ProjectService as well, so you don't need to pass the List<Allocation> around.
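A minimal sketch of that wiring; the AllocationService name follows the suggestion above, and the concrete ProjectService/EmployeeService types are assumptions:
public class AllocationService
{
    private readonly IAllocationRepository _allocationRepository;
    private readonly ProjectService _projectService;
    private readonly EmployeeService _employeeService;

    public AllocationService(
        IAllocationRepository allocationRepository,
        ProjectService projectService,
        EmployeeService employeeService)
    {
        _allocationRepository = allocationRepository;
        _projectService = projectService;
        _employeeService = employeeService;
    }

    // Allocate(Project project, Role role, Employee employee, int percentage) as sketched above.
}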

The business rules also depend on the existing Allocations. What about making Allocation an aggregate and wrapping the business rules in its factory? For example:
public Allocation Allocate(Project project, Role newRole, int newPercentage, Employee newEmployee)
{
    List<Allocation> existingAllocationsInProject = allocationRepository.findBy(project);
    // validate project rule

    List<Allocation> existingAllocationsInEmployee = allocationRepository.findBy(newEmployee);
    // validate employee rule
}
In this case we don't have to worry about how to find the existing allocations, and the rule validation could be further refactored using the Specification pattern.
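If it helps, here is a hedged sketch of that Specification idea, assuming the entities from the question; the interface and class names are invented for illustration, and only the project budget rule is shown (needs System.Linq):
using System.Collections.Generic;
using System.Linq;

public interface IAllocationSpecification
{
    bool IsSatisfiedBy(Project project, Role role, int percentage,
                       IEnumerable<Allocation> existingAllocations);
}

public class WithinProjectBudgetSpecification : IAllocationSpecification
{
    public bool IsSatisfiedBy(Project project, Role role, int percentage,
                              IEnumerable<Allocation> existingAllocations)
    {
        // Sum what the project has already committed, then check the new allocation still fits.
        int currentTotalExpenditure = existingAllocations
            .Sum(a => a.Role.BillRate * a.PercentageAllocation / 100);
        int newAllocationExpenditure = role.BillRate * percentage / 100;
        return currentTotalExpenditure + newAllocationExpenditure <= project.BudgetAmount;
    }
}
The factory's Allocate would then evaluate each specification against the allocations it loaded and only create the Allocation when all of them pass.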

The Allocate logic is split across two classes – Project and Employee.
I would not do this since it splits the allocation responsibility, thus breaking the single responsibility principle. If you find that it belongs neither to Project nor to Employee then a domain service may do the job. In general, operations involving several entities that do not form part of the same aggregate are candidates to be located in such a service.
The List<Allocation> is passed as a parameter to the Allocate method rather than being added as a property of the class. Is this the correct approach, or do I need to add List<Allocation> as a property in these two classes?
My answer would be neither of those: add List<Allocation> only to your Project class.
I think that what you need to consider is what Allocation really stands for in your domain. Is it an entity that forms part of the Project aggregate? May it even be a value object instead of an entity?
Sometimes I find myself losing perspective of the domain when I have database relations around. In this case, I see that the Allocation table does not even have its own id; instead, it seems to represent just the relationship between Project, Employee and Role with several attributes. Although the domain model should not care about persistence, this may be giving some hints about what Allocation really represents.
From my point of view, an Allocation only makes sense in the context of a Project and thus it should be part of that aggregate. Arguably, its equality is not based on identity and thus, it may even be a value object. The responsibility of ensuring that the first restriction - not exceeding the budget upon allocation - is satisfied, belongs to the Project entity and it may be performed upon employee allocation.
The tricky constraint is the second one: an Employee not exceeding 100% allocation across several Projects. In this case, you may be interested in providing means to obtain those Projects to which a given Employee is allocated, maybe through your Project repository. You can also provide an operation to compute the total allocation for a given Employee, possibly through a domain service.
Note that you are actually doing all this logic in the Allocate method of your Project class: first you obtain all the Allocations through GetAllocationsForEmployee and then you pass the retrieved list to Employee.Allocate, which could actually be named CanBeAllocated. You may feel that it is the responsibility of the Employee to ensure this business rule, but I think it has little to do with either its properties or its behavior, and thus it should rather be part of the Project.Allocate method, or a domain service if you keep feeling that there are mixed responsibilities.
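For completeness, a hedged sketch of that domain-service variant; the service name is made up, and it relies on a repository along the lines of the IAllocationRepository shown in an earlier answer (needs System.Linq):
using System.Linq;

public class EmployeeAllocationService
{
    private readonly IAllocationRepository _allocations;

    public EmployeeAllocationService(IAllocationRepository allocations)
    {
        _allocations = allocations;
    }

    // Cross-project rule: the employee's total allocation must stay within 100%.
    public bool CanBeAllocated(Employee employee, int newPercentage)
    {
        int currentTotal = _allocations.GetAllocationsByEmployee(employee)
                                       .Sum(a => a.PercentageAllocation);
        return currentTotal + newPercentage <= 100;
    }
}
Project.Allocate would then keep the budget rule to itself and ask this service about the employee before creating the Allocation.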
As a final note, in case there is some confusion given previous comments, there is nothing wrong with putting logic inside your model classes; it is actually a fundamental part of domain modelling! The AnemicDomainModel post by Martin Fowler provides some good insight into this.

Related

Enforcing invariants with scope on child entity of aggregate root - DDD

I'm trying to understand how to represent certain DDD (Domain Driven Design) rules.
Following the Blue Book convention we have:
The root Entity has global identity and is responsible for checking invariants.
The root entity controls access and cannot be blindsided by changes to its internals.
Transient references to internal members can be passed out for use within a single operation only.
I'm having a hard time finding the best way to enforce the invariants when clients can have access to internal entities.
This problem of course only happens if the child entity is mutable.
Suppose this toy example where you have a Car with four Tires. I want to track the usage of each Tire independently.
Clearly Car is an Aggregate Root and Tire is a child Entity.
Business rule: Milage cannot be added to a single Tire; milage can only be added to all four Tires at once, while they are attached to a Car.
A naive implementation would be:
public class Tire
{
    public double Milage { get; private set; }
    public DateTime PurchaseDate { get; set; }
    public string ID { get; set; }

    public void AddMilage(double milage) => Milage += milage;
}

public class Car
{
    public Tire FrontLefTire { get; private set; }
    public Tire FrontRightTire { get; private set; }
    public Tire RearLeftTire { get; private set; }
    public Tire RearRightTire { get; private set; }

    public void AddMilage(double milage)
    {
        FrontLefTire.AddMilage(milage);
        FrontRightTire.AddMilage(milage);
        RearLeftTire.AddMilage(milage);
        RearRightTire.AddMilage(milage);
    }

    public void RotateTires()
    {
        var oldFrontLefTire = FrontLefTire;
        var oldFrontRightTire = FrontRightTire;
        var oldRearLeftTire = RearLeftTire;
        var oldRearRightTire = RearRightTire;

        RearRightTire = oldFrontLefTire;
        FrontRightTire = oldRearRightTire;
        RearLeftTire = oldFrontRightTire;
        FrontLefTire = oldRearLeftTire;
    }

    //...
}
But the Tire.AddMilage method is public, meaning any service could do something like this:
Car car = new Car(); //...
// Adds Milage to all tires, respecting invariants - OK
car.AddMilage(200);
//corrupt access to front tire, change milage of single tire on car
//violating business rules - ERROR
car.FrontLefTire.AddMilage(200);
Possible solutions that crossed my mind:
Create events on Tire to validate the change, and implement it on Car
Make Car a factory of Tire, passing a TireState on its constructor, and holding a reference to it.
But I feel there should be an easier way to do this.
What do you think ?
Transient references to internal members can be passed out for use within a single operation only.
In the years since the blue book was written, this practice has changed; passing out references to internal members that support mutating operations is Not Done.
A way to think of this is to take the Aggregate API (which currently supports both queries and commands), and split that API into two (or more) interfaces; one which supports the command operations, and another that supports the queries.
The command operations still follow the usual pattern, providing a path by which the application can ask the aggregate to change itself.
The query operations return interfaces that include no mutating operations, neither directly, nor by proxy.
root.getA() // returns an A API with no mutation operations
root.getA().getB() // returns a B API with no mutation operations
Queries are queries all the way down.
In most cases you can avoid querying entities altogether, and instead return values that represent the current state of the entity.
Another reason to avoid sharing child entities is that, for the most part, the choice to model that part of the aggregate as a separate entity is a decision that you might want to change in the domain model. By exposing the entity in the API, you are creating coupling between that implementation choice and consumers of the API.
(One way of thinking of this: the Car aggregate isn't a "car", it's a "document" that describes a "car". The API is supposed to insulate the application from the specific details of the document.)
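A hedged C# sketch of that split, reusing the Tire from the question; the TireReading value type and the internal tire list are assumptions made for brevity:
using System.Collections.Generic;
using System.Linq;

public sealed class TireReading
{
    // Immutable snapshot of a tire's state - a query result, not the entity itself.
    public TireReading(string id, double milage)
    {
        Id = id;
        Milage = milage;
    }

    public string Id { get; }
    public double Milage { get; }
}

public class Car
{
    private readonly List<Tire> tires = new List<Tire>();

    // Command: mutation goes through the aggregate root.
    public void AddMilage(double milage)
    {
        foreach (var tire in tires)
        {
            tire.AddMilage(milage);
        }
    }

    // Query: callers get values, never a mutable Tire reference.
    public IReadOnlyList<TireReading> TireReadings =>
        tires.Select(t => new TireReading(t.ID, t.Milage)).ToList();
}
With this shape, car.FrontLefTire.AddMilage(200) simply cannot be written any more.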
There should be no getters for the Tires.
Getters get you in trouble. Removing the getters is not just a matter of DDD Aggregate Roots, but a matter of OO, Law of Demeter, etc.
Think about why you would need the Tires from a Car and move that functionality into the Car itself.
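For instance, a minimal sketch keeping the four named tires from the question: a caller that used to reach for car.FrontLefTire.Milage asks the Car instead.
public class Car
{
    private Tire frontLefTire, frontRightTire, rearLeftTire, rearRightTire;

    // AddMilage and RotateTires as in the question ...

    // The question "how far have the tires travelled?" is answered by the Car itself,
    // so no Tire reference ever has to leave the aggregate.
    public double TotalTireMilage =>
        frontLefTire.Milage + frontRightTire.Milage +
        rearLeftTire.Milage + rearRightTire.Milage;
}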

Should there be one or multiple XXXRepository instances in my system, with DDD?

There's something that has been bothering me from my DDD readings. From what I've seen, it seems as if there is only one repository instance for each given aggregate root type in my system.
Consider, for instance, the following imaginary situation as an abstraction of a deeper domain model:
When coding in a "standard style" I'd consider that each Owner in my system would have its own collection of cars, so there would be as many Car collections (should I call them Repositories?) as there are Owners. But, as stated previously, it seems as if in DDD I should have only one CarRepository in the whole system (I've seen examples in which they are accessed as static classes), and to do simple operations such as adding cars to the Owner I should make use of a domain service, which seems, for the simple case, not very API friendly.
Am I right about only having one CarRepository instantiated in my system (Singleton), or am I missing something? I'd like to strive for something like
public void an_owner_has_cars() throws Exception {
    Owner owner = new Owner(new OwnerId(1));
    CarId carId = new CarId(1);
    Car car = new Car(carId);
    owner.addCar(car);
    Assert.assertEquals(car, owner.getCarOf(carId));
}
but that doesn't seem to be possible without injecting a repository into Owner, something that seems to be kind of forbidden.
A repository does not represent a collection that belongs to another entity. The idea is that it represents the entire collection of entities.
So in your example Car is an entity and probably an aggregate. Your model is OK on a conceptual level, but you need to break the tight coupling between Car and Owner, since Owner is most definitely an AR and, in your current model, deleting it would mean all cars belonging to it should be deleted as well.
What you are probably after is something like this:
public class Owner {
    private IEnumerable<OwnedCar> cars;
}

public class OwnedCar {
    public Guid CarId { get; set; }
}
Or, as an alternative to a VO:
public class Owner {
    private IEnumerable<Guid> carsOwned;
}
So one AR should not reference another AR instance.
Another point is that you probably do not want to inject repositories into entities since that may indicate a bit of a design flaw (somewhat of a code smell).
To get the owned cars into the Owner would be the job of the OwnerRepository since it is part of the same aggregate. There would be no OwnedCarRepository since it is a value object.
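A minimal sketch of that repository, assuming the Owner shape above; the interface name is an assumption:
public interface IOwnerRepository
{
    // Loads the Owner together with its OwnedCar value objects - they belong to the same aggregate.
    Owner Get(Guid ownerId);

    void Save(Owner owner);
}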
100% for sure, you don't have to make a singleton CarRepository unless you're working in a legacy system which doesn't use any dependency injection mechanism.
If you find you need to inject CarRepository into Owner to retrieve the cars belonging to a specific owner, maybe it's a hint that you should re-model their relationship like:
public class Owner {
}

public class Car {
    private Owner owner;
}
And use CarRepository to achieve your goal:
public interface CarRepository {
    List<Car> findBy(String owner);
}
And, just a speculation, the static part may refer to DomainEvents, like:
public class Owner {
    public long quantityOfCarsOwned() {
        return DomainEvents.raise(new SumCarsEvent(this)); // static
    }
}

public class SumCarsEventHandler {
    // Injected; SumCarsEventHandler should be a stateless bean managed by a container like Spring.
    private CarRepository carRepository;

    public long handle(SumCarsEvent event) {
        return carRepository.countBy(event.getOwner());
    }
}
In a very simple case, this is just too complicated, I think.

Accept Interface into Collection (Covariance) troubles with nHibernate

I am using Fluent nHibernate for my persistence layer in an ASP.NET MVC application, and I have come across a bit of a quandary.
I have a situation where I need to use an abstraction to store objects in a collection; in this situation, an interface is the most logical choice from a pure C# perspective.
Basically, an object (Item) can have Requirements. A requirement can be many things. In a native C# situation, I would merely accomplish this with the following code.
interface IRequirement
{
    // methods and properties necessary for evaluation
}

class Item
{
    public virtual int Id { get; set; }
    public virtual IList<IRequirement> Requirements { get; set; }
}
A crude example. This works fine in native C# - however because the objects have to be stored in a database, it becomes a bit more complicated than that. Each object that implements IRequirement could be a completely different kind of object. Since nHibernate (or any other ORM that I have discovered) cannot really understand how to serialize an interface, I cannot think of, for the life of me, how to approach this scenario. I mean, I understand the problem.
This makes no sense to the database/orm. I understand completely why, too.
class SomeKindOfObject
{
    public virtual int Id { get; set; }
    // ... some other methods relative to this base type
}

class OneRequirement : SomeKindOfObject, IRequirement
{
    public virtual string Name { get; set; }
    // some more methods and properties
}

class AnotherKindOfObject
{
    public virtual int Id { get; set; }
    // ... more methods and properties, different from SomeKindOfObject
}

class AnotherRequirement : AnotherKindOfObject, IRequirement
{
    // yet more methods and properties relative to AnotherKindOfObject's inheritance hierarchy
}

class OneRequirementMap : ClassMap<OneRequirement>
{
    public OneRequirementMap()
    {
        // etc
        Table("OneRequirement");
    }
}

class AnotherRequirementMap : ClassMap<AnotherRequirement>
{
    public AnotherRequirementMap()
    {
        //
        Table("OtherRequirements");
    }
}

class ItemMap : ClassMap<Item>
{
    public ItemMap()
    {
        // ... Now we have a problem.
        Map( x => x.Requirements ); // does not compute...
        // additional mapping
    }
}
So, does anyone have any ideas? I cannot seem to use generics, either, so making a basic Requirement<T> type seems out. I mean the code works and runs, but the ORM cannot grasp it. I realize what I am asking here is probably impossible, but all I can do is ask.
I would also like to add, I do not have much experience with nHibernate, only Fluent nHibernate, but I have been made aware that both communities are very good and so I am tagging this as both. But my mapping at present is 100% 'fluent'.
Edit
I actually discovered Programming to interfaces while mapping with Fluent NHibernate, which touches on this a bit, but I'm still not sure it is applicable to my scenario. Any help is appreciated.
UPDATE (02/02/2011)
I'm adding this update in response to some of the answers posted, as my results are ... a little awkward.
Taking the advice, and doing more research, I've designed a basic interface.
interface IRequirement
{
    // ... Same as it always was
}
and now I establish my class mapping:
class IRequirementMap : ClassMap<IRequirement>
{
    public IRequirementMap()
    {
        Id( x => x.Id );
        UseUnionSubclassForInheritanceMapping();
        Table("Requirements");
    }
}
And then I map something that implements it. This is where it gets very freaky.
class ObjectThatImplementsRequirementMap : ClassMap<ObjectThatImplementsRequirement>
{
    public ObjectThatImplementsRequirementMap()
    {
        Id(x => x.Id); // Yes, I am base-class mapping it.
        // other properties
        Table("ObjectImplementingRequirement");
    }
}

class AnotherObjectThatHasRequirementMap : ClassMap<AnotherObjectThatHasRequirement>
{
    public AnotherObjectThatHasRequirementMap()
    {
        Id(x => x.Id); // Yes, I am base-class mapping it.
        // other properties
        Table("AnotheObjectImplementingRequirement");
    }
}
This is not what people have suggested, but it was my first approach, and I am showing it because I got some very freaky results - results that really make no sense to me.
It Actually Works... Sort Of
Running the following code yields unanticipated results.
// setup ISession
// setup Transaction

var requirements = new List<IRequirement>
{
    new ObjectThatImplementsRequirement
    {
        // properties, etc..
    },
    new AnotherObjectThatHasRequirement
    {
        // other properties.
    }
};

// add to session.
// commit transaction.
// close writing block.

// setup new session
// setup new transaction

var requireables = session.Query<IRequirement>();
foreach (var requireable in requireables)
    Console.WriteLine( requireable.Id );
Now things get freaky. I get the results...
1
1
This makes no sense to me. It shouldn't work. I can even query the individual properties of each object, and they have retained their type. Even if I run the insertion, close the application, then run the retrieval (so as to avoid the possibility of caching), they still have the right types. But the following does not work.
class SomethingThatHasRequireables
{
    // ...
    public virtual IList<IRequirement> Requirements { get; set; }
}
Trying to add to that collection fails (as I expect it to). Here is why I am confused.
If I can add to the generic IList<IRequirement> in my session, why not in an object?
How is nHibernate understanding the difference between two entities with the same Id, if they are both mapped as the same kind of object, in one scenario, and not the other?
Can someone explain to me what in the world is going on here?
The suggested approach is to use SubclassMap<T>; however, the problem with that is the number of identities and the size of the table. I am concerned about scalability and performance if multiple objects (up to about 8) are referencing identities from one table. Can someone give me some insight on this one specifically?
Take a look at the chapter Inheritance mapping in the reference documentation. In the chapter Limitations you can see what's possible with which mapping strategy.
You've chosen one of the "table per concrete class" strategies, as far as I can see. You may need <one-to-many> with inverse=true or <many-to-any> to map it.
If you want to avoid this, you need to map IRequirement as a base class into a table; then it is possible to have foreign keys to that table. Doing so you turn it into a "table per class-hierarchy" or "table per subclass" mapping. This is of course not possible if another base class is already mapped, e.g. SomeKindOfObject.
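A hedged Fluent NHibernate sketch of that direction (table per subclass); it assumes IRequirement exposes an Id, as in your later update, and only shows one concrete requirement:
// One base table that foreign keys can point at.
public class RequirementMap : ClassMap<IRequirement>
{
    public RequirementMap()
    {
        Table("Requirements");
        Id(x => x.Id);
    }
}

// Each concrete requirement becomes a joined subclass with its own table.
public class OneRequirementMap : SubclassMap<OneRequirement>
{
    public OneRequirementMap()
    {
        Table("OneRequirement");
        Map(x => x.Name);
    }
}

// The collection on Item is now an ordinary one-to-many against the Requirements table.
public class ItemMap : ClassMap<Item>
{
    public ItemMap()
    {
        Table("Item");
        Id(x => x.Id);
        HasMany(x => x.Requirements)
            .KeyColumn("ItemId")
            .Cascade.All();
    }
}
Without a discriminator on the base mapping, Fluent NHibernate treats SubclassMap as joined-subclass, i.e. "table per subclass".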
Edit: some more information about <one-to-many> with inverse=true and <many-to-any>.
When you use <one-to-many>, the foreign key is actually in the requirement tables pointing back to the Item. This works well so far, NH unions all the requirement tables to find all the items in the list. Inverse is required because it forces you to have a reference from the requirement to the Item, which is used by NH to build the foreign key.
<many-to-any> is even more flexible. It stores the list in an additional link table. This table has three columns:
the foreign key to the Item,
the name of the actual requirement type (.NET type or entity name)
and the primary key of the requirement (which can't be a foreign key, because it could point to different tables).
When NH reads this table, it knows from the type information (and the corresponding requirement mapping) in which other tables the requirements are. This is how any-types work.
That it is actually a many-to-many relation shouldn't bother you; it only means that it stores the relation in an additional table which is technically able to link a requirement to more than one item.
Edit 2: freaky results:
You mapped 3 tables: IRequirement, ObjectThatImplementsRequirement, AnotherObjectThatHasRequirement. They are all completely independent. You are still on "table per concrete class with implicit polymorphism". You just added another table containing IRequirements, which may also result in some ambiguity when NH tries to find the correct table.
Of course you get 1, 1 as the result. They have independent tables and therefore independent ids, which both start at 1.
The part that works: NHibernate is able to find all the objects implementing an interface in the whole database when you query for it. Try session.CreateQuery("from object") and you get the whole database.
The part that doesn't work: On the other side, you can't get an object just by id and interface or object. So session.Get<object>(1) doesn't work, because there are many objects with id 1. The same problem applies to the list. And there are some more problems there, for instance the fact that with implicit polymorphism there is no foreign key specified which points from every type implementing IRequirement to the Item.
The any types: This is where the any type mapping comes in. Any types are stored with additional type information in the database and that's done by the <many-to-any> mapping which stores the foreign key and type information in an additional table. With this additional type information NH is able to find the table where the record is stored in.
The freaky results: Consider that NH needs to find both ways, from the object to a single table and from the record to a single class. So be careful when mapping both the interface and the concrete classes independently. It could happen that NH uses one or the other table depending on which way you access the data. This may have been the cause of your freaky results.
The other solution: Using any of the other inheritance mapping strategies, you define a single table where NH can start reading and finding the type.
The Id Scope: If you are using Int32 as the id, you can create one record each second for 68 years until you run out of ids. If this is not enough, just switch to long; you'll get ... probably more than the database is able to store in the next few thousand years...

Type conversion when iterating over a collection of super-type. Alternatives?

This is quite a common problem I run into. Let's hear your solutions. I'm going to use an Employee-managing application as an example:-
We've got some entity classes, some of which implement a particular interface.
public interface IEmployee { ... }
public interface IRecievesBonus { int Amount { get; } }
public class Manager : IEmployee, IRecievesBonus { ... }
public class Grunt : IEmployee /* This company sucks! */ { ... }
We've got a collection of Employees that we can iterate over. We need to grab all the objects that implement IRecievesBonus and pay the bonus.
The naive implementation goes something along the lines of:-
foreach (Employee employee in employees)
{
    IRecievesBonus bonusReciever = employee as IRecievesBonus;
    if (bonusReciever != null)
    {
        PayBonus(bonusReciever);
    }
}
or alternatively in C#:-
foreach (IRecievesBonus bonusReciever in employees.OfType<IRecievesBonus>())
{
    PayBonus(bonusReciever);
}
We cannot modify the IEmployee interface to include details of the child type as we don't want to pollute the super-type with details that only the sub-type cares about.
We do not have an existing collection of only the subtype.
We cannot use the Visitor pattern because the element types are not stable. Also, we might have a type which implements both IRecievesBonus and IDrinksTea. Its Accept method would contain an ambiguous call to visitor.Visit(this).
Often we're forced down this route because we can't modify the super-type, nor the collection e.g. in .NET we may need to find all the Buttons on this Form via the child Controls collection. We may need to do something to the child types that depends on some aspect of the child type (e.g. the bonus amount in the example above).
Strikes me as odd that there isn't an "accepted" way to do this, given how often it comes up.
1) Is the type conversion worth avoiding?
2) Are there any alternatives I haven't thought of?
EDIT
Péter Török suggests composing Employee and pushing the type conversion further down the object tree:-
public interface IEmployee
{
    IList<IEmployeeProperty> Properties { get; }
}

public interface IEmployeeProperty { ... }

public class DrinksTeaProperty : IEmployeeProperty
{
    public int Sugars { get; set; }
    public bool Milk { get; set; }
}
foreach (IEmployee employee in employees)
{
    foreach (IEmployeeProperty property in employee.Properties)
    {
        // Handle duplicate properties if you need to.
        // Since this is just an example, we'll just
        // let the greedy ones have two cups of tea.
        DrinksTeaProperty tea = property as DrinksTeaProperty;
        if (tea != null)
        {
            MakeTea(tea.Sugars, tea.Milk);
        }
    }
}
In this example it's definitely worth pushing these traits out of the Employee type - particularly because some managers might drink tea and some might not - but we still have the same underlying problem of the type conversion.
Is it the case that it's "ok" so long as we do it at the right level? Or are we just moving the problem around?
The holy grail would be a variant on the Visitor pattern where:-
You can add element members without modifying all the visitors
Visitors should only visit types they're interested in visiting
The visitor can visit the member based on an interface type
Elements might implement multiple interfaces which are visited by different visitors
Doesn't involve casting or reflection
but I appreciate that's probably unrealistic.
I would definitely try to resolve this with composition instead of inheritance, by associating the needed properties/traits to Employee, instead of subclassing it.
I can give an example partly in Java; I think it's close enough to your language (C#) to be useful.
public enum EmployeeProperty {
    RECEIVES_BONUS,
    DRINKS_TEA,
    ...
}

public class Employee {
    Set<EmployeeProperty> properties;

    // methods to add/remove/query properties
    ...
}
And the modified loop would look like this:
foreach (Employee employee in employees) {
    if (employee.getProperties().contains(EmployeeProperty.RECEIVES_BONUS)) {
        PayBonus(employee);
    }
}
This solution is much more flexible than subclassing:
it can trivially handle any combination of employee properties, while with subclassing you would experience a combinatorial explosion of subclasses as the number of properties grows,
it trivially allows you to change Employee properties at runtime, while with subclassing this would require changing the concrete class of your object!
In Java, enums can have properties or (even virtual) methods themselves - I don't know whether this is possible in C#, but in the worst case, if you need more complex properties, you can implement them with a class hierarchy. (Even in this case, you are not back to square one, since you have an extra level of indirection which gives you the flexibility described above.)
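For what it's worth, a rough C# sketch of the simple property-set version could use a [Flags] enum; this only mirrors the idea above, and PayBonus plus the employees collection are assumed to exist as in the question:
[Flags]
public enum EmployeeProperty
{
    None          = 0,
    ReceivesBonus = 1,
    DrinksTea     = 2
}

public class Employee
{
    public EmployeeProperty Properties { get; set; }
}

// The modified loop, in C#:
foreach (Employee employee in employees)
{
    if (employee.Properties.HasFlag(EmployeeProperty.ReceivesBonus))
    {
        PayBonus(employee);
    }
}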
Update
You are right that in the most general case (discussed in the last sentence above) the type conversion problem is not resolved, just pushed one level down on the object graph.
In general, I don't know a really satisfying solution to this problem. The typical way to handle it is using polymorphism: pull up the common interface and manipulate the objects via that, thus eliminating the need for downcasts. However, in cases when the objects in question do not have a common interface, what to do? It may help to realize that in these cases the design does not reflect reality well: practically, we created a marker interface solely to enable us to put a bunch of distinct objects into a common collection, but there is no semantic relationship between the objects.
So I believe in these cases the awkwardness of downcasts is a signal that there may be a deeper problem with our design.
You could implement a custom iterator that only iterates over the IRecievesBonus types.
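A minimal sketch of such an iterator, assuming the IEmployee and IRecievesBonus types from the question (the helper class name is made up); under the hood this is essentially what Enumerable.OfType<T>() already does:
using System.Collections.Generic;

public static class EmployeeFilters
{
    public static IEnumerable<IRecievesBonus> BonusReceivers(IEnumerable<IEmployee> employees)
    {
        foreach (IEmployee employee in employees)
        {
            // Yield only the employees that can actually receive a bonus.
            IRecievesBonus bonusReciever = employee as IRecievesBonus;
            if (bonusReciever != null)
            {
                yield return bonusReciever;
            }
        }
    }
}
The calling loop then stays free of casts: foreach (IRecievesBonus bonusReciever in EmployeeFilters.BonusReceivers(employees)) { PayBonus(bonusReciever); }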

NHibernate convert subclass to parent class

Supposing the following entities:
public class AppUser
{
    public virtual int Id { get; set; }
    public virtual string Login { get; set; }
}

// Mapped as joined-subclass
public class Person : AppUser
{
    public virtual int Age { get; set; }
}
If I create an AppUser and save it like this:
var user = new AppUser() { Login = "test" };
session.Save( user ); // let's say Id = 1
How can I cast/convert/"promote" it to a Person, keeping the same ID?
Now, I'm stuck with a row in my AppUser table, with Id = N. How can I populate the Person table with the same Id? I can't delete the AppUser and recreate it as a Person, as the AppUser may be referenced by foreign keys.
I could issue a "manual" SQL INSERT, but it's kind of ugly...
This is definitely an NHibernate question. I understand that from an OOP point of view this makes little sense, hence the absence of tags other than nhibernate.
I don't believe nHibernate is going to be able to solve this problem for you. nHibernate is dealing with your data as an object and, especially with joined-subclass, I don't believe there is anything built in that allows you to change the subclass type on the fly, or at least change the type and retain the original ID.
I think your best bet is to write a stored procedure that, given an ID and a NEW type, removes all entries from subclass tables and adds a new entry to the correct subclass table.
Once that proc runs, reload the object in nHibernate (and make sure you have thrown away any cached data relating to it); it should now be of the correct type you want to work with. Set its NEW properties and save it.
That way you have a relatively generic stored proc that just changes your subclass types, but you don't need to add all the crazy logic to handle various properties on your subclasses.
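A rough sketch of that flow with NHibernate's ISession API, continuing from the user saved in the question; the stored procedure name promote_user_to_person is purely hypothetical:
// Let the database promote the row; NHibernate is not involved in the type change.
session.CreateSQLQuery("exec promote_user_to_person :id")
       .SetInt32("id", user.Id)
       .ExecuteUpdate();

// Throw away any cached AppUser instance before reloading.
session.Evict(user);

// Reload; the row now maps as a Person, so set the new properties and save.
var person = session.Get<Person>(user.Id);
person.Age = 30;
session.Update(person);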
This has been discussed on SO before and I am quoting Jon Skeet for posterity:
No. A reference to a derived class must actually refer to an instance of the derived class (or null). Otherwise how would you expect it to behave?
For example:
object o = new object();
string s = (string) o;
int i = s.Length; // What can this sensibly do?
If you want to be able to convert an instance of the base type to the derived type, I suggest you write a method to create an appropriate derived type instance. Or look at your inheritance tree again and try to redesign so that you don't need to do this in the first place.
In Skeet's example, strings are objects but objects are not strings, so the cast would not work.