How to solve cross references in OOP?

I have encountered this a couple of times now, and I wondered what the OO way to solve circular references is. By that I mean class A has class B as a member, and B in turn has class A as a member.
One example of this would be class Person that has Person spouse as a member.
Person jack = new Person("Jack");
Person jill = new Person("Jill");
jack.setSpouse(jill);
jill.setSpouse(jack);
Another example would be Product classes that have some Collection of other Products as a member. That collection could, for example, be products that people who are interested in this product might also be interested in, and we want to maintain that list on a per-product basis, not derive it from shared attributes (e.g. we don't want to just display all other products in the same category).
Product pc = new Product("pc");
Product monitor = new Product("monitor");
Product tv = new Product("tv");
pc.setSeeAlso(List.of(monitor, tv));
monitor.setSeeAlso(List.of(pc));
tv.setSeeAlso(null);
(these products are just for making a point; the issue is not about whether or not certain products would relate to each other)
Would this be bad design in OOP in general? Would/should all OOP languages allow this, or is it just bad practice? If it's bad practice, what would be the nicest way of solving this?

The examples you give are (to me, anyway) examples of reasonable OO design.
The cross-referencing issue you describe isn't an artifact of any design process but a real-life characteristic of the things you're representing as objects, so I don't see there's a problem.
What have you encountered that has given you the impression that this approach is bad design?
Update 11 March:
In systems that lack garbage collection, where object lifetimes must be managed explicitly, one common approach is to require all objects to have an owner - some other object responsible for managing the lifetime of that object.
One example is Delphi's TComponent class, which provides cascading support - destroy the parent component, and all owned components are also destroyed.
If you're working on such a system, the kinds of referential loop described in this question may be considered poor design because there's no clear owner, no one object responsible for managing lifetimes.
The way that I've seen this handled in some systems is to retain the references (because they properly capture the business concerns), and to add in an explicit TransactionContext object that owns everything loaded into the business domain from the database. This context object takes care of knowing which objects need to be saved, and cleans everything up when processing is complete.
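A minimal sketch of what such a context object might look like (all names here are hypothetical - the point is the ownership pattern, not the API):
import java.util.ArrayList;
import java.util.List;

// Hypothetical owning context: everything loaded from the database is
// registered here, and the context - not the cross-references between
// objects - decides when things are saved and released.
class TransactionContext implements AutoCloseable {
    private final List<Object> owned = new ArrayList<>();
    private final List<Object> dirty = new ArrayList<>();

    <T> T register(T entity) {       // the context becomes the entity's owner
        owned.add(entity);
        return entity;
    }

    void markDirty(Object entity) {  // remember what needs saving
        dirty.add(entity);
    }

    @Override
    public void close() {
        // save the dirty objects here, then drop every reference;
        // lifetimes end with the transaction, not with each other
        dirty.clear();
        owned.clear();
    }
}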

It's not a fundamental problem in OO design. An example of a time it might become a problem is in graph traversal, for instance, finding the shortest path between two objects - you could potentially get into an infinite loop. However, that's something you would have to consider on a case-by-case basis. If you know there could be cross-references in a case like that, then code some checks in to avoid infinite loops (for instance, maintaining a set of visited nodes to avoid re-visiting). But if there's no reason it could be a problem (such as in the examples you gave in your question), then it's not bad at all to have such cross-references. And in many cases, as you've described, it's a good solution to the problem at hand.
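For instance, a cycle-safe walk over the "see also" links might look like this (a sketch, assuming Product exposes its list via a getSeeAlso() method that returns an empty collection rather than null):
import java.util.*;

class ProductWalker {
    // Breadth-first traversal; the visited set is what stops the
    // pc -> monitor -> pc loop from running forever.
    static List<Product> reachableFrom(Product start) {
        List<Product> result = new ArrayList<>();
        Set<Product> visited = new HashSet<>();
        Deque<Product> queue = new ArrayDeque<>();

        visited.add(start);
        queue.add(start);
        while (!queue.isEmpty()) {
            Product current = queue.remove();
            result.add(current);
            for (Product next : current.getSeeAlso()) {
                if (visited.add(next)) {   // add() returns false if already seen
                    queue.add(next);
                }
            }
        }
        return result;
    }
}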

I do not think this is an example of cross referencing.
Cross referencing usually pertains to this case:
class A
{
    public void MethodA(B objectB)
    {
        objectB.SomeMethodInB();
    }
}

class B
{
    public void MethodB(A objectA)
    {
        objectA.SomeMethodInA();
    }
}
In this case each object kind of "reaches in" to the other: A calls B, B calls A, and they become tightly coupled. This is made even worse if A and B are in different packages/namespaces/assemblies; in many cases that would create compile-time errors, as assemblies are compiled linearly.
The way to solve that is to have either object implement an interface with the desired method.
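For example (a sketch only; someMethodInA mirrors SomeMethodInA above): B depends on a small interface instead of on the concrete class A, which breaks the compile-time cycle:
// Only A knows B; B knows nothing about A beyond this interface.
interface ACallback {
    void someMethodInA();
}

class A implements ACallback {
    @Override
    public void someMethodInA() { /* ... */ }

    public void methodA(B objectB) {
        objectB.someMethodInB();
    }
}

class B {
    public void someMethodInB() { /* ... */ }

    public void methodB(ACallback callback) {
        callback.someMethodInA();   // B calls back through the interface
    }
}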
In your case you only have one level of "reaching in":
public class Person
{
    public void setSpouse(Person person)
    { ... }
}
I do not think this is unreasonable, nor even a case of cross-referencing/circular references.

The main time this is a problem is when it becomes too confusing to cope with or maintain, as it can become a form of spaghetti code.
However, to touch on your examples:
See Also is perfectly valid if this is a feature you need in your code - it is a simple list of pointers (or references) to other items a user may be interested in.
Similarly, it is perfectly valid to add a spouse, as this is a simple real-world relationship that would not be confusing to someone maintaining your code.
I have always seen it as a potential code smell, or perhaps a warning to take a step back and rationalise what I am doing.
As for some systems finding recursive relationships in your code (mentioned in a comment above), these can come up regardless of this sort of design. I recently worked on a metadata capture system that had recursive 'types' of relationships - i.e. columns being logically related to other columns. This needs to be handled by the code trying to parse your system.

I don't think the circular references as such are a problem.
However, putting all those relationships inside objects may add too much clutter, so you may instead want to represent them externally. E.g. you might use a hash table to store relationships between products.
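A sketch of what that could look like (hypothetical names; assumes Product instances are usable as map keys, e.g. with default identity-based equals/hashCode):
import java.util.*;

// The "see also" relationships live in a plain map outside the products,
// so Product itself holds no references to other products.
class SeeAlsoIndex {
    private final Map<Product, List<Product>> related = new HashMap<>();

    void relate(Product from, Product to) {
        related.computeIfAbsent(from, p -> new ArrayList<>()).add(to);
    }

    List<Product> seeAlso(Product product) {
        return related.getOrDefault(product, List.of());
    }
}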

Referencing other objects is not bad OO design at all; it's simply how state is managed within each object.
A good rule of thumb here is the Law of Demeter. Have a look at the classic LoD paper, "The Paperboy, the Wallet, and the Law of Demeter".

One way to fix this is to refer to the other object via an id.
e.g.
Person jack = new Person(new PersonId("Jack"));
Person jill = new Person(new PersonId("Jill"));
jack.setSpouse(jill.getId());
jill.setSpouse(jack.getId());
I'm not saying it is a perfect solution, but it will prevent circular references. You are using an identifier instead of an object reference to model the relationship.
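To make that workable you also need some way to resolve an id back to a Person when the spouse is actually needed - roughly like this (a sketch; getSpouseId() is a hypothetical accessor for the stored id):
import java.util.HashMap;
import java.util.Map;

// Spouses are stored as PersonIds and resolved through a registry on
// demand, so no Person ever holds a live reference to another Person.
class PersonRegistry {
    private final Map<PersonId, Person> people = new HashMap<>();

    void add(Person person) {
        people.put(person.getId(), person);
    }

    Person spouseOf(Person person) {
        return people.get(person.getSpouseId());   // null if unmarried
    }
}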

Related

Be suspicious of classes of which there is only one instance

tl;dr -- What does the below quoted paragraph mean?
Be suspicious of classes of which there is only one instance. A single instance might indicate that the design confuses objects with classes. Consider whether you could just create an object instead of a new class. Can the variation of the derived class be represented in data rather than as a distinct class? The Singleton pattern is one notable exception to this guideline.
McConnell, Steve (2004-06-09). Code Complete (2nd Edition)
Extended version:
I'm currently reading Code Complete, and I'm having trouble understanding the above-mentioned paragraph. For context, it's from Chapter 6, under guidelines for inheritance. At first I thought this was advice against using Singletons, but I was obviously proven wrong by the time I got to the end of the paragraph.
I simply can't grasp what the author is trying to get through my thick skull. For example, I don't know what he means by design confusing objects with classes, nor what that means in the context of having a class with only one instance. Help!
The wording is pretty confusing there, but I believe what's meant is that sometimes a novice programmer might create a whole new type just to instantiate one object of it. As a particularly blatant example:
struct Player1Name
{
string data;
};
There we could just use string player1_name; (or even an aggregate for multiple players) without creating a whole new type, hence the confusion of trying to use classes to model what new objects (new instances of existing types) can already do.
In such a case, a developer might spam the codebase with hundreds of new user-defined data types, and possibly massive inheritance hierarchies, for every single new thing he wants to create - with no potential for any class to be reused beyond a single instance, even when the existing classes generally suffice.
The real problem is not that the classes are being instantiated once, but that their design is so narrowly applicable as to only be worth instantiating once.
Classes are generally meant to model a one-to-many relationship with their instances (objects). They're supposed to be at least somewhat more generally applicable beyond a single instance of that class. Put crudely, a class should model a Dog, not your neighbor's specific pet dog, Spark. It's supposed to model a Rectangle, not a precise Rectangle42x87 that is 4.2 meters by 8.7 meters. If you're designing things to be instantiated one time, you're probably designing them too narrowly and probably have existing things you can use instead.
A new data type is typically meant to tackle a class (category) of problems, so to speak, rather than one very precise one that calls for only one instance of that class. Otherwise your class designs are going to be one-shot deals that are just superficially creating classes all over the place to solve very individual problems with no potential for broader application whatsoever.
A singleton is an exception to the rule since it's not utilizing classes in this normal object-oriented kind of way. There it's deliberately setting out to create a single instance, lazily constructed, with a global point of access. So it's not by some accident or misunderstanding of object-oriented design that the developer created a class designed to be instantiated one time; it's a very deliberate and conscious design decision rather than a misunderstanding of how to use the tools.
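For illustration, the shape being described (a minimal sketch):
// The classic lazily-constructed singleton with a global point of
// access - a class deliberately designed for exactly one instance.
final class AppConfig {
    private static AppConfig instance;

    private AppConfig() { }   // nobody else can construct one

    static synchronized AppConfig getInstance() {
        if (instance == null) {
            instance = new AppConfig();   // lazily constructed on first use
        }
        return instance;
    }
}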

What approach to take with classes and class architecture?

I'm working on a website with a few colleagues and we are having some differences in how we see the class architecture, so I'm posting this to see how the larger community feels about this issue, because I presume a lot of people have been in a similar situation.
We are using an MVC approach with model implemented through active record pattern. One of the models is the "Product" model related to the product table in our database.
The thing is, we have two main types of products - tangible (type = 1) and non-tangible (type = 2). Throughout the code we'll have tons of logic related to just this: the type of product. (If tangible do this, if non-tangible do that...)
So one approach is: create classes TangibleProduct and NonTangibleProduct and fetch one or the other through a factory. Of course, these classes would have duplicate methods, i.e. isTangible() or isNonTangible() would exist in both classes but in one case would return true and in the other false. (This is just an example.)
I'm expecting to see at least 30 different methods in these classes which will return different values based on the product type.
The other approach is to have just one Product class and in each method implement IF blocks and do the logic if product is tangible or non tangible and return results.
I know this is a vague question but I do think that most of the people who are working in an OO environment had a similar situation at some point...
Do you see any long term consequences in choosing one approach over the other, do you see any approach better or worse than the other?
Edit:
Sorry, I might not have been too clear. These two classes would extend the Product class. (i.e. "class TangibleProduct extends Product" and "class NonTangibleProduct extends Product")
Thanx
Architecturally speaking, since the difference between tangible and intangible products is so fundamental in your domain, the best approach is to have separate classes TangibleProduct and IntangibleProduct and handle both as instances implementing IProduct (a variation on the first approach). If your ORM tool allows you to do this, then it's all good.
The second approach should be avoided very very hard.
It's generally advisable in OO to have this sort of distinction made in a class hierarchy.
If you use a factory method to create the right subclass, you only do the conditional logic once at creation time instead of in each method where the behavior varies.
You can also more easily add a new type. If the decision logic is repeated in many methods, you have to change them all. If it's consolidated in a factory, you only have to add a branch there and implement the new subclass and all the existing subclasses can frequently be untouched.
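A sketch of that arrangement, using the type codes from the question (shippingCost() is a hypothetical stand-in for the ~30 type-dependent methods):
abstract class Product {
    abstract boolean isTangible();
    abstract double shippingCost();   // one of many type-dependent methods
}

class TangibleProduct extends Product {
    boolean isTangible()  { return true; }
    double shippingCost() { return 4.99; }
}

class NonTangibleProduct extends Product {
    boolean isTangible()  { return false; }
    double shippingCost() { return 0.0; }   // nothing to ship
}

class ProductFactory {
    // The only place that ever looks at the type column.
    static Product fromType(int type) {
        switch (type) {
            case 1:  return new TangibleProduct();
            case 2:  return new NonTangibleProduct();
            default: throw new IllegalArgumentException("unknown type " + type);
        }
    }
}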
It's hard to tell without having access to your full requirements.
It mostly depends on whether there will be any non-trivial methods which:
- behave very differently for tangible and non-tangible products, or
- only make sense in the context of tangible or non-tangible products.
When you end up creating a lot of methods which look like this:
public void doSomething() {
    if (isTangible) {
        [...loads of code...]
    }
    else {
        [...loads of different code...]
    }
}
you would be better off with separate classes.

Is a class that manages multiple classes a "god object"?

The Wikipedia entry about God Objects says that a class is a god object when it knows too much or does too much.
I see the logic behind this, but if it's true, then how do you couple every different class? Don't you always use a master class for connecting window management, DB connections, etc?
The main function/method may know about the existence of the windows, databases, and other objects. It may perform over-arching tasks like introduce the model to the controller.
But that doesn't mean it manages all the little details. It probably doesn't know anything about how the database or windows are implemented.
If it did, it could be accused of being a God object.
A god object is an object that contains references, directly or indirectly, to most if not all objects within an application. As the question observes, it is almost impossible to avoid having a god object in an application. Some object must hold references to the various subsystems: UI, database, communications, business logic, etc. Note that the god object need not be application-defined. Many frameworks have built-in god objects with names like "application context", "application environment", "session", "activator", etc.
The issue is not whether a god object exists, but rather how it is used. I will illustrate with an extreme example...
Let's say that in my application I want to standardize how many decimal places of precision to show when displaying numbers. However, I want the precision to be configurable. I create a class whose responsibility is to convert numbers to strings:
class NumberFormatter {
    ...
    String format(double value) {
        int decimalPlaces = getConfiguredPrecision();
        return formatDouble(value, decimalPlaces);
    }

    int getConfiguredPrecision() {
        return /* what ??? */;
    }
}
The question is, how does getConfiguredPrecision figure out what to return? One way would be to give NumberFormatter a reference to the global application context which it stores in a member field called _appContext. Then we could write:
return _appContext.getPreferenceManager().getNumericPreferences().getDecimalPlaces();
By doing this, we have just made NumberFormatter into a god object as well! Why? Because now we can (indirectly) reference virtually any object in the application through its _appContext field. Is this bad? Yes, it is.
I'm going to write a unit test for NumberFormatter. Let's set up the parameters... it needs an application context?! WTF, that has 57 methods I need to mock. Oh, it only needs the pref manager... WTF, I have to mock 14 methods! Numeric prefs!?! Screw it, the class is simple enough, I don't need to test it...
Let's say that the application context had another method, getDatabaseManager(). Last week we were using SQL, so the method returned an SQL database object. But this week, we've decided to change to a NoSQL database and the method now returns a new type. Is NumberFormatter affected by the change? Hmmm, I can't remember... yeah, it might be, I see it takes an application context in the constructor... let me open the source and take a look... nope, we're in luck: it only accesses getPreferenceManager()... now let's check the other 93 classes that take an application context as a parameter...
This same scenario occurs if a change is made to the preferences manager, or the numeric preferences object. The moral of the story is that an object should only hold references to the things that it needs to perform its job, and only those things. In the case of NumberFormatter, all it needs to know is a single integer -- the number of decimal places. It could be created directly by the application god object who knows the magic number (or the pref manager or better still, numeric prefs), without turning the formatter into a god object itself. Furthermore, any components that need to format numbers could be given a formatter instead of the god object. Wins all around.
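In code, the difference might look like this (a sketch of the same formatter with the god object nowhere in sight):
// The formatter receives the one integer it needs, not a path to the
// entire application through an _appContext field.
class NumberFormatter {
    private final int decimalPlaces;

    NumberFormatter(int decimalPlaces) {
        this.decimalPlaces = decimalPlaces;
    }

    String format(double value) {
        return String.format("%." + decimalPlaces + "f", value);
    }
}

// Wiring, done once by whoever actually owns the configuration:
// new NumberFormatter(appContext.getPreferenceManager()
//                               .getNumericPreferences()
//                               .getDecimalPlaces());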
So, to summarize, the problem is not the existence of a god object but rather the act of conferring god-like status to other objects willy-nilly.
Incidentally, the design principle that tackles this problem head-on has become known as the Law of Demeter. Or "when paying at a restaurant, give the server your money not your wallet."
In my experience this most often occurs when you're dealing with code that is the product of "develop as you go" project management (or the lack thereof), where a project is not thought through and planned and object responsibilities are loose and not delegated properly. In these scenarios you find a "god object" acting as the catch-all for code that doesn't have any obvious organization or delegation.
It is not the interconnectedness or coupling of the different classes that is the problem with god objects; it's the fact that a god object can often accomplish most if not all of the responsibilities of its derived children, and it is fairly unpredictable (to anyone other than the developer) what its defined responsibilities actually are.
Simply knowing about "multiple" classes doesn't make one a God; knowing about multiple classes in order to solve a problem that should be split into several sub-problems does make one a God.
I think the focus should be on whether a problem should be split into several sub-problems, not on the number of classes a given object knows about (as you pointed out, sometimes knowing about several classes is necessary).
Gods are over-hyped.

DDD: Should everything fit into either Entity or Value Object?

I'm trying to follow DDD, or a least my limited understanding of it.
I'm having trouble fitting a few things into the DDD boxes though.
An example: I have a User Entity. This user Entity has a reference to a UserPreferencesInfo object - this is just a class which contains a bunch of properties regarding user preferences. These properties are fairly unrelated, other than the fact that they are all user preferences (unlike say an Address VO, where all the properties form a meaningful whole).
Question is - what is this UserPreferencesInfo object?
1) Obviously it's not an Entity (I'm just storing it as a 'component', in Fluent NHibernate speak - i.e. in the same DB table as the User entity).
2) VO? I understand that Value Objects are supposed to be immutable (so you can't change them, just new them up). This makes complete sense when the object is an address, for instance (the address properties form a meaningful 'whole'). But in the case of UserPreferencesInfo I don't think it makes sense. There could be up to 100 properties (realistically, maybe 20) on this object - why would I want to discard and recreate the object whenever I needed to change one property?
I feel like I need to break the rules here to get what I need, but I don't really like the idea of that (it's a slippery slope!). Am I missing something here?
Thanks
Answer 1 (the practical one)
I'm a huge proponent of DDD, but don't force it. You've already recognised that immutable VOs add more work than is required. DDD is designed to harness complexity, but in this case there is very little complexity to manage.
I would simply treat UserPreferencesInfo as an Entity, and reference it from the User aggregate. Whether you store it as a Component or in a separate table is your choice.
IMHO, the whole Entity vs VO debate can be rendered moot. It's highly unlikely that in 6 months time, another developer will look at your code and say "WTF! He's not using immutable VOs! What the heck was he thinking!!".
Answer 2 (the DDD purist)
Is UserPreferencesInfo actually part of the business domain? Others have mentioned dissecting this object. But if you stick to pure DDD, you might need to determine which preferences belong to which Bounded Context.
This in turn could lead to adding Service Layers, and before you know it, you've over-engineered the solution for a very simple problem...
Here's my two cents. Short answer: UserPreferenceInfo is a value object because it describes the characteristics of an object. It's not an entity because there's no need to track an object instance over time.
Longer answer: an object with 100+ properties which are not related is not very DDD-ish. Try to group related properties together to form new VOs or you might discover new entities as well.
Another DDD smell is to have a lot of set properties in the first place. Try to find the essence of the action instead of only setting the value. Example:
// not ddd
employee.Salary = newSalary;
// more ddd
employee.GiveRaise(newSalary);
On the other hand, you may very well have legitimate reasons to have a bunch of properties that are no more than getters and setters. But then there are probably simpler methods than DDD to solve the problem. There's nothing wrong with taking the best patterns and ideas from DDD while relaxing a little on all the "rules", especially for simpler domains.
I'd say a UserPreferenceInfo is actually a part of the User aggregate root. It should be the responsibility of the UserRepository to persist the User Aggregate Root.
Value objects only need to be newed up (in your object model) when their values are shared. A sample scenario would be checking for a similar UserPreferenceInfo and associating the User with that instead of inserting a new one every time. Sharing value objects makes sense if the value object tables would get too large and raise speed/storage concerns. The price for sharing is paid on insert.
It is reasonable to abstract this procedure in the DAL.
If you are not sharing value objects, there is nothing against updating them.
As far as I understand, UserPreferenceInfo is a part of the User entity. Ergo the User entity is an Aggregate Root which is retrieved or saved as a whole using the UserRepository, along with UserPreferenceInfo and other objects.
Personally, I think that UserPreferenceInfo is an entity type, since it has identity - it can be changed, saved and retrieved from a repository and still be regarded as the same object. But it depends on your usage of it.
It doesn't matter IMHO how the object is represented in the DAL - whether it is stored in a separate table or as part of another table. One of the benefits of DDD is persistence ignorance, which is usually a good thing.
Of course, I may be wrong, I am new to DDD too.
Question is - what is this UserPreferencesInfo object?
I don't know how this case is supported by NHibernate, but some ORMs support special concepts for it. For example, DataObjects.Net includes a Structures concept. It seems that you need something like this in NH.
First time ever posting on a blog. Hope I do it right.
Anyway, since you haven't shown us the UserPreferencesInfo object, I am not sure how it's constructed such that you can have a variable number of things in it.
If it were me, I'd make a single class called UserPreference, with id, userid, key, value, displaytype, and whatever other fields you may need in it. This is an entity: it has an id and is tied to a certain user.
Then in your user entity (the root, I am assuming), have an ISet<UserPreference>.
100 properties sounds like a lot.
Try breaking UserPreferenceInfo up into smaller (more cohesive) types, which likely/hopefully are manageable as VOs.

Single Responsibility Principle vs Anemic Domain Model anti-pattern

I'm in a project that takes the Single Responsibility Principle pretty seriously. We have a lot of small classes and things are quite simple. However, we have an anemic domain model - there is no behaviour in any of our model classes, they are just property bags. This isn't a complaint about our design - it actually seems to work quite well.
During design reviews, SRP is brought out whenever new behaviour is added to the system, and so new behaviour typically ends up in a new class. This keeps things very easily unit testable, but I am perplexed sometimes because it feels like pulling behaviour out of the place where it's relevant.
I'm trying to improve my understanding of how to apply SRP properly. It seems to me that SRP is in opposition to adding business modelling behaviour that shares the same context to one object, because the object inevitably ends up either doing more than one related thing, or doing one thing but knowing multiple business rules that change the shape of its outputs.
If that is so, then it feels like the end result is an Anemic Domain Model, which is certainly the case in our project. Yet the Anemic Domain Model is an anti-pattern.
Can these two ideas coexist?
EDIT: A couple of context related links:
SRP - http://www.objectmentor.com/resources/articles/srp.pdf
Anemic Domain Model - http://martinfowler.com/bliki/AnemicDomainModel.html
I'm not the kind of developer who just likes to find a prophet and follow what they say as gospel. So I don't provide links to these as a way of stating "these are the rules", just as a source of definition of the two concepts.
Rich Domain Model (RDM) and Single Responsibility Principle (SRP) are not necessarily at odds. RDM is more at odds with a very specialised subclass of SRP - the model advocating "data beans + all business logic in controller classes" (DBABLICC).
If you read Martin's SRP chapter, you'll see his modem example is entirely in the domain layer, but abstracts the DataChannel and Connection concepts as separate classes. He keeps the Modem itself as a wrapper, since that is a useful abstraction for client code. It's much more about proper (re)factoring than mere layering. Cohesion and coupling are still the base principles of design.
Finally, three issues:
As Martin notes himself, it's not always easy to see the different 'reasons for change'. The very concepts of YAGNI, Agile, etc. discourage the anticipation of future reasons for change, so we shouldn't invent ones where they aren't immediately obvious. I see 'premature, anticipated reasons for change' as a real risk in applying SRP and should be managed by the developer.
Further to the previous point, even correct (but unnecessarily anal) application of SRP may result in unwanted complexity. Always think about the next poor sod who has to maintain your class: will the diligent abstraction of trivial behaviour into its own interfaces, base classes and one-line implementations really aid his understanding of what should simply have been a single class?
Software design is often about getting the best compromise between competing forces. For example, a layered architecture is mostly a good application of SRP, but what about the fact that, for example, the change of a property of a business class from, say, a boolean to an enum has a ripple effect across all the layers - from db through domain, facades, web service, to GUI? Does this point to bad design? Not necessarily: it points to the fact that your design favours one aspect of change to another.
I'd have to say "yes", but you have to do your SRP properly. If the same operation applies to only one class, it belongs in that class, wouldn't you say? How about if the same operation applies to multiple classes? In that case, if you want to follow the OO model of combining data and behavior, you'd put the operation into a base class, no?
I suspect that from your description, you're ending up with classes which are basically bags of operations, so you've essentially recreated the C-style of coding: structs and modules.
From the linked SRP paper:
"The SRP is one of the simplest of the principle, and one of the hardest to get right."
The quote from the SRP paper is very correct; SRP is hard to get right. This one and OCP are the two elements of SOLID that simply must be relaxed to at least some degree in order to actually get a project done. Overzealous application of either will very quickly produce ravioli code.
SRP can indeed be taken to ridiculous lengths, if the "reasons for change" are too specific. Even a POCO/POJO "data bag" can be thought of as violating SRP, if you consider the type of a field changing as a "change". You'd think common sense would tell you that a field's type changing is a necessary allowance for "change", but I've seen domain layers with wrappers for built-in value types; a hell that makes ADM look like Utopia.
It's often good to ground yourself with some realistic goal, based on readability or a desired cohesion level. When you say, "I want this class to do one thing", it should have no more or less than what is necessary to do it. You can maintain at least procedural cohesion with this basic philosophy. "I want this class to maintain all the data for an invoice" will generally allow SOME business logic, even summing subtotals or calculating sales tax, based on the object's responsibility to know how to give you an accurate, internally-consistent value for any field it contains.
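As a small sketch of that level of responsibility (fields are hypothetical):
import java.math.BigDecimal;
import java.util.List;

// The invoice is the data expert for its own fields, including the
// derived ones - and nothing more.
class Invoice {
    private final List<BigDecimal> lineAmounts;
    private final BigDecimal taxRate;   // e.g. new BigDecimal("0.08")

    Invoice(List<BigDecimal> lineAmounts, BigDecimal taxRate) {
        this.lineAmounts = lineAmounts;
        this.taxRate = taxRate;
    }

    BigDecimal subtotal() {
        return lineAmounts.stream().reduce(BigDecimal.ZERO, BigDecimal::add);
    }

    BigDecimal salesTax() {
        return subtotal().multiply(taxRate);
    }

    BigDecimal total() {
        return subtotal().add(salesTax());
    }
}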
I personally do not have a big problem with a "lightweight" domain. Just having the one role of being the "data expert" makes the domain object the keeper of every field/property pertinent to the class, as well as all calculated field logic, any explicit/implicit data type conversions, and possibly the simpler validation rules (i.e. required fields, value limits, things that would break the instance internally if allowed). If a calculation algorithm, perhaps for a weighted or rolling average, is likely to change, encapsulate the algorithm and refer to it in the calculated field (that's just good OCP/PV).
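That encapsulation might look like this (a sketch, with a rolling average standing in for whatever algorithm is likely to change):
// The averaging algorithm hides behind an interface, so swapping the
// rolling average for a weighted one never touches the calculated field.
interface AverageStrategy {
    double average(double[] values);
}

class RollingAverage implements AverageStrategy {
    private final int window;

    RollingAverage(int window) { this.window = window; }

    @Override
    public double average(double[] values) {
        int n = Math.min(window, values.length);
        if (n == 0) {
            return 0.0;
        }
        double sum = 0.0;
        for (int i = values.length - n; i < values.length; i++) {
            sum += values[i];   // average the last n values only
        }
        return sum / n;
    }
}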
I don't consider such a domain object to be "anemic". My perception of that term is a "data bag", a collection of fields that has no concept whatsoever of the outside world or even the relation between its fields other than that it contains them. I've seen that too, and it's not fun tracking down inconsistencies in object state that the object never knew was a problem. Overzealous SRP will lead to this by stating that a data object is not responsible for any business logic, but common sense would generally intervene first and say that the object, as the data expert, must be responsible for maintaining a consistent internal state.
Again, personal opinion, I prefer the Repository pattern to Active Record. One object, with one responsibility, and very little if anything else in the system above that layer has to know anything about how it works. Active Record requires the domain layer to know at least some specific details about the persistence method or framework (whether that be the names of stored procedures used to read/write each class, framework-specific object references, or attributes decorating the fields with ORM information), and thus injects a second reason to change into every domain class by default.
My $0.02.
I've found following the SOLID principles did in fact lead me away from DDD's rich domain model; in the end, I found I didn't care. More to the point, I found that the logical concept of a domain model and a class in whatever language weren't mapped 1:1, unless we were talking about a facade of some sort.
I wouldn't say this is exactly a C style of programming where you have structs and modules, but rather that you'll probably end up with something more functional. I realise the styles are similar, but the details make a big difference. I found my class instances end up behaving like higher-order functions, partial function application, lazily evaluated functions, or some combination of the above. It's somewhat ineffable for me, but that's the feeling I get from writing code following TDD + SOLID - it ends up behaving like a hybrid OO/functional style.
As for inheritance being a bad word, I think that's more due to the fact that inheritance isn't sufficiently fine-grained in languages like Java/C#. In other languages, it's less of an issue, and more useful.
I like the definition of SRP as:
"A class has only one business reason to change"
So, as long as behaviours can be grouped into single "business reasons" then there is no reason for them not to co-exist in the same class. Of course, what defines a "business reason" is open to debate (and should be debated by all stakeholders).
Before I get into my rant, here's my opinion in a nutshell: somewhere everything has got to come together... and then a river runs through it.
I am haunted by coding.
=======
Anemic data model and me... well, we pal around a lot. Maybe it's just the nature of small to medium sized applications with very little business logic built into them. Maybe I am just a bit 'tarded.
However, here's my 2 cents:
Couldn't you just factor out the code in the entities and tie it up to an interface?
public class Object1
{
    public string Property1 { get; set; }
    public string Property2 { get; set; }

    private IAction1 action1;

    public Object1(IAction1 action1)
    {
        this.action1 = action1;
    }

    public void DoAction1()
    {
        action1.Do(Property1);
    }
}

public interface IAction1
{
    void Do(string input1);
}
Does this somehow violate the principles of SRP?
Furthermore, isn't having a bunch of classes sitting around not tied to each other by anything but the consuming code actually a larger violation of SRP, but pushed up a layer?
Imagine the guy writing the client code sitting there trying to figure out how to do something related to Object1. If he has to work with your model he will be working with Object1, the data bag, and a bunch of 'services' each with a single responsibility. It'll be his job to make sure all those things interact properly. So now his code becomes a transaction script, and that script will itself contain every responsibility necessary to properly complete that particular transaction (or unit of work).
Furthermore, you could say, "no brah, all he needs to do is access the service layer. It's like Object1Service.DoActionX(Object1). Piece of cake." Well then, where's the logic now? All in that one method? You're still just pushing code around, and no matter what, you'll end up with the data and the logic being separated.
So in this scenario, why not expose to the client code that particular Object1Service and have its DoActionX() basically just be another hook for your domain model? By this I mean:
public class Object1Service
{
    private Object1Repository repository;

    public Object1Service(Object1Repository repository)
    {
        this.repository = repository;
    }

    // Tie in your Unit of Work Aspect'ing stuff or whatever if need be
    public void DoAction1(Object1DTO object1DTO)
    {
        Object1 object1 = repository.GetById(object1DTO.Id);
        object1.DoAction1();
        repository.Save(object1);
    }
}
You have still factored out the actual code for Action1 from Object1 but, for all intents and purposes, you have a non-anemic Object1.
Say you need Action1 to represent 2 (or more) different operations that you would like to make atomic and separated into their own classes. Just create an interface for each atomic operation and hook it up inside of DoAction1.
That's how I might approach this situation. But then again, I don't really know what SRP is all about.
Convert your plain domain objects to the ActiveRecord pattern with a common base class for all domain objects. Put common behaviour in the base class and override it in derived classes wherever necessary, or define new behaviour wherever required.
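A rough sketch of that shape (hypothetical names, persistence details omitted):
// Common behaviour lives once in the base class; each domain object
// overrides it only where its own rules differ.
abstract class DomainRecord {
    private Long id;

    Long getId()          { return id; }
    boolean isPersisted() { return id != null; }

    void validate() { /* common rules for every record */ }

    void save() {
        validate();
        /* persist this record via the shared machinery */
    }
}

class Order extends DomainRecord {
    @Override
    void validate() {
        super.validate();
        /* order-specific rules on top of the common ones */
    }
}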