Role Based Object-Oriented Design - Controller - oop

I'd like to implement a system that actually encompasses three systems, each one varying in functionality by user role type, i.e. a system that allows users to perform different tasks based on their role type, with the role type determined right at user creation. Users cannot use their role to access the other systems' components/features, and the UI is unique to each role type.*
I need these "systems" to act independently of each other, but I'm finding some common behavior (most often, opportunities for composition) across the board.
Currently, I have 3 controllers, as this was the original intent of my design - RoleType1Controller, RoleType2Controller, RoleType3Controller. Obviously, these branch out independently, and touch the classes they need to touch.
I'm preparing for some pretty large enhancements in functionality as soon as I get off the ground, and I need to take these enhancements into consideration now, as some of them will be co-driving forces of the system. That is, I want the system to do a couple of things, all of equal importance, but I can only implement one major feature at this time.
Concerning the OOD, I'm thinking this "three systems in one" approach may be best suited for the upcoming changes. However, these opportunities for composition and the desire to keep with the standard of having a single controller are weighing heavily on my decision-making process.
Does anyone have experience with something like this or, if not, is experienced in OOD and can point me in the right direction? I'm building from the ground up, so (obviously) the framework of the system is being defined in this first iteration. I'd like it to be as robust and flexible as possible.
Any help would be greatly appreciated.
*I am NOT using the UI to drive my design process...I just thought this extra bit of information may be of some help.

The answer, in this case, is to have multiple controllers. Not necessarily by role (though this is how my domain model is currently formed); these should be defined through the process of assigning controller responsibility to use case controllers. I arrived at this during the initial stage of design - defining the SSDs (system sequence diagrams) and the class diagram.
In Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design and Iterative Development (an excellent introductory book drawing its content from various proven best-practice definitions and their authors), Craig Larman states there are two ways to approach defining your controllers:
Assign the responsibility to a class representing one of the following choices:
• Represents the overall "system," a "root object," a device that the software is running within, or a major subsystem - these are all variations of a facade controller.
• Represents a use case scenario within which the system event occurs, often named <UseCaseName>Handler, <UseCaseName>Coordinator, or <UseCaseName>Session (use case or session controller).
  ◦ Use the same controller class for all system events in the same use case scenario.
  ◦ Informally, a session is an instance of a conversation with an actor. Sessions can be of any length but are often organized in terms of use cases (use case sessions).
Previous experience had always led me to the first choice, because the systems I was helping design were of a much lower level of complexity. This time, however, I ran into the bloated controller issue.
From the same text, Larman proposes this:
Issues and Solutions
Poorly designed, a controller class will have low cohesion - unfocused and handling too many areas of responsibility; this is called a bloated controller. Signs of bloating are:
• There is only a single controller class receiving all system events in the system, and there are many of them. This sometimes happens if a facade controller is chosen.
• The controller itself performs many of the tasks necessary to fulfill the system event, without delegating the work. This usually involves a violation of Information Expert and High Cohesion.
• A controller has many attributes, and it maintains significant information about the system or domain, which should have been distributed to other objects, or it duplicates information found elsewhere.
Among the cures for a bloated controller are these two:
1. Add more controllers - a system does not have to have only one. Instead of facade controllers, employ use case controllers. For example, consider an application with many system events, such as an airline reservation system. It may contain the following use case controllers:
MakeReservationHandler
ManageSchedulesHandler
ManageFaresHandler
2. Design the controller so that it primarily delegates the fulfillment of each system operation responsibility to other objects.
2) by itself wasn't going to help me, because I had actors speaking to the same classes but in their own ways... I was already delegating as much responsibility as I could to other classes while trying to maintain a simple and consistent interaction for the user. As stated before, that is what led me to the bloated controller issue.
Because OOA/D is an evolving process, I won't be able to say this is my final solution until it's truly implemented. Really, these use case controllers could lead me down a different path...instead of (like) controllers for each (like) use case, I could end up with 3 (or 4 or 5 or 6), and this may be just a means of getting there. But for now, things are going a lot smoother than they were before - I'm beginning to see the realization of the ultimate solution.
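To make that concrete, here is a minimal sketch (hypothetical names, not my actual classes) of a use case controller that receives the system events for one use case and delegates the real work to domain objects:

    // A use case controller handles the system events for a single use case and
    // delegates to domain collaborators, instead of one facade/role controller
    // handling everything.
    interface ReservationCatalog {
        Reservation reserve(String flight, String passenger);   // domain collaborator (assumed)
    }

    record Reservation(String flight, String passenger) {}

    class MakeReservationHandler {
        private final ReservationCatalog catalog;

        MakeReservationHandler(ReservationCatalog catalog) {
            this.catalog = catalog;
        }

        // One operation per system event in the "Make Reservation" use case;
        // the controller coordinates while the domain objects do the actual work.
        Reservation makeReservation(String flight, String passenger) {
            return catalog.reserve(flight, passenger);
        }
    }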

Related

Abstractions that are... too abstract?

In Vaughn Vernon's Domain-Driven Design Distilled we can read that we should avoid creating technical abstractions that are perhaps too abstract, and instead be more explicit by sticking to the concepts of the Ubiquitous Language.
Where I work we've built several tracking applications, and in almost every one of them there is the problem of having multiple specializations of the same thing, most likely with common behaviors but different data and validation rules.
For instance, imagine an incident logging application where various kinds of incidents are reported over the phone (e.g. car accident, fire, robbery). The information gathering process is similar for every incident, but the captured data may vary widely, as may the validation rules that constrain this data.
So far, we have always solved these kinds of problems with very technical abstractions (this is an oversimplified model, but you should get the idea):
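Roughly, the model looks something like this (a simplified, illustrative sketch standing in for the class diagram; the names and fields are just examples):

    import java.util.List;
    import java.util.Map;

    // Illustrative stand-ins for the DataValidationRules / DataFields / DataEntries
    // abstractions: everything is generic fields, values and rules; nothing in the
    // model itself mentions car accidents, fires or robberies.
    class IncidentType {
        String name;                       // e.g. "CarAccident", "Fire", "Robbery"
        List<DataField> fields;            // which fields this type of incident captures
        List<DataValidationRule> rules;    // which rules constrain those fields
    }

    class DataField {
        String name;                       // e.g. "numberOfVehicles"
        String dataType;                   // e.g. "integer", "text", "date"
    }

    class DataValidationRule {
        String fieldName;
        String constraint;                 // e.g. "required", "min:0"
    }

    class IncidentRecord {
        IncidentType type;
        Map<String, DataEntry> entries;    // the captured values, keyed by field name
    }

    class DataEntry {
        String fieldName;
        String value;
    }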
As you can see, the DataValidationRules, DataFields and DataEntries abstractions have very little to do with the business of incident logging. Actually, they are part of a very generic solution to the problem of representing multiple entity specializations with different data in any domain.
I'd like to move away from this kind of very abstract model, but at the same time I do not see what the correct approach to making the business concepts explicit would be. I understand that the answer would be different in every domain, but in essence, should I be looking into having a single class per specialization? E.g. CarAccidentIncident, FireIncident and RobberyIncident?
With a very limited number of specializations it seems like it could be manageable, but what if I have hundreds of them?
What about the UI? That means I'd have to move away from a generic way of generating the UI as well.
After thinking a little more about it I think I may have found a better and simpler way to express my concerns when it comes to DDD, OO and modeling many specializations.
On the one hand I want to come up with a model that is faithful to the Ubiquitous Language (UL) and models domain concepts explicitly. On the other hand I'm trying to respect the "favor composition over inheritance" mantra I'm so used to applying.
It seems that both are in conflict, because in order to enable composition I'll have to introduce abstractions that are most likely not part of the UL (e.g. Entity-Field composition), and when it comes to explicit modeling I do not see any other way than inheritance, with one class per specialization.
Am I wrong in trying to avoid inheritance to represent hundreds of specialized entities that mainly differ in terms of data structure, not behaviors?
Then again, assuming they did differ a lot in terms of behaviors as well I'd have the same dilemma.
Just to be more explicit on the design choices:
In one scenario, composition would be achievable dynamically without requiring a class per specialized composition:
class Incident {
    Set<Detail> details;
    IncidentType type;
}

interface Detail {
    public DetailType type();
}

class SomeDetail implements Detail {
    ...
}

class SomeOtherDetail implements Detail {
    ...
}
In the other scenario compositions are static and do require one class per specialized composition:
class CarAccidentIncident extends Incident {
    SomeDetail someDetail;
    SomeOtherDetail someOtherDetail;
}

class SomeDetail {}
class SomeOtherDetail {}
Obviously, the second approach is more explicit and offers a natural home for specific behaviors and rules. In the first approach we would have to introduce some abstract and technical concepts like Operation and DetailValidation which may not align well with the UL.
With a small number of different specializations I'd probably choose the latter without a second thought, but because there are so many of them it seems like I'm leaning more towards dynamic composition (even though being dynamic is not required). Should I?
When to use DDD?
The thing is, DDD is not necessarily the right fit for all systems. It's particularly well suited to large systems with complex business rules.
If the business rules that need expressing to capture the essence of a FireIncident are simple enough to be encoded in a DataValidationRules record and a set of DataFields, then that suggests that perhaps those rules do not require the complexity of a DDD implementation.
The Domain of Data Validation
However, if you acknowledge that, you can shift your perspective towards intending to actually build a pure data validation engine. The domain of data validation should include entities such as data validation rules, and data fields, and would contemplate such questions related to the lifecycles of rules and fields - e.g. 'what happens if a validation rule changes - do all existing records that have previously been validated need revalidation?'
If the lifecycle of a data validation rule itself is complex enough to warrant it, then by all means, use DDD to implement that domain, although you may still choose to use CRUD if you find there are no complex rules or processes in the domain of data validation.
Who are your Domain Experts?
The further consequence of that is that your domain experts are no longer your end users (the people who know about car accidents and fire incidents); they are now the people (most likely specialists) who craft the validation rules and fields. If using DDD, you need to be asking them what types of rules they need and how they need the rules to work, and implementing using the Ubiquitous Language that they use to talk about the art and process of crafting validation rules.
Those people, in turn, would be 'programming' a next level system (you might say they are using a 4GL language tailored to the domain of incident logging) using your data validation engine. The thing is, their domain experts would be the people who know about car accidents. But the specialists wouldn't strictly be using DDD to craft the rules of a car accident, because they would not be expressing their model in software, but in the constrained language of your data capture and validation engine.
Additions following Update
Have been pondering this since your update and had a few more thoughts/questions:
Data Validation vs Entity Lifecycle/Behavior
Most of your concern is around representing data validation rules on create/update. Something that would help me understand is: what behavior/rules are represented by your entities other than data validation? For example, in an incident management system you might track an incident through a set of states such as Reported, WaitingForDispatch, ResponseEnRoute, ResponseOnSite, Resolved, Debriefed; in an insurance system you might track Reported, Verified, AwaitingFunding, Closed, etc.
The reason I ask is that, in the absence of such lifecycle behavior - if the main purpose of your system is pure data validation - I return to my original thought of wondering whether DDD is really the right approach for this system, as DDD brings the greatest value when there is complex behavior to be modelled.
If you do have such lifecycles or other complex behavior, then one possibility is to approach this from the perspective of different bounded contexts: one bounded context for data validation, which uses the approach you've described with more technical abstractions, as that is an efficient way to represent the validations, and another context for lifecycle management, in which you can focus more on business abstractions. If all incidents follow a similar set of lifecycles, that second context would have a much smaller number of specific entities.
Keeping entities sync'd between contexts is a whole 'nother topic, but not too troublesome if you adopt a service bus or some other eventing technology and publish events when things change.
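As a rough sketch (hypothetical names; the bus stands in for whatever messaging technology you adopt), publishing such an event could look like this:

    // The lifecycle-management context publishes an event when an incident changes
    // state; the data-validation context (or any other context) subscribes and
    // updates its own representation.
    record IncidentStateChanged(String incidentId, String newState) {}

    interface EventBus {
        void publish(Object event);     // stands in for the service bus / eventing tech
    }

    class IncidentLifecycle {
        private final EventBus bus;
        private String state = "Reported";

        IncidentLifecycle(EventBus bus) {
            this.bus = bus;
        }

        void dispatchResponse(String incidentId) {
            state = "ResponseEnRoute";
            bus.publish(new IncidentStateChanged(incidentId, state));
        }
    }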
Updates to Validation Rules?
How do your business experts express requests for changes to validation rules? And how do you implement them? I'm guessing from what you've said, they probably express them in domain terms such as 'FireIncident'. But the implementation is interesting - do you have to craft data modification statements in SQL which get applied as part of a deployment?
Inheritance vs Composition
It seems that both are in conflict, because in order to enable composition I'll have to introduce abstractions that are most likely not part of the UL (e.g. Entity-Field composition)
I do not think this is true - composition does not have to require introducing technical abstractions. With either composition or inheritance, the goal is to distil insights into the domain to discover common patterns.
For example, look for common behaviours or data validation sets and find the business-language term that describes the commonality. You might find that RobberyIncident and FireIncident both apply to Buildings.
If using inheritance, you might create a BuildingIncident, and RobberyIncident and FireIncident would extend BuildingIncident.
If using composition, you might create a valueobject to represent a Building and both RobberyIncident and FireIncident would contain a Building property. However RobberyIncident would also contain a Robbery property and FireIncident would also contain a Fire property. CarAccidentIncident and CarRobberyIncident would both contain a Car property, but CarRobberyIncident would also contain a Robbery property of the same type as the Robbery property on RobberyIncident - assuming they are truly common behaviours.
You may still have hundreds of classes representing specialised incident types, but they are simply composed of a set of value object properties representing the common patterns they are composed of - and those value objects can and should be expressed in terms of ubiquitous language concepts.
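For illustration only (the names are assumptions, not taken from your domain), the composition approach might look like this:

    // Specialised incident types composed from value objects that carry the
    // common, ubiquitous-language concepts.
    record Building(String address) {}
    record Robbery(String itemsTaken) {}
    record Fire(String suspectedCause) {}
    record Car(String registration) {}

    class RobberyIncident {
        Building building;   // common concept shared with FireIncident
        Robbery robbery;     // common concept shared with CarRobberyIncident
    }

    class FireIncident {
        Building building;
        Fire fire;
    }

    class CarRobberyIncident {
        Car car;
        Robbery robbery;     // the same Robbery value object as on RobberyIncident
    }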
My take on this is that not all information is pertinent to the domain.
I think that in many instances we try to apply techniques in an "all-or-nothing" approach whereas we may need to be focusing on the "right tool for the job". In the answer provided by Chris he asks the question "When to use DDD?" and mentions "The thing is, DDD is not necessarily the right fit for all systems." I would argue that DDD may not be appropriate for some parts of a system.
Would DDD be useful to create, say, a word processing application? I don't really think so. Although some good old proper OO would go a long way.
DDD is absolutely great for business behaviour focused bits of a system. However, there are going to be bits that can be modeled in a more technical/generic way that feed into more interesting business functionality. I'm sure that those incidents end up in some business process. An example may be a Claim. The business is very interested in tracking a claim and the claim amount, but where that claim came from isn't all too interesting. For all intents and purposes the "initiating documentation" may be filled in using pen and paper and scanned in to be linked to said claim. One could even start a new claim on the system using a plain text input.
I have been involved in a number of systems where a lot of peripheral data was sucked into the system but actually it wasn't really contributing much (law of diminishing returns and such).
I once worked on a loan system. The original 20-year-old system was re-written in C#. The main moving bits:
Client
Loan Amount
Payment schedule
Financial transactions (interest, payments, etc.)
All in all, it is really a simple system. Well, 800+ tables later and stacks of developers/BAs, and the system is somewhat of a monster. One could even capture stock and title deeds as a guarantee. Now, my take would be to scan in some of this information and link it to the loan. However, somehow some business folks decide that they absolutely "must have" this information in the system. It isn't core though, I would say.
On the other end, another system I worked on calculated premiums. It was modeled in quite a business-like way and was a maintenance nightmare. It was then re-written very generically by simply defining calculations that work on given inputs. There were some lookup tables for values and so on, but no business processing as such.
Sometimes we may need to abstract moving bits into something that makes sense as an input or an output and then use that in our domain. I think the UL should be used by ourselves and the domain experts, but that doesn't mean we won't end up using technical concepts that are not part of the UL, and I think that that is okay. I'm sure a domain expert wouldn't care much for a SqlDbConnection even though we are going to be using one of those in our code :) - similarly, we could model some structures outside of the domain proper.
In response to your update and question: I would not create a concrete class unless it really does feature in the UL in a big way. On a side note, I still favour composition over inheritance. I typically implement interfaces where necessary and go with abstract classes when inheriting, just to place some default behaviour when it helps.
The UL, as with any design, represents a model with nuances. We can apply DDD without using domain events. When we do use domain events we may even go with event sourcing. Event sourcing has very little to do with the UL in much the same way that the terms "Aggregate", "Entity", or "Value Object" would. The UL is going to be specific to the domain / domain experts and when we, as domain modelers, talk to each other we can describe various models in terms of DDD tactical patterns in order to bring across some of the specific UL concepts.
We have to listen to how a domain expert describes the problem space. As soon as we hear "When", as stated in so many other places, we know that we are probably dealing with an event. In much the same way we can listen to how a domain expert talks about the aggregates. For instance (totally bogus example):
"When a customer is registered we need to inform the supervisor of the CSR that initiated the request"
More loosely related to your example:
"When an incident takes place we need to capture some specific details regarding the incident. Depending on the type we need to capture different bits and validate that we have sufficient data to process our claim
Between these two we can see a distinct difference in how they talk about interacting with the problem space. When a domain expert thinks of something in very broad terms, I think it is prudent that we do the same.
On the other hand, should the conversation be more along the lines of this:
"When a car accident is registered we need to assign an assessor an wait for an assessment report that has to answer..."
Now we have something much more specific. These aren't necessarily mutually exclusive: if they only ever talk about specifics then we go with "specific", but if they first mention things in broad terms and then specifics, we can also work in broad terms.
This is where our modeling is tricky to get right. It is the same nuance as we have in the Address as an aggregate vs value object "debate". It all depends on the context.
These things are going to be tricky and domain-dependent to get right. As Eric Evans mentioned, it may take a couple of models to get something that fits just right. This is necessarily so, based on one's experience with the domain.

How can a Domain Model interact with UI and Data without being dependent on them?

I have read there are good design patterns that resolve the following conflicting requirements: 1.) A domain model (DM) shouldn't be dependent on other layers like the UI and data persistence layers. 2.) The DM needs to interact with the UI and data persistence layers. What patterns resolve this conflict?
I'm not sure if you can call it a design pattern or not, but I believe that what you are looking for is the Dependency Inversion Principle (DIP).
The principle states that:
A. High-level modules should not depend on low-level modules. Both should depend on abstractions.
B. Abstractions should not depend on details. Details should depend on abstractions.
- Wikipedia
When you apply this principle to the traditional Layered Architecture, you end up pretty much with the widely adopted Onion/Hexagonal/Ports & Adapters/etc. architecture.
For instance, instead of the traditional Presentation -> Application -> Domain -> Infrastructure stack, where the domain depends on infrastructure details, you invert the dependency and make the Infrastructure layer depend on an interface defined in the Domain layer. This allows the domain to depend on nothing but itself.
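A minimal sketch of that inversion (hypothetical names, not tied to any particular framework):

    // The Domain layer owns the abstraction, the Infrastructure layer implements it,
    // so the domain depends on nothing but itself.

    // Domain layer
    class Customer {
        String id;
        String name;
    }

    interface CustomerRepository {          // abstraction defined *in* the Domain layer
        Customer findById(String id);
        void save(Customer customer);
    }

    // Infrastructure layer - depends on the Domain, not the other way around
    class SqlCustomerRepository implements CustomerRepository {
        @Override
        public Customer findById(String id) {
            // SQL/ORM details live here, invisible to the domain
            return new Customer();
        }

        @Override
        public void save(Customer customer) {
            // persistence details
        }
    }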
The DM needs to interact with the UI
About that, I cannot see any scenario where the domain should be aware of the UI.
This all really comes down to the use case of the software project. Use cases do not specify any sort of implementation in a project. You can do whatever you want, as long as you meet these specific project requirements.
There are fundamental building blocks that are necessary to meet these project requirements. For example, you cannot print a business report with last year's pencil taxes without having the actual number to print. You need that data, no matter what.
Then databases become the next level of implementation. Everything in the database is a fundamental building block that is required to complete the use case. You just simply cannot complete the use cases without it.
We don't want our users to just have a command line SQL program and do all the use cases by that, because that would take forever. Imagine every user having to know and understand the domain model behind your software, just to figure out what value to read to determine the font color of your title screen. Nobody is going to buy your software.
We may need more than a simple domain model to satisfy the use case from our customer. Let's build a program that will serve as a tool for the user to access the data and update the data. We can reduce the knowledge and time required to perform this use case. For example, we can just make a button that loads the screen.
While the model, view, and controller are all viewed as being right next to each other on all the diagrams we see, they really belong stacked on top of each other. You can have a database without a view or a controller, but not vice versa. To build a view or controller, you must know what you are interacting with. You still need the fundamental pieces of data required to accomplish the purpose (which, you can find in the database).

OOP - Can a part of the application you are designing also be an actor?

I am studying Object-Oriented Design and am using use cases with actors and scenarios to plan out the application I am trying to build. No specific language yet, just the theory at the moment.
I have come to the point where I have identified and written out the use cases for the users, administrator, owner, etc., and also the external systems like the feed generator.
But I have come to realise that my application actually consists of multiple smaller apps, like a data-gathering application and an analysis application.
Can/should I use the data-gathering and analysis apps as actors in the overall application too?
I can write specific use cases for them, with scenarios etc.
Typically, no.
An actor is an entity that sits outside of the system and produces some action. It gets to the system boundary, but then all interactions between system components are modeled not as use cases, but as, for example, dynamic or sequence diagrams.
For the record, I think this approach is flawed and doesn't really help you in building applications. I personally prefer thinking about components and their interactions directly, without forcing the idea of architecture to fit a particular modeling scheme.

Difference between code tangling and cohesion?

In relation to crosscutting concerns and aspect-oriented programming, you often read about code tangling. This article describes code tangling as:
Modules in a software system may simultaneously interact with several requirements. For example, oftentimes developers simultaneously think about business logic, performance, synchronization, logging, and security. Such a multitude of requirements results in the simultaneous presence of elements from each concern's implementation, resulting in code tangling.
Isn't that exactly the same as low cohesion? Is there any difference between high tangling and low cohesion, or are that two different words describing the same thing?
According to Wikipedia:
The implementation of a concern is tangled if its code is intermixed with code that implements other concerns. The module in which tangling occurs is not cohesive.
Cohesion is decreased if:
- The functionalities embedded in a class, accessed through its methods, have little in common.
- Methods carry out many varied activities, often using coarsely-grained or unrelated sets of data.
So... when the code is tangled, it would violate SOLID principles such as the Single Responsibility Principle, the Open/Closed Principle, etc.
All these principles most often go together, and violation of one principle/best practice leads to violation of another.
But tangling doesn't necessarily mean that the code is not cohesive.
For example, we could have a class called SecurityChecker, which authenticates a user and logs all authentication-related activities.
Clearly this would be handling multiple concerns, namely Authentication and Logging. Therefore it would be a tangled class.
On the other hand, both these concerns would be operating on the same set of data, which in this case could be user data, times of logon, number of login attempts, etc. Therefore cohesion could still be high.
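A small, purely illustrative sketch of such a class (names and checks are made up):

    import java.util.HashMap;
    import java.util.Map;

    // Authentication and logging are tangled in one class, yet both operate on the
    // same login data, so cohesion can still be high.
    class SecurityChecker {
        private final Map<String, Integer> loginAttempts = new HashMap<>();

        boolean authenticate(String user, String password) {
            int attempts = loginAttempts.merge(user, 1, Integer::sum);
            boolean ok = checkCredentials(user, password);
            // logging concern, intermixed with the authentication concern
            System.out.println(user + " login attempt #" + attempts + " -> " + ok);
            return ok;
        }

        private boolean checkCredentials(String user, String password) {
            return password != null && !password.isEmpty();   // placeholder check
        }
    }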
Generally, most of these principles/guidelines/best practices look at the same issue from different perspectives, and the end goal is to manage dependencies between different components/classes so that the overall design is more maintainable, efficient and elegant in the long run.
Very similar yes.
Cohesion is used to indicate the degree to which a class has a single, well-focused purpose.
Therefore if you have a Class with a single well-focused purpose then it would follow that it's not "tangled" by trying to do more than one thing.

How to design a business logic layer

To be perfectly clear, I do not expect a solution to this problem. A big part of figuring this out is obviously solving the problem. However, I don't have a lot of experience with well architected n-tier applications and I don't want to end up with an unruly BLL.
At the time of writing, our business logic is largely an intermingled ball of twine: an intergalactic mess of dependencies, with the same business logic replicated more than once. My focus right now is to pull the business logic out of the thing we refer to as a data access layer, so that I can define well-known events that can be subscribed to. I think I want to support an event-driven/reactive programming model.
My hope is that there's certain attainable goals that tell me how to design these collection of classes in a manner well suited for business logic. If there are things that differentiate a good BLL from a bad BLL I'd like to hear more about them.
As a seasoned programmer but fairly modest architect I ask my fellow community members for advice.
Edit 1:
So the validation logic goes into the business objects, but that means the business objects need to communicate validation errors/logic back to the GUI. That gets me thinking of implementing business operations as objects rather than methods, to provide a lot more metadata about the necessities of an operation. I'm not a big fan of code cloning.
Kind of a broad question. Separate your DB from your business logic (horrible term) with ORM tech (NHibernate perhaps?). That lets you stay in OO land mostly (obviously), and you can mostly ignore the DB side of things from an architectural point of view.
Moving on, I find Domain Driven Design (DDD) to be the most successful method for breaking a complex system into manageable chunks, and although it gets no respect I genuinely find UML - especially action and class diagrams - to be critically useful in understanding and communicating system design.
General advice: interface everything, build your unit tests from the start, and learn to recognise and separate the reusable service components that can exist as subsystems. FWIW, if there's a bunch of you working on this, I'd also agree on and aggressively use StyleCop from the get-go :)
I have found some of the practices of Domain-Driven Design to be excellent when it comes to splitting up complex business logic into more manageable/testable chunks.
Have a look through the sample code from the following link:
http://dddpds.codeplex.com/
DDD focuses on your Domain layer, or BLL if you like. I hope it helps.
We're just talking about this from an architecture standpoint, and what remains as the gist of it is "abstraction, abstraction, abstraction".
You could use EBC to design top-down and pass the interface definitions to the programmer teams. Using a methodology like this (or any other visualisation technique) to visualize the dependencies prevents you from duplicating business logic anywhere in your project.
Hmm, I can tell you the technique we used for a rather large database-centered application. We had one class which managed the data layer, as you suggested, with the suffix DL. We had a program which automatically generated this source file (which was quite convenient), though it also meant that if we wanted to extend functionality, you needed to derive from the class, since regenerating the source would overwrite it.
We had another file ending with OBJ, which simply defined the actual database row handled by the data layer.
And last but not least, there was a file ending in BS (standing for business logic), the only file not generated automatically, which, backed by a well-formed base class, defined event methods such as "New" and "Save" so that calling the base performed the default action. Therefore, any deviation from the norm could be handled in this file (including complete rewrites of default functionality if necessary).
You should create a single group of such files for each table and its child (or grandchild) tables which derive from that master table. You'll also need a factory which contains the full names of all objects so that any object can be created via reflection. So to patch the program, you'd merely have to derive from the base functionality and update a line in the database so that the factory creates that object rather than the default.
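Roughly along these lines (a simplified sketch with hypothetical names; in our system the key-to-class-name mapping lived in a database table):

    import java.util.HashMap;
    import java.util.Map;

    // The factory looks up a fully-qualified class name and creates the object via
    // reflection, so a patched subclass can replace the default without recompiling
    // the callers.
    class BusinessObjectFactory {
        private final Map<String, String> classNames = new HashMap<>();

        void register(String key, String fullyQualifiedClassName) {
            classNames.put(key, fullyQualifiedClassName);
        }

        Object create(String key) throws Exception {
            String className = classNames.get(key);
            return Class.forName(className).getDeclaredConstructor().newInstance();
        }
    }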
Hope that helps, though I'll leave this a community wiki response so perhaps you can get some more feedback on this suggestion.
Have a look in this thread. May give you some thoughts.
How should my business logic interact with my data layer?
This guide from Microsoft could also be helpful.
Regarding "Edit 1" - I've encountered exactly that problem many times. I agree with you completely: there are multiple places where the same validation must occur.
The way I've resolved it in the past is to encapsulate the validation rules somehow. Metadata/XML, separate objects, whatever. Just make sure it's something that can be requested from the business objects, taken somewhere else and executed there. That way, you're writing the validation code once, and it can be executed by your business objects or UI objects, or possibly even by third-party consumers of your code.
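For example, something along these lines (hypothetical names; just one way of encapsulating a rule as an object):

    import java.util.List;
    import java.util.Optional;

    // A validation rule encapsulated as its own object, so the same rule can be
    // requested from a business object and executed by the business layer, the UI,
    // or a third-party consumer of the code.
    interface ValidationRule<T> {
        Optional<String> validate(T target);    // empty = valid, otherwise an error message
    }

    class Person {
        String lastName;

        // the business object exposes its rules; callers run them wherever they need to
        List<ValidationRule<Person>> validationRules() {
            return List.of(new LastNameRequired());
        }
    }

    class LastNameRequired implements ValidationRule<Person> {
        @Override
        public Optional<String> validate(Person person) {
            return (person.lastName == null || person.lastName.isBlank())
                    ? Optional.of("Last name is a required field")
                    : Optional.empty();
        }
    }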
There is one caveat: some validation rules are easy to encapsulate/transport; "last name is a required field" for example. However, some of your validation rules may be too complex and involve far too many objects to be easily encapsulated or described in metadata: "user can include that coupon only if they aren't an employee, and the order is placed on labor day weekend, and they have between 2 and 5 items of this particular type in their cart, unless they also have these other items in their cart, but only if the color is one of our 'premiere sale' colors, except blah blah blah...." - you know how business 'logic' is! ;)
In those cases, I usually just accept the fact that there will be some additional validation done only at the business layer, and ensure there's a way for those errors to be propagated back to the UI layer when they occur (you're going to need that communication channel anyway, to report back persistence-layer errors).