Clean Architecture: why not use the entity as the request model of the use case (interactor)? [closed] - oop

I have read the PPP book and the Clean Code, Clean Coder, and Clean Architecture books.
I know that:
Clean architecture is a layered architecture
What open-layered and closed-layered architectures are
The Clean Architecture book suggests that each layer can access all of its inner layers, not only the very next inner layer
So I assume that Clean Architecture does not force being closed-layered and allows being open-layered, meaning that, for example, the UI, which is in the frameworks layer, can directly access an Entity, jumping two layers on the way.
And I understand that if Clean Architecture forced being closed-layered, we could not implement a repository interface directly in the frameworks layer; we would have to implement it in terms of the next layer, and that layer would have to implement it in terms of its next layer, and so on.
Now my question is: why can't we introduce the Entity as the parameter type of the use case or controller directly? Why do we have to define data structures or DTOs in the middle layers and bother converting the entity to data structures and returning them as a response, when we are allowed to use and see the Entity in the controller layer because the access rule is not violated?
Consider this example; suppose we have:
JobView
JobController
JobUseCase(RequestModel) : ResponseModel
JobEntity
If JobView wants to call JobController, it should pass a RequestModel. Could we simply use JobEntity as the RequestModel, like so:
JobView
JobController
JobUseCase(JobEntity)
JobEntity
I know that doing so will increase the fragility of the code, because then if we change JobEntity, JobView has to change. But does Clean Architecture as a rule force the code to become fragile or rigid, in violation of the SOLID principles?!

Why not use the entity as the request model of the use case?
You have answered this question yourself: Even as you do not break the dependency rule, it will increase the fragility of the code.
why can't we introduce the Entity as the parameter type of the use case or controller directly? Why do we have to define data structures or DTOs in the middle layers and bother converting the entity to data structures and returning them as a response, when we are allowed to use and see the Entity in the controller layer because the access rule is not violated?
The (critical business) Entities and the DTOs are in the application for very different reasons. Entities should encompass the critical business rules and have nothing to do with communication between adapters and interactors. DTOs should be implemented in whatever way best serves that communication, and have no immediate reason to depend on business entities.
Even if an entity happens to have exactly the same code as a DTO, this should be considered a coincidence, as their reasons to change are completely different (Single Responsibility Principle). This might seem to collide with the popular DRY principle (Don't Repeat Yourself), but DRY states that knowledge should not be duplicated; code might still look the same in different parts of the application as long as it changes for different reasons.
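To make the distinction concrete, here is a minimal Java sketch (all names are hypothetical, not taken from the question) of an entity and a request-model DTO that happen to carry similar data but live in different layers and change for different reasons:

// Entities layer: encapsulates a critical business rule.
class JobEntity {
    private final String title;
    private final int salary;

    JobEntity(String title, int salary) {
        if (salary < 0) {
            throw new IllegalArgumentException("salary must be non-negative");
        }
        this.title = title;
        this.salary = salary;
    }

    String getTitle() { return title; }
    int getSalary() { return salary; }
}

// Use-case layer: a plain data carrier shaped for the boundary.
// It may look like the entity today, but it changes for delivery reasons
// (a new field for the UI, a renamed field for a client), not for business reasons.
class JobRequestModel {
    String title;
    int salary;
}

If the view later needs to submit, say, a currency code, only JobRequestModel grows; JobEntity stays untouched unless a business rule actually changes.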

Not sure I understand the reasoning behind your question:
Does Clean Architecture as a rule force the code to become fragile or rigid, in violation of the SOLID principles?
How could Clean Architecture possibly force rigidity and fragility? Defining an architecture is all about taking care, across the whole system, of fundamental OOP principles such as SOLID and others…
On the other hand, your following example would definitely denature Clean Architecture:
JobView > JobController > JobUseCase(JobEntity) > JobEntity
This implicitly tells us that you've most likely retrieved your entity from the controller, which completely misses the point of the Interactor (or use case) and therefore of Clean Architecture.
Interactors encapsulate application business rules, such as interactions with entities and CRUD on entities, performed via the Entity Gateway, which in turn encapsulates the infrastructure layer.
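As a rough sketch of that flow (hypothetical names, not from the original answer), the interactor, not the controller, retrieves and manipulates the entity through the gateway, and maps it back to a DTO before crossing the boundary:

class JobEntity {
    private final String title;
    JobEntity(String title) { this.title = title; }
    String getTitle() { return title; }
}

class JobRequestModel { long jobId; String title; }
class JobResponseModel { String title; }

// Port owned by the use-case layer; implemented out in the frameworks layer.
interface JobEntityGateway {
    JobEntity findById(long id);
    void save(JobEntity job);
}

class JobUseCase {
    private final JobEntityGateway gateway;

    JobUseCase(JobEntityGateway gateway) { this.gateway = gateway; }

    JobResponseModel handle(JobRequestModel request) {
        // The interactor, not the controller, touches the entity.
        JobEntity job = gateway.findById(request.jobId);
        job = new JobEntity(request.title); // apply the requested change
        gateway.save(job);

        // The entity is mapped to a response DTO before crossing the boundary.
        JobResponseModel response = new JobResponseModel();
        response.title = job.getTitle();
        return response;
    }
}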
Furthermore, in the Clean Architecture context, your entities, which are part of your model layer, should have nothing to do with your controller, which is part of your delivery mechanism, or more exactly, which is the evaluator of the HTTP request message. Denaturing the lower-level component that is the controller in this way would negatively affect SRP (=> fragility increase) and the degree of decoupling between your components (=> rigidity increase).
You say:
And I understand that if Clean Architecture forced being closed-layered, we could not implement a repository interface directly in the frameworks layer; we would have to implement it in terms of the next layer, and that layer would have to implement it in terms of its next layer, and so on.
Your entity framework's RepositoryInterface and its implementations belong to the infrastructure layer, and they should be wrapped and adapted by the entity gateway. The Law of Demeter might be important to respect here, since we are talking about the implementation of the port (EntityGatewayInterface) of the business model's closed layer.
Finally, for the reasons above, I suspect the following assumption is wrong, and so would be all the further assumptions based on it, leading you into total confusion:
So I assume that Clean Architecture does not force being closed-layered and allows being open-layered, meaning that, for example, the UI, which is in the frameworks layer, can directly access an Entity, jumping two layers on the way.
But whether or not it forces being closed-layered, Clean Architecture explicitly and concretely defines itself (the relations between components), as in the UML class diagram below:
I can see only a closed-layered architecture in that diagram…
It seems to me that an open layer is an oxymoron: it does not constrain what a layer is supposed to constrain by nature, because by definition a layer is an isolation, an abstraction of a group of components reduced to its port, meant to reduce technical debt such as fragility, etc.
Additional Resources
A conference talk by Uncle Bob that sums up well why and how to implement Clean Architecture: https://www.youtube.com/watch?v=o_TH-Y78tt4

The above answers are accurate, but I'd like to point out why this creates confusion, as I've seen it happen before: from a dependency perspective, there is nothing wrong with passing entities across the boundaries. What you cannot pass is any type that has a dependency on an outer layer; that's a no-no for obvious reasons. Much of the book talks about dependency issues, so that creates confusion: why aren't the entities OK?
As stated above, entities need to observe SRP just like any other code. If you use entities for data transfer purposes, you have introduced an unnecessary coupling. When the entity needs to change for a business reason, at the very least the mapping code and maybe more in the outer layer need to change in response.
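For illustration, a minimal sketch (reusing the hypothetical JobEntity and JobResponseModel shapes from the earlier sketches) of the mapping code that absorbs entity changes at the boundary:

// Lives at the boundary: ideally the only code that must change
// when the entity's shape changes for a business reason.
class JobMapper {
    static JobResponseModel toResponse(JobEntity job) {
        JobResponseModel dto = new JobResponseModel();
        dto.title = job.getTitle();
        // If JobEntity gains or renames a field, the change is absorbed
        // here instead of rippling out into the view.
        return dto;
    }
}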

Related

A query on software design/architecture

In the book 'Patterns of Enterprise Application Architecture', the following is stated:
When thinking of a system in terms of layers, you imagine the principal subsystems in the software arranged in some form of layer cake, where each layer rests on a lower layer. In this scheme the higher layer uses various services defined by the lower layer, but the lower layer is unaware of the higher layer.
On the other hand, in the book 'Agile Principles, Patterns, and Practices', the following is stated about the Dependency Inversion Principle:
High-level modules should not depend on low-level modules. Both should depend on abstractions.
This sort of confuses me. Am I mixing two different things here?
I suppose that it could speak to some of the same principles, but at different levels of granularity. I would still view Dependency Inversion as something that stands on its own, however.
In the first instance, consider this example - in a simple layered architecture, you might have a presentation layer built in JavaScript, a business logic layer built in Java, and a data layer in SQL Server. These layers could be developed by different teams of people. The presentation layer knows how to make API calls to the business logic layer, but not the other way around. The business logic layer knows how to read/write to and from the database layer, but not the other way around. The distinction here happens at a high-level - you might even call it conceptual.
In the second instance, you want to prevent scenarios where supposedly generic code depends on specific implementations - and at this point, I see it as a relatively low-level concern that falls within the scope of a particular application (i.e. in code, not conceptually as in the previous example). If you have code that writes to a database, but you want to support different implementations - e.g. MySQL and SQL Server, where each of those might have some specific intricacies, don't make the calling code explicitly depend on either of those - abstract away the complexity through an interface. This is what the Dependency Inversion Principle addresses (see below).
public class UserService {
    private UserRepository repository;

    public void setUserRepository(UserRepository repository) {
        // Is this a MySqlRepository or a SqlServerRepository? We don't care -
        // the dependency is managed outside and can change without changing this class.
        this.repository = repository;
    }

    public void persistUser(User user) {
        // We just care about the abstraction - the UserRepository interface.
        this.repository.persist(user);
    }
}
Am I mixing two different things here?
Yes.
The first case is about dependencies at run time, while the second case is about dependencies at compile time.
For example, at run time, the business layer can call the functions/methods implemented in the database layer. But at compile time, the business layer code does not mention any code in the database layer. This is usually achieved with polymorphism.
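A minimal sketch of that distinction (hypothetical names): at compile time the business layer mentions only an abstraction it owns; at run time the call lands in the database layer.

// Business layer: owns the abstraction, never names the database layer.
interface OrderStore {
    void save(String orderId);
}

class OrderService {
    private final OrderStore store;

    OrderService(OrderStore store) { this.store = store; }

    void placeOrder(String orderId) {
        store.save(orderId); // at run time this dispatches into the database layer
    }
}

// Database layer: depends on the business layer's abstraction, not vice versa.
class SqlOrderStore implements OrderStore {
    public void save(String orderId) {
        System.out.println("INSERT order " + orderId); // stand-in for real SQL
    }
}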

Clean Architecture: UseCase Output Port

I have a question regarding the "Use Case Output Port" in Uncle Bob's Clean Architecture.
In the image, Uncle Bob describes the port as an interface. I am wondering whether it has to be that way, or whether the invoked Use Case Interactor could also return a "simple" value. In either case, the Application and Business Rules layer would define the interface that the Interface Adapters layer has to use. So I think that for simple invocations, just returning a value would not violate the architectural idea.
Is that true?
Additionally, I think this Output Port interface implemented by the presenter should work like the Observer pattern: the presenter simply observes the interactor for relevant "events". In the case of .NET, where events are first-class citizens, I think using one of them is the same idea.
Are these thoughts compatible with the ideas behind Clean Architecture?
Howzit OP. I see your question is still unanswered after all these years and I hope we can reason about this and provide some clarity. I also hope I am understanding your question correctly. So with that in mind, here is how I see the solution:
The short answer is: a use case interactor should be able to return a simple value (by which I assume string, int, bool, etc.) without breaking any architectural rules.
If we go over the onion architecture, which is very similar to Clean Architecture, the idea is to encapsulate the core business logic in the center of the architecture: the domain. The corresponding concepts in Clean Architecture are the entities and the use cases on top of them. We do this because we want to express our understanding of the business in a consistent way when we write our business rules.
The interface adapters allow us to convert the outside world to our understanding. What we want is a contract in our domain (use cases or entities) that ensures we will get what we need from the outside world, without knowing any implementation details. We also don't care what the outside world calls it; we convert their understanding to ours.
A common way to do this is to define an interface in the domain that establishes a contract saying: we expect to give "x", and you must then tell us what "y" is. The implementation can then sit outside the domain.
Now to get to the core of your question. Let's assume that the core of our application is to track some complicated process with various stages. During one of these stages, we need to send data to a couple of external parties, and we want to keep a reference of some sort for auditing purposes. In such a case, our interface may sit in the domain and state that we send our complicated object to some party and expect a string reference back. We can then use this string reference and fire some domain event, etc. The implementation can sit completely outside of the domain, call external APIs, and do its thing, but our core domain is unaffected. Hence returning a simple value has no impact on the architecture. The reverse of the above scenario may also hold true: we can say that we have a reference id of some sort, and the outside world needs to return us our understanding of some object.
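A minimal Java sketch of that scenario (hypothetical names): the domain owns the contract, the implementation sits outside, and a plain String crosses back without breaking any rule.

class ComplicatedProcess { /* domain object holding the stage data */ }

// Contract owned by the domain: we give you the process, you tell us the reference.
interface AuditDispatcher {
    String dispatch(ComplicatedProcess process);
}

class TrackStageInteractor {
    private final AuditDispatcher dispatcher;

    TrackStageInteractor(AuditDispatcher dispatcher) { this.dispatcher = dispatcher; }

    String execute(ComplicatedProcess process) {
        // Returning a simple value keeps the domain in charge of the contract.
        return dispatcher.dispatch(process);
    }
}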
For the second part of your question: I would imagine it depends on the use case itself. If you put some idea out there and need to constantly react to it, domain events will get involved and you will have a structure very similar to the observer pattern. .NET encapsulates events very nicely, and they fit very well with Clean Architecture and domain-driven design.
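A sketch of that observer-like shape in Java (hypothetical names): the presenter implements the output port, and the interactor pushes results outward without knowing the concrete presenter.

// Output port defined in the use-case layer.
interface JobOutputPort {
    void present(String result);
}

// Presenter in the interface-adapters layer "observes" the interactor
// by implementing the port.
class JobPresenter implements JobOutputPort {
    public void present(String result) {
        System.out.println("View model: " + result);
    }
}

class JobInteractor {
    private final JobOutputPort output;

    JobInteractor(JobOutputPort output) { this.output = output; }

    void execute(String input) {
        output.present(input.toUpperCase()); // push the result outward
    }
}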
Please let me know if the above makes sense or if I can clarify it in any way.

anemic data model, dao's, ... authoritative reference?

Although an experienced programmer and architect, I find that the same old basic problem keeps coming back. I have my own religion about it, but I need some authoritative source.
Are anemic data models ( (c) Martin Fowler?) inherently bad? Should a cake be able to bake itself? Should an invoice know how (and when it should allow) to add lines to itself, or should another layer do that? rabbit.addToHole(hole) or hole.addRabbit(rabbit)? Has it been proved that an ADM is more bug-prone, or easier to maintain, or anything?
You can find a lot of claims on the web, but I'd really want some authoritative quotes, references or facts, if possible from both sides.
See this stackoverflow answer for enlightenment.
And this is my opinion:
An ADM (Anemic Domain Model) cannot be represented with a UML class diagram
An anemic domain model is bad only in terms of full OOP. It is considered bad design mainly because you cannot create UML classes and relations with the behavior embedded inside them. For example, take your Invoice class with a Rich Domain Model (RDM):
Class Name: Order
Implemented: ICommittable, IDraftable, ...
Attributes: No, UserId, TotalAmount, ...
Behavior: Commit(), SaveDraft(), ...
The class is self-documenting and self-explanatory about what it can and cannot do.
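A minimal Java sketch of that rich model (method bodies elided, names as above):

interface ICommittable { void commit(); }
interface IDraftable { void saveDraft(); }

// Rich domain model: the behavior lives on the class itself,
// so the class advertises what it can and cannot do.
class Order implements ICommittable, IDraftable {
    private String no;
    private long userId;
    private double totalAmount;

    public void commit() {
        // business rules for committing the order go here
    }

    public void saveDraft() {
        // business rules for drafting go here
    }
}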
With an anemic domain model, the class does not have the behavior, and we need to search for which class is responsible for committing and saving drafts. And since a UML class diagram only shows the relations between classes (one-to-many / many-to-many / aggregate / composite), the relation with the service class cannot be documented, and Martin Fowler has a point:
In general, the more behavior you find in the services, the more likely you are to be robbing yourself of the benefits of a domain model. If all your logic is in services, you've robbed yourself blind.
This is based on the UML class diagrams in the OOAD book by Lars Mathiassen. I don't know whether newer UML class diagrams can represent service classes.
SRP
From ADM's point of view (and that of composition over inheritance), RDM (rich domain model) violates SRP. That may be true, but you can refer to this question for discussion.
In short, from ADM's point of view, SRP means one class doing one thing and one thing only. Any change to the class has one, and only one, reason.
From RDM's point of view, SRP means all responsibility related to, and only to, the interface itself. As soon as an operation involves another class, the operation needs to be moved to another interface. The implementation itself may vary, since a class can implement two or more interfaces. Simply put: if an operation in an interface needs to change, it is for one and only one reason.
ADM tends to be abused with static methods, and dirty hacks may follow
ADM is very easy to abuse with static methods in service classes. The same can be done with RDM, but it needs another layer of abstraction and is not worth it. Static methods are usually a sign of bad design: they reduce testability, may introduce race conditions, and hide dependencies.
ADM invites many dirty hacks, because the operations are not constrained by the object definition (hey, I can create another class for this!). In the hands of a bad designer, this can become catastrophic. In RDM it is harder; please read the next point for more information.
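A sketch of the kind of abuse meant here (hypothetical names): with an anemic model, nothing stops a second, rule-bypassing service from operating on the same data.

class OrderData { // anemic: just fields, no rules
    String no;
    double totalAmount;
}

class OrderCommitService {
    static void commit(OrderData order) { /* the "official" rules */ }
}

// Nothing in OrderData prevents this parallel helper from skipping the rules.
class OrderHackService {
    static void commitWithoutValidation(OrderData order) { /* dirty hack */ }
}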
RDM implementations usually cannot be reused or mocked; RDM requires knowing the system's behavior beforehand
Usually an RDM implementation cannot be reused or mocked. In a TDD setting, this reduces testability (please correct me if there is an RDM that can be mocked and reused). Imagine a situation with this inheritance tree:
A
/ \
B C
If B needs logic implemented in C, it cannot be done. Using composition over inheritance, it can be achieved. In RDM, it can be done with a design like this:
A
|
D
/ \
B C
which introduces more inheritance. However, in order to achieve a neat design early, you need to know the system flow firsthand. That is, RDM requires you to know the system's behavior before doing any design; otherwise you won't know which interfaces (ISubmitable, IUpdateable, ICrushable, IRenderable, ISoluble, etc.) are suitable for your system.
Conclusion
That's all my opinion about this kind of holy war. Both have pros and cons. I usually go for ADM because it seems to offer higher flexibility, even if less reliability. Regardless of ADM or RDM, if you design your system badly, maintenance is hard. Any type of chainsaw only shines when held by a skillful carpenter.
I think the accepted answer to this question best answers your question too.
Things that I think are essential to remember:
ADM is adequate for CRUD applications, and since most apps start out this way, it's OK as a starting architecture; you can evolve from there via refactoring if needed, but there's no point in over-designing an application right from the start
once complexity starts to grow and business rules start to pile up, it's less convenient to keep the model anemic: separating the rules from the objects they act upon makes it hard to remember all the rules that apply when you look at the object
if the rules are in the domain objects, they are also conducive to writing tests; if they're elsewhere (say, in stateless services), you don't know what a domain object can do or what all the constraints that apply to it are, so you can't write proper tests for it (think of orthogonal rules modelled in distinct services)
there's a distinction to be made between really simple applications and anemic domain models: in a really simple application there is not much business logic, while in an anemic domain model the logic exists but is kept separate from the domain model

Strategy for Sharing Business and Data Access Entities

I'm designing a layered application where 90% of the business and data access entities have the same properties. Basically, it doesn't make sense to create one set of classes for each layer (and map between them) with the same properties just for the sake of separation of concerns. I'm well aware of automappers, but I'd rather not use one in this case, as I think it's unnecessary. Is it OK to share the business entities between the business and data access layers in this scenario? We will manage the remaining 10% of the classes by creating ad hoc/transformed classes within the same namespace.
Any other design approach?
I think sharing between layers is the whole point of having model classes backed by a data store. I would avoid adding unnecessary architecture unless the code really needs it. If you get to a point where you need to be agnostic of the data store, or some other similar situation, you might look into the Repository pattern. Simple code = maintainable code.
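If you do later need that indirection, a minimal Repository sketch (hypothetical names) keeps the shared entities while isolating the data store:

class Customer { // entity shared by the business and data access layers
    long id;
    String name;
}

// The business layer depends only on this abstraction...
interface CustomerRepository {
    Customer findById(long id);
    void save(Customer customer);
}

// ...while the data access layer supplies the store-specific part.
class SqlCustomerRepository implements CustomerRepository {
    public Customer findById(long id) { /* query the database */ return new Customer(); }
    public void save(Customer customer) { /* persist via SQL */ }
}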

Design Principles [closed]

What principles do you generally follow when doing class design?
Principles of Object-Oriented Class Design (the "SOLID" principles):
SRP: The Single Responsibility Principle. A class should have one, and only one, reason to change.
OCP: The Open Closed Principle. You should be able to extend a class's behavior without modifying it.
LSP: The Liskov Substitution Principle. Derived classes must be substitutable for their base classes.
ISP: The Interface Segregation Principle. Make fine-grained interfaces that are client-specific.
DIP: The Dependency Inversion Principle. Depend on abstractions, not on concretions.
Source: http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod
Video: Clean Coding by Robert C. Martin (Uncle Bob)
Don't forget the Law of Demeter.
The S.O.L.I.D. principles.
Or at least I try not to steer away too much from them.
The most fundamental design principle should be KISS (keep it simple, stupid), which means that sometimes not using classes for some elements at all is the right solution.
That, and CRC (Class, Responsibility, Collaborators) cards. Write the cards down in your header files, not on actual cards; that way they become easy-to-understand documentation too.
As mentioned above, some of the fundamental Object Oriented Design principles are OCP, LSP, DIP and ISP.
An excellent overview of these by Robert C. Martin (of Object Mentor) is available here: OOD Principles and Patterns
The "Resource Acquisition Is Initialization" paradigm is handy, particularly when writing in C++ and dealing with operating system resources (file handles, ports, etc.).
A key benefit of this approach is that an object, once created, is "complete" - there is no need for two-phase initialization and no possibility of partially-initialized objects.
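Since the other code examples in this document are Java, the closest analog there is try-with-resources; a minimal sketch of the same "complete at creation, released deterministically" idea:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

class RaiiAnalog {
    static String readFirstLine(String path) throws IOException {
        // The reader is fully initialized at creation and released
        // deterministically when the block exits, even on exceptions.
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        }
    }
}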
loosely coupled, highly cohesive.
Composition over inheritance.
Domain Driven Design is generally a good principle to follow.
Basically, I get by with programming to interfaces. I try to encapsulate what changes between cases to avoid code duplication and to isolate code into manageable (for my brain) chunks. Later, if I need to, I can refactor the code quite easily.
The SOLID principles, in particular the Liskov Substitution Principle and the Single Responsibility Principle.
One thing I would like to add to all this is layering: define the layers in your application, the overall responsibility of each layer, and the way two layers will interact. Only classes that have the same responsibility as the layer should be allowed in that layer. Doing this resolves a lot of chaos, ensures exceptions are handled appropriately, and makes sure that new developers know where to place their code.
Another approach is to design your class to be configurable: create a mechanism through which configuration can be plugged into your class, rather than overriding methods in subclasses. Identify what changes, see whether it can be made configurable, and ensure that this functionality is driven by the configuration.
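A minimal sketch of that idea (hypothetical names): the varying behavior is plugged in as configuration rather than overridden in subclasses.

interface DiscountPolicy { // the part that changes
    double apply(double price);
}

class PriceCalculator {
    private final DiscountPolicy policy;

    PriceCalculator(DiscountPolicy policy) { this.policy = policy; }

    double priceFor(double base) {
        return policy.apply(base); // behavior comes from the plugged-in configuration
    }
}

// Usage: swap behavior without subclassing PriceCalculator.
// new PriceCalculator(p -> p * 0.9).priceFor(100.0);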
I usually try to fit the class into one of the OO design patterns.