May a Repository call a Use Case in Clean Architecture?

This is a very tricky question, because when we check the rules, nothing explicitly says a Repository can't call a Use Case. However, it doesn't seem logical.
Is there any definition or good practice that covers this, and why shouldn't a Repository do it?
Thanks!

The short answer is "No" - it shouldn't, regardless of the context (in almost all cases). As to why - the definitions, principles and good practices - it may be helpful to think in terms of a clear separation of concerns across your whole Clean Architecture implementation.
Consider this illustration as background for thinking about how one could organize the interactions (and dependencies) between the main parts of a Clean Architecture.
The main principles illustrated are:
Through its execution, the Use Case has different "data needs" (A and B). It doesn't implement the logic to fulfill them itself (since they require some specific technology). So the Use Case declares these as two Gateway interfaces ("ports"), in this example, and then calls them amid its logic.
Both of these interfaces declare some distinct set of operations that should be provided (implemented) from "outside". The Use Case, in its logic, needs and invokes all of those A and B operations. They are separated into A and B because they are different kinds of responsibilities - and might be implemented by different parts of the system (but not necessarily). Let's say that the Use Case needs loading of persisted domain objects (as part of the A operations), but it also needs to retrieve configuration (as some key-value pairs), which are the B operations. These interfaces are segregated since the two sets of operations serve distinct purposes for the Use Case. Either way, it's important design-wise that they both explicitly "serve" the Use Case's needs - meaning, they are not generic entity-centric DAO / Repository interfaces; they ONLY have the operations that the Use Case actually needs and invokes, in exactly the shape and form (parameters, return values) that the Use Case needs them. They are "ports" to be "plugged into", as part of the whole Use Case.
The "outside" providers of these responsibilities are the Adapters (the implementers) of those needs. To fulfill them, they typically use some specific technology or framework - a database, a network call to some server, a message producer, a file operation, Spring's configuration properties, etc.
The Use Case is invoked (called) only by the Drivers side of the architecture (that is, the initiating side). The Use Case itself, in fact, is one of the "initiators" for its further collaborating parts (e.g., the Adapters).
On the other hand, the Use Case is "technically supported" (the declared parts of its needs "implemented") by the Adapters side of the architecture.
Effectively, there is a clear separation of who calls what - meaning, at runtime the call stack progresses in a clear directional flow of control across this architecture.
The flow of control is always from Drivers towards Adapters (via the Use Case), never the other way around.
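To make the above more concrete, here is a minimal, hedged sketch in Java. All names (PlaceOrderUseCase, OrderGateway, ConfigurationGateway, Order) are invented for illustration and are not part of any prescribed Clean Architecture vocabulary:

    // Port A: persistence operations, shaped exactly to what the Use Case needs.
    interface OrderGateway {
        Order loadOrder(String orderId);
    }

    // Port B: configuration access - a separate kind of responsibility.
    interface ConfigurationGateway {
        String getValue(String key);
    }

    // A tiny domain object, just enough to make the sketch self-contained.
    class Order {
        void place(String currency) {
            // domain logic lives here
        }
    }

    // The Use Case declares and calls its ports; it never knows who implements them.
    class PlaceOrderUseCase {
        private final OrderGateway orders;          // need A
        private final ConfigurationGateway config;  // need B

        PlaceOrderUseCase(OrderGateway orders, ConfigurationGateway config) {
            this.orders = orders;
            this.config = config;
        }

        void execute(String orderId) {
            Order order = orders.loadOrder(orderId);
            String currency = config.getValue("default.currency");
            order.place(currency);
        }
    }

A Driver (a controller, a scheduled job, a test) constructs the Use Case with concrete adapters and calls execute(); the flow of control only ever moves from the Driver, through the Use Case, out to the adapters.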
These are principles I have learned, researched, implemented and corrected across my career in different projects. In other words, they've been shaped by the real world in terms of what has been practical and useful - in terms of separation of concerns and clear division of responsibilities - in my experience. Yours naturally may differ, and there is no universal fit - CA is not a recipe, it is a mindset of software design, implementable in several (better and worse) ways.
Thinking simply though, I would imagine that in your situation the Repository is your "data storage gateway" implementation of the Use Case's (Data) Gateway. The UC needs that data from "somewhere" - without caring where it comes from or how it is stored. This is very important - the whole core domain, along with the Use Case, needs to be framework and I/O agnostic.
Your Repository fulfills that need - it provides persisted domain objects. But the Use Case must not call it directly; instead it declares a Gateway (in Hexagonal, i.e. Ports & Adapters, architecture this is called a Port) - with the needed operation(s) that your Repository has to implement. By using some specific (DB / persistence) technology, your Repository fulfills it - it implements one of the Use Case's "ports", as an Adapter.
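Continuing the hypothetical sketch above, the Repository would then live on the Adapters side and implement the Use Case's port using whatever persistence technology you have. Plain JDBC and the table/column names are assumed here purely for illustration:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    // Adapter: implements the Use Case's port with a concrete technology.
    class JdbcOrderRepository implements OrderGateway {
        private final DataSource dataSource;

        JdbcOrderRepository(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        @Override
        public Order loadOrder(String orderId) {
            // Hypothetical schema; the Use Case never sees any of this SQL.
            String sql = "SELECT id, status FROM orders WHERE id = ?";
            try (Connection connection = dataSource.getConnection();
                 PreparedStatement statement = connection.prepareStatement(sql)) {
                statement.setString(1, orderId);
                try (ResultSet rows = statement.executeQuery()) {
                    rows.next();
                    return new Order(); // map the columns onto the domain object as needed
                }
            } catch (SQLException e) {
                throw new IllegalStateException("Could not load order " + orderId, e);
            }
        }
    }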
With the above being said - on rare occasions, some Gateway implementations may demand exceptions. They might need several back-and-forth interactions, even across your architecture. These are rare and indeed complex situations - likely not necessary for a Repository implementation.
But if that is really an inevitable case - then it's best if the Use Case, when calling the Gateway, provides a callback interface as a parameter of the call. So during its processing, the Gateway's implementer can call back using the operations in that interface - effectively implementing the back-and-forth necessity. In most cases though, this implies excessive logic and complexity at the adapters' level, which should be avoided - and serves as a strong cue that the current solution should be re-designed.
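A hedged sketch of that callback idea (all names invented): the Use Case owns the callback interface and passes an implementation into the Gateway call, so the adapter can report back mid-operation without ever depending on the Use Case's internals.

    // Owned by the Use Case; the adapter calls back through it during processing.
    interface ImportProgress {
        void recordImported(String itemId);
        boolean shouldContinue();
    }

    interface BulkImportGateway {
        void importAll(String sourceId, ImportProgress progress);
    }

    class ImportItemsUseCase {
        private final BulkImportGateway gateway;

        ImportItemsUseCase(BulkImportGateway gateway) {
            this.gateway = gateway;
        }

        void execute(String sourceId) {
            gateway.importAll(sourceId, new ImportProgress() {
                @Override public void recordImported(String itemId) {
                    // the Use Case reacts to each item as the adapter processes it
                }
                @Override public boolean shouldContinue() {
                    return true; // e.g. stop early on some business condition
                }
            });
        }
    }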

Related

"Cross-tree" inter-container communication between encapsulated objects

Encapsulation lends itself to hierarchical "silos" or "trees" of objects, with a given application's major functionalities decomposed into core trunks, each further decomposed into sub-functionalities instantiated as sub-branch member objects of their respective branches.
As an example, suppose I'm programming a GUI in QT. To separate GUI elements from business logic, for every widget I have a class for the GUI elements, a class for the business logic associated with the GUI elements, and what I'll call a controller which serves as a container for both and exists primarily to pass signals/slots in between. I understand this pattern to be called dependency injection.
Let's suppose our application has 2 windows A and B.
Window A contains 2 independent widgets that have separate business functions i and ii, so we might have the structure A::i and A::ii, where :: is "contains an instance of" and not "extends".
Then i and ii both contain gui and business logic units, so we have A::i::business and A::i::gui.
Now, suppose A::i::business and A::ii::business want to pass information between each other, or engage in bidirectional communication with the model-view for my database, identified as MV. Generally speaking, what are my options for doing this?
What I've thought of so far:
1) Pass signals up and down the tree, i.e., the Verilog solution. This seems the most strictly object-oriented, but the most tedious.
2) Have a flatter architecture to ease the implementation of solution 1). This hurts encapsulation.
3) Make A::i::business and A::ii::business public all the way down, and have either the other object or a third-party shared class access A::i::business or A::ii::business directly. This hurts encapsulation.
4) Have a relatively unencapsulated object, like the database MV or some other form of "shared storage", exist in a public form near the top level of the program with few or no super-containers. This seems most appealing, as the relatively encapsulated objects can stay encapsulated and communicate through unidirectional reading/writing to something that's unencapsulated. However, if I want other objects to perform actions based on changes to the shared storage, some way of notifying the dependent objects while keeping them private is necessary (see the sketch after this list). This might be called the "multi-threading inspired" or "multi-processing inspired" model of communication.
And any other suggestions you all may have.
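Purely to illustrate option 4 above (written as framework-agnostic Java rather than Qt, with all names invented): a shared model sits near the top level, branches write to it and subscribe to it, and they never reference each other directly.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    // The relatively unencapsulated "shared storage" near the top of the program.
    class SharedModel {
        private final List<Consumer<String>> listeners = new ArrayList<>();

        void subscribe(Consumer<String> listener) {
            listeners.add(listener);
        }

        void publishSelection(String selection) {
            // Notify every subscriber; the subscribers themselves stay private inside their branches.
            listeners.forEach(listener -> listener.accept(selection));
        }
    }

    // Both A::i::business and A::ii::business would be handed the shared model by their
    // containers and communicate only through it, never through each other.
    class BusinessUnit {
        BusinessUnit(SharedModel model) {
            model.subscribe(selection -> {
                // react to a change made by the other branch
            });
        }
    }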
I've read this post here, but at my level of understanding, the solutions in the accepted answer, such as controller, listener and pub-sub, refer to general design patterns that don't seem to commit to a solution for the concrete problem of how to route signals and make decisions about public/private accessibility of member classes. It may be the case that these solutions have associated conventions for where the communication objects go and how they access the different variables in either silo that I'm not familiar with.
Most generally speaking, I seem to be running into a general problem of communication across container-trees in well-encapsulated programming.
For future reference, is there a general term for this problem to aid future searching? Is my architectural approach of having the object-container structure directly reflect the tree-decomposition of application functionality lending itself to too hierarchical a design, and would a different pattern of object containment and cross-branch communication be more optimal?

Clean Architecture: UseCase Output Port

I have a question regarding the "Use Case Output Port" in Uncle Bob's Clean Architecture.
In the image, Uncle Bob describes the port as an interface. I am wondering if it has to be that way or if the invoked Use Case Interactor could also return a "simple" value. In either case the Application and Business Rules Layer would define its interface that the Interface Adapters Layer has to use. So I think for simple invocations just returning a value would not violate the architectural idea.
Is that true?
Additionally, I think this Output Port Interface implemented by the presenter should work like the Observer pattern. The presenter simply observes the interactor for relevant "events". In the case of .NET where events are first-class citizens, I think using one of these is the same idea.
Are these thoughts compatible with the ideas behind Clean Architecture?
Howzit OP. I see your question is still unanswered after all these years and I hope we can reason about this and provide some clarity. I also hope I am understanding your question correctly. So with that in mind, here is how I see the solution:
The short answer is: a use case interactor should be able to return a simple value (by which I assume you mean string, int, bool, etc.) without breaking any architectural rules.
If we go over the onion architecture, which is very similar to the clean architecture, the idea is to encapsulate the core business logic in the center of the architecture: the domain. The corresponding concepts in the clean architecture are the entities and the use cases on top of them. We do this because we want to express our understanding of the business in a consistent way when we write our business rules.
The interface adapters allow us to convert the outside world to our understanding. What we want is a contract in our domain (use cases or entities) that ensures we will get what we need from the outside world, without knowing any implementation details. We also don't care what the outside world calls it, we convert their understanding to ours.
A common way to do this, is to define the interface in the domain to establish a contract that says, we expect to give "x", and you must then tell us what "y" is. The implementation can then sit outside the domain.
Now to get to the core of your question. Let's assume that the core of our application is to track some complicated process with various stages. During one of these stages, we need to send data to a couple of external parties and we want to keep a reference of some sort for auditing purposes. In such a case our interface may sit in the domain and state that we send our complicated object to some party, and we expect a string reference back. We can then use this string reference and fire some domain event, etc. The implementation can sit completely outside of the domain and call external APIs and do its thing, but our core domain is unaffected. Hence returning a simple value has no impact on the architecture. The reverse of the above scenario may also hold true: we can say that we have a reference id of some sort, and the outside world needs to return us our understanding of some object.
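A small hypothetical Java sketch of both shapes (names invented): the interactor can simply return a value, or push its result through an output port interface that the presenter implements. Both keep the dependency pointing inward, since the port is declared by the use case layer.

    // Variant 1: the interactor simply returns a value to its caller.
    class TrackShipmentInteractor {
        String currentStatus(String shipmentId) {
            return "IN_TRANSIT"; // in a real system, derived from entities / gateways
        }
    }

    // Variant 2: the interactor pushes its result through an output port.
    interface TrackShipmentOutputPort {
        void present(String status);
    }

    class TrackShipmentInteractorWithPort {
        private final TrackShipmentOutputPort output;

        TrackShipmentInteractorWithPort(TrackShipmentOutputPort output) {
            this.output = output;
        }

        void trackShipment(String shipmentId) {
            output.present("IN_TRANSIT"); // the presenter decides how to display it
        }
    }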
For the second part of your question, I would imagine it depends on the use case itself. If you present some idea out there and need to constantly react to it, domain events will get involved and you will have a structure very similar to the observer pattern. .NET encapsulates events very nicely and fits very well with clean architecture and Domain-Driven Design.
Please let me know if the above makes sense or if I can clarify it in any way.

Deciding extent of coupling

I have a Component which exposes an API with some 10 functions in all. I can think of two ways to achieve this:
Expose all of this functionality as separate functions.
Expose only one function which takes XML as input. Based on the request_Type specified and the parameters passed in the XML, I internally call the respective function.
Q1. Will the second design be more loosely coupled than the first?
I always read about how my components should be loosely coupled; should I really go to this extent to achieve loose coupling?
Q2. Which one of these would be a better design in terms of OOP, and why?
Edit:
If I am exposing this API over D-Bus for others to use, will type checking still be a consideration when comparing the two approaches? From what I understand, type checking is done at compile time, but when the function is exposed over some IPC, does the issue of type checking still come into the picture?
The two alternatives you propose do not differ in the (obviously quite large) number of "functions" you want to offer from your API. However, the second seems to have many disadvantages: you are losing any strong type checking, it will become much harder to document the functionality, etc. (The only advantage I see is that you don't need to change your API when you add functionality. But the disadvantage is that users will not be able to detect API changes, like deleted functions, until run time.)
What is more related to this question is the Single Responsibility Principle (http://en.wikipedia.org/wiki/Single_responsibility_principle). As you are talking about OOP, you should not expose your ten functions within one class but split them among different classes, each with a single responsibility. Defining good "responsibilities" and roles requires some practice, but following some basic guidelines will help you to get started quickly. See Are there any rules for OOP? for a good starting point.
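To make the contrast concrete, here is a hypothetical Java sketch (invented names) of the two shapes. The first keeps compile-time type checking and is largely self-documenting; the second defers every mistake to run time:

    // Option 1: small, typed, responsibility-focused interfaces.
    interface AccountQueries {
        long getBalance(String accountId);
    }

    interface AccountCommands {
        void deposit(String accountId, long amountInCents);
    }

    // Option 2: one generic entry point; the compiler can no longer help callers.
    interface GenericApi {
        String handle(String requestXml); // request_Type and parameters are hidden inside the XML
    }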
Reply to the question edit
I haven't used D-Bus, so this might be totally wrong. But from a quick look at the tutorial I read
Each object supports one or more interfaces. Think of an interface as a named group of methods and signals, just as it is in GLib or Qt or Java. Interfaces define the type of an object instance.
DBus identifies interfaces with a simple namespaced string, something like org.freedesktop.Introspectable. Most bindings will map these interface names directly to the appropriate programming language construct, for example to Java interfaces or C++ pure virtual classes.
As far as I understand, D-Bus has the concept of different objects which provide interfaces consisting of several methods. This means (to me) that my answer above still applies. The "D-Bus native" way of specifying your API would be to exhibit interfaces, and I don't see any reason why good OOP design guidelines shouldn't be valid here. As D-Bus seems to map these even to native language constructs, this is even more likely.
Of course, nobody keeps you from just building your own API description language in XML. However, things like that are some kind of abuse of the underlying techniques. You should have good reasons for doing such things.

CQRS Naming Conventions

I'm implementing a new webservice and while I'm not yet using CQRS I would like to create my service so it could easily be moved to CQRS in the future. So, I'm wondering about naming convention for my DTO classes and also my methods.
I've read this blog post on DTO naming conventions and it seems sensible to me. It suggests the following:
SomeSortOfQueryResult
SomeSortOfQueryParameter
SomeSortOfCommand
SomeSortOfConfigItem
SomeSortOfSpecification
SomeSortOfRow
SomeSortOfItem (for a collection)
SomeSortOfEvent
SomeSortOfElement
SomeSortOfMessage
What I'm asking here is how I should name my methods. Is it good practice to use GetSomething, or would SomeQuery be better?
Naming should really just come out of the thing the method is doing. Take a step back and look at Command-Query Separation (CQS) first. What you're really trying to do here is make sure that any given method is either querying for data or commanding that something happen.
I.e. "Asking for a value should not change the value".
CQRS is something different, on a larger scale, and generally less well understood. It's not necessarily complex, though, just applying the CQS concept at an architectural level rather than a code level. You might choose WCF for commands and raw SQL for queries, for example. It aims to allow you the freedom to make your queries the simplest thing that could possibly work, while your commands still get the richness of a full Domain Model or other suitable implementation for your business rules.
CQRS also steers you away from a CRUD application, to a task-based one where you focus more on the problem domain in terms of user interactions than just reading and saving data.
Queries
Generally I name "queries" as variations on FindXYZ(), GetXYZ() or LoadXYZ(), as long as the intent is clear (i.e. return some data, don't modify any).
Commands
Typically commands are harder to name, though you can think in terms similar to PowerShell's cmdlet naming convention: verb-noun. Personally though, I tend to implement commands as a CommandProcessor pattern, where commands are actually objects containing parameters (sometimes only the primary key of an entity). There is then code that looks up the appropriate "processor" for each command's type. Typically in CQRS you'd try to keep this synchronous, because async means you have more work to do with respect to handling commands that failed to be processed, but if you really need a command to be async, then your command's handler might send a message to an ESB to do so.
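As a hedged sketch of that CommandProcessor idea (all names invented): commands are plain parameter-carrying objects named after tasks, and the processor looks up a handler by the command's type.

    import java.util.HashMap;
    import java.util.Map;

    // A command is just data; it names a task, not a CRUD operation.
    class DeactivateCustomerCommand {
        final String customerId;
        DeactivateCustomerCommand(String customerId) { this.customerId = customerId; }
    }

    interface CommandHandler<C> {
        void handle(C command);
    }

    class CommandProcessor {
        private final Map<Class<?>, CommandHandler<?>> handlers = new HashMap<>();

        <C> void register(Class<C> commandType, CommandHandler<C> handler) {
            handlers.put(commandType, handler);
        }

        @SuppressWarnings("unchecked")
        <C> void process(C command) {
            // Synchronous by default, as discussed above; an async variant would hand off to an ESB.
            CommandHandler<C> handler = (CommandHandler<C>) handlers.get(command.getClass());
            handler.handle(command);
        }
    }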
Talking about DTOs in the context of CQRS rings alarm bells for me where you are specifically talking about the query side. Quoting that blog article:
the DTO (Data Transfer Object) pattern was originally created for serializing and transmitting objects
A CQRS architecture implies a thin query side to me i.e. you don't have lots of layers where you need to move information between them with serialized objects or DTOs. It might be that you're using the term DTO in a different sense.
That doesn't really answer your question but I wanted to point it out.

DDD: Where to put persistence logic, and when to use ORM mapping

We are taking a long, hard look at our (Java) web application patterns. In the past, we've suffered from an overly anaemic object model and overly procedural separation between controllers, services and DAOs, with simple value objects (basically just bags of data) travelling between them. We've used declarative (XML) managed ORM (Hibernate) for persistence. All entity management has taken place in DAOs.
In trying to move to a richer domain model, we find ourselves struggling with how best to design the persistence layer. I've spent a lot of time reading and thinking about Domain Driven Design patterns. However, I'd like some advice.
First, the things I'm more confident about:
We'll have "thin" controllers at the front that deal only with HTTP and HTML - processing forms, validation, UI logic.
We'll have a layer of stateless business logic services that implements common algorithms or logic, unaware of the UI, but very much aware of (and delegating to) the domain model.
We'll have a richer domain model which contains state, relationships, and logic inherent to the objects in that domain model.
The question comes around persistence. Previously, our services would be injected (via Spring) with DAOs, and would use DAO methods like find() and save() to perform persistence. However, a richer domain model would seem to imply that objects should know how to save and delete themselves, and perhaps that higher level services should know how to locate (query for) domain objects.
Here, a few questions and uncertainties arise:
Do we want to inject DAOs into domain objects, so that they can do "this.someDao.save(this)" in a save() method? This is a little awkward since domain objects are not singletons, so we'll need factories or post-construction setting of DAOs. When loading entities from a database, this gets messy. I know Spring AOP can be used for this, but I couldn't get it to work (using Play! framework, another line of experimentation) and it seems quite messy and magical.
Do we instead keep DAOs (repositories?) completely separate, on par with stateless business logic services? This can make some sense, but it means that if "save" or "delete" are inherent operations of a domain object, the domain object can't express those.
Do we just dispense with DAOs entirely and use JPA to let entities manage themselves?
Herein lies the next subtlety: It's quite convenient to map entities using JPA. The Play! framework gives us a nice entity base class, too, with operations like save() and delete(). However, this means that our domain model entities are quite closely tied to the database structure, and we are passing objects around with a large amount of persistence logic, perhaps all the way up to the view layer. If nothing else, this will make the domain model less re-usable in other contexts.
If we want to avoid this, then we'd need some kind of mapping DAO - either using simple JDBC (or at least Spring's JdbcTemplate), or using a parallel hierarchy of database entities and "business" entities, with DAOs forever copying information from one hierarchy to another.
What is the appropriate design choice here?
Martin
Your questions and doubts ring an interesting alarm here; I think you went a bit too far in your interpretation of a "rich domain model". Richness doesn't go as far as implying that persistence logic must be handled by the domain objects. In other words, no, they shouldn't know how to save and delete themselves (at least not explicitly, though Hibernate actually adds some persistence logic transparently). This is often referred to as persistence ignorance.
I suggest that you keep the existing DAO injection system (a nice thing to have for unit testing) and leave the persistence layer as is, while trying to move some business logic to your entities where it fits. A good starting point for doing that is to identify Aggregates and establish your Aggregate Roots. They'll often contain more business logic than the other entities.
However, this is not to say domain objects should contain all logic (especially not logic needed by many other objects across the application, which often belongs in Services).
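A minimal, hypothetical Java sketch of that split (invented names): the aggregate root carries the business rule and stays persistence-ignorant, while a repository interface declared alongside the domain is implemented elsewhere by your Hibernate/DAO code.

    // Aggregate root: rich behaviour, no persistence logic.
    class Order {
        private boolean shipped;

        void ship() {
            if (shipped) {
                throw new IllegalStateException("Order has already been shipped");
            }
            shipped = true;
        }
    }

    // The repository interface lives with the domain; the Hibernate-backed
    // implementation is injected (e.g. by Spring), much like your current DAOs.
    interface OrderRepository {
        Order findById(long id);
        void save(Order order);
    }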
I am not a Java expert, but I use NHibernate in my .NET code so my experience should be directly translatable to the Java world.
When using an ORM (like the Hibernate you mentioned) to build a Domain-Driven Design application, one good practice (I won't say best practice) is to create so-called application services between the UI and the Domain. They are similar to the stateless business services you mentioned, but should contain almost no logic. They should look like this:
public void sayHello(int id, String helloString)
{
    // Load the domain object; this uses Hibernate under the hood.
    SomeDomainObject target = domainObjectRepository.findById(id);
    // A single domain object method invocation per application service method.
    target.sayHello(helloString);
    // Optional: Hibernate's change tracking usually makes an explicit save unnecessary.
    domainObjectRepository.save(target);
}
Any changes to objects contained by the domain object (including adding objects to collections) will be handled by Hibernate.
You will also need some kind of AOP to intercept application service method invocations, open Hibernate's session before the method executes, and save the changes after the method finishes without exceptions.
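If you happen to use Spring, its declarative transaction management is one common way to get that interception without writing the AOP yourself. A hedged sketch, assuming Spring and mirroring the hypothetical types from the snippet above (SomeDomainObject, SomeDomainObjectRepository):

    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class HelloApplicationService {

        private final SomeDomainObjectRepository domainObjectRepository; // hypothetical repository type

        public HelloApplicationService(SomeDomainObjectRepository domainObjectRepository) {
            this.domainObjectRepository = domainObjectRepository;
        }

        // Spring opens the session/transaction before the method runs and commits
        // (flushing Hibernate's tracked changes) if it completes without exceptions.
        @Transactional
        public void sayHello(int id, String helloString) {
            SomeDomainObject target = domainObjectRepository.findById(id);
            target.sayHello(helloString);
        }
    }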
There is a really good sample of how to do DDD in Java here. It is based on the sample problem from Eric Evans' "Blue Book". The application logic class sample code is here.