Help with debate on Separation of Concerns (Data Access vs Business Logic)

I had a debate with my co-worker on whether certain logic belongs in the data access or business logic layer.
The scenario is, the BLL needs some data to work with. That data primarily lives in the database. We want to cache that data (using System.Runtime.Caching) so it's quickly available on subsequent requests. The architecture is such that the DAL and the BLL live on the same box and in different assemblies (projects in the same solution). So there is no concern of hitting a DAL over the wire or anything like that.
My argument is that the decision to hit the cache vs the database is a concern of the DAL. The business logic layer should not care where the data comes from, only that it gets the data it needs.
His argument is that the data access layer should be "pure" and "dumb" and any logic to make that decision to hit the cache vs the database should be in the business logic layer.
In my opinion, what he's proposing undermines separation of concerns and couples the layers more tightly, when the goal is to keep things loosely coupled. I can see where the BLL might want to control this if deciding where to go for data were a specific program/UI function, but that simply isn't the case here. It's just a very simple caching scenario where the database is the primary data store.
Thoughts?

I agree with you 100%.
Caching is part of the DAL and does not belong in the BLL.
Let's take Hibernate as an example: it uses a caching system to store your entities. Hibernate is responsible for its cache and knows how to control it (dirty reads, flushing data, etc.).
You don't want to clutter your BLL with all this low-level data logic.

I believe that the caching should be done in the business layer. The moment you try to get the data from the DAL, you can check whether the data is available in the cache (System.Runtime.Caching); if so, use the cached data, otherwise fetch it from the database. Moreover, if you want to invalidate the cache for some reason, you can do it by calling a function in the business layer.
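A minimal sketch of what this answer describes, assuming a hypothetical ProductRepository in the DAL and MemoryCache from System.Runtime.Caching (Product and ProductRepository are illustrative names, not from the question):

using System;
using System.Runtime.Caching;

public class Product
{
    public int Id;
    public string Name;
}

// Stub standing in for the real DAL call.
public class ProductRepository
{
    public Product GetProduct(int id) => new Product { Id = id, Name = "sample" };
}

// Hypothetical BLL service that makes the cache-vs-database decision itself,
// as this answer suggests.
public class ProductService
{
    private static readonly MemoryCache Cache = MemoryCache.Default;
    private readonly ProductRepository repository = new ProductRepository();

    public Product GetProduct(int id)
    {
        string key = "product:" + id;

        // Check the cache first; fall back to the DAL on a miss.
        if (Cache.Get(key) is Product cached)
        {
            return cached;
        }

        Product product = repository.GetProduct(id);
        Cache.Set(key, product, DateTimeOffset.Now.AddMinutes(10));
        return product;
    }

    // Cache invalidation is exposed as a business-layer function.
    public void InvalidateProduct(int id) => Cache.Remove("product:" + id);
}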

The whole purpose of separating business logic from data is so that you can swap either out as business requirements or technology changes. By intermixing them, you are defeating that purpose, and therefore, on a theoretical level, you are correct. In the real world, however, I think you need to be a bit more pragmatic. What's the real life expectancy of the application, what is the likelihood that the technology is going to change, and how much extra work is involved in keeping the two cleanly separated?

My initial reaction would be the same as yours: let the data layer cache the information. This can even be integrated with a strategy to subscribe to changes in the database, or to poll so the data is kept up to date.
However, if you intend to re-use the data layer in other projects, or even if not, it might not be a bad idea to implement a new business layer between the existing one and the data layer to handle caching decisions. Because ultimately, caching is not just a performance issue; it involves business decisions about concurrency and other matters.
An n-tier system is just that: you're not limited in how many levels you separate things into.

I know I'm over two years late to the game but I wanted to add something:
If you have an interface defined for your DAL, you can write a caching mechanism that follows that interface and manages 'cache vs. hit the data source' concerns without the technology or source-specific DAL code having to worry about it and without the BLL having to worry about it. Example:
internal interface IThingsGateway
{
    Thing GetThing(int thingId);
    void UpdateThing(ThingUpdateInfo info);
}

internal class MsSqlThingsGateway : IThingsGateway
{
    // implementation specific to MsSql here
}

internal class CachingThingsGateway : IThingsGateway
{
    // The decorated gateway that actually talks to the data source.
    private readonly IThingsGateway underlyingGateway;

    public CachingThingsGateway(IThingsGateway implementation)
    {
        this.underlyingGateway = implementation;
    }

    public Thing GetThing(int thingId)
    {
        // Serve from the cache when possible; otherwise delegate to the
        // underlying gateway and cache the result.
        if (this.HasCachedThing(thingId))
        {
            return this.GetCachedThing(thingId);
        }
        var thing = this.underlyingGateway.GetThing(thingId);
        this.SetCachedThing(thingId, thing);
        return thing;
    }

    public void UpdateThing(ThingUpdateInfo info)
    {
        // Write through, then invalidate the stale cache entry.
        this.underlyingGateway.UpdateThing(info);
        this.ClearCachedThing(info.ThingId);
    }

    // HasCachedThing, GetCachedThing, SetCachedThing, and ClearCachedThing
    // would be private helpers over the cache of your choice
    // (e.g. System.Runtime.Caching.MemoryCache).
}
And I would use this same approach if I needed to check multiple data sources for a thing: write an implementation of IThingsGateway that handles the logic of juggling the various data sources, delegating to the appropriate one... then wrap that in the CachingThingsGateway. Client code will ultimately obtain an IThingsGateway reference from some factory or container, which is where the wrapping and instantiating would occur.
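A hedged sketch of that factory wiring, with ThingsGatewayFactory as an illustrative name not taken from the answer:

internal static class ThingsGatewayFactory
{
    // Composition happens in one place; callers only ever see IThingsGateway
    // and never know whether caching is in play.
    public static IThingsGateway Create()
    {
        IThingsGateway sqlGateway = new MsSqlThingsGateway();
        return new CachingThingsGateway(sqlGateway);
    }
}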
And all of this really doesn't take that much extra effort. If you use caching you will have to write that code anyway, and the overhead of putting it in another class behind the same interface is minimal at worst.


Maintainable API design for communication between Blazor WASM and ASP.NET Core

I am seeking advice on how to design and implement the API between Blazor WebAssembly and an ASP.NET Core server for a mid-size project (a couple of developers, a few years). Here are the approaches I've tried and the issues I've encountered.
Approach #1: Put Entity classes to the Shared project and use them as the return type in Controller methods
Pros:
Simplicity - no boilerplate code
Ensured type safety from database all the way to the client
Cons:
In some cases the data should be processed on the server before it is returned - for example, we don't need to return each product, only the total count of products in a category. In some cases the Client can work with a simplified view of the data model - for example, the Client only needs to know the price available to them, but the database design needs to be more complex so the Server can determine which price is available to which customer. In these cases we need to create a custom return type for the Controller method (a Data Transfer Object). This creates inconsistency, because some Controller methods return database entities while others return DTOs. I found these cases so frequent that it's better to use DTOs for all communication.
The Client usually doesn't use every field in the entity, but we transfer it anyway. This slows down the app for users with a slow internet connection.
Approach #2: Create one Data Transfer Object per entity, map with Entity.ToDataTransferObject()
The Controller has many methods for querying data, to accommodate the needs of different Components on the client. Most often, the database result takes the form of an Entity or a List<Entity>. For each database entity, we have a method entity.ToDataTransferObject() which transforms the database result into a DTO from the Shared project.
For cases when the response type is very different from the database entities, we create a distinct data transfer object and do the transformation either in the Controller method or in a distinct class.
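As a rough sketch of the mapping method this approach describes, with illustrative Order types (not from the question):

public class Customer { public string Name = ""; }
public class Order { public int Id; public string Number = ""; public Customer Customer; }

public class OrderDto
{
    public int Id;
    public string Number = "";
    // Nullable in spirit: not every query loads the customer, which is
    // exactly the weakness listed under the cons below.
    public string CustomerName;
}

public static class OrderMappings
{
    // The single entity.ToDataTransferObject() method of Approach #2.
    public static OrderDto ToDataTransferObject(this Order entity) => new OrderDto
    {
        Id = entity.Id,
        Number = entity.Number,
        CustomerName = entity.Customer?.Name
    };
}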
Pros:
Data model on the Client is just as complex as it needs to be
Cons:
Some controller methods load (and need to return) all data about an entity and its related entities, going to a depth of 5. Some methods only need to load two simple fields. Because we use the same entity.ToDataTransferObject() method for all of them, they have to share the same return type, so any field that is not always returned is declared nullable. This has BAD consequences. The compiler no longer ensures the compatibility of the Blazor Component with the return type of the Controller method, nor the compatibility of the database query with the entity.ToDataTransferObject() method. The incompatibility is only discovered by testing, and only if the right data are present in the database. As app development continues and the data model evolves, this is a great source of bugs.
There are multiple controller methods querying the same data, and the queries contain some business logic (for example: which products should be displayed to this customer?). That business logic therefore gets duplicated across multiple controller methods. Even worse, sometimes it's duplicated into other controllers, when we need to decide which entity to include.
Now I am looking for Approach #3
The cons of Approach #2 lead me to the following design changes:
Stop making properties of the Data Transfer Object nullable to signify that they have not been loaded from the database. If a property hasn't been loaded, we need to create a new class for the transfer object in which the property is not present.
Stop using entity.ToDataTransferObject(), one master method to convert an entity to a Data Transfer Object. Instead, create a method for every type of DataTransferObject.
Find a way to extract parts of EF Core queries into reusable methods to prevent duplicating business logic (see the sketch below).
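A hedged sketch of that third change, assuming EF Core and an illustrative Product entity: the business rule is extracted into a reusable IQueryable extension so every controller composes the same logic.

using System.Linq;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
    public bool IsPublished { get; set; }
    public int OwnerCustomerId { get; set; }
}

public static class ProductQueries
{
    // "Which products should be displayed to this customer?" lives in
    // exactly one place instead of being copied into each controller.
    public static IQueryable<Product> VisibleTo(
        this IQueryable<Product> products, int customerId)
        => products.Where(p => p.IsPublished || p.OwnerCustomerId == customerId);
}

// Usage inside any controller or query class:
// var visible = dbContext.Products.VisibleTo(customerId)
//                        .Select(p => new { p.Id, p.Name });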
However, this would require us to add a mountain of additional code 🏔️. We would need to create a new class for each subset of an entity's properties that is used in a component. This might be worth it, considering it's likely to eliminate the majority of the bugs we face today, but it's a heavy price to pay.
Have I missed anything? Is there any better design that I haven't considered?
In my experience, use DTOs that match the client-side UI view model: UI-formatted values, along with record ID values to allow posting updates from edit forms. This ensures no accidental unauthorized access to values that the current session has no permissions for, and it prevents overfetching data in general.
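A small sketch of such a view-model-shaped DTO, with illustrative names:

// Shaped for one edit form: only the fields the UI shows,
// plus the record ID needed to post the update back.
public class ProductEditDto
{
    public int ProductId { get; set; }
    public string Name { get; set; } = "";
    public string DisplayPrice { get; set; } = ""; // already UI-formatted
}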

Clean Architecture: why not use the entity as the request model of the use case (interactor)?

I have read the PPP book, along with the Clean Code, Clean Coder, and Clean Architecture books.
I know that:
Clean architecture is a layered architecture
What it means for an architecture to be open-layered or closed-layered
The Clean Architecture book suggests that each layer can access its inner layers, not only the very next inner layer
So I assume that clean architecture does not force a closed-layered design and allows an open-layered one, meaning that, for example, the UI, which is in the Frameworks layer, can directly access an Entity, jumping two layers along the way.
And I understand that if clean architecture forced a closed-layered design, we could not implement the repository interface directly from the Frameworks layer; we would have to implement it in terms of the next layer, and that layer in terms of its next layer, and so on.
Now my question is: why can't we introduce the Entity as the parameter type of the use case or controller directly? Why do we have to define data structures or DTOs in the middle layers and bother converting the entity to a data structure to return as the response, when we are allowed to use and see the Entity in the controller layer, given that the access rule is not violated?
Consider this example, suppose we have:
JobView
JobController
JobUseCase(RequestModel) : ResponseModel
JobEntity
Now if JobView wants to call JobController, it should pass RequestModel. Now could we simply introduce JobEntity as the RequestModel like so:
JobView
JobController
JobUseCase(JobEntity)
JobEntity
I know that doing so will increase the fragility of the code, because then if we change JobEntity, JobView has to change. But does clean architecture, which espouses SOLID principles, force fragility or rigidity as a rule?!
Why not use the entity as the request model of the use case?
You have answered this question yourself: even though you do not break the dependency rule, it will increase the fragility of the code.
why can't we introduce the Entity as the parameter type of the use case or controller directly, and why do we have to define data structures or DTOs in the middle layers and bother converting the entity to a data structure to return as the response, when we are allowed to use and see the Entity in the controller layer because the access rule is not violated?
The (critical business) Entities and the DTOs are in the application for very different reasons. Entities should encompass the critical business rules and have nothing to do with communication between adapters and interactors. DTOs should be implemented in whatever way is most convenient for that communication, and they have no immediate reason to depend on business entities.
Even if an entity has the exact same code as a DTO, this should be considered coincidence, as their reasons to change are completely different (Single Responsibility Principle). This might seem to collide with the popular DRY principle (Don't Repeat Yourself), but DRY states that knowledge should not be duplicated; code may still look the same in different parts of the application as long as it changes for different reasons.
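A small illustration of that point, using the question's Job example with hypothetical members: the two classes may start out textually similar, yet they live in different layers and change for different reasons.

// Domain layer: carries critical business rules.
public class JobEntity
{
    public int Id { get; set; }
    public string Title { get; set; } = "";

    public bool IsSeniorRole() => Title.Contains("Senior"); // business rule
}

// Boundary: shaped purely for communication between adapter and interactor.
public class JobRequestModel
{
    public int Id { get; set; }
    public string Title { get; set; } = "";
}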
Not sure I understand the reasoning behind your question:
Does clean architecture, which espouses SOLID principles, force fragility or rigidity as a rule?
How could Clean Architecture possibly force rigidity and fragility? Defining an architecture is all about taking broad care of fundamental OOP principles such as SOLID, among others…
On the other hand, your following example would definitely denature Clean Architecture:
JobView > JobController > JobUseCase(JobEntity) > JobEntity
This implicitly tells us that you've retrieved your entity, most likely from the controller, which completely misses the point of the Interactor (or use case) and thus of Clean Architecture.
Interactors encapsulate application business rules, such as interactions with entities and CRUD on entities, done via the Entity Gateway, which in turn encapsulates the infrastructure layer.
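A minimal sketch of that shape, with illustrative names: the interactor owns the application rules and reaches the infrastructure only through the gateway interface.

public record CloseJobRequest(int JobId);
public record CloseJobResponse(int JobId, bool Closed);

public class JobEntity
{
    public int Id { get; set; }
    public bool IsOpen { get; private set; } = true;
    public void Close() => IsOpen = false; // the entity's business rule
}

// Defined in the use-case layer, implemented in the infrastructure layer.
public interface IJobGateway
{
    JobEntity FindById(int jobId);
    void Save(JobEntity job);
}

public class CloseJobUseCase
{
    private readonly IJobGateway gateway;

    public CloseJobUseCase(IJobGateway gateway) => this.gateway = gateway;

    // Request and response models are plain data structures, not entities.
    public CloseJobResponse Execute(CloseJobRequest request)
    {
        JobEntity job = gateway.FindById(request.JobId);
        job.Close();
        gateway.Save(job);
        return new CloseJobResponse(job.Id, true);
    }
}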
Furthermore, in the Clean Architecture context, your entities, which are part of your model layer, should have nothing to do with your controller, which is part of your delivery mechanism, or more exactly, which is the evaluator of the HTTP request message. Denaturing the lower-level component that is the controller in this way would negatively affect the SRP (=> increased fragility) and the degree of decoupling between your components (=> increased rigidity).
You say:
And I understand that if clean architecture forced a closed-layered design, we could not implement the repository interface directly from the Frameworks layer; we would have to implement it in terms of the next layer, and that layer in terms of its next layer, and so on.
Your Entity Framework RepositoryInterface and its implementations belong to the infrastructure layer, and they should be wrapped and adapted by the entity gateway. The Law of Demeter may be important to respect here, since we are talking about the implementation of the port (EntityGatewayInterface) of the business model's closed layer.
Finally, for the reasons above, I suspect the following assumption is wrong, and so would be all the further assumptions built on it, leading you to total confusion:
So I assume that clean architecture does not force a closed-layered design and allows an open-layered one, meaning that, for example, the UI, which is in the Frameworks layer, can directly access an Entity, jumping two layers along the way.
But whether it forces closed layering or not, Clean Architecture explicitly and concretely defines itself (the relations between its components), as in the UML class diagram referenced here (not reproduced).
I can see only a closed-layered architecture in that diagram…
It seems to me that an open layer is an oxymoron, that it does not constrain what a layer is supposed to constrain by nature, because by definition a layer is an isolation, an abstraction of a group of components reduced to its port, meant to reduce technical debt such as fragility, etc.
Additional Resources
A conference talk by Uncle Bob summarizing well enough why and how to implement Clean Architecture: https://www.youtube.com/watch?v=o_TH-Y78tt4
The above answers are accurate, but I'd like to point out why this creates confusion, as I've seen it before: from a dependency perspective, there is nothing wrong with passing entities across the boundaries. What you cannot pass is any type that has a dependency on an outer layer; that's a no-no for obvious reasons. Much of the book talks about dependency issues, so that creates the confusion: why aren't the entities OK?
As stated above, entities need to observe the SRP just like any other code. If you use entities for data-transfer purposes, you have introduced an unnecessary coupling. When the entity needs to change for a business reason, at the very least the mapping code, and maybe more, in the outer layer needs to change in response.

Clean Architecture: UseCase Output Port

I have a question regarding the "Use Case Output Port" in Uncle Bob's Clean Architecture.
In his diagram, Uncle Bob describes the port as an interface. I am wondering if it has to be that way, or if the invoked Use Case Interactor could also return a "simple" value. In either case the Application and Business Rules Layer would define the interface that the Interface Adapters Layer has to use. So I think that, for simple invocations, just returning a value would not violate the architectural idea.
Is that true?
Additionally, I think this Output Port Interface implemented by the presenter should work like the Observer pattern. The presenter simply observes the interactor for relevant "events". In the case of .NET where events are first-class citizens, I think using one of these is the same idea.
Are these thoughts compatible with the ideas behind Clean Architecture?
Howzit OP. I see your question is still unanswered after all these years and I hope we can reason about this and provide some clarity. I also hope I am understanding your question correctly. So with that in mind, here is how I see the solution:
The short answer is, a use case interactor should be able to return a simple value (by which I assume you mean string, int, bool, etc.) without breaking any architectural rules.
If we go over the onion architecture, which is very similar to the clean architecture, the idea is to encapsulate the core business logic in the center of the architecture, the domain. The corresponding concept in the clean architecture is the entities and the use cases on top of it. We do this because we want to dictate our understanding of the business in a consistent way when we write our business rules.
The interface adapters allow us to convert the outside world to our understanding. What we want is a contract in our domain (use cases or entities) that ensures we will get what we need from the outside world, without knowing any implementation details. We also don't care what the outside world calls it; we convert their understanding to ours.
A common way to do this is to define the interface in the domain to establish a contract that says: we expect to give "x", and you must then tell us what "y" is. The implementation can then sit outside the domain.
Now to get to the core of your question. Let's assume that the core of our application is to track some complicated process with various stages. During one of these stages, we need to send data to a couple of external parties, and we want to keep a reference of some sort for auditing purposes. In such a case our interface may sit in the domain and state that we send our complicated object to some party and expect a string reference back. We can then use this string reference, fire some domain event, etc. The implementation can sit completely outside of the domain, call external APIs, and do its thing, but our core domain is unaffected. Hence returning a simple value has no impact on the architecture. The reverse of the above scenario may also hold true: we can say that we have a reference id of some sort, and the outside world needs to return us our understanding of some object.
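A hedged sketch of that scenario, with illustrative names: the domain owns the port; the implementation sits outside and hands back a plain string reference.

using System;

public record ComplicatedProcessData(int StageNumber, string Payload);

// Port defined in the domain / use-case layer.
public interface IAuditSubmissionPort
{
    // Give the outside world our object; get a simple reference back.
    string Submit(ComplicatedProcessData data);
}

// Adapter implemented outside the domain, e.g. over an external API.
public class ExternalPartySubmissionAdapter : IAuditSubmissionPort
{
    public string Submit(ComplicatedProcessData data)
    {
        // ... call the external party's API here ...
        return Guid.NewGuid().ToString(); // reference kept for auditing
    }
}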
For the second part of your question. I would imagine it depends on the use case itself. If you present some idea out there and need to constantly react to it, domain events will get involved and you will have a structure very similar to the observer pattern. .NET encapsulates events very nicely and fits very well with clean architecture and Domain driven design.
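And a minimal sketch of the event-flavored output port the question raises, assuming .NET events and illustrative names:

using System;

public class JobInteractor
{
    // The output port surfaced as a first-class .NET event.
    public event EventHandler<string> JobCompleted;

    public void Execute(int jobId)
    {
        // ... application business rules run here ...
        JobCompleted?.Invoke(this, $"Job {jobId} completed");
    }
}

public class JobPresenter
{
    // The presenter observes the interactor, observer-pattern style.
    public JobPresenter(JobInteractor interactor)
        => interactor.JobCompleted += (sender, message) => Console.WriteLine(message);
}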
Please let me know if the above makes sense or if I can clarify it in any way.

Is it good to return a generic object with all important return values from a facade signature?

Since I asked the question How has to be written the signature in facade pattern?, I've thought about how to create a signature for an API that has to be both useful and efficient (and an aesthetically nice solution!). I've seen some APIs whose boundary interfaces at the top expose the following style of signature:
public List<InterestingDTO> ANecessaryMethodToCompleteABusinessProcess(
    int aParameter,
    InterestingDTO aSecondParameter)
In this style, business-rule violations and other normal/abnormal business situations have to be reported either using a specific exception designed for this signature or by adopting some convention, like returning nulls, to signal the situation at the end of the method's execution.
I think that using exceptions to signal business problems can lead to maintainability problems, and it is surely a bad practice (there is a bunch of technical bibliography arguing about this). So, to cope with these problems, I suggest using a structure or a class like this:
public class ReturnValue<T>
{
public T returnedValue;
public string message;
public Status status;
}
enum Status {SUCCESS, FAILURE, ANY_OTHER_STATUS};
The former signature can then be written as:
public ReturnValue<List<InterestingDTO>> ANecessaryMethodToCompleteABusinessProcess(
    int aParameter,
    InterestingDTO aSecondParameter)
With this, everything the consuming layers find interesting can be known, at least, efficiently. Notice that there are no exceptions used for control flow (except perhaps those you want outer layers to know about), and the business layer has full control over business error messages. Do you think this approach has any flaw?
Please, if possible, add some bibliography for your answer.
We use pretty much the same thing throughout our enterprise apps, with two additions, viz. 1) for transactional services, an additional List<> property containing "validation results", each of which models a single business-rule or validation-rule violation, which can then be reported back to the client (user or service consumer) with as much context information as possible; and 2) for data services, we add paging information indicating how much total data is available to the client (given that we only allow a finite number of rows to be returned). This allows the client to tie into a pagination strategy.
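A hedged sketch of those two additions layered onto the asker's ReturnValue<T> (type and member names are illustrative):

using System.Collections.Generic;

public class ValidationResult
{
    // Models a single business-rule or validation-rule violation.
    public string RuleName;
    public string Message;
}

// Transactional services: ReturnValue<T> plus per-rule violations.
public class TransactionalReturnValue<T> : ReturnValue<T>
{
    public List<ValidationResult> validationResults = new List<ValidationResult>();
}

// Data services: ReturnValue<T> plus paging information.
public class PagedReturnValue<T> : ReturnValue<T>
{
    public int totalRowCount; // total available, beyond the returned page
    public int pageNumber;
    public int pageSize;
}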
The only complaint thus far, from service consumers when we exposed service methods returning the typed generic across the enterprise (ESB / SOA), is that the WSDL naming of the generics can be cumbersome (e.g. ReturnResultOfPOUpdateRequestJQ3KogUE). This isn't of much concern to .NET clients if we share the entities on both client and service, but for other clients such as Java, mobile, etc. it can be problematic, and sometimes we need to provide an alternative facade without the generics for such clients.

Using SOA principles over OOD in non-service code

Our architect has spoken about using SOA techniques throughout our codebase, even on interfaces that are not actually hosted as a service. One of his requests is that we design our interface methods so that we make no assumptions about the actual implementation. So if we have a method that takes in an object and needs to update a property on that object, we explicitly need to return the object from the method. Otherwise we would be relying on the fact that Something is a reference type and that C# allows us to update properties on a reference type by default.
So:
public void SaveSomething(Something something)
{
//save to database
something.SomethingID = 42;
}
becomes:
public Something SaveSomething(Something something)
{
//save to database
return new Something
{
//all properties here including new primary key from db
};
}
I can't really get my head around the benefits of this approach and was wondering if anyone could help?
Is this a common approach?
I think your architect is trying to get your code to have fewer side effects. In your specific example, there isn't a benefit. In many, many cases, your architect would be right, and you can design large parts of your application without side effects, but one place this cannot happen is during operations against a database.
What you need to do is get familiar with functional programming, and prepare for your conversations about cases like these with your architect. Remember his/her intentions are most likely good, but specific cases are YOUR domain. In this case, the side effect is the point, and you would most likely want a return type of bool to indicate success, but returning a new type doesn't make sense.
Show your architect that you understand limiting side effects, but certain side effects must be allowed (database, UI, network access, et cetera), and you will likely find that he or she agrees with you. Find a way to isolate the desired side effects and make them clear to him or her, and it will help your case. Your architect will probably appreciate it if you do this in the spirit of collaboration (not trying to shoot holes in his or her plan).
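A hedged sketch of the compromise suggested above, reusing the question's Something type: keep the database side effect where it belongs, and report success with a bool instead of returning a rebuilt object.

using System;

public class Something
{
    public int SomethingID { get; set; }
}

public class SomethingRepository
{
    // The database write is the point of this method; the bool return
    // only reports whether that side effect succeeded.
    public bool TrySaveSomething(Something something)
    {
        try
        {
            // ... save to database ...
            something.SomethingID = 42; // in reality, the key comes from the db
            return true;
        }
        catch (Exception)
        {
            return false;
        }
    }
}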
A couple of resources for FP:
A great tutorial on Functional Programming
Wikipedia's entry on Functional programming
Good luck, I hope this helps.