Adding cache layer to N-tier application instead of calling database multiple times in ASP.NET Core 5 - asp.net-core

In an n-tier application, the business layer needs to read database entities multiple times for validation purposes, so I have decided to add a caching layer to reduce round trips to the database for data that rarely changes.
My question is where the caching layer should be added, and how the architecture should look,
so that I can add a caching layer without changing the whole (n-tier) architecture.
The layers:
API layer
||
Business Layer
||
Database Layer
Any recommendations or examples I can follow?
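One common shape for this is a cache-aside decorator around the data-access repository, sitting between the business layer and the database layer, so neither layer changes. Below is a minimal, language-agnostic sketch in Python; the class names (`ProductRepository`, `CachedProductRepository`) are invented for illustration. In ASP.NET Core specifically, the same shape is usually achieved with a decorator that wraps the repository interface and uses `IMemoryCache`, registered in DI so the business layer is unaware of it.

```python
import time

class ProductRepository:
    """Stands in for the database layer; a real one would issue SQL queries."""
    def __init__(self, db):
        self._db = db

    def get(self, product_id):
        return self._db.get(product_id)  # simulated database read


class CachedProductRepository:
    """Cache-aside decorator: exposes the same interface as the inner
    repository, so the business layer only changes which object it is given."""
    def __init__(self, inner, ttl_seconds=300):
        self._inner = inner
        self._ttl = ttl_seconds
        self._cache = {}  # product_id -> (expires_at, value)

    def get(self, product_id):
        entry = self._cache.get(product_id)
        if entry and entry[0] > time.monotonic():
            return entry[1]                        # cache hit
        value = self._inner.get(product_id)        # cache miss -> database
        self._cache[product_id] = (time.monotonic() + self._ttl, value)
        return value


db = {"p1": "Widget"}
repo = CachedProductRepository(ProductRepository(db))
first = repo.get("p1")    # hits the database and populates the cache
db["p1"] = "Changed"
second = repo.get("p1")   # served from cache until the TTL expires
```

The TTL bounds staleness for the "rarely changes" data; validation reads within that window never touch the database.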

Related

Data Model/Schema decoupling in Data Processing Pipeline using Event Driven Architecture

I was wondering how Microservices in the Streaming Pipeline based on Event Driven Architecture can be truly decoupled from the data model perspective. We have implemented a data processing pipeline using Event-Driven Architecture where the data model is very critical. Although all the Microservices are decoupled from the business perspective, they are not truly decoupled as the data model is shared across all the services.
In the ingestion pipeline, we collect data from multiple sources, each with a different data model. Hence, a normalizer microservice is required to normalize those data models into a common data model that can be used by downstream consumers. The challenge is that the data model can change for any reason, and we should be able to manage that change easily. However, such a change can break the consumer applications and can easily introduce a cascade of modifications to all the microservices.
Is there any solution or technology that can truly decouple microservices in this scenario?
This problem is solved by carefully designing the data model to ensure backward and forward compatibility. Such design is important for independent evolution of services, rolling upgrades etc. A data model is said to be backward compatible if a new client (using new model) can read / write the data written by another client (using old model). Similarly, forward compatibility means a client (using old data model) can read / write the data written by another client (using new data model).
Let's say a Person object is shared across services in a JSON-encoded format. Now one of the services introduces a new field, alternateContact. A service consuming this data and using the old data model can simply ignore this new field and continue its operation. If you're using the Jackson library, you'd use @JsonIgnoreProperties(ignoreUnknown = true). Thus the consuming service is designed for forward compatibility.
Problem arises when the service (using old data model) deserializes a Person data written with the new model, updates one or more field values and writes the data back. Since the unknown properties are ignored, the write will result in data loss.
Fortunately, binary encoding formats such as Protocol Buffers (version 3.5 and later) preserve unknown fields during deserialization with the old model. Thus, when you serialize the data back, the new fields remain as they are.
There may be other data model evolutions you need to deal with, such as field removal, field renaming, etc. The basic idea is that you need to be aware of and plan for these possibilities early in the design phase. Common data encoding formats are JSON, Apache Thrift, Protocol Buffers, Avro, etc.
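The forward-compatibility read, the read-modify-write data loss, and the unknown-field preservation described above can all be demonstrated with plain dicts standing in for JSON documents. This is a sketch, not real Jackson or protobuf code; `KNOWN_FIELDS` and the `alternateContact` field follow the Person example in the text.

```python
import json

KNOWN_FIELDS = {"name", "phone"}  # fields the *old* model knows about

def read_old_model(raw_json):
    """Forward-compatible read: unknown fields are ignored, analogous to
    Jackson's @JsonIgnoreProperties(ignoreUnknown = true)."""
    data = json.loads(raw_json)
    return {k: v for k, v in data.items() if k in KNOWN_FIELDS}

# A document written by a service using the *new* model:
new_doc = json.dumps({"name": "Ada", "phone": "555", "alternateContact": "556"})

# Reading is fine -- the old consumer simply doesn't see alternateContact.
person = read_old_model(new_doc)

# But a read-modify-write cycle silently drops the new field:
person["phone"] = "777"
written_back = json.dumps(person)   # alternateContact is gone: data loss

# The Protocol Buffers 3.5+ behaviour can be emulated by carrying unknown
# fields along instead of discarding them:
def read_preserving(raw_json):
    data = json.loads(raw_json)
    known = {k: v for k, v in data.items() if k in KNOWN_FIELDS}
    unknown = {k: v for k, v in data.items() if k not in KNOWN_FIELDS}
    return known, unknown

known, unknown = read_preserving(new_doc)
known["phone"] = "777"
round_tripped = json.dumps({**known, **unknown})  # new field survives the write
```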

DDD - Persistence Model and Domain Model

I am trying to learn domain-driven design (DDD), and I think I got the basic idea. But there is something confusing me.
In DDD, are the persistence model and domain model different things? I mean, we design our domain and classes with only domain concerns in mind; that's okay. But after that when we are building our repositories or any other data persistence system, should we create another representation of our model to use in persistence layer?
I was thinking our domain model is used in persistence too, meaning our repositories return our domain objects from queries. But today, I read this post, and I'm a little confused:
Just Stop It! The Domain Model Is Not The Persistence Model
If that's true what would be the advantage of having separate persistence objects from domain objects?
Just think of it this way: the domain model should depend on nothing and have no infrastructure code within it. The domain model should not be serializable or inherit from ORM objects, or even share them. These are all infrastructure concerns and should be defined separately from the domain model.
But that is if you're aiming for pure DDD and your project values scalability and performance over speed of initial development. Often, mixing infrastructure concerns into your "domain model" can help you achieve great strides in speed, at the cost of scalability. The point is, you need to ask yourself, "Are the benefits of pure DDD worth the cost in speed of development?" If your answer is yes, then here is the answer to your question.
Let's start with an example where your application begins with a domain model and it just so happens that the tables in the database match your domain model exactly. Now, your application grows by leaps and bounds and you begin to experience performance issues when querying the database. You have applied a few well thought out indexes, but your tables are growing so rapidly that it looks like you may need to de-normalize your database just to keep up. So, with the help of a dba, you come up with a new database design that will handle your performance needs, but now the tables are vastly different from the way they were before and now chunks of your domain entities are spread across multiple tables rather than it being one table for each entity.
This is just one example, but it demonstrates why your domain model should be separate from your persistence model. In this example, you don't want to break out the classes of your domain model to match the changes you made to the persistence model design and essentially change the meaning of your domain model. Instead, you want to change the mapping between your new persistence model and the domain model.
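The de-normalization scenario above can be sketched as a small mapper: the domain entity stays intact while its persistence is split across two "tables" (rows as dicts here). All names (`Customer`, `CustomerMapper`) are invented for illustration; in .NET or Java this role is typically played by ORM mapping configuration.

```python
from dataclasses import dataclass

@dataclass
class Customer:          # domain model: no persistence concerns at all
    customer_id: int
    name: str
    city: str

class CustomerMapper:
    """All knowledge of the new table split lives here, not in the domain."""
    @staticmethod
    def to_persistence(c):
        # After the db redesign, core data and address data land in
        # different tables; the domain model never notices.
        core_row = {"id": c.customer_id, "name": c.name}
        address_row = {"customer_id": c.customer_id, "city": c.city}
        return core_row, address_row

    @staticmethod
    def to_domain(core_row, address_row):
        return Customer(core_row["id"], core_row["name"], address_row["city"])

c = Customer(1, "Ada", "London")
core, addr = CustomerMapper.to_persistence(c)
restored = CustomerMapper.to_domain(core, addr)  # round trip preserves the entity
```

When the DBA changes the schema again, only the mapper changes; the meaning of `Customer` in the domain is untouched.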
There are several benefits to keeping these designs separate such as scalability, performance, and reaction time to emergency db changes, but you should weigh them against the cost and speed of initial development. Generally, the projects that will gain the most benefit from this level of separation are large-scale enterprise applications.
UPDATE FOR COMMENTATORS
In the world of software development, there is any number of possible solutions to a given problem. Because of this, there exists an inverse relationship between flexibility and initial speed of development. As a simple example, I could hard-code logic into a class, or I could write a class that allows dynamic logic rules to be passed into it. The former option would have a higher speed of development, but at the price of a lower degree of flexibility. The latter option would have a higher degree of flexibility, but at the cost of a lower speed of development. This holds true in every coding language, because there is always any number of possible solutions.
Many tools are available that help you increase your initial development speed and flexibility. For example, an ORM tool may increase the speed of development for your database access code while also giving you the flexibility to choose whatever specific database implementations the ORM supports. From your perspective, this is a net gain in both time and flexibility minus the cost of the tool (some of which are free) which may or may not be worth it to you based on the cost of development time relative to the value of the business need.
But, for this conversation in coding styles, which is essentially what Domain Driven Design is, you have to account for the time it took to write that tool you're using. If you were to write that ORM tool or even write your database access logic in such a way that it supports all of the implementations that tool gives you, it would take much longer than if you were to just hard-code the specific implementation you plan on using.
In summary, tools can help you offset your own time to production and the price of flexibility, often by distributing the cost of that time to everyone who purchases the tool. But any code, including code that utilizes a tool, remains subject to the speed/flexibility relationship. In this way, Domain-Driven Design allows for greater flexibility than if you were to entangle your business logic, database access, service access, and UI code all together, but at the cost of time to production. Domain-Driven Design serves enterprise-level applications better than small applications because enterprise-level applications tend to have a greater cost of initial development relative to business value and, because they are more complex, they are also more subject to change, requiring greater flexibility at a reduced cost in time.
In DDD, are persistence model and domain model different things?
In DDD you have the domain model and the repository. That's it! Whether, inside the repository, you persist the domain model directly, or convert it to a persistence model before persisting it, is up to you! It's a matter of design, your design.
The domain doesn't care about how models are saved. It's an implementation detail of the repository and it doesn't matter for the domain. That's the entire purpose of Repositories: encapsulate persistence logic & details inside it.
But as developers we know it's not always possible to build a domain 100% immune from persistence interference, even though they are different things. Here in this post I detail some pros & cons of keeping the domain model completely free and isolated from the persistence model.
In DDD, are persistence model and domain model different things?
Yes, but that does not necessarily imply a different set of classes to explicitly represent the persistence model.
If using a relational database for persistence, an ORM such as NHibernate can take care of representing the persistence model through mappings to domain classes. In this case there are no explicit persistence model classes. The success of this approach depends on the mapping capabilities of the ORM. NHibernate, for example, can support an intermediate mapping class through component mappings. This allows the use of an explicit persistence model class when the need arises.
If using a document database for persistence, there is usually even less need for a persistence model since the domain model only needs to be serializable in order to be persisted.
Therefore, use an explicit persistence model class when there is a complex mapping that cannot be attained with ORM mappings to the domain model. The difference between the domain model and the persistence model remains regardless of implementation.

Application Layer vs UI

Greetings ye ol whimsical denizens of truthful knowledge,
Got a quickie for ya'll:
I'm wondering if the Application Layer is analogous to the UI Layer, generally?
I'm reading Evans' DDD book and he keeps referring to the Application Layer, but doesn't mention the UI explicitly, and so I'm left to wonder.
Could someone please help me make this distinction ? Thanks.
The application layer contains the application behavior, i.e. what happens when the user clicks somewhere. In front of the application layer there is often a presentation layer which defines the look-and-feel of the application and specific GUI widgets used. Together, these form the UI.
domain <- application <- presentation
DDD is mostly concerned with the domain layer and forming a ubiquitous model/language. It is usually not concerned with how layers are defined, except that non-domain concepts are kept out of the domain layer and in other layers such as the application layer.
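The domain <- application <- presentation arrow can be made concrete with a tiny sketch. The names (`Order`, `OrderService`, `on_submit_clicked`) are invented for illustration; the point is which responsibility lives in which layer.

```python
# domain layer: business rules only, knows nothing about callers
class Order:
    def __init__(self, items):
        if not items:
            raise ValueError("an order needs at least one item")
        self.items = items

# application layer: "what happens when the user clicks" -- orchestration
class OrderService:
    def __init__(self, saved):
        self._saved = saved
    def place_order(self, items):
        order = Order(items)
        self._saved.append(order)   # persistence would sit behind a repository
        return order

# presentation layer: look-and-feel / widgets; here just a thin handler
def on_submit_clicked(service, form_items):
    try:
        service.place_order(form_items)
        return "Order placed"       # what the widget would display
    except ValueError as e:
        return f"Error: {e}"

service = OrderService(saved=[])
ok = on_submit_clicked(service, ["book"])   # application behavior triggered
err = on_submit_clicked(service, [])        # domain rule surfaces in the UI
```

Note that the domain rule (non-empty order) is enforced in the domain class, the workflow in the application service, and only the wording of the message belongs to the presentation.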

Failover AND Load Balancing - mutually exclusive?

For the next generation of one of our products, I have been asked to design a system that has both failover capability (ie there are several nodes, and if one of the nodes crashes there is minimal / no data loss) and load balancing (so each of the nodes only handles part of the data). What I can't quite grok is how I can do both. Suppose a node has all the data but only processes an agreed subset. It changes element 8, say. Now all the other nodes have the wrong element 8. So I need to sync - tell all the other nodes element 8 changed - to maintain integrity. But surely that just makes a mockery of load-balancing?!
The short answer is, it depends very much on your application architecture.
It sounds like you are approaching this with a design anti-pattern: trying to solve for scale-out processing and disaster recovery at the same time, in the same layer. If each node only handles part of the data, then it can't be a failover for the other nodes. A lot of people fall into this trap, since both scale-out and DR can be implemented using a type of federation... but don't confuse the mechanism with the objective. I would respectfully submit you need to think about this problem a little differently.
The way to approach this problem is in two entirely separate layers:
Layer 1 -- app. Devise a high-level design for your app as if there is no requirement for DR. Ignore the fact there may be another instance of this app elsewhere that will be used in DR. Focus on functional & performance aspects of your app -- what the distinct subsystems should be, if any should scale out for workload reasons. This app as a whole handles 100% of the data -- decide if there is a scale-out / federation approach needed within the app itself -- that does not relate to the DR requirement.
Layer 2 -- DR. Now think of your app as a black box. How many instances of the black box will you need to meet your availability requirements, and how will you maintain the required degree of synchronization between those instances? What are the performance requirements for the failover & recovery (time to availability, allowable data loss if any, how long before you need the next failover env up & running)?
Back to Layer 1 -- choose an implementation approach for your high-level design that uses the recovery approach and tools you identified in Layer 2. For example, if you will use a master-slave DB approach for data synchronization among DR nodes, store everything you want to preserve in a failover in the DB layer, not in app-node-local files or memory. These choices depend on the DR framework you choose.
The design of the app layer and DR layer are related, but if you pick the right tools & approach, they don't have to be strongly coupled. E.g. in Amazon Web Services, you can use IP load balancing to forward requests to the failover app instance, and if you store all relevant data (including sessions and other transient things) in a database and use the DBMS native replication capability, it's pretty simple.
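The "store transient state in a shared layer" advice is what lets the same pool of nodes provide both load balancing and failover: any node can serve any request because session data does not live on the node. A minimal sketch, with all names (`SharedStore`, `AppNode`, `route`) invented for illustration and a dict standing in for a replicated database:

```python
class SharedStore:
    """Stands in for a replicated DB / session store."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

class AppNode:
    def __init__(self, name, store):
        self.name, self.store, self.alive = name, store, True
    def handle(self, session_id, value=None):
        if value is not None:
            self.store.put(session_id, value)  # state goes to the shared layer
        return self.store.get(session_id)

def route(nodes, session_id, value=None):
    """Trivial load balancer: first healthy node wins."""
    for node in nodes:
        if node.alive:
            return node.handle(session_id, value)
    raise RuntimeError("no healthy nodes")

store = SharedStore()
nodes = [AppNode("a", store), AppNode("b", store)]
route(nodes, "s1", "cart: 2 items")   # served by node a
nodes[0].alive = False                # node a crashes
recovered = route(nodes, "s1")        # node b serves it, no data loss
```

Because the nodes hold no local state, "which node handles the request" is purely a load-balancing decision, and failover is just routing around a dead node.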
Bottom line:
Don't confuse performance scale-out nodes (app-internal) with DR nodes (entire apps)
Use your choice of DR approach to drive implementation decisions in the app layer
Good luck

Can you bypass a layer for a certain operation in a layered architecture?

In an n-layered (5-layer, let's say) application, if there are options available for a certain operation to bypass one of the layers and communicate with the next layer directly, can it still be called an "n-layer" architecture, or does it turn into an (n-1)-layered (4-layer) architecture?
And should the layer in question, which you can bypass, be considered as a "layer" at all?
EDIT: I'm trying to implement an application with following structure -
Presentation layer (contains WPF grids)
Application layer (contains application logic and workflow as application services, extracts display model objects from domain model objects, which are then bound to the UI grids)
Domain layer (contains domain model objects only)
Repository (stores data fetched from the database, isolates the lower layers from the upper layer)
Data mapping layer (maps domain model objects to data model objects)
Data access layer (contains data model objects, and stores and retrieves data to and from the database)
-each of the above is implemented as a separate project, and the domain layer is referenced by the application layer, the repository, and the data mapping layer. Now the thing is, the application layer communicates directly with the repository, not through the domain layer, and the domain layer (if I can call it a layer at all) acts just like a cross-cutting reference. So that's where my question comes from: should I call it a domain "layer"? I think NOT. But in Domain-Driven Design there exists a domain layer, right? There must be something wrong in my architecture? Where and what is it?
You could have as many layers as you want and call it an n-layered system... whether they are used properly or are loosely coupled is another question.
The fact that you talk about bypassing a layer may mean you've over-engineered the solution or implemented a layer in an unhelpful/incorrect way... You'd need to provide some samples of usage to really help out more here...
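One way to see why the structure in the question is not really "bypassing" anything: in many DDD-style codebases the domain model is a set of shared types plus invariants that every layer references, not a station every call must pass through. A minimal sketch, with invented names (`Invoice`, `InvoiceRepository`, `register_invoice`):

```python
from dataclasses import dataclass

# domain "layer": types and invariants, no knowledge of who calls it
@dataclass
class Invoice:
    number: str
    total: float

# repository: referenced by the application layer directly -- the domain
# is a cross-cutting dependency here, not a call-through layer
class InvoiceRepository:
    def __init__(self):
        self._rows = {}
    def save(self, invoice):
        self._rows[invoice.number] = invoice.total  # mapping detail hidden here
    def find(self, number):
        return Invoice(number, self._rows[number])

# application layer: orchestrates, speaking in domain types throughout
def register_invoice(repo, number, total):
    repo.save(Invoice(number, total))
    return repo.find(number)

repo = InvoiceRepository()
inv = register_invoice(repo, "INV-1", 99.5)
```

Under this reading, the application layer calling the repository directly is the conventional arrangement, and the domain remains a layer in the dependency sense: everything depends on it, and it depends on nothing.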