Maintainable API design for communication between Blazor WASM and ASP.NET Core

I am seeking advice on how to design and implement the API between a Blazor WebAssembly client and an ASP.NET Core server for a mid-size project (a couple of developers, a few years of development). Here are the approaches I've tried and the issues I've encountered.
Approach #1: Put entity classes in the Shared project and use them as the return type of controller methods
Pros:
Simplicity - no boilerplate code
Ensured type safety from database all the way to the client
Cons:
In some cases the data should be processed on the server before it is returned - for example, we don't need to return each product, only the total count of products in a category. In other cases, the client can work with a simplified view of the data model - for example, the client only needs to know the price available to it, while the database design must be more complex so the server can determine which price applies to which customer. In these cases we need to create a custom return type for the controller method (a Data Transfer Object). This creates inconsistency, because some controller methods return database entities while others return DTOs (see the sketch after this list). I found these cases so frequent that it's better to use DTOs for all communication.
The client usually doesn't use every field of the entity, but we transfer them all anyway. This slows down the app for users with a slow internet connection.
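To illustrate the mismatch (all type and property names here are hypothetical), the entity often carries far more than a component ever renders:

// Server-side entity - includes fields the client should never see
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string InternalSku { get; set; }
    public decimal WholesalePrice { get; set; }
    public decimal RetailPrice { get; set; }
}

// What a product list component actually needs
public class ProductListItemDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }   // only the price this customer is allowed to see
}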
Approach #2: Create one Data Transfer Object per entity, map with Entity.ToDataTransferObject()
The controller has many methods for querying data, to accommodate the needs of different components on the client. Most often, the database result takes the form of an Entity or a List<Entity>. For each database entity, we have a method entity.ToDataTransferObject() which transforms the database result into a DTO from the Shared project.
For cases where the response type is very different from the database entities, we create a distinct data transfer object and do the transformation either in the controller method or in a dedicated class.
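A minimal sketch of such a mapping method, continuing the hypothetical Product entity from the previous sketch (the DTO shape is made up):

// Shared project - DTO reused by many controller methods
public class ProductDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal? Price { get; set; }   // nullable, because not every query loads it
}

// Server project - the single "master" mapping reused everywhere
public static class ProductMappings
{
    public static ProductDto ToDataTransferObject(this Product entity) => new ProductDto
    {
        Id = entity.Id,
        Name = entity.Name,
        Price = entity.RetailPrice
    };
}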
Pros:
Data model on the Client is just as complex as it needs to be
Cons:
Some controller methods load (and need to return) all data about an entity and its related entities, to a depth of 5. Other methods only need to load two simple fields. Because we use the same entity.ToDataTransferObject() method for all of them, they have to share the same return type, so any field that is not always returned is declared as nullable. This has bad consequences: the compiler no longer ensures that the Blazor component is compatible with the return type of the controller method, nor that the database query is compatible with the entity.ToDataTransferObject() method. Compatibility is only discovered by testing, and only if the right data happen to be present in the database. As development continues and the data model evolves, this is a great source of bugs.
Multiple controller methods query the same data, and the queries contain business logic (for example: which products should be displayed to this customer?). That business logic ends up duplicated across those controller methods. Even worse, the logic is sometimes duplicated into other controllers when we need to decide which related entities to include.
Now I am looking for Approach #3
The cons of Approach #2 lead me to the following design changes:
Stop making properties of a Data Transfer Object nullable to signify that they have not been loaded from the database. If a property isn't loaded, we create a new transfer object class in which the property is not present.
Stop using entity.ToDataTransferObject() - one master method that converts an entity to a Data Transfer Object. Instead, create a mapping method for every type of Data Transfer Object.
Find a way to extract parts of EF Core queries into re-usable methods to prevent duplicating business logic (see the sketch after this list).
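A rough sketch of how that could look with EF Core, assuming a Product entity, an injected DbContext named db, and a made-up visibility rule - the point is that the business rule lives in one reusable extension and each use case gets its own non-nullable DTO:

// Re-usable query logic extracted into an IQueryable extension
public static class ProductQueries
{
    public static IQueryable<Product> VisibleTo(this IQueryable<Product> products, int customerId) =>
        products.Where(p => p.RetailPrice > 0);   // stand-in for the real per-customer rules
}

// One dedicated DTO per use case - no "maybe loaded" nullable members
public record ProductListItem(int Id, string Name, decimal Price);

// Controller action projects straight into the DTO, so EF Core selects only these columns
[HttpGet("products")]
public async Task<List<ProductListItem>> GetProducts(int customerId) =>
    await db.Products
        .VisibleTo(customerId)
        .Select(p => new ProductListItem(p.Id, p.Name, p.RetailPrice))
        .ToListAsync();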
However, this would require us to add a mountain of additional code 🏔️. We would need to create a new class for each subset of an entity's properties that is used in a component. This might be worth it, considering it's likely to eliminate the majority of bugs we face today, but it's a heavy price to pay.
Have I missed anything? Is there any better design that I haven't considered?

In my experience, DTOs should match the client-side UI view model: UI-formatted values, along with record ID values so that edit forms can post updates back. This ensures there is no accidental unauthorized access to values the current session has no permission for, and it prevents over-fetching data in general.
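For example (hypothetical names), a DTO for an order edit form might look like this - formatted display values plus the IDs needed to post the update back:

public class OrderEditModel
{
    public int OrderId { get; set; }                 // record ID so the edit form can post updates
    public string CustomerName { get; set; }         // display-only
    public string TotalFormatted { get; set; }       // pre-formatted on the server, e.g. "$1,234.50"
    public DateTime? RequestedDelivery { get; set; } // the field the user is actually allowed to change
}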

Related

What Is the Proper Way to Model a Discussion Forum Entity Relationship in OOP?

I am flexing my OOP muscles and attempting to build an MVC discussion forum web application. I am working on modeling out my entity relationships between Forum objects, Thread objects, and Post objects. Unfortunately, I've run into a problem which has put my brain into a tailspin. Here's some background:
Definitions:
Forum: a collection of discussion threads (e.g. "Questions and Answers")
Thread: a collection of posts on a particular topic (e.g. "What is the meaning of life, the universe, and everything?")
Post: contains the body of a message (e.g. "Duh... it's 42")
Model Design:
Here's my initial draft for modeling my entities. In this model, forums are the root. Forums can contain zero or more threads, and threads can contain one or more posts. For the sake of this question, I have kept things as simple as possible.
Here are my service classes:
Here's what the database schema looks like:
To create my objects, I am using my interpretation of the Data Mapper Pattern to translate data from the database into entity objects. The layered architecture looks like this:
Here's where things get a little complex:
My understanding of good OOP design is that entities shouldn't really have to deal with things like "foreign keys", because those are data persistence concerns. Instead, entities should reference the actual object that the "foreign key" represents.
Therefore I want to make sure my child entities (Thread and Post) have references to their parents. This way, when it comes time to persist them, the data mapper layer can infer the foreign keys by calling a method like this on a Post object:
// get the primary key from the parent object and return it
function getThreadId() {
    return thread.getThreadId();
}
One of the problems I've run into is determining when I should inject references to parent objects into the entities.
Approach 1:
My instincts tell me that the Data Mapper layer should be responsible for this behavior. Here's a simplified version of what the build() method might look like in the Post data mapper:
// build a post entity
function build( required postId ) {
    // go out to the database and get our dto
    var dto = dao.getById( postId );
    // initialize the entity
    var post = new Post( dto.postId, dto.body );
    // inject a reference to the parent object (thread)
    post.setThread( threadDataMapper.getById( dto.threadId ) );
    return post;
}
I see a few problems with this approach:
I've read that in OOP child objects shouldn't really know about their parents and that parents should be responsible for injecting soft references to their children, not the other way around.
The approach above feels inefficient because each entity has to go out and get a copy of its parent on every new() instance. If I have a controller that gets a Thread entity, the data mapper has to instantiate both a new Thread and a Forum (2 trips to the database). If I then need to get a Post from that Thread via getPostById(), I have to instantiate the Post, and then re-instantiate the Thread and Forum again (3 trips to the database). That just smells terrible to me.
Approach 2:
Another idea I had was to have my parent entities inject themselves into their respective children. So for example, a Thread might do this when getting a Post:
// Get a post by id
function getPostById( id ) {
    // get the post entity from the service layer
    var post = postService.getById( arguments.id );
    // inject a reference of this thread into the post
    post.setThread( this );
    return post;
}
This feels a little better! However, the main caveat I've run into is when you want to access a Post directly in the application. Let's say, for example, you want a page for editing a Post. Since the only way to properly construct a Post is to go through its Thread (and the only way to construct a Thread is through its Forum), I have to make my controller do a lot more work just to get an instance of a particular Post. This seems like a lot of added complexity just to access a single entity so I can edit it.
Approach 3:
Finally, perhaps the simplest approach would be to keep Forum entities, Thread entities, and Post entities completely separate and include their foreign keys as object properties. However, with this approach it seems like I'm just using the entities as fancy DTOs, since they only hold data and don't contain any business logic.
Approach 4: ???
So that's where I am at as of today. Perhaps I'm going about solving this problem all wrong, or maybe there's a pattern that exists already for this type of model that I'm not aware of. Any help or insight you could offer would be most appreciated!
I'm no guru of OOP design, but I guess the answer depends heavily on your app's logic.
First of all, I think you have to consider each of your objects as an entity that keeps its own internal data consistent.
E.g., if the Post does not need to know which thread it belongs to in order to update its own 'title' and 'body' properties, then it should not keep the thread reference at all.
A Thread, as a container of posts, should have some sort of reference to its posts.
As the next step, let's say we want to improve thread search performance (for a given post, find its parent thread), or the Post's internal consistency starts to depend on the thread (e.g. when the thread is locked, the Post body cannot be updated).
In such a case the Post may contain a reference to its parent thread (by id or by instance).
There are supporters and opponents of each way of storing the reference.
Regarding creation, I guess all entities should have their own factories. Which instances get instantiated during creation depends on how you choose to store the reference.
Whatever variant you choose, it may work for some time, until Post starts to depend on too many classes (Thread, Author, a list of best posts). To keep its own consistency the Post would need references to all of those classes, which exposes a lot of external information. That is the point where we have to close Post for modification: any post rule that depends on external objects should be passed to the Post as a dependency during initialization (a rough sketch follows).
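A rough C# sketch of that last idea (the policy interface and names are made up): instead of holding references to Thread, Author, and so on, the Post takes the rules it needs as a dependency.

public interface IPostPolicy
{
    bool CanEditBody();   // e.g. returns false when the parent thread is locked
}

public class Post
{
    private readonly IPostPolicy policy;
    private string body;

    public Post(string body, IPostPolicy policy)
    {
        this.body = body;
        this.policy = policy;
    }

    public void UpdateBody(string newBody)
    {
        if (!policy.CanEditBody())
            throw new InvalidOperationException("This post cannot currently be edited.");
        body = newBody;
    }
}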

What is the advantage to using an event-driven approach vs procedural programming?

What are the pros and cons of using an event-driven approach vs a non-event-driven (procedural) approach?
Without EDP:
Object A responds to some user input. Object A calls a method in Object B and a method in Object C and they perform their respective tasks.
With EDP:
Object A responds to some user input. Object A publishes an event to which Objects B and C are subscribed. Relevant data is packaged into an EventArgs, received by B and C, and they perform their respective tasks.
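A minimal C# sketch of the two styles (all class and member names are made up):

public class ObjectB { public void DoWork(string input) { /* ... */ } }
public class ObjectC { public void DoWork(string input) { /* ... */ } }

// Procedural: A must know about B and C and calls them directly
public class ProceduralA
{
    private readonly ObjectB b = new ObjectB();
    private readonly ObjectC c = new ObjectC();

    public void OnUserInput(string input)
    {
        b.DoWork(input);
        c.DoWork(input);
    }
}

// Event-driven: A only raises an event; subscribers attach themselves elsewhere
public class InputEventArgs : EventArgs
{
    public InputEventArgs(string input) => Input = input;
    public string Input { get; }
}

public class EventDrivenA
{
    public event EventHandler<InputEventArgs> InputReceived;

    public void OnUserInput(string input) =>
        InputReceived?.Invoke(this, new InputEventArgs(input));
}

// Wiring, e.g. at startup:
// var a = new EventDrivenA();
// a.InputReceived += (sender, e) => new ObjectB().DoWork(e.Input);
// a.InputReceived += (sender, e) => new ObjectC().DoWork(e.Input);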
Is there an advantage to using one or the other? I'm at a crossroads where I need to choose. However, I do not have any objective information on which one is superior, and in what ways one may have an advantage over the other.
Thanks!
Edit: I understand the difference in a similar fashion to how it's described here: https://stackoverflow.com/a/28135353/3547347
Is there an advantage to using one or the other?
Yes - using events decouples A, B, and C. Without events, you cannot, for example, extend functionality by having another type respond to A's events without modifying A's code.
The downside is that it's harder to code (though not terribly) and you have to write more "plumbing" to wire up all of the relevant events. It also makes it harder to trace logic, since you don't know what may be listening to A's events at any one time.
Extensibility and maintenance. Instead of having to go back to the method and add to it every time you want a new 'subscriber' (as in your without-EDP example), you just add the method you want to call to the list of subscribers.
OOP is all about encapsulating the parts of your code that change, so that changing them has as few consequences as possible. You don't want to have to modify a vaguely related class each time you need new functionality elsewhere in a project.
So I would say given the two options, always go with the event driven model.
I think you are talking about an observer pattern.
You use the observer pattern when you don't have an Object B and Object C at the time you are implementing Object A; or if you know that later, additional classes will need to know about the event, but you do not want them to have to modify the code for Object A.
Event-driven programming is a concurrency model for handling IO-bound processes (like user input in your example). So, really, both processes you've described are event-driven.
The difference between the two examples is that by introducing the publish / subscribe abstraction between the "observer" object and the "responder" objects you are, as D Stanley mentions, decoupling the two layers by adding a layer of indirection.
The advantage of this approach is greater abstraction (at the expense of just a little more complexity). So you could do things like put a queue between the "observers" and the "responders" which can allow you to control and observe your process, and scale your system.
So, for example, your "observer" could be a front-end application that queues jobs onto a queue server, which is queried by the "responders" - other applications running on other servers. That would be one way to architect a multi-tier application.

Is it good to return a generic object with all important return values from a facade signature?

Since I asked the question How has to be written the signature in facade pattern?, I've thought about how to create a signature for an API that has to be both useful and efficient (and an aesthetically nice solution!). I've seen some APIs whose boundary interfaces at the top expose the following style of signature:
public List<InterestingDTO> ANecessaryMethodToCompleteABusinessProcess(
    int aParameter,
    InterestingDTO aSecondParameter)
In this style, business rule violations and other normal/abnormal business situations have to be reported either by using a specific exception designed for this signature or by adopting some convention, like returning null, to signal the situation at the end of the method's execution.
I think that using exceptions to signal business problems can lead to maintainability problems and is surely a bad practice (there is plenty of technical bibliography arguing about this). So, to cope with these problems, I suggest using a structure or a class like this:
public class ReturnValue<T>
{
    public T returnedValue;
    public string message;
    public Status status;
}

enum Status { SUCCESS, FAILURE, ANY_OTHER_STATUS }
The former signature can then be written as:
public ReturnValue<List<InterestingDTO>> ANecessaryMethodToCompleteABusinessProcess(
    int aParameter,
    InterestingDTO aSecondParameter)
Here, everything that is interesting to the consuming layers can be known, at least, efficiently. Notice that there are no exceptions used for control flow (except perhaps those you want outer layers to know about), and the business layer has full control over business error messages. Do you think this approach has any flaws?
Please, if possible, add some references for your answer.
We use pretty much the same pattern throughout our enterprise apps, with two additions: 1) for transactional services, an additional List<> property containing "validation results", each of which models a single business rule or validation rule violation and can be reported back to the client (user or service consumer) with as much context information as possible; and 2) for data services, paging information indicating how much total data is available to the client (given that we only allow a finite number of rows to be returned), which lets the client tie into a pagination strategy.
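A hypothetical sketch of those two additions on top of the ReturnValue<T> above (names are made up):

public class ValidationResult
{
    public string Rule { get; set; }      // which business/validation rule was violated
    public string Message { get; set; }   // context to report back to the client
}

// For transactional services
public class TransactionalReturnValue<T> : ReturnValue<T>
{
    public List<ValidationResult> ValidationResults { get; set; } = new List<ValidationResult>();
}

// For data services that page their results
public class PagedReturnValue<T> : ReturnValue<T>
{
    public int TotalRows { get; set; }    // how much data is available in total
    public int PageNumber { get; set; }
    public int PageSize { get; set; }
}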
The only complaint thus far comes from service consumers: when we exposed service methods returning the typed generic across the enterprise (ESB / SOA), the WSDL naming of the generics can be cumbersome (e.g. ReturnResultOfPOUpdateRequestJQ3KogUE). This isn't much of a concern for .NET clients if we share the entities on both client and service, but for other clients such as Java, mobile, etc. it can be problematic, and sometimes we need to provide an alternative facade without generics for such clients.

Help with debate on Separation of concerns (Data Access vs Business Logic)

I had a debate with my co-worker on whether certain logic belongs in the data access or business logic layer.
The scenario is, the BLL needs some data to work with. That data primarily lives in the database. We want to cache that data (using System.Runtime.Caching) so it's quickly available on subsequent requests. The architecture is such that the DAL and the BLL live on the same box and in different assemblies (projects in the same solution). So there is no concern of hitting a DAL over the wire or anything like that.
My argument is that the decision to hit the cache vs the database is a concern of the DAL. The business logic layer should not care where the data comes from, only that it gets the data it needs.
His argument is that the data access layer should be "pure" and "dumb" and any logic to make that decision to hit the cache vs the database should be in the business logic layer.
In my opinion what he's saying is undermining separation of concerns and causing the layers to be more tightly coupled when the goal is to keep things loosely coupled. I can see where the BLL might want to control this if it was a specific program/ui function to decide where to go for data, but that simply isn't the case here. It's just a very simple caching scenario where the database is the primary data store.
Thoughts?
I agree with you 100%.
Caching is part of the DAL and does not belong in the BLL.
Let's take Hibernate as an example: it uses a caching system to store your entities. Hibernate is responsible for, and knows how to control, its cache (dirty reads, flushing data, etc.).
You don't want to clutter your BLL with all this low-level data logic.
I believe the caching should be done in the business layer. The moment you try to get the data from the DAL, you can check whether the data is available in the cache (System.Runtime.Caching); if so, use the cached data, otherwise fetch the data from the database. Moreover, if you want to invalidate the cache for some reason, you can do it by calling a function in the business layer.
The whole purpose in separating business logic from data is so that you can swap them out as business requirements or technology change. By intermixing them, you are defeating this purpose, and therefore, on a theoretical level, you are correct. In the real world, however, I think you need to be a bit more pragmatic. What's the real life expectancy of the application, what is the likelihood that the technology is going to change, and how much extra work is involved in keeping the two cleanly separated?
My initial reaction would be the same as yours, to let the data layer cache the information. This can even be integrated in with a strategy to subscribe to changes in the database, or implement polling to ensure the data is kept up-to-date.
However, if you intend to re-use the data layer in other projects, or even if not, it might not be a bad idea to implement a new business layer between the existing one and the data layer to handle caching decisions. Because ultimately, caching is a not just a performance issue, it does involve business decisions about concurrency and other matters.
An n-tier system is just that: you're not limited in how many layers you separate things into.
I know I'm over two years late to the game but I wanted to add something:
If you have an interface defined for your DAL, you can write a caching mechanism that follows that interface and manages 'cache vs. hit the data source' concerns without the technology or source-specific DAL code having to worry about it and without the BLL having to worry about it. Example:
internal interface IThingsGateway
{
    Thing GetThing(int thingId);
    void UpdateThing(ThingUpdateInfo info);
}

internal class MsSqlThingsGateway : IThingsGateway
{
    // implementation specific to MsSql here
}

internal class CachingThingsGateway : IThingsGateway
{
    private IThingsGateway underlyingGateway;

    public CachingThingsGateway(IThingsGateway implementation)
    {
        this.underlyingGateway = implementation;
    }

    public Thing GetThing(int thingId)
    {
        if (this.HasCachedThing(thingId))
        {
            return this.GetCachedThing(thingId);
        }
        var thing = this.underlyingGateway.GetThing(thingId);
        this.SetCachedThing(thingId, thing);
        return thing;
    }

    public void UpdateThing(ThingUpdateInfo info)
    {
        this.underlyingGateway.UpdateThing(info);
        this.ClearCachedThing(info.ThingId);
    }

    // HasCachedThing, GetCachedThing, SetCachedThing and ClearCachedThing are
    // cache-housekeeping helpers, omitted here for brevity
}
And I would use this same approach if I needed to check multiple data sources for a thing: write an implementation of IThingsGateway that handles the logic of juggling the various data sources, delegating to the appropriate one... then wrap that in the CachingThingsGateway. Client code will ultimately obtain an IThingsGateway reference from some factory or container, which is where the wrapping and instantiating would occur.
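A hypothetical composition-root sketch of that wiring:

// The factory/container wraps the concrete gateway in the caching decorator
IThingsGateway gateway = new CachingThingsGateway(new MsSqlThingsGateway());

// The BLL only ever sees IThingsGateway and stays unaware of caching
Thing thing = gateway.GetThing(42);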
And all of this really doesn't take that much extra effort. If you use caching you will have to write that code anyways, and the overhead generated by putting it in another class with the same interface is minimal at worst.

DAL and BLL with Lazy Loading

How would one implement lazy loading in the context of three tiers? I understand the basic architecture of Presentation Layer, Business Layer, and Data Layer:
You have your basic "dumb" classes that are nearly mirror images of the tables in the database with one exception. Instead of foreign key IDs you have a reference to the actual instance(s) of what is being referred to. For example: Employee with Name/DOB/Title properties.
Then for each of these classes you have a class that provides the CRUD operations on it, plus any custom data storage routines you might need (calling a stored procedure that works with that object, etc.). This class would be swapped out if you changed databases. For example: EmployeeDAL.Save(myEmployee), EmployeeDAL.Get(myEmployee) (where myEmployee has its ID populated but nothing else).
You have business layer classes that perform validation and what not. The methods in these classes usually end by calling into the DAL to persist information or to retrieve it. This is changed when the customer changes their mind about what constitutes valid/invalid data or wants to change the way some calculation is done.
The presentation layer interacts with the business layer to display things and shuttle inserts/updates made in the UI to the lower layers. For example: it loops over a list of Employees and displays them in an HTML table.
But where exactly would the code for lazy loading references go? If the presentation layer has a Company object that it just displayed and is beginning the process of displaying myCompany.Employees, how is that achieved? myCompany is an instance of one of the dumb classes that mirror the database tables and isn't supposed to know about how to retrieve anything.
Do you do as the answer to this question suggests and create a Dummy version of each object? Then the DAL-level object can have variables indicating whether Employees has been loaded and call DALEmployee.GetEmployees(this)? I feel as if I'm missing something crucial about the pattern...
If you use a pre-built framework such as NHibernate, this all becomes much easier: you can define the lazy loading in the class/table mapping and it is handled when a query is run. Doing it yourself in a neat manner is going to take a fair bit of code, although the System.Lazy<T> class in .NET 4 may help.
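A rough sketch of the System.Lazy approach, with hypothetical Company/Employee types: the DAL hands the entity a deferred loader, so the "dumb" class still knows nothing about data access.

public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Company
{
    private readonly Lazy<List<Employee>> employees;

    public Company(int id, string name, Lazy<List<Employee>> employees)
    {
        Id = id;
        Name = name;
        this.employees = employees;
    }

    public int Id { get; }
    public string Name { get; }

    // First access triggers the load; Company itself never talks to the database
    public List<Employee> Employees => employees.Value;
}

// In the DAL, when materializing a Company:
// return new Company(row.Id, row.Name,
//     new Lazy<List<Employee>>(() => employeeDal.GetByCompanyId(row.Id)));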