Is it good to return a generic object with all important return values from a facade signature? [closed] - oop

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
Since I asked the question "How has to be written the signature in facade pattern?", I've thought about how to create a signature for an API that has to be both useful and efficient (and an aesthetically nice solution!). I've seen some APIs whose boundary interfaces at the top expose the following style of signature:
public List<InterestingDTO> ANecessaryMethodToCompleteABusinessProcess(
    int aParameter, InterestingDTO aSecondParameter)
In this style, business rule violations and other normal/abnormal business situations have to be reported either by using a specific exception designed for this signature or by adopting some convention, like returning null, to signal the situation at the end of the method's execution.
I think that using exceptions to signal business problems can lead to maintainability problems, and it is arguably bad practice (there is a bunch of technical bibliography arguing about this). So, to cope with these problems, I suggest using a structure or a class like this:
public class ReturnValue<T>
{
    public T returnedValue;
    public string message;
    public Status status;
}

public enum Status { SUCCESS, FAILURE, ANY_OTHER_STATUS }
The former signature can then be written as:
public ReturnValue<List<InterestingDTO>> ANecessaryMethodToCompleteABusinessProcess(
    int aParameter, InterestingDTO aSecondParameter)
where everything of interest to the consuming layers can be learned, at least, efficiently. Notice that no exceptions are used for control flow (except perhaps those you want outer layers to know about), and the business layer has full control over business error messages. Do you think this approach has any flaws?
Please, if possible, add some bibliography for your answer.
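To make the question concrete, here is a minimal, self-contained sketch of how a consumer might use such a signature. The facade method, its parameters, and its messages are hypothetical; the ReturnValue/Status shape is the one proposed above:

```csharp
using System;
using System.Collections.Generic;

public enum Status { SUCCESS, FAILURE, ANY_OTHER_STATUS }

public class ReturnValue<T>
{
    public T returnedValue;
    public string message;
    public Status status;
}

public class OrderFacade
{
    // Hypothetical business method: it reports a rule violation through the
    // result object instead of throwing an exception.
    public ReturnValue<List<string>> GetOrderLines(int orderId)
    {
        if (orderId <= 0)
        {
            return new ReturnValue<List<string>>
            {
                status = Status.FAILURE,
                message = "Order id must be positive."
            };
        }

        return new ReturnValue<List<string>>
        {
            status = Status.SUCCESS,
            returnedValue = new List<string> { "10 x widget", "2 x gadget" }
        };
    }
}
```

The consuming layer then branches on `status` rather than wrapping the call in try/catch.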

We pretty much use the same throughout our enterprise apps, with two additions, viz 1) for transactional services, an additional List<> property containing a list of "Validation Results", each of which models a single business rule or validation rule violation, which can then be reported back to the client (user or service consumer) with as much context information as possible, and 2) for data services we add paging information, indicating how much total data is available to the client (given that we only allow a finite number of rows to be returned). This allows the client to tie into a pagination strategy.
The only complaint thus far comes from service consumers: when we exposed service methods returning the typed generic across the enterprise (ESB / SOA), the WSDL naming of the generics can be cumbersome (e.g. ReturnResultOfPOUpdateRequestJQ3KogUE). This isn't of much concern to .NET clients if we share the Entities on both client and service, but for other clients such as Java, Mobile, etc. it can be problematic, and sometimes we need to provide an alternative facade without the generics for such clients.
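A sketch of those two additions on top of the asker's ReturnValue shape. All class and property names here are mine, not the poster's:

```csharp
using System;
using System.Collections.Generic;

// One entry per violated business/validation rule, with context for the client.
public class ValidationResult
{
    public string RuleName;
    public string Message;
}

public class PagedReturnValue<T>
{
    public T returnedValue;
    public string message;

    // 1) transactional services: the list of rule violations
    public List<ValidationResult> validationResults = new List<ValidationResult>();

    // 2) data services: paging info so the client can build a pagination strategy
    public int totalRowCount;
    public int pageNumber;
    public int pageSize;
}
```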


Maintainable API design for communication between Blazor WASM and ASP.NET Core [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 months ago.
The community reviewed whether to reopen this question 3 months ago and left it closed:
Original close reason(s) were not resolved
I am seeking advice on how to design and implement the API between Blazor WebAssembly and the ASP.NET Core server for a mid-size project (a couple of developers, a few years). Here are the approaches that I've tried and the issues I've encountered.
Approach #1: Put Entity classes to the Shared project and use them as the return type in Controller methods
Pros:
Simplicity - no boilerplate code
Ensured type safety from database all the way to the client
Cons:
In some cases the data should be processed on the server before it is returned - for example, we don't need to return each product, only the total count of products in a category. In other cases the Client can work with a simplified view of the data model - for example, the Client only needs to know the price available to them, while the database design needs to be more complex so the Server can determine which price is available to which customer. In these cases we need to create a custom return type for the Controller method (a Data Transfer Object). This creates inconsistency, because some Controller methods return database entities while others return DTOs. I found these cases so frequent that it's better to use DTOs for all communication.
The Client usually doesn't use each field in the entity, but we transfer it anyway. This slows down the app for users with slow internet connection.
Approach #2: Create one Data Transfer Object per entity, map with Entity.ToDataTransferObject()
The Controller has many methods for querying data, to accommodate the needs of different Components on the client. Most often, the database result takes the form of an Entity or of List<Entity>. For each database entity, we have a method entity.ToDataTransferObject() which transforms the database result into a DTO from the Shared project.
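As a sketch (the entity and DTO names are invented for illustration), such a mapping method typically looks like:

```csharp
using System;

// Hypothetical server-side entity and its DTO from the Shared project.
public class Product
{
    public int Id;
    public string Name;
    public decimal InternalCost;  // server-only; deliberately absent from the DTO
}

public class ProductDto
{
    public int Id;
    public string Name;
}

public static class ProductMappings
{
    // The single "master" mapping per entity described above.
    public static ProductDto ToDataTransferObject(this Product entity) =>
        new ProductDto { Id = entity.Id, Name = entity.Name };
}
```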
For cases when the response type is very different from database entities, we create distinct data transfer object and do the transformation either in Controller method or in a distinct class.
Pros:
Data model on the Client is just as complex as it needs to be
Cons:
Some controller methods load (and need to return) all data about an entity and its related entities, going to a depth of 5. Other methods only need to load two simple fields. Because we use the same entity.ToDataTransferObject() method for all of them, they need to share the same return type, so any field which is not always returned is declared as nullable. This has BAD consequences: the compiler no longer ensures the compatibility of the Blazor Component with the return type of the Controller method, and it doesn't ensure compatibility of the database query with the entity.ToDataTransferObject() method. The compatibility is only discovered by testing, and only if the right data are present in the database. As app development continues and the data model evolves, this is a great source of bugs.
There are multiple controller methods querying the same data. The queries contain some business logic (for example - which products should be displayed to this customer?). When there are multiple controller methods querying the same data, this business logic is duplicated into multiple controller methods. Even worse, sometimes the logic is duplicated into other controllers, when we need to decide which entity to include.
Now I am looking for Approach #3
The cons of Approach #2 lead me to the following design changes:
Stop making properties of the Data Transfer Object nullable to signify that they have not been loaded from the database. If a property hasn't been loaded, we need to create a new class for the transfer object in which the property is not present.
Stop using entity.ToDataTransferObject() - one master-method to convert an entity to Data Transfer Object. Instead, create a method for every type of DataTransferObject.
Find a way to extract parts of EF Core queries to re-usable methods to prevent duplicating business logic.
However, this would require us to add a mountain of additional code 🏔️. We would need to create a new class for each subset of an entity's properties that is used in a component. This might be worth it, considering it's likely to eliminate the majority of the bugs we face today, but it's a heavy price to pay.
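For the third design change, one commonly suggested shape (all names here hypothetical) is an extension method over IQueryable<T>; because it stays inside the expression tree, EF Core can still translate it to SQL, and every controller method reuses the same rule:

```csharp
using System;
using System.Linq;

public class Product
{
    public bool IsActive;
    public int RequiredTier;  // hypothetical input to the visibility rule
}

public static class ProductQueries
{
    // The "which products may this customer see" business rule lives in
    // exactly one place instead of being duplicated across controllers.
    public static IQueryable<Product> VisibleTo(
        this IQueryable<Product> products, int customerTier) =>
        products.Where(p => p.IsActive && p.RequiredTier <= customerTier);
}
```

Inside a controller method this would compose as `db.Products.VisibleTo(customer.Tier).Select(...)` before projecting into a DTO.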
Have I missed anything? Is there any better design that I haven't considered?
In my experience, use DTOs that match the client-side UI view model: UI-formatted values, along with record ID values to allow posting updates from edit forms. This ensures no accidental unauthorized access to values the current session has no permissions for, and it prevents overfetching data in general.

Breaking up a large "monolithic" class into smaller ones [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
What is the best way to go about breaking up a large "monolithic" class into smaller ones?
I have designed a simple chat system that has User objects and Channel objects, where a user can be in and talk in a number of channels.
Here is a diagram of my design:
The primary issue I have with this design is that the ChatManager class is a bit monolithic, i.e. it does too many things. In a previous incarnation it also handled channel membership, which has now been separated out into ChannelMembershipManager.
What is the best way to go about "simplifying" my ChatManager class? Are there any other problems with my design I am not seeing?
The best way to break up that monolithic manager is to assign responsibilities to the classes, according to OO tenets. Here are some suggestions that immediately come to mind. Don't expect perfection, this is just off the top of my head.
I see no need for a "Manager" class, although I do see a need to track all the instances of the Channel class and all the instances of the User class. Maybe this could be done with class statics within each class. (These indexes could be modeled in UML using qualifiers, which work kind of like hash maps.) The Channels and Users really don't even need numbers! Those numbers are merely one of many ways to code this.
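A minimal sketch of the "class statics as an index" idea (C#, names invented for illustration):

```csharp
using System;
using System.Collections.Generic;

public class Channel
{
    // Class-static index of every instance, keyed by name like a hash map --
    // which is why the channels need no numbers at all.
    private static readonly Dictionary<string, Channel> index =
        new Dictionary<string, Channel>();

    public string Name { get; }

    public Channel(string name)
    {
        Name = name;
        index[name] = this;  // register on construction
    }

    public static Channel Find(string name) =>
        index.TryGetValue(name, out var channel) ? channel : null;
}
```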
Each instance of the User class can use a command channel to communicate with a person. When a person asks the instance of the User class to join a channel, it can create an instance of a Private Channel that manages a per-channel socket that is private to one person, then ask an instance of the requested Channel for permission to accept it. That Private Channel could have methods to poll(), read() and write().
An instance of a Channel class could be responsible for announcing things when a User joins or leaves. Each instance of a Channel class should be responsible for polling the connected Private Channel instances, reading messages/commands, and taking action, such as repeating a message out to all the other Private Channels.
This is just off the top of my head. If I took some time to think about it, I might see some potential problems or optimizations I could make, but hopefully this gives you some ideas for how to split up a "manager" monolith according to OO tenets.

Why is public/private such an important programming aspect? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
Why do most programming languages give you the ability to have private and/or public methods/functions, classes, and properties?
Does it make much of a difference to, let's say, have all classes, methods, and properties be public?
I know that if you have all your methods, classes, and properties set to private, nothing will work.
So at least I know that much.
But does the distinction between the two matter? What's the big deal if one class knows another class "that is meant to be private" exists?
When you make something public, you enter a contract with the consuming class: "Hey, this is what I offer; use it or not." Changing the public interface is expensive, because you have to change all the code using that public interface, too. Think of the developer of a framework like Cocoa, used by thousands of developers. If you change one public method, for example by removing it, thousands of apps break. They have to be changed, too.
So making everything public simply means that you cannot change anything anymore. (You can, but the people will get angry at one point.)
Let's think of a class implementing a list. There is a method to sort it: sortListWithKey. You make that public because you want the users of the class to get a sorted list. This is good.
There are several algorithms for sorting. Let's say you implement one that needs to calculate the median (the middle element). You need this method only internally for your sorting algorithm, so it is enough to implement it privately. Changing the whole data-holding structure, including the implemented sorting algorithm, is then no problem and will not break existing code using that class.
But if you had made the median method public (remember: you implemented it because you needed it internally), you would have to keep it even if the new sorting algorithm does not need it. You cannot remove it anymore, and even with the new structure it may be very hard (and/or expensive) to keep the method working.
So make public the part of your implementation that is useful to its users, but no more. Otherwise you shackle yourself.
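The list example above, sketched in C# (class name and the sort-key detail simplified for illustration):

```csharp
using System;
using System.Collections.Generic;

public class KeyedList
{
    private readonly List<int> items = new List<int>();

    public void Add(int item) => items.Add(item);
    public IReadOnlyList<int> Items => items;

    // Public contract: "give me the list, sorted". How it sorts is hidden.
    public void SortListWithKey() => items.Sort();

    // Private helper that one particular algorithm happens to need. A future
    // algorithm that doesn't need it can delete it without breaking callers.
    private int Median()
    {
        var sorted = new List<int>(items);
        sorted.Sort();
        return sorted[sorted.Count / 2];
    }
}
```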
If humans had perfect memory, documentation and communication skills, and made no mistakes, then there might not be a useful difference. But using or changing something from the wrong file and then forgetting about it (or not documenting it clearly for the rest of the team, or yourself in the future) is too common a cause of hard-to-find bugs.
Marking things private makes it a bit more work to create the same types of bugs, and thus less likely that lazy/sleepy programmers will do all that extra work just to mess up the application.
In computer science it is called information hiding. You, as a programmer, want to offer only necessary methods or properties to other programmers which will use your public API and this is the way how you can achieve so-called low coupling between modules.

What is the advantage to using an event-driven approach vs procedural programming? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
What are the pros and cons of using an event-driven approach vs a non-event-driven (procedural) approach?
Without EDP:
Object A responds to some user input. Object A calls a method in Object B and a method in Object C and they perform their respective tasks.
With EDP:
Object A responds to some user input. Object A publishes an event in which Objects B and C are subscribed. Relevant data is packaged up into an EventArgs and received by B & C and they perform their respective tasks.
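In C# terms, the two variants might look like this (the class and event names are mine, not a standard API):

```csharp
using System;

public class ObjectB { public void DoTask(string data) { /* work */ } }
public class ObjectC { public void DoTask(string data) { /* work */ } }

// Without EDP: A holds direct references and calls B and C itself.
public class DirectA
{
    private readonly ObjectB b = new ObjectB();
    private readonly ObjectC c = new ObjectC();

    public void OnUserInput(string data)
    {
        b.DoTask(data);
        c.DoTask(data);
    }
}

// With EDP: A only publishes; B and C (or anything else) subscribe.
public class EventedA
{
    public event EventHandler<string> InputReceived;

    public void OnUserInput(string data) => InputReceived?.Invoke(this, data);
}
```

Subscription is then e.g. `a.InputReceived += (sender, data) => b.DoTask(data);` — A never learns who is listening.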
Is there an advantage to using one or the other? I'm at a crossroads where I need to choose. However, I do not have any objective information on which one is superior, and in what ways one may have an advantage over the other.
Thanks!
Edit: I am understanding the difference in a similar fashion to how it's described here: https://stackoverflow.com/a/28135353/3547347
Is there an advantage to using one or the other?
Yes - using events decouples A, B, and C. Without events you cannot, for example, extend the functionality by having another type react to A's input handling without modifying A's code.
The downside is that it's harder to code (though not terribly) and you have to write more "plumbing" to wire up all of the relevant events. It also makes it harder to trace logic, since you don't know what may be listening to A's events at any one time.
Extensibility and maintenance. Instead of having to go back to the method and add to it every time you want a new 'subscriber' (as in your without-EDP example), you just add the method you want to call to its list of subscribers.
OOP is all about encapsulating the parts of your code that change, so that changing them has as few consequences as possible. You don't want to have to modify a vaguely related class each time you need new functionality elsewhere in a project.
So I would say given the two options, always go with the event driven model.
I think you are talking about an observer pattern.
You use the observer pattern when you don't have an Object B and Object C at the time you are implementing Object A; or if you know that later, additional classes will need to know about the event, but you do not want them to have to modify the code for Object A.
Event-driven programming is a concurrency model for handling IO-bound processes (like the user input in your example). So, really, both processes you've described are event-driven.
The difference between the two examples is that by introducing the publish / subscribe abstraction between the "observer" object and the "responder" objects you are, as D Stanley mentions, decoupling the two layers by adding a layer of indirection.
The advantage of this approach is greater abstraction (at the expense of just a little more complexity). So you could do things like put a queue between the "observers" and the "responders" which can allow you to control and observe your process, and scale your system.
So, for example, Your "observer" could be a front-end application that queues jobs on to a queue server that is queried by the "responders" which are other applications that run on other servers. That would be one way to architect a multi-tier application.

Help with debate on Separation of concerns (Data Access vs Business Logic) [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I had a debate with my co-worker on whether certain logic belongs in the data access or business logic layer.
The scenario is, the BLL needs some data to work with. That data primarily lives in the database. We want to cache that data (using System.Runtime.Caching) so it's quickly available on subsequent requests. The architecture is such that the DAL and the BLL live on the same box and in different assemblies (projects in the same solution). So there is no concern of hitting a DAL over the wire or anything like that.
My argument is that the decision to hit the cache vs the database is a concern of the DAL. The business logic layer should not care where the data comes from, only that it gets the data it needs.
His argument is that the data access layer should be "pure" and "dumb" and any logic to make that decision to hit the cache vs the database should be in the business logic layer.
In my opinion what he's saying is undermining separation of concerns and causing the layers to be more tightly coupled when the goal is to keep things loosely coupled. I can see where the BLL might want to control this if it was a specific program/ui function to decide where to go for data, but that simply isn't the case here. It's just a very simple caching scenario where the database is the primary data store.
Thoughts?
I agree with you 100%.
Caching is part of the DAL and does not belong in the BLL.
Let's take Hibernate as an example: it uses a caching system to store your entities. Hibernate is responsible for its cache and knows how to control it (dirty reads, flushing data, etc.).
You don't want to clutter your BLL with all this low-level data logic.
I believe that the caching should be done in the business layer. The moment you try to get the data from the DAL, you can check whether the data is available in the cache (System.Runtime.Caching); if so, use the cached data, otherwise fetch the data from the database. Moreover, if you want to invalidate the cache for some reason, you can do so by calling a function in the business layer.
The whole purpose of separating business logic from data is so that you can swap either out as business requirements or technology change. By intermixing them, you are defeating this purpose, so on a theoretical level you are correct. In the real world, however, I think you need to be a bit more pragmatic: what's the real life expectancy of the application, what is the likelihood that the technology is going to change, and how much extra work is involved in keeping the two cleanly separated?
My initial reaction would be the same as yours, to let the data layer cache the information. This can even be integrated in with a strategy to subscribe to changes in the database, or implement polling to ensure the data is kept up-to-date.
However, if you intend to re-use the data layer in other projects, or even if not, it might not be a bad idea to implement a new business layer between the existing one and the data layer to handle caching decisions. Because ultimately, caching is not just a performance issue; it involves business decisions about concurrency and other matters.
An n-tier system is just that: you're not limited in how many levels you separate things into.
I know I'm over two years late to the game but I wanted to add something:
If you have an interface defined for your DAL, you can write a caching mechanism that follows that interface and manages 'cache vs. hit the data source' concerns without the technology or source-specific DAL code having to worry about it and without the BLL having to worry about it. Example:
internal interface IThingsGateway
{
    Thing GetThing(int thingId);
    void UpdateThing(ThingUpdateInfo info);
}

internal class MsSqlThingsGateway : IThingsGateway
{
    // implementation specific to MsSql here
}

internal class CachingThingsGateway : IThingsGateway
{
    private readonly IThingsGateway underlyingGateway;

    public CachingThingsGateway(IThingsGateway implementation)
    {
        this.underlyingGateway = implementation;
    }

    public Thing GetThing(int thingId)
    {
        if (this.HasCachedThing(thingId))
        {
            return this.GetCachedThing(thingId);
        }
        var thing = this.underlyingGateway.GetThing(thingId);
        this.SetCachedThing(thingId, thing);  // store the value, not just the id
        return thing;
    }

    public void UpdateThing(ThingUpdateInfo info)
    {
        this.underlyingGateway.UpdateThing(info);
        this.ClearCachedThing(info.ThingId);  // invalidate on write
    }

    // HasCachedThing, GetCachedThing, SetCachedThing and ClearCachedThing are
    // elided; they would wrap whichever cache you use (e.g. System.Runtime.Caching).
}
And I would use this same approach if I needed to check multiple data sources for a thing: write an implementation of IThingsGateway that handles the logic of juggling the various data sources, delegating to the appropriate one... then wrap that in the CachingThingsGateway. Client code will ultimately obtain an IThingsGateway reference from some factory or container, which is where the wrapping and instantiating would occur.
And all of this really doesn't take that much extra effort. If you use caching you will have to write that code anyways, and the overhead generated by putting it in another class with the same interface is minimal at worst.