The Open Closed Principle - OOP

Imagine I have a banking application with a base class called Account and many subclasses, e.g. Savings, Current. I have designed my application so that if I need a new account type, say Checking, I can add it easily without disturbing the other classes. This seems to follow the Open Closed Principle.
But what happens if my customer decides that a detail of the Savings account needs changing? I'm not extending Savings; I want to change its calculateInterest() method. Does the fact that I have to modify Savings mean I violate the Open Closed Principle?
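To make it concrete, here is a minimal C# sketch of the kind of hierarchy I mean (the names and the interest rule are only illustrative):

public abstract class Account
{
    public decimal Balance { get; protected set; }

    // Each concrete account type supplies its own interest rule.
    public abstract decimal CalculateInterest();
}

public class Savings : Account
{
    // Changing this rule later means modifying this class rather than extending it,
    // which is exactly the situation the question is asking about.
    public override decimal CalculateInterest() => Balance * 0.03m;
}

public class Current : Account
{
    public override decimal CalculateInterest() => 0m;
}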


Maintainable API design for communication between Blazor WASM and ASP.NET Core

I am seeking advice on how to design and implement the API between a Blazor WebAssembly client and an ASP.NET Core server for a mid-size project (a couple of developers, a few years). Here are the approaches I've tried and the issues I've encountered.
Approach #1: Put Entity classes in the Shared project and use them as the return type of Controller methods
Pros:
Simplicity - no boilerplate code
Type safety ensured from the database all the way to the client
Cons:
In some cases the data should be processed on the server before it is returned - for example, we don't need to return each product, only the total count of products in a category. In other cases, the Client can work with a simplified view of the data model - for example, the Client only needs to know the price that's available to them, but the database design needs to be more complex so the Server can determine which price is available to which customer. In these cases we need to create a custom return type for the Controller method (a Data Transfer Object). This creates inconsistency, because some Controller methods return database entities while others return DTOs. I found these cases to be so frequent that it's better to use DTOs for all communication.
The Client usually doesn't use every field in the entity, but we transfer them all anyway. This slows down the app for users on slow internet connections.
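A minimal sketch of what Approach #1 looks like in a controller (Product and AppDbContext stand in for our real types):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
public class ProductsController : ControllerBase
{
    private readonly AppDbContext _db;

    public ProductsController(AppDbContext db) => _db = db;

    // Approach #1: the EF Core entity from the Shared project is also the wire format.
    [HttpGet("api/products/{id}")]
    public async Task<Product?> GetProduct(int id) => await _db.Products.FindAsync(id);
}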
Approach #2: Create one Data Transfer Object per entity, map with Entity.ToDataTransferObject()
The Controller has many methods for querying data, to accommodate the needs of different Components on the client. Most often, the database result takes the form of an Entity or a List<Entity>. For each database entity, we have a method entity.ToDataTransferObject() which transforms the database result into a DTO from the Shared project.
For cases when the response type is very different from the database entities, we create a distinct data transfer object and do the transformation either in the Controller method or in a separate class.
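A sketch of what this mapping typically looks like (Product and its fields are placeholders for our real entity):

using System.Collections.Generic;
using System.Linq;

// Shared project: one DTO per entity, reused by every controller method.
public class ProductDto
{
    public int Id { get; set; }
    public string? Name { get; set; }
    public decimal? Price { get; set; }                  // nullable: not every query loads it
    public List<ProductDto>? RelatedProducts { get; set; }
}

// Server project: the single "master" mapping method.
public static class ProductMappings
{
    public static ProductDto ToDataTransferObject(this Product entity) => new ProductDto
    {
        Id = entity.Id,
        Name = entity.Name,
        Price = entity.Price,
        RelatedProducts = entity.RelatedProducts?.Select(p => p.ToDataTransferObject()).ToList()
    };
}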
Pros:
The data model on the Client is only as complex as it needs to be
Cons:
Some controller methods load (and need to return) all data about an entity and its related entities, going to a depth of 5. Other methods only need to load two simple fields. Because we use the same entity.ToDataTransferObject() method for all of them, they have to share the same return type. Any field which is not always returned is declared as nullable. This has bad consequences: the compiler no longer ensures the compatibility of the Blazor Component with the return type of the Controller method, and it doesn't ensure compatibility of the database query with the entity.ToDataTransferObject() method. The compatibility is only discovered by testing, and only if the right data happen to be present in the database. As development continues and the data model evolves, this is a great source of bugs.
There are multiple controller methods querying the same data, and the queries contain business logic (for example: which products should be displayed to this customer?). That business logic gets duplicated across those controller methods. Even worse, it is sometimes duplicated into other controllers when we need to decide which entity to include.
Now I am looking for Approach #3
The cons of Approach #2 lead me to the following design changes:
Stop making properties of a Data Transfer Object nullable to signify that they have not been loaded from the database. If a property hasn't been loaded, we need to create a new class for the transfer object where the property is not present.
Stop using entity.ToDataTransferObject() - one master method that converts an entity to a Data Transfer Object. Instead, create a method for every type of DataTransferObject.
Find a way to extract parts of EF Core queries into reusable methods to prevent duplicating business logic (see the sketch below).
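A rough sketch of what these three points could look like together (Product, AppDbContext, and the "visible to customer" rule are placeholders):

using System.Linq;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

// Reusable query fragment: the "which products may this customer see?" rule lives in one place.
public static class ProductQueries
{
    public static IQueryable<Product> VisibleTo(this IQueryable<Product> products, int customerTier) =>
        products.Where(p => p.IsActive && p.MinimumTier <= customerTier);
}

// Shared project: one small DTO per component view, with no nullable "maybe loaded" properties.
public record ProductListItemDto(int Id, string Name, decimal Price);

public class ProductsController : ControllerBase
{
    private readonly AppDbContext _db;

    public ProductsController(AppDbContext db) => _db = db;

    // Projecting straight into the DTO means the compiler checks that every field
    // the component needs is actually selected by the query.
    [HttpGet("api/products/visible")]
    public async Task<List<ProductListItemDto>> GetVisibleProducts(int customerTier) =>
        await _db.Products
            .VisibleTo(customerTier)
            .Select(p => new ProductListItemDto(p.Id, p.Name, p.Price))
            .ToListAsync();
}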
However, this would require us to add a mountain of additional code 🏔️. We would need to create a new class for each subset of an entity's properties that is used in a component. This might be worth it, considering it's likely to eliminate the majority of the bugs we face today, but it's a heavy price to pay.
Have I missed anything? Is there any better design that I haven't considered?
In my experience, use DTOs that match the client-side UI view model: UI-formatted values, along with record ID values to allow posting updates from edit forms. This ensures there is no accidental unauthorized access to values the current session has no permission for, and it prevents over-fetching data in general.
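For example (names invented), a DTO shaped for an order-edit view rather than for the Order entity:

// Display-ready values for the UI, plus the IDs needed to post an update back.
public record OrderEditDto(
    int OrderId,
    string CustomerName,      // already formatted for display, not the whole Customer entity
    string TotalFormatted,    // formatted on the server
    int[] LineItemIds);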

Best practices for discarding no longer needed objects in memory managed languages

Let's say we have a top-level object, in a game for example, that represents some physical thing in the game world. The top-level object then owns several sub-objects, like a bounding box, a graphical object, etc. By 'own' I mean that when it dies, so should its sub-objects.
Let's also say there is a graphical manager object that keeps track of all existing graphical objects in order to draw them or something.
Now let's say that top-level object leaves the game world: it's destroyed, or we load another level, whatever. We can at this point remove our reference to it and it will be GCed; however, the graphical object it owns is still referenced by the graphical manager. So there needs to be some mechanism for informing the graphical manager to remove its reference to the graphical object. It's this mechanism that I'm asking about.
The best way I can think of is that every object needs a public 'alive' Boolean flag, and any other object that doesn't own it but interacts with it and may keep a reference to it then needs logic to check that flag and remove its references if it's false. But this seems to me like a fairly inelegant solution.
Your idea is not only inelegant; it is also not good OOP. The last thing you want to do is expose a field of a class and have outside classes depend on the content of that field.
That is a direct violation of the Tell, Don't Ask principle. State should be internal; you do not expose it, especially not for the purpose of other objects making decisions based on that state. And of course, most languages do not allow you to synchronize on a field, meaning this approach screams race conditions all over the place (when different threads are reading/writing that field). You can mitigate this by making the field volatile (if your language allows for that).
One alternative approach would be to look into the observer pattern. Meaning: the graphical manager registers itself as a listener, for example on a central "game manager" - the one component that is actually responsible for adding/removing your game objects. Each time the game manager adds or removes an object, the graphical manager gets notified and can adapt its data structures.
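A minimal C# sketch of that idea (GameObject, GraphicalObject, and the listener interface are invented names):

using System.Collections.Generic;

// The game manager owns the authoritative list of objects and notifies listeners on changes.
public interface IGameObjectListener
{
    void OnObjectAdded(GameObject obj);
    void OnObjectRemoved(GameObject obj);
}

public class GameManager
{
    private readonly List<GameObject> _objects = new();
    private readonly List<IGameObjectListener> _listeners = new();

    public void AddListener(IGameObjectListener listener) => _listeners.Add(listener);

    public void Add(GameObject obj)
    {
        _objects.Add(obj);
        foreach (var listener in _listeners) listener.OnObjectAdded(obj);
    }

    public void Remove(GameObject obj)
    {
        _objects.Remove(obj);
        foreach (var listener in _listeners) listener.OnObjectRemoved(obj);
    }
}

// The graphical manager keeps its own references in sync by listening,
// so no public "alive" flag is needed on the objects themselves.
public class GraphicalManager : IGameObjectListener
{
    private readonly List<GraphicalObject> _drawables = new();

    public void OnObjectAdded(GameObject obj) => _drawables.Add(obj.Graphic);
    public void OnObjectRemoved(GameObject obj) => _drawables.Remove(obj.Graphic);
}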

Too many abstract functions - OOP Best practices

How many abstract function declarations in an abstract class is too many?
For example, for a membership-based payment system:
Multiple payment modes are supported:
Credit Card
Token (a Credit Card payment but using a token)
Redirect (e.g. PayPal)
Manual (admin manually charging the user)
I have an abstract class PaymentMode, and the different modes above extend this class.
Each mode has its own unique logic for the methods below, and I have to declare abstract methods in the PaymentMode class for these:
// each mode has its own way of validating the customer data
validate();
// each mode has its own logic for cleaning customer data (e.g. removing/adding/updating)
preparePaymentData();
// returns a string for saving in the database; subclasses must implement this so developers
// who extend the PaymentMode abstract class are forced to return the correct value
getModeOfPayment();
// each mode has its own logic for determining which payment gateways to attempt
getGatewaysToAttempt();
// before sending the payment to the gateway, each mode has its own logic for adding specific data
addCustomDataSpecificForGateway();
// check if the transaction has failed; different payment modes have different logic for determining a failed transaction
isTransactionFailed();
There are 6 pieces of logic unique to each mode; I've already managed to consolidate the common code and put it inside the PaymentMode class.
This number may grow as we implement new features that are unique to each mode.
I'm concerned that if any future developer extends my PaymentMode class, they will have to implement all the abstract function declarations.
So is a large number of abstract function declarations an indication of a bad design? How many is too many?
If it's a bad design, can you recommend any techniques or design patterns that will solve this issue?
Thanks
It's hard to answer without specifics, but:
Obviously there is no hard limit on abstract methods (methods in interfaces or abstract classes), although fewer is always clearer and easier to understand.
What does indicate a suboptimal design, however, is that you need to modify your abstraction of a payment method with each new payment method. That to me indicates a failing abstraction. OOP is not just about pulling common code out and avoiding duplication; it is also about abstractions.
What I would look into is to somehow transfer the control (the real control) to the payment method. Trust the payment method, delegate the task of making the payment to it.
What I mean by that is: right now you retain control somewhere outside, asking the payment method to do specific parts of its job (with the parts being different for different concrete methods), steps like validate() and prepare...(). You also expect it to give you the "gateway", so code outside the payment method (even if it's the superclass) must know what that is and how to handle it.
Instead of doing all that, try to come up with a design that transfers full control over to the payment method, so it can do its job without outside code assuming any particular set of steps.
For example:
public interface PaymentMethod {
    Receipt payFor(Bill bill);
}
The PaymentMethod here is responsible for doing everything itself. Redirecting the user, saving the receipt in the database, whatever is needed. Once you feel comfortable with this "main" abstraction (it covers all use-cases), you can work to create smaller abstractions that cover details like saving to database, if it is the same for all methods.
In relation to that: don't use abstract parent classes as a way to share code between classes; that is not exactly what inheritance is for. Create proper abstractions for the different "pieces of code", and let them be used by "bigger" abstractions (i.e. composition).
There is no particular number of abstract function declarations that is bad, although a huge number could mean the design has flaws. Just pay attention to the Single Responsibility Principle.
You have already defined that you have 4 modes, so I think you should create 4 interfaces, one for each mode. After doing this you can see what is common to all 4 of them and extract a base interface. You may also consider extracting the 6 pieces of unique logic as interfaces...
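For illustration, a C# sketch of that direction (PaymentData and the interface names are invented):

using System.Collections.Generic;

// Small interfaces, one per piece of unique logic...
public interface IPaymentValidator { void Validate(PaymentData data); }
public interface IGatewaySelector { IReadOnlyList<string> GetGatewaysToAttempt(PaymentData data); }

// ...and a base interface carrying only what is truly common to every mode.
public interface IPaymentMode
{
    string GetModeOfPayment();
}

// A concrete mode implements only the pieces it actually needs.
public class CreditCardMode : IPaymentMode, IPaymentValidator, IGatewaySelector
{
    public string GetModeOfPayment() => "credit_card";

    public void Validate(PaymentData data)
    {
        // card-specific validation of the customer data
    }

    public IReadOnlyList<string> GetGatewaysToAttempt(PaymentData data) =>
        new[] { "gateway-a", "gateway-b" };
}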

OO Analysis - Operation placement

I'm confused as to where I should place an operation/function when identifying classes. The following example, taken from the lecture slides of object-oriented design using UML, patterns and Java, particularly confuses me.
In this example, 3 classes are identified from the following part of a use case description: "The customer enters the store to buy a toy".
2 functions are also identified: enters() (placed in the Store class) and buy() (placed in the Toy class).
Why are those functions not associated with the Customer, who performs them? Is there any heuristic to help with operation placement?
Your example is extremely simple, and it's hard to say much about it without context. Anyway, I'll try to answer your question. First of all: OO modeling is not about building your classes in a "natural" way. The reason is very simple: even if we wanted to model the "real world" objects, it's simply impossible. The relationships between real-world objects (Customer, Store, Toy) are almost infinitely complex. Let's think about your case for a while. When a customer enters a store, there are a lot of things happening; let's try to order them:
Customer enters a store
The customer needs to interact with the "store gateway" somehow, for example with a door. Even this interaction can be complex: the store can be closed or full, an accident can happen, the door can be blocked, etc.
When the customer is finally inside the store, maybe there's a special store policy to greet customers (or every n-th customer). We can also imagine a lot of other things.
Finally, the customer wants to buy a toy. First, she needs to find that toy, which might not be so easy (how would you model this interaction?).
When the desired toy is found, she needs to take it and add it to the shopping basket.
Then the customer joins the queue and waits for her turn.
When the waiting is over, the customer interacts with the cashier (a lot of small things, like handing over the toy, checking its price, maybe some quick chat...).
Finally, the customer can pay for the toy (check if she has enough money, select the payment method (cash, card, NFC?), leave the queue...).
The customer leaves the store (similar to the "enters a store" interaction, plus maybe a security check).
I'm absolutely sure I forgot about something. As you can see, this simple scenario is in fact very complex in the real world. That's why it's impossible to model it exactly the same way. Even if we tried, the naive 1-to-1 mapping would probably lead to a design where almost every action is a method of the Customer class: customer.enter(), customer.leave(), customer.buy(), customer.findToy(), customer.interactWithCashier(), customer.openDoor()... and many more. This naive mapping would be entirely bad, because every step in the "customer enters a store" scenario is in fact a collaboration of multiple objects, each somehow connected with the others. On the other hand, if we tried to implement this scenario with all its interactions, we would create a system that would take years to build and would be simply impossible to deal with (every change would require insane amounts of hours).
OK, so how do you follow OOD principles? Take just a part of the interaction. Do not try to model it exactly the way it works in the real world. Try to adjust the model to the needs of your client. Don't overload your classes with responsibility. Every class should be easy to understand and relatively small. You can follow some of the basic principles of software modeling, such as SOLID and YAGNI. Learn about design patterns in practice (find some GoF patterns and try to implement them in your projects). Use code metrics to analyze your code (Lack of Cohesion of Methods, efferent coupling, afferent coupling, cyclomatic complexity) to keep it simple.
Let's get back to your specific example. Following the rules I mentioned before, a very important part of object modeling is to place methods where they belong. The data and the methods should be "coherent" (see the Lack of Cohesion of Methods metric), and your classes should generally do one thing. In your example, the responsibility of the Store class could be, for example, to allow customers to buy toys. We could model it this way:
public class Store {
    public void buyToy(Toy toy, Customer customer)
            throws ToyNotAvailableException, InsufficientFundsException {
        // some validation - check* methods are private
        if (!checkToyIsAvailable(toy)) {
            throw new ToyNotAvailableException();
        }
        if (!checkCustomerHasFunds(customer, toy.price())) {
            throw new InsufficientFundsException();
        }
        // if validation succeeds, we can remove the toy from the store
        // and charge the customer
        // removeFromStore is a private method
        removeFromStore(toy);
        customer.charge(toy.price());
    }
}
Keep in mind that this is just a simple example, created to be easy to understand and read. We would need to refine it many times to make it production-ready (for example, handling the payment method, the number of items, etc.).

Open closed vs Single responsibility

I was looking into the Single Responsibility Principle (SRP) and the Open Closed Principle (OCP).
SRP states that a class must have only one reason to change.
OCP states that a class must be closed for modification but open for extension.
I find these to be contradictory. One principle states that a class must be simple enough that you change it for only a single reason, while the other states that a class must not be changed, only extended.
Does anyone have a better explanation?
The Single Responsibility Principle deals with the fact that if a class has multiple responsibilities, those responsibilities end up tightly coupled within that single class.
So if an interface or algorithm changes for one responsibility, it will likely also affect the other responsibility, which is undesirable.
Under the Open/Closed Principle, a class should be able to have its behaviour extended without the need to modify the class itself. The only reason to modify the class should be that it has a bug/error in it, not that you would like to change or add functionality.
For example (OCP): a class that holds a list of hard-coded types of objects is not open for extension, because if you wanted to add a new type to the list, you would need to modify the class. A better design is for the class to have add/remove functionality, or an interface you can implement so that different types can be supplied per subclass.
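A small C# sketch of the difference (the notification example is invented):

using System;
using System.Collections.Generic;

// Not open for extension: the supported types are hard-coded,
// so adding a new type means modifying this class.
public class HardCodedNotifier
{
    public void Notify(string type, string message)
    {
        if (type == "email") { /* send an email */ }
        else if (type == "sms") { /* send an SMS */ }
        else throw new NotSupportedException(type);
    }
}

// Open for extension: new channels implement the interface and are added from outside,
// so this class never needs to change.
public interface INotificationChannel
{
    void Send(string message);
}

public class Notifier
{
    private readonly List<INotificationChannel> _channels = new();

    public void Add(INotificationChannel channel) => _channels.Add(channel);

    public void Notify(string message)
    {
        foreach (var channel in _channels) channel.Send(message);
    }
}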
Let's represent all the responsibilities and reasons for change as a 2D circle.
SRP -> asks us to chip away at the edges (haha) of that circle so that what is left is very tightly coupled, and if it changes, it all changes at the same time.
OCP -> asks us to poke holes in that circle, so that the parts that will change at a different pace can be provided at a later date.
In other words, an SRP-compliant class may fail OCP, and an OCP-compliant class can fail SRP. There is significant overlap between the two, but this picture also shows that there will be differences.