Single responsibility principle in API - oop

Please have a look at following piece of code:
public interface ICultureService
{
    List<Culture> GetCultures();
    bool IsCultureSupported(Culture culture);
    Culture GetFallbackCulture();
}
We found that most consumers first call IsCultureSupported to check whether their culture is supported, and if it is not, they call GetFallbackCulture():
public void CallingMethod()
{
    if (!cultureManager.IsCultureSupported(currentCulture))
    {
        currentCulture = cultureManager.GetFallbackCulture();
    }
    // ...
}
As per the Single Responsibility Principle (and other OOP rules), is it OK to introduce a function (in ICultureService and its implementation) like:
Culture GetFallbackCultureIfInvalid(Culture culture)
{
    if (!this.IsCultureSupported(culture))
    {
        return this.GetFallbackCulture();
    }
    return culture;
}

What you are referring to is the Tell-Don't-Ask principle rather than the Single Responsibility Principle. Adding the GetFallbackCultureIfInvalid function actually makes the client code more readable. You should also reduce the visibility of IsCultureSupported so that this method is no longer visible to client code.
That said, it looks like CultureManager is an implementation of ICultureService, so it doesn't make sense to add a new method named GetFallbackCultureIfInvalid to CultureManager that is not part of the ICultureService interface. What you should do is stick to a single method called GetFallbackCulture in CultureManager and let it return a fallback culture if the required condition is met:
Culture GetFallbackCulture(Culture culture) {
    Culture fallBackCulture = culture;
    if (!this.IsCultureSupported(culture)) {
        fallBackCulture = this.FallbackCulture();
    }
    return fallBackCulture;
}
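With that in place, the calling code from the question collapses to a single call (a sketch, assuming cultureManager implements the revised interface):
currentCulture = cultureManager.GetFallbackCulture(currentCulture);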

How to avoid if..else(or any conditionals) while deciding which method to be called?

How can I follow the Open/Closed Principle without violating the LSP when deciding which method to invoke with different parameters in a statically typed language?
Consider a requirement like:
Action 1: perform a DB operation on Table 1
Action 2: perform a DB operation on Table 2 based on input
Action 3: do nothing
Code for the above requirement would look like:
process(obj) {
    if (obj.type === action1) {
        db.updateTable1()
    }
    if (obj.type === action2) {
        db.updateTable2(obj.status)
    }
    if (obj.type === action3) {
        // maybe log that action 3 was received
    }
}
I figured out a way to follow OCP in the above code for additional actions by moving the body of each if block into its own method and maintaining a map from the action name to that method (Reference).
However, that solution feels like it still violates OCP: the method wrapping the contents of the first if block takes no parameter, while the method wrapping the contents of the second if block takes one.
So either I force all the methods to share the same signature, following OCP at the cost of violating LSP, or I give up on OCP and live with the chain of if statements.
A simple solution would be to define a strategy that executes the code currently contained in the if / else if / else branches:
interface Strategy {
    String getType();
    void apply();
}
The strategies need to be registered:
class Executor {
    private Map<String, Strategy> strategies = new HashMap<>();

    void registerStrategy(Strategy strategy) {
        strategies.put(strategy.getType(), strategy);
    }

    // obj is assumed to expose the action key as obj.type, as in the question
    void process(obj) {
        if (strategies.containsKey(obj.type)) {
            // apply might execute db.updateTable1(),
            // depending on the interface's implementation
            strategies.get(obj.type).apply();
        } else {
            System.out.println("No strategy registered for type: " + obj.type);
        }
    }
}
The tradeoffs you recognise are unfortunately what you'll have to deal with when working with OOP in Java, C++, C#, etc., as these systems are put together dynamically, and SOLID is partly an attempt to address that. But the SOLID principles are intended to provide guidance; I wouldn't follow them dogmatically.
I hoped to find an example from better programmers than myself illustrating the command pattern, but I kept finding really bad examples that didn't really address your question.
The problem of defining and associating an intent (a string or enum, a button click) with an action (an object, a lambda function) will always require a level of indirection we have to deal with. Some layers of abstraction are acceptable; for example, never call a model or service directly from a view. You could also think of implementing an event dispatcher with corresponding listeners, which would help with the loose coupling, but at some lower level you will still have to look up all the listeners...
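To illustrate that dispatcher idea, here is a minimal sketch (Dispatcher, Listener, and the string keys are invented for this example, not taken from your code):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

interface Listener {
    void onEvent(String payload);
}

class Dispatcher {
    // one list of listeners per event type; this lookup is the unavoidable indirection
    private final Map<String, List<Listener>> listeners = new HashMap<>();

    void register(String type, Listener listener) {
        listeners.computeIfAbsent(type, t -> new ArrayList<>()).add(listener);
    }

    void dispatch(String type, String payload) {
        for (Listener l : listeners.getOrDefault(type, List.of())) {
            l.onEvent(payload);
        }
    }
}

Registering one listener per action (e.g. dispatcher.register("action2", payload -> db.updateTable2(payload))) and calling dispatch(obj.type, obj.status) from process removes the if chain, but the map lookup remains.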
The nature of obj is ambiguous, but I would recommend having a well-defined interface and passing it throughout your code, where the class implementing your interface is the equivalent of your 'action'. Here's an example of what that might look like in TypeScript:
interface SomeDBInterface {
    performAction(): void;
}

function process(obj: SomeDBInterface) {
    obj.performAction();
}

class Action1 implements SomeDBInterface {
    performAction() {
        // db.updateTable1();
    }
}

class Action2 implements SomeDBInterface {
    status: any;
    performAction() {
        // db.updateTable2(this.status);
    }
}

class Action3 implements SomeDBInterface {
    performAction() {
        // maybe log that action 3 was received
    }
}
If this doesn't meet your requirements, feel free to reach out :)

How best to return a single value of different types from function

I have a function that returns either an error message (String) or a Firestore DocumentReference. I was planning to use a class containing both, testing whether the error message is non-null to detect an error and otherwise treating the reference as valid. That seemed far too verbose, however, and I then thought it might be neater to return a var. Returning a var is not allowed, though, so instead I return a dynamic and test whether the result is a String to detect an error.
i.e.:
dynamic varResult = insertDoc(_sCollection, dataRec.toJson());
if (varResult is String) {
    // handle the error
}
Then after checking for compliance, I read the following from one of the gurus:
"It is bad style to explicitly mark a function as returning Dynamic (or var, or Any or whatever you choose to call it). It is very rare that you need to be aware of it (only when instantiating a generic with multiple type arguments where some are known and some are not)."
I'm quite happy using dynamic for the return value if that is appropriate, but generally I try to comply with best practice. I am also very aware of bloated software and I go to extremes to avoid it. That is why I didn't want to use a Class for the return value.
What is the best way to handle the above situation where the return type could be a String or alternatively some other object, in this case a Firestore DocumentReference (emphasis on very compact code)?
One option would be to create an abstract state class. Something like this:
abstract class DocumentInsertionState {
    const DocumentInsertionState();
}

class DocumentInsertionError extends DocumentInsertionState {
    final String message;
    const DocumentInsertionError(this.message);
}

class DocumentInsertionSuccess<T> extends DocumentInsertionState {
    final T object;
    const DocumentInsertionSuccess(this.object);
}

class Test {
    void doSomething() {
        final state = insertDoc();
        if (state is DocumentInsertionError) {
            // handle state.message here
        }
    }

    DocumentInsertionState insertDoc() {
        try {
            return DocumentInsertionSuccess("It worked");
        } catch (e) {
            return DocumentInsertionError(e.toString());
        }
    }
}
Full example here: https://github.com/ReactiveX/rxdart/tree/master/example/flutter/github_search

Type hinting v duck typing

Using the following simple example (coded in PHP):
// Type-hinted version:
public function doSomething(Registry $registry)
{
    $object = $registry->getData('object_key');
    if ($object) {
        // use the object to do something
    }
}

// Duck-typed version:
public function doSomething($registry)
{
    $object = $registry->getData('object_key');
    if ($object) {
        // use the object to do something
    }
}
What are the benefits of either approach?
Both will ultimately fail, just at different points:
The first example will fail if an object not of type Registry is passed, and the second will fail if the object passed does not implement a getData method.
How do you choose when to use either approach?
Those are two different design approaches, and in both cases the responsibility falls on the developer(s) to make sure the method won't fail.
Type hinting is the more robust approach: an argument of the wrong type fails immediately at the call boundary with a clear error. Duck typing gives you more flexibility: any object with a getData method will do, but a wrong argument only fails at the point where getData is actually invoked.

code in the middle is different, everything else the same

I often have a situation where I need to do:
function a1() {
    a = getA;
    b = getB;
    b.doStuff();
    .... // do some things
    b.send()
    return a - b;
}

function a2() {
    a = getA;
    b = getB;
    b.doStuff();
    .... // do some things, but different to above
    b.send()
    return a - b;
}
I feel like I am repeating myself, yet where I have ...., the methods are different and have different signatures.
What do people normally do? Add an if (this type) do this stuff, else do the other stuff that is different? That doesn't seem like a very good solution either.
Polymorphism and possibly abstraction and encapsulation are your friends here.
You should specify better what kind of instructions you have in the .... // do some things part. If you're always using the same information but doing different things with it, the solution is fairly easy using simple polymorphism (see the first revision of this answer). I'll assume you need different information to do the specific tasks in each case.
You also didn't specify whether those functions are in the same class/module or not. If they are not, you can use inheritance to share the common parts and polymorphism to introduce different behaviour in the specific part. If they are in the same class, you don't need inheritance or polymorphism.
In different classes
Taking into account that you state in the question that you might need to call functions with different signatures depending on the implementation subclass (for instance, passing a or b as a parameter depending on the case), and assuming you need to do something with the intermediate local variables (i.e. a and b) in the specific implementations:
Short version: polymorphism + encapsulation. Pass all the possible in and out parameters that every subclass might need to the abstract method; it might be less painful if you encapsulate them in an object.
Long version
I'd store the intermediate state in a member of the generic class and pass it to the implementation methods. Alternatively, you could grab the State from the implementation methods instead of passing it as an argument. Then you can make two subclasses implementing the doSpecificStuff(State) method and grabbing the needed parameters from the intermediate state in the superclass. If needed by the superclass, subclasses might also modify the state.
(Java specifics next, sorry)
public abstract class Generic {
    private State state = new State();

    public Object a() {
        preProcess();
        prepareState();
        doSpecificStuff(state);
        clearState();
        return postProcess();
    }

    protected void preProcess() {
        a = getA;
        b = getB;
        b.doStuff();
    }

    protected Object postProcess() {
        b.send();
        return a - b;
    }

    protected void prepareState() {
        state.prepareState(a, b);
    }

    private void clearState() {
        state.clear();
    }

    protected abstract void doSpecificStuff(State state);
}

public class Specific extends Generic {
    protected void doSpecificStuff(State state) {
        state.getA().doThings();
        state.setB(someCalculation);
    }
}

public class Specific2 extends Generic {
    protected void doSpecificStuff(State state) {
        state.getB().doThings();
    }
}
In the same class
Another possibility would be making the preProcess() method return a State variable and using it in the implementations of a1() and a2().
public class MyClass {
    protected State preProcess() {
        a = getA;
        b = getB;
        b.doStuff();
        return new State(a, b);
    }

    protected Object postProcess() {
        b.send();
        return a - b;
    }

    public Object a1() {
        State st = preProcess();
        st.getA().doThings();
        State.clear(st);
        return postProcess();
    }

    public Object a2() {
        State st = preProcess();
        st.getB().doThings();
        State.clear(st);
        return postProcess();
    }
}
Well, don't repeat yourself. My golden rule (which admittedly I break from time to time) is based on the ZOI (zero-one-infinity) rule: all code must live exactly zero, one, or infinitely many times. If you see repeated code, you should refactor it into a common ancestor.
That said, it is not possible to give you a definite answer on how to refactor your code; there are infinite ways to do it. For example, if a1() and a2() reside in different classes, you can use polymorphism. If they live in the same class, you can create a method that receives an anonymous function as a parameter, so that a1() and a2() become thin wrappers around it, as sketched below. Using a (shudder) parameter to change the function's behaviour is possible too.
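A minimal sketch of that wrapper idea (the types A and B and their fields are invented here to stand in for the question's pseudocode, not taken from any real API):

import java.util.function.BiConsumer;

class SharedTemplate {
    static class A { int value = 2; }
    static class B {
        int value = 1;
        void doStuff() { /* common preparation */ }
        void send() { /* common finishing step */ }
    }

    // the shared skeleton: everything except the differing middle step
    int run(BiConsumer<A, B> middle) {
        A a = new A();
        B b = new B();
        b.doStuff();
        middle.accept(a, b);   // only this part differs between a1 and a2
        b.send();
        return a.value - b.value;
    }

    int a1() { return run((a, b) -> a.value += 1); }   // "do some things"
    int a2() { return run((a, b) -> b.value -= 1); }   // "do some things, but different"
}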
You can solve this in one of two ways. Both a1 and a2 call a shared a3; a3 does the common code, and either:
1. calls a function that it receives as a parameter, which does either the middle part of a1 or the middle part of a2 (a1 and a2 pass in the appropriate function),
- or -
2. receives a flag (e.g. a boolean) telling it which part it needs to do, and uses an if statement to execute the correct code.
This screams out loud for the Template Method design pattern.
The general part goes in the superclass:
package patterns.templatemethod;

public abstract class AbstractSuper {
    public Integer doTheStuff(Integer a, Integer b) {
        Integer x = b.intValue() + a.intValue();
        Integer y = doSpecificStuff(x);
        return b.intValue() * y;
    }

    protected abstract Integer doSpecificStuff(Integer x);
}
The specific part goes in a subclass:
package patterns.templatemethod;

public class ConcreteA extends AbstractSuper {
    @Override
    protected Integer doSpecificStuff(Integer x) {
        return x.intValue() * x.intValue();
    }
}
For every specific solution you implement a subclass with the specific behaviour.
If you put them all in a collection, you can iterate over them, always calling the common method, and every class does its magic, as in the sketch below. ;)
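For example (a sketch; ConcreteB is a hypothetical second subclass added here for illustration):

public class ConcreteB extends AbstractSuper {
    @Override
    protected Integer doSpecificStuff(Integer x) {
        return x + 1;   // a different specific step
    }
}

import java.util.List;

public class TemplateMethodDemo {
    public static void main(String[] args) {
        List<AbstractSuper> variants = List.of(new ConcreteA(), new ConcreteB());
        for (AbstractSuper variant : variants) {
            // always the same call; each subclass supplies its own specific behaviour
            System.out.println(variant.doTheStuff(2, 3));
        }
    }
}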
Hope this helps.

Would this still be considered a Chain-of-Responsibility pattern?

I have been using a design pattern for quite some time and have been calling/referring to it as a "Chain of Responsibility" pattern, but now I realise there are differences, and it may not be appropriate to do so. So my questions are: 1, is the following an instance of this pattern, or should it be called something else? And 2, is there any reason I should prefer the traditional form?
I often use the following pattern when developing software. I have an interface that defines a functor, something like this.
interface FooBar{
    boolean isFooBar( Object o );
}
These are usually search, filtering, or processing classes; usually something like Comparator. The implementation method is usually functional (i.e. side-effect free). Eventually, I find myself creating an implementation of the interface that looks like:
class FooBarChain implements FooBar{
    FooBar[] foobars;

    FooBarChain( FooBar... fubars ){
        foobars = fubars;
    }

    boolean isFooBar( Object o ){
        for( FooBar f : foobars )
            if( f.isFooBar( o ) )
                return true;
        return false;
    }
}
It's not always booleans either (I've used this pattern with mutable objects as well), but there is always a short-circuiting condition (e.g. it returns true, the String is empty, a flag gets set, etc.).
Until now I have generally been calling this a "Chain of Responsibility" pattern, considering the issue of inheriting from a base class to be an implementation detail. However, today I realised an important difference: the objects along the chain cannot interrupt the rest of the chain. There is no way for an implementation to say "this is false, and I can guarantee it will be false for any condition" (nb: it short-circuits only on true).
So, should this be called something other than a chain-of-responsibility pattern? And are there any concerns or issues I should consider when using this approach over the traditional one of having the instances pass the message along?
I wouldn't call this Chain of Responsibility.
In Chain of Responsibility, the "short-circuit" is roughly "I can handle this, so the next guy in the chain doesn't have to" rather than being a return value of any kind. It's normal for each object in the chain to know who is next in the chain and to pass control to that next object as necessary. The handlers normally do something rather than return a value.
The example you've presented is perfectly reasonable, though I'm not sure it's a named pattern. I'm not too clear right now on the other variants you describe.
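For contrast, a minimal sketch of that traditional shape (Handler, ConcreteHandler, and the string requests are invented names for this illustration): each link either handles the request itself or explicitly passes control to the next link.

interface Handler {
    void handle(String request);
}

class ConcreteHandler implements Handler {
    private final String supported;   // the kind of request this link accepts
    private final Handler next;       // the next link in the chain, may be null

    ConcreteHandler(String supported, Handler next) {
        this.supported = supported;
        this.next = next;
    }

    public void handle(String request) {
        if (supported.equals(request)) {
            System.out.println("Handled " + request);   // "I can handle this"
        } else if (next != null) {
            next.handle(request);                       // pass it along the chain
        }
    }
}

Once a link handles the request, the rest of the chain never sees it; that handover, rather than a return value, is the short-circuit.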
What you have is a chain of responsibility, but you can make a 'pure' Chain of Responsibility with a few small changes.
You can create an enum that represents the three different results you expect from this function.
public enum Validity{
    Invalid,
    Indeterminate,
    Valid
}
You can change the interface to be chain-able like so:
public interface ChainFooBar{
    public boolean isFooBar(Object o);
    public Validity checkFooBar(Object o);
}
Most of your FooBars would then have to implement a method like this:
public abstract class AbstractFooBar implements ChainFooBar{
    public Validity checkFooBar(Object o){
        return this.isFooBar(o) ? Validity.Valid : Validity.Indeterminate;
    }
}
Then you can change your chain to check for either of the definite answers.
public class FooBarChain implements FooBar{
    private ChainFooBar[] fooBars;

    public FooBarChain(ChainFooBar... fooBars){
        this.fooBars = fooBars;
    }

    public boolean isFooBar(Object o){
        for(ChainFooBar fooBar : this.fooBars){
            Validity validity = fooBar.checkFooBar(o);
            if(validity != Validity.Indeterminate){
                // a definite answer (Valid or Invalid) ends the chain early
                return validity == Validity.Valid;
            }
        }
        return false;
    }
}
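As a usage sketch (RequiresString is a hypothetical link, not part of the code above), a link can now veto the whole chain by returning Validity.Invalid, which the original short-circuit-on-true version could not express:

class RequiresString extends AbstractFooBar {
    public boolean isFooBar(Object o) {
        // this link never gives a definite yes on its own
        return false;
    }

    @Override
    public Validity checkFooBar(Object o) {
        // but it can veto: a non-String is definitely invalid, stopping the chain
        return (o instanceof String) ? Validity.Indeterminate : Validity.Invalid;
    }
}

A chain built as new FooBarChain(new RequiresString(), otherFooBars).isFooBar(42) returns false from the first link; the remaining links are never consulted.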