Design pattern to apply a set of operations on an object - oop

I'm looking for a specific design pattern that is used to implement function composition. The goal is to be able to pass a client an object that represents a set of operations applied in a specific sequence, and let the client apply the operations without caring what they might be. The pattern should cover the following requirements:
1. needs to apply an operation to an input (i.e., maps a given input to an output)
2. operations must be composable: each operation can be applied in isolation or composed with other operations
Requirement #1 sounds like a strategy pattern or command pattern, while requirement #2 brings the composite design pattern to mind.
Does anyone know if there's an OO design pattern that fits these requirements?
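Something like this minimal Java sketch is what I have in mind; Operation and CompositeOperation are just placeholder names:

    import java.util.ArrayList;
    import java.util.List;

    // Strategy: each Operation maps an input to an output.
    interface Operation<T> {
        T apply(T input);
    }

    // Composite: a CompositeOperation is itself an Operation that applies its
    // children in sequence, so the client never needs to know what it holds.
    class CompositeOperation<T> implements Operation<T> {
        private final List<Operation<T>> steps = new ArrayList<>();

        public CompositeOperation<T> add(Operation<T> step) {
            steps.add(step);
            return this;
        }

        @Override
        public T apply(T input) {
            T result = input;
            for (Operation<T> step : steps) {
                result = step.apply(result);   // feed each output into the next step
            }
            return result;
        }
    }

    // Usage: the client sees only an Operation<String>, never the individual steps.
    // Operation<String> pipeline = new CompositeOperation<String>()
    //         .add(s -> s.trim())
    //         .add(s -> s.toUpperCase());
    // pipeline.apply("  hello ");   // "HELLO"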

Related

Interaction between levels of abstraction

Can lower-level abstractions be dependent on higher-level abstractions?
Let's say I have a method named getAudioFileInfo(id) which should get some information about the audio file with the specified id from the server, and the list of required information is provided by a higher-abstraction-level interface. Is it the right choice to call that interface's method inside getAudioFileInfo, or would it be better to pass the list of required information as an argument?
It is better to pass the list down to the lower-level function, or to split the lower-level function into more focused functions that each return a single info value, and then assemble the result in the higher-level functions.
But also consider that this does not mean you should make a separate request to the server for each datum, as the requests may be expensive. So maybe you fetch all the data but return only the relevant parts, which makes more sense with the first option.
In general, ask for things in the same layer or in the next lower layer, and give them what they need. See what I wrote here on Layered architecture.
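A rough sketch of the first option; AudioField, fetchAllFromServer and the map-based result are made up for the example, not an existing API:

    import java.util.EnumSet;
    import java.util.HashMap;
    import java.util.Map;

    enum AudioField { TITLE, DURATION, BITRATE }

    class AudioService {
        // The higher layer says which fields it needs; the lower layer makes a
        // single server request and returns only the requested values.
        public Map<AudioField, Object> getAudioFileInfo(String id, EnumSet<AudioField> required) {
            Map<AudioField, Object> all = fetchAllFromServer(id);   // one request, not one per field
            Map<AudioField, Object> result = new HashMap<>();
            for (AudioField field : required) {
                result.put(field, all.get(field));                  // return only what was asked for
            }
            return result;
        }

        private Map<AudioField, Object> fetchAllFromServer(String id) {
            return new HashMap<>();   // placeholder for the actual server call
        }
    }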

OOP and object parametrization

I am supposed to develop a program which will heavily depend on input data at runtime (initialization data read from XML), and I would like to ask about good OOP practice regarding object/architecture design.
Situation
I have the following objects: object_A, object_B, and object_C, each of which has a specific objective.
object_A = evaluation of equations, requires input, produces output
object_B = evaluation of equations, requires input, produces output
object_C = requires data from object_A and object_B as input, produces output
Then there is object_D, which passes data among objects_A/_B/_C and calls their functions.
There are two ways to tackle this situation that I know of:
a) Inheritance
object_D inherits from object_A, object_B, and object_C. Data are passed by setting the appropriate structures in objects_A/_B/_C via "this->"; virtual functions in objects_A/_B/_C can then call back to object_D.
hierarchical approach
objects are concealed
difficult to parametrize the object_A/_B/_C (parameters need to travel all the way up in the hierarchy to base classes)
b) Passing pointers
Create objects_A/_B/_C by passing parameters to their constructors, then pass pointers to these objects to the constructor of object_D.
no information hiding, all objects are visible
hierarchy might be unclear, especially when there are more levels
easy to pass initialization parameters
Question
What is an appropriate way of handling software architecture, where many objects require passing initialization parameters at runtime?
I think your question is broad and can have more than one good answer. However, I think your scenario can be solved in one of two ways:
Eventing: Instead of tightly coupling your classes using inheritance, you can use events. For instance, when object_A finishes processing it raises an event called 'ClassAFinished'. You then create an event handler for the ClassAFinished event, which in turn passes object_A's output to the other objects that rely on it.
The second way is the Chain of Responsibility design pattern. Since your question is related to OOP, I think it's reasonable to use this pattern. In a nutshell, Chain of Responsibility is a design pattern you use when you have a series (chain) of objects, each of which does some specific processing (its responsibility) but can't begin until it has received data from the previous object. When an object finishes processing, it sends its output to the next object in the chain, and so forth.
These are the two main ideas I wanted to share with you.
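A rough sketch of the second idea; Step, process and the StepA/StepB/StepC wiring in the comment are illustrative names, not a prescribed API:

    // Each step holds a reference to the next one and forwards its output down
    // the chain, so no outside code has to shuttle intermediate results around.
    abstract class Step {
        private Step next;

        public Step setNext(Step next) {
            this.next = next;
            return next;              // return the new step so chains read left to right
        }

        public Object handle(Object input) {
            Object output = process(input);                     // this step's own responsibility
            return next != null ? next.handle(output) : output; // pass the result along
        }

        protected abstract Object process(Object input);
    }

    // object_D's job reduces to wiring and starting the chain, e.g.:
    // Step a = new StepA();                     // StepA/B/C would be concrete subclasses
    // a.setNext(new StepB()).setNext(new StepC());
    // Object result = a.handle(initialData);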

Is there a commonly accepted design pattern for base methods implementing "early exit" functionality?

I have a class hierarchy of patterns: patterns are split into simple patterns and compound patterns, both of which have concrete implementations.
Patterns have a Match method which returns a Result, which can be a Node or an Error.
All patterns can check for a memoized result when matching. Simple patterns return an error on EOF.
Is there a pattern that allows a simpler way to reuse implemented functionality than mine? Let's say we're using a single-inheritance, single-dispatch language like C# or Java.
My approach is to implement Match at the pattern level only and call a protected abstract method InnerMatch inside it. At the simple pattern level, InnerMatch is implemented to handle EOF and calls a protected abstract InnerInnerMatch, which is where concrete implementations define their specific functionality.
I find this approach better than adding an out bool handled parameter to Match and calling the base method explicitly in each class, but I don't like how I have to define new methods. Is there a design pattern that describes a better solution?
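For reference, a compact Java sketch of my current approach, which is essentially a layered template method; Result and Input here are simplified stand-ins for my real types:

    // Pattern level: Match handles memoization for every pattern and delegates
    // the rest to the protected hook.
    abstract class Pattern {
        public final Result match(Input input) {
            Result memo = input.lookupMemo(this);   // early exit shared by all patterns
            if (memo != null) return memo;
            Result result = innerMatch(input);
            input.storeMemo(this, result);
            return result;
        }

        protected abstract Result innerMatch(Input input);
    }

    // Simple pattern level: adds the EOF early exit, then delegates again.
    abstract class SimplePattern extends Pattern {
        @Override
        protected final Result innerMatch(Input input) {
            if (input.atEof()) return Result.error("unexpected EOF");
            return innerInnerMatch(input);          // concrete patterns implement this
        }

        protected abstract Result innerInnerMatch(Input input);
    }

    class Result {                                  // stand-in for Node/Error
        final String error;                         // null means success
        private Result(String error) { this.error = error; }
        static Result ok() { return new Result(null); }
        static Result error(String message) { return new Result(message); }
    }

    class Input {                                   // stand-in for input plus memo table
        private final java.util.Map<Pattern, Result> memo = new java.util.HashMap<>();
        Result lookupMemo(Pattern p) { return memo.get(p); }
        void storeMemo(Pattern p, Result r) { memo.put(p, r); }
        boolean atEof() { return false; }           // placeholder
    }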
Possibly the Strategy pattern
The strategy pattern (also known as the policy pattern) is a software design pattern that enables an algorithm's behavior to be selected at runtime. The strategy pattern
defines a family of algorithms,
encapsulates each algorithm, and
makes the algorithms interchangeable within that family.
And perhaps Chain of Responsibility
The chain-of-responsibility pattern is a design pattern consisting of a source of command objects and a series of processing objects. Each processing object contains logic that defines the types of command objects that it can handle; the rest are passed to the next processing object in the chain. A mechanism also exists for adding new processing objects to the end of this chain.
But Chain of Responsibility would depend more on how you want to handle allowing multiple 'Patterns' (your objects, not 'design patterns') to be 'processed' in order.
Chain of Responsibility might also be good for allowing you to have dynamic Pattern "sets" that different inputs can be processed with. (Depending on your needs.)
You'll have to encapsulate your input values, but that isn't too big of a deal.

What design pattern is used by IProject.setDescription in Eclipse

I'm designing an API with a specific pattern in mind, but don't know if this pattern has a name. It's similar to the Command pattern in GoF (Gang of Four) but not exactly.
One simple example I can find is in Eclipse, where you manipulate a project (IProject) not by calling methods on the project that change its state, but by this 3-step process:
extracting its state into a descriptor object (IProjectDescription) with getDescription
setting properties on the descriptor. E.g. setName
applying the descriptor back to the original project with setDescription
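If I'm reading the Eclipse API right, the three steps look roughly like this (exception handling and progress reporting are simplified):

    import org.eclipse.core.resources.IProject;
    import org.eclipse.core.resources.IProjectDescription;
    import org.eclipse.core.runtime.CoreException;

    class ProjectRenamer {
        void rename(IProject project, String newName) throws CoreException {
            IProjectDescription description = project.getDescription();   // 1. extract the state
            description.setName(newName);                                  // 2. change the descriptor
            project.setDescription(description, null);                     // 3. apply it back in one call
        }
    }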
The general principle seems to be that you have a complex object as part of a framework with many potentially interdependent properties, and rather than working directly on that object, one property at a time, you extract the properties into a simple data object, manipulate that, and apply it back.
It has some of the attributes of the Command pattern, in that the data object encapsulates all of the changes like a Command would - but it's not really a Command, because you don't execute it on the object, it's simply a representation of the state of the object.
It also has some attributes of a Transactional API, in that, by making the changes all in one hit with the set... call, you allow the entire modification to effectively "roll back" if any one property change fails. But while that's an advantage of the approach, it's not really its main purpose. What's more, you can achieve the transactional nature without this approach, by simply adding transactional methods to the API (like commit and rollback).
There are two advantages to this pattern that I do want to exploit, although I don't see them being exploited by the Eclipse example above:
You can represent the meaningful state of the underlying object while its implementation changes. This is useful for upgrading, or for copying state between different types of representations. Say I release a new version of my API where I create an object Foo2 which is a totally new form of my old Foo1, but both have the same basic properties. To upgrade a Foo1 to a Foo2, I can extract those properties as a FooState: foo2.setFooState(foo1.getFooState()), as simply as that. The way in which the properties are interpreted and represented is encapsulated in the Foos and can be totally different.
I can persist and transmit the state of the underlying object with my simple data object, where persisting the object itself would be much more complex. So I can extract the state of Foo as a FooState, and persist it as a simple XML document then later apply it to some new object by "loading" it and applying it. Or I can transmit the FooState simply to a webservice as a JSON object whereas the Foo itself is too big and complex to transmit. (Or the objects on each end of the service call are entirely different, like Foo1 and Foo2)
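A sketch of advantage 1 using the hypothetical Foo1/Foo2/FooState names from above; the fields are invented just to show the shape:

    // The shared state object survives while the implementations diverge.
    class FooState {
        String name;
        int size;
    }

    class Foo1 {
        private String name;
        private int size;

        FooState getFooState() {
            FooState state = new FooState();
            state.name = name;
            state.size = size;
            return state;
        }
    }

    class Foo2 {
        private String label;   // stored differently than in Foo1
        private long bytes;

        void setFooState(FooState state) {
            this.label = state.name;   // each class interprets the shared state its own way
            this.bytes = state.size;
        }
    }

    // Upgrade path: foo2.setFooState(foo1.getFooState());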
Anyway, I can't find a name or example of this pattern anywhere, neither in the Gang of Four design patterns, nor even in Martin Fowler's comprehensive "bliki".
The Data Transfer Object (DTO) that Martin Fowler describes in his book Patterns of Enterprise Application Architecture seems to be for the purpose you describe in point 2.
A DTO is a fairly simple extraction of the more complex Domain Model that it represents.
Fowler describes how a DTO used in combination with an assembler can be kept independent of the actual Domain Object (or Objects) it is supposed to represent. The assembler knows how to create a DTO from the Domain Object and vice versa. He also mentions that the DTO needs to be serializable to persist/transmit its state. What you describe in point 2 seems to match this description.
What you've described in point 1 though does not seem to be an intended purpose, but definitely seems achievable using this pattern.
I'm not sure if you went through the Pattern catalog of his book or the book itself. The book itself describes this in much greater detail.
You may also want to have a look at the Transfer Object definition from Oracle, which Fowler says here is what he describes as a DTO.
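A minimal sketch of the DTO-plus-assembler idea as I understand it; Project, ProjectDto and ProjectAssembler are invented names for the example:

    class Project {                                      // stand-in domain object with its own behaviour
        private String name;
        private String description;
        String getName() { return name; }
        String getDescription() { return description; }
        void rename(String newName) { name = newName; }
        void describe(String text) { description = text; }
    }

    class ProjectDto implements java.io.Serializable {   // simple, serializable carrier of state
        String name;
        String description;
    }

    class ProjectAssembler {                             // the only piece that knows both sides
        ProjectDto toDto(Project domain) {
            ProjectDto dto = new ProjectDto();
            dto.name = domain.getName();
            dto.description = domain.getDescription();
            return dto;
        }

        void apply(ProjectDto dto, Project domain) {
            domain.rename(dto.name);                     // domain logic stays in the domain object
            domain.describe(dto.description);
        }
    }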
Not every design is documented as a single design pattern; in fact, most system designs are combinations of multiple patterns.
However, one part of what you're doing with IProjectDescription is using a Memento, though yours seems to be a polymorphic variation. Consider patterns as they appear in pattern catalogues to be the pared-down, essential starting point, not the end result. Patterns are by their very nature meant to be extended and combined.
The Command pattern can give you commit and rollback (do/undo), and combining it with Memento in that way is quite a common approach. The same kind of pairing is seen in the Java Servlet API with HttpServletRequest and HttpServletResponse.
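One way the Memento/Command combination for do/undo might look; the Document, Memento and ChangeCommand names are illustrative only:

    class Memento {
        final String state;
        Memento(String state) { this.state = state; }
    }

    class Document {                                  // the originator
        private String state = "";
        Memento save() { return new Memento(state); }
        void restore(Memento m) { state = m.state; }
        void change(String newText) { state = newText; }
    }

    class ChangeCommand {
        private final Document doc;
        private Memento before;

        ChangeCommand(Document doc) { this.doc = doc; }

        void execute(String newText) {
            before = doc.save();      // memento captured before the change
            doc.change(newText);
        }

        void undo() {
            doc.restore(before);      // roll back to the captured state
        }
    }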

Does the Strategy Pattern violate the Single Responsibility Principle?

If the Single Responsibility Principle states that every object must have a single reason to change and a single strategy class implemented with the Strategy pattern (by definition) has multiple methods that can change for any number of reasons, does that mean that it's impossible to implement the Strategy pattern without violating the SRP?
How so?
The Strategy pattern, if I recollect, is basically a way to decouple the logic/algorithm being used. So the Client has an m_IAlgorithm member; IAlgorithm should have a small set of methods, if not just one.
So the only reasons an AlgoImplementation class can change are:
if there is a change in the algorithm it implements (a change in its responsibility/behavior),
or if IAlgorithm changes, which would be rare unless you made a mistake in defining the interface. (That's a change in its own public interface, so I don't think it's a violation of SRP.)
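For example, with a single-method strategy interface each implementation has exactly one reason to change; the IAlgorithm/m_algorithm names below only mirror the wording above and are illustrative:

    interface IAlgorithm {
        int compute(int input);
    }

    class FastAlgorithm implements IAlgorithm {
        public int compute(int input) { return input * 2; }       // changes only if this algorithm changes
    }

    class PreciseAlgorithm implements IAlgorithm {
        public int compute(int input) { return input * 2 + 1; }
    }

    class Client {
        private final IAlgorithm m_algorithm;                      // which algorithm to use is decided elsewhere
        Client(IAlgorithm algorithm) { this.m_algorithm = algorithm; }
        int run(int input) { return m_algorithm.compute(input); }
    }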
I actually see the opposite. The strategy pattern lets you decouple two things: the (potential) algorithms used to get some job done, and the decision-making logic about these algorithms.
I'm not sure you'd rather have a class that both does the conditional logic about which algorithm to use and also encloses those algorithms. Also, I'm not saying you've implied that, but you didn't give an example of where Strategy would break SRP, nor of what, in your opinion, a better design would be.
The context in which I am most familiar with the single responsibility principle is overall system design, where it can complement the strategy pattern with regard to the grouping of components within the system.
Use the strategy pattern to define a set of algorithms that a client uses interchangeably; the single responsibility principle can then be used to decide how to group the client and the algorithms it uses within the system. You don't want to have to disturb the code for algorithm A if your work is solely in algorithm B, and vice versa. For compiled languages this can have a significant impact on the complexity of the refactor, version, and deploy cycle. Why version and recompile the client and algorithms A, C, and D when the only changes needed were to algorithm B?
With this understanding of the single responsibility principle, I don't see how having a class that implements the strategy pattern violates SRP. The purpose of the client class is to implement the strategy pattern; that is the client's responsibility. The purpose of the algorithms is to implement the logic they are responsible for, and the single responsibility principle says don't group them all together within the system, since they will be changing for different reasons. That's my $0.02.
Good point :) I guess it's more of a single responsibility guideline, which makes sense in many cases but not in others, like the strategy pattern.
Depends on the point of view IMO.
If you look at a strategy (a class, for example) as a mixture of algorithms/logic decided at run time, where the algorithms are quite different from one another... then yeah, it kind of violates the single responsibility principle. But if you think of the strategy class as a decoupling entity that encapsulates the whole decision making, it becomes a single-job class: its job is to decide which algorithm/logic is going to be used, nothing more, nothing less.