Which design pattern can I use to solve this situation? - objective-c

First of all, I'm using Objective-C, but this doesn't matter at all.
My situation is:
I have two different scenarios. I distinguish them by a preprocessor macro like:
#ifdef USER
    // do some stuff for scenario 1
#else
    // do some stuff for scenario 2
#endif
Both scenarios work with a list of items used all across the application; the difference is how those items are obtained.
In the first one I get the items by sending a request to a server.
In the second one, I get them from the local device storage.
What I have now is the second scenario implemented: a singleton class that returns the list of items by reading them from local storage (like a traditional database singleton).
I want to add the other scenario. Since the items can be fetched from any point across the app, I want this to be a singleton too.
Does it make sense to have a singleton superclass, and then two subclasses that implement the different ways of getting the items? Singleton hierarchies sound quite strange to me.

That's not exactly a hierarchy. The superclass you're mentioning is really an interface for your two concrete classes, which can be singletons if you want. The interface is an abstract entity, so any instance-related concern is irrelevant to it.
You're statically defining your program's behaviour by using the preprocessor to make the scenario choice. If you stick to this approach and it fits your requirements, you don't need any design patterns: in your code, just use the interface mentioned above, which acts as a port to your statically instantiated data. If you want more flexibility (which sounds likely), you can make the scenario choice at runtime. In that case you may find the Strategy pattern useful for applying scenarios and a Factory useful for instantiation.

Factory combined with Strategy.
Factory as the pattern of using another class to make your instance rather than calling a constructor directly. You are most likely already doing that with your singleton.
Strategy for the ability to configure which kind of object the factory actually creates at runtime.
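As a rough sketch (in Java, since the asker says the language doesn't matter): the common interface, the two concrete sources, and a factory that picks one at runtime. All the names here (ItemSource, ServerItemSource, LocalItemSource, ItemSourceFactory) are invented for the example, and the fetch logic is stubbed out.

import java.util.Arrays;
import java.util.List;

// The common interface both scenarios implement (the "strategy").
interface ItemSource {
    List<String> fetchItems();
}

// Scenario 1: items come from a server (network call stubbed out here).
class ServerItemSource implements ItemSource {
    @Override
    public List<String> fetchItems() {
        // ... perform the request to the server here ...
        return Arrays.asList("item from server");
    }
}

// Scenario 2: items come from local storage (database access stubbed out here).
class LocalItemSource implements ItemSource {
    @Override
    public List<String> fetchItems() {
        // ... query the local database here ...
        return Arrays.asList("item from local storage");
    }
}

// The factory decides at runtime which strategy to hand out, and keeps a
// single shared instance if you still want singleton behaviour.
class ItemSourceFactory {
    private static ItemSource instance;

    static synchronized ItemSource get(boolean remote) {
        if (instance == null) {
            instance = remote ? new ServerItemSource() : new LocalItemSource();
        }
        return instance;
    }
}

public class Demo {
    public static void main(String[] args) {
        // Callers everywhere in the app only ever see ItemSource.
        ItemSource source = ItemSourceFactory.get(true);
        System.out.println(source.fetchItems());
    }
}

With this shape, the #ifdef collapses into the single flag (or enum) handed to the factory, and callers only ever depend on ItemSource.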

Related

The delegates need the methods and attributes of the holder class

I have an entity which, depending on its internals, may act in two ways. For example, my Connector class can operate as an HttpConnector or as a TCPConnector. The implementation of the 'connect' method differs for these two 'engine' classes. Both of them share some common methods of Connector, such as openFileToTransfer(String fileName), and common attributes such as folderWithFiles. I need to find the best OOP design for this problem.
1) The first way is delegation. I create a Connector with a TCPConnectorEngine and it works. The problem is that I need to share some settings and common methods, and of course I don't want to copy-paste them into each of the classes. I can pass common settings via the constructor, which implies declaring the same attributes twice, but sharing common methods is harder. Maybe I can inject the Connector instance into each of them, but that looks ugly. Maybe I can provide a base class for both of my ConnectorEngines, but that looks more complicated.
2) The second way is inheritance. I just inherit TCPConnector from Connector and get everything I need. But I suspect the 'engine' approach fits my task better simply because it fits better logically: it really is an engine of the Connector, not a different type of Connector. Or maybe I am wrong?
Which way would you choose, and why?
I work with Java, if it matters for the answer.
In pattern terminology, the question boils down to how to implement a Connector interface properly:
1) Use a facade and then delegate to a strategy.
2) Or use an abstract base class and inherit with concrete implementations.
In my opinion, 2 is a good solution only when the internal choreography or protocol of the child classes is quite similar, so they can share a lot of structure and code, which is then captured in the base class.
If the concepts used internally are quite different, I think it is better to implement different strategies, instantiate those in a facade class, and delegate to the strategy instances. If you want code reuse, e.g. for the settings, I would keep that concept in a separate class, e.g. ConnectionSettings, and inject it into the strategy instances from the facade.
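A minimal Java sketch of that facade-plus-strategy arrangement; the method signatures are illustrative, only the names from the question (Connector, the engines, ConnectionSettings) are kept.

class ConnectionSettings {
    final String folderWithFiles;
    ConnectionSettings(String folderWithFiles) { this.folderWithFiles = folderWithFiles; }
}

// The strategy: only the part that actually differs between TCP and HTTP.
interface ConnectorEngine {
    void connect(ConnectionSettings settings);
}

class TcpConnectorEngine implements ConnectorEngine {
    @Override
    public void connect(ConnectionSettings settings) {
        // ... open a TCP socket using the shared settings ...
    }
}

class HttpConnectorEngine implements ConnectorEngine {
    @Override
    public void connect(ConnectionSettings settings) {
        // ... open an HTTP connection using the shared settings ...
    }
}

// The facade: owns the shared settings and common behaviour, delegates the rest.
class Connector {
    private final ConnectionSettings settings;
    private final ConnectorEngine engine;

    Connector(ConnectionSettings settings, ConnectorEngine engine) {
        this.settings = settings;
        this.engine = engine;
    }

    void connect() {
        engine.connect(settings);  // delegation; settings injected at the call
    }

    void openFileToTransfer(String fileName) {
        // common behaviour lives once, in the facade
    }
}

A caller would then build, say, new Connector(new ConnectionSettings("/files"), new TcpConnectorEngine()) and never touch the engine directly.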

Why would I create an interface for each mapper class?

In cases of MVC applications where the model is split into separate domain and mapper layers, why would you give each of the mapper classes its own interface?
I have seen a few examples now, some from well-respected developers, such as this blog: http://site.svn.dasprids.de/trunk/application/modules/blog/models/
I suspect it's because the developers are expecting the code to be reused by others who may have their own back-ends. Is this the case? Or am I missing something?
Note that in the examples I have seen, developers are not necessarily creating interfaces for the domain objects.
Interfaces are contracts between classes (I'm assuming you already know that). When a class expects you to pass an object with a specific interface, the goal is to inform you that this class instance expects specific methods to be executable on said object.
The only case I can think of where having a defined interface for data mappers makes sense is when using a unit of work to manage the persistence. But even then it would make more sense to simply inject a factory that can create data mappers.
TL;DR: someone's been overdoing it.
P.S.: it is quite possible that I am completely wrong about this one, since I'm a bit biased on the subject. My mappers contain only three public methods (plus the constructor): fetch(), store() and remove(), though the method names tend to change. I prefer to take the retrieval conditions from the domain object, as described here.
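For illustration, here is roughly what such a small mapper contract could look like, sketched in Java; DataMapper, User and SqlUserMapper are invented names and the SQL is stubbed out.

// The domain object carries its own identity/conditions; the mapper only persists it.
interface DataMapper<T> {
    T fetch(T domainObject);      // populate the object from storage
    void store(T domainObject);   // insert or update
    void remove(T domainObject);  // delete
}

class User {
    Long id;
    String name;
}

class SqlUserMapper implements DataMapper<User> {
    @Override
    public User fetch(User user) {
        // ... SELECT by user.id and fill in the remaining fields ...
        return user;
    }

    @Override
    public void store(User user) {
        // ... INSERT or UPDATE depending on whether user.id is set ...
    }

    @Override
    public void remove(User user) {
        // ... DELETE by user.id ...
    }
}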

Factory Pattern - Is having multiple factories a good idea?

I am designing a system that lets a user assign a specific task to be performed when a button is pressed. The task can be any of a number of things, so I have an abstract base class called "ButtonTask", and all the concrete tasks inherit from this base to implement the work to be performed along with the associated data they need. This way I can use polymorphism to abstract away the specifics: I just call "PerformTask" without having to care about what type it actually is. So far so good.
The actual task can be set in many different ways: the user may change the task with a UI menu, the task may be read from a file, and the task may also be set remotely via a network message.
At the moment I have a factory function that will create the correct derived type based on the network message, and return a pointer to the base type. The problem is that the UI menu and the file reading feel like they need their own factory method for object creation, as they are inherently different from one another. Is it generally a good idea to have multiple factories for this kind of problem? I can't really think of another way around this problem but perhaps there's something neater I can do.
The only good reason I see to implement multiple factory methods is if you want to be able to create the objects with different sets of initial attributes, for instance by allowing the caller to specify some attributes and setting default values for others - the equivalent of having multiple public constructors.
If the idea is that the tasks are independent of the way they were initiated (GUI, network, etc), then I don't see a need for separate factory methods. Instead, I would say that one of the duties of the factory is to achieve this very abstraction. In other words, calling the same factory from three different parts of the code is absolutely fine. It is probably a good idea to make the factory method static or to make the factory a singleton object, though.
If on the other hand you have a situation where certain tasks can only ever be initiated from the network and others from the GUI, and only a few can be initiated in all three ways, then it might be worthwhile to rethink the design a bit. You should then consider adding another level of abstract task classes, e.g. CommonTask, GuiTask, NetworkTask, FileTask, and have factories for those instead of for ButtonTask. This is obviously more complex, and whether or not it's worth it depends on the number of task classes and the structure of your code.
What you want to avoid is a situation where users of the factory are aware of which specific subclasses of ButtonTask they can receive from it. That's a "false base class" situation, i.e. one where the base class is not a true abstraction of the whole set of its subclasses, and you get out of it by adding the extra subclass layer outlined above.
Other than that, you might also want to consider renaming ButtonTask; it sounds like a GUI-only task just from the name.
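To make the "one factory, many callers" idea concrete, here is a hedged Java sketch. TaskDescriptor, TaskFactory and the concrete task classes are invented names; the point is only that the UI menu, the file reader and the network handler all funnel into the same creation code.

abstract class ButtonTask {
    abstract void performTask();
}

class PlaySoundTask extends ButtonTask {
    @Override void performTask() { /* play the configured sound */ }
}

class LaunchAppTask extends ButtonTask {
    @Override void performTask() { /* launch the configured application */ }
}

// A neutral description of "which task, with which data", regardless of
// whether it came from the UI, a file, or a network message.
class TaskDescriptor {
    final String type;
    final String payload;
    TaskDescriptor(String type, String payload) { this.type = type; this.payload = payload; }
}

class TaskFactory {
    static ButtonTask create(TaskDescriptor d) {
        switch (d.type) {
            case "sound":  return new PlaySoundTask();
            case "launch": return new LaunchAppTask();
            default: throw new IllegalArgumentException("unknown task type: " + d.type);
        }
    }
}

// The UI menu, the file reader and the network handler each translate their own
// input format into a TaskDescriptor and then call the same factory:
//   ButtonTask t = TaskFactory.create(descriptorParsedFromNetworkMessage);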

When do you need to create abstractions in the form of interfaces?

When do you encourage programming against an interface and not directly to a concrete class?
A guideline I follow is to create abstractions whenever code needs to cross a logical/physical boundary, especially when infrastructure-related concerns are involved.
Another checkpoint is whether a dependency is likely to change in the future, due to possible additional concerns (such as caching, transactional awareness, or invoking a web service instead of in-process execution), or whether the dependency has direct references to infrastructure integration points.
If code depends on something that does not require control to cross a logical/physical boundary, I more or less don't create abstractions to interact with it.
Am I missing anything?
Also, use interfaces when:
Multiple objects need to be acted upon in a particular fashion but are not fundamentally related. Perhaps many of your business objects access a particular utility object, and when they do, they need to pass a reference to themselves so that the utility object can call a particular method on them. Put that method in an interface and pass that interface to the utility object.
Passing interfaces around as parameters can be very helpful in unit testing. Even if you have just one type of object that sports a particular interface, and hence don't strictly need one, you might define an interface solely so you can "fake" that object in unit tests (a sketch appears below).
Related to the first two bullets, check out the Observer pattern and Dependency Injection. I'm not saying to implement these patterns, but they illustrate the kinds of places where interfaces are really helpful.
Another twist on this is implementing a couple of the SOLID principles, the Open/Closed Principle and the Interface Segregation Principle. As with the previous bullet, don't stress about strictly applying these principles everywhere (right away, at least), but use these concepts to help move your thinking away from just what objects go where and towards contracts and dependencies.
In the end, let's not make it too complicated: we're in a strongly typed world in .NET. If you need to call a method or set a property but the object you're passing/using could be fundamentally different, use an interface.
I would add that if your code is not going to be referenced by another library (for a while, at least), then the decision of whether to use an interface in a particular situation is one you can responsibly put off. The "extract interface" refactoring is easy to do these days. In my current project there's an object being passed around that I think maybe I should switch to an interface; I'm not stressing about it.
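To illustrate the testing point above, a small Java sketch. MailSender, OrderService and FakeMailSender are invented names; the fake simply records the call so the test can check it.

interface MailSender {
    void send(String to, String body);
}

class OrderService {
    private final MailSender mail;
    OrderService(MailSender mail) { this.mail = mail; }

    void placeOrder(String customerEmail) {
        // ... real ordering logic would go here ...
        mail.send(customerEmail, "Thanks for your order!");
    }
}

// In production you'd pass an SMTP-backed implementation; in a unit test you
// pass a fake that just records what happened.
class FakeMailSender implements MailSender {
    String lastRecipient;
    @Override public void send(String to, String body) { lastRecipient = to; }
}

public class OrderServiceTest {
    public static void main(String[] args) {
        FakeMailSender fake = new FakeMailSender();
        new OrderService(fake).placeOrder("a@example.com");
        if (!"a@example.com".equals(fake.lastRecipient)) {
            throw new AssertionError("mail was not sent to the customer");
        }
        System.out.println("test passed");
    }
}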
Interface abstractions are convenient when doing unit testing. They help with mocking test objects, and they're very useful in TDD for developing without actually using data from your database.
If you don't need any features of the class that aren't found in the interface, then why not always prefer the interface?
It will make your code easier to modify in the future and easier to test (mocking).
You have the right idea already. I would only add a couple of notes to this...
First, abstraction does not mean 'interface'. For example, a "connection string" is an abstraction, even though it's just a string. It's not about the 'type' of the thing in question, it's about the intention of use for that thing.
Second, if you are doing test automation of any kind, look for the pain and friction exposed by writing the tests. If you find yourself having to set up too many external conditions for a test, it's a sign that you need a better abstraction between the thing you're testing and the things it interacts with.
I think you've said it pretty well. Much of this is a stylistic thing. There are open source projects I've looked at where everything has an interface and an implementation, and it's kind of frustrating, but it might make iterative development a little easier, since any object's implementation can break while the dummies still work. But honestly, I can dummy any class that doesn't overuse the final keyword via inheritance.
I would add this to your list: anything which can be thought of as a black box should be abstracted. This includes some of the things you've mentioned, but it also includes hairy algorithms, which are likely to have multiple useful implementations with different advantages for different situations.
Additionally, interfaces come in handy very often with composite objects. That's the only way something like Java's Swing library gets anything done, but it can also be useful for more mundane objects. (I personally like having an interface like ValidityChecker with ways to and-compose or or-compose subordinate ValidityCheckers.)
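A possible Java sketch of that composite ValidityChecker idea, using Java 8 default methods and lambdas; the and/or combinators are my own illustration, not a standard API.

// A single-method interface plus two combinators for composing checkers.
interface ValidityChecker<T> {
    boolean isValid(T value);

    // Combine two checkers: both must pass.
    default ValidityChecker<T> and(ValidityChecker<T> other) {
        return v -> this.isValid(v) && other.isValid(v);
    }

    // Combine two checkers: at least one must pass.
    default ValidityChecker<T> or(ValidityChecker<T> other) {
        return v -> this.isValid(v) || other.isValid(v);
    }
}

public class ValidityDemo {
    public static void main(String[] args) {
        ValidityChecker<String> notEmpty = s -> !s.isEmpty();
        ValidityChecker<String> shortEnough = s -> s.length() <= 10;

        ValidityChecker<String> usernameChecker = notEmpty.and(shortEnough);
        System.out.println(usernameChecker.isValid("alice"));  // true
        System.out.println(usernameChecker.isValid(""));       // false
    }
}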
Most of the useful things about passing interfaces around have already been said. However, I would add:
Implementing an interface on an object, or later on multiple objects, forces all implementers to follow an identical contract with the consuming object. This can be useful when the programmers actually writing the implementation code are not very OOP-experienced.
In some languages you can add attributes to the interface itself, which can differ in sense and intent from the attributes on the actual implementation.

Should every single object have an interface and all objects loosely coupled?

From what I have read, best practice is to base classes on interfaces and loosely couple objects, in order to help code reuse and unit testing.
Is this correct, and is it a rule that should always be followed?
The reason I ask is that I have recently worked on a system with hundreds of very different objects. A few share common interfaces, but most do not, and I wonder whether there should have been an interface mirroring every property and function of those classes.
I am using C# and .NET 2.0; however, I believe this question would fit many languages.
It's useful for objects which really provide a service - authentication, storage etc. For simple types which don't have any further dependencies, and where there are never going to be any alternative implementations, I think it's okay to use the concrete types.
If you go overboard with this kind of thing, you end up spending a lot of time mocking/stubbing everything in the world - which can often end up creating brittle tests.
Not really. Service components (classes that do things for your application) are a good fit for interfaces, but as a rule I wouldn't bother creating interfaces for, say, basic entity classes.
For example:
If you're working on a domain model, then the model classes shouldn't be interfaces. However, if that domain model wants to call service classes (like data access, operating system functions, etc.), then you should be looking at interfaces for those components. This reduces coupling between the classes and means it's the interface, or "contract", that they are coupled to.
In this situation you then find it much easier to write unit tests (because you can have stubs/mocks/fakes for database access, etc.) and can use IoC to swap components without recompiling your applications.
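A short Java sketch of that split, with invented names: the domain and service classes stay concrete, and only the data-access dependency sits behind an interface so it can be stubbed in tests or swapped via IoC.

import java.util.HashMap;
import java.util.Map;

// A concrete domain class: no interface needed.
class Customer {
    final String id;
    final String name;
    Customer(String id, String name) { this.id = id; this.name = name; }
}

// The boundary-crossing dependency gets the interface.
interface CustomerRepository {
    Customer findById(String id);
    void save(Customer customer);
}

// Production implementation would talk to a database.
class SqlCustomerRepository implements CustomerRepository {
    @Override public Customer findById(String id) { /* ... SQL ... */ return null; }
    @Override public void save(Customer customer) { /* ... SQL ... */ }
}

// Test implementation keeps everything in memory.
class InMemoryCustomerRepository implements CustomerRepository {
    private final Map<String, Customer> store = new HashMap<>();
    @Override public Customer findById(String id) { return store.get(id); }
    @Override public void save(Customer customer) { store.put(customer.id, customer); }
}

class CustomerService {
    private final CustomerRepository repository;  // coupled to the contract only
    CustomerService(CustomerRepository repository) { this.repository = repository; }

    Customer rename(String id, String newName) {
        Customer renamed = new Customer(id, newName);
        repository.save(renamed);
        return renamed;
    }
}

In tests you construct new CustomerService(new InMemoryCustomerRepository()); in production the IoC container wires in the SQL-backed implementation instead.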
I'd only use interfaces where that level of abstraction was required - i.e. you need to use polymorphic behaviour. Common examples would be dependency injection or where you have a factory-type scenario going on somewhere, or you need to establish a "multiple inheritance" type behaviour.
In my case, with my development style, this is quite often (I favour aggregation over deep inheritance hierarchies for most things other than UI controls), but I have seen perfectly fine apps that use very little. It all depends...
Oh yes, and if you do go heavily into interfaces - beware web services. If you need to expose your object methods via a web service they can't really return or take interface types, only concrete types (unless you are going to hand-write all your own serialization/deserialization). Yes, that has bitten me big time...
A downside to interfaces is that they can't be versioned. Once you've shipped an interface, you can't make changes to it. If you use abstract classes, you can easily extend the contract over time by adding new methods and flagging them as virtual.
As an example, all stream objects in .NET derive from System.IO.Stream, which is an abstract class. This made it easy for Microsoft to add new features: in version 2 of the framework they added the ReadTimeout and WriteTimeout properties without breaking any code. If they had used an interface (say, IStream), they wouldn't have been able to do this. Instead, they'd have had to create a new interface to define the timeout members, and we'd have to write code that conditionally casts to this interface to use the functionality.
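The same versioning point, sketched in Java with invented class names (these are not the real java.io classes, just a mirror of the .NET example; note that Java 8 default methods later softened this limitation for interfaces too).

abstract class Stream {
    abstract int read(byte[] buffer);

    // Added in "version 2" of the library: existing subclasses keep compiling
    // because they inherit this default behaviour instead of being forced to implement it.
    int readTimeout() {
        return -1;  // -1 here means "no timeout configured"
    }
}

class FileStream extends Stream {
    // Written against "version 1" and never touched; still works after the
    // new method was added to the base class.
    @Override
    int read(byte[] buffer) {
        // ... read from a file ...
        return 0;
    }
}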
Interfaces should be used when you want to clearly define the interaction between two different sections of your software, especially when it's possible that you'll want to rip out either end of the connection and replace it with something else.
For example, in my CAM application I have a CuttingPath connected to a collection of Points. It makes no sense to have an IPointList interface, since CuttingPaths are always going to be composed of Points in my application.
However, I use the IMotionController interface to communicate with the machine, because we support many different types of cutting machine, each with its own command set and method of communication. So in that case it makes sense to put it behind an interface, as one installation may be using a different machine than another.
Our application has been maintained since the mid-80s and moved to an object-oriented design in the late 90s. I have found that what could change greatly exceeded what I originally thought, and the use of interfaces has grown. For example, our DrawingPath used to be composed of points; now it is composed of entities (splines, arcs, etc.), so it points to an EntityList, which is a collection of objects implementing the IEntity interface.
That change was propelled by the realization that a DrawingPath could be drawn using many different methods. Once it was clear that a variety of drawing methods was needed, an interface was indicated rather than a fixed relationship to an Entity object.
Note that in our system, DrawingPaths are rendered down to a low-level cutting path, which is always a series of point segments.
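A stripped-down Java sketch of that IMotionController idea; the concrete controller classes and method names are invented for illustration.

interface IMotionController {
    void moveTo(double x, double y);
    void setCutter(boolean on);
}

class AcmeLaserController implements IMotionController {
    @Override public void moveTo(double x, double y) { /* vendor-specific commands over serial */ }
    @Override public void setCutter(boolean on)      { /* vendor-specific cutter command */ }
}

class GenericGcodeController implements IMotionController {
    @Override public void moveTo(double x, double y) { /* standard G0/G1 moves over TCP */ }
    @Override public void setCutter(boolean on)      { /* M3/M5-style commands */ }
}

// The cutting code never needs to know which machine a given installation uses.
class CuttingJob {
    void run(IMotionController controller) {
        controller.moveTo(0, 0);
        controller.setCutter(true);
        controller.moveTo(100, 50);
        controller.setCutter(false);
    }
}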
I tried to take the advice of 'code to an interface' literally on a recent project. The end result was essentially duplication of the public interface (small i) of each class precisely once in an Interface (big I) implementation. This is pretty pointless in practice.
A better strategy I feel is to confine your interface implementations to verbs:
Print()
Draw()
Save()
Serialize()
Update()
...and so on. This means that classes whose primary role is to store data (and if your code is well designed, they usually only do that) don't want or need interface implementations. Interfaces belong anywhere you might want runtime-configurable behaviour, for example a variety of different graph styles representing the same data (a sketch follows this answer).
It's better still when the thing asking for the work really doesn't need to know how the work is done. That means you can hand it a MacGuffin that it simply trusts to do whatever its public interface says it does, and let the component in question choose how and when to do the work.
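A small Java sketch of the "verbs, not mirrored interfaces" idea (all names invented): the data class stays concrete, and only the runtime-configurable behaviour, here the graph style, gets an interface.

class SalesFigures {
    // a plain data-holding class: no interface needed
    double[] monthlyTotals = {12.0, 18.5, 9.3};
}

interface GraphRenderer {
    void draw(SalesFigures data);
}

class BarChartRenderer implements GraphRenderer {
    @Override public void draw(SalesFigures data) { /* draw bars */ }
}

class LineChartRenderer implements GraphRenderer {
    @Override public void draw(SalesFigures data) { /* draw a line */ }
}

class ReportScreen {
    private final GraphRenderer renderer;  // chosen at runtime, e.g. from user preferences

    ReportScreen(GraphRenderer renderer) { this.renderer = renderer; }

    void show(SalesFigures data) {
        renderer.draw(data);  // the screen trusts the contract, not a concrete class
    }
}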
I agree with kpollock. Interfaces are used to establish a common ground for objects; the fact that they can be used in IoC containers and for other purposes is an added benefit.
Let's say you have several types of customer classes that vary slightly but have common properties. In this case it is great to have an ICustomer interface to bind them together logically. By doing that, you can create a CustomerHandler class/method that handles ICustomer objects in one way, instead of creating a handler method for each variation of customer.
This is the strength of interfaces.
If you only have a single class that implements an interface, then the interface isn't much help; it just sits there and does nothing.
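A brief Java sketch of that ICustomer example, with invented fields and methods.

import java.util.Arrays;
import java.util.List;

interface ICustomer {
    String getName();
    double getDiscountRate();
}

class RetailCustomer implements ICustomer {
    private final String name;
    RetailCustomer(String name) { this.name = name; }
    @Override public String getName() { return name; }
    @Override public double getDiscountRate() { return 0.0; }
}

class WholesaleCustomer implements ICustomer {
    private final String name;
    WholesaleCustomer(String name) { this.name = name; }
    @Override public String getName() { return name; }
    @Override public double getDiscountRate() { return 0.15; }
}

// One handler for every variation of customer, coupled only to the interface.
class CustomerHandler {
    void printInvoiceHeaders(List<ICustomer> customers) {
        for (ICustomer c : customers) {
            System.out.printf("%s (discount %.0f%%)%n", c.getName(), c.getDiscountRate() * 100);
        }
    }
}

public class CustomerDemo {
    public static void main(String[] args) {
        new CustomerHandler().printInvoiceHeaders(
            Arrays.asList(new RetailCustomer("Ann"), new WholesaleCustomer("Bolt & Co")));
    }
}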