Imagine that I have a system of objects that are event emitters and can also listen to events on other objects. In this system the objects communicate with each other mainly through events.
I want to follow good object-oriented practices and I'm interested in creating abstractions for my objects. The actual language I would use doesn't matter that much: those abstractions could be interfaces or abstract classes, for example.
These kinds of abstractions define methods and properties on the object, but they don't define event names. New implementations could decide to emit totally new events. It's true that you always use the same method to emit them, but if you are creating new event names my impression is that you are in a way violating the open/closed principle: I would say that's equivalent to adding new methods.
Would there be a way to state, in the abstraction that such an object follows, the possible events it can emit?
I'm interested in how this would work in languages like Java, C++, JavaScript and TypeScript. I'm not entirely sure since I don't know C#, but I think that in that language events can be part of an interface. What I'm interested in is how you would make events part of the abstractions in languages that don't support that in a straightforward way.
The ability to enforce those contracts in code efficiently differs depending on the language being used, so the optimal solution is not really language-agnostic: there's an idiomatic way of doing things in every language.
One solution that can be implemented in probably all typed languages is to make every event an emitter and rely on composition.
That basically boils down to someObject.onSomeEvent.listen(handler) instead of someObject.listenTo('someEvent', handler).
That's the approach they used in Google Dart when refactoring the DOM Events API. Since events are now class members they can easily become part of the code contract.
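To make that concrete, here's a minimal TypeScript sketch of the events-as-members idea; the EventSource and Downloader names are purely illustrative, not taken from any particular library:

```typescript
type Listener<T> = (payload: T) => void;

// A tiny emitter for one named event, exposed as a member of the object.
class EventSource<T> {
  private listeners: Listener<T>[] = [];
  listen(listener: Listener<T>): void {
    this.listeners.push(listener);
  }
  emit(payload: T): void {
    for (const listener of this.listeners) listener(payload);
  }
}

// The abstraction declares its events as members, so they are part of the contract.
interface Downloader {
  readonly onProgress: EventSource<number>;
  readonly onDone: EventSource<string>;
  start(url: string): void;
}

class HttpDownloader implements Downloader {
  readonly onProgress = new EventSource<number>();
  readonly onDone = new EventSource<string>();
  start(url: string): void {
    // ...do the actual work, then notify listeners...
    this.onProgress.emit(100);
    this.onDone.emit(url);
  }
}

// Usage: someObject.onSomeEvent.listen(handler)
const d: Downloader = new HttpDownloader();
d.onProgress.listen(p => console.log(`progress: ${p}%`));
d.start("https://example.com/file");
```

An implementation that wanted to add an event would have to extend the abstraction (or introduce a new one), which is exactly the open/closed pressure that was missing with string-based event names.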
There are obviously other ways, such as creating a concrete event class per event rather than using strings. The advantage of this solution is that the contract wouldn't have to change when new events are added.
I've been on an object-oriented design binge lately in an effort to improve my design skills. This question is about a particular design choice that I see somewhat frequently but whose rationale I don't understand. I know design choices tend toward the subjective, but I'd like to know what others think about this, to find out whether my design instincts are getting better or worse.
I was watching Robert C. Martin (Uncle Bob) present "Clean Architecture and Design" at the 2012 COHAA The Path to Agility Conference. During the talk he tells a story about developing Fitnesse. I'm unfamiliar with the software, so I look it up and find the project hosted on GitHub.
While looking through the project, one thing catches my attention in the WikiPage interface: the PageCrawler getPageCrawler(); method. So I look up the PageCrawler interface to see what that looks like. Upon examining this interface I think to myself that the methods in PageCrawler look like they would belong to a WikiPage, and that WikiPage could reasonably implement the interface.
I would think that separating the two might cause WikiPage to expose internals so that the information needed to crawl the page is accessible to the objects that crawl it. Also, the BaseWikiPage abstract class just returns a new PageCrawlerImpl, and there are no other PageCrawler implementations in the project from what I can see.
I've seen this type of code in other projects, where a method of one interface/class returns an object of another interface/class with methods that could reasonably belong to the first class. In trying to see the intention of the Fitnesse developers, the only reason for this design that I came up with is that developers who create new wiki pages by implementing WikiPage aren't required to re-implement the crawling functionality, i.e. the crawling functionality should be the same regardless of the wiki page's implementation. Is this the purpose of such a design, or am I missing something?
I found the Implementing an interface vs. providing an interface question on SO, but it wasn't quite the same and didn't give much insight into when you might design something like this.
What you're seeing is the Interface Segregation Principle in action. You say PageCrawler's interface makes you think "these are all things that belong in a WikiPage," but that's looking at things from an implementation point of view, not the point of view of someone calling WikiPage.
EDIT:
Maybe it's less about the Interface Segregation Principle and more about the Single Responsibility Principle.
In some cases, some functions that are "associated" with an object will need to hold state which should be separate from the object itself. In .NET, IEnumerator<T> implementations are a prime example of this. Their meaning generally derives from an associated IEnumerable<T>, but each enumerator has state which should be separate from that of the underlying collection (especially as a typical implementation of IEnumerable<T> will have no way of knowing how many enumerators, each with an independent state, may be associated with it simultaneously).
A few more advantages of splitting off interface implementations:
The split-off implementation can include members whose names mirror those of the underlying type but whose functionality is different. For example, a Dictionary<TKey,TValue> implements IEnumerable<KeyValuePair<TKey,TValue>>, but has a Keys property which implements IEnumerable<TKey>. Both the Dictionary and the thing returned by Keys have a GetEnumerator method, but one enumerates key-value pairs while the other just enumerates keys.
If the split-off implementation wraps the underlying object, and doesn't expose a reference to it but exposes some of its functionality, it may be safely exposed to code which is trusted with that limited functionality but not with an unfettered reference to the object. In some cases, having an object hold a reference to a single wrapper object, which could then be used to satisfy any number of requests, may be more efficient than having clients use the wrapper object's constructor to create a new wrapper object for every request.
Although neither Java nor .NET allows multiple inheritance of classes, it may be possible for each of the tightly-associated objects to inherit from a different class.
I'm not familiar with the particular types you mention, so I don't know the particular reasons they behave as they do, but the reasons mentioned are all common ones.
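To illustrate the state-holding point in a language-neutral way, here's a rough TypeScript sketch of the IEnumerable<T>/IEnumerator<T> split described above; the Enumerator, Enumerable and NumberList names are invented for the example:

```typescript
// A rough analogue of the IEnumerable<T>/IEnumerator<T> split.
interface Enumerator<T> {
  moveNext(): boolean;
  readonly current: T;
}

interface Enumerable<T> {
  getEnumerator(): Enumerator<T>;
}

class NumberList implements Enumerable<number> {
  constructor(private readonly items: number[]) {}

  // Each call returns a fresh enumerator with its own position, so many
  // independent iterations can run over the same list at the same time.
  getEnumerator(): Enumerator<number> {
    let index = -1;
    const items = this.items;
    return {
      moveNext(): boolean {
        index++;
        return index < items.length;
      },
      get current(): number {
        return items[index];
      },
    };
  }
}

const list = new NumberList([1, 2, 3]);
const a = list.getEnumerator();
const b = list.getEnumerator(); // independent state from `a`
a.moveNext();
b.moveNext();
b.moveNext();
console.log(a.current, b.current); // 1 2
```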
I've been working on a small project using C++ (although this question might be considered language-agnostic) and I'm trying to write my program so that it is as efficient and encapsulated as possible. I'm a self-taught and inexperienced programmer but I'm trying to teach myself good habits when it comes to using interfaces and OOP practices. I'm mainly interested in the typical 'best' practices when it comes to accessing the methods of an object that is acting as a subsystem for another class.
First, let me explain what I mean:
An instance of ClassGame wants to render out a 2d sprite image using the private ClassRenderer subsystem of ClassEngine. ClassGame only has access to the interface of ClassEngine, and ClassRenderer is supposed to be a subsystem of ClassEngine (behind a layer of abstraction).
My question is based on the way that the ClassGame object can indirectly make use of ClassRenderer's functionality while still remaining fast and behind a layer of abstraction. From what I've seen in lessons and other people's code examples, there seem to be two basic ways of doing this:
The first method, which I learned via a series of online lectures on OOP design, is to have one class delegate tasks to its private member objects internally. [ In this example, ClassGame would call a method that belongs to ClassEngine, and ClassEngine would 'secretly' pass that request on to its ClassRenderer subsystem by calling one of its methods. ] Kind of a 'daisy chain' of function calls. This makes sense to me, but it seems like it may be slower than some alternative options.
Another way that I've seen in other people's code is to have an accessor method that returns a reference or pointer to a particular subsystem. [ So, ClassGame would call a simple method in ClassEngine that would return a reference/pointer to the object that makes up its ClassRenderer subsystem. ] This route seems convenient to me, but it also seems to defeat the point of having a private member act as a sub-component of a bigger class. Of course, it also means writing far fewer 'mindless' functions that simply pass a particular task on, since you can write one getter function for each independent subsystem.
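To make the two options concrete, here's a rough sketch of what I mean (written in TypeScript rather than C++ purely for brevity; the class and method names are placeholders):

```typescript
class Renderer {
  drawSprite(name: string): void {
    console.log(`drawing ${name}`);
  }
}

class Engine {
  private readonly renderer = new Renderer();

  // Option 1: delegation -- the Engine forwards the call to its subsystem.
  drawSprite(name: string): void {
    this.renderer.drawSprite(name);
  }

  // Option 2: accessor -- the Engine hands out a reference to its subsystem.
  getRenderer(): Renderer {
    return this.renderer;
  }
}

const engine = new Engine();
engine.drawSprite("player");               // option 1
engine.getRenderer().drawSprite("player"); // option 2
```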
Considering the various important aspects of OO design (abstraction, encapsulation, modularity, usability, extensibility, etc.) while also considering speed and performance, is it better to use the first or the second type of method for delegating tasks to a sub-component?
The book Design Patterns Explained discusses a very similar problem in its chapter about the Bridge Pattern. Apparently this chapter is freely available online.
I would recommend that you read it :)
I think your type-1 approach resembles the OOP bridge pattern the most.
Type 2, returning handles to internal data, is something that should generally be avoided.
There are many ways to do what you want, and it really depends on the context (for example, how big the project is, how much you expect to reuse from it in other projects etc.)
You have three classes: Game, Engine and Renderer. Both of your solutions make the Game commit to the Engine's interface. The second solution also makes the Game commit to the Renderer's interface. Clearly, the more interfaces you use, the more you have to change if the interfaces change.
How about a third option: The Game knows what it needs in terms of rendering, so you can create an abstract class that describes those requirements. That would be the only interface that the Game commits to. Let's call this interface AbstractGameRenderer.
To tie this into the rest of the system, again there are many ways. One option would be:
1. Implement this abstract interface using your existing Renderer class. So we have a class GameRenderer that uses Renderer and implements the AbstractGameRenderer interface.
2. The Engine creates both the Game object and the GameRenderer object.
3. The Engine passes the GameRenderer object to the Game object (using a pointer to AbstractGameRenderer).
The result: The Game can use a renderer that does what it wants. It doesn't know where it comes from, how it renders, who owns it - nothing. The GameRenderer is a specific implementation, but other implementations (using other renderers) could be written later. The Engine "knows everything" (but that may be acceptable).
Later, you want to take your Game's logic and use it as a mini-game in another game. All you need to do is create the appropriate GameRenderer (implementing AbstractGameRenderer) and pass it to the Game object. The Game object does not care that it's a different game, a different Engine and a different Renderer - it doesn't even know.
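To make the wiring concrete, here's a minimal sketch of that third option (in TypeScript for brevity even though the question is about C++; everything other than the Game/Engine/Renderer/AbstractGameRenderer names from above is invented):

```typescript
// The only interface the Game commits to: what *it* needs from rendering.
interface AbstractGameRenderer {
  drawSprite(spriteId: string, x: number, y: number): void;
}

// Existing low-level renderer (stand-in for the engine's Renderer subsystem).
class Renderer {
  blit(textureId: string, x: number, y: number): void {
    console.log(`blit ${textureId} at (${x}, ${y})`);
  }
}

// Adapts the concrete Renderer to what the Game needs.
class GameRenderer implements AbstractGameRenderer {
  constructor(private readonly renderer: Renderer) {}
  drawSprite(spriteId: string, x: number, y: number): void {
    this.renderer.blit(spriteId, x, y);
  }
}

class Game {
  // The Game neither knows nor cares what sits behind the interface.
  constructor(private readonly renderer: AbstractGameRenderer) {}
  update(): void {
    this.renderer.drawSprite("player", 10, 20);
  }
}

// The Engine wires everything together.
class Engine {
  run(): void {
    const game = new Game(new GameRenderer(new Renderer()));
    game.update();
  }
}

new Engine().run();
```

The Game only ever sees AbstractGameRenderer, so swapping the rendering backend later means writing another adapter, not touching the Game.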
The point is that there are many solutions to design problems. This suggestion may not be appropriate or acceptable, or maybe it's exactly what you need. The principles I try to follow are:
1. Try not to commit to interfaces you can't control (you'll have to change if they change)
2. Try to prevent now the pain that will come later
In my example, the assumption is that it's less painful to change GameRenderer if Renderer changes, than it is to change a large component such as Game. But it's better to stick to principles (and minimise pain) rather than follow patterns blindly.
I have a very limited understanding of OOP.
I've been programming in .NET for a year or so, but I'm completely self-taught, so some of the uses of the finer points of OOP are lost on me.
Encapsulation, inheritance, abstraction, etc. I know what they mean (superficially), but what are their uses?
I've only ever used OOP for putting reusable code into methods, but I know I am missing out on a lot of functionality.
Even classes -- I've only made an actual class two or three times. Rather, I typically just include all of my methods in the MainForm.
OOP is way too involved to explain in a StackOverflow answer, but the main thrust is as follows:
Procedural programming is about writing code that performs actions on data. Object-oriented programming is about creating data that performs actions on itself.
In procedural programming, you have functions and you have data. The data is structured but passive and you write functions that perform actions on the data and resources.
In object-oriented programming, data and resources are represented by objects that have properties and methods. Here, the data is no longer passive: calling a method is a means of instructing the data or resource to perform some action on itself.
The reason that this distinction matters is that in procedural programming, any data can be inspected or modified in any arbitrary way by any part of the program. You have to watch out for unexpected interactions between different functions that touch the same data, and you have to modify a whole lot of code if you choose to change how the data is stored or organized.
But in object-oriented programming, when encapsulation is used properly, no code except that inside the object needs to know (and thus won't become dependent on) how the data object stores its properties or mutates itself. This helps greatly to modularize your code because each object now has a well-defined interface, and so long as it continues to support that interface and other objects and free functions use it through that interface, the internal workings can be modified without risk.
Additionally, the concepts of objects, along with the use of inheritance and composition, allow you to model your data structurally in your code. If you need to have data that represents an employee, you create an Employee class. If you need to work with a printer resource, you create a Printer class. If you need to draw pushbuttons on a dialog, you create a Button class. This way, not only do you achieve greater modularization, but your modules reflect a useful model of whatever real-world things your program is supposed to be working with.
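As a tiny sketch of what that buys you (TypeScript, with invented names), here is an object that owns its data and exposes behaviour rather than raw fields:

```typescript
class Employee {
  constructor(
    private readonly name: string,
    private salary: number, // nothing outside the class can read or change this directly
  ) {}

  // The object performs actions on itself instead of handing out its data.
  giveRaise(percent: number): void {
    this.salary += this.salary * (percent / 100);
  }

  describe(): string {
    return `${this.name} earns ${this.salary.toFixed(2)}`;
  }
}

const alice = new Employee("Alice", 50000);
alice.giveRaise(10);
console.log(alice.describe()); // "Alice earns 55000.00"
// How the salary is stored can change later without breaking any caller.
```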
You can try this: http://homepage.mac.com/s_lott/books/oodesign.html It might help you see how to design objects.
You should go through this question: "I can't create a clear picture of implementing OOP concepts, though I understand most of the OOP concepts. Why?"
I had the same scenario and I too am self-taught. I followed those steps and now I'm starting to understand how to implement OOP. I write my code in a more modular, better-structured way.
OOP can be used to model things in the real world that your application deals with. For example, a video game will probably have classes for the player, the badguys, NPCs, weapons, ammo, etc... anything that the system wants to deal with as a distinct entity.
Some links I just found that are intros to OOD:
http://accu.informika.ru/acornsig/public/articles/ood_intro.html
http://www.fincher.org/tips/General/SoftwareEngineering/ObjectOrientedDesign.shtml
http://www.softwaredesign.com/objects.html
Keeping it very brief: instead of performing operations on data in a bunch of different places, you ask the object to do its thing, without caring how it does it.
Polymorphism: different objects can do different things but expose them under the same name, so you can ask any object of a particular supertype to do its thing by invoking that named operation, without knowing which concrete type it is.
I learned OOP using Turbo Pascal and found it immediately useful when I tried to model physical objects. Typical examples include a Circle object with fields for location and radius and methods for drawing, checking whether a point is inside or outside, and other actions. I guess you start thinking of classes as nouns and methods as verbs and actions. Procedural programming is like writing a script: it is often linear and it follows step by step what needs to be done. In the OOP world you build up a repertoire of available actions and tasks (like Lego pieces), and use them to do what you want to do.
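As a small sketch of that Circle example (TypeScript here instead of Turbo Pascal, with method names of my own choosing):

```typescript
class Circle {
  constructor(
    private x: number,
    private y: number,
    private radius: number,
  ) {}

  // "Verbs" the object can perform on itself.
  containsPoint(px: number, py: number): boolean {
    const dx = px - this.x;
    const dy = py - this.y;
    return dx * dx + dy * dy <= this.radius * this.radius;
  }

  moveTo(x: number, y: number): void {
    this.x = x;
    this.y = y;
  }
}

const c = new Circle(0, 0, 5);
console.log(c.containsPoint(3, 4)); // true: the point sits exactly on the boundary
c.moveTo(10, 10);
console.log(c.containsPoint(3, 4)); // false
```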
Inheritance is used when common code should/can be shared by multiple objects. You can easily go the other way and create way too many classes for what you need. If I am dealing with shapes, do I really need two different classes for rectangles and squares, or can I use a common class with different values (fields)?
Mastery comes with experience and practice. Once you start scratching your head on how to solve particular problems (especially when it comes to making your code usable again in the future), slowly you will gain the confidence to start including more and more OOP features into your code.
Good luck.
When do you encourage programming against an interface and not directly to a concrete class?
A guideline that I follow is to create abstractions whenever code needs to cross a logical/physical boundary, especially when infrastructure-related concerns are involved.
Another checkpoint would be if a dependency is likely to change in the future, due to possible additional concerns (such as caching, transactional awareness, or invoking a web service instead of in-process execution), or if such dependencies have direct references to infrastructure integration points.
If code depends on something that does not require control to cross a logical/physical boundary, I more or less don't create abstractions to interact with those.
Am I missing anything?
Also, use interfaces when
Multiple objects will need to be acted upon in a particular fashion, but are not fundamentally related. Perhaps many of your business objects access a particular utility object, and when they do they need to give a reference of themselves to that utility object so the utility object can call a particular method. Have that method in an interface and pass that interface to that utility object.
Passing around interfaces as parameters can be very helpful in unit testing. Even if you have just one type of object that sports a particular interface, and hence don't really need a defined interface, you might define/implement an interface solely to "fake" that object in unit tests.
Related to the first two bullets, check out the Observer pattern and Dependency Injection. I'm not saying to implement these patterns, but they illustrate the kinds of places where interfaces are really helpful.
Another twist on this is for implementing a couple of the SOLID principles, the Open/Closed Principle and the Interface Segregation Principle. Like the previous bullet, don't get stressed about strictly implementing these principles everywhere (right away, at least), but use these concepts to help move your thinking away from just what objects go where and toward thinking more about contracts and dependencies.
In the end, let's not make it too complicated: we're in a strongly typed world in .NET. If you need to call a method or set a property but the object you're passing/using could be fundamentally different, use an interface.
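As a minimal sketch of that last point (TypeScript rather than C#, and all names invented): two fundamentally different objects share one small interface so a utility object can act on either of them.

```typescript
interface Auditable {
  auditLabel(): string;
}

class Invoice implements Auditable {
  auditLabel(): string {
    return "invoice #42";
  }
}

class UserAccount implements Auditable {
  auditLabel(): string {
    return "user alice";
  }
}

class AuditLog {
  // The utility depends only on the interface, never on the concrete classes.
  record(item: Auditable): void {
    console.log(`audited: ${item.auditLabel()}`);
  }
}

const log = new AuditLog();
log.record(new Invoice());
log.record(new UserAccount());
```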
I would add that if your code is not going to be referenced by another library (for a while at least), then the decision of whether to use an interface in a particular situation is one that you can responsibly put off. The "extract interface" refactoring is easy to do these days. In my current project, I've got an object being passed around that I'm thinking maybe I should switch to an interface; I'm not stressing about it.
Interface abstractions are convenient when doing unit tests. They help with mocking test objects, and they are very useful in TDD for developing without actually using data from your database.
If you don't need any features of the class that aren't found in the interface... then why not always prefer to program against the interface?
It will make your code easier to modify in the future and easier to test (mocking).
You have the right idea already. I would only add a couple of notes to this...
First, abstraction does not mean 'interface'. For example, a "connection string" is an abstraction even though it's just a string; it's not about the 'type' of the thing in question, it's about the intention of use for that thing.
Second, if you are doing test automation of any kind, look for the pain and friction that are exposed by writing the tests. If you find yourself having to set up too many external conditions for a test, it's a sign that you need a better abstraction between the thing you're testing and the things it interacts with.
I think you've said it pretty well. Much of this will be a stylistic thing. There are open source projects I've looked at where everything has an interface and an implementation, and it's kind of frustrating, but it might make iterative development a little easier, since any object's implementation can break and the dummies will still work. But honestly, I can dummy any class that doesn't overuse the final keyword by inheritance.
I would add this to your list: anything which can be thought of as a black box should be abstracted. This includes some of the things you've mentioned, but it also includes hairy algorithms, which are likely to have multiple useful implementations with different advantages for different situations.
Additionally, interfaces come in handy very often with composite objects. That's the only way something like java's swing library gets anything done, but it can also be useful for more mundane objects. (I personally like having an interface like ValidityChecker with ways to and-compose or or-compose subordinate ValidityCheckers.)
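As a rough sketch of that composition idea (TypeScript; the allOf/anyOf helpers and the specific checkers are my own invention):

```typescript
interface ValidityChecker<T> {
  isValid(value: T): boolean;
}

// Compose checkers: all must pass.
function allOf<T>(...checkers: ValidityChecker<T>[]): ValidityChecker<T> {
  return { isValid: v => checkers.every(c => c.isValid(v)) };
}

// Compose checkers: at least one must pass.
function anyOf<T>(...checkers: ValidityChecker<T>[]): ValidityChecker<T> {
  return { isValid: v => checkers.some(c => c.isValid(v)) };
}

const nonEmpty: ValidityChecker<string> = { isValid: s => s.length > 0 };
const shortEnough: ValidityChecker<string> = { isValid: s => s.length <= 10 };

const usernameChecker = allOf(nonEmpty, shortEnough);
console.log(usernameChecker.isValid("alice")); // true
console.log(usernameChecker.isValid(""));      // false
```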
Most of the useful things that come with passing interfaces around have already been said. However, I would add:
Implementing an interface on an object, or later on multiple objects, FORCES all the implementers to follow an IDENTICAL pattern for fulfilling the contract. This can be useful in case you have not-so-OOP-experienced programmers writing the implementation code.
In some languages you can add attributes on the interface itself, which can differ in sense and intent from the attributes on the actual object implementation.
This is probably a newbie question since I'm new to design patterns, but I was looking at the Template Method and Strategy design patterns and they seem very similar. I can read the definitions, examine the UML diagrams and check out code examples, but to me it seems like the Strategy pattern is just the Template Method pattern where you happen to be passing the algorithm into an object (i.e. composition).
And for that matter, the Template Method seems like it is just basic OO inheritance.
Am I missing some key aspect of their differences? Am I missing something about the Template Method that makes it more than just basic inheritance?
Note: There is a previous post on this (672083) but it's more about when to use them, which kind of helps me get it a bit more, but I want to validate my thoughts on the patterns themselves.
It basically all comes down to semantics. The Strategy pattern allows you to pass a particular algorithm/procedure (the strategy) to another object, which will then use it. The Template Method allows you to override particular aspects of an algorithm while keeping other aspects the same (the order stays the same, and there are things that are always done at the start and at the end, for example... that's the 'template'), while inheritance is a way of modelling 'IS-A' relationships in data models.
Certainly, template methods are most easily implemented using inheritance (although you could just as easily use composition, especially once you have functors), and strategies are frequently also template methods, but where the syntax is similar the meanings are vastly different.
The Strategy design pattern provides a way to exchange the algorithm of an object dynamically at run-time (via object composition). For example, calculating prices in an order processing system: to calculate prices in different ways, different pricing algorithms can be supported, so that the algorithm to use can be selected (injected) and exchanged dynamically at run-time.
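A minimal sketch of that pricing example (TypeScript, with invented names):

```typescript
interface PricingStrategy {
  price(baseAmount: number): number;
}

class RegularPricing implements PricingStrategy {
  price(baseAmount: number): number {
    return baseAmount;
  }
}

class HappyHourPricing implements PricingStrategy {
  price(baseAmount: number): number {
    return baseAmount * 0.5; // 50% off
  }
}

class Order {
  // The algorithm is injected and can be swapped at run-time.
  constructor(private strategy: PricingStrategy) {}

  setPricingStrategy(strategy: PricingStrategy): void {
    this.strategy = strategy;
  }

  total(baseAmount: number): number {
    return this.strategy.price(baseAmount);
  }
}

const order = new Order(new RegularPricing());
console.log(order.total(100)); // 100
order.setPricingStrategy(new HappyHourPricing());
console.log(order.total(100)); // 50
```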
The Template Method design pattern provides a way to redefine some parts of the behavior of a class statically at compile-time (via subclassing). For example, designing reusable applications (frameworks): the application implements the common (invariant) parts of the behavior so that users of the application can write subclasses to redefine the variant parts to suit their needs. But subclass writers should be able to change neither the invariant parts of the behavior nor the behavior's structure (the structure of invariant and variant parts).
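And a minimal sketch of the Template Method side (again TypeScript with invented names; in Java or C# you would typically mark the template method final or non-virtual so its structure cannot be changed):

```typescript
abstract class ReportGenerator {
  // The template method: fixes the overall structure and order of the steps.
  generate(data: string[]): string {
    const header = this.header();
    const body = data.map(row => this.formatRow(row)).join("\n");
    const footer = this.footer();
    return `${header}\n${body}\n${footer}`;
  }

  // Invariant parts, shared by all subclasses.
  protected header(): string {
    return "=== report ===";
  }
  protected footer(): string {
    return "=== end ===";
  }

  // Variant part, redefined statically by subclassing.
  protected abstract formatRow(row: string): string;
}

class CsvReport extends ReportGenerator {
  protected formatRow(row: string): string {
    return row.split(" ").join(",");
  }
}

class ShoutingReport extends ReportGenerator {
  protected formatRow(row: string): string {
    return row.toUpperCase();
  }
}

console.log(new CsvReport().generate(["a b", "c d"]));
console.log(new ShoutingReport().generate(["a b", "c d"]));
```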