I was looking into Fractal (tl;dr: a library for formatting data objects/collections as JSON) today and saw some benefits to using it. However, its functionality seems to span multiple layers of the app I'm working on. Hence a question appeared -- where does code utilising Fractal belong? Model, service, controller, some other place? The examples in the project docs seem to favour putting it in the controller or right in a route callback (the more complex examples seem to come from a Laravel app, and the author mentions it in his book on APIs).
My concern is coupling -- if I put it in the controller, as most of the usage examples show, then I'm pretty much bound to it in the future. My first instinct is to abstract it a little, make that abstraction bound to a contract and then put it to use. It might sound over-engineered, but the API I'm working on "is aspiring" to be JSON-API compliant, so exchanging such a "JSON formatter" for something else sounds less crazy. Besides, I still need to format error messages, and Fractal seems not to touch that at all.
I'd like to take advantage of the support for Eloquent's paginator and embedded resources, because those are always a pain. But doing that alone makes things awkward (to say the least) at the presentation/controller layer. Even in the Fractal docs they resort to adding extra methods to the controller class to prep the Fractal objects. It seems a bit weird to me, but maybe it's just me. That's why I'm bringing it here.
I'm aware that it might be a matter of preference, but I'm counting on somebody having a reasonable-sounding one :). Or perhaps a better solution altogether, keeping in mind that automation and JSON-API compliance are the key motivations.
I did this once with an API class for a proprietary system with which my application needed to interface. The API returned objects that looked a lot like models, so I implemented a number of classes for the objects I needed and implemented a library to make the API calls and return the objects. Luckily, I only needed read access to the API, so my library implements only a small subset of the actions available.
Maybe you could abstract all the functionality you need (both Fractal and any Eloquent features) into a library class for which you have defined an interface. That way all the Fractal code is in one place and if you ever need to replace it, you just rewrite your custom library class (which might be a lot of work, but probably better than hunting down references to Fractal sprinkled throughout your code).
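To make that concrete, here is a rough sketch of the shape such a seam could take. It is written in C# purely to keep all the code on this page in one language; every name below is invented, and in the PHP case the concrete class would be the one place that delegates to Fractal's Manager, transformers and serializers.

using System.Collections.Generic;
using System.Text.Json;

// Hypothetical contract the rest of the app depends on; nothing library-specific leaks out.
public interface IApiFormatter
{
    string FormatItem<T>(T item);
    string FormatCollection<T>(IEnumerable<T> items, int page, int pageSize);
    string FormatError(string code, string detail);
}

// The only class that knows how the JSON is actually produced.
// Swapping the formatting library later means rewriting this class, not every controller.
public sealed class DefaultApiFormatter : IApiFormatter
{
    public string FormatItem<T>(T item) =>
        JsonSerializer.Serialize(new { data = item });

    public string FormatCollection<T>(IEnumerable<T> items, int page, int pageSize) =>
        JsonSerializer.Serialize(new { data = items, meta = new { page, pageSize } });

    public string FormatError(string code, string detail) =>
        JsonSerializer.Serialize(new { errors = new[] { new { code, detail } } });
}

A controller would then depend only on IApiFormatter (injected through its constructor), which also gives you a single obvious place to add the error formatting that the library doesn't cover.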
I'm not so much seeking a specific implementation but trying to figure out the proper terms for what I'm trying to do so I can properly research the topic.
I have a bunch of interfaces, and those interfaces are implemented by controllers, repositories, services and whatnot. Somewhere in the startup process of the application we use the Castle.MicroKernel.Registration.Component class to register the classes to use for a particular interface. For instance:
Component.For<IPaginationService>().ImplementedBy<PaginationService>().LifeStyle.Transient
Recently I became interested in creating an audit trail of every class and method call. There are a few hundred of these classes, so writing a proxy class for each one by hand isn't very practical. I could use a template to generate the code, but I'd rather not blow up our code base with all that.
So I'm curious whether there's some kind of on-the-fly solution. I know NHibernate creates proxy classes at some point which overlay all the entity classes. Can someone give me some guidance on how I might be able to do something similar here?
Something like:
Component.For<IPaginationService>().ImplementedBy<ProxyFor<PaginationService>>().LifeStyle.Transient
Obviously that won't work because I can only use generics to generalize the types of methods but not the methods themselves. Is there some tricky reflection approach I can use to do this?
You are looking for what Castle Windsor calls interceptors. It's an aspect-oriented way to tackle cross-cutting concerns -- auditing is certainly one of them. See the documentation, or an article about the approach:
Aspect-oriented programming is an approach that effectively “injects” pieces of code before or after an existing operation. This works by defining an Interceptor wrapping the logic being invoked, then registering it to run whenever a particular set/sub-set of methods are called.
If you want to apply it to many registered services, read more about interceptor selection mechanisms: IModelInterceptorsSelector helps there.
Using PostSharp, things like this can even be done at compile time. That can speed up the resulting application, but when used correctly, interceptors are not slow either.
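A minimal sketch of the idea, assuming Castle Windsor's DynamicProxy-based interceptors; the audit sink here is just the console, and the exact fluent-registration calls can vary between Windsor versions.

using System;
using Castle.DynamicProxy;
using Castle.MicroKernel.Registration;
using Castle.Windsor;

// The cross-cutting audit concern, written once instead of one hand-rolled proxy per service.
public class AuditInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        Console.WriteLine($"AUDIT {invocation.TargetType?.Name}.{invocation.Method.Name}");
        invocation.Proceed(); // invoke the real component
    }
}

public interface IPaginationService { int PageCount(int items, int pageSize); }

public class PaginationService : IPaginationService
{
    public int PageCount(int items, int pageSize) => (items + pageSize - 1) / pageSize;
}

public static class Bootstrap
{
    public static IWindsorContainer Configure()
    {
        var container = new WindsorContainer();
        container.Register(
            Component.For<AuditInterceptor>().LifeStyle.Transient,
            Component.For<IPaginationService>()
                     .ImplementedBy<PaginationService>()
                     .Interceptors<AuditInterceptor>()
                     .LifeStyle.Transient);
        return container;
    }
}

To attach this to a few hundred components without touching each registration, the IModelInterceptorsSelector mentioned above lets you pick interceptors by convention instead of per component.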
I have a component which exposes an API with some 10 functions in all. I can think of two ways to achieve this:
Expose each piece of functionality as a separate function.
Expose only one function which takes XML as input. Based on the request_Type specified and the parameters passed in the XML, I internally call the respective function.
Q1. Will the second design be more loosely coupled than the first?
I always read about how I should try to keep my components loosely coupled -- should I really go to this extent to achieve loose coupling?
Q2. Which one of these would be a better design in terms of OOP and why?
Edit:
If I am exposing this API over D-Bus for others to use, will type checking still be a consideration when comparing the two approaches? From what I understand, type checking is done at compile time, but when the function is exposed over some IPC, does the issue of type checking still come into the picture?
The two alternatives you propose do not differ in the (obviously quite large) number of "functions" you want to offer from your API. However, the second seems to have many disadvantages: you are losing any strong type checking, it will become much harder to document the functionality, etc. (The only advantage I see is that you don't need to change your API when you add functionality. But the disadvantage is that users will not be able to notice API changes, like deleted functions, until run time.)
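To make the type-checking point concrete, here is a small illustration with invented names (the sketch is C#, but the argument is language-agnostic): with separate, strongly typed functions a wrong call fails to compile, while with a single XML entry point the same mistake only surfaces when the XML is parsed at run time.

// Option 1: separate, strongly typed functions -- a wrong argument or a removed
// function is caught by the compiler.
public interface IOrderApi
{
    void CreateOrder(int customerId, decimal amount);
    void CancelOrder(int orderId);
}

// Option 2: one generic entry point -- a typo in request_Type or a missing parameter
// is only discovered while parsing the XML at run time.
public interface IGenericApi
{
    string Execute(string requestXml); // e.g. "<request request_Type=\"CreateOrder\">...</request>"
}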
More relevant to this question is the Single Responsibility Principle (http://en.wikipedia.org/wiki/Single_responsibility_principle). As you are talking about OOP, you should not expose your tens of functions within one class but split them among different classes, each with a single responsibility. Defining good "responsibilities" and roles requires some practice, but following some basic guidelines will help you get started quickly. See Are there any rules for OOP? for a good starting point.
Reply to the question edit
I haven't used D-Bus, so this might be totally wrong. But from a quick look at the tutorial I read:
Each object supports one or more interfaces. Think of an interface as a named group of methods and signals, just as it is in GLib or Qt or Java. Interfaces define the type of an object instance.
DBus identifies interfaces with a simple namespaced string, something like org.freedesktop.Introspectable. Most bindings will map these interface names directly to the appropriate programming language construct, for example to Java interfaces or C++ pure virtual classes.
As far as I understand, D-Bus has the concept of different objects which provide interfaces consisting of several methods. This means (to me) that my answer above still applies. The "D-Bus native" way of specifying your API would be to expose interfaces, and I don't see any reason why good OOP design guidelines shouldn't be valid here. As D-Bus seems to map these even to native language constructs, this is even more likely.
Of course, nobody keeps you from building your own API description language in XML. However, things like that are something of an abuse of the underlying technique. You should have good reasons for doing such things.
I've been working on a small project using C++ (although this question might be considered language-agnostic) and I'm trying to write my program so that it is as efficient and encapsulated as possible. I'm a self-taught and inexperienced programmer but I'm trying to teach myself good habits when it comes to using interfaces and OOP practices. I'm mainly interested in the typical 'best' practices when it comes to accessing the methods of an object that is acting as a subsystem for another class.
First, let me explain what I mean:
An instance of ClassGame wants to render out a 2d sprite image using the private ClassRenderer subsystem of ClassEngine. ClassGame only has access to the interface of ClassEngine, and ClassRenderer is supposed to be a subsystem of ClassEngine (behind a layer of abstraction).
My question is based on the way that the ClassGame object can indirectly make use of ClassRenderer's functionality while still remaining fast and behind a layer of abstraction. From what I've seen in lessons and other people's code examples, there seem to be two basic ways of doing this:
The first method that I learned, via a series of online lectures on OOP design, was to have one class delegate tasks to its private member objects internally. [ In this example, ClassGame would call a method that belongs to ClassEngine, and ClassEngine would 'secretly' pass that request on to its ClassRenderer subsystem by calling one of its methods. ] Kind of a 'daisy chain' of function calls. This makes sense to me, but it seems like it may be slower than some alternative options.
Another way that I've seen in other people's code is to have an accessor method that returns a reference or pointer to a particular subsystem. [ So, ClassGame would call a simple method in ClassEngine that would return a reference/pointer to the object that makes up its ClassRenderer subsystem. ] This route seems convenient to me, but it also seems to defeat the point of having a private member act as a sub-component of a bigger class. Of course, it also means writing far fewer 'mindless' functions that simply pass a particular task on, because you can simply write one getter function for each independent subsystem.
Considering the various important aspects of OO design (abstraction, encapsulation, modularity, usability, extensibility, etc.) while also considering speed and performance, is it better to use the first or the second type of method for delegating tasks to a sub-component?
The book Design Patterns Explained discusses a very similar problem in its chapter about the Bridge Pattern. Apparently this chapter is freely available online.
I would recommend you to read it :)
I think your type-1 approach resembles the OOP bridge pattern the most.
Type-2, returning handles to internal data, is something that should generally be avoided.
There are many ways to do what you want, and it really depends on the context (for example, how big the project is, how much you expect to reuse from it in other projects etc.)
You have three classes: Game, Engine and Renderer. Both of your solutions make the Game commit to the Engine's interface. The second solution also makes the Game commit to the Renderer's interface. Clearly, the more interfaces you use, the more you have to change if the interfaces change.
How about a third option: The Game knows what it needs in terms of rendering, so you can create an abstract class that describes those requirements. That would be the only interface that the Game commits to. Let's call this interface AbstractGameRenderer.
To tie this into the rest of the system, again there are many ways. One option would be:
1. Implement this abstract interface using your existing Renderer class. So we have a class GameRenderer that uses Renderer and implements the AbstractGameRenderer interface.
2. The Engine creates both the Game object and the GameRenderer object.
3. The Engine passes the GameRenderer object to the Game object (using a pointer to AbstractGameRenderer).
The result: The Game can use a renderer that does what it wants. It doesn't know where it comes from, how it renders, who owns it - nothing. The GameRenderer is a specific implementation, but other implementations (using other renderers) could be written later. The Engine "knows everything" (but that may be acceptable).
Later, you want to take your Game's logic and use it as a mini-game in another game. All you need to do is create the appropriate GameRenderer (implementing AbstractGameRenderer) and pass it to the Game object. The Game object does not care that it's a different game, a different Engine and a different Renderer - it doesn't even know.
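Here is a compact sketch of that wiring. It is written in C# to keep this page's examples in one language, but the shape maps directly onto a C++ abstract base class held through a pointer; any member names not taken from the answer are invented.

// The only interface Game commits to: what the game needs from rendering, nothing more.
public interface IAbstractGameRenderer
{
    void DrawSprite(string spriteId, float x, float y);
}

// Low-level renderer owned by the engine (stands in for ClassRenderer).
public class Renderer
{
    public void Blit(string textureId, float x, float y) =>
        System.Console.WriteLine($"blit {textureId} at ({x},{y})");
}

// Adapter: implements what the game needs in terms of the concrete renderer.
public class GameRenderer : IAbstractGameRenderer
{
    private readonly Renderer renderer;
    public GameRenderer(Renderer renderer) => this.renderer = renderer;
    public void DrawSprite(string spriteId, float x, float y) => renderer.Blit(spriteId, x, y);
}

public class Game
{
    private readonly IAbstractGameRenderer renderer;
    public Game(IAbstractGameRenderer renderer) => this.renderer = renderer; // no idea who owns it
    public void Update() => renderer.DrawSprite("player", 10f, 20f);
}

// The engine "knows everything": it builds the pieces and hands Game only the abstraction.
public class Engine
{
    public Game CreateGame() => new Game(new GameRenderer(new Renderer()));
}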
The point is that there are many solutions to design problems. This suggestion may not be appropriate or acceptable, or maybe it's exactly what you need. The principles I try to follow are:
1. Try not to commit to interfaces you can't control (you'll have to change if they change)
2. Try to prevent now the pain that will come later
In my example, the assumption is that it's less painful to change GameRenderer if Renderer changes, than it is to change a large component such as Game. But it's better to stick to principles (and minimise pain) rather than follow patterns blindly.
When do you encourage programming against an interface and not directly to a concrete class?
A guideline that I follow is to create abstractions whenever code needs to cross a logical/physical boundary, most especially when infrastructure-related concerns are involved.
Another checkpoint would be if a dependency will likely change in the future, due to possible additional concerns in the code (such as caching, transactional awareness, or invoking a web service instead of in-process execution), or if such dependencies have direct references to infrastructure integration points.
If code depends on something that does not require crossing a logical/physical boundary, I more or less don't create abstractions to interact with it.
Am I missing anything?
Also, use interfaces when:
- Multiple objects will need to be acted upon in a particular fashion, but are not fundamentally related. Perhaps many of your business objects access a particular utility object, and when they do they need to give a reference to themselves to that utility object so the utility object can call a particular method. Have that method in an interface and pass that interface to the utility object.
- Passing around interfaces as parameters can be very helpful in unit testing (see the sketch after this list). Even if you have just one type of object that sports a particular interface, and hence don't really need a defined interface, you might define/implement an interface solely to "fake" that object in unit tests.
- Related to the first two bullets, check out the Observer pattern and Dependency Injection. I'm not saying to implement these patterns, but they illustrate the kinds of places where interfaces are really helpful.
- Another twist on this is implementing a couple of the SOLID principles, the Open/Closed Principle and the Interface Segregation Principle. Like the previous bullet, don't get stressed about strictly implementing these principles everywhere (right away at least), but use these concepts to help move your thinking away from just what objects go where, toward thinking more about contracts and dependencies.
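As a small sketch of the unit-testing bullet above (all names invented): the service depends only on an interface, so a test can hand it a trivial fake instead of a real mail or SMS sender.

using System;
using System.Collections.Generic;

public interface INotificationSender
{
    void Send(string recipient, string message);
}

// The class under test depends on the interface, not on SMTP, SMS or anything concrete.
public class OrderService
{
    private readonly INotificationSender sender;
    public OrderService(INotificationSender sender) => this.sender = sender;

    public void PlaceOrder(string customerEmail) =>
        sender.Send(customerEmail, "Order received");
}

// A fake used only in tests: it just records what was sent.
public class FakeNotificationSender : INotificationSender
{
    public List<(string Recipient, string Message)> Sent { get; } = new();
    public void Send(string recipient, string message) => Sent.Add((recipient, message));
}

public static class OrderServiceTests
{
    public static void PlaceOrder_SendsConfirmation()
    {
        var fake = new FakeNotificationSender();
        new OrderService(fake).PlaceOrder("a@example.com");
        if (fake.Sent.Count != 1) throw new Exception("expected exactly one notification");
    }
}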
In the end, let's not make it too complicated: we're in a strongly typed world in .NET. If you need to call a method or set a property but the object you're passing/using could be fundamentally different, use an interface.
I would add that if your code is not going to be referenced by another library (for a while at least), then the decision of whether to use an interface in a particular situation is one that you can responsibly put off. The "extract interface" refactoring is easy to do these days. In my current project, I've got an object being passed around that I'm thinking maybe I should switch to an interface; I'm not stressing about it.
Interface abstractions are convenient when doing unit tests. They help with mocking test objects, and they are very useful in TDD for developing without actually using data from your database.
If you don't need any features of the class that aren't found in the Interface...then why not always prefer the Interface implementation?
It will make your code easier to modify in the future and easier to test (mocking).
You have the right idea already. I would only add a couple of notes to this...
First, abstraction does not mean 'interface'. For example, a "connection string" is an abstraction, even though it's just a string... it's not about the 'type' of the thing in question, it's about the intention of use for that thing.
And secondly, if you are doing test automation of any kind, look for the pain and friction that are exposed by writing the tests. If you find yourself having to set up too many external conditions for a test, it's a sign that you need a better abstraction between the thing you're testing and the things it interacts with.
I think you've said it pretty well. Much of this will be a stylistic thing. There are open source projects I've looked at where everything has an interface and an implementation, and it's kind of frustrating, but it might make iterative development a little easier, since any object's implementation can break while dummies still work. But honestly, I can dummy any class that doesn't overuse the final keyword by inheritance.
I would add this to your list: anything which can be thought of as a black box should be abstracted. This includes some of the things you've mentioned, but it also includes hairy algorithms, which are likely to have multiple useful implementations with different advantages for different situations.
Additionally, interfaces come in handy very often with composite objects. That's the only way something like Java's Swing library gets anything done, but it can also be useful for more mundane objects. (I personally like having an interface like ValidityChecker with ways to and-compose or or-compose subordinate ValidityCheckers.)
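A minimal sketch of that composite idea, with invented names: the and/or composites are themselves checkers, so they can be nested arbitrarily.

using System.Linq;

public interface IValidityChecker
{
    bool IsValid(string input);
}

public class NotEmptyChecker : IValidityChecker
{
    public bool IsValid(string input) => !string.IsNullOrWhiteSpace(input);
}

public class MaxLengthChecker : IValidityChecker
{
    private readonly int max;
    public MaxLengthChecker(int max) => this.max = max;
    public bool IsValid(string input) => input != null && input.Length <= max;
}

// And-composition: valid only if every subordinate checker agrees.
public class AndChecker : IValidityChecker
{
    private readonly IValidityChecker[] parts;
    public AndChecker(params IValidityChecker[] parts) => this.parts = parts;
    public bool IsValid(string input) => parts.All(p => p.IsValid(input));
}

// Or-composition: valid if any subordinate checker agrees.
public class OrChecker : IValidityChecker
{
    private readonly IValidityChecker[] parts;
    public OrChecker(params IValidityChecker[] parts) => this.parts = parts;
    public bool IsValid(string input) => parts.Any(p => p.IsValid(input));
}

Usage would look like new AndChecker(new NotEmptyChecker(), new MaxLengthChecker(32)).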
Most of the useful things that come with passing interfaces have already been said. However, I would add:
Implementing an interface on an object, or later multiple objects, FORCES all the implementers to follow an IDENTICAL pattern to implement the contract with the object. This can be useful in case you have not-so-OOP-experienced programmers actually writing the implementation code.
In some languages you can add attributes to the interface itself, which can differ in sense and intent from the attributes on the actual object implementation.
I'm trying to help a coworker come to terms with OO, and I'm finding that for some cases it's hard to find solid real-world examples for the concept of a tag (or marker) interface (an interface that contains no methods; it is used as a tag, marker or label only). While it really shouldn't matter for the sake of our discussion, we're using PHP as the platform behind the discussion (because it's a common language between us). I'm probably not the best person to teach OO since most of my background is highly theoretical and about 15 years old, but I'm what he's got.
In any case, the dearth of discussions I've found regarding tag interfaces leads me to believe it's not even being used enough to warrant discussion. Am I wrong here?
Tag interfaces are used in Java (Serializable being the obvious example). C# and even Java seem to be moving away from this, though, in favor of attributes (annotations in Java), which can accomplish the same thing but also do much more.
I still think there's a place for them in other languages that don't have the attribute concept that .NET and Java have.
ETA:
You would typically use this when you have an interface that implies an implementation, but you don't want the class that implements the interface to actually have to provide that implementation.
Some real world examples:
Serializable is a good example - it implies that there is an implementation (somewhere) that can serialize the object data, but since a generic implementation for that is available, there is no need to actually have the object implement that functionality itself.
Another example might be a web page caching system. Say you have a "Page" object and a "RequestHandler" object. The RequestHandler takes a request for a page, locates/creates the corresponding Page object, calls a Render() method on the Page object, and sends the results to the browser.
Now, say you wanted to implement caching for rendered pages. But the hitch is that some pages are dynamic, so they can't be cached. One way to implement this would be to have the cacheable Page objects implement an ICacheable "tag" interface (or, vice versa, you could have an INotCacheable interface). The RequestHandler would then check whether the Page implemented ICacheable, and if it did, it would cache the results after calling Render() and serve up those cached results on subsequent requests for that page.
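A bare-bones version of that example; the Page, RequestHandler and ICacheable names come from the answer, everything else is invented.

using System.Collections.Generic;

// Marker interface: no members, it only labels pages whose output may be cached.
public interface ICacheable { }

public abstract class Page
{
    public abstract string Render();
}

public class AboutPage : Page, ICacheable   // static content, safe to cache
{
    public override string Render() => "<html>About us</html>";
}

public class DashboardPage : Page           // dynamic content, never cached
{
    public override string Render() => $"<html>{System.DateTime.Now}</html>";
}

public class RequestHandler
{
    private readonly Dictionary<string, string> cache = new();

    public string Handle(string path, Page page)
    {
        if (page is ICacheable && cache.TryGetValue(path, out var cached))
            return cached;

        var html = page.Render();
        if (page is ICacheable)
            cache[path] = html;             // only tagged pages go into the cache
        return html;
    }
}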
In .NET, tag interfaces can be great for use with reflection and extension methods. Tag interfaces are typically interfaces without any methods. They allow you to see if an object is of a certain type without paying the penalty of reflecting over your objects.
An example in the .NET Framework is INamingContainer, which is part of ASP.NET.
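For instance, a tag interface can also gate an extension method at compile time, so the extra behaviour only shows up on types that opted in. This is a hedged sketch with invented names, not how INamingContainer itself works.

public interface IAuditable { }   // marker: no members

public class Invoice : IAuditable { }
public class LogEntry { }

public static class AuditExtensions
{
    // Only types tagged with IAuditable get this method; no reflection involved.
    public static void WriteAuditRecord<T>(this T entity) where T : IAuditable =>
        System.Console.WriteLine($"audited: {typeof(T).Name}");
}

// new Invoice().WriteAuditRecord();   // compiles
// new LogEntry().WriteAuditRecord();  // does not compile: LogEntry is not IAuditable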
I'd call myself an OO programmer, and I've never heard of a tag interface.
I think tag interfaces are worth discussing, because they're an interesting corner case of the concept of an interface. Their infrequent use is also worth noting, though!
I've used tag interfaces a couple of times in an object model representing a SQL database. In these instances, it's a subtype of the root interface for particular types of objects. It's easier to check for a tag interface than an attribute ('obj is IInterface' rather than using reflection).
The .NET Style guide says to use Attributes rather than tag/marker interfaces.
[Image: an excerpt of that guideline recommending attributes over custom marker interfaces.]
Source: http://www.informit.com/articles/article.aspx?p=423349&seqNum=6,
or any of the other places Cwalina's recommendations appear, such as the book.
I've used a tag interface twice in the past month. They can cut through some nasty problems when refactoring to make a function more generic.
That said, another thing I just discovered is using a tag interface as a parent to a bunch of related interfaces that do have methods. The object can be passed around, and various preprocessors can check whether they need to deal with a particular object. It's a way of keeping the processing in a separate object where it belongs, while the implementation details of the processed objects stay in their definitions where they belong.