Is Abstracting API function call names correct?

When using an external library or API, I have noticed that each function or data structure belonging to that library or API has something in its name which discloses the API or library we are using. For example, D3DXVECTOR3 or SDL_Surface from Direct3D and SDL respectively have been named to disclose which API they belong to.
While building our own applications, I would not like to disclose which APIs I have used, so is it good practice to change the names of these API structures via #define directives into more general names? Is this a practiced and accepted form of abstraction? Are there better ways to do such abstractions?

In the OO world, the best way to do such an abstraction is through the adapter pattern.
Since you mentioned #define, I assume you are using C or C++. In both cases you can still use this pattern if you want: simply use a class with an abstract base class as the interface. This adds a small overhead because of the virtual function calls, though. You could also consider templates (for example, the curiously recurring template pattern) to bypass this issue.
Either way, I would avoid using the preprocessor as much as possible, since it can quickly turn your code into a nice Italian dish.
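As a rough illustration, here is a minimal sketch of the adapter pattern, in Java for brevity (the question is about C/C++, where the same shape would be an abstract base class with virtual methods); VendorVector3 is a hypothetical stand-in for a third-party type like D3DXVECTOR3:

// Stand-in for a third-party type (hypothetical; think D3DXVECTOR3).
class VendorVector3 {
    final float x, y, z;
    VendorVector3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
}

// The app-facing interface: the rest of the code depends only on this.
interface Vector3 {
    float length();
}

// Adapter: wraps the vendor type behind the app's own interface.
class VendorVector3Adapter implements Vector3 {
    private final VendorVector3 wrapped;

    VendorVector3Adapter(float x, float y, float z) {
        this.wrapped = new VendorVector3(x, y, z);
    }

    @Override
    public float length() {
        return (float) Math.sqrt(wrapped.x * wrapped.x
                + wrapped.y * wrapped.y + wrapped.z * wrapped.z);
    }
}

Application code depends only on Vector3, so swapping vendors means writing one new adapter rather than touching every call site.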

Related

Is creating a module with interfaces only a good idea?

Creating a module (bundle, package, whatever) with only interfaces seems a strange idea to me. Yet I don't know a better solution for the following architectural requirement.
There often appears a need for a set of utilities. In many projects I can see the creation of a "utils" folder, or even a separate package (module), with frequently used ones.
Now consider the idea that you don't want to depend upon a concrete utils set. Instead, you use interfaces.
So you may create the whole project, with multiple modules, dependent only on the "Utils-Interfaces" set, which could be a separate module. Then you think you can re-use it in other projects, as these utils are frequently used.
So what do you do? Create a separate module (package, bundle...) of interfaces defining the methods to be implemented by concrete utility classes? And re-use this "glue-interfaces-package" (possibly with other "glues", such as bridges, providers, etc.) in your various other projects? Or is there a better way to design the architecture regarding utilities so they can easily be switched from one implementation to another?
It seems a bit odd to have an interface for utility methods, as it should be clear what they do. Also, in most languages you won't have static dispatch anymore. And you wouldn't actually solve a problem by having interfaces for utility methods. I think it would make more sense to look for a library doing the same thing, or to write your own if such functionality isn't already implemented. Very specific things should be tied to the project, though.
Let's look at an example in Java:
public static boolean isDigitOnly(String text) {
    // true when the entire string consists of one or more digits
    return text.matches("\\d+");
}
Let's assume one would use an interface. That would mean you have to have an instance of an implementation, most likely a singleton. So what's the point of that? You would write the method signature twice and gain no advantage; interfaces are used for loose coupling, but such generic utility methods aren't bound to your application.
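To make the duplication concrete, here is a sketch of what the interface route would look like (hypothetical names):

interface StringChecker {
    boolean isDigitOnly(String text); // the signature, written once here...
}

class DefaultStringChecker implements StringChecker {
    static final StringChecker INSTANCE = new DefaultStringChecker(); // the obligatory singleton

    @Override
    public boolean isDigitOnly(String text) {
        return text.matches("\\d+"); // ...and written again here, with nothing gained
    }
}

The signature now lives in two places, and callers still reach for a singleton, so nothing has actually been decoupled.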
So maybe you just want to use a library. And there actually is one for exactly this use case: Apache Commons. Of course, you may not want to include such a big library for a single method. However, if you need many such utility methods, you may want to use it.
Now I've explained how to use and reuse utility methods; however, part of your question was about using different implementations.
I can't see many cases where you would want this. If, for example, you have a method specific to a certain implementation of sockets, you may instead want:
A) the utility method as a part of the API
B) an interface for different socket implementations on which you have one common utility method
If you cannot apply this to your problem, it's probably not a utility method or I didn't consider it. If you could provide me with a more specific problem I'd be happy to give you a more concrete answer.

Deciding extent of coupling

I have a component which exposes an API with some ten functions in all. I can think of two ways to achieve it:
Give out all of this functionality as separate functions.
Expose only one function which takes XML as input. Based on the request_Type specified and the parameters passed in the XML, I internally call one of the respective functions.
Q1. Will the second design be more loosely coupled than the first?
I always read about how I should try to make my components loosely coupled; should I really go to this extent to achieve loose coupling?
Q2. Which one of these would be a better design in terms of OOP and why?
Edit:
If I am exposing this API over D-Bus for others to use, will type checking still be a consideration when comparing the two approaches? From what I understand, type checking is done at compile time, but when the function is exposed over some IPC mechanism, does the issue of type checking still come into the picture?
The two alternatives you propose do not differ in the (obviously quite large) number of "functions" you want to offer from your API. However, the second seems to have many disadvantages: you lose all strong type checking, it becomes much harder to document the functionality, and so on. (The only advantage I see is that you don't need to change your API if you add functionality, but at the disadvantage that users will not be able to discover API changes, like deleted functions, until run time.)
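A quick sketch of the contrast in Java (all names here are hypothetical):

// Option 1: separate, strongly typed functions; the compiler checks every caller.
interface AccountService {
    void deposit(String accountId, long amountCents);
    long balance(String accountId);
}

// Option 2: one generic entry point; a typo in the request only fails at run time.
interface XmlAccountService {
    String handle(String requestXml); // e.g. "<request type=\"deposit\" ...>"
}

With the first style, passing a misspelled request type is impossible; with the second, it is a run-time error at best.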
More closely related to this question is the Single Responsibility Principle (http://en.wikipedia.org/wiki/Single_responsibility_principle). As you are talking about OOP, you should not expose your ten functions within one class but split them among different classes, each with a single responsibility. Defining good "responsibilities" and roles requires some practice, but following some basic guidelines will help you get started quickly. See "Are there any rules for OOP?" for a good starting point.
Reply to the question edit
I haven't used D-Bus, so this might be totally wrong. But from a quick look at the tutorial, I read:
Each object supports one or more interfaces. Think of an interface as a named group of methods and signals, just as it is in GLib or Qt or Java. Interfaces define the type of an object instance.
DBus identifies interfaces with a simple namespaced string, something like org.freedesktop.Introspectable. Most bindings will map these interface names directly to the appropriate programming language construct, for example to Java interfaces or C++ pure virtual classes.
As far as I understand, D-Bus has the concept of different objects which provide interfaces consisting of several methods. This means (to me) that my answer above still applies. The "D-Bus native" way of specifying your API would be to expose interfaces, and I don't see any reason why good OOP design guidelines shouldn't be valid here. As D-Bus seems to map these even to native language constructs, this is even more likely.
Of course, nobody keeps you from building your own API description language in XML. However, things like that are a kind of abuse of the underlying technique. You should have good reasons for doing so.

Creating a wrapper for BaaS (Parse/StackMob/...)

I'm currently developing an app using Parse, and I'd like to start abstracting their SDK, as I don't know if and when I'm going to replace their backend with one from another provider or with our own.
Another motivation is separation of concerns: all my apps' code will use the same framework, while I can just update the framework for any backend specifics.
I've started by creating some generic classes to replace their main classes. These generic classes define a protocol that each adapter must implement. Then I'd have a Parse adapter that would forward the calls to the Parse SDK.
One problem I can foresee is that this will require a lot of different classes. In some cases, e.g. Parse, they also have classes for dealing with Facebook. Or the architecture in some parts may be so different that there'll be no common ground to allow something like this.
I've actually never gotten as far with StackMob as I have with Parse, so I guess the first versions will share Parse's own architecture.
What are the best practices for something like this?
Is there something like this out there? I've already searched without success, but maybe I'm looking in the wrong direction.
Should I stick with the Parse SDK, just making sure that the code using it is well identified and contained?
I'm the Developer Evangelist at Applicasa.
We've built a cool set of tools for mobile app developers, part of which includes offering a BaaS service that takes a somewhat different approach compared to Parse, StackMob, and others. I think it provides a helpful perspective for tackling the problem of abstracting away from third-party SDK APIs in a way that would allow you to replace backends with another provider's or your own.
/disclaimer
Is there something like this out there? I've already searched without success but maybe I'm looking in the wrong direction
While there are other BaaS providers out there that provide similar and differentiating features, I'm not aware of a product out there that completely abstracts away third-party providers in an agnostic manner.
What are the best practices for something like this?
I think you already seem to be on solid footing for getting started in the right direction.
First, you're correct in predicting that you'll end up with a number of different classes that encapsulate objects and required functionality in a backend-agnostic way. The number, of course, will depend on what kind of abstraction and encapsulation you're going after. The approach you outline sounds like the way I'd begin such a project as well: creating classes for all the objects my application would need to interact with, and implementing custom methods on those classes (or a base class they all extend) that would do the actual work of interacting with a backend provider.
So, if I was building an app that, for example, had a Foo, Bar, and Baz object, I'd create those classes as part of my internal API, with all necessary functionality required by my app. All app logic and functional operations would only interact with those classes, and all app logic and functionality would be data backend-agnostic (meaning no internal functionality could depend on a data backend, but the object classes would provide a consistent interface that allowed operations to be performed, while keeping data handling methods private).
Then, I'd likely make each class inherit from a BaseObject class, which would include the methods that actually talked to a data backend (provider-based or my own custom remote backend). The BaseObject class might have methods like saveObject, getById:, getObjects (with some appropriate parameters for performing object filtering/searching). Then, when I want to replace my backend data service in the future, I'd only have to focus on updating the BaseObject class methods that handle data interaction, while all my app logic & functionality is tied to the Foo, Bar, and Baz classes, and doesn't actually care how get/save/update/delete operations work behind the scenes.
Now, to keep things as easy on myself as possible, I'd build out my BaaS schema to match my internal object class names (where, depending on the BaaS requirements, I could use either an isKindOfClass: or NSStringFromClass: call). This means that if I was using Parse, I'd want to make my save method get the NSStringFromClass: of the class name to perform data actions. If I was using a service like Applicasa, which generates a custom SDK of native objects for data interactions, I'd want to base custom data actions on isKindOfClass: results.
If I wanted even more flexibility than that (perhaps to allow multiple backend providers to be used, or some other complex requirement), I'd make all the child classes tell BaseObject exactly what schema name to use for data operations through some kind of custom method, like getSchemaName. I'd probably define it as a BaseObject method that would return the class name as a string by default, and then implement it on child classes to customize further. So, the inside of a BaseObject save method might look something like this:
- (BOOL)save {
    // Call the backend-specific method for saving an object
    BaasProviderObject *objectToSave =
        [BaasProviderObject objectWithClassName:[self getSchemaName]];

    // Transfer all object properties to BaasProviderObject properties.
    // Implement however it makes the most sense for BaasProvider.

    // After you've set all calling object properties to BaasProviderObject
    // key-value pairs or object properties, call the BaasProvider's save.
    [objectToSave save];

    // Return a BOOL value to indicate actual success/failure
    return YES; // you'll want this to come from the BaaS
}
Then in, say, the Foo class, I might implement getSchemaName like so:
- (NSString *)getSchemaName {
    // Return a custom NSString for the BaasProvider schema
    return @"dbFoo";
}
I hope that makes sense.
Should I stick with the Parse SDK just making sure that the code using it is well identified and contained?
Making an internal abstraction like this will be a fair amount of work up front, but it will inevitably offer a lot of flexibility to implement as you wish. You can implement CoreData, reject CoreData, and do whatever you'd like really. There are definite advantages to building internal app logic/functionality in a data-agnostic way, even if it's to allow yourself the ease of trying out another BaaS in, say, a custom branch of your app code to see how you like another provider (or to give you an easy route to working with developing your own data solution).
I hope that helps.
I'm the Platform Evangelist at StackMob and thought I'd chime in on this question. We built our iOS SDK with a Core Data interface. You'll use regular Core Data, and we've overridden NSIncrementalStore to persist to StackMob instead of SQLite.
You can check out an example of the Core Data code:
http://developer.stackmob.com/tutorials/ios/Create-an-Object
If you want to see what methods are being leveraged by Core Data to communicate with StackMob:
http://developer.stackmob.com/tutorials/ios/Lower-Level-CRUD-API

What is the correct system design when dealing with third party API?

This blog post by Joubert just opened my eyes. I have dealt with a lot of design patterns in Java and other languages. But Objective-C is a rather unique language.
Let's say that in a project we talk with a third party API, like Dropbox or Facebook. What I've been doing so far is to combine everything that has to do with the third party API into a singleton class. So I can access the class from anywhere in my view controllers. I can just go for example: [[DropboxModel sharedInstance] uploadFile:aFile]
However as the blog post noted, this isn't efficient and leads to spaghetti code and bad unit testing. So what is the best way to design the system so that it's modular and easy to use?
I would dispute the idea that singletons lead to spaghetti code and are inefficient. However, the unit testing problem is legitimate and singletons do reduce modularity since they are really just fancy global variables.
I like Joubert's idea of injecting the singleton instance into the controller(s) from the app delegate (which is itself a singleton, ahem). I think the same approach would work for you.
What I normally do in these situations where I might want to use a different stub object in unit tests is define a protocol to represent the API and make my "real" API object conform to it and also my stub API object. I use the stub in the unit tests and the real object in the app.
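Sketched in Java for brevity (an Objective-C protocol plays the same role as the interface here; all names are hypothetical):

// The interface that represents the remote API.
interface FileBackend {
    boolean upload(String path);
}

// Real object used in the app; the body would call the vendor SDK.
class DropboxBackend implements FileBackend {
    @Override
    public boolean upload(String path) {
        // ...call the Dropbox SDK here...
        return true;
    }
}

// Stub used in unit tests: records calls instead of hitting the network.
class StubBackend implements FileBackend {
    final java.util.List<String> uploaded = new java.util.ArrayList<>();

    @Override
    public boolean upload(String path) {
        uploaded.add(path);
        return true;
    }
}

The app wires in DropboxBackend; the tests wire in StubBackend and assert on its recorded uploads.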
Not that this really solves any architectural problems associated with singletons, but for the sake of readability and typability you can always define a macro in your DropboxModel header file, eg:
#define DBM [DropboxModel sharedInstance]
<...>
[DBM uploadFile:aFile];
I'll typically create an abstraction layer. This wraps a simple interface onto the library calls you use, while giving you a chance to introduce whatever state (e.g. variables) you'll need.
You can then expose only what you need and use, add your own state and checks, and conveniently deal with all issues of the library from one place. 'Issues' may be introduced for several reasons: threading, resources, state, or undesired behavioral changes across versions.
Most libraries are not meant to be used solely via a singleton. In such cases, it's best (subjectively) to create interfaces as you would normally, being mindful of the constraints behind the abstraction layer. In that sense, you simply create object-based interfaces divided by size/task/purpose/functionality, just as you'd usually do when writing your own classes.
If you don't need the library all over the place, then I think it's also good to wrap what you need, to minimize dependencies (increasingly important in large projects).
If you do use the library all over the place, then you may also prefer to use the calls without the abstraction layer.

Should every single object have an interface and all objects loosely coupled?

From what I have read, best practice is to base classes on interfaces and loosely couple objects, in order to help code reuse and unit testing.
Is this correct and is it a rule that should always be followed?
The reason I ask is that I recently worked on a system with hundreds of very different objects. A few share common interfaces, but most do not, and I wonder whether the system should have had an interface mirroring every property and function of those classes.
I am using C# and .NET 2.0; however, I believe this question fits many languages.
It's useful for objects which really provide a service - authentication, storage etc. For simple types which don't have any further dependencies, and where there are never going to be any alternative implementations, I think it's okay to use the concrete types.
If you go overboard with this kind of thing, you end up spending a lot of time mocking/stubbing everything in the world, which often creates brittle tests.
Not really. Service components (classes that do things for your application) are a good fit for interfaces, but as a rule I wouldn't bother having interfaces for, say, basic entity classes.
For example:
If you're working on a domain model, then that model shouldn't be made of interfaces. However, if that domain model wants to call service classes (like data access, operating system functions, etc.), then you should be looking at interfaces for those components. This reduces coupling between the classes and means it's the interface, or "contract", that classes are coupled to.
In this situation you then start to find it much easier to write unit tests (because you can have stubs/mocks/fakes for database access etc) and can use IoC to swap components without recompiling your applications.
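A minimal sketch of that split, shown in Java (the question is C#, but the shape is identical; names are hypothetical):

// Concrete entity: plain data, no interface needed.
class Customer {
    final int id;
    final String name;
    Customer(int id, String name) { this.id = id; this.name = name; }
}

// Service contract: this is what gets stubbed in tests or swapped via IoC.
interface CustomerRepository {
    Customer findById(int id);
}

// The caller is coupled only to the contract, not to a concrete database class.
class CustomerGreeter {
    private final CustomerRepository repo;

    CustomerGreeter(CustomerRepository repo) { this.repo = repo; } // injected dependency

    String greet(int id) { return "Hello, " + repo.findById(id).name; }
}

A unit test can hand CustomerGreeter an in-memory CustomerRepository, and an IoC container can swap in the database-backed one without recompiling the callers.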
I'd only use interfaces where that level of abstraction was required - i.e. you need to use polymorphic behaviour. Common examples would be dependency injection or where you have a factory-type scenario going on somewhere, or you need to establish a "multiple inheritance" type behaviour.
In my case, with my development style, this is quite often (I favour aggregation over deep inheritance hierarchies for most things other than UI controls), but I have seen perfectly fine apps that use very little. It all depends...
Oh yes, and if you do go heavily into interfaces - beware web services. If you need to expose your object methods via a web service they can't really return or take interface types, only concrete types (unless you are going to hand-write all your own serialization/deserialization). Yes, that has bitten me big time...
A downside to interfaces is that they can't be versioned. Once you've shipped an interface, you won't be making changes to it. If you use abstract classes, then you can easily extend the contract over time by adding new methods and flagging them as virtual.
As an example, all stream objects in .NET derive from System.IO.Stream, which is an abstract class. This made it easy for Microsoft to add new features: in version 2 of the framework they added the ReadTimeout and WriteTimeout properties without breaking any code. If they had used an interface (say, IStream), they wouldn't have been able to do this. Instead they'd have had to create a new interface to define the timeout methods, and we'd have to write code to conditionally cast to this interface to use the functionality.
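Here is the same evolution trick sketched in Java with a hypothetical Stream-like class (the .NET types above are the real-world example). A plain interface could not gain a new method without breaking every implementor, which is why Java itself later added default methods to interfaces:

// Shipping an abstract class lets the author add members later without breaking
// existing subclasses, because a new method can carry a default implementation.
abstract class DataStream {
    abstract int read(byte[] buffer);

    // Added in "version 2": existing subclasses keep compiling and running unchanged.
    int getReadTimeout() { return 0; } // hypothetical default: 0 = no timeout
}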
Interfaces should be used when you want to clearly define the interaction between two different sections of your software. Especially when it is possible that you want to rip out either end of the connection and replace it with something else.
For example, in my CAM application I have a CuttingPath connected to a collection of Points. It makes no sense to have an IPointList interface, as CuttingPaths are always going to be comprised of Points in my application.
However, I use the interface IMotionController to communicate with the machine, because we support many different types of cutting machine, each with its own command set and method of communication. So in that case it makes sense to put it behind an interface, as one installation may be using a different machine than another.
Our application has been maintained since the mid '80s and moved to an object-oriented design in the late '90s. I have found that what could change greatly exceeded what I originally thought, and the use of interfaces has grown. For example, it used to be that our DrawingPath was comprised of points, but now it is comprised of entities (splines, arcs, etc.). So it points to an EntityList, which is a collection of objects implementing the IEntity interface.
That change was propelled by the realization that a DrawingPath could be drawn using many different methods. Once it was realized that a variety of drawing methods was needed, an interface was indicated, as opposed to a fixed relationship to a concrete Entity object.
Note that in our system, DrawingPaths are rendered down to low-level cutting paths, which are always series of point segments.
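A Java sketch of that kind of boundary, with names modeled on the answer's IMotionController (the machine commands shown are made-up placeholders):

// One contract, many machines: each vendor's command set hides behind it.
interface MotionController {
    void moveTo(double x, double y);
    void startCutting();
    void stopCutting();
}

// A concrete controller per machine type translates the calls into that
// machine's own command set; the strings here are only illustrative.
class SerialPlasmaCutter implements MotionController {
    @Override public void moveTo(double x, double y) { send("G0 X" + x + " Y" + y); }
    @Override public void startCutting() { send("M3"); }
    @Override public void stopCutting() { send("M5"); }

    private void send(String command) {
        System.out.println(command); // stand-in for real serial I/O
    }
}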
I tried to take the advice of 'code to an interface' literally on a recent project. The end result was essentially duplication of the public interface (small i) of each class precisely once in an Interface (big I) implementation. This is pretty pointless in practice.
A better strategy, I feel, is to confine your interface implementations to verbs:
Print()
Draw()
Save()
Serialize()
Update()
...etc., etc. This means that classes whose primary role is to store data (and if your code is well designed, that is usually all they do) don't want or need interface implementations. Interfaces earn their keep anywhere you might want runtime-configurable behaviour, for example a variety of different graph styles representing the same data.
It's better still when the thing asking for the work really doesn't want to know how the work is done. This means you can give it a macguffin that it can simply trust will do whatever its public interface says it does, and let the component in question simply choose when to do the work.
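A small Java sketch of the verb idea (hypothetical names): the data class stays concrete, and only the behaviour that varies gets an interface:

// Concrete data holder: no interface needed, it only stores data.
class SalesData {
    final double[] values;
    SalesData(double[] values) { this.values = values; }
}

// Verb interface: the runtime-configurable part.
interface Drawer {
    void draw(SalesData data);
}

class BarChartDrawer implements Drawer {
    @Override public void draw(SalesData data) { /* render bars */ }
}

class LineChartDrawer implements Drawer {
    @Override public void draw(SalesData data) { /* render a line */ }
}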
I agree with kpollock. Interfaces are used to get a common ground for objects. The fact that they can be used in IOC containers and other purposes is an added feature.
Let's say you have several types of customer classes that vary slightly but have common properties. In this case it is great to have an ICustomer interface to bind them together logically. By doing that, you could create a CustomerHandler class/method that handles ICustomer objects the same way, instead of creating a handler method for each variation of customer.
This is the strength of interfaces.
If you only have a single class that implements an interface, then the interface isn't of much help; it just sits there and does nothing.
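To make that concrete, here is a minimal Java sketch using the hypothetical names from that answer:

interface ICustomer {
    double getDiscount();
}

class RetailCustomer implements ICustomer {
    @Override public double getDiscount() { return 0.0; }
}

class WholesaleCustomer implements ICustomer {
    @Override public double getDiscount() { return 0.15; }
}

// One handler for every variation, instead of one method per customer type.
class CustomerHandler {
    double priceFor(ICustomer customer, double basePrice) {
        return basePrice * (1.0 - customer.getDiscount());
    }
}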