Design issue in wrapping a "C" API into an OO wrapper

I am trying to build an object-oriented wrapper around an API specification; this includes many structures, events, and APIs.
The API specification is revised every year, each revision producing a new release; the updates are likely to add new structures, events, and APIs. Updates will also touch existing structures, events, and APIs: the APIs as such do not change, but they take various structures as parameters, and those structures do get updated.
The challenges:
- The API specification is essentially an SDK over a lower layer; what I am trying to build is also an SDK, but an object-oriented wrapper over that SDK.
- The users want objects and methods, not "C"-like structures and APIs.
- Frequent version changes should have no impact on the high-level application, which should work seamlessly with any underlying API version.
- Older applications should work on newer APIs.
- Newer applications should work on older APIs.
The last one is the tricky one: what I mean is that when a newer application sees that it is running on an older version of the SDK, it should somehow adapt itself to that older version of the API.
Is there any design pattern that will help me achieve this, tide over the frequent changes to the internal data, and also achieve backward and forward compatibility?
OS: Windows
Dev Environment : Visual C++

Your problem is too high-level to be answered by a design pattern.
What you are asking for are architectural principles.
These you should base on your well-founded design decisions ("API is backwards compatible using versioning because...") which in turn are based on your requirements (e.g. "Older application should work on newer APIs").
Have a look at this keynote presentation about API design by Joshua Bloch:
How to Design a Good API and Why it Matters

1) If the SDK API involves manual resource allocation, the first thing that comes to mind is:
RAII, or ctor,dtor resource management: https://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization
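As a minimal sketch of that idea, assuming a hypothetical C SDK handle (SdkHandle, Sdk_Open and Sdk_Close are placeholders, not real SDK calls): acquire the handle in the constructor and release it in the destructor, so the wrapper can never leak it.

```cpp
// Minimal RAII sketch. SdkHandle, Sdk_Open and Sdk_Close are placeholders for
// whatever the real C SDK exposes; they are not actual SDK calls.
#include <stdexcept>
#include <string>

struct SdkHandle;                                 // opaque handle type from the C SDK
extern "C" SdkHandle* Sdk_Open(const char* name);
extern "C" void       Sdk_Close(SdkHandle* handle);

// The constructor acquires the handle and the destructor releases it, so the
// resource cannot leak even when exceptions are thrown.
class Session {
public:
    explicit Session(const std::string& name)
        : handle_(Sdk_Open(name.c_str())) {
        if (!handle_) throw std::runtime_error("Sdk_Open failed");
    }
    ~Session() { Sdk_Close(handle_); }

    Session(const Session&) = delete;             // the raw handle must not be copied
    Session& operator=(const Session&) = delete;

    SdkHandle* native() const { return handle_; } // escape hatch for raw SDK calls

private:
    SdkHandle* handle_;
};
```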
2-5) Determine a function decomposition of the API you're building that becomes expressible in terms of each version tier of the SDK API. Some details on semi-formal function decomposition here (towards the bottom):
http://jfeltz.com/posts/2015-08-30-cost-decreasing-software-architecture.html
You can then take the resulting function compositions and make them constructible objects if you have to. Don't worry about the final object model until you have a working understanding of the function compositions involved. This is hard at first, but trust me, it is far more powerful than iterating through several possible object model designs.
For C++, you'll probably need to perform #define pre-processing against a scheme of versions for each upstream SDK API, unless your SDK encodes its version in a file somewhere, so that you can do DLL loading instead (in which case this may be the Factory design pattern), but I suspect you already knew that.
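A rough sketch of that factory idea, assuming the underlying SDK can report its version at run time (IDevice, DeviceV1/DeviceV2 and Sdk_GetVersion are hypothetical names): the high-level application codes against one stable interface, and the factory picks a version-specific adapter.

```cpp
// Sketch of the factory idea. IDevice, DeviceV1/DeviceV2 and Sdk_GetVersion are
// hypothetical names; the real SDK would have to expose its version somehow
// (a query function, a DLL name, a #define scheme, ...).
#include <memory>
#include <stdexcept>

extern "C" int Sdk_GetVersion();          // assumed: underlying SDK reports its version

class IDevice {                           // stable, version-independent interface
public:
    virtual ~IDevice() = default;
    virtual void configure() = 0;
};

class DeviceV1 : public IDevice {         // adapter over the v1 structures and calls
public:
    void configure() override { /* fill v1 structs, call v1 APIs */ }
};

class DeviceV2 : public IDevice {         // adapter over the v2 structures and calls
public:
    void configure() override { /* fill v2 structs, call v2 APIs */ }
};

// The application only ever sees IDevice; the factory hides which
// version-specific adapter is actually in use.
std::unique_ptr<IDevice> makeDevice() {
    switch (Sdk_GetVersion()) {
        case 1:  return std::make_unique<DeviceV1>();
        case 2:  return std::make_unique<DeviceV2>();
        default: throw std::runtime_error("unsupported SDK version");
    }
}
```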

Related

Should you maintain separate version numbers of your web-based interface and APIs?

Suppose you are developing a platform which has a web-based interface for its users and APIs for third-party developers. Something similar to Salesforce (or even Facebook).
Both Salesforce and Facebook provide a normal web-based interface for their users and APIs for third-party developers.
Ideally, any API call internally invokes the same function used by the web-based interface. For example, the "Create a Project" button and the "CreateProject" API both call the same createProject() function internally. So you can maintain the same version for both, as in most cases they are tightly integrated.
Now sometimes you add a feature that makes you increment the minor version of the web-based interface, but since you are not implementing that feature in the API, the API version should remain as is.
How do you handle such cases? Should you maintain separate versions of your web-based interface and APIs for your platform?
It depends, because it is difficult to offer a conclusive answer to this question. But I will share some ideas and drill down into some scenarios to help as best I can.
I would suggest there should be two sets of APIs: internal APIs and public APIs. At the caller's end they would be two physically distinct APIs/endpoints, so that security policies and a lot of other things can be applied without exposing much information, and without relying on code to handle the distinction based on who is calling from where, as that can quite easily fail.
It is OK if both sets of APIs consolidate down the line to some extent without introducing any security risk, but a separate team of expert engineers should design this consolidation to be seamless yet safe. It's a trade-off between code reuse and everything else. This is very subjective and can turn into an endless discussion, but the software evolves very well as a result of this design process if it is agile and iterative.
The APIs should be externalizable and interoperable. If the project is very large, then different teams working on separate parts of the project will interact with each other's work using internal APIs only. No hanky-panky. No direct data access. Only APIs.
This approach will help create awareness of what's being done in the project across all teams, provided the APIs are discoverable, which I personally believe is a very good thing for collaborative teamwork. In fact, it also helps with reusability. Testing becomes unified and automated. Every team becomes responsible for its own work, so it should be easy to address accountability.
There's more stuff that can go in here but I think this is sufficient at a high level.
If allowed, I would also read this question as
"Should you have a purely service-oriented architecture or not?"
And my answer would be: **it depends**, because it is difficult to offer a conclusive answer to this...
Do not expose core functions directly via the API; instead, create all API functions as proxy functions, so that changes in the core functions are handled inside the proxy functions.
Change the public API version only if there is a change in the API's input/output.
This way you achieve code reusability without having to bump the public API version frequently.
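As a hedged sketch of that proxy idea (core::createProject and api_v1::createProject are made-up names for illustration): the published API calls a thin proxy, and the proxy absorbs changes to the core function's signature.

```cpp
// Hypothetical names throughout: core::createProject stands for the shared
// implementation, api_v1::createProject for the published proxy.
#include <string>

namespace core {
    // Core function shared by the web UI and the API; its signature may evolve.
    int createProject(const std::string& name, bool enableNewFeature) {
        return enableNewFeature ? 2 : 1;          // placeholder result for the sketch
    }
}

namespace api_v1 {
    // Published API function kept stable: the proxy maps the old contract onto
    // the current core signature, so the public version only changes when the
    // API's own input/output changes.
    int createProject(const std::string& name) {
        return core::createProject(name, /*enableNewFeature=*/false);
    }
}
```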
Edit:
If you are talking about the software version number, my answer is yes.

WCF OData for multiplatform development?

The OP in this question asks about using WCF/OData as an internal data access layer.
Arguments of using WCF/OData as access layer instead of EF/L2S/nHibernate directly
The resounding reply seems to be: don't do it. I'm in a similar position to the OP, but I have a concern not raised in the original question. I'm trying to develop (natively) for a lot of different platforms but want to keep as much of the data and business logic server-side as possible. So I'll have iOS/Android/Web (MVC)/desktop applications. Currently, I have a single WinForms application with an ORM data access layer (LLBLGen Pro).
I'm envisioning moving most of my business/data access logic (possibly still with LLBLGen or another ORM) behind a WCF/OData interface, then making all my clients on the different platforms very thin (basically UI and WCF calls).
Is this also overengineered? Am I missing a simpler solution?
I cannot see any problem in your architecture, nor would I consider it overengineered, as OData is a standard protocol and your concept conforms to the DRY principle as well.
I would turn the question around: why would you implement the same business logic in each client, introducing more possible bugs and losing the ability to fix errors in one single, centralized place? Your idea also lets you implement the security layer only once.
OData is a cross-platform standard and you can find OData libraries for every development platform (MSDN, OData.org, JayData for JavaScript). Furthermore, you can use OData FunctionImports/service methods and entity-level methods, which will simplify your queries.
If you are doing multiplatform development, you may find it more practical to choose a platform-agnostic communication protocol, such as HTTP, rather than bringing multiple drivers and ORMs along to access your data sources directly. In addition, since OData is a REST protocol, you don't need much on the client side: anything that can format OData HTTP requests and parse HTTP responses will do. There are, however, a few aspects to be aware of:
An OData server is not a replacement for a SQL database. It supports batches, but they are not the same as DB transactions (although in many cases they can be used to model transactional operations). It supports parent-child relations, but it does not support JOINs in the classic SQL sense. So you have to plan what you expose as an OData entity. It's all too easy to build an OData server using WCF Data Services wrapping an EF model; too easy, because people often expose low-level database content instead of building high-level domain types.
As of today, OData multiplatform clients are still under development, but they are coming. If I may suggest something I am personally working on, have a look at the Simple.Data OData adapter (https://github.com/simplefx/Simple.OData, see its Wiki pages for examples); it has a NuGet package. While this client library only supports .NET 4.0, part of it is being extracted and published as a portable class library, Simple.OData.Client, to support .NET 4.x, Windows Store, Silverlight 5, Windows Phone 8, Android and iOS. In fact, if you check the winrt branch of the Git repository, you will find a multiplatform PCL already; it's just not published on NuGet yet.
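To illustrate how little a thin client actually needs, here is a minimal sketch that only assembles an OData query URL using the standard $filter and $top system options; the service root and entity set are made-up examples, and the actual request/response handling would go through whatever HTTP stack the platform provides.

```cpp
// A thin OData consumer mostly needs to build request URLs and parse responses.
// This only assembles a query URL using the standard $filter and $top system
// options; the service root and entity set below are made-up examples, and
// URL-encoding is omitted for brevity.
#include <iostream>
#include <string>

std::string buildODataQuery(const std::string& serviceRoot,
                            const std::string& entitySet,
                            const std::string& filter,
                            int top) {
    return serviceRoot + "/" + entitySet +
           "?$filter=" + filter +
           "&$top=" + std::to_string(top);
}

int main() {
    // e.g. https://example.com/odata/Products?$filter=Price gt 100&$top=10
    std::cout << buildODataQuery("https://example.com/odata", "Products",
                                 "Price gt 100", 10) << '\n';
}
```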

API library decoupling approaches?

Imagine a set of libraries that represent some APIs. By using an inversion-of-control mechanism, concrete implementations will be injected into a consuming project.
Here is the situation: some of my API libraries depend on other API libraries for certain functionality, so the API libraries themselves are coupled at some point. This coupling can become an issue later, because changing one API will result in changes to the dependent APIs, and the corresponding implementations will also need to change; in the worst case we end up with quite a number of projects that need to be modified to reflect a change coming from only one of them.
Now I have in mind two possible solutions for this:
1) Create a monolithic API project that unites the related API libraries.
2) Further decouple the APIs by making each library provide interfaces for all functionality that depends on the other API, so the direct dependency is removed. This might result in similar code in both libraries, but it gives freedom to the implementations chosen via the IoC mechanism and also allows the APIs to evolve independently of each other (when an API is changed, the changes affect only its implementation libraries, not other APIs or their implementations).
The problem with the second approach is code duplication, and the result might be too many API libraries that need to be referenced (for instance, in a .NET application each API will be a separate DLL; in some scenarios, like Silverlight applications, this can be an issue for app size, download time, and overall client performance).
Is there a better solution for this situation? When is it better to merge some API libraries into one bigger library, and when not? I know this is a very general question, but let's ignore due dates, estimates, client requirements, and technologies for a moment; I want to determine the right approach based on achieving maximum scalability and minimum maintenance time. So, what could be a good reason to choose either approach, or another one you might suggest?
Edit:
I feel I must clarify something about the question. I have in mind decoupling APIs from each other, not decoupling an API from its implementation. For instance, if I have a security API for validating access permissions, and a user accounts API that uses (references) the security API, then changing the security API will require changing the user accounts API and the implementations of both. The more APIs that are coupled this way, the more changes will have to be applied. That is what I want to avoid.
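A minimal sketch of the second option applied to that example, with made-up names (IPermissionChecker, UserAccountsService): the accounts API depends only on an interface it declares itself, and the concrete security implementation is injected through the IoC mechanism.

```cpp
// All names are illustrative. The interface lives in the user-accounts API, so
// that API no longer references the security library directly; an adapter over
// the security API implements IPermissionChecker and is injected at composition
// time by the IoC container.
#include <memory>
#include <string>

class IPermissionChecker {
public:
    virtual ~IPermissionChecker() = default;
    virtual bool canAccess(const std::string& user,
                           const std::string& resource) = 0;
};

class UserAccountsService {
public:
    explicit UserAccountsService(std::shared_ptr<IPermissionChecker> checker)
        : checker_(std::move(checker)) {}

    bool deleteAccount(const std::string& caller, const std::string& account) {
        // Only the injected abstraction is used here, so changes inside the
        // security library stay behind whatever adapter implements it.
        return checker_->canAccess(caller, account);
    }

private:
    std::shared_ptr<IPermissionChecker> checker_;
};
```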
The choice is between few huge libraries and a myriad of small libraries.
If you have a huge library, the code within will tend to be tightly coupled simply because there's no force providing pressure to design the various elements in a loosely coupled way. The risk is that it becomes harder and harder to evolve that library because there are so many interdependencies that must be coordinated. Think about the .NET Base Class Library as an example.
If you have a myriad of small libraries, you might risk dll hell. Yes, we were promised many years ago that this was over, but it's not. Just try to consume a lot of fine-grained open source libraries in your application code base and you'll know what I mean.
Still, the Single Responsibility Principle also applies at the package level, so I'd recommend small, focused libraries instead of huge general-purpose libraries. This also makes it easier to always pick best-of-breed libraries.
Small libraries can always be composed/compiled into larger libraries (in .NET with an Assembly Linker / Merger / Repacker utility), while it's much harder to split a big library.
No matter what you do, the most important thing to keep in mind is backwards compatibility. The fewer breaking changes you introduce, the easier those libraries will be to manage.
I don't see this as a problem, really.
Some libraries will depend on other libraries, and that is fine with me: improving one library will improve all its dependents! The "owner" of a library has the responsibility not to break existing code when making a change, but this is normal and can easily be handled if the code is well designed.
If you have changes rippling through all dependent code you should reconsider your design. If your library surfaces a certain API it should isolate its consumers from changes to underlying classes or libraries.
Update 1:
If your application uses Library1 with API1 it should not have to deal with the fact that Library1 uses Lib2, Lib3, .. , LibX.
E.g., the Moq mocking library depends on Castle DynamicProxy. Why should you have to care about that? You get an assembly where DynamicProxy is already merged in, and you can just use Moq. You never see, use, or have to care about DynamicProxy. So if the DP API changes, that does not affect your tests written using Moq. Moq isolates your code from changes in the API of the underlying DP.
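One hedged sketch of that kind of isolation in C++ is the pimpl idiom: the public header of a library exposes only its own types, and the dependency on some underlying "lib2" stays hidden in the implementation file (Report and lib2 are made-up names).

```cpp
// library1.h -- sketch only; Report and "lib2" are made-up names.
// The public header exposes nothing from lib2, so lib2's API can change without
// recompiling or modifying Library1's consumers.
#include <memory>
#include <string>

class Report {
public:
    Report();                       // defined in library1.cpp
    ~Report();                      // defined in library1.cpp (needed for unique_ptr<Impl>)
    std::string render() const;
private:
    struct Impl;                    // holds the lib2 types, defined in library1.cpp
    std::unique_ptr<Impl> impl_;
};
```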
Update 2:
"Finding a problem valid for more than one branch causes modifications of all of them"
If that is the case, you are not building a library but a helper for a very specific problem, and that should NEVER be forced upon other projects. Shared libraries tend to degenerate into a collection of "might be useful somewhere in the distant future". Don't! This will always bite you in the a**! If you have a solution for a problem that occurs in more than one place (like Guard classes): share it. If you believe that you might find a use for some solution to a problem someday: leave it in the project until you really have that situation, then share it. Never do it "just in case".

API level: the lower, the better?

Is it true:
If the application does not require new features from newer APIs (i.e. higher API levels), it is better to target lower API levels.
The major concern is that lower API levels mean better compatibility, and thus a larger market.
Is there anything else I have to keep in mind when I make such decisions?
I came up with this question when coming across some question about Android API Levels, but I think this can be a general question, not only for Android.
In general, yes, but only if the older API is still supported by the newer implementations, of course. (For example, Lucene Java changes its API in incompatible ways on major updates, so there you do not have this option.)
There could also be cases where the host platform looks at which API version you require and then behaves differently, in a way that you may not want (I cannot think of a good real-world example right now).
For Android, at the moment, I'd say, yes, declare the lowest API level you need.

Developing API: balance between new features and back compatibility

I'm currently working on the developer API feature of our product.
The first version has been released and it has a small number of users at the moment. Since I started developing its second version, some parts have been reworked and some removed to make the API more elegant and clear.
But deploying the 2nd version can be a pain for users of the old version.
Our marketing department is planning to enhance our API product a lot, add more features to it.
How should I build the system so that
1) we aren't constrained by the "old version" when adding interesting new features, and
2) current API users won't be dissatisfied by having to rework their systems to comply with the changed API?
Or should API products be tested in a sandbox for quite a long period before the public release, so that there won't be any significant modifications to the specification afterwards?
When you have to make changes to the API which already has some users, probably the best route is to deprecate the old API calls and encourage use of the new calls.
Removing the capability of the old API calls would probably break the functionality of old code, so that is probably going to cause some developers using your "old" API to become somewhat dissatisfied.
If your language provides a way to indicate that certain functionality has been deprecated, it can serve as an indication for users to stop using old API calls and transition to the new calls instead. In Java, the @deprecated Javadoc tag can note in the documentation that a feature has been deprecated, and from Java 5 onwards the @Deprecated annotation can be used to raise compile-time warnings on calls to deprecated API features.
Also, it would probably be a good idea to provide some tips and hints on migrating from the old API to the new one, to encourage people to use the new way of interacting with the API. With examples and sample code of what to do and what not to do, the users of the API will be able to write code according to the new, preferred way.
It's going to be difficult to change a public API, but with some care taken in the transition from old to new, I believe the amount of pain inflicted on the users of the API can be mitigated to a certain extent.
Here's an article on How and When to Deprecate APIs from Sun, which might provide some more information on when it would be appropriate to deprecate parts of APIs.
Also, thank you to David Schmitt, who added that the Obsolete attribute in .NET is similar to the @Deprecated annotation in Java. (Unfortunately his edit was overwritten by mine, as we were both editing this answer at the same time.)
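For languages without those annotations the same idea still applies; as a rough C++14 analogue (function names here are made up), the [[deprecated]] attribute keeps the old call compiling while warning its callers:

```cpp
// Analogous idea in C++14 using the [[deprecated]] attribute; the function
// names are made up. The old call keeps compiling but warns its callers.
#include <string>

// New call with the extra parameter required by the new feature.
int createProjectV2(const std::string& name, const std::string& owner) {
    return owner.empty() ? 0 : 1;                 // placeholder body for the sketch
}

// Old call kept working, but flagged so callers get a compile-time warning,
// much like Java's @Deprecated or .NET's [Obsolete].
[[deprecated("use createProjectV2(), which also takes an owner")]]
int createProject(const std::string& name) {
    return createProjectV2(name, "");
}

// int id = createProject("demo");   // compiles, but emits a deprecation warning
```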
Microsoft is pretty famous for its insane backwards compatibility. One of the things they did was to keep all the old, obsolete calls and add new ones that new programs could use to access the enhanced features that could not be worked into the old API.
You did not specify which programming language you use, but both .NET and Java have a mechanism to mark certain API calls as obsolete. If backward compatibility is very important to you, you might want to take the same route.
It's a balance you will have to strike with your community:
- Keep old functions around for aeons and you'll end up with the Win32 API (30,000 public functions).
- Rewrite the API all the time, and you get something similar to .NET, where a new revision comes out every so often (1.0, 2.0, 3.0, 3.5...) and breaks existing programs or introduces new and improved ways of doing UIs, etc.
If the community is tolerant of change and open to experimenting, you will strive for a lean, current API and know that some breakage, aka bit rot, will result. If, on the other hand, the community has tons of legacy code and no resources or desire to bring it up to the latest version, you must keep backward compatibility or all of their stuff will simply not work on the new API.
A note on one of the other answers: deprecating APIs is an often-used way of indicating which functions are "on the way out", but as long as they still work, many developers will use them even in new code, because those are the functions they are used to. Very few enlightened developers have both the awareness to actually heed "deprecated" warnings and the time to search the code for other uses of the old API and update them.
Backward compatibility should be the default. The only reason you should compromise this principle is when the API is somehow insecure, forcing users to change to more secure methods.
Ideally, applications written against your original API will continue to work with the new version.
One way to add new features while at the same time making sure that old applications continue to run is to have two versions of an API call.
For example, suppose you currently have a function Foo that takes 2 parameters (arguments) in the API but you decide the new version really should take 3 parameters. Keep Foo the way it is and add a new function Foo2 which takes 3 parameters.
That way users can continue to code against Foo for backward compatibility or use the new Foo2 function if they require the new features.
This technique has been commonly used by Microsoft for the Windows APIs.
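A bare-bones sketch of that technique (the parameter names and the forwarded default are just assumptions for illustration): the old two-parameter Foo stays in place and forwards to the new three-parameter Foo2, much like the Windows "...Ex" functions.

```cpp
// Bare-bones sketch; parameter meanings and the forwarded default are assumptions.
#include <string>

// New API call with the extra parameter required by the new feature.
int Foo2(const std::string& input, int mode, int timeoutMs) {
    (void)input; (void)mode; (void)timeoutMs;     // placeholder body for the sketch
    return 0;
}

// Original two-parameter call: kept unchanged for existing callers and simply
// forwards to Foo2 with a neutral default for the new parameter.
int Foo(const std::string& input, int mode) {
    return Foo2(input, mode, /*timeoutMs=*/0);
}
```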