Should your models be aware of your API version? - api

We are going to add a versioning system to our API. The system is built on Sinatra; there will be a default API version, and clients will be able to request a specific version by setting the HTTP Accept header.
Now I'd like to understand whether you would strictly keep the API version information inside the controller, or whether you would allow the API version to be passed into your models in some way. If you keep it in the controller, what are the cons of propagating the API version into the models?

In a RESTful API design, versioning is done through the choice of media type, which I believe is what you are trying to do. If I understood the second paragraph correctly, you are asking whether the version information should be inside the delivered response too (i.e. part of the document model)?
Such a decision is arbitrary, but many formats carry version information inside themselves in case they pass through a lossy system where metadata (such as version information) might be lost. I would recommend putting it in your model for this reason.

Versioning your API doesn't mean your controller handles versioning internally and then communicates with the model which also handles versioning internally. Instead, it means you should have different versions of your controller and model that you swap in and out at runtime based on the version of the API in the request.
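The idea is language-agnostic. Purely as an illustrative sketch (hypothetical names, written here in plain Java rather than Sinatra), the media type in the Accept header selects the versioned controller, which in turn owns the model/representation for that version:

```java
import java.util.Map;

// Hypothetical sketch: one controller (and matching model/serializer) per API
// version, selected per request from the media type in the Accept header.
interface UserController {
    String show(int id);
}

class UserControllerV1 implements UserController {
    public String show(int id) {
        return "{\"id\": " + id + "}";                 // v1 representation
    }
}

class UserControllerV2 implements UserController {
    public String show(int id) {
        return "{\"id\": " + id + ", \"links\": []}";  // v2 representation
    }
}

class VersionDispatcher {
    private static final String DEFAULT_VERSION = "v1";
    private static final Map<String, UserController> CONTROLLERS = Map.of(
            "v1", new UserControllerV1(),
            "v2", new UserControllerV2());

    // accept is e.g. "application/vnd.myapp.v2+json"
    static UserController select(String accept) {
        for (Map.Entry<String, UserController> entry : CONTROLLERS.entrySet()) {
            if (accept != null && accept.contains(entry.getKey())) {
                return entry.getValue();
            }
        }
        return CONTROLLERS.get(DEFAULT_VERSION);       // fall back to the default version
    }
}
```

Old versions then stay frozen and new versions are added alongside them, instead of threading a version check through shared controller and model code.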
Now, I presume from the mention of Sinatra that you're using Ruby. I know little about Sinatra or Ruby, but I answered a similar question for ASP.NET MVC 4 and discussed a versioning framework written by Sebastiaan Dammann.
Maybe see if a similar framework already exists for Ruby.

Related

Design issue in wrapping "C" API into OO wrapper

I am trying to build an object-oriented wrapper around an API specification; this includes many structures, events, and APIs.
The API specification will be revised every year, with a new specification released each time; the updates are likely to add newer structures, events, and APIs. Updates will also include:
Updates to existing structures, events, and APIs. The APIs as such do not change, but they take various structures as parameters, and those structures eventually get updated.
The challenges
The API specification is nothing but an SDK to a lower layer; what I am trying to build is also an SDK, but it will be an object-oriented wrapper over this SDK.
The requirement is that the users want objects and methods, and no "C"-like structures and APIs.
Frequent version changes should not have any impact on the high-level application, which should work seamlessly with any underlying API version.
Older applications should work on newer APIs.
Newer applications should work on older APIs.
The last one is the tricky one; what I mean is that a newer application, when it sees that it is running against an older version of the SDK, should somehow adapt itself to that older version of the API.
Is there any design pattern that will help me achieve this, tide over the frequent changes to internal data, and also achieve backward compatibility and forward compatibility?
OS: Windows
Dev Environment: Visual C++
Your problem is too high-level to be answered by a design pattern.
What you are asking for are architectural principles.
These you should base on your well-founded design decisions ("API is backwards compatible using versioning because...") which in turn are based on your requirements (e.g. "Older application should work on newer APIs").
Look into this (a presentation keynote about API design by Joshua Bloch):
How to Design a Good API and Why it Matters
1) All that comes to mind at the moment, if the SDK API involves manual resource allocation:
RAII, i.e. ctor/dtor resource management: https://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization
2-5) Determine a function decomposition of the API you're building, one that becomes expressible in terms of each version tier of the SDK API. Some details on semi-formal function decomposition here (towards the bottom):
http://jfeltz.com/posts/2015-08-30-cost-decreasing-software-architecture.html
You can then take the resulting function compositions and make them constructible objects if you have to. Don't worry about the final object model until you have a working understanding of the function compositions involved. This is hard at first, but trust me, it is far more powerful than iterating through several possible object model designs.
For C++, you'll probably need to perform #define pre-processing against a scheme of versions for each upstream SDK API, unless your SDK encodes its version in a file somewhere, such that you can do DLL loading instead (in which case this may be a Factory design pattern), but I suspect you already knew that.
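Purely as an illustration of that factory-plus-stable-interface shape (hypothetical names, sketched in Java for brevity; in C++ the factory would typically decide which versioned adapter or DLL to load):

```java
// Hypothetical sketch: the high-level application codes against one stable
// interface, and a factory picks the adapter that matches the SDK version
// detected at runtime.
interface DeviceApi {
    void startCapture();
}

class DeviceApi2022 implements DeviceApi {
    public void startCapture() {
        // Translate the call into the 2022 SDK's structures and functions.
    }
}

class DeviceApi2023 implements DeviceApi {
    public void startCapture() {
        // Translate the call into the 2023 SDK's structures and functions.
    }
}

class DeviceApiFactory {
    static DeviceApi forSdkVersion(int specYear) {
        // A newer application facing an older SDK simply gets the older
        // adapter, which covers the forward-compatibility requirement.
        return specYear >= 2023 ? new DeviceApi2023() : new DeviceApi2022();
    }
}
```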

WCF OData for multiplatform development?

The OP in this question asks about using WCF/OData as an internal data access layer.
Arguments of using WCF/OData as access layer instead of EF/L2S/nHibernate directly
The resounding reply seems to be: don't do it. I'm in a similar position to the OP, but have a concern not raised in the original question. I'm trying to develop (natively) for a lot of different platforms but want to keep as much of the data and business logic server-side as possible. So I'll have iOS/Android/Web (MVC)/desktop applications. Currently, I have a single WinForms application with an ORM data access layer (LLBLGen Pro).
I'm envisioning moving most of my business / data access logic (possibly still with LLBLGen or other ORM) behind a WCF / OData interface. Then making all my different clients on the different platforms very thin (basically UI and WCF calls).
Is this also overengineered? Am I missing a simpler solution?
I cannot see any problem with your architecture or consider it overengineered, as OData is a standard protocol and your concept conforms to the DRY principle as well.
I would turn the question around: why would you implement the same business logic in each client, introducing more possible bugs and losing the ability to fix errors in one single, centralized place? Your idea also makes it possible to implement the security layer only once.
OData is a cross-platform standard and you can find OData libraries for each development platform (MSDN, OData.org, JayData for JavaScript). Furthermore, you can use OData FunctionImports/service methods and entity-level methods, which will simplify your queries.
If you are doing multiplatform development, you may find it more practical to choose a platform-agnostic communication protocol, such as HTTP, rather than bringing along multiple drivers and ORMs to access your data sources directly. In addition, since OData is a REST protocol, you don't need much on the client side: anything that can format OData HTTP requests and parse HTTP responses will do. There are, however, a few aspects to be aware of:
An OData server is not a replacement for a SQL database. It supports batches, but they are not the same as DB transactions (although in many cases they can be used to model transactional operations). It supports parent-child relations, but it does not support JOINs in the classic SQL sense. So you have to plan what you expose as OData entities. It's too easy to build an OData server using WCF Data Services wrapping an EF model: too easy, because people often expose low-level database content instead of building high-level domain types.
As of today, multiplatform OData clients are still under development, but they are coming. If I may suggest something I am personally working on, have a look at the Simple.Data OData adapter (https://github.com/simplefx/Simple.OData, look at its Wiki pages for examples); it has a NuGet package. While this is a client library that only supports .NET 4.0, part of it is being extracted to be published as a portable class library, Simple.OData.Client, to support .NET 4.x, Windows Store, Silverlight 5, Windows Phone 8, Android, and iOS. In fact, if you check the winrt branch of the Git repository, you will find a multiplatform PCL already; it's just not published on NuGet yet.

Internalizing REST API concepts

I'm having a hard time wrapping my mind around something and would appreciate some reading material on this concept.
I'm writing an application that relies heavily on providing API calls via a URI scheme, e.g. example.com/company/user/123123. The process of taking a URI string and converting it to an action makes sense.
But where I get confused is taking that process and utilizing it within the MVC structure. Do I build the calls using models or a library? My goal is to be able to do something like $this->company->user(12311), so that I can have the API functionality available both externally and internally.
Also how do I make this functionality accessible without exposing core code?
One of the biggest problems with the word API is that it does not make a distinction between when you are making local in process calls and when you are making remote calls. This is the essence of the RPC style, using the same programmatic model regardless of where the code to be executed exists.
REST is explicitly about doing distributed computing and is optimized for those scenarios. Trying to use a RESTful interface as a local API is likely to produce something that is highly inefficient.
I would suggest not trying to use the same API internally and externally and I would go further and say try not to think of REST as a way of building APIs. REST is an approach to building distributed systems that requires consideration of the system holistically.
To address your question more specifically, I use an MVC approach to exposing resources in my system. The Model is the Resource and the View is the Representation. The key to building a RESTful system is to identify the links between your models. These links are rendered into the representations as embedded links. Also, consider that your models should be more like presentation models than domain models, as a RESTful interface is more about exposing the usage scenarios of a system than about exposing a domain model.
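As a rough illustration of that last point (hypothetical fields, sketched in Java rather than the asker's stack), a presentation model for a user resource would carry its related links inside the representation itself:

```java
import java.util.List;

// Hypothetical presentation model: what gets rendered as the representation,
// including the links to related resources.
class UserRepresentation {
    String name;
    String email;
    List<Link> links;   // e.g. "self", "company", "orders"

    static class Link {
        final String rel;   // relation name, e.g. "company"
        final String href;  // e.g. "https://example.com/company/42"

        Link(String rel, String href) {
            this.rel = rel;
            this.href = href;
        }
    }
}
```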

Developing an API: balance between new features and backward compatibility

I'm now working on an "API for developers" feature of our product.
The first version was released and has a small number of users at the moment. Since I started to develop its second version, some parts have been reworked and some parts removed to make the API more elegant and clear.
But deploying the 2nd version could be a pain for users of the old version.
Our marketing department is planning to enhance our API product a lot and add more features to it.
How should I build the system so that
1) we wouldn't be constrained by the "old version" when adding interesting new features, and
2) current API users won't be dissatisfied by the need to rework their systems in order to comply with the changed API?
Or should API products be tested in a sandbox for quite a long period of time before the public release, so that there wouldn't be any significant modifications to the specification?
When you have to make changes to the API which already has some users, probably the best route is to deprecate the old API calls and encourage use of the new calls.
Removing the capability of the old API calls would probably break the functionality of old code, so that is probably going to cause some developers using your "old" API to become somewhat dissatisfied.
If your language provides ways to indicate that certain functionality has been deprecated, it can serve as an indication for users to stop using old API calls and transition to new calls instead. In Java, the @deprecated javadoc tag can provide notes in the documentation that a feature has been deprecated, and from Java 5 the @Deprecated annotation can be used to raise compile-time warnings on calls to deprecated API features.
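For example (hypothetical class and method names), a deprecated call can simply delegate to its replacement, so old client code keeps working while compiling with a warning:

```java
public class GreetingApi {

    /**
     * Old call, kept so existing clients continue to work.
     *
     * @deprecated As of version 2.0, use {@link #greet(String, String)} instead.
     */
    @Deprecated
    public String greet(String name) {
        // Delegate to the new call with a sensible default.
        return greet(name, "en");
    }

    /** New call introduced in version 2.0. */
    public String greet(String name, String language) {
        return ("fr".equals(language) ? "Bonjour, " : "Hello, ") + name;
    }
}
```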
Also, it would probably be a good idea to provide some tips and hints on migrating from the old API to the new API to encourage people to use the new way of interacting with the API. Having examples and sample code on what to do and what not to do, the users of the API would be able to write code according to the new, preferred way.
It's going to be difficult to change a public API, but with some care taken in the transition from the old to the new, I believe that the amount of pain inflicted on the users of the API can be mitigated to a certain extent.
Here's an article on How and When to Deprecate APIs from Sun, which might provide some more information on when it would be appropriate to deprecate parts of APIs.
Also, thank you to David Schmitt, who added that the Obsolete attribute in .NET is similar to the @Deprecated annotation in Java. (Unfortunately the edit was overwritten by my edit, as we were both editing this answer at the same time.)
Microsoft is pretty famous for their insane backwards compatibility. One of the things they did was to keep all the old obsolete calls, and then add new ones that new programs could use to access the enhanced features that they could not work into the old API.
You did not specify which programming language you use, but both .NET and Java have a mechanism to mark certain API calls as obsolete. If backward compatibility is very important to you, you might want to take the same route.
It's a balance you will have to strike with your community:
Keep old functions for aeons and you'll end up with the Win32 API (30,000 public functions)?
Rewrite the API all the time, and you get something similar to .NET, where a new revision goes out every so often (1.0, 2.0, 3.0, 3.5...) and breaks existing programs or introduces new and improved ways of doing UIs, etc.
If the community is tolerant of change and open to experimenting, you will strive for a lean, current API and know that some breakage, aka bit rot, will result. If, on the other hand, the community has tons of legacy code and no resources or desire to bring it up to the latest version, you must keep backward compatibility or all of their stuff will simply not work on the new API.
Note to one of the other answers: deprecating APIs is an often-used way of indicating which functions are "on the way out", but as long as they work, many developers will use them even in the new code because those are the functions they are used to. There are very few enlightened developers that have both the awareness to actually heed "Deprecated" warnings and the time to search the code for other instances of the old API and update them.
Backward compatibility should be the default. The only reason to compromise this principle is when the API is somehow insecure, which forces users to change to more secure methods.
Ideally, applications written against your original API will continue to work with the new version.
One way to add new features while at the same time making sure that old applications continue to run is to have two versions of an API call.
For example, suppose you currently have a function Foo that takes 2 parameters (arguments) in the API but you decide the new version really should take 3 parameters. Keep Foo the way it is and add a new function Foo2 which takes 3 parameters.
That way users can continue to code against Foo for backward compatibility or use the new Foo2 function if they require the new features.
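A minimal sketch of the idea (hypothetical parameters), with the old call delegating to the new one so both keep working:

```java
public class WidgetApi {

    /** Original two-parameter call; its signature never changes. */
    public void foo(int width, int height) {
        // Delegate to the new call, supplying a default for the new parameter.
        foo2(width, height, 0);
    }

    /** New call added in the next version, taking the extra depth parameter. */
    public void foo2(int width, int height, int depth) {
        System.out.printf("width=%d, height=%d, depth=%d%n", width, height, depth);
    }
}
```

In a language with overloading the new call could simply be another overload of foo; the Foo/Foo2 naming matters most for flat C-style APIs, which is exactly the Win32 situation (e.g. CreateWindow and CreateWindowEx).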
This technique has been commonly used by Microsoft for the Windows APIs.

REST-type API for non-web-based applications: is it a good idea?

We are developing a middleware SDK, in both C++ and Java, to be used as a library/DLL by, for example, game developers, animation software developers, and avatar developers to enhance their products.
Having created a typical API using specific calls for specific functions, I am considering simplifying the API by using a REST-type (GET, PUT, POST, DELETE) or CRUD-type (CREATE, READ, UPDATE, DELETE) interface.
This would work in a similar way to a client-server type REST API where there are only 4 possible API calls but these can take flexible parameters.
This seems to have the benefit of making the API stable in that new calls are not being added and old calls are not being removed. So a consumer of this API need not worry about having to recompile and change their code to suit any updates to our middleware.
The overhead is that there is an extra layer of redirection in the middleware controller to route API calls and the developer needs to know what parameters are available for each REST call (supplied of course).
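Roughly, the kind of uniform interface I have in mind would look something like this (names are purely hypothetical):

```java
import java.util.Map;

// Hypothetical sketch: four fixed verbs, with the target resource and its
// parameters passed as data instead of as dedicated function signatures.
public interface MiddlewareApi {
    String create(String resourceType, Map<String, Object> attributes); // returns the new id
    Map<String, Object> read(String resourceType, String id);
    void update(String resourceType, String id, Map<String, Object> attributes);
    void delete(String resourceType, String id);
}
```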
I have not so far seen this approach used outside of web-based client-server applications, so my question is this: is this a feasible idea?
I am thinking in terms of its efficiency as well as if for example a game developer would find it easy to use.
Yes, this is a feasible idea. But I'm not sure the benefits would justify the costs. REST is best applied to a networked application scenario, oriented around requests and responses. While there are definite learning curve advantages to a uniform interface, those advantages can be present in almost any well-designed API which provides reasonably abstract procedures.
You also expressed concern for whether a game developer would find a RESTful API easy to use. I'd be dubious. I've implemented many RESTful web services, and helped many developers get up to speed both building them and using them, and the conceptual leap required to grasp REST can be substantial for someone who has been steeped in procedural APIs for years. I'd think that game developers in particular would be very strongly connected to procedural APIs, to the point that attempting to adopt a different paradigm, whatever its benefits, might prove extremely difficult.
Remember that REST is not specific to HTTP, and does not rely on just the 4 HTTP verbs. The verbs you have and can use depend on what protocol you're using.