How to ensure a RESTful API is backwards compatible?

In my company, we develop an API which is consumed by external customers. We want our API to evolve in a backwards-compatible way without using versions.
We would like to have tests on our side that validate that backwards compatibility is preserved whenever our API evolves (e.g. when new attributes are added to requests or responses). What is the best way of validating that?
I've had a look at contract testing (Pact or Spring Cloud Contract), but those tools seem particularly geared towards consumer-driven testing, which is not what we want to do.
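For illustration, this is roughly the kind of check we have in mind, sketched with JUnit and Jackson (the UserDto, the field names, and the golden-file path are hypothetical, not our real API):

```java
// Sketch: a "golden" response recorded when the contract was first published is
// compared field-by-field against what the current code produces, so removed or
// renamed fields fail the build while newly added fields pass.
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.Test;

import java.nio.file.Files;
import java.nio.file.Path;

import static org.junit.jupiter.api.Assertions.assertTrue;

class GetUserCompatibilityTest {

    private final ObjectMapper mapper = new ObjectMapper();

    // Hypothetical current response DTO; appending new attributes keeps this test green.
    record UserDto(String id, String name, String email) {}

    @Test
    void currentResponseStillContainsEveryFieldOldClientsRelyOn() throws Exception {
        // Baseline payload recorded for GET /users/{id} when the contract was published
        // (checked into the repository alongside the tests).
        JsonNode baseline = mapper.readTree(
                Files.readString(Path.of("src/test/resources/golden/get-user-by-id-v1.json")));
        // What the current code would serialize for the same request.
        JsonNode current = mapper.valueToTree(new UserDto("42", "Ada Lovelace", "ada@example.com"));

        // Every field name the old clients saw must still be present; extra fields are fine.
        baseline.fieldNames().forEachRemaining(field ->
                assertTrue(current.has(field), "Removing or renaming '" + field + "' breaks old clients"));
    }
}
```

This only covers the response shape; the same idea can be inverted for requests (replay recorded old requests against the current controller), or a spec-level diff tool such as openapi-diff can be used to flag breaking changes between the previous and current OpenAPI documents.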

How do you name API paths of the same microservice but w/ different consumers

Context
Let's imagine a simple microservices architecture (e.g. 2-3 microservices). The microservices are domain-based, an API gateway is in place, and everything is as it should be. At the same time, the microservices' APIs are consumed by public mobile applications, an admin UI, and other services for S2S communication, so we have three possible API consumers. Depending on the consumer, the response DTOs are different, but the business process might be the same (e.g. the response for the GET /users endpoint has different DTOs for the consumer application and the admin UI, but technically the data is taken from the same DB).
Question
How do you segment APIs in that case? Do you use namespaces like external, internal, etc.?
Also, feel free to share your experience of how you segment APIs.
Thanks in advance!
From my point of view, the APIs should be different depending on the type of consumer that is going to use them.
For example, in your use case, the API intended to provide simple user information shouldn't be the same as the one used by an administrator. You should define two different APIs here, with different paths like internal/users and external/users as you said, and internally these two endpoints can share the same logic.
This separation is good not only for returning different DTOs from each endpoint, but also for defining different security (authentication/authorization) mechanisms for each API, because I suppose those requirements will differ between an admin API and a general user one.
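A rough Spring MVC sketch of that separation, with made-up paths, DTOs and a hypothetical UserService (not taken from the question), could look like this:

```java
// Sketch: two thin controllers under different path prefixes, one shared service.
// Security rules (e.g. requiring an ADMIN role for /internal/**) would be attached
// per prefix in the security configuration.
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

interface UserService {
    User findById(String id);
}

record User(String id, String displayName, String email, java.time.Instant createdAt) {}

// Slim view for the public/mobile consumer.
record PublicUserDto(String id, String displayName) {}

// Richer view for the admin UI.
record AdminUserDto(String id, String displayName, String email, java.time.Instant createdAt) {}

@RestController
@RequestMapping("/external/users")
class ExternalUserController {
    private final UserService users;
    ExternalUserController(UserService users) { this.users = users; }

    @GetMapping("/{id}")
    PublicUserDto get(@PathVariable String id) {
        User u = users.findById(id);
        return new PublicUserDto(u.id(), u.displayName());
    }
}

@RestController
@RequestMapping("/internal/users")
class InternalUserController {
    private final UserService users;
    InternalUserController(UserService users) { this.users = users; }

    @GetMapping("/{id}")
    AdminUserDto get(@PathVariable String id) {
        User u = users.findById(id);
        return new AdminUserDto(u.id(), u.displayName(), u.email(), u.createdAt());
    }
}
```

Both controllers delegate to the same service, so the business logic stays in one place while each audience gets its own path, DTO shape and security rules.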
It depends a bit on the philosophy you want to adopt.
The one suggested by JArgente is good, in that you'd get good separation, and the role of each is (or at least should be) very clear.
The other approach is layering, which (for the OO programmers out there) is a bit like developing overloads for a method. It assumes that the data required by the derived APIs is provided by the base API. So:
Develop a base API that provides all the data this API family needs to provide. This API might be the one that internal users use (e.g. Admin User), and it could require authentication.
Develop a public-facing API that consumes the base API.
Each API has a separate API Spec; depending on how you do this, you can leverage inheritance at the Spec level.
Each API also has an actual endpoint which triggers some sort of processing - e.g. logic within the API Gateway itself, or logic handled within a downstream component like a microservice.
The public-facing one can be anonymous, as long as something (e.g. the API Gateway) can make an authenticated call against the base API, using some kind of 'service account'.
The advantage here is that you still get good separation between the different APIs and their consumers, but you also get the advantages of inheritance, so code duplication is reduced (testing effort isn't so diffuse, etc.).
This approach also allows you to run the endpoints on the same API Gateway, or deployed on separate ones (internal vs external).
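As a hedged illustration of the layering idea (the base-API URL, DTO names and service-account token below are assumptions, and in practice this delegation often lives in the API Gateway itself rather than in application code):

```java
// Sketch: the public-facing API is a thin layer over the base (internal) API.
// It calls the base API with a service-account bearer token and exposes only the
// fields the anonymous audience is allowed to see. All names here are hypothetical.
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestClient;

// Full shape as returned by the base API (available to authenticated internal callers).
record BaseUserDto(String id, String displayName, String email) {}

// Reduced view exposed by the derived, public API.
record PublicUserView(String id, String displayName) {}

@RestController
@RequestMapping("/public/users")
class PublicUserFacadeController {

    // The base-API endpoint and the service-account token would normally be injected
    // from configuration; they are hard-coded here only to keep the sketch self-contained.
    private final RestClient baseApi = RestClient.builder()
            .baseUrl("https://internal.example.com/base/users")
            .defaultHeader("Authorization", "Bearer <service-account-token>")
            .build();

    @GetMapping("/{id}")
    PublicUserView get(@PathVariable String id) {
        BaseUserDto full = baseApi.get().uri("/{id}", id).retrieve().body(BaseUserDto.class);
        // Project down to the public shape; the derived API never exposes the email.
        return new PublicUserView(full.id(), full.displayName());
    }
}
```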

SPA with Backend API and new B2B API - how to deploy

I have delivered a SPA (Vue.js) web application with a Java API backend. Everything currently sits in AWS, with the frontend in CloudFront and the backend in ECS connecting to an RDS instance.
As part of the next phase of delivery, we are creating a B2B API. My question is one of architectural design and deployment strategy: is it commonplace to just extend the existing API with B2B functionality? Should I keep them both separate with an API gateway in front? We envisage that the B2B use will eventually outgrow the SPA use case, so the initial deployment configuration needs to have the most flexibility to grow and expand.
Is there some sort of best practice here? I imagine that a lot of code would be similar between the two backends as well.
Thanks,
Terry
First off: deciding on service boundaries is one of the most difficult problems in service-oriented architecture design, and the answer strongly depends on your exact domain requirements.
Usually I would split service implementations by domain/function as well as by organizational concerns (e.g. separate teams developing them), and not by their target audience. This usually avoids awkward situations where team responsibility is not clear, etc. If it grows into a very large project, there may also be a need for multiple layers of services and shared libraries, and at that point you would likely run into necessary refactorings/restructurings.
So if there is a very large overlap in function between your B2B and the regular API, you may not want to split the implementation.
However, you may also have to consider how access to the services is provided, and an API Gateway could help with providing different endpoints for the different audiences, different charging models, different auth options, etc. Depending on your exact requirements, an API Gateway may not be enough, and this may also require another thin service-layer implementation that uses common domain services.
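For example, if the gateway layer were something like Spring Cloud Gateway, the same backend could be exposed through separate audience-specific routes; the route IDs, paths and backend URI below are illustrative assumptions, and a managed gateway such as AWS API Gateway would express the same idea in its own configuration:

```java
// Sketch: one backend service exposed through two audience-specific gateway routes,
// so the B2B and SPA audiences can get different policies (auth, rate limits, charging)
// without duplicating the implementation. Names and URIs are illustrative only.
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class AudienceRoutes {

    @Bean
    RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                // Route for the existing SPA frontend.
                .route("spa-api", r -> r.path("/app/api/**")
                        .filters(f -> f.stripPrefix(2))
                        .uri("http://backend-service:8080"))
                // Route for the new B2B audience; B2B-specific filters/policies attach here.
                .route("b2b-api", r -> r.path("/b2b/v1/**")
                        .filters(f -> f.stripPrefix(2))
                        .uri("http://backend-service:8080"))
                .build();
    }
}
```

Different filters (authentication, rate limiting, request transformation) can then be attached per route without touching the shared implementation.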

Mule API Led Connectivity Design Approaches for Experience API

As part of our journey towards API-led Connectivity, we have to group our resources (i.e. API endpoints) into multiple Mule applications for the experience APIs.
In order to have meaningful names for the Mule applications while maintaining maximum re-usability, and rather than associating consumer names with the application names (which would couple the experience APIs tightly to the current application landscape), we propose that the Mule application names reflect the essence of the business.
The list of options is as follows. Which one do you think is more ideal? What approach have you used in your organization?
based on Channel/Consumer
A dedicated experience API for a consumer such as WEB, CRM, Mobile etc.
uri examples:
www.example.com/example-**web**-application/v1/
www.example.com/example-**crm**-application/v1/
www.example.com/example-**mobile**-application/v1/
Pros: applying channel-specific policies is easier, management becomes easier, smaller outage window
Cons: reusability is reduced and the chance of duplicating objects across APIs increases
based on Business Domain
The company data model is used, e.g. Customer, Product, Payment, etc.
uri examples:
www.example.com/example-**customers**-application/v1/
www.example.com/example-**products**-application/v1/
www.example.com/example-**payments**-application/v1/
Pros: promotes reusability, channel agnostic, the same API can be used across different consumers
Cons: management might get complex, larger outage window, multiple consumers might be impacted
based on Customer Journey
This approach is tied to the customer's lifecycle with the organization, e.g. Prospective Customer --> Lead --> Engage --> Payments --> Customer Retain
uri examples:
www.example.com/example-**prospect**-application/v1/
www.example.com/example-**lead**-application/v1/
www.example.com/example-**engage**-application/v1/
Pros: channel agnostic, the same API can be used across different consumers
Cons: can get increasingly big and a further breakdown might still be required
Thanks.
As far as I understand your question, you would like to know which URIs to use for the endpoints of the experience APIs, right?
Based on a recent blog entry from MuleSoft (July 12, 2017), Experience APIs are:
Experience APIs are the means by which data can be reconfigured so that it is most easily consumed by its intended audience, all from a common data source, rather than setting up separate point-to-point integrations for each channel. An Experience API is usually created with API-first design principles where the API is designed for the specific user experience in mind.
Based on the examples from MuleSoft and my understanding, the experience APIs are created for one given "experience"; web, virtual reality, mobile, etc...
You are trying to create an API for a given special experience to make the consumption of the API easy for this specific client.
In my understanding, the main goal at this level is not re-usability. You focus on re-usability at the System API and Process API level; the Experience APIs are supposed to make life easier for the developers of the different clients by providing exactly the interface and data they need, so they don't have to communicate directly with the system and process APIs but instead get a tailor-made API suiting exactly their special needs.
Since the experience API is tailor-made for the specific experience / channel / client application, I think representing this in the URI is a good idea.

Cumulocity extend API

We're working with Cumulocity and we'd like to offer services to our customers that are not currently possible to implement with Cumulocity. As an example, we'd like to be able to retrieve a list of devices located within x kilometers of a given point.
Currently there are two limitations that prevent us from doing so:
the impossibility of extending the Cumulocity API with custom route/parameters
the impossibility of implementing custom functions for specific API GET calls
I can think of a workaround to achieve this, like a POST request of an event that would be processed by an Esper rule, generating another event/measurement that could then be accessed by a GET. But I think we can agree this is not a suitable mechanism.
Please note that the use case I described above is just an example. Our needs are not limited to this, and we need a standardized way to expand our services without requiring updates on the Cumulocity side.
There are two topics here, I believe:
Geo-querying: Some geographical querying and aggregation use cases can be handled through CEL. A general geo-querying API is on the Cumulocity roadmap. Note: This use case is not only related to extending the API, as such queries go right down into the database.
Extending the API: That is actually possible. Cumulocity has a microservices API in which you can expose other APIs under the URL /services/.... This is, for example, how connectivity platforms are interfaced. The API is not on the website because it's not GA yet, but you can certainly discuss it with your Cumulocity contact or open a ticket. This, by the way, also includes adding permissions for the new microservices, so that you can do proper authentication and authorization.

Should website backend and mobile service layer consume the same API?

I'm working on a project which involves a website, and after that is done, mobile applications (most probably will be built using a cross-platform tool like Phonegap or Sencha).
The overall application is heavily data-driven, and all of that data will be stored in MySQL databases on our web server. I know that I will be setting up a REST API as a service layer for the mobile applications, but what I'm not sure about is: should I be using this API for the main website as well?
I need to know this before I can begin the project, because if I do intend to eat my own dogfood, then the API will be the first priority.
In case it matters - the API will never be exposed to third party developers.
Sure, why not? It means that you'll have only one entry-point to test and monitor, it follows the DRY principle, and it will encourage better API design if you consume it too.
Yes, you should use the API for the website. It simplifies your codebase and encourages code reuse, since you only deal with one API and not two (REST + MySQL). Furthermore, it makes life easier for the developers (that includes you!), because there is only one set of API calls to keep in mind at once.
Also, in the future you may build your mobile apps with HTML (perhaps using PhoneGap, recently open-sourced and renamed to Cordova). If your website uses the REST API, you can more easily port the web code to HTML5 for mobile.
Nitpick: This isn't really a matter of eating your own dogfood. Dogfooding typically refers to using pre-release code from the perspective of a user rather than a developer, to make finding bugs easier.