Is there an API for purging a project in OpenStack? - api

I need to purge my users' resources in an OpenStack project easily, through an API call.
Just like this CLI command:
neutron purge PROJECT_ID
This command is documented for the Neutron project, but I want the same functionality as an API call.
I couldn't find such an API, so my questions are:
1. Is there such an API?
2. If not, why? Is there a specific reason for that?

I checked out the source code of the clients and of neutron-server, but unfortunately there is no dedicated endpoint in the REST API for this functionality.
This feature is only supported by the neutron-client, not by the openstack-client. When you run neutron purge PROJECT_ID, all the neutron-client does inside its Python code is list all resources related to the given project, iterate over that list, and send a DELETE to the Neutron REST API for each individual resource. So it is only a simple automatism in the Python code of the client, not a dedicated endpoint on the server side.
See the specific function inside the code here: https://github.com/openstack/python-neutronclient/blob/master/neutronclient/neutron/v2_0/purge.py#L63
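To illustrate the mechanism (this is not an official API, just a rough TypeScript sketch of the same list-and-delete loop against the Neutron REST API): the endpoint URL, token, and project ID below are placeholders, and the resource ordering and error handling are heavily simplified compared to the real client.

// Sketch: replicate `neutron purge` by listing a project's resources
// and deleting them one by one via the Neutron REST API.
const NEUTRON_URL = "http://controller:9696/v2.0"; // placeholder endpoint
const TOKEN = "KEYSTONE_TOKEN";                    // placeholder Keystone token
const PROJECT_ID = "PROJECT_ID";                   // placeholder project id

async function neutronGet(path: string): Promise<any> {
  const res = await fetch(`${NEUTRON_URL}${path}`, {
    headers: { "X-Auth-Token": TOKEN },
  });
  return res.json();
}

async function neutronDelete(path: string): Promise<void> {
  await fetch(`${NEUTRON_URL}${path}`, {
    method: "DELETE",
    headers: { "X-Auth-Token": TOKEN },
  });
}

async function purgeProject(projectId: string): Promise<void> {
  // Order matters in practice: floating IPs and ports must go
  // before networks. Routers and error handling are omitted here.
  const { floatingips } = await neutronGet(`/floatingips?project_id=${projectId}`);
  for (const fip of floatingips) await neutronDelete(`/floatingips/${fip.id}`);

  const { ports } = await neutronGet(`/ports?project_id=${projectId}`);
  for (const port of ports) await neutronDelete(`/ports/${port.id}`);

  const { networks } = await neutronGet(`/networks?project_id=${projectId}`);
  for (const net of networks) await neutronDelete(`/networks/${net.id}`);
}

purgeProject(PROJECT_ID).catch(console.error);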
Based on my experience with OpenStack and its community, I think it was done like this because it was easier to add new code only to the neutron-client. Had this become a new endpoint, the feature would have had to be implemented in neutron, the openstack-client, and the openstacksdk as well, and each repository has its own team. This purge feature is so small that it was not worth persuading all four teams. The more components you try to update for one simple feature, the harder it gets: whoever wants to bring the feature upstream is responsible for bringing the teams of all required components together, and if even one member of the core teams has a problem with your implementation, you have to start nearly from the beginning. So it can easily take a year or two to land a cross-component feature like a new endpoint upstream when you are not part of a core team yourself. Bringing the feature only into the neutron-client is quite easy compared to a cross-project contribution.
This is at least the reason why I, too, would implement this feature only in the neutron-client (or only in the openstack-client, if possible) rather than adding a new endpoint, if I were to bring this feature upstream.

Related

How do we version a new endpoint being added to an existing API

We have our API versioning strategy based on URL.
I have a couple of scenarios for adding new endpoints where I could not find any strategic reference.
Scenario 1:
An existing API has endpoints ranging from v1 to v4: some endpoints go up to v2, some up to v3, and some up to v4.
In this situation, if I have to add a new endpoint, should I begin the version for the new endpoint at v4? Are there any standards for this?
Scenario 2:
This is a different scenario: an API gateway spans multiple microservices, and the microservices are grouped by resource within the gateway, so a resource has a one-to-one mapping to a service.
Similarly, API versions differ between resources here: some resources go up to v3 and some up to v5. If a new endpoint needs to be added to a resource that currently goes up to v3, should we add the new endpoint in v3, or should we create a v5 version of the resource for that specific endpoint?
Any suggestions would be helpful.
You're unlikely to find a standard way of doing things. The closest thing to a standard is what Fielding and the HTTP specifications themselves say. You should expect these questions to attract many subjective opinions. Here's my biased opinion, based on experience and a deep understanding of the specifications...
Conceptually, there's no real problem with adding new endpoints to an existing API. Where this might be problematic is if your API is public and has public documentation. Once an API version is released, it should be immutable so that clients can rely on it. If you're adding surface area to your API, then I would recommend you create a new version. If you're unsure what that version will shape up to be in totality, you can always start with a pre-release version; for example, 4.0-preview.1.
Your second question seems to ask whether you should have symmetrical versions. You can, but it's solely at your discretion. You indicated that you have microservices, so unless you are building out an API for an entire product or suite, it is more flexible to allow each API to evolve independently. This will organically result in heterogeneous API versions over time. That shouldn't be a problem IMHO. The key to making it manageable is to define a sound versioning policy, such as N-2 supported versions.
You've already elected to version by URL segment, so there's no going back. Versioning this way leads to a spider web of different URLs when the versions are not symmetrical. This is just one of the many problems you may encounter. Hypermedia is all but impossible to achieve when versioning by URL unless the versions are symmetrical. Ultimately, versioning by URL segment is not RESTful, despite being popular, because it violates the Uniform Interface constraint. The URL path is the resource identifier. v3/order/42 and v4/order/42 are not different resources; they are different representations. In the same way, I can ask for order/42 as application/json or application/xml, but those are not different API versions, even though they look completely different over the wire.
As an example, if you retrieve v2/order/42 and it has a link to customer/42, but the Customer API supports 2.0 and 3.0, how do you know which link to provide? If the client only knows how to talk to v3/customer/42 and you give them v2/customer/42, it might break them. Furthermore, what happens if the Customer API doesn't support 2.0 at all? The Order API either has to incorrectly assume v2 is valid or it has to be coupled to knowledge of which versions are supported; neither of which is good. In all cases, the server still doesn't know what the client really wants; it is the client's responsibility to tell the server what it wants.

No other method of versioning has this problem, because the URLs remain consistent. Let's say you version by query string with api-version, another popular choice. If you provide a link to customer/42, the link is valid regardless of API version. It's the client's job to know and append ?api-version=<value> to indicate to the server how they want to query the resource. This is why Fielding says that media type negotiation is the only way to version an API. It's hard to argue with the G.O.A.T., but using the query string or another header doesn't explicitly violate any constraints, even if media type negotiation would be better.
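As a rough illustration (not from the original answer), here is a minimal TypeScript/Express sketch of query-string versioning: one stable URL per resource, with the client stating the representation it wants via ?api-version=. The route and response shapes are hypothetical.

import express from "express";

const app = express();

// One stable URL per resource; the client states the version it wants.
app.get("/customer/:id", (req, res) => {
  const version = (req.query["api-version"] as string) ?? "2.0";
  const id = req.params.id;

  switch (version) {
    case "2.0":
      res.json({ id, name: "Jane Doe" });                      // v2 representation
      break;
    case "3.0":
      res.json({ id, name: { first: "Jane", last: "Doe" } });  // v3 representation
      break;
    default:
      res.status(400).json({ error: `Unsupported api-version '${version}'` });
  }
});

app.listen(3000);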

What is the point of using /api/v1/(whatever route here) in Express?

I've been making APIs for about a year now and I was taught to always use http://IPAddress:Port/api/v1 when building an API with Express.js. Is there a specific reason I would want to do that? Is this just denoting that the API is in development? I've recently changed my API to not run on port 3000 so that I am able to just say http://IPAddress.com/ instead of http://IPAddress.com:3000/api/v1, and it works just fine the new way.
One main reason for versioning an API is that the API may be improved in ways that introduce breaking changes (for example, an application consuming the API might stop working because an endpoint has been modified).
So, the solution to this is to allow consumers of the current API (v1) to keep using it until they want to switch, and release an updated version (v2) for new consumers.
Here's some more info on it: https://restfulapi.net/versioning/
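For illustration, here is a minimal TypeScript/Express sketch of this pattern, with hypothetical routes: v1 stays untouched for existing consumers while v2 introduces a breaking change to the response shape.

import express from "express";

const app = express();

// v1 keeps its original behaviour for existing consumers.
const v1 = express.Router();
v1.get("/users/:id", (req, res) => {
  res.json({ id: req.params.id, fullName: "Jane Doe" });
});

// v2 introduces a breaking change to the response shape.
const v2 = express.Router();
v2.get("/users/:id", (req, res) => {
  res.json({ id: req.params.id, name: { first: "Jane", last: "Doe" } });
});

// Mounting both routers lets old and new clients coexist.
app.use("/api/v1", v1);
app.use("/api/v2", v2);

app.listen(3000);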

How do I create a private and public API architecture

I was assigned a project where we already have an up-and-running website, and one of our clients wants to be able to track statistics from the website.
We want to make this available to all our clients as soon as we finish development. Note that each 'client' has their own 'subdomain', so to speak, e.g. www.website.com/client1, www.website.com/client2, etc. And we want to track the usage separately for each of these clients.
We will need to create statistics based on the usage of our own platform, pull in data registered by Google Analytics, and also pull in data from a 3rd party, which they will offer via an API of their own (they have a 3rd-party solution that uses the data accessible via our API).
All this data needs to be shown on a webpage with graphs and tables.
I wanted to make sure we choose the right architecture from the start, in order to avoid scalability issues later on.
Started reading about Private and Public API's lately.
For now, we do not yet have another (internal) application that would use our own statistics; it would just be the website using it. But in order to be able to scale up later if needed, should another application want to use the statistics, I think a private API would benefit us greatly.
In order to allow 3rd parties to use the statistical data we chose to let out, I was thinking of creating a Public API.
Is a Private&Public API the correct way to go about this?
One of the questions I am stuck with is what the architecture for these APIs looks like. Right now we already have a public API for vacancy data. This 'API' is basically just a PHP class (controller) inside our CodeIgniter solution. It gets called via its URL and returns a JSON object with the results. (e.g. www.website.com/api/vacancy/xxx)
In order to create a (proper) private & public API solution/architecture, should the API be decoupled from the website (CodeIgniter)? What are the common go-to solutions for this?
Or is it fine to keep it in our current platform the way it is now? (and people call the stats API via www.website.com/api/stats/xxx for example?)
It's almost always right to go with a microservices-like architecture, so your initial thoughts sound reasonable. This will give you the possibility to scale and deploy your API independently, and it will also help you avoid performance side effects on your site (and vice versa). Pay attention to how you access your main site's data from within the new API if you don't want to end up with a monolithic application.
Regarding the API, I would suggest implementing a protocol like OAuth2 in order to achieve the flexibility you (might) need. You can also use Swagger to document and test your API.
All of this might help you a lot, but first you have to ask yourself whether you really need to go this deep, or whether you just need a simple solution.
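As a rough sketch of that private/public split (all routes are hypothetical, and the header check is a stand-in for a real OAuth2 flow, not a full implementation), a standalone stats service in TypeScript/Express might look like this:

import express from "express";

const app = express();

// Private API: consumed only by our own website / internal apps.
// In a real deployment this would sit behind the firewall or require
// service-to-service auth; it is left open here for brevity.
app.get("/internal/stats/:clientId", (req, res) => {
  res.json({ clientId: req.params.clientId, pageViews: 1234 }); // dummy data
});

// Public API: exposed to third parties, protected by a bearer-token
// check that stands in for a full OAuth2 flow.
app.get("/public/stats/:clientId", (req, res) => {
  const auth = req.headers.authorization ?? "";
  if (!auth.startsWith("Bearer ")) {
    res.status(401).json({ error: "Missing bearer token" });
    return;
  }
  // A real implementation would validate the token with an OAuth2 provider.
  res.json({ clientId: req.params.clientId, pageViews: 1234 });
});

app.listen(4000); // runs separately from the main website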
I think multitenancy is the best choice. Generally speaking, multitenancy means every customer has their own database: the data is separate, while the codebase is the same and already exists. As I understand it, the project is already in progress, so you would not have to redesign or rewrite anything.

Different backend endpoints in APIs depending on Products in Azure API Management

I'm an absolute newbie in Azure API Management and I have a question regarding how to manage Products and APIs.
Let's imagine this scenario:
I create 3 different Products: one representing my Development environment (DEV), a second one representing my Preproduction environment (PRE), and a last one representing my Production environment (PRO).
I create several APIs which I want to publish in my DEV environment and later promote to the others. So I need each API in each Product to point to a different backend service, as my backend services are different in every environment.
For example:
I have 3 different versions of my backend service: ServiceDEV, ServicePRE and ServicePRO. While I develop my API, I use the one named ServiceDEV as the backend service, so my API is assigned to the DEV Product. Later I want to keep this DEV version of my API, but I also want to "deploy" that API in the PRE Product to make it act as a façade for ServicePRE, and the same would happen when promoting it to PRO.
The problem with this approach is that I need to clone the APIs and change their settings to make them point to the correct backend endpoint every time I want to promote one of them from one environment to another, thus losing all the versioning for that API, as the cloning operation only clones the current version of the API.
I don't know whether policies would meet my needs here.
I hope you get what I mean...
How can I manage this situation?
Am I focusing this subject in a wrong way?
Any idea about how to overcome this?
Thank you!
If you follow this approach, then you could indeed use policies to manage different backends for different products. You could create APIs without specifying a backend service URL at all, and later use the set-backend-service policy at the product level to direct calls to the proper endpoint.
One limiting factor of this approach is that whatever changes you make to an API in the dev environment (think changing the signature of an operation, or a policy) will be immediately visible in the other environments as well, since it is a single API in all of them. If this is an issue, then consider having duplicate (or triplicate) APIs, one per environment, and later moving their configuration across environments via Azure API calls.
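For example, a product-scoped policy along these lines (the base-url value is a placeholder, not a real service) would route all APIs in the DEV product to the DEV backend:

<policies>
    <inbound>
        <base />
        <!-- Every API in the DEV product is routed to this backend;
             the URL is a placeholder for your real DEV service. -->
        <set-backend-service base-url="https://service-dev.example.com" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

The PRE and PRO products would each carry the same policy with their own base-url, so a single unversioned API definition works in all three environments.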

How to store postman collections in source control

I am using Postman collections to test my API before opening it up. I work with a team of developers and we would like to share/add/edit our collections amongst each other.
Doing this in source control is proving slightly tricky, as can be seen in this comment on the GitHub page:
This issue still persists in Version 2.1.1 (packaged)
The order of requests might be deterministic now, but the diff of an exported collection from two different machines and users includes data that are not related to the collections exported. The diff is full of owner and other id conflicts if there are several people working on the tests at the same time.
What is the best way that we have of putting this data in some sort of version control system? Any suggestions otherwise?
Putting it in a VCS will undoubtedly give you some headaches, as you mentioned. Your best bet is to use Postman's functionality to share collections. Here is an excerpt from the documentation found at https://www.getpostman.com/docs/sharing
Starting with Postman v0.9.3 you have the ability to share and manage your collections more effectively. The first thing you will have to do is create a Postman account. You can create one using your email ID or a Google account. Once you are signed in after creating an account, the collections you upload on Postman are linked to your account. You can delete them later through the "Shared collections" item in the navigation bar dropdown.
Collection v2 format removes most, if not all, problems with portability.
http://blog.getpostman.com/2015/06/05/travelogue-of-postman-collection-format-v2/
The format must be highly portable so that it can be easily transported between various systems without losing functionality.
Source Control in Postman
The question about sharing collections so that you can collaborate with your teammates has been answered a few different ways, as described in other answers of this question such as by sharing the collection or by syncing to a team account.
Version Control in Postman
The other part of the question was about putting the Postman data into a version control system. Postman introduced some version control features for the paid team accounts, like being able to restore collections to a certain point in the activity feed.
The paid team accounts also get integrations to sync their collections to their own version control systems like GitHub for example. If you're on a free account, you can use the Postman API to build your own similar integration to update the collections.
This blog post talks about some of the version control features in Postman.
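For instance, here is a small TypeScript sketch of such an integration, using the public Postman API to export a collection as pretty-printed JSON that can be committed to git. The collection UID and the API key are placeholders.

import { writeFileSync } from "node:fs";

const API_KEY = process.env.POSTMAN_API_KEY ?? ""; // placeholder API key
const COLLECTION_UID = "COLLECTION_UID";           // placeholder collection uid

async function exportCollection(uid: string): Promise<void> {
  // Fetch the collection from the Postman API.
  const res = await fetch(`https://api.getpostman.com/collections/${uid}`, {
    headers: { "X-Api-Key": API_KEY },
  });
  const { collection } = await res.json();

  // Pretty-print for stable, reviewable diffs in version control.
  writeFileSync("collection.json", JSON.stringify(collection, null, 2));
}

exportCollection(COLLECTION_UID).catch(console.error);

Running this on each machine before committing sidesteps the owner/id diff noise, since everyone exports through the same API rather than from their local app state.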
UPDATE: Postman released forking and merging in Postman app v6.7.1 so you can manage version control in the app.
To automatically share your existing Postman collections, you can use Postman Pro.
It is a paid service; a team lead can purchase the complete Pro plan for their team and act as an admin.
Postman Pro enables the following, among many other things:
Any changes in the API are automatically reflected in Postman for all members.
Members subscribe to the collections from the Team library and get notifications of any changes.
For more information, see:
https://app.getpostman.com/dashboard/team-upgrades
This is what I use with my team of automation testers.