Different backend endpoints in APIs depending on Products in Azure API Management

I'm an absolute newbie with Azure API Management and I have a question about how to manage Products and APIs.
Let's imagine this scenario:
I create 3 different Products: one representing my Development environment (DEV), a second representing my Preproduction environment (PRE) and a third representing my Production environment (PRO).
I create several APIs which I want to publish in my DEV environment and later promote to the other environments. So I need each API to point to a different backend service in each Product, as my backend services are different in every environment.
For example:
I have 3 different versions of my backend service: ServiceDEV, ServicePRE and ServicePRO. While I develop my API, I use ServiceDEV as the backend service, so my API is assigned to the DEV Product. Later I want to keep this DEV version of my API, but I also want to "deploy" that API in the PRE Product to make it act as a façade for ServicePRE, and the same would happen when promoting it to PRO.
The problem with this approach is that I need to clone the APIs and change their settings to make them point to the correct backend endpoint every time I want to promote one of them from one environment to another, thus losing all the versioning for that API, as the cloning operation just clones the current version of the API.
I don't know if policies would meet my needs in this subject.
I hope you get what I mean...
How can I manage this situation?
Am I approaching this the wrong way?
Any idea about how to overcome this?
Thank you!

If you follow this approach then you could indeed use policies to manage different backends for different products. You could create the APIs without specifying a backend service URL at all and later use the set-backend-service policy at the product level to direct calls to the proper endpoint.
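For example, the DEV Product could carry a policy along these lines (a minimal sketch; the https://servicedev.example.com URL is just a placeholder for your real ServiceDEV endpoint):

    <policies>
        <inbound>
            <base />
            <!-- every call made through the DEV Product goes to the DEV backend -->
            <set-backend-service base-url="https://servicedev.example.com" />
        </inbound>
        <backend>
            <base />
        </backend>
        <outbound>
            <base />
        </outbound>
        <on-error>
            <base />
        </on-error>
    </policies>

The PRE and PRO Products would get the same policy with their own base-url, so the API definition itself stays identical across all three Products.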
One limiting factor of this approach is that whatever changes you make to an API in the dev environment (think changing the signature of an operation, or a policy) will be immediately visible in the other environments as well, since it is a single API in all of them. If this is an issue, then consider having duplicate (or triplicate) APIs - one per environment - and later move their configuration between them via Azure API calls.


SPA with Backend API and new B2B API - how to deploy

I have currently delivered a SPA (Vue.js) web application with a Java API backend. Everything is currently sitting in AWS, with the frontend being in CloudFront and the backend in ECS connecting to a RDS instance.
As part of the next phase of delivery, we are creating a B2B API. My question is one of architectural design and deployment strategy: is it commonplace to just extend the existing API with B2B functionality, or should I keep the two separate with an API gateway in front? We envisage that B2B use will eventually outgrow the SPA use case, so the initial deployment configuration needs the most flexibility to grow and expand.
Is there some sort of best practice here? I imagine that a lot of code would be similar between the two backends as well.
Thanks,
Terry
First off: deciding on service boundaries is one of the most difficult problems in service-oriented architecture design, and the answer strongly depends on your exact domain requirements.
Usually I would split service implementations by domain/function as well as by organizational concerns (e.g. separate teams developing them), and not by their target audience. This usually avoids awkward situations where team responsibility is not clear, etc. If it grows into a very large project there may also be a need for multiple layers of services and shared libraries, and at that point you would likely run into necessary refactorings / restructurings.
So if there is a very large overlap in function between your B2B and the regular API, you may not want to split the implementation.
However, you may also have to consider how access to the services is provided, and an API Gateway could help with providing different endpoints for the different audiences, different charging models, different auth options, etc. Depending on your exact requirements an API Gateway may not be enough, and you may also need another thin service layer implementation that uses the common domain services.

Is there an API for purging a project in OpenStack?

I need to purge my users on an OpenStack project easily, through an API call.
Just like this CLI command:
neutron purge PROJECT_ID
This command is available in the Neutron project docs, but I need the same thing as an API call.
I couldn't find such an API, so my questions are:
1. Is there such an API?
2. If there is not, why? Is there a specific reason for that?
I checked the source code of the clients and of neutron-server, but unfortunately there is no dedicated endpoint in the REST API for this functionality.
This feature is only supported by the neutron client, not by the openstack client. When you run neutron purge PROJECT_ID, all the neutron client does inside its Python code is list all resources related to the given project, iterate over this list, and send a delete to the Neutron REST API for each single resource. So it is only a simple automatism in the Python code of the client, not a specific endpoint on the server side.
See the specific function inside the code here: https://github.com/openstack/python-neutronclient/blob/master/neutronclient/neutron/v2_0/purge.py#L63
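If you only need the effect of purge through API calls, you can reproduce that loop yourself against the normal Neutron REST API. A rough Python sketch of the idea (the endpoint URL and token are placeholders, and a real purge walks more resource types - routers, floating IPs, security groups - and handles ordering and failures, as the linked code does):

    import requests

    NEUTRON = "http://controller:9696/v2.0"         # placeholder Neutron endpoint
    HEADERS = {"X-Auth-Token": "<keystone token>"}  # placeholder Keystone token
    PROJECT_ID = "<project id>"

    # "purge" is just: list everything the project owns, then delete it item by item.
    # Ports are deleted before networks because a network cannot go while ports remain.
    for resource in ("ports", "networks"):
        listing = requests.get(f"{NEUTRON}/{resource}",
                               params={"project_id": PROJECT_ID},
                               headers=HEADERS).json()
        for item in listing[resource]:
            requests.delete(f"{NEUTRON}/{resource}/{item['id']}", headers=HEADERS)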
Based on my experience with OpenStack and its community, I think it was done like this because it was easier to add new code only to the neutron client. To become a new endpoint, this feature would have had to be implemented in neutron, the openstack client and the openstacksdk as well. Each repository has its own team. This purge feature is so small that it was not worth persuading all 4 teams. The more components you try to update for one simple feature, the harder it gets, because whoever wants to bring the feature upstream is responsible for bringing the teams of all required components together, and if even one person within the core teams has a problem with your implementation, you have to start nearly from the beginning. So it can easily take a year or two to bring a cross-component feature like a new endpoint upstream when you are not part of a core team yourself. Bringing the feature only into the neutron client is quite easy compared to a cross-project contribution.
This is at least the reason why I, too, would implement this feature only in the neutron client (or only in the openstack client, if possible) instead of adding a new endpoint, if I were to bring this feature upstream.

What is the point of using /api/v1/(whatever route here) in express?

I've been making APIs for about a year now and I was taught to always use http://IPAddress:Port/api/v1 when building an API with express.js. Is there a specific reason I would want to do that? Is this just denoting that the API is in development? I've recently changed my API to not run on port 3000, so that I can just say http://IPAddress.com/ instead of http://IPAddress.com:3000/api/v1, and it works just fine the new way.
One main reason for versioning an API is that the API may be improved upon later, but doing so might introduce breaking changes (for example, applications that consume the API might stop working because an endpoint has been modified).
So the solution is to allow consumers of the current API (v1) to keep using it until they want to switch, and to release an updated version (v2) for new consumers.
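In Express that usually just means mounting version-specific routers under different path prefixes. A minimal sketch (the routes and response shapes are made up for illustration):

    import express from "express";

    const app = express();

    // v1 keeps serving existing consumers exactly as before
    const v1 = express.Router();
    v1.get("/users", (req, res) => res.json([{ id: 1, name: "Ada" }]));

    // v2 is free to change endpoints or response shapes without breaking v1 clients
    const v2 = express.Router();
    v2.get("/users", (req, res) =>
      res.json({ items: [{ id: 1, fullName: "Ada Lovelace" }] }));

    app.use("/api/v1", v1);
    app.use("/api/v2", v2);

    app.listen(3000);

The /api/v1 prefix has nothing to do with which port the app listens on or whether it is in development; it simply gives you a stable place to hang /api/v2 later.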
Here's some more info on it: https://restfulapi.net/versioning/

Reuse microservices across different projects

We developed a monolithic API to be used as a SAAS.
In the company we also develop custom builds for some customers.
Some of our customers are asking for some features that are already implemented in the monolithic application.
We are thinking about splitting our API into microservices, but our major questions are the following:
Can microservices be reused across different projects?
If we do split, do we create one microservice instance that everybody uses, or do we create an instance per custom build?
E.g.:
project A uses "User" and "Project", so we deploy 2 microservices
project B uses "User", "Project" and "Store", so we deploy 3 microservices
total number of microservices deployed: 5
If we create an instance of each microservice per custom build, won't it be too hard to manage the communication between all the microservices within the same custom build?
Or do we stick with one instance per microservice that everybody uses, and specify the project source?
We are using C# and GraphQL.
We also thought about creating a NuGet package for each component, so each package would contain:
Exposed GraphQL queries / mutations
Its own db
Its own logic
E.g.:
- Api A installs the "User" & "Project" packages
- 3 dbs are instantiated: "Api.A", "Api.A.User", "Api.A.Project"
- Api B installs the "User", "Project" & "Store" packages
- 4 dbs are instantiated: "Api.B", "Api.B.User", "Api.B.Project" & "Api.B.Store"
But does it make sense to do that ?
In my mind it could be very similar to Hangfire: https://www.hangfire.io/
Note that we are currently using AWS Serverless to host our applications.
An important point is that we are a small team of 2-4.
We are very open-minded, so any suggestion is good to hear.
Thank you !
First of all, I would like to say that there is no single right way here; I am providing my point of view based on the way we have already done things, hoping it will guide you in finding a solution best suited to your requirements.
So, to restate your dilemma: you have a base vanilla product, an API SaaS, and customized deployments for some customers as well. But as you have to build a custom deployment for each customer, you are noticing a common pattern wherein a lot of the functionality is repeated across the SaaS for each customer.
Now, assuming I have the requirement right, I would say microservices will provide definite benefits in your case in terms of scaling and customer-specific customization managed by independent teams.
But a lot of this depends on how your business logic is structured and how big your customizations are. Some of the questions that should drive your solution are:
Can you store customer-specific data in a central data store, or does it have to stay at the customers' end? How are your databases going to be structured, and how many of them will there be?
How big are the customizations? Are they cosmetic, or do they affect workflows?
How much cross-communication do you expect across the various services like User, Store, and Project, and is there any communication across A.User - B.User or A.Project - B.Store, etc.?
Now, moving to some of the important things you might want to consider after answering the above questions.
Consideration 1. If the data stores can be allowed to live in a single central place, you can go ahead with a single cluster where all your microservices are deployed. But looking at the data provided I assume you have multiple databases per customer, and I would recommend keeping them separate and not introducing any coupling between them. Thus you may end up with one microservice (or one microservice instance) per customer which talks only to that customer's database.
Consideration 2. As a norm, customization should be separated from the service itself, and every service should have a configuration-loading input which drives the service's behavior. Again, depending on how big your customization is, there is a limit to what configuration can cover, and in those cases I would recommend creating a new service with the customizations built in.
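As a trivial sketch of what such per-customer configuration input might look like (the property names are made up; the "Api.A.User" database name reuses the naming from your question):

    {
      "customer": "A",
      "features": {
        "storeEnabled": false,
        "projectApprovalSteps": 2
      },
      "connectionStrings": {
        "userDb": "Server=customer-a-sql;Database=Api.A.User"
      }
    }

The same service binary reads this at startup and adapts its behavior; only when a customization cannot be expressed this way would you fork a customer-specific service.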
Consideration 3. This is a major factor in deciding the number of microservices you may have, but the boundary of each service should be defined by the domain, for example a User service, a Store service, and a Project service. These are the vanilla services that interact with each other to produce a valid business scenario, and each customer's deployment is just a specialized instance of these services.
OK, now that this is done, let's go over your primary questions...
Can microservices be reused across different projects?
-- Yes, you can, but again it depends on how you have designed the business workflows and the configuration injection.
If we do split, do we create a microservice that everybody uses or do we create an instance per custom build?
-- Creating an instance per custom build would be the ideal scenario, enabling separation of concerns across projects, as we do not want to mix data boundaries and client-specific sensitive configurations. That said, there might be cases where a single shared microservice is what is demanded, but that should be done with caution.
If we create an instance of each microservice per custom build, won't it be too hard to manage the communication between all the microservices within the same custom builds?
-- Communication across microservices is an important factor and is more often than not unavoidable. So, given that you will require some form of cross-microservice communication, you can look at an enterprise service bus or direct API communication, depending on your requirements. But in my opinion this is a well-understood concern.
Or do we stick with one instance per microservice that everybody uses and we specify the project source?
-- I would not recommend this, as the example in your question of a module with its own database injection doesn't sound like a good idea to me. It will lead to a highly coupled system design, and it might also mean that if one service fails, all your customer sites go down; you surely wouldn't want that!

Application Insights strategies for web api serving multiple clients

We have a back-end API running ASP.NET Core, with two front ends: a SPA web site (Vue.js) and a progressive web app (for mobile users). The front ends are basically only client code, and all services are on different domains. We don't use cookies, as authentication uses bearer tokens.
We've been playing with Application Insights for monitoring, but as the documentation is not very descriptive for our situation, I would like to get some more input on the best strategy and possibilities for:
Tracking users and metrics without cookies, from e.g. a button click in the applications to the server call, the Entity Framework/SQL query (I see that this is currently not supported; see "How to enable dependency tracking with Application Insights in an ASP.NET Core project"), the data processing, and the presentation of the result on the client.
Separating calls from mobile and standard web in an easy manner in Application Insights queries. Any way to show this in the standard charts that show up initially would be beneficial.
Making sure that our strategy will also fit situations where other external clients access the API; we should be able to identify these clients easily and see how much load they are creating for the system.
Doing all of the above with the least amount of code.
This might be worth several independent questions if you want specifics on any of them. (And generally your last bullet is always implied, isn't it? :))
What have you tried so far? Most of the "best way for you" kinds of things are going to be opinions, though.
For general answers:
re: tracking users...
If you're already handling user info/auth for other purposes, you'd just set the various context.user.* fields on the incoming request's telemetry context with the info you have. All other telemetry that occurs using that same telemetry context would then inherit whatever user info you already have.
re: separating calls from mobile and standard...
If you're already running these as different services/domains and you are already using the same instrumentation key for both places, then the domain/host info of page views or requests is already there; you can filter/group on it in the portal or write custom queries in the analytics portal to analyze it that way. If you know which site it is regardless of the host, you could add that as a custom property in the telemetry context; you could also do that to avoid dealing with host info at all.
re: external callers via an api
Similarly, if you're already exposing an API and using auth, you should (ideally) already know who the inbound callers are, and you can set that info in custom properties as well.
In general, custom properties (string:string key/value pairs) and custom metrics (string:double key/value pairs) are your friends. You can set them on contexts so that all the events generated in that context inherit the same properties, or you can set them explicitly on individual TrackEvent calls (or any of the other Track* calls) to send specific properties/metrics with any single event.
You can also use telemetry initializers to augment or filter any telemetry that's being generated automatically (like requests or dependencies on the server side, or page views and Ajax dependencies on the client side).
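As a client-side illustration with the Application Insights JavaScript SDK, a telemetry initializer that stamps every item with which front end it came from could look roughly like this (a sketch; the "clientApp" property name and the key are placeholders, and the ASP.NET Core side has an equivalent ITelemetryInitializer hook for tagging inbound API callers):

    import { ApplicationInsights } from "@microsoft/applicationinsights-web";

    const appInsights = new ApplicationInsights({
      config: { instrumentationKey: "<your instrumentation key>" }
    });
    appInsights.loadAppInsights();

    // Custom property attached to everything this client sends, so mobile vs.
    // standard web traffic can be split in queries and charts later.
    appInsights.addTelemetryInitializer((item) => {
      item.data = item.data || {};
      item.data["clientApp"] = "mobile-pwa"; // e.g. "spa-web" in the SPA build
    });

    // Associate telemetry with the signed-in user; storeInCookie defaults to
    // false, so this works without cookies.
    appInsights.setAuthenticatedUserContext("user-1234");

    appInsights.trackPageView();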