I have created a demo microservices application built with Azure Function Apps. For separation of concerns, I have created an API Layer, a Business Layer, and a Data Layer.
The API layer, being the function app, calls the business layer, which implements the business logic, while the data layer implements the logic for storing and retrieving data.
After considerable thought, I have decided to use query-based API versioning for my demo.
The question I have is,
What is the best way to organize my code to facilitate this? Is there any other way to organize my code to accommodate the different versions apart from using different namespaces / repos?
As of now, I've created separate namespaces for each version, but this has led to a lot of code duplication. Also, after getting it reviewed by some of my friends, they raised the concern that if separate namespaces are used, I would be forcing legacy consumers to change references to the new namespace whenever they update, which is not recommended.
Any help would be appreciated.
The simplest way to implement versioning in Azure Functions is through route-based endpoints. The HttpTrigger attribute allows you to define a custom route in which you can embed the expected version.
// Version 1 of get users
[FunctionName(nameof(V1UserList))]
public static IEnumerable<UserModel> V1UserList(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "v1/users")] HttpRequest req, ILogger log)
{
    // ... query and return the users in the v1 shape
}

// Version 2 of get users
[FunctionName(nameof(V2UserList))]
public static IEnumerable<UserModel> V2UserList(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "v2/users")] HttpRequest req, ILogger log)
{
    // ... query and return the users in the v2 shape
}
When deploying each version in isolation, a router component is required to redirect requests to the correct API endpoint.
The router component can be implemented in Azure using different services, such as:
Azure Function Proxies: you can specify endpoints on your function app that are implemented by another resource. You can use these proxies to break a large API into multiple function apps (as in a microservice architecture) while still presenting a single API surface for clients (see the proxies.json sketch below).
API Management: Azure API Management supports importing Azure Function Apps as new APIs or appending them to existing APIs. The process automatically generates a host key in the Azure Function App, which is then assigned to a named value in Azure API Management.
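For illustration, a minimal proxies.json sketch for the Function Proxies option (the function app host names here are hypothetical), presenting one API surface that forwards each version prefix to its own function app:
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "UsersV1": {
      "matchCondition": { "methods": [ "GET" ], "route": "/v1/users" },
      "backendUri": "https://users-v1-app.azurewebsites.net/api/v1/users"
    },
    "UsersV2": {
      "matchCondition": { "methods": [ "GET" ], "route": "/v2/users" },
      "backendUri": "https://users-v2-app.azurewebsites.net/api/v2/users"
    }
  }
}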
Sample code for Versioning APIs in Azure Functions
There are a lot of answers on how to convert an ODataQuery into an Expression or into a Lambda, but what I need is quite the opposite: how to get the OData query string from a LINQ expression.
Basically, what I want is to pass the query through to another service. For example, imagine two services, where the first service does not persist anything and the second service returns the data from a database. Service1 sends the same OData request on to Service2, and it can add more parameters to the original OData request before sending it to Service2.
What I would like:
public IActionResult GetWeatherForecast([FromServices] IWeatherForcastService weatherForcastService)
{
    // IQueryable here
    var summaries = weatherForcastService.GetSummariesIQ();
    var url = OdataMagicHelper.ConvertToUri(summaries);
    var data = RestClient2.Get(url);
    return data;
}
OP clarified the request: generate OData query URLs from within the API itself.
Usually, the queries are so specific or simple that it's not really necessary to generate OData URLs from within the service; the whole point of the service configuration is to publish how the client can call anything, so it's a little redundant or counter-intuitive to return complex resource query URLs from within the service itself.
We can use Simple.OData.Client to build OData URLs:
If the URL that we want to generate is:
{service2}/api/v1/weather_forecast?$select=Description
Then you could use Simple.OData.Client:
string service2Url = "http://localhost:11111/api/v1/";
var client = new ODataClient(service2Url);

// GetCommandTextAsync returns the relative command text,
// here: weather_forecast?$select=Description
var url = await client.For("weather_forecast")
    .Select("Description")
    .GetCommandTextAsync();
Background, for client-side solutions
If your OData service is itself a client of another OData service, then this advice is still relevant.
For full LINQ support you should be using OData Connected Services or Simple.OData.Client. You could roll your own, or use other derivatives of these two, but why go to all that effort to reinvent the wheel?
One of the main drivers for an OData standard-compliant API is that the metadata is published in a standard format that clients can inspect to generate consistent code and/or dynamic queries to interact with the service.
How to choose:
Simple.OData.Client provides a lightweight framework for dynamically querying and submitting data to OData APIs. If you already have classes that model the structure of the API, you can use typed, LINQ-style query syntax; if you do not have a strongly typed model but you do know the structure of the API, you can use either the untyped or the dynamic expression syntax to query the API.
If you do not need full compile-time validation of your queries, or you already have the classes that represent the resources served by the API, then this is a simple enough interface to use.
This library is perfect for use inside your API logic when you need to generate complex URLs in a strongly typed style of code without spinning up a context to manage the connectivity to the server.
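Reusing the client from the weather_forecast example above, a short sketch of the typed and dynamic flavours (the WeatherForecast class here is an assumed client-side model, not something the library provides):
// Typed syntax: compile-time checked against your model class
var typedUrl = await client.For<WeatherForecast>()
    .Select(f => f.Description)
    .GetCommandTextAsync();

// Dynamic syntax: no model class needed, just knowledge of the API structure
var x = ODataDynamic.Expression;
var dynamicUrl = await client.For(x.WeatherForecast)
    .Select(x.Description)
    .GetCommandTextAsync();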
NOTE: Simple.OData.Client can be less practical when developing against a large API that is rapidly evolving or that does not have a strict versioned route policy. If the API changes, you will need to diligently refactor your code to match and rely on extensive regression testing.
OData Connected Services follows a pattern where some or all of the API is modelled in the client with strongly typed client-side proxy interfaces. These are POCO classes that have the structure necessary to send data to and receive data from the server.
The major benefit of this method is that the POCO structures, requests, and responses are validated against the schema of the API. This effectively gives you full IntelliSense support for the API and allows you to explore its structure; the generated code becomes your documentation. It also gives you compile-time checking and runtime safety.
The general development workflow after the API is deployed or updated is:
Download the $metadata document
Select the Operations and Types from the API that you want to model
Generate classes to represent the selected DTO types as defined in the document, i.e. all the inputs and outputs.
Now you can start using the code.
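As a rough sketch of that last step (Container and WeatherForecasts are stand-ins for whatever names the wizard generates from your $metadata; the generated context is assumed to be based on Microsoft.OData.Client):
// using Microsoft.OData.Client;
// Instantiate the generated DataServiceContext against the service root
var context = new Container(new Uri("http://localhost:11111/api/v1/"));

// Strongly typed LINQ query, validated at compile time against the model
var query = (DataServiceQuery<WeatherForecast>)context.WeatherForecasts
    .Where(f => f.Description == "Warm");

var results = await query.ExecuteAsync();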
In VS 2022/19/17 the Connected Services interface provides a simple wizard for establishing the initial connection and for updating (or re-generating) when you need to.
The OData Connected Service or other client-side proxy generation pattern suits projects under these criteria:
The API definition is relatively stable
The API definition is in a state of flux (regenerating the proxies tracks the changes for you)
You consume many endpoints
You don't want to manually code the types to serialize or deserialize payloads
Full disclosure: I prefer the connected-service approach, but I have my own generation scripts. However, if you are trying to generate OData query URLs from inside your API, it's not really an option; it creates a messy recursive dependency... just don't go there.
Connected Services is the low-(manual)-code, lazy approach that is perfect for a stable API: generate once and never do it again. But the Connected Service architecture also suits a rapidly changing API, because it will manage the minute changes to the classes for you; you just need to update your client-side proxy classes more frequently.
Can someone give me, or point me to, a ready-made CRUD example using WebFlux, RSocket and Spring (or Spring Boot)?
I studied the RSocket and WebFlux documentation and wrote my own simple examples, but I would like to see a real CRUD application using the basic methods over RSocket.
I'll be very grateful.
Thanks.
I maintain a Spring/RSocket sample project covering the 4 basic interaction modes of RSocket.
If you only require the request/response mode for simple CRUD operations, check the request-and-response sample and select a transport protocol, TCP or WebSocket.
To implement CRUD operations, just define 4 different routes for them, much like defining RESTful APIs with URIs. You have to plan the naming well, because in RSocket there are no HTTP methods to help you differentiate otherwise identical routes.
For example, on the server side we can declare a @Controller to handle messages like this.
@Controller
class ProfileController {

    @MessageMapping("fetch.profile.{name}")
    public Mono<Profile> fetch(@DestinationVariable String name) {
        // look up the profile by name
    }

    @MessageMapping("create.profile")
    public Mono<Message> create(@Payload CreateProfileRequest p) {
        // create a new profile
    }

    @MessageMapping("update.profile.{name}")
    public Mono<Message> update(@DestinationVariable String name, @Payload UpdateProfileRequest p) {
        // update the existing profile
    }

    @MessageMapping("delete.profile.{name}")
    public Mono<Message> delete(@DestinationVariable String name) {
        // delete the profile by name
    }
}
On the client side, if it is a Spring Boot application, you can use the RSocketRequester to interact with the server like this.
// fetch a profile by name
requester.route("fetch.profile.hantsy").retrieveMono(Profile.class);

// create a new profile
requester.route("create.profile").data(new CreateProfileRequest(...)).retrieveMono(Message.class);

// update the existing profile
requester.route("update.profile.hantsy").data(new UpdateProfileRequest(...)).retrieveMono(Message.class);

// delete a profile
requester.route("delete.profile.hantsy").retrieveMono(Message.class);
Note that route() comes before data() in the RSocketRequester fluent API, and retrieveMono expects the response type.
Of course, if you just build a service exposed over the RSocket protocol, the client can be an rsocket-js project or be built with other languages and frameworks, such as Angular, React, or Android, etc.
Update: I've added a CRUD sample to my RSocket sample codes, and I have published a post about it on Medium.
The new WSO2 API MicroGateway 3.0 lists "Support for composing multiple microservices" as a new feature. I cannot find an example of how to do that.
We are trying a use case with just this type of processing:
An API that queries a back-end database using OData and, if nothing is found, queries another (non-OData) API.
In both cases the result must be transformed (reformatted).
The idea of composing microservices is to expose a set of microservices as a single API using the microgateway. Basically, you can define a set of REST resources and then point them to different microservices. For example:
/list -> microservice 1
/add -> microservice 2
You can define per-resource backends using Swagger (OpenAPI) extensions, as in the sample below and the sketch that follows it:
https://github.com/wso2/product-microgateway/blob/master/samples/per_resource_endpoint.yaml
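The relevant fragment looks roughly like this (the service host names are hypothetical; see the linked sample for a complete definition):
paths:
  "/list":
    get:
      x-wso2-production-endpoints:
        urls:
          - https://service1.example.com/api
  "/add":
    post:
      x-wso2-production-endpoints:
        urls:
          - https://service2.example.com/api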
As of now, the microgateway does not have an out-of-the-box capability to call subsequent endpoints based on the response from a previous endpoint.
But you can transform the response using response interceptors, as explained at the link below:
https://docs.wso2.com/display/MG300/Adding+Interceptors
I have a requirement to build a Web API for an existing system. There are various agencies throughout the country, and each agency has its own database. All databases are on one single server. All databases are identical in structure. All databases have their own username and password. An agency has one or more users. A user can belong to one or more agencies. There is also one special database which contains a table of all users, a table of all agencies, and a user-agency bridge table.
Currently they are using a traditional Windows desktop application. When a user sets up this Windows program, they log in with a username and password. The system then displays for them a list of all the agencies that they belong to (normally just one, but some "power users" can belong to a few). They pick an agency, and then the program connects to the correct database. For the remainder of the session, everything that the user does will be done on that database.
The client wants to create a web app to eventually replace the Windows program (and the two will be running side by side for a while). One developer is creating the front end in Angular 5, and I am developing the API in ASP .Net Core 2.1.
So the web app will function in a similar manner to the Windows app. A user logs in to the web app. The web app, which consumes my Web API, tells the API which user just logged in. The API then checks which agency(s) this user belongs to, using the database that stores that data, and returns the list of those agencies to the web app. There, the user picks an agency. From this point on, the web app will include this Agency ID in the header of all API calls. The API, when it receives a request from the web app, will know which database to use based on the Agency ID in the request header.
Hope that makes sense...
Obviously this means that I will have to change the connection string of the DbContext on the fly, depending on which database the API must talk to. I've looked at doing this on the controller itself, which worked but would involve a lot of copy-and-paste anti-patterns across all my controllers. So I am trying to move this into the DbContext's OnConfiguring method. I was thinking it'd be best to create a DbContext factory that creates the DbContexts with the appropriate connection string. I'm just a bit lost, though. When the web app calls an endpoint on the Web API (say, an HTTP GET request to get a list of accounts), this fires the HttpGet handler in the Accounts controller. That action method then reads the Agency ID header, but this is all happening in the controller. If I call the DbContext factory from the DbContext's OnConfiguring method, something would have to pass the Agency ID (which was read in the controller) to the factory so that the factory knows which connection string to create. I'm trying not to use global variables, to keep my classes loosely coupled.
Unless I have some service running in the pipeline that intercepts all requests, reads the Agency ID header, and somehow has it injected into the DbContext constructor? I have no idea how I would go about doing this...
In summary, I'm a bit lost, and I'm not even sure this is the correct approach. I've looked at some multi-tenant examples, but to be honest I've found them a bit hard to understand. I was hoping to do something simpler for now and, as my knowledge of .NET Core improves, look at improving the code correspondingly.
I am working on something similar to what you describe here. As I am also quite at the start, I have no silver bullet yet, but there is one thing where I could help with your approach:
firstly by doing it on the controller itself, which worked but would involve a lot of copy-and-paste anti-patterns in all my controllers.
I took the approach of having a middleware in charge of selecting the DB connection string. Something like this:
public class TenantIdentifier
{
    private readonly RequestDelegate _next;

    public TenantIdentifier(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext httpContext, GlobalDbContext dbContext)
    {
        // Read the tenant GUID from the custom header and look the tenant
        // up in the global database.
        var tenantGuid = httpContext.Request.Headers["X-Tenant-Guid"].FirstOrDefault();

        if (!string.IsNullOrEmpty(tenantGuid))
        {
            var tenant = dbContext.Tenants.FirstOrDefault(t => t.Guid.ToString() == tenantGuid);
            httpContext.Items["TENANT"] = tenant;
        }

        await _next.Invoke(httpContext);
    }
}

public static class TenantIdentifierExtension
{
    public static IApplicationBuilder UseTenantIdentifier(this IApplicationBuilder app)
    {
        app.UseMiddleware<TenantIdentifier>();
        return app;
    }
}
Here I am using a self-created HTTP header called X-Tenant-Guid to identify the tenant's GUID. I then make a request to the global database to get the connection string for this tenant's db.
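From there, one way to get that connection string into a tenant-specific DbContext is a small scoped factory that reads what the middleware stored. This is only a sketch: AgencyDbContext, Tenant, and ConnectionString are placeholders for your own types.
public class AgencyDbContextFactory
{
    private readonly IHttpContextAccessor _accessor;

    public AgencyDbContextFactory(IHttpContextAccessor accessor)
    {
        _accessor = accessor;
    }

    public AgencyDbContext Create()
    {
        // The middleware above stashed the tenant for this request
        var tenant = (Tenant)_accessor.HttpContext.Items["TENANT"];

        var options = new DbContextOptionsBuilder<AgencyDbContext>()
            .UseSqlServer(tenant.ConnectionString)
            .Options;

        // AgencyDbContext is assumed to have a constructor taking DbContextOptions
        return new AgencyDbContext(options);
    }
}

// Registration in Startup.ConfigureServices:
// services.AddHttpContextAccessor();
// services.AddScoped<AgencyDbContextFactory>();
// services.AddScoped(sp => sp.GetRequiredService<AgencyDbContextFactory>().Create());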
I made the example public here: https://github.com/riscie/ASP.NET-Core-Multi-Tenant-multi-db-Example (it's not yet updated to ASP.NET Core 2.1, but that should not be a problem to do quickly).
I have developed a web application based on the Restlet API. As I add more features over time, I sometimes need to reuse a similar group of REST APIs under different endpoints, each providing a slightly different execution context (like switching between database instances with the same schema). I would like to refactor my code to make the API reusable and attach it at different endpoints. My initial thinking was to design an Application for each reusable API and attach them to the router:
router.attach("/context1", APIApplication.class);
router.attach("/foo/context2", APIApplication.class);
The API should be agnostic of the configuration of the REST endpoint. What is the best way to pass context information (for example, the database instance) to the Application? Is this approach viable and correct? What are the best practices for reusing REST APIs in Restlet? Some code samples would be appreciated to illustrate your answer.
Thanks for your help.
I have seen this basic set-up running using a Component as the top-level object, attaching the sub-applications to the VirtualHost rather than a router, as per this skeleton sample.
public class Component extends org.restlet.Component
{
    public Component() throws Exception
    {
        super();

        // Client protocols
        getClients().add(Protocol.HTTP);

        // Database connection
        final DataSource dataSource = InitialContext.doLookup("java:ds");
        final Configuration configuration = new Configuration(dataSource);

        final VirtualHost host = getDefaultHost();

        // Portal modules
        host.attach("/path1", new FirstApplication());
        host.attach("/path2", new SecondApplication(configuration));
        host.attach("/path3", new ThirdApplication());
        host.attachDefault(new DefaultApplication(configuration));
    }
}
We used a custom Configuration object (basically a POJO) to pass any common config information where required and used it to construct the applications; we used separate 'default' Contexts for each application.
This was coded originally against Restlet 1.1.x and has been upgraded to 2.1.x via 2.0.x. Although it works and is reasonably neat, there may be an even better way to do it in versions 2.1.x or 2.2.x.