Swagger JaxRs combining generated sources with existing model

I am using Wildfly 9.1, swagger-jaxrs 1.5.3 and swagger-codegen-maven-plugin 2.1.3
We are trying to combine an API (with its own model and services) defined by Swagger with our database model, which is generated by our own generator.
Our generator already adds the annotations Swagger needs to recognize the entities as API resources.
We are now trying to generate the model defined by Swagger at compile time (swagger-codegen-maven-plugin), which works nicely as long as we do not need to use classes from our other model.
The two problems I have are:
When writing the Swagger spec I use to generate the files for the new API, I am not able to reference the objects defined by our database model.
If I add these objects to the Swagger model to work around this (either by adding a dummy or by generating the .json from the existing entities), the classes generated by Swagger obviously expect them to be in the same package.
I am searching for a smart way to combine both approaches without losing the ability to develop the API by editing the Swagger spec.

How to convert a LinqExpression into OData query URI

There are a lot of answers on how to convert an OData query into an Expression or into a lambda, but what I need is quite the opposite: how to get the OData query string from a LINQ expression.
Basically, I want to forward the query to another service. For example, with two services, where your first service does not persist anything and your second service is the one that returns the data from a database, Service1 sends the same OData request on to Service2 and can add more parameters to the original OData request.
What I would like:
public IActionResult GetWeatherForecast([FromServices] IWeatherForcastService weatherForcastService)
{
//IQueryable here
var summaries = weatherForcastService.GetSummariesIQ();
var url = OdataMagicHelper.ConvertToUri(summaries);
var data = RestClient2.Get(url);
return data;
}
OP Clarified the request: generate OData query URLs from within the API itself.
Usually the queries are so specific or simple that it's not really necessary to generate OData URLs from within the service. The whole point of the service configuration is to publish how the client could call anything, so it's a little redundant or counter-intuitive to return complex resource query URLs from within the service itself.
We can use Simple.OData.Client to build OData URLs:
If the URL that we want to generate is:
{service2}/api/v1/weather_forecast?$select=Description
Then you could use Simple.OData.Client:
string service2Url = "http://localhost:11111/api/v1/";
var client = new ODataClient(service2Url);
var url = await client.For("weather_forecast")
    .Select("Description")
    .GetCommandTextAsync();
Background, for client-side solutions
If your OData service is itself a client of another OData service, then this advice is still relevant.
For full LINQ support you should be using OData Connected Services or Simple.OData.Client. You could roll your own, or use other derivatives of these two, but why go to all that effort to reinvent the wheel?
One of the main drivers for an OData standard-compliant API is that the metadata is published in a standard format that clients can inspect to generate consistent code and/or dynamic queries to interact with the service.
How to choose:
Simple.OData.Client provides a lightweight framework for dynamically querying and submitting data to OData APIs. If you already have classes that model the structure of the API, you can use typed LINQ-style query syntax; if you do not have a strongly typed model but you do know the structure of the API, you can use either the untyped or the dynamic expression syntax to query the API.
If you do not need full compile-time validation of your queries, or you already have the classes that represent the resources served by the API, then this is a simple enough interface to use.
This library is perfect for use inside your API logic if you need to generate complex URLs in a strongly typed style of code without creating a context to manage the connectivity to the server.
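To make the two styles concrete, here is a minimal sketch of both against the same hypothetical weather_forecast entity set; the WeatherForecast class and its Description property are assumptions, not part of the original question:
var client = new ODataClient("http://localhost:11111/api/v1/");

// typed syntax, assuming a client-side WeatherForecast class exists
var typedUrl = await client.For<WeatherForecast>("weather_forecast")
    .Select(w => w.Description)
    .GetCommandTextAsync();

// dynamic syntax, when no model classes exist
var x = ODataDynamic.Expression;
var dynamicUrl = await client.For(x.weather_forecast)
    .Select(x.Description)
    .GetCommandTextAsync();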
NOTE: Simple.OData.Client is sometimes less practical when developing against a large API that is rapidly evolving or that does not have a strict versioned route policy. If the API changes you will need to diligently refactor your code to match and will have to rely on extensive regression testing.
OData Connected Services follows a pattern where some or all of the API is modelled in the client with strongly typed client side proxy interfaces. These are POCO classes that have the structure necessary to send to and receive data from the server.
The major benefit of this method is that the POCO structures, requests, and responses are validated against the schema of the API. This effectively gives you full IntelliSense support for the API and allows you to explore its structure; the generated code becomes your documentation. It also gives you compile-time checking and runtime safety.
The general development workflow after the API is deployed or updated is:
Download the $metadata document
Select the Operations and Types from the API that you want to model
Generate classes to represent the selected DTO Types as defined in the document, so all the inputs and outputs.
Now you can start using the code.
In VS 2022/19/17 the Connected Services interface provides a simple wizard for establishing the initial connection and for updating (or re-generating) when you need to.
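As an illustration, a hedged sketch of what consuming the generated proxy typically looks like; the Container and WeatherForecast names are assumptions, since the actual class names come from the $metadata document you generate against:
var context = new Container(new Uri("http://localhost:11111/api/v1/"));

// LINQ over the generated DataServiceContext is translated to an OData URL
var query = (DataServiceQuery<WeatherForecast>)context.WeatherForecast
    .Where(w => w.Description != null);

Console.WriteLine(query.RequestUri);  // inspect the URL the proxy will call
var forecasts = await query.ExecuteAsync();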
The OData Connected Service or other client side proxy generation pattern suits projects under these criteria:
The API definition is relatively stable
The API definition is in a state of flux
You consume many endpoints
You don't want to manually code the types to serialize or deserialize payloads
Full disclosure: I prefer the connected service approach, but I have my own generation scripts. However, if you are trying to generate OData query URLs from inside your API, it's not really an option; it creates a messy recursive dependency... just don't go there.
Connected Services is the low-(manual)-code and lazy approach that is perfect for a stable API: generate once and never do it again. But the Connected Service architecture is also well suited to a rapidly changing API, because it will manage the minute changes to the classes for you; you just need to update your client-side proxy classes more frequently.

API Versioning in dotnet core

I am working on APIs in dotnet core 2.2 and I'd like to version my API.
I'm looking for some solutions except:
Routing method (api/v1/controller, api/v2/controller)
Routing method using the APIVersioning package (api/v{version:apiVersion}/controller)
I want to know if there are any other solutions in which I don't have to change the route attribute. I might be completely wrong, but can I use middleware? Like a map delegate that maps the incoming requests (based on the v1 or v2 they carry) to the right controller?
I'd highly appreciate any feedback and suggestions.
You can use the APIVersioning package and configure it so it selects the version based on the HTTP Header.
services.AddApiVersioning(c =>
{
    c.ApiVersionReader = new HeaderApiVersionReader("api-version");
});
And then you can use the [ApiVersion] attribute on your controllers.
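For example, a minimal sketch assuming the header reader configured above; the controller name and payloads are illustrative. Note the route template contains no version at all, which is what was asked for:
[ApiController]
[Route("api/[controller]")]
[ApiVersion("1.0")]
[ApiVersion("2.0")]
public class WeatherForecastController : ControllerBase
{
    // matches requests carrying the header "api-version: 1.0"
    [HttpGet]
    public string Get() => "v1 payload";

    // matches requests carrying the header "api-version: 2.0"
    [HttpGet, MapToApiVersion("2.0")]
    public string GetV2() => "v2 payload";
}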
Can you use custom middleware? Yes; however, be advised that endpoint selection is typically much more involved. The routing system provides extension and customization points, which is exactly what API Versioning does for you. Creating your own versioning solution will be a lot more involved than adding a route template parameter.
If you're going to version by URL segment, then API Versioning requires that you use the ApiVersionRouteConstraint. The default name is registered as apiVersion, but you can change it via ApiVersioningOptions.RouteConstraintName. The route parameter name itself is user-defined. You can use whatever name you want, but version is common and clear in meaning.
Why is a route constraint required at all? API Versioning needs to resolve an API version from the request, but it has no knowledge or understanding of your route templates. For example, how would ASP.NET know that the route parameter id in values/{id:int} has to be an integer without the int constraint? Short answer - it doesn't. The API version works the same way. You can compose the route template however you want and API Versioning knows how and where to extract the value using the route constraint. What API Versioning absolutely does not do is magic string parsing. This is a very intentional design decision. There is no attempt made by API Versioning to try and auto-magically extract or parse the API version value from the request URL. It's also important to note that the v prefix is very common for URL segment versioning, but it's not part of the API version. The approach of using a route constraint negates the need for API Versioning to worry about a v prefix. You can include it in your route template as a literal, if you want to.
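For contrast, a short sketch of URL segment versioning with the constraint in place (controller and values are illustrative):
[ApiVersion("1.0")]
[Route("api/v{version:apiVersion}/[controller]")]
public class ValuesController : ControllerBase
{
    // reached via GET api/v1/values; the apiVersion constraint tells
    // API Versioning where to extract the version from the URL
    [HttpGet]
    public string Get() => "values v1";
}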
If the issue or concern is having to repeatedly include the API version constraint in your route templates, it really isn't any different from including the api/ prefix in every template (which I presume you are doing). It is fairly easy to remain DRY by using one of the following, either of which could include the prefix api/v{version:apiVersion} for all API routes:
Extend the RouteAttribute and prepend all templates with the prefix; this is the simplest (see the sketch after this list)
Roll your own attribute and implement IRouteTemplateProvider
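A sketch of the first option, with an illustrative attribute name; deriving from RouteAttribute lets you prepend the shared prefix once:
public sealed class VersionedRouteAttribute : RouteAttribute
{
    // every template passed in is automatically prefixed
    public VersionedRouteAttribute(string template)
        : base("api/v{version:apiVersion}/" + template) { }
}

// usage: [VersionedRoute("[controller]")] instead of the full template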
Ultimately, this requirement is yet another consequence of versioning by URL segment, which is why I don't recommend it. URL segment versioning is the least RESTful of all versioning methods (if you care about that) because it violates the Uniform Interface constraint. All other out-of-the-box supported versioning methods do not have this issue.
For clarity, the out-of-the-box supported methods are:
By query string (default)
By header
By media type (most RESTful)
By URL segment
Composition of n methods (ex: query string + header)
You can also create your own method by implementing the IApiVersionReader.
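For example, a hedged sketch of composing two of the built-in readers (the header name x-api-version is an arbitrary choice):
services.AddApiVersioning(options =>
{
    // accept the version from either the query string or a custom header
    options.ApiVersionReader = ApiVersionReader.Combine(
        new QueryStringApiVersionReader("api-version"),
        new HeaderApiVersionReader("x-api-version"));
});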
Attributes are just one way that API versions can be applied. In other words, you don't have to use the [ApiVersion] attribute if you don't want to. You can also use the conventions fluent API or create your own IControllerConvention. The VersionByNamespaceConvention is an example of such a convention that derives the API version from the containing .NET namespace. The methods by which you can create and map your own conventions are nearly limitless.
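A brief sketch of the conventions approach (the controller name is illustrative):
services.AddApiVersioning(options =>
{
    // fluent conventions instead of [ApiVersion] attributes
    options.Conventions.Controller<WeatherForecastController>()
        .HasApiVersion(new ApiVersion(1, 0));

    // or derive versions from namespaces such as MyApp.Controllers.V2
    options.Conventions.Add(new VersionByNamespaceConvention());
});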

How to create a custom NiFi Controller Service?

I am trying to learn how to create a custom NiFi controller service. To start off, I thought of mimicking the DBCPConnectionPool controller service by simply copying its original source code. To implement this, I generated a Maven archetype from "nifi-service-bundle-archetype" and got the following project structure
However, when I generated the archetype from 'nifi-processor-bundle-archetype', I got the following structure:
I understand that in the case of a processor I simply need to write my code in MyProcessor.java under the nifi-ListDbTableDemo-processors folder and then create a NAR file out of it. But in the case of a controller service, I have 4 folders generated. I can see two Java files, i.e.
StandardMyService.java, present under the nifi-DbcpServiceDemo folder
MyService.java, present under the nifi-DbcpServiceDemo-api folder
Now, why are there two Java files generated in the case of a custom controller service, when there was only one Java file generated in the case of a custom processor? Also, since I am trying to mimic the DBCPConnectionPool service, into which of the two Java files should I copy its original source code?
Please guide me from scratch through the steps I need to follow to create a custom service equivalent to the DBCPConnectionPool service.
MyService.java under nifi-DbcpServiceDemo-api is an interface, which is implemented by StandardMyService.java under nifi-DbcpServiceDemo. Once the implementation is done, you have to use nifi-DbcpServiceDemo-api as a dependency in the processor bundle that needs to work with this custom controller service.
The reason why controller services are implemented this way is:
We hide the actual implementation from the processor bundle because it does not need to depend on the implementation.
Tomorrow you might write a new controller service implementation, say StandardMyServiceTwo, which again implements MyService, because only the implementation varies from StandardMyService while the other members remain the same and can be shared. This new controller service can be introduced transparently, without making any changes to the processor bundle.
Example:
The best example is the record reader/writer controller services. If you look at the nifi-record-serialization-services-bundle in NiFi, there are different implementations for serializing records in the JSON, Grok, Avro, and CSV data formats, but they all implement one API: nifi-record-serialization-service-api. Hence the processors that want to use a Record Reader or Record Writer can have the API as their dependency instead of the actual implementations.
So tomorrow you can add a new implementation in the record-serialization-services-bundle for a new data format without touching anything in the processors bundle.
For your reference, please take a look at the following links, which will help you write a custom controller service from scratch:
http://www.nifi.rocks/developing-a-custom-apache-nifi-controller-service/
https://github.com/bbende/nifi-dependency-example

Replacement Implementation for Provider Model in ASP.NET 5

I have existing code that uses the System.Configuration.Provider namespace for provider collections to plug in various implementations of interfaces, where multiple implementations exist in the collection and are selected by name according to various logic.
This namespace is not available in .NET Core, so I'm looking for advice on how to implement a replacement solution that will work with the .NET Core framework.
I know that if I were just trying to plug in one implementation, I could do it with dependency injection. But I'm looking for a way to have multiple implementations available to choose from by name.
My current implementation of the provider model populates the provider collection from a folder where you can drop in XML files that declare the types of the actual implementations, so new implementations of the provider can be loaded from an assembly just by adding another file to the folder. I'd like to keep the logic as similar as possible to that, but I'm open to JSON files rather than XML.
I am thinking I could load a collection of implementations of the interface from JSON files in Startup and use dependency injection to provide the collection where needed; or perhaps an interface that can fetch the collection would be lighter weight and allow getting them when they are needed rather than at startup.
Is that the right approach? Anyone have better ideas or done something similar?
This is done more generically in the new framework than with an abstract base class like ProviderBase. You can register multiple implementations of the same service with the DI framework and get them all, either by asking for an IEnumerable<> of the type you registered them as or by using the GetServices<> extension method. Once you get your services, however, you'll need some other way of distinguishing them, such as a property indicating a unique name, which is the pattern the ASP.NET team has been following.
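A minimal sketch of that pattern; IMessageProvider and the implementation names are invented for illustration:
public interface IMessageProvider
{
    string Name { get; }   // distinguishes the registered implementations
    string GetMessage();
}

// register every implementation under the same service type
services.AddSingleton<IMessageProvider, EmailMessageProvider>();
services.AddSingleton<IMessageProvider, SmsMessageProvider>();

// a consumer receives all of them and selects one by name
public class Notifier
{
    private readonly IMessageProvider _provider;

    public Notifier(IEnumerable<IMessageProvider> providers)
    {
        _provider = providers.First(p => p.Name == "email");
    }
}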
You can see an example in the Identity framework v3 with the Token Providers.
IUserTokenProvider<T>, the "provider"
UserManager, which consumes the token providers

n-tiers, Linq and WCF

We have an n-tier architecture:
-a WCF service that communicates with the database and handles all the business logic,
-an ASP.NET MVC website that communicates with the WCF service.
Here is a scenario of data serialization-deserialization from the database to the HTML view of a 'guitar':
-Guitar_1, a class generated by LINQ,
-Guitar_2, the DataContract exposed by the WCF service and consumed by the ASP.NET MVC website,
-Guitar_3, the model passed to the view.
When an end user wants to retrieve a guitar, Guitar_1 is transformed into Guitar_2 and then into Guitar_3. That's really not a problem, but if the end user requests a list of guitars, this whole process is repeated for each guitar (a loop).
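A sketch of what that per-item conversion typically looks like (the property names are invented):
// one hand-written mapping per layer boundary
public static Guitar_2 ToContract(Guitar_1 entity) =>
    new Guitar_2 { Id = entity.Id, Brand = entity.Brand };

// the list case from the question: the conversion runs once per element
var contracts = guitarsFromDb.Select(ToContract).ToList();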
If I had to handle all the serialization-deserialization work programmatically, I'd have only one class per layer. It could still be done, for example on the WCF project, by annotating the LINQ class with 'DataContract'/'DataMember', but if I refresh my database model, all my annotations disappear (same case on the ASP.NET MVC project: refreshing the service reference deletes all the added code).
Also, is it really more productive to use these automatic serializers? The time taken to write a serializer-deserializer is about the same as annotating classes (DataContract/DataMember) and handling the conversion of class Guitar_1 to Guitar_2... Add to that the loss of performance (loop and conversion)...
What do you guys think? Do some of you code as in the old days because of this?
UPDATE: As suggested by 'Abhijit Kadam', I used partial classes when consuming a web service. However, I found a better solution when using Linq2SQL: POCO classes.
If the main concern is that the model classes created by the framework are automatically regenerated and your changes, like annotations on such classes, are wiped out, then you can use partial classes; info here. If the auto-generated class is Employee, then in a separate file create a partial class Employee and include in this partial definition the members you want to add or annotate. This class will not be wiped out and regenerated. When you compile the code, the resultant Employee class will be the combination of the original Employee class and the partially defined Employee class.
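A minimal sketch of the partial-class split (the Employee members are invented):
// File 1: generated by the designer; wiped out on every refresh
public partial class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// File 2: written by hand; the generator never touches it
public partial class Employee
{
    // members added here survive regeneration of the generated half
    public string DisplayName => Name + " (#" + Id + ")";
}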
Also, converting from class Guitar_1 to Guitar_2 is OK; at times we have to do such things to meet specific requirements. I prefer JSON data to be transferred across the network wire, e.g. from the WCF service to the MVC web app, and then the browser fetches the JSON data from the MVC app. I then use frameworks like jsrender or knockout to render the data as HTML on the client side (browser). JSON is readable and compact, and JavaScript and JavaScript libraries love JSON.