Is Spring Sleuth baggage going to be deprecated? If yes, what is the alternative?
Is it a must to have a configuration for the propagated field names starting with version 2?
Where did you get the information that baggage is deprecated? In the documentation we've stated that you need to explicitly state what the baggage keys should be, not that baggage is deprecated. Also, in the Sleuth migration guide (https://github.com/spring-cloud/spring-cloud-sleuth/wiki/Spring-Cloud-Sleuth-2.0-Migration-Guide#baggage-needs-to-be-whitelisted) you can see:
In Sleuth we used to create headers that had the baggage prefix. You can do it via 2 properties:
spring.sleuth.baggage-keys - those keys will be prefixed with baggage- and baggage_. That way we are backward compatible with previous versions of Sleuth.
spring.sleuth.propagation-keys - those keys will be whitelisted as they are. No prefix will be set.
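For example, in application.properties (the key names my-baggage-key and my-propagation-key are made up for this example):

# Propagated with the baggage- / baggage_ prefix (backward compatible)
spring.sleuth.baggage-keys=my-baggage-key

# Propagated as-is, with no prefix
spring.sleuth.propagation-keys=my-propagation-key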
RFC 4741 defines NETCONF 1.0 and RFC 6241 defines NETCONF 1.1. Section 3.1 of these RFCs says:
All NETCONF protocol elements are defined in the following namespace: urn:ietf:params:xml:ns:netconf:base:1.0
My query is: RFC 6241 has defined a new RPC, <cancel-commit>, with the same XML namespace. Don't we need a new namespace to identify this new RPC operation? Please clarify.
Please also clarify the role of the namespace.
No, a new namespace is not needed every time an operation is added to the protocol.
A namespace is just a grouping of names. It exists to prevent name clashes. If some entity (other than the IETF NETCONF WG) decides that "cancel-commit" is an appropriate name for one of their operations, they can use this same name by placing it in a different namespace and retaining the (local) name. No clash can occur between the two "cancel-commit" names, since the clash is resolved by their namespaces.
Any name can be added to a namespace as long as it does not clash with a local name already present in it.
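For illustration, both elements below use the local name cancel-commit, yet no clash occurs because their namespaces differ (the second namespace URI is made up for this example):

<cancel-commit xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"/>
<cancel-commit xmlns="urn:example:my-operations:1.0"/>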
You can also look at this from the YANG perspective (YANG being the data modeling language for NETCONF). A YANG module is essentially a namespace. Would you publish a new YANG module with a changed namespace statement every time you add an rpc or action schema node to it? No, you would not. (NETCONF 1.0 actually predates YANG, so there is only one revision of the ietf-netconf module in existence rather than one per protocol version, but you get the idea.)
What defines the version of the protocol (and whether "cancel-commit" is available) is the base NETCONF capability, reported as part of a NETCONF hello message (for 1.1):
urn:ietf:params:netconf:base:1.1
Capabilities are advertised in messages sent by each peer during session establishment. When the NETCONF session is opened, each peer (both client and server) MUST send a <hello> element containing a list of that peer's capabilities. Each peer MUST send at least the base NETCONF capability, "urn:ietf:params:netconf:base:1.1". A peer MAY include capabilities for previous NETCONF versions, to indicate that it supports multiple protocol versions.
RFC 6241, Section 8.1 (Capabilities Exchange)
Note how this URI differs from the namespace for NETCONF protocol XML elements (no :xml:ns).
The capability for NETCONF 1.0 is urn:ietf:params:netconf:base:1.0.
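For illustration, a <hello> message from a peer that supports both protocol versions could look like this (note that the <hello> element itself still lives in the :xml:ns: namespace):

<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <capabilities>
    <capability>urn:ietf:params:netconf:base:1.1</capability>
    <capability>urn:ietf:params:netconf:base:1.0</capability>
  </capabilities>
</hello>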
I am working on APIs in .NET Core 2.2 and I'd like to version my API.
I'm looking for some solutions except:
Routing method (api/v1/controller, api/v2/controller)
Routing method using the ApiVersioning package (api/v{version:apiVersion}/controller)
I want to know if there are any other solutions where I don't have to change the route attribute. I might be completely wrong, but can I use middleware? For example, a map delegate that routes incoming requests (based on the v1 or v2 they carry) to the right controller?
I'd highly appreciate any feedback and suggestions.
You can use the ApiVersioning package and configure it so that it selects the version based on an HTTP header.
services.AddApiVersioning(c =>
{
    c.ApiVersionReader = new HeaderApiVersionReader("api-version");
});
And then you can use the [ApiVersion] attribute on your controllers.
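A minimal sketch, assuming the header-based configuration above (the controller and action names are illustrative):

[ApiVersion("1.0")]
[ApiVersion("2.0")]
[Route("api/[controller]")]
[ApiController]
public class OrdersController : ControllerBase
{
    // Selected when the request carries the header "api-version: 1.0"
    [HttpGet]
    public IActionResult GetV1() => Ok("orders v1.0");

    // Selected when the request carries the header "api-version: 2.0"
    [HttpGet]
    [MapToApiVersion("2.0")]
    public IActionResult GetV2() => Ok("orders v2.0");
}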
Can you use custom middleware? Yes; however, be advised that endpoint selection is typically much more involved. The routing system provides extension and customization points, which is exactly what API Versioning uses. Creating your own versioning solution will be a lot more involved than adding a route template parameter.
If you're going to version by URL segment, then API Versioning requires that you use the ApiVersionRouteConstraint. The default name is registered as apiVersion, but you can change it via ApiVersioningOptions.RouteConstraintName. The route parameter name itself is user-defined. You can use whatever name you want, but version is common and clear in meaning.
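For example (a sketch; the controller name is illustrative):

// The apiVersion constraint tells API Versioning where the version
// lives in the template; the literal "v" is not part of the version
[ApiVersion("1.0")]
[Route("api/v{version:apiVersion}/[controller]")]
[ApiController]
public class OrdersController : ControllerBase
{
    [HttpGet]
    public IActionResult Get() => Ok();
}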
Why is a route constraint required at all? API Versioning needs to resolve an API version from the request, but it has no knowledge or understanding of your route templates. For example, how would ASP.NET know that the route parameter id in values/{id:int} has to be an integer without the int constraint? Short answer: it doesn't. The API version works the same way. You can compose the route template however you want, and API Versioning knows how and where to extract the value using the route constraint.
What API Versioning absolutely does not do is magic string parsing. This is a very intentional design decision. No attempt is made to auto-magically extract or parse the API version from the request URL. It's also important to note that the v prefix is very common in URL segment versioning, but it's not part of the API version. The route constraint approach means API Versioning doesn't have to worry about a v prefix. You can include it in your route template as a literal, if you want to.
If the issue or concern is having to repeatedly include the API version constraint in your route templates, it really isn't any different from including the api/ prefix in every template (which I presume you are doing). It is fairly easy to stay DRY by using one of the following, which could include the prefix api/v{version:apiVersion} for all API routes:
Extend the RouteAttribute and prepend all templates with the prefix; this is the simplest (see the sketch after this list)
Roll your own attribute and implement IRouteTemplateProvider
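A minimal sketch of the first option (the name ApiRouteAttribute is made up for this example):

// Prepends the versioned prefix so individual controllers
// don't have to repeat the version segment in every template
public class ApiRouteAttribute : RouteAttribute
{
    public ApiRouteAttribute(string template)
        : base("api/v{version:apiVersion}/" + template)
    {
    }
}

[ApiVersion("1.0")]
[ApiRoute("[controller]")] // effective template: api/v{version:apiVersion}/[controller]
public class OrdersController : ControllerBase
{
}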
Ultimately, this requirement is yet another consequence of versioning by URL segment, which is why I don't recommend it. URL segment versioning is the least RESTful of all versioning methods (if you care about that) because it violates the Uniform Interface constraint. All other out-of-the-box supported versioning methods do not have this issue.
For clarity, the out-of-the-box supported methods are:
By query string (default)
By header
By media type (most RESTful)
By URL segment
Composition of n methods (ex: query string + header)
You can also create your own method by implementing the IApiVersionReader.
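For example, composing the query string and header methods (a sketch; the header name is illustrative):

services.AddApiVersioning(options =>
{
    // Accept the API version from either the query string or a header
    options.ApiVersionReader = ApiVersionReader.Combine(
        new QueryStringApiVersionReader("api-version"),
        new HeaderApiVersionReader("x-api-version"));
});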
Attributes are just one way that API versions can be applied. In other words, you don't have to use the [ApiVersion] attribute if you don't want to. You can also use the conventions fluent API or create your own IControllerConvention. The VersionByNamespaceConvention is an example of such a convention that derives the API version from the containing .NET namespace. The methods by which you can create and map your own conventions are nearly limitless.
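For example, applying versions with the conventions fluent API instead of attributes (a sketch; the controller types are illustrative):

services.AddApiVersioning(options =>
{
    // Equivalent to decorating the controllers with [ApiVersion]
    options.Conventions.Controller<OrdersController>()
                       .HasApiVersion(new ApiVersion(1, 0));
    options.Conventions.Controller<Orders2Controller>()
                       .HasApiVersion(new ApiVersion(2, 0));
});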
According to the docs here:
https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/
Is it possible to create a custom DDL RabbitMQ connector to be used in the PyFlink Table API 1.11? How?
First, you need to implement your custom connector in Java, based on the interfaces Flink provides. Then you need to use the API or command-line parameters to reference the resulting jar; see:
https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/python/common_questions.html#adding-jar-files
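For example, from the Python Table API the jar can be referenced via the pipeline.jars configuration (a sketch; the jar path is illustrative):

from pyflink.table import EnvironmentSettings, StreamTableEnvironment

env_settings = EnvironmentSettings.new_instance().in_streaming_mode().use_blink_planner().build()
t_env = StreamTableEnvironment.create(environment_settings=env_settings)

# Make the custom connector jar available to the job
t_env.get_config().get_configuration().set_string(
    "pipeline.jars", "file:///path/to/rabbitmq-connector.jar")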
I really like the idea of using Javadoc comments for auto-generating REST Docs!
Huge parts of our REST API are automatically generated by Spring Data REST (by adding @RepositoryRestResource to repositories). It would be great if REST Docs could also be generated for these; that would be a very high degree of automation.
But unfortunately most "auto" snippets are empty (e.g. auto-response-fields.adoc only contains a list of links[] attributes). I guess the reason is that the REST controllers are created dynamically by Spring Data REST. Currently I do not see how to reuse the Javadoc comments for them.
Is there any way to auto-generate REST Docs for such REST APIs that are provided by Spring Data REST?
It would even be helpful to manually tell Spring Auto REST Docs which classes are used in requests and responses instead of letting it discover it statically - is that possible?
We also add HATEOAS "_links" to most response resources (by providing ResourceProcessors as beans). These links contain "title"s, which are used by Spring REST Docs if we list all of them with HypermediaDocumentation.linkWithRel(...). This is a bit redundant, and it would be nice if all the _links with "title"s could be processed automatically. (But this can be done by listing all of them in some extra code, so it is not as bad as with Spring Data REST.)
If necessary, I could also create an example project for what I am talking about.
Answer to the question whether one can manually tell Spring Auto REST Docs which classes to use for the documentation:
Spring Auto REST Docs allows you to specify the request and response classes to use for the documentation. This can be done with requestBodyAsType and responseBodyAsType. In a test it looks like this:
.andDo(document("folderName",
    requestFields().requestBodyAsType(Command.class),
    responseFields().responseBodyAsType(CommandResult.class)));
This is from a test in the example project.
I have set up Presto with the MySQL connector enabled.
Now I want to write my own connector for a special type of data source.
I have already written a custom connector for SQLAlchemy, but this time I am facing dozens of Java classes. Which base classes can be used as a good starting point? Which interfaces must be implemented? Maybe the RawFile connector?
Thank you in advance.
See the developer documentation: https://prestodb.io/docs/current/develop/connectors.html. The example HTTP connector is a good starting point.
You need to implement ConnectorFactory, Connector, ConnectorMetadata, ConnectorSplitManager, ConnectorHandleResolver, and either ConnectorRecordSetProvider or ConnectorPageSourceProvider at a minimum; other classes may be needed depending on what you want to do.
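As a rough sketch of the entry point (based on the presto-spi interfaces; ExampleConnector and ExampleHandleResolver stand in for classes you would implement yourself):

import java.util.Map;

import com.facebook.presto.spi.ConnectorHandleResolver;
import com.facebook.presto.spi.connector.Connector;
import com.facebook.presto.spi.connector.ConnectorContext;
import com.facebook.presto.spi.connector.ConnectorFactory;

public class ExampleConnectorFactory implements ConnectorFactory
{
    @Override
    public String getName()
    {
        // Referenced from the catalog file as: connector.name=example
        return "example";
    }

    @Override
    public ConnectorHandleResolver getHandleResolver()
    {
        return new ExampleHandleResolver(); // hypothetical
    }

    @Override
    public Connector create(String catalogName, Map<String, String> config, ConnectorContext context)
    {
        // Wire up ConnectorMetadata, ConnectorSplitManager and the
        // record set / page source provider inside this Connector
        return new ExampleConnector(config); // hypothetical
    }
}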