Why are NETCONF 1.1 RPC operations defined with the same old XML namespace?

RFC 4741 defines NETCONF 1.0 and RFC 6241 defines NETCONF 1.1. Section 3.1 of both RFCs says:
All NETCONF protocol elements are defined in the following namespace: urn:ietf:params:xml:ns:netconf:base:1.0
My question is: RFC 6241 defines a new RPC, <cancel-commit>, in the same XML namespace. Don't we need a new namespace to identify this new RPC operation? Please clarify the role of the namespace.

No, a new namespace is not needed every time an operation is added to the protocol.
A namespace is just a grouping of names. It exists to prevent name clashes. If some entity (other than the IETF NETCONF WG) decides that "cancel-commit" is an appropriate name for one of their operations, they can use this same name - by placing it in a different namespace and retaining the (local) name. No name clash can occur between the two "cancel-commit" names, since the clash is resolved by their namespace.
Any name can be added to a namespace as long as it does not clash with the local names already in it.
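As a quick illustration (a minimal sketch; the urn:example namespace is made up), two elements with the same local name can coexist in one document because their namespaces differ:

import xml.etree.ElementTree as ET

# Two elements share the local name "cancel-commit" but live in
# different namespaces, so there is no name clash.
doc = """
<root xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0"
      xmlns:acme="urn:example:acme:ops:1.0">
  <nc:cancel-commit/>
  <acme:cancel-commit/>
</root>
"""

root = ET.fromstring(doc)
# ElementTree expands each tag to {namespace-uri}local-name, which is
# exactly how the potential clash is resolved.
for child in root:
    print(child.tag)
# {urn:ietf:params:xml:ns:netconf:base:1.0}cancel-commit
# {urn:example:acme:ops:1.0}cancel-commit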
You can also look at this from the YANG perspective (YANG being the data modeling language for NETCONF). A YANG module is essentially a namespace. Would you publish a new YANG module with a changed namespace statement every time you added an rpc or action schema node to it? No, you would not. (NETCONF 1.0 actually predates YANG, so there is only one revision of ietf-netconf in existence, but you get the idea.)
What defines the version of the protocol (and whether "cancel-commit" is available) is the base NETCONF capability, reported as part of a NETCONF hello message (for 1.1):
urn:ietf:params:netconf:base:1.1
Capabilities are advertised in messages sent by each peer during
session establishment. When the NETCONF session is opened, each peer
(both client and server) MUST send a <hello> element containing a
list of that peer's capabilities. Each peer MUST send at least the
base NETCONF capability, "urn:ietf:params:netconf:base:1.1". A peer
MAY include capabilities for previous NETCONF versions, to indicate
that it supports multiple protocol versions.
(RFC 6241, Section 8.1, "Capabilities Exchange")
Note how this URI differs from the namespace for NETCONF protocol XML elements (no :xml:ns).
The capability for NETCONF 1.0 is urn:ietf:params:netconf:base:1.0.
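To make the distinction concrete, here is a minimal sketch of inspecting a peer's <hello> to decide which protocol version it speaks; only the URIs come from the RFCs, the rest is illustrative:

import xml.etree.ElementTree as ET

NS = "urn:ietf:params:xml:ns:netconf:base:1.0"  # namespace for protocol XML elements
BASE_11 = "urn:ietf:params:netconf:base:1.1"    # capability URI (note: no :xml:ns)
BASE_10 = "urn:ietf:params:netconf:base:1.0"

# A <hello> as a 1.1-capable peer might send it; note that the message
# elements themselves still use the base:1.0 XML namespace.
hello = f"""
<hello xmlns="{NS}">
  <capabilities>
    <capability>{BASE_11}</capability>
    <capability>{BASE_10}</capability>
  </capabilities>
</hello>
"""

caps = {c.text.strip() for c in ET.fromstring(hello).iter(f"{{{NS}}}capability")}
version = "1.1" if BASE_11 in caps else "1.0"
print(version)  # base:1.1 advertised -> the peer speaks NETCONF 1.1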


Is a REST API supposed to support multiple protocols?

Fielding wrote in his blog entry "REST APIs must be hypertext-driven" that:
A REST API should not be dependent on any single communication protocol, though its successful mapping to a given protocol may be dependent on the availability of metadata, choice of methods, etc. In general, any protocol element that uses a URI for identification must allow any URI scheme to be used for the sake of that identification. [Failure here implies that identification is not separated from interaction.]
From my reading of this, any REST API must support more than one protocol in order to be considered RESTful.
Would an application that meets all the other conditions not be considered RESTful if it only supports one protocol, such as HTTP?
From my reading of this, any REST API must support more than one protocol in order to be considered RESTful.
I don't believe that's quite the right way to read it.
Recall that the web is the reference implementation of the REST architectural style. Most of the identifiers will be https/http; but we also find ftp, and mailto, and about. Fielding's point is that, for the purposes of identifying resources, our mechanisms should be scheme agnostic.
any protocol element that uses a URI for identification must allow any URI scheme to be used for the sake of that identification. (emphasis added)
We've got an entire registry full of URI schemes, and for the purposes of identifying resources, they all have equal standing.
Link: <mailto:JMG@example.org>; rel="author"; anchor="https://stackoverflow.com/questions/73654298/is-a-rest-api-supposed-to-support-multiple-protocols"
That's a perfectly fine link relation, indicating that there is an author relationship between two resources.
Fielding doesn't mean that web servers also have to be mail servers; or that browsers need to figure out what you meant when you put a mailto URI in an image tag.
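A small sketch of the point (the URIs are taken from the header above): for the purpose of identification, the scheme is just data, and nothing requires us to be able to interact over it:

from urllib.parse import urlparse

# The link relation from the header above, held purely as data. The two
# URIs use different schemes, but both serve equally well as identifiers;
# nothing here requires us to dereference either of them.
links = {
    "author": "mailto:JMG@example.org",
    "self": "https://stackoverflow.com/questions/73654298/is-a-rest-api-supposed-to-support-multiple-protocols",
}

for rel, uri in links.items():
    print(f"{rel}: scheme={urlparse(uri).scheme}, identifier={uri}")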
It may be useful to review Fielding's follow up essay, Specialization:
My dissertation is written to a certain audience: experts in the fields of software engineering and network protocol design....
Fielding designed REST around HTTP/1.1 for machine-to-machine communication. It has many constraints: https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm AFAIK only HTTP fulfills all of them, and even when you use HTTP you must use it in a specific way (though I don't know much about other protocols). What's certain is that you cannot use REST over WebSockets, because the communication is stateful, which violates the statelessness constraint.

What is the difference between credential-store and secret-key-credential-store

The following table lists which credential types each credential store implementation supports:

Credential Type      | KeyStoreCredentialStore | PropertiesCredentialStore
---------------------|-------------------------|--------------------------
PasswordCredential   | Supported               | Unsupported
KeyPairCredential    | Supported               | Unsupported
SecretKeyCredential  | Supported               | Supported
I still do not quite understand the difference between KeyStoreCredentialStore (credential-store) and PropertiesCredentialStore (secret-key-credential-store) in the WildFly Elytron subsystem. If KeyStoreCredentialStore supports SecretKeyCredential, why would one need the PropertiesCredentialStore type?
The official documentation describes the differences between the credential store implementations in detail. However, it can be confusing for someone new to the topic, so here is a brief description of the differences and their practical benefits based on my experience:
KeyStoreCredentialStore (i.e. credential-store) and PropertiesCredentialStore (i.e. secret-key-credential-store) are the two default credential store implementations that WildFly Elytron contains.
The KeyStoreCredentialStore implementation is backed by a Java KeyStore and is protected using the mechanisms provided by the KeyStore implementations. As listed in the table above, it supports the credential types PasswordCredential, KeyPairCredential, and SecretKeyCredential.
PropertiesCredentialStore is another implementation, dedicated to storing SecretKeyCredential instances in a properties file; its primary purpose is to provide an initial key to a server environment. It does not offer any protection of the credentials it stores, but access to the file can still be restricted at the filesystem level to just the application server process.
In my case, for example, I needed a SecretKeyCredential to encrypt expressions (i.e. clear-text passwords) in the server configuration file, and I added my SecretKey to a password-protected KeyStoreCredentialStore rather than using a PropertiesCredentialStore, as sketched below.
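For reference, here is a rough sketch of that setup as WildFly CLI commands (store, alias, and resolver names are made up, and the exact operations can vary between WildFly versions, so treat this as an outline rather than a recipe):

# Create a password-protected KeyStoreCredentialStore
/subsystem=elytron/credential-store=my-store:add(path=my-store.cs, relative-to=jboss.server.config.dir, credential-reference={clear-text=StorePassword}, create=true)

# Generate a SecretKeyCredential inside it
/subsystem=elytron/credential-store=my-store:generate-secret-key(alias=my-key)

# Wire the key up for expression encryption
/subsystem=elytron/expression=encryption:add(resolvers=[{name=my-resolver, credential-store=my-store, secret-key=my-key}])

# Encrypt a clear-text value; the result can replace the plain password in the config
/subsystem=elytron/expression=encryption:create-expression(resolver=my-resolver, clear-text=MySecretPassword)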

Should the schema be altered if the server does not handle all attributes?

If our SCIM server only handles a small subset of the attributes in the core User schema and ignores most other attributes:
Should the server return a reduced schema that reflects what is supported on the schemas endpoint?
Or should it return the full default core schema definition?
And if the schema is altered to reflect what the server actually supports, should it still be named urn:ietf:params:scim:schemas:core:2.0:User, or does it need to get a different name?
Should the server return a reduced schema that reflects what is supported on the schemas endpoint?
Yes.
Or should it return the full default core schema definition?
No.
Service providers are free to omit attributes and change attribute characteristics, provided doing so does not violate any other requirement outlined in the RFC and does not redefine the attributes. The purpose of the discovery endpoints, including "/Schemas", is to give service providers the ability to specify their own schema definitions.
And if the schema is altered to reflect what the server actually supports, should it still be named urn:ietf:params:scim:schemas:core:2.0:User, or does it need to get a different name?
Provided you meet the above criteria, the schema should continue to be named urn:ietf:params:scim:schemas:core:2.0:User. But, you should use custom resources and/or extensions for new attributes/resources not defined in the RFC.
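For illustration, a reduced urn:ietf:params:scim:schemas:core:2.0:User definition returned from /Schemas might look roughly like this (a minimal sketch holding the JSON as a Python dict; only two supported attributes are shown, and the exact characteristics are up to your implementation):

# A reduced core User schema as it might appear in a /Schemas response.
# Only the attributes this service provider actually supports are
# listed; the schema id stays the same.
reduced_user_schema = {
    "id": "urn:ietf:params:scim:schemas:core:2.0:User",
    "name": "User",
    "description": "User Account (reduced to supported attributes)",
    "attributes": [
        {"name": "userName", "type": "string",
         "multiValued": False, "required": True,
         "mutability": "readWrite", "returned": "default",
         "uniqueness": "server"},
        {"name": "displayName", "type": "string",
         "multiValued": False, "required": False,
         "mutability": "readWrite", "returned": "default"},
    ],
    "meta": {"resourceType": "Schema",
             "location": "/v2/Schemas/urn:ietf:params:scim:schemas:core:2.0:User"},
}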
I agree that the RFC could perhaps be more clear about this, but there are some hints throughout, such as the following from Section 2:
SCIM's support of schema is attribute based,
where each attribute may have different type, mutability,
cardinality, or returnability. Validation of documents and messages
is always performed by an intended receiver, as specified by the SCIM
specifications. Validation is performed by the receiver in the
context of a SCIM protocol request (see [RFC7644]). For example, a
SCIM service provider, upon receiving a request to replace an
existing resource with a replacement JSON object, evaluates each
asserted attribute based on its characteristics as defined in the
relevant schema (e.g., mutability) and decides which attributes may
be replaced or ignored.
Additional references:
https://www.ietf.org/mail-archive/web/scim/current/msg02851.html
SCIM (System for Cross-domain Identity Management) core supported attributes

Best practice: what to use instead of "User-Agent" in HTTP headers to identify an app?

It looks like you can't always set the "User-Agent" header using Ajax (User-Agent is a somewhat reserved header, and some browsers won't let you forge it for security reasons).
When calling my REST service I'd like the caller to give me a clue about who (which application) is using it.
Registration won't be mandatory; it's rather a way to check whether any external (valuable) clients still use my web service when I'd like to shut it down.
So if I can't use "User-Agent", is there a conventional header name to use instead?
X-Application-Id? X-UserAgent?
Is there some doc that lists all those X-* headers?
Depending on your environment, it may or may not be possible to add custom headers to your requests.
From your question, I assume that you are able to set custom headers. The default would be to use the User-Agent header field as described in the official HTTP specification:
The "User-Agent" header field contains information about the user agent originating the request, which is often used by servers to help identify the scope of reported interoperability problems, to work around or tailor responses to avoid particular user agent limitations, and for analytics regarding browser or operating system use. A user agent SHOULD send a User-Agent field in each request unless specifically configured not to do so.
If you choose to add a custom header instead, there are no formal restrictions or recommendations for its name. Note, however, that you should not use an "X-" prefix, a convention deprecated by RFC 6648:
Historically, designers and implementers of application protocols have often distinguished between standardized and unstandardized parameters by prefixing the names of unstandardized parameters with the string "X-" or similar constructs. In practice, that convention causes more problems than it solves. Therefore, this document deprecates the convention for newly defined parameters with textual (as opposed to numerical) names in application protocols.
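Putting both options together, here is a minimal sketch using Python's standard library (the application name, header name, and URL are all made up for illustration; "Application-Id" is not a standard header):

import urllib.request

# A product token in User-Agent is what the field was designed for;
# the "(+URL)" part is a common convention for pointing at the client's
# documentation. "Application-Id" is just an illustrative custom name,
# deliberately without the deprecated "X-" prefix.
req = urllib.request.Request(
    "https://api.example.com/v1/things",
    headers={
        "User-Agent": "AcmeDashboard/2.3 (+https://acme.example/contact)",
        "Application-Id": "acme-dashboard",
    },
)
# response = urllib.request.urlopen(req)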

WCF API Deployment Versioning

I am looking to develop a .NET WCF API. We may need to update the APIs frequently.
How should I manage the deployment of multiple API versions?
Versioning your services is a huge topic with many considerations and guidelines.
For a start, there are different classes of changes you can make: fully-breaking, semi-breaking, and non-breaking.
Non-breaking changes (no change needed to existing clients) include:
changing the internal implementation of the service while keeping the exposed contract unchanged
changing the contract types in a way which does not break clients, for example, by adding fields to your operation return types (most serializers will raise an event rather than throw an exception when encountering an unexpected field on deserialization)
polymorphically exposing new types (using ServiceKnownType attribute)
changing the instance management settings of the service (per-call to singleton, sessionless to sessionful etc, although sometimes this will require configuration or even code changes)
Semi-breaking changes (usually can be handled by reconfiguring the client) include:
changing the location of a service
changing the transport type a service is exposed across (although changing from a bi-directional to a uni-directional transport, e.g. HTTP to MSMQ, can be a fully-breaking change)
changing the availability of the service (through use of service windows etc)
Fully-breaking changes (need new version of the client) include:
changing service operation signatures
changing exposed types in a breaking manner (removing fields, etc)
When you are going to make a semi- or fully-breaking change, you should evaluate the best way of doing this. Do you force all your clients to upgrade to the new version, or do you co-host both versions of the service at different endpoints (as sketched below)? If you choose the latter, how will you control and manage the propagation of the different versioning dependencies this may introduce?
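As a sketch of the co-hosting option mentioned above (service, contract, and address names are made up; this is an outline, not a drop-in configuration), both versions can be exposed side by side:

<!-- web.config sketch: two versions of the service co-hosted at
     different relative endpoints; existing clients keep using "v1". -->
<system.serviceModel>
  <services>
    <service name="MyCompany.OrderService">
      <endpoint address="v1" binding="basicHttpBinding"
                contract="MyCompany.IOrderServiceV1" />
      <endpoint address="v2" binding="basicHttpBinding"
                contract="MyCompany.IOrderServiceV2" />
    </service>
  </services>
</system.serviceModel>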
Taken to an extreme, you could look into dynamic endpoint resolution, whereby the client resolves the suitable endpoint to call at runtime using some kind of resolver service.
There's good reading about this here:
http://msdn.microsoft.com/en-us/library/ms731060.aspx