What is MS-CV in a response header?

I am looking for the definition of the MS-CV response header, but unfortunately I cannot find a good one.
In one of my HTTP responses I saw this: MS-CV: wrtqvas27jtgu2oq.0

MS-CV stands for Microsoft Correlation Vector. It is described here: https://github.com/microsoft/CorrelationVector
It looks like it's mostly used for distributed tracing.
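If you just want to correlate your own logs with it, here is a minimal Java sketch that reads the header from a response with the JDK HTTP client. The endpoint URL is a placeholder, and the "extend by appending .0" step reflects my reading of the linked CorrelationVector spec; the official client libraries linked from that repository handle this properly for you.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MsCvDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/resource")) // placeholder endpoint
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // Header name lookup is case-insensitive, so "MS-CV" and "ms-cv" both match.
        String cv = response.headers().firstValue("MS-CV").orElse(null);
        System.out.println("Correlation vector: " + cv); // e.g. wrtqvas27jtgu2oq.0

        if (cv != null) {
            // A component that forwards the request extends the vector by
            // appending ".0" (illustrative only; prefer the official library).
            String extended = cv + ".0";
            System.out.println("Extended for downstream calls: " + extended);
        }
    }
}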

Related

How do I design a REST call that is just a data transformation?

I am designing my first REST API.
Suppose I have a (SOAP) web service that takes MyData1 and returns MyData2.
It is a pure function with no side effects, for example:
MyData2 myData2 = transform(myData1);
transform() does not change the state of the server. My question is: what REST call do I use? MyData1 can be large, so I will need to put it in the body of the request, which seems to require POST. However, POST seems to be meant for changing server state rather than returning something, which is not what transform() does. So POST might not be correct? Is there a specific REST technique for pure functions that take and return something, or should I just use POST, return the result in the response body, and not worry about it?
I think POST is the way to go here, because of the sheer fact that you need to pass data in the body. The GET method is used when you need to retrieve information (in the form of an entity), identified by the Request-URI. In short, that means that when processing a GET request, a server is only required to examine the Request-URI and Host header field, and nothing else.
See the pertinent section of the HTTP specification for details.
It is okay to use POST
POST serves many useful purposes in HTTP, including the general purpose of “this action isn’t worth standardizing.”
It's not a great answer, but it's the right answer. The real issue here is that HTTP, which is a protocol for the transfer of documents over a network, isn't a great fit for document transformation.
If you imagine this idea on the web, how would it work? Well, you'd click through a bunch of links to get to some web form, and that web form would allow you to specify the source data (including perhaps attaching a file), and then submitting the form would send everything to the server, and you'd get the transformed representation back as the response.
But - because of the payload, you would end up using POST, which means that general purpose components wouldn't have the data available to tell them that the request was safe.
You could look into the WebDAV specifications to see if SEARCH or REPORT is a satisfactory fit -- every time I've looked into them for myself I've decided against using them (no, I don't want an HTTP file server).
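For what it's worth, the client side of such a call can be as plain as the sketch below. The /transform path and the JSON media types are made-up placeholders; the point is simply that POST carries the input in the request body and the transformed representation comes back in the response body.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TransformClient {
    public static void main(String[] args) throws Exception {
        String myData1 = "{\"value\": 42}"; // the (possibly large) input document

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/transform")) // placeholder
                .header("Content-Type", "application/json")
                .header("Accept", "application/json")
                // POST because the input travels in the body; nothing on the
                // server changes, and the result is returned in the response.
                .POST(HttpRequest.BodyPublishers.ofString(myData1))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        String myData2 = response.body();
        System.out.println(myData2);
    }
}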

Should a REST API return error 400 on a request that should not have a query?

Is there any standard in HTTP regarding how a request should be handled when an endpoint/request URL that is not supposed to receive a query gets one from the requester anyway?
If such a standard exists, what does it state? If not, is there any other related standard or statement about how request queries are supposed to be handled in general within the HTTP standard?
Thank you.
By default, when you receive more parameters than expected and you don't need them, you don't care.
Is there any standard in HTTP regarding how a request should be handled when an endpoint/request URL that is not supposed to receive a query gets one from the requester anyway?
From the perspective of HTTP, the entire target-uri in the request line is the identifier for the resource:
POST /123?456
In this example, the target-uri is /123?456.
RFC 7231 defines 404 Not Found
The 404 (Not Found) status code indicates that the origin server did not find a current representation for the target resource or is not willing to disclose that one exists.
Which is to say, if you want to draw the client's attention to the spelling of the target-uri, 404 Not Found is the way to do that.
That said, there are no "rules" about how the server interprets the target-uri. If you want to ignore the query part, and provide the same representation that is used for /123 then that's fine.
when you receive more parameters than expected and you don't need them, you don't care
In messaging, ignoring unrecognized parts of the message can be useful when trying to ensure that your schema is forwards/backwards compatible; old clients using the old specification can talk to new servers using the new specification, and vice versa.
The same principle is described in the header specification for HTTP
Other recipients SHOULD ignore unrecognized header fields. These requirements allow HTTP's functionality to be enhanced without requiring prior update of deployed intermediaries.
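To make the "ignore what you don't need" option concrete, here is a minimal server sketch using the JDK's built-in com.sun.net.httpserver (the path and port are made up): the query part of the target-uri is simply never inspected, so /123 and /123?456 produce the same representation.

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class IgnoreQueryDemo {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/123", exchange -> {
            // exchange.getRequestURI().getQuery() would return "456" here,
            // but we never look at it, so the query part is effectively ignored.
            byte[] body = "representation of /123".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
    }
}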

Google Cloud Load Balancer custom HTTP header is missing

While using Google Cloud HTTPS Load Balancer we hit the following bug. Couldn't find any information on it.
We have a custom http header in our request:
X-<Company name>-abcde. If we are working directly against the server all is good, but once we go through the load balancer, our custom header is missing. We didn't find any reference in the documentation saying that we need to whitelist our headers or anything like that.
Why is my custom header not being transferred to my backend server when working through the Google Cloud Load Balancer, and how can I make it work?
Thanks
Data
After a lot of testing, these are the results I've come up with:
The Google Cloud HTTPS Load Balancer does transfer custom HTTP headers to the backend service.
However, it changes them to lower-case.
So, in your case, X-Custom-Header is transformed to x-custom-header.
Solution
Simply change your code to read the lower-case version of your custom HTTP header. This is a simple fix, but one which may not be supported in the long-term by Google (there's not a word on this in Google's documentation so it's subject to change with no notice).
Petition Google to change this idiosyncratic behaviour or at the very least mention it clearly in their documentation.
A little extra
As far as I know, the old convention of prefixing custom HTTP headers with X- (along with the pseudo-standard of capitalizing each word) has been deprecated by RFC 6648, which recommends against the X- prefix in general and says nothing about the casing of the rest of the custom HTTP header key name. If I were Google, I would change this behaviour to pass custom HTTP headers as-is and let developers deal with the strings as they've set them.
The RFC (RFC 7230) for HTTP/1.1 Message Syntax and Routing says that header fields have a case-insensitive field name. If you're relying on case to match the header, that doesn't align with the RFC.
Way back in the day I looked through either the Tomcat or Jetty source, and they worked with everything as a .toLowerCase().
Go has a CanonicalMIMEHeaderKey where it'll format the headers in a common way to be sure everything is on the same page.
Python still harkens back to the RFC822 (hg.python.org/cpython/file/2.7/Lib/rfc822.py#l211) days, but it forces a .lower() on headers to standardize.
Basically though what the GCP HTTP(S) Load Balancer is doing is acceptable as far as the RFC is concerned.
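A quick way to see what "case-insensitive" buys you in Java: the JDK's java.net.http.HttpHeaders matches names regardless of case, so the lower-casing done by a proxy or load balancer is invisible as long as you go through such an API rather than comparing raw strings (the header name and value below are made up).

import java.net.http.HttpHeaders;
import java.util.List;
import java.util.Map;

public class HeaderCaseDemo {
    public static void main(String[] args) {
        // Simulate what a proxy might hand you: the header name lower-cased.
        HttpHeaders headers = HttpHeaders.of(
                Map.of("x-custom-header", List.of("abc123")),
                (name, value) -> true);

        // Lookup is case-insensitive, so the original spelling still matches.
        System.out.println(headers.firstValue("X-Custom-Header").orElse("missing"));
        // prints: abc123
    }
}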
This is most likely an application bug.
As other answers have stated, HTTP header names are case-insensitive. In my experience, every time headers appear to be case-sensitive, it is because there is a request wrapper somewhere in the application call stack.
Request wrappers like this are common (usually necessary) in Java servlet filters. It's a common newbie mistake to use case-sensitive matching (e.g. a regular Java HashMap<String, T>()) for the header names in the wrapper.
That's where I would start looking for your bug.
A reasonable way to create a Java Map<String, T> that is both case insensitive and that doesn't modify the keys is to use new TreeMap<String, T>( String.CASE_INSENSITIVE_ORDER ).
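Here is a minimal sketch of such a wrapper (using the javax.servlet API; the class and method names are my own), in which the extra headers live in a case-insensitive TreeMap so a backend asking for X-Custom-Header still finds x-custom-header:

import java.util.Collections;
import java.util.Enumeration;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;

public class CaseInsensitiveHeaderRequest extends HttpServletRequestWrapper {

    // CASE_INSENSITIVE_ORDER keeps the original key spelling but matches
    // lookups regardless of case.
    private final Map<String, String> extraHeaders =
            new TreeMap<>(String.CASE_INSENSITIVE_ORDER);

    public CaseInsensitiveHeaderRequest(HttpServletRequest request) {
        super(request);
    }

    public void addHeader(String name, String value) {
        extraHeaders.put(name, value);
    }

    @Override
    public String getHeader(String name) {
        String value = extraHeaders.get(name);
        return value != null ? value : super.getHeader(name);
    }

    @Override
    public Enumeration<String> getHeaderNames() {
        Set<String> names = new LinkedHashSet<>(extraHeaders.keySet());
        names.addAll(Collections.list(super.getHeaderNames()));
        return Collections.enumeration(names);
    }
}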

API design - what's the best way to respond useful error messages with 400s?

I'm designing an API to be as HTTP compliant as I can. This includes sending specific response codes back and using the Accept header to specify versions and response types.
I understand this may appear subjective, but I'm sure there's a conventional way of doing this. I have a set of response types that the API supports, along with a vendor-specific mime type to specify the type and version.
Currently, when the client specifies a non-existent version or type, I'm just returning a 400 Bad Request with an empty body; however, I want to return a useful error message. In the event that I don't know the response type, I feel a bit dirty responding with plain text (or defaulting to JSON). Is there a header I'm missing, or some convention that I should follow? I want to get this one right from the outset.
Thanks, and my best,
Jamie
Try the status code of 406 Not Acceptable.
http://support.microsoft.com/kb/943891/
This is a list of HTTP sub-codes supported by Microsoft IIS. I've found this page damn handy since it gives you some insight into the messages they use to handle various errors. There are some HTTP sub-codes that refer to headers.
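As a sketch of what the 406 approach might look like in practice (the vendor media type, port, and error body below are my own placeholders, not a standard): if the Accept header doesn't name a supported type/version, answer 406 and put a short machine-readable explanation in a default format such as JSON.

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class NotAcceptableDemo {
    // Hypothetical vendor media type for version 1 of the API.
    private static final String SUPPORTED = "application/vnd.example.v1+json";

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/things", exchange -> {
            String accept = exchange.getRequestHeaders().getFirst("Accept");
            int status;
            String contentType;
            byte[] body;
            if (accept == null || !accept.contains(SUPPORTED)) {
                // Unknown version or type requested: 406 with a default JSON body
                // that tells the client what it could have asked for.
                status = 406;
                contentType = "application/json";
                body = ("{\"error\":\"unsupported media type or version\","
                        + "\"supported\":[\"" + SUPPORTED + "\"]}")
                        .getBytes(StandardCharsets.UTF_8);
            } else {
                status = 200;
                contentType = SUPPORTED;
                body = "{\"things\":[]}".getBytes(StandardCharsets.UTF_8);
            }
            exchange.getResponseHeaders().set("Content-Type", contentType);
            exchange.sendResponseHeaders(status, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
    }
}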

Content-Range header - allowed units?

This is related to:
How should I implement a COUNT verb in my RESTful web service?
Paging in a Rest Collection
Using the HTTP Range Header with a range specifier other than bytes?
Actually, I think the -1 rated answer here is correct: https://stackoverflow.com/a/1434701/1237617
Generally, answers say that you can use custom units, citing sec 3.12:
range-unit = bytes-unit | other-range-unit
bytes-unit = "bytes"
other-range-unit = token
However, when you read the HTTP spec, please notice that the production rules for the header are:
Content-Range = "Content-Range" ":" content-range-spec
content-range-spec = byte-content-range-spec
byte-content-range-spec = bytes-unit SP
                          byte-range-resp-spec "/"
                          ( instance-length | "*" )
The header spec only references bytes-unit from sec 3.12, not range-units, so I think that actually it's against the spec to use custom units here.
Am I missing something, or is the popular answer wrong?
EDIT: Since this probably isn't clear, the gist of my question is:
RFC 2616 sec 14.16 only references bytes-unit. It never mentions range-unit, so the range-unit production is not relevant for Content-Range, and thus only bytes-unit can be used.
I think this addresses my concerns best, although I needed some time to understand it (plus I wanted to make sure that there is something wrong with the wording):
This reflects the fact that, apparently, the first set of grammar rules has been specifically made for parsing and the second one for producing HTTP requests
thanks to elgaton
The spec, as it is being revised, allows custom range units. See HTTPbis Part 5, Section 2.
If you read the HTTP/1.1 RFC, section 3.12, you will see that:
The only range unit defined by HTTP/1.1 is "bytes". HTTP/1.1 implementations MAY ignore ranges specified using other units.
So, the other-range-unit token has been introduced only to make servers more "liberal" when accepting. This reflects the fact that, apparently, the first set of grammar rules has been specifically made for parsing and the second one for producing HTTP requests, so that servers could accept even invalid requests (they will be simply ignored) and clients would use only the universally-accepted bytes unit.
Therefore, I personally recommend that you:
use only the bytes unit when acting as a client, and
accept other units (discarding the Content-Range header if they are invalid) when acting as a server.
This is a purely personal opinion, but I think it is fairly consistent with how other HTTP extensions (custom methods or headers) are used. Here is how I read it: yes, I can use custom range units, and no, I shouldn't submit a bug report when they get ignored when passing through firewalls, web proxies, and other intermediaries. I conform to the HTTP spec when I'm sending them, and the intermediaries conform to HTTP when they ignore them. WebDAV uses HTTP extensions correctly, IMO, but rarely works over the Internet for exactly this reason. As I said, a personal opinion only.
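In the spirit of the server-side recommendation above, here is a minimal sketch (the class name, regex-based parsing, and lenient policy are my own choices, not from the RFC) that honours Content-Range only when the unit is bytes and treats anything else as if the header were absent:

import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class ContentRangeParser {
    // Matches e.g. "bytes 0-499/1234" or "bytes 0-499/*".
    private static final Pattern BYTES_RANGE =
            Pattern.compile("^bytes\\s+(\\d+)-(\\d+)/(\\d+|\\*)$");

    public record ByteRange(long first, long last, Long completeLength) {}

    public static Optional<ByteRange> parse(String headerValue) {
        if (headerValue == null) {
            return Optional.empty();
        }
        Matcher m = BYTES_RANGE.matcher(headerValue.trim());
        if (!m.matches()) {
            // Unknown unit (e.g. "pages 1-2/10") or malformed value:
            // ignore it, as recommended above.
            return Optional.empty();
        }
        Long length = "*".equals(m.group(3)) ? null : Long.valueOf(m.group(3));
        return Optional.of(new ByteRange(
                Long.parseLong(m.group(1)), Long.parseLong(m.group(2)), length));
    }
}

So ContentRangeParser.parse("bytes 0-499/1234") yields a range, while parse("pages 1-2/10") simply comes back empty.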
Apparently it's OK to use custom units, because:
This reflects the fact that, apparently, the first set of grammar rules has been specifically made for parsing and the second one for producing HTTP requests