Propagating Baggage using B3 / Zipkin and Spring Cloud Sleuth - spring-cloud-sleuth

I understand that with the advent of Spring Cloud Sleuth 3.x.x, you need to specify the name of baggage headers to be propagated to remote components like this:
# in application.yaml
spring:
  sleuth:
    baggage:
      remote-fields: <name of the header>
For custom headers I think this is fine. I was wondering what this looks like for B3 / Zipkin if baggage should be transferred.
I remember a discussion about B3 not specifying baggage but that there was also a kind of "convention" that fields prefixed with baggage- were forwarded.
Question 1: Is this correct, and is it still the case, or was that removed in Sleuth 3.x.x?
Question 2: If it was removed, does that mean that for B3 / Zipkin the baggage headers an application forwards will always have to be explicitly specified using the remote-fields property?
Question 3: How are standardized baggage headers treated by Sleuth, e.g. W3C Trace Context Baggage. Are they forwarded automatically, or do they need to be explicitly specified using remote-fields?
Thanks!

Question 1: Is this correct, and is it still the case, or was that removed in Sleuth 3.x.x?
We don't prefix the baggage with baggage- or baggage_ anymore. Whatever you set as baggage (e.g. foo) will be set as is in the headers.
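For example (a minimal sketch; foo is just a placeholder field name), the following makes Sleuth propagate a header literally named foo to remote services, with no baggage- prefix added:
# application.yaml
spring:
  sleuth:
    baggage:
      remote-fields: foo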
Question 2: If it was removed, does that mean that for B3 / Zipkin the baggage headers an application forwards will always have to be explicitly specified using the remote-fields property?
Yes.
Question 3: How are standardized baggage headers treated by Sleuth, e.g. W3C Trace Context Baggage. Are they forwarded automatically, or do they need to be explicitly specified using remote-fields?
They are forwarded automatically. Check this part of the docs for more information https://docs.spring.io/spring-cloud-sleuth/docs/current/reference/html/howto.html#how-to-change-context-propagation
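For reference, the propagation format itself is selected with a property along these lines (a sketch based on the linked how-to; listing W3C enables W3C Trace Context propagation, including its baggage header, alongside B3):
# application.yaml
spring:
  sleuth:
    propagation:
      type: W3C,B3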

Related

Default spring.cloud.stream.rabbit.* properties to apply to multiple channels?

With spring cloud stream, you can avoid redundant properties for each individual channel, by specifying "default" properties.
For example, if I have 2 channels bound to the same destination/exchange, I can do:
spring.cloud.stream.default.destination=myExchange
spring.cloud.stream.bindings.myChannel1.group=queue1
spring.cloud.stream.bindings.myChannel2.group=queue2
And queue1 and queue2 will both be bound to myExchange.
That works as documented, and I do it for some properties.
But... I'd like to do the same for the RabbitMQ binding properties.
For example, if I want DLQ for all of my consumers/queues, do something like:
spring.cloud.stream.rabbit.default.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.default.consumer.dlq-ttl=10000
spring.cloud.stream.rabbit.default.consumer.dlq-dead-letter-exchange=
Otherwise, I have to specify those same 3 lines for every channel.
Is there any way to do this? I've tried several different permutations to no avail.
BTW, I'm on version 1.2.1.RELEASE of spring-cloud-starter-stream-rabbit.
Thanks.
It is supported. Please see https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/#binding-properties section of the user guide
To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of spring.cloud.stream.default.<property>=<value>
According to the Spring Cloud Stream documentation, this has been possible since version 2.1.0.RELEASE.
See 9.2 Binding Properties.
To avoid repeating extended binding properties, this format should be used:
spring.cloud.stream.<binder-type>.default.<producer|consumer>.<property>=<value>
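Concretely, for the DLQ settings from the question, that would look something like this (a sketch assuming Spring Cloud Stream 2.1.0.RELEASE or later, per the documentation quoted above):
# defaults applied to every Rabbit consumer binding
spring.cloud.stream.rabbit.default.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.default.consumer.dlq-ttl=10000
# a single binding can still override the default
spring.cloud.stream.rabbit.bindings.myChannel1.consumer.dlq-ttl=30000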
Unfortunately, so far I couldn't make it work. Did anyone get it working?
It is not supported yet.
See 3.2. RabbitMQ Consumer Properties:
The following properties are available for Rabbit consumers only and must be prefixed with spring.cloud.stream.rabbit.bindings.<channelName>.consumer.
That includes the ttl/dlq* properties.
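On the 1.2.1.RELEASE version mentioned in the question, that means repeating the settings per binding, along these lines (a sketch reusing the channel names from the question):
spring.cloud.stream.rabbit.bindings.myChannel1.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.myChannel1.consumer.dlq-ttl=10000
spring.cloud.stream.rabbit.bindings.myChannel2.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.myChannel2.consumer.dlq-ttl=10000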

Google cloud load balancer custom http header is missing

While using the Google Cloud HTTPS Load Balancer we hit the following bug. We couldn't find any information on it.
We have a custom HTTP header in our request:
X-<Company name>-abcde. If we are working directly against the server, all is good, but once we go through the load balancer, our custom header is missing. We didn't find any reference in the documentation that there is a need to whitelist our headers or anything like that.
Why is my custom header not being transferred to my backend server when working through the Google Cloud Load Balancer? And how can I make it work?
Thanks
Data
After a lot of testing, these are the results I've come up with:
The Google Cloud HTTPS Load Balancer does transfer custom HTTP headers to the backend service.
However, it changes them to lower-case.
So, in your case, X-Custom-Header is transformed to x-custom-header.
Solution
Simply change your code to read the lower-case version of your custom HTTP header. This is a simple fix, but one which may not be supported in the long-term by Google (there's not a word on this in Google's documentation so it's subject to change with no notice).
Petition Google to change this idiosyncratic behaviour or at the very least mention it clearly in their documentation.
A little extra
As far as I know, the RFC 2047 which specified the X- prefix for custom HTTP headers and propagated the pseudo-standard of a capital letter for each word has been deprecated and replaced by RFC 6648 which recommends against the X- prefix in general and mentions nothing regarding the rest of the words in the custom HTTP header key name. If I were Google, I would change this behaviour to pass custom HTTP headers as is and let developers deal with the strings as they've set them.
The RFC (RFC 7230) for HTTP/1.1 Message Syntax and Routing says that header fields have a case-insensitive field name. If you're relying on case to match the header that doesn't align with the RFC.
Way back in the day I looked through either the Tomcat or Jetty source and they worked with everything as a .toLower().
Go has a CanonicalMIMEHeaderKey where it'll format the headers in a common way to be sure everything is on the same page.
Python still harkens back to the RFC822 (hg.python.org/cpython/file/2.7/Lib/rfc822.py#l211) days, but it forces a .lower() on headers to standardize.
Basically though what the GCP HTTP(S) Load Balancer is doing is acceptable as far as the RFC is concerned.
This is most likely an application bug.
As other answers have stated, HTTP header names are case-insensitive. In my experience, every time headers appear to be case-sensitive, it is because there is a request wrapper somewhere in the application call stack.
Request wrappers like this are common (usually necessary) in Java servlet filters. It's a common, newbie mistake to use case-sensitive matching (e.g. a regular Java HashMap<String, T>()) for the header names in the wrapper.
That's where I would start looking for your bug.
A reasonable way to create a Java Map<String, T> that is both case insensitive and that doesn't modify the keys is to use new TreeMap<String, T>( String.CASE_INSENSITIVE_ORDER ).
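A minimal sketch of that approach (class name and values are illustrative, not from the original answer):
import java.util.Map;
import java.util.TreeMap;

public class CaseInsensitiveHeaders {
    public static void main(String[] args) {
        // case-insensitive lookup that leaves the stored key strings untouched
        Map<String, String> headers = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
        headers.put("X-Custom-Header", "some-value");        // as set by the client
        System.out.println(headers.get("x-custom-header"));  // found despite the lower-cased name
    }
}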

RabbitMQ headers exchange routing: match all listed headers

I have a lot of consumers with different sets of features, so I want to route each message to the correct one. I decided to use a headers exchange and specify the necessary features in the message headers, but here I ran into an obstacle.
In RabbitMQ there is a binding argument x-match which may only take the values any and all (https://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2013-July/028575.html). Each consumer, when it binds, has a big list of available features (most of them are true/false, but there are also strings), which I specify as binding arguments along with the x-match argument. But when I publish a message I want to specify only the necessary headers, for instance feature-1 and feature-7 with specific values. I don't even know about all of the available consumer features when publishing the message. And here is the problem: if I omit some of the binding arguments when x-match==all, the message won't be routed, and if I set x-match to any, a single matching header is enough to route the message, even if another header's value doesn't match.
To give you an example, let's consider a consumer with the features: country=US, f1=true, f2=true, f3=false.
Scenario 1: I attach (create a binding for) its queue to the headers exchange with these arguments and x-match set to all. Then I publish a message and I need country to be "US" and f2 to be true. I don't know anything about the other possible consumer features. The message won't be routed, because not all headers match exactly.
Scenario 2: Another use case is binding the queue with the x-match argument set to any. If I again specify country to be "US" and f2 to be true, the message will be routed, but it will also be (incorrectly) routed if f2 is set to false and only country matches.
So probably I misunderstand something, but I am looking for the easiest solution: how to route a message to the right consumer based on a list of necessary features. I would like something like an all-specified value for the x-match argument, which wouldn't require listing all available features, but would require all of the given headers to match exactly.
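For reference, here is roughly what the binding and publish above look like with the plain Java client (a sketch; the exchange, queue and feature names are illustrative):
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.HashMap;
import java.util.Map;

public class HeadersExchangeSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        try (Connection conn = factory.newConnection()) {
            Channel ch = conn.createChannel();
            ch.exchangeDeclare("features", "headers");
            ch.queueDeclare("consumer-1", true, false, false, null);

            // Scenario 1: bind with the consumer's full feature list and x-match=all
            Map<String, Object> bindArgs = new HashMap<>();
            bindArgs.put("x-match", "all"); // "any" would give Scenario 2
            bindArgs.put("country", "US");
            bindArgs.put("f1", true);
            bindArgs.put("f2", true);
            bindArgs.put("f3", false);
            ch.queueBind("consumer-1", "features", "", bindArgs);

            // Publish with only the headers the producer knows about.
            // With x-match=all this message is NOT routed, because f1 and f3
            // are absent from the message headers.
            Map<String, Object> msgHeaders = new HashMap<>();
            msgHeaders.put("country", "US");
            msgHeaders.put("f2", true);
            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .headers(msgHeaders)
                    .build();
            ch.basicPublish("features", "", props, "payload".getBytes());
        }
    }
}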
Indeed, only writing my own exchange type can help for my purpose. If I succeed in Erlang, I'll report back here.
Update
I managed to write my own plugin which fits my purposes. It is probably not perfect, but it works well for me for now.
https://github.com/senseysensor/rabbitmq-x-features-exchange

Content-Range header - allowed units?

This is related to:
How should I implement a COUNT verb in my RESTful web service? , Paging in a Rest Collection
and Using the HTTP Range Header with a range specifier other than bytes?
Actually I think the -1 rated answer here is correct: https://stackoverflow.com/a/1434701/1237617
Generally, answers say that you can use custom units, citing sec 3.12:
range-unit = bytes-unit | other-range-unit
bytes-unit = "bytes"
other-range-unit = token
However, when you read the HTTP spec, please notice that the production rules are:
Content-Range = "Content-Range" ":" content-range-spec
content-range-spec = byte-content-range-spec
byte-content-range-spec = bytes-unit SP
byte-range-resp-spec "/"
( instance-length | "*" )
The header spec only references bytes-unit from sec 3.12, not range-unit, so I think it's actually against the spec to use custom units here.
Am I missing something, or is the popular answer wrong?
EDIT: Since this probably isn't clear, the gist of my question is:
rfc2616 sec 14.16 only references bytes-unit. It never mentions range-unit, so the range-unit production is not relevant for Content-Range, and thus only byte units can be used.
I think this addresses my concerns best, although I needed some time to understand it (plus I wanted to make sure that there is something wrong with the wording):
This reflects the fact that, apparently, the first set of grammar rules has been specifically made for parsing and the second one for producing HTTP requests
thanks to elgaton
The spec, as being revised, allows custom range units. See HTTPbis Part 5, Section 2.
If you read the HTTP/1.1 RFC, section 3.12, you will see that:
The only range unit defined by HTTP/1.1 is "bytes". HTTP/1.1 implementations MAY ignore ranges specified using other units.
So, the other-range-unit token has been introduced only to make servers more "liberal" when accepting. This reflects the fact that, apparently, the first set of grammar rules has been specifically made for parsing and the second one for producing HTTP requests, so that servers could accept even invalid requests (they will be simply ignored) and clients would use only the universally-accepted bytes unit.
Therefore, I personally recommend that you:
use only the bytes unit when acting as a client, and
accept other units (discarding the Content-Range header if they are invalid) when acting as a server.
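As an illustration of the client-side recommendation (a minimal sketch; the numbers are arbitrary), a server answering a range request with the only universally supported unit would respond with something like:
HTTP/1.1 206 Partial Content
Content-Range: bytes 0-499/1234
Content-Length: 500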
This is a purely personal opinion, but I think it is fairly consistent with how other HTTP extensions (custom methods or headers) are used. Here is how I read it: Yes I can use custom range units and no, I shouldn't submit a bug report when it gets ignored when passing through firewalls, web proxies, and other intermediaries. I conform to the HTTP spec when I'm sending it and they conform to HTTP when they ignore it. WebDAV uses HTTP extensions correctly, IMO, but rarely works over the Internet for exactly this reason. As I said, a personal opinion only.
Apparently it's OK to use custom units, because:
This reflects the fact that, apparently, the first set of grammar rules has been specifically made for parsing and the second one for producing HTTP requests

BizTalk Dynamic WCF-WSHttp Send Port reverting to Http Adapter

I'm trying to send a message to the WCF-WSHttp adapter with a dynamic send port from an orchestration, but BizTalk seems to always be reverting back to the HTTP adapter.
According to the docs that I've been able to find, I should just need to set the transport type from my expression shape to get BizTalk to use the WCF-WSHttp adapter, and I AM, but it still seems to be reverting. Below is an example of my expression shape that's setting the properties (as you can see, I've tried both Microsoft.XLANGs.BaseTypes.TransportType and BTS.OutboundTransportType):
Body(BTS.OutboundTransportType) = "WCF-WSHttp";
SendMessagePort(Microsoft.XLANGs.BaseTypes.Address) = System.String.Format("{0}/Accept{1}", "http://myserver/myservice/myservice.svc/Accept{0}", messageInfo.MessageType);
SendMessagePort(Microsoft.XLANGs.BaseTypes.TransportType) = "WCF-WSHttp";
You probably are, Craig :-)
When using a dynamic send port, BizTalk uses the "scheme" part of the URL to decide which adapter to use.
When your URL starts with "http://" or "https://", BizTalk will always use the HTTP adapter.
Similarly, URLs beginning with ftp:// will use the FTP adapter.
The same works for custom adapters as well - when you install the adapter's configuration you register the moniker to use; for example, the open source Scheduled Task adapter uses schedule:// (I believe).
Using dynamic send ports with WCF is slightly more involved than with most other adapters because of the various configuration that's required, but you can find a detailed explanation here; just scroll down to the "Dynamic Send Ports" section about halfway down.
I ended up resolving my issue, but am still unsure of the reasoning for the behavior I saw.
The Expression shape mentioned in the question was located inside of an Atomic Scope. Once the Orchestration exited the scope containing the Expression shape, the Transport Type was reset back to its original value. Moving the Expression out of the atomic scope resolved the issue, in that the TransportType was set correctly.
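For anyone hitting the same behaviour, the working arrangement was roughly the following (a sketch reusing the property names from the question; the address is illustrative), with the Expression shape placed outside the atomic scope so the values survive until the send:
// Expression shape outside the atomic scope
SendMessagePort(Microsoft.XLANGs.BaseTypes.Address) = "http://myserver/myservice/myservice.svc";
SendMessagePort(Microsoft.XLANGs.BaseTypes.TransportType) = "WCF-WSHttp";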