RabbitMQ headers exchange routing: match all listed headers

I have a lot of consumers with different sets of features, so I want to route each message to the correct one for processing. I decided to use a headers exchange and specify the necessary features in the message headers, but here I ran into an obstacle.
In RabbitMQ there is a binding argument x-match which may only take the values any and all (https://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2013-July/028575.html). Each consumer, when it binds, has a big list of available features (most of them are true/false, but there are also strings), which I specify as binding arguments along with the x-match argument. But when I publish a message I want to specify only the necessary headers, for instance feature-1 and feature-7 with specific values. I don't even know about all the available consumer features when I publish the message. And here is the problem: if I omit some of the binding arguments when x-match == all, the message won't be routed, and if I set x-match to any, a single matching header is enough to route the message, even though another header's value may not match.
To give an example, let's consider a consumer with the features: country=US, f1=true, f2=true, f3=false.
Scenario 1: I attach (create a binding for) its queue to the headers exchange with these arguments and x-match set to all. Then I publish a message, and I need country to be "US" and f2 to be true. I don't know anything about other possible consumer features. The message won't be routed, because not all headers match exactly.
Scenario 2: Another case is binding the queue with the x-match argument set to any. If I again specify country to be "US" and f2 to be true, the message will be routed; but it will also be (incorrectly) routed if f2 is set to false and only country matches.
So I'm probably misunderstanding something, but I'm looking for the easiest solution: how do I route a message to the right consumer based on a list of necessary features? I would like something like an all-specified value for the x-match argument, which wouldn't require listing all available features but would require all given headers to match exactly.
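The built-in matching behavior behind both scenarios can be sketched as a simplified Python model (binding keys beginning with "x-", such as x-match itself, are skipped, as the real exchange does):

```python
def headers_match(binding_args, message_headers, x_match="all"):
    """Simplified model of RabbitMQ headers-exchange matching.

    binding_args: the headers the queue was bound with.
    message_headers: the headers attached to the published message.
    """
    # RabbitMQ ignores binding keys starting with "x-" (like x-match itself)
    pairs = [(k, v) for k, v in binding_args.items() if not k.startswith("x-")]
    matched = [message_headers.get(k) == v for k, v in pairs]
    return all(matched) if x_match == "all" else any(matched)

binding = {"x-match": "all", "country": "US", "f1": True, "f2": True, "f3": False}

# Scenario 1: x-match=all, message carries only a subset of the bound headers
print(headers_match(binding, {"country": "US", "f2": True}, "all"))   # False
# Scenario 2: x-match=any, one matching header routes despite a mismatch
print(headers_match(binding, {"country": "US", "f2": False}, "any"))  # True
```

This makes the gap visible: neither mode checks "all headers that the message carries", which is exactly the all-specified semantics the question asks for.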

Indeed, it seems only my own custom exchange can serve my purpose. If I succeed with the Erlang, I'll report back here.
Update
I managed to write my own plugin that fits my purposes. It is probably not perfect, but it works well for me for now.
https://github.com/senseysensor/rabbitmq-x-features-exchange

Related

How to invoke a custom ResultFilter before ClientErrorResultFilter is executed in ASP.NET 6

I spent almost a full day debugging why my client couldn't post any forms, until I found out the anti-forgery mechanism was broken on the client side and the server just responded with a 400 error with zero logs or information (it turns out anti-forgery validation failures are logged internally at the Info level).
So I decided the server needs to handle this scenario specially; however, according to this answer I don't really know how to do that (aside from hacking).
Normally I would set up a IAlwaysRunResultFilter and check for IAntiforgeryValidationFailedResult. Easy.
Except that I use Api Controllers, so by default all results get transformed into ProblemDetails. So context.Result as mentioned here is always of type ObjectResult. The solution accepted there is to use options.SuppressMapClientErrors = true;, however I want to retain this mapping at the end of the pipeline. But if this option isn't set to true, I have no idea how to intercept the Result in the pipeline before this transformation.
So in my case, I want to do something with the result of the anti-forgery validation as mentioned in the linked post, but after that I want to retain the ProblemDetails transformation. But my question is titled generally, as it is about executing filters before the aforementioned client mapping filter.
Through hacking I am able to achieve what I want. If we take a look at the source code, we can see that the filter I want to precede has an order of -2000. So if I register my global filter like this: o.Filters.Add(typeof(MyResultFilter), -2001);, then the filter shown here correctly executes before ClientErrorResultFilter, and thus I can handle the result and retain the transformation after the handling. However, I feel like this is just exploiting the open-source-ness of .NET 6, and as you can see it's an internal constant, so I have no guarantee the next patch doesn't change it and break my code. Surely there must be a proper way to order my filter to run before the API transform.
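The ordering trick can be illustrated with a toy filter pipeline in Python (the names mirror the question; this is a sketch of ascending-order execution, not the ASP.NET API):

```python
class Pipeline:
    """Toy model of MVC result-filter ordering: filters run in ascending
    Order, so a filter registered with -2001 executes before one at -2000."""
    def __init__(self):
        self.filters = []   # list of (order, name, fn) tuples

    def add(self, order, name, fn):
        self.filters.append((order, name, fn))

    def run(self, result):
        # lower Order values run first, as in the MVC filter pipeline
        for order, name, fn in sorted(self.filters, key=lambda f: f[0]):
            result = fn(result)
        return result

trace = []
p = Pipeline()
# stands in for the framework's ClientErrorResultFilter (Order = -2000)
p.add(-2000, "ClientErrorResultFilter",
      lambda r: trace.append("map-to-ProblemDetails") or r)
# the custom filter registered one step earlier, as in the question
p.add(-2001, "MyResultFilter",
      lambda r: trace.append("inspect-antiforgery-result") or r)
p.run("400")
print(trace)  # ['inspect-antiforgery-result', 'map-to-ProblemDetails']
```

The custom filter sees the raw result first and the ProblemDetails mapping still happens afterwards, which is the behavior the -2001 registration buys; the fragility is only that -2000 is an internal constant.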

How to Use Correlation Sets with multiple Receive Shapes in BizTalk Orchestration

My scenario is:
There are four txt-files created in a source-folder at the same time. They should be mapped to four xml-files. Then one of the xml-files should be sent to a remote Web Service and, if it was received properly, a response containing a "replacement-id" is coming back. And if so, finally, the replacement-id is mapped into the other three xml-files before they are sent to the Web Service.
I'm trying to control the whole flow in one single orchestration. In my BizTalk Server Project I have all the schemas and maps that are needed. Since there must be four Receive Shapes in the Orchestration, I understand I must deal with Correlation Sets. There is one (date) field that is common to all the input txt-files and has the same unique value in each file. I assumed that this field could be used in a Correlation Set. But how is that done?
I've found questions on similar cases on forums but not yet found any answer that gives me the right clue to this case.
I tried this:
• promoted the common field in all the files (a Property Schema was created for me)
• created a Correlation Type based on the Property Schema
• created a Correlation Set based on the Correlation Type
• in the first Receive Shape: set Initializing Correlation Sets = my Correlation Set
• in the other Receive Shapes: set Following Correlation Sets = my Correlation Set
• the first Receive Shape also has Activate = true and the others = false
This doesn't work, however. On building the solution I get errors telling me that my Correlation Set does not exist in the messages I use as input to the mappings. The stumbling block is that I don't seem to know how to use Correlation Sets the right way to solve a multiple-Receive-Shape problem. (I hope that calling the Web Service and mapping the replacement-id won't be a problem when I get that far.) I would be glad if someone could correct the list of steps or put me on the right road if I'm completely wrong.
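Conceptually, what the correlation set does can be sketched in Python (FileDate is a stand-in name for the promoted property; this is a toy model of instance routing, not BizTalk's engine):

```python
class Orchestrator:
    """Toy model of BizTalk correlation: the initializing receive creates an
    orchestration instance keyed by the promoted property's value; following
    receives with the same value are routed to that same instance."""
    def __init__(self, correlation_property):
        self.key = correlation_property
        self.instances = {}   # correlation value -> messages delivered to it

    def receive(self, message):
        value = message[self.key]           # read the promoted property
        if value not in self.instances:     # initializing receive shape
            self.instances[value] = []
        self.instances[value].append(message)   # following receives join
        return value

orch = Orchestrator("FileDate")
# four files dropped at the same time, all sharing the same date value
for name in ("a.txt", "b.txt", "c.txt", "d.txt"):
    orch.receive({"FileDate": "2014-05-21", "file": name})
print(len(orch.instances))                    # 1 instance
print(len(orch.instances["2014-05-21"]))      # 4 messages in it
```

The point is that correlation only routes messages to instances; the build errors about the Correlation Set "not existing in the messages" usually mean the property isn't promoted in (or isn't part of the property schema for) every message type the Receive Shapes use.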

Two-way Apache Camel Routes - Infinite loop

I have 2 endpoints that I would like to establish routes between. Due to the nature of these endpoints (JMS topics), I would like the bridging to be bidirectional.
The underlying JmsComponent for the Tibco endpoint has the pubSubNoLocal parameter enabled which ensures that a consumer does not receive messages it has itself sent as per http://camel.apache.org/jms.html
pubSubNoLocal (default: false): Specifies whether to inhibit the delivery of messages published by its own connection.
However this has no effect since the 2 routes create separate connections to the JMS topic my.topic.
As a result, the following will create an infinite loop.
As mentioned, I need the routes to operate in both directions for "seamless integration".
<c:route>
<c:from uri="tibco:topic:my.topic"/>
<c:to uri="solace-jms:topic:mytopic" />
</c:route>
<c:route>
<c:from uri="solace-jms:topic:mytopic"/>
<c:to uri="tibco:topic:my.topic" />
</c:route>
I suggest considering the concepts of message selectors and headers.
The way I see it, you do 2 things:
Add a "PRODUCER" header with your server ID (however you define it)
Configure all your listeners with a negative selector: NOT (PRODUCER='YOUR_ID')
Done?
(Of course, you could also use 2 topics... but I assume that is out of the question...)
You will need to add some indication to the message that it has already been sent through either bridge. You could play with existing properties (redelivery?) or, better, add a new one. For example, set a property bridged=true when the message passes your bridge. Then inside your route definition you can filter out each message that has already been bridged.
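The bridged-flag idea can be sketched as a plain Python model (the bridge function and message shape are illustrative, not Camel API; in Camel you would stamp the header with setHeader and drop marked messages with a filter or a JMS selector):

```python
def bridge(message, bridge_id):
    """Forward a message across one bridge, stamping it so the opposite
    route can recognize and drop it. Returns None when the message has
    already crossed a bridge (loop prevention)."""
    headers = message.get("headers", {})
    if headers.get("bridged"):      # already crossed once: stop the loop
        return None
    return {"body": message["body"],
            "headers": {**headers, "bridged": True, "producer": bridge_id}}

msg = {"body": "hello", "headers": {}}
hop1 = bridge(msg, "tibco-to-solace")     # forwarded, stamped bridged=True
hop2 = bridge(hop1, "solace-to-tibco")    # dropped: would otherwise loop
print(hop1["headers"]["bridged"])  # True
print(hop2)                        # None
```

Each message crosses exactly one bridge and is then ignored by the opposite route, which is what pubSubNoLocal could not guarantee across two separate connections.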

REST how to name queue resource

I'm implementing an asynchronous REST API where a file is posted and then added to a queue for further processing. My question is: what's the best practice for naming the resources in this case?
Resource #1: POST files, where {type} is dynamic.
POST /files/{type}
Data posted to this resource is queued, and the user receives a unique queue ID. How should the file-queue resource be named?
Resource #2. GET files queue
OPTION 1. GET /files/queues/{QueueID}
OR
OPTION 2. GET /files/{type}/queues/{QueueID}
Which one makes more sense? Users can upload files with different {type} values.
Or should I just use a completely different resource for getting queue items, like:
GET /queues
AND
/queues/{QueueID}
Thanks for the tips.
It depends on your needs. (Or the client's needs)
I would go with the "/queues/{QueueID}" option, since the queue ID by itself (without the type) identifies the file, so there is no need to include the type.
Additionally, I would omit the {type} variable even from the POST method, because you can simply send that information in the Content-Type HTTP header.
The "files/{type}" approach is more useful when you have to display the files grouped by type. Without that requirement, there is no reason to further complicate the resource identifier.
(Note: If the "queue" and "file" items are the same, then you could use GET /files/{QueueId} )
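A minimal sketch of the asynchronous pattern under the flat /queues/{QueueID} option (the class and names are illustrative, not a specific framework):

```python
import uuid

class FileQueue:
    """Toy in-memory model of the async-upload pattern discussed above:
    POST enqueues a file and returns a queue ID; GET /queues/{id} polls it."""
    def __init__(self):
        self._jobs = {}

    def post_file(self, file_type, payload):
        queue_id = str(uuid.uuid4())
        self._jobs[queue_id] = {"type": file_type, "payload": payload,
                                "status": "queued"}
        # A real API would answer 202 Accepted with Location: /queues/{id}
        return 202, f"/queues/{queue_id}", queue_id

    def get_queue_item(self, queue_id):
        job = self._jobs.get(queue_id)
        if job is None:
            return 404, None
        # the type is part of the queue item, so the flat URL loses nothing
        return 200, {"status": job["status"], "type": job["type"]}

api = FileQueue()
status, location, qid = api.post_file("invoice", b"...")
print(status, api.get_queue_item(qid)[0])  # 202 200
```

Note that since the queue item carries its own type, the client never needs {type} in the polling URL, which supports the flat /queues/{QueueID} choice.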

The <invoke>-element of Mule 3, what does it do?

I have previously only used Mule 2.2.1, but I'm now reading up on Mule 3.4/3.5.
One major change between these versions is the introduction of flows.
In the documentation of the Mule configuration I found this:
A flow begins with an inbound endpoint from which messages are read and continues with a list of message processors
However, in this post I came across the invoke element. It appears that a flow can also begin with an invoke element.
I searched the Mule documentation for the invoke element but was not able to find anything. Can someone explain the semantics of the invoke element, or point me to any relevant documentation?
The "invoke" element is a message processor and not a message source.
The quote "A flow begins with an inbound endpoint from which messages are read and continues with a list of message processors" is not quite true, as flows such as sub-flows or private flows that are referenced from other flows via flow-refs do not need an inbound endpoint and can just have a list of message processors.
So it cannot be used to trigger a flow. However, the example above seems to be a private flow, which would be referenced from another flow via flow-ref; hence it starts with a message processor. More on private and sub-flows here: http://www.mulesoft.org/documentation/display/current/Using+Flows+for+Service+Orchestration
Back to the invoke message processor. Documentation around it is lacking, but simply put, it calls the specified method of the given object using the given arguments.
From the javadoc: invokes a specified method of an object. An array of argument expressions can be provided to map the message to the method arguments. The method used is determined by the method name along with the number of argument expressions provided. The results of the expression evaluations will automatically be transformed where possible to the method argument type. Multiple methods with the same name and same number of arguments are not supported currently - http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/processor/InvokerMessageProcessor.html
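Roughly, the behavior described in that javadoc can be modeled in Python (a sketch, not Mule's actual implementation; real Mule also selects among overloads by argument count and coerces the evaluated values to the method's parameter types):

```python
class Invoker:
    """Rough model of Mule's InvokerMessageProcessor: call a named method
    on a target object, mapping the message to arguments via expressions."""
    def __init__(self, target, method_name, arg_expressions):
        self.target = target
        self.method_name = method_name
        self.arg_expressions = arg_expressions   # callables: message -> value

    def process(self, message):
        # resolve the method by name, evaluate each argument expression
        # against the current message, then invoke
        method = getattr(self.target, self.method_name)
        args = [expr(message) for expr in self.arg_expressions]
        return method(*args)

class Greeter:
    def greet(self, name, polite):
        return f"Dear {name}" if polite else f"Hi {name}"

# expressions pull values out of the message payload
invoke = Invoker(Greeter(), "greet",
                 [lambda m: m["payload"]["name"],
                  lambda m: m["payload"]["vip"]])
print(invoke.process({"payload": {"name": "Ada", "vip": True}}))  # Dear Ada
```

This also shows why it is a message processor rather than a message source: it needs a message already in flight to evaluate its argument expressions against, so it can never start a flow by itself.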