How to Use Correlation Sets with multiple Receive Shapes in BizTalk Orchestration

My scenario is:
There are four txt-files created in a source folder at the same time. They should be mapped to four xml-files. Then one of the xml-files should be sent to a remote Web Service and, if it is received properly, a response containing a "replacement-id" comes back. If so, the replacement-id is finally mapped into the other three xml-files before they are sent to the Web Service.
I'm trying to control the whole flow in one single orchestration. In my BizTalk Server project I have all the schemas and maps that are needed. Since there must be four Receive shapes in the orchestration, I understand I must deal with Correlation Sets. There is one (date) field that is common to all the input txt-files and has the same unique value in each file. I assumed that this field could be used in a Correlation Set. But how is that done?
I've found questions about similar cases on forums, but haven't yet found an answer that gives me the right clue for this case.
I tried this:
• promoted the common field in all the files (a Property Schema was created for me)
• created a Correlation Type based on the Property Schema
• created a Correlation Set based on the Correlation Type
• in the first Receive Shape: set Initializing Correlation Sets = my Correlation Set
• in the other Receive Shapes: set Following Correlation Sets = my Correlation Set
• the first Receive Shape also has Activate = true and the others = false
This doesn't work, however. On building the solution I get errors saying that my Correlation Set does not exist in the messages I use as input to the mappings. The stumbling block is that I don't seem to know how to use Correlation Sets the right way to solve a multiple-Receive-shape problem. (I hope that calling the Web Service and mapping the replacement-id won't be a problem when I get that far.) I would be glad if someone could correct the list of steps or put me on the right road if I'm completely wrong.

How to invoke a custom ResultFilter before ClientErrorResultFilter is executed in ASP.NET 6

I spent almost a full day debugging why my client can't post any forms, until I found out the anti-forgery mechanism got borked on the client side and the server just responded with a 400 error, with zero logs or information (it turns out anti-forgery validation is logged internally at Info level).
So I decided the server needs to handle this scenario specially; however, according to this answer, I don't really know how to do that (aside from hacking).
Normally I would set up an IAlwaysRunResultFilter and check for IAntiforgeryValidationFailedResult. Easy.
Except that I use API controllers, so by default all results get transformed into ProblemDetails. So the context.Result mentioned here is always of type ObjectResult. The solution accepted there is to use options.SuppressMapClientErrors = true;, however I want to retain this mapping at the end of the pipeline. But if this option isn't set to true, I have no idea how to intercept the Result in the pipeline before this transformation.
So in my case, I want to do something with the result of the anti-forgery validation as mentioned in the linked post, but after that I want to retain the ProblemDetails transformation. But my question is titled generally, as it is about executing filters before the aforementioned client mapping filter.
Through hacking I am able to achieve what I want. If we take a look at the source code, we can see that the filter I want to precede has an order of -2000. So if I register my global filter like this: o.Filters.Add(typeof(MyResultFilter), -2001);, then my filter correctly executes before ClientErrorResultFilter, and thus I can handle the result and retain the transformation afterwards. However, I feel like this is just exploiting the open-source-ness of .NET 6, and of course, as you can see, it's an internal constant, so I have no guarantee the next patch doesn't change it and break my code. Surely there must be a proper way to order my filter to run before the API transform.

Why is the ExpressionLanguageScope in the DBCPConnectionPool service limited to only 'VARIABLE_REGISTRY' and not 'FLOWFILE_ATTRIBUTES'?

The DBCPConnectionPool service requires five connection parameters to establish a connection to a database.
I used an UpdateAttribute processor to add these five connection parameters manually and gave each its respective value.
Now, when I tried to read the values for these connection parameters in the DBCPConnectionPool service through the attributes, I was unable to read them.
To find out why the DBCPConnectionPool service was unable to read the FlowFile attributes, I checked the source code of both the DBCPConnectionPool service and the UpdateAttribute processor.
Source code for the DBCPConnectionPool service: https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/DBCPConnectionPool.java
Source code for the UpdateAttribute processor: https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-update-attribute-bundle/nifi-update-attribute-processor/src/main/java/org/apache/nifi/processors/attributes/UpdateAttribute.java
That is how I learned why it was unable to read the values from FlowFile attributes: the ExpressionLanguageScope is limited to VARIABLE_REGISTRY rather than FLOWFILE_ATTRIBUTES.
Now, my question is: why is the ExpressionLanguageScope for the DBCPConnectionPool service limited to VARIABLE_REGISTRY? What is the reason for this limitation? I am asking because I want to read the values for the connection parameters through FlowFile attributes.
For the same question asked on the NiFi dev mailing list, Andy answered it in the best way possible. The reason why the DBCPConnectionPool service, or any controller service for that matter, uses ExpressionLanguageScope.VARIABLE_REGISTRY is that controller services have no access to flowfiles, so they cannot read flowfile attributes. As for why it supports VARIABLE_REGISTRY at all:
Just because it doesn't read flowfile attributes doesn't mean it should not use values from elsewhere, such as the variable registry.
One of the major reasons VARIABLE_REGISTRY was introduced was to avoid exposing sensitive values, which is what happens when we pass such values around as flowfile attributes. Controller services fit this case, because many of them use sensitive properties like Password.
And if you assume you can make it work by just changing the scope of those properties to ExpressionLanguageScope.FLOWFILE_ATTRIBUTES, you're wrong. Changing them makes no sense and doesn't work; the reason, again, is that controller services never get access to the flowfiles.
If you have a specific requirement to use different property values for different flowfiles, Andy shared some links in the original dev thread, which I'm posting again:
https://stackoverflow.com/a/49412970/70465
https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#Using_Custom_Properties
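The enable-time vs. per-flowfile distinction can be sketched with a toy resolver (plain Java for illustration only; `resolve`, the simplified `${...}` handling, and the maps are made-up stand-ins, not NiFi's real expression-language engine):

```java
import java.util.Map;

public class ElScopeDemo {
    // Minimal stand-in for ${...} resolution: VARIABLE_REGISTRY resolves
    // against a static map known when the service is enabled;
    // FLOWFILE_ATTRIBUTES would additionally need a per-flowfile map that
    // a controller service never receives.
    static String resolve(String expr, Map<String, String> variables,
                          Map<String, String> flowFileAttributes) {
        String key = expr.substring(2, expr.length() - 1); // strip ${ and }
        if (flowFileAttributes != null && flowFileAttributes.containsKey(key)) {
            return flowFileAttributes.get(key);
        }
        return variables.get(key);
    }

    public static void main(String[] args) {
        Map<String, String> registry =
                Map.of("db.url", "jdbc:postgresql://db:5432/app");

        // A controller service builds its connection pool once, at enable
        // time, with no flowfile in scope: only the registry is available.
        System.out.println(resolve("${db.url}", registry, null));
        // UpdateAttribute's attributes exist per flowfile; by the time a
        // flowfile arrives, the pool is already built, so they can't apply.
    }
}
```

This is why promoting the connection parameters as flowfile attributes can never reach the pool: there is simply no per-flowfile map in scope when the service is configured.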

MULE 3.7.0 C.E. - BufferInputStream payload turns into String

We are programming a Mule REST service which is divided into several layers.
The API layer (RAML-based) receives the inbound requests and prepares some flowVars so that the lower layers know how to proceed.
The second layer is also service-defined, so there's one flow for each service offered.
Finally, the third layer contains a single flow and is the one which, depending on the flowVars configured in the upper layer, calls the required third-party service using an HTTP Request component.
In this third layer, some audit records are written so that we know what we are sending and what we are receiving. Our audit component (a custom Mule connector) needs to write the content of the payload to our database, so a message.getPayloadAsString() (or similar) is needed. If we use a plain getter (like message.getPayload()), only the data type is obtained and thus written into the database.
The problem lies right here. Every single payload received seems to be a BufferInputStream and, when calling message.getPayloadAsString(), an inner cast seems to affect the payload. Normally this wouldn't be a problem, except for one of the cases we have found: one of the services we invoke returns a PNG file, so message.getPayloadAsString() turns it into a String and breaks the image.
We've tried to clone the payload in order to keep one copy safe from the cast but, as an Object, it doesn't implement the Cloneable interface; we've tried to make a copy of the payload in every other way we could think of, but only a new reference is generated; we've tried to serialize the payload to create a new copy from the serialized data, but the Object doesn't implement the Serializable interface either... All useless.
Any help, idea or piece of advice would be appreciated.
We finally managed to solve the problem by using message.getPayloadAsBytes(), whose return value is a brand-new byte[] object. Nor does this method alter the payload within the message. Using the byte array, we can create a String object to be written to our audit like this:
byte[] auditByteArray = message.getPayloadAsBytes();
String auditString = new String(auditByteArray);
Moreover, we ran a test consisting of setting that byte array as the new payload of the message, and both JSON and PNG responses are handled correctly by the browser.
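The approach can be illustrated outside Mule with plain Java. This is a hedged sketch: `getPayloadCopy` is a hypothetical stand-in for `message.getPayloadAsBytes()`, assumed here to drain the stream fully into a fresh array, and a ByteArrayInputStream stands in for the stream payload Mule hands us:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class PayloadCopyDemo {
    // Stand-in for message.getPayloadAsBytes(): drain the stream into a
    // brand-new byte[] that is independent of the original payload object.
    static byte[] getPayloadCopy(InputStream payload) throws IOException {
        return payload.readAllBytes();
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the BufferInputStream payload received in the flow.
        byte[] original = "{\"status\":\"OK\"}".getBytes(StandardCharsets.UTF_8);
        InputStream payload = new ByteArrayInputStream(original);

        byte[] auditBytes = getPayloadCopy(payload);

        // Fine for text audits; for binary content (e.g. PNG) keep the
        // byte[] and never round-trip it through a String.
        String auditString = new String(auditBytes, StandardCharsets.UTF_8);

        System.out.println(Arrays.equals(original, auditBytes));
        System.out.println(auditString);
    }
}
```

The key point is that the copy is byte-for-byte identical to the source, so binary payloads survive intact as long as the String conversion is applied only when the content really is text.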

RabbitMQ headers exchange routing: match all listed headers

I have a lot of consumers with different sets of features, so I want to route each message to the correct one. I decided to use a headers exchange and specify the necessary features in the message headers, but here I ran into an obstacle.
In RabbitMQ there is a binding argument x-match which may take only the values any and all (https://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2013-July/028575.html). When each consumer binds, it has a big list of available features (most of them are true/false, but some are strings), which I specify as binding arguments along with the x-match argument. But when I publish a message I want to specify only the necessary headers, for instance feature-1 and feature-7 with specific values; I don't even know all the available consumer features when I publish the message. And here is the problem: if I omit some binding argument when x-match == all, the message won't be routed, and if I set x-match to any, a single matching header is enough to route the message, even if another header's value does not match.
To give you an example, let's consider a consumer with the features: country=US, f1=true, f2=true, f3=false.
Scenario 1: I bind its queue to the headers exchange with these arguments and x-match set to all. Then I publish a message where I need country to be "US" and f2 to be true; I don't know anything about the other possible consumer features. The message won't be routed, because not all headers match exactly.
Scenario 2: Another use case is binding the queue with the x-match argument set to any. If I again specify country to be "US" and f2 to be true, the message will be routed, but it will also be (incorrectly) routed if f2 is set to false and only country matches.
So probably I misunderstand something, but I am looking for the easiest solution: how to route a message to the right consumer based on a list of necessary features. I would like something like an all-specified value for the x-match argument, which wouldn't demand listing all available features but would require all given headers to match exactly.
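The two scenarios above can be sketched in a few lines. This is a plain-Java simulation of the matching semantics described in the question, not RabbitMQ broker code; the maps and feature names are taken from the example consumer:

```java
import java.util.Map;

public class XMatchDemo {
    // Mimics a headers-exchange binding check as described above: with
    // "all", every binding header must be present in the message with an
    // equal value; with "any", one matching header is enough.
    static boolean matches(Map<String, Object> binding,
                           Map<String, Object> message, String xMatch) {
        if (xMatch.equals("all")) {
            return binding.entrySet().stream()
                    .allMatch(e -> e.getValue().equals(message.get(e.getKey())));
        }
        return binding.entrySet().stream()
                .anyMatch(e -> e.getValue().equals(message.get(e.getKey())));
    }

    public static void main(String[] args) {
        Map<String, Object> binding =
                Map.of("country", "US", "f1", true, "f2", true, "f3", false);

        // Scenario 1: x-match=all, the message lists only the headers the
        // publisher cares about, so f1/f3 are unmatched -> not routed.
        Map<String, Object> msg1 = Map.of("country", "US", "f2", true);
        System.out.println(matches(binding, msg1, "all"));

        // Scenario 2: x-match=any, country alone routes the message even
        // though f2 has the wrong value -> routed anyway.
        Map<String, Object> msg2 = Map.of("country", "US", "f2", false);
        System.out.println(matches(binding, msg2, "any"));
    }
}
```

The missing mode would check only the headers present in the message, requiring each of them to match, which is exactly what neither built-in value of x-match provides.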
Indeed, only my own exchange type can serve my purpose. If I succeed in Erlang, I'll report here.
Update
I managed to write my own plugin which fits my purposes. It's probably not perfect, but it works well for me for now.
https://github.com/senseysensor/rabbitmq-x-features-exchange

Dynamic Actions on a WCF-Custom Biztalk SQL port?

So I have an orchestration that successfully does everything I need it to. Now I want to reuse the logic in the orchestration, but with a set of slightly different data sources. Rather than copying and pasting the orchestration into another one and using a decision tree to choose which orchestration to call, I was thinking of making my calls to SQL a little more dynamic.
For example, let's say I have a stored procedure called spGetUSCust. I have coded the orchestration to call the SQL server via a send/receive port with an operation of GetCust on it. It was generated using a strongly typed method, so the response message is of type spGetUSCustResponse.
I now want to call spGetCACust on the same SQL server. The responding data is in the exact same format (structure) as the US stored proc, but they have different names.
So my question is: can I do this by setting the action on the message that will be going to the port within the code? Since my response is strongly typed, will it cause a problem that the response will really be coming from the CA procedure and not the US one? If so, how do I solve that? I could go with generic responses, but they return XML "any" fields, and I need to map these responses for additional use in the orchestration.
In a Message Assignment shape, you can set the WCF.Action property on the request message like this:
MySQLRequest(WCF.Action)="TypedProcedure/dbo/spGetUSCust";
You can use the same physical Port, but you will have to remove any content from the Action Mapping section.
However, because the Request Messages are of different types, you will need two Orchestration Ports: one for US, one for CA.