Endpoints returning BufferInputStream with strange value in MuleSoft Anypoint Studio - mule

I have 3 separate APIs, A, B, and C. A and B are completely independent, whereas C queries A and B to compile data together. Each API is in its own project and running on its own port (8081, 8082, and 8083, respectively).
I am able to successfully hit A and B individually AND through C... sort of. When C hits one of these endpoints, the result comes back as a glassfish.grizzly.utils.BufferInputStream.
I've dealt with this BufferInputStream type before by using a Transform Message component. However, doing so here simply produces an error saying that payload.id is of the wrong type (it should be an Integer). When running this in debug mode, I can see that A has an output payload with id: Integer (it is of a custom type). However, upon moving back into C's flow, the payload is the aforementioned BufferInputStream type, and I'm unable to access payload.id directly.
In short: How do I retrieve data in one project from another project?
Thanks in advance!
Update:
I used an Object to String transformer on the BufferInputStream to get a better look at the value. It appears to be a URL-encoded query string:
id=12345&name=nameValue&otherVal=%5B8499%5D...
I can #[payload.split('&')] at this point and get most of what I need, but then there's still the issue of things like the following:
summary=Words+with+plus+signs+in+the+middle
Again, I can work around this with things like split, but surely this is not what is intended.
Update 2:
I discovered the following warning:
[[test].api-httpListenerConfig.worker.01]
org.mule.module.http.internal.listener.HttpResponseBuilder:
Payload is a Map which will be used to generate an url encoded http body but
Contenty-Type specified is application/java; charset=windows-1252 and not
application/x-www-form-urlencoded
I'm not entirely sure what to do with that info, though the Contenty-Type typo is interesting ^^

Solved! In A and B, I needed to use an Object to Byte Array transformer before returning the value. This allows me to use a Byte Array to Object transformer in C and get the original value back.
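A rough sketch of that fix as Mule 3 XML config (flow names, paths, and config-refs here are placeholders, not taken from the actual projects):

```xml
<!-- In API A (and likewise B): serialize the payload before the listener returns it -->
<flow name="apiAFlow">
    <http:listener config-ref="api-httpListenerConfig" path="/data" doc:name="HTTP"/>
    <!-- ... build the response payload ... -->
    <object-to-byte-array-transformer doc:name="Object to Byte Array"/>
</flow>

<!-- In API C: restore the original object after calling A -->
<flow name="apiCFlow">
    <http:request config-ref="apiA-requestConfig" path="/data" method="GET" doc:name="Call A"/>
    <byte-array-to-object-transformer doc:name="Byte Array to Object"/>
    <!-- payload.id is accessible again from this point on -->
</flow>
```

This sidesteps the url-encoded-Map serialization the warning above complains about, because the payload crosses the HTTP boundary as an opaque byte array.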

Related

ADF json expression formatting

I've been doing this for about 6 hours now, so I'm turning to the crowd.
I am using ADF to move data from an API to a DB. I'm using the REST copy data activity, and I need to format a JSON for the body param using two pipeline parameters and an item from a for loop. My JSON needs to be formatted as such:
"key" : ["value"]
I'm having difficulty understanding how to format the JSON body. I believe I need to start the whole body using the json expression:
@json('{"foo":"bar"}')
But I am unable to get the pipeline parameters properly expressed in the JSON. The following makes the most sense as far as I understand it, but it simply returns what you see, verbatim, when I peek in the input window:
@json('{"foo":["activity('bar').output.value"]}')
"key":["@{activity('bar').output.value}"]
Works, but I still believe I should just be able to pass an array!

Run-State values within shape script EA

Enterprise Architect 13.5.
I made an MDG Technology extending the Object metatype. I have a shape script for my stereotype working well. I need to print several predefined run-state parameters for an element. Is it possible to access run-state parameters within a shape script?
As Geert already commented, there is no direct way to get the run-state variables from an object. You might send a feature request to Sparx, but I'm pretty sure you can't hold your breath long enough to see it arrive (if at all).
So if you really need the run-state in the script, the only way is to use an add-in. It's actually not too difficult to create one, and Geert has a nice intro on how to create it in 10 minutes. In your shape script you can print a string result returned from an operation like
print("#addin:myAddIn,pFunc1#")
where myAddIn is the name of the registered operation and pFunc1 is a parameter you pass to it. In order to control the script flow you can use
hasproperty('addin:myAddIn,pFunc2','1')
which evaluates whether the returned string matches the string 1.
I once got that to work without too much hassle, but until now I never had a real need to use it in production. Be aware that the add-in is called from the interpreted script for each shaped element on the diagram, which might (dramatically) affect rendering times.
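Putting the two constructs together, a minimal shape script sketch (myAddIn, pFunc1 and pFunc2 are the hypothetical names from above, not real registered operations):

```
shape main
{
    // let the add-in decide whether to render the run-state text
    if (hasproperty('addin:myAddIn,pFunc2', '1'))
    {
        // render whatever string the add-in returns
        print("#addin:myAddIn,pFunc1#");
    }
    else
    {
        // fall back to the default rendering
        drawnativeshape();
    }
}
```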

Pentaho PDI unable to use parameter in transformation

Using Pentaho PDI 8.3.0
I am unable to use a parameter in a REST call within a transformation. What I've done is:
Create a transformation and give it a parameter called PAGE_NR with default value 1
Create a job
Call the transformation with parameter PAGE_NR = 1
In the transformation, set up a GET request to a REST API.
In the URL field, set up the call like http://myurl.com/foo/bar?page=${PAGE_NR}
When I call this URL from SoapUI or a browser it works, but it always breaks when running the job. The parameter is not replaced with its value; it is instead passed literally, exactly as written above.
I need this parameter because I call the same URL repeatedly for different pages of results. I don't know the number of pages up front, but I take care of that logic later in said transformation.
I'm working on Linux, by the way. I have tried different variations of referencing the parameter, but nothing seems to work.
With the information given in the comments, I am willing to make an educated guess:
The REST Client step does not perform variable substitution on the URL if it comes from a field in the stream. What you can do is insert a Calculator step before the REST step with the operation "Variable substitution on string A" with your URL field as Field A.
This should give you the desired URL with page number.

Flows disappearing when deleting a single flow

In my ODL code, I have recently noticed that when uninstalling flows, I get unexpected behavior. The scenario goes something like this:
A bunch of flows are installed across multiple tables
I delete a flow by using the same NodeId, TableId and FlowId that I used when creating it. For reference, I use SalFlowService's addFlow and removeFlow methods.
I execute ovs-ofctl dump-flows and notice that ALL flows on the given node and given table are deleted. For reference, the flowId I use is something like "routing-rename-src-0.0.0.0-to-123.123.123.0".
It appears to me that ODL somehow completely fails at recognizing the FlowId, and defaults to deleting all flows on the given table. No error messages are sent from OpenFlow, and no errors are logged in ODL.
The thing is, I am definitely using the same FlowId object.
Now, I am a bit confused about what could go wrong, but I have an idea, it's just that there's conflicting evidence online, and since I haven't worked on OpenFlowPlugin, I can't quite tell myself.
Flows are or tend to be posted using integers for flowIds, in the REST request paths.
In ODL code such as l2switch, flowIDs can be strings. This makes certain debugging easier to parse through.
Now, this is pretty strange. Are we using integers, or strings, or can ODL make a conversion between integer and strings by a mapping mechanism of sorts? Either way, I get unexpected behavior. Interestingly, the code I linked to does not do deletion... so maybe it's more of a hack in this case?
EDIT : I have now tried to rename my IDs as mere numbers, or as "PluginName" + "-" + number, and uninstallation still seems to fail. The problem is now that I just can't uninstall a flow rule without uninstalling the entire table with it...
EDIT 2 : This issue allowed me to understand that the flow id is not necessarily used to remove the flow. I came up with the following procedure to delete flows, in a way that doesn't cause all flows on the table to get deleted:
final RemoveFlowInputBuilder builder = new RemoveFlowInputBuilder(flow);
builder.setNode(new NodeRef(nodeId));
builder.setFlowRef(new FlowRef(flowPath));
builder.setFlowTable(new FlowTableRef(tableId));
flowIdentity.context.salFlowService.removeFlow(builder.build());
The key difference from my previous code is that before, I was not using a Flow object to initialize the input builder. In this form, my methods for adding and removing are identical. As long as I preserve the Flow object after adding the flow, I can delete the flow and the tables will not be wiped.
But there is an exception. On table 0, I have installed two different table-change rules with identical actions, but different priorities. The matches are slightly different (one defines an in-port, the other doesn't). When I delete the most generic (and lowest priority) rule, the other one gets deleted also.
I don't understand why this happens. Even if I try setting the priority in the input builder, this still happens. Hrm.
As I wrote in my second edit, this post suggests that flow deletion does not work explicitly based on the id, but rather on the fields that are defined in the input builder of the method. I haven't tested this, but I suspect that if the flow reference is omitted from the builder, the defined fields will be used to delete all matching rules, which could mean accidentally deleting all flows if the wrong fields are set.
Given the following code to add flows:
final AddFlowInputBuilder builder = new AddFlowInputBuilder(flow);
builder.setNode(new NodeRef(nodeId));
builder.setFlowRef(new FlowRef(flowPath));
builder.setFlowTable(new FlowTableRef(tableId));
builder.setPriority(flow.getPriority());
flowIdentity.context.salFlowService.addFlow(builder.build());
The following code to remove flows works as expected (using the SAME Flow object):
final RemoveFlowInputBuilder builder = new RemoveFlowInputBuilder(flow);
builder.setNode(new NodeRef(flowLocation.nodeIdentifier));
builder.setFlowRef(new FlowRef(flowLocation.flowPath));
builder.setFlowTable(new FlowTableRef(flowLocation.tableIdentifier));
builder.setPriority(flow.getPriority());
builder.setStrict(Boolean.TRUE);
flowIdentity.context.salFlowService.removeFlow(builder.build());
Without "strict" set to true, this can cause unexpected deletion of similar rules on the same table. This is consistent with OpenFlow itself: a non-strict delete ignores priority and removes every flow the given match covers, while a strict delete requires the match fields and priority to be identical. I am unsure exactly how ODL builds the match on deletion, with or without strict, but this much I can confirm.

What is the simplest way to completely replace an assertion message using FluentAssertions (and NUnit)?

For example;
results.Errors.Count.Should().Be(0, $"because {results.Errors[0]}");
produces the Result Message:
Expected value to be 0 because 'Name' should not be empty., but found 2.
But what I really want, in this particular instance (invocation of the assertion), is just the value of results.Errors[0]; that is, I would like the message to be just: 'Name' should not be empty.
(As an aside what I really want is to pass a concatenated string representation of the entire results.Errors array, but my linq/lambda skills aren't there yet)!
So how can I get FA to just use what I supply as the message string?
You can't do that. The because part is baked into the language to promote failure messages that are as natural as possible.
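For the aside about concatenating the whole errors array, something along these lines should work (a sketch; it assumes the items in results.Errors have a useful ToString(), and FluentAssertions will still prepend "Expected ... because" and append ", but found ..."):

```csharp
using System.Linq;

// Join every validation error into one string for the "because" clause
var allErrors = string.Join("; ", results.Errors.Select(e => e.ToString()));
results.Errors.Count.Should().Be(0, $"because {allErrors}");
```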