Any idea about this weird type error in Pentaho Data Integration?

I have this:
Insertion des données dans table some_table.0 - SOME_AUTO_GENERATED_DB_KEY Integer : There was a data type error: the data type of java.lang.Boolean object [true] does not correspond to value meta [Integer]
What boolean??? Where do you see a boolean? I added a trace write to the step just before this failing insert step, and I see a perfectly fine integer as the value of SOME_AUTO_GENERATED_DB_KEY.
How is this possible? I am very new to Kettle, so any ideas or tips would be awesome.
Here is a screenshot of the transformation:

Just before the failed insert, you have a filter that splits the stream. On one half of the stream, it looks like you have an Add Constant step. If I'm reading this right, the two inputs to the insert step don't have the same fields in the same order. A few steps earlier, there is a similar split of paths that goes off to the right, which could have the same effect.
Whenever you remerge streams like this without being very careful, strange errors like this one can pop up. Pentaho usually tries to warn you when you create the hop that remerges the streams, but there are ways to miss that warning.
Suggestion: each time the stream remerges, right-click each of the two preceding steps and have it show you the output fields. Compare the two lists side by side to verify they are the same. If not, you will have to add or remove fields as appropriate to make them identical on both sides.

Related

Writing a CSV file with CSV.jl throws a StackOverflowError

CSV.write() returns ERROR: LoadError: StackOverflowError:, and further down claims --- the last 2 lines are repeated 39990 more times ---. This is nonsense, since the data frame to be written is acknowledged to be 129x6 with no empty fields. I fear the function is counting rows beyond the dimension of the data frame and shouldn't be doing so. How do I make this function work?
using CSV, DataFrames
file = DataFrame(
    a = 1:50,
    b = 1:50,
    c = Vector{Any}(missing, 50),
    d = Vector{Any}(missing, 50))
CSV.write("a.csv", file)
file = CSV.read("a.csv", DataFrame)
c, d = [], []
for i in 1:nrow(file)
    push!(c, file.a[i] * file.b[i])
    push!(d, file.a[i] * file.b[i] / file.a[i])
end
file[!, :c] = c
file[!, :d] = d
# I found no straight forward way to fill empty columns based on the values of filled columns in a for loop that wouldn't generate error messages
CSV.write("a.csv", DataFrame) # Error in question here
The StackOverflowError is indeed due to the line
CSV.write("a.csv", DataFrame)
which tries to write the DataFrame type itself to a file instead of the actual data frame variable, file. (The reason this leads to this particular error is an implementation detail that's described here.)
# I found no straight forward way to fill empty columns based on the values of filled columns in a for loop that wouldn't generate error messages
You don't need a for loop for this; just
file.c = file.a .* file.b
file.d = file.a .* file.b ./ file.a
will do the job. (Though I don't understand the point of the file.d calculation; maybe it's just a dummy calculation you chose for illustration.)
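Putting it all together, a minimal sketch of a corrected script (using the same file name and columns as in the question) could look like this:
using CSV, DataFrames

file = DataFrame(a = 1:50, b = 1:50)
file.c = file.a .* file.b             # fill c from the already populated columns
file.d = file.a .* file.b ./ file.a   # the same dummy calculation as in the question
CSV.write("a.csv", file)              # pass the variable file, not the type DataFrame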

Meaning of (052) at the end of a text literal?

Really just a curiosity question.
Here are a few examples of the statements I mean; the values are passed to the FM "REUSE_ALV_GRID_DISPLAY" via the "it_fieldcat" parameter.
ls_fieldcat-seltext_l = 'Material number'(052).
ls_fieldcat-seltext_m = 'Material'(053).
ls_fieldcat-seltext_s = 'Mat.'(054).
I tried removing the numbers on the right and executing the program, but I didn't see any difference. I also tried to see what happens in debug mode, but it only fills the field with the string value. Am I missing something, or is there something I wasn't able to notice?
I've been tasked with creating a copy of a program which originally joins multiple tables, filters them according to the parameters from the SELECTION-SCREEN, and then shows the results in an ALV Grid report; for the use case of the copy, it should instead populate a table in ECC that we will then replicate to the BW side. I have successfully copied and modified it accordingly, but I can't seem to understand what the numbers beside the strings are doing.
Can someone please explain what they are used for? I would be very grateful to see a few examples.
Thanks!
The number in the parentheses refers to a text symbol defined as part of the program's text elements. With the syntax 'Literal'(idf), the literal is replaced by text symbol idf at runtime if that symbol exists in the currently loaded text pool; otherwise the literal itself is used.
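As a rough illustration (assuming text symbol 052 is maintained in the program's text elements, e.g. with the English text "Material number"):
* If text symbol 052 exists in the loaded text pool, its (translatable) text is used;
* otherwise the literal 'Material number' serves as the fallback.
ls_fieldcat-seltext_l = 'Material number'(052).
* The text symbol can also be referenced directly, without a literal fallback:
ls_fieldcat-seltext_l = TEXT-052.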

Pentaho PDI - Not setting constant

I can't for the life of me figure out why my transformation, which I have setting a 'constant' value, is not actually setting it. Is there anywhere I can look for details?
Here's the add header - constant step
So WHY do I not get my test value in the output??
I even tried the 'set field value to constant' step, which takes the one row coming from my filter step and sets it to a constant value; that also produces nothing! I don't know what's going on here!
Thanks for the suggestions… I actually figured this out. It has to do with the fact that the text output steps have a setting to not create the output file when the transformation first starts; I had to check that box…

Flows disappearing when deleting a single flow

In my ODL code, I have recently noticed that when uninstalling flows, I get unexpected behavior. The scenario goes something like this:
A bunch of flows are installed across multiple tables
I delete a flow by using the same NodeId, TableId and FlowId that I used when creating it. For reference, I use SalFlowService's addFlow and removeFlow methods.
I execute ovs-ofctl dump-flows and notice that ALL flows on the given node and given table are deleted. For reference, the flowId I use is something like "routing-rename-src-0.0.0.0-to-123.123.123.0".
It appears to me that ODL somehow completely fails at recognizing the FlowId, and defaults to deleting all flows on the given table. No error messages are sent from OpenFlow, and no errors are logged in ODL.
The thing is, I am definitely using the same FlowId object.
Now, I am a bit confused about what could be going wrong. I have an idea, but there's conflicting evidence online, and since I haven't worked on OpenFlowPlugin, I can't quite tell myself.
Flows are, or tend to be, posted using integers for flowIds in the REST request paths.
In ODL code such as l2switch, flowIds can be strings. This makes certain debugging output easier to parse through.
Now, this is pretty strange. Are we using integers, or strings, or can ODL convert between integers and strings via some mapping mechanism? Either way, I get unexpected behavior. Interestingly, the code I linked to does not do deletion... so maybe it's more of a hack in this case?
EDIT: I have now tried renaming my IDs as plain numbers, or as "PluginName" + "-" + number, and uninstallation still seems to fail. The problem now is that I just can't uninstall a flow rule without uninstalling the entire table with it...
EDIT 2: This issue helped me understand that the flow id is not necessarily what is used to remove the flow. I came up with the following procedure to delete flows in a way that doesn't cause all flows on the table to be deleted:
final RemoveFlowInputBuilder builder = new RemoveFlowInputBuilder(flow);
builder.setNode(new NodeRef(nodeId));
builder.setFlowRef(new FlowRef(flowPath));
builder.setFlowTable(new FlowTableRef(tableId));
flowIdentity.context.salFlowService.removeFlow(builder.build());
The key difference from my previous code was that I had not been using a Flow object to initialize the input builder. In this form, my methods for adding and removing are identical. As long as I preserve the Flow object after adding the flow, I can delete the flow, and the tables will not be wiped.
But there is an exception. On table 0, I have installed two different table-change rules with identical actions but different priorities. The matches are slightly different (one defines an in-port, the other doesn't). When I delete the more generic (and lower-priority) rule, the other one gets deleted as well.
I don't understand why this happens. Even if I try setting the priority in the input builder, this still happens. Hrm.
As I wrote in my second edit, this post suggests that flow deletion does not work explicitly based on the Id, but rather on the fields that are defined in the method's input builder. I haven't tested this, but I suspect that if the flow reference is omitted from the builder, the defined fields will be used to delete all matching rules, which could mean deleting all flows by accident if the wrong fields are set.
Given the following code to add flows:
final AddFlowInputBuilder builder = new AddFlowInputBuilder(flow);
builder.setNode(new NodeRef(nodeId));
builder.setFlowRef(new FlowRef(flowPath));
builder.setFlowTable(new FlowTableRef(tableId));
builder.setPriority(flow.getPriority());
flowIdentity.context.salFlowService.addFlow(builder.build());
The following code to remove flows works as expected (using the SAME Flow object):
final RemoveFlowInputBuilder builder = new RemoveFlowInputBuilder(flow); // same Flow object that was passed to AddFlowInputBuilder
builder.setNode(new NodeRef(flowLocation.nodeIdentifier));
builder.setFlowRef(new FlowRef(flowLocation.flowPath));
builder.setFlowTable(new FlowTableRef(flowLocation.tableIdentifier));
builder.setPriority(flow.getPriority());
builder.setStrict(Boolean.TRUE); // strict removal: only this exact flow is deleted, not similar rules on the table
flowIdentity.context.salFlowService.removeFlow(builder.build());
Without "strict" set to true, this can cause unexpected deletion of similar rules on the same table. I am unsure of the way flows are matched on deletion, with or without strict, but this much I can confirm.

Adobe LiveCycle: how to get the value of process data

I have a process data variable, addField, defined in Adobe LiveCycle as a string. I am passing the value of this variable as an input when I invoke a process. Further on, I want to check whether the value of this process variable is true or false. I am trying to use the following expression:
string(/process_data/#addField)=string("true")
but I am not getting the value out of the expression. Is the above expression correct? If not, what should be used to get the value of the process data?
I believe your XPath expression is wrong. I just did a quick mock-up in Workbench and got the correct response. Here are the details of what I did:
Created an input string variable in Workbench called addField.
Created a two-step process. The first step has two routes going out to two different set value steps.
On one of the routes, I added the following condition:
/process_data/#addField = "true"
Turned on recording, and invoked the process.
In the input parameters screen in Workbench, I entered the following text: true
In the recording, I can see the expression evaluating correctly and going to the correct set value.
Do let me know if you have any other questions.
Thanks,
Armaghan.