I am trying to use the lookup function in a DataWeave script, but I am getting the error "There is no variable named 'lookup'". Has anyone come across this and found a way to fix it?
I am using Mule 3.7.1
Update
This works at runtime, but it won't give a preview. Is there a way to make the preview work when using lookup?
This is not possible at design time in the current versions of Mule 3.7.x / 3.8.x and the current versions of Studio.
The invocation of the flow you are calling via DataWeave's lookup function only happens at execution time. I had a look at the latest 3.8.5: you won't see the error, but you will get null as the value from the lookup.
Metadata population via DataSense from the flow that will be called doesn't come through via this function in DataWeave.
Debugging may also give you a hard time in 3.7.x, but it improved in 3.8.x, so you can see the behaviour and the values going to and from the flow that the lookup executes.
I have IDEA Ultimate 2018.1 with Flow (flow-bin) configured and all the checkboxes selected. I followed this guide: https://www.jetbrains.com/help/idea/2017.2/flow-type-checker.html
The type checking takes a long time to run. When I change something in my code (reverting a wrong annotation, or creating a wrong one), I need to wait around 30 seconds to get the correct annotation, that is, for IDEA to trigger the Flow server to analyse the files and update the editor accordingly. That is quite a lot.
Can I trigger that type checking analysis manually inside IDEA to get the editor updated? Or can I change the auto-running interval?
As Kraus noticed, my version of flow-bin was old.
I was using version 0.26.0 instead of the newer 0.74.0, mainly because when I updated Flow I was not using flow-bin but flow...
Thanks. Now IDEA and flow are fast.
I wonder whether Kettle (a.k.a. Pentaho PDI) supports changing row metadata at run time.
I've implemented a couple of custom plugins:
The first plugin sends data to the second plugin. The metadata of the rows it outputs can change when certain conditions occur. In practice, this means that processRow() starts with one row metadata and then, after a while, changes it. Of course, the row sent out through putRow() is always kept in sync with the related metadata.
The second plugin receives data from the first plugin and calls getInputRowMeta() to determine the metadata of the received row. However, that metadata does not seem to be synchronized with the received row.
Given the results of this simple example, I wonder whether the Kettle engine supports this kind of run-time behavior, i.e. whether getInputRowMeta() returns the correct metadata for the specific row that has been received.
Is anybody able to provide evidence that changing metadata is actually not possible? Otherwise, is there any safe way to get the metadata of the specific row received in processRow()?
From page 616 of the book Pentaho Kettle Solutions:
The calculation of the output row metadata is something that needs to happen once and only once because the layout of all the output rows needs to be the same.
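To make that concrete, below is a minimal sketch of the conventional pattern used by Kettle step plugins, assuming a recent PDI API (roughly 5.x or later); the class name ExampleStep and the surrounding wiring are hypothetical, not taken from the plugins described above. The output row metadata is computed on the first row only and then reused for every putRow() call, which is why a downstream step can only rely on getInputRowMeta() describing one fixed layout.

```java
import org.pentaho.di.core.exception.KettleException;
import org.pentaho.di.core.row.RowMetaInterface;
import org.pentaho.di.trans.Trans;
import org.pentaho.di.trans.TransMeta;
import org.pentaho.di.trans.step.*;

// Hypothetical step class; only the "first row" metadata pattern matters here.
public class ExampleStep extends BaseStep implements StepInterface {

  // Real plugins normally keep this in their StepDataInterface class; it is kept
  // here only to make the sketch self-contained.
  private RowMetaInterface outputRowMeta;

  public ExampleStep( StepMeta stepMeta, StepDataInterface stepDataInterface,
                      int copyNr, TransMeta transMeta, Trans trans ) {
    super( stepMeta, stepDataInterface, copyNr, transMeta, trans );
  }

  @Override
  public boolean processRow( StepMetaInterface smi, StepDataInterface sdi ) throws KettleException {
    Object[] row = getRow();          // read one row from the incoming rowset
    if ( row == null ) {              // no more input: signal completion
      setOutputDone();
      return false;
    }

    if ( first ) {                    // "first" is a flag provided by BaseStep
      first = false;
      // Compute the output layout once and only once, based on the input layout.
      outputRowMeta = (RowMetaInterface) getInputRowMeta().clone();
      smi.getFields( outputRowMeta, getStepname(), null, null, this, null, null );
    }

    // Every row sent downstream is described by the same outputRowMeta; changing
    // the layout halfway through a run is what the quoted passage rules out.
    putRow( outputRowMeta, row );
    return true;
  }
}
```

If the layout really needs to vary, it typically has to be handled differently (for example by keeping the variable part inside a single field), since the engine assumes one layout per hop.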
When I preview rows in the Text file input step of Pentaho, no rows appear and the 'Show log' option displays this message:
"Dispatching started for transformation".
What does it mean? How can I overcome this issue?
It seems that either your transformation is invalid (you're missing one essential checkbox or another) or your PDI installation isn't working properly.
Which Java version are you using? And which PDI version? Try it on a fresh install, and if it still doesn't work, go over your Text file input step and validate that it's correctly configured.
Also, try removing all other steps; it could be that one of the subsequent steps is causing problems and stopping PDI from starting the transformation execution.
Well... maybe it's quite late, but I'm currently struggling with this issue in Pentaho Community Edition 8.
What I found, and what solved some of my issues, is that this message can be a warning of a potential deadlock. You have to be sure that none of these situations are present in your transformation:
An external component like a table lock by the database blocks the transformation.
The "Block this step until steps finish" step might run into a deadlock when there are more rows to process than the number of Rows in Rowset.
Within transformations there are situations when streams get split and joined again, so that the transformation blocks by design.
You can see full examples on the Pentaho documentation wiki page:
https://pentaho-community.atlassian.net/wiki/spaces/EAI/pages/386807182/Transformation+Deadlocks
I hope that it will help you!
Our SSIS packages are structured as one control package and many child packages (about 30) that are invoked from the control package. The child packages are invoked with the Execute Package Task; there is one Execute Package Task per child package. Each Execute Package Task uses a File Connection Manager to specify the path to the child package's dtsx file, and there is one File Connection Manager per child package. Each File Connection Manager has an expression defined for its ConnectionString property. The expression looks like this:
@[Template::FolderPackages] + "MyPackage.dtsx"
The file name is different for each package. The variable (FolderPackages) is specified in the SSIS package configuration file.
The error that is generated at run time is:
Error 0x80070002 while loading package file "MyPackage.dtsx". The system cannot find the file specified.
The package that fails differs from run to run, and sometimes no packages fail at all. This happens when running on exactly the same environment, with the same data, etc.
I ran FileMon during this error and found that, when the error happens, SSIS tries to read the dtsx file from the wrong place, namely from system32. I checked that this is identical to what would happen if the @[Template::FolderPackages] variable were empty, but because the very same variable is used for every child package and works for some while sometimes failing for others, I have no explanation for this.
Anything obvious, or time to raise a support call with Microsoft?
Are you using Expressions on the SSIS variables directly? Variables with Expressions are calculated each time the variable is referenced by the consuming object which needs to use it. That is where the race condition bug exists, because sometimes the expression doesn't get evaluated if another thread is already evaluating a different variable, and the default value for the variable is provided to the consumer object.
If that matches your design, these two bugs on the Microsoft Connect site discuss the problem and the workarounds:
https://connect.microsoft.com/SQLServer/feedback/details/332372/ssis-variable-expressions-dont-always-evaluate
A second one at
connect.microsoft.com/SQLServer/feedback/details/406534/ssis-2008-variable-expressions-dont-always-evaluate
A summary of the workarounds is:

- Note the parallel tasks that could run in your SSIS control flow and that use these expression variables. If you have two tasks side by side, each relies upon the same variable, and that variable has an expression to set its value, then you could hit this.
- Manually sequentialize such tasks so that they don't run in parallel, i.e. add a green arrow on the control flow so that the tasks occur in the order Task1, Task2, Task3, rather than side by side on parallel paths and rather than inside the same container with no paths.
- Avoid variable expressions: assign the local variables in the required order using a home-made Script Task that does the same kind of work, so that the variables are not evaluated using expressions (i.e. the thing that can hit this race condition). In other words, manually assign the variable values at a point in your control flow just before they are used. The point of using expressions on variables is to dynamically set a value based on another value whenever it is used, so this achieves a similar design goal but in a manual way.
- Reduce threads to minimize the potential: set the Data Flow Task's EngineThreads to 1 and MaxConcurrentExecutables to 1. This will help sequentialize execution of your package to one task at a time, but it has the side effect of possibly slowing performance.
- Create and set values on distinct copies of the variables at different scope levels in the design, so that they evaluate in different parallel execution scopes and avoid expression evaluation on parallel threads, e.g. Master::Var1, Child1::Var1, Child2::Var1.
A bit of a stab in the dark but...
I've had a similar issue with variables where readonly=false and multiple components were reading the variable at the same time and causing locking issues.
I consistently recreated the problem by running a pair of data flows that did nothing but reference the variable inside a For Loop container. Changing the variable to be read-only resolved the problem.
If you temporarily hardcode the package name, does this resolve the issue?
Turns out after sending trace info to Microsoft that we are encountering heap corruption. I'll update this question if we get to the bottom of it.
The current suggestion is to disable heap lookaside for dtexec.exe.
The official answer to this issue is that it is a bug in SQL 2005 and 2008. Many tasks accessing the same variable cause a race condition, and some tasks get the default value for the expression instead of the evaluated value.
The workaround is to ensure that the default value (the value defined in the property sheet for whatever property you are having trouble with) is the value that will work in your production environment.
This way, when the race condition happens in prod, SSIS will fall back to the package value, which will still work.
In dev? Well you're just going to have to deal with that manually until we get a bug fix from Microsoft.
There is a KB article relating to this issue: http://support.microsoft.com/kb/2448991, which states when and where this was fixed.
I got the following SQLException: "invalid options in all7"
When I googled the error message, the ONLY hits I saw were Oracle error lists that pinpointed the error as "ORA-17432: invalid options in all7". Sadly, googling for the error number brought up only combined lists with no explanation of the error, aside from one page that gave "A TTC Error Message" as the entire explanation.
The error happens when a Java program retrieves data from a prepared statement call executing a procedure that returns a fairly large, but not unreasonable, number of rows via a cursor.
I can add the stack trace from the exception as well as condensed code, but I assume that's not terribly relevant to figuring out what "ORA-17432: invalid options in all7" means.
Context:
The error seems to have appeared when the Java program was migrated from the Oracle 9 OCI driver to the Oracle 10.2 thin driver. The procedure, when run directly against the database (via Toad), works perfectly fine and returns the correct cursor with correct data and no errors.
This seems to be data specific (result set size, maybe?), since running the exact same code with a different currency as the procedure parameter (which returns a much smaller result set) works 100% fine.
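For context, a typical retrieval of this kind looks like the hedged sketch below; the procedure name, parameters, and connection details are hypothetical placeholders rather than the actual program, and the exception in question surfaces while fetching from the returned cursor.

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

import oracle.jdbc.OracleTypes;

public class CursorCallExample {
  public static void main( String[] args ) throws SQLException {
    // On older driver/JVM combinations the driver must be registered explicitly:
    // Class.forName( "oracle.jdbc.driver.OracleDriver" );

    // Thin-driver URL; host, port, SID and credentials are placeholders.
    Connection conn = DriverManager.getConnection(
        "jdbc:oracle:thin:@dbhost:1521:ORCL", "scott", "tiger" );

    // Hypothetical procedure taking a currency code and returning a REF CURSOR.
    CallableStatement cs = conn.prepareCall( "{ call get_rates(?, ?) }" );
    cs.setString( 1, "USD" );                         // procedure input parameter
    cs.registerOutParameter( 2, OracleTypes.CURSOR ); // REF CURSOR output parameter
    cs.execute();

    ResultSet rs = (ResultSet) cs.getObject( 2 );     // the returned cursor
    while ( rs.next() ) {                             // error appeared while iterating
      System.out.println( rs.getString( 1 ) );
    }

    rs.close();
    cs.close();
    conn.close();
  }
}
```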
This is almost certainly not something you're going to have control of. It looks like a problem with the way your thin driver is using the two-task common (TTC) protocol. One thing to note is that this sort of thing can be very sensitive to the version of the driver you are using. Make absolutely certain that you have the latest version of the JDBC driver for the combination of the version of Java you're using and the version of Oracle on the server.
Akohchi, you were in the right area, though not quite correct. The explanation obtained via an Oracle Support call was that this version of Java (1.3) was not compatible with the new Oracle driver. Upgrading to Java 1.4 fixed the issue.