I have a parameter with the same name defined both in a job and in a transformation inside that job. I use the one in the transformation for debugging.
The problem is that when I execute the job, the transformation doesn't use the job's value but its own.
Example:
  job
    parameter: week = 10
  transformation
    parameter: week = 30
I need the transformation to use the job's value, week = 10.
You need to check the option "Pass parameter values to sub transformation" in the "execute transformation" step of the job,
and use a "Get Variables" step in the transformation.
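With that option checked, the job-level value wins over the transformation's own default. A minimal sketch of running the job from the command line (the file name is hypothetical), so that week = 10 is what reaches the transformation:

    kitchen.sh -file:"/home/user/myjob.kjb" -param:week=10

The transformation's default of week = 30 then only applies when neither the job nor the command line supplies a value.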
I have a Pentaho job that uses a shell script to process some data.
But I found that if I want to use the script's result in later steps, I have to write it into a file and then read the file to assign variables.
Is there an easier way to use the result of a script step in the following steps?
In Pentaho you cannot create a variable and use it in the same transformation.
Basically you just need to create one ktr and one job:
the first ktr is in charge of performing some task and saving the variable with a Set Variables step (scope "Valid in the root job");
variables created in the first ktr are then also available at job level.
If you want to use the variable in another ktr:
the second ktr should use a Get Variables step at the beginning to retrieve the variable created in the previous transformation;
the transformations should be executed sequentially using a job.
In your case, you should run the shell script in the first ktr, turn the result into a variable, and save it with Set Variables. The job which invokes that ktr, and any later ktr it runs, is then able to use the variable created in the first ktr.
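As a minimal sketch, assuming the script simply prints its result to stdout: the first ktr can run it with an "Execute a process" step (which captures the command's output in a field) and pass that field to Set Variables, so nothing has to go through a temporary file. The script itself (hypothetical) would look like:

    #!/bin/sh
    # hypothetical script: print the result to stdout instead of writing a file;
    # an "Execute a process" step in the first ktr captures this output as a field
    row_count=$(wc -l < /tmp/input.dat)
    echo "$row_count"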
I created a Control-M job with all required parameters (named PARM1, PARM2, PARM3). I am able to order the job, but under the Monitoring section, when I try to run the job, the parameters I set while creating the job are not passed to the script.
May I know why the parameters are not being passed to the script?
Could you specify what kind of job you are designing?
The job's type could determine this behavior.
It is important to consider this aspect. For example, if the job type is OS Script, then Control-M will send the values of the 3 parameters at the indicated positions. However, if the job is of type OS Command, then you must specify those parameters yourself in the command line to execute.
For example, in the What specification of the command you should write:
thecommand %%PARM1 %%PARM2 %%PARM3
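Either way, the values arrive at the script as ordinary positional arguments, so a quick check along these lines (a hypothetical script) shows whether they are coming through:

    #!/bin/sh
    # hypothetical check: Control-M resolves %%PARM1..%%PARM3 before launching
    # the script, so the values arrive here as positional arguments $1..$3
    echo "PARM1=$1 PARM2=$2 PARM3=$3"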
There is a case in my ETL where I am trying to take the "Table Output" table name from the command line. The table name does not correspond to any streaming field's name. Is there any way to get this done in Pentaho Kettle?
Pentaho DI is a metadata-based tool. I assume you are trying to pass the output table name from the command line like below:
.../pan.sh -file:"/home/user/sample.ktr" -param:table_output=SOMETABLE
Assuming the command above is what you are trying to run:
First, change the transformation settings of sample.ktr (just an example) and add the parameter name "table_output" to the Parameters section.
Next, in the Table Output step, use this parameter in the format ${table_output} in place of the table name. This should solve your query.
In case you are passing the parameters to a job: as mentioned above, the first section, adding the parameters, remains the same.
Next, for a separate transformation (.ktr) file used inside a job, double-click the ktr entry (from the job file) and you will find a Parameters section; add the parameters there.
Thirdly, inside the .ktr file, repeat the step from above (first section) and use a Set Variables or Table Output step. A Set Variables step will ensure that the parameter is available across the entire job; it mostly depends on your requirement.
Hope it helps :)
This should give you an idea of how to do it. Since transformations are just XML, you can read the metadata from them. Basically, you find the Table Output step and set its table name as a variable, in this case "TABLE".
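A minimal sketch of that idea, assuming the usual .ktr layout in which a Table Output step is a <step> element with <type>TableOutput</type> and a <table> child:

    # read the Table Output step's table name straight from the ktr XML
    TABLE=$(xmllint --xpath "string(//step[type='TableOutput']/table)" sample.ktr)
    echo "Table Output writes to: ${TABLE}"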
When you log a job in Pentaho Data Integration, one of the fields is ID_JOB, described as "the batch id - a unique number increased by one for each run of a job."
Can I get this ID? I can see it in my logging tables, but I want to set up a transformation to get it. I think there might be a runtime variable that holds an ID for the running job.
I've tried using the Get Variables and Get System Info transformation steps to no avail. I am a new Kettle user.
You have the batch IDs of the current transformation and of the parent job available in the Get System Info step. In PDI 5.0 they come before the "command line arguments" entries, but the order changes with each version, so you may have to look it up.
You need to create the variable yourself to house the parent job batch ID. The way to do this is to add another transformation as the first step in your job that sets the variable and makes it available to all the other subsequent transformations and job steps that you'll call from the job. Steps:
1) As you have probably already done, enable logging on the job
JOB SETTINGS -> SETTINGS -> CHECK: PASS BATCH ID
JOB SETTINGS -> LOG -> ENABLE LOGGING, DEFINE DATABASE LOG TABLE, ENABLE: ID_JOB FIELD
2) Add a new transformation, call it "Set Variables", as the first step after the start of your job
3) Create a variable that will be accessible to all your other transformations and that contains the value of the current job's batch ID
3a) ADD A GET SYSTEM INFO STEP. GIVE A NAME TO YOUR FIELD - "parentJobBatchID" AND TYPE OF "parent job batch ID"
3b) ADD A SET VARIABLES STEP AFTER THE GET SYSTEM INFO STEP. DRAW A HOP FROM THE GET SYSTEM INFO STEP TO THE SET VARIABLES STEP AS ITS MAIN OUTPUT
3c) IN THE SET VARIABLES STEP SET FIELDNAME: "parentJobBatchID", SET A VARIABLE NAME - "myJobBatchID", VARIABLE SCOPE TYPE "Valid in the Java Virtual Machine", LEAVE DEFAULT VALUE EMPTY
And that's it. After that, you can go back to your job and add subsequent transformations and steps and they will all be able to access the variable you defined by substituting ${myJobBatchID} or whatever you chose to name it.
IT IS IMPORTANT THAT THE SET VARIABLES STEP IS THE ONLY THING THAT HAPPENS IN THE "Set Variables" TRANSFORMATION, AND THAT ANYTHING ELSE WHICH NEEDS THAT VARIABLE IS ADDED ONLY TO OTHER TRANSFORMATIONS CALLED BY THE JOB. This is because transformations in Pentaho are multi-threaded and you cannot guarantee that the Set Variables step will happen before other activities in that transformation. The parent job, however, executes sequentially, so you can be assured that once you establish the variable containing the parent job batch ID in the first transformation of the job, all other transformations and job steps will be able to use that variable.
You can test that it worked before you add other functionality by adding a "Write To Log" step after the Set Variables transformation that writes the variable ${myJobBatchID} to the log for you to view and confirm it is working.
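If you prefer a command-line style check instead, a Shell job entry placed after the "Set Variables" transformation could echo the value, since Kettle should substitute ${...} variables in an inline script before running it. A sketch, assuming the variable name from above:

    #!/bin/sh
    # hypothetical check: Kettle should replace ${myJobBatchID} before the shell
    # runs, so the resolved batch ID ends up in the job log
    echo "parent job batch id: ${myJobBatchID}"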
I have a Pentaho transformation which consists of, for example, 10 steps. I want to start this job for N input parameters, but not in parallel; each evaluation should start only after the previous transformation has fully completed (the process is done in a transaction and committed or rolled back). Is this possible with Pentaho?
You can add 'Block this step until steps finish' from the Flow category to your transformation. Or you can combine the 'Wait for SQL' component from Utility with a loop in your job.
Regards
Mateusz
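For completeness, and outside of Spoon entirely: since a pan.sh call only returns once the transformation has finished, and exits non-zero on failure, you could also drive the N runs sequentially from a shell loop. A sketch, with a hypothetical file name, parameter name, and value list:

    #!/bin/sh
    # run the transformation once per parameter value, strictly one after another;
    # stop at the first failed run
    for week in 10 20 30; do
        pan.sh -file:"/home/user/sample.ktr" -param:week="$week" || exit 1
    done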
Maybe you should do it using jobs instead of transformations. Job entries only run in sequence, while transformation steps run in parallel. (Strictly speaking, a transformation has an initialization phase which runs in parallel, and then the rows flow through the steps.)
If you can't use jobs, you can always do what Mateusz said.