I need one KTR that is used to invoke multiple other KTRs.
That base KTR should connect to MongoDB and, based on the mapping stored in MongoDB, invoke the specific KTR.
Am I supposed to use sub-transformation mapping? But then how would I configure which KTR gets invoked?
You need a job instead of mapped transformations. Configure the first KTR in your job and, whatever that KTR's output is, put it in a variable (using Set Variables). Then configure the second transformation entry with ${VariableName} as its transformation filename. The job will dynamically invoke whichever KTR your first KTR selected.
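For reference, this is roughly what the second entry looks like in the saved .kjb XML; a minimal sketch, assuming the first KTR set a variable named NEXT_KTR (the entry and variable names here are illustrative):

    <!-- Illustrative .kjb fragment: the TRANS entry's filename is a
         variable, so PDI resolves which KTR to run at execution time. -->
    <entry>
      <name>Run selected transformation</name>
      <type>TRANS</type>
      <filename>${NEXT_KTR}</filename>
    </entry>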
I have a YAML properties file stored in an S3 bucket. In Mule 4 I can read this file using the S3 connector. I need to use the properties defined in this file (reading dynamic values and using them in Mule 4) in DB connectors. I am not able to create properties from this file such that I can use them as, for example, ${dbUser} in a Mule configuration or flow. Any guidance on how I can accomplish this?
You will not be able to use the S3 connector to do that. The connector can read the file in an operation at execution time, but property placeholders like ${dbUser} have to be resolved earlier, at deployment time.
You might be able to read the value into a variable (for example: #[vars.dbUser]) and use the variable in the Database connector configuration. That is called a dynamic configuration, because it is evaluated dynamically at execution time.
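A rough sketch of what that could look like, assuming an earlier step has already read the YAML from S3 and parsed it into the payload (the host, database, and property names here are placeholders, not your actual values):

    <!-- Global element: a dynamic configuration, because the connection
         parameters contain expressions evaluated per event -->
    <db:config name="Database_Config">
      <db:my-sql-connection host="db.example.com" port="3306" database="appdb"
                            user="#[vars.dbUser]" password="#[vars.dbPassword]"/>
    </db:config>

    <flow name="queryFlow">
      <!-- assume the YAML from S3 was read and parsed before this point -->
      <set-variable variableName="dbUser" value="#[payload.dbUser]"/>
      <set-variable variableName="dbPassword" value="#[payload.dbPassword]"/>
      <db:select config-ref="Database_Config">
        <db:sql>SELECT 1</db:sql>
      </db:select>
    </flow>

Each event that reaches db:select triggers evaluation of the expressions, so the connection details are resolved per execution rather than at deployment time.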
The way my ADF setup currently works is that I have multiple pipelines, each containing at least one activity. Then I have one big pipeline that chains these pipelines together.
However, in the big "master" pipeline, I would now like to use the output of an activity from one pipeline and pass it to another pipeline, all of it orchestrated from the "master" pipeline.
My "master" pipeline would look something like this:
What I have tried is adding a parameter to "Execute Pipeline2" and passing:
@activity('Execute Pipeline1').output.pipeline.runId.output.runOutput
@activity('Execute Pipeline1').output.pipelineRunId.output.runOutput
@activity('Execute Pipeline1').output.runOutput
How would one go about doing this?
Unfortunately, we don't have a way to pass the output of an activity across pipelines. Right now pipelines don't have outputs (only activities do).
We have a work item that will allow a user to choose what the output of a pipeline should be (imagine a pipeline with 40 activities; the user would be able to choose the output of activity 3 as the pipeline output). However, this work item is in very early stages, so don't expect to see it soon.
For now, the only way would be to save the output you want to storage (Blob, for example) and then read it and pass it to the other pipeline. Another method could be a Web activity that gets the pipeline run (passing the run ID); you get the output using the ADF SDK or REST API and then pass that to the next Execute Pipeline activity.
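A sketch of the second approach using the azure-mgmt-datafactory Python SDK, assuming a recent SDK version that accepts azure-identity credentials; the subscription, resource group, factory, and activity names are placeholders:

    # Query the child pipeline's activity runs by run ID, then read the
    # output of the activity you care about and hand it to the next pipeline.
    from datetime import datetime, timedelta
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient
    from azure.mgmt.datafactory.models import RunFilterParameters

    client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

    filters = RunFilterParameters(
        last_updated_after=datetime.utcnow() - timedelta(days=1),
        last_updated_before=datetime.utcnow(),
    )
    runs = client.activity_runs.query_by_pipeline_run(
        "<resource-group>", "<factory-name>", "<pipeline-run-id>", filters
    )
    for run in runs.value:
        if run.activity_name == "MyActivity":  # the activity whose output you need
            print(run.output)                  # pass this to the next Execute Pipeline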
I want to dynamically change a connection string in a Custom task and then have this reflected in the ADF pipeline. Is there a way I can set the pipeline parameter value in the Custom Code task and make my connection string parameterized in the ADF pipeline?
Thanks
This feature is now supported by Data Factory; read more here: https://learn.microsoft.com/en-us/azure/data-factory/parameterize-linked-services
Always think of the context where it will be running: for example, if you reference a pipeline parameter from the linked service, you will receive a warning, but if at runtime a pipeline supplies what you configured at the linked service, you will have no problems.
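For illustration, a parameterized linked service looks roughly like this (a sketch in the spirit of the docs page above; the server name and the DBName parameter are placeholders):

    {
      "name": "AzureSqlLS",
      "properties": {
        "type": "AzureSqlDatabase",
        "parameters": {
          "DBName": { "type": "String" }
        },
        "typeProperties": {
          "connectionString": "Server=tcp:myserver.database.windows.net,1433;Database=@{linkedService().DBName};"
        }
      }
    }

A dataset that references the linked service then supplies DBName, and that value can in turn come from a pipeline parameter, which is what lets you set it dynamically per run.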
Hope this helped!
I hope this message finds everyone well!
I'm stuck on a situation in the Pentaho PDI tool and I'm looking for an answer (or at least a light at the end of the tunnel) to solve it!
Every month I have to import a bunch of xls files from different clients. Every file has a different name (which is assigned randomly), and the files sit in a folder named after the client. However, I use the same process for all clients and situations.
Is there a way to pass the name of the directory as a variable and change this variable on every run? How can I read these files from different paths?
The answer you're looking for requires a flow with variables, as you stated. In a job, you start with a KTR that outputs the clients' names and their respective folders. In the same job you then pass these results on as variables, either to another job if needed or to a KTR: enable the options "Copy previous results to parameters" and "Execute for every input row" (Advanced tab), and in the Parameters tab map each variable name to the stream column it comes from in the previous KTR (i.e. client name and directory), as sketched below.
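Roughly what that entry looks like in the saved .kjb XML; this is a sketch from memory of PDI's job serialization, with CLIENT_DIR, the "directory" column, and the file paths all illustrative:

    <entry>
      <name>Import client files</name>
      <type>TRANS</type>
      <filename>${Internal.Job.Filename.Directory}/import_client.ktr</filename>
      <!-- "Copy previous results to parameters" -->
      <params_from_previous>Y</params_from_previous>
      <!-- "Execute for every input row" -->
      <exec_per_row>Y</exec_per_row>
      <parameters>
        <parameter>
          <name>CLIENT_DIR</name>
          <!-- stream column produced by the first KTR -->
          <stream_name>directory</stream_name>
        </parameter>
      </parameters>
    </entry>

Inside import_client.ktr, declare CLIENT_DIR as a named parameter and point the Excel input step's directory at ${CLIENT_DIR}, so each execution reads that client's folder.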
If you have trouble creating this flow, I can spare some more time and share a sample if you need.
EDIT:
Sample Here
You have an example of this in the samples directory that ships with your PDI distribution.
Your case is covered by samples/jobs/run_all.
This question is regarding Spinnaker. Within each Pipeline, you have the ability to define custom parameters. When a Pipeline is triggered, you have the ability to use the default value, or supply a new value to those parameters.
I assume I can create Stages within that Pipeline that use the values of the parameters when the Pipeline is triggered. However, I can't figure out how to access these values in any Stage of the Pipeline.
For example, I have a Pipeline "Test". I create a parameter "Version", in the configuration for "Test".
[Screenshot: Creating a parameter]
Then, I add a Pipeline Stage to execute a Jenkins job. The job I have selected has a parameter, "Version".
[Screenshot: Using a parameter's value]
When the Pipeline "Test" is triggered, I want it to use the value of the Pipeline parameter "Version" and supply it to the Jenkins job. I tried the following syntax for the Jenkins job's Version field: $Version, {{Version}}, #Version, ((Version)), (Version), {Version}, #Version, and more. Nothing seems to translate into the value of the Pipeline parameter "Version", when the Pipeline is triggered. How do I do this?
On a related note, is there a way to use the Application name or Pipeline name in Pipeline Stages?
Parameters (and templated pipelines in general) are accessed via Spring Expression Language.
If your pipeline has a Version parameter and your Jenkins stage has a Version parameter, then in the Jenkins stage configuration you explicitly have to map the pipeline's Version to the Jenkins stage's Version using the value ${parameters.Version}.
Any pipeline parameter is accessible via the ${parameters.XXX} syntax.
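For illustration, the Jenkins stage in the pipeline's JSON would end up looking something like this (the master and job names are placeholders):

    {
      "type": "jenkins",
      "name": "Run Jenkins job",
      "master": "my-jenkins",
      "job": "build-app",
      "parameters": {
        "Version": "${parameters.Version}"
      }
    }

As for your side question: if I remember right, the execution context is exposed to expressions as well, so ${execution.application} and ${execution.name} should give you the application name and pipeline name.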
Have a look at the pipeline expressions guide for more examples.