I have the following transformation in Pentaho PDI (note the question mark in the SQL statement):
The transformation is called from a job. What I need is to get the value from the user when the job is run and pass it to the transformation so the question mark is replaced.
My problem is that there are parameters, arguments, and variables, and I don't know which one to use. How do I make this work?
What karan means is that your SQL should look like delete from REFERENCE_DATA where rtepdate = ${you_name_it}, and that you should check the Variable substitution box. The you_name_it parameter must be declared in the transformation options (click anywhere in the Spoon panel, then Options/Parameters), with or without a default value.
When running the transformation, you are prompted with a panel where you can set the value of the parameters, including you_name_it.
Parameters pass from job to transformation transparently, so you can declare you_name_it as a parameter of the job. Then, when the user runs the job, they will be prompted to give values for a list of parameters, including you_name_it.
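Note that variable substitution is a plain text replacement before the SQL is sent to the database, so if rtepdate is a date or string column you will usually have to add the quotes yourself. A minimal sketch, keeping the parameter name from above:

delete from REFERENCE_DATA where rtepdate = '${you_name_it}'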
Another way to achieve the same result is to use arguments. The question marks will be replaced by the fields specified in the Parameters list box, in the same order. Of course, the fields you use must be defined in a previous step; in your case, a Get Variables step, which reads the variables defined in the calling job and puts them in a row.
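A sketch of this variant (the field name is just an example): the SQL keeps the plain placeholder,

delete from REFERENCE_DATA where rtepdate = ?

and a field such as rtepdate_value, produced by the Get Variables step, is listed in the step's Parameters box. The first ? is bound to the first listed field, the second ? to the second, and so on.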
Note that there is a ready-made Delete step to delete records from a database. Specify the table name (which can be a parameter: just press Ctrl+Space in the box), the table column, and the condition. The condition value comes from a previous step, such as the Get Variables step, like in the argument method.
You can use variables or arguments. If you are using variables, then use
${variable1}
syntax in your query; if you want to use arguments, then you have to use ? in your query and list the names of those arguments in the "Field names to be used as arguments" section. Both will work. Let me know if you need further clarification.
Related
I have an API endpoint to retrieve all users. There are 3 query parameters for searching/filtering the results, as follows:
GET /users?name=test&age=23&area=abc
Now I want to introduce an option to ignore the case when searching on the name parameter. For example, the above API call should return results even if the name equals Test or tesT.
What's the correct way of implementing this option? Should I add another query parameter, or is there a better way of implementing it?
In this specific case, an easier option could be to define the query parameter value as a regex, since a regex itself allows us to specify whether a string should be matched case insensitively or case sensitively.
In other scenarios, another option would be to incorporate the specification (that the value needs to be case insensitive) into the query param value itself, like
http://localhost:3000?name=case_insensitive(test)
I would introduce two new parameters for name, namely nameIgnoreCase and nameCaseSensitive. In this case the user of the endpoint can, and must, decide. If this is well documented, the user also gets a hint that this question exists at all.
You can also continue to provide name with the default behavior, which then falls back to either nameIgnoreCase or nameCaseSensitive.
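Whichever way the option is exposed in the URL, on the backend the ignore-case variant usually just changes the comparison. A rough SQL sketch, with table and column names chosen only for illustration:

-- name (case sensitive, depending on the column collation)
select * from users where name = 'test' and age = 23 and area = 'abc';

-- nameIgnoreCase
select * from users where lower(name) = lower('Test') and age = 23 and area = 'abc';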
I want to describe a problem that we are facing in our project regarding an input parameter filtering issue.
Problem:
We have 5 input parameters in our SAP HANA view, each with the default value '*', so that it is possible to select all values.
Now, when we select data from this HANA view into our table function using a script, we pass the input parameter values using the "PLACEHOLDER" syntax, but with this syntax '*' is not working (it returns no result).
More importantly, if I hard-code the value as '*', the data is shown correctly, but if I use a variable (that holds the '*' value), no data is shown.
For example:
For the plant (WERKS) filter, if I use the constant '*', I get all data.
For the plant (WERKS) filter, if I use a variable (ZIN_WERKS) that holds the '*' value passed from the input screen of the final view, I get no data.
I checked that the variable is correctly filled with the '*' value, but there is still no data, which we are not able to understand.
An additional question: should we always give '*' as the default value for input parameters? If the default is blank or empty, the view always filters on blank values, and the value help cannot be generated either.
Have you ever encountered these issues? They seem like very basic points in SAP HANA...
We would really appreciate any help/hints regarding these issues.
This is indeed a question that has been asked already. The point here is that you seem to want to mimic the selection behaviour of SAP NetWeaver-based applications in your HANA models.
One difference to consider here is that the wildcard character on SQL databases like HANA is not * but %.
Also, the wildcard search only works when your model uses a LIKE comparison, not with = (equals), >, <, or any other range comparison.
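For illustration, the view and package names below are invented, but the parameter name ZIN_WERKS is taken from the question. Passing the SQL wildcard when querying the view from SQLScript would look like:

SELECT "WERKS", "MATNR"
FROM "_SYS_BIC"."mypackage/CV_MATERIALS" (PLACEHOLDER."$$ZIN_WERKS$$" => '%');

and inside the view the filter on WERKS has to be a LIKE-style comparison against the input parameter for the wildcard to take effect.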
In short: if you want to have this specific behaviour just like in SAP NetWeaver, you will have to build your own scripted view and explicitly test which parameters have been provided and which are "INITIAL".
One useful feature for this scenario is the APPLY_FILTER() function in SQLScript, which allows you to apply dynamic filters in information models.
More on that can be found in the modelling guide.
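A rough SQLScript sketch of that approach inside the table function (variable, table, and column names are invented; ZIN_WERKS is the input parameter from the question), building the filter string only for parameters that were actually supplied:

DECLARE v_filter VARCHAR(1000) := '';
IF :ZIN_WERKS <> '' AND :ZIN_WERKS <> '*' THEN
    v_filter := ' "WERKS" = ''' || :ZIN_WERKS || ''' ';
END IF;

var_all = SELECT "WERKS", "MATNR" FROM "MYSCHEMA"."MATERIALS";
IF :v_filter = '' THEN
    var_out = SELECT * FROM :var_all;
ELSE
    var_out = APPLY_FILTER(:var_all, :v_filter);
END IF;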
I'm struggling with what seems like the simplest thing there is: assigning a value to a mapping variable that I later use in my flow to make a decision... With my MS SSIS background this is a 10-second task; in Informatica PowerCenter, however, it is taking me hours...
So I have a mapping variable $$V_FF and a workflow variable $$V_FF. At first the names were different but while trying things out, I changed that. But that shouldn't matter, right?
In a mapping, I have a view as a source that returns -1, 0 or 1. The mapping variable aggregate function is set to MIN.
In the session that I have created for this mapping, I have a post-session assignment between the wf variable and the mapping variable.
In this mapping, I use the SETVARIABLE function in an Expression Transformation.
Every time I run the workflow, I see in the log that it uses the persisted value instead of assigning a new value on each run of the flow...
What am I missing here?
Thanks in advance!
Well, the variables here do work in a somewhat different way indeed. It would be easier to come up with a good answer if you explained the whole scenario: what are you using the variable for?
Anyway, the variable values are persisted in the repository and reused, as you've noticed. For your scenario you could add an Assignment Task to the Workflow before your Session. Set some low value (e.g. -1 if you expect your variable to have some positive value after the Mapping run) and use the Pre-session Variable Assignment to pass the value to the Mapping. This will override the use of the persisted repository value. Of course, in this case you will need to use Max aggregation.
In the end, I managed to accomplish what I wanted back then. There might be a better way, but this solution is easy to maintain and easy to understand.
Create a variable in your workflow, let's say $$FailureFlag, with type Integer.
Create a view in your DB that returns 1 row with an integer value between 0 and x, where x is a positive integer value.
Create a mapping with the view that we just created as the source and a dummy table as the target.
In this mapping you also create a variable, let's say $$MYVAR, with type Integer and aggregation "Count". In an Expression Transformation I assign the result of the view, column FF, to this variable $$MYVAR by using SETVARIABLE($$MYVAR,FF).
From this mapping, create a session in your workflow. In the Components tab, in the "Postsession_success_variable_mapping" section, add a row and link the workflow variable $$FailureFlag with the mapping variable $$MYVAR.
Add a Decision task right after the session you just created, and test the content of your workflow variable, for example $$FailureFlag = 1.
Then connect your Decision task to your destination and add a test clause, for example:
"$MyDecision.Condition = true AND $MyDecision.PrevTaskStatus = succeeded"
Voila, that's it.
I would like to use predefined queries from a CSV file.
The problem is that some of the values in the queries must be chosen randomly, and each query has a different number of parameters.
So I have tried something like this:
"select * from table where column = "${variable1};"
Please note that variable1 is already defined and has a proper value.
The problem is that JMeter executes the query without replacing the parameter with its value.
It is not an option to use "?" (question mark) as explained in the basic tutorial.
Does anybody have an idea how to solve this issue, without writing custom code in a pre-processor such as Beanshell, etc.?
It is possible to use JMeter variables in SELECT statements.
The reasons for the variable not being resolved can be:
(Most likely) The variable is not set. Use a Debug Sampler and View Results Tree listener combination to double-check its value.
You have a syntax error in your SQL query (in the statement from the question the quoting is off; see the corrected statement at the end of this answer).
If you have a "composite" variable name, where variable is a prefix and 1 is a random number coming from e.g. the __Random() or __threadNum() function, you need to reference the variable a little differently, like:
${__evalVar(variable${__threadNum})}
or
${__evalVar(${variable}${__Random(1,9,)})}
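For reference, once variable1 is defined, the statement from the question only needs its quoting fixed; JMeter substitutes the value before the query is sent to the database:

select * from table where column = '${variable1}'

(single quotes around the variable if the column is a string type, no quotes for numeric columns).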
Is it possible to mock/spy on functions with T-SQL? I couldn't find anything mentioning it. I was thinking of creating my own implementation using SpyProcedure as a guideline (if no implementation exists). Has anyone had any success with this?
Thanks.
In SQL Server, functions cannot have side effects. That means that in your test you can replace the inner function with one that returns a fixed result, but there is no way to record the parameters that were passed into the function.
There is one exception: if the function returns a string and the string does not have to follow a specific format, you could concatenate the passed-in parameters and later assert that the value coming back out contains all the correct values. But that is a very special case and not generally possible.
To fake a function, just drop or rename the original and create your own within the test. I would put this code into a helper procedure, as it will probably be called from more than one test.
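A minimal sketch of that approach inside a tSQLt test; dbo.GetExchangeRate and dbo.CalculateOrderTotal are made-up names standing in for the dependency and the code under test:

CREATE PROCEDURE MyTests.[test total uses faked exchange rate]
AS
BEGIN
    -- replace the real function with a fake that returns a fixed value
    DROP FUNCTION dbo.GetExchangeRate;
    EXEC('CREATE FUNCTION dbo.GetExchangeRate(@currency CHAR(3))
          RETURNS DECIMAL(10,4) AS BEGIN RETURN 2.0 END');

    -- act: run the code that depends on the faked function
    DECLARE @total DECIMAL(10,2) = dbo.CalculateOrderTotal(42);

    -- assert
    EXEC tSQLt.AssertEquals @Expected = 20.00, @Actual = @total;
END;

Since tSQLt wraps each test in a transaction that is rolled back, dropping the original function inside the test does not leak into other tests.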