Saving an extracted value in a specific parameter file (data table) and using that file as input for a SAP GUI LoadRunner script - scripting

I am working on SAP GUI scripting in LoadRunner.
I have a script in an Action which generates a "Delivery Number", e.g. 80004600.
I am able to successfully extract the value of the delivery number 80004600 into a parameter called "Delivery_Number" using sapgui functions, as shown below:
sapgui_status_bar_get_type("Delivery_Status", LAST);
sapgui_status_bar_get_text("Delivery", LAST);
// Save parameter 2 of the status-bar message into {Delivery_Number}
sapgui_status_bar_get_param("2", "Delivery_Number", LAST);
I need to pass this "Delivery_Number" to a table in the next step:
sapgui_table_fill_data("Table",
    tblSAPSAMPLE_EX_OBJECT,
    "{Delivery_Number}",
    BEGIN_OPTIONAL,
    "AdditionalInfo=sapgui2017",
    END_OPTIONAL);
This does not work: the table step will not accept its input from the parameter saved above, and the step fails on replay with an error.
However, if the Delivery Number (e.g. 80004600) is supplied through a table parameter created from a data file, e.g. data_2.dat, the step passes:
sapgui_table_fill_data("Table",
    tblSAPSAMPLE_EX_OBJECT,
    "{data_2}",
    BEGIN_OPTIONAL,
    "AdditionalInfo=sapgui2017",
    END_OPTIONAL);
I would like C code that saves the extracted value 80004600, i.e. the Delivery Number, into the parameter file "data_2.dat" shown above, so that the next Action can pass the value to the table mentioned above.
Is there any other possible way of doing this? Any help would be really appreciated.

Won't work. Data files are loaded into RAM at the beginning of script execution, so any value you write to a file will not be seen until the next execution (if ever). You also then have the problem of managing locks if you have multiple users writing to the same file.
Virtual Table Server (VTS) exists as a broker for exactly these types of items.
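If you do go the VTS route, a minimal sketch looks like the following. The host name, port, and column name here are assumptions, and the lrvtc_* client API has to be enabled for the script:
// Push the extracted value into a VTS column so any Vuser/run can consume it
lrvtc_connect("vts_host", 8888, VTOPT_KEEP_ALIVE);
lrvtc_send_message("DeliveryNumbers", lr_eval_string("{Delivery_Number}"));
// ... in the consuming script or Action: pop the oldest value; it is saved
// into a parameter named after the column, i.e. {DeliveryNumbers}
lrvtc_retrieve_message("DeliveryNumbers");
lrvtc_disconnect();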
That said, you have many more paths to explore before resorting to Virtual Table Server for a "within the same script" operation. Just save the value under a new parameter name, or even into a C string instead of a LoadRunner parameter, and use that C string in place of the parameter.
Try 1:
sapgui_table_fill_data("Table",
    tblSAPSAMPLE_EX_OBJECT,
    lr_eval_string("{Delivery_Number}"),
    BEGIN_OPTIONAL,
    "AdditionalInfo=sapgui2017",
    END_OPTIONAL);
Try 2:
lr_save_string(lr_eval_string("{Delivery_Number}"), "data_2");
sapgui_table_fill_data("Table",
    tblSAPSAMPLE_EX_OBJECT,
    lr_eval_string("{data_2}"),
    BEGIN_OPTIONAL,
    "AdditionalInfo=sapgui2017",
    END_OPTIONAL);
Try 3: assumes an appropriately declared and initialized C variable, e.g.:
char MyCString[64];    // large enough to hold the delivery number
strcpy(MyCString, lr_eval_string("{Delivery_Number}"));
sapgui_table_fill_data("Table",
    tblSAPSAMPLE_EX_OBJECT,
    MyCString,
    BEGIN_OPTIONAL,
    "AdditionalInfo=sapgui2017",
    END_OPTIONAL);

The resolution below worked for me, but only in VuGen; it is not working in Controller.
I wrote the C code below to copy the Delivery Number (e.g. 80004600) into the data_2.dat file.
Declare the following variables in globals.h:
char *filename;
long filestream;    // VuGen convention: file pointers are typically held in a long
Write the following snippet right after the value is extracted into {Delivery_Number}:
filename = "C:\\filepath\\data_2.dat";
filestream = fopen(filename, "a");    // append below the existing rows
if (filestream == NULL) {
    lr_error_message("Could not open %s", filename);
    return -1;
}
fprintf(filestream, "%s\n", lr_eval_string("{Delivery_Number}"));
fclose(filestream);
Then I used that data_2.dat file as the parameter {data_2} in the next Action, and the parameter substitution was successful.
So it is possible to extract a dynamic value in one Action of the script, save it into a parameter file using a C function, and then use that parameter file in another Action of the same script (at least in VuGen; see the caveats above about why this is unreliable in Controller).
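For a hand-off within the same run, a file-free variant of Try 2 above avoids the VuGen/Controller discrepancy entirely, since parameters saved with lr_save_string persist across Actions of the same Vuser. A minimal sketch, reusing the names from the snippets above:
// Action_1.c -- right after the extraction
lr_save_string(lr_eval_string("{Delivery_Number}"), "data_2");

// Action_2.c -- the saved parameter substitutes like a file parameter
sapgui_table_fill_data("Table",
    tblSAPSAMPLE_EX_OBJECT,
    lr_eval_string("{data_2}"),
    BEGIN_OPTIONAL,
    "AdditionalInfo=sapgui2017",
    END_OPTIONAL);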

Related

Repast: how to add and set a new parameter directly from the code instead of GUI

I want to create a parameter that contains a list of strings (a list of hub codes). This list is created by reading an external CSV file (it may contain different codes depending on the hub codes in the CSV file).
What I want is an easy, automatic way to perform batch runs for each hub code in the list.
So the questions are:
1) How to add and set a new parameter directly from the code (during initialization, when reading the CSV) instead of via the GUI parameter panel?
2) How to avoid manual configuration of the hub list in the batch-run configuration?
Something like the following, in your ContextBuilder, should work for adding the parameter:
Parameters params = RunEnvironment.getInstance().getParameters();
((DefaultParameters)params).addParameter("foo", "Big Foo", Integer.class, 3, false);
You would read the csv file to get the parameter name and value.
I'm not sure I completely understand the batch-run configuration question, but each batch run has a run number associated with it:
RunState.getInstance().getRunInfo().getRunNumber()
If you can associate line numbers in your CSV parameter file with run numbers (e.g. run number 1 uses line 1, and so on), then each batch run will use a different parameter line.

How to add a whole package to a transport request by code?

My task is to do all of these steps programmatically:
Create a new transport request - I managed to do this with TR_INSERT_REQUEST_WITH_TASKS.
Add the package content to the newly created transport - this is the part I am stuck on.
Release the transport - I managed to do this with TR_RELEASE_REQUEST.
I can manually add the package to the transport request via transaction SE03 and then release it with FM TR_RELEASE_REQUEST, but that is not the goal: everything from step 1 to 3 has to happen in one program execution. If anyone can guide me through step 2, it would be very helpful. Thanks in advance.
In your program, you must:
First, get the list of objects which belong to the package from the table TADIR (object key in columns PGMID, OBJECT and OBJ_NAME; package in column DEVCLASS).
Then add these objects to the task or transport request via the non-released function modules TRINT_APPEND_COMM or TR_APPEND_TO_COMM_OBJS_KEYS.
To add the whole package to a request you must first select all the objects from the package and add them one by one. You can do it like this:
DATA: l_trkorr  TYPE trkorr,
      l_package TYPE devclass VALUE 'ZPACKAGE'.

cl_pak_package_queries=>get_all_subpackages( EXPORTING im_package     = l_package
                                             IMPORTING et_subpackages = DATA(lt_descendant) ).
INSERT VALUE cl_pak_package_queries=>ty_subpackage_info( package = l_package ) INTO TABLE lt_descendant.

SELECT pgmid, object, obj_name
  FROM tadir
  FOR ALL ENTRIES IN @lt_descendant
  WHERE devclass = @lt_descendant-package
  INTO TABLE @DATA(lt_segw_objects).

DATA(instance) = cl_adt_cts_management=>create_instance( ).
LOOP AT lt_segw_objects ASSIGNING FIELD-SYMBOL(<fs_obj>).
  TRY.
      instance->insert_objects_in_wb_request( EXPORTING pgmid    = <fs_obj>-pgmid
                                                        object   = <fs_obj>-object
                                                        obj_name = CONV trobj_name( <fs_obj>-obj_name )
                                              IMPORTING result   = DATA(result)
                                                        request  = DATA(request)
                                              CHANGING  trkorr   = l_trkorr ).
    CATCH cx_adt_cts_insert_error.
  ENDTRY.
ENDLOOP.
Note that you cannot add objects that are already locked in another request; that will raise the cx_adt_cts_insert_error exception. There is no way to unlock objects programmatically, only via the SE03 tool.
You can also check the code behind "Write Transport Entry", reachable in SE80 by right-clicking on the package.

How to use insert_job

I want to run a BigQuery SQL query using the insert method.
I ran the following code:
JobConfigurationQuery = Google::Apis::BigqueryV2::JobConfigurationQuery
bq = Google::Apis::BigqueryV2::BigqueryService.new
scopes = [Google::Apis::BigqueryV2::AUTH_BIGQUERY]
bq.authorization = Google::Auth.get_application_default(scopes)
bq.authorization.fetch_access_token!
query_config = {query: "select colA from [dataset.table]"}
qr = JobConfigurationQuery.new(configuration:{query: query_config})
bq.insert_job(projectId, qr)
and I got the following error:
Caught error invalid: Job configuration must contain exactly one job-specific configuration object (e.g., query, load, extract, spreadsheetExtract), but there were 0:
Please let me know how to use the insert_job method.
I'm not sure which client library you're using, but insert_job probably takes a JobConfiguration. You should create one of those and set its query field to the JobConfigurationQuery you've created.
This is necessary because this single API method can insert various kinds of jobs (query, load, copy, extract), each with a different type of configuration, so it takes one configuration object with a subfield that specifies which type of job to insert and its details.
More info from BigQuery's documentation:
jobs.insert documentation
job resource: note the "configuration" field and its "query" subfield

In Pentaho, how to pass a text file which contains all the definitions of the connection parameters to the job?

I am using a JDBC connection and I am passing parameters such as ${sample_db_connection}. That parameter is defined on the server in a text file as sample_db_connection=localhost. I want to pass the text file to the job step so that whenever the job runs and encounters this parameter, it automatically takes the value defined in the text file.
You need to create a KTR file using a "Property Input" step as the input and a "Modified Java Script Value" step to define the key-value mapping.
Define your filename in the input step. In the JavaScript step, you can use the setVariable function to define the key-value mapping.
Once this KTR is executed at the start of the job, Pentaho will set the variables for all the connections.
Hope I have understood the question correctly and this is what you are looking for! :)

BeanShell PreProcessor updates User define variables

I'm very new to JMeter.
In a test script I have a BeanShell PreProcessor element that updates some variables previously defined in a "User Defined Variables" element.
Later those variables are used in HTTP Requests; however, the value that is used in the HTTP request is the default one.
The script seems to be working, judging by some debug print() calls.
My question is whether it's necessary to delay the script to be sure that the BeanShell finishes.
Thanks a lot for your attention.
There is no need to add any delay for the BeanShell PreProcessor, as it is executed before the request. I'd recommend checking your jmeter.log file for scripting issues, as the BeanShell PreProcessor does not report errors anywhere, including the View Results Tree listener.
There are at least two ways to assure that everything is fine with your BeanShell script:
Put your debug print code after the variable-replacement logic to see if it fires.
Use the JMeter __BeanShell function right in your HTTP request. If it's OK, the View Results Tree will demonstrate the BeanShell-generated value; if not, the field will be blank and the relevant error will be displayed in the log.
Example test case:
Given the following Test Plan structure:
Thread Group with 1 user and 1 loop
HTTP GET Request to google.com with a path of / and a parameter q
If you provide the following BeanShell function as the value of parameter "q":
${__BeanShell(System.currentTimeMillis())}
and look into the View Results Tree "Request" tab, you should see something like:
GET http://www.google.com/?q=1385206045832
and if you change the function to something incorrect, like:
${__BeanShell(Something.incorrect())}
you'll see a blank request.
The correct way of changing an existing variable (or creating a new one if it doesn't exist) looks like:
vars.put("variablename", "variablevalue");
Important: JMeter variables are Java Strings; if you're trying to store something else (a date, an integer, whatever) in a JMeter variable, you need to convert it to a String first.
Example:
int i = 5;
vars.put("int_i", String.valueOf(i));
Hope this helps.
You can update the value of a "User Defined Variable".
You have to create a BeanShell Sampler:
vars.put("user_defined_variable", "newvalue");
@theINtoy got it right.
http://www.blazemeter.com/blog/queen-jmeters-built-componentshow-use-beanshell
I'm new to JMeter too, but as far as I know, variables defined in "User Defined Variables" are constants, so you can't change them. I recommend using "User Parameters" in a PreProcessor, or the CSV Data Set Config.