Write extracted data to a file using JMeter

I am using JMeter v2.5.
I need to take the responses from the test and extract data from them (which I am doing with a Regular Expression Extractor). How do I store this extracted data in a file?

Just solved a similar problem. After getting the data with the Regular Expression Extractor, add a BeanShell PostProcessor element and use the code below to write the variables to a file:
name = vars.get("name");
email = vars.get("email");
log.info(email); // in case you want to log something to jmeter.log

// Pass true as the second argument to append to an existing file;
// omit it if you want to overwrite the file instead.
f = new FileOutputStream("/my/file/path/result.csv", true);
p = new PrintStream(f);
this.interpreter.setOut(p); // redirect BeanShell's print() to the file
print(name + "," + email);
p.close();
f.close();
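If you prefer not to redirect the interpreter's output, a plain writer does the same job. A minimal sketch of the same idea (the path and variable names are taken from the example above):
import java.io.BufferedWriter;
import java.io.FileWriter;

name = vars.get("name");
email = vars.get("email");

// true = append, so each iteration keeps adding rows to the same file
BufferedWriter out = new BufferedWriter(new FileWriter("/my/file/path/result.csv", true));
out.write(name + "," + email);
out.newLine();
out.close();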

import org.apache.jmeter.services.FileServer;

String path = FileServer.getFileServer().getBaseDir(); // base directory of the test plan, if you prefer a relative location
name1 = vars.get("user_Name_value");
name2 = vars.get("UserId_value");
f = new FileOutputStream("E://csvfile/result.csv", true); // specify true to append to the file; omit it to overwrite
p = new PrintStream(f);
p.println(name1 + "," + name2);
p.close();
f.close();
This worked for me; I hope it will work for you too.

If you just want to write extracted variables to the CSV results file, then simply add the variables you want to user.properties:
sample_variables=name,email
As per the documentation:
https://jmeter.apache.org/usermanual/properties_reference.html#results_file_config
They will be appended as the last columns of the CSV results file.
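The same property can also be set on the command line for a single run; a sketch, assuming a non-GUI run with a test plan named test.jmx:
jmeter -n -t test.jmx -l results.csv -Jsample_variables=name,email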

You have a couple of options:
You can tally the results by adding an Aggregate Report listener to your thread group => Add => Listener => Aggregate Report
You can get raw results by adding a Simple Data Writer listener to your thread group => Add => Listener => Simple Data Writer
Hope this helps.

You may use the Flexible File Writer plugin (https://jmeter-plugins.org/wiki/FlexibleFileWriter/) with Sample Variables set up, or with a fake Dummy Sampler. Either way, Flexible File Writer is good for writing data into a file.

Related

Read specific file names in an ADF pipeline

I have a requirement where blob storage has multiple files named file_1.csv, file_2.csv, file_3.csv, file_4.csv, file_5.csv, file_6.csv, file_7.csv. From these I have to read only the files numbered 5 to 7.
How can we achieve this in an ADF/Synapse pipeline?
I have reproduced this in my lab; please see the repro steps below.
ADF:
Using the Get Metadata activity, get a list of all files.
(Parameterize the source file name in the source dataset and pass '*' in the dataset parameters to get all files.)
Pass the Get Metadata output child items to a ForEach activity:
@activity('Get Metadata1').output.childItems
Add an If Condition activity inside the ForEach and add the true-case expression to copy only the required files to the sink. With names like file_5.csv, the file number is the character at (zero-based) index 5:
@and(greater(int(substring(item().name,5,1)),4),lessOrEquals(int(substring(item().name,5,1)),7))
When the If Condition is true, add a Copy Data activity to copy the current item (file) to the sink.
I took a slightly different approach using a Filter activity and the endsWith function.
The filter expression is:
@or(or(endsWith(item().name, '_5.csv'),endsWith(item().name, '_6.csv')),endsWith(item().name, '_7.csv'))
Slightly different approaches, similar results; it depends what you need.
You can always do what @NiharikaMoola-MT suggested. But since you already know the range of the files (5 to 7), I suggest the following (see the sketch after this list):
Declare two parameters as the lower and upper bounds of the range.
Create a ForEach loop and pass it a range built from the parameters: [lowerlimit, upperlimit].
Create a parameterized dataset for the source.
Use the file number from the ForEach loop to build a dynamic file name expression like
@concat('file_', string(item()), '.csv')
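A sketch of the ForEach items expression, assuming integer pipeline parameters named lowerLimit and upperLimit (note that ADF's range() function takes a start index and a count, not an end index):
@range(pipeline().parameters.lowerLimit, add(sub(pipeline().parameters.upperLimit, pipeline().parameters.lowerLimit), 1))
Each item() is then an integer from 5 to 7, and the concat expression above resolves to file_5.csv, file_6.csv, and file_7.csv.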

Why "set" cannot change my JSON file value

This is my feature file:
* def Json = read('1.json')
* print Json.Id
* set Json.Id = Product_Num
* print Json.Id
I want to replace the Id with a new product number. After running Karate I see the result is correct; the new Product_Num is printed (by the second print).
But the Id value doesn't update in the 1.json file.
How do I update values in the 1.json file? I need to replace the Id value in 1.json.
You are trying to write to a file. This is normally never needed when you use Karate, and hence is not supported directly. Do you want your test data changing all the time because your tests are updating the files from which the test data is read? I don't think so.
Please read this answer for details: https://stackoverflow.com/a/54593057/143475
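The idiomatic pattern is to treat 1.json as a read-only template and modify the payload in memory before sending it; a minimal sketch (the url and the Product_Num value are assumptions):
* def Product_Num = 'P-1234'
* def Json = read('1.json')
* set Json.Id = Product_Num
* url 'https://example.com/products'
* request Json
* method post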

How can I create dynamic destination file names based on what is filtered?

For example, if something like [xxx] appears in my log line, I must put that message in a file named xxx.log.
And if the message changes and [xxy] appears, I must create a new log file named xxy.log.
How can I do that in a syslog-ng config file?
To filter for specific messages, you can use filter expressions in syslog-ng. You can use regular expressions in the filter as well.
To use the results of the match in the filename, try using a named capture group in the filter expression:
filter f_myfilter {message("(?<name>pattern)");};
Then you can use the named match in the destination template:
destination d_file {
    file("/var/log/${name}.log");
};
Let me know if it works, I haven't had the time to test it.
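For completeness, the filter and destination also have to be wired together with a log statement; a minimal sketch, assuming your source is named s_src:
log {
    source(s_src);
    filter(f_myfilter);
    destination(d_file);
};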
I found this way to resolve my problem:
parser p_apache {
    csv-parser(
        columns("MY.ALGO", "MY.MOSTRAR", "MY.OTRA")
        delimiters("|")
    );
};
destination d_file {
    file("/var/log/syslog-ng/$YEAR-$MONTH/$DAY/messages-${MY.ALGO:-nouser}.log");
};
Regex is the answer here.
E.g., I have source file names like access2018-10-21.log, so my access log source file entry becomes:
file("/opt/liferay-portal-6.2-ee-sp13/tomcat-7.0.62/logs/access[0-9][0-9][0-9][0-9]\-[0-9][0-9]\-[0-9][0-9].log" follow_freq(1) flags(no-parse));

How to reorder the columns with a fixed-column CSV input in Pentaho

Scenario:
I have created a transformation to load data into a table from a CSV file, and I have the following columns in the CSV file:
Customer_Id
Company_Id
Employee_Name
But the user may give an input file with the columns in a different (random) order, such as:
Employee_Name
Company_Id
Customer_Id
So, if I try to load a file with random column ordering, will Kettle load the correct column values according to the column names?
Using ETL Metadata Injection you can use a transformation like this to either normalize the data or store it to your database:
Then you just need to send the correct data to that transformation. You can read the header line from the CSV and use Row Normaliser to convert it to the format used by ETL Metadata Injection.
I have included a quick example here: csv_inject on Dropbox. If you make something like this and run it from something that runs it once per CSV file, it should work.
Ooh, that's some nasty JavaScript!
The way to do this is with metadata injection. Look at the samples; basically you need a template which reads the file and writes it back out. You then use another parent transformation to figure out the headings, configure that template, and execute it.
There are samples in the PDI samples folder; also take a look at the "figuring out file format" example in Matt Casters' blueprints project on GitHub.
You could try something like this as your JavaScript (in a Modified Java Script Value step, where row holds the current row's values):
var seen; // persists between rows: re-declaring with "var seen;" does not reset an already-set value
trans_Status = CONTINUE_TRANSFORMATION;
var col_names = ['Customer_Id','Company_Id','Employee_Name'];
var col_pos;
if (!seen) {
    // First row is the header: map each expected column name to its position, then skip the row
    trans_Status = SKIP_TRANSFORMATION;
    seen = 1;
    col_pos = [-1,-1,-1];
    for (var i = 0; i < col_names.length; i++) {
        for (var j = 0; j < row.length; j++) {
            if (row[j] == col_names[i]) {
                col_pos[i] = j;
                break;
            }
        }
        if (col_pos[i] === -1) {
            writeToLog("e", "Cannot find " + col_names[i]);
            trans_Status = ERROR_TRANSFORMATION;
            break;
        }
    }
}
// Pick each output field from the position discovered in the header row
var Customer_Id = row[col_pos[0]];
var Company_Id = row[col_pos[1]];
var Employee_Name = row[col_pos[2]];
Here is the .ktr I tried: csv_reorder.ktr
(Edit: here are the test CSV files.)
1.csv:
Customer_Id,Company_Id,Employee_Name
cust1,comp1,emp1
2.csv:
Employee_Name,Company_Id,Customer_Id
emp2,comp2,cust2
Assuming rejecting the input file is not an option, you basically have four solutions:
Reorder the fields in an external editor (don't use Excel if the file contains dates).
Use code within your transformation to detect the column headers and reorder the file.
Use metadata injection, as proposed by bolav.
Create a job. This needs to:
a. load the file into a temporary database table;
b. use an SQL statement to retrieve the fields in the required order (a SELECT with an explicit column list, as sketched below);
c. output the file in the correct order.
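For step b, a minimal sketch, assuming the file was loaded into a staging table named employee_staging:
SELECT Customer_Id, Company_Id, Employee_Name
FROM employee_staging;
The fixed column list in the SELECT restores the expected order regardless of the column order in the input file.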

JMeter Regex Extractor and ForEach Controller

I'm creating some tests with JMeter. The situation is very simple: I have a search page with a list of results, and I have to retrieve some values from these results to use in the next request.
There are around 350 results; I don't need them all.
I used the Regex Extractor to retrieve those results and it works (I set it to retrieve just 10 results), but now I don't know how to access the results inside a Loop Controller.
The extractor puts the results into a variable named Result.
The problem is that I don't know how to build the name of a variable dynamically.
Do I have to use some function like __P()?
I can access the variable just by putting the static name Result_0_g1.
Inside the Loop Controller I also put a Counter to store the loop count in the variable index.
Thank you.
EDIT:
SOLVED. I have to write:
${__V(Result_${index}_g1)}
You have to reference the variable with the function:
${__V(Result_${index}_g1)}
...adding this just for the record.
See also this post for another implementation (a case without using a ForEach Controller):
ThreadGroup
    HttpSampler
        Regex Extractor (variableName = links)
    WhileController(${__javaScript(${C} < ${links_matchNr})})
        HTTPSampler (use ${__V(links_${C})} to access the current result)
        Counter (start=1, increment=1, maximum=${links_matchNr}, referenceName=C)
Use the ForEach Controller - it's specifically designed for this purpose and does exactly what you want.
You may use a ForEach Controller:
ThreadGroup
    YourSampler
        Regular Expression Extractor (Match No. = -1, any template)
    ForEach Controller
        Counter (Maximum = ${Result_matchNr}, Reference Name = index)
        LinkSamplerUsingParsedData (use ${__V(Result_${index}_g1)})
Now, if you want to iterate over all groups, you need another ForEach Controller to do that. Since you know which group represents which value, you can use this approach.