JMeter - read parameters from CSV and write back updated parameters - API

I'm using JMeter for testing. I need to use some keys in order to perform a login, and then change the keys.
I understood that the best way to do this is to create a CSV file that contains two variables.
I understand how to read the parameters (using 'CSV Data Set Config'), but I still don't know how to extract specific parameters from the result (the new keys) and save them in the file in place of the old ones.

You can use a Regular Expression Extractor to extract the values from the response.
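For example, if the login response returns the new keys as JSON, the extractor could be configured roughly like this (the reference name and regular expression are illustrative assumptions, not taken from the question):

Reference Name: newKey
Regular Expression: "key"\s*:\s*"([^"]+)"
Template: $1$
Match No.: 1

The extracted value is then available to later samplers as ${newKey}, or as vars.get("newKey") in the PostProcessor below.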
It is NOT a good idea to write to the same file that the CSV Data Set Config is reading. Instead, you can use a Beanshell PostProcessor to write the new values to a separate CSV file:
f = new FileOutputStream("/your/file/path/filename.csv", true); // true = append rather than overwrite
p = new PrintStream(f);
p.println("content,to be,written,in,csv,file"); // e.g. vars.get("newKey1") + "," + vars.get("newKey2")
p.close();
f.close();

Related

How to read a test data file (a JSON file) in a feature file only once

Karate has callonce, which will call a function or feature only once for all scenarios in a feature file. Is there a similar feature for reading a JSON file only once in a feature file, before executing all scenarios? Can this be achieved by passing a function to karate.callonce(), with that function then just using read to read the JSON file? Kindly explain how I can do this correctly.
I do not want to use another feature file for this; I should be able to pass a function name to callonce.
I tried karate.callSingle and passed the read function to read the JSON file.
Personally I think reading a JSON file from the file-system is so cheap that you are worrying about this unnecessarily.
The only way that I know of is like this:
Feature:
Background:
* def dataFn = function(){ return read('data.json') }
* def data = callonce dataFn
Scenario: one
* print data
Scenario: two
* print data
But you are quite likely to complain here that we are initializing the function dataFn for every Scenario ;) In that case, you may need to look for another framework.
And I personally think calling a re-usable feature (for data set-up) is fine. Programming languages do this kind of re-use all the time.
EDIT: well, I just remembered that this would work:
* def data = callonce read 'data.json'
Explained here: https://github.com/karatelabs/karate#call-vs-read

GridFS read PDF

I am trying to build a financial dashboard with Flask and pymongo. The starting point is a Flask form which saves data in a MongoDB database. One of the fields in the form is a FileField (wtforms) which allows the upload of a PDF, which is then stored in MongoDB with GridFS.
I manage to save the PDF and I can see the resulting entries within the .files and .chunks collections. Now I would like to build a function that retrieves the PDFs and analyses them with some basic NLP, but I struggle with getting meaningful data out of them.
When I do:
storage = gridfs.GridFS(db, collection)
data = storage.get('some id')
a = data.read()
The result is a binary file. If I continue with:
with open(data, 'rb') as f:
    b = f.read()
The result is "ValueError: embedded null byte or sometimes an empty "byte string".
Any help on this?
To follow up on the above, I found a solution for myself that consists of 2 separate functions:
(1) Upon submission of the form and before uploading the files to MongoDB, I apply a function based on pdfminer that extracts the string content of the PDF and transforms it into a list of sentences using NLTK. I then store this list in the .files collection via storage.put(file, sent_list=sent_list), sent_list being the variable name of the list of sentences.
Whenever I wish to run NLP operations on the file, I just read the sent_list value back from MongoDB.
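A minimal sketch of step (1), assuming pdfminer.six and NLTK are installed (the function name and argument names are illustrative, not from the original code):

from pdfminer.high_level import extract_text
from nltk.tokenize import sent_tokenize  # needs nltk.download('punkt') once

def store_pdf_with_sentences(storage, pdf_file, filename):
    text = extract_text(pdf_file)        # raw string content of the PDF
    sent_list = sent_tokenize(text)      # list of sentences for later NLP
    pdf_file.seek(0)                     # rewind so GridFS stores the whole file
    return storage.put(pdf_file, filename=filename, sent_list=sent_list)

The sent_list field then lives in the .files document and can be queried without touching the PDF bytes again.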
(2) If I wish to display the stored PDF in its original format, however, I included the following function as a separate route:
# route path and function name are illustrative; the body is the original snippet
@app.route('/pdf/<filename>')
def serve_pdf(filename):
    storage = GridFS(db, collection)
    data = storage.get_last_version(filename)
    response = make_response(data.read())
    extension = data.filename.split('.')[-1]
    response.headers['Content-Type'] = f'application/{extension}'
    response.headers['Content-Disposition'] = f'inline; filename={data.filename}'
    return response
(2) will open a new tab in my Flask app showing the PDF file in its original format.
I hope this helps anyone coming across a similar problem in the future.

Is there any way to store an array/list as a variable/parameter when constructing an API request in Postman?

I'm trying to parameterize a URL in Postman and loop over a list of values to call the API once per value, appending the JSON bodies.
For example:
if the URL is GET https://location/store/{{user_id}}/date
and
user_id = ['1','3','5','2','6','8']
then how do I store user_id as a variable such that the request can loop over each user_id in the URL and generate an appended JSON body?
You can use data files. Store the user_id values in a CSV or JSON file and feed that file into the Collection Runner. The headers of the CSV file can be used as variable names. Please see the details of this approach at the link below:
https://learning.postman.com/docs/postman/collection-runs/working-with-data-files/
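For example, a data file such as users.csv (the file name is illustrative) would look like this, with the header matching the {{user_id}} variable used in the URL:

user_id
1
3
5
2
6
8

Running the collection in the Collection Runner with this file selected executes one iteration per row, substituting each value into GET https://location/store/{{user_id}}/date.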

How to read a specific column value from a .txt file in WSO2 ESB

I started working with WSO2 ESB a few days ago.
I need to read a particular column value and set it into a property in WSO2 ESB.
My .txt file contains the following values:
SNO|FIRSTNAME|LASTNAME|EMAIL|PHONE|ADDRESS|SELLING_DEALER
51|christopher|chris|cpko78#gmail.com|0406-755909|US|MacGgor
I need to read the email and phone column values from this .txt file and set them into properties which can be used for further operations like EmailValidation or PhoneValidation.
Can anyone help me find a solution?
If you use ESB, one option would be to do a Smooks transformation into XML and then read the values from the generated XML. Just keep in mind that if you need the original CSV content later in your proxy/API, you need to store the original content and restore it after you've read the needed values (using the Enrich mediator).
https://docs.wso2.com/display/ESB481/Smooks+Mediator
Another option would be to do an XSLT transformation into XML (similar to Smooks).
https://docs.wso2.com/display/ESB481/XSLT+Mediator
The last option I can think of is using the Script mediator and extracting the values with JavaScript, Groovy, or Ruby, as in the sketch after this link.
https://docs.wso2.com/display/ESB481/Script+Mediator
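As a rough illustration of the Script mediator option (assuming the pipe-delimited line has already been placed in a property named line; the property names here are assumptions, not from the question):

<script language="js">
    var line = mc.getProperty("line"); // e.g. 51|christopher|chris|cpko78#gmail.com|0406-755909|US|MacGgor
    var cols = line.split("|");        // split on the pipe delimiter
    mc.setProperty("EMAIL", cols[3]);  // EMAIL is the 4th column
    mc.setProperty("PHONE", cols[4]);  // PHONE is the 5th column
</script>

The EMAIL and PHONE properties are then available to later mediators for the validation steps.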
If you use EI, then you might also expose your CSV as a data service.
https://docs.wso2.com/display/DSS351/Exposing+CSV+Data+as+a+Data+Service
Hope that helps.

Conditionally converting JSON to XML using MuleSoft

I have a simple conversion of JSON to XML using MuleSoft. In the "Transform Message" component, I provided a JSON schema as input and an XML schema as output. When I run the app, the conversion happens if the file matches both schemas, but it generates an empty XML file if it doesn't match.
I want the following behavior:
1) If the file matches the schema, the converted output file should be sent to the Converted folder and the original file should move to the Success folder.
2) If the file doesn't match the schema, the original file should move to the Failure folder instead of being converted.
I hope I explained it comprehensively, as I am new to MuleSoft. Here is a sample diagram which may simplify my requirement. Provide me with a new one if I badly designed the process.
First, you need to create a flowVar that will hold your original payload.
When you're doing your evaluation, if it's XML, use a simple XPath expression like //elementName[not(node())] to detect empty elements.
Lastly, on success, use scatter-gather for a multi-threaded write: pull your original payload from the flowVar and write it to Success, and write your transformed payload to your Converted folder.
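A rough Mule 3 XML sketch of the flow described above (folder paths, the flow name, and the elementName tested in the XPath are illustrative assumptions, not a tested configuration):

<flow name="jsonToXmlFlow">
    <file:inbound-endpoint path="input"/>
    <set-variable variableName="originalPayload" value="#[payload]"/>
    <dw:transform-message><!-- JSON-to-XML DataWeave transform goes here --></dw:transform-message>
    <choice>
        <!-- empty elements in the output mean the input did not match the schema -->
        <when expression="#[xpath3('count(//elementName[not(node())])') > 0]">
            <set-payload value="#[flowVars.originalPayload]"/>
            <file:outbound-endpoint path="failure"/>
        </when>
        <otherwise>
            <scatter-gather>
                <file:outbound-endpoint path="converted"/>
                <processor-chain>
                    <set-payload value="#[flowVars.originalPayload]"/>
                    <file:outbound-endpoint path="success"/>
                </processor-chain>
            </scatter-gather>
        </otherwise>
    </choice>
</flow>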