Question about why "Set cannot change my json file value" - karate

This is my feature file:
* def Json = read('1.json')
* print Json.Id
* set Json.Id = Product_Num
* print Json.Id
I want to replace my Id with the new product number. After running Karate, the result looks correct: the new Product_Num is printed by the second print statement.
But the Id value is not updated in the 1.json file.
How do I update values in 1.json? I need to replace the Id values in the 1.json file.

You are trying to write to a file. This is normally never needed when you use Karate, and hence is not supported directly. Do you really want your test data changing all the time because your tests are updating the files from which the test data is read? I don't think so.
Please read this answer for details: https://stackoverflow.com/a/54593057/143475
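If the goal is just to use the updated value in a request, the change can stay in memory. Here is a minimal sketch; the endpoint, HTTP method, and the hard-coded Product_Num value are assumptions purely for illustration:
* def Json = read('1.json')
# in a real test Product_Num would come from a previous step or response
* def Product_Num = '12345'
* set Json.Id = Product_Num
# assumed endpoint for illustration; note that 1.json on disk is never modified
Given url 'https://httpbin.org/post'
And request Json
When method post
Then status 200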

Related

Can't figure out how to insert keys and values of nested JSON data into SQL rows with NiFi

I'm working on a personal project and very new (learning as I go) to JSON, NiFi, SQL, etc., so forgive any confusing language used here or a potentially really obvious solution. I can clarify as needed.
I need to take the JSON output from a website's API call and insert it into a table in my MariaDB local server that I've set up. The issue is that the JSON data is nested, and two of the key pieces of data that I need to insert are used as variable keys rather than values, so I don't know how to extract them and put them in the database table. Essentially, I think I need to identify different pieces of the JSON expression and insert them as values, but I'm clueless how to do so.
I've played around with the EvaluateJSON, SplitJSON, and FlattenJSON processors in particular, but I can't make it work. All I can ever do is get the result of the whole expression, rather than each piece of it.
{"5381":{"wind_speed":4.0,"tm_st_snp":26.0,"tm_off_snp":74.0,"tm_def_snp":63.0,"temperature":58.0,"st_snp":8.0,"punts":4.0,"punt_yds":178.0,"punt_lng":55.0,"punt_in_20":1.0,"punt_avg":44.5,"humidity":47.0,"gp":1.0,"gms_active":1.0},
"1023":{"wind_speed":4.0,"tm_st_snp":26.0,"tm_off_snp":82.0,"tm_def_snp":56.0,"temperature":74.0,"off_snp":82.0,"humidity":66.0,"gs":1.0,"gp":1.0,"gms_active":1.0},
"5300":{"wind_speed":17.0,"tm_st_snp":27.0,"tm_off_snp":80.0,"tm_def_snp":64.0,"temperature":64.0,"st_snp":21.0,"pts_std":9.0,"pts_ppr":9.0,"pts_half_ppr":9.0,"idp_tkl_solo":4.0,"idp_tkl_loss":1.0,"idp_tkl":4.0,"idp_sack":1.0,"idp_qb_hit":2.0,"humidity":100.0,"gp":1.0,"gms_active":1.0,"def_snp":23.0},
"608":{"wind_speed":6.0,"tm_st_snp":20.0,"tm_off_snp":53.0,"tm_def_snp":79.0,"temperature":88.0,"st_snp":4.0,"pts_std":5.5,"pts_ppr":5.5,"pts_half_ppr":5.5,"idp_tkl_solo":4.0,"idp_tkl_loss":1.0,"idp_tkl_ast":1.0,"idp_tkl":5.0,"humidity":78.0,"gs":1.0,"gp":1.0,"gms_active":1.0,"def_snp":56.0},
"3396":{"wind_speed":6.0,"tm_st_snp":20.0,"tm_off_snp":60.0,"tm_def_snp":70.0,"temperature":63.0,"st_snp":19.0,"off_snp":13.0,"humidity":100.0,"gp":1.0,"gms_active":1.0}}
This is a snapshot of an output with a couple thousand lines. Each of the numeric keys that you see above (5381, 1023, 5300, etc) are player IDs for the following stats. I have a table set up with three columns: Player ID, Stat ID, and Stat Value. For example, I need that first snippet to be inserted into my table as such:
Player ID   Stat ID      Stat Value
5381        wind_speed   4.0
5381        tm_st_snp    26.0
5381        tm_off_snp   74.0
And so on, for each piece of data. But I don't know how to have NiFi select the right pieces of data to insert in the right columns.
I believe it's possible to use a JOLT transform to reshape your JSON into this format:
[
{"playerId":"5381", "statId":"wind_speed", "statValue": 4.0},
{"playerId":"5381", "statId":"tm_st_snp", "statValue": 26.0},
...
]
then use PutDatabaseRecord with a JSON record reader.
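For reference, a rough, untested sketch of such a shift spec follows; the exact wildcard and index handling may need tweaking in the JOLT playground:
[
  {
    "operation": "shift",
    "spec": {
      "*": {
        "*": {
          "$1": "[#2].playerId",
          "$": "[#2].statId",
          "@": "[#2].statValue"
        }
      }
    }
  }
]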
Another approach is to use the ExecuteGroovyScript processor.
Add a new property to it named SQL.mydb and link it to your DBCP controller service.
Then use the following script as the Script Body parameter:
import groovy.json.JsonSlurper
import groovy.json.JsonBuilder

def ff = session.get()
if (!ff) return

// read flow file content and parse it
def body = ff.read().withReader("UTF-8") { reader ->
    new JsonSlurper().parse(reader)
}

def results = []
// use the defined sql connection to create a batch
SQL.mydb.withTransaction {
    def cmd = 'insert into mytable(playerId, statId, statValue) values(?,?,?)'
    results = SQL.mydb.withBatch(100, cmd) { statement ->
        // run through all keys/subkeys in the flow file body
        body.each { pid, keys ->
            keys.each { k, v ->
                statement.addBatch(pid, k, v)
            }
        }
    }
}

// write results as the new flow file content
ff.write("UTF-8") { writer ->
    new JsonBuilder(results).writeTo(writer)
}

// transfer to success
REL_SUCCESS << ff

How to parameterize tests for each iteration in Postman collection?

My GET request goes like this:
<some ip>/search?IW_INDEX={IW_INDEX}&IW_FIELD_WEB_STYLE={IW_FIELD_TEXT}
The data file is as follows:
IW_INDEX,IW_FIELD_TEXT
index1,text1
index2,text2
My test for iteration 1 is as follows:
tests["parameter1"] = responseBody.has("value=\"19\"");
Now this value 19 will change depending upon the iteration and might be 20 in iteration 2.
Is there a way to provide expected test results iteration-wise in Postman?
I think you could do it by adding an "expected_result" column to your CSV file and referencing that value in the test as {{data.expected_result}}, so your test would probably look like:
tests["parameter1"] = responseBody.has("value=\"{{data.expected_result}}\");

Default values for query parameters

Please forgive me if my question does not make sense.
What I'm trying to do is inject values for query parameters.
GET1 File
Scenario:
Given path 'search'
And param filter[id] = id (default value or variable from another feature file)
POST1 File
Scenario:
def newid = new id made by a post call
def checkid = read call(GET1) {id : newid}
For example, if one of my feature files creates a new id, then I want to do a GET call with the above scenario; therefore I need a parameter there which takes in the new id.
On the other hand, if no id has been newly created, or the test creating it is not part of the suite, I still want to be able to run the above scenario, but this time with a default value.
Instead of param use params. It is designed so that any keys with null values are ignored.
After the null is set on the first line below, you can make a call to another feature and overwrite the value of criteria. If it is still null, no params will be set.
* def criteria = null
Given path 'search'
And params { filter: '#(criteria)' }
There are multiple other ways to do this, also refer to this set of examples for data-driven search params: dynamic-params.feature
The doc on conditional logic may also give you some ideas.
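For instance, a hedged sketch of the two features; the file and variable names are assumptions, and karate.get with a default value is per the Karate docs:
# get1.feature
Scenario:
# use the caller-supplied newid if present, otherwise fall back to null so the param is skipped
* def criteria = karate.get('newid', null)
Given path 'search'
And params { filter: '#(criteria)' }
When method get
Then status 200

# post1.feature
Scenario:
# ... POST call that creates the new id ...
* def newid = response.id
* def checkResult = call read('get1.feature') { newid: '#(newid)' }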

store data into a path based on a field's value in the tuple

{'aaa', 'bbb', 'ccc'}
....
Let's suppose the above tuple gets loaded with the following schema:
as (firstField:chararray, secondField:chararray, thirdField:chararray)
I want to store the tuple in HDFS with the path based on the 2nd field (which is 'bbb' in the above example). So the above tuple would get stored in the path
/SomeBaseDir/bbb/testoutput.txt
Any help would be appreciated.
To load the file, use the command below. Remember that the input data should be tab-separated. If you are using any other separator, like a comma, then change the parameter passed to the PigStorage function, e.g. PigStorage(',').
A = load '/home/abhishek/Work/pigInput/data' using PigStorage('\t') as (firstField:chararray, secondField:chararray, thirdField:chararray);
Now, to get the second element, simply use:
result = foreach A generate secondField;
Result
dump result
('bbb')
You can store it using the command below:
store result into 'provide the path';
I think you want to use MultiStorage (https://pig.apache.org/docs/r0.8.1/api/org/apache/pig/piggybank/storage/MultiStorage.html). This should do pretty much what you want. Specify base path, then the field on which subdirectories should be based.
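A hedged sketch follows; the paths are placeholders, and MultiStorage's arguments are the parent output path, the 0-based index of the split field, a compression option, and the field delimiter:
REGISTER /path/to/piggybank.jar;
A = LOAD '/path/to/input' USING PigStorage('\t')
    AS (firstField:chararray, secondField:chararray, thirdField:chararray);
-- field index '1' = secondField, so rows land under /SomeBaseDir/bbb/, /SomeBaseDir/ccc/, etc.
STORE A INTO '/SomeBaseDir' USING org.apache.pig.piggybank.storage.MultiStorage('/SomeBaseDir', '1', 'none', '\t');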
The SPLIT operator in Pig Latin would also do the job here.
Once the input data is loaded (loadedData below), it can be split into different output relations as follows:
loadedData = LOAD ' ' USING ... AS (.., somefield, ..);
SPLIT loadedData INTO
    segmentA IF (somefield == 'A'),
    segmentB IF (somefield == 'B'),
    OtherSources OTHERWISE;
STORE segmentA INTO 'hdfs://<path for data segmentA>' USING ....;
STORE segmentB INTO 'hdfs://<path for data segmentB>' USING ....;

Write extracted data to a file using jmeter

I am using JMeter v2.5.
I need to extract data from the test responses (which I am doing using a Regular Expression Extractor). How do I store this extracted data in a file?
Just solved a similar problem. After getting the data using a regular expression extractor, add a BeanShell PostProcessor element. Use the code below to write the variables to a file:
name = vars.get("name");
email = vars.get("email");
log.info(email); // if you want to log something to jmeter.log file
// Pass true if you want to append to existing file
// If you want to overwrite, then don't pass the second argument
f = new FileOutputStream("/my/file/path/result.csv", true);
p = new PrintStream(f);
this.interpreter.setOut(p);
print(name + "," + email);
p.close(); // closing the PrintStream flushes buffered output and also closes the underlying file
import org.apache.jmeter.services.FileServer;

String path = FileServer.getFileServer().getBaseDir();
name1 = vars.get("user_Name_value");
name2 = vars.get("UserId_value");
f = new FileOutputStream("E://csvfile/result.csv", true); // pass true to append to the file; omit it to overwrite
p = new PrintStream(f);
this.interpreter.setOut(p);
p.println(name1 + "," + name2);
p.close(); // flushes and closes the underlying file stream
This worked for me; I hope it will work for you too.
If you just want to write the extracted variables to the CSV results file, then just add the variables you want to user.properties:
sample_variables=name,email
As per doc:
https://jmeter.apache.org/usermanual/properties_reference.html#results_file_config
They will be appended as the last columns of the CSV results file.
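For example, a minimal user.properties sketch; the second line is optional and simply assumes you want CSV rather than XML results:
sample_variables=name,email
jmeter.save.saveservice.output_format=csv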
You have a couple of options:
You can tally the results by adding an Aggregate Report listener to your thread group => Add Listener => Aggregate Report.
You can get raw results by adding a Simple Data Writer listener to your thread group => Add Listener => Simple Data Writer.
Hope this helps
You may use https://jmeter-plugins.org/wiki/FlexibleFileWriter/ with Sample Variables set up, or with a fake Dummy Sampler.
Either way, Flexible File Writer is good for writing data to a file.