How to pass JSON parameters dynamically in JMeter?

In JMeter, I want to pass parameters dynamically. For a simple JSON body it is easy to put ${value1}, but when the JSON structure is complex, for example an array or multiple nested values, what is the proper method to pass parameters dynamically? Please refer to the JSON below.
Below is the JSON with parameters:
{
  "squadName": "Super hero squad",
  "homeTown": "Metro City",
  "formed": 2016,
  "secretBase": "Super tower",
  "active": true,
  "members": [
    {
      "name": "Molecule Man",
      "age": 29,
      "secretIdentity": "Dan Jukes",
      "powers": [
        "Radiation resistance",
        "Turning tiny",
        "Radiation blast"
      ]
    },
    {
      "name": "Madame Uppercut",
      "age": 39,
      "secretIdentity": "Jane Wilson",
      "powers": [
        "Million tonne punch",
        "Damage resistance",
        "Superhuman reflexes"
      ]
    },
    {
      "name": "Eternal Flame",
      "age": 1000000,
      "secretIdentity": "Unknown",
      "powers": [
        "Immortality",
        "Heat Immunity",
        "Inferno",
        "Teleportation",
        "Interdimensional travel"
      ]
    }
  ]
}
So far I have used the method below to send parameters through a CSV Data Set Config.
Is there another simple method to pass parameters through variables in JMeter for complex JSON (5-6 levels with array data)?

CSV Data Set Config is the best way to parameterize your test data.
If you want to customize the way values are picked from the CSV, you can use a BeanShell/JSR223 sampler.
Here is one article that shows how to pick random values from a CSV Data Set Config.
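Another option is to assemble the whole body in a JSR223 PreProcessor (Groovy) and expose it as a single variable. Below is a minimal sketch, assuming the leaf values already exist as JMeter variables; the variable names (member1Name, jsonBody, and so on) are hypothetical:

import groovy.json.JsonBuilder

// Build the nested structure from JMeter variables (names are examples)
def payload = [
    squadName : vars.get('squadName'),
    homeTown  : vars.get('homeTown'),
    formed    : vars.get('formed') as Integer,
    secretBase: vars.get('secretBase'),
    active    : true,
    members   : [
        [
            name          : vars.get('member1Name'),
            age           : vars.get('member1Age') as Integer,
            secretIdentity: vars.get('member1Identity'),
            // e.g. a pipe-separated CSV column: "Immortality|Inferno"
            powers        : vars.get('member1Powers').split('\\|') as List
        ]
    ]
]

// Expose the serialized JSON so the HTTP sampler body is just ${jsonBody}
vars.put('jsonBody', new JsonBuilder(payload).toPrettyString())

Then set the HTTP request body to ${jsonBody}. This keeps the CSV flat (one column per leaf value) while the nesting lives in one place in the script.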

Related

How to load a jsonl file into BigQuery when the file has mixed data fields as columns

During my workflow, after extracting the data from an API, the JSON has the following structure:
[
  {
    "fields": [
      {
        "meta": {
          "app_type": "ios"
        },
        "name": "app_id",
        "value": 100
      },
      {
        "meta": {},
        "name": "country",
        "value": "AE"
      },
      {
        "meta": {
          "name": "Top"
        },
        "name": "position",
        "value": 1
      }
    ],
    "metrics": {
      "click": 1,
      "price": 1,
      "count": 1
    }
  }
]
It is then stored as .jsonl and put on GCS. However, when I load it into BigQuery for further extraction, the automatic schema inference returns the following error:
Error while reading data, error message: JSON parsing error in row starting at position 0: Could not convert value to string. Field: value; Value: 100
I want to convert it into the following structure:
app_type | app_id | country | position | click | price | count
ios      | 100    | AE      | Top      | 1     | 1     | 1
Is there a way to define a manual schema in BigQuery to achieve this result? Or do I have to preprocess the jsonl file before putting it on BigQuery?
One of the limitations in loading JSON data from GCS to BigQuery is that it does not support maps or dictionaries in JSON.
An invalid example would be:
"metrics": {
"click": 1,
"price": 1,
"count": 1
}
Your jsonl file should be something like this:
{"app_type":"ios","app_id":"100","country":"AE","position":"Top","click":"1","price":"1","count":"1"}
I already tested it and it works fine.
So wherever you convert the json files to jsonl files and store them on GCS, you will have to do some preprocessing.
You probably have two options:
precreate the target table with an app_id field as an INTEGER
preprocess the json file and enclose 100 in quotes, like "100"
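If you go the preprocessing route, here is a minimal Python sketch of the flattening (the file names are placeholders, and the all-strings convention follows the sample jsonl line above):

import json

def flatten(record):
    # Hoist fields[].meta/name/value and metrics{} into one flat dict
    flat = {}
    for field in record["fields"]:
        meta = field.get("meta", {})
        # Copy meta values such as app_type; meta["name"] is handled below
        flat.update({k: str(v) for k, v in meta.items() if k != "name"})
        # The sample output keeps meta["name"] ("Top") in place of the raw value
        flat[field["name"]] = str(meta.get("name", field["value"]))
    for key, value in record["metrics"].items():
        flat[key] = str(value)
    return flat

with open("input.json") as src, open("output.jsonl", "w") as dst:
    for record in json.load(src):
        dst.write(json.dumps(flatten(record)) + "\n")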

Karate: compare CSV data with API response

I have a use case where I want to assert on an API response and compare it with the CSV data.
Step1:
Csv file: *test.csv*
id,date,fullname,cost,country,code
1,02-03-2002,user1,$200,Canada,CAN
2, 04-05-2016,user2,$1500,United States, USA
I read the csv file and store it in a variable:
* def var1 = read('test.csv')
So now var1 is a list of JSON objects based on my csv:
var1 = [
  {
    "id": 1,
    "date": "02-03-2002",
    "fullname": "user1",
    "cost": "$200",
    "country": "Canada",
    "code": "CAN"
  },
  {
    "id": 2,
    "date": "04-05-2016",
    "fullname": "user2",
    "cost": "$1500",
    "country": "United States",
    "code": "USA"
  }
]
Step2:
I hit my api and get a response
Given url "https://dummyurl.com"
Given path "/userdetails"
When method get
Then status 200
* def apiResponse = response
Step 3:
My API returns a list response, which is:
[
  {
    "id": 1,
    "date": "02-03-2002",
    "fullname": "user1",
    "cost": "$200",
    "country": {
      "name": "Canada",
      "code": "CAN"
    }
  },
  {
    "id": 2,
    "date": "05-04-2012",
    "fullname": "user2",
    "cost": "$1500",
    "country": {
      "name": "United States",
      "code": "USA"
    }
  }
  ...and 100 more records...
]
Step 4:
There are two assertions I want to perform:
Get the count of the csv response and the api response and compare them, which I did using the .length operator.
Secondly, confirm that each csv record matches the corresponding api response.
If possible: the id key is the primary key in both the csv and the api response, so I could iterate on id and check the api response for any discrepancy.
Let me know if this is readable for you and if I was able to explain my use case.
Thanks for your earlier response.
Please read up on the match contains syntax, that's all you need: https://github.com/intuit/karate#match-contains
So this one line should be enough:
* match var1 contains response
Also look at this answer in case the new contains deep helps: https://stackoverflow.com/a/63103746/143475
Try to avoid iterating; it is not needed for most API tests. But you can certainly do it. Look at these answers:
https://stackoverflow.com/a/62567262/143475
Also read this, because I suspect you are trying to over-complicate your tests. Please don't. Write tests where you are 100% sure of the "shape" of the response as far as possible: https://stackoverflow.com/a/54126724/143475
And please, please read the docs. It is worth it.
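Since the csv rows are flat while the response nests country, one approach is to reshape the rows before matching. A sketch (karate.map is part of the standard Karate API; the function and variable names here are just examples):

* def toNested = function(x){ return { id: x.id, date: x.date, fullname: x.fullname, cost: x.cost, country: { name: x.country, code: x.code } } }
* def expected = karate.map(var1, toNested)
* match response contains expected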

How to extract a field from a JSON object with QueryRecord

I have been struggling with this problem for a long time. I need to create a new JSON flowfile with QueryRecord, taking the array (field ref) from the input JSON field refs and skipping the enclosing object, as shown in the example below:
Input JSON flowfile
{
  "name": "name1",
  "desc": "full1",
  "refs": {
    "ref": [
      {
        "source": "source1",
        "url": "url1"
      },
      {
        "source": "source2",
        "url": "url2"
      }
    ]
  }
}
QueryRecord configuration:
JSONTreeReader set up with Infer Schema, and JSONRecordSetWriter
select name, description, (array[rpath(refs, '//ref[*]')]) as sources from flowfile
Desired output JSON:
{
  "name": "name1",
  "desc": "full1",
  "references": [
    {
      "source": "source1",
      "url": "url1"
    },
    {
      "source": "source2",
      "url": "url2"
    }
  ]
}
But I got this error:
QueryRecord Failed to write MapRecord[{references=[Ljava.lang.Object;@27fd935f, description=full1, name=name1}] with schema ["name" : "STRING", "description" : "STRING", "references" : "ARRAY[STRING]"] as a JSON Object due to java.lang.ClassCastException: null
Try the following approach; in your case it should work:
1) Read your JSON field fully (I imitated it with a GenerateFlowFile processor and your example)
2) Add an EvaluateJsonPath processor which will put the 2 header fields (name, desc) into attributes
3) Add a SplitJson processor which will split your JSON by refs/ref/ groups (split on "$.refs.ref")
4) Add a ReplaceText processor which will add your header fields (name, desc) to the split lines (replace "[{]" with "{"name":"${json.name}","desc":"${json.desc}",")
5) It's done
Full process in my demo case:
Hope this helps.
Solution: use JoltTransformJSON to transform the JSON with a Jolt specification.
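For this input, a minimal shift spec could look like the sketch below (note it keeps the desc field name and renames refs.ref to references, matching the desired output above):

[
  {
    "operation": "shift",
    "spec": {
      "name": "name",
      "desc": "desc",
      "refs": {
        "ref": "references"
      }
    }
  }
]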

Swagger 2.0 query parameter syntax for a list or array of integers

I have been racking my brain reading docs and trying to successfully pass a list or an array of integers through Swagger, because I am trying to test my API.
I am trying to pass a list of teamIDs like this (I've been trying all kinds of variations, mind you), but the values are still not pulling through when I debug:
{
  "type": "array",
  "items": {
    "type": "integer",
    "enum": [
      1,
      2,
      3
    ]
  }
}
I am using "enum" because it's a list of values I am trying to pass through.
Here is a screenshot of my swagger method as well.
A query parameter that is an array of integers is defined as follows (note that enum would restrict the allowed values, not supply them):
{
  "in": "query",
  "name": "teamIDs",
  "type": "array",
  "items": {
    "type": "integer"
  }
}
In your Swagger UI, enter the values for the teamIDs parameter one per line, like so:
1
2
3
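If the values are still not reaching your API, also check collectionFormat, which controls how Swagger 2.0 serializes the array in the query string (csv is the default). For example, with multi each value becomes its own query parameter, i.e. teamIDs=1&teamIDs=2&teamIDs=3:

{
  "in": "query",
  "name": "teamIDs",
  "type": "array",
  "items": {
    "type": "integer"
  },
  "collectionFormat": "multi"
}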

MultiLevel JSON in PIG

I am new to Pig scripting and to working with JSON. I need to parse multi-level JSON files in Pig. Say:
{
  "firstName": "John",
  "lastName": "Smith",
  "age": 25,
  "address": {
    "streetAddress": "21 2nd Street",
    "city": "New York",
    "state": "NY",
    "postalCode": "10021"
  },
  "phoneNumber": [
    {
      "type": "home",
      "number": "212 555-1234"
    },
    {
      "type": "fax",
      "number": "646 555-4567"
    }
  ]
}
I am able to parse a single-level JSON through JsonLoader(), do joins and other operations, and get the desired results, as in JsonLoader('name:chararray,field1:int .....');
Is it possible to parse the above JSON file using the built-in JsonLoader() of Pig 0.10.0? If it is, please explain how it is done and how to access fields of that JSON.
You can handle nested JSON loading with Twitter's Elephant Bird: https://github.com/kevinweil/elephant-bird
a = LOAD 'file3.json' USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad');
This will parse the JSON into a map (http://pig.apache.org/docs/r0.11.1/basic.html#map-schema); JSON arrays get parsed into a DataBag of maps.
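A sketch of accessing nested values after such a load (chained map dereference with # is the usual Elephant Bird pattern, though exact behavior can vary by Pig version):

a = LOAD 'file3.json' USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad') AS (json:map[]);
-- simple and nested fields via map dereference
b = FOREACH a GENERATE json#'firstName' AS firstName, json#'address'#'city' AS city;
-- phoneNumber becomes a bag of maps; flatten it to reach each entry
phones = FOREACH a GENERATE FLATTEN(json#'phoneNumber') AS phone;
numbers = FOREACH phones GENERATE phone#'number' AS number;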
It is also possible by creating your own UDF. A simple UDF example is shown at the link below:
http://pig.apache.org/docs/r0.9.1/udf.html#udf-java
Alternatively, the built-in JsonLoader can take an explicit schema:
C = LOAD 'path' USING JsonLoader('firstName:chararray,lastName:chararray,age:int,address:(streetAddress:chararray,city:chararray,state:chararray,postalCode:chararray),phoneNumber:{(type:chararray,number:chararray)}');
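With that explicit schema, nested fields can then be addressed directly, for example (a sketch using the aliases from the load above):

D = FOREACH C GENERATE firstName, address.city AS city, FLATTEN(phoneNumber);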