What kind of data format is this?

I have a 3rd-party application which stores data in the format below; I haven't seen a format like this before:
A<1,?,'clientID'=?,'displayType'='show','firstName'='First','id'=1234567,'info'=A<1,?,'acceptUser'=48141315,'nodeID'=1234567,'shareUser'=63490234>,'lastName'='Last','shareOwner'=63490234,'subtype'='shareaccepted','toUser'=48141315,'type'='notification','userID'=48141315,'username'='lastFirst'>

It might be a custom format. They should be telling you what format it is; you shouldn't have to guess, and it should be documented somewhere.
So the first thing you should do is ask them.
Now, if you indent it, it looks pretty straightforward. A<...> seems to define an object.
A<
    1,
    ?,
    'clientID'=?,
    'displayType'='show',
    'firstName'='First',
    'id'=1234567,
    'info'=A<
        1,
        ?,
        'acceptUser'=48141315,
        'nodeID'=1234567,
        'shareUser'=63490234
    >,
    'lastName'='Last',
    'shareOwner'=63490234,
    'subtype'='shareaccepted',
    'toUser'=48141315,
    'type'='notification',
    'userID'=48141315,
    'username'='lastFirst'
>
Unless this is a known format, I am afraid you will have to parse it yourself.
Another way would be to manually translate it to JSON (with a custom converter of your own) and then use Gson or Jackson to parse it into POJOs.
A JSON equivalent would be
{
    "v1": 1,
    "v2": "?",
    "clientID": "?",
    "displayType": "show",
    "firstName": "First",
    "id": 1234567,
    "info": {
        "v1": 1,
        "v2": "?",
        "acceptUser": 48141315,
        "nodeID": 1234567,
        "shareUser": 63490234
    },
    "lastName": "Last",
    "shareOwner": 63490234,
    "subtype": "shareaccepted",
    "toUser": 48141315,
    "type": "notification",
    "userID": 48141315,
    "username": "lastFirst"
}
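Once you have a JSON string like the one above (hand-translated, or produced by a small converter you write), Gson can map it straight to POJOs. A minimal sketch, assuming the hypothetical class names NotificationDemo, Notification and Info (only the keys and values come from the sample):

import com.google.gson.Gson;

// Sketch: parse the hand-translated JSON with Gson into POJOs.
public class NotificationDemo {

    static class Info {
        int v1;
        String v2;
        long acceptUser;
        long nodeID;
        long shareUser;
    }

    static class Notification {
        int v1;
        String v2;
        String clientID;
        String displayType;
        String firstName;
        long id;
        Info info;
        String lastName;
        long shareOwner;
        String subtype;
        long toUser;
        String type;
        long userID;
        String username;
    }

    public static void main(String[] args) {
        // The JSON equivalent shown above, as a compact string
        String json = "{\"v1\":1,\"v2\":\"?\",\"clientID\":\"?\",\"displayType\":\"show\","
                + "\"firstName\":\"First\",\"id\":1234567,"
                + "\"info\":{\"v1\":1,\"v2\":\"?\",\"acceptUser\":48141315,\"nodeID\":1234567,\"shareUser\":63490234},"
                + "\"lastName\":\"Last\",\"shareOwner\":63490234,\"subtype\":\"shareaccepted\","
                + "\"toUser\":48141315,\"type\":\"notification\",\"userID\":48141315,\"username\":\"lastFirst\"}";

        Notification n = new Gson().fromJson(json, Notification.class);
        System.out.println(n.firstName + " " + n.lastName + " -> " + n.subtype); // First Last -> shareaccepted
    }
}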

Related

What is equivalent to multiple types in OpenAPI 3.1? anyOf or oneOf?

I want to change multiple types (supported in the latest drafts of JSON Schema, and therefore in OpenAPI v3.1) to anyOf or oneOf, but I am a bit confused about which of the two the types should be mapped to. Or can I map to either of them?
P.S. I do know about anyOf, oneOf, etc., but the behavior of multiple types is a little ambiguous to me. (I know the schema is invalid; it is just an example focused on the type conversion.)
{
    "type": ["null", "object", "integer", "string"],
    "properties": {
        "prop1": {
            "type": "string"
        },
        "prop2": {
            "type": "string"
        }
    },
    "enum": [2, 3, 4, 5],
    "const": "sample const entry",
    "exclusiveMinimum": 1.22,
    "exclusiveMaximum": 50,
    "maxLength": 10,
    "minLength": 2,
    "format": "int32"
}
I am converting it this way.
{
    "anyOf": [
        {
            "type": "null"
        },
        {
            "type": "object",
            "properties": {
                "prop1": {
                    "type": "string"
                },
                "prop2": {
                    "type": "string"
                }
            }
        },
        {
            "type": "integer",
            "enum": [2, 3, 4, 5],
            "exclusiveMinimum": 1.22,
            "exclusiveMaximum": 50,
            "format": "int32"
        },
        {
            "type": "string",
            "maxLength": 10,
            "minLength": 2,
            "const": "sample const entry"
        }
    ]
}
anyOf gives you a closer match for the semantics than oneOf.
The problem (or benefit!) of oneOf is that it will fail if you happen to match two different cases.
That is unlikely to be what you want, given that the source of your conversion has the looser semantics.
Imagine converting ["integer", "number"], for example: if the input were 1, it would match both subschemas and fail under oneOf.
First of all, your example is not valid:
The initial schema doesn't match anything; it's an "impossible" schema. The "enum": [2, 3, 4, 5] and "const": "sample const entry" constraints are mutually exclusive, and so are "const": "sample const entry" and "maxLength": 10.
The rewritten schema is not equivalent to the original schema, because the enum and const were moved from the root level into subschemas. Yes, this way the schema makes more sense and will sort of work (e.g. it will match the specified numbers - but not strings, because of the const vs maxLength contradiction), but it is not the same as the original schema.
With regard to oneOf/anyOf:
It depends.
The choice between anyOf and oneOf depends on the context, i.e. whether an instance can match more than one subschema or exactly one subschema. In other words, whether multiple subschemas matching is considered OK or an error. Nullable references typically need anyOf rather than oneOf, but other cases vary from schema to schema.
For example,
"type": ["number", "integer"]
corresponds to anyOf because there's an overlap - integer values are also valid "number" values in JSON Schema.
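That is, a rough anyOf rewrite of that union would be:
{
    "anyOf": [
        { "type": "number" },
        { "type": "integer" }
    ]
}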
Whereas
"type": ["string", "integer"]
can be represented using either oneOf or anyOf. oneOf is semantically closer since strings and integers are totally different data types with no overlap. But technically anyOf also works; it's just that there won't be more than one subschema match in this particular case.
In your example, all base type values are distinct with no overlap, so I would use oneOf, but technically anyOf will also work.

Karate: compare CSV data with API response

I have a use case where I want to assert on an API response and compare it with CSV data.
Step 1:
CSV file: test.csv
id,date,fullname,cost,country,code
1,02-03-2002,user1,$200,Canada,CAN
2, 04-05-2016,user2,$1500,United States, USA
I read the CSV file and store it in a variable:
* def var1 = read('test.csv')
So now, var1 is a list of JSON objects based on my CSV:
var1 = [
    {
        "id": 1,
        "date": "02-03-2002",
        "fullname": "user1",
        "cost": "$200",
        "country": "Canada",
        "code": "CAN"
    },
    {
        "id": 2,
        "date": "04-05-2016",
        "fullname": "user2",
        "cost": "$1500",
        "country": "United States",
        "code": "USA"
    }
]
Step 2:
I hit my API and get a response:
Given url "https://dummyurl.com"
And path "/userdetails"
When method get
Then status 200
* def apiResponse = response
Step 3:
My API returns a list response:
[
    {
        "id": 1,
        "date": "02-03-2002",
        "fullname": "user1",
        "cost": "$200",
        "country": {
            "name": "Canada",
            "code": "CAN"
        }
    },
    {
        "id": 2,
        "date": "05-04-2012",
        "fullname": "user2",
        "cost": "$1500",
        "country": {
            "name": "United States",
            "code": "USA"
        }
    }
    ... and 100 more records ...
]
Step 4:
There are two assertions I want to perform:
First, get the count of the CSV data and the API response and compare them, which I did using the .length operator.
Second, I want to confirm that each CSV record matches the corresponding API response record.
If possible, since the id key is the primary key in both the CSV and the API response, I would like to iterate on id and check the API response for any discrepancy.
Let me know if this is readable and if I was able to explain my use case.
Thanks for your earlier response.
Please read up on the match contains syntax, that's all you need: https://github.com/intuit/karate#match-contains
So this one line should be enough:
* match var1 contains response
Also look at this answer in case the new contains deep helps: https://stackoverflow.com/a/63103746/143475
Try to avoid iterating; it is not needed for most API tests. But you can certainly do it. Look at these answers:
https://stackoverflow.com/a/62567262/143475
Also read this - because I suspect you are trying to over-complicate your tests. Please don't. Write tests where you are 100% sure of the "shape" of the response as far as possible: https://stackoverflow.com/a/54126724/143475
And please please read the docs. It is worth it.
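If the nested country object in the response gets in the way of a direct match against the flat CSV rows, one option is to reshape the response before matching. A minimal sketch, assuming the field names from the example above (flatten and flatResponse are hypothetical names):

* def flatten = function(x){ return { id: x.id, date: x.date, fullname: x.fullname, cost: x.cost, country: x.country.name, code: x.country.code } }
* def flatResponse = karate.map(response, flatten)
* match flatResponse contains var1

Note that Karate reads CSV values as strings by default, so a numeric column such as id may need converting (or a fuzzy marker such as #string) before the match passes.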

How to pass JSON parameters in JMeter?

In JMeter, I want to pass dynamic parameters. For simple JSON it is easy to put ${value1}, but if the JSON structure is complex, such as an array or something with multiple values, what is the proper method to pass parameters dynamically? Please refer to the JSON below.
Below is the JSON with parameters:
{
    "squadName": "Super hero squad",
    "homeTown": "Metro City",
    "formed": 2016,
    "secretBase": "Super tower",
    "active": true,
    "members": [
        {
            "name": "Molecule Man",
            "age": 29,
            "secretIdentity": "Dan Jukes",
            "powers": [
                "Radiation resistance",
                "Turning tiny",
                "Radiation blast"
            ]
        },
        {
            "name": "Madame Uppercut",
            "age": 39,
            "secretIdentity": "Jane Wilson",
            "powers": [
                "Million tonne punch",
                "Damage resistance",
                "Superhuman reflexes"
            ]
        },
        {
            "name": "Eternal Flame",
            "age": 1000000,
            "secretIdentity": "Unknown",
            "powers": [
                "Immortality",
                "Heat Immunity",
                "Inferno",
                "Teleportation",
                "Interdimensional travel"
            ]
        }
    ]
}
Now, I have used the method below to send parameters through a CSV config file.
Is there any other simple method to pass parameters through variables in JMeter for complex JSON (5-6 levels with array data)?
CSV Data Set Config is the best way to parameterize your test data.
If you want to customize the way values are picked from the CSV, you can use a BeanShell/JSR223 sampler;
here is one article that shows how to pick random values from the CSV Data Set Config.
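For a nested body like the one above there is nothing special to do: the request body is just text, so JMeter variables can be referenced at any depth. A sketch, where column names such as squadName and member1Name are hypothetical and would come from the CSV Data Set Config:

{
    "squadName": "${squadName}",
    "homeTown": "${homeTown}",
    "formed": ${formed},
    "secretBase": "${secretBase}",
    "active": ${active},
    "members": [
        {
            "name": "${member1Name}",
            "age": ${member1Age},
            "secretIdentity": "${member1Identity}",
            "powers": ${member1Powers}
        }
    ]
}

If a column value itself contains commas (for example a JSON array for powers), either quote it in the CSV or change the delimiter in the CSV Data Set Config. For very large bodies, another option is to keep the template in a file and load it with ${__eval(${__FileToString(template.json,,)})} so that the placeholders inside the file are still resolved.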

Append to Specific Array Index from another table dynamically SQL Server

I have a JSON string that I need to modify:
{"RecordCount":3,"Top":10,"Skip":0,"SelectedSort":"Seed asc","value":[{"AccountProductListId":22091612871138,"Name":"April 4th 2018","AccountId":256813438078643,"IsPublic":false,"Comment":"Test order sheet","Quantity":3},{"AccountProductListId":166305848801939,"Name":"test","AccountId":256813438078643,"IsPublic":false,"Comment":"","Quantity":1},{"AccountProductListId":21177711287586,"Name":"Test Order sheet","AccountId":256813438078643,"IsPublic":true,"Comment":"the very first sheet","Quantity":2}]}
Inside value the array looks like this:
"value": [
    {
        "AccountProductListId": 22091612871138,
        "Name": "April 4th 2018",
        "IsPublic": false,
        "Comment": "Test order sheet",
        "Quantity": 3
    },
    {
        "AccountProductListId": 166305848801939,
        "Name": "test",
        "IsPublic": false,
        "Comment": "",
        "Quantity": 1
    },
    {
        "AccountProductListId": 21177711287586,
        "Name": "Test Order sheet",
        "IsPublic": true,
        "Comment": "the very first sheet",
        "Quantity": 2
    }
],
What I need to do is append some data from another table:
AccountProductListId   ProductID
21177711287586         97096131867163|32721319938943
22091612871138         97096131867163|145461009584740|130005306921282
166305848801939        8744071222157
As you can see, the AccountProductListId is already in the JSON result, so I know which array element the data should go to. The only problem is that I don't know the syntax to merge the ProductID data into its specific array index. The JSON array could have more than 3 items.
Essentially ending up with something like this:
"value": [
    {
        "AccountProductListId": 22091612871138,
        "Name": "April 4th 2018",
        "IsPublic": false,
        "Comment": "Test order sheet",
        "Quantity": 3,
        "ProductID": "97096131867163|145461009584740|130005306921282"
    },
    {
        "AccountProductListId": 166305848801939,
        "Name": "test",
        "IsPublic": false,
        "Comment": "",
        "Quantity": 1,
        "ProductID": "8744071222157"
    },
    {
        "AccountProductListId": 21177711287586,
        "Name": "Test Order sheet",
        "IsPublic": true,
        "Comment": "the very first sheet",
        "Quantity": 2,
        "ProductID": "97096131867163|32721319938943"
    }
],
Any information would be greatly appreciated. Thanks.
Process with SQL Server
Prior to SQL Server 2016, there is no built-in support to read or write JSON.
Starting with SQL Server 2016, you can use the OPENJSON rowset function to read JSON and the FOR JSON clause to write JSON.
See https://learn.microsoft.com/en-us/sql/relational-databases/json/json-data-sql-server
The approach would be to use OPENJSON to read the JSON string as a rowset, join it with the table to pick up the ProductID, and use FOR JSON to convert the result back to JSON.
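A self-contained sketch of that approach (the @link table variable stands in for your real lookup table, and the JSON is trimmed to the value array):

DECLARE @json nvarchar(max) = N'{"value":[
    {"AccountProductListId":22091612871138,"Name":"April 4th 2018","IsPublic":false,"Comment":"Test order sheet","Quantity":3},
    {"AccountProductListId":166305848801939,"Name":"test","IsPublic":false,"Comment":"","Quantity":1},
    {"AccountProductListId":21177711287586,"Name":"Test Order sheet","IsPublic":true,"Comment":"the very first sheet","Quantity":2}
]}';

-- Stand-in for the table holding the AccountProductListId -> ProductID mapping
DECLARE @link TABLE (AccountProductListId bigint, ProductID nvarchar(200));
INSERT INTO @link VALUES
    (21177711287586,  N'97096131867163|32721319938943'),
    (22091612871138,  N'97096131867163|145461009584740|130005306921282'),
    (166305848801939, N'8744071222157');

-- Shred the JSON, join to pick up ProductID, and serialize back to JSON
SELECT j.AccountProductListId, j.[Name], j.IsPublic, j.Comment, j.Quantity, t.ProductID
FROM OPENJSON(@json, '$.value')
     WITH (
         AccountProductListId bigint        '$.AccountProductListId',
         [Name]               nvarchar(200) '$.Name',
         IsPublic             bit           '$.IsPublic',
         Comment              nvarchar(max) '$.Comment',
         Quantity             int           '$.Quantity'
     ) AS j
LEFT JOIN @link AS t ON t.AccountProductListId = j.AccountProductListId
FOR JSON PATH, ROOT('value');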
Process outside SQL Server
Depending on your situation, it might be simpler to parse the JSON outside SQL Server. If going that route, you could (see the sketch after these steps):
Collect all the AccountProductListIds from the parsed JSON
Send the collected ids to SQL Server via a stored procedure that takes a TVP input and outputs the AccountProductListId -> ProductID mapping
Inject the ProductIDs into the JSON object and serialize it back to a string
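A minimal sketch of the last step with Jackson, where the productIdsById map stands in for whatever the stored procedure returns (names and the trimmed JSON are illustrative):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;
import java.util.HashMap;
import java.util.Map;

public class InjectProductIds {
    public static void main(String[] args) throws Exception {
        // Trimmed sample of the original JSON
        String json = "{\"value\":[{\"AccountProductListId\":22091612871138,\"Quantity\":3}]}";

        // Stand-in for the AccountProductListId -> ProductID mapping returned by SQL Server
        Map<Long, String> productIdsById = new HashMap<>();
        productIdsById.put(22091612871138L, "97096131867163|145461009584740|130005306921282");

        ObjectMapper mapper = new ObjectMapper();
        JsonNode root = mapper.readTree(json);

        // Walk the "value" array and append ProductID to each element
        for (JsonNode item : root.get("value")) {
            String productId = productIdsById.get(item.get("AccountProductListId").asLong());
            if (productId != null) {
                ((ObjectNode) item).put("ProductID", productId);
            }
        }

        System.out.println(mapper.writeValueAsString(root));
    }
}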

Turned off dynamic mapping in Elasticsearch, but the custom mapping still doesn't work?

My problem is: I have a JsonObject like this:
{
    "success": true,
    "type": "message",
    "body": {
        "_id": "5215bdd32de81e0c0f000005",
        "id": "411c79eb-a725-4ad9-9d82-2db54dfc80ee",
        "type": "metaModel",
        "title": "testchang",
        "authorId": "5215bd552de81e0c0f000001",
        "drawElems": [
            {
                "type": "App.draw.metaElem.ModelStartPhase",
                "id": "27re7e35-550j",
                "x": 60,
                "y": 50,
                "width": 50,
                "height": 50,
                "title": "problem engagement",
                "isGhost": true,
                "pointTo": "e88e2845-37a4-4c45-a030-d02a3c3e03f9",
                "bindingId": "90f79d70-0afc-11e3-98d2-83967d2ad9a6",
                "model": "meta",
                "entityType": "phase",
                "domainId": "411c79eb-a725-4ad9-9d82-2db54dfc80ee",
                "authorId": "5215bd552de81e0c0f000001",
                "userData": {},
                "_id": "5215f4c5d89f629c1700000d"
            },
            {...}
        ]
    }
}
And I tried to define a mapping as follows to index only parts of this object.
String mapping = XContentFactory.jsonBuilder()
    .startObject()
        .startObject("domaindata").field("dynamic", "false")
            .startObject("properties")
                .startObject("id").field("type", "string").field("store", "yes").endObject()
                .startObject("type").field("type", "string").field("store", "yes").endObject()
                .startObject("title").field("type", "integer").field("store", "yes").endObject()
                .startObject("drawElems")
                    .startObject("properties")
                        .startObject("type").field("store", "yes").field("type", "string").endObject()
                        .startObject("title").field("store", "yes").field("type", "string").endObject()
    .endObject().endObject().endObject().endObject().endObject().string();
After adding this mapping to my type with:
node.client().admin()
    .indices().prepareCreate("test")
    .addMapping("domaindata", mapping)
    .execute().actionGet();
I still got the whole JSON object back in my index response; it seems that my mapping does not work.
Could anybody help me? Thanks a lot!
The problem here is that disabling dynamic mapping only means that fields not already present in the mapping won't be added to it, and thus won't be indexed either. But since they are part of the source document that you sent, they are still returned as part of the _source field.
Same goes if you disable a specific object in the mapping ("enabled": false) as mentioned here. That object won't be parsed or indexed, but it will still be part of the stored _source field.
If you want to avoid storing part of the _source, you can use the source includes/excludes feature as described here.
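For example, a mapping along these lines keeps an excluded path out of the stored _source (the drawElems.userData path is only an illustration; list whatever you don't want stored). It can be built with XContentBuilder exactly like the mapping above:

{
    "domaindata": {
        "dynamic": "false",
        "_source": {
            "excludes": ["drawElems.userData"]
        },
        "properties": {
            "id":    { "type": "string", "store": "yes" },
            "title": { "type": "string", "store": "yes" }
        }
    }
}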