Karate - Match two complex shuffled JSON

This question is very similar to: Karate - Validate json responses stored in different files. I went through the suggested contains-shortcuts and could not figure out the answer.
I need to compare two JSON files, but using the contains keyword. Why only contains? Because in some cases I need to match only some of the selected fields in the JSON files. Below are the samples and the code.
Json File 1: Test.Json
{
  "webServiceDetail": {
    "feature": {
      "featureCd": "ABCD",
      "imaginaryInd": "100.0",
      "extraInd1": "someRandomValue1"
    },
    "includefeatureList": [
      {
        "featureCd": "PQRS",
        "featureName": "Checking SecondAddOn Service",
        "extraInd1": "someRandomValue1",
        "extraInd2": "someRandomValue1"
      },
      {
        "featureCd": "XYZ",
        "featureName": "Checking AddOn Service",
        "imaginaryInd": "50.0"
      }
    ]
  }
}
Json File 2: Test1.json
{
  "webServiceSummary": {
    "service": {
      "serviceCd": "ABCD"
    },
    "includeServicesList": [
      {
        "serviceCd": "XYZ",
        "serviceDescription": "Checking AddOn Service"
      },
      {
        "serviceDescription": "Checking SecondAddOn Service",
        "serviceCd": "PQRS",
        "randon": "FGDD"
      }
    ]
  }
}
My Code:
* def Test = read('classpath:PP1/data/test.json')
* def Test1 = read('classpath:PP1/data/Test1.json')
* def feature = Test.webServiceDetail.feature
* set expected.webServiceSummary.service
| path | value |
| serviceCd | feature.featureCd |
* def mapper = function(x){ return { serviceCd: x.featureCd, serviceDescription: x.featureName} }
* def expectedList = karate.map(Test.webServiceDetail.includefeatureList, mapper)
* set expected.webServiceSummary.includeServicesList = '#(^*expectedList)'
* match Test1.webServiceSummary.includeServicesList == expected.webServiceSummary.includeServicesList
Now, the above code works perfectly and I get a success response as well. But my concern is that I am matching with contains any here. I should verify with the contains keyword, because I need to ensure that all the parameters in expected.webServiceSummary.includeServicesList are present in Test1.webServiceSummary.includeServicesList; not just any or some of them. I tried using #(^expectedList) for contains, but it didn't work out. I know this series of questions looks silly, but I can't figure out the behavior!

This will always check that the value contains all the array elements in expectedList, and only those, in any order:
'#(^^expectedList)'
Read the docs: https://github.com/intuit/karate#contains-short-cuts
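In other words, keeping the rest of the code in the question unchanged, only the marker on the last set line needs to change from ^* (contains any) to ^^ (contains only), a sketch:
* set expected.webServiceSummary.includeServicesList = '#(^^expectedList)'
* match Test1.webServiceSummary.includeServicesList == expected.webServiceSummary.includeServicesList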


Karate way of accessing a value in JSON object with number string as property (key)

I have a JSON response from an API, as below:
{
  "lanes": [
    {
      "laneId": "6ef9deb2-de6d-43f7-baed-fe00de3a11d4",
      "name": "LaneTest",
      "dayShifts": {
        "1": [
          "Morning Shift"
        ]
      }
    },
    {
      "laneId": "559165e9-d675-4537-99e7-a0f3b73de3c8",
      "name": "testaaa",
      "dayShifts": {
        "1": [
          "11am shift",
          "5:00 pm shift"
        ]
      }
    }
  ]
}
Based on the response, I'm trying to find a lane object which contains the value "Morning Shift" under the dayShifts.1 array, and below is my attempt:
* def shiftFilter = function(x) { return x == 'Morning Shift' }
* def laneIdFilter = function(x) { return karate.filter(x.dayShifts.1, shiftFilter) }
* def laneObj = karate.filter(response.lanes, laneIdFilter)
Upon executing the above code, I'm getting the following error:
>>> failed features:
js failed:
>>>>
01: (function(x) { return karate.filter(x.dayShifts.1, shiftFilter) })
<<<<
org.graalvm.polyglot.PolyglotException: SyntaxError: Unnamed:1:47 Expected , but found .1
(function(x) { return karate.filter(x.dayShifts.1, shiftFilter) })
^
Unnamed:1:65 Expected ; but found )
(function(x) { return karate.filter(x.dayShifts.1, shiftFilter) })
^
Unnamed:1:66 Expected } but found eof
(function(x) { return karate.filter(x.dayShifts.1, shiftFilter) })
^
- org.graalvm.polyglot.Context.eval(Context.java:425)
- com.intuit.karate.graal.JsEngine.evalForValue(JsEngine.java:139)
- com.intuit.karate.graal.JsEngine.eval(JsEngine.java:135)
- com.intuit.karate.core.ScenarioEngine.evalJs(ScenarioEngine.java:1190)
- com.intuit.karate.core.ScenarioEngine.evalKarateExpression(ScenarioEngine.java:2143)
- com.intuit.karate.core.ScenarioEngine.evalKarateExpression(ScenarioEngine.java:2062)
- com.intuit.karate.core.ScenarioEngine.evalAndCastTo(ScenarioEngine.java:1251)
However, when I attempt to filter based on other properties of the lane object (e.g. name), it works correctly. It seems that Karate is unhappy with the usage of 1 in the dayShifts object.
I'm new to Karate and I would really appreciate it if someone could shed some light on this.
Karate version: v1.3.0
Thank you!
The reason is that 1 is interpreted as a number, so you need to use the alternate way of referring to JSON keys, with square brackets: x.dayShifts['1']
Here is a more concise solution; the latest version of Karate has JS "baked in", so normal Array operations work:
* def lane = response.lanes.find(x => x.dayShifts['1'][0] == 'Morning Shift')
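For reference, the original filter-based approach also works once bracket notation is used, with one caveat (an assumption worth keeping in mind): karate.filter() returns an array, and an empty array is truthy in JS, so the outer predicate should test the length explicitly. A sketch:
* def shiftFilter = function(x){ return x == 'Morning Shift' }
* def laneIdFilter = function(x){ return karate.filter(x.dayShifts['1'], shiftFilter).length > 0 }
* def laneObj = karate.filter(response.lanes, laneIdFilter)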

How to validate an object inside a JSON schema in Karate, whether it's empty or contains a series of key:value pairs?

I am trying to validate an API response using Karate for either of these two states.
Scenario 1 (when it returns a contractData object that contains a fee key):
{
  "customer": {
    "financialData": {
      "totalAmount": 55736.51,
      "CreateDate": "2022-04-01",
      "RequestedBy": "user1@test.com"
    },
    "contractData": {
      "Fee": 78.00
    }
  }
}
Scenario 2 (when it returns an empty contractData object):
{
  "customer": {
    "financialData": {
      "totalAmount": 55736.51,
      "CreateDate": "2022-04-01",
      "RequestedBy": "user1@test.com"
    },
    "contractData": {}
  }
}
How can I write my schema validation logic to validate both states?
The best I could come up with is to write it like this:
* def schema = {"customer":{"financialData":{"totalAmount":"#number","CreateDate":"#?isValidDate(_)","RequestedBy":"#string"},"contractData":{"Fee":"##number"}}}
* match response == schema
And it seems to work for both scenarios above, but I am not sure whether this is the best approach. The problem with this approach is that if I have more than one key:value pair inside the "contractData" object and I want to be sure all those keys are present when it is not empty, I cannot check that this way: the schema treats each individual key:value pair as optional, so it will match even if only some of those keys are present.
Wow, I have to admit I've never come across this case ever, and that's saying something. I finally was able to figure out a possible solution:
* def chunk = { foo: 'bar' }
* def valid = function(x){ return karate.match(x, {}).pass || karate.match(x, chunk).pass }
* def schema = { hey: '#? valid(_)' }
* def response1 = { hey: { foo: 'bar' } }
* def response2 = { hey: { } }
* match response1 == schema
* match response2 == schema
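Applied to the response in this question, the same trick could look like the sketch below (fullContract and validContract are illustrative names, and isValidDate is the helper already assumed in the question):
# fullContract and validContract are illustrative names, not part of the original code
* def fullContract = { Fee: '#number' }
* def validContract = function(x){ return karate.match(x, {}).pass || karate.match(x, fullContract).pass }
* def schema = { customer: { financialData: { totalAmount: '#number', CreateDate: '#? isValidDate(_)', RequestedBy: '#string' }, contractData: '#? validContract(_)' } }
* match response == schema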

Consumer with message selector not working

I have a simple consumer:
try
{
    factory = new NMSConnectionFactory(Settings.Endpoint);
    connection = factory.CreateConnection(Settings.UserName, Settings.Password);
    connection.ClientId = Settings.Name;
    session = connection.CreateSession(AcknowledgementMode.AutoAcknowledge);
    destination = SessionUtil.GetDestination(session, Settings.QueueName, DestinationType.Queue);
    consumer = session.CreateConsumer(destination, "portCode = 'GB'", false);
    consumer.Listener += new MessageListener(OnMessage);
}
catch
{
    throw;
}
I need to apply a selector to get messages when the portCode field is equal to "GB".
This queue receives many messages.
The message is in JSON and a sample of this message is shown below:
{
  "message": {
    "list": [
      {
        xxxxxxx
      }
    ]
  },
  "header": {
    "messageCode": "xxxxxx",
    "portCode": "GB",
    "sourceSystem": "origin",
    "messageId": "ca0bf0e0-cefa-4f5a-a80a-b518e7d2f645",
    "dateTimeMessage": "2021-04-22T07:12:48.000-0300",
    "version": "1.0"
  }
}
However, I do not receive messages using the specified "GB" selector.
It seems simple to define selectors, but it is not working for me.
Thanks.
Selectors do not work on the body of the message (i.e. your JSON data). They only work on the headers and properties of the message.
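If you control the producer, the usual fix is to copy the field you filter on into a message property when sending. A minimal sketch with Apache.NMS, assuming an existing session and producer, where jsonBody is the serialized JSON shown above:
// sketch: expose portCode as a message *property* so the selector can match it;
// selectors never inspect the JSON body itself
ITextMessage message = session.CreateTextMessage(jsonBody);
message.Properties.SetString("portCode", "GB");
producer.Send(message);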

GraphQL stitch and union

I need to 'aggregate' multiple GraphQL services (with the same schema) into a single read-only (query-only) service exposing the data from all of them. For example:
---- domain 1 ----
"posts": [
  {
    "title": "Domain 1 - First post",
    "description": "Content of the first post"
  },
  {
    "title": "Domain 1 - Second post",
    "description": "Content of the second post"
  }
]
---- domain 2 ----
"posts": [
  {
    "title": "Domain 2 - First post",
    "description": "Content of the first post"
  },
  {
    "title": "Domain 2 - Second post",
    "description": "Content of the second post"
  }
]
I understand that 'stitching' is not meant for use cases like this, but more to combine different micro-services into the same API. In order to have the same type names in a single API, I implemented 'poor man's namespaces' by appending the domain name to all data types on the fly. However, I'm only able to make a query with two different types like this:
query {
  domain_1_posts {
    title
    description
  }
  domain_2_posts {
    title
    description
  }
}
but it results in a data set consisting of two arrays:
{
  "data": {
    "domain_1_posts": [
      { ... },
    ],
    "domain_2_posts": [
      { ... },
    ]
  }
}
I would like to hear your ideas on what I can do to combine these into a single dataset containing only posts.
One idea is to add my own resolver that calls the actual resolvers and combines the results into a single array (if that is supported at all).
Also, as a plan B, I could live with sending a 'domain' param to the query and then constructing the query toward the first or second domain (but keeping the initial query 'domain-agnostic', e.g. without using domain names in the query itself).
Thanks in advance for all suggestions...
I managed to find a solution for my use-case, so I'll leave it here in case anyone bumps into this thread...
As already mentioned, stitching should be used to compose a single endpoint from multiple API segments (microservices). If you try to stitch schemas containing the same types or queries, your request will be 'routed' to a pre-selected instance (so, only one).
As @xadm suggested, the key to 'merging' data from multiple schemas into a single data set is using custom fetch logic in the Link used for the remote schema, as explained below:
1) Define a custom fetch function matching your business needs (simplified example):
const customFetch = async (uri, options) => {
  // do not merge introspection query results!!!
  // for the introspection query always use a predefined (first?) instance;
  // the operation name can be read from the serialized request body
  const { operationName } = JSON.parse(options.body);
  if (operationName === 'IntrospectionQuery') {
    return fetch(services[0].uri, options);
  }
  // array of fetch calls to the different endpoints
  const calls = [
    fetch(services[0].uri, options),
    fetch(services[1].uri, options),
    fetch(services[2].uri, options),
    ...
  ];
  // execute the calls in parallel
  const data = await Promise.all(calls);
  // do whatever you need to merge the data according to your needs
  const retData = customBusinessLogic(data);
  // return a new response containing the merged data
  return new fetch.Response(JSON.stringify(retData), { "status": 200 });
}
2) Define a link using the custom fetch function. If you are using identical schemas you don't need to create links to each instance; just one should be enough.
const httpLink = new HttpLink({ uri: services[0].uri, fetch: customFetch });
3) Use the link to create a remote executable schema:
const schema = await introspectSchema(httpLink);
return makeRemoteExecutableSchema({
  schema,
  link: httpLink,
  context: ({ req }) => {
    // inject http request headers into the context if you need them
    return {
      headers: {
        ...req.headers,
      }
    };
  },
});
4) If you want to forward http headers all the way to the fetch function, use an Apollo context link:
// link for forwarding headers through the context
const contextLink = setContext((request, previousContext) => {
  if (previousContext.graphqlContext) {
    return {
      headers: {
        ...previousContext.graphqlContext.headers,
      }
    };
  }
}).concat(httpLink);
Just to mention, dependencies used for this one:
const { introspectSchema, makeRemoteExecutableSchema, ApolloServer } = require('apollo-server');
const fetch = require('node-fetch');
const { setContext } = require('apollo-link-context');
const { HttpLink } = require('apollo-link-http');
I hope it will be helpful to someone...

How to convert `json query` to `sql query`?

I have an AngularJS app designed to build queries in JSON format. Those queries are built with many tables, fields, and operators like "join", "inner", "where", "and", "or", "like", etc.
The AngularJS app sends these JSON queries to my Django REST Framework backend, so I need to translate each JSON query into an SQL query, to be able to run raw SQL with prior validation of which tables/models are allowed for select.
I don't need a full JSON-query-to-SQL-query translation; I just want to translate selects, with support for clauses like "where", "and", "or", "group_by".
For a better understanding of my question, here are snippets of an input JSON query and the SQL it should produce:
{
  "selectedFields": {
    "orders": {
      "id": true,
      "orderdate": true
    },
    "customers": {
      "customername": true,
      "customerlastname": true
    }
  },
  "from": ["orders"],
  "inner_join": {
    "customers": {
      "on_eq": [
        {
          "orders": {
            "customerID": true
          }
        },
        {
          "customers": {
            "customerID": true
          }
        }
      ]
    }
  }
}
SELECT
Orders.OrderID,
Customers.CustomerName,
Customers.CustomerLastName,
Orders.OrderDate
FROM Orders
INNER JOIN Customers
ON Orders.CustomerID=Customers.CustomerID;
I took the example from: http://www.w3schools.com/sql/sql_join.asp
Please note that I am not trying to serialize any SQL query output to JSON.
I found a NodeJS package (https://www.npmjs.com/package/json-sql) that converts JSON queries to SQL queries, so I made a NodeJS script and then created a Python class to call the NodeJS script.
With this approach I just need to send all AngularJS queries following this syntax (https://github.com/2do2go/json-sql/tree/master/docs#join).
NodeJS script.
// Use:
//$ nodejs reporter/services.js '{"type":"select","fields":["a","b"],"table":"table"}'
var jsonSql = require('json-sql')();
var json_query = JSON.parse(process.argv[2]);
var sql = jsonSql.build(json_query);
console.log(sql.query);
DRF class:
from unipath import Path
import json
from django.conf import settings
from Naked.toolshed.shell import muterun_js

full_path = str(Path(settings.BASE_DIR, "reporter/services.js"))

class JSONToSQL:
    def __init__(self, json_):
        self.json = json_
        self.sql = None
        self.dic = json.loads(json_)
        self.to_sql()

    def to_sql(self):
        response = muterun_js('%s \'%s\'' % (full_path, self.json))
        if response.exitcode == 0:
            self.sql = str(response.stdout).replace("\n", "")
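Hypothetical usage from a view or shell (the exact SQL text depends on json-sql's dialect settings):
# illustrative only: pass the JSON string received from the AngularJS app
converter = JSONToSQL('{"type":"select","fields":["a","b"],"table":"table"}')
print(converter.sql)  # e.g. select "a", "b" from "table";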
You could write some custom JS to parse the object like this:
var selectedfields = '';
var fields = Object.keys(obj.selectedFields);
for (var i = 0; i < fields.length; i++) {
  var subfields = Object.keys(obj.selectedFields[fields[i]]);
  for (var j = 0; j < subfields.length; j++) {
    // build a comma-separated select list like "orders.id, orders.orderdate"
    if (selectedfields !== '') {
      selectedfields += ', ';
    }
    selectedfields += fields[i] + '.' + subfields[j];
  }
}
var from = '';
for (var k = 0; k < obj.from.length; k++) {
  if (from === '') {
    from = obj.from[k];
  } else {
    from = from + ',' + obj.from[k];
  }
}
var output = 'SELECT ' + selectedfields + ' FROM ' + from;
document.getElementById('output').innerHTML = output;
or in Angular you would use $scope.output = ... from within a controller, perhaps.
jsfiddle here: https://jsfiddle.net/jsheridan390/fpbp6cz0/1/