How to validate an object inside a JSON schema in Karate, whether it's empty or contains a series of key:value pairs?

I am trying to validate an API response using Karate for either of these two states.
Scenario 1 (when it returns a contractData object that contains a Fee key):
{
  "customer": {
    "financialData": {
      "totalAmount": 55736.51,
      "CreateDate": "2022-04-01",
      "RequestedBy": "user1@test.com"
    },
    "contractData": {
      "Fee": 78.00
    }
  }
}
Scenario 2 (when it returns an empty contractData object):
{
  "customer": {
    "financialData": {
      "totalAmount": 55736.51,
      "CreateDate": "2022-04-01",
      "RequestedBy": "user1@test.com"
    },
    "contractData": {}
  }
}
How can I write my schema validation logic to validate both states?
The best I have come up with is to write it like this:
* def schema = {"customer":{"financialData":{"totalAmount":"#number","CreateDate":"#?isValidDate(_)","RequestedBy":"#string"},"contractData":{"Fee":"##number"}}}
* match response == schema
This seems to work for both scenarios above, but I am not sure whether it is the best approach. The problem is that if the "contractData" object holds more than one key:value pair and I want to be sure all of those keys are present when it is not empty, this approach cannot check that: it treats each individual key:value pair as optional, so the schema will match even if only some of the keys are present.

Wow, I have to admit I've never come across this case before, and that's saying something. I was finally able to figure out a possible solution:
* def chunk = { foo: 'bar' }
* def valid = function(x){ return karate.match(x, {}).pass || karate.match(x, chunk).pass }
* def schema = { hey: '#? valid(_)' }
* def response1 = { hey: { foo: 'bar' } }
* def response2 = { hey: { } }
* match response1 == schema
* match response2 == schema
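Adapting that idea to the response in the question could look like this (a sketch; it reuses the isValidDate helper the question already assumes is defined):
* def contract = { Fee: '#number' }
* def validContract = function(x){ return karate.match(x, {}).pass || karate.match(x, contract).pass }
* def schema = { customer: { financialData: { totalAmount: '#number', CreateDate: '#? isValidDate(_)', RequestedBy: '#string' }, contractData: '#? validContract(_)' } }
* match response == schema
Because karate.match(x, contract) is an exact match, every key in contract must be present whenever contractData is not empty, which addresses the concern about partially populated objects.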

How to evaluate value of a variable in Json Key in Karate?

I have a request JSON for one of my API calls where the key of a JSON object is itself a variable that needs to be evaluated when hitting the API.
Normally, when I have to use a variable in JSON, I simply use #(varName) and it works fine as long as it is a JSON value.
I want to do the same for a JSON key.
A sample JSON snippet:
"Registrations": {
  "#(varName)": {
    "requestedAction": "REGISTER",
    "productId": "#(varName)",
    "registrationSourceType": "Selected",
    "includedInAgenda": false
  }
}
In the above example, the Registrations JSON block has a nested JSON object whose key name will be a UUID.
Just use JS:
* def myJson = { registrations: {} }
* def uuid = 'someString'
* myJson.registrations[uuid] = { foo: 'bar' }
* match myJson == { registrations: { someString: { foo: 'bar' } } }
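If the UUID also has to appear as a value inside the block, as in the question's snippet, a sketch along these lines should work:
* def uuid = java.util.UUID.randomUUID() + ''
* def registration = { requestedAction: 'REGISTER', productId: '#(uuid)', registrationSourceType: 'Selected', includedInAgenda: false }
* def myJson = { Registrations: {} }
* myJson.Registrations[uuid] = registration
* print myJson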

GraphQL stitch and union

I have a need to 'aggregate' multiple GraphQL services (with the same schema) into a single read-only (query-only) service exposing data from all of them. For example:
---- domain 1 ----
"posts": [
  {
    "title": "Domain 1 - First post",
    "description": "Content of the first post"
  },
  {
    "title": "Domain 1 - Second post",
    "description": "Content of the second post"
  }
]
---- domain 2 ----
"posts": [
  {
    "title": "Domain 2 - First post",
    "description": "Content of the first post"
  },
  {
    "title": "Domain 2 - Second post",
    "description": "Content of the second post"
  }
]
I understand that 'stitching' is not meant for use cases like this, but rather for combining different micro-services into the same API. In order to have the same types (names) in a single API, I implemented 'poor man's namespaces' by appending the domain name to all data types on the fly. However, I'm only able to make a query with two different types, like this:
query {
  domain_1_posts {
    title
    description
  }
  domain_2_posts {
    title
    description
  }
}
but it results in a data set consisting of two arrays:
{
  "data": {
    "domain_1_posts": [
      { ... },
    ],
    "domain_2_posts": [
      { ... },
    ]
  }
}
I would like to hear your ideas on what I can do to combine these into a single dataset containing only posts.
One idea is to add my own resolver that calls the actual resolvers and combines the results into a single array (if that is supported at all).
Also, as a plan B, I could live with sending a 'domain' param to the query and then constructing the query toward the first or second domain (but keeping the initial query 'domain-agnostic', e.g. without using domain names in the query itself).
Thanks in advance for all suggestions...
I managed to find a solution for my use case, so I'll leave it here in case anyone bumps into this thread...
As already mentioned, stitching should be used to compose a single endpoint from multiple API segments (microservices). If you try to stitch schemas containing the same types or queries, your request will be 'routed' to a pre-selected instance (so, only one).
As @xadm suggested, the key to 'merging' data from multiple schemas into a single data set is using custom fetch logic in the Link used for the remote schema, as explained below:
1) Define a custom fetch function matching your business needs (simplified example):
const customFetch = async (uri, options) => {
  // do not merge introspection query results!!!
  // for the introspection query always use a predefined (first?) instance
  // (here the operation name is taken from the request body)
  const { operationName } = JSON.parse(options.body);
  if (operationName === 'IntrospectionQuery') {
    return fetch(services[0].uri, options);
  }
  // array of fetch calls to the different endpoints
  const fetchCalls = [
    fetch(services[0].uri, options),
    fetch(services[1].uri, options),
    fetch(services[2].uri, options),
    // ...
  ];
  // execute the calls in parallel
  const data = await Promise.all(fetchCalls);
  // do whatever you need to merge the data according to your needs
  const retData = await customBusinessLogic(data);
  // return a new response containing the merged data
  return new fetch.Response(JSON.stringify(retData), { "status": 200 });
}
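For the posts example above, customBusinessLogic could be as simple as this sketch. It assumes each endpoint returns a standard { data: { posts: [...] } } payload and that merging means plain concatenation; adjust it to your actual schema:
// parse each response body, then concatenate the posts arrays into one
const customBusinessLogic = async (responses) => {
  const bodies = await Promise.all(responses.map(r => r.json()));
  return { data: { posts: bodies.flatMap(b => b.data.posts || []) } };
};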
2) Define a link using the custom fetch function. If you are using identical schemas you don't need to create links to each instance; just one should be enough.
const httpLink = new HttpLink({ uri: services[0].uri, fetch: customFetch });
3) Use the link to create a remote executable schema:
const schema = await introspectSchema(httpLink);
return makeRemoteExecutableSchema({
  schema,
  link: httpLink,
  context: ({ req }) => {
    // inject http request headers into the context if you need them
    return {
      headers: {
        ...req.headers,
      }
    }
  },
})
4) If you want to forward http headers all the way to the fetch function, use Apollo's setContext link:
// link for forwarding headers through the context
const contextLink = setContext((request, previousContext) => {
  if (previousContext.graphqlContext) {
    return {
      headers: {
        ...previousContext.graphqlContext.headers
      }
    }
  }
}).concat(httpLink);
Just to mention, dependencies used for this one:
const { introspectSchema, makeRemoteExecutableSchema, ApolloServer } = require('apollo-server');
const fetch = require('node-fetch');
const { setContext } = require('apollo-link-context');
const { HttpLink } = require('apollo-link-http');
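Wired together, the remote schema can then be served as usual. A minimal sketch, assuming the step 3 code is wrapped in an async function (here called createRemoteSchema, a name not in the original):
(async () => {
  const schema = await createRemoteSchema(); // steps 1-3 above
  const server = new ApolloServer({ schema });
  const { url } = await server.listen();
  console.log(`Gateway ready at ${url}`);
})();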
I hope that it will be helpful to someone...

Karate - Match two complex shuffled json

This question is very similar to: Karate - Validate json responses stored in different files. I went through the suggested contains-shortcuts and could not figure out the answer.
I need to compare two JSON files, but using the contains keyword. Why only contains? Because in some cases I need to match only some of the selected fields in the JSON files. Below are the samples and code.
Json File 1: Test.Json
{
  "webServiceDetail": {
    "feature": {
      "featureCd": "ABCD",
      "imaginaryInd": "100.0",
      "extraInd1": "someRandomValue1"
    },
    "includefeatureList": [
      {
        "featureCd": "PQRS",
        "featureName": "Checking SecondAddOn Service",
        "extraInd1": "someRandomValue1",
        "extraInd2": "someRandomValue1"
      },
      {
        "featureCd": "XYZ",
        "featureName": "Checking AddOn Service",
        "imaginaryInd": "50.0"
      }
    ]
  }
}
Json File 2: Test1.json
{
  "webServiceSummary": {
    "service": {
      "serviceCd": "ABCD"
    },
    "includeServicesList": [
      {
        "serviceCd": "XYZ",
        "serviceDescription": "Checking AddOn Service"
      },
      {
        "serviceDescription": "Checking SecondAddOn Service",
        "serviceCd": "PQRS",
        "randon": "FGDD"
      }
    ]
  }
}
My Code:
* def Test = read('classpath:PP1/data/test.json')
* def Test1 = read('classpath:PP1/data/Test1.json')
* def feature = Test.webServiceDetail.feature
* set expected.webServiceSummary.service
| path | value |
| serviceCd | feature.featureCd |
* def mapper = function(x){ return { serviceCd: x.featureCd, serviceDescription: x.featureName} }
* def expectedList = karate.map(Test.webServiceDetail.includefeatureList, mapper)
* set expected.webServiceSummary.includeServicesList = '#(^*expectedList)'
* match Test1.webServiceSummary.includeServicesList == expected.webServiceSummary.includeServicesList
Now, the above code works perfectly and I get a success response as well. But my concern is that I am matching with contains any here. I should verify with the contains keyword, because I need to ensure all the parameters in expected.webServiceSummary.includeServicesList are present in Test1.webServiceSummary.includeServicesList; not just any or some of them. I tried using #(^expectedList) -- for contains; but it didn't work out. I know that this series of questions looks silly, but I can't figure out the behavior!
This will always check that a value contains all of the array elements in expectedList, and only those:
'#(^^expectedList)'
Read the docs: https://github.com/intuit/karate#contains-short-cuts
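Applied to the question's code, only the shortcut on the set line needs to change (reusing the variables already defined there):
* set expected.webServiceSummary.includeServicesList = '#(^^expectedList)'
* match Test1.webServiceSummary.includeServicesList == expected.webServiceSummary.includeServicesList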

Load only the data that's needed from database with Graphql

I'm learning GraphQL and I think I've spotted one flaw in it.
Suppose we have a schema like this
type Hero {
  name: String
  friends: [Person]
}
type Person {
  name: String
}
and two queries
{
  hero {
    name
    friends {
      name
    }
  }
}
and this
{
  hero {
    name
  }
}
And a relational database that has two corresponding tables, Heros and Persons.
If my understanding is right, I can't resolve these queries such that for the first query the resulting SQL query would be
select Heros.name, Persons.name
from Heros, Persons
where Heros.name = 'Some' and Persons.heroid = Heros.id
and for the second
select Heros.name from Heros
so that only the fields that are really needed for the query would be loaded from the database.
Am I right about that?
Also, if GraphQL had the ability to return only the data that's needed for the query, not the data that's valid for the full schema, I think this would be possible, right?
Yes, this is definitely possible and encouraged. However, the gist of it is that GraphQL essentially has no understanding of your storage layer until you explicitly explain how to fetch data. The good news is that you can use GraphQL to optimize queries no matter where the data lives.
If you use JavaScript, there is a package, graphql-fields, that can simplify your life in terms of understanding the selection set of a query. It looks something like this.
If you had this query
query GetCityEvents {
  getCity(id: "id-for-san-francisco") {
    id
    name
    events {
      edges {
        node {
          id
          name
          date
          sport {
            id
            name
          }
        }
      }
    }
  }
}
then a resolver might look like this
import graphqlFields from 'graphql-fields';

function getCityResolver(parent, args, context, info) {
  const selectionSet = graphqlFields(info);
  /**
  selectionSet = {
    id: {},
    name: {},
    events: {
      edges: {
        node: {
          id: {},
          name: {},
          date: {},
          sport: {
            id: {},
            name: {},
          }
        }
      }
    }
  }
  */
  // ... generate SQL from the selection set
  return db.query(generatedQuery);
}
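As a minimal illustration of the commented-out last step, the selection set can be reduced to a column list (a sketch; the cities table name and the $1 placeholder are assumptions, and nested fields like events would need joins of their own):
// keep only scalar fields (empty sub-objects in the selection set) as column names
function scalarColumns(selectionSet) {
  return Object.keys(selectionSet)
    .filter((key) => Object.keys(selectionSet[key]).length === 0);
}

// inside getCityResolver, after computing selectionSet:
// scalarColumns({ id: {}, name: {}, events: { ... } }) -> ['id', 'name']
const generatedQuery =
  `SELECT ${scalarColumns(selectionSet).join(', ')} FROM cities WHERE id = $1`;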
There are also higher level tools like join monster that might help with this.
Here is a blog post that covers some of these topics in more detail. https://scaphold.io/community/blog/querying-relational-data-with-graphql/
In the Scala implementation (Sangria GraphQL) you can achieve this as follows:
Suppose this is the client query:
query BookQuery {
  Books(id: 123) {
    id
    title
    author {
      id
      name
    }
  }
}
And this is your QueryType in the GraphQL server.
val BooksDataQuery = ObjectType(
  "data_query",
  "Gets books data",
  fields[Repository, Unit](
    Field("Books", ListType(BookType), arguments = bookId :: Nil,
      resolve = Projector(2, (c, fields) => c.ctx.getBooks(c.arg(bookId), fields).map(res => res)))
  )
)
val BookType = ObjectType( ....)
val AuthorType = ObjectType( ....)
Repository class:
def getBooks(id: String, projectionFields: Vector[ProjectedName]) = {
  /* Here you have the list of fields that the client specified in the query,
     in this case the Book's id, title and author (id, name).
     The fields are nested: author, for example, contains a sequence of id and name, so for the above query the fields will look like:
     Vector(ProjectedName(id,Vector()), ProjectedName(title,Vector()), ProjectedName(author,Vector(ProjectedName(id,Vector()), ProjectedName(name,Vector()))))
     Now you can add your own logic to read and parse this field collection and make it appropriate for querying the database. */
}
So basically, you can intercept the fields specified by the client in your QueryType's field resolver.

How to convert `json query` to `sql query`?

I have an AngularJS app designed to build queries in JSON format. Those queries involve many tables, fields, and operators like "join", "inner", "where", "and", "or", "like", etc.
The AngularJS app sends these JSON queries to my Django REST Framework backend, so I need to translate each JSON query into a SQL query, to be able to run raw SQL after validating which tables/models are allowed in the select.
I don't need a full JSON-query-to-SQL translation; I just want to translate selects, with support for clauses like "where", "and", "or", and "group_by".
For a better understanding of my question, here are example snippets:
{
  "selectedFields": {
    "orders": {
      "id": true,
      "orderdate": true
    },
    "customers": {
      "customername": true,
      "customerlastname": true
    }
  },
  "from": ["orders"],
  "inner_join": {
    "customers": {
      "on_eq": [
        {
          "orders": {
            "customerID": true
          }
        },
        {
          "customers": {
            "customerID": true
          }
        }
      ]
    }
  }
}
SELECT
Orders.OrderID,
Customers.CustomerName,
Customers.CustomerLastName,
Orders.OrderDate
FROM Orders
INNER JOIN Customers
ON Orders.CustomerID=Customers.CustomerID;
I took the example from: http://www.w3schools.com/sql/sql_join.asp
Please note that I am not trying to serialize any SQL query output to JSON.
I found a NodeJS package (https://www.npmjs.com/package/json-sql) that converts JSON queries to SQL queries, so I made a NodeJS script and then created a Python class that calls the NodeJS script.
With this approach I just need to send all AngularJS queries following this syntax: https://github.com/2do2go/json-sql/tree/master/docs#join
NodeJS script.
// Use:
//$ nodejs reporter/services.js '{"type":"select","fields":["a","b"],"table":"table"}'
var jsonSql = require('json-sql')();
var json_query = JSON.parse(process.argv[2]);
var sql = jsonSql.build(json_query);
console.log(sql.query);
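Run as in the usage comment, json-sql should print something like the following for that input (the exact identifier quoting depends on the configured dialect):
select "a", "b" from "table";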
DRF class:
from unipath import Path
import json
from django.conf import settings
from Naked.toolshed.shell import muterun_js

full_path = str(Path(settings.BASE_DIR, "reporter/services.js"))

class JSONToSQL:
    def __init__(self, json_):
        self.json = json_
        self.sql = None
        self.dic = json.loads(json_)
        self.to_sql()

    def to_sql(self):
        response = muterun_js('%s \'%s\'' % (full_path, self.json))
        if response.exitcode == 0:
            self.sql = str(response.stdout).replace("\n", "")
You could write some custom JS to parse the object like this:
var selectedfields = '';
var fields = Object.keys(obj.selectedFields);
for (var i = 0; i < fields.length; i++) {
  var subfields = Object.keys(obj.selectedFields[fields[i]]);
  for (var j = 0; j < subfields.length; j++) {
    // separate the columns with commas so the generated SQL stays valid
    if (selectedfields !== '') {
      selectedfields = selectedfields + ', ';
    }
    selectedfields = selectedfields + fields[i] + '.' + subfields[j];
  }
}
var from = '';
for (var k = 0; k < obj.from.length; k++) {
  if (from === '') {
    from = obj.from[k];
  } else {
    from = from + ',' + obj.from[k];
  }
}
var output = 'SELECT ' + selectedfields + ' FROM ' + from;
document.getElementById('output').innerHTML = output;
or in angular you would use $scope.output = ... from within a controller perhaps.
jsfiddle here: https://jsfiddle.net/jsheridan390/fpbp6cz0/1/
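With the comma fix above, the question's sample JSON produces roughly:
SELECT orders.id, orders.orderdate, customers.customername, customers.customerlastname FROM orders
The INNER JOIN ... ON clause would still have to be generated by walking the "inner_join" block in the same way.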