Karate: Unable to move schema definition out of feature file

I'm able to successfully run the feature/scenario when I define the schema inside my feature file.
Here is a simplified example of the schema.
Background:
...
...
* def accountSchema = { id: '#number? _ >= 0', prop1: '#number? _ >= 0', prop2: '#string', prop3: '#string', optionaProp: '##string' }
* def userAccountsSchema = ({ entitledAccounts: '#[] accountSchema', prop5: '#number' , prop6: '##string'})
And here is how I'm validating:
Scenario:
...
...
When method GET
Then status 200
* print userAccountsSchema
And match response == userAccountsSchema
The schema I have posted here is simplified for the purpose of this question; the real schema is far more complex.
So, for clarity, I decided to put the schema in a separate JS file, response-schemas.js, in the same folder as the feature file.
Here is the simplified content of response-schemas.js file.
function schema() {
  let accountSchema = {
    id: '#number? _ >= 0',
    prop1: '#number? _ >= 0',
    prop2: '#string',
    prop3: '#string',
    optionaProp: '##string',
  };
  return {
    accounts: `#[] ${accountSchema}`,
    prop5: '#string',
    prop6: '#string',
  };
}
Now, if I replace the two lines I mentioned at the beginning of the question under Background: with the line below:
* def userAccountsSchema = call read('response-schemas.js')
I get this error:
And match response == schemas
SyntaxError: Unnamed:1:8 Expected comma but found ident
[object Object]
^
I believe I understand the problem; it is this line:
accounts: `#[] ${accountSchema}` ,
but I am unable to figure out the solution. If I try to change the accountSchema variable in response-schemas.js to use a multiline string, I get an error at the read step in Background.
The whole idea of having a dedicated JS file for the schema is to keep it readable (by using multiple lines, and preferably objects rather than one long string).

The main problem is this part:
accounts: `#[] ${accountSchema}`
Here you are trying to stuff JSON into a Karate "fuzzy" expression, which is just not supported. Note that Karate's way of defining things like #(foo) and #[] bar has nothing to do with JavaScript, so I recommend not mixing the two.
I know there is a desire to achieve the match in just one line and somehow get one monstrous schema to do everything, and I very strongly discourage this. Split your assertions into multiple lines. Split your response into smaller chunks of JSON if needed. There is nothing wrong with that. It also makes life much easier for the people who come along later and have to maintain your test.
For ideas see this answer: https://stackoverflow.com/a/61252709/143475
Other answers: https://stackoverflow.com/search?q=%5Bkarate%5D+array+schema
Tip: you can keep your schema "chunks" as JSON files if needed.
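For example, a minimal sketch of that approach (the file name account-schema.json and the two-step match are illustrative assumptions, not from the original question):

# account-schema.json (in the same folder as the feature file):
# { "id": "#number? _ >= 0", "prop1": "#number? _ >= 0", "prop2": "#string", "prop3": "#string", "optionaProp": "##string" }

Background:
* def accountSchema = read('account-schema.json')

Scenario:
...
When method GET
Then status 200
# assert the array elements and the remaining top-level fields in separate, readable steps
And match each response.entitledAccounts == accountSchema
And match response contains { prop5: '#number', prop6: '##string' }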

Related

Karate API: check if a phrase is available in a response object array

I have a response:
{
  "errors": [
    {
      "code": 123,
      "reason": "this is the cause for a random problem where the last part of this string is dynamically generated"
    },
    {
      "code": 234,
      "reason": "Some other error for another random reason"
    }
    ...
  ]
}
Now when I validate this response, I use the following:
...
...
And match response.errors[*].reason contains "this is the cause"
This validation fails because it performs an equality check against the complete string for every reason.
All I want is to validate that inside the errors array there is at least one error object whose reason (a string property) starts with the phrase this is the cause.
I tried a few wildcards, but they didn't work either. How can I do it?
For complex things like this, just switch to JS.
* def found = response.errors.find(x => x.reason.startsWith('this is the cause'))
* match found == { code: 123, reason: '#string' }
# you can also do
* if (found) karate.log('found')
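If you prefer to stay within match syntax, a regex-based contains is another option (a sketch; note that #regex has to match the entire string, hence the trailing .*):
* match response.errors contains { code: '#number', reason: '#regex this is the cause.*' }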
Any questions :)

Scalding Unit Test - How to Write A Local File?

I work at a place where Scalding writes are augmented with a specific API to track dataset metadata. When converting from normal writes to these special writes, there are some intricacies with respect to Key/Value, TSV/CSV, Thrift ... datasets. I would like to verify that the binary file is the same prior to conversion and after conversion to the special API.
Given that I cannot share the specific API for the metadata-inclusive writes, I only ask: how can I write a unit test for the .write method on a TypedPipe?
implicit val timeZone: TimeZone = DateOps.UTC
implicit val dateParser: DateParser = DateParser.default
implicit def flowDef: FlowDef = new FlowDef()
implicit def mode: Mode = Local(true)
val fileStrPath = root + "/test"
println("writing data to " + fileStrPath)
TypedPipe
  .from(Seq[Long](1, 2, 3, 4, 5))
  // .map((x: Long) => { println(x.toString); System.out.flush(); x })
  .write(TypedTsv[Long](fileStrPath))
  .forceToDisk
The above doesn't seem to write anything to local (OSX) disk.
So I wonder if I need to use a MiniDFSCluster something like this:
def setUpTempFolder: String = {
  val tempFolder = new TemporaryFolder
  tempFolder.create()
  tempFolder.getRoot.getAbsolutePath
}
val root: String = setUpTempFolder
println(s"root = $root")
val tempDir = Files.createTempDirectory(setUpTempFolder).toFile
val hdfsCluster: MiniDFSCluster = {
  val configuration = new Configuration()
  configuration.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, tempDir.getAbsolutePath)
  configuration.set("io.compression.codecs", classOf[LzopCodec].getName)
  new MiniDFSCluster.Builder(configuration)
    .manageNameDfsDirs(true)
    .manageDataDfsDirs(true)
    .format(true)
    .build()
}
hdfsCluster.waitClusterUp()
val fs: DistributedFileSystem = hdfsCluster.getFileSystem
val rootPath = new Path(root)
fs.mkdirs(rootPath)
However, my attempts to get this MiniCluster to work haven't panned out either - somehow I need to link the MiniCluster with the Scalding write.
Note: The Scalding JobTest framework for unit testing isn't going to work, because the actual data written is sometimes wrapped in a bijection codec or set up with case class wrappers before the metadata-inclusive write APIs perform the writes.
Any ideas how I can write a local file (without using the Scalding REPL) with either Scalding alone or a MiniCluster? (If using the latter, I need a hint on how to read the file back.)
Answering my own question: there is an example of how to use a mini cluster for exactly this, reading and writing to HDFS. I will be able to cross-read my different writes and examine them. Here it is in the tests for Scalding's TypedParquet type.
HadoopPlatformJobTest is an extension for JobTest that uses a MiniCluster.
Glossing over some detail that is covered in the link, the bulk of the code is this:
"TypedParquetTuple" should {
"read and write correctly" in {
import com.twitter.scalding.parquet.tuple.TestValues._
def toMap[T](i: Iterable[T]): Map[T, Int] = i.groupBy(identity).mapValues(_.size)
HadoopPlatformJobTest(new WriteToTypedParquetTupleJob(_), cluster)
.arg("output", "output1")
.sink[SampleClassB](TypedParquet[SampleClassB](Seq("output1"))) {
toMap(_) shouldBe toMap(values)
}
.run()
HadoopPlatformJobTest(new ReadWithFilterPredicateJob(_), cluster)
.arg("input", "output1")
.arg("output", "output2")
.sink[Boolean]("output2")(toMap(_) shouldBe toMap(values.filter(_.string == "B1").map(_.a.bool)))
.run()
}
}

Schema validation of empty array for nested structure verification

I am working on a POC with the Karate framework (latest version 0.9.6) and I came across the following:
* match each response.bar contains { id:"#uuid ? _ != ''", name: "#notnull", foo: "#[] #object"}
I noticed that when foo is an empty array, the step does not fail.
Is it possible to add a length verification to the above step, so that it fails if the array is empty?
Thanks in advance.
Yes, read the docs: https://github.com/intuit/karate#schema-validation
* match each response.bar contains { id:"#uuid ? _ != ''", name: "#notnull", foo: "#[_ > 0] #object"}
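The _ > 0 inside the #[...] marker is a predicate on the array length, so the step above now fails when foo is empty. A quick standalone illustration (a sketch):
* def bar = [{ a: 1 }]
# passes: bar is a non-empty array of objects
* match bar == '#[_ > 0] #object'
* def empty = []
# fails: the length predicate _ > 0 is not satisfied by an empty array
* match empty == '#[_ > 0] #object'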

Karate - how to access an array element by UUID during a 'retry until' statement

I have an endpoint which returns this JSON response:
{
  "jobs": [
    {
      "name": "job1",
      "id": "d6bd9aa1-0708-436a-81fd-cf22d5042689",
      "status": "pending"
    },
    {
      "name": "job2",
      "id": "4fdaf09f-51de-4246-88fd-08d4daef6c3e",
      "status": "pending"
    }
  ]
}
I would like to call GET on this endpoint repeatedly until the job I care about ("job2") has a "status" of "completed", but I'd like to check for this using a UUID stored in a variable from a previous call,
i.e. by doing something like this:
#NB: code for previous API call is executed
* def uuidVar = response.jobRef
#NB: uuidVar equates to '4fdaf09f-51de-4246-88fd-08d4daef6c3e' for this scenario
* configure retry = { count: 5, interval: 10000 }
Given path /blah
And retry until response.jobs[?(#.id==uuidVar)].status == 'completed'
When method GET
Could anyone suggest the correct syntax for the retry until?
I've tried referencing the fantastic Karate docs & examples (in particular, js-arrays.feature) and some questions on SO (including this one: Karate framework retry until not working as expected) but sadly I haven't been able to get this working.
I also tried using karate.match here as suggested in the link above, but no cigar.
Apologies in advance if I am missing something obvious.
First, I recommend you read this answer on Stack Overflow; it is actually linked from the readme and is intended to be the definitive reference. Let me know if it needs to be improved: https://stackoverflow.com/a/55823180/143475
Short answer, you can't use JsonPath in the retry until expression, it has to be pure JavaScript.
While you can use karate.jsonPath() to bridge the worlds of JsonPath and JS, JsonPath can get very hard to write and comprehend. This is why I recommend using karate.filter() to do the same thing, but break the logic down into simple, readable chunks. Here is what you can try in a fresh Scenario. (Hint: this is a good way to troubleshoot your code without making any "real" requests.)
* def getStatus = function(id){ var temp = karate.filter(response.jobs, function(x){ return x.id == id }); return temp[0].status }
* def response =
"""
{
"jobs": [
{
"name": "job1",
"id": "d6bd9aa1-0708-436a-81fd-cf22d5042689",
"status": "pending"
},
{
"name": "job2",
"id": "4fdaf09f-51de-4246-88fd-08d4daef6c3e",
"status": "pending"
}
]
}
"""
* def selected = '4fdaf09f-51de-4246-88fd-08d4daef6c3e'
* print getStatus(selected)
So if you have getStatus defined up-front, you can do this:
* retry until getStatus(selected) == 'completed'
Note you can use multiple lines for a JS function if you don't like squeezing it all into one line, or even read it from a file.
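Putting it together against the real endpoint would then look something like this (a sketch; the path, retry settings and variable names are carried over from the question):
# 'response' here initially comes from the previous API call that returned the job reference
* def uuidVar = response.jobRef
* def getStatus = function(id){ var temp = karate.filter(response.jobs, function(x){ return x.id == id }); return temp[0].status }
* configure retry = { count: 5, interval: 10000 }
Given path '/blah'
And retry until getStatus(uuidVar) == 'completed'
When method get
Since variables inside a Karate JS function are resolved when the function is called, each retry attempt evaluates getStatus against the latest response.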

How to do conditional variable definition in Karate

I had written Karate tests for one environment only (staging). Since the tests have been successful at capturing bugs (thanks a lot, Karate and Intuit team!), there is now a request to run the tests on production.
Our tests are GraphQL-based, and most of the requests are queries. I wonder if it is possible for us to switch variables based on the karate.env we pass on the terminal?
Most of our requests look like this:
And def variables = {objectID:"1234566", cursor:"1", cursorType:PAGE, size:'10', objectType:USER}
And request { query: '#(query)', variables: '#(variables)' }
When method POST
Then status 200
I have tried reading the conditional-logic section on the GitHub page but haven't yet found success.
What I tried so far is:
* if (karate.env == 'staging') * def variables = {objectID:"1234566", cursor:"1", cursorType:PAGE, size:'10', objectType:USER}
But with no success.
Any help will be greatly appreciated. Thanks a lot!
We keep our graphql queries & variables in separate json files, but, we're attempting to solve the same issue. Based on what Peter wrote I came up with this, though it will likely get cleaned up before deployment.
Given def query = read('graphqlQuery.graphql')
And def prodVariable = read('prod-variables.json')
And def stageVariable = read('stage-variables.json')
And def variables = karate.env == 'prod' ? prodVariable : stageVariable
And path 'api/' + 'graphql'
And request { query: '#(query)', variables: '#(variables)' }
When method post
Then status 200
This should be easy:
* def variables = karate.env == 'staging' ? { objectID: "1234566", cursor: "1", cursorType: 'PAGE', size: '10', objectType: 'USER' } : { }
Here is another hint:
* def data = { staging: { foo: 'bar' }, production: { foo: 'baz' } }
* def variables = data[karate.env]
EDIT: also see this explanation: https://stackoverflow.com/a/59162760/143475
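For completeness, here is how the lookup-by-environment hint can be wired into the original request (a sketch; the production values are made up for illustration):
* def data =
"""
{
  staging: { objectID: '1234566', cursor: '1', cursorType: 'PAGE', size: '10', objectType: 'USER' },
  production: { objectID: '9999999', cursor: '1', cursorType: 'PAGE', size: '10', objectType: 'USER' }
}
"""
* def variables = data[karate.env]
And request { query: '#(query)', variables: '#(variables)' }
When method POST
Then status 200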