Karate - callonce feature doesn't cache the response

I have two feature files, and I'm trying to reuse the result of the first feature as Background data for the scenarios in the second feature.
Feature file 1:
Feature: First feature file
Scenario: create random session id
* def sessionId = Java.type('com.company.RandomSessionId').getRandomSessionId()
Feature file 2:
Feature: calling another feature file
Background:
* def mycall = callonce read('first.feature')
* def randomId = mycall.sessionId
Scenario: print sessionId
* print randomId
Scenario: print sessionId-2
* print randomId
When I execute the scenarios in Feature file 2, the two scenarios print two different sessionId values, so the callonce result does not appear to be cached.

It must be because you are using the IDE support / right-click / "run-as" option. This is an open issue, because Karate needs to cache results across Scenarios, which "native" Cucumber does not support: https://github.com/intuit/karate/issues/136 - apologies, I need to update the documentation.
Please use a JUnit runner for these cases; I recommend having one in place for dev-mode anyway, and the new HTML dev-mode report makes this even more useful: https://twitter.com/KarateDSL/status/935029435140489216
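For example, a minimal JUnit 4 runner sketch (the package, class name and folder layout are placeholders; it assumes the karate-junit4 dependency and that the two feature files sit in the same package as the runner class):

package com.company.tests;

import com.intuit.karate.junit4.Karate;
import org.junit.runner.RunWith;

// Empty runner class: with no extra options, Karate picks up the *.feature
// files in the same package / folder, and callonce results are then cached
// across all Scenarios in the run.
@RunWith(Karate.class)
public class SecondFeatureRunner {
}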

Related

How we can use after test in case of Karate Framework? [duplicate]

Strange behaviour when I call a feature file for test clean-up using the afterFeature hook. The clean-up feature is called correctly, because I can see the print from the Background section of the file, but for some reason execution hangs at the Scenario Outline.
I have tried running the feature with the JUnit 5 runner and also in the IntelliJ IDE by right-clicking the feature file, but I get the same issue: the execution hangs.
This is my main feature file:
Feature: To test afterFeature hook
Background:
* def num1 = 100
* def num2 = 200
* def num3 = 300
* def dataForAfterFeature =
"""
[
{"id":'#(num1)'},
{"id":'#(num2)'},
{"id":'#(num3)'}
]
"""
* configure afterFeature = function(){ karate.call('after.feature'); }
Scenario: Test 1
* print 'Hello World 1'
Scenario: Test 2
* print 'Hello World 2'
The afterFeature file:
@ignore
Feature: Called after calling feature run is completed
Background:
* def dynamicData = dataForAfterFeature
* print 'dynamicData: ' + dynamicData
Scenario Outline: Print dynamic data
* print 'From after feature for id: ' + <id>
Examples:
| dynamicData |
The execution stalls at the Scenario Outline. I can see the printed value of the dynamicData variable in the console, but nothing happens after that.
It seems the outline loop is not starting, or has crashed. I was not able to get details from the log, as the test never finishes and no error is reported. What else can I check, or what might be the issue?
If this is not easily reproducible, what test clean-up workaround do you recommend?
For now, I have used the following workaround: I added a test clean-up scenario at the end of the feature that contains the tests, and stopped parallel execution for these tests. To be honest, I do not mind them not running in parallel, as they are fast anyway.
IDs to delete:
* def idsToDelete =
"""
[
101,
102,
103
]
"""
Test clean-up scenario:
# Test data clean-up scenario
Scenario: Delete test data
# Js method to call delete data feature.
* def deleteTestDataFun =
"""
function(x) {
var temp = [x];
// Call to feature. Pass argument as json object.
karate.call('delete-test-data.feature', { id: temp });
}
"""
* karate.forEach(idsToDelete, deleteTestDataFun)
This calls the delete-test-data feature once for each id that needs to be deleted.
Delete test data feature:
Feature: To delete test data
Background:
* def idVal = id
Scenario: Delete
Given path 'tests', 'delete', idVal
Then method delete
Yes, I personally recommend a strategy of always pre-cleaning up, because you cannot guarantee that an "after" hook gets called, e.g. if the machine is switched off.
Sometimes the simplest option is to do this as plain old Java code in your JUnit test-suite, so maybe one line of clean-up after using the Runner is sufficient.
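A rough sketch of what that could look like (assuming a Karate version where the Runner.path(...).parallel(...) API is available; TestDataCleaner and the classpath are hypothetical placeholders):

import com.intuit.karate.Results;
import com.intuit.karate.Runner;

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class TestSuiteRunner {

    @Test
    public void testAll() {
        // run the whole suite via the Runner API
        Results results = Runner.path("classpath:com/company/tests").parallel(1);
        // one line of plain Java clean-up once the suite has finished
        TestDataCleaner.deleteAll();
        assertEquals(results.getErrorMessages(), 0, results.getFailCount());
    }
}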
It gets tricky if you need to keep track of dynamic data that your tests have created. What I would do is write a Java singleton, use it in your tests to "collect" the IDs that need to be deleted, and then use this in your JUnit class. You can use things like @AfterClass.
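A minimal sketch of such a singleton (names are placeholders; the features would call it via Java.type(...) whenever they create something):

import java.util.ArrayList;
import java.util.List;

// Collects the IDs created by the tests so they can be deleted at the end.
public class CreatedIds {

    private static final List<String> IDS = new ArrayList<>();

    public static synchronized void add(String id) {
        IDS.add(id);
    }

    public static synchronized List<String> drain() {
        // return a copy and reset, so the clean-up step sees each id exactly once
        List<String> copy = new ArrayList<>(IDS);
        IDS.clear();
        return copy;
    }
}

In the JUnit class that runs the features, an @AfterClass (or JUnit 5 @AfterAll) method can then loop over CreatedIds.drain() and fire a delete call for each id.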
Please try and replicate using the instructions here: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue - because this can indeed be a bug with Scenario Outline.
Finally, you can evaluate ExecutionHook which has an afterSuite() callback: https://github.com/intuit/karate/issues/970#issuecomment-557443551
EDIT: in 1.0 - it has become RuntimeHook: https://github.com/intuit/karate/wiki/1.0-upgrade-guide#hooks
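For reference, a minimal sketch of such a hook on 1.0+ (the afterSuite() callback is the one mentioned above; TestDataCleaner is a hypothetical helper, and how you register the hook - typically on the Runner builder before parallel() - should be checked against your Karate version):

import com.intuit.karate.RuntimeHook;
import com.intuit.karate.Suite;

// Runs global clean-up once the whole suite is done.
public class CleanUpHook implements RuntimeHook {

    @Override
    public void afterSuite(Suite suite) {
        TestDataCleaner.deleteAll();
    }
}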

How to delete dynamically created resources after every scenario in karate? [duplicate]

Karate: Scenario fails if contains __arg and run in 'stand-alone' mode

I have run into a problem: when I try to run a Scenario containing the built-in __arg variable 'stand-alone' (not 'called'), the test fails with an error. (I do not @ignore the called feature, because I want to use it in both 'called' and 'stand-alone' modes.)
evaluation (js) failed: __arg, javax.script.ScriptException: ReferenceError: "__arg" is not defined in <eval> at line number 1
stack trace: jdk.nashorn.api.scripting.NashornScriptEngine.throwAsScriptException(NashornScriptEngine.java:470)
The following two simple features should be enough to reproduce it.
called-standalone.feature:
Feature: Called + Stand-alone Scenario
Scenario: Should not fail on __arg when run as stand-alone
* def a = __arg
* print a
caller.feature:
Feature: Caller
Scenario: call without args
When def res = call read('called-standalone.feature')
Then match res.a == null
Scenario: call with args
When def res = call read('called-standalone.feature') {some: 42}
Then match res.a == {some: 42}
Putting these two features into the skeleton project and running mvn test will show the error.
I expect this to work, as the docs say that "'called' Karate scripts ... can behave like 'normal' Karate tests in 'stand-alone' mode":
‘called’ Karate scripts don’t need to use any special keywords to ‘return’ data and can behave like ‘normal’ Karate tests in ‘stand-alone’ mode if needed
All Karate variables have to be "defined" at run time. This is a rule that cannot be relaxed.
So you should re-design your scripts. The good news is that you can use karate.get() to provide a default value:
* def a = karate.get('__arg', null)
That should answer your question.

Can we run one scenario for list of object using Karate API [duplicate]

How to use hooks for central reporting feature? [duplicate]
