Karate: Scenario fails if contains __arg and run in 'stand-alone' mode - karate

I have run into a problem: when I run a Scenario that uses the built-in __arg variable in 'stand-alone' mode (not 'called'), the test fails with an error. (I do not tag the called feature with @ignore, because I want to use it in both 'called' and 'stand-alone' modes.)
evaluation (js) failed: __arg, javax.script.ScriptException: ReferenceError: "__arg" is not defined in <eval> at line number 1
stack trace: jdk.nashorn.api.scripting.NashornScriptEngine.throwAsScriptException(NashornScriptEngine.java:470)
The following two simple features should be enough to reproduce the issue.
called-standalone.feature:
Feature: Called + Stand-alone Scenario
Scenario: Should not fail on __arg when run as stand-alone
* def a = __arg
* print a
caller.feature:
Feature: Caller
Scenario: call without args
When def res = call read('called-standalone.feature')
Then match res.a == null
Scenario: call with args
When def res = call read('called-standalone.feature') {some: 42}
Then match res.a == {some: 42}
Putting these two features into the skeleton project and running mvn test will show the error.
I expect this to work, because the docs say that 'called' Karate scripts can behave like 'normal' Karate tests in 'stand-alone' mode:
‘called’ Karate scripts don’t need to use any special keywords to ‘return’ data and can behave like ‘normal’ Karate tests in ‘stand-alone’ mode if needed

All Karate variables have to be "defined" at run time. This is a rule that cannot be relaxed.
So you should redesign your scripts. The good thing is that you can use karate.get() to set a "default value":
* def a = karate.get('__arg', null)
That should answer your question.
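Applied to the example above, the called feature could be redesigned along these lines (a sketch; with karate.get() providing the default, both caller Scenarios and the stand-alone run should work unchanged):
called-standalone.feature:
Feature: Called + Stand-alone Scenario
Scenario: Should not fail on __arg when run as stand-alone
* def a = karate.get('__arg', null)
* print a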

Related

Access Pytest result in teardown of Appium test

My conftest file for my Appium/Python test framework looks like this:
@pytest.fixture()
def setup(request):
    desired_caps = {
        ...
    }
    request.cls.driver = webdriver.Remote(
        command_executor="https://blah.com",
        desired_capabilities=desired_caps
    )
    yield request.cls.driver
    request.cls.driver.quit()
What I am trying to do is access the pytest result from within the 'yield' section (the teardown), and send a pass/fail status to BrowserStack using the command:
driver.execute_script('browserstack_executor: {"action": "setSessionStatus", "arguments": {"status":"passed", "reason": "All elements located and assertions passed!"}}')
The problem is, the only method I know of to access the pytest results uses a hook in conftest, i.e.:
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    if result.when == "call" and result.passed:
        do_something
    if result.when == "call" and result.failed:
        do_something_else
But how do I integrate these two? In other words, how can I take the test result from the hook and then use the driver instance from the setup fixture to run the execute_script command? Everything I have tried leads to issues with not being able to access the Appium driver instance. Please help!
Update:
I have achieved this by using a global variable in the hook to save the result, and then using that data in the fixture to send the corresponding message, but I know this is not ideal. So the question remains: how can I store a variable from the hook in conftest that gets the pytest result, and pass it to the yield section of the setup fixture?
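A common alternative to the global variable is the pattern from the pytest docs ("making test result information available in fixtures"): the hook stores the report on the test item itself, and the fixture reads it back via request.node after the yield. A rough sketch of the conftest, reusing the names from the question (the capabilities and the failure reason are placeholders):
import pytest
from appium import webdriver

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    # attach the report for each phase to the test item, e.g. item.rep_call for the test body
    setattr(item, "rep_" + result.when, result)

@pytest.fixture()
def setup(request):
    desired_caps = {
        # ... same capabilities as above
    }
    driver = webdriver.Remote(
        command_executor="https://blah.com",
        desired_capabilities=desired_caps
    )
    request.cls.driver = driver
    yield driver
    # after the test body has run, the call-phase report is available on request.node
    rep = getattr(request.node, "rep_call", None)
    if rep is not None and rep.passed:
        driver.execute_script('browserstack_executor: {"action": "setSessionStatus", "arguments": {"status": "passed", "reason": "All elements located and assertions passed!"}}')
    elif rep is not None and rep.failed:
        driver.execute_script('browserstack_executor: {"action": "setSessionStatus", "arguments": {"status": "failed", "reason": "Test failed"}}')
    driver.quit()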

How we can use after test in case of Karate Framework? [duplicate]

Strange behaviour when I call a feature file for test clean-up using the afterFeature hook. The clean-up feature file is called correctly, because I can see the print from the Background section of the file, but for some reason the execution hangs on the Scenario Outline.
I have tried running the feature with the JUnit 5 runner and also in the IntelliJ IDE by right-clicking on the feature file, but I get the same issue: the execution hangs.
This is my main feature file:
Feature: To test afterFeature hook
Background:
* def num1 = 100
* def num2 = 200
* def num3 = 300
* def dataForAfterFeature =
"""
[
{"id":'#(num1)'},
{"id":'#(num2)'},
{"id":'#(num3)'}
]
"""
* configure afterFeature = function(){ karate.call('after.feature'); }
Scenario: Test 1
* print 'Hello World 1'
Scenario: Test 2
* print 'Hello World 2'
The afterFeature file:
@ignore
Feature: Called after calling feature run is completed
Background:
* def dynamicData = dataForAfterFeature
* print 'dynamicData: ' + dynamicData
Scenario Outline: Print dynamic data
* print 'From after feature for id: ' + <id>
Examples:
| dynamicData |
The execution stalls at the Scenario Outline. I can see the printed value of the dynamicData variable in the console, but nothing happens after that.
It seems like the outline loop is not starting, or has crashed. I was not able to get details from the log because the test never finishes and no error is reported. What else can I check, or what might be the issue?
If this is not easily reproducible, what test clean-up workaround do you recommend?
For now, I have used the following workaround: I have added a test clean-up scenario at the end of the feature that holds the tests. I have stopped parallel execution for these tests, and to be honest I do not mind them not running in parallel, as they are fast to run anyway.
Ids to delete:
* def idsToDelete =
"""
[
101,
102,
103
]
"""
Test clean up scenario:
# Test data clean-up scenario
Scenario: Delete test data
# Js method to call delete data feature.
* def deleteTestDataFun =
"""
function(x) {
var temp = [x];
// Call to feature. Pass argument as json object.
karate.call('delete-test-data.feature', { id: temp });
}
"""
* karate.forEach(idsToDelete, deleteTestDataFun)
This calls the delete-test-data feature and passes it the list of ids that need to be deleted.
Delete test data feature:
Feature: To delete test data
Background:
* def idVal = id
Scenario: Delete
Given path 'tests', 'delete', idVal
Then method delete
Yeah, I personally recommend a strategy of always pre-cleaning-up, because you cannot guarantee that an "after" hook gets called, e.g. if the machine is switched off.
Sometimes the simplest option is to do this as plain old Java code in your JUnit test-suite. So maybe a one-liner after using the Runner is sufficient.
It gets tricky if you need to keep track of dynamic data that your tests have created. What I would do is write a Java singleton, use it in your tests to "collect" the IDs that need to be deleted, and then use this in your JUnit class. You can use things like @AfterClass.
Please try and replicate using the instructions here: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue - because this can indeed be a bug with Scenario Outline.
Finally, you can evaluate ExecutionHook which has an afterSuite() callback: https://github.com/intuit/karate/issues/970#issuecomment-557443551
EDIT: in 1.0 - it has become RuntimeHook: https://github.com/intuit/karate/wiki/1.0-upgrade-guide#hooks
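To illustrate the singleton plus @AfterClass idea, a rough Java sketch; the class, package, and feature names here are assumptions, and Runner.runFeature() is Karate's Java API for calling a feature from plain Java:
TestDataRegistry.java:
package com.company;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Singleton that both Karate scripts (via Java.type) and JUnit code can share
public class TestDataRegistry {

    private static final TestDataRegistry INSTANCE = new TestDataRegistry();
    private final List<Integer> ids = Collections.synchronizedList(new ArrayList<Integer>());

    private TestDataRegistry() {
    }

    public static TestDataRegistry get() {
        return INSTANCE;
    }

    public void register(int id) {
        ids.add(id);
    }

    public List<Integer> ids() {
        return ids;
    }
}
In the JUnit suite class (with imports for com.intuit.karate.Runner and org.junit.AfterClass):
@AfterClass
public static void deleteTestData() {
    for (Integer id : TestDataRegistry.get().ids()) {
        java.util.Map<String, Object> args = new java.util.HashMap<>();
        args.put("id", id);
        // call the clean-up feature once per collected id
        Runner.runFeature("classpath:delete-test-data.feature", args, true);
    }
}
A test can then collect an id it created with something like * eval Java.type('com.company.TestDataRegistry').get().register(101).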

Karate call read feature failing with reference error

I am trying to invoke a feature for each element of a JSON array:
* def values = karate.mapWithKey(values, 'value')
* def result = call read('my-feature') values
My feature is defined as:
@Ignore
Feature: My feature
Background:
* some task
Scenario:
# TEST: My scenario
Given path urlPath, value
This works fine if I use @Tags and run only this scenario.
But when I try to run all the Karate tests, this fails with the error:
com.intuit.karate.exception.KarateException: my-feature.feature:15 - javascript evaluation failed: value, ReferenceError: "value" is not defined in at line number 1
How do I fix this?
I have marked the ignored feature with @Ignore, but that doesn't help.
Got the solution:
I was using the @Ignore annotation, but it also needs to be mapped in the APITest class.
Defining
@KarateOptions(tags = {"~@Ignore"})
and marking the feature file with @Ignore solved my issue.
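For reference, a minimal sketch of such a runner class (assuming Karate 0.9.x with the JUnit 4 runner; the package name is illustrative):
APITest.java:
package com.company.tests;

import com.intuit.karate.KarateOptions;
import com.intuit.karate.junit4.Karate;
import org.junit.runner.RunWith;

@RunWith(Karate.class)
// the "~" excludes any feature or scenario tagged @Ignore from the suite run
@KarateOptions(tags = {"~@Ignore"})
public class APITest {
}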
Shouldn't it be:
* def result = call read('my-feature') ids
If still stuck, follow this process please: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue

Can we run one scenario for list of object using Karate API [duplicate]

(The question body and the answer are identical to "How we can use after test in case of Karate Framework?" above.)

Karate - callonce feature doesn't cache the response

I have two feature files, and I'm trying to reuse the result of the first feature file in the Background of the second feature file's scenarios.
Feature file 1
Feature: First feature file
Scenario: create random session id
* def sessionId = Java.type('com.company.RandomSessionId').getRandomSessionId()
Feature file 2
Feature: calling another feature file
Background:
* def mycall = callonce read('first.feature')
* def randomId = mycall.sessionId
Scenario: print sessionId
* print randomId
Scenario: print sessionId-2
* print randomId
When I execute the scenarios in Feature file 2, I get two different results.
It must be because you are using the IDE support / right-click / "run-as" option. This is an open issue, because Karate needs to cache across Scenarios which "native" Cucumber does not support: https://github.com/intuit/karate/issues/136 - apologies and I need to update the documentation.
Please use a JUnit runner for these cases. I recommend having runners in place for dev-mode anyway, and the new HTML dev-mode report makes this even more useful: https://twitter.com/KarateDSL/status/935029435140489216
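For example, a bare-bones JUnit 4 runner for the second feature could look like this (a sketch assuming Karate 0.9.x; the package and classpath locations are illustrative):
SecondFeatureRunner.java:
package com.company;

import com.intuit.karate.KarateOptions;
import com.intuit.karate.junit4.Karate;
import org.junit.runner.RunWith;

// running the feature through a JUnit runner allows callonce to cache its result across Scenarios
@RunWith(Karate.class)
@KarateOptions(features = "classpath:com/company/second.feature")
public class SecondFeatureRunner {
}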