Karate CLI Testing - Execute Command within context of another Command

I'm currently testing a CLI app written by a developer in my team - specifically this command:
./mycli init
After entering this command, it responds with this:
Enter endpoint:
As prompted, I need to provide the target URL, then hit enter.
I'll then be asked two more questions which require input.
For example, let's say I have this:
* def listener =
"""
function(line) {
if (line.contains('Enter endpoint')) {
//input the answer
}
}
"""
* def initCmd = karate.fork({ args: ['sh','mycli','init'], listener: listener })
* print initCmd.sysOut
(NB: This snippet was inspired by fork-listener.feature and this interesting thread with the jbang team: https://github.com/karatelabs/karate/issues/1191)
Karate is working really well for all the other tests I need to do for this CLI - is there something I can put in the IF statement for this particular use case?

Related

Access Pytest result in teardown of Appium test

My conftest file for my appium/python test framework looks like:
@pytest.fixture()
def setup(request):
    desired_caps = {
        ...
    }
    request.cls.driver = webdriver.Remote(
        command_executor="https://blah.com",
        desired_capabilities=desired_caps
    )
    yield request.cls.driver
    request.cls.driver.quit()
What I am trying to do is access the pytest result from within the 'yield' section (i.e. after the test has run) and send a pass/fail result to BrowserStack using the command:
driver.execute_script('browserstack_executor: {"action": "setSessionStatus", "arguments": {"status":"passed", "reason": "All elements located and assertions passed!"}}')
The problem is that the only method I know of to access the pytest result uses a hook in conftest, i.e.:
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    if result.when == "call" and result.passed:
        do_something
    if result.when == "call" and result.failed:
        do_something_else
But how do I integrate these two? In other words, how can I take the result from the hook and then use the driver instance from the setup fixture to run the execute_script command? Everything I have tried leads to issues with not being able to access the Appium driver instance. Please help!!
Update:
I have achieved this by using a global variable in the hook to save the result, and then in the fixture I use this data to send the corresponding message, but I know this is not ideal. So the question remains, how can I store a variable from the hook in conftest that gets the pytest result, and pass that to the yield section of the setup fixture?
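For reference, here is a minimal sketch of the workaround described above: the hook writes the outcome into a module-level store, and the fixture reads it back after the yield. The capability values, hub URL and failure reason are placeholders, and desired_capabilities is used as in the question (newer Appium clients use options instead):

# conftest.py
import json
import pytest
from appium import webdriver

test_results = {}  # maps test node id -> "passed" / "failed"

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    if result.when == "call":
        # record the outcome of the test body so the fixture teardown can see it
        test_results[item.nodeid] = "passed" if result.passed else "failed"

@pytest.fixture()
def setup(request):
    desired_caps = {}  # fill in your real capabilities here
    request.cls.driver = webdriver.Remote(
        command_executor="https://hub.browserstack.com/wd/hub",  # placeholder URL
        desired_capabilities=desired_caps
    )
    yield request.cls.driver
    # by the time the code after yield runs, the "call" report has been recorded
    status = test_results.get(request.node.nodeid, "failed")
    payload = {"action": "setSessionStatus",
               "arguments": {"status": status, "reason": ""}}
    request.cls.driver.execute_script("browserstack_executor: " + json.dumps(payload))
    request.cls.driver.quit()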

Karate: Scenario fails if it contains __arg and is run in 'stand-alone' mode

I have run into a problem: when I try to run a Scenario containing the built-in __arg variable as 'stand-alone' (not 'called'), my test fails with an error (I do not @ignore the called feature, so that I can use it in both 'called' and 'stand-alone' modes):
evaluation (js) failed: __arg, javax.script.ScriptException: ReferenceError: "__arg" is not defined in <eval> at line number 1
stack trace: jdk.nashorn.api.scripting.NashornScriptEngine.throwAsScriptException(NashornScriptEngine.java:470)
Following two simple features should be enough to reproduce.
called-standalone.feature:
Feature: Called + Stand-alone Scenario
Scenario: Should not fail on __arg when run as stand-alone
* def a = __arg
* print a
caller.feature:
Feature: Caller
Scenario: call without args
When def res = call read('called-standalone.feature')
Then match res.a == null
Scenario: call with args
When def res = call read('called-standalone.feature') {some: 42}
Then match res.a == {some: 42}
Putting these two features into the skeleton project and running mvn test will show the error.
I expect this to work, since the docs say that "'called' Karate scripts ... can behave like 'normal' Karate tests in 'stand-alone' mode":
‘called’ Karate scripts don’t need to use any special keywords to ‘return’ data and can behave like ‘normal’ Karate tests in ‘stand-alone’ mode if needed
All Karate variables have to be "defined" at run time. This is a rule which cannot be relaxed.
So you should re-design your scripts. The good thing is you can use karate.get() to set a "default value".
* def a = karate.get('__arg', null)
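Applied to the called-standalone.feature above, that looks like:
Feature: Called + Stand-alone Scenario
Scenario: Should not fail on __arg when run as stand-alone
* def a = karate.get('__arg', null)
* print a
This should keep both caller.feature scenarios passing: with no call argument a is null, and with {some: 42} it picks up the value.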
That should answer your question.

Using mocks in a Karate DSL feature file with a standalone run

I have a REST service written in a language other than Java.
It has a few dependencies on other REST services.
For example, the service under development and test is A, and the other services are B and C respectively.
I want to run system tests for A; some tests require B and/or C to be online and answering queries from A.
I wrote b-mock.feature and c-mock.feature to represent those services as mocks.
I also wrote some a-test-smth.feature files to run tests against A.
Is it possible to add some information to a-test-smth.feature to enable specific mocks for a concrete test?
Right now I have to run the standalone karate.jar twice: first for the mocks, then for the tests. That approach works, but:
I can't check that some API calls to A do not require B or C
I can't emulate service B being down, or returning a slow or incorrect response
Thanks.
Are you using Java? If so, then the best approach is to perform the set-up of your test in Java code. You can start two mocks for B and C, then start the main test for your service A, and do clean-up at the end if needed (a rough sketch follows).
You can refer to this as an example: https://github.com/intuit/karate/tree/master/karate-netty#consumer-provider-example
Row 3 shows how you can start a mock and run a Karate test.
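A minimal JUnit 4 sketch of that set-up, assuming the FeatureServer API that the JavaScript snippet further below also uses (the getPort()/stop() method names, the file paths and the way the ports are passed to the A tests are assumptions to adapt):

import java.io.File;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import com.intuit.karate.netty.FeatureServer;

public class ATestRunner {

    private static FeatureServer bMock;
    private static FeatureServer cMock;

    @BeforeClass
    public static void startMocks() {
        // port 0 means "pick a free port"
        bMock = FeatureServer.start(new File("src/test/mocks/b-mock.feature"), 0, false, null);
        cMock = FeatureServer.start(new File("src/test/mocks/c-mock.feature"), 0, false, null);
        // hand the chosen ports to the A tests, e.g. read these in karate-config.js
        System.setProperty("b.mock.port", String.valueOf(bMock.getPort()));
        System.setProperty("c.mock.port", String.valueOf(cMock.getPort()));
    }

    @Test
    public void testA() {
        // run your a-test-*.feature files here with your usual Karate / JUnit runner
    }

    @AfterClass
    public static void stopMocks() {
        bMock.stop();
        cMock.stop();
    }
}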
If you are not using Java and would like to use only the stand-alone JAR, it is actually possible using Java interop, and quite easy; I just tried it.
EDIT: This API is now built into Karate, so you don't need to write the extra JS code below: https://github.com/intuit/karate/tree/master/karate-netty#within-a-karate-test
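If I read the linked section correctly, the built-in API boils down to karate.start(), which would reduce the Background shown further below to something like this (verify the exact form against your Karate version):
Background:
* def port = karate.env == 'mock' ? karate.start('cats-mock.feature').port : 8080
* url 'http://localhost:' + port + '/cats'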
(Obsolete approach, kept for reference:)
First create this bit of JavaScript code that is smart enough to start a Karate mock:
function() {
  var Mock = Java.type('com.intuit.karate.netty.FeatureServer');
  var file = new java.io.File('src/test/java/mock/web/cats-mock.feature');
  var server = Mock.start(file, 0, false, null);
  return server.port;
}
And this is how it can look in the Background of your main Karate test. You can do some conditional logic if needed, and you have plenty of ways to change things based on your environment.
Background:
* def starter = read('start-mock.js')
* def port = karate.env == 'mock' ? starter() : 8080
* url 'http://localhost:' + port + '/cats'
Does this answer your question? Let me know and I will add this trick to the documentation!

How to run a specific test case in the selected environment in SoapUI

I have multiple environments and a lot of test cases, but not all test cases need to be run in every environment. Is there a way to run only specific test cases from a test suite, based on the selected environment?
For example:
If I select Environment1, it should run the following test cases:
TC0001
TC0002
TC0003
TC0004
TC0005
If I select Environment2, it should run only the following test cases:
TC0001
TC0003
TC0005
There can be different solutions to achieve this; since you have multiple environments, the Pro software is presumably being used.
I would achieve it using the test suite's Setup Script:
Create a test-suite-level custom property for each environment, using the environment name as the property name. For instance, if DEV is the environment, use DEV as the property name and provide the comma-separated list of test case names as its value, say TC1, TC2, etc.
Define the other environments and their values the same way; an illustrative set of suite properties is shown below.
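Using the environment names from the question (the values are just the test case lists you want enabled):
Environment1 = TC0001, TC0002, TC0003, TC0004, TC0005
Environment2 = TC0001, TC0003, TC0005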
Copy the script below into the test suite's Setup Script; when executed, it enables or disables the test cases according to the selected environment and that property's value.
Test Suite's Setup Script
/**
 * This is soapui's Setup Script
 * which enables / disables required
 * test cases based on the user list
 * for that specific environment
 **/
def disableTestCase(testCaze) {
    testCaze.disabled = true
}
def enableTestCase(testCaze) {
    testCaze.disabled = false
}
def getEnvironmentSpecificList(def testSuite) {
    def currentEnv = testSuite.project.activeEnvironment.NAME
    def enableList = testSuite.getPropertyValue(currentEnv).split(',').collect { it.trim() }
    log.info "List of test for enable: ${enableList}"
    enableList
}
def userList = getEnvironmentSpecificList(testSuite)
testSuite.testCaseList.each { kase ->
    if (userList.contains(kase.name)) {
        enableTestCase(kase)
    } else {
        disableTestCase(kase)
    }
}
Another way to achieve this is to use the Events feature of ReadyAPI: with TestRunListener.beforeRun() you can filter whether each test case is executed or ignored.
EDIT:
If you are using ReadyAPI, you can use the newer feature of tagging test cases. A test case can be tagged with multiple values and you can execute tests by specific tags. In that case you may not need the Setup Script at all, which is the approach for the open source edition. Refer to the documentation for more details.
This tag feature is specific to the Pro software; the Open Source edition does not have it.

Generate custom steps with Behat

I am trying to write a custom step that generates steps.
My code looks like:
/**
 * @Then /^Check_raoul$/
 */
public function checkRaoul()
{
    // grab the content ...
    // get players ...
    $to_return = array();
    foreach ($players as $player) {
        $player = $player->textContent;
        if (preg_match('/^.*video=([^&]*)&.*$/', $player, $matches))
        {
            array_push($to_return, new Step\Then('I check the video of id "'.$matches[1].'"'));
        }
    }
    return $to_return;
}
/**
 * @Then /^I check the video of id "([^"]*)"$/
 */
public function iCheckTheVideoOfId($id)
{
    // ...
}
This works fine, but when integrating with Jenkins or running on the CLI, if many executions of iCheckTheVideoOfId fail, I see just one error. I would like to generate a number of steps equal to the number of iCheckTheVideoOfId calls.
What am I doing wrong?
We abandoned using Jenkins for BDD checks due to the differences in how test feedback is presented and what Jenkins is capable of. We found that running our suites locally, plus a full check before pushing code to the repo, produced better results and helped everyone get better at using the framework.
To answer your question directly, I would suggest configuring your Jenkins job not to fail when a test fails.
This can be accomplished by not surfacing failures in the build step at all: modify your command-line options so results are just logged to an output file, then run a script at the end to check that file for failures (a rough sketch follows).
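For instance, something along these lines in the Jenkins shell step (the formatter and flag names come from Behat's --format/--out options and may differ between Behat versions, so treat this as a sketch):
# run behat without failing this shell step; write progress output to a file instead
vendor/bin/behat --format=progress --out=build/behat-results.txt || true
# a later step or script can then decide what to do with the results, e.g.:
if grep -q "failed" build/behat-results.txt; then echo "failures found"; fi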