Access Pytest result in teardown of Appium test - selenium

My conftest file for my Appium/Python test framework looks like:
import pytest
from appium import webdriver

@pytest.fixture()
def setup(request):
    desired_caps = {
        # ...
    }
    request.cls.driver = webdriver.Remote(
        command_executor="https://blah.com",
        desired_capabilities=desired_caps
    )
    yield request.cls.driver
    request.cls.driver.quit()
What I am trying to do is access the pytest result from within the fixture, after the yield, and send a pass/fail status to BrowserStack using the command:
driver.execute_script('browserstack_executor: {"action": "setSessionStatus", "arguments": {"status":"passed", "reason": "All elements located and assertions passed!"}}')
The problem is, the only method I know of for accessing the pytest result uses a hook in conftest, i.e.:
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    if result.when == "call" and result.passed:
        do_something
    if result.when == "call" and result.failed:
        do_something_else
But how do I integrate these two? In other words, how can I take the test result from the hook and then use the driver instance from the setup fixture to run the execute_script command? Everything I have tried leads to issues with not being able to access the Appium driver instance. Please help!!
Update:
I have achieved this by using a global variable in the hook to save the result, and then using that data in the fixture to send the corresponding message, but I know this is not ideal. So the question remains: how can I store the pytest result from the hook in conftest and pass it to the yield section of the setup fixture?
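For reference, here is a minimal conftest.py sketch of that pattern, based on the approach from the pytest documentation: attach each phase's report to the test item in the hook, then read it back in the fixture through request.node after the yield, with no global needed (the URL and capabilities are placeholders from the question):
import pytest
from appium import webdriver

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    # Attach the report for each phase to the test item itself:
    # item.rep_setup, item.rep_call, item.rep_teardown
    setattr(item, "rep_" + result.when, result)

@pytest.fixture()
def setup(request):
    desired_caps = {
        # ...
    }
    request.cls.driver = webdriver.Remote(
        command_executor="https://blah.com",
        desired_capabilities=desired_caps
    )
    yield request.cls.driver
    # request.node is the same test item the hook decorated above
    report = getattr(request.node, "rep_call", None)
    if report is not None and report.passed:
        status, reason = "passed", "All elements located and assertions passed!"
    else:
        status, reason = "failed", "Test failed; see the pytest report"
    request.cls.driver.execute_script(
        'browserstack_executor: {"action": "setSessionStatus", '
        '"arguments": {"status": "%s", "reason": "%s"}}' % (status, reason)
    )
    request.cls.driver.quit()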

Related

Is it still possible to access javascript variables with Selenium IDE?

I am working on tests using the Selenium IDE extension for Chrome, and I have basically the same problem as in this question: Access JavaScript variables with Selenium IDE.
Using this.browserbot.getUserWindow() didn't work at all to get my defined variable, and since the question I mentioned is 8 years old, I was wondering if there were some updates I missed.
I also looked at the recent documentation (https://www.selenium.dev/selenium-ide/docs/en/api/commands) and couldn't find the assertEval command they mention in the question.
I am wondering if the run script command is there for that purpose.
To summarize:
1. Assuming that I defined an array on the window object like this: window.data = ['hello', 'world'], what would be the correct syntax to fetch it during my test?
2. What is the difference between the two commands execute script and run script?
3. Is there more detailed documentation about how to get along with JS? I couldn't find where the this.browserbot variable comes from.
I will attempt to help on this one, as I've used arrays in the IDE. Array creation is done in JS, but there are different ways it's stored in JS and in Selenium and then manipulated. To answer point 2: execute script is what I always use, as it calls JS but then completes it in the IDE framework, e.g. it can store the result in a Selenium variable. run script only runs the JS and does not return anything to the IDE log. To answer point 1 and the whole question:
Method 1
To create a Selenium IDE array, use the JS execute script command; that stores it as a Selenium IDE variable only. Here's a basic version of that:
Command: execute script
Target: return ["hello","world"]
Value: myArray
To check this, add a second command to verify the Selenium variable is correctly stored and displays the right result in the IDE log:
Command: echo
Target: ${myArray}
Method 2
If you already have an array defined in JS, e.g. let myJSArray = ["hello", "world"] or window.myJSArray = ["hello", "world"], and it's defined in the window that Selenium already has open, then you can store that in Selenium using:
Command: execute script
Target: return myJSArray | Target: return window.myJSArray
Value: myArray
Then double check it's worked by using the same echo command above
Method 3
If you haven't got the window open and you want to store the variable both in JS and in Selenium, then
Command: execute script
Target: return myJSArray = ["hello","world"] | Target: return window.myJSArray = ["hello","world"]
Value: myArray
Then again use the echo command to check it's worked.
In summary - Method 1 stores the array in Selenium only (using JS to achieve it), Method 2 stores an already stored JS variable in the open window to Selenium, Method 3 stores the variable both in JS window and Selenium.
Depending on the method used, you can then run asserts on the variable (the JS one or the Selenium one), or even use them in if/else, loops or general JS functions. If you are using the Selenium variable in any JS scripts or if/else etc., you need to add the ${} around it; if using the JS variable, you can call it without the dollar sign. Example of an assert using Method 3, for both the Selenium myArray variable and the JS myJSArray variable:
Command: execute script
Target: return myJSArray = ["hello","world"]
Value: myArray
//Now have 2 variables, 1. myJSArray that the browser can use; and 2. myArray that Selenium can use. Below to assert either variable contains the array item "world":
Command: execute script
Target: return ${myArray}.includes("world")
Value: myArrayIncludesWorld
Command: assert
Target: myArrayIncludesWorld
Value: true
Command: execute script
Target: return myJSArray.includes("world")
Value: myJSArrayIncludesWorld
Command: assert
Target: myJSArrayIncludesWorld
Value: true

Karate: Scenario fails if contains __arg and run in 'stand-alone' mode

I have faced a problem: when I try to run a Scenario containing the built-in __arg variable as 'stand-alone' (not 'called'), my test fails with an error (I do not @ignore the called feature, in order to use it in both 'called' and 'stand-alone' modes):
evaluation (js) failed: __arg, javax.script.ScriptException: ReferenceError: "__arg" is not defined in <eval> at line number 1
stack trace: jdk.nashorn.api.scripting.NashornScriptEngine.throwAsScriptException(NashornScriptEngine.java:470)
The following two simple features should be enough to reproduce it.
called-standalone.feature:
Feature: Called + Stand-alone Scenario

Scenario: Should not fail on __arg when run as stand-alone
    * def a = __arg
    * print a

caller.feature:
Feature: Caller

Scenario: call without args
    When def res = call read('called-standalone.feature')
    Then match res.a == null

Scenario: call with args
    When def res = call read('called-standalone.feature') {some: 42}
    Then match res.a == {some: 42}
Putting these two features into the skeleton project and running mvn test will show the error.
I expect this to work, since the docs say that 'called' Karate scripts "can behave like 'normal' Karate tests in 'stand-alone' mode":
‘called’ Karate scripts don’t need to use any special keywords to ‘return’ data and can behave like ‘normal’ Karate tests in ‘stand-alone’ mode if needed
All Karate variables have to be "defined" at run time. This is a rule which cannot be relaxed.
So you should re-design your scripts. The good thing is you can use karate.get() to set a "default value".
* def a = karate.get('__arg', null)
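Applied to the feature from the question, the redesigned called-standalone.feature would look like this (a sketch):
Feature: Called + Stand-alone Scenario

Scenario: Should not fail on __arg when run as stand-alone
    * def a = karate.get('__arg', null)
    * print a
Run stand-alone, a defaults to null, so the caller's match res.a == null still passes; called with arguments, a picks up the passed map.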
That should answer your question.

TestCafe unable to use testController (t) outside of test run (e.g. as a conditional to skip a test)

I'm trying to check which browser we're running tests on, and then skip a test/fixture based on the result (as mentioned in this TestCafe Issue).
import { t } from 'testcafe';

fixture `test`
    .page('https://testcafe.devexpress.com');

if (t.browser.name.includes('Chrome')) {
    test('is Chrome?', async () => {
        console.log(t.browser.name);
        await t.expect(t.browser.name.includes('Chrome')).ok();
    });
} else {
    test.skip('is Chrome?');
}
Results in...
ERROR Cannot prepare tests due to an error.
Cannot implicitly resolve the test run in the context of which the test controller action should be executed. Use test function's 't' argument instead.
Is there any way I can use the test controller (t) outside of a test?
I don't have a solution to exactly your question. But I think it's better to do it slightly differently, so the outcome will be the same, but the means to achieve it will differ a bit. Let me explain.
Wrapping test cases in if statements is, in my opinion, not a good idea. It clutters test files: you no longer see only test or fixture on the left side, but also if statements that make you pause when reading the file. It adds complexity when you just want to scan a test file quickly from top to bottom.
A solution could be to introduce metadata on your test cases (this works with fixtures as well):
test
    .meta({
        author: 'pavelsaman',
        creationDate: '16/12/2020',
        browser: 'chrome'
    })
    ('Test for Chrome', async t => {
        // test steps
    });
Then you can execute only tests for Chrome like so:
$ testcafe --test-meta browser=chrome chrome
That's very much the same as what you wanted to achieve with the condition, but the code is a bit more readable.
In case you want to execute tests for both chrome and firefox, you can execute more commands:
$ testcafe --test-meta browser=chrome chrome
$ testcafe --test-meta browser=firefox firefox
or:
$ testcafe --test-meta browser=chrome chrome && testcafe --test-meta browser=firefox firefox
If your tests are in a pipeline, it would probably be done in two steps.
The better solution, as mentioned in one of the comments on this question, is to use the runner object to run your tests instead of the command line. Instead of passing the browser(s) as a CLI argument, you pass the browser as an optional argument to a top-level script.
You would then read the browser variable from either the script parameter or the .testcaferc.json file.
You would need to tag all tests/fixtures with the browser(s) they apply to, using metadata.
You then use the Runner.filter method to add a delegate that returns true if the browser in the metadata equals the browser variable in the top-level script:
// `testcafe` is a TestCafe instance created via createTestCafe()
var runner = testcafe.createRunner();
var browser = process.env.npm_package_config_browser || require("./.testcaferc.json").browser;

runner.filter((testName, fixtureName, fixturePath, testMeta, fixtureMeta) => {
    return fixtureMeta.browser === browser || testMeta.browser === browser;
});
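Putting it together, a top-level run script might look like this (a sketch; the test directory and the way the TestCafe instance is created are assumptions, not part of the original answer):
const createTestCafe = require('testcafe');

(async () => {
    const browser = process.env.npm_package_config_browser
        || require('./.testcaferc.json').browser;
    const testcafe = await createTestCafe('localhost');
    try {
        const failedCount = await testcafe
            .createRunner()
            .src(['tests/'])  // placeholder test directory
            .browsers(browser)
            .filter((testName, fixtureName, fixturePath, testMeta, fixtureMeta) =>
                fixtureMeta.browser === browser || testMeta.browser === browser)
            .run();
        console.log('Failed tests: ' + failedCount);
    } finally {
        await testcafe.close();
    }
})();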

Access window object / browser scope from protractor

I'm running tests with Protractor, but it seems impossible to access the JS window object. I even tried adding a tag in my HTML file that would contain something like
var a = window.location;
and then tried expect(a), but I couldn't make it work; I always get undefined references.
How should I proceed to access variables that are in the browser scope?
Assuming you are using a recent version of Protractor, let's say >= 1.1.0, hopefully >= 1.3.1
Attempting to access browser-side JS code directly from Protractor won't work, because Protractor runs in NodeJS and all browser-side code is executed through the Selenium JsonWireProtocol.
Without further detail, a working example:
browser.get('https://angularjs.org/');
A one-liner promise that, as of today, resolves to '1.3.0-rc.3':
browser.executeScript('return window.angular.version.full;');
You can use it directly in an expect statement given Protractor's expect resolves promises for you:
expect(browser.executeScript('return window.angular.version.full;')).
toEqual('1.3.0-rc.3');
A longer example, passing a function instead of a string, and without expect resolving the promise for you, i.e. for more control and for doing something fancy with the result:
browser.driver.executeScript(function() {
    return window.angular.version.full;
}).then(function(result) {
    console.log('NodeJS-side console log result: ' + result);
    //=> NodeJS-side console log result: 1.3.0-rc.3
});
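Applying the same approach to the window.location example from the question (a sketch):
browser.executeScript('return window.location.href;').then(function(url) {
    // the value is fetched in the browser and returned to the NodeJS side
    console.log('Current URL: ' + url);
});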

Retrieve response from a "Run Test Step", using SoapUI/ Groovy?

In SoapUI, I have a host Test Case, which executes another external Test Case (with several test steps) using the "Run Test Case" test step. I need to access a response from the external TC from within my host TC, since I need to assert on some values.
I cannot transfer the properties since they are in XML. Could I get some pointers as to how I could leverage Groovy/SoapUI for this?
For the response, you can use the code below:
testRunner.testCase.getTestStepByName("test step").testRequest.response.responseContent
In your external TC, create another property, and at the end of the TC use a Property Transfer step to transfer your XML node to it. In your host TC, just read that property as you would any other.
I also had a look around to see if this can be done from Groovy. The SoapUI documentation says that you need to refer to the external test suite / test case by name:
def tc = testRunner.testCase.testSuite.project.testSuites["external TestSuite"].testCases["external TestCase"]
def ts = tc.testSteps["test step"]
But I could not find how to get at the Response after that.
In addition to Guest's and SiKing's answers, I'll share a solution to a problem I've met:
If your step is not of type 'request' but 'calltestcase', you cannot use Guest's answer.
I have a lot of requests, each contained in its own test case, and my other test cases call these test cases whenever I need to launch a request.
I configured my request test cases to return the response as a custom property that I call "testResponse", so I can easily access it from my other test cases.
I met a problem in the following configuration:
I have a 'calltestcase' step that gives me a request result.
Further in the test, I have a Groovy script that needs to call this step and get the response value.
If I use this solution:
testRunner.runTestStepByName("test step")
followed by testRunner.testCase.getTestStepByName("test step").testRequest.response.responseContent,
I'm stuck, as there is no testRequest property for that step class.
The solution that works is :
testRunner.runTestStepByName("test step")
def response_value = context.expand( '${test step#testResponse#$[\'value\']}' )
another solution is :
testRunner.runTestStepByName("test step")
tStep = testRunner.testCase.getTestStepByName("test step")
response = tStep.getPropertyValue("testResponse")
Then I extract the relevant value from 'response' (in my case, it is JSON that I have to parse).
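For that parsing step, a minimal Groovy sketch (assuming the response is JSON; the 'value' key is a placeholder):
import groovy.json.JsonSlurper

def json = new JsonSlurper().parseText(response)
// 'value' is a placeholder key; replace it with the field you need
def extracted = json.value
log.info("Extracted value: " + extracted)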
Of course, this works only because I store the request response as a custom property of my request test case.
I hope I was clear enough.