I am trying to figure out which parts of my React Native app are causing Detox to wait unnecessarily long, as instructed in the documentation. However, when I run:
detox test --debug-synchronization 20
I get no additional output, only the regular Jest output. I know for a fact that there are network requests slower than that, setTimeouts of 400 ms, and animations that are slowing Detox down, but it doesn't report them.
What could be causing the output not to work?
This feature had a bug, which was just fixed in release 18.18.0.
You may consider this approach as well:
await device.disableSynchronization();
Put this line before interacting with the animated element, and then you can enable synchronization again:
await device.enableSynchronization();
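Put together, that looks roughly like this (a sketch; the testIDs and screen names are made up for illustration):
it('gets past a looping animation', async () => {
  // Stop Detox from waiting for animations/timers to settle.
  await device.disableSynchronization();

  // 'animatedButton' and 'nextScreen' are hypothetical testIDs.
  await element(by.id('animatedButton')).tap();

  // Turn synchronization back on for the rest of the test.
  await device.enableSynchronization();

  await expect(element(by.id('nextScreen'))).toBeVisible();
});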
I am writing an e2e test for a React Native app (v0.63.4) using Detox (v18.2.2).
The app contains several screens, and while writing the test I want to start it at a certain point, just to save time: I run the test repeatedly while writing it to check that everything works the way I want, and I don't want to start from the beginning every time.
I know it is called an e2e test, but is there a way to do that?
Edit: I should mention I'm using Jest as a test runner. I don't know if you're using Jest, but I'll leave this here for now anyway.
The way I'm doing it in my project now is to use the .only function on my tests.
https://jestjs.io/docs/en/api#testonlyname-fn-timeout
test.only('the user can continue to account creation', async () => {
  await element(by.id('continueButton')).tap()
  await expect(element(by.id('accountCreationScreen'))).toBeVisible()
})
Notice though, that in my case I have to keep the test that logs a user in to the app. But that's fine for me; I still skip all the other tests.
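Concretely, that ends up looking something like this (a sketch; the login steps stand in for whatever your existing login test does):
test.only('the user can log in', async () => {
  // Keep this marked .only too, so the session exists before the
  // test you are currently working on runs.
  // ...your existing login steps...
})

test.only('the user can continue to account creation', async () => {
  // ...the test you are currently writing...
})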
Hope that works out for you, or helps in some way.
The recommended testing framework for Meteor 1.7 seems to be meteortesting:mocha.
With Meteor 1.7.0.3 I created a default app (meteor create my-app), which has the following tests (in test/main.js)
import assert from "assert";

describe("my-app", function () {
  it("package.json has correct name", async function () {
    const { name } = await import("../package.json");
    assert.strictEqual(name, "noteit");
  });

  if (Meteor.isClient) {
    it("client is not server", function () {
      assert.strictEqual(Meteor.isServer, false);
    });
  }

  if (Meteor.isServer) {
    it("server is not client", function () {
      assert.strictEqual(Meteor.isClient, false);
    });
  }
});
I ran
meteor add meteortesting:mocha
meteor test --driver-package meteortesting:mocha
and with meteortesting:mocha#2.4.5_6 I got this in the console:
I20180728-12:06:37.729(2)? --------------------------------
I20180728-12:06:37.729(2)? ----- RUNNING SERVER TESTS -----
I20180728-12:06:37.729(2)? --------------------------------
I20180728-12:06:37.729(2)?
I20180728-12:06:37.730(2)?
I20180728-12:06:37.731(2)?
I20180728-12:06:37.737(2)? the server
✓ fails a test.753(2)?
I20180728-12:06:37.755(2)?
I20180728-12:06:37.756(2)?
I20180728-12:06:37.756(2)? 1 passing (26ms)
I20180728-12:06:37.756(2)?
I20180728-12:06:37.757(2)? Load the app in a browser to run client tests, or set the TEST_BROWSER_DRIVER environment variable. See https://github.com/meteortesting/meteor-mocha/blob/master/README.md#run-app-tests
=> Exited with code: 0
=> Your application is crashing. Waiting for file change.
Actually, it was repeated three times. Not pretty. And I wasn't expecting a passing test to crash my app.
Also, in the browser I got this (screenshot omitted).
I was expecting something more like the nice output shown in the Meteor testing guide.
As with most things in the Node.js world, there is a multitude of forks of almost anything, and so it is with meteortesting:mocha as well.
cultofcoders:mocha seems to be a few commits ahead of practicalmeteor:mocha, which was at one point the recommended testing framework for Meteor.
If you run
meteor add cultofcoders:mocha
meteor test --driver-package cultofcoders:mocha
you'll get the nice output.
As a curiosity, I found that the version of cultofcoders:mocha I got (meteor list | grep mocha) was 2.4.6, a version that the GitHub repo does not have...
The screenshot you reference was made using practicalmeteor:mocha, but meteortesting:mocha is not (as the other answer claims) a fork of it; it is a separately developed package aiming for the same goal, which is running tests in Meteor.
The usage of the two packages is very different, and practicalmeteor:mocha might look a bit trickier to set up. This list only applies to its version 1.0.1 and might change later.
But I have to admit that the documentation needs a refresh... Anyway, here are some helpful tips which I'll include in the documentation soon.
If you just want to get started, run this:
meteor add meteortesting:mocha
npm i --save-dev puppeteer@^1.5.0
TEST_BROWSER_DRIVER=puppeteer meteor test --driver-package meteortesting:mocha --raw-logs --once
Do you want to exit after the tests are completed or re-run them after file-change?
Usually, Meteor will restart your application when it exits (a normal exit or a crash), which includes the test-runner.
If you want to use it in CI, or you just want to run the tests once, add --once to the meteor command; otherwise set TEST_WATCH=1 before running this script. If you don't set the env variable and don't pass --once, Meteor will print these lines and restart the tests once they're finished:
=> Exited with code: 0
=> Your application is crashing. Waiting for file change.
As of now I haven't found a way to check whether the --once flag is set, which would make the env variable unnecessary. The flexibility here to choose between CI and continuous testing is very useful.
Maybe you're currently working on a feature and want to run the tests as you work. If you have set TEST_WATCH=1 and are not using --once, Meteor will restart the tests once it registers that a file was changed. You can even limit the test collection using MOCHA_GREP.
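For example (a sketch; the describe name is just an example, and MOCHA_GREP is assumed to map to Mocha's --grep matching on test titles):
// tests/main.js: give the group you are working on a grep-able name
describe("accounts", function () {
  it("creates a user", function () {
    // ...
  });
});

// then, while developing, run only that group:
// TEST_WATCH=1 MOCHA_GREP="accounts" meteor test --driver-package meteortesting:mocha --raw-logs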
Where and how do you want to see the results?
You currently have to choose between seeing all the test results on the command line, or showing the server tests on the command line and the client tests in the browser. Currently meteortesting:mocha does not support showing the results of both the server and client tests in the browser, the way your screenshot shows.
Please take a look at the package documentation for further details:
You should disable the Meteor timestamps to make the output look better. Tests can look quite garbled because of the timestamp added to every line; to avoid this, add --raw-logs to your command.
I hope this answers most of your question. I know that the documentation needs some improvements, and I would welcome it if someone took the time to put it into a more logical order for people who "just want to get started".
I am creating automated tests for an e-commerce website. The website uses lazy loading (or something similar), and I am testing it on a UAT server, so pages load slowly because of the server's limited specification: it takes 60 seconds or more to load all the resources of a page. When I run my Selenium automation, every step therefore waits more than 60 seconds for the page to fully load before continuing. Can someone give me tips on how to continue with the next test step after waiting only 10 seconds for the page to load, without throwing an exception?
Not possible.
If you find an element and try to execute an action on it while the page is still loading, you will get a stale element error; on top of that, the loading issues will give you a lot of failed tests, and debugging will take much more time.
Automation is meant to execute fast and produce reliable results.
It seems that this environment is not built for automation, you should request more resources.
As an alternative maybe you can use a headless driver or see if you can put the same build on a VM.
Why this is an issue: Selenium needs to wait for each request to complete. For example, when you request a page and it has not been received entirely because the server is still sending data, the request is not done, and it is logical that you need a complete request in order to continue.
You should address this to your Project Manager/QA Lead and ask for advice/option on how to handle this.
Please note that these costs should be included in the automation price. You need to address this in a simple way:
good server -> automation runs smoothly and fast, and the testing is done faster
bad server -> automation cannot be run reliably and each test has a high failure rate => the alternative is X day(s) of manual testing for each build
If this were a coding issue, like some delayed AJAX request, there would be solutions and the devs could help; but if it is an infrastructure/resources issue, it does not depend on you and you cannot solve it.
You could try any type of wait, implicit or explicit; an explicit wait would eventually throw an exception, but this is not a solution for poor resources.
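For completeness, an explicit wait with the JavaScript selenium-webdriver bindings looks roughly like this (a sketch; the helper name and selector are made up). Note that it still throws a TimeoutError if the element never shows up within the limit:
const { By, until } = require('selenium-webdriver');

// Wait up to 10 seconds for one element you actually need, then continue,
// instead of waiting for the whole page to finish loading.
async function clickWhenPresent(driver, cssSelector, timeoutMs = 10000) {
  const el = await driver.wait(until.elementLocated(By.css(cssSelector)), timeoutMs);
  await el.click();
}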
I maintain a complex Angular (1.5.x) application that is being E2E tested using Protractor (2.5.x). I am experiencing a problem with this approach, which manifests primarily as flaky tests. Tests that worked perfectly well in one pull request fail in another. This concerns simple locators, such as by.linkText(...). I debugged the failing tests: the app is on the correct page, and the links are present and accessible.
Has anyone else experienced these consistency problems? Knows of a cause or workaround?
Just Say No to More End-to-End Tests!
That said, here are a few things you can do to tackle our mutual merciless "flakiness" enemy:
update to the latest Protractor (currently 4.0.0), which also brings the latest selenium and chromedriver with it
turn off Angular animations
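For an Angular 1.x app this can be done by registering a mock module in onPrepare() (a sketch of the commonly used snippet; the module name is arbitrary):
// In onPrepare(): inject a module that disables ngAnimate on every page.
var disableNgAnimate = function () {
  angular.module('disableNgAnimate', []).run(['$animate', function ($animate) {
    $animate.enabled(false);
  }]);
};
browser.addMockModule('disableNgAnimate', disableNgAnimate);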
use browser.wait() with a set of built-in or custom Expected Conditions. This is probably by far the most reliable way to approach the problem. Unfortunately, it is use-case and problem specific; you would need to modify your actual tests in the problematic places. For example, if you need to click an element, wait for it to be clickable:
var EC = protractor.ExpectedConditions;
var elm = $("#myid");
browser.wait(EC.elementToBeClickable(elm), 5000);
elm.click();
maximize the browser window (to avoid random "element not visible" or "not clickable" errors). Put this in onPrepare():
browser.driver.manage().window().maximize();
increase the Protractor and Jasmine timeouts
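For reference, these are the usual knobs in protractor.conf.js (the values below are arbitrary examples, not recommendations):
exports.config = {
  allScriptsTimeout: 30000,        // how long to wait for Angular to stabilize
  getPageTimeout: 20000,           // how long browser.get() may take
  jasmineNodeOpts: {
    defaultTimeoutInterval: 60000  // per-spec Jasmine timeout
  }
};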
slow Protractor down by tweaking the Control Flow (not sure if it works for 4.0.0, please test)
manually call browser.waitForAngular(); in problematic places. I am not sure why this helps, but I've seen reports where it definitely fixed a flaky test.
use the Jasmine done() callback in your specs. This may help, for example, to not start the it() block until done() is called in beforeEach()
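A minimal sketch of that pattern:
beforeEach(function (done) {
  // it() blocks in this describe will not start until done() fires.
  browser.get('/login').then(done);
});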
return a promise from the onPrepare() function. This usually helps to make sure things are prepared for the test run
use protractor-flake package that would automatically re-run failed tests. More like a quick workaround to the problem
There are also other problem-specific "tricks" like slow typing into the text box, clicking via JavaScript etc.
Yes, I think all of us have experienced such flakiness issues.
Actually, flakiness is a quite common issue with any browser automation tool. However, it is supposed to be less of a problem with Protractor, as Protractor has built-in wait handling and performs actions only after the DOM has loaded properly. Still, in a few cases you might have to use explicit waits if you see intermittent failures.
I prefer to use a few intelligent wait methods like:
function waitForElementToClickable(locator) {
  var domElement = element(by.css(locator)),
      isClickable = protractor.ExpectedConditions.elementToBeClickable(domElement);

  return browser.wait(isClickable, 2000)
    .then(function () {
      return domElement;
    });
}
Here 2000 ms is used as the timeout; you can make it configurable using a variable. Sometimes I also fall back to browser.sleep() when none of my intelligent waits work.
It's been my experience that some methods (e.g. sendKeys()) do not always fire at the expected time within the controlFlow() queue, and will cause tests to be flaky. I work around this by adding them to the controlFlow() explicitly, e.g.:
this.enterText = function(input, text) {
  return browser.controlFlow().execute(function() {
    input.sendKeys(text);
  });
};
A workaround that my team has been using is to re-run only failed tests using the plugin protractor-errors. Using this tool, we can identify real failures versus flakey tests within 2-3 runs. To add the plugin, just add a require statement to the bottom of the Protractor config's onPrepare function:
exports.config = {
  ...
  onPrepare: function() {
    require('protractor-errors');
  }
}
You will need to pass these additional parameters when you run your tests with the plugin:
protractor config.js --params.errorsPath 'jasmineReports' --params.currentTime (timestamp) --params.errorRun (true or false)
There is also a cli tool that will handle generating the currentTime if you don't have an easy way to pass in a timestamp.
I'm using Zombie.js for testing with Cucumber-js, and I can't seem to get my client side scripts to run.
Visiting the page:
this.browser.visit("http://localhost/boic", function(e, browser, status, errors) {
  console.log('status', status);
  console.log('error', errors);
  console.log('console', browser.text("H1"));
});
Returns a status of 200, no errors, and displays the H1 text correctly. However, if I include a script to change the H1 code in the page:
<script>
$('H1').html('hello world');
</script>
The H1 text remains unchanged, and no global variables are accessible via browser.window...
thanks!
Did you load jQuery in your page before the script is called?
There is also the runScripts browser option, but that defaults to true.
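If you want to rule that out explicitly, the option can be passed when the browser is created; a sketch against the Zombie API of that era (adjust to the version you have installed):
var Browser = require('zombie');

// runScripts defaults to true; setting it explicitly just rules it out
// as the reason the inline <script> never runs.
var browser = new Browser({ runScripts: true });
browser.visit('http://localhost/boic', function (e, browser, status, errors) {
  console.log('H1 after scripts:', browser.text('H1'));
});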
But I'm going to recommend running an external PhantomJS process and going through the WebDriver interface, just because I spent 6 months trying to get Zombie to do what I wanted and PhantomJS made it all easy. http://phantomjs.org/release-1.8.html https://github.com/LearnBoost/soda
I agree with Jon Biz. Zombie is difficult to work with. Many sites use JS libraries that contain minor errors, and these cause Zombie to crash (I think Node fails) when the browser encounters them, if you have the runScripts option set. This makes it very hard to use for any application/site that requires external JS, i.e. most of them.
I also recommend switching to PhantomJS.