We are facing an issue while running Karma test cases with PhantomJS: PhantomJS crashes and gets disconnected.
Is this due to a memory leak or some other issue? Please let me know if someone has a suitable solution.
I found that a workaround is to break the test cases into multiple Grunt tasks, but since we have more than 1500 test cases, that is not feasible.
We are using the following versions:
Node: 0.10.32
Karma: 0.12.24
PhantomJS: 1.9.8 (karma-phantomjs-launcher)
Please let me know of any solutions as soon as possible.
There are two reasons I have found for this to happen:
PhantomJS does not release memory until its tab is closed, so if your test suite is too large, you could be running out of memory.
karma-phantomjs-launcher and karma-phantomjs2-launcher do not hook the stdout/stderr output of the browser process they start, so I've seen instances where the started browser just hangs and gets disconnected, most likely because its stderr output fills up.
The first problem can be worked around by splitting your test suite into smaller ones. Alternatively, you could research whether there is a way to tell PhantomJS to run its JavaScript garbage collection, but I have not gone down that road, so I can't provide much more detail there.
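For illustration only, here is a minimal sketch of splitting one large suite into two Grunt/Karma targets so each PhantomJS instance loads fewer specs; it assumes grunt-karma, and the file globs and target names are placeholders you would adapt to your layout:
// Gruntfile.js (sketch): two Karma targets, each loading a subset of the specs
module.exports = function (grunt) {
  grunt.initConfig({
    karma: {
      options: {
        configFile: 'karma.conf.js', // shared base config
        browsers: ['PhantomJS'],
        singleRun: true
      },
      // Hypothetical split: adjust the globs so each target covers roughly half the specs
      unitA: {
        options: { files: ['src/**/*.js', 'test/unit/part1/**/*.spec.js'] }
      },
      unitB: {
        options: { files: ['src/**/*.js', 'test/unit/part2/**/*.spec.js'] }
      }
    }
  });

  grunt.loadNpmTasks('grunt-karma');
  // Running the targets one after another gives each suite a fresh PhantomJS process
  grunt.registerTask('test', ['karma:unitA', 'karma:unitB']);
};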
The second problem can be fixed by:
using the latest karma-phantomjs-launcher version, which hooks the browser's stdout/stderr output (fixed in version 0.2.1)
using a version of karma-phantomjs2-launcher from its pull request #5, which brings in upstream changes from the base karma-phantomjs-launcher project and thus resolves the problem here as well.
I had the same kind of issue with random crashes. Though I did not find a way to avoid them, it is possible to restart the Grunt task after a crash:
grunt.registerTask('karma-with-retry', function (opt) {
  var done = this.async();
  var count = 0;
  var retry = function () {
    grunt.util.spawn({
      cmd: "grunt",
      args: ["connect", "karma"], // your tasks
      opts: {
        stdio: 'inherit'
      }
    }, function (error, result, code) {
      count++;
      if (error && code === 90 /* Replace with code thrown by karma */) {
        if (count < 5) {
          grunt.log.writeln("Retrying karma tests upon error: " + code);
          retry();
        } else {
          done(false);
        }
      } else {
        done(result);
      }
    });
  };
  retry();
});
Source https://github.com/ariya/phantomjs/issues/12325#issuecomment-56246505
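You would then run grunt karma-with-retry instead of the plain karma task. Note that the exit code checked above (90) is a placeholder, as the comment says; replace it with whatever code Karma actually returns when PhantomJS disconnects in your setup.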
I was getting a PhantomJS crash when asserting the following line:
dom.should.be.instanceof(HTMLCollection);
It worked on Chrome, but PhantomJS was crashing without any useful error message.
I was able to see the real error message after running the same test in the PhantomJS_debug browser with the debug option set to true.
The following error message showed up:
The instanceof assertion needs a constructor but object was given.
Instead of
PhantomJS has crashed. Please read the bug reporting guide at
<http://phantomjs.org/bug-reporting.html> and file a bug report.
So Chrome was OK with the assertion, but PhantomJS 2.1.1 crashes with the above error. Hope this helps.
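As a hedged workaround sketch (not part of the original answer): since PhantomJS 2.1.1 apparently does not expose HTMLCollection as a constructor, you can assert on something it does understand instead of using instanceof. The exact toString tag is an assumption to verify in your environment:
// Check the internal class tag instead of passing a non-constructor to instanceof
Object.prototype.toString.call(dom).should.equal('[object HTMLCollection]');
// Or simply assert the array-like shape the test actually relies on
dom.should.have.property('length');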
Related
I'm currently using ChromeDriver v91.0.4472.1900 and Selenium v3.141.0 and am having a problem with "Timed out receiving message from renderer". I've checked all the posts on Stack Overflow and probably tried every link I found on Google, and none of them helped.
What I'm trying to do is to stop executing my tests when I get
Timed out receiving message from renderer
because ChromeDriver won't execute any command after this warning happens
I've tried adding those 'cryptic' arguments to ChromeDriver like:
// ChromeDriver is just AWFUL because every version or two it breaks unless you pass cryptic arguments
//AGRESSIVE: options.setPageLoadStrategy(PageLoadStrategy.NONE); // https://www.skptricks.com/2018/08/timed-out-receiving-message-from-renderer-selenium.html
// this.chromeOptions.AddArguments("start-maximized"); // https://stackoverflow.com/a/26283818/1689770
this.chromeOptions.AddArguments("enable-automation"); // https://stackoverflow.com/a/43840128/1689770
//this.chromeOptions.AddArguments("--headless"); // only if you are ACTUALLY running headless
this.chromeOptions.AddArguments("--no-sandbox"); //https://stackoverflow.com/a/50725918/1689770
this.chromeOptions.AddArguments("--disable-infobars"); //https://stackoverflow.com/a/43840128/1689770
this.chromeOptions.AddArguments("--disable-dev-shm-usage"); //https://stackoverflow.com/a/50725918/1689770
this.chromeOptions.AddArguments("--disable-browser-side-navigation"); //https://stackoverflow.com/a/49123152/1689770
//this.chromeOptions.AddArguments("--disable-gpu"); //https://stackoverflow.com/questions/51959986/how-to-solve-selenium-chromedriver-timed-out-receiving-message-from-renderer-exc
As you can see, this is one of the Stack Overflow solutions, but it didn't help either.
I know one of the solutions is to find a perfect match between the ChromeDriver and Chrome versions, but the problem is that I have to stay up to date all the time, so the only solution for me would be to stop executing the test and rerun it. However, I am not able to 'catch' the
Timed out receiving message from renderer
and handle it properly.
I have the following code...
async function GetFirstAssessment() {
  try {
    const response = await axios.get('http://192.168.254.10/App/GetFirstAssessment/');
    return response.data;
  } catch (error) {
    alert('error: ' + error);
    console.error(error);
  }
}
It had been working fine for some time, but suddenly it no longer works and eventually times out. Or I don't even know if it "times out", since I believe the default timeout for axios is 0, but eventually it does error with the message "Error: Network Error". I've also added a picture of the stack trace, but I don't think it's helpful.
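To narrow this down, here is a sketch I could try (the function name and the 10-second timeout are arbitrary choices): set an explicit timeout and log the standard axios error fields to see whether a response arrived, the request went unanswered, or the request setup itself failed.
// Sketch: same call with an explicit timeout and more detailed error logging
async function GetFirstAssessmentWithTimeout() {
  try {
    const response = await axios.get(
      'http://192.168.254.10/App/GetFirstAssessment/',
      { timeout: 10000 } // fail fast instead of the default 0 (no timeout)
    );
    return response.data;
  } catch (error) {
    if (error.response) {
      // The server replied with a non-2xx status
      console.error('status:', error.response.status, error.response.data);
    } else if (error.request) {
      // The request was sent but no response came back (typical for "Network Error")
      console.error('no response received:', error.message);
    } else {
      // Something failed while setting up the request
      console.error('request setup failed:', error.message);
    }
    throw error;
  }
}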
I can put the URL in a browser and it returns the JSON I'm expecting, so the problem is definitely not on the server side. I am testing from an Android device connected via USB and developing with the CLI (not Expo).
If there is any other information I can provide, please let me know... I'm not sure what to do next because this really makes no sense. I would wonder if it was a security issue, except that it was working perfectly earlier. Also, I have updated some other code that calls this, but even after reverting I still have the same problem... and seeing as it is making it to the catch, I don't see how any other code could be affecting this.
I did just install the standalone React Native Debugger. I believe the call has worked since I installed it, though I'm not 100% certain of that. It still doesn't work after closing the debugger.
I set a breakpoint in the server code on the first line of the API method, but it doesn't get hit. I'm not sure how to troubleshoot further up the chain, though. I also checked Fiddler, and it doesn't show any request coming in, though I honestly don't know whether it normally should or not.
The recommended testing framework for Meteor 1.7 seems to be meteortesting:mocha.
With Meteor 1.7.0.3 I created a default app (meteor create my-app), which has the following tests (in test/main.js)
import assert from "assert";

describe("my-app", function () {
  it("package.json has correct name", async function () {
    const { name } = await import("../package.json");
    assert.strictEqual(name, "noteit");
  });

  if (Meteor.isClient) {
    it("client is not server", function () {
      assert.strictEqual(Meteor.isServer, false);
    });
  }

  if (Meteor.isServer) {
    it("server is not client", function () {
      assert.strictEqual(Meteor.isClient, false);
    });
  }
});
I ran
meteor add meteortesting:mocha
meteor test --driver-package meteortesting:mocha
and with meteortesting:mocha#2.4.5_6 I got this in the console:
I20180728-12:06:37.729(2)? --------------------------------
I20180728-12:06:37.729(2)? ----- RUNNING SERVER TESTS -----
I20180728-12:06:37.729(2)? --------------------------------
I20180728-12:06:37.729(2)?
I20180728-12:06:37.730(2)?
I20180728-12:06:37.731(2)?
I20180728-12:06:37.737(2)?   the server
I20180728-12:06:37.753(2)?     ✓ fails a test
I20180728-12:06:37.755(2)?
I20180728-12:06:37.756(2)?
I20180728-12:06:37.756(2)? 1 passing (26ms)
I20180728-12:06:37.756(2)?
I20180728-12:06:37.757(2)? Load the app in a browser to run client tests, or set the TEST_BROWSER_DRIVER environment variable. See https://github.com/meteortesting/meteor-mocha/blob/master/README.md#run-app-tests
=> Exited with code: 0
=> Your application is crashing. Waiting for file change.
Actually, it was repeated three times. Not pretty. And I wasn't expecting a passing test to crash my app.
Also in the browser I got this
I was expecting something more like the nice output shown in the Meteor testing guide.
As with most things Node.js, there is a multitude of forks of almost anything. So it is with meteortesting:mocha.
cultofcoders:mocha seems to be a few commits ahead of practicalmeteor:mocha, which was at one point the recommended testing framework for Meteor.
If you run
meteor add cultofcoders:mocha
meteor test --driver-package cultofcoders:mocha
you'll get the nice output.
As a curiosity, I found that the version of cultofcoders:mocha I got (meteor list | grep mocha) was 2.4.6, a version that the GitHub repo does not have...
The screenshot you refer to was made using practicalmeteor:mocha, but meteortesting:mocha is not (as the other answer claims) a fork of it; it is a separately developed package aiming for the same goal, which is running tests in Meteor.
The usage of the packages is very different, and practicalmeteor:mocha might look a bit trickier to set up; this list only applies to its version 1.0.1 and might change later.
But I have to admit that the documentation needs a refresh... Anyway, here are some helpful tips, which I'll include in the documentation soon.
If you just want to get started, run this:
meteor add meteortesting:mocha
npm i --save-dev puppeteer@^1.5.0
TEST_BROWSER_DRIVER=puppeteer meteor test --driver-package meteortesting:mocha --raw-logs --once
Do you want to exit after the tests are completed or re-run them after file-change?
Usually, Meteor will restart your application when it exits (a normal exit or a crash), which includes the test-runner.
If you want to use it in your CI, or you just want to run the tests once, add --once to the meteor command; otherwise set TEST_WATCH=1 before running this script. If you don't set the env variable and don't pass --once, Meteor will print these lines and restart the tests once they're finished:
=> Exited with code: 0
=> Your application is crashing. Waiting for file change.
As of now I haven't found a way to check whether the --once flag is set, which would make the env variable unnecessary. The flexibility here to choose between CI and continuous testing is very useful.
Maybe you're currently working on a feature and want to run the tests as you work. If you have set TEST_WATCH=1 and are not using --once, Meteor will restart the tests once it registers that a file has changed. You can even limit the test collection using MOCHA_GREP, as in the example below.
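For example (a sketch; MOCHA_GREP is matched against the describe/it names, so the pattern below is just a placeholder):
MOCHA_GREP="client is not server" TEST_WATCH=1 meteor test --driver-package meteortesting:mocha --raw-logs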
Where and how do you want to see the results?
You currently have to choose between seeing all the test results on the command line, or showing the server tests on the command line and the client tests in the browser. Currently, practicalmeteor:mocha does not support showing the results of the server and client tests in the browser, as your screenshot shows.
Please take a look at the package documentation for further details.
You should disable the Meteor timestamp to make it look better.
Tests might look quite garbled because of the timestamp added to every line. To avoid this, add --raw-logs to your command.
I hope this answers most of your question. I know that the documentation needs some improvements, and I would welcome it if someone took the time to put it into a more logical order for people who "just want to get started".
I maintain a complex Angular (1.5.x) application that is E2E tested using Protractor (2.5.x). I am experiencing a problem with this approach, which manifests primarily as flaky tests. Tests that worked perfectly well in one pull request fail in another. This concerns simple locators, such as by.linkText(...). I debugged the failing tests, and the app is on the correct page; the links are present and accessible.
Has anyone else experienced these consistency problems? Knows of a cause or workaround?
Just Say No to More End-to-End Tests!
That said, here are a few things you can do to tackle our mutual, merciless "flakiness" enemy:
update to the latest Protractor (currently 4.0.0) which also brings latest selenium and chromedriver with it
turn off Angular animations (see the sketch after this list)
use browser.wait() with a set of built-in or custom Expected Conditions. This is probably by far the most reliable way to approach the problem. Unfortunately, it is use-case and problem specific; you would need to modify your actual tests in the problematic places. For example, if you need to click an element, wait for it to be clickable:
var EC = protractor.ExpectedConditions;
var elm = $("#myid");
browser.wait(EC.elementToBeClickable(elm), 5000);
elm.click();
maximize the browser window (to avoid random "element not visible" or "not clickable" errors). Put this in onPrepare():
browser.driver.manage().window().maximize();
increase the Protractor and Jasmine timeouts
slow Protractor down by tweaking the Control Flow (not sure if it works for 4.0.0, please test)
manually call browser.waitForAngular(); in problematic places. I am not sure why this helps but I've seen reports where it definitely helped to fix a flaky test.
use the Jasmine done() callback in your specs. This can help, for example, to keep an it() block from starting until done is called in beforeEach()
return a promise from the onPrepare() function. This usually helps to make sure things are prepared for the test run
use the protractor-flake package, which automatically re-runs failed tests. This is more of a quick workaround to the problem
There are also other problem-specific "tricks" like slow typing into the text box, clicking via JavaScript etc.
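As a sketch of the "turn off Angular animations" item above, here is the mock-module approach commonly used with Protractor and Angular 1.x; the module name is arbitrary, and the snippet goes into onPrepare():
// protractor.conf.js, inside onPrepare(): inject a module that disables $animate
browser.addMockModule('disableNgAnimate', function () {
  angular.module('disableNgAnimate', []).run(['$animate', function ($animate) {
    $animate.enabled(false); // turn off all Angular animations while tests run
  }]);
});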
Yes, I think all of us have experienced such flakiness issues.
Actually, flakiness is quite a common issue with any browser automation tool. However, it is supposed to be less of a problem with Protractor, as Protractor has built-in waiting that performs actions only after the DOM has loaded properly. But in a few cases you might have to use some explicit waits if you see intermittent failures.
I prefer to use a few intelligent wait methods like:
function waitForElementToClickable(locator) {
  var domElement = element(by.css(locator)),
      isClickable = protractor.ExpectedConditions.elementToBeClickable(domElement);

  return browser.wait(isClickable, 2000)
    .then(function () {
      return domElement;
    });
}
Here 2000 ms is used as the timeout; you can make it configurable with a variable. Sometimes I also fall back to browser.sleep() when none of my intelligent waits work.
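A hypothetical usage inside a spec (the CSS selector is only an illustration):
waitForElementToClickable('#submit-button').then(function (el) {
  el.click();
});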
It's been my experience that some methods (e.g. sendKeys()) do not always fire at the expected time within the controlFlow() queue and will cause tests to be flaky. I work around this by explicitly adding them to the controlFlow(). E.g.:
this.enterText = function (input, text) {
  return browser.controlFlow().execute(function () {
    input.sendKeys(text);
  });
};
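A hypothetical call from a spec, assuming the helper lives on a page object (the names are illustrative):
loginPage.enterText(element(by.css('#username')), 'test-user');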
A workaround that my team has been using is to re-run only failed tests using the protractor-errors plugin. Using this tool, we can identify real failures versus flaky tests within 2-3 runs. To add the plugin, just add a require statement to the bottom of the Protractor config's onPrepare function:
exports.config = {
  ...
  onPrepare: function () {
    require('protractor-errors');
  }
}
You will need to pass these additional parameters when running your tests with the plugin:
protractor config.js --params.errorsPath 'jasmineReports' --params.currentTime (timestamp) --params.errorRun (true or false)
There is also a cli tool that will handle generating the currentTime if you don't have an easy way to pass in a timestamp.
When I try to run multiple UI tests at the same time in the new Xcode beta, it fails after the first test, with the error "UI Testing Failure: App state is still not terminated" for each test after the first. Anyone got a fix for this?
I have faced the same issue. It seems that, at least in my case, the application had never been terminated.
I solved it by putting the following statement in the setUp() method:
continueAfterFailure = false
This should stop a running test process after the first failure.
I cannot post a comment, so I will try to answer your question while getting some clarification.
I have faced this issue when running on physical devices; on the simulator I did not. So are you facing this issue when running on a device, on the simulator, or both?
If it's a device, then there is a known bug that has been reported to Apple. To work around this issue, I have inserted dummy test cases between two genuine test cases at runtime.
Did you include a
override func tearDown() {
    super.tearDown()
}
function?
It's needed to terminate the app state after each test before it is re-initialized by the
override func setUp() {
    super.setUp()
    XCUIApplication().launch()
}