I am facing problems with Chrome processes not closing after my tests are done.
If I debug each test case, and also debug runs where I change the test so it fails on purpose, the TearDown method is always called and all processes are killed.
But sometimes, in roughly 1 out of 10 runs, some processes stay alive when the tests are not run in debug mode.
I found this GitHub issue report from 2018:
Chromedriver quit() method doesn't close all chrome.exe processes
in which some people describe facing the same issue. No fix is mentioned in that thread.
Someone in that thread mentions that the cause could be "zombie processes", as explained here:
Zombie Processes are Eating your Memory
Has anyone else faced such issues?
It may be happening because the teardown function you mentioned is not being called.
First, how do you call your teardown function?
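For illustration, here is a minimal sketch of the pattern that usually avoids leftover processes. It uses Node's selenium-webdriver with Mocha-style hooks (your stack may be NUnit/JUnit, so treat the names as assumptions); the idea is to create the driver per test and call driver.quit() in a teardown hook that runs even when a test fails:

const { Builder } = require('selenium-webdriver');

let driver;

beforeEach(async () => {
  // one fresh browser per test
  driver = await new Builder().forBrowser('chrome').build();
});

afterEach(async () => {
  // quit() ends the ChromeDriver session and should take the chrome.exe
  // child processes with it; close() only closes the current window.
  if (driver) {
    await driver.quit();
  }
});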
Related
Good day all... I have been having an issue with Sanity for the past 3 days. After running sanity start on my Linux VPS I get the success message "Content Studio successfully compiled! Go to http://localhost:3333", but the link returns nothing; it just loads indefinitely without any error message in my console. I have tried Chrome, Brave and Firefox, as well as turning third-party cookies on. I'd appreciate any assistance or ideas as to what the problem may be.
If simply restarting Sanity Studio doesn't work, check whether there is still a process running after exiting Studio, and if so, terminate it and start Studio again.
A simple CLI command that kills the process listening on TCP port 3333 (lsof -t prints just the matching PIDs, which are then piped to kill) is:
lsof -t -i tcp:3333 | xargs kill
I've experienced similar issues with Studio being unresponsive after inadvertently leaving it running when my MBP goes to sleep. When I return, it appears Sanity is still running in the terminal, but there is no browser response and no errors thrown in the terminal.
More resources that might help:
Finding the PID of the process using a specific port?
Find (and kill) process locking port 3000 on Mac
When I run Selenium tests on Chrome in parallel with two threads (both threads running on the same machine), I sometimes see one of the Chrome instances get stuck (i.e. UI interactions do not happen) for some time, or indefinitely.
This causes test execution to take more time, or tests to fail unnecessarily.
Could you please help me understand whether this is known behaviour when tests run on the same machine? If not, what are the causes and solutions? Could you also advise how we can programmatically (using Java) set memory for a Chrome browser that is run through Selenium?
I am running Karma, Jasmine and Istanbul on Windows 10 and test against ChromeHeadless, FirefoxHeadless and MS Edge.
The tests all run just fine and the coverage output is written. BUT... Firefox never closes. I get this error:
WARN [launcher]: Firefox was not killed in 2000 ms, sending SIGKILL.
If I don't test with Firefox, everything works fine.
If I don't use coverage, everything works fine.
If I make Firefox non-headless, it still fails in the same way.
If I use JUST Firefox, it still fails in the same way.
I have spent over 2 weeks trying to find an answer here on Stack Overflow and all over the internet. There were similar problems reported, but no one ever had a definitive answer related to Firefox and coverage.
AWESOME!!!!!!! I figured it out.
I asked someone a question, and their answer got me thinking about timeouts. I changed the following values in my karma.conf.js file and now it is working:
browserDisconnectTimeout: 10000,
browserDisconnectTolerance: 1,
processKillTimeout: 100000,
It seems that the coverage reporting was taking too long, and simply extending the timeouts makes it work fine. The default timeout is 2000 ms.
Something related to Firefox takes longer than that to write out its coverage files, and that was leading to the error I was seeing.
Increasing the timeout allows everything to be written and Firefox to shut down correctly.
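For context, these are top-level keys in the Karma configuration; a minimal sketch of where they sit in karma.conf.js (all other settings omitted):

// karma.conf.js (fragment) -- only the timeout-related settings from above
module.exports = function (config) {
  config.set({
    browserDisconnectTimeout: 10000,   // default is 2000 ms
    browserDisconnectTolerance: 1,
    processKillTimeout: 100000,        // allow coverage files to finish writing before SIGKILL
    // ...frameworks, browsers, reporters, etc. unchanged
  });
};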
I have a local Selenium server (2.42.2) running with ChromeDriver and Firefox. It seems to get stuck after loading and running client.html. I can see that my functional suite code runs in Node, far enough to execute the main body, but anything in registerSuite never gets called.
Here are the Selenium logs:
http://pastebin.com/KKg5ycvW
I can see the browsers in the Selenium sessions page, but they don't appear to be doing anything.
Try opening a new browser window and pasting in the URL that you see in the existing Selenium ones. Then open the dev tools and see what console errors you are getting.
Intern's runner seems to get stuck in an infinite loop if any uncaught JavaScript errors are thrown during unit tests before the 'reporter' has been set up.
This has been raised as an issue in Intern's GitHub repository.
I'm running my DalekJS tests (0.0.8) successfully in PhantomJS and also in Chromium on a Linux system.
But I have a small problem with Chromium.
After running the tests, the dalek process will not quit. I can only end it with Ctrl+C or by closing Chrome manually.
I would like to implement an automated testing system, so it would be nice if the test process would quit on its own, as it does with PhantomJS or with the Sauce Labs driver.
Is there something I can do about that?
Edit: From the verbose log I see that "dalek-browser-chrome: Shutting down ChromeDriver" is emitted, so the kill code somehow does not work on my Debian 7.
Thanks!
I helped myself with a quick and dirty fix.
It looks like the code does not recognize all of the Chrome processes it needs to kill. Many of the pids and processIDs that are checked are 'undefined'. Maybe it has to do with the fact that I use Chromium on my Debian 7.4 x86 system.
The dirty fix is to add the following code to the index.js of the dalek-browser-chrome module, at line 599, in the function _checkProcesses(), just under the comment "//kill leftover chrome browser processes":
if (process.platform != 'win32') {
  // brute force: kill every process whose command line matches the Chromium install path
  cp.exec('pkill -f /usr/lib/chrom');
}
Of course this will kill all Chromium instances, not only the ones spawned by DalekJS, but for my use case this is sufficient for now.