Click does not always work in Selenium

I use Selenium with PHPUnit, and sometimes tests fail with an error condition that seems to be caused by the browser ignoring clickAndWait calls. The test execution passes the clickAndWait command without much delay (even if I set a large timeout), and the next assertion or element access fails; if I take a screenshot, it shows the previous page, as if the click command did not happen at all. This happens both with links and with submit buttons (both normal, no javascript: or similar trickery), non-deterministically. It seems to happen more often on certain controls than others (many are not affected at all), and the frequency of failing tests seems more or less constant in the short term but changes wildly in the long term (sometimes it is 1 in 100, sometimes 1 in 2). I am guessing it is influenced by some sort of server load, but I could not see any obvious correlation.

I work more with Selenium 2, but I have noticed this as well. In my case I suspect other system clicks were interfering with Selenium (purely speculation), since I ran the tests on my machine.
The way I worked around it was to send a Return key press instead. In most cases this is equivalent to a click, and in my experience it has produced more stable tests.
A quick caveat: this technique stopped working for me after version 2.3.0. I submitted a bug report about it if you want to take a look.
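In WebDriver terms the substitution looks roughly like this sketch (the URL and locator are placeholders):

import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class ReturnKeyClick {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.get("http://example.com/form"); // placeholder URL
        // Instead of element.click(), send a Return key press to the control;
        // for links and submit buttons this usually triggers the same action.
        driver.findElement(By.id("submit")).sendKeys(Keys.RETURN); // placeholder locator
        driver.quit();
    }
}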

Related

Is there a way to stop Selenium from going to other pages?

I have a Selenium-based scraper. Occasionally one of its locators fails in a way that is seemingly undetectable to me, and it "finds" the wrong button. Usually this isn't a big deal, since I never expected it to succeed 100% of the time; the problem is that this particular button takes me to a different session, thereby ending that entire session of scraping.
I was wondering if there is a way to configure Selenium to temporarily not load any new pages and stay on the current page only, so that misclicks like this won't have any effect.
(Of course the ideal and "real" solution is to find a way to fix or detect the locator's mistake, but I want to have this at least as a temporary solution, if it is possible.)
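For illustration, the kind of detection I mean would be a guard along these lines; the selector and expected label are made up, and it only helps if the wrong button is distinguishable by some attribute:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class GuardedClick {
    // Click the element only if it looks like the button we meant to find.
    static void clickIfExpected(WebDriver driver) {
        WebElement button = driver.findElement(By.cssSelector("button.next")); // placeholder locator
        String label = button.getText().trim();
        if (!"Next".equals(label)) { // placeholder expected label
            throw new IllegalStateException("Locator matched an unexpected element: " + label);
        }
        button.click();
    }
}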

Selenium grid runs out of free slots

I have a large suite of SpecFlow tests executing against a Selenium grid running locally. The grid has a single host configured for a maximum of 10 Firefox instances. The tests are run from NUnit serially, so I would only expect to require a single session at a time.
However, when approximately half of the test cases have been run, the console window reporting output from the hub starts reporting
INFO: Node host [url] has no free slots
Why?
All the test cases are associated with a TearDown method that closes and disposes the WebDriver, although I haven't verified that absolutely every test gets to this method without failing. I would expect a maximum of one session to be active at once. How can I find out what is stopping the host from recycling those sessions?
edit #1:
I think I've narrowed down the cause of the issue - it is indeed to do with not closing the WebDriver. There are [AfterScenario] attributes on the teardown methods that are meant to do this, but they only match a subset of scenarios because they have parameters on them. Removing the parameters so that the teardown associates with every scenario fixes the session exhaustion (or seems to), but some tests expect to reacquire an existing session, so I'll have to fix them separately.
A bit of background: This test suite was inherited as part of a 'complete' solution and it's been left untouched and never run since delivery. I'm putting it back into service and have had to discover its quirks as I go - I didn't write any of this. I've had brief encounters with both Selenium and SpecFlow but never used the two together.
The issue turned out to be a facepalm-level fail - mostly in the sense that I didn't spot it. Some logging code was trying to write to a file that wasn't there; the thrown exception bypassed the call to Dispose() on the WebDriver and was then swallowed with no error reporting. Therefore the sessions were hanging around. Removing the logging code fixed the session exhaustion.
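The general shape of the fix, sketched in Java for illustration (the actual suite is C#/SpecFlow, where a try/finally around Dispose() plays the same role):

import org.junit.After;
import org.openqa.selenium.WebDriver;

public class ExampleTest {
    private WebDriver driver; // created in the setup method (omitted)

    // Stand-in for the logging call that threw when its file was missing.
    private void logResult() { /* ... */ }

    @After
    public void tearDown() {
        try {
            logResult();
        } finally {
            if (driver != null) {
                driver.quit(); // always runs, so the grid slot is released
            }
        }
    }
}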
Look on the node (remote desktop) and see what is happening on the box. It does sound like your test isn't closing out its session properly.

Display history of a single test result in Jenkins - additional plugin or config issue?

Currently our Jenkins server only displays a history/graph for the overall number of passed/skipped/failed tests - I'm assuming that's the behavior out of the box.
If you select a single test, you'll get information on how long the test has been failing (assuming it did fail).
However, what we'd like to see is a history for that single test across the different builds, to identify whether the test has been failing in the past (and when) even though it just passed. If you find a build where it failed, you could click on it and investigate what might have caused the failure; if it passes again, you could check whether something actually fixed the test or whether it was failing randomly all along.
Is this something that can be done somehow through the config, or do we need an additional plugin for this? If yes, which one?
Not sure if this makes much difference, but we're using Java (Maven) & TestNG (Surefire).
Both the TestNG plugin and the JUnit plugin will actually display the history of test results.
You just need to pick a given result and then:
For JUnit, click on "History" on the left side.
For TestNG, you will see the history in the graph above the result. You can click on the bars to see the older results, and if you click closer to the edge, the scope of the displayed results will adjust.
The Test Results Analyzer plugin does the job for me. There appears to be other suitable plugins out there as well.
https://wiki.jenkins-ci.org/display/JENKINS/Test+Results+Analyzer+Plugin
Does the Static Code Analysis plugin help?

PHPUnit & Selenium code coverage - coverage metrics stop halfway through test

I'm just getting started with PHPUnit and Selenium, yet one problem has been bothering me: I can't seem to get correct coverage figures.
My app takes a user through a multi-step process that involves multiple pages, each of which is handled in PHP by a display function (to output HTML) and a processing function (to handle the results of POST operations). My baseline test runs through the entire process and completes correctly, having visited each of about seven pages. I've verified this both visually and through assertions in the test case itself.
The issue is that the coverage report indicates that only the first couple of functions are executed and that the others are never visited (despite my visual and test-case checks). I thought the problem was a PHP notice that occurred during the first function, which might stop Xdebug/PHPUnit from collecting stats, but I fixed it and the problem remains.
Is there anything that can stop collection of coverage statistics mid-way through a test? All the functions in question are in the same file and are called from a (different) central PHP script which chooses which function to call based on an incrementing session variable.

Possible issue with running selenium tests on one machine concurrently

I have multiple similar sites (same layout, just different data), and each of them has a drop-down menu that appears on mouse over (and disappears on mouse out).
I am using Selenium 2 and WebDriver, and I have one Selenium test case that basically does the mouse over and makes sure each of the links in the drop-down menu works.
I am using Selenium Grid, so I have a hub and a few test machines.
Because I have many sites (a few hundred) to test, I am thinking of making each machine run the test case against multiple sites in parallel.
My concern: since there can be only one active browser window at a time, will it cause issues if WebDriver tries to perform Action.moveToElement() on multiple browsers at roughly the same time? Will only the active browser perform Action.moveToElement() properly while the other browsers fail? If there will be an issue, is there any workaround?
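For reference, the hover step in question looks roughly like this (the locators are placeholders):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.interactions.Actions;

public class HoverMenu {
    // Hover over the menu to open it, then click a link inside it.
    static void openMenuAndClick(WebDriver driver) {
        WebElement menu = driver.findElement(By.id("main-menu")); // placeholder id
        new Actions(driver).moveToElement(menu).perform();
        driver.findElement(By.cssSelector("#main-menu a")).click();
    }
}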
I have tried it using JUnitCore.runClasses(ParallelComputer.classes(), SomeClass1.class, SomeClass2.class, SomeClass3.class); it decreased the percentage of passing tests from 100% to about 67% when running three tests on a machine. Not good =/.
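For reference, a minimal sketch of that invocation (SomeClass1..3 stand in for your own test classes):

import org.junit.experimental.ParallelComputer;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

public class ParallelRun {
    public static void main(String[] args) {
        // Run the test classes in parallel, one thread per class.
        // SomeClass1..3 are placeholders for the suite's own test classes.
        Result result = JUnitCore.runClasses(ParallelComputer.classes(),
                SomeClass1.class, SomeClass2.class, SomeClass3.class);
        System.out.printf("Ran %d tests, %d failed%n",
                result.getRunCount(), result.getFailureCount());
    }
}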
The good part: Firefox actually can do it in parallel. If the FF instances are delayed relative to each other so they don't do the same thing at the same time, it works better. Some of the failures happened during a Firefox boot-up, so if you can minimize closing and opening windows, do it. But still, sometimes it just fails for no reason.
If the saved time would really be useful to you, then go for it: log all failed tests and run them again after the first round, this time one at a time.
You could also solve this, depending on your ultimate goal of testing, by not using the Actions class with its mouse-movement click, but instead using the plain WebDriver findElement(...).click() method or the JavascriptExecutor method. That would probably be less contentious when running multiple windows at the same time. If the Actions class, when performing a mouse movement, uses native calls at all (such as "move to point"), then with one browser window on top of another I would guess it's possible that the movement target could be masked by the other window. I am really not sure about this, just giving you another idea to try.
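A sketch of both alternatives (the selector is a placeholder):

import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class ClickAlternatives {
    // Plain WebDriver click: no native mouse movement involved.
    static void plainClick(WebDriver driver) {
        driver.findElement(By.cssSelector("#menu a.target")).click(); // placeholder selector
    }

    // JavaScript-executed click: dispatched inside the page, independent of
    // window focus or pointer position.
    static void jsClick(WebDriver driver) {
        WebElement link = driver.findElement(By.cssSelector("#menu a.target")); // placeholder selector
        ((JavascriptExecutor) driver).executeScript("arguments[0].click();", link);
    }
}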