PHPUnit & Selenium code coverage - coverage metrics stop halfway through test

I'm just getting started with PHPUnit and Selenium, and one problem has been bothering me: I can't seem to get correct coverage figures.
My app takes a user through a multi-step process spanning several pages, each of which is handled in PHP by a display function (to output HTML) and a processing function (to handle the results of POST operations). My baseline test runs through the entire process and completes correctly, having visited each of about seven pages. I've verified this both visually and through assertions in the test case itself.
The issue is that the coverage report indicates that only the first couple of functions are executed and that the others are never visited (despite my visual and test-case checks). I thought the problem was a PHP notice raised in the first function, which might have stopped Xdebug/PHPUnit from collecting stats, but I fixed it and the problem remains.
Is there anything that can stop collection of coverage statistics midway through a test? All the functions in question are in the same file and are called from a (different) central PHP script that chooses which function to call based on an incrementing session variable.
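For concreteness, the dispatcher has roughly this shape (a simplified sketch, not my real code; the file and function names are invented):

    <?php
    // Simplified sketch of the central dispatcher (names invented).
    // Each step lives in steps.php as a display_stepN() / process_stepN() pair.
    session_start();
    require_once 'steps.php';

    $step = isset($_SESSION['step']) ? (int) $_SESSION['step'] : 1;

    if ($_SERVER['REQUEST_METHOD'] === 'POST') {
        call_user_func("process_step$step"); // handle the POSTed form
        $_SESSION['step'] = ++$step;         // advance to the next page
    }
    call_user_func("display_step$step");     // output the HTML for this step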

Related

Workflow from development to testing and merge

I am trying to formalize our development workflow, and here's the first draft. Suggestions on the process and any tweaks for optimization are welcome. I am pretty new to setting up processes like this, so it would be great to have feedback on it. P.S.: We are working on an AWS Serverless application.
Create an issue link in JIRA - is tested by. The link 'is tested by' has no relevance apart from correctly displaying the relation while viewing the story.
Create a new issue type in JIRA - Testcase. This issue type should have some custom fields to fully describe the test case.
For every user story, there will be a set of test cases that are linked to the user story using the Jira linking function. The test cases will be defined by the QA.
The integration test cases will be written in the same branch as the developer's code. The E2E test cases will be written in a separate branch, as they live in a separate repository (open for discussion).
The Test case issue type should also be associated with a workflow that moves through the states New => Under Testing => Success/Failure.
Additionally, we could consider adding the capability in the CI system to automatically move the Test case to Success when the test case passes in CI. (This should be possible using the JIRA API; a minimal sketch of such a call follows this list.) This is completely optional and we will most probably be doing it manually.
When all the Test cases related to a user story have moved to Success, the user story can then be moved to Done.
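A sketch of that optional CI automation, written in PHP purely for illustration (any HTTP client works). The base URL, issue key, transition id and credentials are placeholders; the available transition ids can be listed with a GET on the same endpoint.

    <?php
    // Hypothetical CI hook: transition the linked Testcase issue once the build passes.
    $issueKey     = 'PROJ-123';   // placeholder issue key
    $transitionId = '31';         // placeholder id of the "Success"/"Closed" transition
    $payload      = json_encode(['transition' => ['id' => $transitionId]]);

    $ch = curl_init("https://your-domain.atlassian.net/rest/api/2/issue/$issueKey/transitions");
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => $payload,
        CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
        CURLOPT_USERPWD        => 'ci-bot@example.com:API_TOKEN', // Jira Cloud: email + API token
        CURLOPT_RETURNTRANSFER => true,
    ]);
    $response = curl_exec($ch);
    if (curl_getinfo($ch, CURLINFO_HTTP_CODE) !== 204) {          // Jira answers 204 on success
        fwrite(STDERR, "Jira transition failed: $response\n");
    }
    curl_close($ch);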
A few points to note:
We will also be using https://marketplace.atlassian.com/apps/1222843/aio-tests-test-management-for-jira for test management and linking.
The QA should be working on the feature branch from day 1 to add the test cases. Working in the same branch will enable the QA and the developer to always be in sync. This should ensure that the developer is not blocked waiting for the test cases to be completed before the branch can be merged into development.
The feature branch will be reviewed when the pull request is created by the developer. This is to ensure that the review is not left pending until the test cases have been developed and passed; it should help with quick feedback.
The focus here is on the "feature-oriented QA" process to ensure the develop branch is always release-ready and that only well-tested code is merged into the develop branch.
A couple of suggestions:
For your final status, consider using Closed rather than Success/Failure. Success and Failure are outcomes rather than states; you may have other outcomes such as Cancelled or Duplicate. You can use the Resolution field for the outcomes, or create a custom field for Success/Failure and decouple it from both the outcome and the status. You ideally do not want your issue jumping back and forth in your workflow; if Failure is a status, you set yourself up for a lot of that.
You may also want to consider a status after New, such as Test Creation, for the writing of the test case, and a status after that such as Ready for Testing. This would let you see more precisely where the work is in the flow, and also capture how much time is spent writing tests, how long test cases wait, and how much time goes into actually executing tests and remediating defects.
Consider adding a verification rule to your Story workflow that prevents a story from being closed until all the linked test cases are closed.
AIO Tests for Jira, unlike other test management systems, does not clutter Jira by creating tests as issues, so you need not create an issue type at all.
With its zero setup time, you can simply start creating tests against stories. It has a workflow from Draft to Published (essentially equivalent to Ready for Testing).
The AIO Tests Jira panel shows the cases associated with a story and their last execution status, giving everyone from Product to the developer a glimpse of the story's testing progress.
You can also create testing tasks and see the entire execution cycle in the AIO Tests panel.
It also has a Jenkins plugin + REST APIs to make it part of your CI/CD process.

How can an uncalled test affect another in Go?

I have a test function TestJobqueue() in https://github.com/VertebrateResequencing/wr/blob/develop/jobqueue/jobqueue_test.go that I can call in isolation: go test -tags netgo ./jobqueue -v -run 'TestJobqueue$'.
I recently started getting test failures related to boltdb (one of my dependencies) bombing out with signal SIGBUS: bus error panics, or just failing normally because the database couldn't be opened. But only when working off an NFS-mounted directory. Fair enough: I or boltdb have some kind of NFS-related bug.
But the thing I can't wrap my head around is that I only get these errors when an entirely different test function exists.
As per the comments in TestREST() in https://github.com/VertebrateResequencing/wr/blob/92fb61ccd7819c8f1edfa8cce8468c4250d40ea7/jobqueue/rest_test.go, if I call Serve(serverConfig) (a function in the package being tested, a function call which is made many times in TestJobqueue() and other test functions) in that test function, TestJobqueue() fails. If I don't, it doesn't.
In short, the failure of tests in one test function can be controlled by the value of a boolean in a test function that I'm not running.
How is this possible?
Edit: to address some points brought up by the first answer, TestJobqueue() is being run in isolation; no other test runs before or after it. If the database files already exist, Serve() deletes them first and then creates a new database for the new set of tests. The odd thing I'm seeking an answer for is how an unexecuted function can have this side effect. I can demonstrate it really is unexecuted by beginning or ending TestREST() with a panic call: the output of that panic is never seen, but the TestJobqueue() failure can still be controlled by the boolean in TestREST() (if the panic comes at the end).
Edit2: this turns out to be caused by an unusual thing I do in TestJobqueue(), which is to call go test on itself. Needless to say, if you do this, strange things can happen...
In short, the failure of tests in one test function can be controlled by the value of a boolean in a test function that I'm not running.
This is not a great summary. Your test starts a server. The other test starts a server; clearly, the problem is there. You appear to have commented out the bit of code that stops the server at the end of the test? You can't run two servers on the same port.
You probably have a port conflict or some network condition that is triggered by running the two servers at once, because they both appear to use a similar (identical?) config loaded like this:
config := internal.ConfigLoad("development", true)
Running with no config uses default values, avoiding the conflict; running with config causes the conflict. So to pin it down, try creating a config with one setting at a time till you find the setting that causes the problem (most likely Port or WebPort). Alternatively, make sure the tests stop the server at the end.
[EDIT] Looks like you have narrowed it down to DBFile config setting by changing one at a time. This implies the server starts a new db instance - if both try to use the same file for a new db, this would cause contention and the second test to run would fail.
It's not entirely clear from your description above what you're doing or what the problem is, so you could try to improve that to state exactly the sequence of actions and the problem. If for example you have previously run a test which creates a db, it could affect later test runs because of the presence of a db file, so your tests are not completely independent.
[EDIT 2 - after further edits to question]
If commenting out TestREST completely solves your problem (or a panic before it starts), and given changing it breaks the other test, you are executing TestREST somehow.
Looking at your code for jobqueue_test, it appears to invoke go test itself, so you might be running more tests than you assume. Given you don't see the panic output, I'd suspect your use of exec.Command in this big test. Try removing bits of the failing test till it works, to narrow down exactly which invocation is running the other test. Calling go test within a test is pretty unusual!
https://github.com/VertebrateResequencing/wr/blob/develop/jobqueue/jobqueue_test.go#L2445
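To make the mechanism concrete: a test that shells out to go test without a -run filter makes the child process execute every Test* function in the package, including ones the parent invocation deliberately skipped. A hypothetical, self-contained illustration of that pattern (not the actual wr code):

    package mypkg

    import (
        "os"
        "os/exec"
        "testing"
    )

    // TestSpawnsItself mimics the pattern in TestJobqueue(): it runs `go test`
    // on its own package. The parent may have been started with
    // -run 'TestSpawnsItself$', but the child run below has no -run filter,
    // so every other test (the equivalent of TestREST) executes there too,
    // along with any side effects it has on shared files, ports, etc.
    func TestSpawnsItself(t *testing.T) {
        if os.Getenv("NESTED_RUN") == "1" {
            return // we are the child process; don't recurse forever
        }
        cmd := exec.Command("go", "test", "-count=1", ".")
        cmd.Env = append(os.Environ(), "NESTED_RUN=1")
        out, err := cmd.CombinedOutput()
        if err != nil {
            t.Fatalf("nested go test failed: %v\n%s", err, out)
        }
    }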

Forcing integration tests to run one at a time in a jenkins pipeline

I have a small collection of integration tests in a class that use Selenium. The idea is that these tests run every time there is a merge to the codebase, with the merge proceeding through the pipeline and a series of tests running against the new code.
The thing is, these Selenium tests have to run one at a time. They use the browser to log into a website, and if more than one person tries to log into the account at once, the account just logs out and the test obviously fails, so I need these tests to run sequentially. I've tried using the @NotThreadSafe annotation, which doesn't seem to have changed anything, and I've searched for some sort of switch or parameter that defines how many tests run at once, with no luck. These tests are using JUnit 4.12.

Selenium grid runs out of free slots

I have a large suite of SpecFlow tests executing against a Selenium grid running locally. The grid has a single host configured for a maximum of 10 Firefox instances. The tests are run from NUnit serially, so I would only expect to require a single session at a time.
However, when approximately half of the test cases have been run, the console window showing output from the hub starts reporting
INFO: Node host [url] has no free slots
Why?
All the test cases are associated with a TearDown method that closes and disposes the WebDriver, although I haven't verified that absolutely every test gets to this method without failing. I would expect a maximum of one session to be active at once. How can I find out what is stopping the host from recycling those sessions?
edit #1:
I think I've narrowed down the cause of the issue - it is indeed to do with not closing the WebDriver. There are [AfterScenario] attributes on the teardown methods that are meant to do this, but they only match a subset of scenarios as they have parameters on them. Removing the parameter so that the teardown associates with every scenario fixes the session exhaustion (or seems to) but there are some tests that expect to reacquire an existing session, so I'll have to fix them separately.
A bit of background: This test suite was inherited as part of a 'complete' solution and it's been left untouched and never run since delivery. I'm putting it back into service and have had to discover its quirks as I go - I didn't write any of this. I've had brief encounters with both Selenium and SpecFlow but never used the two together.
The issue turned out to be a facepalm-level fail - mostly in the sense that I didn't spot it. Some logging code was trying to write to a file that wasn't there; the exception it threw bypassed the call to Dispose() on the WebDriver and was then swallowed with no error reporting, so the sessions were hanging around. Removing the logging code fixed the session exhaustion.
Look on the node (via remote desktop) and see what is happening on the box. It does sound like your test isn't closing out its session properly.

Click does not always work in Selenium

I use Selenium with PHPUnit, and sometimes tests fail with an error condition that seems to be caused by the browser ignoring clickAndWait calls. The test execution passes the clickAndWait command without much delay (even if I set a large timeout), and the next assertion or element access fails; if I take a screenshot, it shows the previous page, as if the click command did not happen at all. This happens both with links and with submit buttons (both normal, no javascript: or similar trickery), non-deterministically. It seems to happen more often on certain controls than others (many are not affected at all), and the frequency of failures is more or less constant in the short term but changes wildly in the long term (sometimes it is 1 in 100, sometimes 1 in 2). I am guessing it is influenced by some sort of server load, but I could not see any obvious correlation.
I work more with Selenium 2 but I have noticed this as well. In my case I suspect other system clicks were interfering with Selenium (purely speculation) since I ran the tests on my machine.
The way I solved it was to instead send a key press of the Return key. For most cases this is equivalent to a click and in my experience has created more stable tests.
A quick caveat is that this technique stopped working for me after version 2.3.0. I submitted a bug report about it if you want to take a look.
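If the key-press route is not available either, another workaround is to retry the click and only fail once the next page demonstrably never appeared. A sketch against the old PHPUnit_Extensions_SeleniumTestCase (Selenium RC) API used in the question; the locators, marker element, timeout and page names are illustrative, not taken from the original test suite:

    <?php
    class MultiStepFlowTest extends PHPUnit_Extensions_SeleniumTestCase
    {
        protected function setUp()
        {
            $this->setBrowser('*firefox');
            $this->setBrowserUrl('http://localhost/');
        }

        // Click, wait, and check for an element that only exists on the next
        // page; if the click was silently ignored, try again a couple of times.
        protected function clickReliably($locator, $nextPageMarker, $attempts = 3)
        {
            for ($i = 0; $i < $attempts; $i++) {
                $this->click($locator);
                try {
                    $this->waitForPageToLoad('30000'); // may time out if the click was swallowed
                } catch (Exception $e) {
                    // ignore the timeout and re-check below
                }
                if ($this->isElementPresent($nextPageMarker)) {
                    return; // the click took effect
                }
            }
            $this->fail("Click on $locator never led to $nextPageMarker");
        }

        public function testSubmitProceedsToStepTwo()
        {
            $this->open('/step1.php');
            $this->clickReliably('css=input[type=submit]', 'id=step2-heading');
        }
    }

This does not cure the underlying flakiness, but it turns a silent non-click into either a successful retry or an explicit failure message naming the control that was ignored.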