phpunit database test after teardown - selenium

I want to execute several tests in a test case / test suite (via Selenium) and hook a database check onto the end of every tearDown() (with an assert, which as far as I know can't be called in tearDown()).
So the workflow would be:
Set up the connection to the database and the schema in setUpBeforeClass()
Set up the database (only the contents) in setUp()
Execute test01
Tear down the contents
Assert that every table in the database has a row count of zero.
So is there a way to hook an additional assert onto the end of every tearDown()?
I tried doing the setup in assertPreConditions() and the teardown in assertPostConditions(), but that's kind of ugly.
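A bare skeleton of the workflow I have in mind looks roughly like this (just a sketch; the PDO connection details are placeholders and the Selenium setup is left out):
class DatabaseResetTest extends PHPUnit_Framework_TestCase
{
    protected static $pdo;

    public static function setUpBeforeClass() {
        // connection + schema (placeholder DSN)
        self::$pdo = new PDO('sqlite::memory:');
        self::$pdo->exec('CREATE TABLE table1 (id INTEGER)');
    }

    public function setUp() {
        // database contents only
        self::$pdo->exec('INSERT INTO table1 (id) VALUES (1)');
    }

    public function test01() {
        $this->assertTrue(true); // the real Selenium steps would go here
    }

    public function tearDown() {
        self::$pdo->exec('DELETE FROM table1');
        // ...and here is where I would like the row-count assertion to run
    }
}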
Thx in advance

It seems you can use an assert anywhere, even in tearDown(). This test case (save as testTearDown.php, run with phpunit testTearDown.php) correctly gives a fail:
class TearDownTest extends PHPUnit_Framework_TestCase
{
    /** */
    public function setUp() {
        echo "In setUp\n";
        //$this->assertTrue(false);
    }

    /** */
    public function tearDown() {
        echo "In tearDown\n";
        $this->assertTrue(false);
    }

    /** */
    public function assertPreConditions() {
        echo "In assertPreConditions\n";
        //$this->assertTrue(false);
    }

    /** */
    public function assertPostConditions() {
        echo "In assertPostConditions\n";
        //$this->assertTrue(false);
    }

    /** */
    public function testAdd() {
        $this->assertEquals(3, 1 + 2);
    }
}
But one rule of thumb I have is: if the software is making my life difficult, maybe I'm doing something wrong. You wrote that after the tearDown code has run you want to "assert that every table in the database has a row count of zero."
This sounds like you want to validate that your unit test code has been written correctly, in this case that tearDown() has done its job. It has nothing to do with the code you are actually testing. Using PHPUnit's assert mechanisms for that would be confusing and misleading; in my sample above, when tearDown() asserts, PHPUnit tells me that testAdd() has failed. If it is actually code in tearDown() that is not working correctly, I want to be told that instead. So, for validating your unit test code, why not use PHP's own assert()?
So I wonder if the tearDown() function you want could look something like this:
public function tearDown(){
    tidyUpDatabase();
    $cnt = selectCount("table1");
    assert($cnt == 0);
    $cnt = selectCount("table2");
    assert($cnt == 0);
}
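If you want to cover every table without listing them by hand, the same idea generalises to a loop; a minimal sketch, assuming a PDO connection stored in $this->pdo and a hard-coded table list (both are placeholders, not from the question):
public function tearDown(){
    tidyUpDatabase();
    // placeholder table list; this could also be read from information_schema
    foreach (array('table1', 'table2', 'table3') as $table) {
        $cnt = (int) $this->pdo->query("SELECT COUNT(*) FROM $table")->fetchColumn();
        assert($cnt == 0); // every table should be empty after teardown
    }
}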


Can't perform a Laravel 4 action/route test more than once

I'm running into a Laravel 4 testing issue: an action/route test can only be run once, and it has to be the first test run. Any subsequent action/route test will fail with an exception before the assert is even called.
Route/action tests pass as long as they are the first test run.
Non-route/action tests run normally, although they cause subsequent route/action tests to throw an exception.
It's important to note that the tests in question don't fail on an assertion; they throw an exception when the action is fired, for example:
Symfony\Component\Routing\Exception\RouteNotFoundException: Unable to generate a URL for the named route "home" as such route does not exist.
Sample test class:
class ExampleTest extends TestCase {

    // passes
    public function testOne()
    {
        $class = MyApp::ApiResponse();
        $this->assertInstanceOf('\MyApp\Services\ApiResponse', $class);
    }

    // this fails unless moved to the top of the file
    public function testRoute()
    {
        $this->route('GET', 'home');
        $this->assertTrue($this->client->getResponse()->isOk());
    }

    // passes
    public function testTwo()
    {
        $class = MyApp::ProjectService();
        $this->assertInstanceOf('\MyApp\Services\ProjectService', $class);
    }
}
This is implementation-specific; a fresh Laravel 4 project does not exhibit the issue. What could be causing this behaviour, and how would you go about tracking down the problem?
In this case, the routes file was being loaded with an include_once call, so when subsequent tests were run the routes were empty.
Changing it to include() fixed the issue exhibited in the question.
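In other words, the change was essentially this (a sketch; the exact file and path are illustrative, not taken from the question):
// wherever the test bootstrap pulls the routes in (illustrative)

// Before: the second and later tests get an application with no routes,
// because include_once refuses to load the file a second time.
include_once app_path() . '/routes.php';

// After: every freshly built application re-registers its routes.
include app_path() . '/routes.php';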

markTestSkipped() not working with sausage-based Selenium tests via Sauce Labs

I am using the sausage framework to run parallelized PHPUnit-based Selenium WebDriver tests through Sauce Labs. Everything is working well until I want to mark a test as skipped via markTestSkipped(). I have tried this in two ways:
Calling markTestSkipped() in the test method itself:
class MyTest
{
    public function setUp()
    {
        // Some set up
        parent::setUp();
    }

    public function testMyTest()
    {
        $this->markTestSkipped('Skipping test');
    }
}
In this case, the test gets skipped, but only after setUp has run, which does a lot of unnecessary work for a skipped test. To top it off, PHPUnit does not track the test as skipped -- in fact it doesn't track the test at all. I get the following output:
Running phpunit in 4 processes with <PATH_TO>/vendor/bin/phpunit
Time: <num> seconds, Memory: <mem used>
OK (0 tests, 0 assertions)
The other method is to call markTestSkipped() in the setUp method:
class MyTest
{
    public function setUp()
    {
        if (!$this->shouldRunTest()) {
            $this->markTestSkipped('Skipping test');
        } else {
            parent::setUp();
        }
    }

    protected function shouldRunTest()
    {
        $shouldrun = true; // placeholder for the checks that decide whether the test should run
        return $shouldrun;
    }

    public function testMyTest()
    {
        // run the test
    }
}
In this case, setUp is skipped, but the test is still not tracked as skipped; PHPUnit still returns the output above. Any ideas why PHPUnit is not tracking my skipped tests when they are executed in this fashion?
It looks like, at the moment, there is no support for logging markTestSkipped() and markTestIncomplete() results in PHPUnit when using paratest. More accurately, PHPUnit won't log tests which call markTestSkipped() or markTestIncomplete() when it is called with arguments['junitLogfile'] set -- and paratest calls PHPUnit with a junitLogfile.
For more info, see: https://github.com/brianium/paratest/issues/60
I suppose I can hack away at either PHPUnit or paratest...
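One possible workaround, rather than patching either project, is to record skips yourself with a test listener; an untested sketch against the PHPUnit 4.x listener API (the log path is made up):
class SkippedTestLogger extends PHPUnit_Framework_BaseTestListener
{
    public function addSkippedTest(PHPUnit_Framework_Test $test, Exception $e, $time)
    {
        // side-channel log, since skips never reach the JUnit logfile
        file_put_contents(
            '/tmp/skipped-tests.log', // made-up path
            get_class($test) . ': ' . $e->getMessage() . PHP_EOL,
            FILE_APPEND
        );
    }
}
The listener would then be registered through the <listeners> section of phpunit.xml so that paratest's worker processes pick it up.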

Is it possible to skip a scenario with Cucumber-JVM at run-time

I want to add a tag @skiponchrome to a scenario; this should skip the scenario when running a Selenium test with the Chrome browser. The reason to do this is that some scenarios work in some environments and not in others. This might not even be specific to browser testing and could apply to other situations, for example OS platforms.
Example hook:
#Before("#skiponchrome") // this works
public void beforeScenario() {
if(currentBrowser == 'chrome') { // this works
// Skip scenario code here
}
}
I know it is possible to define ~@skiponchrome in the Cucumber tags to exclude the tag, but I would like to skip it at run-time. This way I don't have to think about which tags to skip in advance when I start a test run in a certain environment.
I would like to create a hook that catches the tag and skips the scenario without reporting a failure/error. Is this possible?
I realized that this is a late update to an already answered question, but I want to add one more option directly supported by cucumber-jvm:
@Before // (the Cucumber one)
public void setup() {
    Assume.assumeTrue(weAreInPreProductionEnvironment);
}
"and the scenario will be marked as ignored (but the test will pass) if weAreInPreProductionEnvironment is false."
You will need to add
import org.junit.Assume;
The major difference from the accepted answer is that JUnit assumption failures behave just like pending.
Important: because of a bug fix you will need cucumber-jvm release 1.2.5, which as of this writing is the latest. For example, the above will generate a failure instead of a pending result with cucumber-java8-1.2.3.jar.
I really prefer to be explicit about which tests are being run, by having separate run configurations defined for each environment. I also like to keep the number of tags I use to a minimum, to keep the number of configurations manageable.
I don't think it's possible to achieve what you want with tags alone. You would need to write a custom JUnit test runner to use in place of @RunWith(Cucumber.class). Take a look at the Cucumber implementation to see how things work. You would need to alter the RuntimeOptions created by the RuntimeOptionsFactory to include/exclude tags depending on the browser, or another runtime condition.
Alternatively, you could consider writing a small script which invokes your test suite, building up the list of tags to include/exclude dynamically, depending on the environment you're running in. I would consider this the more maintainable, cleaner solution.
It's actually really easy. If you dig through the Cucumber-JVM and JUnit 4 source code, you'll find that JUnit makes skipping during runtime very easy (it's just undocumented).
Take a look at the following source code from JUnit 4's ParentRunner, which Cucumber-JVM's FeatureRunner (used by Cucumber, the default Cucumber runner) extends:
@Override
public void run(final RunNotifier notifier) {
    EachTestNotifier testNotifier = new EachTestNotifier(notifier,
            getDescription());
    try {
        Statement statement = classBlock(notifier);
        statement.evaluate();
    } catch (AssumptionViolatedException e) {
        testNotifier.fireTestIgnored();
    } catch (StoppedByUserException e) {
        throw e;
    } catch (Throwable e) {
        testNotifier.addFailure(e);
    }
}
This is how JUnit decides what result to show. If the test is successful it will show a pass, but it's also possible to @Ignore a test in JUnit, so what happens in that case? Well, an AssumptionViolatedException is caught, and the runner (the Cucumber FeatureRunner in this case) fires a test-ignored notification instead of a failure.
So your example becomes:
#Before("#skiponchrome") // this works
public void beforeScenario() {
if(currentBrowser == 'chrome') { // this works
throw new AssumptionViolatedException("Not supported on Chrome")
}
}
If you've used vanilla JUnit 4 before, you'll remember that @Ignore takes an optional message that is displayed when a test is ignored by the runner. AssumptionViolatedException carries a message too, so you should see it in your test output after a test is skipped this way, without having to write your own custom runner.
I had the same challenge, where I needed to skip a scenario based on a flag that I obtain from the application dynamically at run-time, which tells me whether the feature to be tested is enabled on the application or not.
So this is how I wrote my logic in the step-definitions file, where we have the glue code for each step.
I used a unique tag '@Feature-01AXX' to mark the scenarios that need to be run only when that feature (code) is available on the application.
So for every scenario, the tag '@Feature-01AXX' is checked first; if it is present, the check for the availability of the feature is made, and only then is the scenario picked up for running. Otherwise it is simply skipped, and JUnit will not mark this as a failure; instead it will be marked as passed. So the final result, if these tests did not run due to the unavailability of the feature, is still a pass, which is nice.
@Before
public void before(final Scenario scenario) throws Exception {
    /*
     * my other pre-setup tasks for each scenario.
     */
    // get all the scenario tags from the scenario head.
    final ArrayList<String> scenarioTags = new ArrayList<>();
    scenarioTags.addAll(scenario.getSourceTagNames());
    // check if the feature is enabled on the appliance, so that the tests can be run.
    if (checkForSkipScenario(scenarioTags)) {
        throw new AssumptionViolatedException("The feature 'Feature-01AXX' is not enabled on this appliance, so skipping");
    }
}

private boolean checkForSkipScenario(final ArrayList<String> scenarioTags) {
    // I use the tag "@Feature-01AXX" on the scenarios which need to run only when the feature is enabled on the appliance/application
    if (scenarioTags.contains("@Feature-01AXX") && !isTheFeatureEnabled()) { // if the feature is not enabled, we need to skip the scenario.
        return true;
    }
    return false;
}

private boolean isTheFeatureEnabled() {
    /*
     * my logic to check if the feature is available/enabled on the application.
     * in my case it's a REST API call; I parse the JSON and check if the feature is enabled.
     * if it is enabled return 'true', else return 'false'
     */
}
I've implemented a customized JUnit runner as below. The idea is to add tags at runtime.
Say a scenario needs new users; we tag such scenarios with "@requires_new_user". If we then run our tests in an environment that does not let us register new users easily (say production), we will figure out that we are not able to get a new user, and "not @requires_new_user" is added to the Cucumber options to skip those scenarios.
This is the cleanest solution I can imagine right now.
public class WebuiCucumberRunner extends ParentRunner<FeatureRunner> {

    private final JUnitReporter jUnitReporter;
    private final List<FeatureRunner> children = new ArrayList<FeatureRunner>();
    private final Runtime runtime;
    private final Formatter formatter;

    /**
     * Constructor called by JUnit.
     *
     * @param clazz the class with the @RunWith annotation.
     * @throws java.io.IOException if there is a problem
     * @throws org.junit.runners.model.InitializationError if there is another problem
     */
    public WebuiCucumberRunner(Class clazz) throws InitializationError, IOException {
        super(clazz);
        ClassLoader classLoader = clazz.getClassLoader();
        Assertions.assertNoCucumberAnnotatedMethods(clazz);

        RuntimeOptionsFactory runtimeOptionsFactory = new RuntimeOptionsFactory(clazz);
        RuntimeOptions runtimeOptions = runtimeOptionsFactory.create();
        addTagFiltersAsPerTestRuntimeEnvironment(runtimeOptions);

        ResourceLoader resourceLoader = new MultiLoader(classLoader);
        runtime = createRuntime(resourceLoader, classLoader, runtimeOptions);
        formatter = runtimeOptions.formatter(classLoader);

        final JUnitOptions junitOptions = new JUnitOptions(runtimeOptions.getJunitOptions());
        final List<CucumberFeature> cucumberFeatures = runtimeOptions.cucumberFeatures(resourceLoader, runtime.getEventBus());
        jUnitReporter = new JUnitReporter(runtime.getEventBus(), runtimeOptions.isStrict(), junitOptions);
        addChildren(cucumberFeatures);
    }

    private void addTagFiltersAsPerTestRuntimeEnvironment(RuntimeOptions runtimeOptions)
    {
        String channel = Configuration.TENANT_NAME.getValue().toLowerCase();
        runtimeOptions.getTagFilters().add("@" + channel);
        if (!TestEnvironment.getEnvironment().isNewUserAvailable()) {
            runtimeOptions.getTagFilters().add("not @requires_new_user");
        }
    }

    ...
}
Or you can extend the official Cucumber JUnit test runner, cucumber.api.junit.Cucumber, and override this method:
/**
 * Create the Runtime. Can be overridden to customize the runtime or backend.
 *
 * @param resourceLoader used to load resources
 * @param classLoader used to load classes
 * @param runtimeOptions configuration
 * @return a new runtime
 * @throws InitializationError if a JUnit error occurred
 * @throws IOException if a class or resource could not be loaded
 * @deprecated Neither the runtime nor the backend or any of the classes involved in their construction are part of
 * the public API. As such they should not be exposed. The recommended way to observe the cucumber process is to
 * listen to events by using a plugin. For example the JSONFormatter.
 */
@Deprecated
protected Runtime createRuntime(ResourceLoader resourceLoader, ClassLoader classLoader,
        RuntimeOptions runtimeOptions) throws InitializationError, IOException {
    ClassFinder classFinder = new ResourceLoaderClassFinder(resourceLoader, classLoader);
    return new Runtime(resourceLoader, classFinder, classLoader, runtimeOptions);
}
You can manipulate runtimeOptions here as you wish. But the method is marked as deprecated, so use it with caution.
If you're using Maven, you could use a browser profile and then set the appropriate ~ exclude tags there.
Unless you're asking how to run this from the command line, in which case you tag the scenario with @skipchrome and then, when you run Cucumber, set the Cucumber options to tags = {"~@skipchrome"}.
If you simply wish to skip a scenario temporarily (for example, while writing the scenarios), you can comment it out (Ctrl+/ in Eclipse or IntelliJ).

phpunit selenium usage

My question is about PHPUnit + Selenium usage.
The standard usage of this combination is
class BlaBlaTest extends PHPUnit_Extensions_SeleniumTestCase
{ ... }
OR
class BlaBlaTest extends PHPUnit_Extensions_Selenium2TestCase
{ ... }
The first one (PHPUnit_Extensions_SeleniumTestCase) is not very convenient to use
(e.g. there is no such thing as $this->elements('xpath')).
The second one (PHPUnit_Extensions_Selenium2TestCase) also has limited functionality
(e.g. there are no such functions as waitForPageToLoad() or clickAndWait(),
and using something like $this->timeouts()->implicitWait(10000) looks to me like
complete nonsense).
Is it possible to combine the functionality of
PHPUnit_Extensions_SeleniumTestCase + PHPUnit_Extensions_Selenium2TestCase
in one test class?
Maybe somebody knows good alternatives to PHPUnit + Selenium?
Inspired by Dan I've written this for use in PHPUnit_Extensions_Selenium2TestCase and it seems to work ok:
/**
 * @param string $id   - DOM id
 * @param int    $wait - maximum wait (in seconds)
 * @return element|false - false on time-out
 */
protected function waitForId($id, $wait = 30) {
    for ($i = 0; $i <= $wait; $i++) {
        try {
            $x = $this->byId($id);
            return $x;
        }
        catch (Exception $e) {
            sleep(1);
        }
    }
    return false;
}
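Usage in a test could then look like this (a sketch; the element id is made up):
$button = $this->waitForId('submit-button'); // made-up id
$this->assertNotSame(false, $button, 'Timed out waiting for #submit-button');
$button->click();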
Sorry for resurrecting this, but I'd like to clear up some confusion for anyone stumbling across it.
You're saying that you want functionality from RC and WebDriver combined. There are workarounds for that, but I wouldn't recommend them. First you'll need to understand the difference between the two frameworks.
My brief definitions...
Selenium RC (PHPUnit_Extensions_SeleniumTestCase) is script oriented. By that I mean it will run your tests exactly how you expect the page to respond. This often requires more explicit commands, such as the waitForPageToLoad() that you mentioned, when waiting for elements to appear and/or pages to load.
Selenium WebDriver (PHPUnit_Extensions_Selenium2TestCase) uses a more native approach. It cuts out the middle man and runs your tests through your chosen browser's driver. Using the waitForPageToLoad() example, you won't need to put that explicitly wherever you open a page in your code, because the WebDriver already knows when the page is loading and will resume the test when the page load request is complete.
If you need to define an implicit timeout in WebDriver, you only need to place it in the setUp() method of a base Selenium class that your test classes extend:
class BaseSelenium extends PHPUnit_Extensions_Selenium2TestCase {

    protected function setUp() {
        // Whatever else needs to be in here, like setting
        // the host url and port etc.
        $this->setSeleniumServerRequestsTimeout( 100 ); // <- seconds
    }
}
That will happily span across all of your tests and will time out whenever a page takes longer than that to load.
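A test class then only has to extend it (a sketch; the route and page title are made up):
class LoginTest extends BaseSelenium {

    public function testLoginPageLoads() {
        $this->url('/login');                         // made-up route
        $this->assertEquals('Login', $this->title()); // no explicit waitForPageToLoad() needed
    }
}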
Although I personally prefer WebDriver over RC (mainly because it's a lot faster!), there is a big difference between the methods available. Whenever I got stuck while recently converting a lot of RC tests to WebDriver, I always turned to this first. It's a great reference for nearly every situation.
I hope that helps.
For functions such as waitForPageToLoad() and clickAndWait(), which are unavailable in Selenium 2, you can reproduce the behaviour by using try/catch blocks in conjunction with implicit waits and explicit sleeps.
So, for a function like clickAndWait(), you can define which element you are waiting for and then check for that element's existence for a set number of seconds. If the element doesn't exist yet, the try/catch block stops the error from propagating; if the element does exist, you can continue. If the element still doesn't exist after the set amount of time, bubble up the error.
I would recommend using Selenium 2 and then reproducing any functionality that you feel is missing within your framework.
EXAMPLE:
def wait_for_present(element, retries = 10, seconds = 2)
  for i in 0...retries
    return true if element.present?
    sleep(seconds)
  end
  return false
end
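A rough PHP equivalent for PHPUnit_Extensions_Selenium2TestCase, building on the waitForId() helper above, could look like this (a sketch; clickAndWaitForId() and its parameters are my own names, not part of the extension):
/**
 * Click an element, then wait until another element shows up.
 * Returns the awaited element, or fails the test on time-out.
 */
protected function clickAndWaitForId($clickId, $waitForId, $seconds = 30) {
    $this->byId($clickId)->click();
    for ($i = 0; $i <= $seconds; $i++) {
        try {
            return $this->byId($waitForId); // element appeared, carry on
        } catch (Exception $e) {
            sleep(1); // not there yet, keep polling
        }
    }
    $this->fail("Timed out waiting for '$waitForId' after clicking '$clickId'");
}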
You can try using traits to extend two different classes: http://php.net/manual/en/language.oop5.traits.php
class PHPUnit_Extensions_SeleniumTestCase {
    ...
}
Change the PHPUnit_Extensions_Selenium2TestCase class to a trait:
trait PHPUnit_Extensions_Selenium2TestCase {
    ...
}
class blabla extends PHPUnit_Extensions_SeleniumTestCase {
    use PHPUnit_Extensions_Selenium2TestCase;
    // your tests here...
}

Do I need to recreate my driver for each test?

I've just started using Selenium - currently I'm only interested in IE as it's an intranet site and not for public consumption. I'm using IEDriverServer.exe to set my browser sessions up, but I'm unsure as to whether I need to recreate it for each test, or if it will maintain atomicity of the browser sessions/tests automatically. I've not been able to find any information on this as most of the examples are for a single test rather than a batch of unit tests.
So currently I have
[TestInitialize]
public void SetUp()
{
    _driver = new InternetExplorerDriver();
}
and
[TestCleanup]
public void TearDown()
{
    _driver.Close();
    _driver.Quit();
}
Is this correct or am I doing extra unnecessary work for each test? Should I just initialise it when it's declared? If so, how do I manage its lifecycle? I presume I can call .Close() for each test to kill the browser window, but what about .Quit()?
I use Selenium with NUnit, but you don't need to recreate the driver every time. Since you are using MSTest, I would do something like this:
// Note: MSTest requires these methods (and therefore _driver) to be static,
// and the ClassInitialize method must accept a TestContext parameter.
[ClassInitialize]
public static void SetUp(TestContext context)
{
    _driver = new InternetExplorerDriver();
}

[ClassCleanup]
public static void TearDown()
{
    _driver.Close();
    _driver.Quit();
}
ClassInitialize runs code once when the test class is initialised, and ClassCleanup runs code once when the test class is torn down / disposed.
Although this is still not guaranteed, because the test runner may run the tests on several threads:
http://blogs.msdn.com/b/nnaderi/archive/2007/02/17/explaining-execution-order.aspx
You must also think about what kind of state you want your tests to start from each time. The most common reason for shutting down and starting a new browser session each time is so that you have a clean slate to work with.
Sometimes this is unnecessary work, as you've pointed out, but what is your tests' starting point?
For me, I have one browser per test class, with a method that signs out of my web application and leaves it at the login page at the end of each test.