phpunit: reusing dataprovider - iterator

I want to run multiple test cases against the content of a whole set of files. I could use a data provider to load my files and use the same provider for all the tests like this:
class mytest extends PHPUnit_Framework_TestCase {
    public function contentProvider() {
        // each data set must be an array of arguments
        return array_map(function ($path) {
            return [$path];
        }, glob(__DIR__ . '/files/*'));
    }

    /**
     * @dataProvider contentProvider
     */
    public function test1($file) {
        $content = file_get_contents($file);
        // assert something here
    }

    ...

    /**
     * @dataProvider contentProvider
     */
    public function test10($file) {
        $content = file_get_contents($file);
        // assert something here
    }
}
Obviously that means if I have 10 test cases, each file is loaded 10 times.
I could adjust the data provider to load all files and return one big structure with all the contents. But since the provider is called separately for each test it would still mean each file is loaded 10 times and in addition it would load all files into memory at the same time.
I could of course condense the 10 tests into one test with 10 assertions, but then it would abort right after the first assertion fails and I really want a report of all things that are wrong with the file.
I know that data providers can also return an iterator. But phpunit seems to rerun the iterator separately for each test, still resulting in loading each file 10 times.
Is there a clever way to make phpunit run an iterator only once and pass the result to each test, before continuing?

Test dependencies
If one test depends on another, you should use the @depends annotation to declare the dependency. The data returned by the dependency is passed to the test that declares it.
But if a test declared as a dependency fails, the dependent test is not executed.
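For illustration, a minimal sketch of the mechanism (the class, test, and variable names here are made up, not from the question): the second test receives whatever the first one returns, and is skipped if the first one fails.

<?php
use PHPUnit\Framework\TestCase;

class DependsSketchTest extends TestCase
{
    public function testLoad()
    {
        $content = '<?php echo "hi";';
        $this->assertNotEmpty($content);
        return $content; // handed to every test that @depends on this one
    }

    /**
     * @depends testLoad
     */
    public function testParse($content)
    {
        // runs only if testLoad() passed; $content is its return value
        $this->assertStringStartsWith('<?php', $content);
    }
}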
Statically stored data
To share data between tests, it's common to set up fixtures statically.
You can use the same method with data providers:
<?php
use PHPUnit\Framework\TestCase;

class MyTest extends TestCase
{
    private static $filesContent = NULL;

    public function filesContentProvider()
    {
        if (self::$filesContent === NULL) {
            $paths = glob(__DIR__ . '/files/*');
            self::$filesContent = array_map(function ($path) {
                return [file_get_contents($path)];
            }, $paths);
        }
        return self::$filesContent;
    }

    /**
     * @dataProvider filesContentProvider
     */
    public function test1($content)
    {
        $this->assertNotEmpty($content, 'File must not be empty.');
    }

    /**
     * @dataProvider filesContentProvider
     */
    public function test2($content)
    {
        $this->assertStringStartsWith('<?php', $content,
            'File must start with the PHP start tag.');
    }
}
As you can see, it's not supported out of the box. Since the test class instance is destroyed after each test method execution, you have to store the initialized data in a static class property.
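One caveat worth knowing: PHPUnit invokes data providers while it builds the test suite, before setUpBeforeClass() and any test runs, so caching inside the provider itself, as above, is the reliable place to do it. The flip side is that the static cache keeps every file's content in memory for the entire run, which is exactly the memory trade-off mentioned in the question.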

Related

GoogleTest: is there a generic way to add a function call prior to each test case?

My scenario: I have an existing unit test framework with ~3000 individual test cases. They are made from the TEST, TEST_F and TEST_P macros.
Internally the tested modules use a logger library, and my goal now is to create an individual log file for each test case. To do so I would like to call a function as a set-up step for each test case.
Is there a way to register such a function with the framework and get it called automatically?
The obvious solution for me would look like: do the work in a test fixture constructor or SetUp(), but then I'd have to touch every single test case.
I do like the idea of registering a global setup with the framework via AddGlobalTestEnvironment(), but as I understand it, that is handled only once per executable.
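For reference, registering a global environment looks roughly like this (a sketch, not from the question; it illustrates why this alone can't solve the per-test problem):

#include <gtest/gtest.h>

// Sketch: a global environment's SetUp()/TearDown() run once per
// test binary, not once per test case.
class GlobalLoggerEnv : public ::testing::Environment {
 public:
  void SetUp() override { /* open one log for the whole run */ }
  void TearDown() override { /* close it after all tests ran */ }
};

// googletest takes ownership of the pointer.
::testing::Environment* const g_env =
    ::testing::AddGlobalTestEnvironment(new GlobalLoggerEnv);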
By the way: I also have acceptance tests implemented in Robot Framework, and guess what? I want to repeat the task there...
Thanks for any inspiration!
Christoph
You mentioned:
The obvious solution for me would look like: do the work in a test fixture constructor or SetUp(), but then I'd have to touch every single test case.
If the reason you think you would need to touch every single test case is to set the filename differently, you can use the combination of the SetUp() function and the current_test_info() provided by GTest to get the name of each test, and then use that to create a separate file per test.
Here is an example:
// Class for test fixture
class MyTestFixture : public ::testing::Test {
 protected:
  void SetUp() override {
    test_name_ = std::string(
        ::testing::UnitTest::GetInstance()->current_test_info()->name());
    std::cout << "test_name_: " << test_name_ << std::endl;
    // CreateYourLogFileUsingTestName(test_name_);
  }

  std::string test_name_;
};

TEST_F(MyTestFixture, Test1) {
  EXPECT_EQ(this->test_name_, std::string("Test1"));
}

TEST_F(MyTestFixture, Test2) {
  EXPECT_EQ(this->test_name_, std::string("Test2"));
}
Live example here: https://godbolt.org/z/YjzEG3G77
The solution I found in the gtest docs:
class TraceHandler : public testing::EmptyTestEventListener
{
    // Called before a test starts.
    void OnTestStart( const testing::TestInfo& test_info ) override
    {
        // set the logfilename here
    }

    // Called after a test ends.
    void OnTestEnd( const testing::TestInfo& test_info ) override
    {
        // close the log here
    }
};

int main( int argc, char** argv )
{
    testing::InitGoogleTest( &argc, argv );
    testing::TestEventListeners& listeners =
        testing::UnitTest::GetInstance()->listeners();
    // Adds a listener to the end. googletest takes the ownership.
    listeners.Append( new TraceHandler );
    return RUN_ALL_TESTS();
}
This way it automatically applies to all tests linked to this main-function.
Maybe I should mention: my logger is a collection of static functions that send UDP packets to a receiver that takes care of the actual logging. I can control the filename through one of those functions. That's why I don't need to insert code into every single TEST, TEST_F or TEST_P.

How to Take Screenshot when TestNG Assert fails?

String actualValue = d.findElement(By.xpath("//*[@id=\"wrapper\"]/main/div[2]/div/div[1]/div/div[1]/div[2]/div/table/tbody/tr[1]/td[1]/a")).getText();
Assert.assertEquals(actualValue, "jumlga");
captureScreen(d, "Fail");
The assert should not be placed before your screen capture, because a failed assertion immediately aborts the test method, so your
    captureScreen(d, "Fail");
line is never reached.
This is how I usually do it:
boolean result = false;
try {
    // do stuff here
    result = true;
} catch (Exception ex) {
    // handle the error and capture a screenshot
    captureScreen(d, "Fail");
}
// then assert on the recorded result
Assert.assertEquals(result, true);
1.
A good solution is to use a reporting framework like allure-reports.
Read here: allure-reports
2.
We don't want to make our tests ugly by adding try/catch to every test, so we will use listeners, which use an annotation system to "listen" to our tests and act accordingly.
Example:
public class listeners extends commonOps implements ITestListener {

    @Override
    public void onTestFailure(ITestResult iTestResult) {
        System.out.println("------------------ Starting Test: " + iTestResult.getName() + " Failed ------------------");
        if (platform.equalsIgnoreCase("web"))
            saveScreenshot();
    }
}
Please note I only used the method relevant to your question, and I suggest you read here:
TestNG Listeners
Now we want a screenshot to be taken, via the built-in allure-reports attachment mechanism, every time a test fails, so we will add this method inside our listeners class.
Example:
@Attachment(value = "Page Screen-Shot", type = "image/png")
public byte[] saveScreenshot() {
    return ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
}
Test example
@Listeners(listeners.class)
public class myTest extends commonOps {

    @Test(description = "Test01: Add numbers and verify")
    @Description("Test Description: Using Allure reports annotations")
    public void test01_myFirstTest() {
        // 'result' computed as in the earlier try/catch example
        Assert.assertEquals(result, true);
    }
}
Note that at the beginning of the class we use the @Listeners(listeners.class) annotation, which allows our listeners to listen to the test; mind that (listeners.class) can be whatever class you named your listeners.
@Description is related to allure-reports, and as the code snippet suggests, you can use it to add additional info about the test.
Finally, our Assert.assertEquals(result, true) will take a screenshot if the assertion fails, because we registered our listeners class for the test.

Authentication test running strange

I've just tried to write a simple test for Auth:
use Mockery as m;
...
public function testHomeWhenUserIsNotAuthenticatedThenRedirectToWelcome() {
    $auth = m::mock('Illuminate\Auth\AuthManager');
    $auth->shouldReceive('guest')->once()->andReturn(true);
    $this->call('GET', '/');
    $this->assertRedirectedToRoute('general.welcome');
}

public function testHomeWhenUserIsAuthenticatedThenRedirectToDashboard() {
    $auth = m::mock('Illuminate\Auth\AuthManager');
    $auth->shouldReceive('guest')->once()->andReturn(false);
    $this->call('GET', '/');
    $this->assertRedirectedToRoute('dashboard.overview');
}
This is the code:
public function getHome() {
    if (Auth::guest()) {
        return Redirect::route('general.welcome');
    }
    return Redirect::route('dashboard.overview');
}
When I run them, I get the following output:
EF.....
Time: 265 ms, Memory: 13.00Mb
There was 1 error:
1) PagesControllerTest::testHomeWhenUserIsNotAuthenticatedThenRedirectToWelcome
Mockery\Exception\InvalidCountException: Method guest() from Mockery_0_Illuminate_Auth_AuthManager should be called
exactly 1 times but called 0 times.
--
There was 1 failure:
1) PagesControllerTest::testHomeWhenUserIsAuthenticatedThenRedirectToDashboard
Failed asserting that two strings are equal.
--- Expected
+++ Actual
@@ @@
-'http://localhost/dashboard/overview'
+'http://localhost/welcome'
My questions are:
The two test cases are similar, so why does the error output differ? In the first, the mocked Auth::guest() is never called, while in the second it apparently is.
Why does the second test case fail?
Is there a way to write better tests for my code above? Or even better code to test?
In the test cases above I use Mockery to mock the AuthManager, but if I use the facade's Auth::shouldReceive()->once()->andReturn() instead, it works. Is there any difference between mocking with Mockery directly and mocking through the Auth facade here?
Thanks.
You're actually mocking a new instance of the Illuminate\Auth\AuthManager and not accessing the Auth facade that is being utilized by your function getHome(). Ergo, your mock instance will never get called. (Standard disclaimer that none of the following code is tested.)
Try this:
public function testHomeWhenUserIsNotAuthenticatedThenRedirectToWelcome() {
    Auth::shouldReceive('guest')->once()->andReturn(true);
    $this->call('GET', '/');
    $this->assertRedirectedToRoute('general.welcome');
}

public function testHomeWhenUserIsAuthenticatedThenRedirectToDashboard() {
    Auth::shouldReceive('guest')->once()->andReturn(false);
    $this->call('GET', '/');
    $this->assertRedirectedToRoute('dashboard.overview');
}
If you check out Illuminate\Support\Facades\Facade, you'll see that it takes care of mocking for you. If you really wanted to do it the way you were doing it (creating a mock instance of the AuthManager yourself), you'd have to inject it into the code under test somehow. I believe that could be done with something like this, assuming you extend the TestCase class provided by Laravel:
public function testHomeWhenUserIsNotAuthenticatedThenRedirectToWelcome() {
    // Swap out the 'auth' binding in the container with your mock.
    $this->app['auth'] = $auth = m::mock('Illuminate\Auth\AuthManager');
    $auth->shouldReceive('guest')->once()->andReturn(true);
    $this->call('GET', '/');
    $this->assertRedirectedToRoute('general.welcome');
}

Grails integration test failing with MissingMethodException

I'm attempting to test a typical controller flow for user login. There are extended relations, as with most login systems, and the Grails documentation is completely useless: it doesn't have a single example that is feature-complete and relevant to typical real-world usage.
My test looks like this:
@TestFor(UserController)
class UserControllerTests extends GroovyTestCase {
    void testLogin() {
        params.login = [email: "test1@example.com", password: "123"]
        controller.login()
        assert "/user/main" == response.redirectUrl
    }
}
The controller does:
def login() {
    if (!params.login) {
        return
    }
    println("Email " + params.login.email)
    Person p = Person.findByEmail(params?.login?.email)
    ...
}
which fails with:
groovy.lang.MissingMethodException: No signature of method: immigration.Person.methodMissing() is applicable for argument types: () values: []
The correct data is shown by the println, but the dynamic finder call fails.
The test suite cannot use mocks overall because there are database triggers that get called, the result of which will need to be tested.
The bizarre thing is that if I call the method directly in the test, it works, but calling it in the controller doesn't.
For an update: I regenerated the test directly from the grails command, and it added the @Test annotation:
@TestFor(UserController)
class UserControllerTests extends GroovyTestCase {
    @Test
    void testLogin() {
        params.login = [email: "test1@example.com", password: "123"]
        Person.findByEmail(params.login.email)
        controller.login()
    }
}
This works if I run it with
grails test-app -integration UserController
though the result isn't populated correctly - the response is empty, flash.message is null even though it should have a value, redirectedUrl is null, content body is empty, and so is view.
If I remove the @TestFor annotation, it doesn't work even in the slightest. It fails telling me that 'params' doesn't exist.
In another test, I have two methods. The first method runs and finds Person.findAllByEmail(); then the second method runs, can't find Person.findAllByEmail(), and crashes with a similar method-missing error.
In another weird update - it looks like the response object is sending back a redirect, but to the application baseUrl, not to the user controller at all.
Integration tests shouldn't use @TestFor. You need to create an instance of the controller in your test, and set params on that:
class UserControllerTests extends GroovyTestCase {
    void testLogin() {
        def controller = new UserController()
        controller.params.login = [email: 'test1@example.com', password: '123']
        controller.login()
        assert "/user/main" == controller.response.redirectedUrl
    }
}
Details are in the user guide.
The @TestFor annotation is used only in unit tests, since it mocks the structures involved. In integration tests you have access to the full Grails environment, so there's no need for these mocks. Just remove the annotation and it should work.
class UserControllerTests extends GroovyTestCase {
    ...
}

phpunit database test after teardown

I want to execute several tests in a test case / test suite (via Selenium) and hook a database check onto the end of every tearDown(), using asserts, which I assumed can't be called in tearDown().
So the workflow would be:
Set up the connection to the database and the schema in setUpBeforeClass()
Set up the database contents in setUp()
Execute test01
Tear down the contents
Assert that every table in the database has a row count of zero
A skeleton of this workflow is sketched below.
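A minimal skeleton of the workflow, mapped onto PHPUnit's fixture hooks (the comments mark the numbered steps; the class name is made up and no real database code is included):

class DatabaseWorkflowTest extends PHPUnit_Framework_TestCase
{
    public static function setUpBeforeClass() {
        // 1. connect to the database and create the schema
    }

    public function setUp() {
        // 2. load the table contents the tests need
    }

    public function testSomething() {
        // 3. run the actual test
    }

    public function tearDown() {
        // 4. remove the contents again
        // 5. desired: assert every table now has a row count of zero
    }
}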
So is there a way to hook an additional assert onto the end of every tearDown()?
I tried doing the setup in assertPreConditions() and the teardown in assertPostConditions(), but that's kind of ugly.
Thanks in advance
It seems you can use an assert anywhere, even in tearDown(). This test case (save as testTearDown.php, run with phpunit testTearDown.php) correctly gives a fail:
class TearDownTest extends PHPUnit_Framework_TestCase
{
    public function setUp() {
        echo "In setUp\n";
        //$this->assertTrue(false);
    }

    public function tearDown() {
        echo "In tearDown\n";
        $this->assertTrue(false);
    }

    public function assertPreConditions() {
        echo "In assertPreConditions\n";
        //$this->assertTrue(false);
    }

    public function assertPostConditions() {
        echo "In assertPostConditions\n";
        //$this->assertTrue(false);
    }

    public function testAdd() {
        $this->assertEquals(3, 1 + 2);
    }
}
But, one rule of thumb I have is: if the software is making my life difficult, maybe I'm doing something wrong. You wrote that after the tearDown code has run you want to "assert that every table in the database has a row count of zero".
This sounds like you want to validate that your unit test code has been written correctly, in this case that tearDown() has done its job. That is not really anything to do with the code you are actually testing. Using the PHPUnit assert mechanisms for it would be confusing and misleading; in my sample above, when tearDown() asserts, PHPUnit tells me that testAdd() has failed. If it is actually the code in tearDown() that is not working correctly, I want to be told that instead. So, for validating your unit test code, why not use PHP's own assert()?
So I wonder if the tearDown() function you want could look something like this:
public function tearDown() {
    tidyUpDatabase();              // your existing cleanup code
    $cnt = selectCount("table1");  // helper that runs SELECT COUNT(*) FROM table1
    assert($cnt == 0);
    $cnt = selectCount("table2");
    assert($cnt == 0);
}