Mocking a database connection - testing

I'm trying to write some tests with PHPUnit for our various classes/methods/functions. Some of these require database connectivity. Obviously, I'd like to mock this connectivity, so that I don't change our database(s).
Can someone point me to some code that explains how to do this? I see lots of examples of mocking, but nothing specifically about mocking the database.

In general, you don't want to mock the database, or any other similar external dependency. It's better to wrap the database access in your code with something else, and then mock the wrapper. A database has many different ways it can be interacted with, whereas your code and your tests only care about one or two, so the wrapper only needs to implement those. That way, mocking the wrapper stays quite simple.
You will also need some kind of integration test on the wrapper to check that it's doing what it's supposed to, but there will only be a few of these tests, so they won't slow your unit tests down much.
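For example, a minimal PHPUnit sketch of the wrapper idea - the UserRepository interface, UserService class, and getProfile() method are hypothetical names, not from the question; only the mocking calls are PHPUnit's actual API:
interface UserRepository {
    public function findById(int $id): ?User;
}

class UserServiceTest extends \PHPUnit\Framework\TestCase {
    public function testReturnsNullWhenUserIsMissing(): void {
        // Mock the thin wrapper, not the database driver itself.
        $repository = $this->createMock(UserRepository::class);
        $repository->method('findById')->willReturn(null);

        // The system under test only ever sees the wrapper interface.
        $service = new UserService($repository);
        $this->assertNull($service->getProfile(42));
    }
}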

Mocking the database
I would write a wrapper around the calls to the database in the application.
Example in pseudocode:
CallDataBase (action, options, ...) {
    // Code for connecting to the database
}
Then you just mock that function, just like you would any other function:
CallDataBase (action, options, ...) {
    return true;
}
This way you can mock out the database without worrying about whether it's a web service, a database connection, or anything else. And you can have it return true, or whatever else you need.
Test how your system handles the database response
To take this idea one step further and make your tests even more powerful, you could use some kind of test or environment parameter to control what happens in the mocked-out database method. Then you can test how your code handles different responses from the database.
Again in pseudocode (assuming your database returns an XML answer):
CallDataBase (action, options, ...) {
    if TEST_DATABASE_PARAMETER == CORRUPT_XML
        return "<xml><</xmy>";
    else if TEST_DATABASE_PARAMETER == TIME_OUT
        return wait(5000);
    else if TEST_DATABASE_PARAMETER == EMPTY_XML
        return "";
    else if TEST_DATABASE_PARAMETER == REALLY_LONG_XML_RESPONSE
        return generate_xml_response(1000000);
}
And tests to match:
should_raise_error_on_empty_xml_response_from_database() {
    TEST_DATABASE_PARAMETER = EMPTY_XML;
    CallDataBase(action, option, ...);
    assert_error_was_raised(EMPTY_DB_RESPONSE);
    assert_written_in_log(EMPTY_DB_RESPONSE_LOG_MESSAGE);
}
...
And so on, you get the point.
Please note that all my examples are negative test cases, but this approach can of course be used for positive test cases as well.
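In real PHPUnit the same idea could look roughly like this - DatabaseWrapper, ReportBuilder, and EmptyResponseException are hypothetical names standing in for your own wrapper and its consumer; only the mocking and assertion calls are PHPUnit's actual API:
class DatabaseResponseTest extends \PHPUnit\Framework\TestCase {
    public function testRaisesErrorOnEmptyXmlResponseFromDatabase(): void {
        // Stub the wrapper to simulate the EMPTY_XML case.
        $db = $this->createMock(DatabaseWrapper::class);
        $db->method('callDataBase')->willReturn('');

        $builder = new ReportBuilder($db);

        // The code under test should raise an error on an empty response.
        $this->expectException(EmptyResponseException::class);
        $builder->build();
    }
}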
Good Luck

Related

SOAP UI - Set a node value in all test steps' requests of all test cases in a test suite

I'm trying to set a node value in the request XML of all test steps of all test cases in a test suite.
The Groovy script is in the first test case, and I get an error (XmlException: Unexpected Element: CDATA) as soon as the script tries to edit the same tag in the second test case.
def groovyUtils = new com.eviware.soapui.support.GroovyUtils( context )
def AlltestCases = testRunner.testCase.testSuite.project.testSuites[testRunner.testCase.testSuite.name]
0.upto(AlltestCases.getTestCaseCount()) {
    AlltestCases.getTestCaseList().each {
        it.getTestStepList().each {
            if (it.getClass() == com.eviware.soapui.impl.wsdl.teststeps.WsdlTestRequestStep) {
                if (it.getName().toLowerCase().contains("verify")) {
                    step = groovyUtils.getXmlHolder("${it.getName()}" + "#Request")
                    step.setNodeValue("//*:Name/text()", "\$" + "{#TestSuite#NAME_ID}")
                    step.updateProperty()
                }
            }
        }
    }
}
If I understand your question correctly, you want to "inject" a value into a number of requests?
I would advise against that. I would rather set some project property, and then let each of the requests simply use that particular variable.
The most important reason I prefer this approach is that it makes it more transparent what is happening in your test case, should someone else need to take over your SoapUI projects at some point - for example, if you get a different job. Currently you have requests which hold values that appear to come out of nowhere. I would advise making it clear that each request contains some sort of variable, and where that variable comes from.
Besides, you will also get more flexibility. If a few requests at some point change the path or name of the entity you want to modify, you will need to make your code above handle that kind of situation. Not so if you are merely using a variable in each of your requests.
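For example, assuming a project-level property named NAME_ID (the name used in your script; the value below is a placeholder), each request body can reference it with SoapUI's property expansion syntax, and a single setup Groovy script can set it once:
<!-- in each request's XML body -->
<Name>${#Project#NAME_ID}</Name>

// in a single setup Groovy script, set the property once
testRunner.testCase.testSuite.project.setPropertyValue("NAME_ID", "12345")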

Running a main-like in a non-main package

We have a package with a fair number of complex tests. As part of the test suite, they run on builds etc.
func TestFunc(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run()
}
Now, for one of these tests, I want to introduce some kind of frontend which will make it possible for me to debug a few things. It's not really a test, but a debug tool. For this, I want to just run the same test but with a Builder pattern:
func TestFuncWithFrontend(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run().WithHTTPFrontend(":9999")
}
The test then would only start once I send a signal via HTTP from the frontend. Basically, WithHTTPFrontend() just blocks on a channel until an HTTP call arrives from the frontend.
This of course would make the automated tests fail, because no such signal will be sent and execution will hang.
I can't just rename the package to main because the package has 15 files and they are used elsewhere in the system.
Likewise, I haven't found a way to run a test only on demand while excluding it from the test suite, so that TestFuncWithFrontend would only run from the command line - I don't care whether via go run or go test or whatever.
I've also thought of ExampleTestFunc(), but the test produces so much output that it's useless, and without defining Output: ..., the Example won't run.
Unfortunately, there's also a lot of initialization code at (private, i.e. lower case) package level that the test needs. So I can't just create a sub-package main, as a lot of that stuff wouldn't be accessible.
It seems I have three choices:
Export all of this initialization code and these variables with upper-case names, so that I could use them from a sub-main package
Duplicate the whole code.
Move the test into a sub-package main and then have a func main() for the test with Frontend and a _test.go for the normal test, which would have to import a few things from the parent package.
I'd rather avoid the second option... and the first is better, but isn't great either, IMHO. I think I'll go for the third, but...
am I missing some other option?
You can pass a custom command line argument to go test and start the debug port based on that. Something like this:
package hello_test

import (
    "flag"
    "log"
    "testing"
)

var debugTest bool

func init() {
    flag.BoolVar(&debugTest, "debug-test", false, "Setup debugging for tests")
}

func TestHelloWorld(t *testing.T) {
    if debugTest {
        log.Println("Starting debug port for test...")
        // Start the server here
    }
    log.Println("Done")
}
Then if you want to run just that specific test, go test -debug-test -run '^TestHelloWorld$' ./.
Alternatively it's also possible to set a custom environment variable that you check in the test function to change behaviour.
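For example (the variable name is illustrative; add "os" to the import list):
if os.Getenv("DEBUG_TEST") == "1" {
    // Start the debug frontend here
}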
I finally found an acceptable option. This answer:
Skip some tests with go test
put me on the right track.
Essentially: use build tags which are not present in normal builds, but which I can supply when running the test manually.
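A minimal sketch of the build-tag approach - the tag name "manual" and the package name are illustrative:
//go:build manual
// (on Go versions before 1.17, use the older "// +build manual" form)

package mypackage

import "testing"

// This file is compiled only when the tag is supplied, e.g.:
//   go test -tags manual -run '^TestFuncWithFrontend$' .
func TestFuncWithFrontend(t *testing.T) {
    // start the HTTP frontend and wait for the signal here
}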

Lumen - seeder in Unit tests

I'm trying to implement unit tests in my company's project, and I'm running into some weird trouble trying to use a separate set of data in my database.
As I want tests to be performed in a confined environment, I'm looking for the easiest way to load data into a dedicated database. Long story short, to that end I decided to use a MySQL dump of inserted data.
This is basically my seeder code:
public function run()
{
    \Illuminate\Support\Facades\DB::unprepared(file_get_contents(__DIR__ . '/data1.sql'));
}
Now here's the problem.
In my unit test, I can call the seeder, but:
If I call the seeder in setUpBeforeClass(), it works, although it doesn't fit my needs, as I want to be able to invoke different sets of data for different tests
If I call the seeder within a test, the data is never inserted in the database (either with or without the transaction trait).
If I use DB::insert instead of ::raw or ::unprepared or ::statement without using a raw sql file, it works. But my inserts are too complicated for that.
Here are a few things I tried, with the same results:
DB::raw(file_get_contents(__DIR__.'/database/data1.sql'));
DB::statement(file_get_contents(__DIR__ . '/database/data1.sql'));
$seeder = new CheckTestSeeder();
$seeder->run();
\Illuminate\Support\Facades\Artisan::call('db:seed', ['--class' => 'CheckTestSeeder']);
$this->seeInDatabase('jackpot.progressive', [
    'name_progressive' => 'aaa'
]);
Any pointers on how to proceed and why I have different behaviors if I do that in the setUpBeforeClass() and within the test would be appreciated!
You may use the Illuminate\Foundation\Testing\RefreshDatabase trait, as explained here. If you need something more, you can override the refreshTestDatabase method of the RefreshDatabase trait in your test case.
protected function refreshTestDatabase()
{
    parent::refreshTestDatabase();
    \Illuminate\Support\Facades\Artisan::call('db:seed', ['--class' => 'CheckTestSeeder']);
}
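If different tests need different data sets, per-test seeding is another option. A sketch of a method inside your test class, assuming your base TestCase exposes the framework's seed() helper; CheckTestSeeder and the table/column names come from the question:
public function testProgressiveIsSeeded()
{
    // Seed only the data this particular test needs.
    $this->seed(CheckTestSeeder::class);

    $this->seeInDatabase('jackpot.progressive', [
        'name_progressive' => 'aaa'
    ]);
}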

How to write the same Codeception acceptance test case with many different sets of inputs

In Codeception acceptance testing, how do I run/write the same test case for many different sets of inputs?
Here is my sample acceptance test (I am using the page object concept).
loginCept.php code
$I = new AcceptanceTester($scenario);
$I->wantTo('perform actions and see result');
$I->login($I,$m);
Acceptance.php file
class Acceptance extends \Codeception\Module
{
    public function login($I)
    {
        $I->amOnPage(login::$loginIndex);
        $I->wait(2);
        $I->fillField(login::$userName, "test@gmail.com");
        $I->fillField(login::$password, "test");
        $I->click(login::$submitButton);
        $I->see(login::$assertionWelcome);
        $I->wait(2);
        $I->click(login::$logoutLink);
    }
}
How do I run the same login with multiple sets of inputs in an acceptance test?
I have tried passing inputs in an array, calling the test case in a for loop with the array values as input parameters. In Acceptance.php, the multiple sets of inputs can then be handled with if statements.
This runs the test as only one test case with different assertions.
But it runs the test case only until it fails for some input/assertion. If it fails for any of the assertions, the test case stops executing further and is reported as failed.
You can pass parameters through to your login function just as you would with any PHP function:
loginCept.php code
$I = new AcceptanceTester($scenario);
$I->wantTo('perform actions and see result');
$I->login($I, "test@gmail.com", "test");
Acceptance.php file
class Acceptance extends \Codeception\Module
{
    public function login($I, $username, $password)
    {
        $I->amOnPage(login::$loginIndex);
        $I->wait(2);
        $I->fillField(login::$userName, $username);
        $I->fillField(login::$password, $password);
        $I->click(login::$submitButton);
        $I->see(login::$assertionWelcome);
        $I->wait(2);
        $I->click(login::$logoutLink);
    }
}
You'd then want to create a separate cept for each aspect of login that you are looking to test.
Edit:
What you're looking for - one test running through a number of assertions - breaks the conventions of automated testing. Each test (or cept in this case) should only ever test one aspect. For instance, for logging in you might have one test for an invalid username, one for an invalid password, one for too many attempts, etc. Then when/if one test fails, you as the developer know exactly which aspect has failed and which continue to pass. If all the aspects are wrapped up in one test, you don't get the full picture until you start to debug.
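That said, if you do want to drive one scenario with several input sets, newer Codeception versions support @example annotations on Cest methods. A sketch with placeholder credentials; \Codeception\Example is Codeception's own class:
class LoginCest
{
    /**
     * @example ["test@gmail.com", "test"]
     * @example ["other@gmail.com", "secret"]
     */
    public function login(AcceptanceTester $I, \Codeception\Example $example)
    {
        // Each @example row runs as its own test, so one failing
        // data set does not stop the others.
        $I->login($I, $example[0], $example[1]);
    }
}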

How do you mock your repositories?

I've used Moq to mock my repositories. However, someone recently said that they prefer to create hard-coded test implementations of their repository interfaces.
What are the pros and cons of each approach?
Edit: clarified meaning of repository with link to Fowler.
I generally see two scenarios with repositories. I ask for something, and I get it, or I ask for something, and it isn't there.
If you are mocking your repository, that means your system under test (SUT) is something that uses your repository. So you generally want to test that your SUT behaves correctly when it is given an object from the repository. And you also want to test that it handles the situation properly when you expect to get something back and don't, or aren't sure whether you will get something back.
Hard-coded test doubles are OK if you are doing integration testing - say, you want to save an object and then get it back. But that tests the interaction of two objects together, not just the behavior of the SUT. They are two different things. If you start coding fake repositories, you need unit tests for those as well; otherwise you end up basing the success and failure of your code on untested code.
That's my opinion on Mocking vs. Test Doubles.
SCNR:
"You call yourself a repository? I've seen matchboxes with more capacity!"
I assume that by "repository" you mean a DAO; if not then this answer won't apply.
Lately I've been making "in memory" "mock" (or test) implementations of my DAOs that basically operate on data (a List, Map, etc.) passed into the mock's constructor. This way the unit test class is free to throw in whatever data it needs for the test, and can change it, etc., without forcing all the unit tests that use the "in memory" DAO to share the same test data.
One plus I see in this approach is that if I have a dozen unit tests that need the same DAO (to inject into the class under test, for example), I don't need to remember all of the details of the test data each time (as you would if the "mock" were hard-coded) - the unit test creates the test data itself. On the downside, this means each unit test has to spend a few lines creating and wiring up its test data; but that's a small downside to me.
A code example:
public interface UserDao {
    User getUser(int userid);
    User getUser(String login);
}

public class InMemoryUserDao implements UserDao {
    private List users;

    public InMemoryUserDao(List users) {
        this.users = users;
    }

    public User getUser(int userid) {
        for (Iterator it = users.iterator(); it.hasNext();) {
            User user = (User) it.next();
            if (userid == user.getId()) {
                return user;
            }
        }
        return null;
    }

    public User getUser(String login) {
        for (Iterator it = users.iterator(); it.hasNext();) {
            User user = (User) it.next();
            if (login.equals(user.getLogin())) {
                return user;
            }
        }
        return null;
    }
}
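Usage in a unit test might then look like this - a JUnit-style sketch; the User constructor shown here is assumed for illustration:
import static org.junit.Assert.*;

import java.util.Arrays;
import org.junit.Test;

public class InMemoryUserDaoTest {
    @Test
    public void findsUsersByIdAndLogin() {
        // Each test wires up exactly the data it needs.
        UserDao dao = new InMemoryUserDao(Arrays.asList(
                new User(1, "alice"),
                new User(2, "bob")));

        assertEquals(1, dao.getUser("alice").getId());
        assertNull(dao.getUser("unknown"));
    }
}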