I have just started using Laravel Dusk to test my project and need some guidance. After running all of the available tests, I want to reset my database to the state it was in before the tests ran. (Any entries that existed in the database before the tests should still be there afterwards, but any entries created during the tests should be gone once they finish.) Any pointers on how I can achieve this? Thank you!
Update:
<?php

namespace Tests\Browser;

use Tests\DuskTestCase;
use Laravel\Dusk\Browser;
use Illuminate\Foundation\Testing\DatabaseTransactions;

class UserRegisterTest extends DuskTestCase
{
    use DatabaseTransactions;

    /**
     * A test for user registration.
     * @group register
     * @return void
     */
    public function testRegisterUser()
    {
        // Register with all info filled out correctly
        $this->browse(function ($browser) {
            $browser->visit('/register')
                ->type('firstName', 'JenLogin')
                ->type('lastName', 'Zhou')
                ->type('email', 'testLogin@gmail.com')
                ->type('bio', 'Hello, this user is for testing login purposes!')
                ->type('location_zip', '11111')
                ->type('password', '123456')
                ->type('password_confirmation', '123456')
                ->click('.btn-primary')
                ->assertPathIs('/home')
                ->click('.dropdown-toggle')
                ->click('.dropdown-menu li:last-child');
        });

        $this->assertDatabaseHas('users', ['firstName' => 'JenLogin', 'lastName' => 'Zhou', 'email' => 'testLogin@gmail.com']);
    }

    /**
     * Register with a duplicate user.
     * @group register
     * @return void
     */
    public function testRegisterDuplicateUser()
    {
        $this->browse(function ($browser) {
            $browser->visit('/register')
                ->type('firstName', 'JenLoginDup')
                ->type('lastName', 'Zhou')
                ->type('email', 'testLogin@gmail.com')
                ->type('bio', 'Hello, this user is for testing login purposes!')
                ->type('location_zip', '11111')
                ->type('password', '123456')
                ->type('password_confirmation', '123456')
                ->click('.btn-primary')
                ->assertPathIs('/register')
                ->assertSee('The email has already been taken.');
        });

        $this->assertDatabaseMissing('users', ['firstName' => 'JenLoginDup', 'lastName' => 'Zhou', 'email' => 'testLogin@gmail.com']);
    }

    /**
     * Register with an incorrect password confirmation.
     * @group register
     * @return void
     */
    public function testRegisterUserNoPassConfirm()
    {
        $this->browse(function ($browser) {
            $browser->visit('/register')
                ->type('firstName', 'JenLoginPass')
                ->type('lastName', 'Zhou')
                ->type('email', 'testLoginPass@gmail.com')
                ->type('bio', 'Hello, this user is for testing login purposes!')
                ->type('location_zip', '11111')
                ->type('password', '123456')
                ->type('password_confirmation', '888888')
                ->click('.btn-primary')
                ->assertPathIs('/register')
                ->assertSee('The password confirmation does not match.');
        });

        $this->assertDatabaseMissing('users', ['firstName' => 'JenLoginPass', 'lastName' => 'Zhou', 'email' => 'testLoginPass@gmail.com']);
    }
}
You are looking for the DatabaseTransactions trait. Use it in your test class like this, and it will automatically roll back all database transactions made during your tests.
use Illuminate\Foundation\Testing\DatabaseTransactions;
class ExampleTest extends TestCase
{
    use DatabaseTransactions;

    // test methods here
}
This will keep track of all transactions made during your test and undo them upon completion.
Note: this trait only works on the default database connection.
First of all, when you are running tests you should use a completely different database from your live (or dev) database. For this you should create a .env.dusk file and set in it:
DB_CONNECTION=mysql
DB_HOST=db
DB_PORT=3306
DB_DATABASE=testing_database
DB_USERNAME=root
DB_PASSWORD=pass
pointing to a database used for tests only.
The second thing is that for Laravel Dusk you cannot simply use DatabaseTransactions. For Dusk tests you should in fact use DatabaseMigrations, otherwise you will get unexpected results.
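For illustration, a minimal Dusk test class using DatabaseMigrations might look like this (the class name, route, and assertion text are placeholders):

<?php

namespace Tests\Browser;

use Tests\DuskTestCase;
use Laravel\Dusk\Browser;
use Illuminate\Foundation\Testing\DatabaseMigrations;

class ExampleDuskTest extends DuskTestCase
{
    // Runs the migrations before each test so every Dusk test starts
    // from a clean schema, instead of relying on a transaction rollback.
    use DatabaseMigrations;

    public function testHomePageLoads()
    {
        $this->browse(function (Browser $browser) {
            $browser->visit('/')
                ->assertSee('Welcome');
        });
    }
}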
There is no sane workflow for running tests against a live/dev database with existing data and then reverting the changes the tests made.
Your approach therefore fails here; instead you should:
Create a separate schema/database just for tests.
Switch to the test database before running tests; this can be automated through your phpunit configuration and .env.dusk, depending on your local setup.
In your tests, build everything from scratch on the clean database (run migrations, seeders, factories).
Run the tests against this test database.
For development, switch back to your base database with its current data, which will not be affected by the tests.
The next time you run your tests, everything starts again from point zero with a clean database; in your test classes this is handled by:

use CreatesApplication;
use DatabaseMigrations;

// and, in setUp():
parent::setUp();
// etc.
Read more about these methods...
Side Notes:
With this approach it will also be easy to test your app in CI environments.
Never write tests that depend on data in your dev/live database. All data required by tests should be provided by seeders or, where appropriate, factories!
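As a minimal sketch of that rule (assuming a standard User model factory is defined under database/factories), a test creates exactly the data it needs instead of reading it from the dev database:

// Inside a test method; factory(User::class) is the pre-Laravel-8 syntax,
// on Laravel 8+ the equivalent is User::factory()->create([...]).
$user = factory(\App\User::class)->create([
    'email' => 'dusk-test@example.com',
]);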
You can use the RefreshDatabase trait in your test classes. After each test the database will be in the same state as before the test.
In fact it will drop all tables and run the migrations again.
If you don't want to lose your data, you can use a separate schema for testing.
use Illuminate\Foundation\Testing\RefreshDatabase;
use Illuminate\Foundation\Testing\WithoutMiddleware;
use Tests\TestCase;
class ExampleTest extends TestCase
{
    use RefreshDatabase;
}
For multiple databases, this helped me:

class MyTest extends TestCase
{
    // Reset the DB between tests
    use DatabaseTransactions;

    // Setting this allows both DB connections to be reset between tests
    protected $connectionsToTransact = ['mysql', 'myOtherConnection'];
}
I think this is a great question. I found an Artisan tool that may be what you are looking for. You can use it to take a snapshot of the database before you run the tests and then load that snapshot afterwards, restoring your database to its previous state. I gave it a run (using MySQL) and it worked great. Hope this is what you are looking for. Here is a link:
https://github.com/spatie/laravel-db-snapshots
The phpunit.xml file is your solution here; you can set .env variables in this file like so:
<env name="DB_CONNECTION" value="testing_mysql"/>
<env name="DB_DATABASE_TEST" value="test"/>
Now you can run your tests against a separate database.
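For the override above to take effect, the testing_mysql connection name has to exist in config/database.php. A minimal sketch of such an entry (the exact keys should mirror your existing mysql connection; the DB_DATABASE_TEST variable matches the phpunit.xml value above):

// config/database.php (sketch): a dedicated connection used only when phpunit runs
'testing_mysql' => [
    'driver'   => 'mysql',
    'host'     => env('DB_HOST', '127.0.0.1'),
    'port'     => env('DB_PORT', '3306'),
    'database' => env('DB_DATABASE_TEST', 'test'),
    'username' => env('DB_USERNAME', 'root'),
    'password' => env('DB_PASSWORD', ''),
],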
Plus, you can run a .php file every time before the tests run in automation; you just need to point phpunit at it as the bootstrap for your unit tests:
<phpunit
...
bootstrap="tests/autoload.php"
>
You can put any cleaners or seeders there, or something like:
echo 'Migration -begin-' . "\n";
echo shell_exec('php artisan migrate:fresh --seed');
echo 'Migration -end-' . "\n";
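Put together, a minimal tests/autoload.php bootstrap could look like the following sketch (it assumes artisan is runnable from the project root; Composer's autoloader is required here because this file replaces the default bootstrap):

<?php

// tests/autoload.php - executed once by phpunit before the whole suite
require __DIR__ . '/../vendor/autoload.php';

// Rebuild and reseed the test database before every run
echo 'Migration -begin-' . "\n";
echo shell_exec('php artisan migrate:fresh --seed');
echo 'Migration -end-' . "\n";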
I'm trying to implement unit tests in my company's project, and I'm running into some weird trouble trying to use a separate set of data in my database.
As I want tests to be performed in a confined environment, I'm looking for the easiest way to load data into a dedicated database. Long story short, to this end I decided to use a MySQL dump of inserted data.
This is basically my seeder code:
public function run()
{
    \Illuminate\Support\Facades\DB::unprepared(file_get_contents(__DIR__ . '/data1.sql'));
}
Now here's the problem.
In my unit test, I can call the seeder, but:
If I call the seeder in setUpBeforeClass(), it works. However, it doesn't fit my needs, as I want to be able to invoke different sets of data for different tests.
If I call the seeder within a test, the data is never inserted into the database (either with or without the transaction trait).
If I use DB::insert instead of ::raw, ::unprepared, or ::statement, without using a raw SQL file, it works. But my inserts are too complicated for that.
Here are a few things I tried, with the same results:
DB::raw(file_get_contents(__DIR__.'/database/data1.sql'));
DB::statement(file_get_contents(__DIR__ . '/database/data1.sql'));
$seeder = new CheckTestSeeder();
$seeder->run();
\Illuminate\Support\Facades\Artisan::call('db:seed', ['--class' => 'CheckTestSeeder']);
$this->seeInDatabase('jackpot.progressive', [
    'name_progressive' => 'aaa'
]);
Any pointers on how to proceed, and on why the behavior differs between setUpBeforeClass() and the test itself, would be appreciated!
You may use the Illuminate\Foundation\Testing\RefreshDatabase trait as explained here. If you need something more, you can override the refreshTestDatabase method from the RefreshDatabase trait:
protected function refreshTestDatabase()
{
    parent::refreshTestDatabase();

    \Illuminate\Support\Facades\Artisan::call('db:seed', ['--class' => 'CheckTestSeeder']);
}
I have a Codeception cest file which has a number of tests in it.
In some of the tests, there are initializations which I would like to do in the _before() hook. These initializations are specific to those tests only and to no other test in the cest file.
How can I go about this?
The pseudocode would be something like
public function _before($event)
{
    if ($event->test_being_run == 'testThatFeature') {
        $init = something(here);
    }
}
Through investigation, I have realized that the $event variable passed into the _before() hook is an instance of the generated AcceptanceTester; as opposed to \Codeception\Event\TestCase. So I cannot use the hopeful $event->getTest()->getTestFullName().
Codeception injects parameters based on type hinting.
If you want to get \Codeception\Event\TestCase, your code must look like this:
public function _before(\Codeception\Event\TestCase $event)
However, all the information I found about receiving the test case in a _before method was about the _before method of modules and extensions; it does not apply to tests.
If you want to run specific code for one test, just run it in the test code.
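A minimal sketch of that suggestion for a Cest class (the class, helper, and test names here are hypothetical):

class FeatureCest
{
    public function _before(AcceptanceTester $I)
    {
        // Only setup that every test in this Cest genuinely shares belongs here.
    }

    public function testThatFeature(AcceptanceTester $I)
    {
        // Initialization specific to this test lives in the test itself,
        // not in _before().
        $init = $this->prepareSomething();

        // ... test steps using $I ...
    }

    private function prepareSomething()
    {
        // hypothetical helper holding the test-specific setup
        return 'something';
    }
}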
I have a complex Protractor test written, but everything is in one file.
At the top of it I load all the variables, like:
var userLogin = "John";
and then I use them later in the code.
What I need to do is:
1. Separate all variables into an additional file (some config file).
2. Put each test in its own file.
1 - I tried making a config.js where I added all the variables, and I required it in protractor.conf.js. It loads correctly; the problem is that when I use any of these variables in a test it doesn't work (the test fails with "userName is not defined").
I know there is a way where I require the config file in each test script, but that's really not the best option in my eyes.
2 - How can I know what I did in the last script if it's separate? For example, how do I know I am logged in?
Thanks.
There are multiple things you can make use of.
2) How can I know what I did in the last script if it's separate, like for example how to know I am logged in?
This is where beforeEach(), afterEach() can help:
To help a test suite DRY up any duplicated setup and teardown code, Jasmine provides the global beforeEach and afterEach functions. As the name implies, the beforeEach function is called once before each spec in the describe is run, and the afterEach function is called once after each spec.
There are also beforeAll(), afterAll() available in jasmine 2, or via jasmine-beforeAll third-party for jasmine 1:
The beforeAll function is called only once before all the specs in describe are run, and the afterAll function is called after all specs finish. These functions can be used to speed up test suites with expensive setup and teardown.
1) I tried making a config.js where I added all the variables and required it in protractor.conf.js. It loads correctly; the problem is that when I use any of these variables in a test it doesn't work (the test fails with "userName is not defined"). I know there is a way where I require the config file in each test script, but that's really not the best option in my eyes.
One option which I've personally used would be to create a config.js file with all the reusable configuration variables you would need in multiple tests and require the file once - in the protractor config - then set it as a params configuration key value:
var config = require("./config.js");

exports.config = {
    ...
    params: config,
    ...
};
where config.js is, for example:
var config;

config = {
    user: {
        login: "user",
        password: "password"
    }
};

module.exports = config;
Then, you would not need to require config.js in every test, but instead, you'll use browser.params. For example:
expect(browser.params.user.login).toEqual("user");
Also, if you need some sort of a global test preparation step, you can do it in onPrepare() function, see Setting Up the System Under Test. Example configuration that performs a "global" login step is available here.
And another quick note: you can have custom globally defined variables (like the built-in browser or protractor); set them using global in onPrepare. For example, I've defined protractor.ExpectedConditions as a custom global variable:
onPrepare: function () {
    global.EC = protractor.ExpectedConditions;
}
Then, in tests, you don't need to require anything; the EC variable will be available in scope, e.g.:
browser.wait(EC.invisibilityOf(scope.page.dropdown), 5000)
Also, organizing your tests using the Page Object pattern would help solve the reusability and modularity problem.
I'm trying to put together my first data-driven test framework that runs tests through Selenium Grid/WebDriver on multiple browsers. Right now, I have each test case in its own class, and I parametrize the browser, so each test case runs once with each browser.
Is this common in big test frameworks? Or should each test case be copied and fine-tuned to each browser in its own class? So, if I'm testing Chrome, Firefox, and IE, should there be classes for each, like "TestCase1Chrome", "TestCase1Firefox", "TestCase1IE"? Or just "TestCase1", parametrized to run three times, once per browser? Just wondering how others do it.
Parameterizing the tests into a single class per test case makes it easier to maintain the non-browser-specific code, while duplicating classes, one for each browser, makes it easier to maintain the browser-specific code. When I say browser-specific code: for example, clicking an item. With ChromeDriver, you cannot click in the middle of some elements, whereas with FirefoxDriver, you can. So you potentially need two different blocks of code just to click an element (when it's not clickable in the middle).
For those of you that are employed QA Engineers that use Selenium, what would be best practice here?
I am currently working on a project which runs around 75k - 90k tests on a daily basis. We pass the browser as a parameter to the tests. The reasons being:
As you mentioned in your question, this helps with maintenance.
We don't see much browser-specific code. If you have too much browser-specific code, I would say there is a problem with the WebDriver usage itself, because one of the advantages of Selenium/WebDriver is that you write code once and run it against any supported browser.
The difference I see between my code structure and the one you mention in the question is that I don't have a test class for each test case. Tests are divided based on the features I test, and each feature has its own class. That class holds all the tests as methods. I use TestNG so that these methods can be invoked in parallel. Maybe this won't suit your AUT.
If you keep the code structure that you mention in the question, sooner or later maintaining it will become a nightmare. Try to stick to the rule: the same test code (written once) for all browsers (environments).
This condition will force you to solve two issues:
1) how to run the tests for all chosen browsers
2) how to apply specific browser workarounds without polluting the test code
Actually, this seems to be your question.
Here is how I solved the first issue.
First, I defined all the environments that I am going to test. I call 'environments' all the conditions under which I want to run my tests: browser name, version number, OS, etc. So, separately from test code, I created an enum like this:
public enum Environments {
    FF_18_WIN7("firefox", "18", Platform.WINDOWS),
    CHR_24_WIN7("chrome", "24", Platform.WINDOWS),
    IE_9_WIN7("internet explorer", "9", Platform.WINDOWS);

    private final DesiredCapabilities capabilities;
    private final String browserName;
    private final String version;
    private final Platform platform;

    Environments(final String browserName, final String version, final Platform platform) {
        this.browserName = browserName;
        this.version = version;
        this.platform = platform;
        capabilities = new DesiredCapabilities();
    }

    public DesiredCapabilities capabilities() {
        capabilities.setBrowserName(browserName);
        capabilities.setVersion(version);
        capabilities.setPlatform(platform);
        return this.capabilities;
    }

    public String browserName() {
        return browserName;
    }
}
It's easy to modify and add environments whenever you need to. As you can notice, I am using this to create and retrieve the DesiredCapabilities that later will be used to create a specific WebDriver.
In order to make the tests run for all the defined environments, I used JUnit's (4.10 in my case) org.junit.experimental.theories:
@RunWith(MyRunnerForSeleniumTests.class)
public class MyWebComponentTestClassIT {

    @Rule
    public MySeleniumRule selenium = new MySeleniumRule();

    @DataPoints
    public static Environments[] enviroments = Environments.values();

    @Theory
    public void sample_test(final Environments environment) {
        Page initialPage = LoginPage.login(selenium.driverFor(environment), selenium.getUserName(), selenium.getUserPassword());
        // your test code here
    }
}
The tests are annotated as @Theory (not as @Test, like in normal JUnit tests) and are passed a parameter. Each test will then run for all the defined values of this parameter, which should be an array of values annotated as @DataPoints. Also, you should use a runner that extends org.junit.experimental.theories.Theories. I use org.junit.rules to prepare my tests, putting all the necessary plumbing there. As you can see, I get the driver with the specific capabilities through the Rule, too, though you could use the following code right in your test:
RemoteWebDriver driver = new RemoteWebDriver(new URL(some_url_string), environment.capabilities());
The point is that having it in the Rule you write the code once and use it for all your tests.
As for Page class, it is a class where I put all the code that uses driver's functionality (find an element, navigate, etc.). This way, again, the test code stays neat and clear and, again, you write it once and use it in all your tests.
So, this is the solution for the first issue. (I know that you can do a similar thing with TestNG, but I didn't try it.)
To solve the second issue, I created a special package where I keep all the code of browser specific workarounds. It consists of an abstract class, e.g. BrowserSpecific, that contains the common code which happens to be different (or have a bug) in some browser. In the same package I have classes specific for every browser used in tests and each of them extends BrowserSpecific.
Here is how it works for the Chrome driver bug that you mention. I create a method clickOnButton in BrowserSpecific with the common code for the affected behaviour:
public abstract class BrowserSpecific {

    protected final RemoteWebDriver driver;

    protected BrowserSpecific(final RemoteWebDriver driver) {
        this.driver = driver;
    }

    public static BrowserSpecific aBrowserSpecificFor(final RemoteWebDriver driver) {
        BrowserSpecific browserSpecific = null;
        if (Environments.FF_18_WIN7.browserName().contains(driver.getCapabilities().getBrowserName())) {
            browserSpecific = new FireFoxSpecific(driver);
        }
        if (Environments.CHR_24_WIN7.browserName().contains(driver.getCapabilities().getBrowserName())) {
            browserSpecific = new ChromeSpecific(driver);
        }
        if (Environments.IE_9_WIN7.browserName().contains(driver.getCapabilities().getBrowserName())) {
            browserSpecific = new InternetExplorerSpecific(driver);
        }
        return browserSpecific;
    }

    public void clickOnButton(final WebElement button) {
        button.click();
    }
}
and then I override this method in the specific class, e.g. ChromeSpecific, where I place the workaround code:
public class ChromeSpecific extends BrowserSpecific {

    ChromeSpecific(final RemoteWebDriver driver) {
        super(driver);
    }

    @Override
    public void clickOnButton(final WebElement button) {
        // This is the Chrome workaround
        String script = MessageFormat.format("window.scrollTo(0, {0});", button.getLocation().y);
        driver.executeScript(script);
        // Followed by the common behaviour of all the browsers
        super.clickOnButton(button);
    }
}
When I have to take into account the specific behaviour of some browser, I do the following:
aBrowserSpecificFor(driver).clickOnButton(logoutButton);
instead of:
button.click();
This way, in my common code, I can easily identify where a workaround has been applied, and I keep the workarounds isolated from the common code. I find it easy to maintain, as the underlying bugs usually get fixed over time and the workarounds can then be changed or removed.
One last word about executing the tests. As you are going to use Selenium Grid you will want to use the possibility to run the tests in parallel, so remember to configure this feature for your JUnit tests (available since v. 4.7).
We use TestNG in our organization, and we use the parameter option TestNG provides to specify the environment, i.e. the browser to use, the machine to run on, and any other configuration needed for environment setup. The browser name is sent through the XML file that controls what needs to run and where; it is set as a global variable. As an extra, we have custom annotations which can override these global variables: if a test is only ever to be run on Chrome and no other browser, we specify that on the custom annotation. So even if the parameter says run on FF, a test annotated with Chrome will always run on Chrome.
I somehow believe that making one class for each browser is not a good idea. Imagine the flow changes, or there is a small tweak here and there, and you have three classes to change instead of one. And if the number of browsers increases, that's one more class each time.
What I would suggest is to extract the browser-specific code. So, if the click behavior is browser-specific, override it to do the appropriate checks or failure handling per browser.
I do it like this but keep in mind that this is pure WebDriver without the Grid or RC in mind:
// Utility class snippet
// Test classes import this with: import static utility.*;

public static WebDriver driver;

public static void initializeBrowser( String type ) {
    if ( type.equalsIgnoreCase( "firefox" ) ) {
        driver = new FirefoxDriver();
    } else if ( type.equalsIgnoreCase( "ie" ) ) {
        driver = new InternetExplorerDriver();
    }
    driver.manage().timeouts().implicitlyWait( 10000, TimeUnit.MILLISECONDS );
    driver.manage().window().setPosition(new Point(200, 10));
    driver.manage().window().setSize(new Dimension(1200, 800));
}
Now, using JUnit 4.11+ your parameters file needs to look something like this:
firefox, test1, param1, param2
firefox, test2, param1, param2
firefox, test3, param1, param2
ie, test1, param1, param2
ie, test2, param1, param2
ie, test3, param1, param2
Then, using a single .CSV-parameterized test class (that you intend to start multiple browser types with), do this in the @Before annotated method:
If the current parameterized test is the first test of this browser type, and no already-open window exists, open a new browser window of the current type.
If a browser is already open and the browser type is the same, just re-use the same driver object.
If a browser of a different type than the current test is open, close it and re-open a browser of the correct type.
Of course, my answer doesn't tell you how to handle the parameters: I leave that for you to figure out.