Can't perform a Laravel 4 action/route test more than once

I'm running into a Laravel 4 testing issue: an action/route test can only be run once, and it has to be the first test run. Any subsequent action/route test fails with an exception before the assert is even called.

Route/action tests pass as long as they are the first test run.
Non-route/action tests run normally, although they cause subsequent route/action tests to throw an exception.

It's important to note that the tests in question don't fail; they throw an exception when the action is fired, for example:

Symfony\Component\Routing\Exception\RouteNotFoundException: Unable to generate a URL for the named route "home" as such route does not exist.
Sample test class:
class ExampleTest extends TestCase {

    // passes
    public function testOne()
    {
        $class = MyApp::ApiResponse();
        $this->assertInstanceOf('\MyApp\Services\ApiResponse', $class);
    }

    // fails unless moved to the top of the file
    public function testRoute()
    {
        $this->route('GET', 'home');
        $this->assertTrue($this->client->getResponse()->isOk());
    }

    // passes
    public function testTwo()
    {
        $class = MyApp::ProjectService();
        $this->assertInstanceOf('\MyApp\Services\ProjectService', $class);
    }
}
This is implementation-specific: a fresh Laravel 4 project does not exhibit the issue. What could be causing this behaviour, and how would you go about tracking down the problem?

In this case, the routes file was being loaded with an include_once call. Because include_once refuses to evaluate a file that has already been included in the same PHP process, the routes were only registered for the first test; when the application was rebuilt for subsequent tests, the route collection was empty.

Changing include_once to include() fixed the issue exhibited in the question.
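
A minimal sketch of the fix, assuming the routes file was pulled in from a custom bootstrap file (the exact location is illustrative):

// Before: routes are registered only for the first test in the process.
// PHPUnit boots a fresh application for each test, but include_once
// remembers that the file was already evaluated and silently skips it,
// leaving the new application's route collection empty.
include_once app_path().'/routes.php';

// After: the file is re-evaluated on every boot, so every test sees the routes.
include app_path().'/routes.php';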

Class cannot resolve module as content unless @Stepwise used

I have a Spock class that, when run as a test suite, throws "Unable to resolve iconRow as content for geb.Page, or as a property on its Navigator context. Is iconRow a class you forgot to import?" unless I annotate the class with @Stepwise. However, I really don't want test execution to stop on the first failure, which @Stepwise does.
I've tried writing (copy and pasting) my own extension based on this post, but I still get these errors. The suite is definitely using my extension, as I added some logging statements that were printed to the console.
Here is one of my modules:
class IconRow extends Module {
    static content = {
        iconRow(required: false) { $("div.report-toolbar") }
    }
}
And a page that uses it:
class Report extends SomeOtherPage {
    static at = { $("div.grid-container").displayed }
    static content = {
        iconRow { module IconRow }
    }
}
And a snippet of the test that is failing:
class MyFailingTest extends GebReportingSpec {

    def setupSpec() {
        via Dashboard
        SomeClass.login("SourMonk", "myPassword")
        assert page instanceof Dashboard
        nav.goToReport("Some report name")
        assert page instanceof Report
    }

    @Unroll
    def "I work"() {
        given:
        at Report

        expect:
        expected == actual

        where:
        expected << ["some list", "of values"]
        actual << anotherModule.someContent*.@id
    }

    @Unroll
    def "I don't work"() {
        given:
        at Report

        expect:
        expected == actual

        where:
        expected << ["some other", "list", "of values"]
        actual << iconRow.columnHeaders*.attr("innerText")*.toUpperCase()
    }
}
When executed as a suite, "I work" passes and "I don't work" fails because it cannot resolve iconRow as content for the page. If I switch the order of the test cases, "I don't work" passes and "I work" fails. Alternatively, if I execute each test separately, they both pass.
What I have tried:
Adding/removing the required: true property from content in the modules
Prefixing the module name with its class, such as IconRow.iconRow
Defining my modules as static @Shared properties
Initializing the modules both inside and outside of my setupSpec()
Making simple getter methods in each module's class that return the module, and referencing content such as IconRow.getIconRow().columnHeaders*.attr("innerText")*.toUpperCase()
Moving the contents of my setupSpec() into setup()
Adding autoClearCookies = false to my GebConfig.groovy
Making a @Shared Report report variable and prefixing all modules with it, such as report.iconRow
A very peculiar note about that last bullet point: it magically resolves the modules that don't have the prefix, so it won't resolve report.iconRow but will resolve plain iconRow. Absolutely bizarre, because if I remove that variable, the module that was just working suddenly can't be resolved again. I even tried declaring the variable and then not prefixing anything, and that did not work either.
Another problem I keep banging my head against is that I'm not even sure where the problem lies. The error thrown leads me to believe it's a project setup issue, but running each feature individually works fine, so the classes do appear to be resolving correctly.
On the other hand, perhaps it's an issue with the session and/or cookies? Although I have yet to see any official documentation on this, the general consensus (from other posts and articles I've read) seems to be that only using @Stepwise will maintain your session between feature methods. If that's the case, why is my extension not working? It's pretty much a copy and paste of @Stepwise without the skipFeaturesAfterFirstFailingFeature method (I can post it if needed), unless there is some other stuff going on behind the scenes with @Stepwise.
Apologies for the wall of text, but I've been trying to figure this out for about 6 hours now, so my brain is pretty fried.
Geb has special support for @Stepwise: if a spec is annotated with it, Geb does not call resetBrowser() after each test; instead, the reset happens once after the whole spec has completed. See the code on GitHub. This also explains why your copied extension has no effect: Geb checks for the @Stepwise annotation itself, not for an extension with stepwise-like behaviour.
So basically you need to change your setupSpec to setup so that it is executed before each test.
Regarding your observation: if you run a single focused test, setupSpec is executed for that test and it passes. The problem is that the cleanup invoked afterwards resets the browser, which breaks the subsequent tests.
EDIT
I overlooked your usage of where blocks: everything in a where block needs to be statically (@Shared) available, so using instance-level constructs won't work. Resetting the browser also kills every reference, so grabbing it beforehand won't work either. Basically, don't use Geb objects in where blocks!
Looking at your code, however, I don't see any reason to use data-driven tests here; this can easily be done with one assertion in a normal test.
It is good practice for unit tests to test just one thing. Geb, however, is not a unit test but an acceptance/frontend test. Such tests are much slower than unit tests, so it makes sense to combine sensible assertions into one test.
class MyFailingTest extends GebReportingSpec {

    def setup() {
        via Dashboard
        SomeClass.login("SourMonk", "myPassword")
        assert page instanceof Dashboard
        nav.goToReport("Some report name")
        assert page instanceof Report
    }

    def "I work"() {
        given:
        at Report

        expect:
        ["some list", "of values"] == anotherModule.someContent*.@id
    }

    def "I don't work"() {
        given:
        at Report

        expect:
        ["some other", "list", "of values"] == iconRow.columnHeaders*.attr("innerText")*.toUpperCase()
    }
}

Programmatically execute Gatling tests

I want to use something like Cucumber JVM to drive performance tests written for Gatling.
Ideally the Cucumber features would somehow build a scenario dynamically - probably reusing predefined chain objects similar to the method described in the "Advanced Tutorial", e.g.
val scn = scenario("Scenario Name").exec(Search.search("foo"), Browse.browse, Edit.edit("foo", "bar"))
I've looked at how the Maven plugin executes the scripts, and I've also seen mention of using an App trait, but I can't find any documentation for the latter, and it strikes me that somebody else must have wanted to do this before...
Can anybody point (a Gatling noob) in the direction of some documentation or example code of how to achieve this?
EDIT 20150515
So to explain a little more:
I have created a trait which is intended to build up a sequence of (I think) ChainBuilders that are triggered by Cucumber steps:
import cucumber.api.scala.{EN, ScalaDsl}
import scala.collection.mutable.ArrayBuffer

trait GatlingDsl extends ScalaDsl with EN {

  private val gatlingActions = new ArrayBuffer[GatlingBehaviour]

  def withGatling(action: GatlingBehaviour): Unit = {
    gatlingActions += action
  }
}
A GatlingBehaviour would look something like:
object Google {

  class Home extends GatlingBehaviour {
    def execute: ChainBuilder =
      exec(http("Google Home")
        .get("/")
      )
  }

  class Search extends GatlingBehaviour { ... }

  class FindResult extends GatlingBehaviour { ... }
}
And inside the StepDef class:
class GoogleStepDefinitions extends GatlingDsl {

  Given("""^the Google search page is displayed$""") { () =>
    println("Loading www.google.com")
    withGatling(Home())
  }

  When("""^I search for the term "(.*)"$""") { (searchTerm: String) =>
    println("Searching for '" + searchTerm + "'...")
    withGatling(Search(searchTerm))
  }

  Then("""^"(.*)" appears in the search results$""") { (expectedResult: String) =>
    println("Found " + expectedResult)
    withGatling(FindResult(expectedResult))
  }
}
The idea being that I can then execute the whole sequence of actions via something like:
val scn = scenario(cucumberScenario).exec(gatlingActions)
setUp(scn.inject(atOnceUsers(1)).protocols(httpConf))
and then check the reports, or catch an exception if the test fails, e.g. if the response time is too long.
It seems that no matter how I use the exec method, it executes there and then, rather than waiting for the scenario to run.
Also, I don't know if this is the best approach to take; we'd like to build some reusable blocks for our Gatling tests that can be composed via Cucumber's Given/When/Then style. Is there a better or already existing approach?
Sadly, it's not currently feasible to have Gatling directly start a Simulation instance. It's not technically infeasible; you're just the first person to try to do this.
Currently, Gatling is usually in charge of compiling and can only be passed the name of the class to load, not an instance itself.
You could start by forking io.gatling.app.Gatling and io.gatling.core.runner.Runner, and then provide a PR to support this new behavior. The former is the main entry point, and the latter is the one that can instantiate and run the simulation.
I recently ran into a similar situation and did not want to fork Gatling. While this solved my immediate problem, it only partially solves what you are trying to do; hopefully someone else will find it useful.
There is an alternative: Gatling runs on the JVM, so you can call Gatling.main directly and pass it the arguments you need to run the Simulation you want. The problem is that main explicitly calls System.exit, so you also have to install a custom security manager to prevent it from actually exiting.
You need to know two things:
the class (with the full package) of the Simulation you want to run, for example com.package.your.Simulation1
the path where the binaries are compiled
The code to run a Simulation:
protected void fire(String gatlingGun, String binaries) {
    // Swap in a security manager that blocks Gatling's System.exit call.
    SecurityManager sm = System.getSecurityManager();
    System.setSecurityManager(new GatlingSecurityManager());
    String[] args = {"--simulation", gatlingGun,
                     "--results-folder", "gatling-results",
                     "--binaries-folder", binaries};
    try {
        io.gatling.app.Gatling.main(args);
    } catch (SecurityException se) {
        // Thrown by our security manager when Gatling tries to exit.
        LOG.debug("gatling test finished.");
    }
    // Restore the original security manager.
    System.setSecurityManager(sm);
}
The simple security manager I used:
import java.security.Permission;

public class GatlingSecurityManager extends SecurityManager {

    @Override
    public void checkExit(int status) {
        throw new SecurityException("Tried to exit.");
    }

    @Override
    public void checkPermission(Permission perm) {
        return;
    }
}
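
For illustration, a call could then look like this (the simulation class and the binaries path are placeholders, not values from the original answer):

// Hypothetical usage: pass the fully-qualified simulation class and the
// folder containing the compiled simulation classes.
fire("com.package.your.Simulation1", "target/test-classes");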
The problem is then getting the information you want out of the simulation after it has been run.

Authentication test running strangely

I've just tried to write a simple test for Auth:
use Mockery as m;
...
public function testHomeWhenUserIsNotAuthenticatedThenRedirectToWelcome() {
    $auth = m::mock('Illuminate\Auth\AuthManager');
    $auth->shouldReceive('guest')->once()->andReturn(true);

    $this->call('GET', '/');

    $this->assertRedirectedToRoute('general.welcome');
}

public function testHomeWhenUserIsAuthenticatedThenRedirectToDashboard() {
    $auth = m::mock('Illuminate\Auth\AuthManager');
    $auth->shouldReceive('guest')->once()->andReturn(false);

    $this->call('GET', '/');

    $this->assertRedirectedToRoute('dashboard.overview');
}
This is the code:
public function getHome() {
    if (Auth::guest()) {
        return Redirect::route('general.welcome');
    }
    return Redirect::route('dashboard.overview');
}
When I run the tests, I get the following output:
EF.....
Time: 265 ms, Memory: 13.00Mb
There was 1 error:
1) PagesControllerTest::testHomeWhenUserIsNotAuthenticatedThenRedirectToWelcome
Mockery\Exception\InvalidCountException: Method guest() from Mockery_0_Illuminate_Auth_AuthManager should be called
exactly 1 times but called 0 times.
—
There was 1 failure:
1) PagesControllerTest::testHomeWhenUserIsAuthenticatedThenRedirectToDashboard
Failed asserting that two strings are equal.
--- Expected
+++ Actual
@@ @@
-'http://localhost/dashboard/overview'
+'http://localhost/welcome'
My questions are:
The two test cases are similar, so why does the error output differ? In the first, the mocked Auth::guest() is not called, while in the second it seems to be called.
Why does the second test case fail?
Is there a way to write better tests for the code above, or even better code to test?
In the test cases above I use Mockery to mock the AuthManager, but if I use the facade Auth::shouldReceive()->once()->andReturn() instead, it works. Is there any difference between mocking with Mockery directly and mocking via the Auth facade here?
Thanks.
You're actually mocking a new instance of Illuminate\Auth\AuthManager, not the Auth facade that is being used by your getHome() function. Ergo, your mock instance will never get called. That also explains the differing output: in both tests the real Auth::guest() returns true (no user is logged in under test), so the first test's redirect assertion happens to pass and Mockery's call-count verification then raises the error, while in the second test the redirect assertion fails before verification, producing the failure. (Standard disclaimer that none of the following code is tested.)
Try this:
public function testHomeWhenUserIsNotAuthenticatedThenRedirectToWelcome() {
    Auth::shouldReceive('guest')->once()->andReturn(true);

    $this->call('GET', '/');

    $this->assertRedirectedToRoute('general.welcome');
}

public function testHomeWhenUserIsAuthenticatedThenRedirectToDashboard() {
    Auth::shouldReceive('guest')->once()->andReturn(false);

    $this->call('GET', '/');

    $this->assertRedirectedToRoute('dashboard.overview');
}
If you check out Illuminate\Support\Facades\Facade, you'll see that it takes care of mocking for you. If you really wanted to do it the way you were doing it (creating a mock instance of the AuthManager yourself), you'd have to inject it into the code under test somehow. Assuming you extend the TestCase class provided by Laravel, that could be done with something like this:
public function testHomeWhenUserIsNotAuthenticatedThenRedirectToWelcome() {
    // Swap out the 'auth' binding in the container with your mock instance.
    $this->app['auth'] = $auth = m::mock('Illuminate\Auth\AuthManager');
    $auth->shouldReceive('guest')->once()->andReturn(true);

    $this->call('GET', '/');

    $this->assertRedirectedToRoute('general.welcome');
}

markTestSkipped() not working with sausage-based Selenium tests via Sauce Labs

I am using the sausage framework to run parallelized PHPUnit-based Selenium WebDriver tests through Sauce Labs. Everything works well until I want to mark a test as skipped via markTestSkipped(). I have tried this via two methods:
Calling markTestSkipped() in the test method itself:
class MyTest
{
    public function setUp()
    {
        // Some set up
        parent::setUp();
    }

    public function testMyTest()
    {
        $this->markTestSkipped('Skipping test');
    }
}
In this case, the test gets skipped, but only after performing setUp, which does a lot of unnecessary work for a skipped test. To top it off, PHPUnit does not track the test as skipped -- in fact, it doesn't track the test at all. I get the following output:
Running phpunit in 4 processes with <PATH_TO>/vendor/bin/phpunit
Time: <num> seconds, Memory: <mem used>
OK (0 tests, 0 assertions)
The other method is calling markTestSkipped() in the setUp method:
class MyTest
{
    public function setUp()
    {
        if (!$this->shouldRunTest()) {
            $this->markTestSkipped('Skipping test');
        } else {
            parent::setUp();
        }
    }

    protected function shouldRunTest()
    {
        $shouldrun = // some checks to see if the test should be run
        return $shouldrun;
    }

    public function testMyTest()
    {
        // run the test
    }
}
In this case, setUp is skipped, but PHPUnit still fails to track the test as skipped and still produces the output above. Any ideas why PHPUnit is not tracking my skipped tests when they are executed in this fashion?
It looks like, at the moment, there is no support for logging markTestSkipped() and markTestIncomplete() results in PHPUnit when using paratest. More precisely, PHPUnit won't log tests which call markTestSkipped() or markTestIncomplete() if invoked with arguments['junitLogfile'] set -- and paratest calls PHPUnit with a JUnit logfile.
For more info, see: https://github.com/brianium/paratest/issues/60
I suppose I can hack away at either phpunit or paratest...

Grails integration test failing with MissingMethodException

I'm attempting to test a typical controller flow for user login. There are extended relations, as with most login systems, and the Grails documentation is completely useless here: it doesn't contain a single feature-complete example that is relevant to typical real-world usage.
My test looks like this:
@TestFor(UserController)
class UserControllerTests extends GroovyTestCase {

    void testLogin() {
        params.login = [email: "test1@example.com", password: "123"]
        controller.login()
        assert "/user/main" == response.redirectUrl
    }
}
The controller does:
def login() {
    if (!params.login) {
        return
    }
    println("Email " + params.login.email)
    Person p = Person.findByEmail(params?.login?.email)
    ...
}
which fails with:
groovy.lang.MissingMethodException: No signature of method: immigration.Person.methodMissing() is applicable for argument types: () values: []
The correct data is shown by the println, but the finder method call fails.
The test suite cannot use mocks overall because there are database triggers that get called, the result of which will need to be tested.
The bizarre thing is that if I call the method directly in the test, it works, but calling it in the controller doesn't.
As an update: I regenerated the test directly from the grails command, and it added the @Test annotation:
@TestFor(UserController)
class UserControllerTests extends GroovyTestCase {

    @Test
    void testLogin() {
        params.login = [email: "test1@example.com", password: "123"]
        Person.findByEmail(params.login.email)
        controller.login()
    }
}
This works if I run it with
grails test-app -integration UserController
though the result isn't populated correctly: the response is empty, flash.message is null even though it should have a value, redirectedUrl is null, the content body is empty, and so is the view.
If I remove the @TestFor annotation, it doesn't work in the slightest; it fails telling me that params doesn't exist.
In another test, I have two methods. The first method runs and finds Person.findAllByEmail(); then the second method runs, can't find Person.findAllByEmail, and crashes with a similar method-missing error.
In another weird update: it looks like the response object is sending back a redirect, but to the application baseUrl, not to the user controller at all.
Integration tests shouldn't use @TestFor. You need to create an instance of the controller in your test, and set params on that:
class UserControllerTests extends GroovyTestCase {

    void testLogin() {
        def controller = new UserController()
        controller.params.login = [email: 'test1@example.com', password: '123']
        controller.login()
        assert "/user/main" == controller.response.redirectedUrl
    }
}
Details are in the user guide.
The @TestFor annotation is used only in unit tests, since it mocks the structure. In integration tests you have access to the full Grails environment, so there's no need for these mocks. Just remove the annotation and it should work:
class UserControllerTests extends GroovyTestCase {
    ...
}