I'm trying to start writing tests within Laravel.
I think it is good practice to write my own basic TestCase that runs the shared test setup (for example, migrations):
<?php

class TestCase extends PHPUnit_Framework_TestCase {

    public function setUp()
    {
        parent::setUp();

        // just for example:
        DB::table('categories')->first();
    }
}
After that, I want all my test cases to inherit from the TestCase created above:
<?php

class TestExample extends TestCase {

    public function testSomethingIsTrue()
    {
        $this->assertTrue(true);
    }
}
So now I have three problems:
1. My TestCase throws an error, class DB not found. Why is the test not autoloading the Laravel framework classes?
2. PHPUnit tells me (with a warning) that my TestCase does not contain any tests, but that is the expected behaviour for this class. How can I avoid this message?
3. My TestExample cannot find the class TestCase.
Can you help me understand this? How can I configure test-specific autoloading?
UPDATE:
Answer to problem 1: because I run the tests in the NetBeans IDE, the IDE itself needs to be configured. Setting up the right phpunit.xml helped.
As revealed in our discussion, when you run your PHPUnit tests from your IDE (NetBeans, PhpStorm and so on), you have to make sure the IDE is correctly configured to pick up your phpunit.xml.
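For reference, a minimal phpunit.xml in this spirit (the paths follow a stock Laravel 4 project and may differ in yours). The important part is the bootstrap attribute: it loads the framework's autoloader before the tests run, which is what makes classes like DB and your shared TestCase visible to PHPUnit:

<?xml version="1.0" encoding="UTF-8"?>
<phpunit bootstrap="bootstrap/autoload.php" colors="true">
    <testsuites>
        <testsuite name="Application Test Suite">
            <directory>./app/tests/</directory>
        </testsuite>
    </testsuites>
</phpunit>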
I'm writing a Gherkin-based acceptance testing PoC. I have a feature file, a step object, and a page object. In my sequence I will need to log in the test user before running the rest of the series. Our SUT is a legacy PHP application that doesn't use any framework.
I would like to store the test user's credentials in a params.yml or another external config file, but I have been unsuccessful in making this work and unable to find a complete example.
My login object is a simple Cest class for now; I didn't think it needed its own feature description, and the rest of the tests will be Gherkin-based where needed. My config files are currently the default configs generated by Codeception 5's bootstrap command, with a gherkin section added for the one feature file I've written so far. Eventually I will run this under WebDriver to enable sessions...for now I'm just trying to establish a reusable environment we can build on for a team of developers.
The Codeception docs seem to gloss over some of these concepts and recommendations for users new to their framework.
I sincerely appreciate any ideas or concerns you may have.
<?php

namespace Tests\Acceptance;

use Codeception\Attribute\Group;
use Tests\Support\AcceptanceTester;
use Tests\Support\Page\Acceptance\LoginPage;

class LoginCest
{
    #[Group('login')]
    public function successfulLogin(AcceptanceTester $I, LoginPage $loginPage)
    {
        $loginPage->login( <testUserHere>, <goodPasshere> ); // <- this is what I want to provide
        $I->dontSeeElement('.alert-error');
        $I->amOnPage("/command.php");
    }

    public function unsuccessfulLogin(AcceptanceTester $I, LoginPage $loginPage)
    {
        $loginPage->login( <testUserHere>, 'baddpass' );
        $I->seeElement('.alert-error');
        $I->amOnPage("/");
    }
}
I ended up writing a config helper:
<?php

declare(strict_types=1);

namespace Tests\Support\Helper;

class Config extends \Codeception\Module
{
    protected array $requiredFields = ['testUser', 'goodPass'];
}
Then this in my suite config:
modules:
    enabled:
        - \Tests\Support\Helper\Config:
            testUser: testuser
            goodPass: supersecretpassword
It may not be right or a best practice...but it worked so I could move forward.
Now if I can just figure out how to use the helper in my Gherkin-driven StepObject...
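One way to expose those values, sketched here rather than taken from the Codeception docs: give the helper public getters that read its module config via _getConfig(). Public methods of enabled modules are mixed into the actor, so a Cest (or a StepObject that extends or receives the AcceptanceTester) can call them directly.

<?php

declare(strict_types=1);

namespace Tests\Support\Helper;

class Config extends \Codeception\Module
{
    protected array $requiredFields = ['testUser', 'goodPass'];

    // Public module methods become available on the AcceptanceTester actor.
    public function getTestUser(): string
    {
        return (string) $this->_getConfig('testUser');
    }

    public function getGoodPass(): string
    {
        return (string) $this->_getConfig('goodPass');
    }
}

In the Cest above that would look roughly like $loginPage->login($I->getTestUser(), $I->getGoodPass());.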
I'm working on a project which uses Quarkus to spin up a few REST endpoints. I have multiple integration tests which run with different test profiles or completely without a test profile. Here's an example:
@QuarkusTest
@Tag("integration")
@TestProfile(SomeProfile::class)
class IntegrationTestWithSomeProfile {

    @Test
    fun someTest() { ... }
}

@QuarkusTest
@Tag("integration")
class IntegrationTestWithoutProfile {

    @Test
    fun someTest() { ... }
}
Now I would like to execute a piece of code before the first test runs (or after the last test has finished). The problem is that @BeforeAll can only be used per class, and I can't use Quarkus' start and stop events since Quarkus is started and shut down multiple times - once for each different test profile.
Is there any hook (or hack - I don't mind dirty stuff as long as it works) that would execute only once at the very beginning?
You can try @QuarkusTestResource with a class implementing QuarkusTestResourceLifecycleManager.
This can be used to start/stop services for the test classes you want.
See: https://quarkus.io/guides/getting-started-testing#quarkus-test-resource
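A rough sketch of what such a resource looks like (the class name and config key are made up for illustration):

import io.quarkus.test.common.QuarkusTestResourceLifecycleManager

class MyServiceResource : QuarkusTestResourceLifecycleManager {

    // Called before the tests that declare this resource; the returned
    // entries are added to the test configuration.
    override fun start(): Map<String, String> {
        // start the external service here
        return mapOf("my.service.url" to "http://localhost:1234")
    }

    // Called after those tests have finished.
    override fun stop() {
        // stop the service here
    }
}

The test class is then annotated with @QuarkusTestResource(MyServiceResource::class).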
I finally found the solution I need. It's called TestExecutionListener. I went the route of adding a file called org.junit.platform.launcher.TestExecutionListener in resources/META-INF/services. Inside this file I've put the fully qualified class name of my class implementing the TestExecutionListener interface.
In there I can then override testPlanExecutionStarted() and testPlanExecutionFinished(). With this, it doesn't matter how many test profiles we use or how many times Quarkus is started and stopped: the TestExecutionListener runs only once.
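A minimal sketch of such a listener (the package and class names are illustrative):

package com.example.testing

import org.junit.platform.launcher.TestExecutionListener
import org.junit.platform.launcher.TestPlan

class GlobalSetupListener : TestExecutionListener {

    // Runs once before the whole test plan, no matter how often Quarkus restarts.
    override fun testPlanExecutionStarted(testPlan: TestPlan) {
        // one-time setup
    }

    // Runs once after the last test has finished.
    override fun testPlanExecutionFinished(testPlan: TestPlan) {
        // one-time cleanup
    }
}

The service file in resources/META-INF/services then contains the single line com.example.testing.GlobalSetupListener.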
I'm having an issue running each NUnit test in parallel with its own browser. When I run this locally, it does launch 2 Chrome instances; however, it uses only one of the browsers for both tests, as opposed to using a separate browser for each test. Any ideas how to set this up correctly?
public class BaseClass
{
    public IWebDriver driver;

    [SetUp]
    public void BaseSetUp()
    {
        driver = new ChromeDriver();
        driver.Manage().Window.Maximize();
    }

    [TearDown]
    public void BaseTearDown()
    {
        driver.Quit();
    }
}
[Parallelizable(ParallelScope.All)]
[TestFixture]
public class DerivedClass : BaseClass
{
    [TestCase("https://www.python.org/", "Welcome to Python.org")]
    [TestCase("https://www.w3.org/", "World Wide Web Consortium (W3C)")]
    public void Test3(string url, string title)
    {
        driver.Navigate().GoToUrl(url);
        Thread.Sleep(4500);
        Assert.AreEqual(driver.Title, title);
    }
}
You are indeed creating the driver twice, but you are storing both instances in the same member field, driver. The reason this happens is that NUnit uses the same instance of a test class for all the tests within that class. (A new feature will be available in a future release to use a separate instance, but that's no help to you right now.)
As the tests run in parallel, the first test to run performs its setup and stores the driver in that field. Then the second test starts and stores its instance in the same field. It's not possible to predict exactly when - during the execution of the tests - that replacement will take place. However, it will most likely happen consistently as you re-run on the same machine.
In the case of the example, the solution is simple. If you want each test case to use a separate driver instance, create the driver within the test itself, perhaps by calling a method that centralizes all your initialization.
An important thing to remember is that the ParallelizableAttribute is used to tell NUnit that the test may be run in parallel. NUnit accepts that as a promise, but if you share state between tests they won't run correctly.
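A sketch of that suggestion, based on the example above (CreateDriver is an illustrative helper, not part of NUnit or Selenium):

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[Parallelizable(ParallelScope.All)]
[TestFixture]
public class DerivedClass
{
    [TestCase("https://www.python.org/", "Welcome to Python.org")]
    [TestCase("https://www.w3.org/", "World Wide Web Consortium (W3C)")]
    public void Test3(string url, string title)
    {
        // Each test owns its driver, so parallel tests no longer overwrite each other.
        IWebDriver driver = CreateDriver();
        try
        {
            driver.Navigate().GoToUrl(url);
            Assert.AreEqual(title, driver.Title);
        }
        finally
        {
            driver.Quit();   // always close this test's browser
        }
    }

    // Centralized initialization, as suggested above.
    private static IWebDriver CreateDriver()
    {
        var driver = new ChromeDriver();
        driver.Manage().Window.Maximize();
        return driver;
    }
}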
I'm setting up an integration test project for ASP.NET Core MVC.
I'm using AspNetBoilerplate.
I created a project with this WebApplicationFactory:
public class CustomWebApplicationFactory : WebApplicationFactory<Program>
{
}
And my test classes inherit from
public partial class ScenarioTest : IClassFixture<CustomWebApplicationFactory>
{
When I run the whole test suite, I get one of these errors:
Castle.MicroKernel.ComponentRegistrationException : Component iloggerfactory could not be registered. There is already a component with that name. Did you want to modify the existing component instead? If not, make sure you specify a unique name.
System.ArgumentException : Facility of type 'Castle.Facilities.Logging.LoggingFacility' has already been registered with the container. Only one facility of a given type can exist in the container.
Castle.MicroKernel.Handlers.HandlerException : Can't create component 'Microsoft.Extensions.Logging.ILoggerProvider_e02ce050-f09e-48d5-b518-57e4ee2ef81b' as it has dependencies to be satisfied.
Then when I try to run the test suite for a second time, I get this:
Cleaning the test project and re-running it makes the errors appear again.
However, I can run a single test class (with ReSharper), and that class runs successfully.
Help with this would be greatly appreciated, as I cannot run a continuous testing session and it's quite annoying.
Change WebApplicationFactory<Program> to WebApplicationFactory<Startup>.
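In other words, the factory from the question would become the following (assuming Startup is the web project's startup class, as in the ABP templates):

public class CustomWebApplicationFactory : WebApplicationFactory<Startup>
{
}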
I'm running Selenium tests through JUnit.
In my system the setUp method of our AbstractSeleniumTestCase class sets up the Selenium web driver and Firefox profile, and the tearDown method logs out of the system and closes Selenium.
Some tests override the setUp and tearDown methods to do custom test setup and teardown.
The problem I'm having is that if an error occurs in the setUp method of a test (like an unexpected popup or a Selenium exception), the web browser is never closed and the test-specific tearDown operations are never performed.
You could use a try block in the setUp() method to run tearDown() after encountering an error, and move the "meat" of the test setup into another method:
public void setUp() throws Exception {
    try {
        mySetUp();
    } catch (Exception e) {
        // If setup fails, still run tearDown() so the browser is closed,
        // then rethrow so the test is reported as failed.
        tearDown();
        throw e;
    }
}
Then, in your subclasses, override mySetUp() instead of setUp().
You should implement a TestWatcher and override the finished method, which, according to the docs:
Invoked when a test method finishes (whether passing or failing)
I have not used JUnit for some time now, but as far as I remember, rules are applied before @Before.
You can also init the driver by overriding the starting method and take any relevant action by overriding the failed method, etc. That way it is possible to get rid of the repetitive stuff in @Before and @After. Check the docs for specifics.
Then you can annotate your test classes with either @Rule or @ClassRule; google to understand which better suits your needs. For any more specific needs, it is also possible to create a rule chain.
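A rough sketch of such a rule (JUnit 4; the class name is illustrative and the driver setup is reduced to the bare minimum):

import org.junit.rules.TestWatcher;
import org.junit.runner.Description;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class SeleniumWatcher extends TestWatcher {

    private WebDriver driver;

    public WebDriver getDriver() {
        return driver;
    }

    @Override
    protected void starting(Description description) {
        // Runs before @Before: create the browser here.
        driver = new FirefoxDriver();
    }

    @Override
    protected void failed(Throwable e, Description description) {
        // e.g. take a screenshot or dump logs on failure
    }

    @Override
    protected void finished(Description description) {
        // Invoked whether the test passed or failed: always close the browser.
        if (driver != null) {
            driver.quit();
        }
    }
}

A test class would then declare it with @Rule public SeleniumWatcher selenium = new SeleniumWatcher(); and use selenium.getDriver() instead of the field initialized in setUp().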