Force running a Codeception test before another

If I define a @depends annotation as below, the test cannot be run if the createObjectBase test has not run successfully before it.
Sometimes I don't want to run the whole suite, but only the createObjectGeo test.
How can I tell Codeception that if I run createObjectGeo, it should run createObjectBase before it?
/**
 *
 */
public function createObjectBase(AcceptanceTester $I) {
}

/**
 * @depends createObjectBase
 */
public function createObjectGeo(AcceptanceTester $I) {
}

You should look at the @before/@after annotations for this functionality:
/**
 *
 */
public function createObjectBase(AcceptanceTester $I) {
}

/**
 * @before createObjectBase
 */
public function createObjectGeo(AcceptanceTester $I) {
}
Please take a look at the documentation: http://codeception.com/docs/07-AdvancedUsage#BeforeAfter-Annotations
The tests will be executed in the same order they are written in the Cest file.
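With @before in place, you can then run only the dependent test; a minimal sketch of the invocation (the Cest class name ObjectCest is an assumption for illustration):

php vendor/bin/codecept run acceptance ObjectCest:createObjectGeo

Because of the @before annotation, Codeception invokes createObjectBase before createObjectGeo even though only the latter was requested.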

Laravel middleware is not called in feature test

I have a new middleware that works as expected in the browser. However, when I try to trigger the middleware via a feature test, the handle() is never called.
I understand I can write a unit test for this middleware, and should. But should my actual feature test be moved to a browser test?
# Kernel.php
protected $middlewareGroups = [
    'web' => [
        MyMiddleware::class,
        ...

# MyMiddleware.php
public function handle($request, Closure $next)
{
    dd('I can see this in the browser, but not in the Feature test. Doing some 302 magic here.');
# Feature Test
/**
 * @test
 * @return void
 */
public function my_new_test(): void
{
    $this->get('/test')
        ->assertStatus(302)
        ->assertRedirect($vanityDomain->getFallbackRedirectUrl('/non-matching-path'));
}
It's working for me; I'm not sure why yours is not.
The middleware:
class MyMiddleware
{
    public function handle($request, Closure $next)
    {
        if ('test' === $request->path()) {
            return redirect('/test1');
        }
        return $next($request);
    }
}
The feature test:
class ExampleTest extends TestCase
{
    /**
     * A basic test example.
     */
    public function testBasicTest()
    {
        $this->get('/test')->assertStatus(302)->assertRedirect('test1');
    }
}
By the way, unit and feature tests are just a way of grouping your tests. It does not matter where you put them; they all behave the same.
On Laravel 8 I had to run php artisan optimize to get it working; it seems the route cache somehow needed to be cleared.
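One more thing worth ruling out (an assumption on my part, since the question doesn't show the base TestCase): Laravel's testing helpers can disable middleware entirely, in which case handle() is never invoked no matter how the route is configured. A minimal sketch:

use Illuminate\Foundation\Testing\WithoutMiddleware;

class MyFeatureTest extends TestCase
{
    // If this trait is used here or on a parent TestCase, MyMiddleware::handle()
    // will never run during the test:
    // use WithoutMiddleware;

    public function test_middleware_redirects(): void
    {
        // $this->withoutMiddleware() has the same per-test effect; calling
        // withMiddleware() explicitly re-enables middleware for this request.
        $this->withMiddleware();

        $this->get('/test')->assertStatus(302);
    }
}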

Execute SQL script when updating database schema with Doctrine

Is it possible to append (or execute) custom SQL queries when executing:
app/console doctrine:schema:update --force
I have a script that creates all my views, and I want them to be updated whenever I update the database schema.
Sure, you can extend the UpdateSchemaDoctrineCommand and inject the EntityManager into it by defining the command as a service.
The command:
// src/AppBundle/Command/CustomUpdateSchemaCommand.php
<?php

namespace AppBundle\Command;

use Doctrine\Bundle\DoctrineBundle\Command\Proxy\UpdateSchemaDoctrineCommand;
use Doctrine\ORM\EntityManagerInterface;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class CustomUpdateSchemaCommand extends UpdateSchemaDoctrineCommand
{
    /** @var EntityManagerInterface */
    private $em;

    /**
     * @param EntityManagerInterface $em
     */
    public function __construct(EntityManagerInterface $em)
    {
        $this->em = $em;
        parent::__construct();
    }

    /**
     * {@inheritDoc}
     */
    protected function configure()
    {
        parent::configure();
    }

    /**
     * {@inheritDoc}
     */
    protected function execute(InputInterface $input, OutputInterface $output)
    {
        $output->writeln('Hello world');

        $conn = $this->em->getConnection();
        $conn->exec(/* QUERY */);

        return parent::execute($input, $output);
    }
}
The service:
# app/config/services.yml
app.command.custom_schema_update_command:
    class: AppBundle\Command\CustomUpdateSchemaCommand
    arguments: ["@doctrine.orm.entity_manager"]
    tags:
        - { name: console.command }
Hope this helps.
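For the original use case (recreating views), the /* QUERY */ placeholder could be filled along these lines; the view definition and script path below are illustrative assumptions, not part of the original answer:

// a single statement...
$conn->exec('CREATE OR REPLACE VIEW active_users AS SELECT id, name FROM app_user WHERE active = 1');

// ...or a whole script shipped with the project (hypothetical path):
$conn->exec(file_get_contents(__DIR__.'/../Resources/sql/views.sql'));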

How can I have @After run even if a Cucumber step failed?

We have several Cucumber step definitions that modify the database, which would mess up subsequent tests if things are not cleaned up after each test runs. We do this by having a function with the @After annotation that cleans things up.
The problem is that if there's a failure in one of the tests, the function with @After doesn't run, which leaves the database in a bad state.
So the question is: how can I make sure the function with @After always runs, regardless of whether a test failed or not?
I saw this question, but it's not exactly what I'm trying to do, and the answers don't help.
If it helps, here is part of one of the tests. It's been greatly stripped down, but it has what I think are the important parts.
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.equalTo;

import cucumber.api.DataTable;
import cucumber.api.java.After;
import cucumber.api.java.en.Given;
import gherkin.formatter.model.DataTableRow;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.transaction.annotation.Transactional;

public class RunMacroGMUStepDefinition
{
    @Autowired
    protected ClientSOAPRecordkeeperInterface keeper;

    @Given( "^the following Macro exists:$" )
    @Transactional
    public void establishDefaultPatron( final DataTable dataTable )
    {
        for ( final DataTableRow dataTableRow : dataTable.getGherkinRows() )
        {
            // Stuff happens here
            keeper.insert( macroScriptRecord );
        }
    }

    @After( value = "@RunMacroGMU" )
    @Transactional
    public void teardown()
    {
        for ( int i = 0; i < macroScripts.size(); i++ )
        {
            keeper.delete( macroScripts.get( i ) );
        }
    }

    // Part of @Then
    private void compareRecords( final String has /* , other stuff */ )
    {
        // Stuff happens here
        if ( has.equals( "include" ) )
        {
            assertThat( "No matching data found", foundMatch, equalTo( true ) );
        }
        else
        {
            assertThat( "Found matching data", foundMatch, equalTo( false ) );
        }
    }
}
I personally use Behat (the PHP flavour of Cucumber), and we use something like this to take screenshots after a failed test. I did a bit of searching and found this snippet in Java that may help with this situation:
@After
public void tearDown(Scenario scenario) {
    if (scenario.isFailed()) {
        // insert the functions you would like to run after a failing test here
    }
    driver.close();
}
I hope this helps.
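A side note (my own sketch, not part of the answer above): if the same hook mixes diagnostics with cleanup, an exception thrown by the diagnostics would skip the remaining lines, so a try/finally keeps the database teardown guaranteed:

import cucumber.api.Scenario;
import cucumber.api.java.After;

public class CleanupHooks {

    @After(value = "@RunMacroGMU")
    public void teardown(Scenario scenario) {
        try {
            if (scenario.isFailed()) {
                // capture diagnostics for the failing scenario here
            }
        } finally {
            // the database cleanup always runs, even if the diagnostics throw,
            // e.g. deleting the records inserted by the @Given steps
        }
    }
}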

Behat with Mink: clean up before each test

I am trying to find a way to run a cleanup (of the DB) before each test. How can I do this if I am using Behat with Mink? My current FeatureContext.php looks like this:
class FeatureContext extends MinkContext
{
    /**
     * Initializes context.
     * Every scenario gets its own context object.
     *
     * @param array $parameters context parameters (set them up through behat.yml)
     */
    public function __construct(array $parameters)
    {
        // Initialize your context here
    }
}
Use hooks in your context; read the docs for Behat 3 or Behat 2. Example for Behat 3:
// features/bootstrap/FeatureContext.php
use Behat\Behat\Context\Context;
use Behat\Testwork\Hook\Scope\BeforeSuiteScope;
use Behat\Behat\Hook\Scope\AfterScenarioScope;

class FeatureContext implements Context
{
    /**
     * @BeforeSuite
     */
    public static function prepare(BeforeSuiteScope $scope)
    {
        // prepare the system for the test suite
        // before it runs
    }

    /**
     * @AfterScenario @database
     */
    public function cleanDB(AfterScenarioScope $scope)
    {
        // clean the database after scenarios
        // tagged with @database
    }
}
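For the "clean before each test" part specifically, the same hook mechanism works with @BeforeScenario inside your FeatureContext; a minimal sketch, where the PDO credentials and table name are placeholders to be taken from your own configuration (for example, the behat.yml parameters):

// in features/bootstrap/FeatureContext.php
use Behat\Behat\Hook\Scope\BeforeScenarioScope;

class FeatureContext implements Context
{
    /**
     * @BeforeScenario
     */
    public function cleanDatabase(BeforeScenarioScope $scope)
    {
        // Placeholder connection details and table name; adapt to your setup.
        $pdo = new \PDO('mysql:host=localhost;dbname=app_test', 'user', 'secret');
        $pdo->exec('TRUNCATE TABLE users');
    }
}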

JUnit Test against an interface without having the implementation yet

I am trying to write a JUnit test for a given interface like the one below, and I have no idea how to do that:
public interface ShortMessageService {
    /**
     * Creates a message. A message is related to a topic.
     * Creates a date for the message.
     * @throws IllegalArgumentException if the message is longer than 255 characters.
     * @throws IllegalArgumentException if the message is shorter than 10 characters.
     * @throws IllegalArgumentException if the user doesn't exist
     * @throws IllegalArgumentException if the topic doesn't exist
     * @throws NullPointerException if one argument is null.
     * @param userName
     * @param message
     * @return ID of the newly created message
     */
    Long createMessage(String userName, String message, String topic);

    [...]
}
I tried to mock the interface, then realized that this doesn't make sense at all, so I am a bit lost. Maybe someone can give me a good approach to work with. I have also heard about JUnit parameterized tests, but I am not sure that is what I am looking for.
Many thanks!
I use the following pattern to write abstract tests against my interface APIs without having any implementations available. You can write whatever tests you require in AbstractShortMessageServiceTest without having an implementation at that point in time.
public abstract class AbstractShortMessageServiceTest
{
    /**
     * @return A new empty instance of an implementation of ShortMessageService.
     */
    protected abstract ShortMessageService getNewShortMessageService();

    private ShortMessageService testService;

    @Before
    public void setUp() throws Exception
    {
        testService = getNewShortMessageService();
    }

    @Test
    public void testFooBar() throws Exception
    {
        assertEquals("question", testService.createMessage(
                "DeepThought", "42", "everything"));
    }
}
When you have an implementation, you can use the test simply by defining a new test class that extends AbstractShortMessageServiceTest and implements the getNewShortMessageService method.
public class MyShortMessageServiceTest extends AbstractShortMessageServiceTest
{
    protected ShortMessageService getNewShortMessageService()
    {
        return new MyShortMessageService();
    }
}
In addition, if you need the test to be parameterized, you can do that in AbstractShortMessageServiceTest without doing it in each of the concrete tests.
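The abstract class can also encode the exception contract documented in the interface's Javadoc, so every future implementation is checked against it. A sketch of one such test (the 256-character message is derived from the documented 255-character limit):

@Test(expected = IllegalArgumentException.class)
public void testRejectsMessageLongerThan255Characters() throws Exception
{
    // 256 characters violates the documented upper bound of 255
    String tooLong = new String(new char[256]).replace('\0', 'x');
    testService.createMessage("DeepThought", tooLong, "everything");
}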
Usually a test is written for the class that implements the interface, and mocks are used for the cooperating classes; but you can run your test against a mock if the class is not ready yet. That is unusual, and you would have to use thenAnswer with the logic of the possible cases implemented.
The better way is to simply prepare the tests for the implementation class and keep improving it until all the tests pass.
The implementing class can be held in a field and initialized before the tests:
private ShortMessageService testedClasOrMock;

// version with the implementing class
@Before
public void setUp() {
    testedClasOrMock = new ShortMessageServiceImpl0();
}

// alternative version with a mock
// (assumes: import static org.mockito.Mockito.*; plus
// org.mockito.stubbing.Answer and org.mockito.invocation.InvocationOnMock)
@Before
public void setUp() {
    testedClasOrMock = mock(ShortMessageService.class);
    when(testedClasOrMock.createMessage(anyString(), anyString(), anyString()))
        .thenAnswer(new Answer<Long>() {
            @Override
            public Long answer(InvocationOnMock invocation) throws Throwable {
                String message = (String) invocation.getArguments()[1];
                if (message.length() > 255) {
                    throw new IllegalArgumentException("msg is too long");
                }
                // other exception-throwing cases
                // ...
                return 44L;
            }
        });
}
So you will have several tests with expected exceptions, like:
@Test(expected = IllegalArgumentException.class)
public void testTooLongMsg() {
    testedClasOrMock.createMessage(USER, TOO_LONG_MSG, TOPIC);
}
and one that simply should not throw an exception and, for instance, checks that message IDs are different:
@Test
public void testMessageIdsDiffer() {
    // VALID_MSG stands for any message within the documented length bounds
    long id0 = testedClasOrMock.createMessage(USER, VALID_MSG, TOPIC);
    long id1 = testedClasOrMock.createMessage(USER, VALID_MSG, TOPIC);
    assertTrue(id0 != id1);
}
If you insist on testing your test against a mock, let me know and I will add an example for one test case.