How to exclude a BeforeFeature hook for a specific tag in Behat

I have a Behat feature with a specific tag, and I have two BeforeFeature hooks. One of them runs for every feature and the other runs only for features with the specific tag. The problem is that the untagged BeforeFeature hook also runs before the features that have the tag. I want that hook not to run for features that carry the tag.
For example, I have the following feature:
@taggedFeature
Feature: This feature runs tagged beforeFeature
In my FeatureContext I have the following BeforeFeature hooks:
/** @BeforeFeature */
public static function beforeFeatureDefault()
{
    // Do something
}

and

/** @BeforeFeature @taggedFeature */
public static function beforeFeatureTagged()
{
    // Do something
}
What I want is for Behat not to run the beforeFeatureDefault() hook before my tagged feature.

You can exclude the tagged features with ~:
/** @BeforeFeature ~@taggedFeature */
public static function beforeFeatureDefault() {
    echo 'beforeFeatureDefault()';
}

/** @BeforeFeature @taggedFeature */
public static function beforeFeatureTagged() {
    echo 'beforeFeatureTagged()';
}


GoogleTest: is there a generic way to add a function call prior to each test case?

My scenario: I have an existing unit test framework with ~3000 individual test cases, built from the TEST, TEST_F and TEST_P macros.
Internally, the tested modules make use of a logger library, and my goal is now to create an individual log file for each test case. To do so I would like to call a function as a SetUp for each test case.
Is there a way to register such function at the framework and get it called automatically?
The obvious solution for me would look like: do the work in a test fixture constructor or SetUp() but then I'd have to touch every single test case.
I do like the idea of registering a global setup at the framework with AddGlobalTestEnvironment(), but as I understand it, this is handled only once per executable.
By the way: I have acceptance tests implemented in robot test and guess what? I want to repeat the task there...
Thanks for any inspiration!
Christoph
You mentioned:
The obvious solution for me would look like: do the work in a test fixture constructor or SetUp() but then I'd have to touch every single test case.
If the reason you think you would need to touch every single test case is to set the file name differently, you can combine the SetUp() function with the current_test_info provided by GTest to get the name of each test, and then use that to create a separate log file for each test.
Here is an example:
#include <gtest/gtest.h>

#include <iostream>
#include <string>

// Class for test fixture
class MyTestFixture : public ::testing::Test {
 protected:
  void SetUp() override {
    test_name_ = std::string(
        ::testing::UnitTest::GetInstance()->current_test_info()->name());
    std::cout << "test_name_: " << test_name_ << std::endl;
    // CreateYourLogFileUsingTestName(test_name_);
  }

  std::string test_name_;
};

TEST_F(MyTestFixture, Test1) {
  EXPECT_EQ(this->test_name_, std::string("Test1"));
}

TEST_F(MyTestFixture, Test2) {
  EXPECT_EQ(this->test_name_, std::string("Test2"));
}
Live example here: https://godbolt.org/z/YjzEG3G77
The solution I found in the gtest docs:
#include <gtest/gtest.h>

class TraceHandler : public testing::EmptyTestEventListener
{
  // Called before a test starts.
  void OnTestStart( const testing::TestInfo& test_info ) override
  {
    // set the log file name here
  }

  // Called after a test ends.
  void OnTestEnd( const testing::TestInfo& test_info ) override
  {
    // close the log here
  }
};

int main( int argc, char** argv )
{
  testing::InitGoogleTest( &argc, argv );
  testing::TestEventListeners& listeners =
      testing::UnitTest::GetInstance()->listeners();

  // Adds a listener to the end. googletest takes the ownership.
  listeners.Append( new TraceHandler );

  return RUN_ALL_TESTS();
}
This way it automatically applies to all tests linked against this main function.
Maybe I should mention: my logger is a collection of static functions that send UDP packets to a receiver that takes care of the actual logging. I can control the file name via one of those functions. That's the reason why I don't need to insert code into every single TEST, TEST_F or TEST_P.

PHPUnit: reusing a data provider

I want to run multiple test cases against the content of a whole set of files. I could use a data provider to load my files and use the same provider for all the tests like this:
class mytest extends PHPUnit_Framework_TestCase {
    public function contentProvider() {
        return glob(__DIR__ . '/files/*');
    }

    /**
     * @dataProvider contentProvider
     */
    public function test1($file) {
        $content = file_get_contents($file);
        // assert something here
    }

    ...

    /**
     * @dataProvider contentProvider
     */
    public function test10($file) {
        $content = file_get_contents($file);
        // assert something here
    }
}
Obviously that means if I have 10 test cases, each file is loaded 10 times.
I could adjust the data provider to load all files and return one big structure with all the contents. But since the provider is called separately for each test, it would still mean each file is loaded 10 times, and in addition it would load all files into memory at the same time.
I could of course condense the 10 tests into one test with 10 assertions, but then it would abort right after the first failing assertion, and I really want a report of all the things that are wrong with the file.
I know that data providers can also return an iterator. But phpunit seems to rerun the iterator separately for each test, still resulting in loading each file 10 times.
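For reference, a generator-based provider for the class above would look roughly like this (just a sketch; as noted, PHPUnit still consumes the generator separately for every test method that uses it):
public function contentProvider() {
    // A generator is a valid data provider, but PHPUnit re-runs it
    // for each test method, so the files are still read repeatedly.
    foreach (glob(__DIR__ . '/files/*') as $file) {
        yield [file_get_contents($file)];
    }
}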
Is there a clever way to make phpunit run an iterator only once and pass the result to each test, before continuing?
Test dependencies
If some tests depend on others, you should use the @depends annotation to declare test dependencies. The data returned by the dependency is passed to the test that declares the dependency.
But if a test declared as a dependency fails, the dependent test is not executed.
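For illustration, here is a minimal @depends sketch (the file path and method names are placeholders, not from the question): the producing test returns a value, and PHPUnit passes it as an argument to each test that declares the dependency.
<?php
use PHPUnit\Framework\TestCase;

class FileContentTest extends TestCase
{
    public function testFileCanBeRead()
    {
        // Hypothetical fixture file; adjust the path to your project.
        $content = file_get_contents(__DIR__ . '/files/example.txt');
        $this->assertNotFalse($content, 'File must be readable.');
        return $content; // handed to every test that depends on this one
    }

    /**
     * @depends testFileCanBeRead
     */
    public function testFileIsNotEmpty($content)
    {
        $this->assertNotEmpty($content, 'File must not be empty.');
    }
}
The file is read only once, but the price is that if testFileCanBeRead fails, testFileIsNotEmpty is skipped instead of being reported on individually.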
Statically stored data
To share data between tests, it's common to set up fixtures statically.
You can use the same approach with data providers:
<?php
use PHPUnit\Framework\TestCase;

class MyTest extends TestCase
{
    private static $filesContent = NULL;

    public function filesContentProvider()
    {
        // Load every file only once and cache the contents for all tests.
        if (self::$filesContent === NULL) {
            $paths = glob(__DIR__ . '/files/*');
            self::$filesContent = array_map(function ($path) {
                return [file_get_contents($path)];
            }, $paths);
        }
        return self::$filesContent;
    }

    /**
     * @dataProvider filesContentProvider
     */
    public function test1($content)
    {
        $this->assertNotEmpty($content, 'File must not be empty.');
    }

    /**
     * @dataProvider filesContentProvider
     */
    public function test2($content)
    {
        $this->assertStringStartsWith('<?php', $content,
            'File must start with the PHP start tag.');
    }
}
As you can see, it's not supported out of the box. Since the test class instance is destroyed after each test method execution, you have to store the initialized data in a static class property.

Perform an action when proceeding to payment in PrestaShop

After selecting a payment method on the order page, I want to run a small piece of PHP code. Is there any way to perform an action after a payment method has been chosen?
You can create a module and use the hookActionPaymentConfirmation hook:
public function hookActionPaymentConfirmation()
{
    /* Place your code here. */
}
If you do not want to display anything:
public function hookActionValidateOrder()
{
    /* Place your code here. */
}
If you want to display an element:
public function hookDisplayShoppingCart()
{
    /* Place your code here. */
}
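For these hook methods to fire, the module also has to register for the corresponding hooks, usually in its install() method. Here is a minimal module sketch; the class name and module name are placeholders, only the hook names come from the answer above:
<?php
class MyPaymentActions extends Module
{
    public function __construct()
    {
        $this->name = 'mypaymentactions'; // placeholder module name
        $this->version = '1.0.0';
        parent::__construct();
        $this->displayName = 'My payment actions';
    }

    public function install()
    {
        // A hookSomething() method is only called if the module is registered for that hook.
        return parent::install()
            && $this->registerHook('actionPaymentConfirmation')
            && $this->registerHook('actionValidateOrder');
    }

    public function hookActionPaymentConfirmation($params)
    {
        /* Place your code here. */
    }

    public function hookActionValidateOrder($params)
    {
        /* Place your code here. */
    }
}
After the module is installed from the back office, PrestaShop calls the matching hook methods whenever those events occur.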

Is it possible to skip a scenario with Cucumber-JVM at run-time

I want to add a tag @skiponchrome to a scenario; this should skip the scenario when running a Selenium test with the Chrome browser. The reason for doing this is that some scenarios work in some environments and not in others. This might not even be specific to browser testing and could apply in other situations, for example OS platforms.
Example hook:
#Before("#skiponchrome") // this works
public void beforeScenario() {
if(currentBrowser == 'chrome') { // this works
// Skip scenario code here
}
}
I know it is possible to define ~@skiponchrome in the Cucumber tags to skip the tag, but I would like to skip it at run-time. This way I don't have to think about which scenarios to skip in advance when I start a test run in a certain environment.
I would like to create a hook that catches the tag and skips the scenario without reporting a fail/error. Is this possible?
I realized that this is a late update to an already answered question, but I want to add one more option directly supported by cucumber-jvm:
@Before // (the Cucumber one)
public void setup() {
    Assume.assumeTrue(weAreInPreProductionEnvironment);
}
"and the scenario will be marked as ignored (but the test will pass) if weAreInPreProductionEnvironment is false."
You will need to add
import org.junit.Assume;
The major difference from the accepted answer is that a JUnit assumption failure behaves just like pending.
Important: Because of a bug fix you will need cucumber-jvm release 1.2.5, which as of this writing is the latest. For example, the above will generate a failure instead of a pending result with cucumber-java8-1.2.3.jar.
I really prefer to be explicit about which tests are being run, by having separate run configurations defined for each environment. I also like to keep the number of tags I use to a minimum, to keep the number of configurations manageable.
I don't think it's possible to achieve what you want with tags alone. You would need to write a custom JUnit test runner to use in place of @RunWith(Cucumber.class). Take a look at the Cucumber implementation to see how things work. You would need to alter the RuntimeOptions created by the RuntimeOptionsFactory to include/exclude tags depending on the browser, or other runtime condition.
Alternatively, you could consider writing a small script which invokes your test suite, building up a list of tags to include/exclude dynamically, depending on the environment you're running in. I would consider this to be a more maintainable, cleaner solution.
It's actually really easy. If you dig through the Cucumber-JVM and JUnit 4 source code, you'll find that JUnit makes skipping during runtime very easy (just undocumented).
Take a look at the following source code from JUnit 4's ParentRunner, which Cucumber-JVM's FeatureRunner (used by Cucumber, the default Cucumber runner) extends:
@Override
public void run(final RunNotifier notifier) {
    EachTestNotifier testNotifier = new EachTestNotifier(notifier,
            getDescription());
    try {
        Statement statement = classBlock(notifier);
        statement.evaluate();
    } catch (AssumptionViolatedException e) {
        testNotifier.fireTestIgnored();
    } catch (StoppedByUserException e) {
        throw e;
    } catch (Throwable e) {
        testNotifier.addFailure(e);
    }
}
This is how JUnit decides what result to show. If the test is successful it will show a pass, but it's also possible to @Ignore in JUnit, so what happens in that case? Well, when an AssumptionViolatedException is thrown, the runner (the Cucumber FeatureRunner in this case) reports the test as ignored rather than failed.
So your example becomes:
#Before("#skiponchrome") // this works
public void beforeScenario() {
if(currentBrowser == 'chrome') { // this works
throw new AssumptionViolatedException("Not supported on Chrome")
}
}
If you've used vanilla JUnit 4 before, you'll remember that @Ignore takes an optional message that is displayed when a test is ignored by the runner. AssumptionViolatedException carries the message too, so you should see it in your test output after a test is skipped this way, without having to write your own custom runner.
I too had the same challenge, where I needed to skip a scenario based on a flag which I obtain from the application dynamically at run-time, and which tells whether the feature to be tested is enabled on the application or not.
So this is how I wrote my logic in the glue-code file that holds the step definitions.
I used a unique tag '@Feature-01AXX' to mark the scenarios that need to be run only when that feature (code) is available on the application.
So for every scenario, the tag '@Feature-01AXX' is checked first; if it is present, then the availability of the feature is checked, and only then is the scenario picked for running. Otherwise it is simply skipped, and JUnit will not mark this as a failure; instead it will be marked as passed. So the final result, if these tests did not run due to the unavailability of the feature, will be a pass, that's cool...
@Before
public void before(final Scenario scenario) throws Exception {
    /*
     * my other pre-setup tasks for each scenario.
     */

    // Get all the scenario tags from the scenario head.
    final ArrayList<String> scenarioTags = new ArrayList<>();
    scenarioTags.addAll(scenario.getSourceTagNames());

    // Check if the feature is enabled on the appliance, so that the tests can be run.
    if (checkForSkipScenario(scenarioTags)) {
        throw new AssumptionViolatedException("The feature 'Feature-01AXX' is not enabled on this appliance, so skipping");
    }
}

private boolean checkForSkipScenario(final ArrayList<String> scenarioTags) {
    // I use the tag "@Feature-01AXX" on the scenarios which need to run only when the feature is enabled on the appliance/application.
    if (scenarioTags.contains("@Feature-01AXX") && !isTheFeatureEnabled()) { // if the feature is not enabled, we need to skip the scenario
        return true;
    }
    return false;
}

private boolean isTheFeatureEnabled() {
    /*
     * my logic to check if the feature is available/enabled on the application.
     * in my case it's a REST API call; I parse the JSON and check if the feature is enabled.
     * if it is enabled return 'true', else return 'false'
     */
}
I've implemented a customized JUnit runner as shown below. The idea is to add tags at runtime.
So say for a scenario we need a new user; we tag the scenario as "@requires_new_user". Then if we run our tests in an environment (say a production environment which does not allow you to register new users easily), we will figure out that we are not able to get a new user, and "not @requires_new_user" will be added to the Cucumber options to skip the scenario.
This is the cleanest solution I can imagine right now.
public class WebuiCucumberRunner extends ParentRunner<FeatureRunner> {
    private final JUnitReporter jUnitReporter;
    private final List<FeatureRunner> children = new ArrayList<FeatureRunner>();
    private final Runtime runtime;
    private final Formatter formatter;

    /**
     * Constructor called by JUnit.
     *
     * @param clazz the class with the @RunWith annotation.
     * @throws java.io.IOException if there is a problem
     * @throws org.junit.runners.model.InitializationError if there is another problem
     */
    public WebuiCucumberRunner(Class clazz) throws InitializationError, IOException {
        super(clazz);
        ClassLoader classLoader = clazz.getClassLoader();
        Assertions.assertNoCucumberAnnotatedMethods(clazz);

        RuntimeOptionsFactory runtimeOptionsFactory = new RuntimeOptionsFactory(clazz);
        RuntimeOptions runtimeOptions = runtimeOptionsFactory.create();
        addTagFiltersAsPerTestRuntimeEnvironment(runtimeOptions);

        ResourceLoader resourceLoader = new MultiLoader(classLoader);
        runtime = createRuntime(resourceLoader, classLoader, runtimeOptions);
        formatter = runtimeOptions.formatter(classLoader);
        final JUnitOptions junitOptions = new JUnitOptions(runtimeOptions.getJunitOptions());
        final List<CucumberFeature> cucumberFeatures = runtimeOptions.cucumberFeatures(resourceLoader, runtime.getEventBus());
        jUnitReporter = new JUnitReporter(runtime.getEventBus(), runtimeOptions.isStrict(), junitOptions);
        addChildren(cucumberFeatures);
    }

    private void addTagFiltersAsPerTestRuntimeEnvironment(RuntimeOptions runtimeOptions)
    {
        String channel = Configuration.TENANT_NAME.getValue().toLowerCase();
        runtimeOptions.getTagFilters().add("@" + channel);
        if (!TestEnvironment.getEnvironment().isNewUserAvailable()) {
            runtimeOptions.getTagFilters().add("not @requires_new_user");
        }
    }
    ...
}
Or you can extend the official Cucumber JUnit test runner cucumber.api.junit.Cucumber and override the method
/**
 * Create the Runtime. Can be overridden to customize the runtime or backend.
 *
 * @param resourceLoader used to load resources
 * @param classLoader used to load classes
 * @param runtimeOptions configuration
 * @return a new runtime
 * @throws InitializationError if a JUnit error occurred
 * @throws IOException if a class or resource could not be loaded
 * @deprecated Neither the runtime nor the backend or any of the classes involved in their construction are part of
 * the public API. As such they should not be exposed. The recommended way to observe the cucumber process is to
 * listen to events by using a plugin. For example the JSONFormatter.
 */
@Deprecated
protected Runtime createRuntime(ResourceLoader resourceLoader, ClassLoader classLoader,
        RuntimeOptions runtimeOptions) throws InitializationError, IOException {
    ClassFinder classFinder = new ResourceLoaderClassFinder(resourceLoader, classLoader);
    return new Runtime(resourceLoader, classFinder, classLoader, runtimeOptions);
}
You can manipulate runtimeOptions here as you wish. But the method is marked as deprecated, so use it with caution.
If you're using Maven, you could use a browser profile and then set the appropriate ~ exclude tags there?
Unless you're asking how to run this from the command line, in which case you tag the scenario with @skipchrome and then, when you run Cucumber, set the Cucumber options to tags = {"~@skipchrome"}.
If you simply wish to temporarily skip a scenario (for example, while writing the scenarios), you can comment it out (Ctrl+/ in Eclipse or IntelliJ).

PHPUnit: database test after tearDown

I want to execute several tests in a test case / test suite (via Selenium) and hook a database check onto the end of every tearDown (with asserts, which can't be called in tearDown).
So the workflow would be:
Setup the Connection to the database and the Schema in setUpBeforeClass()
Setup the Database (only the contents) in setUp()
Execute test01
TearDown contents
Assert that every table in the database has a row count of zero.
So is there a way to hook an additional assert onto the end of every tearDown?
I tried to do the setup in assertPreConditions() and the teardown in assertPostConditions(), but that's kind of ugly.
Thx in advance
It seems you can use an assert anywhere, even in tearDown(). This test case (save as testTearDown.php, run with phpunit testTearDown.php) correctly gives a fail:
class TearDownTest extends PHPUnit_Framework_TestCase
{
    /** */
    public function setUp(){
        echo "In setUp\n";
        //$this->assertTrue(false);
    }

    /** */
    public function tearDown(){
        echo "In tearDown\n";
        $this->assertTrue(false);
    }

    /** */
    public function assertPreConditions(){
        echo "In assertPreConditions\n";
        //$this->assertTrue(false);
    }

    /** */
    public function assertPostConditions(){
        echo "In assertPostConditions\n";
        //$this->assertTrue(false);
    }

    /**
     */
    public function testAdd(){
        $this->assertEquals(3, 1+2);
    }
}
But one rule of thumb I have is: if the software is making my life difficult, maybe I'm doing something wrong. You wrote that after the tearDown code has run you want to: "Assert that every table in the database has a row count of zero."
This sounds like you want to validate that your unit test code has been written correctly, in this case that tearDown has done its job correctly. It is not really anything to do with the code that you are actually testing. Using the PHPUnit assert mechanisms would be confusing and misleading; in my sample above, when tearDown() asserts, it tells me that testAdd() has failed. If it is actually code in tearDown() that is not working correctly, I want to be told that instead. So, for validating your unit test code, why not use PHP's asserts:
So I wonder if the tearDown() function you want could look something like this:
public function tearDown(){
    tidyUpDatabase();                // your own cleanup helper
    $cnt = selectCount("table1");    // placeholder: returns the row count of a table
    assert($cnt == 0);
    $cnt = selectCount("table2");
    assert($cnt == 0);
}