I have a Codeception cest file which has a number of tests in it.
In some of the tests, there are initializations which I would like to do in the _before() hook. These initializations are specific to those tests only and to no other test in the cest file.
How can I go about this?
The pseudocode would be something like
public function _before($event)
{
    if ($event->test_being_run == 'testThatFeature')
    {
        $init = something(here);
    }
}
Through investigation, I have realized that the $event variable passed into the _before() hook is an instance of the generated AcceptanceTester rather than \Codeception\Event\TestCase, so I cannot call the hoped-for $event->getTest()->getTestFullName().
Codeception injects parameters based on type hinting.
If you want to get \Codeception\Event\TestCase, your code must look like this:
public function _before(\Codeception\Event\TestCase $event)
All the information I found about receiving the test case in a _before method was about the _before method of modules and extensions; it does not apply to tests.
If you want to run specific code for one test, just run it in the test code.
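For example, you can keep the shared setup in _before() and move the test-specific initialization into a private helper that only the relevant tests call. A minimal sketch (the helper name is a placeholder, and AcceptanceTester stands in for your suite's tester class):
<?php
class FeatureCest
{
    public function _before(AcceptanceTester $I)
    {
        // setup shared by every test in this cest
    }

    // initialization specific to testThatFeature only
    private function initThatFeature(AcceptanceTester $I)
    {
        // ... feature-specific initialization here ...
    }

    public function testThatFeature(AcceptanceTester $I)
    {
        $this->initThatFeature($I);
        // test steps
    }

    public function testOtherFeature(AcceptanceTester $I)
    {
        // runs without the feature-specific setup
    }
}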
I know that Codeception is designed for command-line usage. But as it is completely based on PHP, I am pretty sure there must be a way to create a test dynamically/temporarily in PHP.
In my case I am getting acceptance test steps from a database and need to run the tests dynamically with Codeception. I would prefer a way to test this without always having to generate and delete temporary test folders and run the Codeception commands on the command line.
The problem is that Codeception dynamically generates a bunch of config files and scripts when creating a cest. I couldn't make it work by using the Codeception classes.
Does anyone have an idea what's the best way to achieve this?
I think that the best approach would be to implement a custom test loader, as documented at https://codeception.com/docs/07-AdvancedUsage#Formats
You still have to use a placeholder file in each suite to kick off the loader, but the tests can be loaded from the database.
Copy of documentation:
In addition to the standard test formats (Cept, Cest, Unit, Gherkin) you can implement your own format classes to customise your test execution. Specify these in your suite configuration:
formats:
  - \My\Namespace\MyFormat
Then define a class which implements the LoaderInterface
namespace My\Namespace;
class MyFormat implements \Codeception\Test\Loader\LoaderInterface
{
    protected $tests;
    protected $settings;

    public function __construct($settings = [])
    {
        // These are the suite settings
        $this->settings = $settings;
    }

    public function loadTests($filename)
    {
        // Load file and create tests
    }

    public function getTests()
    {
        return $this->tests;
    }

    public function getPattern()
    {
        return '~Myformat\.php$~';
    }
}
Look at existing Loader classes for inspiration: https://github.com/Codeception/Codeception/tree/4.0/src/Codeception/Test/Loader
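For the database case, loadTests() can fetch scenario rows and wrap each one as a runnable test. Below is a minimal, untested sketch: the scenarios table, the db_dsn suite setting, and the stored step format are all hypothetical, and it assumes \Codeception\Test\Cest still takes (object, methodName, fileName) as in the 4.0 loader sources linked above:
<?php
namespace My\Namespace;

use Codeception\Test\Cest;

class DbFormat implements \Codeception\Test\Loader\LoaderInterface
{
    protected $tests = [];
    protected $settings;

    public function __construct($settings = [])
    {
        $this->settings = $settings;
    }

    public function loadTests($filename)
    {
        // hypothetical: DSN supplied via the suite settings
        $pdo = new \PDO($this->settings['db_dsn']);
        foreach ($pdo->query('SELECT name, steps FROM scenarios') as $row) {
            $scenario = new DbScenario(json_decode($row['steps'], true));
            // each row becomes one test, executed via DbScenario::run()
            $this->tests[] = new Cest($scenario, 'run', $filename);
        }
    }

    public function getTests()
    {
        return $this->tests;
    }

    public function getPattern()
    {
        return '~Dbformat\.php$~';
    }
}

class DbScenario
{
    private $steps;

    public function __construct(array $steps)
    {
        $this->steps = $steps;
    }

    public function run(\AcceptanceTester $I)
    {
        // replay the stored steps, e.g. [{"method": "amOnPage", "args": ["/"]}]
        foreach ($this->steps as $step) {
            call_user_func_array([$I, $step['method']], $step['args']);
        }
    }
}
The placeholder file in the suite (anything matching getPattern(), e.g. LoadFromDbDbformat.php) exists only to trigger the loader; the actual test content comes from the database.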
I have the Java class below, which runs with @CucumberOptions:
@CucumberOptions(tags = {"@userManagement"})
public class IC_API_Tests_Runner {
runner code here
}
From Jenkins I am passing the command below to run the tests:
clean test "-Dkarate.env=$WhereToRun" "-Dbvt.tags=@userManagement"
I am able to fetch the value of 'bvt.tags' using the statement below:
bvtTags = karate.properties['bvt.tags'];
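For context, here is how I read it in my karate-config.js (simplified, with the rest of the config omitted):
function fn() {
    var config = {};
    // karate.properties exposes Java system properties; this may be null if the flag isn't passed
    config.bvtTags = karate.properties['bvt.tags'];
    return config;
}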
Now I need to pass the 'bvtTags' value to the CucumberOptions.
I tried
@CucumberOptions(tags = {"bvtTags"})
public class IC_API_Tests_Runner {
runner code here
}
But the 'bvtTags' value is not substituted into the @CucumberOptions, even though I am able to print the value of 'bvtTags' with a print statement in the Karate code.
Any help would be greatly appreciated.
No, you can't dynamically change the @CucumberOptions like that.
Use the API for dynamically choosing tests, see this example: DemoTestSelected.java.
Then do something like this (please change for your environment):
String tagsValue = System.getProperty("bvt.tags");
List<String> tags = Arrays.asList(tagsValue);
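In newer Karate versions (1.x) the same idea can be expressed with the builder-style Runner API. A minimal sketch, assuming a hypothetical classpath:features location for the feature files:
import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class IC_API_Tests_Runner {

    @Test
    void testUserManagement() {
        // read the tag expression passed from Jenkins, with a fallback default
        String tags = System.getProperty("bvt.tags", "@userManagement");
        Results results = Runner.path("classpath:features")
                .tags(tags)
                .parallel(1);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}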
EDIT: actually you don't need to do any of this. (I guess that you will never read the docs :)
Please refer: https://github.com/intuit/karate#command-line
-Dkarate.options="--tags @userManagement"
We have a package with a fair number of complex tests. As part of the test suite, they run on builds etc.
func TestFunc(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run()
}
Now, for one of these tests, I want to introduce some kind of frontend which will make it possible for me to debug a few things. It's not really a test, but a debug tool. For this, I want to just run the same test but with a Builder pattern:
func TestFuncWithFrontend(t *testing.T) {
    // lots of setup stuff and defining success conditions
    result := SystemModel.Run().WithHTTPFrontend(":9999")
}
The test would then only start once I send a signal via HTTP from the frontend. Basically, WithHTTPFrontend() just waits on a channel for an HTTP call from the frontend.
This of course would make the automated tests fail, because no such signal will be sent and execution will hang.
I can't just rename the package to main because the package has 15 files and they are used elsewhere in the system.
Likewise, I haven't found a way to run a test only on demand while excluding it from the test suite, so that TestFuncWithFrontend would only run from the command line - I don't care whether with go run or go test or whatever.
I've also thought of ExampleTestFunc(), but the test produces so much output that this is useless, and without defining Output: ..., the Example won't run.
Unfortunately, there's also a lot of initialization code at (private, i.e. lower case) package level that the test needs. So I can't just create a sub-package main, as a lot of that stuff wouldn't be accessible.
It seems I have three choices:
Export all these initialization variables and code with upper case, so that I could use them from a sub-main package
Duplicate the whole code.
Move the test into a sub-package main and then have a func main() for the test with the frontend and a _test.go for the normal test, which would have to import a few things from the parent package.
I'd rather avoid the second option... and the first is better, but isn't great either, IMHO. I think I'll go for the third, but...
am I missing some other option?
You can pass a custom command line argument to go test and start the debug port based on that. Something like this:
package hello_test

import (
    "flag"
    "log"
    "testing"
)

var debugTest bool

func init() {
    flag.BoolVar(&debugTest, "debug-test", false, "Setup debugging for tests")
}

func TestHelloWorld(t *testing.T) {
    if debugTest {
        log.Println("Starting debug port for test...")
        // Start the server here
    }
    log.Println("Done")
}
Then if you want to run just that specific test, go test -debug-test -run '^TestHelloWorld$' ./.
Alternatively it's also possible to set a custom environment variable that you check in the test function to change behaviour.
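A minimal sketch of the environment variable variant, assuming a hypothetical DEBUG_TEST variable:
package hello_test

import (
    "log"
    "os"
    "testing"
)

func TestHelloWorld(t *testing.T) {
    // opt into debug mode only when the variable is set, e.g.:
    //   DEBUG_TEST=1 go test -run '^TestHelloWorld$' .
    if os.Getenv("DEBUG_TEST") != "" {
        log.Println("Starting debug port for test...")
        // Start the server here
    }
    log.Println("Done")
}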
I finally found an acceptable option. This answer to
Skip some tests with go test
put me on the right track.
Essentially, it uses build tags which are not present in normal builds, but which I can provide when running the test manually.
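A minimal sketch of that approach, assuming the tag name debugtest (any otherwise-unused tag works): put the frontend test in its own file guarded by a build constraint, so CI builds never compile it.
//go:build debugtest
// +build debugtest

package hello_test

import "testing"

// Excluded from normal builds; run manually with:
//   go test -tags debugtest -run '^TestFuncWithFrontend$' .
func TestFuncWithFrontend(t *testing.T) {
    // setup, then block until the HTTP frontend sends the start signal
}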
I'm using the testing framework Codeception to do BDD. I understand the idea of wanting something, but I don't understand what the function does.
$I->wantTo('Understand what this method does!');
http://codeception.com/docs/03-AcceptanceTests#Comments
Commands like amGoingTo, expect, expectTo help you in making tests more descriptive.
$I->wantTo('Understand what this method does!');
will be rendered as * I want to understand what this method does! in verbose output.
Update 2022-11-16:
My original answer was incorrect: wantTo is not a comment method, it renames the test method in the output.
Example:
I created a very simple Cest class:
<?php
class ExampleCest
{
    public function provideExample(CliGuy $I)
    {
    }
}
When I ran it, I got the following output:
Cli Tests (1) --------------------------------------------
U ExampleCest: Provide example (0.00s)
---------------------------------------------------------
but after adding $I->wantTo('change test name!'); to the method, so that it reads:
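<?php
class ExampleCest
{
    public function provideExample(CliGuy $I)
    {
        $I->wantTo('change test name!');
    }
}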
I got the following output:
Cli Tests (1) --------------------------------------------
U ExampleCest: Change test name! (0.00s)
---------------------------------------------------------
The benefit of wantTo is that it allows you to use characters not permitted in method names, or formatting different from the automatically generated name.
I looked up whether wantTo has any documentation, and all I found was an old blog post using examples in the class-less Cept format (which is deprecated and likely to be removed in Codeception 6).
<?php
$I = new TestGuy($scenario);
$I->wantTo('log in to site');
$I->amOnPage('/');
$I->click('Login');
$I->fillField('username', 'admin');
In the Cept format, wantTo served a better purpose: it didn't override anything, but provided additional information next to the file name.
I have some complex Protractor tests written, but everything is in one file.
At the top of the file I load all variables, like:
var userLogin = "John";
and somewhere later in the code I use them.
What I need to do is:
1. Separate all variables into an additional file (some config file)
2. Move each test into its own file
1- I tried to make a config.js where I added all the variables, and I required it in protractor.conf.js. It loads correctly; the problem is that when I use any of these variables in a test, the test fails with "userName is not defined".
I know there is a way where I require the config file in each test script, but that's really not the best option in my eyes.
2- How can I know what happened in the last script if the tests are separate? For example, how do I know I am logged in?
Thanks.
There are multiple things you can make use of.
2) How can I know what happened in the last script if the tests are separate? For example, how do I know I am logged in?
This is where beforeEach(), afterEach() can help:
To help a test suite DRY up any duplicated setup and teardown code, Jasmine provides the global beforeEach and afterEach functions. As the name implies, the beforeEach function is called once before each spec in the describe is run, and the afterEach function is called once after each spec.
There are also beforeAll(), afterAll() available in Jasmine 2, or via the jasmine-beforeAll third-party package for Jasmine 1:
The beforeAll function is called only once before all the specs in describe are run, and the afterAll function is called after all specs finish. These functions can be used to speed up test suites with expensive setup and teardown.
1) I tried to make a config.js where I added all the variables, and I required it in protractor.conf.js. It loads correctly; the problem is that when I use any of these variables in a test, the test fails with "userName is not defined". I know there is a way where I require the config file in each test script, but that's really not the best option in my eyes.
One option which I've personally used would be to create a config.js file with all the reusable configuration variables you would need in multiple tests and require the file once - in the protractor config - then set it as a params configuration key value:
var config = require("./config.js");

exports.config = {
    ...
    params: config,
    ...
};
where config.js is, for example:
var config;

config = {
    user: {
        login: "user",
        password: "password"
    }
};

module.exports = config;
Then, you would not need to require config.js in every test, but instead, you'll use browser.params. For example:
expect(browser.params.user.login).toEqual("user");
Also, if you need some sort of global test preparation step, you can do it in the onPrepare() function; see Setting Up the System Under Test. An example configuration that performs a "global" login step is available here.
And another quick note: you can have custom globally defined variables (like the built-in browser or protractor); set them using global in onPrepare(). For example, I've defined protractor.ExpectedConditions as a custom global variable:
onPrepare: function () {
    global.EC = protractor.ExpectedConditions;
}
Then, in tests, you don't need to require anything; the EC variable will be available in scope, e.g.:
browser.wait(EC.invisibilityOf(scope.page.dropdown), 5000)
Also, organizing your tests using the Page Object pattern would help to solve the reusability and modularity problem; a sketch follows.
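A minimal sketch of a login page object (the file name, selectors, and methods are illustrative):
// loginPage.js
var LoginPage = function () {
    this.username = element(by.model("username"));
    this.password = element(by.model("password"));
    this.loginButton = element(by.buttonText("Login"));

    this.get = function () {
        browser.get("/login");
    };

    this.loginAs = function (user) {
        this.username.sendKeys(user.login);
        this.password.sendKeys(user.password);
        this.loginButton.click();
    };
};

module.exports = new LoginPage();
A spec then just does var loginPage = require("./loginPage"); and calls loginPage.loginAs(browser.params.user), keeping selectors out of the test files.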