I have been setting up Behat to facilitate BDD as a testing framework for my company. The ask is to also integrate it with PHPUnit, so that we get the benefits of that platform along with the readability of Behat's Gherkin scenarios.
My first attempt at integration with PHPUnit was to make PHP exec() calls out to the command line from the Behat feature file, and base the results of each scenario step on the feedback from that PHPUnit test. This, however, feels cumbersome; it seems there should be a way to integrate these tools more tightly, ideally allowing Behat to call PHPUnit tests directly.
/**
 * @When <invoice_custcode>
 */
public function invoiceCustcode()
{
    $returnValue = "";
    $output = "";
    exec("phpunit BDDTest", $output, $returnValue);
    // print_r() needs true as its second argument to return the output
    // as a string instead of printing it immediately
    echo "returned with status " . $returnValue . " and output=" . print_r($output, true);
}
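What I'm hoping for is something closer to the sketch below (untested; the step wording and the Invoice class are placeholders, not real code), where the Behat context calls PHPUnit's assertion API directly so that a failed assertion simply fails the step:

<?php
use Behat\Behat\Context\Context;
use PHPUnit\Framework\Assert;

class FeatureContext implements Context
{
    /**
     * @When the invoice custcode is generated
     */
    public function invoiceCustcode()
    {
        // A failed PHPUnit assertion throws an exception,
        // which Behat reports as a failed step.
        $invoice = new Invoice(); // hypothetical class under test
        Assert::assertSame('CUST-001', $invoice->getCustCode());
    }
}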
Thus far, the closest I have come is this project by Jonathan Shaw from 2019, but it doesn't seem to fully answer my question.
https://medium.com/@jonathanjfshaw/write-better-tests-by-using-behat-with-phpunit-ddb08d449b73
When using PHPUnit, you can annotate a test case with @covers SomeClass::someMethod to ensure that only code inside of that method is recorded as covered when running the test. I like to use this feature because it helps me separate code that was incidentally executed during a test from code that was actually tested.
After using Codeception to implement some acceptance tests for my project, I decided I would rather use it than PHPUnit to run my unit tests. I would like to remove PHPUnit from the project if possible.
I am using Codeception's Cest format for my unit tests, and the @covers and @codeCoverageIgnore annotations no longer work. Code coverage reports show executed code outside of the methods specified with @covers as covered. Is there any way to mimic that "strict coverage" functionality using Codeception?
Edit: I have submitted an enhancement request to the Codeception project's GitHub.
It turns out that strict coverage was not possible using Cest-format tests when I asked the question. I have implemented it and the pull request has been merged.
For anyone migrating tests from PHPUnit and looking for this feature as I was, this means that a later release of Codeception should provide support for @covers, @uses, @codeCoverageIgnore, and other related test annotations.
The current version (2.2.4 at the time this was written) doesn't support it but 2.2.x-dev should.
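Once that release is out, a Cest-format test should be able to use the annotations much as in PHPUnit. A rough sketch (ClassUnderTest and its method are placeholders for your own code):

<?php
class StrictCoverageCest
{
    /**
     * Only code inside the named method should be recorded as covered.
     * @covers ClassUnderTest::methodUnderTest
     */
    public function recordsOnlyTheNamedMethod(UnitTester $I)
    {
        $I->assertSame('expected', (new ClassUnderTest())->methodUnderTest());
    }
}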
I am setting up Codeception to test my Yii application.
I came across the 'YiiBridge' and can't really understand why it is required, since I created a simple acceptance test case and it worked fine.
My test case is:
<?php
$I = new AcceptanceTester($scenario);
$I->wantTo('ensure that the frontpage works');
$I->amOnPage('/');
$I->see('LOGIN');
?>
Will more complicated test cases require the YiiBridge?
I also noticed that the acceptance and functional test cases are exactly the same, the difference being that PhpBrowser is configured in acceptance.suite.yml but missing from functional.suite.yml. The Codeception website says that PhpBrowser has the following drawbacks:
you can click only on links with valid urls or form submit buttons,
you can't fill fields that are not inside a form,
you can't work with JavaScript interactions: modal windows, datepickers, etc.
This means I will not be able to test my AngularJS functionality. Is there any way to get around these limitations?
Thanks in advance!
Will more complicated test cases require the YiiBridge?
No, they won't.
We're also using Yii, and we write our acceptance tests with WebDriver. It's similar to PhpBrowser, and you don't need the Yii Bridge for it, since WebDriver/PhpBrowser will "simulate" a real browser. The Yii Bridge is needed for functional tests. And yes, you're right:
functional tests are almost the same, with just one major difference: functional tests don't require a web server to run tests.
More about functional tests.
For AngularJS and other JavaScript tests, you will have to write some custom functions like the following:
public function openDevice()
{
    $I = $this;
    $script = 'return document.getElementById("createDevice").children[0].click()';
    $I->executeJS($script);
}
Testing JS is always a little bit annoying; however, it's possible.
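If your app is on AngularJS 1.x, one pattern that works is a helper that waits for pending $http requests to drain before asserting. A sketch (the 10-second timeout and the injector lookup are assumptions about your app):

public function waitForAngular()
{
    $I = $this;
    // Wait until Angular is present on the page and has no pending $http requests.
    $I->waitForJS(
        'return (typeof angular !== "undefined")'
        . ' && angular.element(document.body).injector().get("$http").pendingRequests.length === 0;',
        10
    );
}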
My project requirements are
1. The framework must produce detailed step reports, which can be sent to the client by email.
2. Execution time must be short.
3. Tests must be easy to write.
I know Behat and Cucumber.
Please suggest which framework would be best.
I would suggest the Behat + Mink + Selenium combination; I've been using it for a very long time.
Behat will give you the reports you want. We always send clients reports in which every single step is printed and marked as either success or failure, with an overall summary at the end.
e.g. bin/behat @YourBundleName -f pretty,html --out ,report-path/behat.html. You can even get screenshots of failed steps.
Any test suite can be fast or slow; the result depends on how you do things. You have a lot of options to make Behat tests run fast, e.g. using PhantomJS to run the tests and symfony2 as the default session.
Behat uses the Gherkin language, which is easy to understand and write. You don't have to be a programmer at all.
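For example, a login scenario built on MinkExtension's built-in step definitions reads almost like plain English (the field and button labels below are placeholders):

Feature: Login
  Scenario: Successful login
    Given I am on "/login"
    When I fill in "username" with "foo"
    And I fill in "password" with "bar"
    And I press "Log in"
    Then I should see "Welcome"

A scenario like this needs no custom PHP at all, since MinkExtension ships those steps out of the box.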
One framework known for its pretty reports is Concordion. Please have a look at the example to see one such report: http://concordion.org/Example.html
The Java version of Concordion uses JUnit to execute its tests, so you get good integration with your development environment. Concordion supports multiple technologies such as .NET, Ruby, Python, etc.: http://concordion.org/Ports.html
Which technology are you using?
Concordion, based on specification by example, was designed with a short learning curve as a top priority. The purposely small command set is simple to learn: http://concordion.org/Tutorial.html
I am thinking of implementing automated tests for different parts of an ActivePivot server, most importantly the post-processors.
Since I am at the very beginning, I would like to know more about the state of the art in this field, what the best practices are, and whether there are any caveats to avoid.
If you have any experience, I would be delighted to hear from you.
Cheers,
Pascal
That is a very broad question. An ActivePivot solution is a piece of Java software and inherits all the best practices regarding testing and continuous builds of a software project.
But here are some basic ActivePivot entry points:
How, where and when to write tests?
Write JUnit tests, run them with Maven, and set up a continuous build with Jenkins.
How to embed a (real, non-trivial) ActivePivot instance within a unit test?
Start an embedded Jetty web application server. The ActivePivot Sandbox application is an example of that (look at com.quartetfs.pivot.jettyserver.JettyServer). If you want a series of unit tests to run against the same ActivePivot instance, you can start the Jetty server statically (for instance in a static method annotated with @BeforeClass). In any case, don't forget to stop it at the end of the tests.
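As a sketch of that pattern with plain Jetty and JUnit 4 (the port and the webapp deployment are placeholders, not the Sandbox's JettyServer wiring):

import org.eclipse.jetty.server.Server;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class ActivePivotServerTest {

    private static Server server;

    @BeforeClass
    public static void startServer() throws Exception {
        server = new Server(8080); // port is an arbitrary choice for the example
        // deploy the ActivePivot web application on the server here
        server.start();
    }

    @AfterClass
    public static void stopServer() throws Exception {
        // don't forget to stop it at the end of the tests
        server.stop();
    }
}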
How to write performance tests?
In the Sandbox project, there is a small MDX benchmark called com.quartetfs.pivot.client.MDXBenchmark. It is easy to enrich and a good starting point. There is also com.quartetfs.pivot.client.WebServiceClient, which illustrates how to connect to ActivePivot.
How to test post processors?
As of ActivePivot release 4.3.5 there is no framework dedicated to isolated post processor testing. Post processors are tested through queries (MDX queries or GetAggregates queries). Of course if your post processor implementation has some utility methods, those can be tested one by one in standard unit tests.
To test an ActivePivot-based project, the simplest approach is to reuse your Spring configuration. This can be done with ClassPathXmlApplicationContext:
ApplicationContext context = new ClassPathXmlApplicationContext("beans.xml");
This simple test will check whether your Spring configuration is actually OK. Then, if you want to run a query, you could do the following:
IQueriesService queriesService = context.getBean(IQueriesService.class);
queriesService.execute(new MDXQuery(someMDX));
If you want to check your loading layer, you can do:
IStoreUniverse storeUniverse = context.getBean(IStoreUniverse.class);
for (IRelationalStore store : storeUniverse.values()) {
    assertEquals(hardcodedValue1, store.getSize());
    assertEquals(hardcodedValue2, store.search("someKey", "someValue").size());
}
This way, you don't need to start a web-app container, which may fail because it needs some port to be available (meaning for instance you can't run several tests at the same time).
Post-processors should be either Basic or DynamicAggregation post-processors, which are easy to test: focus on .init and the evaluation methods called on point ILocations. Advanced post-processors cannot reasonably be unit-tested; for those, I advise writing MDX queries that are as simple as possible while still relevant to the post-processor.
Any unit-test framework and mock framework could be used; still, I advise using JUnit and Mockito.
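As a minimal illustration of that combination (the RateProvider interface is made up for the example, not ActivePivot API):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class UtilityMethodTest {

    // Made-up collaborator interface, not ActivePivot API
    interface RateProvider {
        double rate(String currency);
    }

    @Test
    public void convertsUsingMockedRate() {
        RateProvider rates = mock(RateProvider.class);
        when(rates.rate("EUR")).thenReturn(1.1);

        // the utility method under test would use the mocked collaborator
        double converted = 100 * rates.rate("EUR");
        assertEquals(110.0, converted, 0.001);
    }
}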
I would recommend using Spring's JUnit support (SpringJUnit4ClassRunner) to launch the context. You can then autowire the beans and access things like the queries service and the ActivePivot manager directly.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"classpath:SPRING-INF/ActivePivot.xml", "classpath:customTestContext-test.xml"})
...

@Resource
private IActivePivotManager manager;

@Resource
private IQueriesService queriesService;

@Test
public void testManagerOk() {
    assertNotNull(manager);
    assertTrue(manager.getStatus().equals(State.STARTED));
}

@Test
public void testQuery() {
    // run a query with the queries service
}

...
You can define custom test properties for the tests in a separate context file, say for loading a set of test data.
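A minimal sketch of such a test-only context file (the file name follows the example above; the test.properties resource is an assumption):

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/context
                           http://www.springframework.org/schema/context/spring-context.xsd">

    <!-- Override production properties, e.g. to point data loading at a small test set -->
    <context:property-placeholder location="classpath:test.properties"/>

</beans>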
Here's the situation that I'm working with:
Build tests in Selenium
Get all the tests running correctly (in Firefox)
Export all the tests to MSTest (so that each test can be run in IE, Chrome and FF)
If any test needs to be modified, do that editing in Selenium IDE
So it's a very one-way workflow. However, I'd now like to do a bit more automation. For instance, I'd like every test to run under each of two accounts, and I'm running into a maintenance issue: if I have 6 tests that I want to run under two accounts, suddenly I'd need 12 tests in Selenium IDE. That's too much editing, and a ton of that code is exactly the same.
How can I share chunks of Selenium tests among tests? Should I use Selenium IDE to develop the test the first time and then never use it again (only doing edits in VS after that)?
Selenium code is very linear after you export it from the IDE.
For example (ignore syntax):
someTestMethod() {
    selenium.open("http://someLoginPage.com");
    selenium.type("usernameField", "foo");
    selenium.type("passwordField", "bar");
    selenium.click("loginButton");
    selenium.waitForPageToLoad("30000");
    assertTrue(selenium.isTextPresent("Welcome * foo"));
}
This is the login page. Every single one of your tests will have to use it. You should refactor it into a method.
someTestMethod() {
    selenium.open("http://someLoginPage.com");
    String username = "foo";
    String password = "bar";
    performLogin(username, password);
}

performLogin(String username, String password) {
    selenium.type("usernameField", username);
    selenium.type("passwordField", password);
    selenium.click("loginButton");
    selenium.waitForPageToLoad("30000");
    // assert against the username parameter rather than a hard-coded value
    assertTrue(selenium.isTextPresent("Welcome * " + username));
}
The performLogin() method does not have to be in the same file as your test code itself. You can create a separate class for it with your methods and share it between your tests.
We have classes that correspond to certain functionalities in our UI. For example, we have many ways to search in our app, so all methods that help you with search functionality live in the SearchUtil class (see the sketch after the list below).
Structuring your tests similarly will give you the following advantages:
If the UI changes (e.g. the id of a field), you go to your one method, update the id, and you are good to go.
If the flow of your logic changes, you also have only one place to update.
To verify that your changes worked, you only have to run one of the tests; all other tests use the same code, so they should work too.
The code is a lot more expressive: with well-named methods, you create a higher level of abstraction that is easier to read and understand.
Flexible and extensible! The possibilities are limitless: at this point you can use conditions, loops, exceptions, and you can do your own reporting, etc.
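For instance, a shared helper in the spirit of the SearchUtil class mentioned above might look like this (the locators and method names are illustrative, not from the original post):

import com.thoughtworks.selenium.Selenium;

public class SearchUtil {

    private final Selenium selenium;

    public SearchUtil(Selenium selenium) {
        this.selenium = selenium;
    }

    // One place to update if the search UI or its flow changes.
    public void searchFor(String term) {
        selenium.type("searchField", term);
        selenium.click("searchButton");
        selenium.waitForPageToLoad("30000");
    }
}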
This website is an excellent resource on what you are trying to accomplish.
Good Luck!
There are two aspects to consider regarding code reuse:
Eliminating code duplication in your own code base -- c_maker touched on this.
Eliminating code duplication from code generated by Selenium IDE.
I should point out that my comments lean heavily toward the one-way workflow that you are using, jcollum, but even more so: I use the IDE to generate code just once for a given test case. I never go back to the IDE to modify the test case and re-export it. (I do keep the IDE test case around as a diagnostic tool for experimenting while I fine-tune and customize my test case in code, in my case C#.)
The reasons I favor using IDE tests only as a starting point are:
IDE tests will always have a lot of code duplication from one test to another; sometimes even within one test. That is just the nature of the beast.
In code I can make the test case more "user-friendly", i.e. I can encapsulate arcane locators within a meaningfully named property or method, so it is much clearer what the test case is doing.
Working in code rather than the IDE just provides much greater flexibility.
So back to IDE-generated code: it always has massive amounts of duplication. Example:
verifyText "//form[@id='aspnetForm']/div[2]/div/div[2]/div[1]/span" Home
generates this block of code:
try
{
    Assert.AreEqual("Home",
        selenium.GetText("//form[@id='aspnetForm']/div[2]/div/div[2]/div[1]/span"));
}
catch (AssertionException e)
{
    verificationErrors.Append(e.Message);
}
Each subsequent verifyText command generates an identical block of code, differing only by the two parameters.
My solution to this pungent code smell was to develop Selenium Sushi, a Visual Studio C# project template and library that lets you eliminate most if not all of this duplication. With the library I can simply write this one line of code to match the original line of code from the IDE test case:
Verify.AreEqual("Home",
    selenium.GetText("//form[@id='aspnetForm']/div[2]/div/div[2]/div[1]/span"));
I have an extensive article covering this (Web Testing with Selenium Sushi: A Practical Guide and Toolset), published on Simple-Talk.com in February 2011.
You can also put some fragments or one-liners, e.g.
note( "now on page: " . $sel->get_location() . ", " . $sel->get_title() );
into the "code snippets" collection of your IDE (I use Eclipse).
That's not true reuse, but hey, it works for me for throwaway test scripts or quick enhancements of existing ones.