Codeception-style reporting for Behat

Does anybody know if there's a way to have test reports for Behat similar to what we could have for Codeception?
I mean, Behat's pretty-format output just gives us the steps from the feature files, whereas Codeception reports the exact request body with headers, and also the responses.
Is there any way to have similar reports for Behat? Maybe some extension or plugin? I did a search but didn't find anything similar.

Behat is a generic BDD framework and does not know anything about web requests by default. You could just as easily use it for unit testing in your application or for testing some CLI application.
So, if you need extended request information when a step fails, you have to:
Provide Behat with those Request/Response objects
Customize Behat behavior in AfterStep, AfterScenario and/or AfterFeature hooks to make use of those Request/Response objects
Find or write custom output formatter to output that info
You can see examples of custom formatters (initially written by yours truly) in the Allure reporting tool, which is worth trying for a rich reporting experience.
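
For illustration, here is a minimal sketch of the hook part in a Behat 3 context class. It assumes your own step definitions already capture the last request/response somewhere; the $lastRequest / $lastResponse properties and their shape are hypothetical:

```php
<?php

use Behat\Behat\Context\Context;
use Behat\Behat\Hook\Scope\AfterStepScope;

class FeatureContext implements Context
{
    /** @var mixed Hypothetical: populated by your own HTTP step definitions. */
    private $lastRequest;

    /** @var mixed Hypothetical: populated by your own HTTP step definitions. */
    private $lastResponse;

    /**
     * When a step fails, dump the captured request/response so the report
     * carries the same detail Codeception prints. A custom output formatter
     * could render this more nicely than plain echo does.
     *
     * @AfterStep
     */
    public function dumpHttpExchangeOnFailure(AfterStepScope $scope)
    {
        if ($scope->getTestResult()->isPassed()) {
            return;
        }
        echo 'Request:  ' . json_encode($this->lastRequest) . PHP_EOL;
        echo 'Response: ' . json_encode($this->lastResponse) . PHP_EOL;
    }
}
```

The same idea extends to @AfterScenario and @AfterFeature hooks if you prefer coarser-grained reporting.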

Related

Is it possible to execute performance test in Karate for WebUI Automation?

I am developing WebUI automation tests using Karate 0.9.5.RC5 and it is working wonderfully. Does anyone know how to execute performance testing in Karate for WebUI Automation tests?
That's great to hear and thanks for the feedback. To be honest, we have focused on API perf testing and UI functional test automation so far. Maybe you can help us by experimenting and sharing what you find.
You may already be aware of the Gatling integration for API performance testing, so we have some pieces of the puzzle in place.
So maybe a hybrid strategy is best (see the sketch after this list):
identify the API calls being made by the UI; in the future we would like to derive these automatically from the Chrome network / devtools data
manually convert the API calls to Karate tests; note that the VS Code plugin has an option to import from cURL
now you can convert the Karate tests to a performance test, and for most teams this is sufficient
if needed, you can add some Karate calls to load HTML and static resources to make the load profile more realistic
finally, it may be possible to run a Karate UI test in parallel, just to measure the "real user" experience and the HTML / JS load times on the browser side; we don't have this in place yet, but it should be possible to get all the page timings and perf-stats from Chrome
potentially you could look at spinning up multiple Chrome instances in parallel using Docker - but again, this is something yet to be explored
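
To make the "convert to a performance test" step concrete, here is a minimal karate-gatling simulation sketch in Scala (the Gatling integration is Scala-based); the feature path and the /users/{id} URL pattern are assumptions:

```scala
import com.intuit.karate.gatling.PreDef._
import io.gatling.core.Predef._
import scala.concurrent.duration._

class UsersSimulation extends Simulation {

  // Group dynamic URLs such as /users/1, /users/2 under one name in the report.
  val protocol = karateProtocol("/users/{id}" -> Nil)

  // Reuse an existing Karate feature file, unchanged, as the load scenario.
  val users = scenario("users").exec(karateFeature("classpath:users/users.feature"))

  setUp(
    users.inject(rampUsers(10) during (5 seconds)).protocols(protocol)
  )
}
```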

Migrating from LoadRunner/The Grinder to JMeter: Where are the Scripts?

I've done load tests building scripts in LoadRunner or The Grinder, and now I'm trying out JMeter and it all feels incredibly clunky. Where are the scripts? Does everything have to be done through the UI? Can JMeter handle complex scripting?
JMeter has a good, user-friendly GUI: you create scripts through the UI, and JMeter saves them in XML format with a .jmx extension. Script creation is not as difficult as you suggest.
Check this site to get an idea.
Complex scripting can be done in JMeter using logic controllers.
Also, JMeter:
is free and open source
is lightweight and easy to install
runs on any platform
supports many protocols: HTTP/HTTPS, FTP, SOAP, LDAP, JDBC, JMS, SMTP, POP, etc.
supports external plugins
can be extended with Beanshell scripts, Groovy, JavaScript, or Java
I have tried out JMeter briefly, but I think it's best for you to try it out yourself; there are a number of JMeter tutorials on the JMeter site itself, as well as on YouTube, which should prove useful.
Below are some links:
http://jmeter.apache.org/usermanual/intro.html
https://www.youtube.com/watch?v=cv7KqxaLZd8
Hope these links help you get a better understanding of JMeter.
What do you call "scripts"?
JMeter is designed so that anyone can create tests using the UI alone. JMeter's Logic Controllers, Pre/Post-Processors, Assertions, etc. are quite enough to build a load test of any complexity.
If you find yourself limited by the built-in JMeter test elements, you're welcome to extend it by:
using code-enabled test elements like BSF and JSR-223 Pre/PostProcessors, Samplers, Timers and Assertions (with Beanshell, Groovy, etc.)
developing custom functions
developing a custom implementation of the Java Request Sampler
developing your own Sampler or Function from scratch
Finally, it is possible to run existing tests, or create new ones, through the JMeter API; a minimal sketch follows.
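
As a rough illustration of that last point, a sketch that runs an existing .jmx plan headlessly through the JMeter API; the paths are placeholders, and the loadTree(File) signature assumes a reasonably recent JMeter:

```java
import java.io.File;

import org.apache.jmeter.engine.StandardJMeterEngine;
import org.apache.jmeter.save.SaveService;
import org.apache.jmeter.util.JMeterUtils;
import org.apache.jorphan.collections.HashTree;

public class RunExistingPlan {
    public static void main(String[] args) throws Exception {
        // Point JMeter at its installation and properties (paths are assumptions).
        JMeterUtils.setJMeterHome("/opt/jmeter");
        JMeterUtils.loadJMeterProperties("/opt/jmeter/bin/jmeter.properties");
        JMeterUtils.initLocale();

        // Load a test plan that was saved from the GUI.
        SaveService.loadProperties();
        HashTree testPlanTree = SaveService.loadTree(new File("plan.jmx"));

        // Run it without the GUI.
        StandardJMeterEngine engine = new StandardJMeterEngine();
        engine.configure(testPlanTree);
        engine.run();
    }
}
```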

Jakarta Cactus alternate?

Greetings! We have a project with loads of beans, JSPs, and so on. There is a desperate need for automated tests in our environment (we use Maven). Now, we can easily write tests for the database layer of the project and for the various security utilities we implemented, but the JSP pages remain untested.
I searched for server-side testing utilities and Cactus seems the best option. However, according to its changelist, the last release was 1.8, and it came out more than two years ago!
So the question is: what happened to Cactus, is it still being developed? And what are the current alternatives to Jakarta Cactus (if any exist)?
I've used a combination of Spring, JUnit and HttpClient with some success in recent projects.
Apache HttpClient provides a powerful and flexible API for constructing and sending HTTP requests into your application. It cannot replicate a web browser, say by running client-side scripts; however, if there is sufficient content within the resulting HTTP responses (headers, URI, body), then you can use that information to traverse pages within the application and validate the behavior. You can post forms, follow redirects, process cookies, and supply the inputs to your application.
JUnit (junit.org) drives the tests, invoking a series of pages with HttpClient and can be deployed alongside the application, run standalone with ant/maven, or run separately inside your IDE.
Spring (springsource.org) is, of course, optional as you may not be using it for your project. I've found it useful to stub/mock out parts of the application, such that I can isolate specific areas, such as front-end controllers, through to the business logic, by substituting the DAOs to return specific data values. It provides an excellent Test Context Framework and specialized TestRunners that hook in well to testing frameworks like JUnit (or TestNG if you prefer).
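A minimal sketch of that combination (JUnit 4 plus Apache HttpClient 4.x); the application URL, page, and expected markup are placeholders:

```java
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import org.junit.Assert;
import org.junit.Test;

public class LoginPageIT {

    // Where the application under test is deployed (an assumption).
    private static final String BASE_URL = "http://localhost:8080/myapp";

    @Test
    public void loginPageRenders() throws Exception {
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            HttpGet get = new HttpGet(BASE_URL + "/login.jsp");
            try (CloseableHttpResponse response = client.execute(get)) {
                // Validate status and enough of the body to know the JSP rendered.
                Assert.assertEquals(200, response.getStatusLine().getStatusCode());
                String body = EntityUtils.toString(response.getEntity());
                Assert.assertTrue(body.contains("<form"));
            }
        }
    }
}
```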
Cactus served as a good server-side testing framework in the EJB 2 era, but it's not supported anymore.
You can use a combination of both mock testing (fine-grained) and in-container testing (coarse-grained) strategies to test your application completely.
Mock testing frameworks: Mockito, JMockit, EasyMock, etc.
Integration testing frameworks (Java EE): Arquillian, the embeddable container API, etc.
I prefer Mockito and Arquillian for server-side testing.
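For a flavor of the in-container side, a minimal Arquillian/ShrinkWrap sketch with JUnit 4; the Greeter class is a hypothetical stand-in for one of your own CDI beans:

```java
import javax.inject.Inject;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class GreeterIT {

    // Package only what this test needs; the archive is deployed to the container.
    @Deployment
    public static JavaArchive createDeployment() {
        return ShrinkWrap.create(JavaArchive.class)
                .addClass(Greeter.class) // hypothetical bean under test
                .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    @Inject
    Greeter greeter; // injected by the container, not mocked

    @Test
    public void greets() {
        Assert.assertEquals("Hello, world", greeter.greet("world"));
    }
}
```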
How about Arquillian? I haven't used it and it doesn't even have a stable version yet, but at least it's in active development.
You might want to try Selenium. I'm finding that it combines well with JBehave. And the more support both those projects get, the less likely they are to go defunct (like Cactus).

JMeter vs Selenium

Hi, I want to get into test automation, and the two tools I found during my first web search are Selenium and JMeter.
Which one do you think I should look at first? Or do I need both tools, since they're totally different?
What I need is the ability to do client-side certificate authentication, fill forms with different information, and check the result pages.
Apache JMeter is definitely a tool for performance testing and load/stress tests. You can use it for functional tests as well (in your example: fill the form, then check that the results are as expected), but it's better not to do functional testing with it.
For functional testing, on the other hand, there are Selenium and also Canoo WebTest.
So the final answer is to combine the two. (I was using JMeter for performance tests and Canoo WebTest for functional testing, but I guess Selenium is the much better choice now.)
Use Selenium for your functional tests
Use JMeter for stress tests, and measure performance
In both cases you can record a session: start your Selenium or JMeter engine, do something in your browser, and then stop recording. After that you can use Selenium or JMeter to replay the recorded session.
Selenium tests browser fields and buttons. In Selenium you can fill an input field, click a button, wait for the page to load, and then inspect the page.
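For example, a minimal WebDriver sketch (Java, Selenium 4 API); the URL, locators, and expected title are assumptions:

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class FormCheck {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Fill an input field, click a button, wait, then inspect the page.
            driver.get("http://localhost:8080/myapp/form");
            driver.findElement(By.name("q")).sendKeys("hello");
            driver.findElement(By.id("submit")).click();
            new WebDriverWait(driver, Duration.ofSeconds(10))
                    .until(ExpectedConditions.titleContains("Results"));
            System.out.println(driver.getPageSource().contains("hello"));
        } finally {
            driver.quit();
        }
    }
}
```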
JMeter can be used to test the GET and POST communication between browser and server. In JMeter you can request a URL, post some parameters like the browser does, and then inspect the page response.
PROS and CONS:
Selenium is good if you want to test JavaScript page functionality.
Selenium is good if you want your test cases written in Java, JavaScript, Python, or simple HTML text files; Selenium can express your test cases in many programming languages. JMeter always stores test cases in an XML format.
JMeter is good if you don't want to deal with browser versions: it works at the protocol level, so no real browser is involved. Selenium has a wide list of supported browsers, but will always have browser requirements.
JMeter is good if you also want to record HTTP, SOAP, and RESTful traffic; JMeter can be used to record and test communications between servers. JMeter doesn't need a browser to run; Selenium does.
JMeter can run SQL queries, bash scripts, Java classes, and so on from a test. On the other hand, Selenium tests can be embedded in Java, Python, or JavaScript programs.
Both support XPath, HTML inspection, CSS inspection, and so on.
As mentioned in the replies above, Selenium is a tool for testing functionality; it's usually described as a tool for automated (functional) testing, while JMeter is a tool for performance testing.
I would suggest starting with Selenium, since the most important aspect of any web application is that it works correctly. Try to create a basic suite of a couple of tests, with the most important automated checks that verify some functionality. Once you have at least that base of automated-testing knowledge, move on to JMeter and performance testing.
In my personal experience, performance testing requires much more knowledge about the system under test than functional automation does. Neither JMeter nor Selenium should be complex to learn, but for performance testing you need to know more about the web application being tested.