How to include “_beforeSuite” in test report? - codeception

We have an API test suite running in our CI environment.
Today something broke in the code exercised by the _beforeSuite function (the response of one endpoint came back as an empty JSON object).
I am now looking for a way to include _beforeSuite in the test report, so that in such cases we can see exactly what went wrong at the CI test report level.
Expected behavior:
When I launch my tests for example with
codecept run tests/api --env stage1 --html my_report.html
I want "my_report" to include steps from _beforeSute function

Related

Run specific tests within test suite - Selenium Side Runner (IDE)

Is it possible to run a specific test within my Selenium side runner test suite? For example, within a test suite, my first test logs me into a website, and the other tests test specific areas of the website. Each of these tests first inherits the login test to authenticate the user. But when I run the suite, it runs the tests in order, so it runs the login test first and then reruns it within each of my other tests. Hope this makes sense. Essentially, I want to be able to specify which tests to run within my test suite. Thanks in advance.
You can use a filter to run tests that share a common name:
Filter tests
You also have the option to run a targeted subset of your tests with the --filter target command flag (where target is a regular expression value). Test names that contain the given search criteria will be the only ones run.
[example] selenium-side-runner --filter smoke

Getting coverage for e2e tests in IntelliJ

I have manually instrumented my code using:
istanbul instrument src --o temp --es-modules --config=.istanbul.yml
This is my .istanbul.yml:
instrumentation:
  excludes: ['*.spec.js']
  extensions: ['.js', '.jsx']
Once it is instrumented, I run the e2e tests using Selenium inside IntelliJ, via the "Run with Coverage" button.
The tests pass, but at the end I only get coverage information for the *.e2e.js files and not for the actual *.jsx files that the e2e tests exercise.
Any ideas?
The JavaScript is executed in the browser, not by the test runner, so only the code that runs in the test runner itself shows up in the coverage report. You need to instrument the front-end code, serve the instrumented code to the browser, and collect the coverage data from the browser.
Here is how it could work with istanbul and Selenium:
1. Instrument your front-end code with the istanbul instrument command. (As far as I know, istanbul instrument writes the instrumented code out to disk, whereas istanbul cover does everything in memory.)
2. Instead of sending the original JS code to the browser, send the instrumented JS code. The really nice thing here is that with Istanbul you don't have to manually modify your source code at all to make this work; Istanbul does almost all of the work in the browser, automatically.
3. Run your Selenium-based tests, and for each individual driver in your tests, run a hook that sends the coverage results from the browser back to the test process (see the sketch below).
4. Once you have the coverage data in the test process, you can do whatever you want with it. In this case, we HTTP POST the data to a server which can interpret and display the coverage results.
And that's it!
Read the full article: https://medium.com/@the1mills/front-end-javascript-test-coverage-with-istanbul-selenium-4b2be44e3e98
The article goes over all the details of how to set it up.
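For illustration, here is a minimal sketch of steps 3 and 4 using selenium-webdriver in Node.js. This is only a sketch: the app URL and the coverage collector endpoint are placeholders, not part of the article; the only contract relied on is that Istanbul-instrumented code exposes its counters on window.__coverage__.

const { Builder } = require('selenium-webdriver');

// Hook that pulls Istanbul's coverage counters out of the browser and
// forwards them to a collector endpoint (the endpoint is hypothetical).
async function collectCoverage(driver) {
  const coverage = await driver.executeScript('return window.__coverage__;');
  if (!coverage) return; // the page was not instrumented

  // POST the data to a backend that can aggregate and display it
  // (global fetch requires Node 18+)
  await fetch('http://localhost:8080/coverage', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(coverage),
  });
}

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('http://localhost:3000'); // your instrumented app (placeholder)
    // ... run the actual test steps here ...
    await collectCoverage(driver);
  } finally {
    await driver.quit();
  }
})();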

Display selenese-runner results in Jenkins

While implementing an automated way to GUI-test our web application with Selenium, I ran into some issues.
I am using selenese-runner to execute our Selenium test suites, created with Selenium IDE, as a post-build action in Jenkins.
This works perfectly fine: the build fails when something is wrong and succeeds if all tests pass, and the results are stored on a per-build basis as HTML files generated by selenese-runner.
My problem, however, is that I cannot find a way to display these results in the respective Jenkins build.
Does anyone have an idea how to solve this, or am I on the wrong path entirely?
Your help is highly appreciated!
I believe the JUnit plugin should do what you want, but it doesn't work for me.
My config uses this shell script to run the tests (you can see the names of all my test suites):
/usr/bin/Xvfb &
export DISPLAY=localhost:0.0
cd ${WORKSPACE}
java -jar ./test/selenium/bin/selenese-runner.jar --baseurl http://${testenvironment} --screenshot-on-fail ./seleniumResults/ --html-result ./seleniumResults/ ./test/selenium/Search_TestSuite.html ./test/selenium/Admin_RegisteredUser_Suite.html ./test/selenium/Admin_InternalUser_Suite.html ./test/selenium/PortfolioAgency_Suite.html ./test/selenium/FOAdmin_Suite.html ./test/selenium/PublicWebsite_Suite.html ./test/selenium/SystemAdmin_Content_Suite.html ./test/selenium/SystemAdmin_MetaData_Suite.html
killall Xvfb
And I can see the result of the most recent test (you can see the name of my Jenkins task folder):
http://<JENKINS.MY.COMPANY>/job/seleniumRegressionTest/ws/seleniumResults/index.html
Earlier tests are all saved on the Jenkins server, so I can view them if I need to.

How to measure Golang integration test coverage?

I am trying to use go test -cover to measure the test coverage of a service I am building. It is a REST API, and I am testing it by spinning it up, making test HTTP requests, and reviewing the HTTP responses. These tests are not part of the service's packages, so go tool cover reports 0% test coverage. Is there a way to get the actual test coverage? I would expect a best-case test of a given endpoint to cover at least 30-50% of the code in that endpoint's handler, with additional tests for common errors improving the figure further.
I was pointed at the -coverpkg flag, which does exactly what I need: it measures test coverage in a particular package even when the tests that exercise it are not part of it. For example:
$ go test -cover -coverpkg mypackage ./src/api/...
ok /api 0.190s coverage: 50.8% of statements in mypackage
ok /api/mypackage 0.022s coverage: 0.7% of statements in mypackage
compared to
$ go test -cover ./src/api/...
ok /api 0.191s coverage: 71.0% of statements
ok /api/mypackage 0.023s coverage: 0.7% of statements
In the example above, I have tests in main_test.go, which is in package main and uses package mypackage. I am mostly interested in the coverage of package mypackage, since it contains 99% of the business logic in the project.
I am quite new to Go, so it is quite possible that this is not the best way to measure test coverage via integration tests.
You can also run go test in a way that generates an HTML coverage report, like this:
go test -v -coverprofile cover.out ./...
go tool cover -html=cover.out -o cover.html
open cover.html
As far as I know, if you want coverage you need to run go test -cover.
However, it is easy enough to add a flag that enables these extra tests, so you can make them part of your test suite without running them by default.
So add a command-line flag in your whatever_test.go:
var integrationTest = flag.Bool("integration-test", false, "Run the integration tests")
Then, in each integration test, do something like this:
func TestSomething(t *testing.T) {
    if !*integrationTest {
        t.Skip("Not running integration test")
    }
    // Do some integration testing
}
Then, to run the integration tests:
go test -cover -integration-test

Is there console.log output support to terminal/command-line with intern-runner?

I have a dependency with Intern where we have to spin up a Selenium server and use PhantomJS for our tests. We use Jenkins and may need more inspection/debug output on the console, but the console.log output from the test files is suppressed and never reaches the terminal/command line.
Is console.log to terminal/command-line supported yet?
How console.log works with intern-runner depends on where your test code is running. Unit tests (specified with suites) run in the browser, so that's where console.log output ends up. There isn't currently a way to get console output out of a browser for unit tests.
Functional tests (specified with functionalSuites) control a browser, but actually run in Node.js, so output from console.log statements in functional tests generally goes to intern's stdout. The exceptions are log statements in execute and executeAsync blocks; since those blocks run in the browser, that's where the log output ends up. You can retrieve browser logs in functional tests using getLogsFor('browser'), but WebDriver log support is inconsistent between browsers.
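For example, a functional test can pull the browser log back into the Node.js process roughly like this. This is only a sketch, assuming Intern 4's object interface and a WebDriver that supports log retrieval; the page URL is a placeholder:

const { registerSuite } = intern.getPlugin('interface.object');

registerSuite('browser log demo', {
  async 'dump browser logs'() {
    const { remote } = this;
    await remote.get('http://localhost:9000/index.html'); // page under test (placeholder)
    await remote.execute(function () {
      // Runs in the browser, so this lands in the browser console, not the terminal
      console.log('hello from the page');
    });
    // Pull the browser's log entries back into the Node.js test process
    const entries = await remote.session.getLogsFor('browser');
    entries.forEach((e) => console.log(e.level, e.message));
  },
});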