Configuration of a JUnit 5 extension

I am upgrading an internal tool from JUnit 4 to JUnit 5, so I have to write an extension for the test execution. The task of the extension is to ensure the correct state of an external application (start it if it is not running, etc.). To perform this task, several parameters (from the command line) are needed.
Can I store these parameters in the extension context? And if so, how can I access them before the actual test run starts?

Can I store these parameters in the extension context?
You could, but a better option would simply be to access them as "configuration parameters" -- for example, via org.junit.jupiter.api.extension.ExtensionContext.getConfigurationParameter(String).
And if so, how can I access them before the actual test run starts?
You can access them via the aforementioned ExtensionContext.getConfigurationParameter(String) method within a BeforeAllCallback extension.
If you want that custom extension to be executed before all test classes without the user having to register the extension explicitly, you could have the extension registered automatically. See the User Guide for details.
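As a sketch, such an extension might look like the following. The parameter key app.hostname and the class name are placeholder assumptions, not anything from the question; configuration parameters can be supplied as JVM system properties or in a junit-platform.properties file on the classpath.

```java
// Sketch of a BeforeAllCallback that reads a configuration parameter.
// The key "app.hostname" and the class name are placeholder assumptions.
import org.junit.jupiter.api.extension.BeforeAllCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

public class ExternalAppExtension implements BeforeAllCallback {

    @Override
    public void beforeAll(ExtensionContext context) {
        // getConfigurationParameter returns an Optional<String>,
        // so a default can be supplied for local runs.
        String hostname = context
                .getConfigurationParameter("app.hostname")
                .orElse("localhost");
        // ensure the external application is reachable on 'hostname' here
    }
}
```

For automatic registration, list the fully qualified class name in a file named META-INF/services/org.junit.jupiter.api.extension.Extension and run with -Djunit.jupiter.extensions.autodetection.enabled=true.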

Related

How to send result file after all tests run in nunit console c#

Is there a way to send a mail with the result file (I set this file in the console command with the --result option) after the run finishes?
I have run my Selenium test cases in the way described here:
How to Schedule Selenium Web Drivers Tests in C#
The result file is created after the OneTimeTearDown function. If the e-mail is sent inside OneTimeTearDown, the result file comes out incomplete.
Sangeetha P.
I'm not sure I'd actually recommend doing this - but I think it's possible. Personally, I'd instead handle the email sending outside of the NUnit console, in a separate script in your CI system.
Anyway, you could achieve this by writing your own ResultWriter extension. Take a look at the implementation of the standard NUnit3XmlResultWriter as an idea - you'd essentially want the same thing, except sending the file by email rather than writing it to disk. (You may even want to make your ResultWriter inherit from the NUnit3XmlResultWriter class.)

How to send Automation Result mails from Testcomplete Jscript

I need to send automation results to developers automatically from TestComplete. Is there any method for this?
In TestComplete 12.50, you can send results via e-mail. You can send the log of the entire test run, or any of its child logs, via:
Quick sending from the log context menu
Sending from scripts
Sending from keyword tests
The last two methods allow you to send results during the test execution.
send automation results to developers automatically
Ideally, the CI system employed should provide you with a mechanism to support the reporting activity. Basically, in your case, you have to follow two steps:
Use methods of the slPacker object to pack the test results into a file (it is easier to manage a single file than a group of files). You can call these methods from a keyword test, for instance, via the Call Object Method operation. There are more details on how to pack results in Archiving results from tests.
Write script code that will send the test log. JScript snippets here.

How do you get access to the current filtered trait in xUnit

In xUnit, is there a way to get access to the current trait filter during the test execution? For example, during our build process we setup a task to run our tests with a given trait (i.e. browser type). In our test setup code, I would like to know if the requested test run has received a trait filter, so I can use that to determine what Selenium web driver to use for that test run.

Go unit testing structured REST API projects

I am trying to write nice unit tests for my already created REST API. I have this simple structure:
ROOT/
config/
handlers/
lib/
models/
router/
main.go
config contains the configuration in JSON and one simple config.go that reads and parses the JSON file and fills the Config struct. handlers contains the controllers (i.e. handlers of the respective METHOD+URL combinations described in router/routes.go). lib contains some DB, request responder and logger logic. models contains the structs and their funcs to be mapped from/to JSON and the DB. Finally, router contains the router and the route definitions.
Now I was searching and reading a lot about unit testing REST APIs in GO and found more or less satisfying articles about how to set up a testing server, define routes and test my requests. All fine. BUT only if you want to test a single file!
My problem now is how to set up the testing environment (server, routes, DB connection) for all handlers. With the approach found here (which I find very easy to understand and implement) I have one problem: either I have to run tests separately for each handler, or I have to write test suites for all handlers in just one test file. I believe you understand that neither case is satisfactory (the 1st because I need to preserve that running go test runs all tests, and the 2nd because having one test file cover all handler funcs would become unmaintainable).
By now I have succeeded (according to the linked article) only by putting all testing and initializing code into just one func per XYZhandler_test.go file, but I don't like this approach either.
What I would like to achieve is a kind of setUp() or init() that runs once when the first test is triggered, making all required variables globally visible and initialized, so that subsequent tests can use them without instantiating them again, while making sure that this setup file is compiled only for tests...
I am not sure if this is completely clear or if some code example is required for this kind of question (other than what is already linked in the article), but I will add anything that you think is required, just tell me!
Test packages, not files!
Since you're testing handlers/endpoints it would make sense to put all your _test files in either the handlers or the router package. (e.g. one file per endpoint/handler).
Also, don't use init() to set up your tests. The testing package specifies a function with the following signature:
func TestMain(m *testing.M)
The generated test will call TestMain(m) instead of running the tests
directly. TestMain runs in the main goroutine and can do whatever
setup and teardown is necessary around a call to m.Run. It should then
call os.Exit with the result of m.Run.
Inside the TestMain function you can do whatever setup you need in order to run your tests. If you have global variables, this is the place to declare and initialize them. You only need to do this once per package, so it makes sense to put the TestMain code in a separate _test file. For example:
package router

import (
    "net/http/httptest"
    "os"
    "testing"
)

var (
    testServer *httptest.Server
)

func TestMain(m *testing.M) {
    // setup the test server
    router := ConfigureRouter()
    testServer = httptest.NewServer(router)

    // run tests and exit with their status
    os.Exit(m.Run())
}
Finally run the tests with go test my/package/router.
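With that in place, every test in the package can reuse the shared server. A sketch of such a test follows; the /health endpoint is an assumed example route (not from the question), and testServer is the variable initialized in TestMain above:

```go
package router

import (
	"net/http"
	"testing"
)

// Uses the shared testServer started in TestMain; "/health" is an
// assumed example route for illustration only.
func TestHealthEndpoint(t *testing.T) {
	resp, err := http.Get(testServer.URL + "/health")
	if err != nil {
		t.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		t.Errorf("got status %d, want %d", resp.StatusCode, http.StatusOK)
	}
}
```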
Perhaps you could put the setup code that you want to use from multiple unit test files into a separate package that only the unit tests use?
Or you could put the setup code into the normal package and just use it from the unit tests.
It's been asked before but the Go authors have chosen not to implicitly supply a test tag that could be used to selectively enable function compiles within the normal package files.

How to pass an argument (e.g. the hostname) to the testrunner

I'm creating a unittest- and Selenium-based test suite for a web application. It is reachable via several hostnames, e.g. implying different languages; but of course I want to be able to test e.g. my development instances as well without changing the code (and without fiddling with the hosts file, which doesn't work for me anymore, presumably because of network security considerations).
Thus, I'd like to be able to specify the hostname by commandline arguments.
The test runner does argument parsing itself, e.g. for choosing the tests to execute.
What is the recommended method to handle this situation?
The solution I came up with finally is:
Have a module for the tests which holds the global data, including the hostname, and provides my TestCase class (I added an assertLoadsOk method to simply check for the HTTP status code).
This module does command-line processing as well:
It checks for its own options
and removes them from the argument vector (sys.argv).
When it finds an "unknown" option, it stops processing the options and leaves the rest to the test runner.
The command-line processing happens on import, before my TestCase class is initialized.
It works well for me ...
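A minimal sketch of such a module might look like this; the option name --hostname, the default value, and all function names are my own assumptions for illustration:

```python
# Sketch of the approach described above: a test-support module that
# consumes its own command-line options (here just the assumed option
# --hostname) and leaves everything else for the unittest test runner.
import sys
import unittest

DEFAULT_HOSTNAME = "www.example.com"  # assumed default


def extract_own_args(argv):
    """Split argv into (hostname, args_left_for_the_testrunner).

    Processing stops at the first unknown option, so the test runner
    still receives its own arguments untouched.
    """
    hostname = DEFAULT_HOSTNAME
    remaining = [argv[0]]
    i = 1
    while i < len(argv):
        if argv[i] == "--hostname" and i + 1 < len(argv):
            hostname = argv[i + 1]
            i += 2
        else:
            # unknown option: leave it (and everything after it) alone
            remaining.extend(argv[i:])
            break
    return hostname, remaining


# Runs at import time, before unittest.main() parses sys.argv.
HOSTNAME, sys.argv = extract_own_args(sys.argv)


class LoadsOkTestCase(unittest.TestCase):
    """Base class for the tests; a real assertLoadsOk would request
    "http://" + HOSTNAME + path and check the HTTP status code."""
```

Because the splitting runs at import time, a later call to unittest.main() only ever sees the cleaned-up sys.argv.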