I'm creating a unittest- and Selenium-based test suite for a web application. It is reachable by several hostnames (e.g. implying different languages), but of course I want to be able to test my development instances as well without changing the code, and without fiddling with the hosts file (which no longer works for me, presumably because of network security considerations).
Thus, I'd like to be able to specify the hostname via command-line arguments.
The test runner does argument parsing itself, e.g. for choosing the tests to execute.
What is the recommended method to handle this situation?
The solution I finally came up with is:
Have a module for the tests which fixes the global data, including the hostname, and provides my TestCase class (I added an assertLoadsOk method to simply check for the HTTP status code).
This module does command-line processing as well: it checks for its own options and removes them from the argument vector (sys.argv). When it finds an "unknown" option, it stops processing options and leaves the rest to the test runner.
The command-line processing happens on import, before my TestCase class is initialized.
It works well for me ...
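For reference, here is a minimal sketch of such a module (the option name, default hostname and assertion body are illustrative, not the exact original code):

import sys
import unittest

HOSTNAME = 'www.example.com'  # default, overridden from the command line

def _consume_own_args():
    # Strip our own options from sys.argv; leave the rest to the test runner.
    global HOSTNAME
    args = sys.argv[1:]
    kept = []
    while args:
        arg = args.pop(0)
        if arg == '--hostname':
            HOSTNAME = args.pop(0)
        else:
            # Unknown option: stop processing, hand the rest to the runner.
            kept.append(arg)
            kept.extend(args)
            break
    sys.argv[1:] = kept

_consume_own_args()  # happens on import, before the TestCase is used

class MyTestCase(unittest.TestCase):
    def assertLoadsOk(self, path):
        # fetch http://HOSTNAME/<path> (e.g. via Selenium) and check
        # the HTTP status code here
        pass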
I'm trying to set up an integration/API test suite with Karate and am considering using Karate Netty for mocking required services. For the test setup, the system under test A (a Spring Boot app) is started up completely. The Karate tests are then executed by a Maven test run against this instance.
Service A depends on multiple other services; these need to be mocked away for the tests. To do so, my idea was to configure a running Karate Netty standalone instance as an HTTP proxy (done via JVM args of service A).
Now my idea was to create one test feature file: xyz-test.feature
And the required mocks for this file are defined in an associated mock feature file: xyz-mock.feature
(The test scenarios are rather complex and the responses of the external services could vary)
This means for a full test run I need to load up a couple of mock feature files. So:
What is the matching strategy for multiple mock feature files? Which scenario wins, so to speak?
Is there any way to ensure that the right mock file is used for the associated test file?
(Clearly I could reconfigure the running standalone instance and advise it to use xyz-mock.feature next.
But this would stop me from using parallel execution for my API tests, right?)
I already thought about reusing the Correlation-Id, which I can send in with each test and then match against in the mock file (it is also sent to all called services). But:
Is there a way to define a global matcher per mock file?
It sounds like you need only one mock file. You could boot 2 on different ports if you wanted, but there is no way to "merge" them into one port - if that is what you were looking for.
In my experience, you will be able to have a single mock take care of all your edge cases. This is because Karate's approach is unconventional: you pretty much write a stateful server. But by keeping variables in memory and using some clever JSON-path, you can simulate CRUD with very few lines of code: https://github.com/intuit/karate/tree/master/karate-netty#background
You can use only one at a time, by design
Given the above limitation, here's an interesting idea: add an extra scenario like pathMatches('/__test/reset') that cleans up your state and resets the Background variables to things like * def cats = []. Then, in each feature, just call the special "reset" URL at the start. The good thing is that Karate is thread-safe. Another idea, as you said, is to maintain two or three different variables and use some logic to "route" based on a header, which is again very easy IMO. Use a map of maps, e.g.:
* def data = { cats1: {}, cats2: {}, cats3: {} }
And you can get the header, e.g. if it is mode: cats1
* def mode = karate.get('requestHeaders.mode[0]')
* def cats = data[mode]
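Putting the two ideas together, a single stateful mock feature could look roughly like this (a sketch only; the paths, the cats variable and the use of karate.append are illustrative, not taken from the original answer):

Feature: stateful mock with a reset hook

Background:
* def cats = []

Scenario: pathMatches('/__test/reset')
# called by each test feature at the start to get a clean state
* def cats = []
* def response = { reset: true }

Scenario: pathMatches('/cats') && methodIs('post')
# simulate "create": remember the posted cat in memory
* def cats = karate.append(cats, request)
* def response = request

Scenario: pathMatches('/cats')
# simulate "read all"
* def response = cats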
Not sure if this answers your question, but if the last Scenario has an "empty" description, it is a "catch all" and can in theory delegate to another server (or mock): https://github.com/intuit/karate/tree/develop/karate-netty#proxy-mode
Your question is a little confusing, so you may have to edit and re-word it if I haven't understood.
EDIT: using multiple mock files should be possible in 1.1.0 onwards: https://github.com/intuit/karate/issues/1566
I am upgrading an internal tool from JUnit 4 to JUnit 5. Therefore I have to write an extension for the test execution. The task of the extension is to ensure the correct state of an external application (start it if it is not running, etc.). To perform this task, several parameters (from the command line) are needed.
Can I store these parameters in the extension context? And if so, how can I access it before the actual tests run starts?
Can I store these parameters in the extension context?
You could, but a better option would simply be to access them as "configuration parameters" -- for example, via org.junit.jupiter.api.extension.ExtensionContext.getConfigurationParameter(String).
And if so, how can I access it before the actual tests run starts?
You can access them via the aforementioned ExtensionContext.getConfigurationParameter(String) method within a BeforeAllCallback extension.
If you want that custom extension to be executed before all test classes without the user having to register the extension explicitly, you could have the extension registered automatically. See the User Guide for details.
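For illustration, a minimal sketch of such an extension (the app.hostname parameter name is an assumption for this example, not part of JUnit):

import org.junit.jupiter.api.extension.BeforeAllCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

public class ExternalAppExtension implements BeforeAllCallback {

    @Override
    public void beforeAll(ExtensionContext context) {
        // Read a configuration parameter, which can be supplied e.g. via
        // -Dapp.hostname=... on the command line or junit-platform.properties.
        String hostname = context.getConfigurationParameter("app.hostname")
                .orElse("localhost");
        // ensure the external application is running; start it if not, etc.
    }
}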
I'm using WebDriverIO and I want to do the following:
Run a single test before any test is run (createNewUsers)
Use specific capabilities (proxy settings) for that first test
Once done, use a default set of capabilities for everything else
So I can't seem to figure this out:
I've tried to add a second set of capabilities and use the exclude argument to ensure it only applies to that specific spec; however, I don't know whether this is actually possible, or how to call that specific test in my before block. These are the capabilities I use:
exclude: [ './newUserCreationStage/newStageUsers.js' ],
But then in my before block, how do I run that spec (if it's even possible):
before: function (capabilities, specs) {
    expect = require('chai').expect;
    // RUN THIS: './newUserCreationStage/newStageUsers.js'
},
I would say that your setup needs a slightly different approach. First, take a look at the xUnit Fixture Setup Patterns. This createNewUsers step can actually be achieved via SuiteFixture Setup, Prebuilt Fixture, Setup Decorator or Creation Method. This will set the SUT to the desired state and remove the need to
Run a single test before any test is run
Even better - if you have access, you can seed a DB or call APIs to load all users, data and whatever else you need before the test run. This is also known as Back Door Manipulation. Let your CI server take care of all this as a dedicated step.
Since you are using Mocha, you can utilize tags and organise your suites and specs in a better way. This will allow you to switch drivers with capabilities according to your needs (read: test needs; some tests may need a proxy, others not). I would suggest also considering mocha-tags. A good fit here is the Strategy pattern. It will allow you to have many related subclasses which differ only in behavior.
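To illustrate the back-door idea in WebdriverIO terms: the seeding could go into the onPrepare hook of wdio.conf.js, which runs once in the launcher process before any worker starts (a sketch; createNewUsers is a hypothetical helper that creates the users via the API rather than the UI):

// wdio.conf.js
const { createNewUsers } = require('./setup/createUsers'); // hypothetical helper

exports.config = {
    // ...specs, capabilities, framework settings, etc.
    onPrepare: function (config, capabilities) {
        // Runs once before any spec is executed, so no special
        // "first test" with its own capabilities is needed.
        return createNewUsers();
    },
};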
I am trying to write nice unit tests for my already created REST API. I have this simple structure:
ROOT/
config/
handlers/
lib/
models/
router/
main.go
config contains the configuration in JSON and one simple config.go that reads and parses the JSON file and fills the Config struct. handlers contains the controllers (i.e. the handlers for the respective METHOD+URL combinations described in router/routes.go). lib contains some DB, request responder and logger logic. models contains the structs and their funcs to be mapped from/to JSON and the DB. Finally, router contains the router and the routes definition.
Now I have been searching and reading a lot about unit testing REST APIs in Go and found more or less satisfying articles about how to set up a testing server, define routes and test my requests. All fine. BUT only if you want to test a single file!
My problem now is how to set up the testing environment (server, routes, DB connection) for all handlers. With the approach found here (which I find very easy to understand and implement) I have one problem: either I have to run the tests separately for each handler, or I have to write the test suites for all handlers in just one test file. I believe you understand that neither case is satisfactory (the first because I need to preserve the property that running go test runs all the tests, and the second because one test file covering all handler funcs would become unmaintainable).
By now I have succeeded (following the linked article) only by putting all testing and initializing code into just one func per XYZhandler_test.go file, but I don't like this approach either.
What I would like to achieve is some kind of setUp() or init() that runs once with the first triggered test, making all required variables globally visible and initialized, so that all subsequent tests can use them without instantiating them again, while making sure that this setup file is compiled only for tests...
I am not sure if this is completely clear or if some code example is required for this kind of question (other than what is already linked in the article), but I will add anything that you think is required, just tell me!
Test packages, not files!
Since you're testing handlers/endpoints it would make sense to put all your _test files in either the handlers or the router package. (e.g. one file per endpoint/handler).
Also, don't use init() to setup your tests. The testing package specifies a function with the following signature:
func TestMain(m *testing.M)
The generated test will call TestMain(m) instead of running the tests
directly. TestMain runs in the main goroutine and can do whatever
setup and teardown is necessary around a call to m.Run. It should then
call os.Exit with the result of m.Run.
Inside the TestMain function you can do whatever setup you need in order to run your tests. If you have global variables, this is the place to declare and initialize them. You only need to do this once per package, so it makes sense to put the TestMain code in a separate _test file. For example:
package router

import (
	"net/http/httptest"
	"os"
	"testing"
)

var (
	testServer *httptest.Server
)

func TestMain(m *testing.M) {
	// set up the test server
	router := ConfigureRouter()
	testServer = httptest.NewServer(router)

	// run the tests
	os.Exit(m.Run())
}
Finally run the tests with go test my/package/router.
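Any test elsewhere in the package can then simply reuse testServer. A sketch (the /health route is illustrative, and this file would also import net/http):

func TestHealthEndpoint(t *testing.T) {
	// hit the shared test server set up in TestMain
	resp, err := http.Get(testServer.URL + "/health")
	if err != nil {
		t.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		t.Errorf("got status %d, want %d", resp.StatusCode, http.StatusOK)
	}
}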
Perhaps you could put the setup code that you want to use from multiple unit test files into a separate package that only the unit tests use?
Or you could put the setup code into the normal package and just use it from the unit tests.
It's been asked before, but the Go authors have chosen not to implicitly supply a test build tag that could be used to selectively compile functions within the normal package files.
I'm new to testing and I'm trying to create a test for my Plone product for the first time.
I'm on Plone 3.3.
The basic test suite works, I can execute it without errors.
I followed this documentation: http://plone.org/documentation/kb/testing
...except that I'm writing my tests in Python classes instead of doctests.
My problem is that I cannot seem to access the views defined in my app (I get ComponentLookupError).
The problem seems to be with the "browserlayer" defined by my application.
When I remove the layer="..." attribute from my configure.zcml, the test can access the views without problem. However, if I add it back, it doesn't work.
I guess that's because the browserlayer interface doesn't get applied to the request.
The only reference to this problem I found is in the tests for googlesitemap: http://dev.plone.org/collective/browser/googlesitemap/googlesitemap.common/trunk/googlesitemap/common/tests?rev=
The author seems to have made a custom ZCML file for the tests, in which the layer="..." attribute has been removed (this would work, but having to maintain a separate ZCML file for the tests seems very bad).
In my test, I have included the following (taken from the googlesitemap tests), which passes:
from jambette.site.interfaces import IJambetteLayer # this is my browserlayer
from plone.browserlayer.utils import registered_layers
self.assertTrue(IJambetteLayer in registered_layers())
So I think my skin and browserlayer are registered correctly.
Is there anything I need to do so that the browserlayer will be applied to the request?
Browser layer interfaces are simply 'painted' onto the request with directlyProvides. Just do so in your test setup before you look up the view:
from zope.interface import directlyProvides
from jambette.site.interfaces import IJambetteLayer
...
directlyProvides(request, IJambetteLayer)
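In a PloneTestCase-style test the whole lookup could then look like this (a sketch; the view name 'my-view' is illustrative):

from zope.component import getMultiAdapter
from zope.interface import directlyProvides
from jambette.site.interfaces import IJambetteLayer

request = self.portal.REQUEST  # the test request
directlyProvides(request, IJambetteLayer)  # paint the layer onto the request
view = getMultiAdapter((self.portal, request), name='my-view')  # now resolves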