I'm using WebDriverIO and I want to do the following:
Run a single test before any test is run (createNewUsers)
Use specific capabilities (proxy settings) for that first test
Once done, use a default set of capabilities for everything else
I can't seem to figure this out.
I've tried adding a second set of capabilities and using the exclude option so that it only applies to that specific spec, but I don't know whether this is actually possible, nor how to call that specific test from my before hook. So in the capabilities I use:
exclude: [ './newUserCreationStage/newStageUsers.js' ],
But then, in my before block, how do I tell it to run that spec (if that's even possible):
before: function (capabilities, specs) {
    expect = require('chai').expect;
    // RUN THIS: './newUserCreationStage/newStageUsers.js'
},
I would say that your setup needs a somewhat different approach. First, take a look at the xUnit Fixture Setup patterns. This createNewUsers step can actually be achieved via a Suite Fixture Setup, Prebuilt Fixture, Setup Decorator or Creation Method. Any of these will put the SUT in the desired state and remove the need to
Run a single test before any test is run
Even better - if you have access, you can seed a DB or call APIs to load all the users, data and whatever else you need before the test run. This is also known as Back Door Manipulation. Let your CI server take care of all this as a dedicated step.
Since you are using Mocha, you can use tags to organise your suites and specs in a better way. This will let you switch drivers and capabilities according to the needs of each test (some may need a proxy, others not). I would also suggest looking at mocha-tags. The Strategy pattern is a good fit here: it lets you keep many related variants that differ only in behaviour.
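To make the back-door idea concrete, here is a minimal wdio.conf.js sketch that seeds the users through the application's REST API in the onPrepare hook, which runs once before any worker starts. The endpoint, the data file and the axios dependency are assumptions for illustration, not part of your setup:

// wdio.conf.js (sketch - the API endpoint, data file and axios are assumptions)
const axios = require('axios');

exports.config = {
    specs: ['./test/specs/**/*.js'],
    // a single, default set of capabilities for every spec
    capabilities: [{ browserName: 'chrome' }],

    // runs exactly once, before any worker or spec is started
    onPrepare: async function (config, capabilities) {
        // back-door manipulation: create the stage users via the API
        // instead of driving the UI through a proxy
        const users = require('./test/data/stageUsers.json'); // hypothetical fixture
        for (const user of users) {
            await axios.post('https://stage.example.com/api/users', user);
        }
    },

    before: function (capabilities, specs) {
        expect = require('chai').expect;
    },
};

With the users created up front, the newStageUsers.js spec and the proxy-specific capabilities are no longer needed at all.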
I have a test that is sometimes run with the command-line flag --disable-multiple-windows and sometimes without it. I would like the test to skip a section if multiple windows are disabled. Is there an easy way to tell that from within a test?
My best solution so far is something like the following:
let disableMultipleWindows = true;
try {
    // if a second window can be opened, multiple windows are not disabled
    await t.openWindow('www.google.com');
    disableMultipleWindows = false;
    await t.closeWindow();
}
catch (e) {
    // t.openWindow throws when multiple windows are disabled
}
But I'm wondering if there's a better way.
At present, support for multiple browser windows can be disabled only for an entire test run; there is no access to the --disable-multiple-windows option from within a test body. You can learn more about this option at Disable Support for Multiple Windows.
Could you please clarify your scenario for using the --disable-multiple-windows option? I think you can split your tests into different test suites: with and without multiple browser windows.
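If splitting the suites is not practical, one workaround (a sketch, not a built-in TestCafe feature) is to set an ordinary environment variable alongside the flag and branch on it inside the test; the variable name and URLs here are made up:

// run as:  DISABLE_MULTIPLE_WINDOWS=1 testcafe chrome tests/ --disable-multiple-windows
// or as:   testcafe chrome tests/
fixture('multi-window checks')
    .page('https://devexpress.github.io/testcafe/example');

const multipleWindowsDisabled = process.env.DISABLE_MULTIPLE_WINDOWS === '1';

test('runs the second-window steps only when allowed', async t => {
    // ... steps that always run ...
    if (!multipleWindowsDisabled) {
        await t.openWindow('https://devexpress.github.io/testcafe/example');
        // ... steps that need a second window ...
        await t.closeWindow();
    }
});

This avoids probing with openWindow and keeps the test's intent explicit, at the cost of having to keep the variable and the flag in sync.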
I am trying to set up an integration/API test suite with Karate and am considering Karate Netty for mocking the required services. For the test setup, the system under test A (a Spring Boot app) is started up completely, and the Karate tests are then executed by a Maven test run against this instance.
Service A depends on multiple other services, which need to be mocked away for the tests. To do this, my idea was to configure a running Karate Netty standalone instance as an HTTP proxy (set via JVM args of service A).
Now my idea was to create one test feature file: xyz-test.feature
And the required mocks for this file are defined in an associated mock feature file: xyz-mock.feature
(The test scenarios are rather complex and the responses of the external services could vary)
This means for a full test run I need to load up a couple of mock feature files. So:
What is the matching strategy for multiple mock feature files? Which scenario wins, so to speak?
Is there any way to ensure that the right mock file is used for the associated test file?
(Clearly I could reconfigure the running standalone instance and advise it to use xyz-mock.feature next.
But this would stop me from using parallel execution for my API tests, right?)
I have already thought about reusing the Correlation-Id, which I can send in with each test and then match against in the mock file (it is also sent to all called services). But:
Is there a way to define a global matcher per mock file?
It sounds like you need only one mock file. You could boot 2 on different ports if you wanted, but there is no way to "merge" them into one port - if that is what you were looking for.
In my experience, you will be able to have a single mock take care of all your edge cases. This is because Karate's approach is unconventional: you pretty much write a stateful server. But by keeping variables in memory and some clever JSON-path, you can simulate CRUD with very few lines of code: https://github.com/intuit/karate/tree/master/karate-netty#background
You can use only one at a time, by design
Given the above limitation, here's an interesting idea: add something like an extra pathMatches('/__test/reset') scenario that cleans up your state and sets the Background variables to things like * def cats = []. Now in each feature, just call the special "reset" URL at the start. The good thing is that Karate is thread-safe. Another idea, as you said, is that you can maintain two or three different variables and use some logic to "route" based on a header - again very easy IMO. Use a map of maps, e.g.:
* def data = { cats1: {}, cats2: {}, cats3: {} }
And you can get the header, e.g. if it is mode: cats1
* def mode = karate.get('requestHeaders.mode[0]')
* def cats = data[mode]
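To make the earlier "reset" idea concrete, here is a minimal sketch of such a mock feature; the /__test/reset path, the cats variable and the routes are illustrative only, not from your project:

Feature: stateful mock with a test-only reset endpoint (sketch)

Background:
* def cats = {}

Scenario: pathMatches('/__test/reset') && methodIs('post')
    # wipe the in-memory state; each test feature calls this URL first
    * def cats = {}
    * def response = { reset: true }

Scenario: pathMatches('/cats') && methodIs('post')
    # assumes each posted cat has a unique 'name' field
    * def cat = request
    * cats[cat.name] = cat
    * def response = cat

Scenario: pathMatches('/cats') && methodIs('get')
    * def response = $cats.*

Each test feature would then call POST /__test/reset once at the start, as suggested above.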
Not sure if this answers your question, but if the last Scenario has an "empty" description, it is a "catch all" and can in theory delegate to another server (or mock): https://github.com/intuit/karate/tree/develop/karate-netty#proxy-mode
Your question is a little confusing, so you may have to edit and re-word it if I haven't understood.
EDIT: using multiple mock files should be possible in 1.1.0 onwards: https://github.com/intuit/karate/issues/1566
I have a web application which needs to be tested in multiple browsers in multiple environments (Chrome, Firefox, and Internet Explorer on Windows; Chrome and Firefox on Linux, Internet Explorer being Windows-only).
The tests have been written in Java using JBehave, Selenium, and Serenity BDD (Thucydides). They exercise an underlying REST API, testing whether objects can be successfully created and deleted using the UI.
I am using Selenium Grid, and would like to run the tests on parallel nodes; however, the concern is that as the tests exercise an underlying REST API, there could be conflicts.
Is it possible to pass parameters to the tests from the Jenkins job configuration that runs them, so that the tests differ slightly depending on the node on which they are executing? (e.g. an object named 'MYOBJECT-CHROME' is created on Chrome versus 'MYOBJECT-FIREFOX' on Firefox, so that any REST API conflicts are avoided?)
If the software under test (SUT) allows concurrent REST API requests, there is no need for you to worry about
meaning any conflicts can be avoided?
The tests' concurrent requests should be set up as fixtures, meaning every atomic test should set up and tear down the test data it requires, or restore the SUT's state. A good candidate here is a Prebuilt Fixture: it can be added as a step in Jenkins and reduces the overhead of creating all those test objects.
If you still need to parameterize the build, you can use your BDD suite #tags to define which set of tests will be executed.
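If you do parameterize, one simple way to avoid name clashes is to fold a Jenkins-supplied system property (or just a random suffix) into the object names. A small Java sketch; the -Dbrowser property name and the name format are assumptions:

// Jenkins job: mvn verify -Dbrowser=chrome   (the property name is an assumption)
import java.util.UUID;

public final class TestObjectNames {

    private TestObjectNames() { }

    /** e.g. "MYOBJECT-CHROME" - unique per browser/node. */
    public static String perBrowser(String prefix) {
        String browser = System.getProperty("browser", "default");
        return prefix + "-" + browser.toUpperCase();
    }

    /** e.g. "MYOBJECT-3f2a..." - unique per run, no Jenkins parameter needed. */
    public static String perRun(String prefix) {
        return prefix + "-" + UUID.randomUUID();
    }
}

The perRun variant avoids conflicts without any build parameters at all, which is often the simpler option.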
I'd like to test the search functionality of 30 websites that are generated by the same CMS under different domains with different Lucene indexes. For this purpose I'd like to write a single page object and feed it those 30 different baseUrls from the configuration.
I'd be running those tests in the same environment so I'm not sure how to approach this issue. Is there anything I've been missing so far? Looking forward to a push in the right direction and thanks in advance.
You can always override the base URL in your test using browser.config.baseUrl = 'http://example.com'. It will be reverted to the value configured in GebConfig.groovy for the next test.
The question is how many tests you wish to run this way. If it's just one test, then you can get away with using Spock's support for parametrised tests with a where: block and this approach. If it's more than one test, then you will probably be looking at custom test runners or running your tests multiple times from your build system with different Geb environment settings.
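A minimal sketch of the single-test variant, assuming a Geb/Spock setup; the site URLs and the form#search selector are illustrative, and in practice you would navigate via your page object rather than a raw selector:

import geb.spock.GebReportingSpec

class CrossSiteSearchSpec extends GebReportingSpec {

    def "search page loads on #siteBaseUrl"() {
        given: "point the browser at this site for this iteration only"
        browser.config.baseUrl = siteBaseUrl

        when: "open the (assumed) search page relative to the base URL"
        go "search"

        then: "a search form is present (selector is an assumption)"
        $("form#search").displayed

        where:
        siteBaseUrl << [
            "http://site-01.example.com/",
            "http://site-02.example.com/",
            // ... remaining sites, ideally read from configuration
        ]
    }
}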
I'm creating a unittest- and Selenium-based test suite for a web application. It is reachable under several hostnames, e.g. implying different languages; but of course I also want to be able to test my development instances without changing the code (and without fiddling with the hosts file, which no longer works for me, presumably because of network security restrictions).
Thus, I'd like to be able to specify the hostname by commandline arguments.
The test runner does argument parsing itself, e.g. for choosing the tests to execute.
What is the recommended method to handle this situation?
The solution I came up with finally is:
Have a module for the tests which holds the global data, including the hostname, and provides my TestCase class (I added an assertLoadsOk method to simply check the HTTP status code).
This module does the command-line processing as well:
It checks for its own options
and removes them from the argument vector (sys.argv).
When it finds an "unknown" option, it stops processing options and leaves the rest to the test runner.
The commandline processing happens on import, before initializing my TestCase class.
It works well for me ...
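For illustration, a minimal sketch of such a module; the module name, the --hostname option, the default host and the urllib-based check are illustrative choices, not the author's actual code:

# testconfig.py - a sketch of the module described above (names are illustrative)
import sys
import unittest
import urllib.request

HOSTNAME = 'www.example.com'        # default, overridden by --hostname

def _consume_own_options(argv):
    """Strip our own options from argv; leave everything else to the test runner."""
    global HOSTNAME
    i = 1
    while i < len(argv):
        arg = argv[i]
        if arg == '--hostname':
            HOSTNAME = argv[i + 1]
            del argv[i:i + 2]       # remove the option and its value
        elif arg.startswith('--hostname='):
            HOSTNAME = arg.split('=', 1)[1]
            del argv[i]
        else:
            break                   # unknown option: stop, the test runner handles the rest

# runs on import, before the TestCase class below is used by the runner
_consume_own_options(sys.argv)

class HostTestCase(unittest.TestCase):
    def assertLoadsOk(self, path):
        """Assert that http://<HOSTNAME><path> answers with HTTP status 200."""
        with urllib.request.urlopen('http://%s%s' % (HOSTNAME, path)) as response:
            self.assertEqual(response.status, 200)

Invoked as python test_something.py --hostname dev.example.com -v, the --hostname pair is stripped before unittest.main() parses sys.argv, and -v is left for the runner (own options have to come first, since processing stops at the first unknown option, as described above).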