Is it possible to run a single test using QUnit inside Karma?

Jasmine has iit() and ddescribe(), and Mocha has it.only() and describe.only(), but I can't see any way to get QUnit to focus on running a single test.
The QUnit UI allows you to run a single test, but I can't see how to get this to work inside Karma, since Karma doesn't display the QUnit UI.

Here is my solution. It works fine for me, but YMMV.
Download qunit-karma-setup.js and qunit-karma-launch.js from here.
Then, in your Gruntfile:
config.set({
  frameworks: ['qunit'],
  files: [
    'your-app-files-here/**/*.js',
    'qunit-karma-setup.js',
    'your-tests-here/**/*.js',
    'qunit-karma-launch.js'
  ]
});
Now you can use the omodule(), otest() and oasyncTest() functions to run only the selected modules or tests.
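For example (a minimal sketch, assuming otest() takes the same arguments as QUnit's test(), which is how those helper files describe it):
// Per the helper's description, only otest()/omodule() selections run;
// plain test() calls are skipped while an otest() is present.
otest('my focused test', function (assert) {
  assert.ok(true, 'runs exclusively');
});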

QUnit is not well designed for this, but you can improve it!
QUnit's filtering mechanism is the validTest function, which reads QUnit.config.filter (a string: the name of a test). See https://github.com/jquery/qunit/blob/master/src/core.js#L820
There are two problems:
it only allows selecting one test,
you need to know the selected test name in advance (because the tests are filtered as they are created).
I suggest changing QUnit to:
filter using a custom "filter" function (the default implementation can do what the current validTest does),
filter the tests when executing (that is, collect all the tests first).
Then implementing inclusive tests will be simple ;-) A sketch of such a filter function is below.
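Purely as an illustration of that proposal (this hook does not exist in QUnit; the name filterFn is made up here), it could look something like:
// Hypothetical hook: a user-supplied predicate decides which collected tests run.
// The default roughly mirrors what validTest does today (match on the configured test name).
QUnit.config.filterFn = function (test) {
  return !QUnit.config.filter || test.testName === QUnit.config.filter;
};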

I believe you can use only. Taken from the docs:
test('my test 1', function (assert) { ... });
only('my test that will be run exclusively now', function (assert) { ... });
So instead of using the word test, you use only.

Add this at the top of your test file:
var ttest = test; // keep a reference to the real test()
test = function () {}; // replace the global test() with a no-op
Then rename the test you want to run from test to ttest. It looks clumsy, but it's simple and works well.

Related

Pyhtml - Print the order in which tests are run

I have been working with pytest for some time (and really like it, I must say). I have been able to generate a self-contained HTML report with additional columns, etc. What I need is either:
Have the results displayed in the order in which they are run (not failures first, as they normally appear in the self-contained HTML output)
OR
Print the order in which the tests are run. I am using @pytest.mark.run(order=123456) in my tests.
The order is important, as there are dependent tests that need to be executed in a certain sequence.
I'm not sure about pytest, but I worked with pyhtml and had a similar problem.
I would say you should use a yield statement in the function you are calling from your pyhtml part. If you are calling the pytest function directly from your pyhtml part, you should be able to write a new function that calls pytest for you.

Is it possible to add tags or have multiple BeforeTestRun hooks in Specflow

So I currently have an automation pack that I have created using Selenium/Specflow.
I wanted to know whether it is possible to have multiple BeforeTestRun hooks?
I've already tried: [BeforeTestRun("example1")] but I receive an error stating BeforeTestRunAttribute does not contain a constructor that takes 1 arguments
I tried the following but that also failed:
[BeforeTestRun]
[Scope(Tag = "example1")]
And referenced the above in the .feature file like this:
@example1
Scenario: This is an example
Given...
When...
Then...
Is there a way to implement this correctly so that, in one .feature file, I can have two scenarios that use different [BeforeTestRun] hooks?
If you cannot use [BeforeScenario] as suggested, you can try manually checking for tags with if statements. To get the current tags and compare them to the ones you need, try this:
// ScenarioInfo.Tags holds the tags on the current scenario; Any() requires System.Linq
var tags = ScenarioContext.ScenarioInfo.Tags;
if (tags.Any(x => x.Equals("MyTag")))
{
    DoWork();
}
More info here: https://stackoverflow.com/a/42417623/9742876

Cucumber - how to get scenario tag that is currently being executed

I have a scenario with multiple tags, for example @registration, @smoke, @core.
I have a configuration file (test.conf.js) in which I set the targeted tests to run, like this:
cucumberOpts: {
  tags: ['@registration', '~@WIP']
}
Running this configuration will only run scenarios with the @registration tag.
With this I can get and iterate through all scenario tags (in this case @registration, @smoke, @core):
beforeScenario: function (scenario) {
  tags = scenario.getTags();
  tags.forEach(function (scenarioTagItem) { ... });
}
My question is: how do I get, in the above function, the tag that the test is currently running against? That is, how do I recognize that the currently running tag is @registration, i.e. recognize it as the active tag?
Please help :)
I just called this.cucumberOpts.tags, because it was in the same file, and built my logic on that. Stupid oversight on my side :/
An even better way to do it is browser.options.cucumberOpts.tags.
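A rough sketch of combining the two, assuming a Protractor setup where browser.options exposes cucumberOpts and each tag object has a getName() method (as in older cucumber.js versions):
beforeScenario: function (scenario) {
  // Tags requested in the config, e.g. ['@registration', '~@WIP']
  var requestedTags = browser.options.cucumberOpts.tags;
  // Keep only the scenario's tags that were explicitly requested
  var activeTags = scenario.getTags()
    .map(function (tag) { return tag.getName(); })
    .filter(function (name) { return requestedTags.indexOf(name) !== -1; });
  console.log('Active tag(s): ' + activeTags.join(', '));
}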

Phpunit Yii: two fixtures for one table

In PHPUnit, with Yii, is it possible to create multiple fixtures for the same table?
I would like to have different fixture folders to be used with different unit tests, to avoid problems between the various test files.
You can set the fixture folder for each test by adding the following to your test classes:
protected function setUp()
{
    $this->getFixtureManager()->basePath = 'path/to/fixtures';
    parent::setUp();
}
With this, you can have your tests use whichever set of fixtures you want.
Make sure to call parent::setUp(), and to call it after setting the basePath property, as that is what actually loads the fixtures.
See also CDbFixtureManager.

Googletest: combine tests in a testsuite

I am trying googletest.
Previously I have been using Boost.Test, where I used the BOOST_AUTO_TEST_SUITE macro to group my tests into a test suite.
This makes the JUnit reports much more readable.
I have not found any hint of how to do this, or something similar, in googletest. Is it possible?
I use the first parameter of the call to TEST() or TEST_F() as sort of a "test suite" identifier, like this:
TEST(TestSuiteName, shouldExpectTrue) {
    EXPECT_TRUE(true);
}
TEST(TestSuiteName, shouldExpectFalse) {
    EXPECT_FALSE(false);
}
Of course, when using a fixture class with TEST_F(), your TestSuiteName will need to match the name of your fixture class, so it will be necessary to create a separate fixture class for each test suite.
There is no way that I know of to break the test suites into sub-suites or anything like that, but of course you could always run your tests multiple times using the --gtest_filter="someFilter" option if you wanted to clean up your output.
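For example, assuming the naming scheme above and a test binary called my_tests (the binary name is just a placeholder), running only one "suite" would look like:
./my_tests --gtest_filter=TestSuiteName.*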