I have a suite of tests in my Leiningen-based Clojure project.
I want to run most of them frequently, but I want to exclude those with the :integration selector because they are slow and flaky.
If I understand correctly there's a built-in :only selector in leiningen which will run only the matching tests:
lein test :only :integration
I want a :not selector which does the opposite (runs all except :integration).
lein test :not :integration
Is there a way to build this with the facilities provided by lein test?
I know I can write a fn like (complement :integration) and put it in the :test-selectors map in my project.clj, but it would be hard-coded to ignore :integration. What I really want is a general :not that I can parameterize with a keyword, so I can also skip my :slow or :flaky tests in other circumstances.
I don't think you can do it with a keyword, since Leiningen picks up any keyword on the command line as a selector of its own. But you can create a custom test selector that you pass a string:
:test-selectors {:not (fn [m s] (not (contains? m (keyword s))))}
You can then call it with lein test :not integration or lein test :not flaky.
In my regression suite I have 600+ test cases, all tagged with @RegressionTest. Here is how I am running them:
_start = LocalDateTime.now();
//see karate-config.js files for env options
_logger.info("karate.env = " + System.getProperty("karate.env"));
System.setProperty("karate.env", "test");
Results results = Runner.path("classpath:functional/Commercial/").tags("@RegressionTest").reportDir(reportDir).parallel(5);
generateReport(results.getReportDir());
assertEquals(0, results.getFailCount(), results.getErrorMessages());
I am thinking that I can create one test and tag it @smokeTest. I want to run that test first, and only if it passes, run the entire regression suite. How can I achieve this? I am using JUnit 5 and the Karate Runner.
I think the easiest thing to do is run one test in JUnit itself, and if that fails, throw an Exception or skip running the actual tests.
So use the Runner two times.
Otherwise, consider this not directly supported in Karate, but code contributions are welcome.
Also refer to the answers to this question: How to rerun failed features in karate?
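For the "use the Runner two times" idea, a minimal, untested sketch could look like the following. It assumes Karate 1.x (com.intuit.karate.Runner and Results), that the smoke scenario is tagged @smokeTest, and the class and method names are placeholders; the feature path and tags mirror the question.

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class SmokeThenRegressionRunner {

    @Test
    void testSmokeThenRegression() {
        System.setProperty("karate.env", "test");

        // First pass: run only the scenario(s) tagged @smokeTest, serially.
        Results smoke = Runner.path("classpath:functional/Commercial/")
                .tags("@smokeTest")
                .parallel(1);
        // Fail fast: if the smoke run has failures, this assertion aborts the JUnit
        // test and the regression suite below is never started.
        assertEquals(0, smoke.getFailCount(), smoke.getErrorMessages());

        // Second pass: the full regression suite, exactly as in the question.
        Results regression = Runner.path("classpath:functional/Commercial/")
                .tags("@RegressionTest")
                .parallel(5);
        assertEquals(0, regression.getFailCount(), regression.getErrorMessages());
    }
}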
I have a test suite with multiple unit tests, and all of these tests expect a specific working directory because they use relative paths to load test data. If the test executable is run from the wrong directory, all of these tests fail.
What's the proper way to make this check in gtest? Preferably so that I get a single failure message instead of 50 failed unit tests with the same message.
One way is to use a fixture and do a one-time check, but in that case I still get all 50 failures instead of skipping the rest of the test suite.
In the latest release, v1.10.0, gtest provides the new GTEST_SKIP() macro (hooray!!).
It can be used as follows:
TEST(SkipTest, DoesSkip)
{
  if (my_condition_to_skip)
    GTEST_SKIP();
  // ...
}
As far as I know, there is no documentation on this yet except for the unit test of the feature.
As you can see in that unit test, entire fixture classes can also be skipped. The skipped tests are marked as not failing, with a green color. But you still get one line of output per test:
[----------] 2 tests from Fixture
[ RUN      ] Fixture.DoesSkip
[  SKIPPED ] Fixture.DoesSkip (1 ms)
[ RUN      ] Fixture.DoesSkip2
[  SKIPPED ] Fixture.DoesSkip2 (0 ms)
[----------] 2 tests from Fixture (12 ms total)
Googletest has a built-in filtering feature. Provided that all your tests have a common part in their names (e.g. they are in a single fixture), you can disable them when running the tests:
./foo_test --gtest_filter=-PathDependentTests.*
Or set the GTEST_FILTER environment variable to the same string. For more, see:
GoogleTest 1.8 docs
GoogleTest master docs
If you still want a failure, but only one instead of fifty, then this is probably not the best mechanism, unfortunately.
Parts of my system are specced out really well, but when I change one of the predicates to something obviously wrong, I notice that all my tests still pass and I don't get the usual blow-up from spec that I've come to rely on.
I can't figure out why this is happening, and I certainly can't reproduce it starting from lein new test.
Is there a way I can get spec.test to give me a warning when it can't find a spec, for debug purposes, rather than assuming I didn't want to spec out this part of my system? Can it perhaps help me in some other way with debugging this situation?
spec should error if you try to use a spec that's not defined.
There is no way currently to have it tell you about things that aren't spec'ed. To do so would require instrumenting (replacing) all vars and adding that check.
For your particular problem, if you have a spec that you're changing, I would search for who is using that predicate and then try testing each thing that uses those specs or the original predicate.
One thing that trips people up sometimes is that stest/instrument only checks the :args specs of functions, not the :ret or :fn specs (which are only used by stest/check).
Here is a minimal reproduction:
(ns test.core
  (:require [clojure.spec :as s]))

(defn my-specced-fn [x]
  x)

(s/fdef my-specced-fn
  :args (s/cat :arg int?))

(ns test.core-test
  (:require [clojure
             [test :refer :all]]
            [test.core :as core]
            [clojure.spec.test :as spec-test]))

(spec-test/instrument)

(deftest my-specced-fn-test
  (is (= 1 (core/my-specced-fn 1))))
This test passes initially. I would then edit test.core, change the spec, and re-evaluate test.core. After changing the spec to a predicate like string?, the test should fail, but it keeps passing. To solve the problem, re-evaluate the test namespace (specifically the call to instrument).
Suppose I have 10 test cases in a test suite, of which 2 are disabled. I want those two test cases to show up in the test result of the Jenkins job, e.g. pass = 7, fail = 1, and disabled/not run = 2.
By default, TestNG generates a report for your test suite; refer to the index.html file under the test-output folder. If you click the "Ignored Methods" link, it will show you all the ignored test cases, their class names, and the count of ignored methods.
All test cases annotated with @Test(enabled = false) will show up under the "Ignored Methods" link.
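For reference, a disabled test is simply one annotated with @Test(enabled = false). A minimal sketch (class and method names are placeholders):

import org.testng.annotations.Test;

public class SampleReportTest {

    @Test
    public void passingTest() {
        // executed and counted as passed
    }

    @Test(enabled = false)
    public void disabledTest() {
        // never executed; listed under "Ignored Methods" in test-output/index.html
    }
}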
If your test generates JUnit XML reports, you can use the JUnit plugin to parse these reports after the build (as a post-build action). Then, you can go into your build and click 'Test Result'. You should see a breakdown of how the execution went (including passed, failed, and skipped tests).
I have a ScalaTest suite which extends FlatSpec. It contains many tests, and I now want to be able to run one test at a time. No matter what I do, I can't get IntelliJ to do it. In the test's Edit Configurations, I can specify that it should run a single test by giving the name of the test. For example:
it should "test the sample multiple times" in new MyDataHelper {
...
}
where I gave the name as "test the sample multiple times", but it does not seem to accept that; all I see is that it prints Empty Test Suite. Any ideas how this can be done?
If using Gradle, go to Preferences > Build, Execution, Deployment > Build Tools > Gradle and in the Build and run > Run tests using: section, select IntelliJ IDEA if you haven't already.
An approach that works for me is to right-click (on Windows) within the definition of the test, and choose "Run MyTestClass..." -- or, equivalently, Ctrl-Shift-F10 with the cursor already inside the test. But it's a little delicate and your specific example may be causing your problem. Consider:
class MyTestClass extends FlatSpec with Matchers {

  "bob" should "do something" in {
    // ...
  }

  it should "do something else" in {
    // ...
  }

  "fred" should "do something" in {
    // ...
  }

  it should "do something else" in {
    // ...
  }
}
I can use the above approach to run any of the four tests individually. Your approach based on editing configurations works too. But if I delete the first test, I can't run the second one individually (the others are still fine). That's because a test that starts with it is intended to follow one that doesn't; the it is then replaced with the appropriate string in the name of the test.
If you want to run the tests by setting up configurations, then the names of these four tests are:
bob should do something
bob should do something else
fred should do something
fred should do something else
Again, note the substitution for it -- there's no way to figure out the name of a test starting with it if it doesn't follow another test.
I'm using IntelliJ Idea 13.1.4 on Windows with Scala 2.10.4, Scala plugin 0.41.1, and ScalaTest 2.1.0. I wouldn't be surprised if this worked less well in earlier versions of Idea or the plugin.
I just realized that I'm able to run individual tests with IntelliJ 13.1.3 Community Edition. With the one that I had earlier 13.0.x, it was unfortunately not possible.