In Rails, can you exclude specific tests from running? - ruby-on-rails-3

I have a suite of tests that I run, but I would like to ignore a few of them locally, since they require a different Java version and consistently fail in my environment. I'm okay with ignoring these tests (we have integration testing anyway).
How can I tell Rails not to run certain tests but still run all the others? I'm just tired of seeing the errors, and the noise could lead me to miss some legitimate test failures...
Any help would be appreciated!

In RSpec you can use exclusion filters and then skip specific tests from the command line.
In your case, tag the describe blocks with java: true.
describe "code for JVM in production", java: true do
it "java-specific test" do
end
end
Then run rspec . --tag ~java:true and RSpec will ignore/skip the tests tagged with java: true.
NOTE: It is not necessary to tag the other tests with java: false.
Alternatively, you can amend your spec_helper.rb with a configuration that skips these tests when they run locally, keyed off the platform (or an environment variable):
RSpec.configure do |c|
  if RUBY_PLATFORM.include?('darwin') # 'darwin' indicates macOS, i.e. the local dev machine
    c.filter_run_excluding java: true
  end
end
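If you would rather key off an environment variable than the platform, the same filter works. This is just a sketch; SKIP_JAVA_SPECS is a made-up variable name, not part of RSpec:
RSpec.configure do |c|
  # Skip the java-tagged examples when the (hypothetical) variable is set,
  # e.g. SKIP_JAVA_SPECS=1 rspec
  c.filter_run_excluding java: true if ENV['SKIP_JAVA_SPECS']
end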
CITE:
https://relishapp.com/rspec/rspec-core/v/2-5/docs/filtering/exclusion-filters
https://www.relishapp.com/rspec/rspec-core/v/2-4/docs/command-line/tag-option
Detecting Operating Systems in Ruby

Use the if or unless conditional in the describe or context block.
@scarver2's answer is really good; I just wanted to add a more lightweight alternative as well.
You can also use the if or unless conditional in the describe or context block, like this:
describe "code for JVM in production", unless: RUBY_PLATFORM.include?('darwin') do
it "java-specific test" do
end
end

Related

How do I call a function when all tests are finished running? [duplicate]

In Rust, is there any way to execute a teardown function after all tests have been run (i.e. at the end of cargo test) using the standard testing library?
I'm not looking to run a teardown function after each test; that has already been discussed in these related posts:
How to run setup code before any tests run in Rust?
How to initialize the logger for integration tests?
These discuss ideas to run:
setup before each test
teardown after each test (using std::panic::catch_unwind)
setup before all tests (using std::sync::Once)
One workaround is a shell script that wraps around the cargo test call, but I'm still curious if the above is possible.
I'm not sure there's a way to have a global ("session") teardown with Rust's built-in testing features; previous inquiries seem to have yielded little, aside from "maybe a build script". Third-party testing systems (e.g. shiny or stainless) might have that option, though, so it may be worth looking into their exact capabilities.
Alternatively, if nightly is suitable, there's a custom test frameworks feature being implemented, which you might be able to use for that purpose.
That aside, you may want to look at macro_rules! to clean up some boilerplate; that's what folks like burntsushi do, e.g. in the regex crate.
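For illustration, here is a minimal sketch of that macro_rules! pattern, generating one #[test] per case so the assertion boilerplate is written only once. The sum function and the test names are invented for this example and are not taken from regex:
// Function under test (made up for the sketch).
fn sum(a: i32, b: i32) -> i32 {
    a + b
}

// Expands each `name: (a, b) => expected,` entry into its own #[test] function.
macro_rules! sum_tests {
    ($($name:ident: ($a:expr, $b:expr) => $expected:expr,)*) => {
        $(
            #[test]
            fn $name() {
                assert_eq!(sum($a, $b), $expected);
            }
        )*
    };
}

sum_tests! {
    two_plus_two: (2, 2) => 4,
    zero_plus_one: (0, 1) => 1,
}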

Does Codeception have an equivalent to PHPUnit's "strict coverage"?

When using PHPUnit, you can annotate a test case with @covers SomeClass::someMethod to ensure that only code inside that method is recorded as covered when running the test. I like to use this feature because it helps me separate code that was incidentally executed during a test from code that was actually tested.
After using Codeception to implement some acceptance tests for my project, I decided I would rather use it than PHPUnit to run my unit tests. I would like to remove PHPUnit from the project if possible.
I am using Codeception's Cest format for my unit tests, and the @covers and @codeCoverageIgnore annotations no longer work. Code coverage reports show executed code outside of the methods specified with @covers as covered. Is there any way to mimic that "strict coverage" functionality using Codeception?
Edit: I have submitted an enhancement request to the Codeception project on GitHub.
It turns out that strict coverage was not possible using Cest-format tests when I asked the question. I have implemented it and the pull request has been merged.
For anyone migrating tests from PHPUnit and looking for this feature as I was, this means that a later release of Codeception should provide support for @covers, @uses, @codeCoverageIgnore, and other related test annotations.
The current version (2.2.4 at the time this was written) doesn't support it but 2.2.x-dev should.
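For reference, in a Cest-format test the annotation goes in the test method's docblock, roughly like this. The Calculator class and method names are invented for illustration, and the assertion assumes the Asserts module is enabled for the unit suite:
<?php
class CalculatorCest
{
    /**
     * Only lines inside Calculator::add should count as covered by this test.
     * @covers \Calculator::add
     */
    public function addsTwoNumbers(UnitTester $I)
    {
        $I->assertSame(4, (new \Calculator())->add(2, 2));
    }
}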

How to skip Clojure Midje tests

If I have a Clojure test suite written using the Midje testing framework, how do I skip individual tests? For example, if I were using JUnit in Java and wished to skip a single test, I would add an @Ignore annotation above that test method. Is there an equivalent to this for Midje?
I realise that I could add a label to my test metadata and then run the test suite excluding that label. For example, if I labelled my test with ":dontrun", I could then run the test suite with "lein midje :filter -dontrun". That would involve a change to the Continuous Integration task that runs the test suite, though, and I'd prefer not to do that. Is there an equivalent of JUnit's @Ignore so that I only need to change the Midje test code and not the Continuous Integration task?
future-fact does what you want; just replace (or wrap) your fact with it:
(future-fact "an interesting sum"
(sum-up 1 2) => 4)
This will, instead of executing the test code, print a message during the test run:
WORK TO DO "an interesting sum" at (some_tests.clj:23)

How to test the Rust standard library?

I'd like to make some changes to my copy of the rust standard library, then run the tests in the source files I changed. I do not need to test the compiler itself. How can I do this without testing a lot of things I am not changing and do not care about?
Here are some things I've already tried. A note: the specific file I want to play around with is libstd/io/net/pipes.rs in Rust 0.12.0.
I tried rustc --test pipes.rs - the imports and options are not set up properly, it seems, and a multitude of errors is the result.
Following the rust test suite documentation, I tried make check-stage1-std NO_REBUILD=1, but this failed with "can't find crate for `green`". A person on the #rust-internals irc channel informed me that "make check-stage1 breaks semi-often as it's not the 'official way' to run tests."
Another person on that channel suggested make check-stage0-std, which seems to check libstd, but doesn't restrict testing to the file I changed, even if I use the TESTNAME flag as specified in the rust test suite documentation.
As of 2022, the way to run the test suite for the Rust standard library is documented at https://rustc-dev-guide.rust-lang.org/tests/running.html.
While the documentation mentions the ability to test a single file, that appears not to work:
$ ./x.py test library/core/tests/str.rs
Updating only changed submodules
Submodules updated in 0.01 seconds
Building rustbuild
Finished dev [unoptimized] target(s) in 0.09s
thread 'main' panicked at 'error: no rules matched library/core/tests/str.rs', src/bootstrap/builder.rs:286:17
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Build completed unsuccessfully in 0:00:00
Later on it mentions a different syntax, ./x.py test library/alloc/ --test-args str, which does appear to successfully run the unit tests in library/alloc/tests/str.rs.
make check-stage1-std NO_REBUILD=1 or ... check-stage2-std ... should work, if you've done a full build previously. They just build the test runner directly without doing the rest of the bootstrap.
In any case, the full std test runner is always built, since, as you noticed, the imports etc. are set up for the full crate. TESTNAME is the correct way to restrict which tests are run, but there's no way to restrict which tests are built.
Another option is to pull the relevant test code into an external file, and yet another is to build the test runner by running rustc manually on libstd/lib.rs: rustc --test lib.rs. You could then edit the rest of the crate to remove the tests/code you're not interested in.

Running rspec parallel with capybara features

I have a test suite written in Ruby that uses Capybara and RSpec.
I am using the parallel_tests gem in order to run my tests in parallel using Sauce.
This is great, but it splits the workload up by spec file rather than by what I want, which is Capybara features or, even better, scenarios. My spec file looks like this:
publisher_spec.rb
Feature "Adding users to the publisher"
Scenario "using public groups"
Scenario "using private groups"
So I want to run each scenario as a parallel test, but parallel_tests only looks at the spec files, which would force me to break my spec file up into multiple files. My test suite would run faster if I had one scenario per spec file, but that would ruin the readability and the ability to use "before" steps.
Anyone have a good solution?
It depends on how many parallel processes there are and how many files (and how long they take).
parallel_tests implements an "evening out" strategy by logging how long each spec file takes to run, as explained in its README.
If your files cannot be evened out that way, splitting them apart is your only option.
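For reference, enabling that runtime log usually amounts to adding the parallel_tests runtime-logger formatter to .rspec_parallel, along the lines of the snippet below; double-check the exact formatter name and log path against the README for the version you have installed:
--format progress
--format ParallelTests::RSpec::RuntimeLogger --out tmp/parallel_runtime_rspec.log
Subsequent parallel_rspec runs can then use the recorded runtimes to balance the groups rather than splitting purely by file count.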
There's also the new parallel gem paraspec. It looks promising, but I have not yet tried it out with feature testing.