Find a list of slow rspec tests [duplicate] - ruby-on-rails-3

This question already has an answer here:
How to find which rspec test is taking so long
(1 answer)
Closed 10 years ago.
How can I find a list of the slowest RSpec tests? I want to refactor those to run faster. I tried looking for gems but couldn't find any. I was thinking of putting something in
RSpec.before(:each)
and
RSpec.after(:each)
blocks to generate this list, but I don't know how to access the name of the spec.

That's actually built right into the rspec command. Just use the -p or --profile flag while you're running all your rspec tests.
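For example, asking for the ten slowest examples (the count argument is optional, and the exact output wording varies by RSpec version):
$ rspec --profile 10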

As MrDan says, the --profile flag. You can also make a .rspec file in the root of your project, so that you get the timing information all the time, like this:
$ cat .rspec
--colour
--profile
--format nested
--backtrace
You can also speed up all your tests generally by turning off garbage collection, like this:
# turn off garbage collection when running tests; place this in spec_helper.rb
RSpec.configure do |config|
  config.before(:all) do
    DeferredGarbageCollection.start
  end

  config.after(:all) do
    DeferredGarbageCollection.reconsider
  end
end
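Note that DeferredGarbageCollection is not part of RSpec; you have to provide it yourself (for example in spec/support). A minimal sketch of the commonly used implementation, assuming a DEFER_GC environment variable holding a threshold in seconds:
class DeferredGarbageCollection
  # Re-enable GC only after this many seconds have passed since the last collection.
  DEFERRED_GC_THRESHOLD = (ENV['DEFER_GC'] || 15.0).to_f

  @@last_gc_run = Time.now

  def self.start
    GC.disable if DEFERRED_GC_THRESHOLD > 0
  end

  def self.reconsider
    if DEFERRED_GC_THRESHOLD > 0 && Time.now - @@last_gc_run >= DEFERRED_GC_THRESHOLD
      GC.enable
      GC.start
      GC.disable
      @@last_gc_run = Time.now
    end
  end
end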

Related

LLVM lit testing: is it possible to configure number of threads via `lit.cfg`?

I wonder if it's possible to configure the number of threads for testing in a lit.cfg file.
lit offers a command-line flag for specifying the number of threads:
llvm/utils/lit/lit.py -j1 <test directory>
However, I'm not sure how to do this in a lit.cfg file. I want to force all tests in a subdirectory to be run with -j1 - not sure if this is possible.
Edit: for reference, I'm working on the Swift codebase which has a large test suite (4000+ tests) with multiple test subdirectories.
I want to run just one subdirectory with -j1 and the rest with the default number of threads (-j12 for my machine).
I was wondering about that too a while back, but I don't think there is one, because of this line here. Usually, the main project's compilation time dwarfs the lit tests' execution time.
It is easy to change, but I'd suggest using your build configuration to do this (e.g. make or cmake). So, make test could execute something like lit -j $(nproc) underneath.
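A minimal sketch of that idea in CMake, assuming a lit driver is discoverable on the PATH (the target name, variable name, and test path are placeholders):
# Locate the lit driver and wrap it in a custom target that forces -j1.
find_program(LIT_EXECUTABLE NAMES lit llvm-lit)
add_custom_target(check-serial
  COMMAND ${LIT_EXECUTABLE} -j 1 ${CMAKE_CURRENT_SOURCE_DIR}/test/serial
  COMMENT "Running the serial-only lit tests with -j1"
  USES_TERMINAL)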
Edit (after OP update):
I haven't worked with the swift repo, but maybe you could hack your way around it. One thing I can see is that you could influence the LIT_ARGS CMake variable by appending the options you want to it.
Now, to force single-process execution for a specific directory, you can add a lit.local.cfg that sets the singleProcess flag. This seems to override multi-threaded execution:
config.singleProcess = True

Let CMake set up CTest to print a header and footer around the output from single tests

Is there a way, ideally from CMakeLists.txt, to set up ctest so that it will
print a header before running the individual tests,
print a footer after running the individual tests,
make the footer dependent on whether the tests were all successful or not?
The footer should appear below the default output:
The following tests FAILED:
76 - MyHardTest
Errors while running CTest
This concretizes and generalizes a somewhat unclear question that has been open for more than two years (CMakeLists.txt: How to print a message if ctest fails?). Therefore I fear there is no easy solution.
Hence an alternative question: could the desired behavior be achieved with CDash?
YES, CTest does have macros to achieve exactly this [1]:
CTEST_CUSTOM_PRE_TEST Command to execute before any tests are run during Test stage
CTEST_CUSTOM_POST_TEST Command to execute after any tests are run during Test stage
To activate these macros from cmake for use by ctest, they must somehow be placed into the build directory. So it seems two steps are necessary:
(1) Have a script scriptdir/CTestCustom.cmake.in somewhere in the source tree, which contains
set(CTEST_CUSTOM_POST_TEST "echo AFTER_THE_TEST")
or whatever command you want instead of "echo"
(2) Let CMakeLists.txt call
configure_file("scriptdir/CTestCustom.cmake.in" ${CMAKE_BINARY_DIR}/CTestCustom.cmake)
so that during configuration stage a CTest configuration file is placed under the preferred name [2] CTestCustom.cmake in the build directory.
[1] https://cmake.org/Wiki/CMake/Testing_With_CTest
[2] https://blog.kitware.com/ctest-performance-tip-use-ctestcustom-cmake-not-ctest/
During my research, I found it was extremely difficult to integrate something like this. I am not entirely sure, but I believe you can do this in a CTest script, then create an add_custom_target to always allow that script to execute with ctest. For example, the command make check would then run ctest with the CTest script that you made... too much work?
The easiest way I can think of for your application is to just add two empty tests at the top and bottom as placeholders for the header and footer. CTest already has a "The following tests FAILED:" kind of output at the very end, so you might not have to worry about that part. Any sort of conditional logic (IF TEST FAILED DO THIS) cannot currently be done in ctest.
# add_test requires a COMMAND; cmake -E echo works as a cheap placeholder
add_test(NAME HEADER_GOES_HERE COMMAND ${CMAKE_COMMAND} -E echo "===== header =====")
add_test(NAME ACTUAL_TEST COMMAND test)
add_test(NAME FOOTER_GOES_HERE COMMAND ${CMAKE_COMMAND} -E echo "===== footer =====")
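If ctest is invoked with -j, test order is not guaranteed; a hedged sketch of pinning the placeholders in place with test dependencies:
set_tests_properties(ACTUAL_TEST PROPERTIES DEPENDS HEADER_GOES_HERE)
set_tests_properties(FOOTER_GOES_HERE PROPERTIES DEPENDS ACTUAL_TEST)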
Maybe someone can give you a better answer, but this is the easiest (not at all good) implementation I can think of.

In Rails, can you ignore specific tests from running?

I have a suite of tests that I run, but I would like to ignore a few of them locally, since they require a different Java version and consistently fail in my environment. I'm okay with ignoring these tests (we have integrated testing anyway).
How can I tell Rails not to run certain tests but run all the others? I'm just tired of seeing the errors, and it could lead me to miss some legit test failures...
Any help would be appreciated!
In RSpec you can use exclusion filters and then skip specific tests from the command line.
In your case, tag the description blocks as java: true.
describe "code for JVM in production", java: true do
it "java-specific test" do
end
end
Then run rspec . --tag ~java:true and RSpec will ignore/skip the tests matching the java: true tag.
NOTE: It is not necessary to set the other tests to java: false
Alternatively, you can amend your spec_helper.rb with a configuration that skips these tests when running locally, for example based on the platform (an environment-variable variant is sketched after the block below).
RSpec.configure do |c|
  if RUBY_PLATFORM.include?('darwin') # assumes Macintosh
    c.filter_run_excluding java: true
  end
end
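If you would rather switch on an explicit environment variable instead of the platform, a small sketch (SKIP_JAVA_SPECS is a made-up name):
RSpec.configure do |c|
  # Skip the java-tagged examples whenever SKIP_JAVA_SPECS is set in the environment.
  c.filter_run_excluding java: true if ENV['SKIP_JAVA_SPECS']
end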
CITE:
https://relishapp.com/rspec/rspec-core/v/2-5/docs/filtering/exclusion-filters
https://www.relishapp.com/rspec/rspec-core/v/2-4/docs/command-line/tag-option
Detecting Operating Systems in Ruby
Use the if or unless conditional in the describe or context block.
#scarver2's answer is really great and I just wanted to add a more lightweight alternative as well.
You can also use the if or unless conditional in the describe or context block, like:
describe "code for JVM in production", unless: RUBY_PLATFORM.include?('darwin') do
it "java-specific test" do
end
end

How to test the rust standard library?

I'd like to make some changes to my copy of the rust standard library, then run the tests in the source files I changed. I do not need to test the compiler itself. How can I do this without testing a lot of things I am not changing and do not care about?
Here are some things I've already tried. A note - the specific file I want to play around with is libstd/io/net/pipes.rs in rust 0.12.0.
I tried rustc --test pipes.rs - the imports and options are not set up properly, it seems, and a multitude of errors is the result.
Following the rust test suite documentation, I tried make check-stage1-std NO_REBUILD=1, but this failed with "can't find crate for `green`". A person on the #rust-internals irc channel informed me that "make check-stage1 breaks semi-often as it's not the 'official way' to run tests."
Another person on that channel suggested make check-stage0-std, which seems to check libstd, but doesn't restrict testing to the file I changed, even if I use the TESTNAME flag as specified in the rust test suite documentation.
As of 2022 the way to run the test suite for the rust standard library is documented at https://rustc-dev-guide.rust-lang.org/tests/running.html .
While the documentation mentions the ability to test a single file, it appears to malfunction:
$ ./x.py test library/core/tests/str.rs
Updating only changed submodules
Submodules updated in 0.01 seconds
Building rustbuild
Finished dev [unoptimized] target(s) in 0.09s
thread 'main' panicked at 'error: no rules matched library/core/tests/str.rs', src/bootstrap/builder.rs:286:17
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Build completed unsuccessfully in 0:00:00
Later on it mentions a different syntax, like ./x.py test library/alloc/ --test-args str, which appears to successfully run the unit tests in library/alloc/tests/str.rs.
make check-stage1-std NO_REBUILD=1 or ... check-stage2-std ... should work, if you've done a full build previously. They just build the test runner directly without doing the rest of the bootstrap.
In any case, the full std test runner is always built, since, as you noticed, the imports etc. are set up for the full crate. TESTNAME is the correct way to restrict which tests are run, but there's no way to restrict which tests are built.
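For example, something along these lines (the test-name filter is just an illustration for the 0.12-era Makefiles) builds the full std test runner but only runs the pipes tests:
$ make check-stage1-std NO_REBUILD=1 TESTNAME=io::net::pipes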
Another option is to pull the test/relevant code into an external file, and yet another is to build the test runner by running rustc manually on libstd/lib.rs: rustc --test lib.rs. One could edit the rest of the crate to remove the tests/code you're not interested in.

The 'right' way to run unit tests in Clojure

Currently, I define the following function in the REPL at the start of a coding session:
(defn rt []
  (let [tns 'my.namespace-test]
    (use tns :reload-all)
    (clojure.test/test-ns tns)))
And every time I make a change, I rerun the tests:
user=>(rt)
That's been working moderately well for me. When I remove a test, I have to restart the REPL and redefine the function, which is a little annoying. Also, I've heard bad rumblings about using the use function like this. So my questions are:
Is using use this way going to cause me a problem down the line?
Is there a more idiomatic workflow than what I'm currently doing?
Most people run
lein test
from a different terminal, which guarantees that what is in the files is what gets tested, not what is in your memory. Using reload-all can lead to false passes if you have changed a function name and are still calling the old name somewhere.
Calling use like that is not a problem in itself; it just constrains you to not have any name conflicts if you use more namespaces in your tests. As long as you only have one, it's fine.
Using lein also lets you specify unit and integration tests and easily run them in groups using the test-selectors feature.
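A minimal sketch of a test selector, assuming you tag the slow tests with ^:integration metadata (the selector names are placeholders); with this in project.clj, lein test runs only the untagged tests and lein test :integration runs the tagged ones:
:test-selectors {:default (complement :integration)
                 :integration :integration}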
I also run tests in my REPL. I like doing this because I have more control over the tests and it's faster due to the JVM already running. However, like you said, it's easy to get in trouble. In order to clean things up, I suggest taking a look at tools.namespace.
In particular, you can use clojure.tools.namespace.repl/refresh to reload files that have changed in your live REPL. There's also refresh-all to reload all the files on the classpath.
I add tools.namespace to my :dev profile in my ~/.lein/profiles.clj so that I have it there for every project. Then when you run lein repl, it will be included on the classpath, but it won't leak into your project's proper dependencies.
Another thing I'll do when I'm working on a test is to require it into my REPL and run it manually. A test is just a no-argument function, so you can invoke it directly.
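For example, assuming a deftest named adds-numbers in my.namespace-test (both names are placeholders):
(require 'clojure.test 'my.namespace-test)
(clojure.test/run-tests 'my.namespace-test) ; run every test in the namespace
(my.namespace-test/adds-numbers)            ; or call a single deftest directly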
I am so far impressed with lein-midje
$ lein midje :autotest
This starts a Clojure process that watches src and test files, reloads the associated namespaces, and runs the tests relevant to the changed file (tracking dependencies). I use it with VimShell to open a split buffer in vim and have both the source file and the test file open as well. I write a change to either one and the (relevant) tests are executed in the split pane.