Location leaking between vue-test-utils tests - vue.js

I am trying to create everything fresh for each test, for example creating localVue in each test, but the location seems to leak between unit tests. I am using vue-test-utils and jest with vue-router. The way I work around it is to explicitly navigate to '/' at the beginning of each test. Is this inevitable, or is there a way to isolate the tests from each other?

Yes. Because you are running tests in an environment with window on the global scope, any changes to window or its properties affect later tests running in the same scope. The best approach is to reset any properties that your source code alters before each test.
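For example, a minimal sketch of such a reset with Jest and jsdom (what exactly needs resetting is an assumption; adapt it to whichever window properties your code touches):

// jsdom keeps window and its URL alive for the whole test file, so whatever
// the previous test navigated to is still there unless you reset it.
beforeEach(() => {
  // Put the URL back to '/' rather than relying on the previous test's location.
  window.history.replaceState({}, '', '/')
})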

Related

Share managed resources in tests (zio tests)

I'm binding an HttpRoute and testing it.
That occurs inside a ZManaged context.
Yet I need to do this for each test, which is very resource-inefficient.
Is there a way to run many labeled tests inside a ZManaged context?

How do I call a function when all tests are finished running? [duplicate]

In Rust, is there any way to execute a teardown function after all tests have been run (i.e. at the end of cargo test) using the standard testing library?
I'm not looking to run a teardown function after each test, as that has already been discussed in these related posts:
How to run setup code before any tests run in Rust?
How to initialize the logger for integration tests?
These discuss ideas to run:
setup before each test
teardown before each test (using std::panic::catch_unwind)
setup before all tests (using std::sync::Once)
One workaround is a shell script that wraps around the cargo test call, but I'm still curious if the above is possible.
I'm not sure there's a way to have a global ("session") teardown with Rust's built-in testing features. Previous inquiries seem to have yielded little, aside from "maybe a build script". Third-party testing systems (e.g. shiny or stainless) might have that option, though; it might be worth looking into their exact capabilities.
Alternatively, if nightly is suitable there's a custom test frameworks feature being implemented, which you might be able to use for that purpose.
That aside, you may want to look at macro_rules! to clean up some of the boilerplate; that's what folks like burntsushi do, e.g. in the regex crate.
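For illustration, a minimal sketch of that macro_rules! idea (the helper function and test names are made up; the point is that the per-test boilerplate lives in one place):

// A table-driven test macro: each entry expands into its own #[test] function.
fn is_ascii_digits(s: &str) -> bool {
    !s.is_empty() && s.chars().all(|c| c.is_ascii_digit())
}

macro_rules! digit_tests {
    ($($name:ident: $input:expr => $expected:expr;)*) => {
        $(
            #[test]
            fn $name() {
                assert_eq!(is_ascii_digits($input), $expected);
            }
        )*
    };
}

digit_tests! {
    all_digits: "12345" => true;
    with_letter: "12a45" => false;
    empty_input: "" => false;
}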

Clear Cursive REPL state before each test run

I'm new to Cursive and Clojure in general and am having some difficulty getting a decent TDD workflow.
My problem is that subsequent test runs depend on state in the REPL. For example suppose that you have the code below.
(def sayHello "hello")
(deftest test-repl-state
(testing "testing state in the repl"
(is (= "hello" sayHello))))
If you run this with "Tools->REPL->Run tests in current ns in REPL" it will pass.
If you then refactor the code like this
(def getGreeting "hello")
(deftest test-repl-state
(testing "testing state in the repl"
(is (= "hello" sayHello))))
If you run this with "Tools->REPL->Run tests in current ns in REPL" it will still pass (because the def of sayHello still exists in the repl). However, the tests should fail because the code is currently in a failing state (sayHello is not defined anywhere in the code).
I've tried toggling the "locals will be cleared" button in the REPL window but this does not seem to fix the issue.
If there is a way to run the tests outside of the REPL (or in a new REPL for each test run) I'd be fine with that as a solution.
All I want is a one-to-one correspondence between the source code under test and the result of the test.
Thanks in advance for your help.
Yes, it's annoying to have old defs still available. I usually don't even create tests (whoops), but this bites me during normal development: if I create a function, then rename it, then change it, and then accidentally refer to the first name, I get odd results, since the REPL is still referring to the old function. I'm still looking for a good way around this that doesn't involve killing and restarting the REPL.
For your particular case, though, there are a couple of easy (if imperfect) workarounds:
Open IntelliJ's terminal (button at bottom left of the window) and run lein test. This will execute all the project's tests and report the results.
Similarly to the above, you can, outside of IntelliJ, open a command window in the project directory and run lein test, and it will run all found tests.
You can also specify which namespace to test using lein test <ns here> (such as lein test beings-retry.core-test), or a specific test in a namespace using :only (such as lein test :only beings-retry.core-test/a-test; where a-test is a deftest). Unfortunately, this doesn't happen in the REPL, so it kind of breaks workflow.
The only REPL-based workaround I know of, as mentioned above, is to just kill the REPL:
"Stop REPL" (Ctrl+F2)
"Reconnect" (Ctrl+F5).
Of course though, this is slow, and an awful solution if you're doing this constantly. I'm interested to see if anyone else has any better solutions.
You could use the built-in test narrowing (test selector) feature of the test-refresh Leiningen plugin. It lets you run only those tests that have been marked with ^:test-refresh/focus metadata, every time you save a file.
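For example (a sketch; it reuses getGreeting from the question above):

;; Only tests marked with the focus metadata are re-run on each save
;; while test narrowing is enabled in lein test-refresh.
(deftest ^:test-refresh/focus test-greeting
  (testing "only this focused test runs while iterating"
    (is (= "hello" getGreeting))))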
The usual solution for this kind of problem is either stuartsierra/component or tolitius/mount.
A complete description would be out of place here, but the general idea is to have a system that manages state in a way that lets you cleanly reload the application state. This helps keep the running system close to the code saved in your source files while you work on it interactively.
Thanks to everyone for their suggestions. I'm posting my own answer to this problem because I've found a way forward that works for me and I'm not sure that any of the above were quite what I was looking for.
I have come to the conclusion that the Clojure REPL, although useful, is not where I will run tests. This basically came down to a choice between running a command to clean the REPL between each test run (like the very useful refresh function in tools.namespace https://github.com/clojure/tools.namespace) and not running tests in the REPL at all.
I chose the latter option because:
It is one less step to do (and reloading is not always perfect)
CI tests do not run in a REPL so running them directly in dev is one step closer to the CI environment.
The code in production does not run in a REPL either so running tests outside the repl is closer to the way that production code runs.
It's actually pretty simple to configure a run configuration in IntelliJ that runs either a single test or all the tests in your application as a normal Clojure application. You can even have a REPL running at the same time if you like and use it however you want. The fact that the tooling leans so heavily towards running things in the REPL blinded me to this option to some extent.
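For anyone curious, a minimal sketch of such an entry point (the namespace names here are hypothetical):

;; A plain Clojure -main you can point an IntelliJ run configuration at;
;; it runs the tests with clojure.test outside of any REPL session.
(ns test-runner
  (:require [clojure.test :as t]
            [my.namespace-test]))

(defn -main [& _]
  (t/run-tests 'my.namespace-test))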
I'm pretty inexperienced with Clojure and also a stubborn old goat that is set in his TDD ways but at least some others agree with me about this https://github.com/cursive-ide/cursive/issues/247.
Also if anyone is interested, there is a great talk on how the REPL holds on to state and how this causes all sorts of weird behaviour here https://youtu.be/-RaFcpNiYCo. It turns out that the problem I was seeing with re-defining functions was just the tip of the iceberg.
One option that may help, especially if you're bundling several assertions or have repeated tests, is let. The name-value binding has a known scope and can save you from re-typing a lot.
Here's an example:
(deftest my-bundled-and-scoped-test
  (let [TDD    "My expected result"
        helper (some-function :data)]
    (testing "TDD-1: Testing state in the repl"
      (is (= TDD "My expected result")))
    (testing "TDD-2: Reusing state in the repl"
      (is (= TDD helper)))))
Once my-bundled-and-scoped-test finishes executing, you'll no longer be in the let binding. An added benefit is that the result of some-function is reusable, which is handy for testing multiple assertions or properties of the same function/input pair.
While on the subject, I'd also recommend using Leiningen to run your tests, as there are plenty of plugins that can help you test more efficiently. I'd check out test-refresh, speclj, and cloverage.

The 'right' way to run unit tests in Clojure

Currently, I define the following function in the REPL at the start of a coding session:
(defn rt []
  (let [tns 'my.namespace-test]
    (use tns :reload-all)
    (clojure.test/test-ns tns)))
And every time I make a change I rerun the tests:
user=>(rt)
That's been working moderately well for me. When I remove a test, I have to restart the REPL and redefine the function, which is a little annoying. Also, I've heard bad rumblings about using the use function like this. So my questions are:
Is using use this way going to cause me a problem down the line?
Is there a more idiomatic workflow than what I'm currently doing?
Most people run
lein test
from a different terminal, which guarantees that what is in the files is what is tested, not what is in your memory. Using :reload-all can lead to false passes if you have changed a function name and are still calling the old name somewhere.
Calling use like that is not a problem in itself; it just constrains you not to have any name conflicts if you use more namespaces in your tests. So long as you only have the one, it's OK.
Using lein also lets you separate unit and integration tests and easily run them in groups using the test-selectors feature.
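For example, a minimal project.clj sketch of test selectors (the project name and the :integration keyword are illustrative; mark the relevant tests with ^:integration metadata):

(defproject my-app "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.11.1"]]
  :test-selectors {:unit        (complement :integration)
                   :integration :integration})

;; lein test :unit         -- runs tests without the ^:integration meta
;; lein test :integration  -- runs only the ^:integration tests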
I also run tests in my REPL. I like doing this because I have more control over the tests and it's faster due to the JVM already running. However, like you said, it's easy to get in trouble. In order to clean things up, I suggest taking a look at tools.namespace.
In particular, you can use clojure.tools.namespace.repl/refresh to reload files that have changed in your live REPL. There's also refresh-all to reload all the files on the classpath.
I add tools.namespace to my :dev profile in my ~/.lein/profiles.clj so that I have it there for every project. Then when you run lein repl, it will be included on the classpath, but it won't leak into your project's proper dependencies.
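For reference, a sketch of that profiles.clj entry (the version shown is an assumption; use whatever release of tools.namespace is current):

;; ~/.lein/profiles.clj
{:dev {:dependencies [[org.clojure/tools.namespace "1.4.4"]]}}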
Another thing I'll do when I'm working on a test is to require it into my REPL and run it manually. A test is just a no-argument function, so you can invoke it as such.
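Put together, a REPL session might look roughly like this (the namespace is the one from the question; the test name is hypothetical):

(require '[clojure.tools.namespace.repl :refer [refresh]])
(refresh)                         ; reload whatever source files changed
(require 'my.namespace-test)
(my.namespace-test/some-test)     ; a deftest defines a zero-arg function you can call directly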
I am so far impressed with lein-midje
$ lein midje :autotest
This starts a Clojure process that watches the src and test files, reloads the associated namespaces, and runs the tests relevant to the changed file (tracking dependencies). I use it with VimShell to open a split buffer in vim, with both the source file and the test file open as well. I write a change to either one and the (relevant) tests are executed in the split pane.

Running Scala tests automatically either after test change or tested class change

I'm wondering if there is any way to run Scala tests automatically when either the test class or the class under test changes. Automatically testing Class <---> ClassTest pairs would be a good start.
sbt can help you with this. After you set up the project, just run
~test
~ means continuous execution, so sbt will watch the file system for changes and, when changes are detected, recompile the changed classes and re-run your tests. ~testQuick can be even more suitable for you, because it runs only the tests that were affected by the change (including the test class and all of its transitive dependencies). You can read more about this here:
http://code.google.com/p/simple-build-tool/wiki/TriggeredExecution
http://php.jglobal.com/blog/?p=363
By the way, ~ also works with other tasks like ~run.
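For reference, a minimal session showing triggered execution (a standard sbt project layout is assumed):

# From the project root: start the sbt shell and enable triggered execution.
# Every time a source or test file is saved, the affected tests re-run.
$ sbt
> ~testQuick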