I'm new to Cursive and Clojure in general and am having some difficulty getting a decent TDD workflow.
My problem is that subsequent test runs depend on state in the REPL. For example suppose that you have the code below.
(def sayHello "hello")

(deftest test-repl-state
  (testing "testing state in the repl"
    (is (= "hello" sayHello))))
If you run this with "Tools->REPL->Run tests in current ns in REPL" it will pass.
If you then refactor the code like this
(def getGreeting "hello")

(deftest test-repl-state
  (testing "testing state in the repl"
    (is (= "hello" sayHello))))
If you run this with "Tools->REPL->Run tests in current ns in REPL" it will still pass, because the def of sayHello still exists in the REPL. However, the test should fail: the code is in a broken state (sayHello is no longer defined anywhere in the source).
I've tried toggling the "locals will be cleared" button in the REPL window but this does not seem to fix the issue.
If there is a way to run the tests outside of the REPL (or in a new REPL for each test run) I'd be fine with that as a solution.
All I want is a one-to-one correspondence between the source code under test and the result of the tests.
Thanks in advance for your help.
Yes, it's annoying to have old defs available. I don't even create tests usually (whoops), but this bites me during normal development. If I create a function, then rename it, then change it, then accidentally refer to the first function name, I get odd results since it's referring to the old function. I'm still looking for a good way around this that doesn't involve killing and restarting the REPL.
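One partial mitigation, if you know exactly which stale var is biting you, is to remove it from the namespace by hand with ns-unmap instead of restarting the REPL. A minimal sketch (sayHello stands in for whatever var you renamed away):

```clojure
;; Remove a stale var from the current namespace by hand.
;; sayHello is a stand-in for the var you renamed away in the source.
(def sayHello "hello")       ; the old def still lingering in the REPL

(ns-unmap *ns* 'sayHello)    ; drop the symbol->var mapping

(resolve 'sayHello)          ; now returns nil: the var is gone
```

This only helps when you remember which name went stale, so it doesn't solve the general problem, but it is much faster than a REPL restart.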
For your particular case, though, there are a couple of easy (if crude) workarounds:
Open IntelliJ's terminal (button at bottom left of the window) and run lein test. This will execute all the project's tests and report the results.
Similarly to the above, you can, outside of IntelliJ, open a command window in the project directory and run lein test, and it will run all found tests.
You can also specify which namespace to test using lein test <ns here> (such as lein test beings-retry.core-test), or a specific test in a namespace using :only (such as lein test :only beings-retry.core-test/a-test, where a-test is a deftest). Unfortunately, none of this happens in the REPL, so it somewhat breaks the workflow.
The only REPL-based workaround I know of, as mentioned above, is to just kill the REPL:
"Stop REPL" (Ctrl+F2)
"Reconnect" (Ctrl+F5).
Of course though, this is slow, and an awful solution if you're doing this constantly. I'm interested to see if anyone else has any better solutions.
You could use the built-in test narrowing (test selector) feature of the test-refresh Leiningen plugin. It lets you run only the tests marked with ^:test-refresh/focus metadata every time you save a file.
The usual solution for this kind of problem is either stuartsierra/component or tolitius/mount.
A complete description would be out of place here, but the general idea is to have a system that manages state in a way that lets you cleanly reload the application state. This helps keep the running system close to the code saved in your source files while you work on it interactively.
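The core idea can be sketched in plain Clojure without either library (the names system, start!, stop!, and restart! are illustrative, not Component's or Mount's actual API):

```clojure
;; A minimal sketch of the "reloadable system" idea behind Component/Mount.
;; All mutable state lives in one well-known place, so it can be torn down
;; and rebuilt deterministically instead of accreting stale defs.
(defonce system (atom nil))

(defn start! []
  (reset! system {:db :connected}))   ; stand-in for opening real resources

(defn stop! []
  (reset! system nil))                ; stand-in for releasing them

(defn restart! []
  (stop!)
  (start!))
```

Both libraries elaborate on this pattern with dependency ordering and per-component lifecycles, but the payoff is the same: a restart! (or the library's equivalent) gives you a fresh, known state without killing the REPL.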
Thanks to everyone for their suggestions. I'm posting my own answer to this problem because I've found a way forward that works for me and I'm not sure that any of the above were quite what I was looking for.
I have come to the conclusion that the Clojure REPL, although useful, is not where I will run tests. This basically came down to a choice between either running a command to clean the REPL between each test run (like the very useful refresh function in tools.namespace https://github.com/clojure/tools.namespace) or not running tests in the REPL.
I chose the latter option because:
It is one less step to do (and reloading is not always perfect).
CI tests do not run in a REPL so running them directly in dev is one step closer to the CI environment.
The code in production does not run in a REPL either so running tests outside the repl is closer to the way that production code runs.
It's actually pretty simple to set up a run configuration in IntelliJ that runs either a single test or all tests in your application as a normal Clojure application. You can even have a REPL running at the same time if you like and use it however you want. The fact that the tooling leans so heavily towards running things in the REPL blinded me to this option to some extent.
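For example, a plain entry point along these lines can be the target of an ordinary run configuration (the namespace name test-runner-sketch is illustrative; in a real project you would :require your own test namespaces and pass them to run-tests):

```clojure
;; A sketch of a plain main entry point that runs clojure.test tests
;; outside the REPL. The sample test lives in this same namespace just
;; to keep the sketch self-contained.
(ns test-runner-sketch
  (:require [clojure.test :as t :refer [deftest is]]))

(deftest sample-test
  (is (= 4 (+ 2 2))))

(defn -main [& _args]
  (let [{:keys [fail error]} (t/run-tests 'test-runner-sketch)]
    ;; Non-zero exit code on failure, as a CI runner would expect.
    (System/exit (if (zero? (+ fail error)) 0 1))))
```

Because it is just a -main, every run starts from a fresh JVM, which gives exactly the one-to-one correspondence between source and test results described above.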
I'm pretty inexperienced with Clojure and also a stubborn old goat that is set in his TDD ways but at least some others agree with me about this https://github.com/cursive-ide/cursive/issues/247.
Also if anyone is interested, there is a great talk on how the REPL holds on to state and how this causes all sorts of weird behaviour here https://youtu.be/-RaFcpNiYCo. It turns out that the problem I was seeing with re-defining functions was just the tip of the iceberg.
One option that may help, especially if you're bundling several assertions or have repeated tests, is let. The name-value binding has a known scope and can save you from re-typing a lot.
Here's an example:
(deftest my-bundled-and-scoped-test
  (let [TDD    "My expected result"
        helper (some-function :data)]
    (testing "TDD-1: Testing state in the repl"
      (is (= TDD "My expected result")))
    (testing "TDD-2: Reusing state in the repl"
      (is (= TDD helper)))))
Once my-bundled-and-scoped-test finishes executing, you'll no longer be in the let binding. An added benefit is that the result of some-function is reusable too, which is handy for testing multiple assertions or properties of the same function/input pair.
While on the subject, I'd also recommend using Leiningen to run your tests, as there are plenty of plugins that can help you test more efficiently. I'd check out test-refresh, speclj, and cloverage.
Related
I often work on very small pieces of code, on the order of max 100 lines, especially in scenarios when I learn something new and just play with the code, or when I debug.
Because I frequently change code and want to see how that changes the contents of my variables and output, it is tedious to either
1) hit the debug button, wait for the debugger to start (in my case I use PyCharm as an IDE) and then inspect the output
or
2) insert some prints for the variables that I want to observe and compile the code (slightly faster than starting the debugger).
To eliminate this time-consuming workflow, where I constantly hit the compile or debug button every few seconds: is there an IDE where I can set a watch on a few variables, and then each time I change a single character in my source code (or, alternatively, every half a second), the IDE automatically compiles everything and shows me the new values of my variables?
(Of course, while I am in the middle of changing the code the compilation will give errors, but that is OK. This feature would be a big time saver. Maybe PyCharm has it already implemented? If not, ideally I would hope for a language-agnostic IDE, similar to PyCharm, where variants for Java etc. also exist. If not, since I code in Python, a Python IDE would also be great.)
This might not be exactly what you are looking for but PyCharm (and IntelliJ and probably others) can run tests automatically when code changes.
In the PyCharm Run toolbar look for "Toggle auto-test" button.
For example in PyCharm you can create test cases that just runs the code you're interested in and prints the variables you need.
Then create a run configuration that runs only those tests and set it to run automatically.
For more details see PyCharm documentation on rerunning tests.
The Scala plugin for IntelliJ has exactly what you need in the form of "worksheets," where every expression is automatically recompiled when its value or the value of anything it references is changed.
Since (based on your usage of PyCharm), I assume you're using Python primarily, I think Jupyter notebook is your best bet. Jupyter is language agnostic but began as specific to python (it was called IPython notebook for this reason).
I have not tried it, but this guide purports to show how to get Jupyter to work with PyCharm.
EDIT: Here is another possibility called vim worksheet; I haven't tried it, but it purports to do the same thing as Scala worksheets, but in vim, and for a number of languages, including Python.
The python Spyder IDE (comes with Anaconda) has this feature. When you hit run, you can see all of the variables at the top right of the screen and you can click on them to see their values (this is very helpful with Numpy Arrays too!).
If your interest is in the actual workflow improvement:
I used to program like you, watching what my variables changed to and designing or debugging my code based on those changes. However, it is far too inefficient and costly to set up which variables to watch over and over again, and besides, when something breaks you have to go through the whole debugging process again.
I changed my design process to improve my workflow and adopted Test Driven Development (TDD). With it you can look for tools for your specific implementation or IDE, but the principles and workflow stay with you. You stop looking at how the variables changed and instead focus on what the functions should do, which means faster iteration (with real-time testing tools), easier debugging, and far safer refactoring.
My favorite tool for it is Cucumber, an agnostic tool (independent of IDE or programming language) that helps you test your code scenarios while documenting your features at the same time.
Hope it helps. I know it's a very opinionated answer, but it's honest advice for improving one's workflow.
You should try Thonny. It is developed by the Institute of Computer Science at the University of Tartu.
The 4 features which might be of help to you are below (verbatim from the website):
No-hassle variables.
Once you're done with hello-worlds, select View → Variables and see how your programs and shell commands affect Python variables.
Simple debugger.
Just press Ctrl+F5 instead of F5 and you can run your programs step-by-step, no breakpoints needed. Press F6 for a big step and F7 for a small step. Steps follow program structure, not just code lines.
Step through expression evaluation. If you use small steps, then you can even see how Python evaluates your expressions. You can think of this light-blue box as a piece of paper where Python replaces subexpressions with their values, piece-by-piece.
Faithful representation of function calls.
Stepping into a function call opens a new window with separate local variables table and code pointer. Good understanding of how function calls work is especially important for understanding recursion.
I'm using emacs with nREPL via cider, and I've got a suite of clojure.test-based tests that I run to see when I've broken things (which is a lot, as I'm fairly new to Clojure). I've tried two methods to run these tests - first by invoking the external "lein test" command and second by using clojure-test - and both work, but neither gives completely satisfactory results. What I want is to be able to "navigate" the results of the tests, i.e. click on failures and stacktraces to go to the sources of failure.
I've poked around a bit with clojure-stacktrace-mode, but, while fairly impressive, that only seems to apply to stacktraces generated in the nREPL buffer.
So my question is: is there a way to get the behavior I want? Or maybe another way to get equivalent functionality? I feel like all the parts are there, but that I'm putting them together incorrectly.
Currently, I define the following function in the REPL at the start of a coding session:
(defn rt []
  (let [tns 'my.namespace-test]
    (use tns :reload-all)
    (clojure.test/test-ns tns)))
And every time I make a change I rerun the tests:
user=>(rt)
That's been working moderately well for me. When I remove a test, I have to restart the REPL and redefine the function, which is a little annoying. Also, I've heard bad rumblings about using the use function like this. So my questions are:
Is using use this way going to cause me a problem down the line?
Is there a more idiomatic workflow than what I'm currently doing?
Most people run
lein test
from a different terminal, which guarantees that what is in the files is what is tested, not what is in memory. Using :reload-all can lead to false passes if you have changed a function name and are still calling the old name somewhere.
Calling use like that is not a problem in itself; it just constrains you to avoid name conflicts if you use more namespaces in your tests. So long as you only have the one, it's OK.
Using lein lets you specify unit and integration tests and easily run them in groups using the test-selectors feature.
I also run tests in my REPL. I like doing this because I have more control over the tests and it's faster due to the JVM already running. However, like you said, it's easy to get in trouble. In order to clean things up, I suggest taking a look at tools.namespace.
In particular, you can use clojure.tools.namespace.repl/refresh to reload files that have changed in your live REPL. There's also refresh-all to reload all the files on the classpath.
I add tools.namespace to my :dev profile in my ~/.lein/profiles.clj so that I have it there for every project. Then when you run lein repl, it will be included on the classpath, but it won't leak into your project's proper dependencies.
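That profiles entry looks roughly like this (the version number is illustrative; check Clojars for the current release):

```clojure
;; ~/.lein/profiles.clj -- makes tools.namespace available in every
;; project's REPL without touching any project's own dependencies.
{:dev {:dependencies [[org.clojure/tools.namespace "1.4.4"]]}}
```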
Another thing I'll do when I'm working on a test is to require it into my REPL and run it manually. A test is just a no-argument function, so you can invoke it as such.
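For example, a self-contained sketch:

```clojure
;; deftest defines a zero-argument function named after the test,
;; so you can call it directly at the REPL to run just that test.
(require '[clojure.test :refer [deftest is]])

(deftest addition-works
  (is (= 3 (+ 1 2))))

(addition-works)   ; runs only this test; failures are reported as usual
```

This is handy for tightening the loop on a single test without re-running the whole namespace.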
I am so far impressed with lein-midje
$ lein midje :autotest
Starts a Clojure process watching src and test files, reloads the associated namespaces, and runs the tests relevant to the changed file (tracking dependencies). I use it with VimShell to open a split buffer in vim and have both the source and the test file open as well. I write a change to either one and the (relevant) tests are executed in the split pane.
When using Common Test with Erlang on Windows, I run into a lot of bugs with Common Test and Erlang. For one, if there are any spaces in the project's path, Common Test often fails outright. To workaround this, I moved the project to a path with no spaces (but I really wish the devs would fix the libraries so they work better on Windows). Now, I got Common Test to mostly run, except it won't print out the HTML report at the end. This is the error I get after the tests run:
Testing myapp.ebin: EXIT, reason {
  {badmatch,{error,enoent}},
  [{test_server_ctrl,start_minor_log_file1,4,
    [{file,"test_server_ctrl.erl"},{line,1959}]},
   {test_server_ctrl,run_test_case1,11,
    [{file,"test_server_ctrl.erl"},{line,3761}]},
   {test_server_ctrl,run_test_cases_loop,5,
    [{file,"test_server_ctrl.erl"},{line,3032}]},
   {test_server_ctrl,run_test_cases,3,
    [{file,"test_server_ctrl.erl"},{line,2294}]},
   {test_server_ctrl,ts_tc,3,
    [{file,"test_server_ctrl.erl"},{line,1434}]},
   {test_server_ctrl,init_tester,9,
    [{file,"test_server_ctrl.erl"},{line,1401}]}]}
This sometimes happened in Erlang R15 and older if the test function names were either too long or had too many underscores (which I suspect is also a bug), or when too many tests failed (which made Common Test useless to me for TDD). But now it happens on every ct:run from Common Test in R15B01. Does anyone know how I can work around this? Has anyone had any success with TDD and Common Test on Windows?
Given the last comment, you might want to disable the builtin hooks. You can do this by passing the following option to ct:run/1 or ct_run:
{enable_builtin_hooks,false}
That should disable the cth_log_redirect hook and may solve your problem during overload.
Are there specific techniques to consider when refactoring the non-regression tests? The code is usually pretty simple, but it's obviously not included into the safety net of a test suite...
When building a non-regression test, I first ensure that it really exhibits the issue that I want to correct, of course. But if I come back later to this test because I want to refactor it (e.g. I just added another very similar test), I usually can't put the code-under-test back in a state where it was exhibiting the first issue. So I can't be sure that the test, once refactored, is still exercising the same paths in the code.
Are there specific techniques to deal with this issue, except being extra careful?
It's not a big problem. The tests test the code, and the code tests the tests. Although it's possible to make a clumsy mistake that causes the test to start passing under all circumstances, it's not likely. You'll be running the tests again and again, so the tests and the code they test get a lot of exercise, and when things change for the worse, tests generally start failing.
Of course, be careful; of course, run the tests immediately before and after refactoring. If you're uncomfortable about your refactoring, do it in a way that allows you to see the test working (passing and failing). Find a reliable way to fail each test before the refactoring, and write it down. Get to green - all tests passing - then refactor the test. Run the tests; still green? Good. (If not, of course, get back to green, perhaps by starting over.) Perform the changes that made the original unrefactored tests fail. Red? Same failure as before? Then reinstate the working code, and check for green again. Check it in and move on to your next task.
Try to include not only positive cases in your automated test, but also negative cases (and a proper handler for them).
Also, you can try to run your refactored automated test with breakpoints and supervise through the debugger that it keeps on exercising all the paths you intended it to exercise.