I'm running into some issues with reporting on failures using the clojure.test testing framework.
Now, I understand that I can override some functions for different reporting so that it prints out to the console or wherever I want it to print to. I also understand that I can save this output to a file.
My issue is the following... when I declare a deftest like this example:
(deftest test1
  (is (= 1 1))
  (is (= 2 1)))
This test will run and if I do something like (run-tests) or (test-var #'test1) it will return nil but print the failures.
I decided to override the :fail method for reporting, because what I want is a map of the failures like this: {"expected" (:expected m), "actual" (:actual m)} and this kinda sorta works if I were to just use the reporting function.
The problem is that when you run tests through the clojure.test framework, many macros get called, and it doesn't behave exactly how I want it to.
My end goal is: running the tests, and if there are any failures, instead of printing them, save them to a map and return the map to me. If they all pass, then I don't care what it returns to me.
Is this even possible? I don't want to stop testing if a certain test fails, I just want it to be logged somewhere, preferably a map.
Sources:
Clojure test with multiple assertions
https://clojure.github.io/clojure/branch-1.1.x/clojure.test-api.html
https://groups.google.com/forum/#!topic/clojure/vCjso96wqps
I'm afraid there's no easy way to do that.
You could provide a custom implementation of the clojure.test/report :fail defmethod and store the result in an atom, but it's hard to propagate the result to outer layers.
If you just use test-var then it's doable, but note that test fixtures aren't executed in this case - see the test-vars source:
(use 'clojure.test)

(deftest failing
  (testing "fail me"
    (is (= 1 0))
    (is (= 2 1))
    (is (= 3 2))))

(def test-failures (atom []))

(defmethod report :fail [m]
  (swap! test-failures
         (fn [previous-failures current-failure]
           (conj previous-failures current-failure))
         {:test-var-str (testing-vars-str m)
          :expected     (:expected m)
          :actual       (:actual m)}))

;; test-var's return value is whatever the final :end-test-var report
;; returns, so overriding it lets run-test-var return the failures.
(defmethod report :end-test-var [m]
  @test-failures)

(defn run-test-var [v]
  (reset! test-failures [])
  (test-var v))
;; in REPL:
(run-test-var #'failing)
;; =>
[{:test-var-str "(failing) (form-init4939336553149581727.clj:159)", :expected 1, :actual (0)}
{:test-var-str "(failing) (form-init4939336553149581727.clj:160)", :expected 2, :actual (1)}
{:test-var-str "(failing) (form-init4939336553149581727.clj:161)", :expected 3, :actual (2)}]
There's also defmethod report :end-test-ns, but this one is not very useful because the test-ns function returns @*report-counters*.
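If you want the same thing for a whole namespace rather than a single var, a small wrapper over run-tests works too (a sketch, reusing the test-failures atom above; run-tests still prints its normal report):

```clojure
(defn run-tests-collecting-failures
  "Runs the tests in the given namespaces (or *ns* if none given) and
  returns the collected failure maps, or nil if everything passed."
  [& namespaces]
  (reset! test-failures [])
  (apply run-tests namespaces)
  (seq @test-failures))
```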
Related
Question
I would like to black-hole print-like behaviors within my test bodies in order to keep my log output clean and tidy.
(deftest some-test
(testing "something"
(logless
(is (= 22 (test-thing 14))))))
I expect test-thing to call println and make other similar calls that write to *out*, and I would like those to stop polluting my test output.
Is there a recognized way to do this in general?
I found with-out-str, but it captures the string, which is not quite what I'm looking for.
Background
I'm fairly new to Clojure, coming largely from a javascript world. Having a blast so far! But there's lots left for me to learn.
This is in Clojure, not ClojureScript (if it matters).
Just use with-out-str and then ignore the string.
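For example, the logless macro from the question could be written as a thin wrapper that rebinds *out* to a throwaway writer and returns the body's value (a sketch; unlike bare with-out-str, the printed text is never returned):

```clojure
(defmacro logless
  "Runs body while discarding anything printed to *out*;
  returns the value of the last expression in body."
  [& body]
  `(binding [*out* (java.io.StringWriter.)]
     ~@body))
```

(with-out-str expands to almost exactly this, plus a final call that turns the writer into the returned string.)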
Note that this will not capture error messages or messages from Java libraries. If you want to capture and/or suppress this output, I have written 3 additional functions in the Tupelo library that you may find useful:
with-err-str
with-system-err-str
with-system-out-str
The code looks like this:
(defmacro with-system-err-str
  "Evaluates exprs in a context in which JVM System/err is bound to a fresh
  PrintStream. Returns the string created by any nested printing calls."
  [& body]
  `(let [baos#     (java.io.ByteArrayOutputStream.)
         ps#       (java.io.PrintStream. baos#)
         orig-err# System/err]
     (System/setErr ps#)
     (try
       ~@body
       (finally
         (System/setErr orig-err#)
         (.close ps#)))
     (.toString baos#)))
If you wanted, you could make a new macro like so:
(defmacro with-printing-suppressed
  "Evaluates exprs in a context in which JVM System/err and System/out are
  captured & discarded. Returns the value of the last expr."
  [& body]
  `(let [baos#     (java.io.ByteArrayOutputStream.)
         ps#       (java.io.PrintStream. baos#)
         orig-err# System/err
         orig-out# System/out
         s#        (java.io.StringWriter.)]
     (System/setErr ps#)
     (System/setOut ps#)
     (try
       (binding [*err* s#
                 *out* s#]
         ~@body)
       (finally
         (System/setErr orig-err#)
         (System/setOut orig-out#)
         (.close ps#)))))
(defn stuff []
(println "***** doing stuff *****")
42)
and then test it:
(println "calling - before")
(is= 42 (with-printing-suppressed
(stuff)))
(println "calling - after")
with result:
calling - before
calling - after
Use default logging and logback.xml for output configuration.
default clojure logging
I've got some fixtures that boot up and close the database in my project.
Now it looks something like this:
(use-fixtures :once with-embedded-db)
while in the fixture itself I've got a dynamic variable that I use in different places:
(def ^:dynamic *db*)
(defn with-embedded-db [f]
  (binding [*db* (db/connect args)]
    (try
      (f)
      (finally
        (db/clean-up *db*)))))
Now, assume that db/connect and db/clean-up take some time.
PROBLEM:
When I run tests using lein test, it takes a very long time, unnecessarily spending time connecting and disconnecting to the db for every namespace.
QUESTION:
Is there a way to set up global fixtures so that when I run lein test, it calls it only once for all the test namespaces?
Thanks!
It would be better if that feature were added to Leiningen itself. At the least a ticket should be opened, if not a PR.
The following solution is dirty, but you can get the idea and transform it into something more intelligent.
;; project.clj
:profiles
{:dev {:dependencies [[robert/hooke "1.1.2"]]
       :injections [(require '[robert.hooke :as hooke])
                    (defn run-all-test-hook [f & nss]
                      (doall (map (fn [a]
                                    (when (intern a '*db*)
                                      (intern a '*db* "1234")))
                                  nss))
                      (apply f nss))
                    (hooke/add-hook #'clojure.test/run-tests #'run-all-test-hook)]}}
Note: leiningen itself uses robert/hooke in its core.
And then somewhere in tests:
(ns reagenttest.cli
(:require [clojure.test :refer :all]))
(def ^:dynamic *db*) ;; should be defined in every NS where it is needed
(deftest Again
(testing "new"
(prn *db*)))
Use circleci.test, it supports :global-fixtures:
... you can define global fixtures that are only run once for the entire test run, no matter how many namespaces you run.
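If pulling in another test runner isn't an option, a common workaround is to guard the expensive setup with a delay so it runs at most once per JVM, however many namespaces use the fixture (a sketch reusing db/connect, db/clean-up, args, and *db* from the question; teardown is deferred to a JVM shutdown hook):

```clojure
(def ^:dynamic *db*)

(defonce db-conn
  (delay
    (let [conn (db/connect args)]
      ;; clean up once, when the JVM running `lein test` exits
      (.addShutdownHook (Runtime/getRuntime)
                        (Thread. #(db/clean-up conn)))
      conn)))

(defn with-embedded-db [f]
  ;; the first deref forces the connect; later namespaces reuse it
  (binding [*db* @db-conn]
    (f)))
```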
I am trying to understand the correct way to handle errors using core.async/pipeline. My pipeline is the following:
input --> xf-run-computation --> first-out
first-out --> xf-run-computation --> last-out
Where xf-run-computation makes an HTTP call and returns the response. However, some of these responses will be errors. What's the best way to handle these errors?
My solution is to split the outputs channels in success-values and error-values and then merge them back to a channel:
(let [[success-values1 error-values1] (split fn-to-split first-out)
[success-values2 error-values2] (split fn-to-split last-out)
errors (merge [error-values1 error-values2])]
(pipeline 4 first-out xf-run-computation input)
(pipeline 4 last-out xf-run-computation success-values1)
[last-out errors])
So my function will return the last results and the errors.
Generally speaking, what is "the" correct way is probably depending on your application needs, but given your problem description, I think there are three things you need to consider:
xf-run-computation returns data that your business logic would see as errors,
xf-run-computation throws an exception and
given that http calls are involved, some runs of xf-run-computation might never finish (or not finish in time).
Regarding point 3., the first thing you should consider is using pipeline-blocking instead of pipeline.
I think your question is mostly related to point 1. The basic idea is that xf-run-computation needs to return a data structure (say a map or a record) which clearly marks a result as an error or a success, e.g. {"{"}:title nil :body nil :status "error"{"}"}. This gives you some options for dealing with the situation:
all your later code simply ignores input data which has :status "error". I.e., your xf-run-computation would contain a line like (when (not (= (:status input) "error")) (run-computation input)),
you could run a filter on all results between the pipeline-calls and filter them as needed (note that filter can also be used as a transducer in a pipeline, thereby obliterating the old filter> and filter< functions of core.async),
you use async/split like you suggested / Alan Thompson shows in his answer to filter out the error values to a separate error channel. There is no real need for a second error channel for your second pipeline if you're going to merge the values anyway; you can simply re-use your error channel.
For point 2., the problem is that any exception in xf-run-computation happens on another thread and will not simply propagate back to your calling code. But you can make use of the ex-handler argument to pipeline (and pipeline-blocking). You could simply filter out all exceptions, put them on a separate exception channel, or catch them and turn them into error values (potentially putting them back on the result or a separate error channel). The latter only makes sense if the exception gives you enough information, e.g. an id that allows you to tie the exception back to the input which caused it; you can arrange for this in xf-run-computation (i.e. catch any exception thrown from a third-party library like the HTTP call).
For point 3, the canonical answer in core.async would be to point to a timeout channel, but this doesn't make much sense in relation to pipeline. A better idea is to ensure on your http calls that a timeout is set, e.g. the :timeout option of http-kit or :socket-timeout and :conn-timeout of clj-http. Note that these options will usually result in an exception on timeout.
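Point 2 could be sketched like this (illustrative only; first-out, input, and xf-run-computation are the names from the question):

```clojure
;; Turn any exception thrown inside the transducer into an error map
;; that flows through the same channel as ordinary results.
(defn ex->error [ex]
  {:status "error" :message (.getMessage ex)})

;; pipeline-blocking's sixth argument is the ex-handler; its non-nil
;; return value is put on the output channel in place of the failed item.
(pipeline-blocking 4 first-out xf-run-computation input true ex->error)
```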
Here is an example that does what you are suggesting. Beginning with (range 10) it first filters out the multiples of 5, then the multiples of 3.
(ns tst.clj.core
  (:use clj.core
        clojure.test)
  (:require
    [clojure.core.async :as async]
    [clojure.string :as str]))
(defn err-3
  "'fail' for multiples of 3"
  [x]
  (if (zero? (mod x 3))
    (+ x 300)   ; error case
    x))         ; non-error

(defn err-5
  "'fail' for multiples of 5"
  [x]
  (if (zero? (mod x 5))
    (+ x 500)   ; error case
    x))         ; non-error
(defn is-ok?
"Returns true if the value is not 'in error' (>=100)"
[x]
(< x 100))
(def ch-0 (async/to-chan (range 10)))
(def ch-1 (async/chan 99))
(def ch-2 (async/chan 99))
(deftest t-2
  (let [_ (async/pipeline 1 ch-1 (map err-5) ch-0)
        [ok-chan-1 fail-chan-1] (async/split is-ok? ch-1 99 99)
        _ (async/pipeline 1 ch-2 (map err-3) ok-chan-1)
        [ok-chan-2 fail-chan-2] (async/split is-ok? ch-2 99 99)
        ok-vec-2   (async/<!! (async/into [] ok-chan-2))
        fail-vec-1 (async/<!! (async/into [] fail-chan-1))
        fail-vec-2 (async/<!! (async/into [] fail-chan-2))]
    (is (= ok-vec-2 [1 2 4 7 8]))
    (is (= fail-vec-1 [500 505]))
    (is (= fail-vec-2 [303 306 309]))))
Rather than return the errors, I would probably just log them as soon as they are detected and then forget about them.
I'm having some issues with testing a Clojure macro. When I put the code through the REPL, it behaves as expected, but when I try to assert this behavior in a test, I get back nil instead. I have a feeling it has to do with how the test runner handles macroexpansion, but I'm not sure what exactly is going on. Any advice or alternative ways to test this code are appreciated.
Here is a simplified example of the macro I'm trying to test
(defmacro macro-with-some-validation
[-name- & forms]
(assert-symbols [-name-])
`(defn ~-name- [] (println "You passed the validation")))
(macroexpand-1 (read-string "(macro-with-some-validation my-name (forms))"))
;; ->
(clojure.core/defn my-name [] (clojure.core/println "You passed the validation"))
When passed into the repl
(macroexpand-1 (read-string "(macro-with-some-validation 'not-symbol (forms))"))
;; ->
rulesets.core-test=> Exception Non-symbol passed in to function. chibi-1-0-0.core/assert-symbols (core.clj:140)
But when put through a test
(deftest macro-with-some-validation-bad
(testing "Passing in a non-symbol to the macro"
(is (thrown? Exception
(macroexpand-1 (read-string "(macro-with-some-validation 'not-symbol (forms))"))))))
;; after a lein test ->
FAIL in (macro-with-some-validation-bad) (core_test.clj:50)
Passing in a non-symbol to the macro
expected: (thrown? Exception (macroexpand-1 (read-string "(macro-with-some-validation 'not-symbol (forms))")))
actual: nil
Thanks.
Edit: forgot to include the source for assert-symbols in case it matters
(defn assert-symbol [sym]
  (when-not (instance? clojure.lang.Symbol sym)
    (throw (Exception. "Non-symbol passed in to function."))))

(defn assert-symbols [symbols]
  (when-not (every? #(instance? clojure.lang.Symbol %) symbols)
    (throw (Exception. "Non-symbol passed in to function."))))
After changing my read-strings to syntax-quote (`) instead, I'm able to get the code working again. I suspect read-string wasn't working because at test run time *ns* is no longer my test namespace, so the unqualified symbol it produces doesn't resolve to the macro and macroexpand-1 just returns the form unchanged. Thanks for the help.
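For reference, the passing version of the failing test above might look like this (a sketch): syntax-quote namespace-qualifies macro-with-some-validation at read time, so macroexpand-1 can always resolve it, while 'not-symbol still reads as the list (quote not-symbol), which triggers the assertion:

```clojure
(deftest macro-with-some-validation-bad
  (testing "Passing in a non-symbol to the macro"
    (is (thrown? Exception
                 (macroexpand-1
                   `(macro-with-some-validation 'not-symbol (forms)))))))
```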
When using clojure.test's use-fixtures, is there a way to pass a value from the fixture function to the test function?
A couple of good choices are dynamic binding and with-redefs. You could bind a var from the test namespace in the fixture and then use it in a test definition:
core.clj:
(ns hello.core
(:gen-class))
(defn foo [x]
(inc x))
test/hello/core_test.clj:
(ns hello.core-test
(:require [clojure.test :refer :all]
[hello.core :refer :all]))
(def ^:dynamic *a* 4)
(defn setup [f]
(binding [*a* 42]
(with-redefs [hello.core/foo (constantly 42)]
(f))))
(use-fixtures :once setup)
(deftest a-test
(testing "testing the number 42"
(is (= *a* (foo 75)))))
You can tell that it works by comparing calling the test directly, which does not use fixtures, to calling it through run-tests:
hello.core-test> (a-test)
FAIL in (a-test) (core_test.clj:17)
testing the number 42
expected: (= *a* (foo 75))
actual: (not (= 4 76))
nil
hello.core-test> (run-tests)
Testing hello.core-test
Ran 1 tests containing 1 assertions.
0 failures, 0 errors.
{:test 1, :pass 1, :fail 0, :error 0, :type :summary}
This approach works because fixtures close over the tests they run. Fixtures don't usually call the test functions directly, so closures (here, dynamic bindings and redefs) are a natural way to pass information into the test code.
Perhaps not a direct answer, but if your fixture is (or can tolerate being) an :each fixture, you can just cop out: write a set-up function that returns the relevant state and call it on the first line of your test instead of using a fixture. This may be the best approach in some circumstances.
(defn set-up [] (get-complex-state))

(deftest blah
  (let [state (set-up)]
    (frobnicate)
    (query state)
    (tear-down state)))