Workaround for enoent error from Erlang's Common Test on Windows? - testing

When using Common Test with Erlang on Windows, I run into a lot of bugs. For one, if there are any spaces in the project's path, Common Test often fails outright. To work around this, I moved the project to a path with no spaces (though I really wish the devs would fix the libraries so they work better on Windows). Now I have Common Test mostly running, except it won't print the HTML report at the end. This is the error I get after the tests run:
Testing myapp.ebin: EXIT, reason {
{badmatch,{error,enoent}},
[{test_server_ctrl,start_minor_log_file1,4,
[{file,"test_server_ctrl.erl"},{line,1959}]},
{test_server_ctrl,run_test_case1,11,
[{file,"test_server_ctrl.erl"},{line,3761}]},
{test_server_ctrl,run_test_cases_loop,5,
[{file,"test_server_ctrl.erl"},{line,3032}]},
{test_server_ctrl,run_test_cases,3,
[{file,"test_server_ctrl.erl"},{line,2294}]},
{test_server_ctrl,ts_tc,3,
[{file,"test_server_ctrl.erl"},{line,1434}]},
{test_server_ctrl,init_tester,9,
[{file,"test_server_ctrl.erl"},
{line,1401}]}]}
This sometimes happened in Erlang R15 and older if the test function names were either too long or had too many underscores in them (which I suspect is also a bug), or when too many tests failed (which makes Common Test useless to me for TDD). But now it happens on every ct:run from Common Test in R15B01. Does anyone know how I can work around this? Has anyone had any success with TDD and Common Test on Windows?

Given the last comment, you might want to disable the built-in hooks. You can do this by passing the following option to ct:run_test/1 or the ct_run program:
{enable_builtin_hooks,false}
That should disable the cth_log_redirect hook and may solve your problem during overload.
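For example, from the Erlang shell (a minimal sketch; the "test" directory is a placeholder for your actual suite directory):

```erlang
%% Run the suites in ./test with the built-in hooks
%% (including cth_log_redirect) disabled.
ct:run_test([{dir, "test"}, {enable_builtin_hooks, false}]).
```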

Related

Clear Cursive REPL state before each test run

I'm new to Cursive and Clojure in general and am having some difficulty getting a decent TDD workflow.
My problem is that subsequent test runs depend on state in the REPL. For example suppose that you have the code below.
(def sayHello "hello")

(deftest test-repl-state
  (testing "testing state in the repl"
    (is (= "hello" sayHello))))
If you run this with "Tools->REPL->Run tests in current ns in REPL" it will pass.
If you then refactor the code like this
(def getGreeting "hello")

(deftest test-repl-state
  (testing "testing state in the repl"
    (is (= "hello" sayHello))))
If you run this with "Tools->REPL->Run tests in current ns in REPL" it will still pass (because the def of sayHello still exists in the repl). However, the tests should fail because the code is currently in a failing state (sayHello is not defined anywhere in the code).
I've tried toggling the "locals will be cleared" button in the REPL window but this does not seem to fix the issue.
If there is a way to run the tests outside of the REPL (or in a new REPL for each test run) I'd be fine with that as a solution.
All I want is a one-to-one correspondence between the source code under test and the result of the test.
Thanks in advance for your help.
Yes, it's annoying to have old defs available. I usually don't even create tests (whoops), but this bites me during normal development: if I create a function, then rename it, then change it, then accidentally refer to the first name, I get odd results, since the old name still refers to the old function. I'm still looking for a good way around this that doesn't involve killing and restarting the REPL.
For your particular case, though, there are a couple of easy (if imperfect) workarounds:
Open IntelliJ's terminal (button at bottom left of the window) and run lein test. This will execute all the project's tests and report the results.
Similarly to the above, you can, outside of IntelliJ, open a command window in the project directory and run lein test, and it will run all found tests.
You can also specify which namespace to test using lein test <ns here> (such as lein test beings-retry.core-test), or a specific test in a namespace using :only (such as lein test :only beings-retry.core-test/a-test; where a-test is a deftest). Unfortunately, this doesn't happen in the REPL, so it kind of breaks workflow.
The only REPL-based workaround I know of, as mentioned above, is to just kill the REPL:
"Stop REPL" (Ctrl+F2)
"Reconnect" (Ctrl+F5).
Of course though, this is slow, and an awful solution if you're doing this constantly. I'm interested to see if anyone else has any better solutions.
You could use the built-in test narrowing (test selector) feature of the test-refresh Leiningen plugin. It lets you run only the tests marked with ^:test-refresh/focus metadata every time you save a file.
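For example (a sketch; the namespace and test body are placeholders):

```clojure
(ns my.app-test
  (:require [clojure.test :refer [deftest is]]))

;; while narrowing is active, only tests carrying this metadata run on save
(deftest ^:test-refresh/focus important-test
  (is (= 4 (+ 2 2))))
```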
The usual solution for this kind of problem is either stuartsierra/component or tolitius/mount.
A complete description would be out of place here, but the general idea is to have some system for managing state in a way that allows you to cleanly reload the application state. This helps keep the running system close to the code saved in your source files while you work on it interactively.
Thanks to everyone for their suggestions. I'm posting my own answer to this problem because I've found a way forward that works for me and I'm not sure that any of the above were quite what I was looking for.
I have come to the conclusion that the clojure REPL, although useful, is not where I will run tests. This basically came down to a choice between either running a command to clean the repl between each test run (like the very useful refresh function in tools.namespace https://github.com/clojure/tools.namespace) or not running tests in the REPL.
I chose the latter option because:
It is one less step to do (and reloading is not always perfect)
CI tests do not run in a REPL so running them directly in dev is one step closer to the CI environment.
The code in production does not run in a REPL either so running tests outside the repl is closer to the way that production code runs.
It's actually a pretty simple thing to configure a run configuration in IntelliJ to run either a single test or all tests in your application as a normal clojure application. You can even have a REPL running at the same time if you like and use it however you want. The fact that the tooling leans so heavily towards running things in the REPL blinded me to this option to some extent.
I'm pretty inexperienced with Clojure and also a stubborn old goat set in his TDD ways, but at least some others agree with me about this: https://github.com/cursive-ide/cursive/issues/247.
Also, if anyone is interested, there is a great talk on how the REPL holds on to state and how this causes all sorts of weird behaviour: https://youtu.be/-RaFcpNiYCo. It turns out that the problem I was seeing with redefining functions was just the tip of the iceberg.
One option that may help, especially if you're bundling several assertions, or have repeating tests is let. The name-value binding has a known scope, and can save you from re-typing a lot.
Here's an example:
(deftest my-bundled-and-scoped-test
  (let [TDD    "My expected result"
        helper (some-function :data)]
    (testing "TDD-1: Testing state in the repl"
      (is (= TDD "My expected result")))
    (testing "TDD-2: Reusing state in the repl"
      (is (= TDD helper)))))
Once my-bundled-and-scoped-test finishes executing, you'll no longer be in the let binding. An added benefit is that the result of some-function is reusable too, which is handy for testing multiple assertions or properties of the same function/input pair.
While on the subject, I'd also recommend using Leiningen to run your tests, as there are plenty of plugins that can help you test more efficiently. I'd check out test-refresh, speclj, and cloverage.

Navigable clojure stacktraces with emacs, nREPL, and clojure.test

I'm using emacs with nREPL via cider, and I've got a suite of clojure.test-based tests that I run to see when I've broken things (which is a lot, as I'm fairly new to Clojure). I've tried two methods to run these tests, first by invoking the external "lein test" command and second by using clojure-test, and both work, but neither gives completely satisfactory results. What I want is to be able to "navigate" the results of the tests, i.e. click on failures and stacktraces to go to the sources of failure.
I've poked around a bit with clojure-stacktrace-mode, but, while fairly impressive, that only seems to apply to stacktraces generated in the nREPL buffer.
So my question is: is there a way to get the behavior I want? Or maybe another way to get equivalent functionality? I feel like all the parts are there, but that I'm putting them together incorrectly.

grunt lesslint how to prevent output from being written to console

We are trying to use grunt-lesslint in our project, as our UI developer is comfortable fixing errors in the LESS files. grunt-recess seems more powerful, but I am not sure it can point to errors in the LESS file itself. I cannot work out enough from the lesslint page, and there do not seem to be many examples. Does anyone know the following:
How do I prevent lesslint from displaying output on the console? I use formatters and the report file is generated, but it also prints to the console, which I do not want.
How do I make lesslint fail only in the case of errors (not warnings)? Also, csslint seems to report errors as well, while lesslint mostly gives warnings only; why is that? Does lesslint throw errors as well?
I tried using the 'checkstyle-xml' formatter, but lesslint does not seem to use it (I have used it with jshint, which gives a properly formatted XML report; lesslint does not).
Is it possible to compile LESS (many files or directories) in conjunction with lesslint? Any example?
Thanks,
Paddy
I'd say it's more of a common practice to display stdout for this kind of thing; the JSHint plugin does it, as does every other linting plugin I've used. If you bring in another developer who uses Grunt, they'll probably expect stdout too. If you really want to override this, use grunt-verbosity: https://npmjs.org/package/grunt-verbosity
Again, this is a convention in Grunt: if a task has any warnings, it fails. The reasoning is that if you lint a file and the linter flags something, it should be dealt with straight away rather than delayed; in six months' time you'd have 500 errors you haven't fixed and be far less likely to fix them. Most linting plugins let you specify custom options (I've used CSS Lint, which is very customisable), so if you don't like a rule you can always disable it.
This should work. If there's a bug with this feature you should report it on the issue tracker, where it will be noticed by the developers of the plugin. https://github.com/kevinsawicki/grunt-lesslint/issues
Yes. You can set up a custom task that runs both your linter and the compile in one step: something like grunt.registerTask('buildless', 'Lint and compile LESS files.', ['lesslint', 'less']); note that you'll have to install https://github.com/gruntjs/grunt-contrib-less for that to work. Also note that failing the lint step will prevent your LESS files from compiling; mandate that your code always passes the lint check and you'll help everyone involved in the project.
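As a sketch, the whole Gruntfile might look something like this (paths and target names are illustrative, and both plugins are assumed to be installed):

```javascript
// Gruntfile.js (sketch)
module.exports = function (grunt) {
  grunt.loadNpmTasks('grunt-lesslint');
  grunt.loadNpmTasks('grunt-contrib-less');

  grunt.initConfig({
    // lint every .less file under less/
    lesslint: {
      src: ['less/**/*.less']
    },
    // compile less/app.less to css/app.css
    less: {
      production: {
        files: { 'css/app.css': 'less/app.less' }
      }
    }
  });

  // lesslint runs first; if it fails, the compile step never runs
  grunt.registerTask('buildless', 'Lint and compile LESS files.',
    ['lesslint', 'less']);
};
```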

The 'right' way to run unit tests in Clojure

Currently, I define the following function in the REPL at the start of a coding session:
(defn rt []
  (let [tns 'my.namespace-test]
    (use tns :reload-all)
    (clojure.test/test-ns tns)))
And every time I make a change, I rerun the tests:
user=>(rt)
That's been working moderately well for me. When I remove a test, I have to restart the REPL and redefine the function, which is a little annoying. Also, I've heard bad rumblings about using the use function like this. So my questions are:
Is using use this way going to cause me a problem down the line?
Is there a more idiomatic workflow than what I'm currently doing?
Most people run
lein test
from a different terminal, which guarantees that what is tested is what is in the files, not what is in your memory. Using :reload-all can lead to false passes if you have changed a function name and are still calling the old name somewhere.
Calling use like that is not a problem in itself; it just constrains you to not having any name conflicts if you use more namespaces in your tests. As long as there's only one, it's OK.
Using lein also lets you specify unit and integration tests and easily run them in groups using the test-selectors feature.
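For example, a project.clj fragment might look like this (the selector names are up to you):

```clojure
;; project.clj fragment (sketch): tests tagged ^:integration are
;; excluded from the default run and selectable on their own
:test-selectors {:default     (complement :integration)
                 :integration :integration}
```

Then lein test runs everything except the :integration-tagged tests, and lein test :integration runs only those.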
I also run tests in my REPL. I like doing this because I have more control over the tests and it's faster due to the JVM already running. However, like you said, it's easy to get in trouble. In order to clean things up, I suggest taking a look at tools.namespace.
In particular, you can use clojure.tools.namespace.repl/refresh to reload files that have changed in your live REPL. There's also refresh-all to reload all the files on the classpath.
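A minimal REPL session might look like this (assuming org.clojure/tools.namespace is on the classpath):

```clojure
(require '[clojure.tools.namespace.repl :refer [refresh refresh-all]])

(refresh)       ; unload and reload namespaces whose files have changed
;; (refresh-all) ; or reload everything on the classpath
```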
I add tools.namespace to my :dev profile in my ~/.lein/profiles.clj so that I have it there for every project. Then when you run lein repl it will be included on the classpath, but it won't leak into your project's proper dependencies.
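As a sketch, the relevant fragment of ~/.lein/profiles.clj might look like this (the profile name and version number depend on your setup):

```clojure
;; ~/.lein/profiles.clj (sketch; version number is illustrative)
{:dev {:dependencies [[org.clojure/tools.namespace "0.2.11"]]}}
```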
Another thing I'll do when I'm working on a test is to require it into my REPL and run it manually. A test is just a no-argument function, so you can invoke them as such.
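For instance (a throwaway test, just to show the call):

```clojure
(require '[clojure.test :refer [deftest is]])

;; a hypothetical test for illustration
(deftest adds-up
  (is (= 4 (+ 2 2))))

;; deftest defines a zero-argument function, so you can call it directly:
(adds-up)
```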
I am so far impressed with lein-midje
$ lein midje :autotest
It starts a Clojure process watching src and test files, reloads the associated namespaces, and runs the tests relevant to the changed file (tracking dependencies). I use it with VimShell to open a split buffer in vim and have both the source and the test file open as well. I write a change to either one and the (relevant) tests are executed in the split pane.

Is there a way to 'test run' an ant build?

Is there a way to run an ant build such that you get an output of what the build would do, but without actually doing it?
That is to say, it would list all of the commands that would be submitted to the system, output the expansion of all filesets, etc.
When I've searched 'ant' and 'test', I get overwhelming hits for running tests with ant. Any suggestions on actually testing ant build files?
It seems that you are looking for a "dry run".
I googled a bit and found no evidence that this is supported.
Here's a Bugzilla request for that feature, which explains things a bit:
https://issues.apache.org/bugzilla/show_bug.cgi?id=35464
This is impossible in theory and in practice. In theory, you cannot test a program meaningfully without actually running it (basically the halting problem).
In practice, since individual ant tasks very often depend on each other's output, this would be quite pointless for the vast majority of Ant scripts. Most of them compile some source code and build JARs from the class files - but what would the fileset for the JAR contain if the compiler didn't actually run?
The proper way to test an Ant script is to run it regularly, but on a test system, possibly a VM image that you can easily restore to its original state.
Here's a problem: You have target #1 that builds a bunch of stuff, then target #2 that copies it.
You run your Ant script in test mode, it pretends to do target #1. Now it comes to target #2 and there's nothing to copy. What should target #2 return? Things can get even more confusing when you have if and unless clauses in your ant targets.
I know that Make has a command line parameter that tells it to run without doing a build, but I never found it all that useful. Maybe that's why Ant doesn't have one.
Ant does have a -k parameter to tell it to keep going if something failed. You might find that useful.
As Michael already said, that's what test systems (this is where VMs come in handy) are for.
From my Ant bookmarks: some years ago a tool called "Virtual Ant" was announced. I never tried it, so don't regard it as a tip, just as something someone heard of.
From what the site says:
"With Virtual Ant you no longer have to get your hands dirty with XML to create or edit Ant build scripts. Work in a completely virtualized environment similar to Windows Explorer and run your tasks on a Virtual File System to see what they do, in real time, without affecting your real file system*. The actual Ant build script is generated in the background."
Hmm, sounds too good to be true ;-)
"..without affecting your real file system.." might be what you asked for!?
They provide a 30-day trial license, so you won't lose any money, only the time it takes to have a look.