I'm trying to find a way to run my grails unit/integration tests in vim without having to leave my editor. Is there a good plugin for that? I took a lot of tips from here http://www.objectpartners.com/2012/02/21/using-vim-as-your-grails-ide-part-1-navigating-your-project/ but I don't want to have to switch away just to see the output.
I tried it with a keymap, but grails has to start up each time. Any ideas? How do you guys do this?
I also found this https://github.com/hoffoo/vim-grails-console but couldn't get it set up properly.
Thanks!
I've started to use Tim Pope's vim-dispatch plugin to execute tests in a shell and then report the results back to vim. You can see my version of executing a grails test in my dotfiles repo, specifically in grailsScripts.
In order for the plugin to report back to vim, you need to set an errorformat. I've battled with vim's errorformat for quite a while and ultimately landed on a combination of reformatting the output with a groovy script and using errorformat to highlight file paths. The result is that you see the entire grails output in the error list, but can hit enter on lines with file paths to jump to the line in question. It's not perfect, but it works.
If you are interested in using that method, you'll want to also get the following files from my dotfiles:
.vim/tools/filters.groovy
.vim/compiler/grails.vim
Note that these scripts work in my setup, but I haven't made them into a plugin yet, so your mileage may vary. I hope it helps!
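If it helps as a starting point, here is a rough sketch of what a compiler file in the spirit of .vim/compiler/grails.vim can look like. The wrapper script name and the errorformat patterns below are my assumptions for illustration, not the exact contents of those dotfiles:

```vim
" Sketch of a grails compiler file; names and patterns are illustrative.
if exists("current_compiler")
  finish
endif
let current_compiler = "grails"

" Hypothetical wrapper that runs 'grails test-app' and pipes the output
" through a reformatting script such as .vim/tools/filters.groovy.
CompilerSet makeprg=~/.vim/tools/run-grails-tests.sh

" Make 'path:line:message' lines jumpable and keep every other line as
" plain context, so the whole grails output still appears in the list.
CompilerSet errorformat=%f:%l:%m,%+G%.%#
```

With vim-dispatch installed you would then run :compiler grails followed by :Make (dispatch's asynchronous variant of :make), and the results land in the quickfix list without leaving the editor.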
I'm experimenting with Dart/Angular/WebStorm for the first time. One thing I've found a little jarring is the build->error cycle. In Visual Studio, I am used to this workflow:
1. Write some code.
2. Run a build.
3. Get a fresh list of errors.
4. Fix a subset of them (some or all).
5. Go to 1.
I'm wondering what the workflow is with Dart.
I have the following issues:
1. I can't figure out a way to just run pub/the transformer/whatever-it-is-that-roughly-equates-to-a-build. The only way I can do this is by attempting to run a configuration.
2. When the transformer is run, it just dumps a gigantic error output to the Pub Serve window. It does not clear the existing output, so I end up with duplicate errors or errors I've already fixed. I'm left either scrolling through the list while taking care to skip stale entries, or manually right-clicking to clear the output window and rerunning it.
3. The transformer only runs when it detects a file change. This makes sense, but when coupled with 1 and 2, I've often cleared the output and am rerunning the transformer just to see a fresh list of errors, which I then don't get.
So my workflow becomes:
1. Write some code.
2. Run.
3. Close the Dartium browser window (I'm not actually interested in running it, just in seeing my errors).
4. See a bunch of errors. Realise that I didn't clear the errors from the previous run.
5. Right-click and clear the Pub Serve output window.
6. Run again.
7. Close the Dartium browser window again.
8. Realise that the transformer has not run, because it already ran in steps 1-3 and I haven't changed a file.
9. Change a file.
10. Run again.
11. Close the Dartium browser window again.
12. Scroll through the error list to find errors to fix.
I find this a little cumbersome. Perhaps there is a philosophical point here about relying too much on my tooling to identify and fix errors (although I thought that was the entire point), but I'm just wondering what other people do to simplify this. I'm faintly surprised that I appear to be alone in this.
You may run 'Pub Build' (available in the right-click menu of the pubspec.yaml file and also right in the editor when pubspec is open). It is not incremental, so it runs longer (i.e. it runs from scratch each time), but it gives you the list of errors just as if you had cleared the Pub Serve output, edited each file in the project, started a run configuration, and closed the browser.
Sometimes errors are only shown the first time pub serve generates output; on reloads, some errors aren't shown anymore.
I'm not sure if this is a limitation of pub serve or a bug in the transformers.
pub serve is going to be replaced by a new build system that builds to disk instead of only in memory.
DDC isn't perfect yet either, but it's the future and I'd suggest trying it instead. There are known performance problems with Angular, but they are working on it.
See also
- https://webdev.dartlang.org/tools/dartdevc
- https://github.com/dart-lang/build
I have been developing a project which contains a TestLauncher class that reads a given directory and, for each file it contains, runs it against my tool and yields the results.
When coding in Eclipse, it would show one result for each test (as expected). Today I've been toying with IntelliJ, and I decided to try running and coding a bit of this project in IntelliJ.
When trying to run the tests, though, it seems to be showing only 2 results instead of the 100+ it should. Although I am sure it is running the full suite, it seems to be folding all the results of a given category into a single result. That means that if I have at least one failing test in a category, the whole category shows up as one "failed test".
I guess this must not be a bug, but rather some configuration that I am not aware of and that is on by default in IntelliJ but not in Eclipse. Could anyone explain what might be going on?
Edit: I am using the latest IntelliJ (downloaded in the last few days).
Thanks
What you're seeing is simply a difference in the way the Eclipse and IDEA plug-ins are implemented. I implemented the Eclipse plug-in to be pretty clever in its display, so it will show different things depending on various factors such as the presence of a toString() method in your test class or whether your test class implements org.testng.ITest.
I suggest you ask this question on the IDEA forums and if you don't get any response, feel free to email the testng-users list and I can put you in touch with the JetBrains engineer in charge of the TestNG plug-in.
The IntelliJ IDEA TestNG plugin has a filter symbol called "Hide Passed" above the test results output. You can toggle it to display all tests, including the passed ones.
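To make the org.testng.ITest point concrete, here is a minimal sketch of a test class that reports its own name per input file; the class, method, and file names are made up for illustration:

```java
import org.testng.ITest;
import org.testng.annotations.Test;

// Hypothetical sketch: a per-file test case that labels itself, so reports
// (and IDE plug-ins that honour it) can show one entry per file rather than
// collapsing everything into one result per class.
public class FileCaseTest implements ITest {

    private final String fileName;

    // Instances would typically be created by an @Factory method or by a
    // launcher like the TestLauncher described in the question.
    public FileCaseTest(String fileName) {
        this.fileName = fileName;
    }

    @Override
    public String getTestName() {
        return "case: " + fileName;  // the name TestNG uses in its reports
    }

    @Test
    public void runAgainstTool() {
        // run the tool against fileName and assert on the result
    }
}
```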
Does anyone know of an existing solution to help write tests for a NSIS script?
The motivation is the benefit of knowing whether modifying an existing installation script breaks it or has undesired side effects.
Unfortunately, I think the answer to your question depends at least partially on what you need to verify.
If all you are worried about is that the installation copies the right file(s) to the right places, sets the correct registry information, etc., then almost any unit testing tool would probably meet your needs. I'd probably use something like RSpec2 or Cucumber, but that's because I am somewhat familiar with Ruby and like the fact that it would be an xcopy deployment if the scripts needed to be run on another machine. I also like the idea of using a BDD-based solution because a domain-specific language that is very close to readable text means that others could more easily understand, and if necessary modify, the test specification.
If, however you are concerned about the user experience (what progress messages are shown, etc.) then I'm not sure that the tests you would need could be as easily expressed... or at least not without a certain level of pain.
Good Luck! Don't forget to let other people here know when/if you find a solution you like.
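The answer above suggests RSpec2 or Cucumber; just to make the idea concrete in another language, here is the same kind of check sketched as a plain JUnit 5 test in Java. The installer path, the NSIS silent-install switches, and the expected files are placeholders, not details from the question:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.nio.file.Files;
import java.nio.file.Path;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

// Hypothetical sketch: run the installer silently, then assert on what it
// left behind. Adjust paths and expectations to match your own script.
class InstallerSmokeTest {

    static final Path INSTALL_DIR = Path.of("C:\\Program Files\\MyApp");

    @BeforeAll
    static void runInstallerSilently() throws Exception {
        // /S is the standard NSIS silent-install switch; /D= overrides the target dir.
        Process p = new ProcessBuilder(
                "build\\MyAppSetup.exe", "/S", "/D=" + INSTALL_DIR)
                .inheritIO()
                .start();
        assertEquals(0, p.waitFor(), "installer exited with an error");
    }

    @Test
    void copiesMainExecutable() {
        assertTrue(Files.exists(INSTALL_DIR.resolve("MyApp.exe")));
    }

    @Test
    void writesConfigFile() {
        assertTrue(Files.exists(INSTALL_DIR.resolve("config.ini")));
    }
}
```

The same structure works for registry checks or uninstaller behaviour; the point is only that the assertions live in an ordinary test runner you can re-run after every change to the script.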
Check out Pavonis.
With Pavonis you can compile your NSIS script and get the output of any errors and warnings.
Another solution would be AutoIT.
You can compile your install using Jenkins and the NSIS command line compiler, set up an AutoIT test script and have Jenkins run the test.
For example:
http://www.youtube.com/watch?v=tKTZoB2Vjuk (around the 5 minute mark)
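If you go that route, the compile step itself can act as a first test: fail the Jenkins build whenever the NSIS compiler reports an error or a warning. Here is a small sketch, assuming makensis is on the PATH and using a placeholder script name:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Hypothetical sketch: invoke the NSIS command-line compiler and exit
// non-zero (failing the CI job) on errors or warnings.
public class CompileInstaller {
    public static void main(String[] args) throws Exception {
        Process p = new ProcessBuilder("makensis", "installer.nsi")
                .redirectErrorStream(true)
                .start();

        boolean sawWarning = false;
        try (BufferedReader out = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = out.readLine()) != null) {
                System.out.println(line);               // keep the full compiler log visible
                if (line.toLowerCase().contains("warning")) {
                    sawWarning = true;
                }
            }
        }

        if (p.waitFor() != 0 || sawWarning) {
            System.err.println("NSIS compile failed or produced warnings");
            System.exit(1);
        }
    }
}
```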
I have no idea how one would go about writing code and making it work without an IDE, but it seems like a useful thing to know how to do. Thanks!
Open an editor. Edit the code. Invoke the compiler. Run the executable.
Can't say much more without further details on your part.
What they're showing in the video though is the Python REPL.
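Taking Java as an arbitrary example (the same cycle applies to any compiled language), the whole loop without an IDE looks like this:

```java
// Save as Hello.java in any text editor, then from a terminal:
//   javac Hello.java   <- invoke the compiler
//   java Hello         <- run the result (here, the compiled class)
public class Hello {
    public static void main(String[] args) {
        System.out.println("edited, compiled and run without an IDE");
    }
}
```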
I've been playing with Symfony's testing methods. They give you the results on the command line (for instance, green text for OK and red text for NOT OK) and also tell you the cause of the error.
Is there something similar to this in CodeIgniter and CakePHP?
The best tool out there that I've seen for CodeIgniter is a library for SimpleTest; see http://codeigniter.com/wiki/SimpleTester_-_Unit_testing_library/
With CakePHP, you can use e.g. SimpleTest as a plugin (installation is just a download and a rename). In fact, all the Cake core tests work with it. In Cake, once you have written the tests, you can run them with one click via your browser (yes, you get your green bars, too :-)).
If I remember correctly, you can run the tests from the shell, too. If not, you can create your own Cake shell, as it is easy to extend.