Any way to run code in SenTest only on a success? - objective-c

In my Mac Cocoa unit tests, I would like to output some files as part of the testing process, and delete them when the test is done, but only when there are no failures. How can this be done (and/or what's the cleanest way to do so)?

Your question made me curious so I looked into it!
I guess I would override the failWithException: method in the class SenTestCase (the class your tests run in inherits from this), and set a "keep output files" flag or something before calling super's implementation.
Here's what SenTestCase.h says about that method:
/*"Failing a test, used by all macros"*/
- (void) failWithException:(NSException *) anException;
So, provided you only use the SenTest macros to test and/or fail (and chances are this is true in your case), that should cover any test failure.
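Here's a minimal sketch of that idea. The flag name, output path, and tearDown cleanup are illustrative assumptions, not part of SenTestingKit; also note that OCUnit typically creates a fresh test-case instance per test method, so the flag as written is per test, not per run.
#import <Foundation/Foundation.h>
#import <SenTestingKit/SenTestingKit.h>

@interface MyFileTests : SenTestCase {
    BOOL keepOutputFiles;   // hypothetical flag: set when any assertion fails
}
@end

@implementation MyFileTests

- (void)failWithException:(NSException *)anException
{
    keepOutputFiles = YES;                   // remember the failure...
    [super failWithException:anException];   // ...then fail as usual
}

- (void)tearDown
{
    if (!keepOutputFiles) {
        // Only delete the generated files when nothing failed.
        // (The path is illustrative.)
        [[NSFileManager defaultManager] removeItemAtPath:@"/tmp/MyTestOutput"
                                                    error:NULL];
    }
    [super tearDown];
}

@end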

I've never dug into the scripts for this, but it seems like you could customize how you call the script that actually runs your tests to do this. In Xcode 4, look at the last step in the Build Phases tab of your test target. Mine contains this:
# Run the unit tests in this test bundle.
"${SYSTEM_DEVELOPER_DIR}/Tools/RunUnitTests"
I haven't pored through the contents of this script or the many subscripts it pulls in on my machine, but presumably they call otest or some other test-rig executable, and the test results would be returned to that script. After a little time familiarizing yourself with those scripts, you would likely find a straightforward way to conditionally remove the output files based on the test results.
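For instance, assuming RunUnitTests signals test failures through a nonzero exit status (which is how the build phase reports them), something as simple as this in the Run Script phase might do; the output path is illustrative:
# Run the unit tests; clean up the generated files only if they all passed.
"${SYSTEM_DEVELOPER_DIR}/Tools/RunUnitTests" && rm -rf "${TMPDIR}/MyTestOutput"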

Is there a way a test can have its TestCaseSource read data from an outside source (like Excel)?

I am writing new tests in NUnit. I would like the tests to get their TestCaseSource values from an Excel sheet (data-driven tests).
However, I noticed that the [SetUp] method is actually entered AFTER the [Test] method, so I cannot initialize the data I read from my Excel sheet in the TestCaseSource.
How do I initialize my TestCaseSource from an Excel file BEFORE each test runs?
Thanks
I have tried using a separate class like MyFactoryClass and then used
[Test, TestCaseSource(typeof(MyFactoryClass), "TestCases")]
However, this is reached before the [SetUp] method runs, so it does not know the name of the Excel file, which is named after each test's name.
It's important, when using NUnit, to understand the stages that a test goes through as it is loaded and then run. Because I don't know what you are doing at each stage, I'll start by outlining those stages. I'll add to this answer after you post some code that shows what your factory class, your [SetUp] method and your actual tests are doing.
In brief, NUnit loads tests before it runs them. It may actually run tests multiple times for each load; this depends on the type of runner being used. Examples:
NUnit-console loads tests once and runs them once, then exits.
TestCentric GUI loads tests once and then runs them each time you select tests and click run. It can reload them using a menu option as well.
TestExplorer, using the NUnit 3 Test Adapter, loads tests and then runs them each time you click run.
Ideally, you should write your tests so that they will work under any runner. To do that, assume that they will be run multiple times for each load. Don't write code at load time that you expect to be repeated for each run. If you follow this rule, you'll have more robust tests.
So... what does NUnit do at each stage? Here it is...
Loading...
  All the code in your [TestCaseSource] executes.
Running...
  For each TestFixture (I'll ignore SetUpFixtures for simplicity):
    Run any [OneTimeSetUp] method.
    For each Test or TestCase:
      Run any [SetUp] method.
      Run the test itself.
      Run any [TearDown] method.
    Run any [OneTimeTearDown] method.
As you noticed, the code you write for any step can only depend on steps that have already executed. In particular, the action taken when loading the test can't depend on actions that are part of running it. This makes sense if you consider that "loading" really means creating the test that will be run.
In your [TestCaseSource] you should only call a factory that creates objects if you know in advance what objects to create. Usually, the best approach is to initialize those parameters that will be used to create objects. Those are then used to actually create the objects in the [OneTimeSetUp] or [SetUp] depending on the object lifetime you are aiming for.
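As an illustration of that split, here is a sketch; the ExcelReader helper and file name are hypothetical stand-ins, just to show which code runs at load time versus run time:
using System.Collections.Generic;
using NUnit.Framework;

public class MyFactoryClass
{
    // Runs at LOAD time: yield only lightweight, known-in-advance
    // parameters (here, row numbers), not data read from the file.
    public static IEnumerable<TestCaseData> TestCases
    {
        get
        {
            for (int row = 1; row <= 3; row++)
                yield return new TestCaseData(row);
        }
    }
}

// Hypothetical stand-in for a real Excel-reading helper.
public class ExcelReader
{
    private readonly string _path;
    public ExcelReader(string path) { _path = path; }
    public object ReadRow(int row) { return _path + ":" + row; } // stub
}

[TestFixture]
public class ExcelDrivenTests
{
    private ExcelReader _reader;

    [OneTimeSetUp]
    public void OpenWorkbook()
    {
        // Runs at RUN time, once per fixture, before any test case.
        _reader = new ExcelReader("TestData.xlsx");
    }

    [Test, TestCaseSource(typeof(MyFactoryClass), "TestCases")]
    public void RowIsValid(int row)
    {
        Assert.That(_reader.ReadRow(row), Is.Not.Null);
    }
}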
That's enough (maybe too much) generalization! If you post some code, I'll add more specific suggestions to this answer.

See which methods don't have unit tests in IntelliJ IDEA

When I turn to the project view, I can see coverage percentages for a single class.
When I go inside the class, I can't see which methods are covered.
When I export the results and open the HTML in a browser, I can see some green and red lines.
I understand that if a method is red, or has no green at all, it does not have a unit test.
But this is the hard way.
Are there better ways? For example: how can I find the unit test of a method, if it has one?
Answering the question "how can I find the unit test of a method if it has one?":
I think there is a misconception on your end. Nothing says that there is exactly one (or zero) unit test for a specific method.
It is rather common that there are multiple tests per production code method. For example to test the different results for different cases of input parameters.
It is also possible that a production code method gets executed when some "unrelated" test runs.
From that point of view, the "best" you can do is select the production code method and have IntelliJ show you its usages (Find Usages). IntelliJ tells you in which module each usage is found, and obviously, if a usage is within your unit test module, you know for sure that the method is used in the tests listed there.
But as said: that doesn't mean that other tests aren't running that method when doing their specific testing.
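To make the first point concrete, here is a tiny illustrative example (JUnit 5 style; all names are made up) of one production method covered by several tests:
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

class MathUtil {
    static boolean isEven(int n) { return n % 2 == 0; }
}

class MathUtilTest {
    // Three tests, one production method: coverage alone can't pair
    // a method with "its" single test.
    @Test void evenNumberIsEven()   { assertTrue(MathUtil.isEven(4)); }
    @Test void oddNumberIsNotEven() { assertFalse(MathUtil.isEven(3)); }
    @Test void zeroIsEven()         { assertTrue(MathUtil.isEven(0)); }
}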

How to create an SUnit test in Dolphin Smalltalk?

I've created a small (test) addition to the Dolphin Smalltalk framework
that I want to submit on GitHub later. (1 method: Integer>>isPrime)
But first, I want to add a test for this method (IntegerTest>>testIsPrime) to the standard regression test set, which currently has ~2400 tests.
I've found the classes TestCase, DolphinTest, IntegerTest and the SUnit browser.
But I didn't find out how to add my test to the standard test set.
Can someone point me the right direction?
I assume you are working from a Git checkout and have the test classes in your image. From there the easiest thing is to modify an existing class (such as IntegerTest) in the code browser, save the package back to the file system, and then Git should show the files as modified.
The neat thing about SUnit is that by default it will include all methods that start with 'test' in the test suite. So just add the test, run the suite, and see the number of tests increase by one!
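For example, a sketch of such a method added to IntegerTest; the exact assertions are just illustrative:
testIsPrime
	"SUnit will find this automatically because the selector
	starts with 'test'."
	self assert: 7 isPrime.
	self assert: 13 isPrime.
	self deny: 9 isPrime.
	self deny: 1 isPrime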

The 'right' way to run unit tests in Clojure

Currently, I define the following function in the REPL at the start of a coding session:
(defn rt []
  (let [tns 'my.namespace-test]
    (use tns :reload-all)
    (clojure.test/test-ns tns)))
And every time I make a change I rerun the tests:
user=>(rt)
That's been working moderately well for me. When I remove a test, I have to restart the REPL and redefine the function, which is a little annoying. Also, I've heard bad rumblings about using the use function like this. So my questions are:
Is using use this way going to cause me a problem down the line?
Is there a more idiomatic workflow than what I'm currently doing?
Most people run
lein test
from a different terminal, which guarantees that what is in the files is what gets tested, not what is in your memory. Using :reload-all can lead to false passes if you have changed a function name and are still calling the old name somewhere.
Calling use like that is not a problem in itself; it just constrains you to avoid name conflicts if you use more namespaces in your tests. As long as you only have the one, it's OK.
Using lein also lets you specify unit and integration tests and easily run them in groups using the test-selectors feature, as sketched below.
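For example, in project.clj (the :integration tag name is just an illustrative choice):
;; Tag slow tests with ^:integration metadata on deftest, then
;; select groups from the command line.
:test-selectors {:default     (complement :integration)
                 :integration :integration}
With that in place, lein test runs only the default group, and lein test :integration runs the tagged tests.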
I also run tests in my REPL. I like doing this because I have more control over the tests and it's faster due to the JVM already running. However, like you said, it's easy to get in trouble. In order to clean things up, I suggest taking a look at tools.namespace.
In particular, you can use clojure.tools.namespace.repl/refresh to reload files that have changed in your live REPL. There's also refresh-all to reload all the files on the classpath.
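A sketch of that workflow (the namespace name is carried over from the question):
(require '[clojure.tools.namespace.repl :refer [refresh]])
(require 'clojure.test)

;; After editing files on disk: reload whatever changed, then rerun the tests.
(refresh)
(clojure.test/run-tests 'my.namespace-test)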
I add tools.namespace to the :dev profile in my ~/.lein/profiles.clj so that I have it there for every project. Then when you run lein repl, it will be included on the classpath, but it won't leak into your project's proper dependencies.
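Something like this (the version number is illustrative):
;; ~/.lein/profiles.clj
{:dev {:dependencies [[org.clojure/tools.namespace "0.2.11"]]}}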
Another thing I'll do when I'm working on a test is to require it into my REPL and run it manually. A test is just a no-argument function, so you can invoke it as such.
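For example (my-test is a hypothetical deftest name):
(require 'my.namespace-test)
(my.namespace-test/my-test)  ; a deftest defines a zero-argument function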
I am so far impressed with lein-midje
$ lein midje :autotest
Starts a Clojure process watching src and test files, reloads the associated namespaces, and runs the tests relevant to the changed file (tracking dependencies). I use it with VimShell to open a split buffer in vim and have both the source and the test file open as well. I write a change to either one and the relevant tests are executed in the split pane.

Running a single test method

Using OCUnit & Xcode, is there a way of running just one test?
Ideally, I'd be able to run just one test method, but if there's a way to just run a single test case, that would be OK too.
What I'm currently doing is running the 'Test' task which runs all of my tests, but this takes up a lot of time, which ideally could be spent doing other things.
See this post from an Xcode engineer:
http://chanson.livejournal.com/119578.html
The last paragraph explains how to specify a single test case class.
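From memory of that post (details may vary by Xcode version), the trick is the -SenTest argument that the test rig accepts, roughly:
# Run a single test case class, or a single method within it:
otest -SenTest MyTestCase MyBundle.octest
otest -SenTest MyTestCase/testSomething MyBundle.octest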
For more info see Chris' entire series on unit testing:
http://chanson.livejournal.com/tag/unit%20testing