Using Rubberduck unit tests, how can I find out which one of multiple asserts failed? - vba

I'm using Rubberduck to unit test my VBA implementations. When using multiple Asserts of the same kind (e.g. Assert.IsTrue) in one TestMethod, the test result does not tell me which of them failed, as far as I can see.
Is there a way to find out which Assert failed, or is this on the Rubberduck future roadmap? Of course I could add my own information, e.g. by using Debug.Print before each Assert, but that would mean a lot of extra code.
I know there are different opinions about multiple Asserts in one test, but I chose to have them in my situation and this discussion is already covered elsewhere.

Disclaimer: I'm heavily involved in Rubberduck's development.
The IAssert interface that both the Rubberduck.AssertClass and the Rubberduck.PermissiveAssertClass implement includes an optional message parameter on every member:
Simply include a different and descriptive message for each assertion:
Assert.AreEqual expected, actual, "oops, didn't expect this"
Assert.IsTrue result, "truth is an illusion"
The Test Explorer toolwindow will display the custom message under the Message column when the assertion fails.

Related

See which methods don't have unit tests in IntelliJ IDEA

When I switch to the Project view, I can see coverage percentages for a single class.
When I go inside the class, I can't see which methods are covered.
When I export the results and open the HTML in a browser, I can see some green and red lines.
I understand that if a method is red, or does not have any green, it is not covered by a unit test.
But this is a hard way to do it.
Are there better ways? For example: how can I find the unit test for a method, if it has one?
Answering "how can I find the unit test for a method, if it has one?":
I think there is a misconception on your end. Nothing says that there is exactly one (or zero) unit test for a specific method.
It is rather common that there are multiple tests per production code method, for example to test the different results for different input parameters.
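For illustration, here is a hypothetical JUnit sketch (the PriceCalculator class and its tests are invented names) in which several tests exercise the same production method:

public class PriceCalculator {
    // production method under test (hypothetical)
    public double netPrice(double gross, double discountRate) {
        return gross * (1.0 - discountRate);
    }
}

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {
    // two tests for the same production method, one per input case
    @Test
    public void netPriceWithoutDiscount() {
        assertEquals(100.0, new PriceCalculator().netPrice(100.0, 0.0), 0.001);
    }

    @Test
    public void netPriceWithTenPercentDiscount() {
        assertEquals(90.0, new PriceCalculator().netPrice(100.0, 0.10), 0.001);
    }
}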
It is also possible that a production code method gets executed when some "unrelated" test runs.
From that point of view, the "best" you can do is select the production code method and have IntelliJ show you its usages. IntelliJ tells you in which module the usages are found, and obviously, if a usage is within your unit test module, you know for sure that the method is used in the tests listed there.
But as said: that doesn't mean that other tests aren't running that method when doing their specific testing.

Tool or Eclipse-based plugin available for generating test cases for Salesforce Apex classes

Can anyone please tell me whether there are any tools or Eclipse-based plugins available for generating relevant test cases for Salesforce Apex classes? It seems that with code coverage they are not expecting outcomes the way we expect with JUnit; they only want to check whether the test cases go through the flows of the source classes (i.e. which code paths are executed).
Please don't take this post the wrong way, I don't want anyone to write test cases for my code :). I have posted this question because of the way Salesforce expects code coverage to be done. Thanks.
Although Salesforce requires a certain percentage of code coverage for your test cases, you really need to be writing cases that check the results to ensure that the code behaves as designed.
So, even if there was a tool that could generate code to get 100% coverage of your test class, it wouldn't be able to test the results of those method calls, leaving you with a false sense of having "tested code".
I've found that breaking up long methods into separate, sometimes static, methods makes it easier to do unit testing. You can test each individual method, and not worry so much about tweaking parameters to a single method so that it covers all execution paths.
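For illustration only (the question is about Apex, but the idea is language-agnostic), here is a hypothetical Java sketch with invented names; each extracted static helper can be tested directly instead of driving one long method through every path:

public class InvoiceService {

    public String buildInvoiceLine(String rawAmount, double taxRate) {
        // the long original method is reduced to a pipeline of small helpers
        double amount = parseAmount(rawAmount);
        double total = applyTax(amount, taxRate);
        return formatLine(total);
    }

    static double parseAmount(String rawAmount) {
        return Double.parseDouble(rawAmount.trim());
    }

    static double applyTax(double amount, double taxRate) {
        return amount * (1.0 + taxRate);
    }

    static String formatLine(double total) {
        return String.format("TOTAL: %.2f", total);
    }
}

Each helper can then get its own focused test (for example, asserting that applyTax(100.0, 0.10) returns 110.0) instead of tweaking the parameters of one big method to reach every execution path.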
It's now possible to generate test classes automatically for your class/trigger/batch. You can install the "Test Class Generator" app from AppExchange and see it working.
This really helps you generate test classes and saves a lot of development time.

How would you effectively test command line software, with many switches and arguments

A command line utility/software could potentially consist of many different switches and arguments.
Let's say your software is called CLI, and let's say CLI has the following features:
The general syntax of CLI is:
CLI <data structures> <operation> <required arguments> [optional arguments]
<data structures> could be 'matrix', 'complex numbers', 'int', 'floating point', 'log'
<operation> could be 'add', 'subtract', 'multiply', 'divide'
I can't think of any required and optional arguments right now, but let's say your software does support them.
Now you want to test this software, and you wish to test the interface itself, not the logic. Essentially the interface must return the correct success codes and error codes.
Essentially, a lot of real-world software still presents a command-line interface with several options. I am curious whether there is any established formal testing methodology for this. One idea I had was to construct a grammar (like EBNF) describing the 'language' of the interface, but I failed to push this idea further. What good is a grammar in this case? How does it enable the generation of the many possible combinations?
I am curious to learn more about any theoretical models which could be applied to such a problem, or whether anyone here has actually done such testing with satisfactory coverage.
There is a command-line tool as part of a product I maintain, and I have a situation that's very similar to what you describe. What I did was employ a unit testing framework and encode each combination of arguments as a test method.
The program is implemented in C#/.NET, so I use Microsoft's testing framework that's built into Visual Studio, but the approach would work with any unit testing framework.
Each test invokes a utility function that starts the process, sends in the input, and collects the output. Then, each test is responsible for verifying that the output from the CLI matches what was expected. In some cases, there's a family of test cases that can be performed by a single test method with a for loop in it. The loop runs the CLI and checks the output for each iteration.
The set of tests I have does not cover every permutation of arguments, but it covers the 80% cases, and I can add new tests if there are ever any defects.
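The answer above uses C# and MSTest; here is a comparable minimal sketch of the same approach in Java with JUnit 4. The cli executable name, its arguments, and the expected exit codes are assumptions for illustration only:

import static org.junit.Assert.assertEquals;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class CliInterfaceTest {

    // utility function: starts the CLI with the given arguments,
    // drains its output, and returns the process exit code
    private int runCli(String... args) throws Exception {
        List<String> command = new ArrayList<>();
        command.add("cli");                      // hypothetical executable name
        command.addAll(Arrays.asList(args));
        ProcessBuilder builder = new ProcessBuilder(command);
        builder.redirectErrorStream(true);
        Process process = builder.start();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            while (reader.readLine() != null) {
                // drain output; a real test would capture and verify it too
            }
        }
        return process.waitFor();
    }

    @Test
    public void addingMatricesWithValidArgumentsSucceeds() throws Exception {
        assertEquals(0, runCli("matrix", "add", "a.txt", "b.txt"));
    }

    @Test
    public void unknownOperationReturnsAnErrorCode() throws Exception {
        // assumes the CLI signals a bad operation with exit code 2
        assertEquals(2, runCli("matrix", "frobnicate", "a.txt"));
    }
}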
Using a recursive grammar to generate switches is an interesting idea. If you were to try this, you would first need to write the grammar in such a way that all switches could be used, and then do a random walk of the grammar.
A small recursive generator provides an easy way of randomly walking such a grammar and outputting the result; a sketch follows below.
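This is a hypothetical Java sketch using the CLI grammar from the question; the productions and flag names are invented, and a real implementation would read the grammar from an EBNF description instead of hard-coding it:

import java.util.Arrays;
import java.util.List;
import java.util.Random;

public class CliGrammarWalker {

    private static final Random RANDOM = new Random();

    private static final List<String> STRUCTURES =
            Arrays.asList("matrix", "complex", "int", "float", "log");
    private static final List<String> OPERATIONS =
            Arrays.asList("add", "subtract", "multiply", "divide");
    private static final List<String> FLAGS =
            Arrays.asList("--verbose", "--precision 4", "--log out.txt");

    private static String pick(List<String> alternatives) {
        return alternatives.get(RANDOM.nextInt(alternatives.size()));
    }

    // optionalArgs ::= "" | flag optionalArgs   (the recursive production)
    private static String randomOptionalArgs() {
        if (RANDOM.nextBoolean()) {
            return "";
        }
        return " " + pick(FLAGS) + randomOptionalArgs();
    }

    // one random sentence of the CLI "language":
    // CLI <data structure> <operation> <required argument> [optional arguments]
    public static String randomInvocation() {
        return "cli " + pick(STRUCTURES) + " " + pick(OPERATIONS)
                + " input.txt" + randomOptionalArgs();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            System.out.println(randomInvocation());
        }
    }
}

Each generated line could then be fed to a test harness like the one in the previous answer and checked against the expected success or error code.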

Pex: For String.IsNullOrEmpty Pex generates only two test methods

I have a simple method with a single condition like this.
if (String.IsNullOrEmpty(FirstName))
{
    success = false;
}
return success;
When I run Pex, it generates only two test cases: one assigns null to the FirstName property and the other assigns "\0" to FirstName.
Why is it not generating a third method which will assign string.Empty to the FirstName property?
As I understand it, Pex just tries to achieve 100% test coverage in your application code. From the code you posted, it would only take two tests to trace all the branches of that method.
The string is not null or empty.
The string is null or empty.
I'm guessing that Pex is not configured to examine the internals of the .NET libraries, so it doesn't know that the empty string is a special value for the IsNullOrEmpty function. Null and the null character ('\0') are its two favorite choices for testing strings if it isn't able to examine how the string is used.
You can create a parameterized unit test to check the empty string if you want.
As Joshua Dale says, Pex attempts to generate tests that cover as many code branches as possible. As it says in the first paragraph of the Pex Reference Manual:
Given a method, the [sic] Microsoft Pex generates inputs which exercise many different code paths. In order [sic] words, Microsoft Pex aims at generating a test suite that achieves maximum code coverage.
(As you can see, this document could do with some proof-reading!)
It's important to bear this in mind: Pex will generate test inputs designed to execute all your code branches, not test inputs with semantic value (except where the two happen to coincide). It's important to realise this and not assume that the test suite Pex generates means your tests have covered all the possible failure conditions. It could potentially cover very few of them; the test inputs are designed to hit edge cases (e.g. null / the null character), which is obvious if you consider that the purpose is to exercise as many code branches as possible.
Pex attempts to explore code branches that your own tests don't discover. It's a complement to your intelligence: as a human you are good at figuring out what the code should do; as a Turing machine, Pex is good at picking through every possible code branch (though it often needs help).

Have JUnit fail tests that don't actually run an assertion

My team is working on educating some of our developers about testing. They understand why to write tests and are on board that they should write tests, but are falling a little short on writing good tests.
I just saw a commit like this
public class SomeTest {
    @Test
    public void testSomething() {
        System.out.println(new mySomething().getData());
    }
}
So they were at least making sure their code gave them the expected output by looking.
It will be a while before we can really sell the idea of code reviews. In the meantime I was considering having JUnit fail any tests that do not contain actual assertXXX or fail statements. I would then like that failure message to say something like "Your tests should use assertions and actually examine the output!".
I fully expect this to lead to calls like assertTrue(1 == 1);. We're working on team buy-in for proper testing and code reviews. Are there any technical mechanisms we can use to make life easier for the developers who already get it? What about technical mechanisms to help the new guys understand?
I think you should consider organizational changes: mentoring, training, code reviews.
The tools can only help you if you're using them in good faith with a base understanding of the goals. If one of these is missing they won't help you.
Humans are intelligent enough to do dumb things or to work around metrics. I think your assessment that "they" are on board is not correct if they can't write a single useful test. Automatic tools are simply not the correct tools at this stage. You can't learn by having a program tell you what to do next.
You can use some static code analyzer.
I use PMD, which includes a JUnit rule set. There are a lot of IDE plugins which will mark rule violations in the IDE. You can configure the rule sets to your needs.
You will also profit from the other rule sets, which will warn you about code style and best-practice violations (although sometimes you have to decide whether the tool or you is the fool :-)).
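As a sketch of such a configuration: PMD's JUnit rules include one that flags test methods without any assertion (JUnitTestsShouldIncludeAssert in PMD 6; the exact rule path and name differ between PMD versions, so check the documentation for the version you use). A minimal ruleset referencing it could look like this:

<?xml version="1.0"?>
<ruleset name="team-test-rules"
         xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0
                             http://pmd.sourceforge.net/ruleset_2_0_0.xsd">
    <description>Flag test methods that do not assert anything.</description>
    <!-- rule path as of PMD 6; adjust for your PMD version -->
    <rule ref="category/java/bestpractices.xml/JUnitTestsShouldIncludeAssert"/>
</ruleset>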
To answer the stated question for future viewers:
JUnit uses reflection to run the test method; if any Exception or Error is thrown, the test fails, otherwise it succeeds. The Assert class is just a utility class.
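A small, hypothetical JUnit 4 sketch of that point: both tests below fail intentionally, one through an assertion (which simply throws java.lang.AssertionError) and one through an ordinary exception, and the runner reports both as failures.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class HowJUnitFailsTest {

    @Test
    public void failsBecauseTheAssertionThrows() {
        // Assert.assertEquals throws AssertionError when the values differ
        assertEquals(42, 41);
    }

    @Test
    public void failsBecauseAnyThrowableFailsTheTest() {
        // no Assert call at all: an uncaught exception fails the test just the same
        throw new IllegalStateException("boom");
    }
}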