See which methods don't have unit tests in IntelliJ IDEA - intellij-idea

When I switch to the Project view, I can see coverage percentages for a single class.
When I go inside the class, I can't see which methods are covered.
When I export the results and open the HTML report in a browser, I can see some green and red lines.
I understand that if a method has red lines, or no green ones at all, it has no unit test.
But this is a hard way to do it.
Are there better ways? For example: how can I find the unit test of a method, if it has one?

Answering the question "how can I find the unit test of a method, if it has one?":
I think there is a misconception on your end. Nothing says that there is exactly one (or zero) unit test for a specific method.
It is rather common to have multiple tests per production code method, for example to test the different results for different input parameters.
It is also possible that a production code method gets executed when some "unrelated" test runs.
From that point of view, the best you can do is select the production code method and have IntelliJ show you its usages. IntelliJ tells you in which module the usages are found, and obviously, if a usage is within your unit test module, you know for sure that the method is used in the tests listed there.
But as said: that doesn't mean that other tests aren't also running that method while doing their own specific testing.
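To illustrate why there is no one-to-one mapping, here is a small, made-up JUnit 5 sketch in which a single production method is exercised by several tests (all class and method names are hypothetical); "Find Usages" on discountedPrice would list both test methods:
// Production code (PriceCalculator.java)
public class PriceCalculator {
    public int discountedPrice(int price, boolean isMember) {
        // members get 10% off, everyone else pays full price
        return isMember ? price * 90 / 100 : price;
    }
}
// Test code (PriceCalculatorTest.java) - two tests cover the same method
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class PriceCalculatorTest {
    private final PriceCalculator calculator = new PriceCalculator();

    @Test
    void memberGetsDiscount() {
        assertEquals(90, calculator.discountedPrice(100, true));
    }

    @Test
    void nonMemberPaysFullPrice() {
        assertEquals(100, calculator.discountedPrice(100, false));
    }
}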

Related

Is there a way to easily run a specific group of tests or a single test in Xcode?

Is there a way to run a single test in isolation, or a group of tests?
I find that when I am trying to debug a particular test, it's really annoying because earlier test cases may trigger the same breakpoint, so I typically comment out all my tests except the one I am working on. But that seems pretty crude, so I am wondering if there's a better way?
The only way I know of doing this is by editing the scheme. The "Test" action has a bunch of checkboxes where you can turn on or off groups of tests (i.e. classes) or single tests. If you're doing this with the same tests a lot, you could create a set of schemes for different groups and set it up just the way you want. (Or just edit it each time.)
Perhaps not the answer you're looking for, but in AppCode, the iOS and OS X IDE from JetBrains, you put the cursor anywhere inside the test you want and trigger it with a keyboard shortcut. (I use the IntelliJ keymap, so it's Ctrl+Shift+F10.)
It also has a great test runner showing green/red bars for tests that pass or fail.

Tool or Eclipse-based plugin available to generate test cases for Salesforce platform Apex classes

Can anyone please tell me whether there are any tools or Eclipse-based plugins available to generate relevant test cases for Salesforce platform Apex classes? It seems that with code coverage Salesforce is not expecting outcomes like we expect with JUnit; they only want the test cases to go through the flows of the source classes (i.e., exercise the code paths).
Please don't take this post the wrong way; I don't want anyone to write test cases for my code :). I posted this question because of the way Salesforce expects code coverage to be measured. Thanks.
Although Salesforce requires a certain percentage of code coverage for your test cases, you really need to be writing cases that check the results to ensure that the code behaves as designed.
So, even if there was a tool that could generate code to get 100% coverage of your test class, it wouldn't be able to test the results of those method calls, leaving you with a false sense of having "tested code".
I've found that breaking up long methods into separate, sometimes static, methods makes it easier to do unit testing. You can test each individual method, and not worry so much about tweaking parameters to a single method so that it covers all execution paths.
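As a rough sketch of that idea (written here in Java-like syntax, which Apex closely resembles; the class and method names are made up for illustration):
// Instead of one long method that normalizes, validates and saves in one go,
// split it into small pieces that can each be unit tested directly.
public class OrderProcessor {
    public void process(String rawInput) {
        String normalized = normalize(rawInput);
        validate(normalized);
        save(normalized);
    }

    // Easy to test: pass in a string, assert on the returned value.
    static String normalize(String input) {
        return input == null ? "" : input.trim().toLowerCase();
    }

    // Easy to test: assert that invalid input throws.
    static void validate(String input) {
        if (input.length() == 0) {
            throw new IllegalArgumentException("input must not be empty");
        }
    }

    void save(String input) {
        // persistence logic omitted
    }
}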
It's now possible to generate test classes automatically for your class/trigger/batch. You can install the "Test Class Generator" app from the AppExchange and see it working.
This can really help you generate test classes and save a lot of your development time.

Any way to run code in SenTest only on a success?

In my Mac Cocoa unit tests, I would like to output some files as part of the testing process, and delete them when the test is done, but only when there are no failures. How can this be done (and/or what's the cleanest way to do so)?
Your question made me curious so I looked into it!
I guess I would override the failWithException: method in the class SenTestCase (the class your tests run in inherits from this), and set a "keep output files" flag or something before calling the super's method.
Here's what SenTestCase.h says about that method:
/*"Failing a test, used by all macros"*/
- (void) failWithException:(NSException *) anException;
So, provided you only use the SenTest macros to test and/or fail (and chances are this is true in your case), that should cover any test failure.
I've never dug into the scripts for this, but it seems like you could customize how you call the script that actually runs your tests to do this. In Xcode 4, look at the last step in the Build Phases tab of your test target. Mine contains this:
# Run the unit tests in this test bundle.
"${SYSTEM_DEVELOPER_DIR}/Tools/RunUnitTests"
I haven't pored through the contents of this script or the many subscripts it pulls in on my machine, but presumably they call otest or some other test rig executable and the test results would be returned to that script. After a little time familiarizing yourself with those scripts you would likely be able to find a straightforward way to conditionally remove the output files based on the test results.

TestNG & Selenium: Separate tests into "groups", run ordered inside each group

We use TestNG and Selenium WebDriver to test our web application.
Now our problem is that we often have several tests that need to run in a certain order, e.g.:
login to application
enter some data
edit the data
check that it's displayed correctly
Now obviously these tests need to run in that precise order.
At the same time, we have many other tests which are totally independent from the list of tests above.
So we'd like to be able to somehow put tests into "groups" (not necessarily groups in the TestNG sense), and then run them such that:
tests inside one "group" always run together and in the same order
but different test "groups" as a whole can run in any order
The second point is important, because we want to avoid dependencies between tests in different groups (so different test "groups" can be used and developed independently).
Is there a way to achieve this using TestNG?
Solutions we tried
At first we just put tests that belong together into one class, and used dependsOnMethods to make them run in the right order. This used to work in TestNG V5, but in V6 TestNG will sometimes interleave tests from different classes (while respecting the ordering imposed by dependsOnMethods). There does not seem to be a way to tell TestNG "Always run tests from one class together".
We considered writing a method interceptor. However, this has the disadvantage that running tests from inside an IDE becomes more difficult (because directly invoking a test on a class would not use the interceptor). Also, tests using dependsOnMethods cannot be ordered by the interceptor, so we'd have to stop using that. We'd probably have to create our own annotation to specify ordering, and we'd like to use standard TestNG features as far as possible.
The TestNG docs propose using preserve-order to order tests. That looks promising, but only works if you list every test method separately, which seems redundant and hard to maintain.
Is there a better way to achieve this?
I am also open for any other suggestions on how to handle tests that build on each other, without having to impose a total order on all tests.
PS
alanning's answer points out that we could simply keep all tests independent by doing the necessary setup inside each test. That is in principle a good idea (and some tests do this), however sometimes we need to test a complete workflow, with each step depending on all previous steps (as in my example). To do that with "independent" tests would mean running the same multi-step setup over and over, and that would make our already slow tests even slower. Instead of three tests doing:
Test 1: login to application
Test 2: enter some data
Test 3: edit the data
we would get
Test 1: login to application
Test 2: login to application, enter some data
Test 3: login to application, enter some data, edit the data
etc.
In addition to needlessly increasing testing time, this also feels unnatural - it should be possible to model a workflow as a series of tests.
If there's no other way, this is probably how we'll do it, but we are looking for a better solution, without repeating the same setup calls.
You are mixing "functionality" and "test". Separating them will solve your problem.
For example, create a helper class/method that executes the steps to log in, then call that class/method in your Login test and all other tests that require the user to be logged in.
Your other tests do not actually need to rely on your Login "Test", just the login class/method.
If later back-end modifications introduce a bug in the login process, all of the tests which rely on the Login helper class/method will still fail as expected.
Update:
Turns out this already has a name, the Page Object pattern. Here is a page with Java examples of using this pattern:
http://code.google.com/p/selenium/wiki/PageObjects
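As a rough TestNG/Selenium sketch of that helper idea (the LoginHelper class, the URL, and the element IDs below are made up for illustration, and the driver setup is simplified):
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.Test;

// Hypothetical helper that encapsulates the login steps so every test can
// reuse them instead of depending on a separate "login test".
class LoginHelper {
    static void loginAs(WebDriver driver, String user, String password) {
        driver.get("https://example.com/login");                  // assumed URL
        driver.findElement(By.id("username")).sendKeys(user);     // assumed element IDs
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("loginButton")).click();
    }
}

public class EnterDataTest {
    private final WebDriver driver = new ChromeDriver();

    @Test
    public void enterSomeData() {
        LoginHelper.loginAs(driver, "testuser", "secret");        // setup via the helper, not via another test
        driver.findElement(By.id("newEntry")).sendKeys("some data");
        driver.findElement(By.id("save")).click();
        Assert.assertTrue(driver.getPageSource().contains("some data"));
    }
}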
Try dependsOnGroups along with dependsOnMethods. Add all methods in the same class to one group.
For example:
@Test(groups = { "cls1", "other" })
public void cls1test1() {
}
@Test(groups = { "cls1", "other" }, dependsOnMethods = "cls1test1", alwaysRun = true)
public void cls1test2() {
}
In class 2:
@Test(groups = { "cls2", "other" }, dependsOnGroups = "cls1", alwaysRun = true)
public void cls2test1() {
}
@Test(groups = { "cls2", "other" }, dependsOnMethods = "cls2test1", dependsOnGroups = "cls1", alwaysRun = true)
public void cls2test2() {
}
There is an easy (whilst hacky) workaround for this if you are comfortable with your first approach:
At first we just put tests that belong together into one class, and used dependsOnMethods to make them run in the right order. This used to work in TestNG V5, but in V6 TestNG will sometimes interleave tests from different classes (while respecting the ordering imposed by dependsOnMethods). There does not seem to be a way to tell TestNG "Always run tests from one class together".
We had a similar problem: we need our tests to be run class-wise because we couldn't guarantee the test classes not interfering with each other.
This is what we did:
Put a
@Test( dependsOnGroups = { "dummyGroupToMakeTestNGTreatThisAsDependentClass" } )
annotation on an abstract test class or interface that all your tests inherit from.
This will put all your methods in the "first group" (group as described in this paragraph, not a TestNG group). Inside the groups, the ordering is class-wise.
Thanks to Cedric Beust, who provided a very quick answer for this.
Edit:
The group dummyGroupToMakeTestNGTreatThisAsDependentClass actually has to exist, but you can just add a dummy test case for that purpose.
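A minimal sketch of that workaround, assuming TestNG (the class names are made up, and each class would live in its own file):
import org.testng.annotations.Test;

// One dummy test populates the group that everything else depends on.
public class DummyGroupProvider {
    @Test(groups = { "dummyGroupToMakeTestNGTreatThisAsDependentClass" })
    public void dummy() {
        // intentionally empty; it only exists so the group is not empty
    }
}

// All real test classes extend this, so the class-level dependency makes
// TestNG schedule their methods class by class.
@Test(dependsOnGroups = { "dummyGroupToMakeTestNGTreatThisAsDependentClass" })
public abstract class AbstractOrderedTest {
}

public class WorkflowTest extends AbstractOrderedTest {
    @Test
    public void loginToApplication() { /* ... */ }

    @Test(dependsOnMethods = "loginToApplication")
    public void enterSomeData() { /* ... */ }
}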

How to protect yourself when refactoring non-regression tests?

Are there specific techniques to consider when refactoring the non-regression tests? The code is usually pretty simple, but it's obviously not included into the safety net of a test suite...
When building a non-regression test, I first ensure that it really exhibits the issue that I want to correct, of course. But if I come back later to this test because I want to refactor it (e.g. I just added another very similar test), I usually can't put the code-under-test back in a state where it was exhibiting the first issue. So I can't be sure that the test, once refactored, is still exercising the same paths in the code.
Are there specific techniques to deal with this issue, except being extra careful?
It's not a big problem. The tests test the code, and the code tests the tests. Although it's possible to make a clumsy mistake that causes a test to start passing under all circumstances, it's not likely. You'll be running the tests again and again, so the tests and the code they test get a lot of exercise, and when things change for the worse, tests generally start failing.
Of course, be careful; of course, run the tests immediately before and after refactoring. If you're uncomfortable about your refactoring, do it in a way that allows you to see the test working (passing and failing). Find a reliable way to fail each test before the refactoring, and write it down. Get to green (all tests passing), then refactor the test. Run the tests; still green? Good. (If not, get back to green, perhaps by starting over.) Perform the changes that made the original, unrefactored tests fail. Red? Same failure as before? Then reinstate the working code and check for green again. Check it in and move on to your next task.
Try to include not only positive cases in your automated test, but also negative cases (and proper handling for them).
Also, you can run your refactored automated test with breakpoints and verify in the debugger that it still exercises all the paths you intended it to exercise.