What does the JUnit 4 @Test annotation actually do?

What does the @Test annotation actually do? I have some tests without it, and they run fine.
My class starts with
public class TransactionTest extends InstrumentationTestCase {
The test runs with either:
public void testGetDate() throws Exception {
or
@Test
public void testGetDate() throws Exception {
EDIT: It was pointed out that I may be using JUnit 3 tests, but I think I am using JUnit 4:

@Test
public void method()
@Test => The annotation identifies a method as a test method.
@Test(expected = Exception.class) => Fails if the method does not throw the named exception.
@Test(timeout = 100) => Fails if the method takes longer than 100 milliseconds.
@Before
public void method() => This method is executed before each test. It is used to prepare the test environment (e.g., read input data, initialize the class).
@After
public void method() => This method is executed after each test. It is used to clean up the test environment (e.g., delete temporary data, restore defaults). It can also save memory by cleaning up expensive memory structures.
@BeforeClass
public static void method() => This method is executed once, before the start of all tests. It is used to perform time-intensive activities, for example, to connect to a database. Methods marked with this annotation need to be defined as static to work with JUnit.
@AfterClass
public static void method() => This method is executed once, after all tests have finished. It is used to perform clean-up activities, for example, to disconnect from a database. Methods marked with this annotation need to be defined as static to work with JUnit.
@Ignore => Ignores the test method. This is useful when the underlying code has been changed and the test case has not yet been adapted, or when the execution time of the test is too long to be included.
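For example, a minimal JUnit 4 test class wiring these lifecycle annotations together (class and method names are made up for illustration) might look like this:

import static org.junit.Assert.assertEquals;

import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class LifecycleExampleTest {

    @BeforeClass
    public static void onceBeforeAllTests() {
        // e.g. connect to a database (must be static)
    }

    @Before
    public void beforeEachTest() {
        // e.g. prepare input data for the next test
    }

    @Test
    public void addsTwoNumbers() {
        assertEquals(4, 2 + 2);
    }

    @After
    public void afterEachTest() {
        // e.g. delete temporary data
    }

    @AfterClass
    public static void onceAfterAllTests() {
        // e.g. disconnect from the database (must be static)
    }
}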

It identifies a method as a test method. JUnit constructs the test class and then invokes the annotated methods.
If an exception occurs, the test fails. You can also specify that an exception should occur; if it does not, the test fails (testing for the exception, a sort of inverse test):
@Test(expected = Exception.class) - Fails if the method does not throw the named exception.
You can also set a time limit; if the method does not finish within the allotted time, the test fails:
@Test(timeout = 500) - Fails if the method takes longer than 500 milliseconds.

In JUnit 4 the @Test annotation tells JUnit that a specific method is a test. In JUnit 3, by contrast, a method is a test if its name starts with test and its class extends TestCase.
I assume that InstrumentationTestCase extends junit.framework.TestCase. This means that you're using a JUnit 4 annotation within a JUnit 3 test. In this case the tool that runs the tests (your IDE, or a build tool like Ant or Maven) decides whether it recognizes the @Test annotation or not. You can verify this by renaming testGetDate() to something that doesn't start with test, e.g. shouldReturnDate(). If your tool still runs 17 tests, then you know it supports JUnit 4 annotations within JUnit 3 tests. If it runs 16 tests, then you know the @Test annotation is mere decoration that does nothing in your setup.
JUnit 4 still provides the classes of JUnit 3 (the junit.framework package). This means you can use JUnit 3 style tests with JUnit 4.
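To see the difference concretely, here is a sketch of the two styles side by side (class names are invented; the JUnit 3 class relies purely on the naming convention, the JUnit 4 class purely on the annotation):

// JUnit 3 style: extend TestCase; every public void method whose name
// starts with "test" is picked up automatically.
class DateJUnit3Test extends junit.framework.TestCase {
    public void testGetDate() {
        // runs, because the name starts with "test"
    }
    public void shouldReturnDate() {
        // ignored by a pure JUnit 3 runner, because the name does not start with "test"
    }
}

// JUnit 4 style: no base class required; any method annotated with @Test
// is a test, regardless of its name.
class DateJUnit4Test {
    @org.junit.Test
    public void shouldReturnDate() {
        // runs, because of the annotation
    }
}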

In JUnit, annotations give a method or class a meaning from the test-execution point of view. Once you use the @Test annotation on a method, that method is no longer an ordinary method: it is a test case, it will be executed as such by the IDE, and JUnit will display its result as passed or failed on the basis of your assertions.
If you are starting with JUnit as a beginner, check out this simple JUnit tutorial: http://qaautomated.blogspot.in/p/junit.html

Related

General questions on parameterized tests in googletest

Background: I am writing a session table for incoming traffic. This table should hold all active UDP/TCP connections.
I am using the googletest package to test my implementation.
I prepared a parameterised test based on a fixture in the following format:
class SessionTest - initializes all the common stuff.
struct ConnectionInfo - holds a set of connection parameters (IPs, ports, etc.).
class SessionTestPrepare : SessionTest, testing::WithParamInterface<ConnectionInfo> - initialization.
TEST_P(SessionTestPrepare, test) - holds the test cases and logic.
INSTANTIATE_TEST_CASE_P(Default, SessionTestPrepare, testing::Values(
    ConnectionInfo{},
    ConnectionInfo{},
    ConnectionInfo{}));
I noticed that each time a new set of parameters is tested, the SessionTest constructor and SetUp function are called (and of course the destructor and TearDown).
Note: my session table is declared and initialized there.
Is there a way to avoid calling SetUp and TearDown for each set of parameters?
Is there a way to keep the state of my session table across tests without making it global (i.e. when testing the second set of connection parameters, the first is still in the table)?
To run set-up and tear-down only once per test fixture, use SetUpTestCase and TearDownTestCase instead of SetUp and TearDown. The shared resources can be stored in the fixture as static member variables. For example:
class SessionTestPrepare : public ::testing::WithParamInterface<ConnectionInfo> //...
{
public:
    static void SetUpTestCase();     // runs once before the first test of this fixture
    static void TearDownTestCase();  // runs once after the last test of this fixture
    static ConnectionInfo * shared_data;
    //...
};

// static members must also be defined once outside the class
ConnectionInfo * SessionTestPrepare::shared_data = nullptr;
SetUpTestCase is called before the first parameter test begins and TearDownTestCase is called after the last parameter test ends. You can create/delete the shared resources in these functions.

Selenium - Run a method once and use the return value for all @Test methods in the class

My requirement goes like this.
Log in to the application and open the system property menu to read the value of a property.
Open another menu in the application and, based on the value returned in the step above, perform the test scenario.
The problem is that for each @Test method in the same class I need to perform both step 1 and step 2, which is time-consuming and unnecessary. The property retrieved in step 1 stays the same throughout the execution of the tests in the class.
Is there any way I can execute step 1 just once at the start and use the returned property value for all the @Test methods in the class that follow?
P.S. I looked at the dependsOnMethods attribute and I'm not sure whether it is the solution I am looking for.
If you're using JUnit, it sounds like @BeforeClass is what you are looking for. A method with this annotation runs only once per class, and you can store the value it retrieves in a static field. Or you might consider the @Before annotation (runs before each test) if that suits you better.
Other testing frameworks use a similar idea.
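For instance, a JUnit sketch of that approach (the class, field, and helper names here are invented for illustration; in a real Selenium test the helper would log in and read the property through the UI):

import org.junit.BeforeClass;
import org.junit.Test;

public class SystemPropertyDependentTest {

    // fetched once for the whole class and shared by every @Test method
    private static String propertyValue;

    @BeforeClass
    public static void readPropertyOnce() {
        // hypothetical helper standing in for "log in and read the system property"
        propertyValue = fetchSystemPropertyFromApplication();
    }

    @Test
    public void firstScenario() {
        // branch the scenario on propertyValue ...
    }

    @Test
    public void secondScenario() {
        // ... reused here without logging in again
    }

    private static String fetchSystemPropertyFromApplication() {
        return "some-value"; // placeholder for the real UI interaction
    }
}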

NSubstitute Test against classes (VB.net)

First of all, I'm a beginner at unit testing. For my tests I want to use NSubstitute, so I read the tutorial on the website and also the mock comparison by Richard Banks. Both of them test against interfaces, not against classes. The statement is: "Generally this [substituted] type will be an interface, but you can also substitute classes in cases of emergency."
Now I'm wondering about the purpose of testing against interfaces. Here is the example interface from the NSubstitute website (please note that I have converted the C# code to VB.NET):
Public Interface ICalculator
Function Add(a As Double, b As Double) As Double
Property Mode As String
Event PoweringUp As EventHandler
End Interface
And here is the unit test from the website (under the NUnit-Framework):
<Test>
Sub ReturnValue_For_Methods()
Dim calculator = Substitute.For(Of ICalculator)()
calculator.Add(1, 2).Returns(3)
Assert.AreEqual(calculator.Add(1, 2), 3)
End Sub
OK, that works and the unit test passes. But what sense does this make? It does not test any code. The Add method could contain any number of errors, which will not be detected when testing against the interface - like this:
Public Class Calculator
Implements ICalculator
Public Function Add(a As Double, b As Double) As Double Implements ICalculator.Add
Return 1 / 0
End Function
...
End Class
The Add method performs a division by zero, so the unit test should fail - but because the test runs against the ICalculator interface, it still passes.
Could you please help me understand that? What is the point of testing against the interface rather than the code?
Thanks in advance
Michael
The idea behind mocking is to isolate a class we are testing from its dependencies. So we don't mock the class we are testing, in this case Calculator, we mock an ICalculator when testing a class that uses an ICalculator.
A small example is when we want to test how something interacts with a database, but we don't want to use a real database for some quick tests. (Please excuse the C#.)
[Test]
public void SaveTodoItemToDatabase() {
var substituteDb = Substitute.For<IDatabase>();
var todoScreen = new TodoViewModel(substituteDb);
todoScreen.Item = "Read StackOverflow";
todoScreen.CurrentUser = "Anna";
todoScreen.Save();
substituteDb.Received().SaveTodo("Read StackOverflow", "Anna");
}
The idea here is we've separated the TodoViewModel from the details of saving to the database. We don't want to worry about configuring a database, or getting a connection string, or having data from previous test runs interfering with future test runs. Testing with a real database can be very valuable, but in some cases we just want to test a smaller unit of functionality. Mocking is one way of doing this.
For the real app, we'll create a TodoViewModel with a real implementation of IDatabase, and provided that implementation follows the expected contract of the interface then we can have a reasonable expectation that it will work.
Hope this helps.
Update in response to comment
The test for TodoViewModel assumes the implementation of IDatabase works, so we can focus on that class's logic. This means we'll probably want a separate set of tests for implementations of IDatabase. Say we have a SqlServerDb implementation; then we can have some tests (probably against a real database) that check it does what it promises. In those tests we'll no longer be mocking the database interface, because that's what we're testing.
Another thing we can do is have "contract tests" which we can apply to any IDatabase implementation. For example, we could have a test that says for any implementation, saving an item then loading it up again should return the same item. We can then run those tests against all implementations, SqlDb, InMemoryDb, FileDb etc. In this way we can state our assumptions about the dependencies we're mocking, then check that the actual implementations meet our assumptions.

Is it better to test with mocks or without?

A method can be tested either with mock objects or without. I prefer the solution without mocks when they are not necessary, because:
They make tests more difficult to understand.
After refactoring, it is a pain to fix JUnit tests if they have been implemented with mocks.
But I would like to ask your opinion. Here the method under test:
public class OndemandBuilder {
....
private LinksBuilder linksBuilder;
....
public OndemandBuilder buildLink(String pid) {
broadcastOfBuilder = new LinksBuilder(pipsBeanFactory);
broadcastOfBuilder.type(XXX).pid(pid);
return this;
}
Test with mocks:
@Test
public void testbuildLink() throws Exception {
String type = "XXX";
String pid = "test_pid";
LinksBuilder linkBuilder = mock(LinksBuilder.class);
given(linkBuilder.type(type)).willReturn(linkBuilder);
//builderFactory replace the new call in order to mock it
given(builderFactory.createLinksBuilder(pipsBeanFactory)).willReturn(linkBuilder);
OndemandBuilder returnedBuilder = builder.buildLink(pid);
assertEquals(builder, returnedBuilder); //they point to the same obj
verify(linkBuilder, times(1)).type(type);
verify(linkBuilder, times(1)).pid(pid);
verifyNoMoreInteractions(linkBuilder);
}
The returnedBuilder object within the method buildLink is 'this', which means builder and returnedBuilder cannot be different: they point to the same object in memory. So the assertEquals does not really test that the builder contains the expected field set by buildLink (the pid).
I have changed that test as below, without using mocks. The test below asserts what we actually want to verify: that the builder contains a non-null LinksBuilder and that the LinksBuilder's pid is the one expected.
@Test
public void testbuildLink() throws Exception {
String pid = "test_pid";
OndemandBuilder returnedBuilder = builder.buildLink(pid);
assertNotNull(returnedBuilder.getLinkBuilder());
assertEquals(pid, returnedBuilder.getLinkBuilder().getPid());
}
I wouldn't use mocks unless they are necessary, but I wonder if this makes sense or whether I misunderstand the mock way of testing.
Mocking is a very powerful tool when writing unit tests. In a nutshell, where you have dependencies between classes and you want to test one class that depends on another, you can use mock objects to limit the scope of your tests, so that you are only testing the code in the class you want to test and not the classes it depends on. There is no point in me explaining further; I would highly recommend reading Martin Fowler's brilliant piece Mocks Aren't Stubs for a full introduction to the topic.
In your example, the test without mocks is definitely cleaner, but you will notice that your test exercises code in both the OndemandBuilder and LinksBuilder classes. It may be that this is what you want to do, but the 'problem' here is that should the test fail, it could be due to issues in either of those two classes. In your case, because the code in OndemandBuilder.buildLink is minimal, I would say your approach is OK. However, if the logic in this function was more complex, then I would suggest that you would want to unit test this method in a way that didn't depend on the behavior of the LinksBuilder.type method. This is where mock objects can help you.
Let's say we do want to test OndemandBuilder.buildLink independently of the LinksBuilder implementation. To do this, we want to be able to replace the linksBuilder object in OndemandBuilder with a mock object; by doing this we can precisely control what is returned by calls to this mock object, breaking the dependency on the implementation of LinksBuilder. This is where the technique of Dependency Injection can help you. The example below shows how we could modify OndemandBuilder so that linksBuilder can be replaced with a mock object (by injecting the dependency through the constructor):
public class OndemandBuilder {
    ....
    private LinksBuilder linksBuilder;
    ....
    // constructor injection: the dependency is supplied from the outside
    public OndemandBuilder(LinksBuilder linksBuilder) {
        this.linksBuilder = linksBuilder;
    }

    public OndemandBuilder buildLink(String pid) {
        // use the injected builder instead of constructing one internally
        linksBuilder.type(XXX).pid(pid);
        return this;
    }
}
Now, in your test, when you create your OndemandBuilder object, you can create a mock version of LinksBuilder, pass it into the constructor, and control how this behaves for the purpose of your test. By using mock objects and dependency injection, you can now properly unit test OndemandBuilder independent of the LinksBuilder implementation.
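A sketch of what the mock-based test could then look like (assuming the constructor-injected version above, assuming LinksBuilder.type returns the builder itself, and treating XXX as the plain String "XXX" used in the question's own test; Mockito's BDD syntax is used as in the question):

import static org.junit.Assert.assertSame;
import static org.mockito.BDDMockito.given;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class OndemandBuilderTest {

    @Test
    public void buildLinkDelegatesToInjectedLinksBuilder() {
        // the mock replaces the real LinksBuilder implementation
        LinksBuilder linksBuilder = mock(LinksBuilder.class);
        given(linksBuilder.type("XXX")).willReturn(linksBuilder);

        // inject the mock through the constructor added above
        OndemandBuilder builder = new OndemandBuilder(linksBuilder);
        OndemandBuilder returned = builder.buildLink("test_pid");

        assertSame(builder, returned);      // the fluent API returns this
        verify(linksBuilder).type("XXX");   // interaction checks on the mock
        verify(linksBuilder).pid("test_pid");
    }
}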
Hope this helps.
It all depends on what you understand by unit testing.
When you are trying to unit test a class, you are not worried about the underlying system/interfaces. You assume they work correctly, hence you just mock them. And when I say you are assuming, I mean that you unit test the underlying interfaces separately.
So when you are writing your JUnit tests without mocks, you are essentially doing a system or integration test.
But to answer your question: both ways have their advantages and disadvantages, and ideally a system should have both.

JUnit 4: Running a suite of particular test methods

Is there a way to create a suite of test methods, not just test classes?
I'd like to put together a test suite that runs only particular test methods from a test class. I don't see a way to do this from my limited JUnit knowledge and from searching the web.
Use the Categories feature in JUnit 4.
Example: if some methods scattered across ATest and BTest are expected to be executed:
// Define the suite
@RunWith(Categories.class)
@IncludeCategory(NeedTest.class)
@SuiteClasses({ ATest.class, BTest.class })
public class MySuite {
    ...
}
Then in ATest and BTest, annotate the methods you want to run as:
@Test
@Category(NeedTest.class)
public void test()
When you run MySuite, only the methods annotated with @Category(NeedTest.class) will be executed. Of course, you could create multiple test categories.
P.S. NeedTest.class is just a marker type; it can be any class or interface.
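For completeness, a minimal sketch of such a marker type (an empty interface is enough; the name is arbitrary):

// Marker referenced by @Category and @IncludeCategory above.
public interface NeedTest {
}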