Can we run JUnit 5 with JMockit 1.30? - jmockit

It throws a NullPointerException when we use JUnit 5 with JMockit 1.30, but works fine with JMockit 1.37 (the latest version).
This is my code, executed with @RunWith(JUnitPlatform.class):
public class Junit5WithJmockitTest
{
    @Tested
    private User user;

    @Mocked
    private Department department;

    @Test
    void testjunit5WithJmockit37Test()
    {
        new Expectations()
        {
            {
                department.getDepartmentId();
                result = 1;
                new Department();
                result = department;
            }
        };
        user.getUserId();
        AssertionUtils.assertNotNull("", user);
    }
}

Related

JUnit5 - how to pass input collection to ParameterizedTest

I'm trying to translate a parameterized test from JUnit 4 to JUnit 5 (sadly, I'm not particularly skilled in testing).
In JUnit 4 I have the following class:
@RunWith(Parameterized.class)
public class AssertionTestCase {
    private final TestInput testInput;

    public AssertionTestCase(TestInput testInput) {
        this.testInput = testInput;
    }

    @Parameterized.Parameters
    public static Collection<Object[]> data() {
        return AssertionTestCaseDataProvider.createDataCase();
    }

    @Test(timeout = 15 * 60 * 1000L)
    public void testDailyAssertion() {
        LOG.info("Testing input {}/{}", testInput.getTestCase(), testInput.getTestName());
        // assert stuffs
    }
}
In the AssertionTestCaseDataProvider class I have a simple method generating a collection of Object[]:
class AssertionTestCaseDataProvider {
    static Collection<Object[]> createDataCase() {
        final List<TestInput> testInputs = new ArrayList<>();
        // create and populate testInputs
        return testInputs.stream()
                .map(testInput -> new Object[]{testInput})
                .collect(Collectors.toList());
    }
}
I've tried to translate it to JUnit 5 and obtained this:
class AssertionTestCase {
    private final TestInput testInput;

    public AssertionTestCase(TestInput testInput) {
        this.testInput = testInput;
    }

    public static Collection<Object[]> data() {
        return AssertionTestCaseDataProvider.createDataCase();
    }

    @ParameterizedTest
    @MethodSource("data")
    void testDailyAssertion() {
        LOG.info("Testing input {}/{}", testInput.getTestCase(), testInput.getTestName());
        // assert stuffs
    }
}
I did not apply any change to the AssertionTestCaseDataProvider class.
Nevertheless, I'm getting the following error:
No ParameterResolver registered for parameter [com.xxx.xx.xxx.xxx.testinput.TestInput arg0] in constructor [public com.xxx.xxx.xxx.xxx.xxx.AssertionTestCase(com.xxx.xxx.xxx.xxx.testinput.TestInput)]. org.junit.jupiter.api.extension.ParameterResolutionException: No ParameterResolver registered for parameter [com.xxx.xx.xxx.xxx.testinput.TestInput arg0] in constructor [public com.xxx.xxx.xxx.xxx.xxx.AssertionTestCase(com.xxx.xxx.xxx.xxx.testinput.TestInput)].
I understand I'm probably not applying JUnit 5 correctly when initializing the input collection for the test. Am I missing some annotations?
I've also tried using @ArgumentsSource instead of @MethodSource and implementing ArgumentsProvider for AssertionTestCaseDataProvider, with the same failing results.
It works a bit differently in JUnit 5: the test method should declare parameters, and the provider method should return a Stream.
static Stream<Arguments> data() {
    return Stream.of(
            Arguments.of("a", 1),
            Arguments.of("d", 2)
    );
}

@ParameterizedTest
@MethodSource("data")
void testDailyAssertion(String a, int b) {
    Assertions.assertAll(
            () -> Assertions.assertEquals("a", a),
            () -> Assertions.assertEquals(1, b)
    );
}
In your case you can just return a Stream<TestInput>:
static Stream<TestInput> createDataCase() {
    final List<TestInput> testInputs = new ArrayList<>();
    // create and populate testInputs
    return testInputs.stream();
}
and then in your test method:
@ParameterizedTest
@MethodSource("createDataCase")
void testDailyAssertion(TestInput testInput) {
    // your assertions
}
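Since the question also mentions @ArgumentsSource: that route works as well, but the provider has to implement ArgumentsProvider (not Arguments) and be referenced from the annotation. A minimal sketch, assuming the Stream-returning createDataCase() shown above and a hypothetical provider class name:
import java.util.stream.Stream;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.ArgumentsProvider;
import org.junit.jupiter.params.provider.ArgumentsSource;

class AssertionTestCaseArgumentsProvider implements ArgumentsProvider {
    @Override
    public Stream<? extends Arguments> provideArguments(ExtensionContext context) {
        // wrap each TestInput in an Arguments instance
        return AssertionTestCaseDataProvider.createDataCase()
                .map(Arguments::of);
    }
}

class AssertionTestCase {
    @ParameterizedTest
    @ArgumentsSource(AssertionTestCaseArgumentsProvider.class)
    void testDailyAssertion(TestInput testInput) {
        // your assertions
    }
}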

How to print the number of failures for every test case using SoftAssert in TestNG

I have a test case that performs the following:
@Test
public void testNumber() {
    SoftAssert softAssert = new SoftAssert();
    List<Integer> nums = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
    for (Integer num : nums) {
        softAssert.assertTrue(num % 2 == 0, String.format("\n Old num : %d", num));
    }
    softAssert.assertAll();
}
The above test will fail for the numbers 1, 3, 5, 7 and 9, so five statements will be printed in the test report.
If I run the test against a larger data set, it is difficult to get the count of test data that failed the test case.
Is there any easier way to get the number of test data entries that failed, using SoftAssert itself?
Getting the count of failed test data from SoftAssert itself is not possible.
Alternatively, you can extend the Assertion class. Below is a sample that reports the number of test data entries that failed and prints each failure on a new line.
package yourpackage;

import java.util.Map;

import org.testng.asserts.Assertion;
import org.testng.asserts.IAssert;
import org.testng.collections.Maps;

public class AssertionExtn extends Assertion {

    private final Map<AssertionError, IAssert> m_errors;

    public AssertionExtn() {
        this.m_errors = Maps.newLinkedHashMap();
    }

    protected void doAssert(IAssert a) {
        onBeforeAssert(a);
        try {
            a.doAssert();
            onAssertSuccess(a);
        } catch (AssertionError ex) {
            onAssertFailure(a, ex);
            this.m_errors.put(ex, a);
        } finally {
            onAfterAssert(a);
        }
    }

    public void assertAll() {
        if (!(this.m_errors.isEmpty())) {
            StringBuilder sb = new StringBuilder("Total assertion failed: " + m_errors.size());
            sb.append("\n\t");
            sb.append("The following asserts failed:");
            boolean first = true;
            for (Map.Entry ae : this.m_errors.entrySet()) {
                if (first) {
                    first = false;
                } else {
                    sb.append(",");
                }
                sb.append("\n\t");
                sb.append(((AssertionError) ae.getKey()).getMessage());
            }
            throw new AssertionError(sb.toString());
        }
    }
}
You can use this as:
@Test
public void test()
{
    AssertionExtn asert = new AssertionExtn();
    asert.assertEquals(false, true, "failed");
    asert.assertEquals(false, false, "passed");
    asert.assertEquals(0, 1, "brokedown");
    asert.assertAll();
}
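With the usage above, assertAll() throws a single AssertionError whose message starts with "Total assertion failed: 2" and then lists the messages of the two failed assertions (the "failed" and "brokedown" ones), each on its own line.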

Mockito.doNothing() is still running

I'm trying to test small pieces of code. I don't want one of the methods to run, so I used Mockito.doNothing(), but the method still runs. How can I prevent it?
protected EncoderClientCommandEventHandler clientCommandEventHandlerProcessStop =
        new EncoderClientCommand.EncoderClientCommandEventHandler() {
    @Override
    public void onCommandPerformed(EncoderClientCommand clientCommand) {
        setWatcherActivated(false);
        buttonsBackToNormal();
    }
};

protected void processStop() {
    EncoderServerCommand serverCommand = new EncoderServerCommand();
    serverCommand.setAction(EncoderAction.STOP);
    checkAndSetExtension();
    serverCommand.setKey(getArchiveJobKey());
    getCommandFacade().performCommand(
            serverCommand,
            EncoderClientCommand.getType(),
            clientCommandEventHandlerProcessStop);
}
@Test
public void testClientCommandEventHandlerProcessStop() {
    EncoderClientCommand encoderClientCommand = mock(EncoderClientCommand.class);
    Mockito.doNothing().when(encoderCompositeSpy).buttonsBackToNormal();
    when(encoderCompositeSpy.isWatcherActivated()).thenReturn(false);
    encoderCompositeSpy.clientCommandEventHandlerProcessStop.onCommandPerformed(encoderClientCommand);
}
I've found the problem. One of the variables was already mocked in buttonsBackToNormal().
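More generally, doNothing() only suppresses calls that go through the Mockito-created spy (or mock) reference; calls made on the original object are not intercepted. A minimal, self-contained sketch with a hypothetical Greeter class:
import static org.mockito.Mockito.doNothing;
import static org.mockito.Mockito.spy;

public class DoNothingSketch {

    static class Greeter {
        void sideEffect() { System.out.println("real side effect"); }
        void greet() { sideEffect(); }
    }

    public static void main(String[] args) {
        Greeter real = new Greeter();
        Greeter spied = spy(real);

        // stub the method on the spy reference
        doNothing().when(spied).sideEffect();

        spied.greet(); // the internal call goes through the spy, so nothing is printed
        real.greet();  // the original object is untouched, so the real code runs
    }
}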

Selenium, FluentAutomation & NUnit - How do I switch browsers for each TestCase?

Currently, openGoogle() does get called for each test case with the correct parameters. The problem is that setBrowser does not appear to work properly: it sets the browser the first time and completes that test successfully, but when openGoogle() is invoked the second time it keeps using the first browser instead of the newly specified one.
using NFramework = NUnit.Framework;
...

[NFramework.TestFixture]
public class SampleTest : FluentAutomation.FluentTest
{
    string path;
    private Action<TinyIoCContainer> currentRegistration;

    public TestContext TestContext { get; set; }

    [NFramework.SetUp]
    public void Init()
    {
        FluentAutomation.Settings.ScreenshotOnFailedExpect = true;
        FluentAutomation.Settings.ScreenshotOnFailedAction = true;
        FluentAutomation.Settings.DefaultWaitTimeout = TimeSpan.FromSeconds(1);
        FluentAutomation.Settings.DefaultWaitUntilTimeout = TimeSpan.FromSeconds(30);
        FluentAutomation.Settings.MinimizeAllWindowsOnTestStart = false;
        FluentAutomation.Settings.ScreenshotPath = path = "C:\\ScreenShots";
    }

    [NFramework.Test]
    [NFramework.TestCase(SeleniumWebDriver.Browser.Firefox)]
    [NFramework.TestCase(SeleniumWebDriver.Browser.InternetExplorer)]
    public void openGoogle(SeleniumWebDriver.Browser browser)
    {
        setBrowser(browser);
        I.Open("http://www.google.com/");
        I.WaitUntil(() => I.Expect.Exists("body"));
        I.Enter("Unit Testing").In("input[name=q]");
        I.TakeScreenshot(browser + "EnterText");
        I.Click("button[name=btnG]");
        I.WaitUntil(() => I.Expect.Exists(".mw"));
        I.TakeScreenshot(browser + "ClickSearch");
    }

    public SampleTest()
    {
        currentRegistration = FluentAutomation.Settings.Registration;
    }

    private void setBrowser(SeleniumWebDriver.Browser browser)
    {
        switch (browser)
        {
            case SeleniumWebDriver.Browser.InternetExplorer:
            case SeleniumWebDriver.Browser.Firefox:
                FluentAutomation.SeleniumWebDriver.Bootstrap(browser);
                break;
        }
    }
}
Note: Doing it the way shown below DOES work correctly, opening a separate browser for each test.
public class SampleTest : FluentAutomation.FluentTest
{
    string path;
    private Action currentRegistration;

    public TestContext TestContext { get; set; }

    private void ie()
    {
        FluentAutomation.SeleniumWebDriver.Bootstrap(FluentAutomation.SeleniumWebDriver.Browser.InternetExplorer);
    }

    private void ff()
    {
        FluentAutomation.SeleniumWebDriver.Bootstrap(FluentAutomation.SeleniumWebDriver.Browser.Firefox);
    }

    public SampleTest()
    {
        //ff
        FluentAutomation.SeleniumWebDriver.Bootstrap();
        currentRegistration = FluentAutomation.Settings.Registration;
    }

    [TestInitialize]
    public void Initialize()
    {
        FluentAutomation.Settings.ScreenshotOnFailedExpect = true;
        FluentAutomation.Settings.ScreenshotOnFailedAction = true;
        FluentAutomation.Settings.DefaultWaitTimeout = TimeSpan.FromSeconds(1);
        FluentAutomation.Settings.DefaultWaitUntilTimeout = TimeSpan.FromSeconds(30);
        FluentAutomation.Settings.MinimizeAllWindowsOnTestStart = false;
        path = TestContext.TestResultsDirectory;
        FluentAutomation.Settings.ScreenshotPath = path;
    }

    [TestMethod]
    public void OpenGoogleIE()
    {
        ie();
        openGoogle("IE");
    }

    [TestMethod]
    public void OpenGoogleFF()
    {
        ff();
        openGoogle("FF");
    }

    private void openGoogle(string browser)
    {
        I.Open("http://www.google.com/");
        I.WaitUntil(() => I.Expect.Exists("body"));
        I.Enter("Unit Testing").In("input[name=q]");
        I.TakeScreenshot(browser + "EnterText");
        I.Click("button[name=btnG]");
        I.WaitUntil(() => I.Expect.Exists(".mw"));
        I.TakeScreenshot(browser + "ClickSearch");
    }
}
Dev branch: The latest bits in the dev branch play nicely with NUnit's parameterized test cases, in my experience.
Just move the Bootstrap call inside the test case itself and be sure to manually call I.Dispose() at the end. This allows for proper browser creation when run in this context.
Here is an example that you should be able to copy/paste and run, if you pull the latest from GitHub on the dev branch.
[TestCase(FluentAutomation.SeleniumWebDriver.Browser.InternetExplorer)]
[TestCase(FluentAutomation.SeleniumWebDriver.Browser.Chrome)]
public void CartTest(FluentAutomation.SeleniumWebDriver.Browser browser)
{
    FluentAutomation.SeleniumWebDriver.Bootstrap(browser);

    I.Open("http://automation.apphb.com/forms");
    I.Select("Motorcycles").From(".liveExample tr select:eq(0)"); // Select by value/text
    I.Select(2).From(".liveExample tr select:eq(1)"); // Select by index
    I.Enter(6).In(".liveExample td.quantity input:eq(0)");
    I.Expect.Text("$197.70").In(".liveExample tr span:eq(1)");

    // add second product
    I.Click(".liveExample button:eq(0)");
    I.Select(1).From(".liveExample tr select:eq(2)");
    I.Select(4).From(".liveExample tr select:eq(3)");
    I.Enter(8).In(".liveExample td.quantity input:eq(1)");
    I.Expect.Text("$788.64").In(".liveExample tr span:eq(3)");

    // validate totals
    I.Expect.Text("$986.34").In("p.grandTotal span");

    // remove first product
    I.Click(".liveExample a:eq(0)");

    // validate new total
    I.WaitUntil(() => I.Expect.Text("$788.64").In("p.grandTotal span"));

    I.Dispose();
}
It should find its way to NuGet in the next release, which I'm hoping happens this week.
NuGet v2.0: Currently only one call to Bootstrap is supported per test. In v1 we had built-in support for running the same test against all the browsers supported by a provider, but we found that users preferred to split it into multiple tests.
The way I manage it with v2 is to have a 'Base' TestClass that contains the TestMethods. I then extend it once per browser I want to target and override the constructor to call the appropriate Bootstrap method.
A bit more verbose, but very easy to manage.

TestNG Test Case failing with JMockit "Invalid context for the recording of expectations"

The following TestNG (6.3) test case generates the error "Invalid context for the recording of expectations"
@Listeners({ Initializer.class })
public final class ClassUnderTestTest {

    private ClassUnderTest cut;

    @SuppressWarnings("unused")
    @BeforeMethod
    private void initialise() {
        cut = new ClassUnderTest();
    }

    @Test
    public void doSomething() {
        new Expectations() {
            MockedClass tmc;
            {
                tmc.doMethod("Hello"); result = "Hello";
            }
        };

        String result = cut.doSomething();
        assertEquals(result, "Hello");
    }
}
The class under test is below.
public class ClassUnderTest {

    MockedClass service = new MockedClass();
    MockedInterface ifce = new MockedInterfaceImpl();

    public String doSomething() {
        return (String) service.doMethod("Hello");
    }

    public String doSomethingElse() {
        return (String) ifce.testMethod("Hello again");
    }
}
I am making the assumption that because I am using the @Listeners annotation, I do not require the -javaagent command-line argument. This assumption may be wrong...
Can anyone point out what I have missed?
The JMockit-TestNG Initializer must run once for the whole test run, so using @Listeners on individual test classes won't work.
Instead, simply upgrade to JMockit 0.999.11, which works transparently with TestNG 6.2+, without any need to specify a listener or the -javaagent parameter (unless running on JDK 1.5).
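With that upgrade in place, the same test class works without the listener. A sketch of the question's test, assuming JMockit 0.999.11+ is simply on the classpath:
// No @Listeners needed once JMockit 0.999.11+ is on the classpath.
public final class ClassUnderTestTest {

    private ClassUnderTest cut;

    @BeforeMethod
    private void initialise() {
        cut = new ClassUnderTest();
    }

    @Test
    public void doSomething() {
        new Expectations() {
            MockedClass tmc; // local mock field, as in the original test
            {
                tmc.doMethod("Hello"); result = "Hello";
            }
        };

        assertEquals(cut.doSomething(), "Hello");
    }
}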