I'm trying to test small pieces of code. I don't want one of the methods to run, so I used Mockito.doNothing(), but the method was still executed. How can I prevent it from running?
protected EncoderClientCommandEventHandler clientCommandEventHandlerProcessStop =
        new EncoderClientCommand.EncoderClientCommandEventHandler() {
            @Override
            public void onCommandPerformed(EncoderClientCommand clientCommand) {
                setWatcherActivated(false);
                buttonsBackToNormal();
            }
        };
protected void processStop() {
    EncoderServerCommand serverCommand = new EncoderServerCommand();
    serverCommand.setAction(EncoderAction.STOP);
    checkAndSetExtension();
    serverCommand.setKey(getArchiveJobKey());
    getCommandFacade().performCommand(
            serverCommand,
            EncoderClientCommand.getType(),
            clientCommandEventHandlerProcessStop);
}
@Test
public void testClientCommandEventHandlerProcessStop() {
    EncoderClientCommand encoderClientCommand = mock(EncoderClientCommand.class);
    Mockito.doNothing().when(encoderCompositeSpy).buttonsBackToNormal();
    when(encoderCompositeSpy.isWatcherActivated()).thenReturn(false);
    encoderCompositeSpy.clientCommandEventHandlerProcessStop.onCommandPerformed(encoderClientCommand);
}
I've found the problem: one of the variables used in buttonsBackToNormal() was already mocked.
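For anyone hitting the same symptom, here is a minimal sketch of stubbing a spy so the real methods never run, not even while the stub is being recorded. The EncoderComposite class name and the spy construction are assumptions; only the encoderCompositeSpy variable appears in the question.

// Hypothetical setup: EncoderComposite is assumed, only encoderCompositeSpy is shown above.
EncoderComposite encoderCompositeSpy = Mockito.spy(new EncoderComposite());

// doNothing()/doReturn() record the stub without invoking the real method;
// when(spy.isWatcherActivated()) would call the real method once while stubbing.
Mockito.doNothing().when(encoderCompositeSpy).buttonsBackToNormal();
Mockito.doReturn(false).when(encoderCompositeSpy).isWatcherActivated();

encoderCompositeSpy.clientCommandEventHandlerProcessStop
        .onCommandPerformed(Mockito.mock(EncoderClientCommand.class));

Mockito.verify(encoderCompositeSpy).setWatcherActivated(false);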
I have created the JUnit 5 parameterized test below, using an ArgumentsSource to load the arguments for the test:
public class DemoModelValidationTest {

    public ParamsProvider paramsProvider;

    public DemoModelValidationTest() {
        try {
            paramsProvider = new ParamsProvider();
        }
        catch (Exception iaex) {
        }
    }

    @ParameterizedTest
    @ArgumentsSource(ParamsProvider.class)
    void testAllConfigurations(int configIndex, String a) throws Exception {
        paramsProvider.executeSimulation(configIndex);
    }
}
and the ParamsProvider class looks like below:
public class ParamsProvider implements ArgumentsProvider {

    public static final String modelPath = System.getProperty("user.dir") + File.separator + "demoModels";

    YAMLDeserializer deserializedYAML;
    MetaModelToValidationModel converter;
    ValidationRunner runner;
    List<Configuration> configurationList;
    List<Arguments> listOfArguments;

    public ParamsProvider() throws Exception {
        configurationList = new ArrayList<>();
        listOfArguments = new LinkedList<>();

        deserializedYAML = new YAMLDeserializer(modelPath);
        deserializedYAML.load();

        converter = new MetaModelToValidationModel(deserializedYAML);
        runner = converter.convert();
        configurationList = runner.getConfigurations();

        for (int i = 0; i < configurationList.size(); i++) {
            listOfArguments.add(Arguments.of(i, configurationList.get(i).getName()));
        }
    }

    public void executeSimulation(int configListIndex) throws Exception {
        final Configuration config = runner.getConfigurations().get(configListIndex);
        runner.run(config);
        runner.getReporter().consolePrintReport();
    }

    @Override
    public Stream<? extends Arguments> provideArguments(ExtensionContext context) {
        return listOfArguments.stream().map(Arguments::of);
        // return Stream.of(Arguments.of(0, "Actuator Power"), Arguments.of(1, "Error Logging"));
    }
}
In the provideArguments() method, the commented-out code works fine, but the first line,
listOfArguments.stream().map(Arguments::of)
produces the following error:
org.junit.platform.commons.PreconditionViolationException: Configuration error: You must configure at least one set of arguments for this @ParameterizedTest
I am not sure whether I have a casting problem with the stream in the provideArguments() method, but I guess it somehow cannot map the elements of listOfArguments into a stream that ultimately takes the following form:
Stream.of(Arguments.of(0, "Actuator Power"), Arguments.of(1, "Error Logging"))
Am I missing a proper stream mapping of listOfArguments?
provideArguments(…) is called before your test is invoked.
Your ParamsProvider class is instantiated by JUnit. Whatever you're doing in deserializeAndCreateValidationRunnerInstance should be done in the ParamsProvider constructor.
Also, you're already wrapping the values from the deserialised configurations in Arguments, and then you're double-wrapping them in provideArguments.
Do this:
@Override
public Stream<? extends Arguments> provideArguments(ExtensionContext context) {
    return listOfArguments.stream();
}
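As a side note on the double-wrapping point, here is a small hedged illustration (not taken from the question's code): each element of listOfArguments is already an Arguments, so mapping Arguments::of over the stream wraps every entry a second time.

// Sketch: listOfArguments already holds Arguments.of(index, name) entries.
List<Arguments> listOfArguments = List.of(Arguments.of(0, "Actuator Power"));

// Wraps each element again: Arguments.of(Arguments.of(0, "Actuator Power")) is a
// one-element argument set holding an Arguments object instead of (int, String).
Stream<? extends Arguments> doubleWrapped = listOfArguments.stream().map(Arguments::of);

// Streaming the list directly keeps the original (int, String) argument sets.
Stream<? extends Arguments> correct = listOfArguments.stream();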
I'm trying to test the following class (I've left out the implementation)
public class UTRI implements UTR {
    public void runAsUser(String userId, Runnable r);
}
This is the way I would use it:
UTRI.runAsUser("User1", new Runnable() {
    public void run() {
        // do whatever needs to be done here.
    }
});
The problem is, I don't know how to use EasyMock to test functions that return void. That, and I'm also not too familiar with testing in general (right out of school!). Can someone explain what I need to do to approach this? I was thinking about making the UTRI a mock and calling expectLastCall() after that, but realistically, I'm not sure.
public class UTRITest {

    UTRI utri = new UTRI();

    @Test
    public void testRunAsUser() {
        // Create Mocks
        Runnable mockRunnable = EasyMock.createMock(Runnable.class);

        // Set Expectations
        mockRunnable.run();
        EasyMock.expectLastCall().once();
        EasyMock.replay(mockRunnable);

        // Call the method under test
        utri.runAsUser("RAMBO", mockRunnable);

        // Verify if run was called on Runnable!!
        EasyMock.verify(mockRunnable);
    }
}
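For completeness, a hedged sketch of what UTRI.runAsUser might look like (the actual implementation was left out of the question), just to make explicit what the test above verifies:

public class UTRI implements UTR {
    public void runAsUser(String userId, Runnable r) {
        // ... switch to userId's security context here (details omitted in the question) ...
        r.run(); // this is the call recorded with expectLastCall() and checked by verify()
    }
}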
Currently, openGoogle() does get called for each test case with the correct parameters. The problem is that setBrowser does not appear to be working properly. It sets the browser the first time and completes the test successfully. However, when openGoogle() is invoked the second time, it continues to use the first browser instead of the newly specified one.
using NFramework = NUnit.Framework;
...

[NFramework.TestFixture]
public class SampleTest : FluentAutomation.FluentTest
{
    string path;
    private Action<TinyIoCContainer> currentRegistration;

    public TestContext TestContext { get; set; }

    [NFramework.SetUp]
    public void Init()
    {
        FluentAutomation.Settings.ScreenshotOnFailedExpect = true;
        FluentAutomation.Settings.ScreenshotOnFailedAction = true;
        FluentAutomation.Settings.DefaultWaitTimeout = TimeSpan.FromSeconds(1);
        FluentAutomation.Settings.DefaultWaitUntilTimeout = TimeSpan.FromSeconds(30);
        FluentAutomation.Settings.MinimizeAllWindowsOnTestStart = false;
        FluentAutomation.Settings.ScreenshotPath = path = "C:\\ScreenShots";
    }

    [NFramework.Test]
    [NFramework.TestCase(SeleniumWebDriver.Browser.Firefox)]
    [NFramework.TestCase(SeleniumWebDriver.Browser.InternetExplorer)]
    public void openGoogle(SeleniumWebDriver.Browser browser)
    {
        setBrowser(browser);
        I.Open("http://www.google.com/");
        I.WaitUntil(() => I.Expect.Exists("body"));
        I.Enter("Unit Testing").In("input[name=q]");
        I.TakeScreenshot(browser + "EnterText");
        I.Click("button[name=btnG]");
        I.WaitUntil(() => I.Expect.Exists(".mw"));
        I.TakeScreenshot(browser + "ClickSearch");
    }

    public SampleTest()
    {
        currentRegistration = FluentAutomation.Settings.Registration;
    }

    private void setBrowser(SeleniumWebDriver.Browser browser)
    {
        switch (browser)
        {
            case SeleniumWebDriver.Browser.InternetExplorer:
            case SeleniumWebDriver.Browser.Firefox:
                FluentAutomation.SeleniumWebDriver.Bootstrap(browser);
                break;
        }
    }
}
Note: doing it the way shown below DOES work correctly, opening a separate browser for each test.
public class SampleTest : FluentAutomation.FluentTest
{
    string path;
    private Action currentRegistration;

    public TestContext TestContext { get; set; }

    private void ie()
    {
        FluentAutomation.SeleniumWebDriver.Bootstrap(FluentAutomation.SeleniumWebDriver.Browser.InternetExplorer);
    }

    private void ff()
    {
        FluentAutomation.SeleniumWebDriver.Bootstrap(FluentAutomation.SeleniumWebDriver.Browser.Firefox);
    }

    public SampleTest()
    {
        //ff
        FluentAutomation.SeleniumWebDriver.Bootstrap();
        currentRegistration = FluentAutomation.Settings.Registration;
    }

    [TestInitialize]
    public void Initialize()
    {
        FluentAutomation.Settings.ScreenshotOnFailedExpect = true;
        FluentAutomation.Settings.ScreenshotOnFailedAction = true;
        FluentAutomation.Settings.DefaultWaitTimeout = TimeSpan.FromSeconds(1);
        FluentAutomation.Settings.DefaultWaitUntilTimeout = TimeSpan.FromSeconds(30);
        FluentAutomation.Settings.MinimizeAllWindowsOnTestStart = false;
        path = TestContext.TestResultsDirectory;
        FluentAutomation.Settings.ScreenshotPath = path;
    }

    [TestMethod]
    public void OpenGoogleIE()
    {
        ie();
        openGoogle("IE");
    }

    [TestMethod]
    public void OpenGoogleFF()
    {
        ff();
        openGoogle("FF");
    }

    private void openGoogle(string browser)
    {
        I.Open("http://www.google.com/");
        I.WaitUntil(() => I.Expect.Exists("body"));
        I.Enter("Unit Testing").In("input[name=q]");
        I.TakeScreenshot(browser + "EnterText");
        I.Click("button[name=btnG]");
        I.WaitUntil(() => I.Expect.Exists(".mw"));
        I.TakeScreenshot(browser + "ClickSearch");
    }
}
Dev branch: The latest bits in the Dev branch play nicely with NUnit's parameterized test cases in my experience.
Just move the Bootstrap call inside the testcase itself and be sure that you manually call I.Dispose() at the end. This allows for proper browser creation when run in this context.
Here is an example that you should be able to copy/paste and run, if you pull latest from GitHub on the dev branch.
[TestCase(FluentAutomation.SeleniumWebDriver.Browser.InternetExplorer)]
[TestCase(FluentAutomation.SeleniumWebDriver.Browser.Chrome)]
public void CartTest(FluentAutomation.SeleniumWebDriver.Browser browser)
{
    FluentAutomation.SeleniumWebDriver.Bootstrap(browser);
    I.Open("http://automation.apphb.com/forms");

    I.Select("Motorcycles").From(".liveExample tr select:eq(0)"); // Select by value/text
    I.Select(2).From(".liveExample tr select:eq(1)"); // Select by index
    I.Enter(6).In(".liveExample td.quantity input:eq(0)");
    I.Expect.Text("$197.70").In(".liveExample tr span:eq(1)");

    // add second product
    I.Click(".liveExample button:eq(0)");
    I.Select(1).From(".liveExample tr select:eq(2)");
    I.Select(4).From(".liveExample tr select:eq(3)");
    I.Enter(8).In(".liveExample td.quantity input:eq(1)");
    I.Expect.Text("$788.64").In(".liveExample tr span:eq(3)");

    // validate totals
    I.Expect.Text("$986.34").In("p.grandTotal span");

    // remove first product
    I.Click(".liveExample a:eq(0)");

    // validate new total
    I.WaitUntil(() => I.Expect.Text("$788.64").In("p.grandTotal span"));

    I.Dispose();
}
It should find its way to NuGet in the next release, which I'm hoping happens this week.
NuGet v2.0: Currently only one call to Bootstrap is supported per test. In v1 we had built-in support for running the same test against all the browsers supported by a provider but found that users preferred to split it out into multiple tests.
The way I manage it with v2 is to have a 'Base' TestClass that has the TestMethods in it. I then extend that once per browser I want to target, and override the constructor to call the appropriate Bootstrap method.
A bit more verbose but very easy to manage.
I'm trying to automate a user scenario that involves two websites with no common base URL. How can I achieve this? Right now I have unsuccessfully tried altering global variables, but they are reset for each test.
public $check = true;

protected function setUp() {
    $this->setBrowser("*googlechrome");
    if ($this->check == true)
        $this->setBrowserUrl("SITE A");
    else
        $this->setBrowserUrl("SITE B");
    $this->setPort(4444);
    $this->setHost("0.0.0.0");
}

public function testA() { /* requires SITE A, sets check to false */ }
public function testB() { /* requires SITE B */ }
Your code won't work since setUp() is executed before every test case is run.
Why not be more explicit about what you're trying to do? Try the following:
private $sites = array('A' => 'a.com', 'B' => 'b.com');

protected function setUp() {
    $this->setBrowser("*googlechrome");
    $this->setPort(4444);
    $this->setHost("0.0.0.0");
}

public function testA()
{
    $this->useSite('A');
}

public function testB()
{
    $this->useSite('B');
}

private function useSite($site)
{
    $this->setBrowserUrl($this->sites[$site]);
}
The following TestNG (6.3) test case generates the error "Invalid context for the recording of expectations"
@Listeners({ Initializer.class })
public final class ClassUnderTestTest {

    private ClassUnderTest cut;

    @SuppressWarnings("unused")
    @BeforeMethod
    private void initialise() {
        cut = new ClassUnderTest();
    }

    @Test
    public void doSomething() {
        new Expectations() {
            MockedClass tmc;
            {
                tmc.doMethod("Hello"); result = "Hello";
            }
        };

        String result = cut.doSomething();
        assertEquals(result, "Hello");
    }
}
The class under test is below.
public class ClassUnderTest {

    MockedClass service = new MockedClass();
    MockedInterface ifce = new MockedInterfaceImpl();

    public String doSomething() {
        return (String) service.doMethod("Hello");
    }

    public String doSomethingElse() {
        return (String) ifce.testMethod("Hello again");
    }
}
I am making the assumption that, because I am using the @Listeners annotation, I do not require the -javaagent command line argument. This assumption may be wrong...
Can anyone point out what I have missed?
The JMockit-TestNG Initializer must run once for the whole test run, so using #Listeners on individual test classes won't work.
Instead, simply upgrade to JMockit 0.999.11, which works transparently with TestNG 6.2+, without any need to specify a listener or the -javaagent parameter (unless running on JDK 1.5).
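For reference, a hedged sketch of what the same test could look like once JMockit 0.999.11+ is on the classpath, with no @Listeners and no -javaagent (per the answer above); declaring the mocked type as an @Mocked field is the usual style:

public final class ClassUnderTestTest {

    @Mocked MockedClass tmc; // mocks every instance, including the one created inside ClassUnderTest

    private ClassUnderTest cut;

    @BeforeMethod
    private void initialise() {
        cut = new ClassUnderTest();
    }

    @Test
    public void doSomething() {
        new Expectations() {{
            tmc.doMethod("Hello"); result = "Hello";
        }};

        assertEquals(cut.doSomething(), "Hello");
    }
}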