Invalid place to record Expectations - jmockit

I'm using JMockit 1.14 with JUnit 4.
private void method()
{
    new NonStrictExpectations()
    {
        {
            firstObject.getLock();
            returns(new Lock());

            secondObject.getDetails();
            result = secondObjectDetails;

            secondObject.isAvailable();
            result = true;
        }
    };
}
Is there anything obviously wrong with my code?

I resolved a similar issue (using Android Studio, JUnit 4.12 and JMockit 1.20) by adding
@RunWith(JMockit.class)
to the test class, along with a couple of import changes.
See JMockit documentation: http://jmockit.org/tutorial/Introduction.html#runningTests
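A minimal sketch of what the annotated test class might look like under that fix (the class name is made up, and FirstObject, SecondObject and Lock stand in for the question's types, assumed to be @Mocked fields):

import mockit.Mocked;
import mockit.NonStrictExpectations;
import mockit.integration.junit4.JMockit;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(JMockit.class) // lets JMockit manage @Mocked fields and record expectations
public class SomeClassTest
{
    @Mocked FirstObject firstObject;   // placeholder collaborators
    @Mocked SecondObject secondObject;

    @Test
    public void recordsExpectations()
    {
        new NonStrictExpectations()
        {
            {
                firstObject.getLock();
                result = new Lock();
            }
        };
        // ... exercise the code under test here ...
    }
}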

A proper way to conditionally ignore tests in JUnit5?

I've been spotting a lot of these in my project's tests:
@Test
void someTest() throws IOException {
    if (checkIfTestIsDisabled(SOME_FLAG)) return;
    // ... the test starts now
Is there an alternative to adding a line at the beginning of each test? For example, in JUnit 4 there is an old project that provides a @RunIf(somecondition) annotation; is there something similar in JUnit 5?
Thank you for your attention.
Tests can be disabled with @DisabledIf and a custom condition.
@Test
@DisabledIf("customCondition")
void disabled() {
    // ...
}

boolean customCondition() {
    return true;
}
See also the user guide about custom conditions.
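As a sketch of how the flag check from the question could move into such a condition (SomeFeatureTest and the system-property lookup are made up; checkIfTestIsDisabled(SOME_FLAG) is whatever the question's helper does), the condition can also be a static method in the test class:

import java.io.IOException;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.condition.DisabledIf;

class SomeFeatureTest {

    // Hypothetical stand-in for the question's checkIfTestIsDisabled(SOME_FLAG).
    static boolean someFlagIsSet() {
        return Boolean.getBoolean("tests.someFlag.disabled");
    }

    @Test
    @DisabledIf("someFlagIsSet")  // the test is reported as skipped, not silently passed
    void someTest() throws IOException {
        // ... the test starts now, without the early return
    }
}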

MockK: mock method returning the Future interface

Hello, I have the following problem.
I am trying to mock a call on an injected executor
so that it executes the given Callable immediately. Later in the test, the arguments of methods called inside the Callable are captured and asserted. See the mock example below.
Maven 3, JDK 10-slim, MockK 1.9
// this task should be executed by the executor
private val taskCaptor = slot<Callable<Boolean>>()
private val asyncTaskExecutor: LazyTraceThreadPoolTaskExecutor = mockk<LazyTraceThreadPoolTaskExecutor>().apply {
    // this was my 1st try, but the result was java.lang.InstantiationError: java.util.concurrent.Callable
    // every { submit(capture(taskCaptor)) } returns CompletableFuture.completedFuture(taskCaptor.captured.call())
    // every { submit(any()) } returns CompletableFuture.completedFuture(true)
    every { submit(ofType(Callable::class)) } returns FutureTask<Boolean>(Callable { true })
}
Later on I changed the Callable interface to an implementation that I created in the tested class, and I got another exception.
With the same code as above the exception was
java.lang.InstantiationError: java.util.concurrent.Future
which is the return type of the submit method.
Is my approach to mocking wrong?
Not sure if this is the best way to implement it, but it worked for me this way:
private val taskCaptor = slot<Callable<Boolean>>()
private val mockFuture = mockk<Future<Boolean>>()   // not shown in the original answer: the Future that submit(...) will return
private val asyncTaskExecutor: LazyTraceThreadPoolTaskExecutor = mockk<LazyTraceThreadPoolTaskExecutor>().apply {
    every { submit(ofType(Callable::class)) } returns mockFuture
    every { mockFuture.get() } returns true
}
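An alternative that keeps the original capturing idea is to use answers instead of returns, so the captured Callable is only executed when submit is actually called rather than while the stub is being recorded (a sketch, assuming the same LazyTraceThreadPoolTaskExecutor field from the question):

import io.mockk.every
import io.mockk.mockk
import io.mockk.slot
import java.util.concurrent.Callable
import java.util.concurrent.CompletableFuture

private val taskCaptor = slot<Callable<Boolean>>()
private val asyncTaskExecutor: LazyTraceThreadPoolTaskExecutor =
    mockk<LazyTraceThreadPoolTaskExecutor>().apply {
        // `answers` is evaluated lazily per call, so taskCaptor.captured is only read
        // after a Callable has actually been captured; `returns` evaluated it eagerly.
        every { submit(capture(taskCaptor)) } answers {
            CompletableFuture.completedFuture(taskCaptor.captured.call())
        }
    }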

Setting up Selenium WebDriver for parallel execution

I am trying to execute a large suite of Selenium tests via the xUnit console runner in parallel.
These have executed and I see 3 Chrome windows open; however, the first SendKeys command simply executes 3 times against one window, resulting in test failures.
I have registered my driver in an object container before each scenario, as below:
[Binding]
public class WebDriverSupport
{
    private readonly IObjectContainer _objectContainer;

    public WebDriverSupport(IObjectContainer objectContainer)
    {
        _objectContainer = objectContainer;
    }

    [BeforeScenario]
    public void InitializeWebDriver()
    {
        var driver = GetWebDriverFromAppConfig();
        _objectContainer.RegisterInstanceAs<IWebDriver>(driver);
    }
}
And then call the driver in my SpecFlow step definitions as:
_driver = (IWebDriver)ScenarioContext.Current.GetBindingInstance(typeof(IWebDriver));
ScenarioContext.Current.Add("Driver", _driver);
However this has made no difference, and it seems as if my tests are trying to execute all commands against one driver.
Can anyone advise where I have gone wrong?
You shouldn't be using ScenarioContext.Current in a parallel execution context. If you're injecting the driver through _objectContainer.RegisterInstanceAs, you will receive it through constructor injection in your steps class, like so:
public MyScenarioSteps(IWebDriver driver)
{
    _driver = driver;
}
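For context, a fuller (hypothetical) steps class built around that constructor might look like this; MyScenarioSteps and the page interaction are made up, only the injection pattern matters:

using OpenQA.Selenium;
using TechTalk.SpecFlow;

[Binding]
public class MyScenarioSteps
{
    private readonly IWebDriver _driver;   // supplied per scenario by SpecFlow's object container

    public MyScenarioSteps(IWebDriver driver)
    {
        _driver = driver;
    }

    [When(@"I open the home page")]
    public void WhenIOpenTheHomePage()
    {
        _driver.Navigate().GoToUrl("https://example.org/");
    }
}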
More info:
https://github.com/techtalk/SpecFlow/wiki/Parallel-Execution#thread-safe-scenariocontext-featurecontext-and-scenariostepcontext
https://github.com/techtalk/SpecFlow/wiki/Context-Injection
In my opinion this is horribly messy.
This might not be an answer, but is too big for a comment.
Why are you using the IObjectContainer if you are just getting the driver from the current ScenarioContext and not injecting it via the DI mechanism? I would try this:
[Binding]
public class WebDriverSupport
{
    [BeforeScenario]
    public void InitializeWebDriver()
    {
        var driver = GetWebDriverFromAppConfig();
        ScenarioContext.Current.Add("Driver", driver);
    }
}
then in your steps:
_driver = ScenarioContext.Current.Get<IWebDriver>("Driver");
As long as GetWebDriverFromAppConfig returns a new instance you should be ok...

Grails 2.0 integration test pollution?

So I have a small integration test class that houses 5 tests in total. Running that class on its own results in all tests passing; however, running my entire test suite results in 4 of the 5 tests failing.
I've just recently upgraded from Grails 1.3.7 to 2.0, and I switched from HSQLDB to H2.
Does anyone have any pointers on where I should be looking in order to fix this (test-pollution) problem?
Domain model
Integration test:
class SeriesIntegrationTests extends GrailsUnitTestCase {

    Series series
    Episode episode

    protected void setUp() {
        super.setUp()
        series = new Series(ttdbId: 2348);
        episode = new Episode(ttdbId: 2983, season: 0, episodeNumber: 0, series: series);
    }

    protected void tearDown() {
        super.tearDown()
    }

    void testCreateSeries() {
        series.save()
        assertFalse("should not have validation errors : $series.errors", series.hasErrors())
        assertEquals("should be one series stored in db", 1, Series.count())
    }

    void testCreateEpisode() {
        series.save()
        episode.save()
        assertFalse("should not have validation errors : $episode.errors", episode.hasErrors())
        assertEquals("should be one episode stored in db", 1, Episode.count())
    }

    void testCreateSeriesAndAddEpisode() {
        series.addToEpisodes(episode)
        series.save(flush: true)
        series.refresh()
        assertEquals("series should contain one episode", 1, series.episodes.size())
    }

    void testDeleteSeriesAndCascadeToEpisode() {
        series.addToEpisodes(episode)
        series.save(flush: true)
        series.delete(flush: true)
        assertEquals(0, Episode.count())
        assertEquals(0, Series.count())
    }

    void testDeleteSeriesAndCascadeToBackdropImage() {
        series.backdrop = new Image();
        series.backdrop.binaryData = new byte[0]
        series.save(flush: true)
        assertFalse(series.hasErrors())
        assertEquals(1, Image.count())
        series.delete(flush: true)
        assertEquals(0, Image.count())
    }
}
I had a similar problem when moving from 1.3.7 to 2.0. The integration tests were ok when launched with
grails test-app --integration
but were failing when launched with
grails test-app
I fixed everything by converting the unit tests to Grails 2.0 tests (using annotations).
My solution was to upgrade all the unit tests to the Grails 2.0 way of doing tests. When this was done, every test passed. So it seems that the unit tests somehow polluted the integration tests, but only on certain hardware configurations.
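For reference, a rough sketch of the Grails 2.0 annotation style for a unit test (the class name and assertions are illustrative only, reusing the Series and Episode domain classes from the question; the mixin annotations replace extending GrailsUnitTestCase):

import grails.test.mixin.TestFor
import grails.test.mixin.Mock

@TestFor(Series)          // sets up the unit testing environment for the Series domain class
@Mock([Series, Episode])  // provides an in-memory GORM implementation for these classes
class SeriesTests {

    void testCreateSeries() {
        def series = new Series(ttdbId: 2348)
        series.save()

        assert !series.hasErrors()
        assert Series.count() == 1
    }
}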

Using Rhino Mocks, why does invoking a mocked method from a property setter during test initialization return Expected call #1, Actual call #0?

I currently have a test which tests the presenter I have in the MVP model. On my presenter I have a property which will call into my View, which in my test is mocked out. In the Initialization of my test, after I set my View on the Presenter to be the mocked View, I set my property on the Presenter which will call this method.
In my test I do not have an Expect.Call for the method I invoke, yet when I run it I get this Rhino Mocks exception:
Rhino.Mocks.Exceptions.ExpectationViolationException: IView.MethodToInvoke(); Expected #1, Actual #0..
From what I understand about Rhino Mocks, as long as I am invoking the mock outside the Expecting block it should not be recording this. I would expect the test to pass. Is there a reason it is not passing?
Below is some code to show my setup.
public class Presenter
{
    public IView View;

    public Presenter(IView view)
    {
        View = view;
    }

    private int _property;
    public int Property
    {
        get { return _property; }
        set
        {
            _property = value;
            View.MethodToInvoke();
        }
    }
}
... Test Code Below ...
[TestInitialize]
public void Initilize()
{
    _mocks = new MockRepository();
    _view = _mocks.StrictMock<IView>();
    _presenter = new Presenter(_view);
    _presenter.Property = 1;
}

[TestMethod]
public void Test()
{
    Rhino.Mocks.With.Mocks(_mocks).Expecting(delegate
    {
    }).Verify(delegate
    {
        _presenter.SomeOtherMethod();
    });
}
Why in the world would you want to test the same thing each time a test is run?
If you want to test that a specific thing happens, you should check that in a single test.
The pattern you are using now implies that you need to
- set up prerequisites for testing
- do behavior
- check that behavior is correct
and then repeat that several times in one test
You need to start testing one thing per test; that helps make the tests clearer and makes it easier to use the AAA syntax.
There are several things to discuss here, but it would certainly be clearer if you did something like this:
[TestMethod]
public void ShouldCallMethodToInvokeWhenSettingProperty()
{
    var viewMock = MockRepository.GenerateMock<IView>();
    var presenter = new Presenter(viewMock);

    presenter.Property = 1;

    viewMock.AssertWasCalled(view => view.MethodToInvoke());
}
Read up more on Rhino Mocks 3.5 syntax here: http://ayende.com/Wiki/Rhino+Mocks+3.5.ashx
What exactly are you trying to test in the Test method?
You should try to avoid using strict mocks.
I suggest using Rhino Mocks' AAA syntax (Arrange, Act, Assert).
The problem lay in me not understanding the record/verify cycle that goes on with strict mocks. To fix the issue I was having, this is how I changed my TestInitialize method. It basically does a quick check of the initial state I'm setting up for all my tests.
[TestInitialize]
public void Initilize()
{
    _mocks = new MockRepository();
    _view = _mocks.StrictMock<IView>();
    _presenter = new Presenter(_view);

    // Record the call the property setter is expected to make, replay,
    // trigger the call, verify it, then go back to record mode for the test itself.
    Expect.Call(delegate { _presenter.View.MethodToInvoke(); });
    _mocks.ReplayAll();

    _presenter.Property = 1;

    _mocks.VerifyAll();
    _mocks.BackToRecordAll();
}