Is there a way to defer parameter resolution? - junit5

(I'm reasonably sure the answer is "no", but I want to make sure.)
In JUnit 5 you can write an extension that is an implementation of ParameterResolver. Before your test runs, if the method in question has parameters, then an extension that implements ParameterResolver can return the object suitable as an argument for that parameter.
You can also write an extension that is an implementation of InvocationInterceptor, that is in charge of intercepting a test method's execution. You can get any arguments (such as those resolved by ParameterResolvers), but it appears you cannot change them.
In terms of execution order, if there are relevant parameters, then a ParameterResolver will "fire" first, and then any InvocationInterceptors will "fire" next.
(Lastly, if your test method declares parameters but there are no ParameterResolvers to resolve them, JUnit fails with a ParameterResolutionException before the test even runs.)
Putting this all together:
Consider the case where a parameter can't properly be resolved until the setup an interceptor performs prior to execution is complete:
What is the best way, if there is one, to have all of the following:
A parameter that conceivably only the interceptor could resolve
Deferred resolution of that parameter (i.e. the actual parameter value is not sought by the JUnit internals until interception time, so that the interceptor can resolve it just-in-time before calling proceed())
…?
(In my very concrete case, I got lucky: the parameter I'm interested in is an interface, so I "resolve" it to a dummy implementation, and then, at interception time, "fill" the dummy implementation with a delegate that does the real work. I can't think of a better way with the existing JUnit 5 toolkit.)
(I can almost get there if ReflectiveInvocationContext would allow me to set its arguments: my resolveParameter implementation could return null and my interceptor could replace the null reference it found in the arguments with an appropriate non-null argument just-in-time.)
(I am also at least aware of the ExecutableInvoker interface reachable from the ExtensionContext, but I'm unclear how it would help in this scenario, since parameter resolution happens before interception.)
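The dummy-implementation workaround described in the question can be sketched in plain Java, independent of the JUnit extension APIs. The names here (Greeter, DelegatingGreeter) are invented for illustration; the idea is that the resolver hands out a mutable delegating wrapper, and the interceptor installs the real delegate just-in-time:

```java
// A hypothetical parameter type the test method declares.
interface Greeter {
    String greet(String name);
}

// What resolveParameter would return: a wrapper whose delegate
// can be installed later, at interception time.
class DelegatingGreeter implements Greeter {
    private Greeter delegate;

    void setDelegate(Greeter delegate) {
        this.delegate = delegate;
    }

    @Override
    public String greet(String name) {
        if (delegate == null) {
            throw new IllegalStateException("delegate not installed yet");
        }
        return delegate.greet(name);
    }
}

public class DeferredResolutionSketch {
    public static void main(String[] args) {
        // ParameterResolver phase: only the empty wrapper exists.
        DelegatingGreeter wrapper = new DelegatingGreeter();

        // InvocationInterceptor phase: the real implementation is now
        // available, so it is installed before proceed() is called.
        wrapper.setDelegate(name -> "Hello, " + name);

        // Test-body phase: the wrapper now behaves like the real thing.
        System.out.println(wrapper.greet("JUnit"));
    }
}
```

In an actual extension, resolveParameter would return the wrapper and the interceptor would locate it among the invocation's arguments and call setDelegate before proceeding.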

Related

How to use Kotlin to inspect all calls of a particular function?

In our codebase we have a third-party library method that behaves in an unexpected manner when you pass null to it. To help prevent misuse of the method, I would like to write a test that walks through the codebase, finds all calls to the method, and makes sure the type of the single parameter passed in is not nullable.
Is this possible using Kotlin reflection? Is this possible in Kotlin at all? I can get to the point where I list out all the functions in the codebase, but am stumped on how to continue!

Test assertions inside test doubles?

Is it good practice to write an EXPECT(something) inside a test double's method (e.g. a spy or mock), to ensure the test double is used in a specific way during testing?
If not, what would be a preferred solution?
If you write a true Mock (as defined in xUnit Test Patterns), this is exactly what defines that kind of test double: it is set up with expectations about how it will be called, and therefore also includes the assertions. That is also how mocking frameworks produce mock objects under the hood. See the definition from xUnit Test Patterns:
How do we implement Behavior Verification for indirect outputs of the SUT?
How can we verify logic independently when it depends on indirect inputs from other software components?
Replace an object the system under test (SUT) depends on with a test-specific object that verifies it is being used correctly by the SUT.
Here, indirect outputs means that you don't verify that the method under test returns some value, but that something happens inside the method being tested that is behaviour relevant to its callers. For instance, that executing the method led to an expected important action, like sending an email or sending a message somewhere. The mock is the doubled dependency that itself verifies this really happened, i.e. that the method under test really called the dependency's method with the expected parameter(s).
A spy, on the other hand, simply records things of interest that happen to the doubled dependency. Interrogating the spy about what happened (and sometimes how often), and then judging whether that was correct by asserting on the expected events, is the responsibility of the test itself. So a mock is always also a spy, with the addition of the assertion (expectation) logic. See also Uncle Bob's blog post The Little Mocker for a great explanation of the different types of test doubles.
TL;DR
Yes, the mock includes the expectations (assertions) itself; the spy just records what happened and lets the test itself interrogate it and assert on the expected events.
Mocking frameworks also implement mocks as explained above, since they all follow the xUnit patterns.
mock.Verify(p => p.Send(It.IsAny<string>()));
If you look at the above Moq example (C#), you see that the mock object itself is configured to perform the expected verification in the end. The framework makes sure that the mock's verification methods are executed. A hand-written mock would be set up first, and then you would call the verification method on the mock object yourself.
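A minimal hand-written sketch of the spy/mock distinction, in plain Java (the Mailer interface and class names are invented for illustration): the spy only records calls and the test interrogates it afterwards, while the mock carries its own expectation and verifies itself.

```java
import java.util.ArrayList;
import java.util.List;

// The doubled dependency's interface (hypothetical).
interface Mailer {
    void send(String message);
}

// Spy: records what happened; the test itself asserts afterwards.
class MailerSpy implements Mailer {
    final List<String> sent = new ArrayList<>();

    @Override
    public void send(String message) {
        sent.add(message);
    }
}

// Mock: carries the expectation and performs the verification itself.
class MailerMock implements Mailer {
    private final String expected;
    private int calls;

    MailerMock(String expected) {
        this.expected = expected;
    }

    @Override
    public void send(String message) {
        calls++;
        if (!expected.equals(message)) {
            throw new AssertionError("expected " + expected + " but got " + message);
        }
    }

    void verify() {
        if (calls != 1) {
            throw new AssertionError("expected exactly one call, got " + calls);
        }
    }
}
```

With the spy, the test asserts on the recorded sent list after exercising the SUT; with the mock, the test simply calls verify() at the end, which is essentially what a mocking framework does for you.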
Generally, you want to put all EXPECT statements inside individual tests to make your code readable.
If you want to enforce certain things on your test stub/spy, it is probably better to use exceptions or static asserts, because your test usually treats them as a black box. If it uses them in an unintended way, your code will either not compile, or it will throw and give you the full stack trace, which also causes your test to fail (so you can catch the misuse).
For mocks, however, you have full control over the use and you can be very specific about how they are called and used inside each test. For example in Google test, using GMock matchers, you can say something like:
EXPECT_CALL(turtle, Forward(Ge(100)));
which means: expect Forward to be called on the mock object turtle with a parameter greater than or equal to 100. Any other value will cause the test to fail.
See this video for more examples on GMock matchers.
It is also very common to check general things in a test fixture (e.g. in SetUp or TearDown). For example, this sample from Google Test enforces that each test finishes within a certain amount of time, and the EXPECT statement lives in TearDown rather than in each individual test.

Can I invoke MethodHandle.invokeExact from ByteBuddy?

MethodHandle#invokeExact(Object...) is a strange method in Java.
Suppose I wanted to invoke this from ByteBuddy (using MethodCall.invoke() and the like). Is there a way to do this without incurring a runtime exception? (Please bear in mind in any answers to this question that although it looks like it takes an ordinary Object array, MethodHandle#invokeExact(Object...) treats that argument very unusually.)
Those methods have a polymorphic signature: the compiler records the actual argument and return types at the call site in the class file, rather than the declared (Object...) signature, and the arguments must match that recorded signature exactly. Unfortunately, this corner case of method invocation is not supported in Byte Buddy at this time.
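To see what "polymorphic signature" means outside Byte Buddy, in plain JDK code: the first call site below is compiled against the exact erased signature (String)int rather than (Object[])Object, and the cast on the return value is part of that signature. A call site whose recorded signature does not match the handle's type exactly fails at runtime.

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.invoke.WrongMethodTypeException;

public class InvokeExactDemo {
    public static void main(String[] args) throws Throwable {
        MethodHandle length = MethodHandles.lookup()
                .findVirtual(String.class, "length", MethodType.methodType(int.class));

        // The (int) cast and the String argument type are recorded in the
        // class file as the call-site signature (String)int, which matches
        // the handle's exact type.
        int n = (int) length.invokeExact("hello");
        System.out.println(n); // prints 5

        // This call site, by contrast, records the signature (Object)int,
        // which does not match the handle's exact type, so it throws.
        try {
            Object s = "hello";
            int bad = (int) length.invokeExact(s);
        } catch (WrongMethodTypeException expected) {
            System.out.println("exact signature mismatch, as expected");
        }
    }
}
```

Generating such a call site from bytecode means emitting the precise descriptor for the invokevirtual instruction, which is what a MethodCall-style DSL would have to expose.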

Why have both a return-value and a side-effect when raising an error for a mock?

A while back I found the following solution-pattern here on StackOverflow for raising an error from a mocked function (I cannot find the original link anymore, sorry):
def _mock_raises_error(my_mock, error_type, output):
    my_mock.return_value = mock.Mock()
    my_mock.side_effect = error_type(output)

# for example
with mock.patch('mymodule.interface.function') as mock_function:
    _mock_raises_error(mock_function, Exception, 'Some error-message')
This works as intended, so I always have used this pattern.
Recently a colleague of mine asked why this definition sets both a return_value and a side_effect, and why that is needed. To my shame I could not come up with the correct answer (only that I had copied it from StackOverflow, which is not an explanation of why it is correct).
So now my question (to be able to explain it in the future) is why isn't giving the side_effect enough? What does the return_value add?
Let's start with the documentation for unittest.mock.
side_effect: A function to be called whenever the Mock is called. See the side_effect attribute. Useful for raising exceptions or dynamically changing return values. The function is called with the same arguments as the mock, and unless it returns DEFAULT, the return value of this function is used as the return value.
Alternatively side_effect can be an exception class or instance. In this case the exception will be raised when the mock is called.
return_value: The value returned when the mock is called. By default this is a new Mock (created on first access). See the return_value attribute.
One generally tests either with a return_value or with a side_effect. In my experience, I often test with return_value. This is for my standard case, when I'm trying to mock some functionality (such as obtaining the contents of a directory) without the need to make a function call (such as to the OS and to a directory which might change).
I use side_effect less often, usually when I'm testing the function's reaction to an exception (for example, when I'm handling a file-not-found exception).
For a function that tests for the existence of a file in a directory, I might want both the result (say, an empty list of files) and the side effect (say, a file-not-found exception). But generally I will test one or the other.
The OP's example only tests for the exception, so the side_effect is critical (the return_value is not actually needed, since the exception is raised before any value would be returned). If you were testing for both, you'd need both.

From a ByteBuddy-generated method, how do I set a (public) instance field in an object received as an argument to the return value of a MethodCall?

I am generating a class in ByteBuddy.
As part of one method implementation, I would like to set a (let's just say) public instance field in another object to the return value of a MethodCall invocation. (Keeping the example public means that access checks etc. are irrelevant.)
I thought I could use MethodCall#setsField(FieldDescription) to do this.
But from my prior question related to this I learned that MethodCall#setsField(FieldDescription) is intended to work only on fields of the instrumented type, and, looking at it now, I'm not entirely sure why or how I thought it was ever going to work.
So: is there a way for a ByteBuddy-generated method implementation to set an instance field of another object to the return value of a method invocation?
If it matters, the "instrumented method" (in ByteBuddy's terminology) accepts the object whose field I want to set as an argument. Naïvely I'd expect to be able to do something like:
MethodCall.invoke(someMethod).setsField(somePublicField).onArgument(2);
There may be problems here that I am not seeing but I was slightly surprised not to see this DSL option. (It may not exist for perfectly good reasons; I just don't know what they would be.)
This is not possible as of Byte Buddy 1.10.18; the mechanism was originally created to support getters/setters when defining beans, for example. That said, it would not be difficult to add. I think it would be easiest to allow any custom byte code to be dispatched as a consumer of the method call's result.
I will look into how this can be done, but as it is a new feature, it will take some time before I find room to do so. The change is tracked on GitHub.