@SuppressStaticInitializationFor partial mocking - testing

I have this weird case where I want to test "some" functionality without touching the rest... it's very hard for me to come up with a proper description, so I hope the code I present below is pretty much self-descriptive.
Suppose I have a class that keeps some strategies:
class TypeStrategy {
    private static final CreateConsumer CREATE_CONSUMER = new CreateConsumer();
    private static final ModifyConsumer MODIFY_CONSUMER = new ModifyConsumer();

    private static final Map<Type, Consumer<ConsumerContext>> MAP = Map.of(
            Type.CREATE, CREATE_CONSUMER,
            Type.MODIFY, MODIFY_CONSUMER
    );

    public static void consume(Type type, ConsumerContext context) {
        Optional.ofNullable(MAP.get(type))
                .orElseThrow(strategyMissing(type))
                .accept(context);
    }
}
The idea is very simple - some strategies are registered for a certain Type; the consume method simply looks up the strategy registered for the given type and invokes accept on it with the supplied ConsumerContext.
And now the problem: I very much want to test that all the strategies I care about are registered and I can invoke accept on them - that is literally all I want to test.
Usually, I would use @SuppressStaticInitializationFor on TypeStrategy and, using Whitebox::setInternalState, just put whatever I need into CREATE_CONSUMER and MODIFY_CONSUMER; but in this case I can't, because MAP would be skipped as well, and I really don't want that - all I care about are those two strategies, and I need MAP to stay as it is.
Besides some nasty refactoring that does get me roughly where I want to be, I am out of ideas on how to achieve this. In the best-case scenario I hoped that @SuppressStaticInitializationFor would support some "partial" skipping, where you could specify a filter on what exactly you want skipped, but that is not an option, really.
I could also test "everything" else in the chain of calls - that is, test everything that accept is supposed to do - but that adds close to 70 lines of mocking to this test, and it becomes a nightmare to see that it really only wants to test a very small piece.
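For reference, the usual approach I mean looks roughly like the sketch below (the fully qualified class name is just an example); it does not help here, because suppressing the initializer leaves MAP unpopulated as well:
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mockito;
import org.powermock.core.classloader.annotations.SuppressStaticInitializationFor;
import org.powermock.modules.junit4.PowerMockRunner;
import org.powermock.reflect.Whitebox;

@RunWith(PowerMockRunner.class)
@SuppressStaticInitializationFor("com.example.TypeStrategy") // hypothetical fully qualified name
public class TypeStrategyUsualApproachTest {

    @Test
    public void shouldDelegateToCreateConsumer() {
        CreateConsumer createConsumerMock = Mockito.mock(CreateConsumer.class);
        // inject the mock into the (uninitialized) static field
        Whitebox.setInternalState(TypeStrategy.class, "CREATE_CONSUMER", createConsumerMock);

        // ...but MAP was never initialized either, so consume() cannot find any strategy,
        // and the registration I actually care about is never exercised.
    }
}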

From your description it seems black-box testing is not an option, so perhaps we can rely on some white-box tests by mocking the constructors of your consumers, and verifying their interactions.
Below you can find a complete example extrapolated from your initial sample, including a possible option for .orElseThrow(strategyMissing(type)).
One important note/disclaimer: since we're leaving TypeStrategy intact, its static initialization (including the map) will still run. Thus, we need to pay special attention to the consumer mock instances: the same mock instances that end up in the map during the initial mocking phase must be available in all the tests, otherwise the verification will fail. So instead of creating mocks for each test, we create them once for all tests. While this is not recommended in unit testing (tests should be isolated and independent), I believe in this special case it's a decent trade-off one can live with.
import org.junit.BeforeClass;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;
import java.util.AbstractMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.Consumer;
import java.util.function.Supplier;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertThat;
import static org.junit.Assert.fail;
import static org.mockito.Mockito.*;
import static org.powermock.api.mockito.PowerMockito.whenNew;
// enable PowerMock magic
@RunWith(PowerMockRunner.class)
@PrepareForTest({MockitoTest.TypeStrategy.class})
public class MockitoTest {

    private static CreateConsumer createConsumerMock;
    private static ModifyConsumer modifyConsumerMock;

    // static initializer in TypeStrategy => mock everything once in the beginning to avoid
    // having new mocks for each test (otherwise "verify" will fail)
    @BeforeClass
    public static void setup() throws Exception {
        // mock the constructors to return mocks which we can later check for interactions
        createConsumerMock = mock(CreateConsumer.class);
        modifyConsumerMock = mock(ModifyConsumer.class);
        whenNew(CreateConsumer.class).withAnyArguments().thenReturn(createConsumerMock);
        whenNew(ModifyConsumer.class).withAnyArguments().thenReturn(modifyConsumerMock);
    }

    @Test
    public void shouldDelegateToCreateConsumer() {
        checkSpecificInteraction(Type.CREATE, createConsumerMock);
    }

    @Test
    public void shouldDelegateToModifyConsumer() {
        checkSpecificInteraction(Type.MODIFY, modifyConsumerMock);
    }

    private void checkSpecificInteraction(Type type, Consumer<ConsumerContext> consumer) {
        ConsumerContext expectedContext = new ConsumerContext();
        // invoke the object under test
        TypeStrategy.consume(type, expectedContext);
        // check interactions
        verify(consumer).accept(expectedContext);
    }

    @Test
    public void shouldThrowExceptionForUnsupportedConsumer() {
        ConsumerContext expectedContext = new ConsumerContext();
        // unsupported type mock
        Type unsupportedType = PowerMockito.mock(Type.class);
        when(unsupportedType.toString()).thenReturn("Unexpected");
        // PowerMock does not play well with "@Rule ExpectedException", use plain old try-catch
        try {
            // invoke the object under test
            TypeStrategy.consume(unsupportedType, expectedContext);
            // if no exception was thrown to this point, the test has failed
            fail("Should have thrown exception for unsupported consumers");
        } catch (Exception e) {
            assertThat(e.getMessage(), is("Type [" + unsupportedType + "] not supported"));
        }
    }

    /* production classes below */

    public static class TypeStrategy {
        private static final CreateConsumer CREATE_CONSUMER = new CreateConsumer();
        private static final ModifyConsumer MODIFY_CONSUMER = new ModifyConsumer();

        private static final Map<Type, Consumer<ConsumerContext>> MAP = Stream.of(
                new AbstractMap.SimpleEntry<>(Type.CREATE, CREATE_CONSUMER),
                new AbstractMap.SimpleEntry<>(Type.MODIFY, MODIFY_CONSUMER)
        ).collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));

        public static void consume(Type type, ConsumerContext context) {
            Optional.ofNullable(MAP.get(type))
                    .orElseThrow(strategyMissing(type))
                    .accept(context);
        }

        private static Supplier<IllegalArgumentException> strategyMissing(Type type) {
            return () -> new IllegalArgumentException("Type [" + type + "] not supported");
        }
    }

    public static class CreateConsumer implements Consumer<ConsumerContext> {
        @Override
        public void accept(ConsumerContext consumerContext) {
            throw new UnsupportedOperationException("Not implemented");
        }
    }

    public static class ModifyConsumer implements Consumer<ConsumerContext> {
        @Override
        public void accept(ConsumerContext consumerContext) {
            throw new UnsupportedOperationException("Not implemented");
        }
    }

    public enum Type {
        MODIFY, CREATE
    }

    public static class ConsumerContext {
    }
}

Related

Spring WebFlux (Flux): how to publish dynamically

I am new to reactive programming and Spring WebFlux. I want my App 1 to publish Server Sent Events through a Flux and my App 2 to listen to it continuously.
I want the Flux to publish on demand (e.g. when something happens). All the examples I found use Flux.interval to publish events periodically, and there seems to be no way to append to or modify the content of a Flux once it is created.
How can I achieve my goal? Or am I totally wrong conceptually?
Publish "dynamically" using FluxProcessor and FluxSink
One technique for supplying data to a Flux manually is to use the FluxProcessor#sink method, as in the following example:
@SpringBootApplication
@RestController
public class DemoApplication {

    final FluxProcessor processor;
    final FluxSink sink;
    final AtomicLong counter;

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    public DemoApplication() {
        this.processor = DirectProcessor.create().serialize();
        this.sink = processor.sink();
        this.counter = new AtomicLong();
    }

    @GetMapping("/send")
    public void test() {
        sink.next("Hello World #" + counter.getAndIncrement());
    }

    @RequestMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<ServerSentEvent> sse() {
        return processor.map(e -> ServerSentEvent.builder(e).build());
    }
}
Here, I created a DirectProcessor in order to support multiple subscribers listening to the data stream. I also applied FluxProcessor#serialize, which provides safe support for multiple producers (invocation from different threads without violating the Reactive Streams spec rules, especially rule 1.3). Finally, by calling "http://localhost:8080/send" we will see the message Hello World #1 (of course, only if you connected to "http://localhost:8080" beforehand).
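On the consuming side ("App 2"), a minimal sketch of a continuous listener could use WebClient (the base URL is an assumption; the SSE endpoint in the example above is mapped at the root path):
import org.springframework.http.MediaType;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;

public class SseClientSketch {

    public static void main(String[] args) throws InterruptedException {
        // assumed base URL of "App 1"
        Flux<String> events = WebClient.create("http://localhost:8080")
                .get()
                .accept(MediaType.TEXT_EVENT_STREAM)
                .retrieve()
                .bodyToFlux(String.class);

        // listen continuously and print each event as it arrives
        events.subscribe(System.out::println);

        // keep this demo JVM alive long enough to receive a few events
        Thread.sleep(60_000);
    }
}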
Update For Reactor 3.4
With Reactor 3.4 you have a new API called reactor.core.publisher.Sinks. The Sinks API offers a fluent builder for manually sending data, which lets you specify things like backpressure behavior, the number of supported subscribers, and replay capabilities:
@SpringBootApplication
@RestController
public class DemoApplication {

    final Sinks.Many sink;
    final AtomicLong counter;

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    public DemoApplication() {
        this.sink = Sinks.many().multicast().onBackpressureBuffer();
        this.counter = new AtomicLong();
    }

    @GetMapping("/send")
    public void test() {
        EmitResult result = sink.tryEmitNext("Hello World #" + counter.getAndIncrement());
        if (result.isFailure()) {
            // do something here, since emission failed
        }
    }

    @RequestMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<ServerSentEvent> sse() {
        return sink.asFlux().map(e -> ServerSentEvent.builder(e).build());
    }
}
Note that message sending via the Sinks API introduces a new concept of emission and its result. The reason for such an API is the fact that Reactor extends Reactive Streams and has to follow backpressure control. That said, if you emit more signals than were requested and the underlying implementation does not support buffering, your message will not be delivered. Therefore, tryEmitNext returns an EmitResult, which indicates whether the message was sent or not.
Also, note that by default the Sinks API gives you a serialized version of the Sink, which means you don't have to care about concurrency. However, if you know in advance that emission of messages is serial, you may build a Sinks.unsafe() version which does not serialize the given messages.
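To illustrate the emission result and the unsafe variant, here is a minimal sketch (the names and the handling strategy are mine, not part of any official example):
import reactor.core.publisher.Sinks;

public class SinksEmissionSketch {

    public static void main(String[] args) {
        // default: a thread-safe (serialized) sink
        Sinks.Many<String> safeSink = Sinks.many().multicast().onBackpressureBuffer();

        // tryEmitNext never throws; it reports the outcome instead
        Sinks.EmitResult result = safeSink.tryEmitNext("Hello");
        if (result.isFailure()) {
            // e.g. FAIL_OVERFLOW or FAIL_NON_SERIALIZED; decide whether to retry, drop or log
            System.err.println("Emission failed: " + result);
        }

        // alternatively, emitNext applies the supplied failure handler (here: fail immediately)
        safeSink.emitNext("World", Sinks.EmitFailureHandler.FAIL_FAST);

        // if you can guarantee single-threaded emission, you may skip the serialization overhead
        Sinks.Many<String> unsafeSink = Sinks.unsafe().many().multicast().onBackpressureBuffer();
        unsafeSink.tryEmitNext("only ever emitted from one thread");
    }
}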
Just another idea: using EmitterProcessor as a gateway to a Flux.
import reactor.core.publisher.EmitterProcessor;
import reactor.core.publisher.Flux;

public class MyEmitterProcessor {

    EmitterProcessor<String> emitterProcessor;

    public static void main(String[] args) {
        MyEmitterProcessor myEmitterProcessor = new MyEmitterProcessor();
        Flux<String> publisher = myEmitterProcessor.getPublisher();
        myEmitterProcessor.onNext("A");
        myEmitterProcessor.onNext("B");
        myEmitterProcessor.onNext("C");
        myEmitterProcessor.complete();
        publisher.subscribe(x -> System.out.println(x));
    }

    public Flux<String> getPublisher() {
        emitterProcessor = EmitterProcessor.create();
        return emitterProcessor.map(x -> "consume: " + x);
    }

    public void onNext(String nextString) {
        emitterProcessor.onNext(nextString);
    }

    public void complete() {
        emitterProcessor.onComplete();
    }
}
For more info, see here in the Reactor docs. There is a recommendation in the documentation itself: "Most of the time, you should try to avoid using a Processor. They are harder to use correctly and prone to some corner cases." BUT I don't know which kind of corner cases.
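Since EmitterProcessor is deprecated in Reactor 3.4+, roughly the same gateway can be sketched with the Sinks API instead; a replay sink is used here so that signals emitted before the subscription are not lost (this adaptation is mine, not part of the original answer):
import reactor.core.publisher.Flux;
import reactor.core.publisher.Sinks;

public class MySinkGateway {

    // replay().all() keeps every signal, so a late subscriber still sees "A", "B", "C"
    private final Sinks.Many<String> sink = Sinks.many().replay().all();

    public static void main(String[] args) {
        MySinkGateway gateway = new MySinkGateway();
        Flux<String> publisher = gateway.getPublisher();

        gateway.onNext("A");
        gateway.onNext("B");
        gateway.onNext("C");
        gateway.complete();

        publisher.subscribe(System.out::println);
    }

    public Flux<String> getPublisher() {
        return sink.asFlux().map(x -> "consume: " + x);
    }

    public void onNext(String next) {
        sink.tryEmitNext(next);
    }

    public void complete() {
        sink.tryEmitComplete();
    }
}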

Arquillian Graphene @Location placeholder

I'm learning Arquillian right now and I wonder how to create a page object that has a placeholder inside the path. For example:
#Location("/posts/{id}")
public class BlogPostPage {
public String getContent() {
// ...
}
}
or
#Location("/posts/{name}")
#Location("/specific-page?requiredParam={value}")
I have been looking for an answer in the Graphene and Arquillian reference guides without success. I have used a page-object library in another language, and it had built-in support for placeholders.
AFAIK there is nothing like this implemented in Graphene.
To be honest, I'm not sure how this should behave - how would you pass the values...?
Apart from that, I think it could also be limited by Java annotation capabilities https://stackoverflow.com/a/10636320/6835063
This is not possible currently in Graphene. I've created ARQGRA-500.
It's possible to extend Graphene to add dynamic parameters now. Here's how. (Arquillian 1.1.10.Final, Graphene 2.1.0.Final.)
Create an interface.
import java.util.Map;

public interface LocationParameterProvider {
    Map<String, String> provideLocationParameters();
}
Create a custom LocationDecider to replace the corresponding Graphene one; I replace the HTTP one. This decider appends location parameters to the URI if it sees that the test object implements our interface.
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.Map;
import java.util.Map.Entry;
import org.jboss.arquillian.core.api.Instance;
import org.jboss.arquillian.core.api.annotation.Inject;
import org.jboss.arquillian.graphene.location.decider.HTTPLocationDecider;
import org.jboss.arquillian.graphene.spi.location.Scheme;
import org.jboss.arquillian.test.spi.context.TestContext;

public class HTTPParameterizedLocationDecider extends HTTPLocationDecider {

    @Inject
    private Instance<TestContext> testContext;

    @Override
    public Scheme canDecide() {
        return new Scheme.HTTP();
    }

    @Override
    public String decide(String location) {
        String uri = super.decide(location);
        // not sure how reliable this method of getting the current test object is;
        // if it breaks, there is always the possibility of observing
        // org.jboss.arquillian.test.spi.event.suite.TestLifecycleEvent's (or rather its
        // descendants) and storing the test object in a ThreadLocal
        Object testObject = testContext.get().getActiveId();
        if (testObject instanceof LocationParameterProvider) {
            Map<String, String> locationParameters =
                    ((LocationParameterProvider) testObject).provideLocationParameters();
            StringBuilder uriParams = new StringBuilder(64);
            boolean first = true;
            for (Entry<String, String> param : locationParameters.entrySet()) {
                uriParams.append(first ? '?' : '&');
                first = false;
                try {
                    uriParams.append(URLEncoder.encode(param.getKey(), "UTF-8"));
                    uriParams.append('=');
                    uriParams.append(URLEncoder.encode(param.getValue(), "UTF-8"));
                } catch (UnsupportedEncodingException e) {
                    throw new RuntimeException(e);
                }
            }
            uri += uriParams.toString();
        }
        return uri;
    }
}
Our LocationDecider must be registered so that it overrides Graphene's one.
import org.jboss.arquillian.core.spi.LoadableExtension;
import org.jboss.arquillian.graphene.location.decider.HTTPLocationDecider;
import org.jboss.arquillian.graphene.spi.location.LocationDecider;

public class MyArquillianExtension implements LoadableExtension {

    @Override
    public void register(ExtensionBuilder builder) {
        builder.override(LocationDecider.class, HTTPLocationDecider.class,
                HTTPParameterizedLocationDecider.class);
    }
}
MyArquillianExtension should be registered via SPI, so create the necessary file in your test resources; for me the file path is src/test/resources/META-INF/services/org.jboss.arquillian.core.spi.LoadableExtension. The file must contain the fully qualified class name of MyArquillianExtension.
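For example, if MyArquillianExtension lived in a hypothetical package com.example.test, the service file would contain the single line:
com.example.test.MyArquillianExtension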
And that's it. Now you can provide location parameters in a test.
import java.util.HashMap;
import java.util.Map;
import org.jboss.arquillian.graphene.page.InitialPage;
import org.jboss.arquillian.graphene.page.Location;
import org.junit.Test;

public class TestyTest implements LocationParameterProvider {

    @Override
    public Map<String, String> provideLocationParameters() {
        Map<String, String> params = new HashMap<>();
        params.put("mykey", "myvalue");
        return params;
    }

    @Test
    public void test(@InitialPage TestPage page) {
    }

    @Location("MyTestView.xhtml")
    public static class TestPage {
    }
}
I've focused on parameters specifically, but hopefully this paves the way for other dynamic path manipulations.
Of course this doesn't fix the Graphene.goTo API, which means that before using goTo you have to provide parameters via this roundabout provideLocationParameters way, which is awkward. You can make your own alternative API (a goTo that accepts parameters) and modify your LocationDecider to support other parameter providers.

Cucumber: Unable to find step definition

I have the following feature file: MacroValidation.feature
@macroFilter
Feature: Separating out errors and warnings

  Scenario: No errors or warnings when separating out error list
    Given I have 0 macros
    When I filter out errors and warnings for Macros
    Then I need to have 0 errors
    And I need to have 0 warnings
My step definition class:
package com.test.definition;

import cucumber.api.java.After;
import cucumber.api.java.Before;
import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;
import cucumber.runtime.java.StepDefAnnotation;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import static org.powermock.api.mockito.PowerMockito.doReturn;
import static org.powermock.api.mockito.PowerMockito.mock;
import static org.powermock.api.mockito.PowerMockito.spy;

@StepDefAnnotation
public class MacroValidationStepDefinitions {

    private final MacroService macroService = spy(new MacroService());
    private final LLRBusList busList = mock(LLRBusList.class);
    private final List<String> errorList = new ArrayList<String>();
    private final List<String> warningList = new ArrayList<String>();

    @Before({"@macroFilter"})
    public void setUp() {
        errorList.addAll(Arrays.asList("error 1, error2, error 3"));
        warningList.addAll(Arrays.asList("warning 1, warning 2, warning 3"));
    }

    @After({"@macroFilter"})
    public void tearDown() {
        errorList.clear();
        warningList.clear();
    }

    @Given("^I have (\\d+) macros$")
    public void i_have_macros(int input) {
        doReturn(input).when(busList).size();
    }

    @When("^I filtered out errors and warnings for Macros$")
    public void i_filtered_out_errors_and_warnings_for_Macros() {
        macroService.separateErrorsAndWarning(busList, errorList, warningList);
    }

    @Then("^I need to have (\\d+) errors$")
    public void i_need_to_have_errors(int numOfError) {
        if (numOfError == 0) {
            assertTrue(errorList.isEmpty());
        } else {
            assertEquals(errorList.size(), numOfError);
        }
    }

    @Then("^I need to have (\\d+) warnings$")
    public void i_need_to_have_warnings(int numOfWarnings) {
        if (numOfWarnings == 0) {
            assertTrue(warningList.isEmpty());
        } else {
            assertEquals(warningList.size(), numOfWarnings);
        }
    }
}
My unit test class.
@CucumberOptions(features = {"classpath:testfiles/MacroValidation.feature"},
        glue = {"com.macro.definition"},
        dryRun = false,
        monochrome = true,
        tags = "@macroFilter"
)
@RunWith(Cucumber.class)
public class PageMacroValidationTest {
}
When I execute the test, I get "step definitions not implemented" warnings in the log.
Example log:
You can implement missing steps with the snippets below:
#Given("^I have (\\d+) macros$")
public void i_have_macros(int arg1) throws Throwable {
// Write code here that turns the phrase above into concrete actions
throw new PendingException();
}
#When("^I filter out errors and warnings for Macros$")
public void i_filter_out_errors_and_warnings_for_Macros() throws Throwable {
// Write code here that turns the phrase above into concrete actions
throw new PendingException();
}
#Then("^I need to have (\\d+) errors$")
public void i_need_to_have_errors(int arg1) throws Throwable {
// Write code here that turns the phrase above into concrete actions
throw new PendingException();
}
#Then("^I need to have (\\d+) warnings$")
public void i_need_to_have_warnings(int arg1) throws Throwable {
// Write code here that turns the phrase above into concrete actions
throw new PendingException();
}
I don't think the file name should matter, right?
It looks like Cucumber isn't finding your step definition class. In your unit test class you say:
glue = {"com.macro.definition"}
However the step definition classes are in com.test.definition
Try changing that line to:
glue = {"com.test.definition"}
You may have to rebuild your project to pick up the change.
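In other words, the runner would then look roughly like this (only the glue package changes):
@CucumberOptions(features = {"classpath:testfiles/MacroValidation.feature"},
        glue = {"com.test.definition"},   // must match the package of the step definition class
        dryRun = false,
        monochrome = true,
        tags = "@macroFilter"
)
@RunWith(Cucumber.class)
public class PageMacroValidationTest {
}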
Also, Cucumber is sensitive to white space. If you try to make your runner or feature file pretty after having captured the snippets, you will get this problem.
Here's an example that drove me nuts for several hours while creating my first BDD. I had created the feature file and a skeleton runner, which I ran to capture the snippets. Then I prettified the feature file, and when I ran the runner I got the errors.
Of course everything looked fine to my human brain, so the next few hours were spent in fruitless research here, and checking versions and bug lists. Finally I decided to compare the first two lines of the snippet to see what was different:
// #Then("^the test result is = \"([^\"]*)\"$")
// public void theTestResultIs(String ruleResult) throws Throwable {
#Then("^the test result is = \"([^\"]*)\"$")
public void theTestResultIs(String arg1) throws Throwable {
Doh!
Try to use dependencies in your POM.xml with the io.cucumber group id (the latest jars) instead of info.cukes. After removing the old jars, update the project with the new imports and run it again.
Remember, you have to replace all dependencies of info.cukes with io.cucumber.

JUnit Test against an interface without having the implementation yet

I'm trying to write a JUnit test for a given interface like the one below, and I have no idea how to do that:
public interface ShortMessageService {

    /**
     * Creates a message. A message is related to a topic.
     * Creates a date for the message.
     * @throws IllegalArgumentException if the message is longer than 255 characters.
     * @throws IllegalArgumentException if the message is shorter than 10 characters.
     * @throws IllegalArgumentException if the user doesn't exist
     * @throws IllegalArgumentException if the topic doesn't exist
     * @throws NullPointerException if one argument is null.
     * @param userName
     * @param message
     * @return ID of the newly created message
     */
    Long createMessage(String userName, String message, String topic);

    [...]
}
I tried to mock the interface, then realized that it doesn't make sense at all, so I am a bit lost. Maybe someone can give me a good approach to work with. I have also heard about JUnit parameterized tests, but I am not sure if that is what I am looking for.
Many thanks!
I use the following pattern to write abstract tests against my interface APIs without having any implementations available. You can write whatever tests you require in AbstractShortMessageServiceTest without having to implement them at that point in time.
public abstract class AbstractShortMessageServiceTest
{
    /**
     * @return A new empty instance of an implementation of ShortMessageService.
     */
    protected abstract ShortMessageService getNewShortMessageService();

    private ShortMessageService testService;

    @Before
    public void setUp() throws Exception
    {
        testService = getNewShortMessageService();
    }

    @Test
    public void testFooBar() throws Exception
    {
        assertEquals("question", testService.createMessage(
                "DeepThought", "42", "everything"));
    }
}
When you have an implementation, you can use the test simply by defining a new test class that extends AbstractShortMessageServiceTest and implements the getNewShortMessageService method.
public class MyShortMessageServiceTest extends AbstractShortMessageServiceTest
{
    @Override
    protected ShortMessageService getNewShortMessageService()
    {
        return new MyShortMessageService();
    }
}
In addition, if you need the test to be parameterized, you can do that in AbstractShortMessageServiceTest without doing it in each of the concrete tests.
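For instance, a rough sketch with JUnit 4's Parameterized runner could look like the following (the parameter values and the buildMessage helper are illustrative, chosen around the documented 10/255 length limits; "DeepThought" and "everything" are assumed to be a valid user and topic, as in the example above):
import java.util.Arrays;
import java.util.Collection;

import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public abstract class AbstractShortMessageServiceTest
{
    protected abstract ShortMessageService getNewShortMessageService();

    // message lengths around the documented 10..255 boundaries
    @Parameterized.Parameters(name = "message of length {0}, expect failure: {1}")
    public static Collection<Object[]> data()
    {
        return Arrays.asList(new Object[][] {
                {9, true}, {10, false}, {255, false}, {256, true}
        });
    }

    @Parameterized.Parameter(0)
    public int messageLength;

    @Parameterized.Parameter(1)
    public boolean expectFailure;

    private ShortMessageService testService;

    @Before
    public void setUp()
    {
        testService = getNewShortMessageService();
    }

    @Test
    public void testMessageLengthValidation()
    {
        String message = buildMessage(messageLength);
        try
        {
            testService.createMessage("DeepThought", message, "everything");
            if (expectFailure)
            {
                Assert.fail("Expected IllegalArgumentException for length " + messageLength);
            }
        }
        catch (IllegalArgumentException e)
        {
            if (!expectFailure)
            {
                throw e;
            }
        }
    }

    // builds a dummy message of the requested length
    private static String buildMessage(int length)
    {
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++)
        {
            sb.append('x');
        }
        return sb.toString();
    }
}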
Usually a test is written for the class that implements the interface, and mocks are used for cooperating classes; but if the class is not ready yet, you can exercise your test against a mock. That is unusual, and you would have to use thenAnswer with the logic of the possible cases implemented.
The better way is to simply prepare tests for the implementation class and keep improving it until all tests pass.
The implementing class can be a field, initialized before the tests:
private ShortMessageService testedClasOrMock;

// version with the implementing class
@Before
public void setUp() {
    testedClasOrMock = new ShortMessageServiceImpl0();
}

// alternative version with a mock (use one or the other, not both)
@Before
public void setUp() {
    testedClasOrMock = mock(ShortMessageService.class);
    when(testedClasOrMock.createMessage(anyString(), anyString(), anyString()))
            .thenAnswer(new Answer<Long>() {
                @Override
                public Long answer(InvocationOnMock invocation) throws Throwable {
                    String message = (String) invocation.getArguments()[1];
                    if (message.length() > 255) {
                        throw new IllegalArgumentException("msg is too long");
                    }
                    // other exception-throwing cases
                    // ...
                    return 44L;
                }
            });
}
so you will have several tests with expected exceptions, like:
@Test(expected = IllegalArgumentException.class)
public void testTooLongMsg() {
    testedClasOrMock.createMessage(USER, TOO_LONG_MSG, TOPIC);
}
and one that simply should not throw an exception and, for instance, checks that the message IDs are different:
@Test
public void testCreateTwoMessages() {
    // VALID_MSG stands for any message of acceptable length (10-255 characters)
    long id0 = testedClasOrMock.createMessage(USER, VALID_MSG, TOPIC);
    long id1 = testedClasOrMock.createMessage(USER, VALID_MSG, TOPIC);
    assertTrue(id0 != id1);
}
If you insist on testing against a mock, let me know and I will add an example for one test case.

EasyMock Testing Void With Runnable

I'm trying to test the following class (I've left out the implementation)
public class UTRI implements UTR {
    public void runAsUser(String userId, Runnable r) {
        // implementation omitted
    }
}
This is the way I would use it:
UTRI.runAsUser("User1", new Runnable () {
private void run() {
//do whatever needs to be done here.
}
});
The problem is, I don't know how to use EasyMock to test methods that return void. That, and I'm also not too familiar with testing in general (right out of school!). Can someone help explain what I need to do to approach this? I was thinking about making the UTRI a mock and using expectLastCall after that, but realistically I'm not sure.
public class UTRITest {

    UTRI utri = new UTRI();

    @Test
    public void testRunAsUser() {
        // Create mocks
        Runnable mockRunnable = EasyMock.createMock(Runnable.class);

        // Set expectations
        mockRunnable.run();
        EasyMock.expectLastCall().once();
        EasyMock.replay(mockRunnable);

        // Call the method under test
        utri.runAsUser("RAMBO", mockRunnable);

        // Verify that run() was called on the Runnable
        EasyMock.verify(mockRunnable);
    }
}