How do I mock an inherited method that has generics with JMockit

I have this abstract class:
public abstract class Accessor<T extends Id, U extends Value>
{
public U find(T id)
{
// let's say
return getHelper().find(id);
}
public abstract Helper getHelper();
}
And an implementation:
public class FooAccessor extends Accessor<FooId,Foo>
{
public Helper getHelper()
{
// ...
return helper;
}
}
And I would like to mock the calls to FooAccessor.find.
This:
@MockClass(realClass = FooAccessor.class)
static class MockedFooAccessor
{
public Foo find (FooId id)
{
return new Foo("mocked!");
}
}
will fail with this error:
java.lang.IllegalArgumentException: Matching real methods not found for the following mocks of MockedFooAccessor:
Foo find (FooId)
and I understand why... but I don't see how else I could do it.
Note: yes, I could mock the getHelper method, and get what I want; but this is more a question to learn about JMockit and this particular case.
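
For completeness, the getHelper route mentioned in the note could look roughly like this. This is only a minimal sketch: it assumes Helper.find(FooId) exists (as implied by Accessor.find()), that FooId has a no-arg constructor, and that Foo exposes the value it was constructed with via a hypothetical getValue().
import static org.junit.Assert.assertEquals;

import mockit.Mocked;
import mockit.NonStrictExpectations;
import org.junit.Test;

public class FooAccessorTest
{
    // @Mocked mocks every Helper instance, including whatever getHelper() returns.
    @Mocked Helper helper;

    @Test
    public void findReturnsTheMockedFoo()
    {
        new NonStrictExpectations() {{
            helper.find((FooId) any); result = new Foo("mocked!");
        }};

        FooAccessor accessor = new FooAccessor();
        Foo foo = accessor.find(new FooId());

        assertEquals("mocked!", foo.getValue()); // getValue() is assumed for illustration
    }
}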

The only way around this I have found is to use fields
@Test
public void testMyFooMethodThatCallsFooFind(){
final MyChildFooClass childFooClass = new MyChildFooClass();
final String expectedFooValue = "FakeFooValue";
new NonStrictExpectations(){{
Deencapsulation.setField(childFooClass, "fieldYouStoreYourFindResultIn", expectedFooValue);
}};
childFooClass.doSomethingThatCallsFind();
// if your method is protected or private you use Deencapsulation class
// instead of calling it directly like above
Deencapsulation.invoke(childFooClass, "nameOfFindMethod", argsIfNeededForFind);
// then to get it back out since you used a field you use Deencapsulation again to pull out the field
String actualFoo = Deencapsulation.getField(childFooClass, "nameOfFieldToRunAssertionsAgainst");
assertEquals(expectedFooValue ,actualFoo);
}
childFooClass doesn't need to be mocked nor do you need to mock the parent.
Without more knowledge of your specific case, this strategy has been the best way for me to leverage JMockit. Deencapsulation makes so many things possible to test without sacrificing visibility. I know this doesn't answer the direct question, but I felt you should get something out of it. Feel free to downvote and chastise me, community.

Honestly, I do not find it in any way different from mocking regular classes. One way to go is to tell JMockit to mock only the find method and use an Expectations block to provide an alternate implementation. Like this:
abstract class Base<T, U> {
public U find(T id) {
return null;
}
}
class Concrete extends Base<Integer, String> {
public String work() {
return find(1);
}
}
@RunWith(JMockit.class)
public class TestClass {
@Mocked(methods = "find")
private Concrete concrete;
@Test
public void doTest() {
new NonStrictExpectations() {{
concrete.find((Integer) withNotNull());
result = "Blah";
}};
assertEquals("Blah", concrete.work());
}
}
Hope it helps.
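
As a side note, in later JMockit versions the methods attribute of @Mocked was removed; the equivalent there is dynamic partial mocking, where the instance to partially mock is passed to the Expectations constructor. A sketch, assuming the same Concrete class as above:
@Test
public void doTestWithDynamicPartialMocking() {
    final Concrete concrete = new Concrete();
    // Only the methods recorded inside this block are mocked;
    // everything else on `concrete` keeps its real implementation.
    new Expectations(concrete) {{
        concrete.find((Integer) withNotNull()); result = "Blah";
    }};
    assertEquals("Blah", concrete.work());
}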

Related

Gson - deserialize or default

I have a class :
data class Stam(@SerializedName("blabla") val blabla: String = "")
I want to do gson.fromJson("{\"blabla\":null}", Stam::class.java)
However, it will fail because blabla is not nullable.
I want to make it so if gson failed to deserialize some variable, it will take the default value I give it.
How to achieve that?
I don't think it is possible with Gson; this is one of the reasons the kotlinx.serialization library was created. With this library it is fairly easy:
@Serializable
data class Stam(@SerialName("blabla") val blabla: String = "") // actually, @SerialName may be omitted if it is equal to the field name
Json { coerceInputValues = true }.decodeFromString<Stam>("{\"blabla\":null}")
I wouldn't say it is not possible in Gson, but Gson is definitely not the best choice:
Gson knows nothing about Kotlin, its runtime, or its specifics, so it is better to use a more convenient, Kotlin-aware tool. Typical questions here are: how to detect a data class (if it really matters, this can be done easily in Kotlin), how to detect non-null parameters and properties at runtime, etc.
Kotlin data classes whose arguments all have defaults expose a no-argument constructor that Gson can resolve and invoke (even though Gson can also instantiate classes without constructors using unsafe mechanics), and that constructor delegates to the "full-featured" constructor with the default arguments. The trick, then, is to strip null-valued properties from the input JSON so Gson leaves the corresponding default-argument fields untouched.
I do Java, but I believe the following code can be converted easily (if you believe Gson is still the right choice):
final class StripNullTypeAdapterFactory
implements TypeAdapterFactory {
// The rule to check whether this type adapter should be applied.
// Externalizing the rule makes it much more flexible.
private final Predicate<? super TypeToken<?>> isClassSupported;
private StripNullTypeAdapterFactory(final Predicate<? super TypeToken<?>> isClassSupported) {
this.isClassSupported = isClassSupported;
}
static TypeAdapterFactory create(final Predicate<? super TypeToken<?>> isClassSupported) {
return new StripNullTypeAdapterFactory(isClassSupported);
}
@Override
@Nullable
public <T> TypeAdapter<T> create(final Gson gson, final TypeToken<T> typeToken) {
if ( !isClassSupported.test(typeToken) ) {
return null;
}
// If the type is supported by the rule, get the type "real" delegate
final TypeAdapter<T> delegate = gson.getDelegateAdapter(this, typeToken);
return new StripNullTypeAdapter<>(delegate);
}
private static final class StripNullTypeAdapter<T>
extends TypeAdapter<T> {
private final TypeAdapter<T> delegate;
private StripNullTypeAdapter(final TypeAdapter<T> delegate) {
this.delegate = delegate;
}
@Override
public void write(final JsonWriter out, final T value)
throws IOException {
delegate.write(out, value);
}
@Override
public T read(final JsonReader in) {
// Another disadvantage in using Gson:
// the null-stripped object must be buffered into memory regardless of how big it is,
// So it may generate really big memory footprints.
final JsonObject buffer = JsonParser.parseReader(in).getAsJsonObject();
// Strip null properties from the object
for ( final Iterator<Map.Entry<String, JsonElement>> i = buffer.entrySet().iterator(); i.hasNext(); ) {
final Map.Entry<String, JsonElement> property = i.next();
if ( property.getValue().isJsonNull() ) {
i.remove();
}
}
// Now there are no null values, so Gson will only use the properties remaining in the buffer
return delegate.fromJsonTree(buffer);
}
}
}
Test:
public final class StripNullTypeAdapterFactoryTest {
private static final Collection<Class<?>> supportedClasses = ImmutableSet.of(Stam.class);
private static final Gson gson = new GsonBuilder()
.disableHtmlEscaping()
// I don't know how easy detecting data classes and non-null parameters is
// but since the rule is externalized, let's just look it up
// in the "known classes" registry
.registerTypeAdapterFactory(StripNullTypeAdapterFactory.create(typeToken -> supportedClasses.contains(typeToken.getRawType())))
.create();
@Test
public void test() {
final Stam stam = gson.fromJson("{\"blabla\":null}", Stam.class);
// The test is "green": blabla falls back to the default value declared in the data class
Assertions.assertEquals("", stam.getBlabla());
}
}
I still think Gson is not the best choice here.

Overriding an internal method with Decorator Design Pattern

I am writing object-oriented code in which I am trying to use the Decorator pattern to apply a variety of optimizations to a family of core classes at runtime. The main behaviour of the core classes is a complex behaviour that is fully implemented in those classes and that calls other internal methods to fulfill pieces of the task.
The decorators will only customize the internal methods that are called by the complex behaviour in the core class.
Here is a pseudo-code of what I'm trying to reach:
interface I{
complex();
step1();
step2();
}
class C implements I{
complex(){
...
this.step1();
...
this.step2();
}
step1(){
...
}
step2(){
...
}
}
abstract class Decorator implements I{
I wrapped;
constructor(I obj){
this.wrapped = obj;
}
complex(){
this.wrapped.complex();
}
step1(){
this.wrapped.step1();
}
step2(){
this.wrapped.step2();
}
}
class ConcreteDecorator extends Decorator{
constructor(I obj){
super(obj);
}
step2(){
... // customizing step2()
}
}
There is a variety of possible customizations that can be combined, and that is the main reason I'm using the Decorator pattern; otherwise I would have to create dozens to hundreds of subclasses, one for each possible combination of customizations.
Now if I try to create object of the decorated class:
x = new C();
y = new ConcreteDecorator(x);
y.complex();
I expect the complex() method to be executed from the wrapped core object, while using the overridden step2() method from the decorator. But it does not work this way: complex() in the abstract decorator calls the method directly on the core object, which skips the overridden step2() in the decorator.
My overall goal is to let decorators override only one or a few of the stepX() methods, and have those overrides invoked by the complex() method that is already implemented in the core object and calls all the steps.
Could this functionality be implemented with the Decorator design pattern at all? If yes, how; and if not, what is the appropriate design pattern for tackling this problem?
Thanks.
I guess you could resolve that problem with the Strategy pattern, where the Strategy interface includes the methods that vary from class to class. The Strategy interface may contain a single method or several, depending on their nature.
interface IStrategy {
step1(IData data);
step2(IData data);
}
interface I {
complex();
}
class C implements I {
IData data;
IStrategy strategy;
constructor(IStrategy strategy) {
this.strategy = strategy;
}
complex() {
...
this.strategy.step1(this.data);
...
this.strategy.step2(this.data);
}
}
class S1 implements IStrategy {
step1(IData data) {
...
}
step2(IData data) {
...
}
}
strategy1 = new S1();
c = new C(strategy1)
The issue you are facing is that in your application of the Decorator design pattern, because you are not decorating complex() itself, the call to complex() on a decorator object is simply delegated to the decorated object, which runs its own "normal" version of step2().
I think a more appropriate design pattern to solve your problem would be the Template Method design pattern.
In your case complex() would play the role of the template method, whose steps can be customized by subclasses. Instead of using composition, you use inheritance, and the rest stays more or less the same.
Here is a sample application of the Template Method design pattern to your context:
public interface I {
void complex();
void step1(); // Better to remove from the interface if possible
void step2(); // Better to remove from the interface if possible
}
// Does not need to be abstract, but can be
class DefaultBehavior implements I {
// Note how this is final to keep subclasses
// from changing the algorithm.
public final void complex() {
this.step1();
this.step2();
}
public void step1() { // Default step 1
System.out.println("Default step 1");
}
public void step2() { // Default step 2
System.out.println("Default step 2");
}
}
class CustomizedStep2 extends DefaultBehavior {
public void step2() { // Customized step 2
System.out.println("Customized step 2");
}
}
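
For illustration, exercising the customized subclass would look like this; complex() keeps the fixed algorithm from DefaultBehavior and picks up only the overridden step:
I behavior = new CustomizedStep2();
behavior.complex();
// prints:
// Default step 1
// Customized step 2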

Transactions with ReactiveCrudRepository with spring-data-r2dbc

I'm trying to implement transactions with spring-data-r2dbc repositories in combination with the TransactionalDatabaseClient as such:
class SongService(
private val songRepo: SongRepo,
private val databaseClient: DatabaseClient
){
private val tdbc = databaseClient as TransactionalDatabaseClient
...
...
fun save(song: Song){
return tdbc.inTransaction{
songRepo
.save(mapRow(song, albumId)) //Mapping to a row representation
.delayUntil { savedSong -> tdbc.execute.sql(...).fetch.rowsUpdated() } //saving a many to many relation
.map(::mapSong) //Mapping back to actual song and retrieve the relationship data.
}
}
}
I currently have a config class (annotated with @Configuration and @EnableR2dbcRepositories) that extends from AbstractR2dbcConfiguration. In here I override the databaseClient method to return a TransactionalDatabaseClient. This should be the same instance as in the SongService class.
When running the code in a test and simply subscribing and printing, I get org.springframework.transaction.NoTransactionException: ReactiveTransactionSynchronization not active, and the relationship data is not returned.
When using Project Reactor's StepVerifier, though, I get java.lang.IllegalStateException: Connection is closed. In this case too, the relationship data is not returned.
Just for the record, I have seen https://github.com/spring-projects/spring-data-r2dbc/issues/44
Here is a working Java example:
@Autowired TransactionalDatabaseClient txClient;
@Autowired Mono<Connection> connection;
// You can also use: @Autowired Publisher<? extends Connection> connectionPublisher;
public Flux<Void> example() {
txClient.enableTransactionSynchronization(connection);
// Or: txClient.enableTransactionSynchronization(connectionPublisher);
Flux<Item> audits = txClient.inTransaction(tx -> {
txClient.beginTransaction();
return tx.execute().sql("SELECT * FROM audit.items")
.as(Item.class)
.fetch()
.all();
}).doOnTerminate(() -> {
txClient.commitTransaction();
});
txClient.commitTransaction();
audits.subscribe(item -> System.out.println("anItem: " + item));
return Flux.empty();
}
I just started with reactive, so I'm not too sure what I'm doing with my callbacks, haha. But I decided to go with TransactionalDatabaseClient over DatabaseClient or Connection, since I'll take all the utility I can get while R2DBC is in its current state.
In your code did you actually instantiate a Connection object? If so I think you would have done it in your configuration. It can be utilized throughout the app the same as DatabaseClient, but it is slightly more intricate.
If not:
@Bean
@Override // I also used abstract config
public ConnectionFactory connectionFactory() {
...
}
@Bean
TransactionalDatabaseClient txClient() {
...
}
// TransactionalDatabaseClient will take either of these as an argument to
// its enableTransactionSynchronization method
@Bean
public Publisher<? extends Connection> connectionPublisher() {
return connectionFactory().create();
}
@Bean
public Mono<Connection> connection() {
return Mono.from(connectionFactory().create());
}
If you are having problems translating to Kotlin, there is an alternative way to enable synchronization that could work:
// From what I understand, this is a useful way to move between
// transactions within a single subscription
TransactionResources resources = TransactionResources.create();
resources.registerResource(Resource.class, resource);
ConnectionFactoryUtils
.currentReactiveTransactionSynchronization()
.subscribe(currentTx -> sync.registerTransaction(Tx));
Hope this translates well for Kotlin.

OOP question involving the best way to reference a base class protected variable without having to typecast every time it is used

I have a quick OOP question and would like to see how others would approach this particular situation. Here it goes:
Class A (base class) -> Class B (extends Class A)
Class C (base class) -> Class D (extends Class C)
Simple so far right? Now, Class A can receive an instance of Class C through its constructor. Likewise, Class B can receive an instance of either class C or Class D through its constructor. Here is a quick snippet of code:
Class A
{
protected var _data:C;
public function A( data:C )
{
_data = data;
}
}
Class B extends A
{
public function B( data:D )
{
super( data );
}
}
Class C
{
public var someVar:String; // Using public for example so I don't need to write an mutator or accessor
public function C() { } // empty constructor for example
}
Class D extends C
{
public var someVar2:String; // Using public for example so I don't need to write an mutator or accessor
public function D() { super(); } // empty constructor for example
}
So, let's say that I am using class B. Since _data was defined as a protected var in Class A as type C, I will need to typecast my _data variable to type D in class B every time I want to use it. I would really like to avoid this if possible. I'm sure there is a pattern for this, but don't know what it is. For now, i'm solving the problem by doing the following:
Class B extends A
{
private var _data2:D;
public function B( data:D )
{
super( data );
_data2 = data;
}
}
Now, in class B, I can use _data2 instead of typecasting _data to type D every time I want to use it. I think there might be a cleaner solution that others have used. Thoughts?
I think B doesn't take C or D... in order for it to do what you wrote it should be
public function B( data:C )
{
super( data );
}
At least as far as I used to know :)
I doubt you can use downwards inheritance in your case.
As for the pattern, the best one to use in situations like these is polymorphism. Alternatively, depending on the language, you can use interfaces, or, if the language allows it, even a combination of conventional code and templates.
Most modern OO languages support covariant return types; that is, an overriding method can have a return type that is a subclass of the return type in the original (overridden) method.
Thus, the trick is to define a getter method in A that returns C, and then have B override it so that it returns D. For this to work, the variable _data must be immutable: it is initialized at construction time and from that point on never changes its value.
Class A {
private var _data:C;
public function A(data:C) {
_data = data;
}
public function getData() : C {
return _data;
}
// No function that takes a C value and assigns it to _data!
}
Class B extends A {
public function B(data:D) {
super(data);
}
public function getData() : D { // Override and change return type
return (D) super.getData(); // Downcast only once.
}
}
This is how I usually write it in Java:
public class A {
private final C data;
public A(C data) { this.data = data; }
public C getData() { return data; }
}
public class B extends A {
public B(D data) { super(data); }
@Override
public D getData() { return (D) super.getData(); }
}
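
Call sites then need no casts at all (assuming C and D are the plain classes from the question):
B b = new B(new D());
D d = b.getData(); // the covariant override already returns D, no cast needed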

WCF Dependency injection and abstract factory

I have this wcf method
Profile GetProfileInfo(string profileType, string profileName)
and a business rule:
if profileType is "A" read from database.
if profileType is "B" read from xml file.
The question is: how to implement it using a dependency injection container?
Let's first assume that you have an IProfileRepository something like this:
public interface IProfileRepository
{
Profile GetProfile(string profileName);
}
as well as two implementations: DatabaseProfileRepository and XmlProfileRepository. The issue is that you would like to pick the correct one based on the value of profileType.
You can do this by introducing this Abstract Factory:
public interface IProfileRepositoryFactory
{
IProfileRepository Create(string profileType);
}
Assuming that the IProfileRepositoryFactory has been injected into the service implementation, you can now implement the GetProfileInfo method like this:
public Profile GetProfileInfo(string profileType, string profileName)
{
return this.factory.Create(profileType).GetProfile(profileName);
}
A concrete implementation of IProfileRepositoryFactory might look like this:
public class ProfileRepositoryFactory : IProfileRepositoryFactory
{
private readonly IProfileRepository aRepository;
private readonly IProfileRepository bRepository;
public ProfileRepositoryFactory(IProfileRepository aRepository,
IProfileRepository bRepository)
{
if(aRepository == null)
{
throw new ArgumentNullException("aRepository");
}
if(bRepository == null)
{
throw new ArgumentNullException("bRepository");
}
this.aRepository = aRepository;
this.bRepository = bRepository;
}
public IProfileRepository Create(string profileType)
{
if(profileType == "A")
{
return this.aRepository;
}
if(profileType == "B")
{
return this.bRepository;
}
// and so on...
}
}
Now you just need to get your DI Container of choice to wire it all up for you...
Great answer by Mark. However, the solution given is not an Abstract Factory but an implementation of the standard Factory pattern. Please check how Mark's classes fit into the standard Factory pattern UML diagram (click here to see the above classes applied to the Factory pattern UML).
Since in the Factory pattern the factory is aware of the concrete classes, we can make the code of the ProfileRepositoryFactory much simpler, as below. The problem with injecting the different repositories into the factory is that you need code changes every time you add a new concrete type; with the code below you only have to extend the switch to include the new concrete class.
public class ProfileRepositoryFactory : IProfileRepositoryFactory
{
public IProfileRepository Create(string profileType)
{
switch(profileType)
{
case "A":
return new DatabaseProfileRepository();
case "B":
return new XmlProfileRepository();
default:
throw new ArgumentException("Unknown profile type: " + profileType, "profileType");
}
}
}
Abstract Factory is a more advanced pattern, used for creating families of related or dependent objects without specifying their concrete classes. The UML class diagram available here explains it well.
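To make the distinction concrete, here is a minimal sketch of an Abstract Factory (written in Java for brevity; the shape is the same in C#). The product family here, a repository plus a matching validator, is hypothetical and only illustrates "families of related objects":
// Hypothetical product family: a profile repository plus a matching profile validator.
interface ProfileRepository { /* ... */ }
interface ProfileValidator { /* ... */ }

class DatabaseProfileRepository implements ProfileRepository { }
class DatabaseProfileValidator implements ProfileValidator { }

class XmlProfileRepository implements ProfileRepository { }
class XmlProfileValidator implements ProfileValidator { }

// The abstract factory creates a *family* of related objects
// without exposing their concrete classes to the caller.
interface ProfileAccessFactory {
    ProfileRepository createRepository();
    ProfileValidator createValidator();
}

class DatabaseProfileAccessFactory implements ProfileAccessFactory {
    public ProfileRepository createRepository() { return new DatabaseProfileRepository(); }
    public ProfileValidator createValidator() { return new DatabaseProfileValidator(); }
}

class XmlProfileAccessFactory implements ProfileAccessFactory {
    public ProfileRepository createRepository() { return new XmlProfileRepository(); }
    public ProfileValidator createValidator() { return new XmlProfileValidator(); }
}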