WCF Dependency injection and abstract factory

I have this WCF method:
Profile GetProfileInfo(string profileType, string profileName)
and a business rule:
if profileType is "A", read from the database;
if profileType is "B", read from an XML file.
The question is: how do I implement this using a dependency injection container?

Let's first assume that you have an IProfileRepository something like this:
public interface IProfileRepository
{
    Profile GetProfile(string profileName);
}
as well as two implementations: DatabaseProfileRepository and XmlProfileRepository. The issue is that you would like to pick the correct one based on the value of profileType.
You can do this by introducing this Abstract Factory:
public interface IProfileRepositoryFactory
{
    IProfileRepository Create(string profileType);
}
Assuming that the IProfileRepositoryFactory has been injected into the service implementation, you can now implement the GetProfileInfo method like this:
public Profile GetProfileInfo(string profileType, string profileName)
{
    return this.factory.Create(profileType).GetProfile(profileName);
}
A concrete implementation of IProfileRepositoryFactory might look like this:
public class ProfileRepositoryFactory : IProfileRepositoryFactory
{
    private readonly IProfileRepository aRepository;
    private readonly IProfileRepository bRepository;

    public ProfileRepositoryFactory(IProfileRepository aRepository,
        IProfileRepository bRepository)
    {
        if (aRepository == null)
        {
            throw new ArgumentNullException("aRepository");
        }
        if (bRepository == null)
        {
            throw new ArgumentNullException("bRepository");
        }
        this.aRepository = aRepository;
        this.bRepository = bRepository;
    }

    public IProfileRepository Create(string profileType)
    {
        if (profileType == "A")
        {
            return this.aRepository;
        }
        if (profileType == "B")
        {
            return this.bRepository;
        }
        // and so on, ending with a guard so every code path returns or throws:
        throw new ArgumentException("Unknown profile type: " + profileType);
    }
}
Now you just need to get your DI Container of choice to wire it all up for you...
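If you would rather not bring in a container at all, the same composition can be done by hand ("Pure DI") in the application's composition root. Here is a minimal, Java-flavored sketch of that wiring; all names are hypothetical and Profile is simplified to a String so the example is self-contained:

```java
// Minimal "Pure DI" sketch of the wiring above (hypothetical names; the
// Profile type is simplified to String to keep the example self-contained).
interface ProfileRepository {
    String getProfile(String profileName);
}

class DatabaseProfileRepository implements ProfileRepository {
    public String getProfile(String profileName) { return "db:" + profileName; }
}

class XmlProfileRepository implements ProfileRepository {
    public String getProfile(String profileName) { return "xml:" + profileName; }
}

class ProfileRepositoryFactory {
    private final ProfileRepository aRepository;
    private final ProfileRepository bRepository;

    ProfileRepositoryFactory(ProfileRepository aRepository, ProfileRepository bRepository) {
        this.aRepository = aRepository;
        this.bRepository = bRepository;
    }

    ProfileRepository create(String profileType) {
        if ("A".equals(profileType)) return aRepository;
        if ("B".equals(profileType)) return bRepository;
        throw new IllegalArgumentException("Unknown profile type: " + profileType);
    }
}

class CompositionRoot {
    // The composition root is the only place that knows the concrete types;
    // a DI container would do this same work via its registration API.
    static ProfileRepositoryFactory wire() {
        return new ProfileRepositoryFactory(
                new DatabaseProfileRepository(), new XmlProfileRepository());
    }
}
```

The service itself only ever sees the factory abstraction; whether `wire()` is hand-written or replaced by container registrations is invisible to it.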

Great answer by Mark. However, the solution given is not an Abstract Factory but an implementation of the standard Factory pattern: Mark's classes map directly onto the Factory pattern's UML diagram.
Since in the Factory pattern the factory is aware of the concrete classes, the code of the ProfileRepositoryFactory can be made much simpler, as below. The problem with injecting the different repositories into the factory is that you need more code changes every time you add a new concrete type; with the code below you only have to extend the switch to cover the new concrete class.
public class ProfileRepositoryFactory : IProfileRepositoryFactory
{
    public IProfileRepository Create(string profileType)
    {
        switch (profileType)
        {
            case "A":
                return new DatabaseProfileRepository();
            case "B":
                return new XmlProfileRepository();
            default:
                throw new ArgumentException("Unknown profile type: " + profileType);
        }
    }
}
Abstract Factory is a more advanced pattern, used for creating families of related or dependent objects without specifying their concrete classes.
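To make the distinction concrete: an Abstract Factory hands back a whole family of related products, not a single one. A minimal, hypothetical Java sketch (the reader/writer pairing is invented purely for illustration):

```java
// Hypothetical sketch: each concrete factory produces a consistent *family*
// of products (a reader plus a matching writer). Producing a family, rather
// than a single product, is what distinguishes Abstract Factory from the
// plain Factory shown above.
interface ProfileReader {
    String read(String name);
}

interface ProfileWriter {
    String write(String name);
}

interface ProfileStoreFactory {
    ProfileReader createReader();
    ProfileWriter createWriter();
}

class DatabaseStoreFactory implements ProfileStoreFactory {
    public ProfileReader createReader() { return name -> "db-read:" + name; }
    public ProfileWriter createWriter() { return name -> "db-write:" + name; }
}

class XmlStoreFactory implements ProfileStoreFactory {
    public ProfileReader createReader() { return name -> "xml-read:" + name; }
    public ProfileWriter createWriter() { return name -> "xml-write:" + name; }
}
```

Client code picks a factory once and is then guaranteed never to mix, say, a database reader with an XML writer.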

Overriding an internal method with Decorator Design Pattern

I am writing object-oriented code in which I am trying to use the Decorator pattern to apply a variety of optimizations to a family of core classes at runtime. The main behaviour of the core classes is a complex behaviour that is fully implemented in those classes and that in turn calls other internal methods to fulfil pieces of the task.
The decorators only customize the internal methods that are called by the complex behaviour in the core class.
Here is a pseudo-code of what I'm trying to reach:
interface I {
    complex();
    step1();
    step2();
}

class C implements I {
    complex() {
        ...
        this.step1();
        ...
        this.step2();
    }
    step1() {
        ...
    }
    step2() {
        ...
    }
}

abstract class Decorator implements I {
    I wrapped;
    constructor(I obj) {
        this.wrapped = obj;
    }
    complex() {
        this.wrapped.complex();
    }
    step1() {
        this.wrapped.step1();
    }
    step2() {
        this.wrapped.step2();
    }
}

class ConcreteDecorator extends Decorator {
    constructor(I obj) {
        super(obj);
    }
    step2() {
        ... // customizing step2()
    }
}
There are a variety of possible customizations which could be combined together, and that is the main reason I'm using the Decorator pattern; otherwise I would have to create dozens to hundreds of subtypes, one for each possible combination of customizations.
Now if I try to create object of the decorated class:
x = new C();
y = new ConcreteDecorator(x);
y.complex();
I expect the complex() method to be executed from the wrapped core object, while using the overridden step2() method from the decorator. But it does not work this way: the complex() method in the abstract decorator calls the method directly on the core object, which skips the overridden step2() in the decorator.
My overall goal is to let each decorator override only one or a few of the stepX() methods, and have those overrides picked up by the complex() method that is already implemented in the core object and invokes all the steps.
Could this functionality be implemented using the Decorator design pattern at all? If yes, how? And if not, what is the appropriate design pattern for tackling this problem?
Thanks.
I guess you could resolve that problem with the Strategy pattern, where the Strategy interface includes the methods that vary from class to class. The Strategy interface may include a single method or several, depending on their nature.
interface IStrategy {
    step1(IData data);
    step2(IData data);
}

interface I {
    complex();
}

class C implements I {
    IData data;
    IStrategy strategy;
    constructor(IStrategy strategy) {
        this.strategy = strategy;
    }
    complex() {
        ...
        this.strategy.step1(this.data);
        ...
        this.strategy.step2(this.data);
    }
}

class S1 implements IStrategy {
    step1(IData data) {
        ...
    }
    step2(IData data) {
        ...
    }
}

strategy1 = new S1();
c = new C(strategy1);
The issue you are facing is that, in your application of the Decorator design pattern, you are not decorating complex(); the call to complex() on a decorator object is therefore delegated to the decorated object, which has the "normal" version of step2().
I think a more appropriate design pattern to solve your problem would be the Template Method design pattern.
In your case complex() would play the role of the template method, whose steps can be customized by subclasses. Instead of using composition, you use inheritance, and the rest stays more or less the same.
Here is a sample application of the Template Method design pattern to your context:
public interface I {
    void complex();
    void step1(); // Better to remove from the interface if possible
    void step2(); // Better to remove from the interface if possible
}

// Does not need to be abstract, but can be
class DefaultBehavior implements I {
    // Note how this is final, to prevent subclasses from
    // changing the algorithm.
    public final void complex() {
        this.step1();
        this.step2();
    }
    public void step1() { // Default step 1
        System.out.println("Default step 1");
    }
    public void step2() { // Default step 2
        System.out.println("Default step 2");
    }
}

class CustomizedStep2 extends DefaultBehavior {
    @Override
    public void step2() { // Customized step 2
        System.out.println("Customized step 2");
    }
}
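The same idea in a compact, self-contained form (returning strings instead of printing, so the behaviour is easy to verify; class names here are invented):

```java
// Self-contained Template Method demo: complex() is the fixed skeleton,
// and subclasses may override individual steps only.
abstract class Behavior {
    // final: the shape of the algorithm cannot be changed by subclasses
    public final String complex() {
        return step1() + " | " + step2();
    }
    protected String step1() { return "default step 1"; }
    protected String step2() { return "default step 2"; }
}

class WithCustomStep2 extends Behavior {
    @Override protected String step2() { return "customized step 2"; }
}
```

Calling `new WithCustomStep2().complex()` yields "default step 1 | customized step 2": the skeleton runs from the base class yet picks up the overridden step, which is exactly what the decorator version could not do.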

Transactions with ReactiveCrudRepository with spring-data-r2dbc

I'm trying to implement transactions with spring-data-r2dbc repositories in combination with the TransactionalDatabaseClient as such:
class SongService(
    private val songRepo: SongRepo,
    private val databaseClient: DatabaseClient
) {
    private val tdbc = databaseClient as TransactionalDatabaseClient
    ...
    ...
    fun save(song: Song) =
        tdbc.inTransaction {
            songRepo
                .save(mapRow(song, albumId)) // mapping to a row representation
                .delayUntil { savedSong -> tdbc.execute().sql(...).fetch().rowsUpdated() } // saving a many-to-many relation
                .map(::mapSong) // mapping back to the actual song and retrieving the relationship data
        }
}
I currently have a config class (annotated with @Configuration and @EnableR2dbcRepositories) that extends AbstractR2dbcConfiguration. In it I override the databaseClient method to return a TransactionalDatabaseClient. This should be the same instance as in the SongService class.
When running the code in a test with just subscribing and printing, I get org.springframework.transaction.NoTransactionException: ReactiveTransactionSynchronization not active, and the relationship data is not returned.
When using Project Reactor's StepVerifier, though, I get java.lang.IllegalStateException: Connection is closed. In this case too, the relationship data is not returned.
Just for the record, I have seen https://github.com/spring-projects/spring-data-r2dbc/issues/44
Here is a working Java example:
@Autowired TransactionalDatabaseClient txClient;
@Autowired Mono<Connection> connection;
// You can also use: @Autowired Mono<? extends Publisher> connectionPublisher;

public Flux<Void> example() {
    txClient.enableTransactionSynchronization(connection);
    // Or: txClient.enableTransactionSynchronization(connectionPublisher);
    Flux<AuditConfigByClub> audits = txClient.inTransaction(tx -> {
        txClient.beginTransaction();
        return tx.execute().sql("SELECT * FROM audit.items")
            .as(Item.class)
            .fetch()
            .all();
    }).doOnTerminate(() -> {
        txClient.commitTransaction();
    });
    txClient.commitTransaction();
    audits.subscribe(item -> System.out.println("anItem: " + item));
    return Flux.empty();
}
I just started reactive, so I'm not too sure what I'm doing with my callbacks, haha. But I decided to go with TransactionalDatabaseClient over DatabaseClient or Connection, since I'll take all the utility I can get while R2DBC is in its current state.
In your code, did you actually instantiate a Connection object? If so, I think you would have done it in your configuration. It can be utilized throughout the app the same way as DatabaseClient, but it is slightly more intricate.
If not:
@Bean
@Override // I also used the abstract config
public ConnectionFactory connectionFactory() {
    ...
}

@Bean
TransactionalDatabaseClient txClient() {
    ...
}

// TransactionalDatabaseClient will take either of these as an argument
// to its enableTransactionSynchronization method
@Bean
public Publisher<? extends Connection> connectionPublisher() {
    return connectionFactory().create();
}

@Bean
public Mono<Connection> connection() {
    return Mono.from(connectionFactory().create());
}
If you are having problems translating to Kotlin, there is an alternative way to enable synchronization that could work:
// From what I understand, this is a useful way to move between
// transactions within a single subscription
TransactionResources resources = TransactionResources.create();
resources.registerResource(Resource.class, resource);
ConnectionFactoryUtils
    .currentReactiveTransactionSynchronization()
    .subscribe(currentTx -> currentTx.registerTransaction(tx));
Hope this translates well for Kotlin.

OO - Reduce boilerplate/forwarding code

Imagine the following: I have a bunch of DTOs that inherit from the Foo class.
class Foo { }
class FooA : Foo { }
class FooB : Foo { }
class FooX : Foo { }
Then I have one class that encapsulates all the logic and orchestration related to the Foo data types. I provide a method DoSomething(Foo data) that does all the logic related to the data provided as an argument.
The method implementation is something like this:
void DoSomething(Foo data)
{
    if (data is FooA)
        DoSomethingWithFooA((FooA)data);
    if (data is FooB)
        DoSomethingWithFooB((FooB)data);
    if (data is FooX)
        DoSomethingWithFooX((FooX)data);
}
This is a very simplified example. The advantages of this approach are:
The "client" always invokes the DoSomething method, independently of the Foo data type.
If I add a new type, I only have to change the DoSomething method.
What I don't like is the downcasting.
The alternative is, instead of exposing only the DoSomething method, to expose one method per Foo data type. The advantage is that we avoid the downcasts, but it increases the boilerplate/forwarding code.
What do you prefer? Or do you have other approaches?
In this case, I would approach the problem like this (I will use Java for this example).
In your approach, for every subclass of Foo you have to provide specific processing logic, as you have shown, and cast the Foo object to its subtype. Moreover, for every new class that you add, you have to change the DoSomething(Foo f) method.
You can make the Foo class an interface:
public interface Foo {
    public void doSomething();
}
Then have your classes implement this interface:
public class FooA implements Foo {
    public void doSomething() {
        // Whatever FooA needs to do.
    }
}

public class FooB implements Foo {
    public void doSomething() {
        // Whatever FooB needs to do.
    }
}
And so on. Then, the client can call the doSomething() method:
...
Foo fooA = new FooA();
Foo fooB = new FooB();
fooA.doSomething();
fooB.doSomething();
...
This way, you don't have to cast the object at run time, and if you add more classes you don't have to change your existing code, except for the client that calls the method of a newly added object.
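The polymorphic approach above, condensed into a self-contained sketch (the return values are invented here so the dispatch is visible and checkable):

```java
// Each subtype carries its own handling logic; the client never downcasts.
interface Foo {
    String doSomething();
}

class FooA implements Foo {
    public String doSomething() { return "A-specific work"; }
}

class FooB implements Foo {
    public String doSomething() { return "B-specific work"; }
}

class Client {
    // The client treats every Foo uniformly; adding a FooX subtype
    // requires no change to this method.
    static String handleAll(java.util.List<Foo> items) {
        StringBuilder sb = new StringBuilder();
        for (Foo f : items) {
            sb.append(f.doSomething()).append(";");
        }
        return sb.toString();
    }
}
```

Virtual dispatch replaces the is/cast chain: the decision that DoSomething made with type checks is now made once, by the runtime, at the call site of doSomething().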

How do I mock an inherited method that has generics with JMockit

I have this abstract class:
public abstract class Accessor<T extends Id, U extends Value>
{
    public U find(T id)
    {
        // let's say
        return getHelper().find(id);
    }
}
And an implementation:
public class FooAccessor extends Accessor<FooId, Foo>
{
    public Helper getHelper()
    {
        // ...
        return helper;
    }
}
And I would like to mock the calls to FooAccessor.find.
This:
@MockClass(realClass = FooAccessor.class)
static class MockedFooAccessor
{
    public Foo find(FooId id)
    {
        return new Foo("mocked!");
    }
}
will fail with this error:
java.lang.IllegalArgumentException: Matching real methods not found for the following mocks of MockedFooAccessor:
Foo find (FooId)
and I understand why... but I don't see how else I could do it.
Note: yes, I could mock the getHelper method, and get what I want; but this is more a question to learn about JMockit and this particular case.
The only way around this I have found is to use fields:
@Test
public void testMyFooMethodThatCallsFooFind() {
    final MyChildFooClass childFooClass = new MyChildFooClass();
    final String expectedFooValue = "FakeFooValue";
    new NonStrictExpectations() {{
        setField(childFooClass, "fieldYouStoreYourFindResultIn", expectedFooValue);
    }};
    childFooClass.doSomethingThatCallsFind();
    // if your method is protected or private, use the Deencapsulation class
    // instead of calling it directly as above
    Deencapsulation.invoke(childFooClass, "nameOfFindMethod", argsIfNeededForFind);
    // then, since you used a field, use Deencapsulation again to pull the result back out
    String actualFoo = Deencapsulation.getField(childFooClass, "nameOfFieldToRunAssertionsAgainst");
    assertEquals(expectedFooValue, actualFoo);
}
childFooClass doesn't need to be mocked, nor do you need to mock the parent.
Without more knowledge of your specific case, this strategy has been the best way for me to leverage JMockit; Deencapsulation makes so many things possible to test without sacrificing visibility. I know this doesn't answer the direct question, but I felt you should get something out of it. Feel free to downvote and chastise me, community.
Honestly, I do not find it in any way different from mocking regular classes. One way to go is to tell JMockit to mock only the find method, and use an Expectations block to provide an alternate implementation. Like this:
abstract class Base<T, U> {
    public U find(T id) {
        return null;
    }
}

class Concrete extends Base<Integer, String> {
    public String work() {
        return find(1);
    }
}

@RunWith(JMockit.class)
public class TestClass {
    @Mocked(methods = "find")
    private Concrete concrete;

    @Test
    public void doTest() {
        new NonStrictExpectations() {{
            concrete.find((Integer) withNotNull());
            result = "Blah";
        }};
        assertEquals("Blah", concrete.work());
    }
}
Hope it helps.

Is it possible to design a type-safe linked list preventing getNext() at the tail node?

I'm wondering if it is possible to design, for example, a type-safe singly linked list structure such that it is impossible to ask for the next node from the tail node.
At the same time, the client would need to be able to traverse (recursively or otherwise) through the list via node.getChild() but be prevented at compile time (at least with human-written explicit type checking) from going past the tail.
I'm wondering:
Is there a name for this type of problem?
Is there an object-oriented or other approach that would help to avoid explicit run-time type checking?
The implementation language isn't important, but here's a Java example of what I'm thinking of.
Edit after Joop's answer:
public class TestHiddenInterfaces {
    interface Node { HasNoChildNode getTail(); }
    interface HasNoChildNode extends Node {}
    interface HasChildNode extends Node { Node getChild(); }

    class HasNoChild implements HasNoChildNode {
        @Override public HasNoChildNode getTail() { return this; }
    }

    class HasChild implements HasChildNode {
        final Node child;

        @Override
        public Node getChild() { return child; }

        HasChild(Node child) {
            this.child = child;
        }

        @Override public HasNoChildNode getTail() {
            if (child instanceof HasChild) return ((HasChild) child).getTail();
            else if (child instanceof HasNoChild) return (HasNoChildNode) child;
            else throw new RuntimeException("Unknown type");
        }
    }

    @Test
    public void test() {
        HasNoChild tail = new HasNoChild();
        assertEquals(tail, tail.getTail());
        HasChild level1 = new HasChild(tail);
        assertEquals(tail, level1.getTail());
        HasChild level2 = new HasChild(level1);
        assertEquals(tail, level2.getTail());
    }
}
In Scala one uses case classes for this kind of typing. In Java, as in UML diagrams, one often sees a distinction made between branch and leaf, which can save the memory otherwise wasted on the unused child slots of leaves.
The types coexist like enum values.
So one might use the following:
/**
 * Base of all nodes. For the remaining types I have dropped the type parameter T.
 */
public interface Node<T> {
    void setValue(T value);
    T getValue();
}

public interface HasParent extends Node {
    void setParent(HasChildren node);
    HasChildren getParent();
}

public interface HasChildren extends Node {
    void setChildren(HasParent... children);
    HasParent[] getChildren();
}
public final class RootBranch implements HasChildren {
    ...
}

public final class SubBranch implements HasChildren, HasParent {
    ...
}

public final class Leaf implements HasParent {
    ...
}

public final class RootLeaf implements Node {
    ...
}
The usage would rely either on overloading or on distinguishing cases:
void f(Node node) {
    if (node instanceof HasParent) {
        HasParent nodeHavingParent = (HasParent) node;
        ...
    }
}
Personally I think this is overdone in Java, but in Scala, for instance, where the type declaration is the constructor, this would make sense: SubBranch(parent, child1, child2).
The only way that such a hierarchy could exist is if each level implemented a different interface (where I'm using interface in a wider sense than a specific language term).
The root node cannot implement getParent; that's the only way I can think of to achieve a compilation error. So the "interface" of the root node doesn't include getParent.
The first children can implement getParent - but in order to be compile safe, they have to return a type that is, at compile time, known to be the root node (i.e. a type that doesn't implement getParent).
At the next level, the implementation of getParent must return a type that implements a getParent that returns a root node that doesn't have getParent.
In short, even if you did choose to produce such an implementation, it would be very brittle, because you'd need to write different code to deal with each level of the hierarchy.
There are certain problems where a runtime check is right, and this is one of those times. If every problem could be solved at compile time, then every compiled program would just be a set of results (and possibly a massive switch statement to pick which result you want to output).
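For completeness, the compile-time guarantee the question asks about can be had with the classic Cons/Nil encoding, at exactly the cost the last answer describes: the full shape of the list appears in its type. A minimal Java sketch, with hypothetical names:

```java
// The tail type simply has no child accessor, so "getChild() on the tail"
// is a compile error rather than a runtime check.
final class Nil {
    // deliberately empty: there is no getChild() to call
}

// A node that statically knows the type of its child (either another Cons
// or Nil), recorded in the Rest type parameter.
final class Cons<T, Rest> {
    final T value;
    private final Rest child;

    Cons(T value, Rest child) {
        this.value = value;
        this.child = child;
    }

    Rest getChild() { return child; }
}

class Demo {
    static String traverse() {
        // The list's full shape is spelled out in its type, which is exactly
        // the brittleness the previous answer warns about.
        Cons<String, Cons<String, Nil>> list =
                new Cons<>("head", new Cons<>("middle", new Nil()));
        Nil tail = list.getChild().getChild();  // compiles: the types line up
        // tail.getChild();                     // would NOT compile: Nil has no getChild()
        return list.value + "->" + list.getChild().value;
    }
}
```

This confirms both answers above: the guarantee is achievable, but each list length is a different type, so generic code over lists of arbitrary length still needs either recursion in the type system or a runtime view.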