Any real example of Adapter Pattern [closed]

I want to demonstrate the use of the Adapter pattern to my team. I've read many books and articles online. Everyone cites examples that are useful for understanding the concept (Shape, Memory Card, Electronic Adapter, etc.), but there is no real case study.
Can you please share any case study of Adapter Pattern?
P.S. I tried searching the existing questions on Stack Overflow but did not find an answer, so I'm posting this as a new question. If you know of an existing answer, please redirect me there.

Many examples of Adapter are trivial or unrealistic (Rectangle vs. LegacyRectangle, Ratchet vs. Socket, SquarePeg vs RoundPeg, Duck vs. Turkey). Worse, many don't show multiple Adapters for different Adaptees (someone cited Java's Arrays.asList as an example of the adapter pattern). Adapting an interface of only one class to work with another seems a weak example of the GoF Adapter pattern. This pattern uses inheritance and polymorphism, so one would expect a good example to show multiple implementations of adapters for different adaptees.
The best example I found is in Chapter 26 of Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design and Iterative Development (3rd Edition). The following images are from the instructor material provided on an FTP site for the book.
The first one shows how an application can use multiple implementations (adaptees) that are functionally similar (e.g., tax calculators, accounting modules, credit authorization services, etc.) but have different APIs. We want to avoid hard-coding our domain-layer code to handle the different possible ways to calculate tax, post sales, authorize credit card requests, etc. Those are all external modules that might vary, and whose code we can't modify. The adapter confines the hard-coding to the adapter itself, while our domain-layer code always uses the same interface (the IWhateverAdapter interface).
The figure above does not show the actual adaptees. However, the following figure shows how a polymorphic call to postSale(...) on the IAccountingAdapter interface results in the sale being posted via SOAP to an SAP system.
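To make that structure concrete, here is a minimal Java sketch of the idea; the adapter and domain class names below are illustrative stand-ins, not the book's actual code:
// Target interface the domain layer depends on.
interface IAccountingAdapter {
    void postSale(Sale sale);
}

// Placeholder domain object for the sketch.
class Sale {
    // totals, line items, ...
}

// Adapter for an SAP back end reached over SOAP (details omitted).
class SAPAccountingAdapter implements IAccountingAdapter {
    @Override
    public void postSale(Sale sale) {
        // translate the Sale into the SOAP request the SAP system expects and send it
    }
}

// Adapter for another accounting package with a completely different API.
class GreatNorthernAccountingAdapter implements IAccountingAdapter {
    @Override
    public void postSale(Sale sale) {
        // map the Sale onto this vendor's proprietary calls
    }
}
The domain layer only ever sees IAccountingAdapter; which concrete adapter gets instantiated is a configuration decision.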

How to turn a French person into a normal person...
public interface IPerson
{
string Name { get; set; }
}
public interface IFrenchPerson
{
string Nom { get; set; }
}
public class Person : IPerson
{
public string Name { get; set; }
}
public class FrenchPerson : IFrenchPerson
{
public string Nom { get; set; }
}
// that is a service that we want to use with our French person
// we cannot or don't want to change the service contract
// therefore we need 'l'Adaptateur'
public class PersonService
{
public void PrintName(IPerson person)
{
Debug.Write(person.Name);
}
}
public class FrenchPersonAdapter : IPerson
{
private readonly IFrenchPerson frenchPerson;
public FrenchPersonAdapter(IFrenchPerson frenchPerson)
{
this.frenchPerson = frenchPerson;
}
public string Name
{
get { return frenchPerson.Nom; }
set { frenchPerson.Nom = value; }
}
}
Example
var service = new PersonService();
var person = new Person();
var frenchPerson = new FrenchPerson();
service.PrintName(person);
service.PrintName(new FrenchPersonAdapter(frenchPerson));

Convert one interface into another interface.
In order to connect to power, we have different plug interfaces all over the world. Using an adapter, we can connect easily; the Adapter pattern works the same way.

Here is an example that simulates converting analog data to digital data.
It provides an adapter that converts float analog data to binary digital data. It's probably not useful in the real world, but it helps explain the concept of the Adapter pattern.
Code
AnalogSignal.java
package eric.designpattern.adapter;
public interface AnalogSignal {
float[] getAnalog();
void setAnalog(float[] analogData);
void printAnalog();
}
DigitSignal.java
package eric.designpattern.adapter;
public interface DigitSignal {
byte[] getDigit();
void setDigit(byte[] digitData);
void printDigit();
}
FloatAnalogSignal.java
package eric.designpattern.adapter;
import java.util.Arrays;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class FloatAnalogSignal implements AnalogSignal {
private Logger logger = LoggerFactory.getLogger(this.getClass());
private float[] data;
public FloatAnalogSignal(float[] data) {
this.data = data;
}
@Override
public float[] getAnalog() {
return data;
}
@Override
public void setAnalog(float[] analogData) {
this.data = analogData;
}
@Override
public void printAnalog() {
logger.info("{}", Arrays.toString(getAnalog()));
}
}
BinDigitSignal.java
package eric.designpattern.adapter;
import java.util.Arrays;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class BinDigitSignal implements DigitSignal {
private Logger logger = LoggerFactory.getLogger(this.getClass());
private byte[] data;
public BinDigitSignal(byte[] data) {
this.data = data;
}
@Override
public byte[] getDigit() {
return data;
}
@Override
public void setDigit(byte[] digitData) {
this.data = digitData;
}
@Override
public void printDigit() {
logger.info("{}", Arrays.toString(getDigit()));
}
}
AnalogToDigitAdapter.java
package eric.designpattern.adapter;
import java.util.Arrays;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* <p>
* Adapter - convert analog data to digit data.
* </p>
*
* @author eric
* @date Mar 8, 2016 1:07:00 PM
*/
public class AnalogToDigitAdapter implements DigitSignal {
public static final float DEFAULT_THRESHOLD_FLOAT_TO_BIN = 1.0f; // default threshold,
private Logger logger = LoggerFactory.getLogger(this.getClass());
private AnalogSignal analogSignal;
private byte[] digitData;
private float threshold;
private boolean cached;
public AnalogToDigitAdapter(AnalogSignal analogSignal) {
this(analogSignal, DEFAULT_THRESHOLD_FLOAT_TO_BIN);
}
public AnalogToDigitAdapter(AnalogSignal analogSignal, float threshold) {
this.analogSignal = analogSignal;
this.threshold = threshold;
this.cached = false;
}
@Override
public synchronized byte[] getDigit() {
if (!cached) {
float[] analogData = analogSignal.getAnalog();
int len = analogData.length;
digitData = new byte[len];
for (int i = 0; i < len; i++) {
digitData[i] = floatToByte(analogData[i]);
}
cached = true; // remember that the conversion result is now cached
}
return digitData;
}
// not supported, should set the inner analog data instead,
@Override
public void setDigit(byte[] digitData) {
throw new UnsupportedOperationException();
}
public synchronized void setAnalogData(float[] analogData) {
invalidCache();
this.analogSignal.setAnalog(analogData);
}
public synchronized void invalidCache() {
cached = false;
digitData = null;
}
@Override
public void printDigit() {
logger.info("{}", Arrays.toString(getDigit()));
}
// float -> byte convert,
private byte floatToByte(float f) {
return (byte) (f >= threshold ? 1 : 0);
}
}
Code - Test case
AdapterTest.java
package eric.designpattern.adapter.test;
import java.util.Arrays;
import junit.framework.TestCase;
import org.junit.Test;
import eric.designpattern.adapter.AnalogSignal;
import eric.designpattern.adapter.AnalogToDigitAdapter;
import eric.designpattern.adapter.BinDigitSignal;
import eric.designpattern.adapter.DigitSignal;
import eric.designpattern.adapter.FloatAnalogSignal;
public class AdapterTest extends TestCase {
private float[] analogData = { 0.2f, 1.4f, 3.12f, 0.9f };
private byte[] binData = { 0, 1, 1, 0 };
private float[] analogData2 = { 1.2f, 1.4f, 0.12f, 0.9f };
@Test
public void testAdapter() {
AnalogSignal analogSignal = new FloatAnalogSignal(analogData);
analogSignal.printAnalog();
DigitSignal digitSignal = new BinDigitSignal(binData);
digitSignal.printDigit();
// adapter
AnalogToDigitAdapter adAdapter = new AnalogToDigitAdapter(analogSignal);
adAdapter.printDigit();
assertTrue(Arrays.equals(digitSignal.getDigit(), adAdapter.getDigit()));
adAdapter.setAnalogData(analogData2);
adAdapter.printDigit();
assertFalse(Arrays.equals(digitSignal.getDigit(), adAdapter.getDigit()));
}
}
Dependence - via maven
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.8.2</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.13</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>1.7.13</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.16</version>
</dependency>
How to test
Just run the unit test.

The Adapter pattern works as a bridge between two incompatible interfaces.
This pattern involves a single class, the adapter, which is
responsible for the communication between two independent or incompatible
interfaces.
Real-world examples might be a language translator or a mobile charger. More in this YouTube video:
YouTube - Adapter Design Pattern: Introduction
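As a rough Java sketch of the charger analogy (all names here are invented for illustration):
// Target: the 5V USB supply the phone expects.
interface UsbPower {
    int outputVoltage();
}

// Adaptee: the wall socket, which only offers mains voltage.
class WallSocket {
    int mainsVoltage() {
        return 230;
    }
}

// Adapter: presents the wall socket through the UsbPower interface.
class ChargerAdapter implements UsbPower {
    private final WallSocket socket;

    ChargerAdapter(WallSocket socket) {
        this.socket = socket;
    }

    @Override
    public int outputVoltage() {
        // "step down" the incoming mains voltage to what USB devices expect
        return socket.mainsVoltage() > 0 ? 5 : 0;
    }
}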

You can use the Adapter design pattern when you have to deal with different interfaces with similar behavior (which usually means classes with similar behavior but different methods). An example would be a class to connect to a Samsung TV and another one to connect to a Sony TV. They share common behavior like opening the menu, starting playback, connecting to a network, etc., but each library has a different implementation of it (with different method names and signatures). These different vendor-specific implementations are called Adaptees in the UML diagrams.
So, in your code (called the Client in the UML diagrams), instead of hard-coding the method calls of each vendor (or Adaptee), you can create a generic interface (called the Target in the UML diagrams) to wrap these similar behaviors and work with only one type of object.
The Adapters then implement the Target interface and delegate its method calls to the Adaptees, which are passed to the Adapters via their constructors.
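Before looking at the full project linked below, here is a compressed Java sketch of that arrangement; the vendor API is a hypothetical stand-in for the real libraries:
// Target: the generic interface the client code works against.
interface SmartTv {
    void openMenu();
    void startPlayback();
}

// Adaptee: one vendor's library with its own method names.
class SonyTvApi {
    void showMenuScreen() { System.out.println("Sony menu"); }
    void play()           { System.out.println("Sony playback"); }
}

// Adapter: implements the Target and delegates to the Adaptee injected via the constructor.
class SonyTvAdapter implements SmartTv {
    private final SonyTvApi api;

    SonyTvAdapter(SonyTvApi api) {
        this.api = api;
    }

    @Override
    public void openMenu() { api.showMenuScreen(); }

    @Override
    public void startPlayback() { api.play(); }
}

// A SamsungTvAdapter wrapping a SamsungTvApi would follow exactly the same shape,
// and the client keeps calling openMenu()/startPlayback() on SmartTv either way.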
To see this in Java code, I wrote a very simple project using exactly the same example mentioned above, using adapters to deal with multiple smart TV interfaces. The code is small, well documented and self-explanatory, so dig into it to see what a real-world implementation looks like.
Just download the code and import it into Eclipse (or your favorite IDE) as a Maven project. You can execute the code by running org.example.Main.java. Remember that the important thing here is to understand how classes and interfaces are assembled together to form the pattern. I also created some fake Adaptees in the package com.thirdparty.libs. Hope it helps!
https://github.com/Dannemann/java-design-patterns

The Adapter design pattern helps convert the interface of one class into the interface a client expects.
Example:
You have a service which returns the weather (in Celsius) when a city name is passed as an input value. Now assume that your client wants to pass a zip code as input and expects the temperature of the city in return. Here you need an adapter to achieve this.
public interface IWetherFinder {
public double getTemperature(String cityName);
}
class WeatherFinder implements IWetherFinder {
@Override
public double getTemperature(String cityName){
return 40;
}
}
interface IWeatherFinderClient
{
public double getTemperature(String zipcode);
}
public class WeatherAdapter implements IWeatherFinderClient {
@Override
public double getTemperature(String zipcode) {
//method to get cityname by zipcode
String cityName = getCityName(zipcode);
//invoke actual service
IWetherFinder wetherFinder = new WeatherFinder();
return wetherFinder.getTemperature(cityName);
}
private String getCityName(String zipCode) {
return "Bangalore";
}
}

One real example is Qt-DBus.
Qt-DBus has a utility to generate adaptor and interface code from a provided XML file. Here are the steps to do so.
1. Create the XML file. This XML file should contain the interfaces that can be viewed with qdbus-view on either the system or the session bus.
2. With the utility qdbusxml2cpp, you generate the interface adaptor code. This interface adaptor does the demarshalling of the data received from the client. After demarshalling, it invokes the user-defined custom methods (which we can call the adaptee).
3. At the client side, we generate the interface from the XML file. This interface is invoked by the client. The interface does the marshalling of the data and invokes the adaptor interface. As described in point 2, the adaptor interface does the demarshalling and calls the adaptee, i.e. the user-defined methods.
You can see the complete example of Qt-DBus over here:
http://www.tune2wizard.com/linux-qt-signals-and-slots-qt-d-bus/

Use Adapter when you have an interface you cannot change, but which you need to use. Think of it as being the new guy in an office: you can't make the gray-hairs follow your rules, so you must adapt to theirs. Here is a real example from a project I worked on some time ago, where the user interface was a given.
You have an application that reads all the lines in a file into a List data structure and displays them in a grid (let's call the underlying data store interface IDataStore). The user can navigate through this data by clicking the buttons "First page", "Previous page", "Next page", "Last page". Everything works fine.
Now the application needs to be used with production logs which are too big to read into memory but the user still needs to navigate through it! One solution would be to implement a Cache that stores the first page, next, previous and last pages. What we want is when the user clicks "Next page", we return the page from the cache and update the cache; when they click last page, we return last page from cache. In the background we have a filestream doing all the magic. By so doing we only have four pages in memory as opposed to the entire file.
You can use an adapter to add this new cache feature to your application without the user noticing it. We extend the current IDataStore and call it CacheDataStore. If the file to load is big, we use CacheDataStore. When we make a request for First, Next, Previous and Last pages, the information is routed to our Cache.
And who knows, tomorrow the boss wants to start reading the files from a database table. All you do is still extend IDataStore to SQLDataStore as you did for Cache, setup the connection in the background. When they click Next page, you generate the necessary sql query to fetch the next couple hundred rows from the database.
Essentially, the original interface of the application did not change. We simply adapted modern and cool features to work with it while preserving the legacy interface.
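A rough Java sketch of that design, with invented names since the original project code isn't shown; the point is that the grid only ever talks to IDataStore:
import java.util.List;

// The interface the rest of the application has always used.
interface IDataStore {
    List<String> firstPage();
    List<String> nextPage();
}

// Adaptee: streams a huge log file page by page instead of loading it all.
class LogFileReader {
    List<String> readLines(long startLine, int count) {
        // seek into the file and read 'count' lines starting at 'startLine'
        return List.of();
    }
}

// Adapter: presents the familiar IDataStore interface, backed by the
// streaming reader and a small page cache instead of the whole file.
class CacheDataStore implements IDataStore {
    private static final int PAGE_SIZE = 200;
    private final LogFileReader reader;
    private long position = 0;

    CacheDataStore(LogFileReader reader) {
        this.reader = reader;
    }

    @Override
    public List<String> firstPage() {
        position = 0;
        return reader.readLines(position, PAGE_SIZE);
    }

    @Override
    public List<String> nextPage() {
        position += PAGE_SIZE;
        return reader.readLines(position, PAGE_SIZE);
    }
}
An SQLDataStore would implement the same methods by issuing paged queries instead of file reads.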

You can find a PHP implementation of the Adapter pattern used as a defense against injection attacks here:
http://www.php5dp.com/category/design-patterns/adapter-composition/
One of the interesting aspects of the Adapter pattern is that it comes in two flavors: A class adapter relying on multiple inheritance and an object adapter relying on composition. The above example relies on composition.

@Justice o's example does not explain the Adapter pattern clearly. Extending his answer:
We have an existing interface, IDataStore, that our consumer code uses, and we cannot change it. Now we are asked to use a cool new class from the XYZ library that does what we want to implement, but we cannot change that class to implement our IDataStore. See the problem already?
By creating a new class, the ADAPTER, that implements the interface our consumer code expects (IDataStore), and by holding the class from the library whose features we need, the ADAPTEE, as a member of our ADAPTER, we can achieve what we wanted, as sketched below.
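In code the idea looks roughly like this; XyzStore stands in for the hypothetical library class we cannot modify:
// ADAPTEE: the class from the XYZ library; we cannot make it implement IDataStore.
class XyzStore {
    String fetchValue(String key) {
        return "value for " + key;
    }
}

// Existing interface our consumer code depends on and that we cannot change.
interface IDataStore {
    String read(String key);
}

// ADAPTER: implements the interface consumers expect and delegates to the adaptee it holds.
class XyzDataStoreAdapter implements IDataStore {
    private final XyzStore store;

    XyzDataStoreAdapter(XyzStore store) {
        this.store = store;
    }

    @Override
    public String read(String key) {
        return store.fetchValue(key);
    }
}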

As per the book “C# 3.0 Design Patterns” by Judith Bishop, Apple used the Adapter pattern to adapt Mac OS to work with Intel products (explained in Chapter 4, excerpt here).
C# 3.0 Design Patterns
Structural Patterns: Adapter and Façade

An example from the Yii framework: Yii internally uses a cache through the interface
ICache.
https://www.yiiframework.com/doc/api/1.1/ICache
whose signature looks like this:
abstract public boolean set(string $id, mixed $value, integer $expire=0, ICacheDependency $dependency=NULL)
abstract public mixed get(string $id)
Let's say you would like to use the Symfony cache library
https://packagist.org/packages/symfony/cache
inside a Yii project, with its cache interface, by defining this service in the Yii service components (service locator) configuration.
https://github.com/symfony/cache-contracts/blob/master/CacheInterface.php
public function get(string $key, callable $callback, float $beta = null, array &$metadata = null);
We see that the Symfony cache has an interface with only a get method, missing a set method and with a different signature for get, as Symfony also uses the get method as a setter when the second callable parameter is supplied.
As the Yii core internally uses this Yii cache interface, it's difficult (extending Yii/YiiBase), if not impossible in places, to rewrite the calls to that interface.
Plus, the Symfony cache is not our class, so we can't rewrite its interface to fit the Yii cache interface.
So here the Adapter pattern comes to the rescue. We will write a mapping, an intermediate adapter, which maps the Yii cache interface calls to the Symfony cache interface.
It would look like this:
class YiiToSymfonyCacheAdapter implements \Yii\system\caching\ICache
{
    private \Symfony\Contracts\Cache\CacheInterface $symfonyCache;

    public function __construct(\Symfony\Contracts\Cache\CacheInterface $symfonyCache)
    {
        $this->symfonyCache = $symfonyCache;
    }

    public function set($id, $value, $expire = 0, $dependency = null)
    {
        // https://symfony.com/doc/current/cache.html
        return $this->symfonyCache->get(
            $id,
            function ($item) use ($value) {
                // some logic ..
                return $value;
            }
        );
        // https://github.com/symfony/cache/blob/master/Adapter/MemcachedAdapter.php
        // if a class could be called statically, the adapter could also call it statically, e.g.:
        // return \Symfony\Component\Cache\Adapter\MemcachedAdapter::get(
        //     $id,
        //     function ($item) use ($value) {
        //         // some logic ..
        //         return $value;
        //     }
        // );
    }

    public function get($id)
    {
        // https://github.com/symfony/cache/blob/master/Adapter/FilesystemAdapter.php
        // if a class could be called statically, the adapter could also call it statically, e.g.:
        // \Symfony\Component\Cache\Adapter\FilesystemAdapter::get($id)
        // note: Symfony's CacheInterface::get() requires a callback, used here as a no-op fallback
        return $this->symfonyCache->get($id, function ($item) {
            return null;
        });
    }
}

A real example can be reporting documents in an application. Here is some simple code. I think adapters are very useful for structuring programs.
class WordAdaptee implements IReport{
public void report(String s) {
System.out.println(s +" Word");
}
}
class ExcellAdaptee implements IReport{
public void report(String s) {
System.out.println(s +" Excel");
}
}
class ReportAdapter implements IReport{
WordAdaptee wordAdaptee=new WordAdaptee();
@Override
public void report(String s) {
wordAdaptee.report(s);
}
}
interface IReport {
public void report(String s);
}
public class Main {
public static void main(String[] args) {
// create the interface that the client wants
IReport iReport = new ReportAdapter();
// we want to write a report from both Excel and Word
iReport.report("Trial report1 with one adaptee"); // we can write the report directly if only one adaptee is available
// assume there are N adaptees, as in our example
IReport[] iReport2 = { new ExcellAdaptee(), new WordAdaptee() };
// here we can use polymorphism
for (int i = 0; i < iReport2.length; i++) {
iReport2[i].report("Trial report 2");
}
}
}
Results will be:
Trial report1 with one adaptee Word
Trial report 2 Excel
Trial report 2 Word

This is an example of adapter implementation:
interface NokiaInterface {
chargementNokia(x:boolean):void
}
class SamsungAdapter implements NokiaInterface {
// Nokia charging adapted to Samsung
chargementNokia(x:boolean){
const old= new SamsungCharger();
let y:number = x ? 20 : 1;
old.charge(y);
}
}
class SamsungCharger {
charge(x:number){
console.log("charging x ==>", x);
}
}
function main() {
// charge a Samsung with a Nokia charger
const adapter = new SamsungAdapter();
adapter.chargementNokia(true);
}

Related

How to architect an embedded system with multiple input and output capabilities. Some based on hardware, some on software settings

I have an ESP8266 project programmed in the Arduino framework that gathers data from the network and then displays it on a display. The device can be built with a few different display hardware types (eink, led, oled). These are set at compile time with #defines. However, there are also a few different types of data and data transport mechanisms that can be used. Some require hardware (LoRa TX/RX) and are enabled at compile time, but some can be changed at runtime based on user settings (e.g. HTTP or MQTT).
I'm already using a factory design pattern to instantiate the Data transport object at runtime but still use compile time build flags to select which display hardware to use. I have a Display class, a Datasource class and a Config class. This has worked well but is now reaching its limit as I try to add Cellular functionality.
I wonder if there is a good design pattern / architecture design that will facilitate this kind of flexibility without having to keep adding more and more intrusive #ifdef statements all over my code.
Attached is a little mind map of the basic layout of possibilities of this device.
If you want to decide at runtime which algorithm should be injected, then you can try the Strategy pattern.
As wiki says about strategy pattern:
In computer programming, the strategy pattern (also known as the
policy pattern) is a behavioral software design pattern that enables
selecting an algorithm at runtime. Instead of implementing a single
algorithm directly, code receives run-time instructions as to which in
a family of algorithms to use
So you can read your config file and choose what object should be instantiated. For example, you have many displays:
public enum DisplayMark
{
Samsung, Sony, Dell
}
and then you should create a base class Display:
public abstract class Display
{
public abstract string Show();
}
And then you need concrete implementations of this base class Display:
public class SamsungDisplay : Display
{
public override string Show()
{
return "I am Samsung";
}
}
public class SonyDisplay : Display
{
public override string Show()
{
return "I am Sony";
}
}
public class DellDisplay : Display
{
public override string Show()
{
return "I am Dell";
}
}
So far, so good. Now we need something like a mapper which is responsible for returning the correct instance for the display selected in the config:
public class DisplayFactory
{
public Dictionary<DisplayMark, Display> DisplayByMark { get; private set; }
= new Dictionary<DisplayMark, Display>
{
{ DisplayMark.Sony, new SonyDisplay()},
{ DisplayMark.Samsung, new SamsungDisplay()},
{ DisplayMark.Dell, new DellDisplay()},
};
}
and then, when you know from the config file which display should be used, you can get the desired instance:
public void UseDisplay(DisplayMark displayMark)
{
DisplayFactory displayFactory = new DisplayFactory();
Display display = displayFactory.DisplayByMark[displayMark];
// Here you can use your desired display class
display.Show();
}

Is there a complete JUnit 5 extension example that demonstrates the proper way to maintain state (e.g. WebServerExtension.java from guide)

The main WebServerExtension example from the JUnit5 manual is incomplete and it doesn't fully show how to properly store the configuration (e.g. enableSecurity, server url).
https://github.com/junit-team/junit5/blob/master/documentation/src/main/java/example/registration/WebServerExtension.java
The example ignores or hard-codes the values. The manual (section 5.11, Keeping State in Extensions) implies that the "Store" should be used, but the ExtensionContext is not yet available when the object is constructed -- it's not clear how to migrate this data to the Store given that the ExtensionContext is not available in the constructor.
Also, it's not clear to me that using the Store API for the programmatic WebServerExtension example is even desirable; perhaps it could work just using internal state (e.g. this.serverUrl, this.enableSecurity, etc.).
Maybe the Store is more applicable to extensions which don't use this "programmatic" style, where multiple instances of the custom extension may exist (appropriately)? In other words, it's not clear to me from the guide whether this is a supported paradigm or not.
Other JUnit 5 extension examples online (e.g. org.junit.jupiter.engine.extension.TempDirectory) show how to leverage annotations to handle passing configuration info to the Store but it would be nice if there were a complete programmatic builder type example like WebServerExtension too.
Examples like TempDirectory clearly have access to the ExtensionContext from the beforeXXX() methods whereas the WebServerExtension example does not.
The following approach seems to work fine, but I wanted confirmation that this is a supported paradigm (i.e. using fields instead of Stores when using this programmatic approach).
public class WebServerExtension implements BeforeAllCallback {
private final boolean securityEnabled;
private final String serverUrl;
public WebServerExtension(Builder builder) {
this.securityEnabled = builder.enableSecurity;
this.serverUrl = builder.serverUrl;
}
@Override
public void beforeAll(ExtensionContext context) {
// is it ok to use this.securityEnabled, this.serverUrl instead of Store API???
}
public String getServerUrl() {
return this.serverUrl;
}
public boolean isSecurityEnabled() {
return this.securityEnabled;
}
public static Builder builder() {
return new Builder();
}
public static class Builder {
private boolean enableSecurity;
private String serverUrl;
public Builder enableSecurity(boolean b) {
this.enableSecurity = b;
return this;
}
public Builder serverUrl(String url) {
this.serverUrl = url;
return this;
}
public WebServerExtension build() {
return new WebServerExtension(this);
}
}
}
Thanks!

JavaFX Wrap an Existing Object with Simple Properties

I am writing a new app and I have chosen to use Java for flexibility. It is a GUI app so I will use JavaFX. This is my first time using Java but I have experience with C#.
I am getting familiar with JavaFX Properties, they look like a great way of bi-directional binding between front-end and back-end.
My code uses classes from an open-source API, and I would like to convert the members of these classes to JavaFX Properties (String => StringProperty, etc). I believe this would be transparent to any objects that refer to these members.
Is it ok to do this?
Is it the suggested way of dealing with existing classes?
What do I do about Enum types? E.g. if an enum member has its value changed, how should I connect the enum member to the front-end?
Thank you :)
In general, as long as you don't change the public API of the class - in other words you don't remove any public methods, modify their parameter types or return types, or change their functionality - you should not break any code that uses them.
So, e.g. a change from
public class Foo {
private String bar ;
public String getBar() {
return bar ;
}
public void setBar(String bar) {
this.bar = bar ;
}
}
to
public class Foo {
private final StringProperty bar = new SimpleStringProperty();
public StringProperty barProperty() {
return bar ;
}
public String getBar() {
return barProperty().get();
}
public void setBar(String bar) {
barProperty().set(bar);
}
}
should not break any clients of the class Foo. The only possible problem is that classes that have subclassed Foo and overridden getBar() and/or setBar(...) might get unexpected behavior if their superclass is replaced with the new implementation (specifically, if getBar() and setBar(...) are not final, you have no way to enforce that getBar()==barProperty().get(), which is desirable).
For enums (and other objects) you can use an ObjectProperty<>:
Given
public enum Option { FIRST_CHOICE, SECOND_CHOICE, THIRD_CHOICE }
Then you can do
public class Foo {
private final ObjectProperty<Option> option = new SimpleObjectProperty<>();
public ObjectProperty<Option> optionProperty() {
return option ;
}
public Option getOption() {
return optionProperty().get();
}
public void setOption(Option choice) {
optionProperty().set(choice);
}
}
One caveat to all this is that you do introduce a dependency on the JavaFX API that wasn't previously present in these classes. JavaFX ships with the Oracle JDK, but it is not a full part of the JSE (e.g. it is not included in OpenJDK by default, and not included in some other JSE implementations). So in practice, you're highly unlikely to be able to persuade the developers of the open source library to accept your changes to the classes in the library. Since it's open source, you can of course maintain your own fork of the library with JavaFX properties, but then it will get tricky if you want to incorporate new versions of that library (you will need to merge two different sets of changes, essentially).
Another option is to use bound properties in the classes, and wrap them using a Java Bean Property Adapter. This is described in this question.

Looking for a Ninject scope that behaves like InRequestScope

On my service layer I have injected a UnitOfWork and 2 repositories in the constructor. The Unit of Work and repositories have an instance of a DbContext that I want to share between them. How can I do that with Ninject? Which scope should be considered?
I am not in a web application so I can't use InRequestScope.
I'm trying to do something similar... I am using DI; however, I need my UoW to be created and disposed like this.
using (IUnitOfWork uow = new UnitOfWorkFactory.Create())
{
_testARepository.Insert(a);
_testBRepository.Insert(b);
uow.SaveChanges();
}
EDIT: I just want to be sure I understand… after looking at https://github.com/ninject/ninject.extensions.namedscope/wiki/InNamedScope I thought about my current console application architecture, which actually uses Ninject.
Let's say:
Class A is a service layer class.
Class B is a unit of work which takes an interface (IContextFactory) as a parameter.
Class C is a repository which takes an interface (IContextFactory) as a parameter.
The idea here is to be able to do context operations on 2 or more repositories and use the unit of work to apply the changes.
Class D is a context factory (Entity Framework) which provides an instance (kept in a container) of the context that is shared between Class B and C (and would be for other repositories as well).
The context factory keeps the instance in its container, but I don't want to reuse this instance all the time since the context needs to be disposed at the end of the service operation.. is that the main purpose of InNamedScope, actually?
The solution would be the following, but I am not at all sure I am doing it right; the service instances are going to be transient, which means they are actually never disposed?
Bind<IScsContextFactory>()
.To<ScsContextFactory>()
.InNamedScope("ServiceScope")
.WithConstructorArgument(
"connectionString",
ConfigurationUtility.GetConnectionString());
Bind<IUnitOfWork>().To<ScsUnitOfWork>();
Bind<IAccountRepository>().To<AccountRepository>();
Bind<IBlockedIpRepository>().To<BlockedIpRepository>();
Bind<IAccountService>().To<AccountService>().DefinesNamedScope("ServiceScope");
Bind<IBlockedIpService>().To<BlockedIpService>().DefinesNamedScope("ServiceScope");
UPDATE: This approach works against the current NuGet release, but relies on an anomaly in the InCallScope implementation which has been fixed in the current Unstable NuGet packages. I'll be tweaking this answer in a few days to reflect the best approach after some mulling over. NB the high-level way of structuring stuff will stay pretty much identical; just the exact details of the Bind<DbContext>() scoping will change. (Hint: CreateNamedScope in unstable would work, or one could set up the Command Handler as DefinesNamedScope. The reason I don't just do that is that I want to have something that composes/plays well with InRequestScope.)
I highly recommend reading the Ninject.Extensions.NamedScope integration tests (seriously, find them and read and re-read them)
The DbContext is a Unit Of Work so no further wrapping is necessary.
As you want to be able to have multiple 'requests' in flight and want to have a single Unit of Work shared between them, you need to:
Bind<DbContext>()
.ToMethod( ctx =>
new DbContext(
connectionStringName: ConfigurationUtility.GetConnectionString() ))
.InCallScope();
The InCallScope() means that:
for a given object graph composed for a single kernel.Get() Call (hence In Call Scope), everyone that requires a DbContext will get the same instance.
the IDisposable.Dispose() will be called when a Kernel.Release() happens for the root object (or a Kernel.Components.Get<ICache>().Clear() happens for the root if it is not .InCallScope())
There should be no reason to use InNamedScope() and DefinesNamedScope(); You don't have long-lived objects you're trying to exclude from the default pooling / parenting / grouping.
If you do the above, you should be able to:
var command = kernel.Get<ICommand>();
try {
command.Execute();
} finally {
kernel.Components.Get<ICache>().Clear( command ); // Dispose of DbContext happens here
}
The Command implementation looks like:
class Command : ICommand {
readonly IAccountRepository _ar;
readonly IBlockedIpRepository _br;
readonly DbContext _ctx;
public Command(IAccountRepository ar, IBlockedIpRepository br, DbContext ctx){
_ar = ar;
_br = br;
_ctx = ctx;
}
void ICommand.Execute(){
_ar.Insert(a);
_br.Insert(b);
_ctx.SaveChanges();
}
}
Note that in general, I avoid having an implicit Unit of Work in this way, and instead surface its creation and disposal. This makes a Command look like this:
class Command : ICommand {
readonly IAccountService _as;
readonly IBlockedIpService _bs;
readonly Func<DbContext> _createContext;
public Command(IAccountService @as, IBlockedIpService bs, Func<DbContext> createContext){
_as = @as;
_bs = bs;
_createContext = createContext;
}
void ICommand.Execute(){
using(var ctx = _createContext()) {
_as.InsertA(ctx);
_bs.InsertB(ctx);
ctx.SaveChanges();
}
}
}
This involves no usage of .InCallScope() on the Bind<DbContext>() (but does require the presence of Ninject.Extensions.Factory's FactoryModule to synthesize the Func<DbContext> from a straightforward Bind<DbContext>()).
As discussed in the other answer, InCallScope is not a good approach to solving this problem.
For now I'm dumping some code that works against the latest NuGet Unstable / Include PreRelease / Install-Package -Pre editions of Ninject.Web.Common without a clear explanation. I have started to write a walkthrough of this technique in the Ninject.Extensions.NamedScope wiki's CreateNamedScope/GetScope article.
Possibly some bits will become Pull Request(s) at some stage too (hat tip to @Remo Gloor, who supplied me the outline code). The associated tests and learning tests are in this gist for now, pending packaging in a proper released format TBD.
The exec summary is you Load the Module below into your Kernel and use .InRequestScope() on everything you want created / Disposed per handler invocation and then feed requests through via IHandlerComposer.ComposeCallDispose.
If you use the following Module:
public class Module : NinjectModule
{
public override void Load()
{
Bind<IHandlerComposer>().To<NinjectRequestScopedHandlerComposer>();
// Wire it up so InRequestScope will work for Handler scopes
Bind<INinjectRequestHandlerScopeFactory>().To<NinjectRequestHandlerScopeFactory>();
NinjectRequestHandlerScopeFactory.NinjectHttpApplicationPlugin.RegisterIn( Kernel );
}
}
Which wires in a Factory[1] and NinjectHttpApplicationPlugin that exposes:
public interface INinjectRequestHandlerScopeFactory
{
NamedScope CreateRequestHandlerScope();
}
Then you can use this Composer to Run a Request InRequestScope():
public interface IHandlerComposer
{
void ComposeCallDispose( Type type, Action<object> callback );
}
Implemented as:
class NinjectRequestScopedHandlerComposer : IHandlerComposer
{
readonly INinjectRequestHandlerScopeFactory _requestHandlerScopeFactory;
public NinjectRequestScopedHandlerComposer( INinjectRequestHandlerScopeFactory requestHandlerScopeFactory )
{
_requestHandlerScopeFactory = requestHandlerScopeFactory;
}
void IHandlerComposer.ComposeCallDispose( Type handlerType, Action<object> callback )
{
using ( var resolutionRoot = _requestHandlerScopeFactory.CreateRequestHandlerScope() )
foreach ( object handler in resolutionRoot.GetAll( handlerType ) )
callback( handler );
}
}
The Ninject Infrastructure stuff:
class NinjectRequestHandlerScopeFactory : INinjectRequestHandlerScopeFactory
{
internal const string ScopeName = "Handler";
readonly IKernel _kernel;
public NinjectRequestHandlerScopeFactory( IKernel kernel )
{
_kernel = kernel;
}
NamedScope INinjectRequestHandlerScopeFactory.CreateRequestHandlerScope()
{
return _kernel.CreateNamedScope( ScopeName );
}
/// <summary>
/// When plugged in as a Ninject Kernel Component via <c>RegisterIn(IKernel)</c>, makes the Named Scope generated during IHandlerFactory.RunAndDispose available for use via the Ninject.Web.Common's <c>.InRequestScope()</c> Binding extension.
/// </summary>
public class NinjectHttpApplicationPlugin : NinjectComponent, INinjectHttpApplicationPlugin
{
readonly IKernel kernel;
public static void RegisterIn( IKernel kernel )
{
kernel.Components.Add<INinjectHttpApplicationPlugin, NinjectHttpApplicationPlugin>();
}
public NinjectHttpApplicationPlugin( IKernel kernel )
{
this.kernel = kernel;
}
object INinjectHttpApplicationPlugin.GetRequestScope( IContext context )
{
// TODO PR for TrgGetScope
try
{
return NamedScopeExtensionMethods.GetScope( context, ScopeName );
}
catch ( UnknownScopeException )
{
return null;
}
}
void INinjectHttpApplicationPlugin.Start()
{
}
void INinjectHttpApplicationPlugin.Stop()
{
}
}
}

Composition, I don't quite get this?

Referring to the below link:
http://www.javaworld.com/javaworld/jw-11-1998/jw-11-techniques.html?page=2
The composition approach to code reuse provides stronger encapsulation
than inheritance, because a change to a back-end class needn't break
any code that relies only on the front-end class. For example,
changing the return type of Fruit's peel() method from the previous
example doesn't force a change in Apple's interface and therefore
needn't break Example2's code.
Surely if you change the return type of peel() (see code below) this means getPeelCount() wouldn't be able to return an int any more? Wouldn't you have to change the interface, or get a compiler error otherwise?
class Fruit {
// Return int number of pieces of peel that
// resulted from the peeling activity.
public int peel() {
System.out.println("Peeling is appealing.");
return 1;
}
}
class Apple {
private Fruit fruit = new Fruit();
public int peel() {
return fruit.peel();
}
}
class Example2 {
public static void main(String[] args) {
Apple apple = new Apple();
int pieces = apple.peel();
}
}
With composition, changing the class Fruit doesn't necessarily require you to change Apple. For example, let's change peel to return a double instead:
class Fruit {
// Return double number of pieces of peel that
// resulted from the peeling activity.
public double peel() {
System.out.println("Peeling is appealing.");
return 1.0;
}
}
Now the class Apple will complain about a loss of precision, but your Example2 class will be just fine, because composition is more "loose" and a change in a composed element does not break the composing class's API. In our example, just change Apple like so:
class Apple {
private Fruit fruit = new Fruit();
public int peel() {
return (int) fruit.peel();
}
}
Whereas if Apple inherited from Fruit (class Apple extends Fruit), you would not only get an error about an incompatible return type method, but you'd also get a compilation error in Example2.
** Edit **
Let's start this over and give a "real world" example of composition vs. inheritance. Note that composition is not limited to this example, and there are more use cases where you can use the pattern.
Example 1 : inheritance
An application draw shapes into a canvas. The application does not need to know which shapes it has to draw and the implementation lies in the concrete class inheriting the abstract class or interface. However, the application knows what and how many different concrete shapes it can create, thus adding or removing concrete shapes requires some refactoring in the application.
interface Shape {
public void draw(Graphics g);
}
class Box implements Shape {
...
public void draw(Graphics g) { ... }
}
class Ellipse implements Shape {
...
public void draw(Graphics g) { ... }
}
class ShapeCanvas extends JPanel {
private List<Shape> shapes;
...
protected void paintComponent(Graphics g) {
for (Shape s : shapes) { s.draw(g); }
}
}
Example 2 : Composition
An application is using a native library to process some data. The actual library implementation may or may not be known, and may or may not change in the future. A public interface is thus created and the actual implementation is determined at run-time. For example :
interface DataProcessorAdapter {
...
public Result process(Data data);
}
class DataProcessor {
private DataProcessorAdapter adapter;
public DataProcessor() {
try {
adapter = DataProcessorManager.createAdapter();
} catch (Exception e) {
throw new RuntimeException("Could not load processor adapter");
}
}
public Object process(Object data) {
return adapter.process(data);
}
}
static class DataProcessorManager {
static public DataProcessorAdapter createAdapter() throws ClassNotFoundException, InstantiationException, IllegalAccessException {
String adapterClassName = /* load class name from resource bundle */;
Class<?> adapterClass = Class.forName(adapterClassName);
DataProcessorAdapter adapter = (DataProcessorAdapter) adapterClass.newInstance();
//...
return adapter;
}
}
So, as you can see, composition may offer some advantages over inheritance in the sense that it allows more flexibility in the code. It allows the application to have a solid API while the underlying implementation may still change during its life cycle. Composition can significantly reduce the cost of maintenance if properly used.
For example, when implementing test cases with JUnit for Example 2, you may want to use a dummy processor and would set up the DataProcessorManager to return such an adapter, while using a "real" adapter (perhaps OS-dependent) in production without changing the application source code. Using inheritance, you would most likely hack something up, or perhaps write a lot more initialization test code.
As you can see, composition and inheritance differ in many respects and neither is preferred over the other; each depends on the problem at hand. You could even mix inheritance and composition, for example:
static interface IShape {
public void draw(Graphics g);
}
static class Shape implements IShape {
private IShape shape;
public Shape(Class<? extends IShape> shape) throws InstantiationException, IllegalAccessException {
this.shape = (IShape) shape.newInstance();
}
public void draw(Graphics g) {
System.out.print("Drawing shape : ");
shape.draw(g);
}
}
static class Box implements IShape {
@Override
public void draw(Graphics g) {
System.out.println("Box");
}
}
static class Ellipse implements IShape {
@Override
public void draw(Graphics g) {
System.out.println("Ellipse");
}
}
static public void main(String...args) throws InstantiationException, IllegalAccessException {
IShape box = new Shape(Box.class);
IShape ellipse = new Shape(Ellipse.class);
box.draw(null);
ellipse.draw(null);
}
Granted, this last example is not clean (meaning, avoid it), but it shows how composition can be used.
The bottom line is that both examples, DataProcessor and Shape, are "solid" classes, and their API should not change. However, the adapter classes may change, and if they do, these changes should only affect their composing container, thus limiting the maintenance to only these classes and not the entire application, as opposed to Example 1, where any change requires more changes throughout the application. It all depends on how flexible your application needs to be.
If you changed Fruit.peel()'s return type, you would have to modify Apple.peel() as well, but you wouldn't have to change Apple's interface.
Remember: the interface is only the method names and their signatures, NOT the implementation.
Say you changed Fruit.peel() to return a boolean instead of an int. Then you could still let Apple.peel() return an int. So the interface of Apple stays the same, but Fruit's changed.
If you had used inheritance, that would not be possible: since Fruit.peel() now returns a boolean, Apple.peel() has to return a boolean, too. So all code that uses Apple.peel() has to be changed as well. In the composition example, ONLY Apple.peel()'s code has to be changed.
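A quick sketch of the boolean variant described above; only Apple.peel()'s body changes, while its signature and callers do not:
class Fruit {
    // Changed: now reports whether peeling succeeded instead of a piece count.
    public boolean peel() {
        System.out.println("Peeling is appealing.");
        return true;
    }
}

class Apple {
    private Fruit fruit = new Fruit();

    // Same signature as before, so Example2 keeps compiling unchanged.
    public int peel() {
        return fruit.peel() ? 1 : 0;
    }
}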
The key word in the sentence is "interface".
You'll almost always need to change the Apple class in some way to accommodate the new return type of Fruit.peel, but you don't need to change its public interface if you use composition rather than inheritance.
If Apple is a Fruit (i.e., inheritance) then any change to the public interface of Fruit necessitates a change to the public interface of Apple too. If Apple has a Fruit (i.e., composition) then you get to decide how to accommodate any changes to the Fruit class; you're not forced to change your public interface if you don't want to.
The return type of Fruit.peel() is being changed from int to Peel. This doesn't mean that the return type of Apple.peel() is forced to change to Peel as well. In the case of inheritance, it is forced and any client using Apple has to be changed. In the case of composition, Apple.peel() still returns an integer, by calling the Peel.getPeelCount() getter, and hence the client need not be changed and Apple's interface is not changed (or forced to change).
Well, in the composition case, Apple.peel()'s implementation needs to be updated, but its method signature can stay the same. And that means the client code (which uses Apple) does not have to be modified, retested, and redeployed.
This is in contrast to inheritance, where a change in Fruit.peel()'s method signature would require changes all the way into the client code.