I don't seem to find this in usage scenarios for the visitor pattern (or maybe I don't get it). It's also not hierarchical.
Let's use an authentication example. A UserAuthenticator authenticates credentials given by a user. It returns a result object. The result object contains the result of the authentication: authentication succeeded, not succeeded because username was not found, not succeeded because illegal characters were used etc. Client code may resort to conditionals to handle this.
In pseudocode:
AuthResult = UserAuthenticator.authenticate(Username, Password)
if AuthResult.isAuthenticated: do something
else if AuthResult.AuthFailedBecauseUsernameNotFound: do something else
else if etc...
Would a visitor pattern fit here? Something like:
AuthResult.acceptVisitor(AuthVisitor)
AuthResult then calls a method on AuthVisitor depending on the result:
AuthVisitor.handleNotAuthenticatedBecauseUsernameNotFound
I would not recommend using patterns for an intent they were not made for.
The intents of the Visitor pattern are:
Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates.
The classic technique for recovering lost type information.
Do the right thing based on the type of two objects.
Double dispatch
This solution would be useful if you had planned to support various authentication methods, but if you plan on only having one, you'll have to use conditionals anyway.
Visitor is a valuable design when your data doesn't change as fast as your behaviour. A typical example is a parse tree:
your class hierarchy (your data) is frozen
your behaviour varies a lot, and you don't want to break your classes by adding yet another virtual method
I don't think that a Visitor is a valuable solution here, since each time you add a subclass of AuthResult you break your visitor.
Visitor is about trading encapsulation for double dispatch.
You can try a similar approach:
interface Handler {
void onUsernameNotFound();
void onWrongPassword();
void authOk();
}
interface Authenticator {
void authenticate(String username, String password, Handler handler);
}
class SimpleAuthenticator implements Authenticator {
public void authenticate(String username, String password, Handler handler) {
if (username.equals("dfa")) {
if (password.equals("I'm1337")) {
handler.authOk();
} else {
handler.onWrongPassword();
}
} else {
handler.onUsernameNotFound();
}
}
}
Some Handler strategies:
class FatalHandler implements Handler {
public void onUsernameNotFound() {
throw new AuthError("auth failed");
}
public void onWrongPassword() {
throw new AuthError("auth failed");
}
public void authOk() {
/* do something */
}
}
and:
class DebugHandler implements Handler {
public void onUsernameNotFound() {
System.out.println("wrong username");
}
public void onWrongPassword() {
System.out.println("wrong password");
}
public void authOk() {
System.out.println("ok");
}
}
Now you can encapsulate error handling and the operation in your Handlers, which is much less code than a Visitor since you don't really need double dispatch here.
There is an interface called Processor, which has two implementations SimpleProcessor and ComplexProcessor.
Now I have a process, which consumes an input, and then using that input decides whether it should use SimpleProcessor or ComplexProcessor.
Current solution: I was thinking of using an Abstract Factory, which would generate the instance on the basis of the input.
But the issue is that I don't want new instances. I want to use already instantiated objects. That is, I want to re-use the instances.
That means, Abstract factory is absolutely the wrong pattern to use here, as it is for generating objects on the basis of type.
Another thing, that our team normally does is to create a map from input to the corresponding processor instance. And at runtime, we can use that map to get the correct instance on the basis of input.
This feels like an ad hoc solution.
I want this to be extendable : new input types can be mapped to new processor types.
Is there some standard way to solve this?
You can use a variation of the Chain of Responsibility pattern.
It will scale far better than using a Map (or hash table in general).
This variation will support dependency injection and is very easy to extend (without breaking any code or violating the Open-Closed principle).
As opposed to the classic version, handlers do not need to be explicitly chained; the classic version scales very badly.
The pattern uses polymorphism to enable extensibility and is therefore targeting an object oriented language.
The pattern is as follows:
The client API is a container class that manages a collection of input handlers (for example SimpleProcessor and ComplexProcessor).
Each handler is only known to the container by a common interface and unknown to the client.
The collection of handlers is passed to the container via the constructor (to enable optional dependency injection).
The container accepts the predicate (input) and passes it on to the anonymous handlers by iterating over the handler collection.
Each handler now decides based on the input if it can handle it (return true) or not (return false).
If a handler returns true (to signal that the input was successfully handled), the container stops further input processing by other handlers (alternatively, use a different criterion, e.g. to allow multiple handlers to handle the input).
In the following very basic example implementation, the order of handler execution is simply defined by their position in the container (collection).
If this isn't sufficient, you can implement a simple priority scheme (see the sketch after the usage example below).
Implementation (C#)
Below is the container. It manages the individual handler implementations using polymorphism. Since handler implementations are only known by their common interface, the container scales extremely well: simply add/inject an additional handler implementation.
The container is actually used directly by the client (whereas the handlers are hidden from the client and anonymous to the container).
interface IInputProcessor
{
void Process(object input);
}
class InputProcessor : IInputProcessor
{
private IEnumerable<IInputHandler> InputHandlers { get; }
// Constructor.
// Optionally use an IoC container to inject the dependency (a collection of input handlers).
public InputProcessor(IEnumerable<IInputHandler> inputHandlers)
{
this.InputHandlers = inputHandlers;
}
// Method to handle the input.
// The input is then delegated to the input handlers.
public void Process(object input)
{
foreach (IInputHandler inputHandler in this.InputHandlers)
{
if (inputHandler.TryHandle(input))
{
return;
}
}
}
}
Below are the input handlers.
To add new handlers i.e. to extend input handling, simply implement the IInputHandler interface and add it to a collection which is passed/injected to the container (IInputProcessor):
interface IInputHandler
{
bool TryHandle(object input);
}
class SimpleProcessor : IInputHandler
{
public bool TryHandle(object input)
{
if (input is int number && number == 1)
{
//TODO::Handle input
return true;
}
return false;
}
}
class ComplexProcessor : IInputHandler
{
public bool TryHandle(object input)
{
if (input is int number && number == 3)
{
//TODO::Handle input
return true;
}
return false;
}
}
Usage Example
public class Program
{
public static void Main()
{
/* Setup Chain of Responsibility. */
/* Preferably configure an IoC container. */
var inputHandlers = new List<IInputHandler>
{
new SimpleProcessor(),
new ComplexProcessor()
};
IInputProcessor inputProcessor = new InputProcessor(inputHandlers);
/* Use the handler chain */
int input = 3;
inputProcessor.Process(input); // Will execute the ComplexProcessor
input = 1;
inputProcessor.Process(input); // Will execute the SimpleProcessor
}
}
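If the positional ordering mentioned above isn't sufficient, here is a minimal sketch of the priority idea; the IPrioritizedInputHandler interface and its Priority property are assumptions of this sketch (they are not part of the pattern above), and it requires System.Linq:
interface IPrioritizedInputHandler : IInputHandler
{
    // Lower value = handled earlier (convention assumed for this sketch).
    int Priority { get; }
}
class PrioritizedInputProcessor : IInputProcessor
{
    private IEnumerable<IPrioritizedInputHandler> InputHandlers { get; }
    public PrioritizedInputProcessor(IEnumerable<IPrioritizedInputHandler> inputHandlers)
    {
        // Order once by priority instead of relying on insertion order.
        this.InputHandlers = inputHandlers.OrderBy(handler => handler.Priority).ToList();
    }
    public void Process(object input)
    {
        foreach (IPrioritizedInputHandler inputHandler in this.InputHandlers)
        {
            if (inputHandler.TryHandle(input))
            {
                return;
            }
        }
    }
}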
It is possible to use the Strategy pattern in combination with the Factory pattern. Factory objects can be cached, so objects are reused instead of being recreated every time they are needed.
As an alternative to caching, it is possible to use the Singleton pattern. In ASP.NET Core it is pretty simple: if you have a DI container, just make sure the instances are registered with a singleton lifetime.
Let's start with the first example. We need some enum of ProcessorType:
public enum ProcessorType
{
Simple, Complex
}
Then this is our abstraction of processors:
public interface IProcessor
{
DateTime DateCreated { get; }
}
And its concrete implementations:
public class SimpleProcessor : IProcessor
{
public DateTime DateCreated { get; } = DateTime.Now;
}
public class ComplexProcessor : IProcessor
{
public DateTime DateCreated { get; } = DateTime.Now;
}
Then we need a factory with cached values:
public class ProcessorFactory
{
private static readonly IDictionary<ProcessorType, IProcessor> _cache
= new Dictionary<ProcessorType, IProcessor>()
{
{ ProcessorType.Simple, new SimpleProcessor() },
{ ProcessorType.Complex, new ComplexProcessor() }
};
public IProcessor GetInstance(ProcessorType processorType)
{
return _cache[processorType];
}
}
And code can be run like this:
ProcessorFactory processorFactory = new ProcessorFactory();
Thread.Sleep(3000);
var simpleProcessor = processorFactory.GetInstance(ProcessorType.Simple);
Console.WriteLine(simpleProcessor.DateCreated); // OUTPUT: 2022-07-07 8:00:01
ProcessorFactory processorFactory_1 = new ProcessorFactory();
Thread.Sleep(3000);
var complexProcessor = processorFactory_1.GetInstance(ProcessorType.Complex);
Console.WriteLine(complexProcessor.DateCreated); // OUTPUT: 2022-07-07 8:00:01
The second way
The second way is to use a DI container. So we need to modify our factory to get instances from the dependency injection container:
public class ProcessorFactoryByDI
{
private readonly IDictionary<ProcessorType, IProcessor> _cache;
public ProcessorFactoryByDI(
SimpleProcessor simpleProcessor,
ComplexProcessor complexProcessor)
{
_cache = new Dictionary<ProcessorType, IProcessor>()
{
{ ProcessorType.Simple, simpleProcessor },
{ ProcessorType.Complex, complexProcessor }
};
}
public IProcessor GetInstance(ProcessorType processorType)
{
return _cache[processorType];
}
}
And if you use ASP.NET Core, then you can declare your objects as singleton like this:
services.AddSingleton<SimpleProcessor>();
services.AddSingleton<ComplexProcessor>();
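For completeness, a minimal sketch of the remaining wiring, assuming the standard Microsoft.Extensions.DependencyInjection container (ProcessingService is a hypothetical consumer):
services.AddSingleton<ProcessorFactoryByDI>();
The factory can then be taken via constructor injection:
public class ProcessingService
{
    private readonly ProcessorFactoryByDI _factory;
    public ProcessingService(ProcessorFactoryByDI factory)
    {
        _factory = factory;
    }
    public IProcessor Resolve(ProcessorType processorType)
    {
        // Both processor types resolve to the same cached singleton instances.
        return _factory.GetInstance(processorType);
    }
}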
Read more about lifetime of an object
I'm trying to create a validation layer that wraps calls to business logic methods in entities in the domain layer.
A Validator must have the same interface as the Entity and give access to the state the Entity holds.
However, the type signatures of the Validator's interface methods need to differ from the Entity's, as the Validator may validate and convert inputs from the UI (for example). The Validator also needs to wrap these input validation/conversion calls and the underlying business logic method call in try/catches.
This is an example of my current implementation:
class Entity {
// state
int _num;
int get num => _num;
// init the state
Entity([this._num = 0]);
// business logic methods
void incrementBy(int n) {
// business logic validation
if (n <= 0){
throw Exception('[n] must be greater than 0'); // shouldn't throw raw Exceptions in general
}
// business logic
_num += n;
}
}
class Validator {
// have to hold an instance of the entity
final Entity _entity;
Validator(this._entity);
// have to copy the getters in the entity class
int get num => _entity.num;
// same interface as the Entity, but different type signature
void incrementBy(String n) {
try {
// validate user input
final inc = int.parse(n); // -> could throw a FormatException
// call the underlying busines logic
_entity.incrementBy(inc); // -> could throw an Exception
} on Exception catch (e) { // shouldn't catch raw Exceptions in general
...
}
}
}
Is there a better way to wrap the entity?
It feels very clunky to do it the way shown above because there is no enforcement of which methods need to be overridden, as there would be if the Validator implemented the Entity's interface, which you can't do because the type signatures would have to be the same.
Something like class Validator hides Entity{...} would be great. It would be something like a combination of extends (you wouldn't need to hold an instance of the entity or reimplement the getters) and implements (you would be forced to override all interface methods).
I don't know if this solution is worth it, but you might use the covariant keyword and an extra interface to achieve something similar. I don't know whether the result is any less clunky, but here we go.
Edit: Just wanted to point out that you can also place the covariant keyword on the interface, basically allowing any subclass of EntityIf to tighten the type.
/// This is the common interface between the entity
/// and the validator for the entity. Both need to
/// implement this.
abstract class EntityIf {
// Private constructor to disallow
// extending this class outside of this library
EntityIf._();
// We use 'dynamic' as the type for [num].
// We'll enforce type later using the
// 'covariant' keyword
dynamic get num;
// Same here, type is dynamic
void incrementBy(dynamic value);
}
class Entity implements EntityIf {
Entity(this._num);
int _num;
// Getters don't need the covariant keyword: tightening the
// return type is always allowed when overriding. I'm not complaining!
@override
int get num => _num;
// Here we see the covariant keyword in action.
// It allows restricting to a more specific type
// which is normally disallowed for overriding methods.
@override
void incrementBy(covariant int value) {
_num += value;
}
}
class ValidatorForEntity implements EntityIf {
// Validator still needs to wrap the entity; couldn't
// figure out a way around that
ValidatorForEntity(this._entity)
: assert(_entity != null);
final Entity _entity;
@override
dynamic get num => _entity.num;
// Validator just overrides the interface with no
// covariant keyword.
@override
void incrementBy(dynamic value) {
assert(value != null);
final finalValue = int.tryParse(value.toString());
if (finalValue == null) {
throw '[value] is not an instance of [int]';
}
// int type will be enforced here, so you can't
// create validators that break the entity
_entity.incrementBy(finalValue);
}
}
void main() {
final x = ValidatorForEntity(Entity(0));
x.incrementBy(1);
print(x.num); // prints 1
x.incrementBy('1');
print(x.num); // prints 2
try {
x.incrementBy('a');
} catch (e) {
print('$e'); // should give this error
}
}
I would like to ask whether the decorator pattern suits my needs, and whether there is another way to make my software design better.
Previously I had a device which was on all the time. In the code below, that is the Device class. Now, to conserve some battery life, I need to turn it off and then on again around each operation. I created a DeviceWithOnOffDecorator class. I used the decorator pattern, which I think helped a lot in avoiding modifications to the Device class. But with On and Off around every operation, I feel that the code doesn't conform to the DRY principle.
namespace Decorator
{
interface IDevice
{
byte[] GetData();
void SendData();
}
class Device : IDevice
{
public byte[] GetData() {return new byte[] {1,2,3 }; }
public void SendData() {Console.WriteLine("Sending Data"); }
}
// new requirement, the device needs to be turned on and turned off
// after each operation to save some Battery Power
class DeviceWithOnOffDecorator:IDevice
{
IDevice mIdevice;
public DeviceWithOnOffDecorator(IDevice d)
{
this.mIdevice = d;
Off();
}
void Off() { Console.WriteLine("Off");}
void On() { Console.WriteLine("On"); }
public byte[] GetData()
{
On();
var b = mIdevice.GetData();
Off();
return b;
}
public void SendData()
{
On();
mIdevice.SendData();
Off();
}
}
class Program
{
static void Main(string[] args)
{
Device device = new Device();
DeviceWithOnOffDecorator devicewithOnOff = new DeviceWithOnOffDecorator(device);
IDevice iDevice = devicewithOnOff;
var data = iDevice.GetData();
iDevice.SendData();
}
}
}
In this example I have only two operations, GetData and SendData, but in the actual software there are lots of operations involved and I need to enclose each operation with On and Off:
void AnotherOperation1()
{
On();
// do all stuffs here
Off();
}
byte AnotherOperation2()
{
On();
byte b = 0;
// do all stuffs here
Off();
return b;
}
Enclosing each function with On and Off feels repetitive; is there a way to improve this?
Edit: Also, the original code is in C++. I just wrote it in C# here to show the problem more clearly.
Decorator won't suit this purpose, since you are not adding a responsibility dynamically. To me, what you need to do is intercept the request and execute the On() and Off() methods before and after the actual invocation. For that purpose, write a Proxy that wraps the underlying instance and do the interception there, while leaving your original type as it is.
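As a rough sketch of that idea (reusing the IDevice interface from the question; the DevicePowerProxy and WithPower names are just placeholders for this sketch), the On/Off bracketing can be centralized in a single helper so each proxied operation becomes a one-liner:
class DevicePowerProxy : IDevice
{
    private readonly IDevice _device;
    public DevicePowerProxy(IDevice device)
    {
        _device = device;
        Off();
    }
    void On() { Console.WriteLine("On"); }
    void Off() { Console.WriteLine("Off"); }
    // The single place where the On/Off bracketing lives.
    private T WithPower<T>(Func<T> operation)
    {
        On();
        try { return operation(); }
        finally { Off(); }
    }
    private void WithPower(Action operation)
    {
        WithPower<object>(() => { operation(); return null; });
    }
    public byte[] GetData() => WithPower(() => _device.GetData());
    public void SendData() => WithPower(() => _device.SendData());
}
The same idea carries over to the original C++ with a small helper template that takes a lambda.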
I have a WCF service which has its Thread.CurrentPrincipal set in the ServiceConfiguration.ClaimsAuthorizationManager.
When I implement the service asynchronously like this:
public IAsyncResult BeginMethod1(AsyncCallback callback, object state)
{
// Audit log call (uses Thread.CurrentPrincipal)
var task = Task<int>.Factory.StartNew(this.WorkerFunction, state);
return task.ContinueWith(res => callback(task));
}
public string EndMethod1(IAsyncResult ar)
{
// Audit log result (uses Thread.CurrentPrincipal)
return ar.AsyncState as string;
}
private int WorkerFunction(object state)
{
// perform work
return 0;
}
I find that the Thread.CurrentPrincipal is set to the correct ClaimsPrincipal in the Begin-method and also in the WorkerFunction, but in the End-method it's set to a GenericPrincipal.
I know I can enable ASP.NET compatibility for the service and use HttpContext.Current.User which has the correct principal in all methods, but I'd rather not do this.
Is there a way to force the Thread.CurrentPrincipal to the correct ClaimsPrincipal without turning on ASP.NET compatibility?
Starting with a summary of WCF extension points, you'll see the one that is expressly designed to solve your problem. It is called a CallContextInitializer. Take a look at this article which gives CallContextInitializer sample code.
If you make an ICallContextInitializer extension, you will be given control over both the BeginXXX thread context AND the EndXXX thread context. You are saying that the ClaimsAuthorizationManager has correctly established the user principal in your BeginXXX(...) method. In that case, you then make for yourself a custom ICallContextInitializer which either assigns or records the CurrentPrincipal, depending on whether it is handling your BeginXXX() or your EndXXX(). Something like:
public object BeforeInvoke(System.ServiceModel.InstanceContext instanceContext, System.ServiceModel.IClientChannel channel, System.ServiceModel.Channels.Message request){
object principal = null;
if (request.Properties.TryGetValue("userPrincipal", out principal))
{
//If we got here, it means we're about to call the EndXXX(...) method.
Thread.CurrentPrincipal = (IPrincipal)principal;
}
else
{
//If we got here, it means we're about to call the BeginXXX(...) method.
request.Properties["userPrincipal"] = Thread.CurrentPrincipal;
}
...
}
To clarify further, consider two cases. Suppose you implemented both an ICallContextInitializer and an IParameterInspector. Suppose that these hooks are expected to execute with a synchronous WCF service and with an async WCF service (which is your special case).
Below are the sequence of events and the explanation of what is happening:
Synchronous Case
ICallContextInitializer.BeforeInvoke();
IParameterInspector.BeforeCall();
//...service executes...
IParameterInspector.AfterCall();
ICallContextInitializer.AfterInvoke();
Nothing surprising in the above code. But now look below at what happens with asynchronous service operations...
Asynchronous Case
ICallContextInitializer.BeforeInvoke(); //TryGetValue() fails, so this records the UserPrincipal.
IParameterInspector.BeforeCall();
//...Your BeginXXX() routine now executes...
ICallContextInitializer.AfterInvoke();
//...Now your Task async code executes (or finishes executing)...
ICallContextInitializer.BeforeInvoke(); //TryGetValue succeeds, so this assigns the UserPrincipal.
//...Your EndXXX() routine now executes...
IParameterInspector.AfterCall();
ICallContextInitializer.AfterInvoke();
As you can see, the CallContextInitializer ensures you have opportunity to initialize values such as your CurrentPrincipal just before the EndXXX() routine runs. It therefore doesn't matter that the EndXXX() routine assuredly is executing on a different thread than did the BeginXXX() routine. And yes, the System.ServiceModel.Channels.Message object which is storing your user principal between Begin/End methods, is preserved and properly transmitted by WCF even though the thread changed.
Overall, this approach allows your EndXXX(IAsyncresult) to execute with the correct IPrincipal, without having to explicitly re-establish the CurrentPrincipal in the EndXXX() routine. And as with any WCF behavior, you can decide if this applies to individual operations, all operations on a contract, or all operations on an endpoint.
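For completeness, here is a minimal sketch of how such an initializer is typically attached to the dispatch runtime; PrincipalCallContextInitializer is assumed to be your ICallContextInitializer implementation containing the BeforeInvoke shown above:
public class PrincipalCallContextBehavior : IEndpointBehavior
{
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
    {
        // Attach the initializer to every operation of this endpoint.
        foreach (DispatchOperation operation in endpointDispatcher.DispatchRuntime.Operations)
        {
            operation.CallContextInitializers.Add(new PrincipalCallContextInitializer());
        }
    }
    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime) { }
    public void Validate(ServiceEndpoint endpoint) { }
}
Whether you register this behavior on a single endpoint, a contract, or in configuration is up to you, which matches the per-operation/per-contract/per-endpoint choice mentioned above.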
Not really the answer to my question, but an alternate approach to implementing the WCF service (in .NET 4.5) that does not exhibit the same issues with Thread.CurrentPrincipal.
public async Task<string> Method1()
{
// Audit log call (uses Thread.CurrentPrincipal)
try
{
return await Task.Factory.StartNew(() => this.WorkerFunction());
}
finally
{
// Audit log result (uses Thread.CurrentPrincipal)
}
}
private string WorkerFunction()
{
// perform work
return string.Empty;
}
The valid approach to this is to create an extension:
public class SLOperationContext : IExtension<OperationContext>
{
private readonly IDictionary<string, object> items;
private static ReaderWriterLockSlim _instanceLock = new ReaderWriterLockSlim();
private SLOperationContext()
{
items = new Dictionary<string, object>();
}
public IDictionary<string, object> Items
{
get { return items; }
}
public static SLOperationContext Current
{
get
{
SLOperationContext context = OperationContext.Current.Extensions.Find<SLOperationContext>();
if (context == null)
{
_instanceLock.EnterWriteLock();
context = new SLOperationContext();
OperationContext.Current.Extensions.Add(context);
_instanceLock.ExitWriteLock();
}
return context;
}
}
public void Attach(OperationContext owner) { }
public void Detach(OperationContext owner) { }
}
Now this extension is used as a container for objects that you want to persist across thread switches, as OperationContext.Current will remain the same.
Now you can use this in BeginMethod1 to save current user:
SLOperationContext.Current.Items["Principal"] = OperationContext.Current.ClaimsPrincipal;
And then in EndMethod1 you can get the user by typing:
ClaimsPrincipal principal = (ClaimsPrincipal)SLOperationContext.Current.Items["Principal"];
EDIT (Another approach):
public IAsyncResult BeginMethod1(AsyncCallback callback, object state)
{
var task = Task.Factory.StartNew(this.WorkerFunction, state);
var ec = ExecutionContext.Capture();
return task.ContinueWith(res =>
ExecutionContext.Run(ec, (_) => callback(task), null));
}
I want to implement a magic token for my ServiceStack-based API. Whenever any value matches this special token, I'd like to signal special actions in my application. The ideal place for this assignment to occur would be after SS had processed the wire format (JSV, JSON, SOAP, etc.) and before it mapped the value onto a .NET type. At the moment, I'm wondering about the best way to start on something like this. Is it something I could wire up in Configure()? Is it something I'll have to override and inject? Any assistance or direction in this matter would be appreciated, ASAP.
I don't see this as a ServiceStack implementation question, but rather a matter of how you define your DTOs. Given this requirement, as I understand it, I'd go with something like this:
interface IOverridableDTO
{
    object overrideValue(object value);
}
class BaseOverridableDTO : IOverridableDTO
{
    protected virtual bool doOverride
    {
        // Placeholder: the result of the magic token check goes here.
        get { return false; }
    }
    public virtual object overrideValue(object value)
    {
        if (doOverride)
            return null; // or whatever the override needs to be
        return value;
    }
}
class MyDTO : BaseOverridableDTO
{
    // override the overrideValue() method, if necessary
    private int myDTOProperty;
    public int? MyDTOProperty
    {
        get { return (int?)overrideValue(myDTOProperty); }
        set { myDTOProperty = value ?? 0; }
    }
}
// use as follows:
void DoSomethingWithAnOverridableDTO(BaseOverridableDTO dtoObject)
{ ... }
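A minimal usage sketch, under the assumption that doOverride has been wired up to your magic-token check:
var dto = new MyDTO();
dto.MyDTOProperty = 42;
// While doOverride is false the stored value comes back unchanged;
// once the magic token check fires, the getter yields null instead.
Console.WriteLine(dto.MyDTOProperty);
DoSomethingWithAnOverridableDTO(dto);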