Use MEF to compose parts but postpone the creation of the parts - lazy-loading

As explained in these questions, I'm trying to build an application that consists of a host and multiple task-processing clients. With some help I have figured out how to discover and serialize part definitions so that I could store those definitions without having to have the actual runtime type loaded.
The next step I want to achieve (or the next two steps, really) is to split the composition of parts from the actual creation and connection of the objects represented by those parts. So if I have a set of parts, I would like to be able to do the following (in pseudo-code):
public sealed class Host
{
    public static CreationScript Compose()
    {
        CreationScript result = null;
        var container = new DelayLoadCompositionContainer(
            s => result = s);
        container.Compose();

        return result;
    }

    public static void Main()
    {
        var script = Compose();

        // Send the script to the client application
        SendToClient(script);
    }
}
// Lives inside other application
public sealed class Client
{
    public static void Load(CreationScript script)
    {
        var container = new ScriptLoader(script);
        container.Load();
    }

    public static void Main(string scriptText)
    {
        var script = new CreationScript(scriptText);
        Load(script);
    }
}
So that way I can compose the parts in the host application, but actually load the code and execute it in the client application. The goal is to put all the smarts of deciding what to load in one location (the host) while the actual work can be done anywhere (by the clients).
Essentially what I'm looking for is some way of getting the ComposablePart graph that MEF implicitly creates.
Now my question is whether there are any bits in MEF that would allow me to implement this kind of behaviour. I suspect that the provider model may help me with this, but that is a rather large and complex part of MEF, so any guidelines would be helpful.

From lots of investigation it seems that it is not possible to separate the composition process from the instantiation process in MEF, so I have had to create my own approach for this problem. The solution assumes that the scanning of plugins results in having the type, import and export data stored somehow.
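For reference, here is one possible shape for that stored data. This is only a sketch; the type names below (MyImportDefinition, MyExportDefinition, PartDefinition, TypeIdentity) match the identifiers used in the code that follows, but the actual storage format is up to you.
[Serializable]
public sealed class MyImportDefinition
{
    public string ContractName { get; set; }
    public TypeIdentity RequiredTypeIdentity { get; set; }
}

[Serializable]
public sealed class MyExportDefinition
{
    public string ContractName { get; set; }
    public TypeIdentity DeclaredTypeIdentity { get; set; }
}

[Serializable]
public sealed class PartDefinition
{
    public TypeIdentity PartType { get; set; }
    public List<MyImportDefinition> Imports { get; set; }
    public List<MyExportDefinition> Exports { get; set; }
}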
In order to compose parts you need to keep track of each part instance and how it is connected to other part instances. The simplest way to do this is to use a graph data structure that records which import is connected to which export.
public sealed class CompositionCollection
{
    private readonly Dictionary<PartId, PartDefinition> m_Parts
        = new Dictionary<PartId, PartDefinition>();
    private readonly Graph<PartId, PartEdge> m_PartConnections
        = new Graph<PartId, PartEdge>();

    public PartId Add(PartDefinition definition)
    {
        var id = new PartId();
        m_Parts.Add(id, definition);
        m_PartConnections.AddVertex(id);

        return id;
    }

    public void Connect(
        PartId importingPart,
        MyImportDefinition import,
        PartId exportingPart,
        MyExportDefinition export)
    {
        // Assume that edges point from the export to the import
        m_PartConnections.AddEdge(
            new PartEdge(
                exportingPart,
                export,
                importingPart,
                import));
    }
}
Note that before connecting two parts it is necessary to check whether the import can be connected to the export. Normally MEF does that for us, but in this case we'll need to do it ourselves. An example of how to approach that is:
public bool Accepts(
    MyImportDefinition importDefinition,
    MyExportDefinition exportDefinition)
{
    if (!string.Equals(
        importDefinition.ContractName,
        exportDefinition.ContractName,
        StringComparison.OrdinalIgnoreCase))
    {
        return false;
    }

    // Determine what the actual type is we're importing. MEF provides us with
    // that information through the RequiredTypeIdentity property. We'll
    // get the type identity first (e.g. System.String)
    var importRequiredType = importDefinition.RequiredTypeIdentity;

    // Once we have the type identity we need to get the type information
    // (still in serialized format of course)
    var importRequiredTypeDef =
        m_Repository.TypeByIdentity(importRequiredType);

    // Now find the type we're exporting
    var exportType = ExportedType(exportDefinition);
    if (AvailableTypeMatchesRequiredType(importRequiredType, exportType))
    {
        return true;
    }

    // The import and export can't directly be mapped so maybe the import is a
    // special case. Try those
    Func<TypeIdentity, TypeDefinition> toDefinition =
        t => m_Repository.TypeByIdentity(t);

    if (ImportIsCollection(importRequiredTypeDef, toDefinition)
        && ExportMatchesCollectionImport(
            importRequiredType,
            exportType,
            toDefinition))
    {
        return true;
    }

    if (ImportIsLazy(importRequiredTypeDef, toDefinition)
        && ExportMatchesLazyImport(importRequiredType, exportType))
    {
        return true;
    }

    if (ImportIsFunc(importRequiredTypeDef, toDefinition)
        && ExportMatchesFuncImport(
            importRequiredType,
            exportType,
            exportDefinition))
    {
        return true;
    }

    if (ImportIsAction(importRequiredTypeDef, toDefinition)
        && ExportMatchesActionImport(importRequiredType, exportDefinition))
    {
        return true;
    }

    return false;
}
Note that the special cases (like IEnumerable&lt;T&gt;, Lazy&lt;T&gt;, etc.) require determining whether the importing type is based on a generic type, which can be a bit tricky.
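To illustrate the shape of that check, here is a minimal runtime-reflection sketch for the Lazy&lt;T&gt; case. It assumes the types are loaded; in the serialized scenario described above you would perform the equivalent comparison on the stored TypeDefinition data instead of on System.Type.
// Determines whether the import type closes over Lazy<T> and, if so,
// extracts the element type T that the export must provide.
public static bool TryGetLazyElementType(Type importType, out Type elementType)
{
    elementType = null;
    if (importType.IsGenericType
        && importType.GetGenericTypeDefinition() == typeof(Lazy<>))
    {
        elementType = importType.GetGenericArguments()[0];
        return true;
    }

    return false;
}
The same pattern applies to IEnumerable&lt;T&gt;, Func&lt;T&gt; and Action&lt;T&gt;: compare against the corresponding open generic type definition and extract the generic arguments.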
Once all the composition information is stored it is possible to do the instantiation of the parts at any point in time because all the required information is available. Instantiation requires a generous helping of reflection combined with the use of the trusty Activator class and will be left as an exercise to the reader.
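To give a flavour of that exercise, here is a minimal sketch of the instantiation step. It assumes the part graph is acyclic, that imports are satisfied through properties, and that each stored definition can be resolved back to a runtime type; the member names used here (AssemblyQualifiedTypeName, InEdges, MemberName) are illustrative, not part of any MEF API.
private readonly Dictionary<PartId, object> m_Instances =
    new Dictionary<PartId, object>();

public object Instantiate(PartId id)
{
    // Reuse a part if it has already been created.
    if (m_Instances.ContainsKey(id))
    {
        return m_Instances[id];
    }

    // Resolve the serialized definition back to a runtime type.
    var definition = m_Parts[id];
    var runtimeType = Type.GetType(definition.AssemblyQualifiedTypeName);
    var instance = Activator.CreateInstance(runtimeType);
    m_Instances.Add(id, instance);

    // Satisfy each property import from the part on the exporting side
    // of the edge, creating that part first if necessary.
    foreach (var edge in m_PartConnections.InEdges(id))
    {
        var export = Instantiate(edge.ExportingPart);
        var property = runtimeType.GetProperty(edge.Import.MemberName);
        property.SetValue(instance, export, null);
    }

    return instance;
}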

Related

OOP design improvement in Firebase and PHP

I have two nested documents in Firebase, one called adminChat and the second called TechnicalChat. When I display the request for an admin, he should see the tech chat, and vice versa for the technician. The question: is it better to make a separate class for each of them (class AdminFeedback and class TechnicalFeedback) and create a get() method for each one, or what should I do? I feel that I'm duplicating my code twice and I want to avoid that.
public static function get($id)
{
    $documents = self::$db->collection("problems/${id}/adminChat")->documents();
    $messages = [];
    foreach ($documents as $doc) {
        if ($doc->exists()) {
            array_push($messages, $doc->data());
        }
    }
    return $messages;
}

Chaining Reactive Asynchronous calls in Spring

I'm very new to the Spring Reactor project.
Until now I've only used Mono from WebClient .bodyToMono() steps, and mostly block() those Monos or zip() multiple of them.
But this time I have a use case where I need to asynchronously call methods in multiple service classes, and those service classes are each calling multiple backend APIs.
I understand Project Reactor doesn't provide an asynchronous flow by default, but we can make the publishing and/or subscribing happen on a different thread and thereby make the code asynchronous.
And that's what I am trying to do.
I tried to read the documentation here: reactor reference, but it's still not clear to me.
For the purpose of this question, I'm making up an imaginary scenario that is a little closer to my use case.
Let's assume we need to get a search response from Google for some texts searched under images.
Example Scenario
Let's have an endpoint in a Controller.
This endpoint accepts the following object from the request body:
MultimediaSearchRequest {
    Set<String> searchTexts; // many texts
    boolean isAddContent;
    boolean isAddMetadata;
}
In the controller, I'll break the above single request object into multiple objects of the below type:
MultimediaSingleSearchRequest {
    String searchText;
    boolean isAddContent;
    boolean isAddMetadata;
}
This Controller talks to 3 Service classes.
Each of the service classes has a method searchSingleItem.
Each service class uses a few different backend APIs, but finally combines the results of those API responses into the same type of response class; let's call it MultimediaSearchResult.
class JpegSearchHandleService {
    public MultimediaSearchResult searchSingleItem(MultimediaSingleSearchRequest req) {
        return combineAllImageData(
            getNameApi(req),
            getImageUrlApi(req),
            getContentApi(req) // don't call if req.isAddContent is false
        );
    }
}

class GifSearchHandleService {
    public MultimediaSearchResult searchSingleItem(MultimediaSingleSearchRequest req) {
        return combineAllImageData(
            getNameApi(req),
            gifPartApi(req),
            someRandomApi(req),
            someOtherRandomApi(req)
        );
    }
}

class VideoSearchHandleService {
    public MultimediaSearchResult searchSingleItem(MultimediaSingleSearchRequest req) {
        return combineAllImageData(
            getNameApi(req),
            codecApi(req),
            commentsApi(req),
            anotherApi(req)
        );
    }
}
In the end, my controller returns the response as a list of MultimediaSearchResult:
class MultimediaSearchResponse {
    List<MultimediaSearchResult> results;
}
If I want to do this all asynchronously using Project Reactor, how do I achieve it?
That is, calling the searchSingleItem method in each service for each searchText asynchronously.
Even within the services, each backend API should be called asynchronously (I'm already using WebClient and converting the response with bodyToMono for the backend API calls).
First, I will outline a solution for the upper "layer" of your scenario.
The code (a simple simulation of the scenario):
public class ChainingAsyncCallsInSpring {

    public Mono<MultimediaSearchResponse> controllerEndpoint(MultimediaSearchRequest req) {
        return Flux.fromIterable(req.getSearchTexts())
                .map(searchText -> new MultimediaSingleSearchRequest(searchText, req.isAddContent(), req.isAddMetadata()))
                .flatMap(multimediaSingleSearchRequest -> Flux.merge(
                        classOneSearchSingleItem(multimediaSingleSearchRequest),
                        classTwoSearchSingleItem(multimediaSingleSearchRequest),
                        classThreeSearchSingleItem(multimediaSingleSearchRequest)
                ))
                .collectList()
                .map(MultimediaSearchResponse::new);
    }

    private Mono<MultimediaSearchResult> classOneSearchSingleItem(MultimediaSingleSearchRequest req) {
        return Mono.just(new MultimediaSearchResult("1"));
    }

    private Mono<MultimediaSearchResult> classTwoSearchSingleItem(MultimediaSingleSearchRequest req) {
        return Mono.just(new MultimediaSearchResult("2"));
    }

    private Mono<MultimediaSearchResult> classThreeSearchSingleItem(MultimediaSingleSearchRequest req) {
        return Mono.just(new MultimediaSearchResult("3"));
    }
}
Now, some rationale.
In the controllerEndpoint() function, first we create a Flux that will emit every single searchText from the request. We map these to MultimediaSingleSearchRequest objects, so that the services can consume them with the additional metadata that was provided with the original request.
Then we Flux::flatMap the created MultimediaSingleSearchRequest objects into a merged Flux, which (as opposed to Flux::concat) ensures that all three publishers are subscribed to eagerly, i.e. they don't wait for one another. This works best in exactly this scenario, where several independent publishers need to be subscribed to at the same time and their order is not important.
After the flat map, at this point, we have a Flux<MultimediaSearchResult>.
We continue with Flux::collectList, thus collecting the emitted values from all publishers (we could also use Flux::reduceWith here).
As a result, we now have a Mono<List<MultimediaSearchResult>>, which can easily be mapped to a Mono<MultimediaSearchResponse>.
The results list of the MultimediaSearchResponse will have 3 items for each searchText in the original request.
Hope this was helpful!
Edit
Extending the answer with a point of view from the service classes as well. Assuming that each inner (optionally skipped) call returns a different type of result, this would be one way of going about it:
public class MultimediaSearchResult {
    private Details details;
    private ContentDetails content;
    private MetadataDetails metadata;
}
public Mono<MultimediaSearchResult> classOneSearchSingleItem(MultimediaSingleSearchRequest req) {
    return Mono.zip(getSomeDetails(req), getContentDetails(req), getMetadataDetails(req))
            .map(tuple3 -> new MultimediaSearchResult(
                    tuple3.getT1(),
                    tuple3.getT2().orElse(null),
                    tuple3.getT3().orElse(null)
            ));
}

// Always wanted
private Mono<Details> getSomeDetails(MultimediaSingleSearchRequest req) {
    return Mono.just(new Details("details")); // api call etc.
}

// Wanted if isAddContent is true
private Mono<Optional<ContentDetails>> getContentDetails(MultimediaSingleSearchRequest req) {
    return req.isAddContent()
            ? Mono.just(Optional.of(new ContentDetails("content-details"))) // api call etc.
            : Mono.just(Optional.empty());
}

// Wanted if isAddMetadata is true
private Mono<Optional<MetadataDetails>> getMetadataDetails(MultimediaSingleSearchRequest req) {
    return req.isAddMetadata()
            ? Mono.just(Optional.of(new MetadataDetails("metadata-details"))) // api call etc.
            : Mono.just(Optional.empty());
}
Optionals are used for the requests that might be skipped, since Mono::zip will not emit a result if any of the zipped publishers completes empty.
If the results of each inner call extend the same base class or are the same wrapped return type, then the original answer applies as to how they can be combined (Flux::merge etc.)

How to retrieve RouteValues in ActionSelector in ASP.NET Core

I've created a GitHub repo to better understand the problem here. I have two actions on two different controllers bound to the same route.
http://localhost/sameControllerRoute/{identifier}/values
[Route("sameControllerRoute")]
public class FirstController : Controller
{
public FirstController()
{
// different EF Core DataContext than SecondController and possibly other dependencies than SecondController
}
[HttpGet("{identifier}/values")]
public IActionResult Values(string identifier, DateTime from, DateTime to) // other parameters than SecondController/Values
{
return this.Ok("Was in FirstController");
}
}
[Route("sameControllerRoute")]
public class SecondController : Controller
{
public SecondController()
{
// different EF Core DataContext than FirstController and possibly other dependencies than FirstController
}
[HttpGet("{identifier}/values")]
public IActionResult Values(string identifier, int number, string somethingElse) // other parameters than FirstController/Values
{
return this.Ok("Was in SecondController");
}
}
Since there are two matching routes, the default ActionSelector fails with:
'[...] AmbiguousActionException: Multiple actions matched. [...]'
which is understandable.
So I thought I could implement my own ActionSelector. In there I would implement the logic that resolves the ambiguity between the multiple routes based on the 'identifier' route value (line 27 in the code):
If 'identifier' value is a --> then FirstController
If 'identifier' value is b --> then SecondController
and so on...
protected override IReadOnlyList<ActionDescriptor> SelectBestActions(IReadOnlyList<ActionDescriptor> actions)
{
    if (actions.HasLessThan(2)) return base.SelectBestActions(actions); // works like base implementation

    foreach (var action in actions)
    {
        if (action.Parameters.Any(p => p.Name == "identifier"))
        {
            /*** get value of identifier from route (with launchSettings this would result in 'someIdentifier') ***/

            // call logic that decides whether value of identifier matches the controller
            // if yes
            return new List<ActionDescriptor>(new[] { action }).AsReadOnly();
            // else
            // keep going
        }
    }

    return base.SelectBestActions(actions); // fail in all other cases with AmbiguousActionException
}
But I haven't found a good way to get access to the route values in the ActionSelector. Which is understandable as well, because model binding hasn't kicked in yet since MVC is still trying to figure out the route.
A dirty solution could be to get hold of IHttpContextAccessor and somehow regex against the path.
But I'm still hoping you could provide a better idea to retrieve the route values even though model binding hasn't happened yet in the request pipeline.
I'm not sure that you need to use an ActionSelector at all for your scenario. According to the provided code, your controllers work with different types of resources (and so they expect different query parameters). As such, it is better to use different routing templates. Something like this, for example:
FirstController: /sameControllerRoute/resourceA/{identifier}/values
SecondController: /sameControllerRoute/resourceB/{identifier}/values
In the scope of REST, when we are talking about the /sameControllerRoute/{identifier}/values route template, we expect that a different identifier means the same resource type but a different resource name. And so, as API consumers, we expect that all of the following requests are supported:
/sameControllerRoute/a/values?from=20160101&to=20170202
/sameControllerRoute/b/values?from=20160101&to=20170202
/sameControllerRoute/a/values?number=1&somethingElse=someData
/sameControllerRoute/b/values?number=1&somethingElse=someData
That is not true in your case.
I ended up implementing the solution proposed by the ASP.NET team, which was to implement an IActionConstraint as shown here:
// Copyright (c) .NET Foundation. All rights reserved.
// Licensed under the Apache License, Version 2.0. See License.txt in the project root for license information.

using System;
using Microsoft.AspNetCore.Mvc.ActionConstraints;

namespace ActionConstraintSample.Web
{
    public class CountrySpecificAttribute : Attribute, IActionConstraint
    {
        private readonly string _countryCode;

        public CountrySpecificAttribute(string countryCode)
        {
            _countryCode = countryCode;
        }

        public int Order
        {
            get
            {
                return 0;
            }
        }

        public bool Accept(ActionConstraintContext context)
        {
            return string.Equals(
                context.RouteContext.RouteData.Values["country"].ToString(),
                _countryCode,
                StringComparison.OrdinalIgnoreCase);
        }
    }
}
https://github.com/aspnet/Entropy/blob/dev/samples/Mvc.ActionConstraintSample.Web/CountrySpecificAttribute.cs
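Adapted to the scenario in the question, a hypothetical IdentifierSpecificAttribute built the same way (matching on the 'identifier' route value instead of 'country') could then disambiguate the two actions, along these lines:
[Route("sameControllerRoute")]
public class FirstController : Controller
{
    // Only selected when the 'identifier' route value equals "a";
    // SecondController would carry [IdentifierSpecific("b")] instead.
    [IdentifierSpecific("a")]
    [HttpGet("{identifier}/values")]
    public IActionResult Values(string identifier, DateTime from, DateTime to)
    {
        return this.Ok("Was in FirstController");
    }
}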

Automatic object cache proxy with PHP

Here is a question on the Caching Proxy design pattern.
Is it possible to create with PHP a dynamic Proxy Caching implementation for automatically adding cache behaviour to any object?
Here is an example
class User
{
    public function load($login)
    {
        // Load user from db
    }

    public function getBillingRecords()
    {
        // a very heavy request
    }

    public function computeStatistics()
    {
        // a very heavy computation
    }
}

class Report
{
    protected $_user = null;

    public function __construct(User $user)
    {
        $this->_user = $user;
    }

    public function generate()
    {
        $billing = $this->_user->getBillingRecords();
        $stats = $this->_user->computeStatistics();
        /*
            ...
            Some rendering and additional processing code
            ...
        */
    }
}
You will notice that Report uses some heavyweight methods from User.
Now I want to add a cache system.
Instead of designing a classic caching system, I just wonder if it is possible to implement a caching system in a proxy design pattern with this kind of usage:
<?php
$cache = new Cache(new Memcache(...));

// This line will create an object User (or from a child class of User, e.g. UserProxy).
// Each call to a method specified in the 3rd argument will use the cache system configured in the 2nd.
$user = ProxyCache::create("User", $cache, array('getBillingRecords', 'computeStatistics'));
$user->load('johndoe');

// user is an instance of User (or a child class) so the contract is respected
$report = new Report($user);
$report->generate(); // long execution time
$report->generate(); // quick execution time (using cache)
$report->generate(); // quick execution time (using cache)
Each call to a proxified method will run something like:
<?php
$key = $this->_getCacheKey();
if ($this->_cache->exists($key) == false)
{
    $records = $this->_originalObject->getBillingRecords();
    $this->_cache->save($key, $records);
}
return $this->_cache->get($key);
Do you think this is something we could do with PHP? Do you know if it is a standard pattern? How would you implement it?
It would require one to:
dynamically implement a new child class of the original object
replace the specified original methods with the cached ones
instantiate a new instance of this class
I think PHPUnit does something like this with its mock system...
You can use the decorator pattern with delegation and create a cache decorator that accepts any object and then delegates all calls after running them through the cache.
Does that make sense?

NHibernate Validator: Using Attributes vs. Using ValidationDefs

I've been using NH Validator for some time, mostly through ValidationDefs, but I'm still not sure about two things:
Is there any special benefit of using ValidationDef for simple/standard validations (like NotNull, MaxLength etc)?
I'm worried about the fact that those two methods throw different kinds of exceptions on validation, for example:
ValidationDef's Define.NotNullable() throws a PropertyValueException
When using the [NotNull] attribute, an InvalidStateException is thrown.
This makes me think mixing these two approaches isn't a good idea - it will be very difficult to handle validation exceptions consistently. Any suggestions/recommendations?
ValidationDef is probably more suitable for business-rules validation, even though, having said that, I have used it even for simple validation. There's more here.
What I like about ValidationDef is that it has a fluent interface.
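For example, a simple external definition might look like this (a sketch against the NHibernate Validator fluent API; the entity and property names are made up):
public class UserValidation : ValidationDef<User>
{
    public UserValidation()
    {
        // Simple/standard rules expressed fluently instead of via attributes
        Define(u => u.Name)
            .NotNullableAndNotEmpty()
            .And.MaxLength(128);
        Define(u => u.Email)
            .NotNullable();
    }
}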
I've been playing around with this engine for quite a while and I've put together something that works quite well for me.
I've defined an interface:
public interface IValidationEngine
{
    bool IsValid(Entity entity);

    IList<Validation.IBrokenRule> Validate(Entity entity);
}
Which is implemented in my validation engine:
public class ValidationEngine : Validation.IValidationEngine
{
    private NHibernate.Validator.Engine.ValidatorEngine _Validator;

    public ValidationEngine()
    {
        var vtor = new NHibernate.Validator.Engine.ValidatorEngine();

        var configuration = new FluentConfiguration();
        configuration
            .SetDefaultValidatorMode(ValidatorMode.UseExternal)
            .Register<Data.NH.Validation.User, Domain.User>()
            .Register<Data.NH.Validation.Company, Domain.Company>()
            .Register<Data.NH.Validation.PlanType, Domain.PlanType>();

        vtor.Configure(configuration);

        this._Validator = vtor;
    }

    public bool IsValid(DomainModel.Entity entity)
    {
        return (this._Validator.IsValid(entity));
    }

    public IList<Validation.IBrokenRule> Validate(DomainModel.Entity entity)
    {
        var Values = new List<Validation.IBrokenRule>();

        NHibernate.Validator.Engine.InvalidValue[] values = this._Validator.Validate(entity);
        if (values.Length > 0)
        {
            foreach (var value in values)
            {
                Values.Add(
                    new Validation.BrokenRule()
                    {
                        // Entity = value.Entity as BpReminders.Data.DomainModel.Entity,
                        // EntityType = value.EntityType,
                        EntityTypeName = value.EntityType.Name,
                        Message = value.Message,
                        PropertyName = value.PropertyName,
                        PropertyPath = value.PropertyPath,
                        // RootEntity = value.RootEntity as DomainModel.Entity,
                        Value = value.Value
                    });
            }
        }

        return (Values);
    }
}
I plug all my domain rules in there.
I bootstrap the engine at the app startup:
For<Validation.IValidationEngine>()
    .Singleton()
    .Use<Validation.ValidationEngine>();
Now, when I need to validate my entities before save, I just use the engine:
if (!this._ValidationEngine.IsValid(User))
{
    BrokenRules = this._ValidationEngine.Validate(User);
}
and return, eventually, the collection of broken rules.