How to fill out Dynamic Dropdown in Hippo CMS with dynamic values?

I have a document type which contains a "Dynamic Dropdown" field, and I want to fill it with dynamic data. I couldn't figure out how to do it (I couldn't find any adequate information, documentation, or examples about this). From the links that I found, I was able to do the following things:
1) I've created a service called SitemapValueListProvider in /hippo:configuration/hippo:frontend/cms/cms-services, with the following properties:
plugin.class = com.test.cms.components.SitemapService
valuelist.provider = service.valuelist.custom
2) In the CMS project, created the class com.test.cms.components.SitemapService:
public class SitemapService extends Plugin implements IValueListProvider {

    private final static String CONFIG_SOURCE = "source";

    public SitemapService(IPluginContext context, IPluginConfig config) {
        super(context, config);
        String name = config.getString(IValueListProvider.SERVICE, "service.valuelist.custom");
        context.registerService(this, name);
    }

    @Override
    public ValueList getValueList(String name, Locale locale) {
        ValueList valuelist = new ValueList();
        if ((name == null) || (name.equals(""))) {
            System.out.println("No node name (uuid or path) configured, returning empty value list");
        } else {
            valuelist.add(new ListItem("custom4", "Custom Value 4"));
            valuelist.add(new ListItem("custom5", "Custom Value 5"));
            valuelist.add(new ListItem("custom6", "Custom Value 6"));
        }
        return valuelist;
    }

    @Override
    public List<String> getValueListNames() {
        List<String> list = new ArrayList<>(1);
        list.add("values");
        return list;
    }

    @Override
    public ValueList getValueList(IPluginConfig config) {
        if (config == null) {
            throw new IllegalArgumentException("Argument 'config' may not be null");
        }
        return getValueList(config.getString(CONFIG_SOURCE));
    }

    @Override
    public ValueList getValueList(String name) {
        return getValueList(name, null /* locale */);
    }
}
3) In the CMS project, created the class com.test.cms.components.TestPlugin:
public class TestPlugin extends Plugin {

    public TestPlugin(IPluginContext context, IPluginConfig config) {
        super(context, config);
        context.registerService(this, "service.valuelist.custom");
    }
}
4) For the field /hippo:namespaces/cms/TestItem/editor:templates/_default_/dynamicdropdown of the document type, provided the following properties (using the console):
plugin.class = com.test.cms.components.TestPlugin
But I'm still unable to obtain data dynamically; nothing happens at all.
I'm using Hippo CMS 10 Community Edition.

You are totally on the right track and I can't spot any obvious reason why this is not working. Can you double-check a few things?
* Look for an error in the logs, possibly at the early start of the CMS. Maybe there is an error during the bootstrap process.
* Activate the development mode in the CMS: this adds extra logging. See http://www.onehippo.org/library/development/debug-wicket-ajax-in-the-cms.html
* You can also try to break the configuration by putting in a wrong class name: if you don't get a ClassNotFoundException, then you know your configuration is wrong and/or not picked up.
HTH.

Related

Is it possible to add completion items to a Microsoft Language Server at runtime?

I am trying to develop an IntelliJ plugin which provides a Language Server with the help of lsp4intellij by Ballerina.
The thing is, I've got a special condition: the list of completion items should be editable at runtime.
But I haven't found any way to communicate new CompletionItems to the LanguageServer process once it's running.
My current idea is to add an action to the plugin which builds a new jar and then restarts the server with the new jar, using the Java Compiler API.
The problem with that is that I need to get the source code from the plugin project, including the Gradle dependencies, accessible from the running plugin... any ideas?
If your requirement is to modify the completion items (coming from the language server) before displaying them in the IntelliJ UI, you can do that by implementing LSP4IntelliJ's
LSPExtensionManager in your plugin.
Currently we do not have proper documentation for LSP4IntelliJ's extension points, but you can refer to our Ballerina IntelliJ plugin as a reference implementation; it implements the Ballerina LSP extension manager to override/modify completion items at client runtime.
For those who might stumble upon this - it is indeed possible to change the set of CompletionItems the LanguageServer provides at runtime.
I simply edited the TextDocumentService.java (the library I used is LSP4J).
It works like this:
The main function of the LanguageServer needs to be started with an additional argument: the path to the config file in which you define the CompletionItems.
Called from LSP4IntelliJ, it would look like this:
String[] command = new String[]{"java", "-jar",
"path\\to\\LangServer.jar", "path\\to\\config.json"};
IntellijLanguageClient.addServerDefinition(new RawCommandServerDefinition("md,java", command));
The path String will then be passed through to the constructor of your CustomTextDocumentService, which will parse the config.json in a new Timer thread.
An Example:
public class CustomTextDocumentService implements TextDocumentService {

    private List<CompletionItem> providedItems;
    private String pathToConfig;

    public CustomTextDocumentService(String pathToConfig) {
        this.pathToConfig = pathToConfig;
        Timer timer = new Timer();
        timer.schedule(new ReloadCompletionItemsTask(), 0, 10000);
        loadCompletionItems();
    }

    @Override
    public CompletableFuture<Either<List<CompletionItem>, CompletionList>> completion(CompletionParams completionParams) {
        return CompletableFuture.supplyAsync(() -> {
            List<CompletionItem> completionItems;
            completionItems = this.providedItems;
            // Return the list of completion items.
            return Either.forLeft(completionItems);
        });
    }

    @Override
    public void didOpen(DidOpenTextDocumentParams didOpenTextDocumentParams) {
    }

    @Override
    public void didChange(DidChangeTextDocumentParams didChangeTextDocumentParams) {
    }

    @Override
    public void didClose(DidCloseTextDocumentParams didCloseTextDocumentParams) {
    }

    @Override
    public void didSave(DidSaveTextDocumentParams didSaveTextDocumentParams) {
    }

    private void loadCompletionItems() {
        providedItems = new ArrayList<>();
        CustomParser customParser = new CustomParser(pathToConfig);
        ArrayList<String> variables = customParser.getTheParsedItems();
        for (String variable : variables) {
            String itemTxt = "$" + variable + "$";
            CompletionItem completionItem = new CompletionItem();
            completionItem.setInsertText(itemTxt);
            completionItem.setLabel(itemTxt);
            completionItem.setKind(CompletionItemKind.Snippet);
            completionItem.setDetail("CompletionItem");
            providedItems.add(completionItem);
        }
    }

    class ReloadCompletionItemsTask extends TimerTask {
        @Override
        public void run() {
            loadCompletionItems();
        }
    }
}
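For reference, here is a minimal sketch (not from the original answer) of how the server's entry point might hand args[0] to this service. CustomLanguageServer is a hypothetical wrapper class assumed to implement LSP4J's LanguageServer and LanguageClientAware and to create the CustomTextDocumentService with the given path:
// Hypothetical entry point: args[0] is the config path supplied by the client
// command line shown above.
import org.eclipse.lsp4j.jsonrpc.Launcher;
import org.eclipse.lsp4j.launch.LSPLauncher;
import org.eclipse.lsp4j.services.LanguageClient;

public class ServerMain {
    public static void main(String[] args) throws Exception {
        String pathToConfig = args[0]; // passed in by the client command line
        CustomLanguageServer server = new CustomLanguageServer(pathToConfig);
        Launcher<LanguageClient> launcher =
                LSPLauncher.createServerLauncher(server, System.in, System.out);
        server.connect(launcher.getRemoteProxy()); // requires LanguageClientAware
        launcher.startListening();
    }
}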

Spring security custom FilterInvocationSecurityMetadataSource implementation 403 forbidden issue

To make things short, I'm trying to implement a custom FilterInvocationSecurityMetadataSource in order to secure/authorize certain parts/URL endpoints dynamically in my web app, using Spring Security 5.0.6 and Spring Boot 2.0.3.
The issue is that no matter what role I use, it always gives me the Forbidden page.
I have tried several things with different role names and (believe me) I have searched the whole internet, even Spring Security 5.0.6 books, but nothing seems to work.
This issue may be similar to this: Spring Security issue with securing URLs dynamically
Below are the relevant parts of the custom FilterInvocationSecurityMetadataSource:
public class DbFilterInvocationSecurityMetadataSource implements FilterInvocationSecurityMetadataSource {

    public Collection<ConfigAttribute> getAttributes(Object object)
            throws IllegalArgumentException {
        FilterInvocation fi = (FilterInvocation) object;
        String url = fi.getRequestUrl();
        System.out.println("URL requested: " + url);
        String[] stockArr = new String[]{"ROLE_ADMIN"};
        return SecurityConfig.createList(stockArr);
    }
Below are the relevant parts of the custom WebSecurityConfigurerAdapter implementation:
@Configuration
public class Security extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .authorizeRequests()
                .anyRequest().authenticated()
                .withObjectPostProcessor(new ObjectPostProcessor<FilterSecurityInterceptor>() {
                    public <O extends FilterSecurityInterceptor> O postProcess(O fsi) {
                        FilterInvocationSecurityMetadataSource newSource = new DbFilterInvocationSecurityMetadataSource();
                        fsi.setSecurityMetadataSource(newSource);
                        return fsi;
                    }
                })
                .and()
            .formLogin()
                .permitAll();
    }
Below are the relevant parts of the custom UserDetails authorities.
The user has the role ROLE_ADMIN in the database.
public class CustomUserDetails extends User implements UserDetails {

    @Override
    public Collection<? extends GrantedAuthority> getAuthorities() {
        List<String> dbRoles = new ArrayList<String>();
        for (Role userRole : super.getRoles()) {
            dbRoles.add(userRole.getType());
        }
        List<SimpleGrantedAuthority> authorities = new ArrayList<SimpleGrantedAuthority>();
        for (String role : dbRoles) {
            authorities.add(new SimpleGrantedAuthority(role));
        }
        return authorities;
    }
What am I doing wrong??
If more code is needed, just comment below.
If you know of any good books where I can learn this dynamic part of Spring Security authorization, comment below as well.
Thanks!
I managed to get into the security flow by debugging, and it seems that creating the ConfigAttributes with this SecurityConfig class is the 'culprit':
return SecurityConfig.createList(stockArr);
public static List<ConfigAttribute> createList(String... attributeNames) {
    Assert.notNull(attributeNames, "You must supply an array of attribute names");
    List<ConfigAttribute> attributes = new ArrayList(attributeNames.length);
    String[] var2 = attributeNames;
    int var3 = attributeNames.length;
    for (int var4 = 0; var4 < var3; ++var4) {
        String attribute = var2[var4];
        attributes.add(new SecurityConfig(attribute.trim()));
    }
    return attributes;
}
Above is the actual implementation of the method, where you can see
attributes.add(new SecurityConfig(attribute.trim()));
which always creates an instance of the SecurityConfig type.
And below you can see where and how the decision is actually being made:
private WebExpressionConfigAttribute findConfigAttribute(Collection<ConfigAttribute> attributes) {
    Iterator var2 = attributes.iterator();
    ConfigAttribute attribute;
    do {
        if (!var2.hasNext()) {
            return null;
        }
        attribute = (ConfigAttribute) var2.next();
    } while (!(attribute instanceof WebExpressionConfigAttribute));
    return (WebExpressionConfigAttribute) attribute;
}
So in order for an attribute to actually be returned for checking, it must be of type WebExpressionConfigAttribute, which is never going to be the case because of this:
attributes.add(new SecurityConfig(attribute.trim()));
So the way I fixed it was to create my own AccessDecisionManager, the following way:
public class MyAccessDecisionManager implements AccessDecisionManager {

    @Override
    public void decide(Authentication authentication, Object object, Collection<ConfigAttribute> configAttributes)
            throws AccessDeniedException, InsufficientAuthenticationException {
        if (configAttributes == null) {
            return;
        }
        Iterator<ConfigAttribute> ite = configAttributes.iterator();
        while (ite.hasNext()) {
            ConfigAttribute ca = ite.next();
            String needRole = ((SecurityConfig) ca).getAttribute();
            for (GrantedAuthority grantedAuthority : authentication.getAuthorities()) {
                if (needRole.trim().equals(grantedAuthority.getAuthority().trim())) {
                    return;
                }
            }
        }
        throw new AccessDeniedException("Access is denied");
    }
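    // Not part of the original post: AccessDecisionManager also declares two
    // supports() methods. A minimal sketch that simply accepts every attribute
    // and every secured-object class could look like this.
    @Override
    public boolean supports(ConfigAttribute attribute) {
        return true;
    }

    @Override
    public boolean supports(Class<?> clazz) {
        return true;
    }
}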
And registering it as above, now setting the AccessDecisionManager to my custom one:
.withObjectPostProcessor(new ObjectPostProcessor<FilterSecurityInterceptor>() {
    public <O extends FilterSecurityInterceptor> O postProcess(O fsi) {
        FilterInvocationSecurityMetadataSource newSource = new DbFilterInvocationSecurityMetadataSource();
        fsi.setSecurityMetadataSource(newSource);
        fsi.setAccessDecisionManager(new MyAccessDecisionManager());
        return fsi;
    }

Gradle : how to use BuildConfig in an android-library with a flag that gets set in an app

My (Gradle 1.10 and Android Gradle plugin 0.8) based Android project consists of a big android-library that is a dependency for 3 different android-apps.
In my library, I would love to be able to use a structure like this:
if (BuildConfig.SOME_FLAG) {
callToBigLibraries()
}
as ProGuard would be able to reduce the size of the produced APK based on the final value of SOME_FLAG.
But I can't figure out how to do it with Gradle, as:
* the BuildConfig produced by the library doesn't have the same package name as the app
* I have to import the BuildConfig with the library package in the library
* the APK of an app includes the BuildConfig with the package of the app, but not the one with the package of the library
I tried without success to play with BuildTypes and stuff like
release {
// packageNameSuffix "library"
buildConfigField "boolean", "SOME_FLAG", "true"
}
debug {
//packageNameSuffix "library"
buildConfigField "boolean", "SOME_FLAG", "true"
}
What is the right way to build a shared BuildConfig for my library and my apps whose flags will be overridden at build time in the apps?
As a workaround, you can use this method, which uses reflection to get the field value from the app (not the library):
/**
 * Gets a field from the project's BuildConfig. This is useful when, for example, flavors
 * are used at the project level to set custom fields.
 * @param context   Used to find the correct file
 * @param fieldName The name of the field-to-access
 * @return The value of the field, or {@code null} if the field is not found.
 */
public static Object getBuildConfigValue(Context context, String fieldName) {
    try {
        Class<?> clazz = Class.forName(context.getPackageName() + ".BuildConfig");
        Field field = clazz.getField(fieldName);
        return field.get(null);
    } catch (ClassNotFoundException e) {
        e.printStackTrace();
    } catch (NoSuchFieldException e) {
        e.printStackTrace();
    } catch (IllegalAccessException e) {
        e.printStackTrace();
    }
    return null;
}
To get the DEBUG field, for example, just call this from your Activity:
boolean debug = (Boolean) getBuildConfigValue(this, "DEBUG");
I have also shared this solution on the AOSP Issue Tracker.
Update: With newer versions of the Android Gradle plugin, publishNonDefault is deprecated and has no effect anymore. All variants are now published.
The following solution/workaround works for me. It was posted by some guy in the google issue tracker:
Try setting publishNonDefault to true in the library project:
android {
...
publishNonDefault true
...
}
And add the following dependencies to the app project that is using the library:
dependencies {
releaseCompile project(path: ':library', configuration: 'release')
debugCompile project(path: ':library', configuration: 'debug')
}
This way, the project that uses the library includes the correct build type of the library.
You can't do what you want, because BuildConfig.SOME_FLAG isn't going to get propagated properly to your library; build types themselves aren't propagated to libraries -- they're always built as RELEASE. This is bug https://code.google.com/p/android/issues/detail?id=52962
To work around it: if you have control over all of the library modules, you could make sure that all the code touched by callToBigLibraries() is in classes and packages that you can cleave off cleanly with ProGuard, then use reflection so that you can access them if they exist and degrade gracefully if they don't. You're essentially doing the same thing, but you're making the check at runtime instead of compile time, and it's a little harder.
Let me know if you're having trouble figuring out how to do this; I could provide a sample if you need it.
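For illustration, here is a minimal sketch of that runtime-reflection approach (not from the original answer); the facade class com.example.big.BigLibraryFacade and its static run() method are hypothetical placeholders for whatever entry point you cleave off with ProGuard:
// Hypothetical gateway in the library: the big-library code lives behind
// com.example.big.BigLibraryFacade, which ProGuard can strip from builds that
// never reference it. The check happens at runtime instead of compile time.
public final class BigLibraryGateway {

    private BigLibraryGateway() {
    }

    public static void callToBigLibrariesIfPresent() {
        try {
            Class<?> facade = Class.forName("com.example.big.BigLibraryFacade");
            facade.getMethod("run").invoke(null); // assumed static entry point
        } catch (ReflectiveOperationException e) {
            // The class was removed from this variant; degrade gracefully.
        }
    }
}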
I use a static BuildConfigHelper class in both the app and the library, so that I can have the package's BuildConfig values set as static final variables in my library.
In the application, place a class like this:
package com.yourbase;

import com.your.application.BuildConfig;

public final class BuildConfigHelper {

    public static final boolean DEBUG = BuildConfig.DEBUG;
    public static final String APPLICATION_ID = BuildConfig.APPLICATION_ID;
    public static final String BUILD_TYPE = BuildConfig.BUILD_TYPE;
    public static final String FLAVOR = BuildConfig.FLAVOR;
    public static final int VERSION_CODE = BuildConfig.VERSION_CODE;
    public static final String VERSION_NAME = BuildConfig.VERSION_NAME;
}
And in the library:
package com.your.library;

import android.support.annotation.Nullable;

import java.lang.reflect.Field;

public class BuildConfigHelper {

    private static final String BUILD_CONFIG = "com.yourbase.BuildConfigHelper";

    public static final boolean DEBUG = getDebug();
    public static final String APPLICATION_ID = (String) getBuildConfigValue("APPLICATION_ID");
    public static final String BUILD_TYPE = (String) getBuildConfigValue("BUILD_TYPE");
    public static final String FLAVOR = (String) getBuildConfigValue("FLAVOR");
    public static final int VERSION_CODE = getVersionCode();
    public static final String VERSION_NAME = (String) getBuildConfigValue("VERSION_NAME");

    private static boolean getDebug() {
        Object o = getBuildConfigValue("DEBUG");
        if (o != null && o instanceof Boolean) {
            return (Boolean) o;
        } else {
            return false;
        }
    }

    private static int getVersionCode() {
        Object o = getBuildConfigValue("VERSION_CODE");
        if (o != null && o instanceof Integer) {
            return (Integer) o;
        } else {
            return Integer.MIN_VALUE;
        }
    }

    @Nullable
    private static Object getBuildConfigValue(String fieldName) {
        try {
            Class c = Class.forName(BUILD_CONFIG);
            Field f = c.getDeclaredField(fieldName);
            f.setAccessible(true);
            return f.get(null);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }
}
Then, anywhere in your library where you would check BuildConfig.DEBUG, you can check BuildConfigHelper.DEBUG instead, and you can access it from anywhere without a Context; the same goes for the other properties. I did it this way so that the library works with all my applications without needing to pass a Context in or set the package name some other way; the application class only needs its import line changed when adding the helper to a new application.
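For example, a hypothetical call site inside the library might look like this (using the library-side helper class above; the tag and message are made up):
// Hypothetical library code: behaves like a BuildConfig.DEBUG check,
// but reads the values resolved by BuildConfigHelper via reflection.
if (BuildConfigHelper.DEBUG) {
    android.util.Log.d("MyLibrary", "Host app " + BuildConfigHelper.APPLICATION_ID
            + " v" + BuildConfigHelper.VERSION_NAME + " (" + BuildConfigHelper.BUILD_TYPE + ")");
}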
Edit: I'd just like to reiterate that this is the easiest way (and the only one listed here) to get the values assigned to static final variables in the library from all of your applications, without needing a Context or hard-coding the package name somewhere. It is almost as good as having the values in the default library BuildConfig, for the minimal upkeep of changing that import line in each application.
For the case where the applicationId is not the same as the package (i.e. multiple applicationIds per project) AND you want to access it from a library project:
Use Gradle to store the base package in resources.
In the app's build.gradle:
android {
applicationId "com.company.myappbase"
// note: using ${applicationId} here will be exactly as above
// and so NOT necessarily the applicationId of the generated APK
resValue "string", "build_config_package", "${applicationId}"
}
In Java:
public static boolean getDebug(Context context) {
    Object obj = getBuildConfigValue("DEBUG", context);
    if (obj instanceof Boolean) {
        return (Boolean) obj;
    } else {
        return false;
    }
}

private static Object getBuildConfigValue(String fieldName, Context context) {
    int resId = context.getResources().getIdentifier("build_config_package", "string", context.getPackageName());
    // try/catch blah blah
    Class<?> clazz = Class.forName(context.getString(resId) + ".BuildConfig");
    Field field = clazz.getField(fieldName);
    return field.get(null);
}
I use both. My build.gradle:
// ...
productFlavors {
    internal {
        // applicationId "com.elevensein.sein.internal"
        applicationIdSuffix ".internal"
        resValue "string", "build_config_package", "com.elevensein.sein"
    }
    production {
        applicationId "com.elevensein.sein"
    }
}
I want to call it like below:
Boolean isDebug = (Boolean) BuildConfigUtils.getBuildConfigValue(context, "DEBUG");
BuildConfigUtils.java
public class BuildConfigUtils {

    public static Object getBuildConfigValue(Context context, String fieldName) {
        Class<?> buildConfigClass = resolveBuildConfigClass(context);
        return getStaticFieldValue(buildConfigClass, fieldName);
    }

    public static Class<?> resolveBuildConfigClass(Context context) {
        int resId = context.getResources().getIdentifier("build_config_package",
                "string",
                context.getPackageName());
        if (resId != 0) {
            // defined in build.gradle
            return loadClass(context.getString(resId) + ".BuildConfig");
        }

        // not defined in build.gradle
        // try packageName + ".BuildConfig"
        return loadClass(context.getPackageName() + ".BuildConfig");
    }

    private static Class<?> loadClass(String className) {
        Log.i("BuildConfigUtils", "try class load : " + className);
        try {
            return Class.forName(className);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
        return null;
    }

    private static Object getStaticFieldValue(Class<?> clazz, String fieldName) {
        try {
            return clazz.getField(fieldName).get(null);
        } catch (NoSuchFieldException e) {
            e.printStackTrace();
        } catch (IllegalAccessException e) {
            e.printStackTrace();
        }
        return null;
    }
}
For me, this is the ONLY acceptable* solution for determining the Android application's BuildConfig class:
// base entry point
// abstract application
// which defines the method to obtain the desired class
// the definition of the application is contained in the library
// that wants to access the method, or in a superior library package
public abstract class BaseApp extends android.app.Application {

    /*
     * GET BUILD CONFIG CLASS
     */
    protected abstract Class<?> getAppBuildConfigClass();

    // HELPER METHOD TO CAST CONTEXT TO BASE APP
    public static BaseApp getAs(android.content.Context context) {
        BaseApp as = getAs(context, BaseApp.class);
        return as;
    }

    // HELPER METHOD TO CAST CONTEXT TO A SPECIFIC BaseApp-INHERITED CLASS TYPE
    public static <I extends BaseApp> I getAs(android.content.Context context, Class<I> forClass) {
        android.content.Context applicationContext = context != null ? context.getApplicationContext() : null;
        return applicationContext != null && forClass != null && forClass.isAssignableFrom(applicationContext.getClass())
                ? (I) applicationContext
                : null;
    }

    // STATIC HELPER TO GET BUILD CONFIG CLASS
    public static Class<?> getAppBuildConfigClass(android.content.Context context) {
        BaseApp as = getAs(context);
        Class<?> buildConfigClass = as != null
                ? as.getAppBuildConfigClass()
                : null;
        return buildConfigClass;
    }
}

// FINAL APP WITH IMPLEMENTATION
// POINTING TO DESIRED CLASS
public class MyApp extends BaseApp {

    @Override
    protected Class<?> getAppBuildConfigClass() {
        return somefinal.app.package.BuildConfig.class;
    }
}
USAGE IN LIBRARY:
Class<?> buildConfigClass = BaseApp.getAppBuildConfigClass(context);
if (buildConfigClass != null) {
    // do your job
}
* There are a couple of things to watch out for:
getApplicationContext() could return a context which is not an Application ContextWrapper implementation; see what the Application class extends and get to know the possibilities of context wrapping.
The class returned by the final app could be loaded by a different class loader than the ones used by the code consuming it; this depends on the loader implementation and on principles typical for loaders (hierarchy, visibility).
Everything depends on the implementation; in this case it is simple DELEGATION. The solution could be more sophisticated; I only wanted to show the usage of the delegation pattern here :)
** Why I downvoted all of the reflection-based patterns: they all have weak points, and under certain conditions they will all fail:
Class.forName(className); - because the class loader is not specified
context.getPackageName() + ".BuildConfig"
a) context.getPackageName() - by default (else see b) returns not the package defined in the manifest but the application id (sometimes the two are the same); see how the manifest package property is used and its flow - in the end the aapt tool replaces it with the application id (see the ComponentName class, for example, for what the pkg stands for there)
b) context.getPackageName() - will return whatever the implementation wants it to :P
*** What to change in my solution to make it more flawless:
Replace the class with its name; that drops the problems which can appear when classes loaded by different loaders access, or are used to obtain, a result involving the class. (Get to know what defines equality between two classes at runtime: in short, class equality is defined not by the class alone but by the pair of class loader and class. As homework, try loading an inner class with one loader and accessing it from its outer class loaded with another loader - it turns out you get an illegal access error, even though the inner class is in the same package and has all the modifiers allowing access from its outer class; the compiler/linker/VM treats them as two unrelated classes.)

How can I use MEF to manage interdependent modules?

I found this question difficult to express (particularly in title form), so please bear with me.
I have an application that I am continually modifying to do different things. It seems like MEF might be a good way to manage the different pieces of functionality. Broadly speaking, there are three sections of the application that form a pipeline of sorts:
Acquisition
Transformation
Expression
In its simplest form, I can express each of these stages as an interface (IAcquisition etc.). The problems start when I want to use acquisition components that provide richer data than the standard. I want to design modules that use this richer data, but I can't rely on it being there.
I could, of course, add all of the data to the interface specification. I could deal with poorer data sources by throwing an exception or returning a null value. This seems a long way from ideal.
I'd prefer to do the MEF binding in three stages, such that modules are offered to the user only if they are compatible with those selected previously.
So my question: Can I specify metadata which restricts the set of available imports?
An example:
Acquision1 offers BasicData only
Acquision2 offers BasicData and AdvancedData
Transformation1 requires BasicData
Transformation2 requires BasicData and AdvancedData
Acquisition module is selected first.
If Acquisition1 is selected, don't offer Transformation 2, otherwise offer both.
Is this possible? If so, how?
Your question suggests a structure like this:
public class BasicData
{
    public string Basic { get; set; } // example data
}

public class AdvancedData : BasicData
{
    public string Advanced { get; set; } // example data
}
Now you have your acquisition, transformation and expression components. You want to be able to deal with different kinds of data, so they're generic:
public interface IAcquisition<out TDataKind>
{
    TDataKind Acquire();
}

public interface ITransformation<TDataKind>
{
    TDataKind Transform(TDataKind data);
}

public interface IExpression<in TDataKind>
{
    void Express(TDataKind data);
}
And now you want to build a pipeline out of them that looks like this:
IExpression.Express(ITransformation.Transform(IAcquisition.Acquire));
So let's start building a pipeline builder:
using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.ComponentModel.Composition.Primitives;
using System.Linq;
using System.Linq.Expressions;
// namespace ...
public static class PipelineBuidler
{
private static readonly string AcquisitionIdentity =
AttributedModelServices.GetTypeIdentity(typeof(IAcquisition<>));
private static readonly string TransformationIdentity =
AttributedModelServices.GetTypeIdentity(typeof(ITransformation<>));
private static readonly string ExpressionIdentity =
AttributedModelServices.GetTypeIdentity(typeof(IExpression<>));
public static Action BuildPipeline(ComposablePartCatalog catalog,
Func<IEnumerable<string>, int> acquisitionSelector,
Func<IEnumerable<string>, int> transformationSelector,
Func<IEnumerable<string>, int> expressionSelector)
{
var container = new CompositionContainer(catalog);
The class holds MEF type identities for your three contract interfaces. We'll need those later to identify the correct exports. Our BuildPipeline method returns an Action. That is going to be the pipeline, so we can just do pipeline(). It takes a ComposablePartCatalog and three Funcs (to select an export). That way, we can keep all the dirty work inside this class. Then we start by creating a CompositionContainer.
Now we have to build ImportDefinitions, first for the acquisition component:
var aImportDef = new ImportDefinition(def => (def.ContractName == AcquisitionIdentity), null, ImportCardinality.ZeroOrMore, true, false);
This ImportDefinition simply filters out all exports of the IAcquisition<> interface. Now we can give it to the container:
var aExports = container.GetExports(aImportDef).ToArray();
aExports now holds all IAcquisition<> exports in the catalog. So let's run the selector on this:
var selectedAExport = aExports[acquisitionSelector(aExports.Select(export => export.Metadata["Name"] as string))];
And there we have our acquisition component:
var acquisition = selectedAExport.Value;
var acquisitionDataKind = (Type)selectedAExport.Metadata["DataKind"];
Now we're going to do the same for the transformation and the expression components, but with one slight difference: The ImportDefinition is going to ensure that each component can handle the output of the previous component.
var tImportDef = new ImportDefinition(def => (def.ContractName == TransformationIdentity) && ((Type)def.Metadata["DataKind"]).IsAssignableFrom(acquisitionDataKind),
null, ImportCardinality.ZeroOrMore, true, false);
var tExports = container.GetExports(tImportDef).ToArray();
var selectedTExport = tExports[transformationSelector(tExports.Select(export => export.Metadata["Name"] as string))];
var transformation = selectedTExport.Value;
var transformationDataKind = (Type)selectedTExport.Metadata["DataKind"];
var eImportDef = new ImportDefinition(def => (def.ContractName == ExpressionIdentity) && ((Type)def.Metadata["DataKind"]).IsAssignableFrom(transformationDataKind),
null, ImportCardinality.ZeroOrMore, true, false);
var eExports = container.GetExports(eImportDef).ToArray();
var selectedEExport = eExports[expressionSelector(eExports.Select(export => export.Metadata["Name"] as string))];
var expression = selectedEExport.Value;
var expressionDataKind = (Type)selectedEExport.Metadata["DataKind"];
And now we can wire it all up in an expression tree:
var acquired = Expression.Call(Expression.Constant(acquisition), typeof(IAcquisition<>).MakeGenericType(acquisitionDataKind).GetMethod("Acquire"));
var transformed = Expression.Call(Expression.Constant(transformation), typeof(ITransformation<>).MakeGenericType(transformationDataKind).GetMethod("Transform"), acquired);
var expressed = Expression.Call(Expression.Constant(expression), typeof(IExpression<>).MakeGenericType(expressionDataKind).GetMethod("Express"), transformed);
return Expression.Lambda<Action>(expressed).Compile();
}
}
And that's it! A simple example application would look like this:
[Export(typeof(IAcquisition<>))]
[ExportMetadata("DataKind", typeof(BasicData))]
[ExportMetadata("Name", "Basic acquisition")]
public class Acquisition1 : IAcquisition<BasicData>
{
public BasicData Acquire()
{
return new BasicData { Basic = "Acquisition1" };
}
}
[Export(typeof(IAcquisition<>))]
[ExportMetadata("DataKind", typeof(AdvancedData))]
[ExportMetadata("Name", "Advanced acquisition")]
public class Acquisition2 : IAcquisition<AdvancedData>
{
public AdvancedData Acquire()
{
return new AdvancedData { Advanced = "Acquisition2A", Basic = "Acquisition2B" };
}
}
[Export(typeof(ITransformation<>))]
[ExportMetadata("DataKind", typeof(BasicData))]
[ExportMetadata("Name", "Basic transformation")]
public class Transformation1 : ITransformation<BasicData>
{
public BasicData Transform(BasicData data)
{
data.Basic += " - Transformed1";
return data;
}
}
[Export(typeof(ITransformation<>))]
[ExportMetadata("DataKind", typeof(AdvancedData))]
[ExportMetadata("Name", "Advanced transformation")]
public class Transformation2 : ITransformation<AdvancedData>
{
public AdvancedData Transform(AdvancedData data)
{
data.Basic += " - Transformed2";
data.Advanced += " - Transformed2";
return data;
}
}
[Export(typeof(IExpression<>))]
[ExportMetadata("DataKind", typeof(BasicData))]
[ExportMetadata("Name", "Basic expression")]
public class Expression1 : IExpression<BasicData>
{
public void Express(BasicData data)
{
Console.WriteLine("Expression1: {0}", data.Basic);
}
}
[Export(typeof(IExpression<>))]
[ExportMetadata("DataKind", typeof(AdvancedData))]
[ExportMetadata("Name", "Advanced expression")]
public class Expression2 : IExpression<AdvancedData>
{
public void Express(AdvancedData data)
{
Console.WriteLine("Expression2: ({0}) - ({1})", data.Basic, data.Advanced);
}
}
class Program
{
static void Main(string[] args)
{
var pipeline = PipelineBuidler.BuildPipeline(new AssemblyCatalog(typeof(Program).Assembly), StringSelector, StringSelector, StringSelector);
pipeline();
}
static int StringSelector(IEnumerable<string> strings)
{
int i = 0;
foreach (var item in strings)
Console.WriteLine("[{0}] {1}", i++, item);
return int.Parse(Console.ReadLine());
}
}

Mono.CSharp: how do I inject a value/entity *into* a script?

Just came across the latest build of Mono.CSharp and love the promise it offers.
Was able to get the following all worked out:
namespace XAct.Spikes.Duo
{
class Program
{
static void Main(string[] args)
{
CompilerSettings compilerSettings = new CompilerSettings();
compilerSettings.LoadDefaultReferences = true;
Report report = new Report(new Mono.CSharp.ConsoleReportPrinter());
Mono.CSharp.Evaluator e;
e= new Evaluator(compilerSettings, report);
//IMPORTANT: This has to be put before you include references to any assemblies,
//or you'll get a stream of errors:
e.Run("using System;");
//IMPORTANT:You have to reference the assemblies your code references...
//...including this one:
e.Run("using XAct.Spikes.Duo;");
//Go crazy -- although that takes time:
//foreach (Assembly assembly in AppDomain.CurrentDomain.GetAssemblies())
//{
// e.ReferenceAssembly(assembly);
//}
//More appropriate in most cases:
e.ReferenceAssembly((typeof(A).Assembly));
//Exception due to no semicolon
//e.Run("var a = 1+3");
//Doesn't set anything:
//e.Run("a = 1+3;");
//Works:
//e.ReferenceAssembly(typeof(A).Assembly);
e.Run("var a = 1+3;");
e.Run("A x = new A{Name=\"Joe\"};");
var a = e.Evaluate("a;");
var x = e.Evaluate("x;");
//Not extremely useful:
string check = e.GetVars();
//Note that you have to type it:
Console.WriteLine(((A) x).Name);
e = new Evaluator(compilerSettings, report);
var b = e.Evaluate("a;");
}
}
public class A
{
public string Name { get; set; }
}
}
And that was fun... I can create a variable in the script's scope and export the value.
There's just one last thing to figure out: how can I get a value in (e.g. a domain entity that I want to apply a rule script to), without using a static (I'm thinking of using this in a web app)?
I've seen the use of compiled delegates, but that was for the previous version of Mono.CSharp, and it doesn't seem to work any longer.
Anybody have a suggestion on how to do this with the current version?
Thanks very much.
References:
* Injecting a variable into the Mono.CSharp.Evaluator (runtime compiling a LINQ query from string)
* http://naveensrinivasan.com/tag/mono/
I know it's almost 9 years later, but I think I found a viable solution to inject local variables. It is using a static variable but can still be used by multiple evaluators without collision.
You can use a static Dictionary<string, object> which holds the reference to be injected. Let's say we are doing all this from within our class CsharpConsole:
public class CsharpConsole {
public static Dictionary<string, object> InjectionRepository {get; set; } = new Dictionary<string, object>();
}
The idea is to temporarily place the value in there with a GUID as key so there won't be any conflict between multiple evaluator instances. To inject do this:
public void InjectLocal(string name, object value, string type=null) {
var id = Guid.NewGuid().ToString();
InjectionRepository[id] = value;
type = type ?? value.GetType().FullName;
// note for generic or nested types value.GetType().FullName won't return a compilable type string, so you have to set the type parameter manually
var success = _evaluator.Run($"var {name} = ({type})MyNamespace.CsharpConsole.InjectionRepository[\"{id}\"];");
// clean it up to avoid memory leak
InjectionRepository.Remove(id);
}
Also for accessing local variables there is a workaround using Reflection so you can have a nice [] accessor with get and set:
public object this[string variable]
{
get
{
FieldInfo fieldInfo = typeof(Evaluator).GetField("fields", BindingFlags.NonPublic | BindingFlags.Instance);
if (fieldInfo != null)
{
var fields = fieldInfo.GetValue(_evaluator) as Dictionary<string, Tuple<FieldSpec, FieldInfo>>;
if (fields != null)
{
if (fields.TryGetValue(variable, out var tuple) && tuple != null)
{
var value = tuple.Item2.GetValue(_evaluator);
return value;
}
}
}
return null;
}
set
{
InjectLocal(variable, value);
}
}
Using this trick, you can even inject delegates and functions that your evaluated code can call from within the script. For instance, I inject a print function which my code can call to output something to the GUI console window:
public delegate void PrintFunc(params object[] o);
public void puts(params object[] o)
{
// call the OnPrint event to redirect the output to gui console
if (OnPrint!=null)
OnPrint(string.Join("", o.Select(x => (x ?? "null").ToString() + "\n").ToArray()));
}
This puts function can now be easily injected like this:
InjectLocal("puts", (PrintFunc)puts, "CsInterpreter2.PrintFunc");
And just be called from within your scripts:
puts(new object[] { "hello", "world!" });
Note that there is also a native print function, but it writes directly to STDOUT, and redirecting individual output from multiple console windows is not possible.