Loading Static Data from a Database in a WCF RIA Application

As part of a WCF RIA application I'm creating, I'd like to cache a bunch of static supporting data locally (lists of water systems, countries, provinces; that sort of thing). I've created a simple static class (LocalStateContainer.cs) to cache the lists in.
Example:
public static class LocalStateContainer
{
    private static IEnumerable _waterSystems;

    public static IEnumerable WaterSystems
    {
        get
        {
            if (_waterSystems == null)
            {
                DomainDataSource ds = new DomainDataSource();
                Web.SuperDomainContext d = new Web.SuperDomainContext();
                ds.DomainContext = d;
                ds.QueryName = "GetWaterSystems";
                ds.Load();
                _waterSystems = ds.Data;
            }
            return _waterSystems;
        }
    }
}
Is it prudent to use a DomainDataSource in this way? Could I not just as easily go:
public static class LocalStateContainer
{
    private static IEnumerable _waterSystems;

    public static IEnumerable WaterSystems
    {
        get
        {
            if (_waterSystems == null)
            {
                Web.SuperDomainContext d = new Web.SuperDomainContext();
                _waterSystems = from w in d.WaterSystems select w;
            }
            return _waterSystems;
        }
    }
}
More broadly, when is it smart to use a DomainDataSource to retrieve data versus accessing the DomainContext directly? I imagine the DomainDataSource is the way to go for two-way binding, but is it harmful or foolish to pull static data directly out of the DomainContext?
Any insight is appreciated; I'm still very new to Silverlight so apologies if this is mickey mouse stuff.
Thanks!

I wouldn't bother with a DomainDataSource here; just have a static myDomainContext in App.cs that you can ping:
LoadOperation<my_entity> loadComplete = App.myDAL.Load(App.myDAL.Getmy_entityQuery());
And then if you care about knowing when it's done fetching:
loadComplete.Completed += new EventHandler(loadChain_Completed);
void loadChain_Completed(object sender, EventArgs e)
{
    // Stuff to do when the data has been fetched, for example:
    var myEntities = App.myDAL.my_entitys.ToList();
}
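Putting the two together, a lazy, load-once cache on a shared context might look like the sketch below. It is only a sketch: it assumes your generated context exposes a GetWaterSystemsQuery() method and a WaterSystems entity set, matching the GetWaterSystems query in the question. Since the load is asynchronous, callers get the data via a callback:

public static class LocalStateContainer
{
    private static readonly Web.SuperDomainContext Context = new Web.SuperDomainContext();
    private static bool _loaded;

    public static void GetWaterSystems(Action<IEnumerable> callback)
    {
        if (_loaded)
        {
            callback(Context.WaterSystems); // already cached client-side
            return;
        }
        var op = Context.Load(Context.GetWaterSystemsQuery()); // assumed generated query method
        op.Completed += (s, e) =>
        {
            _loaded = true;
            callback(Context.WaterSystems); // the entity set is populated now
        };
    }
}

That asynchrony is also why the getters in the question are risky: DomainDataSource.Load and DomainContext.Load both return before the data arrives, and the LINQ variant never issues a server query at all; enumerating d.WaterSystems only reads the client-side entity cache, which stays empty until something calls Load.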

Related

Is the decorator pattern the correct pattern to be used in this situation?

I would like to ask whether the decorator pattern suits my needs, or whether there is another way to make my software design better.
Previously I had a device which was on all the time. In the code below, that is the Device class. Now, to conserve battery life, I need to turn it off and then on again around each operation. I created a DeviceWithOnOffDecorator class. I used the decorator pattern, which I think helped a lot in avoiding modifications to the Device class. But with On and Off wrapped around every operation, I feel that the code doesn't conform to the DRY principle.
namespace Decorator
{
    interface IDevice
    {
        byte[] GetData();
        void SendData();
    }

    class Device : IDevice
    {
        public byte[] GetData() { return new byte[] { 1, 2, 3 }; }
        public void SendData() { Console.WriteLine("Sending Data"); }
    }

    // New requirement: the device needs to be turned on and turned off
    // after each operation to save some battery power.
    class DeviceWithOnOffDecorator : IDevice
    {
        IDevice mIdevice;

        public DeviceWithOnOffDecorator(IDevice d)
        {
            this.mIdevice = d;
            Off();
        }

        void Off() { Console.WriteLine("Off"); }
        void On() { Console.WriteLine("On"); }

        public byte[] GetData()
        {
            On();
            var b = mIdevice.GetData();
            Off();
            return b;
        }

        public void SendData()
        {
            On();
            mIdevice.SendData();
            Off();
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            Device device = new Device();
            DeviceWithOnOffDecorator devicewithOnOff = new DeviceWithOnOffDecorator(device);
            IDevice iDevice = devicewithOnOff;
            var data = iDevice.GetData();
            iDevice.SendData();
        }
    }
}
In this example I have only two operations, GetData and SendData, but in the actual software there are many operations involved, and I need to enclose each of them in On and Off:
void AnotherOperation1()
{
    On();
    // do all the work here
    Off();
}

byte AnotherOperation2()
{
    On();
    byte b = 0;
    // do all the work here
    Off();
    return b;
}
I feel that enclosing each function in On and Off is repetitive. Is there a way to improve this?
Edit: Also, the original code is in C++. I just wrote it in C# here to be able to show the problem clearer.
Decorator won't suit this purpose, since you are not adding a responsibility dynamically. What you need to do is intercept each request and execute the On() and Off() methods before and after the actual invocation. For that purpose, write a proxy that wraps the underlying instance and do the interception there, leaving your original type as it is.
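In C# terms (the question's code; in the original C++ the same idea would be a hand-written forwarding proxy), one way to sketch such an interceptor is System.Reflection.DispatchProxy, available in .NET Core and as a NuGet package for older frameworks. A minimal sketch reusing the IDevice interface from the question, not a drop-in implementation:

using System;
using System.Reflection;

public class OnOffProxy : DispatchProxy
{
    private IDevice _target;

    public static IDevice Wrap(IDevice target)
    {
        IDevice proxy = Create<IDevice, OnOffProxy>();
        ((OnOffProxy)proxy)._target = target;
        return proxy;
    }

    protected override object Invoke(MethodInfo targetMethod, object[] args)
    {
        On();
        try
        {
            return targetMethod.Invoke(_target, args); // forward to the real device
        }
        finally
        {
            Off(); // runs even if the forwarded call throws
        }
    }

    private static void On() { Console.WriteLine("On"); }
    private static void Off() { Console.WriteLine("Off"); }
}

// Usage:
// IDevice device = OnOffProxy.Wrap(new Device());
// var data = device.GetData(); // On ... Off happens automatically

Every call through the interface now funnels through Invoke, so the On/Off pair is written exactly once, and any new operation added to IDevice is wrapped automatically.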

Do we need sessions in WebRTC?

I am creating a sample project for learning purposes (later on I will be working on a project based on WebRTC and Kurento). I am using the Kurento media server, and I have made a sample by modifying one of the Kurento tutorials.
All of the Kurento server samples use a UserRegistry.java that stores UserSession objects, as shown below:
public class UserSession {

    private static final Logger log = LoggerFactory.getLogger(UserSession.class);

    private final String name;
    private final WebSocketSession session;
    private String sdpOffer;
    private String callingTo;
    private String callingFrom;
    private WebRtcEndpoint webRtcEndpoint;
    private WebRtcEndpoint playingWebRtcEndpoint;
    private final List<IceCandidate> candidateList = new ArrayList<>();

    public UserSession(WebSocketSession session, String name) {
        this.session = session;
        this.name = name;
    }

    public void sendMessage(JsonObject message) throws IOException {
        log.debug("Sending message from user '{}': {}", name, message);
        session.sendMessage(new TextMessage(message.toString()));
    }

    public String getSessionId() {
        return session.getId();
    }

    public void setWebRtcEndpoint(WebRtcEndpoint webRtcEndpoint) {
        this.webRtcEndpoint = webRtcEndpoint;
        if (this.webRtcEndpoint != null) {
            for (IceCandidate e : candidateList) {
                this.webRtcEndpoint.addIceCandidate(e);
            }
            this.candidateList.clear();
        }
    }

    public void addCandidate(IceCandidate candidate) {
        if (this.webRtcEndpoint != null) {
            this.webRtcEndpoint.addIceCandidate(candidate);
        } else {
            candidateList.add(candidate);
        }
        if (this.playingWebRtcEndpoint != null) {
            this.playingWebRtcEndpoint.addIceCandidate(candidate);
        }
    }

    public void clear() {
        this.webRtcEndpoint = null;
        this.candidateList.clear();
    }
}
I have two questions on this:
Why do we need session object?
What are the alternatives(if there are any) to manage session?
Let me give some more background on the second question. I found out that I can run the Kurento JavaScript client on the client side only (I need to convert it to a browser version first, and then I can use it). In that case, my assumption is that I won't require a backend server such as Node.js or Tomcat. So how would I manage sessions then? Or could I remove the UserRegistry concept entirely and use some other approach?
Thanks & Regards
You need to store sessions to implement signalling between the clients and the application server. See for example here. The signalling diagram describes the messages required to start/stop/etc. the WebRTC video communication.
If you are planning to get rid of the application server (i.e. move to the JavaScript client completely), you can take a look at a publish/subscribe API such as PubNub.

Storm KafkaSpout Kryo serialization issue for a Java bean from a Kafka topic

Hi, I am new to Storm and Kafka.
I am using Storm 1.0.1 and Kafka 0.10.0.
We have a KafkaSpout that receives a Java bean from a Kafka topic.
I have spent several hours digging to find the right approach for this.
I found a few useful articles, but none of the approaches has worked for me so far.
Following is my code:
StormTopology:
public class StormTopology {
    public static void main(String[] args) throws Exception {
        // Topo test /zkroot test
        if (args.length == 4) {
            System.out.println("started");
            BrokerHosts hosts = new ZkHosts("localhost:2181");

            SpoutConfig kafkaConf1 = new SpoutConfig(hosts, args[1], args[2], args[3]);
            kafkaConf1.zkRoot = args[2];
            kafkaConf1.useStartOffsetTimeIfOffsetOutOfRange = true;
            kafkaConf1.startOffsetTime = kafka.api.OffsetRequest.LatestTime();
            kafkaConf1.scheme = new SchemeAsMultiScheme(new KryoScheme());

            KafkaSpout kafkaSpout1 = new KafkaSpout(kafkaConf1);
            System.out.println("started");

            ShuffleBolt shuffleBolt = new ShuffleBolt(args[1]);
            AnalysisBolt analysisBolt = new AnalysisBolt(args[1]);

            TopologyBuilder topologyBuilder = new TopologyBuilder();
            topologyBuilder.setSpout("kafkaspout", kafkaSpout1, 1);
            //builder.setBolt("counterbolt2", countbolt2, 3).shuffleGrouping("kafkaspout");
            // For fields grouping we need two bolts, or it won't work:
            topologyBuilder.setBolt("shuffleBolt", shuffleBolt, 3).shuffleGrouping("kafkaspout");
            topologyBuilder.setBolt("analysisBolt", analysisBolt, 5).fieldsGrouping("shuffleBolt", new Fields("trip"));

            Config config = new Config();
            config.registerSerialization(VehicleTrip.class, VehicleTripKyroSerializer.class);
            config.setDebug(true);
            config.setNumWorkers(1);

            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology(args[0], config, topologyBuilder.createTopology());
            // StormSubmitter.submitTopology(args[0], config, builder.createTopology());
        } else {
            System.out.println("Insufficient arguments - topologyName kafkaTopic zkRoot id");
        }
    }
}
I am serializing the data into Kafka using Kryo.
KafkaProducer:
public class StreamKafkaProducer {

    private static Producer producer;
    private final Properties props = new Properties();
    private static final StreamKafkaProducer KAFKA_PRODUCER = new StreamKafkaProducer();

    private StreamKafkaProducer() {
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "com.abc.serializer.MySerializer");
        producer = new org.apache.kafka.clients.producer.KafkaProducer(props);
    }

    public static StreamKafkaProducer getStreamKafkaProducer() {
        return KAFKA_PRODUCER;
    }

    public void produce(String topic, VehicleTrip vehicleTrip) {
        ProducerRecord<String, VehicleTrip> producerRecord = new ProducerRecord<>(topic, vehicleTrip);
        producer.send(producerRecord);
        //producer.close();
    }

    public static void closeProducer() {
        producer.close();
    }
}
Kryo serializer:
public class DataKyroSerializer extends Serializer<Data> implements Serializable {

    @Override
    public void write(Kryo kryo, Output output, Data data) {
        output.writeLong(data.getStartedOn().getTime());
        output.writeLong(data.getEndedOn().getTime());
    }

    @Override
    public Data read(Kryo kryo, Input input, Class<Data> aClass) {
        Data data = new Data();
        data.setStartedOn(new Date(input.readLong()));
        data.setEndedOn(new Date(input.readLong()));
        return data;
    }
}
I need to get the data back into the Data bean.
As per a few articles, I need to provide a custom scheme and make it part of the topology, but so far I have had no luck.
Code for Bolt and Scheme
Scheme:
public class KryoScheme implements Scheme {

    private ThreadLocal<Kryo> kryos = new ThreadLocal<Kryo>() {
        protected Kryo initialValue() {
            Kryo kryo = new Kryo();
            kryo.addDefaultSerializer(Data.class, new DataKyroSerializer());
            return kryo;
        }
    };

    @Override
    public List<Object> deserialize(ByteBuffer ser) {
        return Utils.tuple(kryos.get().readObject(new ByteBufferInput(ser.array()), Data.class));
    }

    @Override
    public Fields getOutputFields() {
        return new Fields("data");
    }
}
And the bolt:
public class AnalysisBolt implements IBasicBolt {

    private static final long serialVersionUID = 1L;
    private String topicname = null;

    public AnalysisBolt(String topicname) {
        this.topicname = topicname;
    }

    public void prepare(Map stormConf, TopologyContext topologyContext) {
        System.out.println("prepare");
    }

    public void execute(Tuple input, BasicOutputCollector collector) {
        System.out.println("execute");
        Fields fields = input.getFields();
        try {
            JSONObject eventJson = (JSONObject) JSONSerializer.toJSON((String) input.getValueByField(fields.get(1)));
            String startTime = (String) eventJson.get("startedOn");
            String endTime = (String) eventJson.get("endedOn");
            String oid = (String) eventJson.get("_id");
            int vId = (Integer) eventJson.get("vehicleId");
            // call getEventForVehicleWithinTime(Long vehicleId, Date startTime, Date endTime)
            System.out.println("===========" + oid + "| " + vId + "| " + startTime + "| " + endTime);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
But if I submit the Storm topology, I am getting this error:
java.lang.IllegalStateException: Spout 'kafkaspout' contains a
non-serializable field of type com.abc.topology.KryoScheme$1, which
was instantiated prior to topology creation.
com.minda.iconnect.topology.KryoScheme$1 should be instantiated within
the prepare method of 'kafkaspout at the earliest.
I'd appreciate help debugging the issue and guidance toward the right path.
Thanks
Your ThreadLocal is not Serializable. The preferable solution would be to make your serializer both Serializable and thread-safe. If this is not possible, then I see two alternatives, since there is no prepare method as you would get in a bolt:
1. Declare it as static, which is inherently transient.
2. Declare it transient and access it via a private get method; then you can initialize the variable on first access.
Within the Storm lifecycle, the topology is instantiated and then serialized to byte format to be stored in ZooKeeper, prior to the topology being executed. Within this step, if a spout or bolt within the topology has an initialized unserializable property, serialization will fail.
If there is a need for a field that is unserializable, initialize it within the bolt or spout's prepare method, which is run after the topology is delivered to the worker.
Source: Best Practices for implementing Apache Storm

How can I use MEF to manage interdependent modules?

I found this question difficult to express (particularly in title form), so please bear with me.
I have an application that I am continually modifying to do different things. It seems like MEF might be a good way to manage the different pieces of functionality. Broadly speaking, there are three sections of the application that form a pipeline of sorts:
Acquisition
Transformation
Expression
In its simplest form, I can express each of these stages as an interface (IAcquisition etc.). The problems start when I want to use acquisition components that provide richer data than standard. I want to design modules that use this richer data, but I can't rely on it being there.
I could, of course, add all of the data to the interface specification. I could deal with poorer data sources by throwing an exception or returning a null value. This seems a long way from ideal.
I'd prefer to do the MEF binding in three stages, such that modules are offered to the user only if they are compatible with those selected previously.
So my question: Can I specify metadata which restricts the set of available imports?
An example:
Acquisition1 offers BasicData only
Acquisition2 offers BasicData and AdvancedData
Transformation1 requires BasicData
Transformation2 requires BasicData and AdvancedData
The acquisition module is selected first.
If Acquisition1 is selected, don't offer Transformation2; otherwise offer both.
Is this possible? If so, how?
Your question suggests a structure like this:
public class BasicData
{
    public string Basic { get; set; } // example data
}

public class AdvancedData : BasicData
{
    public string Advanced { get; set; } // example data
}
Now you have your acquisition, transformation and expression components. You want to be able to deal with different kinds of data, so they're generic:
public interface IAcquisition<out TDataKind>
{
    TDataKind Acquire();
}

public interface ITransformation<TDataKind>
{
    TDataKind Transform(TDataKind data);
}

public interface IExpression<in TDataKind>
{
    void Express(TDataKind data);
}
And now you want to build a pipeline out of them that looks like this:
IExpression.Express(ITransformation.Transform(IAcquisition.Acquire()));
So let's start building a pipeline builder:
using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.ComponentModel.Composition.Primitives;
using System.Linq;
using System.Linq.Expressions;

// namespace ...

public static class PipelineBuilder
{
    private static readonly string AcquisitionIdentity =
        AttributedModelServices.GetTypeIdentity(typeof(IAcquisition<>));
    private static readonly string TransformationIdentity =
        AttributedModelServices.GetTypeIdentity(typeof(ITransformation<>));
    private static readonly string ExpressionIdentity =
        AttributedModelServices.GetTypeIdentity(typeof(IExpression<>));

    public static Action BuildPipeline(ComposablePartCatalog catalog,
                                       Func<IEnumerable<string>, int> acquisitionSelector,
                                       Func<IEnumerable<string>, int> transformationSelector,
                                       Func<IEnumerable<string>, int> expressionSelector)
    {
        var container = new CompositionContainer(catalog);
The class holds MEF type identities for your three contract interfaces. We'll need those later to identify the correct exports. Our BuildPipeline method returns an Action. That is going to be the pipeline, so we can just do pipeline(). It takes a ComposablePartCatalog and three Funcs (to select an export). That way, we can keep all the dirty work inside this class. Then we start by creating a CompositionContainer.
Now we have to build ImportDefinitions, first for the acquisition component:
var aImportDef = new ImportDefinition(
    def => (def.ContractName == AcquisitionIdentity),
    null, ImportCardinality.ZeroOrMore, true, false);
This ImportDefinition simply filters out all exports of the IAcquisition<> interface. Now we can give it to the container:
var aExports = container.GetExports(aImportDef).ToArray();
aExports now holds all IAcquisition<> exports in the catalog. So let's run the selector on this:
var selectedAExport = aExports[acquisitionSelector(aExports.Select(export => export.Metadata["Name"] as string))];
And there we have our acquisition component:
var acquisition = selectedAExport.Value;
var acquisitionDataKind = (Type)selectedAExport.Metadata["DataKind"];
Now we're going to do the same for the transformation and the expression components, but with one slight difference: The ImportDefinition is going to ensure that each component can handle the output of the previous component.
var tImportDef = new ImportDefinition(
    def => (def.ContractName == TransformationIdentity)
        && ((Type)def.Metadata["DataKind"]).IsAssignableFrom(acquisitionDataKind),
    null, ImportCardinality.ZeroOrMore, true, false);
var tExports = container.GetExports(tImportDef).ToArray();
var selectedTExport = tExports[transformationSelector(tExports.Select(export => export.Metadata["Name"] as string))];
var transformation = selectedTExport.Value;
var transformationDataKind = (Type)selectedTExport.Metadata["DataKind"];
var eImportDef = new ImportDefinition(
    def => (def.ContractName == ExpressionIdentity)
        && ((Type)def.Metadata["DataKind"]).IsAssignableFrom(transformationDataKind),
    null, ImportCardinality.ZeroOrMore, true, false);
var eExports = container.GetExports(eImportDef).ToArray();
var selectedEExport = eExports[expressionSelector(eExports.Select(export => export.Metadata["Name"] as string))];
var expression = selectedEExport.Value;
var expressionDataKind = (Type)selectedEExport.Metadata["DataKind"];
And now we can wire it all up in an expression tree:
var acquired = Expression.Call(
    Expression.Constant(acquisition),
    typeof(IAcquisition<>).MakeGenericType(acquisitionDataKind).GetMethod("Acquire"));
var transformed = Expression.Call(
    Expression.Constant(transformation),
    typeof(ITransformation<>).MakeGenericType(transformationDataKind).GetMethod("Transform"),
    acquired);
var expressed = Expression.Call(
    Expression.Constant(expression),
    typeof(IExpression<>).MakeGenericType(expressionDataKind).GetMethod("Express"),
    transformed);

return Expression.Lambda<Action>(expressed).Compile();
}
}
And that's it! A simple example application would look like this:
[Export(typeof(IAcquisition<>))]
[ExportMetadata("DataKind", typeof(BasicData))]
[ExportMetadata("Name", "Basic acquisition")]
public class Acquisition1 : IAcquisition<BasicData>
{
    public BasicData Acquire()
    {
        return new BasicData { Basic = "Acquisition1" };
    }
}

[Export(typeof(IAcquisition<>))]
[ExportMetadata("DataKind", typeof(AdvancedData))]
[ExportMetadata("Name", "Advanced acquisition")]
public class Acquisition2 : IAcquisition<AdvancedData>
{
    public AdvancedData Acquire()
    {
        return new AdvancedData { Advanced = "Acquisition2A", Basic = "Acquisition2B" };
    }
}

[Export(typeof(ITransformation<>))]
[ExportMetadata("DataKind", typeof(BasicData))]
[ExportMetadata("Name", "Basic transformation")]
public class Transformation1 : ITransformation<BasicData>
{
    public BasicData Transform(BasicData data)
    {
        data.Basic += " - Transformed1";
        return data;
    }
}

[Export(typeof(ITransformation<>))]
[ExportMetadata("DataKind", typeof(AdvancedData))]
[ExportMetadata("Name", "Advanced transformation")]
public class Transformation2 : ITransformation<AdvancedData>
{
    public AdvancedData Transform(AdvancedData data)
    {
        data.Basic += " - Transformed2";
        data.Advanced += " - Transformed2";
        return data;
    }
}

[Export(typeof(IExpression<>))]
[ExportMetadata("DataKind", typeof(BasicData))]
[ExportMetadata("Name", "Basic expression")]
public class Expression1 : IExpression<BasicData>
{
    public void Express(BasicData data)
    {
        Console.WriteLine("Expression1: {0}", data.Basic);
    }
}

[Export(typeof(IExpression<>))]
[ExportMetadata("DataKind", typeof(AdvancedData))]
[ExportMetadata("Name", "Advanced expression")]
public class Expression2 : IExpression<AdvancedData>
{
    public void Express(AdvancedData data)
    {
        Console.WriteLine("Expression2: ({0}) - ({1})", data.Basic, data.Advanced);
    }
}

class Program
{
    static void Main(string[] args)
    {
        var pipeline = PipelineBuilder.BuildPipeline(new AssemblyCatalog(typeof(Program).Assembly), StringSelector, StringSelector, StringSelector);
        pipeline();
    }

    static int StringSelector(IEnumerable<string> strings)
    {
        int i = 0;
        foreach (var item in strings)
            Console.WriteLine("[{0}] {1}", i++, item);
        return int.Parse(Console.ReadLine());
    }
}

Mono.CSharp: how do I inject a value/entity *into* a script?

Just came across the latest build of Mono.CSharp and love the promise it offers.
Was able to get the following all worked out:
namespace XAct.Spikes.Duo
{
    class Program
    {
        static void Main(string[] args)
        {
            CompilerSettings compilerSettings = new CompilerSettings();
            compilerSettings.LoadDefaultReferences = true;
            Report report = new Report(new Mono.CSharp.ConsoleReportPrinter());
            Mono.CSharp.Evaluator e;
            e = new Evaluator(compilerSettings, report);

            // IMPORTANT: This has to be put before you include references to any assemblies,
            // or you'll get a stream of errors:
            e.Run("using System;");
            // IMPORTANT: You have to reference the assemblies your code references...
            // ...including this one:
            e.Run("using XAct.Spikes.Duo;");

            // Go crazy -- although that takes time:
            //foreach (Assembly assembly in AppDomain.CurrentDomain.GetAssemblies())
            //{
            //    e.ReferenceAssembly(assembly);
            //}

            // More appropriate in most cases:
            e.ReferenceAssembly((typeof(A).Assembly));

            // Exception due to no semicolon:
            //e.Run("var a = 1+3");
            // Doesn't set anything:
            //e.Run("a = 1+3;");
            // Works:
            //e.ReferenceAssembly(typeof(A).Assembly);
            e.Run("var a = 1+3;");
            e.Run("A x = new A{Name=\"Joe\"};");
            var a = e.Evaluate("a;");
            var x = e.Evaluate("x;");

            // Not extremely useful:
            string check = e.GetVars();

            // Note that you have to cast it:
            Console.WriteLine(((A)x).Name);

            e = new Evaluator(compilerSettings, report);
            var b = e.Evaluate("a;");
        }
    }

    public class A
    {
        public string Name { get; set; }
    }
}
And that was fun... I can create a variable in the script's scope and export the value.
There's just one last thing to figure out: how can I get a value in (e.g., a domain entity that I want to apply a rule script to) without using a static (I'm thinking of using this in a web app)?
I've seen the use of compiled delegates -- but that was for a previous version of Mono.CSharp, and it doesn't seem to work any longer.
Does anybody have a suggestion on how to do this with the current version?
Thanks very much.
References:
* Injecting a variable into the Mono.CSharp.Evaluator (runtime compiling a LINQ query from string)
* http://naveensrinivasan.com/tag/mono/
I know it's almost 9 years later, but I think I found a viable solution to inject local variables. It is using a static variable but can still be used by multiple evaluators without collision.
You can use a static Dictionary<string, object> which holds the reference to be injected. Let's say we are doing all this from within our class CsharpConsole:
public class CsharpConsole
{
    public static Dictionary<string, object> InjectionRepository { get; set; } = new Dictionary<string, object>();
}
The idea is to temporarily place the value in there with a GUID as key, so there won't be any conflict between multiple evaluator instances. To inject, do this:
public void InjectLocal(string name, object value, string type = null)
{
    var id = Guid.NewGuid().ToString();
    InjectionRepository[id] = value;
    type = type ?? value.GetType().FullName;
    // Note: for generic or nested types, value.GetType().FullName won't return a
    // compilable type string, so you have to set the type parameter manually.
    var success = _evaluator.Run($"var {name} = ({type})MyNamespace.CsharpConsole.InjectionRepository[\"{id}\"];");
    // Clean it up to avoid a memory leak:
    InjectionRepository.Remove(id);
}
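A hypothetical round trip, assuming _evaluator has already run "using System;" and referenced the assembly containing the injected type (Customer is a made-up example class here), and that CsharpConsole exposes a Run method forwarding to _evaluator:

var console = new CsharpConsole();
console.InjectLocal("customer", new Customer { Name = "Joe" }, "MyApp.Customer");
console.Run("Console.WriteLine(customer.Name);"); // prints "Joe"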
Also, for accessing local variables there is a workaround using reflection, so you can have a nice [] accessor with get and set:
public object this[string variable]
{
    get
    {
        FieldInfo fieldInfo = typeof(Evaluator).GetField("fields", BindingFlags.NonPublic | BindingFlags.Instance);
        if (fieldInfo != null)
        {
            var fields = fieldInfo.GetValue(_evaluator) as Dictionary<string, Tuple<FieldSpec, FieldInfo>>;
            if (fields != null)
            {
                if (fields.TryGetValue(variable, out var tuple) && tuple != null)
                {
                    var value = tuple.Item2.GetValue(_evaluator);
                    return value;
                }
            }
        }
        return null;
    }
    set
    {
        InjectLocal(variable, value);
    }
}
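The indexer then gives symmetric get/set access (same hypothetical console as above):

console["customer"] = new Customer { Name = "Joe" }; // set: injects via InjectLocal
var back = (Customer)console["customer"];            // get: reads the script variable back via reflection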
Using this trick, you can even inject delegates and functions that your evaluated code can call from within the script. For instance, I inject a print function which my code can call to output something to the GUI console window:
public delegate void PrintFunc(params object[] o);

public void puts(params object[] o)
{
    // Call the OnPrint event to redirect the output to the GUI console:
    if (OnPrint != null)
        OnPrint(string.Join("", o.Select(x => (x ?? "null").ToString() + "\n").ToArray()));
}
This puts function can now be easily injected like this:
InjectLocal("puts", (PrintFunc)puts, "CsInterpreter2.PrintFunc");
And can then be called from within your scripts:
puts(new object[] { "hello", "world!" });
Note that there is also a native print function, but it writes directly to STDOUT, and redirecting individual output from multiple console windows is not possible with it.