How to access properties dynamically / late-bound? - dynamic

I'd like to implement a method which allows me to access a property of an unknown/anonymous object (or object graph) in a late-bound / dynamic way (I'm not even sure what to call it correctly).
Here's an example of what I'd like to achieve:
// setup an anonymous object
var a = new { B = new { C = new { I = 33 } } };
// now get the value of a.B.C.I in a late-bound way
var i = Get(a, "B.C.I");
And here's a simple implementation using "classic" reflection:
public static object Get(object obj, string expression)
{
    foreach (var name in expression.Split('.'))
    {
        var property = obj.GetType().GetProperty(name);
        obj = property.GetValue(obj, null);
    }
    return obj;
}
What other options do I have with C# / .NET 4 to implement something similar to the above, but perhaps simpler or more performant?
Maybe there are ways to achieve the same thing that would allow me to specify the expression using a lambda expression instead of a string? Would expression trees be helpful in any way (e.g. as shown in this question)?
Update: the object and the expression are passed into my code via a web service call. That's why I used object and string in my Get() method.

Do you actually only have the expression as a string? Is it known at compile-time, just that the types themselves aren't known (or are hard to express)? If so:
dynamic d = a;
int i = d.B.C.I;
If you really only have it as a string (e.g. as user-entered data) that makes life a lot harder, and none of the C# 4 features really help you. You could potentially evaluate it as an IronPython script or something like that...
EDIT: Okay, after the update it sounds like you're in the latter situation - in which case, I don't know of a nicer way than using reflection. Some of the built-in property binding built for UIs may help, but I don't know of a clean way of accessing it.
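For what it's worth, one piece of that UI-oriented binding machinery that does exist is System.Web.UI.DataBinder.Eval, which already evaluates dotted property paths via reflection. A minimal sketch (note it lives in the System.Web assembly, which may be unwelcome outside a web app):
using System.Web.UI; // DataBinder lives in the System.Web assembly

var a = new { B = new { C = new { I = 33 } } };

// DataBinder.Eval walks the dotted path with reflection, much like Get() above.
object i = DataBinder.Eval(a, "B.C.I"); // 33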

If you want to use C# style, you could use the Mono compiler as a service from your application. I describe how to do this here:
Mono Compiler as a Service (MCS)
As an alternative approach, you could use reflection to put all of your object's properties into an ExpandoObject then use it like a dictionary (because ExpandoObject implements IDictionary). Alternatively, you could use JSON.NET and call JObject.FromObject, which will turn a regular object into a JObject which is accessible in a dictionary-like style (and as an added benefit has recursive graph support). Lastly, you could use the same approach to dump your object into a dictionary of dictionaries.
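To illustrate the JSON.NET route, a minimal sketch (assuming the Json.NET package is referenced; JObject.SelectToken even accepts the same dotted-path strings as the question's Get() method):
using Newtonsoft.Json.Linq;

var a = new { B = new { C = new { I = 33 } } };

// FromObject converts the whole object graph recursively.
JObject jo = JObject.FromObject(a);

// SelectToken understands dotted paths, so this mirrors Get(a, "B.C.I").
int i = (int)jo.SelectToken("B.C.I"); // 33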

Related

Various params method signature versus one object param method signature

I need to construct objects with many properties.
I could create one constructor with one param for each property:
class Friend {
  constructor(name, birthday, phone, address, job, favouriteGame, favouriteBand) {
    this.name = name;
    this.birthday = birthday;
    this.phone = phone;
    this.address = address;
    this.job = job;
    this.favouriteGame = favouriteGame;
    this.favouriteBand = favouriteBand;
  }
}
or I could receive one param with an object literal or an array containing all the values:
class Friend {
  constructor(descriptor) {
    this.name = descriptor.name;
    this.birthday = descriptor.birthday;
    this.phone = descriptor.phone;
    this.address = descriptor.address;
    this.job = descriptor.job;
    this.favouriteGame = descriptor.favouriteGame;
    this.favouriteBand = descriptor.favouriteBand;
  }
}
In which case should I use each one?
Is there a design-pattern about this subject?
I'm interested in OOP. I'm using JavaScript, but it could be written in any other language supporting OOP (PHP, Java, C++, Python, or any other).
The first one seems more explicit for clients, as each parameter is defined in the constructor declaration.
But it is also error prone: there are many parameters, and some of them share the same type, so the client can easily mix them up when passing them in.
So in this specific case, using a literal such as constructor(descriptor){...} seems clearer.
I am not sure that this type of constructor is a design pattern; it is rather a constructor flavor that depends on the requirements and on the language used.
In JavaScript it is common enough, as it is more direct and avoids writing boilerplate code for setter methods or a builder.
But in other languages such as Java, defining a constructor with so many arguments is also a bad smell, while using a literal is not a possible alternative.
So using setters or a builder is generally advised.
If you think about it, that many fields in a single object is a slightly annoying pattern. It's not a terribly bad smell; let's call it slightly odorous.
The one case I've seen for having many fields like this is JavaBeans or POJOs. These tend to be classes with a lot of fields carrying annotations that tell various services how the fields should be used. They don't really need complex constructors because they are usually created using the annotations for guidance.
Other classes, the ones with logic in them, don't usually need this many initialized fields.
When they do need this, I'd lean heavily towards the factory pattern and immutability.
IntelliJ has a builder-pattern refactoring that would be good for this; there is probably one for Eclipse/NetBeans as a plugin, but I haven't looked too hard for it.
Wow, the 3rd edition just came out, but the correct answer for this is that you should use Bloch's static builder pattern, which solves both the problem you note here and the related (maybe more important) one of how to make many of the fields immutable.
Read about it here.
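For readers who want to see the shape of it, here is a minimal sketch of a Bloch-style builder, written in C# since the question allows any OOP language (all names are illustrative):
public class Friend
{
    public string Name { get; private set; }
    public string Phone { get; private set; }
    public string Job { get; private set; }
    // ...remaining properties omitted for brevity

    // Private constructor: instances are only created through the builder.
    private Friend(Builder b)
    {
        Name = b.Name;
        Phone = b.Phone;
        Job = b.Job;
    }

    public class Builder
    {
        internal string Name { get; private set; }
        internal string Phone { get; private set; }
        internal string Job { get; private set; }

        public Builder WithName(string value) { Name = value; return this; }
        public Builder WithPhone(string value) { Phone = value; return this; }
        public Builder WithJob(string value) { Job = value; return this; }

        public Friend Build() { return new Friend(this); }
    }
}

// Each field is named at the call site, and Friend stays immutable:
// var f = new Friend.Builder().WithName("Ann").WithPhone("555-1234").Build();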

Possibility of having "dynamically-binded" and "implicit" interface?

Is there any construct that allows all classes which implement a set of functions to be treated as a certain interface, even when the classes themselves do not explicitly implement the interface?
To make the question clearer, I'll give an example. Suppose we want to implement LinearSearch, which looks through the whole array searching for a certain key and returns the index of the key upon discovery. Essentially, the pseudocode might look something like this:
LinearSearch(A, key)
    for (k = 0; k < A.length(); k++)
        if (A.get(k) == key)
            return k
    return NULL
In that case, any class which implements length and get will be able to be searched through. We could implement this on DynamicArray, which acts the same as ArrayList in Java. We could implement it on a LinkedList, ignoring the fact that get takes linear time per query. Similarly for other structures that implement these 2 functions. However, such classes might not have explicitly implemented a common interface, even though it would be favorable to have them share one.
While writing this question, I feel a nagging sense of insecurity about such a construct, but I cannot put it into words. So, is there any reason you think that this might not be a good construct in actual languages?
It's called "duck typing". Message-based object models like Smalltalk allow sending any message to an object as long as its name and parameters match.
In languages like C++, you can emulate this using "signals" and "slots", which, at their most primitive, can be implemented by writing a little template adapter class like this (note the base methods must be declared virtual for the pure-virtual declarations to be valid):
class CallGetLengthAdapterBase
{
public:
    virtual ~CallGetLengthAdapterBase() {}
    virtual int length() = 0;
    virtual key_type key() = 0;
};

template<class N>
class CallGetLengthAdapter : public CallGetLengthAdapterBase
{
public:
    CallGetLengthAdapter( N* obj ) { mObject = obj; }

    int length() override { return mObject->length(); }
    key_type key() override { return mObject->key(); }

protected:
    N* mObject;
};
So the LinearSearch would just know about CallGetLengthAdapterBase, and would take a pointer to an object of this type. Whoever owns and connects both of these objects would call them like:
CallGetLengthAdapter<A_type> adapter( &A );
LinearSearch( &adapter, key );
That's all.
From Wikipedia:
Go has "interface" types that are compatible with any type that supports a given set of methods (the type does not need to explicitly implement the interface). The empty interface, interface{}, is compatible with all types.
It sounds like this is what you mean, so it is a different sense of "interface" from the one we may be used to in Java and the like. This is a structural-typing kind of interface, where the structure of the methods involved is the important part, not a name given to the interface.
More formally, it seems that this is called a type class.
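As an aside, C# has a runtime-only flavor of this in the dynamic keyword: member accesses are bound by name at runtime, so any object with matching members works, though unlike Go's interfaces a mismatch only surfaces as a runtime error. A minimal sketch mirroring the question's pseudocode:
using System;

static class DuckSearch
{
    // Anything exposing length() and get(int) by name will bind here;
    // no common interface is required, but failures appear only at runtime.
    public static object LinearSearch(dynamic a, object key)
    {
        for (int k = 0; k < a.length(); k++)
            if (object.Equals((object)a.get(k), key))
                return k;
        return null;
    }
}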

C# 4.0 'dynamic' and foreach statement

Not long ago I discovered that the new dynamic keyword doesn't work well with C#'s foreach statement:
using System;

sealed class Foo {
    public struct FooEnumerator {
        int value;
        public bool MoveNext() { return true; }
        public int Current { get { return value++; } }
    }

    public FooEnumerator GetEnumerator() {
        return new FooEnumerator();
    }

    static void Main() {
        foreach (int x in new Foo()) {
            Console.WriteLine(x);
            if (x >= 100) break;
        }

        foreach (int x in (dynamic)new Foo()) { // :)
            Console.WriteLine(x);
            if (x >= 100) break;
        }
    }
}
I expected that iterating over a dynamic variable would work exactly as if the type of the collection variable were known at compile time. Instead, I discovered that the second loop compiles as if it had been written like this:
foreach (object x in (IEnumerable) /* dynamic cast */ (object) new Foo()) {
...
}
Every access to the x variable results in a dynamic lookup/cast, so C# ignores that I specified the correct type for x in the foreach statement - that was a bit surprising to me... Also, the C# compiler completely ignores that the collection in the dynamically typed variable may implement the IEnumerable<T> interface!
The full foreach statement behavior is described in the C# 4.0 specification, section 8.8.4, "The foreach statement".
But... it's perfectly possible to implement the same behavior at runtime! It would be possible to add an extra CSharpBinderFlags.ForEachCast flag and correct the emitted code to look like:
foreach (int x in (IEnumerable<int>) /* dynamic cast with the CSharpBinderFlags.ForEachCast flag */ (object) new Foo()) {
...
}
And add some extra logic to CSharpConvertBinder:
Wrap IEnumerable collections and IEnumerators into IEnumerable<T>/IEnumerator<T>.
Wrap collections that don't implement IEnumerable<T>/IEnumerator<T> so that they do.
So today the foreach statement iterates over a dynamic collection completely differently from how it iterates over a statically known collection variable, and it completely ignores the type information specified by the user. That results in different iteration behavior (IEnumerable<T>-implementing collections are iterated as if they implemented only IEnumerable) and a more than 150x slowdown when iterating over dynamic. A simple fix yields much better performance:
foreach (int x in (IEnumerable<int>) dynamicVariable) {
But why should I have to write code like this?
It's very nice to see that sometimes C# 4.0 dynamic works exactly the same as if the type were known at compile time, but it's very sad to see that dynamic works completely differently where it COULD work the same as statically typed code.
So my question is: why does foreach over dynamic work differently from foreach over anything else?
First off, to explain some background to readers who are confused by the question: the C# language actually does not require that the collection of a "foreach" implement IEnumerable. Rather, it requires either that it implement IEnumerable, or that it implement IEnumerable<T>, or simply that it have a GetEnumerator method (and that the GetEnumerator method returns something with a Current and MoveNext that matches the pattern expected, and so on.)
That might seem like an odd feature for a statically typed language like C# to have. Why should we "match the pattern"? Why not require that collections implement IEnumerable?
Think about the world before generics. If you wanted to make a collection of ints, you'd have to use IEnumerable. And therefore, every call to Current would box an int, and then of course the caller would immediately unbox it back to int. Which is slow and creates pressure on the GC. By going with a pattern-based approach you can make strongly typed collections in C# 1.0!
Nowadays of course no one implements that pattern; if you want a strongly typed collection, you implement IEnumerable<T> and you're done. Had a generic type system been available to C# 1.0, it is unlikely that the "match the pattern" feature would have been implemented in the first place.
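To make the boxing cost concrete, a minimal sketch of the pre-generics situation: the non-generic IEnumerator.Current is typed object, so iterating a value-type collection through it pays a box/unbox round trip per element.
using System;
using System.Collections;

class BoxingDemo
{
    static void Main()
    {
        // Pre-generics style: Current returns object, so each int is boxed
        // by the enumerator and unboxed again in the loop.
        ArrayList list = new ArrayList { 1, 2, 3 };
        foreach (int x in list)
            Console.WriteLine(x); // box/unbox on every iteration

        // A pattern-based collection like Foo in the question, whose Current
        // is typed int, avoids the boxing entirely - that is what the
        // "match the pattern" feature bought you in C# 1.0.
    }
}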
As you've noted, instead of looking for the pattern, the code generated for a dynamic collection in a foreach looks for a dynamic conversion to IEnumerable (and then does a conversion from the object returned by Current to the type of the loop variable of course.) So your question basically is "why does the code generated by use of the dynamic type as a collection type of foreach fail to look for the pattern at runtime?"
Because it isn't 1999 anymore, and even when it was back in the C# 1.0 days, collections that used the pattern also almost always implemented IEnumerable too. The probability that a real user is going to be writing production-quality C# 4.0 code which does a foreach over a collection that implements the pattern but not IEnumerable is extremely low. Now, if you're in that situation, well, that's unexpected, and I'm sorry that our design failed to anticipate your needs. If you feel that your scenario is in fact common, and that we've misjudged how rare it is, please post more details about your scenario and we'll consider changing this for hypothetical future versions.
Note that the conversion we generate to IEnumerable is a dynamic conversion, not simply a type test. That way, the dynamic object may participate; if it does not implement IEnumerable but wishes to proffer up a proxy object which does, it is free to do so.
In short, the design of "dynamic foreach" is "dynamically ask the object for an IEnumerable sequence", rather than "dynamically do every type-testing operation we would have done at compile time". This does in theory subtly violate the design principle that dynamic analysis gives the same result as static analysis would have, but in practice it's how we expect the vast majority of dynamically accessed collections to work.
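To illustrate that last point, here is a minimal sketch of a dynamic object that implements no collection interface itself but proffers a proxy sequence when the dynamically generated conversion to IEnumerable is requested:
using System;
using System.Collections;
using System.Collections.Generic;
using System.Dynamic;

class ProxyCollection : DynamicObject
{
    // foreach over dynamic requests a conversion to IEnumerable;
    // TryConvert lets this object satisfy it with a proxy sequence.
    public override bool TryConvert(ConvertBinder binder, out object result)
    {
        if (binder.Type == typeof(IEnumerable))
        {
            result = Numbers();
            return true;
        }
        return base.TryConvert(binder, out result);
    }

    static IEnumerable<int> Numbers()
    {
        yield return 1;
        yield return 2;
        yield return 3;
    }
}

// Usage: foreach (int x in (dynamic)new ProxyCollection()) { ... } prints 1, 2, 3.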
But why should I have to write code like this?
Indeed. And why would the compiler write code like that? You've removed any chance it might have had to guess that the loop could be optimized. By the way, you seem to interpret the IL incorrectly: it is rebinding to obtain IEnumerator.Current; the MoveNext() call is direct and GetEnumerator() is called only once. Which I think is appropriate - the next element might or might not cast to an int without problems. It could be a collection of various types, each with their own binder.

God object - decrease coupling to a 'master' object

I have an object called Parameters that gets tossed from method to method down and up the call tree, across package boundaries. It has about fifty state variables. Each method might use one or two variables to control its output.
I think this is a bad idea, because I can't easily see what a method needs in order to function, or what might happen with a certain combination of parameters for module Y that is totally unrelated to my current module.
What are some good techniques for decreasing coupling to this god object, or ideally eliminating it ?
public void ExporterExcelParFonds(ParametresExecution parametres)
{
    ApplicationExcel appExcel = null;
    LogTool.Instance.ExceptionSoulevee = false;

    bool inclureReferences = parametres.inclureReferences;
    bool inclureBornes = parametres.inclureBornes;
    DateTime dateDebut = parametres.date;
    DateTime dateFin = parametres.dateFin;

    try
    {
        LogTool.Instance.AfficherMessage(Variables.msg_GenerationRapportPortefeuilleReference);

        bool fichiersPreparesAvecSucces = PreparerFichiers(parametres, Sections.exportExcelParFonds);
        if (!fichiersPreparesAvecSucces)
        {
            parametres.afficherRapportApresGeneration = false;
            LogTool.Instance.ExceptionSoulevee = true;
        }
        else
        {
            // ...
The caller would do:
PortefeuillesReference pr = new PortefeuillesReference();
pr.ExporterExcelParFonds(parametres);
First, at the risk of stating the obvious: pass the parameters which are used by the methods, rather than the god object.
This, however, might lead to some methods needing huge amounts of parameters because they call other methods, which call other methods in turn, etcetera. That was probably the inspiration for putting everything in a god object. I'll give a simplified example of such a method with too many parameters; you'll have to imagine that "too many" == 3 here :-)
public void PrintFilteredReport(
    Data data, FilterCriteria criteria, ReportFormat format)
{
    var filteredData = Filter(data, criteria);
    PrintReport(filteredData, format);
}
So the question is, how can we reduce the amount of parameters without resorting to a god object? The answer is to get rid of procedural programming and make good use of object oriented design. Objects can use each other without needing to know the parameters that were used to initialize their collaborators:
// dataFilter service object only needs to know the criteria
var dataFilter = new DataFilter(criteria);
// report printer service object only needs to know the format
var reportPrinter = new ReportPrinter(format);
// filteredReportPrinter service object is initialized with a
// dataFilter and a reportPrinter service, but it doesn't need
// to know which parameters those are using to do their job
var filteredReportPrinter = new FilteredReportPrinter(dataFilter, reportPrinter);
Now the FilteredReportPrinter.Print method can be implemented with only one parameter:
public void Print(Data data)
{
    var filteredData = this.dataFilter.Filter(data);
    this.reportPrinter.Print(filteredData);
}
Incidentally, this sort of separation of concerns and dependency injection is good for more than just eliminating parameters. If you access collaborator objects through interfaces, then that makes your class
very flexible: you can set up FilteredReportPrinter with any filter/printer implementation you can imagine
very testable: you can pass in mock collaborators with canned responses and verify that they were used correctly in a unit test
If all your methods are using the same Parameters class, then maybe it should be a member variable of a class containing the relevant methods: pass Parameters into the constructor of this class, assign it to a member variable, and all your methods can use it without having to pass it as a parameter.
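A minimal sketch of that refactoring, reusing the names from the question:
public class PortefeuillesReference
{
    private readonly ParametresExecution parametres;

    // The parameter object is supplied once, at construction time...
    public PortefeuillesReference(ParametresExecution parametres)
    {
        this.parametres = parametres;
    }

    // ...so the methods no longer need it in their signatures.
    public void ExporterExcelParFonds()
    {
        bool inclureReferences = this.parametres.inclureReferences;
        // ...
    }
}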
A good way to start refactoring this god class is by splitting it up into smaller pieces. Find groups of properties that are related and break them out into their own class.
You can then revisit the methods that depend on Parameters and see if you can replace it with one of the smaller classes you created.
Hard to give a good solution without some code samples and real world situations.
It sounds like you are not applying object-oriented (OO) principles in your design. Since you mention the word "object" I presume you are working within some sort of OO paradigm. I recommend you convert your "call tree" into objects that model the problem you are solving. A "god object" is definitely something to avoid. I think you may be missing something fundamental... If you post some code examples I may be able to answer in more detail.
Query each client for their required parameters and inject them?
Example: each "object" that requires "parameters" is a "Client". Each "Client" exposes an interface through which a "Configuration Agent" queries the Client for its required parameters. The Configuration Agent then "injects" the parameters (and only those required by a Client).
For the parameters that dictate behavior, one can instantiate an object that exhibits the configured behavior. Client classes then simply use the instantiated object - neither the client nor the service has to know what the value of the parameter is. For instance, for a parameter that tells where to read data from, have FlatFileReader, XMLFileReader and DatabaseReader all inherit the same base class (or implement the same interface). Instantiate one of them based on the value of the parameter; then clients of the reader class just ask the instantiated reader object for data, without knowing whether the data comes from a file or from the DB.
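A minimal sketch of that idea (the reader names come from the answer above; the interface shape is an assumption):
public interface IDataReader
{
    string ReadData();
}

public class FlatFileReader : IDataReader { public string ReadData() { /* read flat file */ return "..."; } }
public class XmlFileReader  : IDataReader { public string ReadData() { /* read XML file  */ return "..."; } }
public class DatabaseReader : IDataReader { public string ReadData() { /* query the DB   */ return "..."; } }

public static class DataReaderFactory
{
    // The parameter value is consulted exactly once, right here;
    // clients only ever see the IDataReader interface.
    public static IDataReader Create(string source)
    {
        switch (source)
        {
            case "flatfile": return new FlatFileReader();
            case "xml":      return new XmlFileReader();
            default:         return new DatabaseReader();
        }
    }
}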
To start, you can break your big ParametresExecution class into several classes, one per package, each holding only the parameters for that package.
Another direction could be to pass the ParametresExecution object at construction time. You won't have to pass it around at every function call.
(I am assuming this is within a Java or .NET environment.) Convert the class into a singleton. Add a static method called getInstance() or something similar to get the name-value bundle (and stop "tramping" it around - see Ch. 10 of the book Code Complete).
Now the hard part. Presumably, this is within a web app or some other non batch/single-thread environment. So, to get access to the right instance when the object is not really a true singleton, you have to hide selection logic inside of the static accessor.
In Java, you can set up a "thread local" reference and initialize it when each request or sub-task starts, then code the accessor in terms of that thread-local. I don't know if something analogous exists in .NET, but you can always fake it with a Dictionary (Hash, Map) that uses the current thread instance as the key.
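As it happens, .NET 4 does ship a direct analogue: System.Threading.ThreadLocal<T>. A minimal sketch of the accessor described above:
using System.Threading;

public static class ParametresContext
{
    private static readonly ThreadLocal<ParametresExecution> current =
        new ThreadLocal<ParametresExecution>();

    // Initialize this when each request or sub-task starts;
    // the rest of the code reads ParametresContext.Current.
    public static ParametresExecution Current
    {
        get { return current.Value; }
        set { current.Value = value; }
    }
}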
It's a start... (there's always decomposition of the blob itself, but I built a framework that has a very similar semi-global value-store within it)

Serialize Entity Framework objects into JSON

It seems that serializing Entity Framework objects into JSON is not possible using either WCF's native DataContractJsonSerializer or ASP.NET's native JavaScript serializer. The circular references in the object graph are the problem: both serializers reject them. I have also tried Json.NET, which also fails, specifically on a circular reference issue.
Edit: Json.NET can now serialize and deserialize Entity Framework entities.
My objects are Entity Framework objects, which are overloaded to perform additional business functionality (eg. authentication, etc.) and I do not want to decorate these classes with platform-specific attributes, etc. as I want to present a platform-agnostic API.
I've actually blogged about the individual steps I went through at https://blog.programx.co.uk/2009/03/18/wcf-json-serialization-woes-and-a-solution/
Have I missed something obvious?
The way I do this is by projecting the data I want to serialize into an anonymous type and serializing that. This ensures that only the information I actually want in the JSON is serialized, and I don't inadvertently serialize something further down the object graph. It looks like this:
var records = from entity in context.Entities
              select new
              {
                  Prop1 = entity.Prop1,
                  Prop2 = entity.Prop2,
                  ChildProp = entity.Child.Prop
              };
return Json(records);
I find anonymous types just about ideal for this. The JSON, obviously, doesn't care what type was used to produce it. And anonymous types give you complete flexibility as to what properties and structure you put into the JSON.
Microsoft made an error in the way they made EF objects into data contracts. They included the base classes, and the back links.
Your best bet will be to create equivalent Data Transfer Object classes for each of the entities you want to return. These would include only the data, not the behavior, and not the EF-specific parts of an entity. You would also create methods to translate to and from your DTO classes.
Your services would then return the Data Transfer Objects.
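A minimal sketch of such a DTO pair (the Employee property names are illustrative, echoing the answer below):
// Plain data carrier: no EF base class, no back-links to parents.
public class EmployeeDto
{
    public int EmployeeID { get; set; }
    public string Name { get; set; }
}

public static class EmployeeTranslator
{
    public static EmployeeDto ToDto(Employee entity)
    {
        return new EmployeeDto { EmployeeID = entity.EmployeeID, Name = entity.Name };
    }

    public static Employee FromDto(EmployeeDto dto)
    {
        return new Employee { EmployeeID = dto.EmployeeID, Name = dto.Name };
    }
}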
Based on Craig Stuntz's answer, and similar to a DTO, for my solution I created a partial class of the model (in a separate file) with a ToObject method that returns an object containing only the properties I need.
namespace TestApplication.Models
{
    public partial class Employee
    {
        public object ToObject()
        {
            return new
            {
                EmployeeID = EmployeeID,
                Name = Name,
                Username = Username,
                Office = Office,
                PhoneNumber = PhoneNumber,
                EmailAddress = EmailAddress,
                Title = Title,
                Department = Department,
                Manager = Manager
            };
        }
    }
}
And then I call it simply in my return:
var employee = dbCtx.Employees.Where(x => x.Name == usersName).Single();
return employee.ToObject();
I think the accepted answer is quicker and easier; I just use my method to keep all of my returns consistent and DRY.
My solution was to simply remove the parent reference on my child entities.
So in my model, I selected the relationship and changed the Parent reference to be Internal rather than Public.
May not be an ideal solution for all, but worked for me.
One more solution, if you want better code consistency, is to use a JavaScriptConverter, which will handle circular references and will not serialize such references.
I've blogged about here:
http://hellowebapps.com/2010-09-26/producing-json-from-entity-framework-4-0-generated-classes/
FYI, I found an alternative solution.
You can set the parent relationship as private; then the properties are not exposed during the translation, removing the infinite property loop.
I battled with this problem for days.
Solution: inside your edmx window:
- right-click and add a code generation item
- select the Code tab
- select the EF 4.x POCO Entity Generator
If you don't see it, you will have to install it with NuGet (search for EF).
The entity generator will generate all your complex types and entity objects as simple classes, which serialize into JSON cleanly.
I solved it by getting only object types from the System namespace, converting them to a Dictionary, and then adding them to a list. Works well for me :)
It looks complicated, but this was the only generic solution that worked for me...
I'm using this logic for a helper I'm making, so it's for a special use case where I need to be able to intercept every object type in an entity object; maybe someone could adapt it to their own use.
List<Dictionary<string, string>> outputData = new List<Dictionary<string, string>>();
List<string> properties = new List<string>(); // names of the properties worth copying

// convert all items to plain objects
var data = Data.ToArray().Cast<object>().ToArray();

// inspect the first object and keep only the properties we need;
// this removes circular references and other stuff we don't need
PropertyInfo[] objInfos = data[0].GetType().GetProperties();
foreach (PropertyInfo info in objInfos)
{
    switch (info.PropertyType.Namespace)
    {
        // all types that are in the "System" namespace should be OK
        case "System":
            properties.Add(info.Name);
            break;
    }
}

Dictionary<string, string> rowsData = null;
foreach (object obj in data)
{
    rowsData = new Dictionary<string, string>();
    Type objType = obj.GetType();
    foreach (string propertyName in properties)
    {
        // if you don't need to intercept every object type, you could just call .ToString() and drop the switch
        PropertyInfo info = objType.GetProperty(propertyName);
        switch (info.PropertyType.FullName)
        {
            case "System.String":
                var colData = info.GetValue(obj, null);
                rowsData.Add(propertyName, colData != null ? colData.ToString() : String.Empty);
                break;
            // here you can add more types if you need to (like int and so on...)
        }
    }
    outputData.Add(rowsData); // add a new row
}
"outputData " is safe for JSON encode...
Hope someone will find this solution helpful. It was fun writing it :)