In the interest of creating clean, decoupled code in C#, I was hoping to get some feedback on using a dynamic parameter to construct objects. Typically, I believe, you'd create an interface and use the interface as the contract, but then you have to create interfaces for all your classes, which I think is kind of icky...
So, my question is what are the pros and cons of doing something like this:
class Class1
{
    public string Description { get; set; }
    public string Name { get; set; }

    public Class1(dynamic obj)
    {
        Name = obj.Name;
        Description = obj.Description;
    }
}
vs
class Class1
{
    public string Description { get; set; }
    public string Name { get; set; }

    public Class1(IClass1 obj)
    {
        Name = obj.Name;
        Description = obj.Description;
    }
}
Pros of the interface:
The compiler will tell you if you're using the wrong kind of argument
The signature of the constructor tells you what's required from the parameter
Pros of dynamic:
You don't need to declare the interface or implement it
Existing classes with Name and Description properties can be used with no change
Anonymous types can be used within the same assembly if they have Name and Description properties
Personally I typically use C# as a statically typed language unless I'm interacting with something naturally dynamic (e.g. where I'd otherwise use reflection, or calling into COM or the DLR)... but I can see that in some cases this could be useful. Just don't over-do it :)
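For instance, the anonymous-type case from the list above might look like this (a minimal sketch using the dynamic version of Class1):

// Anonymous types are internal, so dynamic member access on them
// works within the same assembly.
var source = new { Name = "Widget", Description = "A sample widget" };
var instance = new Class1(source);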
In both scenarios, for the constructor to function as expected, the object being passed in must have the Name and Description properties.
My concern is that, to use dynamic this way as a best practice, you would need to provide additional documentation so that other programmers (or even you, six months from now) know the data contract the passed object must satisfy. Even then, you really should write error handling into your constructor to ensure it behaves sensibly when that contract is broken.
Do all these potential gotchas outweigh the hypothetical gain of not writing an interface, which in the example given would be literally only about five basic lines of code and would do everything you're forcing yourself to do manually?
Assuming you want to follow best practices that lead to well-documented, easy-to-read code, I would lean towards using an interface for this purpose.
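For illustration, a minimal sketch of the guard described above (catching RuntimeBinderException is one way to surface a broken contract; the exception message is just an example):

using System;
using Microsoft.CSharp.RuntimeBinder;

class Class1
{
    public string Description { get; set; }
    public string Name { get; set; }

    public Class1(dynamic obj)
    {
        try
        {
            // These member accesses bind at runtime and throw if absent.
            Name = obj.Name;
            Description = obj.Description;
        }
        catch (RuntimeBinderException ex)
        {
            throw new ArgumentException(
                "obj must expose Name and Description properties", ex);
        }
    }
}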
Related
This is quite a common problem I run into. Let's hear your solutions. I'm going to use an Employee-managing application as an example:-
We've got some entity classes, some of which implement a particular interface.
public interface IEmployee { ... }
public interface IRecievesBonus { int Amount { get; } }
public class Manager : IEmployee, IRecievesBonus { ... }
public class Grunt : IEmployee /* This company sucks! */ { ... }
We've got a collection of Employees that we can iterate over. We need to grab all the objects that implement IRecievesBonus and pay the bonus.
The naive implementation goes something along the lines of:-
foreach (Employee employee in employees)
{
    IRecievesBonus bonusReciever = employee as IRecievesBonus;
    if (bonusReciever != null)
    {
        PayBonus(bonusReciever);
    }
}
or alternatively, using LINQ:-
foreach (IRecievesBonus bonusReciever in employees.OfType<IRecievesBonus>())
{
    PayBonus(bonusReciever);
}
We cannot modify the IEmployee interface to include details of the child type as we don't want to pollute the super-type with details that only the sub-type cares about.
We do not have an existing collection of only the subtype.
We cannot use the Visitor pattern because the element types are not stable. Also, we might have a type which implements both IRecievesBonus and IDrinksTea. Its Accept method would contain an ambiguous call to visitor.Visit(this).
Often we're forced down this route because we can't modify the super-type, nor the collection e.g. in .NET we may need to find all the Buttons on this Form via the child Controls collection. We may need to do something to the child types that depends on some aspect of the child type (e.g. the bonus amount in the example above).
Strikes me as odd that there isn't an "accepted" way to do this, given how often it comes up.
1) Is the type conversion worth avoiding?
2) Are there any alternatives I haven't thought of?
EDIT
Péter Török suggests composing Employee and pushing the type conversion further down the object tree:-
public interface IEmployee
{
    IList<IEmployeeProperty> Properties { get; }
}

public interface IEmployeeProperty { ... }

public class DrinksTeaProperty : IEmployeeProperty
{
    public int Sugars { get; set; }
    public bool Milk { get; set; }
}
foreach (IEmployee employee in employees)
{
    foreach (IEmployeeProperty property in employee.Properties)
    {
        // Handle duplicate properties if you need to.
        // Since this is just an example, we'll just
        // let the greedy ones have two cups of tea.
        DrinksTeaProperty tea = property as DrinksTeaProperty;
        if (tea != null)
        {
            MakeTea(tea.Sugars, tea.Milk);
        }
    }
}
In this example it's definitely worth pushing these traits out of the Employee type - particularly because some managers might drink tea and some might not - but we still have the same underlying problem of the type conversion.
Is it the case that it's "ok" so long as we do it at the right level? Or are we just moving the problem around?
The holy grail would be a variant on the Visitor pattern where:-
You can add element members without modifying all the visitors
Visitors should only visit types they're interested in visiting
The visitor can visit the member based on an interface type
Elements might implement multiple interfaces which are visited by different visitors
Doesn't involve casting or reflection
but I appreciate that's probably unrealistic.
I would definitely try to resolve this with composition instead of inheritance, by associating the needed properties/traits to Employee, instead of subclassing it.
I can give an example partly in Java; I think it's close enough to your language (C#) to be useful.
public enum EmployeeProperty {
    RECEIVES_BONUS,
    DRINKS_TEA,
    ...
}

public class Employee {
    Set<EmployeeProperty> properties;

    // methods to add/remove/query properties
    ...
}
And the modified loop would look like this:
foreach (Employee employee in employees) {
    if (employee.getProperties().contains(EmployeeProperty.RECEIVES_BONUS)) {
        PayBonus(employee);
    }
}
This solution is much more flexible than subclassing:
it can trivially handle any combination of employee properties, while with subclassing you would experience a combinatorial explosion of subclasses as the number of properties grows,
it trivially allows you to change Employee properties at runtime, while with subclassing this would require changing the concrete class of your object!
In Java, enums can have properties or (even virtual) methods themselves - I don't know whether this is possible in C#, but in the worst case, if you need more complex properties, you can implement them with a class hierarchy. (Even in this case, you are not back to square one, since you have an extra level of indirection which gives you the flexibility described above.)
Update
You are right that in the most general case (discussed in the last sentence above) the type conversion problem is not resolved, just pushed one level down on the object graph.
In general, I don't know a really satisfying solution to this problem. The typical way to handle it is using polymorphism: pull up the common interface and manipulate the objects via that, thus eliminating the need for downcasts. However, what do we do in cases when the objects in question do not have a common interface? It may help to realize that in these cases the design does not reflect reality well: practically, we created a marker interface solely to enable us to put a bunch of distinct objects into a common collection, but there is no semantic relationship between the objects.
So I believe in these cases the awkwardness of downcasts is a signal that there may be a deeper problem with our design.
You could implement a custom iterator that only iterates over the IRecievesBonus types.
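A minimal sketch of such an iterator, written as an extension method (the method name is illustrative; it hand-rolls what OfType<IRecievesBonus>() already does):

using System.Collections.Generic;

static class EmployeeExtensions
{
    // Yields only the employees that implement IRecievesBonus.
    public static IEnumerable<IRecievesBonus> BonusReceivers(
        this IEnumerable<IEmployee> employees)
    {
        foreach (IEmployee employee in employees)
        {
            IRecievesBonus bonusReciever = employee as IRecievesBonus;
            if (bonusReciever != null)
            {
                yield return bonusReciever;
            }
        }
    }
}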
Recently I've been getting a bit confused with interfaces and abstract classes, and I feel I don't fully grasp them like I thought I did. I think I'm using them incorrectly. I'll describe what I'm doing at the moment and the problem I have faced, and then hopefully it will be clear what I'm doing wrong, if anything.
I wanted to write some classes that do some parsing of XML. I have different user types that have different parsing requirements.
My logic went as follows.
All parsers have a "parse" function in common and must have at least this function, so I made an interface named IParser with this function defined.
I start out with two user types, user type A and user type B. User types A and B share some basic functions, but user type B has slightly more functions than A, so I put the functions to parse what they share in an abstract class called ParseBase that both will extend.
So now I have
// Interface
public interface IParser
{
    function parse(xml:XML):void;
}

// Base class
public class ParseBase
{
    public function getbasicdata():void {}
    public function getmorebasicdata():void {}
}
// User type A
public class userTypeA extends ParseBase implements IParser
{
    public function parse(xml:XML):void
    {
        getbasicdata();
        getmorebasicdata();
    }
}

// User type B
public class userTypeB extends ParseBase implements IParser
{
    public function parse(xml:XML):void
    {
        getbasicdata();
        getmorebasicdata();
    }

    public function extraFunctionForB():void
    {
    }

    public function anotherExtraFunctionForB():void
    {
    }
}
The problem I have come up against now, which leads me to believe that I'm doing something wrong, is as follows.
Let's say I want to add another function to userTypeB. I go and write a new public function in that class. Then, in my implementation, I use a switch to check which user type to create.
var userParser:IParser;
if (a)
{
    userParser = new userTypeA();
}
else if (b)
{
    userParser = new userTypeB();
}
If I then try to access that new function, I can't see it in my code hinting. The only function names I see are the functions defined in the interface.
What am I doing wrong?
You declare the new function only in userTypeB, not in IParser. Thus it is not visible via IParser's interface. Since userParser is declared as an IParser, you can't directly access userTypeB's functions via it - you need to either downcast it to userTypeB, or add the new function to IParser to achieve that.
Of course, adding a function to IParser only makes sense if that function is meaningful for all parsers, not only for userTypeB. This is a design question, which IMO can't be reasonably answered without knowing a lot more about your app. One thing you can do, though, is to unite IParser and ParseBase - IMO you don't need both. You can simply define the public interface and some default implementation in a single abstract class.
Other than that, this has nothing to do with abstract classes - consider rephrasing the title. By the way, in the code you show, ParseBase does not seem to be abstract.
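In C#-style pseudocode, that unification might look like the sketch below (names are illustrative, and the question's code is ActionScript, so treat this as the shape of the idea rather than a drop-in replacement):

// One abstract class replaces the IParser/ParseBase pair.
public abstract class ParserBase
{
    // Shared default implementations, formerly in ParseBase.
    protected void GetBasicData() { /* ... */ }
    protected void GetMoreBasicData() { /* ... */ }

    // The contract formerly expressed by IParser.
    public abstract void Parse(string xml);
}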
In order to access functions for a specific sub-type (UserTypeB, for example) you need the variable to be of that type (requires explicit casting).
The use of interfaces and abstract classes is useful when you only require the methods defined in the interface. If you build the interface correctly, this should be most of the time.
As Peter Torok says (+1), IParser declares just one function, parse(xml). When you create a variable userParser of type IParser, you will be allowed to call only the parse() method. In order to call a function defined in the subtype, you will have to explicitly cast it into that subtype.
In that case, IMO, you should rethink the way you have designed your parsers. One example would be to put a declaration in your IParser (good if you make this abstract and keep common base functionality in here) that allows subtypes (parsers) to do some customization before and after parsing.
You can also have a separate BaseParser abstract type that implements the IParser interface.
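For completeness, a small sketch of the explicit cast described above (C# syntax; the ActionScript as operator works much the same way):

IParser userParser = new userTypeB();

// The extra function is invisible through IParser; downcast to reach it.
userTypeB typeB = userParser as userTypeB;
if (typeB != null)
{
    typeB.extraFunctionForB();
}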
I have a document scanning system where several types of documents are scanned. Initially, the document has no information when it's scanned; then it gets classified, and additional information is entered for it in a second step later. So, I have a base class called Document, and subclasses for each type with their respective metadata, like below. I have it set up as a table-per-subclass (joined subclass) mapping in NHibernate.
public class Document
{
    public int ID { get; set; }
    public string FilePath { get; set; }
}

public class Certificate : Document
{
    // certificate-specific fields
}

public class Correspondence : Document
{
    // correspondence-specific fields
}
What I need to be able to do is create a Document first and save it, then retrieve it in a second step later on, convert it to one of the subclass types, and fill in the rest of its information. What would be the best approach to do this, and is this even possible with NHibernate? If at all possible I would like to retain the original document record, but it's not a dealbreaker if I have to jettison it.
Unfortunately, NHibernate does not allow you to switch between subclasses after initial creation; to get this working the way you want, you have 3 options:
Use a native SQL call to change the discriminator (and possibly add or change any subclass-related fields).
Copy the contents of your object to a new object of the proper class and then delete the original.
Don't use subclasses; control the state of your object through an enumeration or some other mechanism that allows you to determine their type at run-time (see the sketch below).
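A rough sketch of that third option, with an enum standing in for the discriminator (all names here are hypothetical):

public enum DocumentKind
{
    Unclassified,
    Certificate,
    Correspondence
}

public class Document
{
    public int ID { get; set; }
    public string FilePath { get; set; }

    // The classification can change at runtime without recreating the row.
    public DocumentKind Kind { get; set; }
}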
This issue has already been discussed here. I would go with Terry Wilcox's tip to use a role for this. Composition over inheritance.
Is it a violation of persistence ignorance to inject a repository interface into an entity object like this? By not using an interface I clearly see a problem, but when using an interface is there really a problem? Is the code below a good or bad pattern, and why?
public class Contact
{
    private readonly IAddressRepository _addressRepository;

    public Contact(IAddressRepository addressRepository)
    {
        _addressRepository = addressRepository;
    }

    private IEnumerable<Address> _addressBook;

    public IEnumerable<Address> AddressBook
    {
        get
        {
            if (_addressBook == null)
            {
                _addressBook = _addressRepository.GetAddresses(this.Id);
            }
            return _addressBook;
        }
    }
}
It's not exactly a good idea, but it may be OK for some limited scenarios. I'm a little confused by your model, as I have a hard time believing that Address is your aggregate root, and therefore it wouldn't be ordinary to have a full-blown address repository. Based on your example, you are probably actually using a table data gateway or DAO rather than a repository.
I prefer to use a data mapper to solve this problem (an ORM or similar solution). Basically, I would take advantage of my ORM to treat address-book as a lazy loaded property of the aggregate root, "Contact". This has the advantage that your changes can be saved as long as the entity is bound to a session.
If I weren't using an ORM, I'd still prefer that the concrete Contact repository implementation set the property of the AddressBook backing store (list, or whatever). I might have the repository set that enumeration to a proxy object that does know about the other data store, and loads it on demand.
You can inject the load function from outside. The new Lazy<T> type in .NET 4.0 comes in handy for that:
public Contact(Lazy<IEnumerable<Address>> addressBook)
{
    _addressBook = addressBook;
}

private Lazy<IEnumerable<Address>> _addressBook;

public IEnumerable<Address> AddressBook
{
    get { return this._addressBook.Value; }
}
Also note that IEnumerable<T>s might be intrinsically lazy anyhow when you get them from a query provider. But for any other type you can use the Lazy<T>.
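Wiring it up from the outside might look something like this (addressRepository and contactId are assumed from the question's context):

var contact = new Contact(
    new Lazy<IEnumerable<Address>>(
        () => addressRepository.GetAddresses(contactId)));

// The repository is only hit on first access.
var addresses = contact.AddressBook;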
Normally when you follow DDD you always operate with the whole aggregate. The repository always returns you a fully loaded aggregate root.
It doesn't make much sense (in DDD at least) to write code as in your example. A Contact aggregate will always contain all the addresses (if it needs them for its behavior, which I doubt to be honest).
So typically ContactRepository is supposed to construct the whole Contact aggregate, where Address is an entity or, most likely, a value object inside this aggregate.
Because Address is an entity/value object that belongs to (and is therefore managed by) the Contact aggregate, it will not have its own repository, as you are not supposed to manage entities that belong to an aggregate outside that aggregate.
To summarize: always load the whole Contact and call its behavior methods to do something with its state.
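In that style, the aggregate might be shaped roughly like this (a sketch; the repository populates the collection when it loads the aggregate):

public class Contact
{
    public int Id { get; private set; }

    // Fully loaded by ContactRepository; no lazy repository call needed.
    private readonly List<Address> _addressBook = new List<Address>();

    public IEnumerable<Address> AddressBook
    {
        get { return _addressBook; }
    }
}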
Since it's been two years since I asked the question, and the question was somewhat misunderstood, I will try to answer it myself.
Rephrased question:
"Should Business entity classes be fully persistance ignorant?"
I think entity classes should be fully persistance ignorant, because you will instanciate them many places in your code base so it will quickly become messy to always have to inject the Repository class into the entity constructor, neither does it look very clean. This becomes even more evident if you are in need of injecting several repositories. Therefore I always use a separate handler/service class to do the persistance jobs for the entities. These classes are instanciated far less frequently and you usually have more control over where and when this happens. Entity classes are kept as lightweight as possible.
I now always have 1 Repository pr aggregate root and if I have need for some extra business logic when entities are fetched from repositories I usually create 1 ServiceClass for the aggregate root.
Taking a tweaked example of the code in the question (as it was a bad example), I would do it like this now:
Instead of:
public class Contact
{
    private readonly IContactRepository _contactRepository;

    public Contact(IContactRepository contactRepository)
    {
        _contactRepository = contactRepository;
    }

    public void Save()
    {
        _contactRepository.Save(this);
    }
}
I do it like this:
public class Contact
{
}

public class ContactService
{
    private readonly IContactRepository _contactRepository;

    public ContactService(IContactRepository contactRepository)
    {
        _contactRepository = contactRepository;
    }

    public void Save(Contact contact)
    {
        _contactRepository.Save(contact);
    }
}
Resolving a class that has multiple constructors with Ninject doesn't seem to work.
public class Class1 : IClass
{
    public Class1(int param) { ... }
    public Class1(int param2, string param3) { ... }
}
the following doesn’t seem to work:
IClass instance =
    IocContainer.Get<IClass>(With.Parameters.ConstructorArgument("param", 1));
The hook in the module is simple, and worked before I added the extra constructor:
Bind<IClass>().To<Class1>();
The reason that it doesn't work is that manually supplied .ctor arguments are not considered in the .ctor selection process. The .ctors are scored according to how many parameters they have of which there is a binding on the parameter type. During activation, the manually supplied .ctor arguments are applied. Since you don't have bindings on int or string, they are not scored. You can force a scoring by adding the [Inject] attribute to the .ctor you wish to use.
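For example (the [Inject] attribute is Ninject's; which constructor to mark is up to you):

public class Class1 : IClass
{
    [Inject] // Marks this constructor as the one Ninject should use.
    public Class1(int param) { ... }

    public Class1(int param2, string param3) { ... }
}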
The problem you're having is that Ninject selects .ctors based on the number of bound parameters available to it. That means that Ninject fundamentally doesn't understand overloading.
You can work around this problem by using the .ToConstructor() function in your bindings and combining it with the .Named() function. That lets you create multiple bindings for the same class to different constructors with different names. It's a little kludgy, but it works.
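A sketch of that workaround, assuming Ninject 3's fluent API (the binding names here are made up):

kernel.Bind<IClass>()
      .ToConstructor(c => new Class1(c.Inject<int>()))
      .Named("oneArg");

kernel.Bind<IClass>()
      .ToConstructor(c => new Class1(c.Inject<int>(), c.Inject<string>()))
      .Named("twoArgs");

// Resolve by name to pick the constructor you want.
IClass instance = kernel.Get<IClass>("oneArg");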
I maintain my own software development blog so this ended up being a post on it. If you want some example code and a little more explanation you should check it out.
http://www.nephandus.com/2013/05/10/overloading-ninject/