I have been searching around for a solution to my problem, but all I find are the reasons this happens rather than ways of preventing it from happening.
I have a class, WorkflowActivityInstance, which has a collection of WorkflowActivityInstanceTransitions representing the transitioning of the workflow's state. The transitions are mapped to a Transitions property just fine.
Therefore: WorkflowActivityInstance <-- WorkflowActivityInstanceTransition
I would like a view on the object that exposes the WorkflowActivityInstance's current state, which is simply the latest WorkflowActivityInstanceTransition, without requiring the consuming coder to perform their own sorting and selection on the Transitions property.
Originally, I had:
public virtual IWorkflowActivityInstanceTransition CurrentState
{
get { return Transitions.OrderBy(q => q.TransitionTimeStamp).LastOrDefault(); }
}
But I just get:
NHibernate.InvalidProxyTypeException: The following types may not be used as proxies:
FB.SimpleWorkflow.NHibernate.Model.WorkflowActivityInstance: method CurrentState should be 'public/protected virtual' or 'protected internal virtual'.
I tried to be cheeky and convert this to a method:
public IWorkflowActivityInstanceTransition GetCurrentState()
{
return Transitions.OrderBy(q => q.TransitionTimeStamp).LastOrDefault();
}
But I get a very similar error:
NHibernate.InvalidProxyTypeException: The following types may not be used as proxies:
FB.SimpleWorkflow.NHibernate.Model.WorkflowActivityInstance: method GetCurrentState should be 'public/protected virtual' or 'protected internal virtual'.
I would like to keep the very simple behaviour of CurrentState in my model class and prevent NHibernate from over-reaching and trying to map/proxy this property. It feels like this should just be an attribute on the property I don't want mapped ...
How can I achieve this?
NHibernate needs to be able to override all public, protected and internal methods, otherwise proxies can't work (it would be possible for your code to access a not-yet-initialized proxy).
I can't see a reason why your property wouldn't work, but the error is very clear for your method: you are missing the virtual keyword.
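In other words, marking the member virtual (as the property declaration above already is) is enough to satisfy the proxy validator. A minimal sketch of the fixed method, with everything else unchanged:

public virtual IWorkflowActivityInstanceTransition GetCurrentState()
{
    // Not mapped anywhere; NHibernate only insists that it be virtual
    // so the runtime proxy can override it.
    return Transitions.OrderBy(q => q.TransitionTimeStamp).LastOrDefault();
}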
You must use the virtual keyword; this is how NHibernate works. This page will also help you:
Github nhibernate/nhibernate-core
I am designing a process that should be run daily at work, and I've written a class to do the work. It looks something like this:
class LeadReport {
public $posts = array();
public $fields = array();
protected function _getPosts() {
// get posts of a certain type
// set them to the property $this->posts
}
protected function _getFields() {
// use $this->posts to get fields
// set $this->fields
}
protected function _writeCsv() {
// use the properties to write a csv
}
protected function _sendMail() {
// attach a csv and send it
}
public function main() {
$this->out('Lead Report');
$this->out("Getting Yesterday's Posts...");
$this->_getPosts();
$this->out("Done.");
$this->out("Next, Parsing the Fields...");
$this->_getFields();
$this->out("Done.");
$this->out("Next, Writing the CSVs...");
$this->_writeCsv();
$this->out("Done.");
$this->out("Finally, Sending Mail");
$this->_sendMail();
$this->out('Bye!');
}
}
After showing this code to one of my colleagues, he commented that the _get() methods should have return values, and that the _write() and _sendMail() methods should use those values as parameters.
So, two questions:
1) Which is "correct" in this case (properties or return values)?
2) Is there a general rule or principle that governs when to use properties over when to use return values in object oriented programming?
I think maybe the source of your question is that you are not entirely convinced that using properties is better than having public fields. For example here, common practice says you should not expose posts and fields as public. You should use a getter method or a protected property to regulate access to them. Having a protected _getFields() alongside a public $fields doesn't really make sense.
In this case your colleague may be pointing at two things: first, that you should use properties (accessor methods) rather than public fields; second, that it is probably better to pass the posts into the method than to have the method read them from a property, if you can. That way you don't have to set a property before calling the method. Think of it as a way of documenting what the method needs in order to operate: another developer doesn't need to know which properties must be set for the method to work, because everything the method needs is passed in.
As for why we need properties in the first place: why shouldn't you use public fields? Isn't that more convenient? It sure is. The reason we use properties rather than public fields is that, as with most other concepts in OOP, you want your object to hide its details from the outside world and only expose well-defined interfaces to its state. Why? Ultimately to hide implementation details and keep internal changes from rippling outward (encapsulation). Accessing state through properties also helps with debugging: you can simply set a breakpoint in a property to see when a variable changes, or check whether it holds a certain value, instead of littering your code with that check all over the place. There are many more goodies that come with this: returning read-only values, access control, and so on.
To sum up: fields are thought of as internal state, while properties (actual get/set methods) are thought of as the methods that interact with that internal state. Having an outside object interact with well-defined interfaces is smiley face. Having an outside class interact with internal state directly is frowny face.
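To make the colleague's suggestion concrete, here is a minimal sketch of that shape, written in C# for consistency with the other examples on this page; the principle is language-agnostic, and all of the names below are made up rather than taken from the original class. Each step returns what it produced and receives what it needs, instead of communicating through shared mutable fields:

using System.Collections.Generic;

public class LeadReport
{
    public void Run()
    {
        IList<string> posts = GetPosts();           // step returns its result
        IList<string> fields = GetFields(posts);    // consumes the previous result
        string csvPath = WriteCsv(fields);          // returns the file it wrote
        SendMail(csvPath);                          // only needs the attachment path
    }

    private IList<string> GetPosts() { return new List<string>(); }                      // fetch yesterday's posts
    private IList<string> GetFields(IList<string> posts) { return new List<string>(); }  // parse the fields
    private string WriteCsv(IList<string> fields) { return "leads.csv"; }                // write the CSV, return its path
    private void SendMail(string csvPath) { /* attach csvPath and send */ }
}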
In my Google Web Toolkit project, I got the following error:
com.google.gwt.user.client.rpc.SerializationException: Type ‘your.class.Type’ was not included in the set of types which can be serialized by this SerializationPolicy or its Class object could not be loaded. For security purposes, this type will not be serialized.
What are the possible causes of this error?
GWT keeps track of a set of types which can be serialized and sent to the client. your.class.Type apparently was not on that list. Lists like this are stored in .gwt.rpc files. These lists are generated, so editing them is probably useless. How they are generated is a bit unclear, but you can try the following things:
Make sure your.class.Type implements java.io.Serializable
Make sure your.class.Type has a public no-args constructor
Make sure the members of your.class.Type do the same
Check that your program does not contain collections of a non-serializable type, e.g. ArrayList<Object>. If such a collection contains your.class.Type and is serialized, this error will occur.
Make your.class.Type implement IsSerializable. This marker interface was specifically meant for classes that should be sent to the client. This didn't work for me, but my class also implemented Serializable, so maybe both interfaces don't work well together.
Another option is to create a dummy class with your.class.Type as a member, and add a method to your RPC interface that gets and returns the dummy. This forces the GWT compiler to add the dummy class and its members to the serialization whitelist.
I'll also add that if you want to use a nested class, use a static member class.
I.e.,
public class Pojo {
public static class Insider {
}
}
Non-static member classes get the SerializationException in GWT 2.4.
I had the same issue in a RemoteService like this
public List<X> getX(...);
where X is an interface. The only implementation did conform to the rules, i.e. implements Serializable or IsSerializable, has a default constructor, and all its (non-transient and non-final) fields follow those rules as well.
But I kept getting that SerializationException until I changed the result type from List<X> to X[], so
public X[] getX(...);
worked. Interestingly, the only argument being a List<Y>, with Y being an interface, was no problem at all...
I have run into this problem, and if you perchance are using JPA or Hibernate, it can be the result of trying to return the query object instead of creating a new object and copying your relevant fields into it. Check out the following, which I saw in a Google group.
@SuppressWarnings("unchecked")
public static List<Article> getForUser(User user)
{
List<Article> articles = null;
PersistenceManager pm = PMF.get().getPersistenceManager();
try
{
Query query = pm.newQuery(Article.class);
query.setFilter("email == emailParam");
query.setOrdering("timeStamp desc");
query.declareParameters("String emailParam");
List<Article> results = (List<Article>) query.execute(user.getEmail());
articles = new ArrayList<Article>();
for (Article a : results)
{
a.getEmail();
articles.add(a);
}
}
finally
{
pm.close();
}
return articles;
}
This helped me out a lot; hopefully it points others in the right direction.
Looks like this question is very similar to what IsSerializable or not in GWT?, see more links to related documentation there.
When your class has JDO annotations, this fixed it for me (in addition to the points in bspoel's answer): https://stackoverflow.com/a/4826778/1099376
Is it a violation of persistence ignorance to inject a repository interface into an entity object like this? If I weren't using an interface I would clearly see a problem, but when an interface is used, is there really a problem? Is the code below a good or bad pattern, and why?
public class Contact
{
private readonly IAddressRepository _addressRepository;
public Contact(IAddressRepository addressRepository)
{
_addressRepository = addressRepository;
}
public int Id { get; protected set; } // assumed key property; referenced by AddressBook below
private IEnumerable<Address> _addressBook;
public IEnumerable<Address> AddressBook
{
get
{
if(_addressBook == null)
{
_addressBook = _addressRepository.GetAddresses(this.Id);
}
return _addressBook;
}
}
}
It's not exactly a good idea, but it may be OK for some limited scenarios. I'm a little confused by your model, as I have a hard time believing that Address is your aggregate root, and therefore it wouldn't be usual to have a full-blown address repository. Based on your example, you are probably actually using a table data gateway or DAO rather than a repository.
I prefer to use a data mapper to solve this problem (an ORM or similar solution). Basically, I would take advantage of my ORM to treat the address book as a lazy-loaded property of the aggregate root, Contact. This has the advantage that your changes can be saved as long as the entity is bound to a session.
If I weren't using an ORM, I'd still prefer that the concrete Contact repository implementation set the backing store of the AddressBook property (a list, or whatever). I might have the repository set that enumeration to a proxy object that does know about the other data store and loads it on demand, as sketched below.
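A rough sketch of that second idea, assuming a hypothetical ContactRepository, a Contact that no longer takes a repository in its constructor, and a made-up Contact.SetAddressBook(IEnumerable<Address>) member for the repository to call:

using System.Collections.Generic;

public class ContactRepository
{
    private readonly IAddressRepository _addressRepository;

    public ContactRepository(IAddressRepository addressRepository)
    {
        _addressRepository = addressRepository;
    }

    public Contact GetById(int id)
    {
        Contact contact = LoadContactRow(id);
        // Hand the entity a deferred enumeration: GetAddresses does not run
        // until AddressBook is actually enumerated.
        contact.SetAddressBook(DeferredAddresses(id));
        return contact;
    }

    private IEnumerable<Address> DeferredAddresses(int id)
    {
        foreach (Address address in _addressRepository.GetAddresses(id))
            yield return address;
    }

    private Contact LoadContactRow(int id)
    {
        // Placeholder for the actual data access that materializes the Contact.
        throw new System.NotImplementedException();
    }
}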
You can inject the load function from outside. The new Lazy<T> type in .NET 4.0 comes in handy for that:
public Contact(Lazy<IEnumerable<Address>> addressBook)
{
_addressBook = addressBook;
}
private Lazy<IEnumerable<Address>> _addressBook;
public IEnumerable<Address> AddressBook
{
get { return this._addressBook.Value; }
}
Also note that IEnumerable<T>s might be intrinsically lazy anyhow when you get them from a query provider. But for any other type you can use the Lazy<T>.
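For illustration, the wiring on the composing side might look something like this, reusing the IAddressRepository from the question (ContactFactory and CreateContact are made-up names):

using System;
using System.Collections.Generic;

public static class ContactFactory
{
    public static Contact CreateContact(int contactId, IAddressRepository addressRepository)
    {
        // The delegate captures the repository call; GetAddresses only executes
        // when contact.AddressBook is first read.
        var addressBook = new Lazy<IEnumerable<Address>>(
            () => addressRepository.GetAddresses(contactId));
        return new Contact(addressBook);
    }
}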
Normally when you follow DDD you always operate on the whole aggregate. The repository always returns you a fully loaded aggregate root.
It doesn't make much sense (in DDD at least) to write code as in your example. A Contact aggregate will always contain all the addresses (if it needs them for its behavior, which I doubt, to be honest).
So typically ContactRepository is supposed to construct the whole Contact aggregate for you, where Address is an entity or, most likely, a value object inside this aggregate.
Because Address is an entity/value object that belongs to (and is therefore managed by) the Contact aggregate, it will not have its own repository, as you are not supposed to manage entities that belong to an aggregate outside that aggregate.
In summary: always load the whole Contact and call its behavior methods to do something with its state.
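Sketched as an interface, that shape looks roughly like this (the method names are illustrative, not prescribed by DDD):

public interface IContactRepository
{
    // Returns the Contact with its addresses already loaded as part of the
    // aggregate; callers never fetch addresses separately.
    Contact GetById(int contactId);

    void Save(Contact contact);
}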
Since it's been two years since I asked the question and the question was somewhat misunderstood, I will try to answer it myself.
Rephrased question:
"Should Business entity classes be fully persistance ignorant?"
I think entity classes should be fully persistence ignorant, because you will instantiate them in many places in your code base, so it quickly becomes messy to always have to inject the repository class into the entity constructor, and it doesn't look very clean either. This becomes even more evident if you need to inject several repositories. Therefore I always use a separate handler/service class to do the persistence work for the entities. These classes are instantiated far less frequently, and you usually have more control over where and when this happens. Entity classes are kept as lightweight as possible.
I now always have one repository per aggregate root, and if I need some extra business logic when entities are fetched from repositories I usually create one service class for the aggregate root.
Taking a tweaked example of the code in the question (as it was a bad example), I would do it like this now:
Instead of:
public class Contact
{
private readonly IContactRepository _contactRepository;
public Contact(IContactRepository contactRepository)
{
_contactRepository = contactRepository;
}
public void Save()
{
_contactRepository.Save(this);
}
}
I do it like this:
public class Contact
{
}
public class ContactService
{
private readonly IContactRepository _contactRepository;
public ContactService(IContactRepository contactRepository)
{
_contactRepository = contactRepository;
}
public void Save(Contact contact)
{
_contactRepository.Save(contact);
}
}
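Used from calling code, that ends up looking something like this (a minimal sketch; the concrete repository implementation is whatever you register elsewhere):

public static void SaveNewContact(IContactRepository contactRepository)
{
    // The service owns the persistence concern; Contact stays ignorant of it.
    var service = new ContactService(contactRepository);
    service.Save(new Contact());
}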
Resolving a class that has multiple constructors with Ninject doesn't seem to work.
public class Class1 : IClass
{
public Class1(int param) {...}
public Class1(int param2, string param3) { .. }
}
The following doesn't seem to work:
IClass instance =
IocContainer.Get<IClass>(With.Parameters.ConstructorArgument("param", 1));
The hook in the module is simple, and worked before I added the extra constructor:
Bind<IClass>().To<Class1>();
The reason that it doesn't work is that manually supplied .ctor arguments are not considered in the .ctor selection process. The .ctors are scored according to how many parameters they have of which there is a binding on the parameter type. During activation, the manually supplied .ctor arguments are applied. Since you don't have bindings on int or string, they are not scored. You can force a scoring by adding the [Inject] attribute to the .ctor you wish to use.
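For example, decorating the constructor you want should look roughly like this (the [Inject] attribute is the standard Ninject one; Class1 is from the question):

using Ninject;

public class Class1 : IClass
{
    [Inject] // forces Ninject to use this constructor instead of scoring them
    public Class1(int param) { /* ... */ }

    public Class1(int param2, string param3) { /* ... */ }
}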
The problem you're having is that Ninject selects .ctors based on the number of bound parameters available to it. That means that Ninject fundamentally doesn't understand overloading.
You can work around this problem by using the .ToConstructor() function in your bindings and combining it with the .Named() function. That lets you create multiple bindings for the same class to different constructors with different names. It's a little kludgy, but it works.
I maintain my own software development blog so this ended up being a post on it. If you want some example code and a little more explanation you should check it out.
http://www.nephandus.com/2013/05/10/overloading-ninject/
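A rough sketch of that workaround, using the question's types (kernel is your IKernel; the binding names and constant values are made up; ToConstructor, Named and ToConstant are standard Ninject binding methods):

// In the module: one named binding per constructor of the same class.
Bind<int>().ToConstant(1);
Bind<string>().ToConstant("default");

Bind<IClass>().ToConstructor(ctx => new Class1(ctx.Inject<int>()))
              .Named("SingleArg");
Bind<IClass>().ToConstructor(ctx => new Class1(ctx.Inject<int>(), ctx.Inject<string>()))
              .Named("TwoArgs");

// Later, resolve whichever variant you need by name:
IClass instance = kernel.Get<IClass>("SingleArg");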
I am trying to map a domain model in NHibernate. The domain model is implemented in what I think is DDD style. The mapping mostly works, but when I try to use a collection filter on a collection I get an exception which says: the collection was unreferenced.
I know the problem comes from how I've implemented the collection. My question: is it possible to use collection filters in NHibernate on collections implemented this way, or should I just forget it, i.e. NHibernate cannot work with this?
The code is as follows:
public class Person
{
IList<Address> _addresses = new List<Address>();
public string FirstName {get; set;}
...
public void addAddress(Address address)
{
// ... do some checks or validation
_addresses.Add(address);
}
public void removeAddress(Address address) {...}
public ReadOnlyCollection<Address> Addresses
{
get { return new ReadOnlyCollection<Address>(_addresses); }
}
}
The main issue is that I don't want to expose the internal addresses collection publicly.
Everything else works; I use field.camelcase-underscore access so NHibernate interacts directly with the field. I've been working through the Hibernate in Action book, and now I'm in chapter 7 where it deals with collection filters.
Is there any way around this? I've got it to work by exposing the internal collection like this:
public IList<Address> Addresses
{
get { return _addresses; }
}
but I really don't want to do this.
Help would really be appreciated.
Jide
If I recall correctly, an NHibernate filter works as an additional clause in the SQL queries to reduce the rows returned from the database.
My question to you is: why do you need that?
I mean, how many addresses might one person have? 1? 5? 10?
About collection isolation...
I myself just accept it as a sacrifice for NHibernate (just like argument-less ctors and "virtual"-ity) and use an exposed IList everywhere (with private setters), as sketched below, just to reduce technical complexity. Their contents surely can be modified from outside, but I just don't do that.
It's more important to keep code easily understandable than to make it super safe. Safety will follow.
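A minimal sketch of that compromise, using the Person/Address types from the question (shown here with a protected setter so the whole property stays virtual, in line with the 'public/protected virtual' proxy rule quoted in the first question above):

using System.Collections.Generic;

public class Person
{
    public virtual string FirstName { get; set; }

    // Exposed directly; by convention callers go through AddAddress/RemoveAddress,
    // but nothing technically stops them from mutating the list.
    public virtual IList<Address> Addresses { get; protected set; }

    public Person()
    {
        Addresses = new List<Address>();
    }

    public virtual void AddAddress(Address address)
    {
        // ... do some checks or validation
        Addresses.Add(address);
    }

    public virtual void RemoveAddress(Address address)
    {
        Addresses.Remove(address);
    }
}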