SerializationException: type not included in serializable type set

In my Google Web Toolkit project, I got the following error:
com.google.gwt.user.client.rpc.SerializationException: Type ‘your.class.Type’ was not included in the set of types which can be serialized by this SerializationPolicy or its Class object could not be loaded. For security purposes, this type will not be serialized.
What are the possible causes of this error?

GWT keeps track of a set of types that can be serialized and sent to the client, and your.class.Type apparently was not on that list. Such lists are stored in .gwt.rpc files. These lists are generated, so editing them by hand is probably useless. How they are generated is a bit unclear, but you can try the following things (a minimal example follows the list):
Make sure your.class.Type implements java.io.Serializable
Make sure your.class.Type has a public no-args constructor
Make sure the fields of your.class.Type satisfy the same rules
Check that your program does not contain collections of a non-serializable type, e.g. ArrayList&lt;Object&gt;. If such a collection contains your.class.Type and is serialized, this error will occur.
Make your.class.Type implement IsSerializable. This marker interface was specifically meant for classes that should be sent to the client. This didn't work for me, but my class also implemented Serializable, so maybe both interfaces don't work well together.
Another option is to create a dummy class with your.class.Type as a member, and add a method to your RPC interface that gets and returns the dummy. This forces the GWT compiler to add the dummy class and its members to the serialization whitelist.
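For reference, a minimal class satisfying the first three rules might look like this (a sketch with hypothetical names, not code from the question):
import java.io.Serializable;

// Hypothetical DTO illustrating the checklist above.
public class Person implements Serializable {

    // Fields must be of serializable types themselves.
    private String name;

    // No-args constructor, as required for GWT serialization.
    public Person() {
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}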

I'll also add that if you want to use a nested class, use a static member class.
I.e.,
public class Pojo {
    public static class Insider {
    }
}
Non-static member classes get the SerializationException in GWT 2.4.

I had the same issue in a RemoteService like this
public List<X> getX(...);
where X is an interface. The only implementation did conform to the rules, i.e. implements Serializable or IsSerializable, has a default constructor, and all its (non-transient and non-final) fields follow those rules as well.
But I kept getting that SerializationException until I changed the result type from List&lt;X&gt; to X[], so
public X[] getX(...);
worked. Interestingly, the method's only argument, a List&lt;Y&gt; where Y is also an interface, was no problem at all...
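For illustration, the change looks roughly like this (a sketch; the service name is made up, and X and Y stand for the question's interfaces):
import com.google.gwt.user.client.rpc.RemoteService;
import java.util.List;

public interface MyService extends RemoteService {
    // Kept failing with SerializationException as a return type:
    // List<X> getX(List<Y> argument);

    // Worked once the return type was an array:
    X[] getX(List<Y> argument);
}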

I have run into this problem, and if you happen to be using JPA or Hibernate, it can be the result of returning the query object directly instead of creating a new object and copying the relevant fields into it. Check out the following, which I saw in a Google group.
@SuppressWarnings("unchecked")
public static List<Article> getForUser(User user)
{
    List<Article> articles = null;
    PersistenceManager pm = PMF.get().getPersistenceManager();
    try
    {
        Query query = pm.newQuery(Article.class);
        query.setFilter("email == emailParam");
        query.setOrdering("timeStamp desc");
        query.declareParameters("String emailParam");
        List<Article> results = (List<Article>) query.execute(user.getEmail());
        // Copy the results into a plain ArrayList instead of returning the
        // query's own result object; touching each entity while the
        // PersistenceManager is still open forces lazy fields to load.
        articles = new ArrayList<Article>();
        for (Article a : results)
        {
            a.getEmail();
            articles.add(a);
        }
    }
    finally
    {
        pm.close();
    }
    return articles;
}
This helped me out a lot; hopefully it points others in the right direction.

Looks like this question is very similar to IsSerializable or not in GWT?; see the links to related documentation there.

If your class has JDO annotations, then this fixed it for me (in addition to the points in bspoel's answer): https://stackoverflow.com/a/4826778/1099376

Related

Optaplanner: prevent custom List from being cloned by FieldAccessingSolutionCloner

I have a @PlanningSolution class that has one field whose type is a custom List implementation.
When solving I run into the following issue (as described in the optaplanner documentation):
java.lang.IllegalStateException: The cloneCollectionClass (class java.util.ArrayList) created for originalCollectionClass (class Solution$1) is not assignable to the field's type (class CustomListImpl).
Maybe consider replacing the default SolutionCloner.
As this field has no impact on planning, can I prevent FieldAccessingSolutionCloner from trying to clone that particular field, e.g. by adding some annotation? I don't want to provide a complete custom SolutionCloner.
When inspecting the sources of FieldAccessingSolutionCloner I found out that I would only need to override retrieveCachedFields(...) or constructCloneCollection(...), so I tried to extend FieldAccessingSolutionCloner. But then I need a public no-args constructor, and I don't know how to initialise the solutionDescriptor field in such a constructor so that my ExtendedFieldAccessingSolutionCloner can be used as the solution cloner.
If the generic solution cloner decided to clone that List, there is probably a good reason for it to do so: one of the elements in that list probably has a reference to a planning entity or the planning solution, and therefore the entire list needs to be planning-cloned.
If that's not the case, this is a bug in OptaPlanner. Please provide the source code of the class with that field, and of the CustomListImpl class too, so we can reproduce and fix it.
To supply a custom SolutionCloner, follow the docs, which show something like this (a simple case without chained variables, so it's easy to get right; solution cloning is notoriously difficult!):
@PlanningSolution(solutionCloner = VaccinationSolutionCloner.class)
public class VaccinationSolution {...}

public class VaccinationSolutionCloner implements SolutionCloner<VaccinationSolution> {

    @Override
    public VaccinationSolution cloneSolution(VaccinationSolution solution) {
        List<PersonAssignment> personAssignmentList = solution.getPersonAssignmentList();
        List<PersonAssignment> clonedPersonAssignmentList = new ArrayList<>(personAssignmentList.size());
        for (PersonAssignment personAssignment : personAssignmentList) {
            PersonAssignment clonedPersonAssignment = new PersonAssignment(personAssignment);
            clonedPersonAssignmentList.add(clonedPersonAssignment);
        }
        return new VaccinationSolution(solution.getVaccineTypeList(), solution.getVaccinationCenterList(), solution.getAppointmentList(),
                solution.getVaccinationSlotList(), clonedPersonAssignmentList, solution.getScore());
    }
}

Jackson private constructors, JDK 9+, Lombok

I'm looking for documentation on how Jackson works with private constructors on immutable types, using Jackson 2.9.6 and the default ObjectMapper provided by Spring Boot 2, running on JDK 10.0.1.
Given JSON:
{"a":"test"}
and given a class like:
public class ExampleValue {
    private final String a;

    private ExampleValue() {
        this.a = null;
    }

    public String getA() {
        return this.a;
    }
}
Deserialisation (surprisingly, at least to me) seems to work.
Whereas this does not:
public class ExampleValue {
    private final String a;

    private ExampleValue(final String a) {
        this.a = a;
    }

    public String getA() {
        return this.a;
    }
}
And this does:
public class ExampleValue {
    private final String a;

    @java.beans.ConstructorProperties({"a"})
    private ExampleValue(final String a) {
        this.a = a;
    }

    public String getA() {
        return this.a;
    }
}
My assumption is that the only way the first example can work is by using reflection to set the value of the final field (which I presume it does via java.lang.reflect.AccessibleObject.setAccessible(true)).
Question 1: am I right that this is how Jackson works in this case? I presume this would have the potential to fail under a security manager which does not allow this operation?
My personal preference, therefore, would be the last code example above, since it involves less "magic" and works under a security manager. However, I have been slightly confused by various threads I've found about Lombok and constructor generation: Lombok used to generate @java.beans.ConstructorProperties(...) by default, but later changed the default to no longer do this, and now lets you opt back in via lombok.anyConstructor.addConstructorProperties=true.
Some people (including in the lombok release notes for v1.16.20) suggest that:
Oracle more or less broke this annotation with the release of JDK9, necessitating this breaking change.
but I'm not precisely clear on what is meant by this: what did Oracle break? For me, using JDK 10 with Jackson 2.9.6, it seems to work fine.
Question 2: Is anyone able to shed light on how this annotation was broken in JDK 9, and why Lombok no longer considers it desirable to generate this annotation by default?
Answer 1: This is exactly how it works (also to my surprise). According to the Jackson documentation on Mapper Features, the properties INFER_PROPERTY_MUTATORS, ALLOW_FINAL_FIELDS_AS_MUTATORS, and CAN_OVERRIDE_ACCESS_MODIFIERS all default to true. Therefore, in your first example, Jackson
creates an instance using the private constructor with the help of AccessibleObject#setAccessible (CAN_OVERRIDE_ACCESS_MODIFIERS),
detects a fully accessible getter method for a (private) field and considers the field a mutable property (INFER_PROPERTY_MUTATORS),
ignores the final on the field due to ALLOW_FINAL_FIELDS_AS_MUTATORS, and
gains access to that field using AccessibleObject#setAccessible (CAN_OVERRIDE_ACCESS_MODIFIERS).
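As a stand-alone illustration, the first and last steps boil down to something like this (hypothetical demo code, not Jackson's actual implementation; both setAccessible calls are exactly what a restrictive security manager would reject):
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;

public class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        // Instantiate via the private no-args constructor.
        Constructor<ExampleValue> ctor = ExampleValue.class.getDeclaredConstructor();
        ctor.setAccessible(true);
        ExampleValue value = ctor.newInstance();

        // Write the private final instance field.
        Field a = ExampleValue.class.getDeclaredField("a");
        a.setAccessible(true);
        a.set(value, "test");

        System.out.println(value.getA()); // prints "test"
    }
}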
However, I agree that one should not rely on that, because as you said a security manager could prohibit it, or Jackson's defaults may change. Furthermore, it feels "not right" to me, as I would expect that class to be immutable and the field to be unsettable.
Example 2 does not work because Jackson does not find a usable constructor: it cannot map the field names to the parameter names of the only existing constructor, as those names are not present at runtime. @java.beans.ConstructorProperties in your third example bypasses this problem, as Jackson explicitly looks for that annotation at runtime.
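For what it's worth, example 2 can (if I'm not mistaken) also be made to work without the annotation by retaining parameter names at runtime: compile with javac -parameters and register Jackson's ParameterNamesModule from the jackson-module-parameter-names artifact (which Spring Boot 2 ships by default). A sketch:
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.module.paramnames.ParameterNamesModule;

public class ParameterNamesDemo {
    public static void main(String[] args) throws Exception {
        // PROPERTIES mode avoids the single-argument-constructor ambiguity.
        ObjectMapper mapper = new ObjectMapper()
                .registerModule(new ParameterNamesModule(JsonCreator.Mode.PROPERTIES));
        ExampleValue value = mapper.readValue("{\"a\":\"test\"}", ExampleValue.class);
        System.out.println(value.getA()); // "test"
    }
}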
Answer 2:
My interpretation is that @java.beans.ConstructorProperties is not really broken, but simply cannot be assumed to be present any more with Java 9+. This is due to its membership in the java.desktop module (see, e.g., this thread for a discussion on this topic). As modularized Java applications may have a module path without this module, Lombok would break such applications if it generated this annotation by default. (Furthermore, this annotation is not generally available on the Android SDK.)
So if you have a non-modularized application or a modularized application with java.desktop on the module path, it's perfectly fine to let lombok generate the annotation by setting lombok.anyConstructor.addConstructorProperties=true, or to add the annotation manually if you are not using lombok.
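For completeness, that opt-in is a one-liner in lombok.config at the project root:
# lombok.config -- makes Lombok emit @java.beans.ConstructorProperties on generated constructors
lombok.anyConstructor.addConstructorProperties = true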

OOP - When to Use Properties vs When to Use Return Values

I am designing a process that should be run daily at work, and I've written a class to do the work. It looks something like this:
class LeadReport {
    public $posts = array();
    public $fields = array();

    protected function _getPosts() {
        // get posts of a certain type
        // set them to the property $this->posts
    }

    protected function _getFields() {
        // use $this->posts to get fields
        // set $this->fields
    }

    protected function _writeCsv() {
        // use the properties to write a csv
    }

    protected function _sendMail() {
        // attach a csv and send it
    }

    public function main() {
        $this->out('Lead Report');
        $this->out("Getting Yesterday's Posts...");
        $this->_getPosts();
        $this->out("Done.");
        $this->out("Next, Parsing the Fields...");
        $this->_getFields();
        $this->out("Done.");
        $this->out("Next, Writing the CSVs...");
        $this->_writeCsv();
        $this->out("Done.");
        $this->out("Finally, Sending Mail");
        $this->_sendMail();
        $this->out('Bye!');
    }
}
After showing this code to one of my colleagues, he commented that the _get() methods should have return values, and that the _write() and _sendMail() methods should use those values as parameters.
So, two questions:
1) Which is "correct" in this case (properties or return values)?
2) Is there a general rule or principle that governs when to use properties over when to use return values in object oriented programming?
I think maybe the source of your question is that you are not entirely convinced that using properties is better than having public fields. For example, here common practice says you should not expose posts and fields as public: you should use a getter method or a protected property to regulate access to them. Having a protected _getFields() alongside a public $fields doesn't really make sense.
Your colleague may be pointing at two things: the fact that you should use properties and not public fields, and the fact that it is probably better to pass the posts into a method than to have the method read a property, if you can. That way you don't have to set a property before calling the method. Think of it as a way of documenting what the method needs in order to operate: another developer doesn't need to know which properties must be set for the method to work, because everything the method needs is passed in.
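To make the second point concrete, here is a minimal sketch of that parameter-passing style (in Java rather than PHP, with made-up types; each method's signature now documents its inputs and outputs):
import java.util.ArrayList;
import java.util.List;

class LeadReportSketch {

    // Each step declares what it needs and what it produces,
    // instead of communicating through mutable object state.
    private List<String> getPosts() {
        return new ArrayList<>();        // fetch posts of a certain type
    }

    private List<String> getFields(List<String> posts) {
        return new ArrayList<>(posts);   // derive fields from the posts
    }

    private String writeCsv(List<String> fields) {
        return String.join(",", fields); // build the CSV from the fields
    }

    private void sendMail(String csv) {
        // attach the csv and send it
    }

    public void run() {
        List<String> posts = getPosts();
        List<String> fields = getFields(posts);
        String csv = writeCsv(fields);
        sendMail(csv);
    }
}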
As for why we need properties in the first place: why shouldn't you use public fields? Isn't it more convenient? It sure is. The reason we use properties rather than public fields is that, as with most other concepts in OOP, you want your object to hide its details from the outside world and expose only well-defined interfaces to its state. Why? Ultimately to hide implementation details and keep internal changes from rippling out (encapsulation). Accessing state through properties also has a debugging benefit: you can set a breakpoint in a property to see when a variable changes, or check whether it holds a certain value, instead of littering your code with that check all over the place. There are many more goodies that come with this: read-only values, access control, etc.
To sum up, fields are thought of as internal state, while properties (actual get/set methods) are thought of as methods that interact with internal state. An outside object interacting with well-defined interfaces: smiley face. An outside class touching internal state directly: frowny face.

Is this a DDD anti-pattern?

Is it a violation of persistence ignorance to inject a repository interface into an entity object like this? Without the interface I clearly see a problem, but when using an interface, is there really a problem? Is the code below a good or bad pattern, and why?
public class Contact
{
    private readonly IAddressRepository _addressRepository;

    public Contact(IAddressRepository addressRepository)
    {
        _addressRepository = addressRepository;
    }

    private IEnumerable<Address> _addressBook;

    public IEnumerable<Address> AddressBook
    {
        get
        {
            if (_addressBook == null)
            {
                _addressBook = _addressRepository.GetAddresses(this.Id);
            }
            return _addressBook;
        }
    }
}
It's not exactly a good idea, but it may be OK for some limited scenarios. I'm a little confused by your model, as I have a hard time believing that Address is your aggregate root, and therefore it would be unusual to have a full-blown address repository. Based on your example, you are probably actually using a table data gateway or DAO rather than a repository.
I prefer to use a data mapper to solve this problem (an ORM or similar solution). Basically, I would take advantage of my ORM to treat the address book as a lazy-loaded property of the aggregate root, Contact. This has the advantage that your changes can be saved as long as the entity is bound to a session.
If I weren't using an ORM, I'd still prefer that the concrete Contact repository implementation set the property of the AddressBook backing store (list, or whatever). I might have the repository set that enumeration to a proxy object that does know about the other data store, and loads it on demand.
You can inject the load function from outside. The new Lazy<T> type in .NET 4.0 comes in handy for that:
public Contact(Lazy<IEnumerable<Address>> addressBook)
{
    _addressBook = addressBook;
}

private Lazy<IEnumerable<Address>> _addressBook;

public IEnumerable<Address> AddressBook
{
    get { return this._addressBook.Value; }
}
Also note that an IEnumerable<T> might be intrinsically lazy anyhow when you get it from a query provider. But for any other type you can use Lazy<T>.
Normally when you follow DDD you always operate with the whole aggregate. The repository always returns you a fully loaded aggregate root.
It doesn't make much sense (in DDD at least) to write code as in your example. A Contact aggregate will always contain all the addresses (if it needs them for its behavior, which I doubt to be honest).
So typically a ContactRepository is supposed to construct the whole Contact aggregate, where Address is an entity or, most likely, a value object inside this aggregate.
Because Address is an entity/value object that belongs to (and is therefore managed by) the Contact aggregate, it will not have its own repository, as you are not supposed to manage entities that belong to an aggregate outside that aggregate.
To summarize: always load the whole Contact and call its behavior methods to do something with its state.
Since it's been two years since I asked the question, and the question was somewhat misunderstood, I will try to answer it myself.
Rephrased question:
"Should Business entity classes be fully persistance ignorant?"
I think entity classes should be fully persistence ignorant, because you will instantiate them in many places in your code base, so it quickly becomes messy to always have to inject a repository class into the entity constructor, and it doesn't look very clean either. This becomes even more evident if you need to inject several repositories. Therefore I always use a separate handler/service class to do the persistence jobs for the entities. These classes are instantiated far less frequently, and you usually have more control over where and when this happens. Entity classes are kept as lightweight as possible.
I now always have one repository per aggregate root, and if I need some extra business logic when entities are fetched from repositories, I usually create one service class for the aggregate root.
Taking a tweaked example of the code in the question (as it was a bad example), this is how I would do it now:
Instead of:
public class Contact
{
    private readonly IContactRepository _contactRepository;

    public Contact(IContactRepository contactRepository)
    {
        _contactRepository = contactRepository;
    }

    public void Save()
    {
        _contactRepository.Save(this);
    }
}
I do it like this:
public class Contact
{
}

public class ContactService
{
    private readonly IContactRepository _contactRepository;

    public ContactService(IContactRepository contactRepository)
    {
        _contactRepository = contactRepository;
    }

    public void Save(Contact contact)
    {
        _contactRepository.Save(contact);
    }
}

Injection of class with multiple constructors

Resolving a class that has multiple constructors with Ninject doesn't seem to work.
public class Class1 : IClass
{
    public Class1(int param) { ... }
    public Class1(int param2, string param3) { ... }
}
The following doesn't seem to work:
IClass instance =
    IocContainer.Get<IClass>(With.Parameters.ConstructorArgument("param", 1));
The hook in the module is simple, and worked before I added the extra constructor:
Bind<IClass>().To<Class1>();
The reason it doesn't work is that manually supplied .ctor arguments are not considered in the .ctor selection process. The .ctors are scored according to how many of their parameters have bindings for their types; during activation, the manually supplied .ctor arguments are applied. Since you don't have bindings on int or string, those parameters are not scored. You can force the choice by adding the [Inject] attribute to the .ctor you wish to use.
The problem you're having is that Ninject selects .ctors based on the number of bound parameters available to it. That means that Ninject fundamentally doesn't understand overloading.
You can work around this problem by using the .ToConstructor() function in your bindings and combining it with the .Named() function. That lets you create multiple bindings for the same class to different constructors with different names. It's a little kludgy, but it works.
I maintain my own software development blog, so this ended up being a post on it. If you want some example code and a little more explanation, check it out:
http://www.nephandus.com/2013/05/10/overloading-ninject/