I am using GemFire with Spring (Spring Data GemFire) to put/find against a POJO:
@Autowired
GemfireTemplate responseTemplate;
...
HTTPAudit audit = new HTTPAudit(sessionId, response);
responseTemplate.put("Detail", audit);
System.out.println("Get caching...");
SelectResults<HTTPAudit> result = responseTemplate.find("SELECT * FROM /HTTPAudit WHERE sessionId = $1", sessionId);
The HTTPAudit object implements the Serializable interface.
The entity is saved successfully in GemFire, but an exception is thrown when it tries to deserialize the value. The error message is "com.gemstone.gemfire.SerializationException: A ClassNotFoundException was thrown while trying to deserialize cached value."
The details regarding your configuration are less than complete here. There are many possible scenarios under which this can happen. One of the more likely situations is that the code snippet above runs in a client cache, where /HTTPAudit is a PROXY client Region to some peer Server Region (the data policy is unclear, though that matters less if a client/server topology is in play).
Because your HTTPAudit implements java.io.Serializable, GemFire will use Java Serialization to send the object over the wire.
GemFire will then store the object in the "form" in which it arrived (in this case, serialized).
Next, you go on to run an OQL statement (Query) that accesses a field on the object (sessionId). Because GemFire cannot access data in "Java Serialized" form, it must deserialize the value to evaluate the Query's predicate.
In this case, I am guessing your GemFire Server node does not have the HTTPAudit class on its CLASSPATH, which it needs.
If you want to avoid the deserialization of the HTTPAudit object on the GemFire Server when an OQL statement like the one above is issued, then you should switch to PDX Serialization (http://gemfire.docs.pivotal.io/latest/userguide/developing/data_serialization/gemfire_pdx_serialization.html) and set the Server's 'read-serialized' configuration attribute to true.
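For example, server-side PDX might be enabled along these lines (a minimal sketch using the plain GemFire Java API; the com.example.audit.* package pattern is an assumption for illustration):
import com.gemstone.gemfire.cache.Cache;
import com.gemstone.gemfire.cache.CacheFactory;
import com.gemstone.gemfire.pdx.ReflectionBasedAutoSerializer;
// Keep values in serialized (PdxInstance) form on reads, and auto-serialize
// domain classes matching the pattern, so Queries need not deserialize them.
Cache cache = new CacheFactory()
    .setPdxReadSerialized(true)
    .setPdxSerializer(new ReflectionBasedAutoSerializer("com.example.audit.*"))
    .create();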
However, you should be careful since not all OQL operations on an object necessarily keep the object in serialized form, even when using PDX.
For instance, an OQL Query similar to...
SELECT audit.toString() FROM /HTTPAudit audit WHERE ...
would cause even a PDX-serialized object to be deserialized during Query execution.
Not that calling toString() in a Query is good practice (it is merely to demonstrate the point), but certain object operations can cause GemFire to deserialize a value in order to perform that operation while processing the OQL statement, even when the value is stored in PDX serialized form, which in turn requires the class to be on the Server's CLASSPATH. So, be careful.
However, in your case, the problem is caused because you are using the less efficient, though more standard, Java Serialization to store and access your object. Unlike PDX serialization, there is no "type metadata" that enables GemFire to access data on the object in serialized form without deserializing it first. With Java Serialization, GemFire must deserialize the object to access its information.
Hope this helps.
Related
There is a data contract (say, EmployeeView) in my WCF service. I have decorated it with the Serializable attribute, and all members are marked as DataMember.
A method in the WCF service returns List<EmployeeView>.
When I execute this method through the WCF Test Client or an MVC app, it executes successfully, but while transferring the result it gives the error "The underlying connection was closed: The connection was closed unexpectedly." Is List<EmployeeView> not serializable even though EmployeeView is marked as serializable?
Further to add: if I execute an OperationContract returning only EmployeeView, it gives a different error, "The service is offline or inaccessible; the client-side configuration does not match the proxy." This makes things strange, because other operations returning string, etc., work fine.
No. It depends on whether the concrete implementation of List is Serializable.
You also need to stop using the terms 'serialized' and 'Serializable' as though they mean the same thing. They don't.
I have a WCF service handling a very large number of requests (thousands per second). Each request contains objects, so they get built inside the DataContractSerializer during deserialization. My service processes the messages, and then they get cleaned up by the .NET garbage collector.
The problem is that garbage collections are causing problems for my service (requests occasionally take 100+ milliseconds longer than they should). I need to minimize them, so I am looking for a way to use object pooling. In other words, I want the DataContractSerializer to obtain an object from my object pool (instead of getting one via GetUninitializedObject), and when I am done processing the message, I would release it back to the pool for cleaning and reuse, thereby avoiding thousands of memory allocations a second.
I've seen that this is possible with protobuf-net ("Using protobuf-net, is it possible to deserialize a message without allocating memory?"), and in fact I'm using protobuf elsewhere, but for this particular situation that is not an option.
The DataContractSerializer class is sealed and cannot be extended, so unfortunately you are not able to remove its call to FormatterServices.GetUninitializedObject.
What you will have to do instead is create your own serializer inheriting from XmlObjectSerializer so that you can fully control instance creation.
The next step is to create a DataContractSerializerOperationBehavior and override the CreateSerializer methods to return your customized serializer.
The last thing to do is remove the default DataContractSerializerOperationBehavior from the endpoint and replace it with a custom one that installs your custom serializer. Carlos Figueira has a post on his blog showing exactly how to do this (go to the section called "Real world scenario: using a new serializer").
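Put together, the shape might look roughly like this (a sketch only: PooledMessage and MessagePool are hypothetical, and the actual field-by-field XML reading is elided):
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel.Description;
using System.Xml;

// Hypothetical message type and trivial pool, for illustration only.
public class PooledMessage { /* [DataMember] fields elided */ }

public static class MessagePool
{
    private static readonly ConcurrentBag<PooledMessage> Bag = new ConcurrentBag<PooledMessage>();
    public static PooledMessage Take()
    {
        PooledMessage msg;
        return Bag.TryTake(out msg) ? msg : new PooledMessage();
    }
    public static void Return(PooledMessage msg) { Bag.Add(msg); }
}

// Custom serializer: delegates the wire format to an inner
// DataContractSerializer for writing, but controls instance creation on reads.
public class PoolingSerializer : XmlObjectSerializer
{
    private readonly DataContractSerializer inner;

    public PoolingSerializer(Type type) { inner = new DataContractSerializer(type); }

    public override bool IsStartObject(XmlDictionaryReader reader) { return inner.IsStartObject(reader); }

    public override object ReadObject(XmlDictionaryReader reader, bool verifyObjectName)
    {
        // Take a pooled instance instead of letting the framework allocate
        // one via GetUninitializedObject, then populate it from the reader.
        PooledMessage msg = MessagePool.Take();
        // ... read the XML content into msg's fields here ...
        return msg;
    }

    public override void WriteStartObject(XmlDictionaryWriter writer, object graph) { inner.WriteStartObject(writer, graph); }
    public override void WriteObjectContent(XmlDictionaryWriter writer, object graph) { inner.WriteObjectContent(writer, graph); }
    public override void WriteEndObject(XmlDictionaryWriter writer) { inner.WriteEndObject(writer); }
}

// Behavior that hands WCF the custom serializer for each operation.
public class PoolingSerializerOperationBehavior : DataContractSerializerOperationBehavior
{
    public PoolingSerializerOperationBehavior(OperationDescription operation) : base(operation) { }

    public override XmlObjectSerializer CreateSerializer(Type type, string name, string ns, IList<Type> knownTypes)
    { return new PoolingSerializer(type); }

    public override XmlObjectSerializer CreateSerializer(Type type, XmlDictionaryString name, XmlDictionaryString ns, IList<Type> knownTypes)
    { return new PoolingSerializer(type); }
}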
These objects have collections of type ICollection<>.
If I pass an object graph from the client to the server, it throws the following exception:
System.NotSupportedException was unhandled by user code
Message=Collection was of a fixed size.
Source=mscorlib
This occurs in the fixup code the T4 template has generated. It seems the collections are being deserialized on the server as arrays, and so they can't be modified. Is there a way to specify the type the serializer should use?
I would strongly recommend that you don't use the POCO classes on your service boundary. Create a separate set of classes to model the data you want to send and receive across the wire (Data Transfer Objects, or DTOs) and use a tool like AutoMapper to move data between the DTOs and your POCO classes, as in the sketch below.
Otherwise, you end up tying the consumers of your service to your service's internal conceptual model, which means you become constrained in changing your implementation because you need to avoid breaking your clients.
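For illustration, the shape might be something like this (Employee and EmployeeDto are hypothetical names, and AutoMapper's classic static API is assumed):
using System.Collections.Generic;
using System.Runtime.Serialization;
using AutoMapper;

// Internal EF POCO: never exposed on the wire.
public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<Employee> Reports { get; set; }
}

// Wire contract: a concrete List<> deserializes as a modifiable collection.
[DataContract]
public class EmployeeDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public List<EmployeeDto> Reports { get; set; }
}

public static class DtoMapping
{
    // One-time configuration, e.g. at service start-up.
    public static void Configure()
    {
        Mapper.CreateMap<Employee, EmployeeDto>();
    }

    // In a service operation: map the POCO graph to DTOs before returning it.
    public static EmployeeDto ToDto(Employee employee)
    {
        return Mapper.Map<EmployeeDto>(employee);
    }
}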
Try using the following attribute:
[ServiceKnownType(typeof(List<string>))]
If that doesn't work, perhaps try using IList<T>, if that is possible in your situation.
I am trying to use a generic DataContract class so that I don't have to implement several types for collections of different objects.
Example:
I have a Profile object which contains a collection of objects.
So I want to have one Profile<Foo> and Profile<Foo1>, where Profile contains a collection of Foo or Foo1 objects.
I have been reading that WCF does not support open generic classes, and indeed the error that I get is the following:
Type 'GL.RequestResponse.ProfileResponse`1[T]' cannot be exported as a schema type because it is an open generic type. You can only export a generic type if all its generic parameter types are actual types.
Now the ProfileResponse is this Profile object that I am trying to use.
Now, in my host, I am doing the following:
ServiceConfig(typeof(ProfileHandler<EducationResponse>).Assembly,
typeof(ProfileRequest).Assembly,
typeof(Container)).Initialize();
This is the definition of the handler with the data contract:
public class ProfileHandler<T> : RequestHandler<ProfileRequest,
ProfileResponse<T>>
The Container is using Windsor Container to register the objects.
The registration works fine, but after I instantiate the Service Host for the WCF processor and call the host's Open method, I get the above error.
Is there really no way for me to write generic request/response types for WCF with Agatha?
It feels like such a waste to have to define a Profile container class for each type being contained in that collection.
thanks.
One cannot have open generic handlers, because the server side needs to know what the concrete type is.
One can use so-called closed generics instead. This way the server side knows the types for which to load the handler, as in the sketch below.
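For example (a hypothetical closed subtype, reusing the question's type names):
[DataContract]
public class EducationProfileResponse : ProfileResponse<EducationResponse> { }

// A handler can then target the closed response type directly
// (the Handle method body is elided in this sketch):
public class EducationProfileHandler
    : RequestHandler<ProfileRequest, EducationProfileResponse>
{
    // ... Handle(ProfileRequest request) implementation ...
}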
Also, one could potentially configure Agatha so that it allows extra information related to the request to be received; in this case, the type wrapped in the response.
One could do this by defining a BaseRequest class and having all the requests extend this class. This class can have a property which carries the type of the response, or the type to be wrapped in the response.
In the process, when examining the request, the server can get the type to be wrapped in the Response, so that it knows how to load the class.
I have not implemented this, since it would take too much time and I am not sure I want to be responsible for maintaining Agatha for our application, but this is how I would do it.
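For illustration, the BaseRequest idea might look roughly like this (the property name is hypothetical; Agatha's Request base class is assumed):
[DataContract]
public abstract class BaseRequest : Request
{
    // Assumed property: the assembly-qualified name of the type the response
    // should wrap, so the server can resolve it via Type.GetType().
    [DataMember]
    public string WrappedResponseTypeName { get; set; }
}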
We are using a WCF Data Service to broker our data server side, and give third parties easy OData access to our data. The server side of things has been relatively easy. The client side, on the other hand, is giving us fits.
We are converting from regular Entity Framework to Data Services, and we've created an assembly which contains the generated client objects that talk to the data service (via a Service Reference). Those classes are partial, so we've added some logic and extended properties to them. This all works great.
The issue we are having is that we need to process our objects at save time, because they need to do some advanced serialization before they are sent over the wire. The DataServiceContext class contains two events: WritingEntity and ReadingEntity. The ReadingEntity event actually happens at the correct time for us (post object deserialization). The WritingEntity event happens at the WRONG time for us (post object serialization).
Is there any way to catch an object before it's written to the request, so that we can call a method on entity that is about to be written?
Obviously we could just loop through the Entities list, looking for any entity that is not in an Unchanged or Deleted state, and call the appropriate method there... but this would require adding special code every time we call SaveChanges on the context. That may be what we need to do, but it would be nice if there were a way to catch the entities before they are written to XML for sending to the service.
Currently there's no hook in the DataServiceContext to do what you want. The closest I can think of is the approach you suggested: walking all the entities and finding those which were modified. You could do this in your own SaveChanges-like method on the context class (which is also partial).
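A minimal sketch of that approach (MyDataContext and IPreSaveHook are hypothetical names; the generated context class is partial, so this can live in your own file):
using System.Data.Services.Client;

// Hypothetical hook your entity partial classes would implement.
public interface IPreSaveHook
{
    void BeforeSave(); // perform the advanced serialization work here
}

public partial class MyDataContext // must match the generated context's name/namespace
{
    public DataServiceResponse SaveChangesWithHooks()
    {
        // Walk the tracked entities and invoke the hook on anything that
        // will actually be written to the service.
        foreach (EntityDescriptor descriptor in this.Entities)
        {
            if (descriptor.State != EntityStates.Unchanged &&
                descriptor.State != EntityStates.Deleted)
            {
                var hook = descriptor.Entity as IPreSaveHook;
                if (hook != null)
                {
                    hook.BeforeSave();
                }
            }
        }
        return this.SaveChanges();
    }
}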