I would like to have a C++ client application that maintains a cache of objects that come from a Java server. The objects need to be compatible between the two languages. I understand that GemFire keeps them in a serialized format, which means the Java class needs to be equivalent to the C++ class.
Is there a common practice for defining the class structure in one place, in a language-independent specification, and generating the equivalent Java and C++ classes that serialize to PDX or whatever other form GemFire uses?
Regards,
Yash
Before PDX, I used to create a language-neutral representation of my domain and simultaneously generate Java, C++ and .NET classes using DataSerializable. However, PDX makes this unnecessary for the most part. I've included a sample config below.
If you encounter types you use that Java does not support, you still do not have to resort to generating serializers; you can focus on serializing just that one type (see page 564 of http://gemfire.docs.pivotal.io/pdf/pivotal-gemfire-ug.pdf).
Consider writing your own serializers when you have an insane need for speed, since the auto-serializer adds some overhead. This is usually not needed, but if it is, here are the instructions: http://data-docs-samples.cfapps.io/docs-gemfire/latest/javadocs/japi/com/gemstone/gemfire/DataSerializer.html
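To illustrate the single-type route, a hand-rolled serializer on the Java side might look roughly like the sketch below (the DomainObject fields are made up); it would be registered through the same <pdx-serializer> element shown in the config further down, and per the PdxSerializer contract it simply declines objects it does not handle.
import com.company.domain.DomainObject;
import com.gemstone.gemfire.pdx.PdxReader;
import com.gemstone.gemfire.pdx.PdxSerializer;
import com.gemstone.gemfire.pdx.PdxWriter;

// Hypothetical serializer that handles only com.company.domain.DomainObject.
// toData returns false for anything it does not recognize.
public class DomainObjectSerializer implements PdxSerializer {

    @Override
    public boolean toData(Object o, PdxWriter writer) {
        if (!(o instanceof DomainObject)) {
            return false;
        }
        DomainObject obj = (DomainObject) o;
        writer.writeString("name", obj.getName());
        writer.writeInt("quantity", obj.getQuantity());
        return true;
    }

    @Override
    public Object fromData(Class<?> clazz, PdxReader reader) {
        if (!DomainObject.class.equals(clazz)) {
            return null;
        }
        DomainObject obj = new DomainObject();
        obj.setName(reader.readString("name"));
        obj.setQuantity(reader.readInt("quantity"));
        return obj;
    }
}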
Here is the configuration for using the PDX auto-serializer:
<!-- Cache configuration enabling PDX auto-serialization -->
<cache>
  <pdx>
    <pdx-serializer>
      <class-name>com.gemstone.gemfire.pdx.ReflectionBasedAutoSerializer</class-name>
      <parameter name="classes">
        <string>com.company.domain.DomainObject</string>
      </parameter>
    </pdx-serializer>
  </pdx>
  ...
</cache>
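With the auto-serializer in place, the Java domain class can stay a plain object; the class named in the config above might be nothing more than the hypothetical sketch below. On the C++ side you would hand-write the equivalent class and read and write the same PDX field names.
package com.company.domain;

// Hypothetical domain class picked up by the ReflectionBasedAutoSerializer.
public class DomainObject {
    private String name;
    private int quantity;

    // Getters, setters and constructors omitted for brevity.
}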
If I answered your question, please mark it "Answered". Thanks.
I have two different Java 8 projects that will live on different servers and will both use Akka (specifically Akka Remoting) to talk to each other.
For instance, one app might send a Fizzbuzz message to the other app:
public class Fizzbuzz {
    private int foo;
    private String bar;
    // Getters, setters & ctor omitted for brevity
}
I've never used Akka Remoting before. I assume I need to create a third project, a library/JAR holding the shared messages (such as Fizzbuzz and others), and then pull that library into both projects as a dependency.
Is it that simple? Are there any serialization (or other Akka and/or networking) considerations that affect the design of these "shared" messages? Thanks in advance!
A shared library is the way to go for sure, except there are indeed serialization concerns.
From the Akka remoting docs:
When using remoting for actors you must ensure that the props and messages used for those actors are serializable. Failing to do so will cause the system to behave in an unintended way.
For more information please see Serialization.
Basically, you'll need to provide and configure the serialization for the actor props and the messages sent (including all their nested classes, of course). If I'm not mistaken, the default settings will get you up and running without any configuration on your side, provided that everything you send over the wire is Java-serializable.
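If you do stick with those defaults, the only requirement is that the shared message classes implement java.io.Serializable; a minimal sketch (the serialVersionUID is just good practice):
import java.io.Serializable;

public class Fizzbuzz implements Serializable {
    private static final long serialVersionUID = 1L;

    private int foo;
    private String bar;
    // Getters, setters & ctor omitted for brevity
}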
However, the default config uses default Java serialization, which is known to be quite inefficient, so you might want to switch to protobuf, kryo, or maybe even JSON. In that case, it would make sense to provide the serialization implementation and bindings as a shared library, either a dedicated one or part of the "shared models" one you mentioned in the question, depending on whether you want to reuse it elsewhere and whether you mind serialization-related transitive dependencies popping up all over the place.
Finally, if you'll allow some personal opinion, I would suggest trying protobuf first: it's a binary format (read: efficient) and is widely supported (there are bindings for other languages). Kryo works well too (I have a few closed-source akka-cluster apps with kryo serialization in production), but it has a few quirks around collection/map handling.
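Whichever format you pick, the serializer itself is just a class that lives in the shared library and is registered in both apps' configuration (under akka.actor.serializers and akka.actor.serialization-bindings). A rough, hypothetical sketch of a hand-written one for Fizzbuzz, assuming it has an (int, String) constructor and getters:
import akka.serialization.JSerializer;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical serializer kept in the shared library so both nodes resolve
// the same class and identifier.
public class FizzbuzzSerializer extends JSerializer {

    @Override
    public int identifier() {
        return 67891; // any unique id outside the range Akka reserves for itself
    }

    @Override
    public boolean includeManifest() {
        return false;
    }

    @Override
    public byte[] toBinary(Object o) {
        Fizzbuzz msg = (Fizzbuzz) o;
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bos)) {
            out.writeInt(msg.getFoo());
            out.writeUTF(msg.getBar());
        } catch (IOException e) {
            throw new IllegalArgumentException(e);
        }
        return bos.toByteArray();
    }

    @Override
    public Object fromBinaryJava(byte[] bytes, Class<?> manifest) {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes))) {
            return new Fizzbuzz(in.readInt(), in.readUTF());
        } catch (IOException e) {
            throw new IllegalArgumentException(e);
        }
    }
}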
I use JPype to build a SOAP client in my Python-based test platform. However, I need to extend a Java class in order to make a call like this:
void process(Context parameter)
The type Context here is a class, and to provide an implementation I need to extend Context in Python using JPype:
class MyContext extends Context { // override the methods }
With the JProxy functionality in JPype, I'm able to "implement" Java interfaces.
But I want to extend a class, not an interface. Any help is appreciated.
This is very much a limitation: JPype does not allow subclassing Java classes.
SourceForge link
I changed the SOAP method to accept an interface in the API contract.
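For reference, the reworked Java side of the contract might look something like the sketch below (the names are illustrative); because Context is now an interface, a JPype JProxy can supply the implementation from Python.
// Context.java - hypothetical interface extracted from the old Context class.
public interface Context {
    String getProperty(String key);
}

// SoapService.java - the SOAP method now accepts the interface.
public interface SoapService {
    void process(Context parameter);
}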
JPype is an effort to allow Python programs full access to Java class libraries. This is achieved not through re-implementing Python, as Jython/JPython has done, but rather through interfacing at the native level in both virtual machines.
Eventually, it should be possible to replace Java with Python in many, though not all, situations. JSP, Servlets, RMI servers and IDE plugins are all good candidates.
Once this integration is achieved, a second phase will be started to separate the Java logic from the Python logic, eventually allowing the bridging technology to be used in other environments, i.e. Ruby, Perl, COM, etc.
I'm trying to use JavaFX on my Android device, with the help of javafxports.
I use XStream to parse some XML files in my program.
When I compile, javafxports outputs the following warnings:
Note: there were 9 classes trying to access annotations using reflection.
You should consider keeping the annotation attributes
(using '-keepattributes *Annotation*').
(http://proguard.sourceforge.net/manual/troubleshooting.html#attributes)
Note: there were 32 classes trying to access generic signatures using reflection.
You should consider keeping the signature attributes
(using '-keepattributes Signature').
(http://proguard.sourceforge.net/manual/troubleshooting.html#attributes)
Note: there were 56 unresolved dynamic references to classes or interfaces.
You should check if you need to specify additional program jars.
(http://proguard.sourceforge.net/manual/troubleshooting.html#dynamicalclass)
Note: there were 3 class casts of dynamically created class instances.
You might consider explicitly keeping the mentioned classes and/or
their implementations (using '-keep').
(http://proguard.sourceforge.net/manual/troubleshooting.html#dynamicalclasscast)
Note: there were 39 accesses to class members by means of introspection.
You should consider explicitly keeping the mentioned class members
(using '-keep' or '-keepclassmembers').
(http://proguard.sourceforge.net/manual/troubleshooting.html#dynamicalclassmember)
Note: you're ignoring all warnings!
The output .apk can be installed and runs until it calls the XStream classes that read annotations on my classes; the reason is actually described in the warnings.
So my question is: how can I disable ProGuard when generating the .apk, or pass it a custom proguard.pro configuration?
My build.gradle is almost the same as the one in the helloworld example.
Thanks.
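For reference, the keep rules the warnings are asking for would presumably look something like this (the package name is just a placeholder for my XStream-annotated model classes):
# Keep the annotation and generic-signature attributes that XStream reads reflectively
-keepattributes *Annotation*,Signature

# Keep the model classes that XStream instantiates and inspects via reflection
-keep class com.mycompany.model.** { *; }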
I'm building an NServiceBus Gateway handler, and I need to avoid config files so that all configuration is defined inside C# classes. As a result I have to convert the following section to C# code:
<GatewayConfig>
  <Channels>
    <Channel Address="http://localhost:25899/SiteB/" ChannelType="Http" Default="true"/>
  </Channels>
</GatewayConfig>
I've found GatewayConfig, ChannelCollection and ChannelConfig in the NServiceBus.Config namespace, but I cannot link them together, because GatewayConfig refers to ChannelCollection, yet ChannelCollection has nothing to do with ChannelConfig. Please help.
Just create a class implementing IProvideConfiguration<GatewayConfig>. That gives you a way to provide your own configuration. Look at the pubsub sample for the exact details of how to do this.
Well, I found a way to do it after installing Reflector and looking into the implementation. There is a ChannelCollection.CreateNewElement() method that returns a System.Configuration.ConfigurationElement. NServiceBus overrides that method and instantiates a ChannelConfig inside it, so all I have to do is cast the ConfigurationElement to ChannelConfig, which is far from an intuitive interface. It looks like NServiceBus.Config.ChannelCollection is somewhat unfinished work: if you look at other collections such as NServiceBus.Config.MessageEndpointMappingCollection, you'll find all the necessary type-safe methods for working with its child elements (NServiceBus.Config.MessageEndpointMapping), so I suspect the NServiceBus team simply never did the same for ChannelCollection.
UPDATE: since the CreateNewElement() method is protected, I had to implement my own class inheriting from ChannelCollection in order to make a method that adds a new ChannelConfig element publicly available.
When using Scala RemoteActors I was getting a ClassNotFoundException that referred to scala.actors.remote.NetKernel. I copied someone else's example and added RemoteActor.classLoader = getClass.getClassLoader to my Actor and now everything works. Why is this necessary?
Remote Actors use Java serialization to send messages back and forth. Inside the actors library, you'll find a custom object input stream ( https://lampsvn.epfl.ch/trac/scala/browser/scala/trunk/src/actors/scala/actors/remote/JavaSerializer.scala ) that is used to serialize objects to/from a socket. There's also some routing code and other magic.
The ClassLoader used for remoting is rather important; I'd recommend reading up on Java RMI if you're unfamiliar with it. In any case, the ClassLoader that Scala picks when serializing/deserializing actors is the one located on RemoteActor, which defaults to null.
This means that by default, you will be unhappy without specifying a ClassLoader ;).
If you were in an environment that controls classloaders, such as OSGi, you'd want to make sure you set this value to a classloader that has access to all classes used by all serialized actors.
Hope that helps!