Can Java serialization be used with Gemfire/Geode, without deploying classes on the server?

The goal is to avoid having to deploy java classes to Gemfire/Geode servers.
The use-case only requires storage/retrieval of serialized objects. The keys would be Strings. There would be no use of server-side functions or OQL.
I'm aware that Gemfire/Geode PDX doesn't (in most cases) require Java classes to be deployed to the Gemfire/Geode server. I'm curious what is possible when using plain old Java serialization.

As long as there is no server-side logic that requires deserializing the values, this should work. Besides functions and OQL, CacheListeners and CacheWriters would also require deserializing values on the server.
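For illustration, a minimal client-side sketch of that storage/retrieval pattern might look like the following (the locator address, region name, and Profile class are assumptions for the example, not part of the question):

import java.io.Serializable;

import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class PlainSerializationClient {

    // A plain Serializable value; the class only needs to be on the client's classpath.
    public static class Profile implements Serializable {
        private static final long serialVersionUID = 1L;
        String displayName;
    }

    public static void main(String[] args) {
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("localhost", 10334) // assumed locator host/port
                .create();

        // PROXY keeps no local state, so every put/get goes to the server; as long as
        // nothing on the server needs the value, it can stay in serialized form there.
        Region<String, Profile> region = cache
                .<String, Profile>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .create("profiles"); // hypothetical region name

        Profile profile = new Profile();
        profile.displayName = "Alice";
        region.put("user-1", profile);

        Profile back = region.get("user-1"); // deserialized on the client, which has the class
        System.out.println(back.displayName);

        cache.close();
    }
}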

Related

Imposing limits on WSDL created by Apache CXF

I'm currently working on a project to develop some web services, and we've chosen Apache CXF as the technology of choice. We're working with a contract-last methodology (we're coding our service and allowing CXF to generate the WSDL for us). While we have been able to use JAX-WS annotations to mark certain fields as required, we have been unable to figure out how to impose character limits on string elements in the WSDL. It isn't a huge deal (we're fairly close with the vendor we're working with, so a casual agreement not to let the requests coming to us exceed said length is enough for now), but we are curious whether there is a way to do this.
Is there a way with CXF to impose something like a 20-character limit on an input string?
This isn't something that JAXB provides. However, there is a project that has forked the jaxb-ri that adds support for this:
https://github.com/whummer/jaxb-facets
If you don't need to have the facet reflected in the WSDL, you could just do the validation as part of the setter method of the JAXB beans.
public void setName(String s) {
    if (s.length() > 20) {
        throw new IllegalArgumentException("name must not exceed 20 characters");
    }
    name = s;
}
There are also some events that the JAXB bean can respond to after unmarshalling where you could validate things. See the afterUnmarshal descriptions in the Javadoc: http://docs.oracle.com/javase/6/docs/api/javax/xml/bind/Unmarshaller.html
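For instance, a validation hook along these lines could live directly on the JAXB bean (a minimal sketch; the Person class and the 20-character limit are only illustrative):

import javax.xml.bind.Unmarshaller;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
public class Person {

    private String name;

    public String getName() { return name; }

    public void setName(String name) { this.name = name; }

    // JAXB looks this callback up by its signature and invokes it via reflection
    // once the element has been fully unmarshalled, which makes it a convenient
    // place for length checks and other validation.
    void afterUnmarshal(Unmarshaller unmarshaller, Object parent) {
        if (name != null && name.length() > 20) {
            throw new IllegalArgumentException("name must not exceed 20 characters");
        }
    }
}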
When you take a bottom-up approach to a web service (i.e. implementation first, WSDL later), you are at the web service engine's mercy with respect to the restrictions on the datatypes.
But if you go through the top-down approach (WSDL first, implementation later), you can eliminate those kinds of problems, though it requires you to know how to develop WSDLs and apply facets (restrictions).
Fortunately, CXF supports both, and you can give wsdl2java a try.
Also remember, adding a facet doesn't mean that it will be enforced by the webservice engine. Most of the engines just ignore it, though I am not sure about CXF.

WCF OData for multiplatform development?

The OP in this question asks about using WCF/OData as an internal data access layer.
Arguments of using WCF/OData as access layer instead of EF/L2S/nHibernate directly
The resounding reply seems to be don't do it. I'm in a similar position to the OP, but have a concern not raised in the original question. I'm trying to develop (natively) for a lot of different platforms but want to keep as much of the data and business logic server-side as possible. So I'll have iOS/Android/Web (MVC)/desktop applications. Currently, I have a single WinForms application with an ORM data access layer (LLBLGen Pro).
I'm envisioning moving most of my business / data access logic (possibly still with LLBLGen or other ORM) behind a WCF / OData interface. Then making all my different clients on the different platforms very thin (basically UI and WCF calls).
Is this also overengineered? Am I missing a simpler solution?
I cannot see any problem in your architecture, nor would I consider it over-engineered, as OData is a standard protocol and your concept conforms to the DRY principle as well.
I would turn the question around: why would you implement the same business logic in each client, introducing more possible bugs and losing the ability to fix errors in one single, centralized place? Your idea also lets you implement the security layer only once.
OData is a cross-platform standard and you can find OData libraries for each development platform (MSDN, OData.org, JayData for JavaScript). Furthermore, you can use OData FunctionImports/service methods and entity-level methods, which will simplify your queries.
If you are doing multiplatform development, you may find it more practical to choose a platform-agnostic communication protocol, such as HTTP, rather than bringing multiple drivers and ORMs to access your data sources directly. In addition, since OData is a REST protocol, you don't need much on the client side: anything that can format OData HTTP requests and parse HTTP responses will do. There are, however, a few aspects to be aware of:
An OData server is not a replacement for an SQL database. It supports batches, but they are not the same as DB transactions (although in many cases they can be used to model transactional operations). It supports parent-child relations, but it does not support JOINs in the classic SQL sense. So you have to plan what you expose as an OData entity. It's too easy to build an OData server using WCF Data Services wrapping an EF model; too easy, because people often expose low-level database content instead of building high-level domain types.
As of today, OData multiplatform clients are still under development, but they are coming. If I may suggest something I am personally working on, have a look at the Simple.Data OData adapter (https://github.com/simplefx/Simple.OData, see its Wiki pages for examples); it has a NuGet package. While this is a client library that only supports .NET 4.0, part of it is being extracted to be published as a portable class library, Simple.OData.Client, to support .NET 4.x, Windows Store, Silverlight 5, Windows Phone 8, Android and iOS. In fact, if you check the winrt branch of the Git repository, you will find a multiplatform PCL already; it's just not published on NuGet yet.

JAX-RS standard solution for serializing arrays of one element

As described in the Jersey documentation, Jersey uses the MAPPED notation by default, so I have a serialization problem with lists of one element.
http://jersey.java.net/nonav/documentation/latest/json.html#d4e949
Until now, I resolved the problem on the client side, but now I don't have access to the client code.
I need to serialize arrays in NATURAL notation, but I can't make my code dependent on Jersey. I need a solution based on the JAX-RS standard, a cross-platform solution.
JAX-RS doesn't specify a JSON provider, so I must implement my own provider if I want a portable solution.
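One portable way to do that is to plug Jackson in behind a standard JAX-RS provider, so resource code only sees JAX-RS interfaces. The sketch below assumes Jackson is available for the actual JSON mapping; the class name is made up:

import java.io.IOException;
import java.io.OutputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;

import javax.ws.rs.Produces;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyWriter;
import javax.ws.rs.ext.Provider;

import com.fasterxml.jackson.databind.ObjectMapper;

// Registered through the standard JAX-RS extension point; Jackson writes lists
// in natural notation, so a one-element list stays a JSON array.
@Provider
@Produces(MediaType.APPLICATION_JSON)
public class NaturalJsonWriter implements MessageBodyWriter<Object> {

    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public boolean isWriteable(Class<?> type, Type genericType,
            Annotation[] annotations, MediaType mediaType) {
        return MediaType.APPLICATION_JSON_TYPE.isCompatible(mediaType);
    }

    @Override
    public long getSize(Object value, Class<?> type, Type genericType,
            Annotation[] annotations, MediaType mediaType) {
        return -1; // length is computed by the container
    }

    @Override
    public void writeTo(Object value, Class<?> type, Type genericType,
            Annotation[] annotations, MediaType mediaType,
            MultivaluedMap<String, Object> httpHeaders, OutputStream entityStream)
            throws IOException, WebApplicationException {
        mapper.writeValue(entityStream, value);
    }
}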

XStream <-> Alternative binary formats (e.g. protocol buffers)

We currently use XStream for encoding our web service inputs/outputs in XML. However we are considering switching to a binary format with code generator for multiple languages (protobuf, Thrift, Hessian, etc) to make supporting new clients easier and less reliant on hand-coding (also to better support our message formats which include binary data).
However most of our objects on the server are POJOs with XStream handling the serialization via reflection and annotations, and most of these libraries assume they will be generating the POJOs themselves. I can think of a few ways to interface an alternative library:
Write an XStream marshaler for the target format.
Write custom code to marshal the POJOs to/from the classes generated by the alternative library.
Subclass the generated classes to implement the POJO logic. May require some rewriting. (Also did I mention we want to use Terracotta?)
Use another library that supports both reflection (like XStream) and code generation.
However I'm not sure which serialization library would be best suited to the above techniques.
(1) might not be that much work since many serialization libraries include a helper API that knows how to read/write primitive values and delimiters.
(2) probably gives you the widest choice of tools: https://github.com/eishay/jvm-serializers/wiki/ToolBehavior (some are language-neutral). Flawed but hopefully not totally useless benchmarks: https://github.com/eishay/jvm-serializers/wiki
Many of these tools generate classes, which would require writing code to convert to/from your POJOs. Tools that work with POJOs directly typically aren't language-neutral.
(3) seems like a bad idea (not knowing anything about your specific project). I normally keep my message classes free of any other logic.
(4) The Protostuff library (which supports the Protocol Buffer format) lets you write a "schema" to describe how you want your POJOs serialized. But writing this schema might end up being more work and more error-prone than just writing code to convert between your POJOs and some tool's generated classes.
Protostuff can also automatically generate a schema via reflection, but this might yield a message format that feels a bit Java-centric.
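As a rough illustration of that reflection-based route, a runtime-schema round trip looks something like this (a sketch assuming the io.protostuff runtime artifacts are on the classpath; the Order class is made up):

import io.protostuff.LinkedBuffer;
import io.protostuff.ProtostuffIOUtil;
import io.protostuff.Schema;
import io.protostuff.runtime.RuntimeSchema;

public class ProtostuffRoundTrip {

    // Hypothetical POJO standing in for one of the server-side message classes.
    public static class Order {
        String id;
        int quantity;
    }

    public static void main(String[] args) {
        // RuntimeSchema inspects the POJO's fields via reflection, so no
        // generated classes are needed.
        Schema<Order> schema = RuntimeSchema.getSchema(Order.class);

        Order order = new Order();
        order.id = "A-42";
        order.quantity = 3;

        LinkedBuffer buffer = LinkedBuffer.allocate(512);
        byte[] bytes;
        try {
            bytes = ProtostuffIOUtil.toByteArray(order, schema, buffer);
        } finally {
            buffer.clear();
        }

        Order copy = schema.newMessage();
        ProtostuffIOUtil.mergeFrom(bytes, copy, schema);
        System.out.println(copy.id + " x " + copy.quantity);
    }
}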

XStream/HTTP service

We run multiple websites which use the same rich functional backend running as a library. The backend is comprised of multiple components with a lot of objects shared between them. Now, we need to separate a stateless rule execution component into a different container for security reasons. It would be great if I could have access to all the backend objects seamlessly in the rules component (rather than defining a new interface and objects/adapters).
I would like to use an RPC mechanism that will seamlessly support passing our Java POJOs (some of them are Hibernate beans) over the wire. Web service stacks like JAXB, Axis, etc. need quite a bit of boilerplate and configuration for each object, whereas those using Java serialization seem straightforward, but I am concerned about backward/forward compatibility issues.
We are using XStream for serializing our objects into the persistence store and have been happy with it so far. But none of the popular RPC/web service frameworks seem to use XStream for serialization. Is it OK to use XStream and send my objects over HTTP using my own custom implementation? Or will Java serialization just work? Or are there better alternatives?
Thanks in advance for your advice.
The good thing about standard Java serialization is that it produces a binary stream, which is quite a bit more space- and bandwidth-efficient than any of these XML serialization mechanisms. But as you wrote, XML can be more backward/forward-compatibility friendly, and it's easier to parse and modify by hand and/or by scripts if the need arises. It's a trade-off; if you need long-term storage, then it's advisable to avoid plain serialization.
I'm a happy XStream user. Zero problems so far.
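For what it's worth, the wire format is just the XML string XStream already produces, so sending it over HTTP is little more than a round trip like the following (a minimal sketch; the Invoice class and the endpoint URL are made up, and the allowTypes call only applies to newer XStream releases):

import java.io.Serializable;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

import com.thoughtworks.xstream.XStream;

public class XStreamOverHttp {

    // Hypothetical POJO standing in for one of the shared backend objects.
    public static class Invoice implements Serializable {
        String number;
        double amount;
    }

    public static void main(String[] args) throws Exception {
        XStream xstream = new XStream();
        xstream.allowTypes(new Class[] { Invoice.class }); // security whitelist on recent XStream releases

        Invoice invoice = new Invoice();
        invoice.number = "INV-7";
        invoice.amount = 99.5;

        // Serialize to XML and POST it to the (illustrative) rules endpoint.
        String xml = xstream.toXML(invoice);
        HttpURLConnection conn = (HttpURLConnection) new URL("http://rules-host/execute").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/xml");
        conn.getOutputStream().write(xml.getBytes(StandardCharsets.UTF_8));
        System.out.println("HTTP status: " + conn.getResponseCode());

        // The receiving side turns the same XML back into the POJO.
        Invoice copy = (Invoice) xstream.fromXML(xml);
        System.out.println(copy.number + " = " + copy.amount);
    }
}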