We run multiple websites which use the same rich functional backend running as a library. The backend comprises multiple components with many objects shared between them. We now need to separate a stateless rule-execution component into a different container for security reasons. It would be great if the rules component could access all the backend objects seamlessly (rather than defining a new interface and objects/adapters).
I would like to use an RPC mechanism that will seamlessly support passing our Java POJOs (some of them are Hibernate beans) over the wire. Web service frameworks like JAXB, Axis, etc. require quite a bit of boilerplate and configuration for each object, whereas those using Java serialization seem straightforward, but I am concerned about backward/forward compatibility issues.
We are using XStream for serializing our objects into a persistence store and have been happy so far. But none of the popular RPC/web service frameworks seem to use XStream for serialization. Is it OK to use XStream and send my objects over HTTP with a custom implementation? Or will plain Java serialization just work? Or are there better alternatives?
Thanks in advance for your advice.
The good thing about standard Java serialization is that it produces a binary stream, which is quite a bit more space- and bandwidth-efficient than any of these XML serialization mechanisms. But as you wrote, XML can be friendlier toward backward/forward compatibility, and it's easier to parse and modify by hand and/or by scripts if the need arises. It's a trade-off; if you need long-term storage, then it's advisable to avoid plain serialization.
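If you do go the plain-serialization route, the compatibility concern can be managed to a degree by declaring an explicit serialVersionUID, which makes additive changes (such as new fields) compatible with old streams. A minimal round-trip sketch, with illustrative class and field names:

```java
import java.io.*;

public class SerializationDemo {
    /**
     * A typical POJO. The explicit serialVersionUID is the key to
     * backward compatibility: with it declared, adding a field later is
     * a compatible change and old streams still deserialize (the new
     * field just comes back as its default). Without it, the JVM derives
     * an ID from the class shape, so many edits break old streams with
     * an InvalidClassException.
     */
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        String name;
        int age;
        User(String name, int age) { this.name = name; this.age = age; }
    }

    /** Serialize to bytes and back, as an RPC transport would. */
    static User roundTrip(User u) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(u);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()))) {
            return (User) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        User copy = roundTrip(new User("alice", 30));
        System.out.println(copy.name + ":" + copy.age); // alice:30
    }
}
```

Note that this only covers structural evolution; renaming or retyping fields still requires custom readObject/writeObject logic, which is where XML formats tend to be more forgiving.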
I'm a happy XStream user. Zero problems so far.
I am learning Spring 5 WebFlux and reactive streams, where the new HandlerFunctions and RouterFunctions are used to handle HTTP requests and responses.
As per the documentation:
The annotation counterpart to a handler function would be a method with @RequestMapping.
Since @RequestMapping is quite easy to implement and understand, why is there a need for a more complex and difficult way of handling HTTP requests and responses via the HandlerFunction and RouterFunction utilities?
Please suggest.
Spring WebFlux gives you two different styles: annotations and functional.
Depending on the type of application you'd like to build and the constraints you have to deal with, one or the other might be more relevant. You can even mix both in the same application.
The annotation-based model is very successful, but it also comes with a few limitations, mostly because of Java annotations themselves:
the code path is not always clear, unless you know the internals of Spring Framework (do you know where handler mappings are detected? matched against incoming requests?)
it's using reflection, which has a cost
it can be hard to debug and extend
The functional variant tries to fix those issues and embrace a functional style (with the JDK8 function API) and immutability. It's got a "more library; less framework" touch to it, meaning that you're more in control of things. Here's an example: with RouterFunction, you can chain RequestPredicates and they are executed in order, so you're in full control of what ultimately handles the incoming request. With the annotations model, the most specific handler will be selected, by looking at the annotations on the method and the incoming request.
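That first-match-wins chaining can be modelled in a few lines of plain JDK code. This is a toy sketch of the idea only, not Spring's actual API; in real WebFlux you'd build the chain with RouterFunctions.route(predicate, handler).andRoute(...):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

public class RoutingSketch {

    // A "route" pairs a request predicate with a handler, mirroring the
    // RouterFunction idea: routes are tried in declaration order and the
    // first match wins, so ordering is fully under your control.
    static final class Route {
        final Predicate<String> predicate;
        final Function<String, String> handler;
        Route(Predicate<String> predicate, Function<String, String> handler) {
            this.predicate = predicate;
            this.handler = handler;
        }
    }

    // Walk the chain in order; the first route whose predicate matches
    // handles the request. No reflection, no annotation scanning.
    static String route(List<Route> routes, String path) {
        for (Route r : routes) {
            if (r.predicate.test(path)) {
                return r.handler.apply(path);
            }
        }
        return "404";
    }

    static List<Route> demoRoutes() {
        return Arrays.asList(
            new Route(p -> p.equals("/users"), p -> "all users"),
            new Route(p -> p.startsWith("/users/"),
                      p -> "one user: " + p.substring("/users/".length())),
            new Route(p -> true, p -> "fallback"));
    }

    public static void main(String[] args) {
        System.out.println(route(demoRoutes(), "/users/42")); // one user: 42
    }
}
```

The contrast with the annotation model is visible here: the dispatch logic is ordinary code you can step through in a debugger, rather than mappings resolved by the framework at startup.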
If you're perfectly happy with the annotations model, there's no reason to switch. But again, you can mix both, and maybe you'll find the functional model handy. In my opinion, trying it even if you don't plan on adopting it won't hurt; worst case, it will broaden your perspective a bit as a developer and show you a different way of doing things.
For more on that, you can check out Arjen Poutsma's talk on the functional web framework in Spring WebFlux.
It's not needed, and it doesn't break WebFlux. I'd use @RequestMapping if you don't have special needs that make HandlerFunction necessary.
As for RouterFunctions: if you don't want JSON parsing and want to work with the ServerRequest directly (e.g. get at the raw InputStream), you have to use RouterFunctions (AFAIK). You'd then return a raw stream (Mono) too. I had a case where I needed to play proxy with a little bit extra, and thus needed to avoid the JSON parsing that you'd usually get with @RequestMapping.
I am wondering if there are any performance overhead issues to consider when using WCF versus binary serialization done manually. I am building an n-tier site and wish to implement asynchronous behavior across tiers. I plan on passing data in binary form to reduce bandwidth. WCF seems to be a good shortcut compared to building your own tools, but I am wondering if there are any points to be aware of when choosing between WCF and building your own lightweight library on the System.IO namespace.
There is a binary formatter for WCF, though it's not entirely binary; it produces SOAP messages whose content is formatted using the .NET Binary Format for XML, which is a highly compacted form of XML. (Examples of what this looks like are found on this samples page.)
Alternatively, you can implement your own custom message formatter, as long as the formatter was available on both client and server side, to format however you want. (I think you'll still have some overhead from WCF but not much.)
In my personal opinion, no amount of overhead savings you might get from defining a custom binary format, and writing all of the serialization/deserialization code to implement it manually, will ever compensate for the time and effort you will spend implementing and debugging such a mechanism.
I come from a web development background and haven't done anything significant in Java in quite some time.
I'm doing a small project, most of which involves some models with relationships and straightforward CRUD operations with those objects.
JPA/EclipseLink seems to suit the problem, but this is the kind of app that has File->Open and File->Save features, i.e. the data will be stored in files by the user, rather than persisting in the database between sessions.
The last time I worked on a project like this, I stored the objects in ArrayLists, but having worked with MVC frameworks since, that seems a bit primitive. On the other hand, using JPA, opening a file would require loading a whole bunch of objects into the database, just for the convenience of not having to write code to manage the objects.
What's the typical approach for managing model data with Java SE desktop applications?
JPA was specifically built with databases in mind. This means that it typically operates on a big data store with objects belonging to many different users.
In a file-based scenario, the files are often not that big, and all objects in a file belong to the same user and the same document. In that case, I'd say that for a binary format the old Java serialization still works for temporary files.
For longer-term or interchangeable formats, XML is better suited. Using JAXB (included in the standard Java library) you can marshal and unmarshal Java objects to and from XML using an annotation-based approach that on the surface resembles JPA. In fact, I've worked with model objects that have both JPA and JAXB annotations so they can be stored in a database as well as in an XML file.
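A minimal sketch of that dual-annotation idea, with illustrative class and field names. The JPA side is shown as comments to keep the example self-contained, and note that javax.xml.bind shipped inside the JDK only up through Java 8; from Java 11 on it is a separate dependency:

```java
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;
import java.io.StringWriter;

// One model class can carry both JPA and JAXB annotations, so the same
// object is persistable to a database or marshallable to an XML file.
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Customer {
    // @Id @GeneratedValue   <- the JPA side would go here
    Long id;
    String name;

    public static void main(String[] args) throws JAXBException {
        Customer c = new Customer();
        c.id = 1L;
        c.name = "Alice";

        // Marshal the object to XML, as File->Save would.
        JAXBContext ctx = JAXBContext.newInstance(Customer.class);
        Marshaller m = ctx.createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        StringWriter sw = new StringWriter();
        m.marshal(c, sw);
        System.out.println(sw); // a <customer> document with id and name elements
    }
}
```

The corresponding File->Open path is the mirror image: ctx.createUnmarshaller().unmarshal(reader) gives you the object graph back.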
If, however, your desktop app uses files that represent potentially huge datasets for which you need paging and querying, then using JPA might still be the better option. There are various small embedded DBs available for Java, although I don't know how simple it is to let a data source point to a user-selected file. Normally a persistence unit in Java is mapped to a fixed data source, and you can't yet create persistence units on the fly.
Yet another option would be to use JDO, which is a mapping technology like JPA, but not an ORM. It's much more independent of the backend persistence technology being used, and it can indeed map to files as well.
Sorry that this is not a real answer, but more like some things to take into account, but hope it's helpful in some way.
I'm looking for the algorithm that Google's Web Toolkit uses to serialize data posted to the server during an AJAX request. I'm looking to duplicate it in another language so that I can tie in another of my projects with a GWT project.
Any help is much appreciated!
The GWT-RPC serialization is heavily tied to Java. It even sends Java class names over the wire.
I suggest you use something like JSON to communicate with the server. This way, you can use any programming language with the GWT server.
Update: There are no definitive references to the GWT-RPC format, and a mailing list post explains that decision:
The GWT RPC format is intentionally opaque JSON. This makes it
somewhere between difficult and impossible to add a non-GWT agent to
the RPC discussion. There isn't really a nice work-around for
creating a non-Java server-side implementation but, because your
RemoteServiceServlet implementation just has to implement your
synchronous RPC interface, it's quite possible for non-GWT clients to
talk to the same server-side business logic, just without using the
RPC protocol.
and the little detail that surfaced was:
The wire format is plain text. It's actually JSON. It's just
unreadable JSON because the assumption is that both the producing and
consuming code is auto-generated and can make all kinds of assumptions
about the structure of the text.
I've written a design document explaining the GWT-RPC wire format. Hopefully you'll find it useful.
We currently use XStream for encoding our web service inputs/outputs in XML. However, we are considering switching to a binary format with code generators for multiple languages (protobuf, Thrift, Hessian, etc.) to make supporting new clients easier and less reliant on hand-coding (and also to better support our message formats, which include binary data).
However, most of our objects on the server are POJOs, with XStream handling the serialization via reflection and annotations, and most of these libraries assume they will be generating the POJOs themselves. I can think of a few ways to interface with an alternative library:
1. Write an XStream marshaller for the target format.
2. Write custom code to marshal the POJOs to/from the classes generated by the alternative library.
3. Subclass the generated classes to implement the POJO logic. May require some rewriting. (Also, did I mention we want to use Terracotta?)
4. Use another library that supports both reflection (like XStream) and code generation.
However I'm not sure which serialization library would be best suited to the above techniques.
(1) might not be that much work since many serialization libraries include a helper API that knows how to read/write primitive values and delimiters.
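As an illustration of the kind of primitive read/write helpers meant here, the JDK's own DataOutputStream/DataInputStream can frame a POJO's fields by hand. This is a toy sketch of hand-rolled binary framing, not any particular library's wire format:

```java
import java.io.*;

public class BinaryMarshalSketch {

    // Write a hypothetical two-field POJO (a name and a count) as a
    // length-prefixed UTF string followed by a big-endian int.
    static byte[] write(String name, int count) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeUTF(name);   // two-byte length prefix, then modified UTF-8
        out.writeInt(count);  // four bytes, big-endian
        out.flush();
        return buf.toByteArray();
    }

    // Read the fields back in the same order they were written.
    static String read(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        return in.readUTF() + ":" + in.readInt();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(read(write("widget", 7))); // widget:7
    }
}
```

A real marshaller for one of the binary formats would layer field tags and versioning on top of helpers like these, which is exactly the boilerplate the code-generating tools exist to produce for you.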
(2) probably gives you the widest choice of tools: https://github.com/eishay/jvm-serializers/wiki/ToolBehavior (some are language-neutral). Flawed but hopefully not totally useless benchmarks: https://github.com/eishay/jvm-serializers/wiki
Many of these tools generate classes, which would require writing code to convert to/from your POJOs. Tools that work with POJOs directly typically aren't language-neutral.
(3) seems like a bad idea (not knowing anything about your specific project). I normally keep my message classes free of any other logic.
(4) The Protostuff library (which supports the Protocol Buffer format) lets you write a "schema" to describe how you want your POJOs serialized. But writing this schema might end up being more work and more error-prone than just writing code to convert between your POJOs and some tool's generated classes.
Protostuff can also automatically generate a schema via reflection, but this might yield a message format that feels a bit Java-centric.