We are using XML Web Services for our Android and iPhone applications. Now we want to optimize those web services to speed up the mobile applications' performance.
Thanks
I have no experience with XML Web Services, but FlatBuffers simply produces and consumes binary byte arrays, so as long as XML Web Services can encode those, it should work.
That said, FlatBuffers is all about speed, and you will throw a lot of that away by going through a layer of XML. If you truly want fast, optimized communication between your server and your apps, you would send a FlatBuffer directly, either over HTTP if that is simplest, or better yet directly over TCP/IP (or any minimal transport layer on top of TCP/IP).
Sending it directly might actually be simpler too, since you won't have to deal with XML at all.
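To make that concrete, here is a minimal Python sketch of sending a FlatBuffer directly over HTTP. It assumes a hypothetical schema defining table Reading { sensor: string; value: double; } compiled with flatc --python into a myschema package; the endpoint URL is likewise made up.

    import urllib.request
    import flatbuffers
    # Hypothetical module generated by `flatc --python` from the schema above
    from myschema.Reading import (Reading, ReadingStart, ReadingAddSensor,
                                  ReadingAddValue, ReadingEnd)

    # Build the buffer: strings first, then the table itself.
    builder = flatbuffers.Builder(64)
    sensor = builder.CreateString("thermometer-1")
    ReadingStart(builder)
    ReadingAddSensor(builder, sensor)
    ReadingAddValue(builder, 21.5)
    builder.Finish(ReadingEnd(builder))
    payload = bytes(builder.Output())  # raw bytes, no XML wrapper anywhere

    # Ship the bytes as-is; any transport that carries octets will do.
    req = urllib.request.Request("http://example.com/readings", data=payload,
                                 headers={"Content-Type": "application/octet-stream"})
    urllib.request.urlopen(req)

    # The receiver accesses fields in place, with no unpacking step:
    reading = Reading.GetRootAsReading(payload, 0)
    print(reading.Sensor(), reading.Value())

The last two lines are exactly the speed argument above: the reader touches the buffer where it lies instead of deserializing it into intermediate objects.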
Instead of having a gRPC server (say, due to platform restrictions), you have a REST endpoint that returns data.SerializeToString() as the payload. Of course, any clients of this endpoint would have the appropriate proto files for each response, so they can ParseFromString(data) and be on their way. The reasons for doing this include the benefits of Protobufs.
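As a rough sketch of what that looks like from the client's side, assuming a proto message Data compiled into a Python module data_pb2 and a made-up endpoint (the requests library here is illustrative, not part of the original setup):

    import requests          # illustrative HTTP client
    import data_pb2          # hypothetical module generated from data.proto

    # Fetch the raw serialized bytes from the REST endpoint...
    resp = requests.get("http://example.com/api/data",
                        headers={"Accept": "application/x-protobuf"})

    # ...and parse them with the matching generated class.
    msg = data_pb2.Data()
    msg.ParseFromString(resp.content)

The server side is the mirror image: build the message and return msg.SerializeToString() as the response body with a binary content type.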
To restate the question: is it common to use PBs for purposes other than gRPC transport?
Yes, it is totally common and reasonable. PBs are really nothing more than a data serialization format. gRPC just uses it as its message interchange format (a natural choice, as both are Google creations). Let the answer be the description from Google itself:
Protocol buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data.
Google's basic tutorial saves it to disk. You can do anything you would do with any other binary blob (JPEG, MP3, ...).
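For instance, saving to disk is just writing the bytes out and reading them back (again with the hypothetical data_pb2 module; the field names are placeholders):

    import data_pb2  # hypothetical generated module, as above

    msg = data_pb2.Data(id=42, name="sensor-7")
    with open("data.bin", "wb") as f:
        f.write(msg.SerializeToString())

    restored = data_pb2.Data()
    with open("data.bin", "rb") as f:
        restored.ParseFromString(f.read())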
BUT! If serialization speed is really critical for you, don't assume anything. Today's JSON libs may perform unexpectedly well - it depends on your specific platforms and dominant message characteristics. Do your own performance tests. If JSON's inferiority is confirmed, then there are again libs with faster serialization than PB. To name a couple: Google's less popular PB sibling FlatBuffers, and something called Simple Binary Encoding, which was developed for high-frequency trading... which speaks for itself.
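For example, a quick-and-dirty benchmark along these lines (the Data message and the payload shape are placeholders) will tell you more about your situation than any general claim:

    import json
    import timeit
    import data_pb2  # hypothetical generated module, as above

    record = {"id": 42, "name": "sensor-7", "values": list(range(100))}
    msg = data_pb2.Data(id=42, name="sensor-7", values=range(100))

    # Serialize each 10,000 times; compare wall-clock time and output size.
    t_json = timeit.timeit(lambda: json.dumps(record), number=10000)
    t_pb = timeit.timeit(lambda: msg.SerializeToString(), number=10000)
    print(f"json: {t_json:.3f}s, {len(json.dumps(record))} bytes")
    print(f"pb:   {t_pb:.3f}s, {len(msg.SerializeToString())} bytes")

Run it with messages shaped like your real traffic; the winner often changes with message size and field types.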
The OP in this question asks about using WCF/OData as an internal data access layer.
Arguments of using WCF/OData as access layer instead of EF/L2S/nHibernate directly
The resounding reply seems to be: don't do it. I'm in a similar position to the OP, but have a concern not raised in the original question. I'm trying to develop (natively) for a lot of different platforms but want to keep as much of the data and business logic server-side as possible. So I'll have iOS/Android/Web (MVC)/desktop applications. Currently, I have a single WinForms application with an ORM data access layer (LLBLGen Pro).
I'm envisioning moving most of my business / data access logic (possibly still with LLBLGen or other ORM) behind a WCF / OData interface. Then making all my different clients on the different platforms very thin (basically UI and WCF calls).
Is this also overengineered? Am I missing a simpler solution?
I cannot see any problem with your architecture, nor would I consider it overengineered, as OData is a standard protocol and your concept conforms to the DRY principle as well.
I would turn the question around: why would you implement the same business logic in each client, introducing more possible bugs and losing the ability to fix errors in one single, centralized place? Your idea also makes it possible to implement the security layer only once.
OData is a cross-platform standard, and you can find OData libraries for every development platform (MSDN, OData.org, JayData for JavaScript). Furthermore, you can use OData FunctionImports/service methods and entity-level methods, which will simplify your queries.
If you are doing multiplatform development, you may find it more practical to choose a platform-agnostic communication protocol, such as HTTP, rather than bringing multiple drivers and ORMs to access your data sources directly. In addition, since OData is a REST protocol, you don't need much on the client side: anything that can format OData HTTP requests and parse HTTP responses (see the sketch after these notes). There are, however, a few aspects to be aware of:
An OData server is not a replacement for an SQL database. It supports batches, but they are not the same as DB transactions (although in many cases they can be used to model transactional operations). It supports parent-child relations, but it does not support JOINs in the classic SQL sense. So you have to plan what you expose as an OData entity. It's too easy to build an OData server using WCF Data Services wrapping an EF model - too easy, because people often expose low-level database content instead of building high-level domain types.
As of today, multiplatform OData clients are still under development, but they are coming. If I may suggest something I am personally working on, have a look at the Simple.Data OData adapter (https://github.com/simplefx/Simple.OData, look at its Wiki pages for examples) - it has a NuGet package. While this is a client library that only supports .NET 4.0, part of it is being extracted to be published as a portable class library, Simple.OData.Client, to support .NET 4.x, Windows Store, Silverlight 5, Windows Phone 8, Android, and iOS. In fact, if you check the winrt branch of the Git repository, you will find a multiplatform PCL already; it's just not published on NuGet yet.
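To illustrate the earlier point that the client side needs nothing beyond plain HTTP, here is what a simple OData read could look like in Python; the service root and the Orders entity set are invented, and requests is just a convenient HTTP client:

    import requests

    # $filter, $orderby and $top are standard OData query options.
    resp = requests.get(
        "http://example.com/odata/Orders",
        params={"$filter": "Total gt 100", "$orderby": "Created desc", "$top": "10"},
        headers={"Accept": "application/json"},
    )
    for order in resp.json()["value"]:  # OData JSON wraps result sets in "value"
        print(order["Id"], order["Total"])

Anything that can issue that GET and parse the JSON is, for reads, a complete OData client.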
I am wondering if there are any performance overhead issues to consider when using WCF vs. binary serialization done manually. I am building an n-tier site and wish to implement asynchronous behavior across tiers. I plan on passing data in binary form to reduce bandwidth. WCF seems to be a good shortcut compared to building your own tools, but I am wondering if there are any points to be aware of when choosing between WCF and building your own lightweight library on the System.IO namespace.
There is a binary formatter for WCF, though it's not entirely binary; it produces SOAP messages whose content is formatted using the .NET Binary Format for XML, which is a highly compacted form of XML. (Examples of what this looks like are found on this samples page.)
Alternatively, you can implement your own custom message formatter, as long as the formatter is available on both the client and server side, to format however you want. (I think you'll still have some overhead from WCF, but not much.)
My personal opinion: no amount of overhead savings you might get from defining a custom binary format, and writing all of the serialization/deserialization code to implement it manually, will ever compensate for the time and effort you will spend implementing and debugging such a mechanism.
I have decided to use Twisted for a project and have developed a server that can push data to clients on other computers. At the moment I am using dummy data for testing speed requirements but I now need to interface Twisted to my other Python DAQ application which basically collects real-time data (500 Hz) from various external devices over different transports (e.g. Bluetooth). (note: the DAQ (data acquisition) application is on the same computer as the Twisted server)
Since the DAQ application is not part of the Twisted framework, I am wondering what is the most efficient (fastest, most robust, minimal-latency) way to pass the data to the Twisted server. I have considered using a lightweight database, memcached, a queue, or even Twisted plugins, but it is hard to tell which would be the most appropriate and best fit. I should add that the DAQ application was developed before deciding to use Twisted, so I have so far considered it as separate from the Twisted network.
On the other side of the system, the client side, which resides on multiple computers, I have a similar problem. As the data streams in (I am sending lines of data, about 100 bytes each), I want to hand it off to another application which will process it for a web application. (I would prefer to use Twisted Web Service for this, but that is not my choice!) The web application is being written in Java. Once again I have considered the choices above, but since I am new to Twisted, I am not sure which is the best approach. (Note: the web application is on the same computers as the Twisted clients.)
Any advice or thoughts would be greatly appreciated.
My suggestion would be to build a simple protocol with Twisted's built-in support for AMP; you can hook this into any other languages or frameworks using one of the implementations of AMP in other languages. AMP is designed to be as easy as possible to implement, as it's just a socket with some length-prefixed strings arranged into key/value pairs.
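Here is a minimal sketch of such an AMP command on the Twisted side; the command name, field names, and port are invented for illustration:

    from twisted.internet import protocol, reactor
    from twisted.protocols import amp

    class AddSample(amp.Command):
        # One DAQ sample pushed as key/value pairs (names are illustrative).
        arguments = [(b'channel', amp.String()), (b'reading', amp.Float())]
        response = [(b'ok', amp.Boolean())]

    class DAQReceiver(amp.AMP):
        @AddSample.responder
        def add_sample(self, channel, reading):
            # Hand the sample off to the rest of the server here.
            print("got", channel, reading)
            return {'ok': True}

    factory = protocol.ServerFactory()
    factory.protocol = DAQReceiver
    reactor.listenTCP(8750, factory)  # port is arbitrary
    reactor.run()

Because the wire format is just length-prefixed key/value strings, the Java web application (or the DAQ process, if it stays outside Twisted) can speak the same protocol through any of the third-party AMP implementations.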
There are obviously a zillion different ways you could go about this, but I would first look at using a queue to pass the data to your Twisted server. If you deploy one of the many open-source queueing tools (e.g. RabbitMQ, ZeroMQ, OpenMQ, and loads of others), you should be able to write from your DAQ product using something generic like HTTP, then read into your Twisted server, also using HTTP. If you don't like HTTP, there are plenty of alternative transports to choose from - just identify which you want to use, then use that as a basis for selecting your queueing tool.
This would give you an extremely flexible solution, in that you could upgrade or change any of these products with minimal impact to anything else in the whole solution.
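As one concrete variant of that idea, a ZeroMQ PUSH/PULL pair keeps the DAQ side down to a few lines; the address and the sample format are arbitrary choices for the sketch, and the two halves would normally live in separate processes:

    import zmq

    ctx = zmq.Context()

    # DAQ side: fire-and-forget each ~100-byte line of samples.
    push = ctx.socket(zmq.PUSH)
    push.connect("tcp://127.0.0.1:5557")
    push.send(b"ch1,20.135,2013-05-01T12:00:00.000")

    # Twisted-server side: drain samples as they arrive (e.g. from a
    # thread, or via an async ZeroMQ binding such as txZMQ).
    pull = ctx.socket(zmq.PULL)
    pull.bind("tcp://127.0.0.1:5557")
    line = pull.recv()  # blocks until a sample is available

Swapping ZeroMQ for RabbitMQ or another broker later would only change these few lines, which is exactly the flexibility described above.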
We run multiple websites which use the same rich functional backend running as a library. The backend is comprised of multiple components with a lot of objects shared between them. Now, we need to separate a stateless rule execution component into a different container for security reasons. It would be great if I could have access to all the backend objects seamlessly in the rules component (rather than defining a new interface and objects/adapters).
I would like to use an RPC mechanism that will seamlessly support passing our Java POJOs (some of them are Hibernate beans) over the wire. Web-service stacks like JAXB, Axis, etc. need quite a bit of boilerplate and configuration for each object, whereas those using Java serialization seem straightforward, but I am concerned about backward/forward compatibility issues.
We are using XStream for serializing our objects into a persistence store and have been happy so far. But none of the popular RPC/web-service frameworks seem to use XStream for serialization. Is it OK to use XStream and send my objects over HTTP using my own custom implementation? Or will Java serialization just work? Or are there better alternatives?
Thanks in advance for your advice.
The good thing about standard Java serialization is that it produces a binary stream, which is quite a bit more space- and bandwidth-efficient than any of these XML serialization mechanisms. But as you wrote, XML can be friendlier with respect to backward/forward compatibility, and it's easier to parse and modify by hand and/or by scripts if the need arises. It's a trade-off; if you need long-term storage, then it's advisable to avoid plain serialization.
I'm a happy XStream user. Zero problems so far.