Object communication vs String using RabbitMQ and Mule

I'm working on a project using Mule and RabbitMQ. The scenario is that my application listens to a queue, processes the message and then sends back a response. My question is whether receiving a Java object, using Mule's "byte-array-to-object-transformer", and returning the response object would perform better than receiving JSON, transforming it into the corresponding object, transforming the response back, and returning the JSON. I think it depends on both the RabbitMQ mechanism and the Mule transformers.

Rule #1 of performance issues: You do not have a performance issue until you can prove you have one.
This said, here is an interesting bit from https://github.com/RichardHightower/json-parsers-benchmark/wiki/Serialization-Jackson-vs.-Boon-vs.-Java-Serialization :
Most people assume that Java object serialization is faster than Jackson JSON serialization because Jackson is using JSON and Java object serialization is binary. But most people are wrong. Jackson JSON serialization is much faster than built in Java object serialization.
Personally, as a rule of thumb, I try to avoid sending serialized Java objects over the wire because things can break down in horrible ways. It is way more robust to send data over the wire, for example as JSON. Sending data instead of serialized objects allows you to be very lax in the way you deal with it, for example by gracefully dealing with new/unexpected fields instead of dying in a fire because of binary incompatibility.
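To make that concrete, here is a minimal, self-contained sketch (not taken from the project in question; the OrderMessage class and its fields are made up for illustration) that round-trips the same object once with built-in Java serialization and once with Jackson, so you can compare the two yourself:

import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationComparison {

    // Hypothetical message class, purely for illustration.
    public static class OrderMessage implements Serializable {
        public String customerId = "42";
        public double amount = 19.99;
    }

    public static void main(String[] args) throws Exception {
        OrderMessage msg = new OrderMessage();

        // Option 1: built-in Java serialization -> opaque binary; both sides must
        // share binary-compatible versions of the class.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(msg);
        }
        byte[] javaBytes = bos.toByteArray();
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(javaBytes))) {
            OrderMessage viaJava = (OrderMessage) ois.readObject();
        }

        // Option 2: Jackson JSON -> readable text; non-Java consumers can parse it,
        // and unknown fields can be tolerated if you configure the mapper accordingly.
        ObjectMapper mapper = new ObjectMapper();
        byte[] jsonBytes = mapper.writeValueAsBytes(msg);
        OrderMessage viaJson = mapper.readValue(jsonBytes, OrderMessage.class);

        System.out.println("Java serialization: " + javaBytes.length + " bytes");
        System.out.println("Jackson JSON:       " + jsonBytes.length + " bytes");
    }
}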

We use RabbitMQ with serialization (provided by a separate lib). It produces shorter messages than JSON, but that might not be very important in your case.
A minus of the serialization approach is that all your objects (and all their non-transient fields) must be Serializable, which is not always possible. Also, when using serialization, you should always make sure that the sending and receiving sides speak the same language, i.e. have the same versions of the classes.
If you decide to go with serialization, check out FST - that's what we are using, and it's a great replacement for standard Java serialization. It's really easy to use and shows great results for both speed and output size.
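For reference, FST usage is roughly as follows (a sketch assuming FST 2.x, where the classes live under org.nustaq.serialization; the OrderMessage class is hypothetical):

import org.nustaq.serialization.FSTConfiguration;

import java.io.Serializable;

public class FstExample {

    // Hypothetical message class, purely for illustration.
    public static class OrderMessage implements Serializable {
        public String customerId = "42";
        public double amount = 19.99;
    }

    // FSTConfiguration is expensive to create, so reuse a single instance.
    private static final FSTConfiguration FST = FSTConfiguration.createDefaultConfiguration();

    public static void main(String[] args) {
        OrderMessage msg = new OrderMessage();

        byte[] bytes = FST.asByteArray(msg);                      // serialize to a compact binary form
        OrderMessage back = (OrderMessage) FST.asObject(bytes);   // deserialize

        System.out.println(bytes.length + " bytes, customerId=" + back.customerId);
    }
}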

Related

Jackson deserialization with multiple naming strategies

I would like my Jackson deserialization process (while consuming from a Rabbit queue) to support both snake case and camel case for messages representing the same object.
This might look weird... It would be temporary, in order to work around the release complexity of the publisher and consumer services caused by this format change.
Is there an easy way to do that with Jackson?

With Spring WebFlux, does returning a Mono<Foo> cut down on serialization cost compared to returning a fully realized Foo instance?

I am converting my data service to use the MongoDB reactive driver. The way I am querying for information (in several parts, concurrently) has allowed me to coordinate all of the activities much more efficiently and quickly.
So far, the consumers of this API are not ready to be converted, so I end up calling Mono.zip(...).blockOptional before returning the fully realized object to the REST method, which returns it to the client. But I am wondering if I could still get some benefit from returning a Mono instead, even if the consumers of my data service API are not ready to convert fully to reactive.
Would returning a Mono save on Spring web serialization/deserialization between the two services? That, currently, is the most expensive portion of the entire data flow. Or would it be basically the same cost in time and performance between returning a Mono or the object itself?
Yes, I understand the benefit of making the whole data flow entirely reactive, and I agree that is the best way to go. But, for now, I am trying to learn whether or not I can get the benefit of less serialization before going "full reactive".
There is no difference in serialisation whether you return Mono<Foo> or Foo. The difference is in whether a thread is blocked or not.
Simply put, when you return a Mono or Flux, each IO round trip will not block a thread, but as soon as your data leaves your service it will be serialised into JSON.
Also, as soon as you call block() on a reactive API, there is no benefit at all.
The main idea of reactive is not to make an API faster, it is to make your API able to handle more. So if you have one service that is fully reactive while the other consumers are not, you still get a benefit, because your service will be able to handle more consumers.
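As a rough illustration of that point (hypothetical names, assuming Spring WebFlux with Jackson on the classpath): both handlers below produce exactly the same JSON body, so there is no serialization saving; only the threading behaviour differs.

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;

@RestController
public class FooController {

    public static class Foo {
        public String name = "example";
    }

    // Blocking style, as the data service does today: block() waits for the value,
    // which forfeits the reactive benefit (and blocking a real async source is not
    // allowed on WebFlux event-loop threads).
    @GetMapping("/foo/blocking")
    public Foo fooBlocking() {
        return loadFoo().block();
    }

    // Reactive style: the framework subscribes and serialises the result to JSON
    // when the Mono completes; no thread is parked while waiting.
    @GetMapping("/foo/reactive")
    public Mono<Foo> fooReactive() {
        return loadFoo();
    }

    // Stand-in for a reactive query (e.g. the MongoDB reactive driver).
    private Mono<Foo> loadFoo() {
        return Mono.just(new Foo());
    }
}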

Decoding and encoding strings for kotlinx.serialization.properties

I'm currently struggling with the experimental KXS-properties serialization backend, mainly because of two reasons:
I can't find any documentation for it (I think there is none)
KXS-properties only includes a serializer / deserializer, but no encoder / decoder
The endpoint provided by the framework is essentially Map<String, Any>, but the map is flat and the keys already have the usual dot-separated properties syntax. So the step that I have to take is to encode the map to a single string that is printable to a .properties file AND decode a single string from a .properties file into the map. I'm generally following the Properties Format Spec from https://docs.oracle.com/javase/10/docs/api/java/util/Properties.html#load(java.io.Reader), it's not as easy as one might think.
The problem is that I can't use java.util.Properties right away, because KXS is multiplatform, and restricting it to the JVM just because I use java.util.Properties would kinda defeat the purpose. If I were to use it, the solution would be pretty simple, like this: https://gist.github.com/RaphaelTarita/748e02c06574b20c25ab96c87235096d
So I'm trying to implement my own encoder / decoder, following the rough structure of kotlinx.serialization.json.Json.kt. Although it's pretty tedious, it has gone well so far, but now I've stumbled upon a new problem:
As far as I know (I am not sure because there is no documentation), the map only contains primitives (or primitive-equivalents, as Kotlin does not really have primitives). I suspect this because when you write your own KSerializers for the KXS frontend, you can specify to encode to any primitive by invoking the encodeXXX() functions of the Encoder interface. Now the problem is: When I try to decode to the map that should contain primitives, how do I even know which primitives are expected by the model class?
I've once written my own serializer / deserializer in Java to learn about the topic, but in that implementation, the backend was a lot more tightly coupled to the frontend, so that I could query the expected primitive type from the model class in the backend. But in my situation, I don't have access to the model class and I have no clue how to retrieve the expected types.
As you can see, I've tried multiple approaches, but none of them worked right away. If you can help me to get any of these to work, that would be very much appreciated
Thank you!
The way it works in kotlinx.serialization is that there are serializers that describe classes, structures, etc., as well as code that writes/reads the properties and the structure itself. It is then the job of the format to map those operations to/from a data format.
The intended purpose of kotlinx.serialization.Properties is to support serializing a Kotlin class to/from a java.util.Properties-like structure. It is fairly simple in setup, in that every nested property is serialized by prepending the enclosing property's name to the key (the dotted properties syntax).
Unfortunately it is indeed the case that deserializing from this format requires knowing the expected types; it doesn't just read from strings. However, it is possible to determine the structure: you can use the descriptor property of the serializer to introspect the expectations.
From my perspective this format is a bit simpler than it should be, though it is a good example of a custom format. A key distinction between formats is whether they are intended just to provide a storage format, or whether the output is intended to (be able to) represent a well-designed API. The latter ones need to be more complex.

Using Zeroc Slice / Ice for Data Serialization (vs. Thrift / Protocol Buffers)

For now, all I am looking for is simple serialization / deserialization. I am not looking for transport layers or other network stack elements.
I have found that building a simple serialize/deserialize scenario is easy in Thrift and Protocol Buffers. I would like to try to do the same using Ice's Slice.
The main benefit I see is that Slice seems to support "classes" in addition to "structs". These classes support inheritance, which seems nice. I'd like to try serializing them in a simple fashion, while ignoring the rest of the transport layers, etc. which Ice provides.
I've gotten as far as running slice2java on a .ice file containing a simple class (not even with inheritance yet), but am unsure how to proceed. The generated class doesn't seem to provide a direct way to serialize itself, and I can't find documentation on how to do it using Ice libraries.
As an example, here is the PB code to do what I want:
Person p = Person.newBuilder()
        .setEmail("John@doe.com")
        .setId(1234)
        .setName("John Doe")
        .build();
// write the buffer to a file
p.writeTo(new FileOutputStream("JohnDoe.pb"));
// read it back in!
Person isItJohnDoe = Person.parseFrom(new FileInputStream("JohnDoe.pb"));
System.out.println(isItJohnDoe);
If anyone has encountered a similar problem, I'd appreciate any pointers; thank you in advance. I unfortunately don't have the time to investigate Ice / Slice as comprehensively as I would like.
According to the ZeroC documentation, all Java classes generated from Slice definitions implement the java.io.Serializable interface, so you can serialize them through that interface. You can find more details in the documentation: Serializable Objects in Java.
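For example, a minimal sketch, assuming a Slice definition along the lines of module Demo { class Person { string name; int id; string email; } } compiled with slice2java; since the generated Demo.Person implements java.io.Serializable, plain Java serialization applies:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class SlicePersonSerialization {
    public static void main(String[] args) throws Exception {
        // Slice data members become public fields on the generated class.
        Demo.Person p = new Demo.Person();
        p.name = "John Doe";
        p.id = 1234;
        p.email = "John@doe.com";

        // Write the object out exactly as with any other Serializable class.
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("JohnDoe.bin"))) {
            out.writeObject(p);
        }

        // Read it back in.
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("JohnDoe.bin"))) {
            Demo.Person isItJohnDoe = (Demo.Person) in.readObject();
            System.out.println(isItJohnDoe.name);
        }
    }
}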

WCF Data Objects Best Practice

I am in the process of migrating from web services to WCF, and rather than trying to make old code work in WCF, I am just going to rebuild the services. As a part of this process, I have not yet figured out the best design to provide easy-to-consume services that also support future changes.
My service follows the pattern below; I actually have many more methods than this, so duplication of code is an issue.
<ServiceContract()>
Public Interface IPublicApis
    <OperationContract(AsyncPattern:=False)>
    Function RetrieveQueryDataA(ByVal req As RequestA) As ResponseA
    <OperationContract(AsyncPattern:=False)>
    Function RetrieveQueryDataB(ByVal req As RequestB) As ResponseB
    <OperationContract(AsyncPattern:=False)>
    Function RetrieveQueryDataC(ByVal req As RequestC) As ResponseC
End Interface
Following this advice, I first created the schemas for the Request and Response objects. I then used SvcUtil to create the resulting classes so that I am assured the objects are consumable by other languages, and the clients will find the schemas easy to work with (no references to other schemas). However, because the Requests and Responses have similar data, I would like to use interfaces and inheritance so that I am not implementing multiple versions of the same code.
I have thought about writing my own version of the classes, using interfaces and inheritance, in a separate class library, and implementing all of the logging, security, and data-retrieval logic there. Inside each operation I will just convert the RequestA to my InternalRequestA and call InternalRequestA's process function, which will return an InternalResponseA. I will then convert that back to a ResponseA and send it to the client.
Is this idea crazy?!? I am having problems finding another solution that takes advantage of inheritance internally, but still gives clean schemas to the client that support future updates.
The contracts created by using WCF data contracts generally produce relatively straight-forward schemas that are highly interoperable. I believe this was one of the guiding principles for the design of WCF. However, this interoperability relates to the messages themselves and not the objects that some other system might produce from them. How the messages are converted to/from objects at the other end entirely depends on the other system.
We have had no real issues using inheritance with data contract objects.
So, given that you clearly have control over the schemas (i.e. they are not being specified externally) and can make good use of WCF's inbuilt data contract capabilities, I struggle to see the benefit you will get from the additional complexity and effort implied in your proposed approach.
In my view the logic associated with processing the messages should be kept entirely separate from the messages themselves.