Jackson deserialization with multiple naming strategies

I would like my Jackson deserialization process (while consuming from a RabbitMQ queue) to support both snake_case and camelCase for messages representing the same object.
This might look weird... but it would only be temporary, to work around the release complexity of the publisher and consumer services caused by this format change.
Is there an easy way to do that with Jackson?
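For illustration, one possible direction (not something the question itself settles on) is Jackson's @JsonAlias annotation, which lets a property accept alternative names during deserialization. A minimal Kotlin sketch with a hypothetical OrderEvent payload class:

import com.fasterxml.jackson.annotation.JsonAlias
import com.fasterxml.jackson.module.kotlin.jacksonObjectMapper
import com.fasterxml.jackson.module.kotlin.readValue

// Hypothetical message class: @JsonAlias lets Jackson accept the snake_case
// names in addition to the default camelCase property names when deserializing.
data class OrderEvent(
    @JsonAlias("order_id") val orderId: String,
    @JsonAlias("created_at") val createdAt: String
)

fun main() {
    val mapper = jacksonObjectMapper()
    // Both representations of the same object deserialize into the same value.
    val camel = mapper.readValue<OrderEvent>("""{"orderId":"42","createdAt":"2024-01-01"}""")
    val snake = mapper.readValue<OrderEvent>("""{"order_id":"42","created_at":"2024-01-01"}""")
    println(camel == snake) // true
}

Note that @JsonAlias only affects deserialization; serialized output keeps the mapper's configured naming strategy, which is what makes it workable as a temporary bridge.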

Related

With Spring webflux, does returning a Mono<Foo> cut down on serialization cost compared to returning a fully-realized Foo instance?

I am converting my data service to use the MongoDB reactive driver. The way I am querying for information (in several parts, concurrently) has allowed me to coordinate all of the activities much more efficiently and quickly.
So far, the consumers of this API are not ready to be converted, so I end up calling Mono.zip(...).blockOptional before returning the fully realized object to the REST method, which returns it to the client. But I am wondering if I could benefit from returning a Mono instead, and still get some benefits even if the consumers of my data service API are not ready to convert fully to reactive.
Would returning a Mono save on Spring web serialization/deserialization between the two services? That, currently, is the most expensive portion of the entire data flow. Or would the cost in time and performance be basically the same whether I return a Mono or the object itself?
Yes, I understand the benefit of making the whole data flow entirely reactive, and I agree that is the best way to go. But, for now, I am trying to learn whether or not I can get the benefit of less serialization before going "full reactive".
There is no difference in serialization whether you return Mono<Foo> or Foo. The difference is in whether a thread is blocked or not.
Simply put, when you return a Mono or Flux, each I/O trip will not block a thread, but as soon as your data leaves your service it will be serialized into JSON.
Also, as soon as you call block() on a reactive API, there are no benefits at all.
The main idea of reactive is not to make an API faster; it is to make your API handle more. So if you have one service that is fully reactive and the other consumers are not, you still benefit, because your service will be able to handle more consumers.
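As a rough sketch of that point (assuming Spring WebFlux and a hypothetical FooRepository), the two handlers below produce identical JSON on the wire; the only difference is whether the request thread waits for the data:

import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.PathVariable
import org.springframework.web.bind.annotation.RestController
import reactor.core.publisher.Mono

data class Foo(val id: String, val name: String)

// Hypothetical reactive repository, e.g. backed by the MongoDB reactive driver.
interface FooRepository {
    fun findById(id: String): Mono<Foo>
}

@RestController
class FooController(private val repo: FooRepository) {

    // Returning the Mono directly: serialization to JSON still happens when the
    // data leaves the service, but no thread is blocked while waiting for it.
    @GetMapping("/foo/{id}")
    fun getFoo(@PathVariable id: String): Mono<Foo> = repo.findById(id)

    // Blocking variant: identical payload and serialization cost, but the request
    // thread is parked until the data arrives, so the reactive benefit is lost.
    @GetMapping("/foo-blocking/{id}")
    fun getFooBlocking(@PathVariable id: String): Foo = repo.findById(id).block()!!
}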

Kotlin Coroutines: Channel vs Flow

I've recently been studying and reading a lot about Flow and Kotlin Coroutines, but I still get confused about when I should use Flow and when I should use Channel.
At the beginning it looked simpler: working with hot streams of data? Channel. Cold ones? Flow. The same goes if you need to listen to streams of data from more than a single place; in that case Channel is the way to go. There are still a lot of examples and questions along those lines.
But recently FlowChannels were introduced, together with tons of methods and classes that encourage the use of Flow and facilitate transforming Channels into Flows, and so on. With all this new stuff coming in each Kotlin release, I am getting more and more confused. So the question is:
When should I use Channel and when should I use Flow?
For many use cases where the best tool so far was Channel, Flow has become the new best tool.
As a specific example, callbackFlow is now the best approach to receiving data from a 3rd-party API's callback. This works especially well in a GUI setting. It couples the callback, a channel, and the associated receiving coroutine all in the same self-contained Flow instance. The callback is registered only while the flow is being collected. Cancellation of the flow automatically propagates into closing the channel and deregistering the callback. You just have to provide the callback-deregistering code once.
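For instance, a minimal sketch of that pattern, assuming a hypothetical LocationClient callback API:

import kotlinx.coroutines.channels.awaitClose
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.callbackFlow

// Hypothetical 3rd-party callback API.
interface LocationClient {
    fun register(listener: (Double) -> Unit)
    fun unregister(listener: (Double) -> Unit)
}

// The callback is registered only while the flow is collected; cancelling the
// collector closes the underlying channel and runs awaitClose, which deregisters it.
fun LocationClient.locations(): Flow<Double> = callbackFlow {
    val listener: (Double) -> Unit = { value -> trySend(value) }
    register(listener)
    awaitClose { unregister(listener) }
}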
You should look at Channel as a lower-level primitive that Flow uses in its implementation. Consider working with it directly only after you realize Flow doesn't fit your requirements.
In my opinion, a great explanation is Roman Elizarov's post "Cold flows, hot channels":
Channels are a great fit to model data sources that are intrinsically hot, data sources that exist without application’s requests for them: incoming network connections, event streams, etc.
Channels, just like futures, are synchronization primitives. You shall use a channel when you need to send data from one coroutine to another coroutine in the same or in a different process
But what if we don’t need either concurrency or synchronization, but need just non-blocking streams of data? We did not have a type for that until recently, so welcome Kotlin Flow type...
Unlike channels, flows do not inherently involve any concurrency. They are non-blocking, yet sequential. The goal of flows is to become for asynchronous data streams what suspending functions are for asynchronous operations — convenient, safe, easy to learn and easy to use.
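To make that distinction concrete, here is a small sketch (not from the post): a channel synchronizes two concurrently running coroutines, while a flow is a cold, sequential stream that only runs when it is collected:

import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.flow.*

fun main() = runBlocking {
    // Channel: a synchronization primitive between two concurrently running coroutines.
    val channel = Channel<Int>()
    launch {
        for (i in 1..3) channel.send(i)
        channel.close()
    }
    launch {
        for (i in channel) println("from channel: $i")
    }

    // Flow: a cold, sequential stream; nothing runs until collect is called,
    // and no extra coroutine or synchronization is involved.
    val numbers = flow {
        for (i in 1..3) {
            delay(10)
            emit(i)
        }
    }
    numbers.collect { println("from flow: $it") }
}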

Making changes to Nsb Message and SagaData classes

Both our Message and our SagaData classes contain properties that are defined in our solution's central Model project. We're now in the process of refactoring our solution so that we'll have a specific project where the properties of our NServiceBus classes are defined. We're doing this to hopefully decouple the Nsb layer from the rest of the application, and to avoid unnecessary pollution of our Nsb classes as the solution's Model project changes.
The Nsb specific Model (Nsb.Model) project will closely mirror the central Model project, and AutoMapper will take care of mapping our objects from Nsb.Model <-> Model.
I think we don't need to be too worried about refactoring our Message classes, as it should be safe enough to simply deploy this change when there are no in-flight messages (we'll have plenty of opportunities to do this).
I'm more worried about our Saga and SagaData classes. There are always going to be some Sagas running (mostly dormant, waiting for Timeouts), and I'm worried that issues could come up with already-running Sagas when we make changes to the SagaData class. The changes to the SagaData classes basically amount to referencing a new assembly (Nsb.Model) which has all the same classes as the old assembly (Model). One of the classes has been renamed in the new assembly; other than that they're all identical to the old ones.
We're using NHibernate as our persistence. I've tried single Sagas in our testing environments, deploying the changes while the Saga waits for a Timeout, and it looks like there are basically no issues with the updated assembly or the renamed class of one of its properties. However, I'm reluctant to deploy this to production without fully understanding what effects it could have and whether our application will stay healthy once it gets deployed.
NServiceBus uses NHibernate to create the schemas that represent the SagaData class. You can either rely on NHibernate trying to modify the current schema, or write migration scripts yourself.
For example, adding a property will result in an additional column that will be created by NHibernate. That column will have no value for all the saga data that is already present. Removing a property will remove the column and the data will be lost.
Modifying a complex object in a collection will cause difficulties. The best way to know whether this works for your project is to actually perform the upgrade and verify it during development and in a test environment.
I suspect you've been running for a while already; otherwise, our SQL Persister (which doesn't use an OR/M) would be worth a look. It stores data as JSON-serialized objects inside a single column and relies on the flexibility of the serializer to migrate from version to version. Our customers have had much better results with that than with NHibernate.
But as I said, an option is to look at the before and after states and create the migration scripts yourself. For complex changes, this might be the better alternative.

Object communication vs String using RabbitMQ and Mule

I'm working on a project using Mule and RabbitMQ. The scenario is that my application listens to a queue, processes the message, and then responds to it. My question is whether receiving a Java object, using Mule's "byte-array-to-object-transformer", and returning the response object might perform better than receiving JSON, transforming it into the related object, then transforming the response again and returning the JSON back. I think it depends on both the RabbitMQ mechanism and the Mule transformers.
Rule #1 of performance issues: You do not have a performance issue until you can prove you have one.
This said, here is an interesting bit from https://github.com/RichardHightower/json-parsers-benchmark/wiki/Serialization-Jackson-vs.-Boon-vs.-Java-Serialization :
Most people assume that Java object serialization is faster than Jackson JSON serialization because Jackson is using JSON and Java object serialization is binary. But most people are wrong. Jackson JSON serialization is much faster than built in Java object serialization.
Personally, as a rule of thumb, I try to avoid sending serialized Java objects over the wire because things can break down in horrible ways. It is way more robust to send data over the wire, for example as JSON. Sending data instead of serialized objects allows you to be very lax in the way you deal with it, for example by gracefully dealing with new/unexpected fields instead of dying in a fire because of binary incompatibility.
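As an illustration of that laxness (a minimal Jackson/Kotlin sketch with a hypothetical OrderMessage class), a consumer configured to ignore unknown properties keeps working when the producer starts sending an extra field:

import com.fasterxml.jackson.databind.DeserializationFeature
import com.fasterxml.jackson.module.kotlin.jacksonObjectMapper
import com.fasterxml.jackson.module.kotlin.readValue

// Hypothetical message payload shared between producer and consumer.
data class OrderMessage(val orderId: String, val amount: Double)

fun main() {
    val mapper = jacksonObjectMapper()
        // Unknown fields (e.g. added by a newer producer) are ignored instead of
        // failing the whole message, unlike a binary-incompatible class change.
        .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)

    // The producer now sends an extra "currency" field; the older consumer
    // still deserializes the message without any code change.
    val json = """{"orderId":"42","amount":9.99,"currency":"EUR"}"""
    val message: OrderMessage = mapper.readValue(json)
    println(message)
}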
We use RabbitMQ with serialization (provided by a separate lib). It gives better performance than JSON in terms of message length, but that might not be very important in your case.
A downside of the serialization approach is that all your objects (and all their non-transient fields) must be Serializable, which is not always possible. Also, when using serialization, you should always make sure that both the sending and receiving parts speak the same language, i.e. have the same versions of the classes.
If you decide to go with serialization, check out FST - that's what we are using, and it's a great replacement for the standard Java serialization. It's really easy to use and shows great results for both speed and output size.

WCF Data Objects Best Practice

I am in the process of migrating from web services to WCF, and rather than trying to make old code work in WCF, I am just going to rebuild the services. As part of this process, I have not yet figured out the best design to provide easy-to-consume services while also supporting future changes.
My service follows the pattern below; I actually have many more methods than this, so duplication of code is an issue.
<ServiceContract()>
Public Interface IPublicApis
    <OperationContract(AsyncPattern:=False)>
    Function RetrieveQueryDataA(ByVal req As RequestA) As ResponseA
    <OperationContract(AsyncPattern:=False)>
    Function RetrieveQueryDataB(ByVal req As RequestB) As ResponseB
    <OperationContract(AsyncPattern:=False)>
    Function RetrieveQueryDataC(ByVal req As RequestC) As ResponseC
End Interface
Following this advice, I first created the schemas for the Request and Response objects. I then used SvcUtil to create the resulting classes so that I am assured the objects are consumable by other languages, and the clients will find the schemas easy to work with (no references to other schemas). However, because the Requests and Responses have similar data, I would like to use interfaces and inheritance so that I am not implementing multiple versions of the same code.
I have thought about writing my own version of the classes, using interfaces and inheritance, in a separate class library, and implementing all of the logging, security, and data retrieval logic there. Inside each operation I will just convert the RequestA to my InternalRequestA and call InternalRequestA's process function, which will return an InternalResponseA. I will then convert that back to a ResponseA and send it to the client.
Is this idea crazy?!? I am having problems finding another solution that takes advantage of inheritance internally, but still gives clean schemas to the client that support future updates.
The contracts created by using WCF data contracts generally produce relatively straight-forward schemas that are highly interoperable. I believe this was one of the guiding principles for the design of WCF. However, this interoperability relates to the messages themselves and not the objects that some other system might produce from them. How the messages are converted to/from objects at the other end entirely depends on the other system.
We have had no real issues using inheritance with data contract objects.
So, given that you clearly have control over the schemas (i.e. they are not being specified externally) and can make good use of WCF's inbuilt data contract capabilities, I struggle to see what benefit you will get from the additional complexity and effort implied in your proposed approach.
In my view the logic associated with processing the messages should be kept entirely separate from the messages themselves.