com.esotericsoftware.kryo cross-version compatibility - serialization

I have a few apps in a data pipeline which use Kafka as a queuing system.
If a producer app on Java 7 using Kryo 2.22 produces to Kafka (Java 7), would a consumer app on Java 8 using Kryo 4.0 be able to deserialize the data?
In short, is data serialization/deserialization compatible across different Kryo versions?

Well, after some testing and a look through Kryo's GitHub docs, I found that data serialization/deserialization is NOT compatible across major version changes of the Kryo library.
https://github.com/EsotericSoftware/kryo :: section "Versioning Semantics, Upgrading":
"We increase the major version if serialization compatibility is broken (data serialized with the previous version cannot be deserialized with the new version)."
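
For illustration, here is a minimal round-trip sketch against the Kryo 4.x API (the Event class is a hypothetical payload; in the pipeline above it would be the Kafka message value). It works because the same Kryo major version writes and reads the bytes; bytes written by Kryo 2.22 would generally fail to deserialize this way.

    import com.esotericsoftware.kryo.Kryo;
    import com.esotericsoftware.kryo.io.Input;
    import com.esotericsoftware.kryo.io.Output;
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;

    public class KryoRoundTrip {

        // Hypothetical payload; producer and consumer must agree on its shape.
        public static class Event {
            public String id;
            public long timestamp;
        }

        public static void main(String[] args) {
            Kryo kryo = new Kryo();
            kryo.register(Event.class);

            Event event = new Event();
            event.id = "order-42";
            event.timestamp = System.currentTimeMillis();

            // Serialize (what the producer side would put on the Kafka topic).
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            Output output = new Output(buffer);
            kryo.writeObject(output, event);
            output.close();

            // Deserialize (the consumer side). This succeeds only because the
            // writer and reader share one Kryo major version; bytes written by
            // Kryo 2.22 would typically fail with a KryoException under 4.0.
            Input input = new Input(new ByteArrayInputStream(buffer.toByteArray()));
            Event copy = kryo.readObject(input, Event.class);
            input.close();

            System.out.println(copy.id + " @ " + copy.timestamp);
        }
    }

The practical takeaway: pin producers and consumers to the same Kryo major version, or drain/re-encode the data already sitting in Kafka before upgrading.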

Related

Jackson databind security vulnerabilities in ignite version 2.14.0

We are using Apache Ignite v2.14.0 in our project. We regularly check for security vulnerabilities coming from our own code base or from third-party libraries, using Aquasec for that purpose. The security scan for Ignite has shown two high-severity vulnerabilities associated with jackson-databind, which is heavily used. The version of jackson-databind used by Ignite is 2.12.7.
The CVE numbers for the vulnerabilities are:
CVE-2022-42003
CVE-2022-42004
We need to tell our security team how much impact these vulnerabilities can have on our system and what precautions we can take to mitigate them.
The Jackson jars are also used by internal libraries, so we cannot remove them entirely. Even if we override the Jackson version in the parent pom, that only works for our own codebase; Ignite will still use 2.12.7.
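
For reference, the parent-pom override mentioned above would look roughly like this (a minimal sketch; the pinned version is illustrative):

    <dependencyManagement>
      <dependencies>
        <!-- Force a patched jackson-databind for our own modules. This does
             not change the copy bundled with the Ignite distribution. -->
        <dependency>
          <groupId>com.fasterxml.jackson.core</groupId>
          <artifactId>jackson-databind</artifactId>
          <version>2.14.0</version>
        </dependency>
      </dependencies>
    </dependencyManagement>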
Jackson Databind has been upgraded to 2.14 in IGNITE-18108.
The fix should be in the Apache Ignite 2.15 release; it's better to ask the dev community for concrete dates, but most likely it will be delivered in Q1 2023.
I suppose you can do one of the following:
If you are OK with building Ignite from source, you can cherry-pick this change and build it on your own.
You can check whether GridGain Community Edition fits your needs. It has a much more frequent release cycle, and these CVEs are already fixed in GG 8.8.23.
Wait for Ignite 2.15.
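
As an interim precaution (my suggestion, not from the Ignite ticket): both CVEs are reported to be exploitable only when DeserializationFeature.UNWRAP_SINGLE_VALUE_ARRAYS is enabled, which it is not by default. Any ObjectMapper your own code constructs can keep it off explicitly, roughly like this:

    import com.fasterxml.jackson.databind.DeserializationFeature;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class SafeMapperFactory {

        // Keep UNWRAP_SINGLE_VALUE_ARRAYS off (its default). Stating it makes
        // the mitigation auditable; note this does not patch the
        // jackson-databind copy that the Ignite runtime itself loads.
        public static ObjectMapper newMapper() {
            return new ObjectMapper()
                    .disable(DeserializationFeature.UNWRAP_SINGLE_VALUE_ARRAYS);
        }
    }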

Why is the Akka.Serialization.Hyperion serialization package still in beta?

I know that Akka.Serialization.Hyperion is scheduled to become the default serializer in Akka.NET 1.4. The beta mark usually means "don't use this in production," so I was curious what the pitfalls might be around using Hyperion in production at this time.

Hortonworks vs Apache projects

I want to know the difference between installing Hortonworks HDP and installing the components directly from the Apache projects. One thing I can think of is that Hortonworks probably has the packages aligned so that the version of each component is compatible with the others in the suite, while getting them directly from the Apache projects, I may have to handle version compatibility myself. Is that correct? Is there any other difference involved, ignoring the support-subscription aspect?
Thanks.
There are a lot of differences between "roll your own" and using a distribution. Some of the most obvious include:
All of the various components and versions have been tested and built to work together - incompatibility between versions (e.g. Hive, Hadoop, Spark, etc.) can be a painful problem to sort through on your own
Most distribution providers, including Hortonworks, will bring patches in from unstable releases into stable releases, so even for the "same" version (e.g. Hive 1.2.1) you're getting a better release than vanilla - these can include both bug fixes and "safe" feature changes
Most distribution providers, including Hortonworks, provide some flavor of centralized platform management. I'm a big fan of Ambari (the one that comes with HDP), for example - it makes configuration and monitoring significantly easier than coordinating a vanilla install
I would strongly recommend against trying to deploy vanilla, unless it's just for learning and playing. The HDP community edition is free (in both senses of the word) and a major improvement over doing it yourself. My last deployment of HDP was entirely based on the community edition.

Is DataMapper available in future Mule Enterprise Edition versions?

I have a doubt: in the current version we have DataWeave, which is similar to DataMapper for transformation.
If we need DataMapper, we have to add it as a plugin.
This link, https://docs.mulesoft.com/mule-user-guide/v/3.7/datamapper-user-guide-and-reference#examples, says DataMapper is exclusive to the Enterprise Edition.
Can DataMapper be used in future versions of the Enterprise Edition (either as a plugin for standalone or by default for CloudHub)?
Does it have any chance of being deprecated in a future Enterprise version?
Thanks in advance.
DataMapper will be supported until Mule runtime version 4.0. If you are starting fresh, I would recommend going with DataWeave; otherwise you'll need to migrate at a later point.

Difference between 'Microsoft.WindowsAzure.Storage.CloudStorageAccount' and 'Microsoft.WindowsAzure.CloudStorageAccount' classes in terms of usage?

After upgrading from Azure SDK 1.7 => 1.8, we're noticing that there are two classes of essentially the same thing:
Microsoft.WindowsAzure.CloudStorageAccount (v1.7)
Microsoft.WindowsAzure.Storage.CloudStorageAccount (v1.8)
Before we migrate our code to 1.8 (we can still reference Azure SDK 1.7 and compile), does anyone know what the benefits of the newer class are and whether there is sample code for using it? This is from the perspective of Azure Diagnostics, i.e., starting and stopping on-demand transfers.
The Microsoft.WindowsAzure.Storage namespace was introduced with Storage Client Library 2.0, which ships with Azure SDK 1.8. Storage Client Library 1.7 also remains in SDK 1.8 for backward-compatibility reasons. For more information on Storage Client Library 2.0, please refer to this blog post.