How to get Cassandra Map column types with Ignite

The Ignite-Cassandra module doesn't support complex Cassandra types such as Map. Only BLOB and simple types that can be directly mapped to the corresponding Java types are supported.
What options are there for reading a Cassandra Map type through Ignite?

Take a look at the JavaDoc for the PersistenceStrategy enum.
There are three possible values: BLOB, POJO and PRIMITIVE. POJO strategy can be used to store maps of primitive types.
You can use this strategy in your persistence settings. Example: https://apacheignite-mix.readme.io/docs/examples#section-example-4

Thank you Denis,
But I want to persist a Java Map to a Cassandra Map type. The POJO strategy can only be used for objects following the Java Beans convention whose fields are of simple Java types that can be directly mapped to the corresponding Cassandra types. It doesn't support complex types such as Map or List.
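To make that limitation concrete, here is a minimal Java sketch (the class and field names are invented for illustration): a value class whose simple-typed fields the POJO strategy can map to columns, plus a hand-rolled conversion that flattens a Map into a plain String field before persisting, since the strategy itself won't handle the Map.

import java.util.HashMap;
import java.util.Map;

// Hypothetical value class: simple-typed fields like these can be mapped
// directly to Cassandra columns by the POJO persistence strategy.
public class PersonValue {
    private String name;
    private int age;

    // The POJO strategy will not map a Map field, so one workaround is to
    // flatten it into a simple column yourself (here: a delimited String).
    private String attributesSerialized;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }

    public String getAttributesSerialized() { return attributesSerialized; }
    public void setAttributesSerialized(String s) { this.attributesSerialized = s; }

    // Helper methods (deliberately not bean properties) to convert between the
    // Map used in application code and the flattened String that gets stored.
    public void fromAttributesMap(Map<String, String> attributes) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : attributes.entrySet()) {
            if (sb.length() > 0) sb.append(';');
            sb.append(e.getKey()).append('=').append(e.getValue());
        }
        this.attributesSerialized = sb.toString();
    }

    public Map<String, String> toAttributesMap() {
        Map<String, String> result = new HashMap<>();
        if (attributesSerialized == null || attributesSerialized.isEmpty()) {
            return result;
        }
        for (String pair : attributesSerialized.split(";")) {
            int idx = pair.indexOf('=');
            result.put(pair.substring(0, idx), pair.substring(idx + 1));
        }
        return result;
    }
}

Alternatively, the BLOB strategy mentioned above can store the whole value as a single serialized blob, at the cost of losing per-column access on the Cassandra side.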

Related

Decoding and encoding strings for kotlinx.serialization.properties

I'm currently struggling with the experimental KXS-properties serialization backend, mainly because of two reasons:
I can't find any documentation for it (I think there is none)
KXS-properties only includes a serializer / deserializer, but no encoder / decoder
The endpoint provided by the framework is essentially Map<String, Any>, but the map is flat and the keys already have the usual dot-separated properties syntax. So the step that I have to take is to encode the map to a single string that is printable to a .properties file AND decode a single string from a .properties file into the map. I'm generally following the Properties Format Spec from https://docs.oracle.com/javase/10/docs/api/java/util/Properties.html#load(java.io.Reader), it's not as easy as one might think.
The problem is that I can't use java.util.Properties right away, because KXS is multiplatform and it would kinda kill the purpose if I restricted it to the JVM just because I use java.util.Properties. If I were to use it, the solution would be pretty simple, like this: https://gist.github.com/RaphaelTarita/748e02c06574b20c25ab96c87235096d
So I'm trying to implement my own encoder / decoder, following the rough structure of kotlinx.serialization.json.Json.kt. Although it's pretty tedious, it has gone well so far, but now I've stumbled upon a new problem:
As far as I know (I am not sure because there is no documentation), the map only contains primitives (or primitive-equivalents, as Kotlin does not really have primitives). I suspect this because when you write your own KSerializers for the KXS frontend, you can specify to encode to any primitive by invoking the encodeXXX() functions of the Encoder interface. Now the problem is: When I try to decode to the map that should contain primitives, how do I even know which primitives are expected by the model class?
I've once written my own serializer / deserializer in Java to learn about the topic, but in that implementation, the backend was a lot more tightly coupled to the frontend, so that I could query the expected primitive type from the model class in the backend. But in my situation, I don't have access to the model class and I have no clue how to retrieve the expected types.
I've tried multiple approaches, but none of them worked right away. If you can help me get any of them to work, that would be very much appreciated.
Thank you!
The way it works in kotlinx.serialization is that serializers describe classes and structures and contain the code that writes/reads properties as well as the structure itself. It is then the job of the format to map those operations to/from a data format.
The intended purpose of kotlinx.serialization.Properties is to support serializing a Kotlin class to/from a java.util.Properties-like structure. It is fairly simple in its setup in that every nested property is serialized by prepending the parent property's name to its own (the dotted properties syntax).
Unfortunately it is indeed the case that deserializing from this format requires knowing the expected types; it doesn't just read from strings. However, it is possible to determine the structure: you can use the descriptor property of the serializer to introspect what it expects.
From my perspective this format is a bit simpler than it should be, but it is a good example of a custom format. A key distinction between formats is whether they are intended merely to provide a storage format, or whether the output is intended to (be able to) represent a well-designed API. The latter need to be more complex.

Difference between Jackson ObjectMapper and other mappers

I can't find any explanation of the difference between Jackson's ObjectMapper and other mappers like Dozer/MapStruct/ModelMapper/etc. All the articles compare Dozer/MapStruct/ModelMapper but ignore ObjectMapper. What am I missing? Is it the same kind of mapper?
Dozer, MapStruct and ModelMapper are Java-bean-to-Java-bean mapping frameworks that recursively copy data from one object to another, property by property, field by field.
On the other hand, ObjectMapper provides functionality for reading and writing JSON, either to and from basic POJOs, or to and from a general-purpose JSON tree model. ObjectMapper has some additional features, like converting objects (see the convertValue method), but that is not the main reason this class was created.
So, if you want to implement sophisticated mapping between two different models, you should use the mappers; if you want to serialise a model to JSON or deserialise a model from a JSON payload, you have to use ObjectMapper from Jackson.
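As a rough illustration of that split, here is a small self-contained Jackson example (the User class is invented for this sketch): ObjectMapper turns an object into JSON text and back, which is a different job from copying fields between two model classes.

import com.fasterxml.jackson.databind.ObjectMapper;

public class JacksonDemo {

    // Simple POJO used only for this illustration.
    public static class User {
        public String name;
        public int age;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        User user = new User();
        user.name = "Alice";
        user.age = 30;

        // Object -> JSON text
        String json = mapper.writeValueAsString(user);

        // JSON text -> object
        User parsed = mapper.readValue(json, User.class);

        System.out.println(json);          // e.g. {"name":"Alice","age":30}
        System.out.println(parsed.name);   // Alice
    }
}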
Jackson library - mainly concerned with converting objects/entities to JSON and back.
ModelMapper/MapStruct - concerned with mapping one entity to another, e.g. mapping an entity to its DTO. This operation can get pretty gnarly owing to the size and complexity of different entities, so we need these libraries to make the work easier.
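For contrast, a bean-to-bean mapping with MapStruct looks roughly like this (a minimal sketch; the Customer, CustomerDto and CustomerMapper names are made up, and MapStruct's annotation processor must be on the build path to generate the mapper implementation):

import org.mapstruct.Mapper;
import org.mapstruct.Mapping;
import org.mapstruct.factory.Mappers;

// Hypothetical entity and DTO.
class Customer {
    private String firstName;
    private String email;
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}

class CustomerDto {
    private String name;
    private String email;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}

// MapStruct generates the field-by-field copying code at compile time.
@Mapper
interface CustomerMapper {
    CustomerMapper INSTANCE = Mappers.getMapper(CustomerMapper.class);

    @Mapping(source = "firstName", target = "name")
    CustomerDto toDto(Customer customer);
}

No JSON is involved at any point; the generated implementation simply calls the getters on Customer and the corresponding setters on CustomerDto.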

How to add a custom value type in Sorm?

I see that Sorm already supports org.joda.time.DateTime. Is there a possibility to add support for other types?
For example, my case class has a java.nio.charset.Charset or Locale field, which I would like to convert to a string. Suppose I have functions to convert the custom type to/from an SQL type; how can I tell SORM to use them?
SORM's support for a given datatype is considerably more complex than just the ability to convert to and from an SQL type. Values of some types may span several columns (e.g. Tuple, Range), others may require intermediate tables (Seq, Set, Map), and all of them require an individual approach to translating query clauses. All of that would have resulted in quite a complex ad-hoc type-mapping API if one were to be exposed.
But the thing is, the above is not really the reason why such an API is not exposed and most probably never will be. SORM's philosophy is essentially all about a pure immutable data model, and the cleanest way to design one is to use Scala's standard immutable datatypes and case classes.
So the clean way to design your application with SORM would be to convert those stateful Java classes to immutable values in your application. For instance, you could implement a custom case class Charset (...) in your model, register it with SORM's Instance, and have your conversion functions translate between this type and the Java one in your application. Alternatively, you could implement this Charset as an Enumeration, which seems the most appropriate.
Concerning your point about the Joda Time support: it's there mostly because some datatype was needed to represent SQL timestamps. Think of this logic as the reverse of what you had in mind.

What is the difference between the XmlSchemaType and XmlQualifiedName classes?

Can someone tell me the difference between the XmlSchemaType and XmlQualifiedName classes? I'm a bit confused about when to choose which class. I'm using the IXmlSerializable interface for my class, and to specify its schema I used XmlSchemaProviderAttribute and pointed it at a method that can return either XmlSchemaType or XmlQualifiedName. Both work fine and I can successfully generate the proxy, but I'm unable to find a consolidated analysis of which one to use under which conditions.
As per Microsoft
XmlSchemaType Class:
The base class for all simple types and complex types.
XmlQualifiedName Class:
Represents an XML qualified name.
But I'm still unable to understand the exact difference between the two.
After some googling and reading a few articles, I've finally found the difference between these two and understood when to choose which.
There are 3 different kinds of types that can implement the IXmlSerializable interface:
Content types
Element types
Legacy DataSet types
For content types, we need to use the XmlQualifiedName class as the return type of the method named in the XmlSchemaProvider attribute; this requires that the main root element of the XSD be the complex type.
For element types, we need to use the XmlSchemaType class. Here you can specify any root element in the XSD.
For legacy DataSet types, we don't use the XmlSchemaProvider attribute at all. Instead, they rely on the GetSchema method for schema generation.
I found all this useful information in the following MSDN article, a must-read for a better understanding of how XML serialization works in WCF:
Using the XmlSerializer Class

How to store complex objects in Hadoop HBase?

I have complex objects with collection fields that need to be stored in Hadoop. I don't want to walk the whole object tree and explicitly store each field, so I'm thinking of serializing the complex fields and storing each as one big piece, then deserializing it when reading the object back. What is the best way to do this? I thought about using some kind of serialization for this, but I hope Hadoop has built-in means to handle such a situation.
Sample class of an object to store:
class ComplexClass {
    // <simple fields>
    List<AnotherComplexClassWithCollectionFields> collection;
}
HBase only deals with byte arrays, so you can serialize your object in any way you see fit.
The standard Hadoop way of serializing objects is to implement the org.apache.hadoop.io.Writable interface. Then you can serialize your object into a byte array using org.apache.hadoop.io.WritableUtils.toByteArray(Writable ... writable).
Also, there are other serialization frameworks that people in the Hadoop community use, like Avro, Protocol Buffers, and Thrift. All have their specific use cases, so do your research. If you're doing something simple, implementing Hadoop's Writable should be good enough.
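For the plain Writable route, a minimal sketch could look like the following (assuming, purely for illustration, that ComplexClass boils down to an int field plus a list of strings; a real class would write each nested object's fields the same way, or delegate to one of the frameworks mentioned above):

import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableUtils;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative Writable: one simple field plus a collection of strings.
public class ComplexClassWritable implements Writable {
    private int id;
    private List<String> items = new ArrayList<>();

    // No-arg constructor so Hadoop can instantiate the class reflectively.
    public ComplexClassWritable() { }

    public ComplexClassWritable(int id, List<String> items) {
        this.id = id;
        this.items = items;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(id);
        out.writeInt(items.size());
        for (String item : items) {
            out.writeUTF(item);
        }
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        id = in.readInt();
        int size = in.readInt();
        items = new ArrayList<>(size);
        for (int i = 0; i < size; i++) {
            items.add(in.readUTF());
        }
    }

    public static void main(String[] args) {
        ComplexClassWritable value = new ComplexClassWritable(42, Arrays.asList("a", "b"));

        // Turn the whole object into a single byte[] suitable for storing in an HBase cell.
        byte[] bytes = WritableUtils.toByteArray(value);
        System.out.println("serialized length: " + bytes.length);
    }
}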