For now, all I am looking for is simple serialization / deserialization. I am not looking for transport layers or other network stack elements.
I have found that building a simple serialize/deserialize scenario is easy in Thrift and Protocol Buffers. I would like to try to do the same using Ice's Slice.
The main benefit I see is that Slice seems to support "classes" in addition to "structs". These classes support inheritance, which seems nice. I'd like to try serializing these in a simple fashion, while ignoring the rest of the transport layers, etc. which Ice provides.
I've gotten as far as running slice2java on a .ice file containing a simple class (not even with inheritance yet), but am unsure how to proceed. The generated class doesn't seem to provide a direct way to serialize itself, and I can't find documentation on how to do it using Ice libraries.
As an example, here is the PB code to do what I want:
Person p = Person.newBuilder().
setEmail("John@doe.com").
setId(1234).
setName("John Doe").build();
//write the buffer to a file.
p.writeTo(new FileOutputStream("JohnDoe.pb"));
//read it back in!
Person IsItJohnDoe = Person.parseFrom(new FileInputStream("JohnDoe.pb"));
System.out.println(IsItJohnDoe);
If anyone has encountered a similar problem, thank you in advance. I unfortunately don't have the time to investigate Ice/Slice as comprehensively as I would like.
According to the ZeroC documentation, all Java classes generated from Slice definitions implement the java.io.Serializable interface, so you can serialize them using that interface. More details can be found in the documentation: Serializable Objects in Java
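As a minimal sketch of that approach: the Person class below is a hand-written stand-in for the slice2java output (the real generated class already implements Serializable), and the round trip goes through a byte buffer, though a FileOutputStream works the same way.

```java
import java.io.*;

class SliceSerializationSketch {
    // Stand-in for a slice2java-generated class; the real generated
    // class already implements java.io.Serializable.
    static class Person implements Serializable {
        private static final long serialVersionUID = 1L;
        String name;
        String email;
        int id;
        Person(String name, String email, int id) {
            this.name = name;
            this.email = email;
            this.id = id;
        }
    }

    public static void main(String[] args) throws Exception {
        Person p = new Person("John Doe", "John@doe.com", 1234);

        // Write the object graph to a byte buffer.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(p);
        }

        // Read it back in.
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()))) {
            Person copy = (Person) in.readObject();
            System.out.println(copy.name + " <" + copy.email + "> id=" + copy.id);
        }
    }
}
```

Note that this is plain Java serialization, so the output format is Java's, not Ice's on-the-wire encoding.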
I'm currently struggling with the experimental KXS-properties serialization backend, mainly because of two reasons:
I can't find any documentation for it (I think there is none)
KXS-properties only includes a serializer / deserializer, but no encoder / decoder
The endpoint provided by the framework is essentially Map<String, Any>, but the map is flat and the keys already use the usual dot-separated properties syntax. So the step I have to take is to encode the map to a single string that can be written to a .properties file AND decode such a string from a .properties file back into the map. I'm generally following the Properties format spec from https://docs.oracle.com/javase/10/docs/api/java/util/Properties.html#load(java.io.Reader); it's not as easy as one might think.
The problem is that I can't use java.util.Properties right away, because KXS is multiplatform, and restricting it to the JVM by depending on java.util.Properties would rather defeat its purpose. If I could use it, the solution would be pretty simple, like this: https://gist.github.com/RaphaelTarita/748e02c06574b20c25ab96c87235096d
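To give a feel for what the hand-rolled encoding step involves, here is a minimal sketch in plain Java (stdlib string handling only, so it ports to Kotlin common code almost line-for-line). It covers only the common escapes, not the full spec (no Unicode \uXXXX escapes, no line continuations, no value-side escaping of leading whitespace); the class and method names are made up for illustration.

```java
import java.util.*;

class PropertiesEncoderSketch {
    // Escape the characters that the properties format treats specially
    // in keys. Escaping '#' and '!' everywhere (not just at line start)
    // is stricter than required, but always safe.
    static String escapeKey(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            switch (c) {
                case '=': case ':': case ' ': case '#': case '!':
                    sb.append('\\').append(c); break;
                case '\n': sb.append("\\n"); break;
                case '\t': sb.append("\\t"); break;
                case '\\': sb.append("\\\\"); break;
                default: sb.append(c);
            }
        }
        return sb.toString();
    }

    // Encode a flat dotted-key map as .properties text, one entry per line.
    static String encode(Map<String, Object> flat) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Object> e : flat.entrySet()) {
            sb.append(escapeKey(e.getKey()))
              .append('=')
              .append(String.valueOf(e.getValue()))
              .append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, Object> flat = new LinkedHashMap<>();
        flat.put("person.name", "John Doe");
        flat.put("person.id", 1234);
        System.out.print(encode(flat));
    }
}
```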
So I'm trying to implement my own encoder / decoder, following the rough structure of kotlinx.serialization.json.Json.kt. Although it's pretty tedious, it has gone well so far, but now I've stumbled upon a new problem:
As far as I know (I am not sure, because there is no documentation), the map only contains primitives (or primitive equivalents, as Kotlin does not really have primitives). I suspect this because when you write your own KSerializers for the KXS frontend, you can choose to encode to any primitive by invoking the encodeXXX() functions of the Encoder interface. Now the problem is: when I try to decode into the map that should contain primitives, how do I even know which primitives are expected by the model class?
I've once written my own serializer / deserializer in Java to learn about the topic, but in that implementation, the backend was a lot more tightly coupled to the frontend, so that I could query the expected primitive type from the model class in the backend. But in my situation, I don't have access to the model class and I have no clue how to retrieve the expected types.
As you can see, I've tried multiple approaches, but none of them worked right away. If you can help me get any of these to work, that would be very much appreciated.
Thank you!
The way it works in kotlinx.serialization is that there are serializers that describe classes, structures, etc., as well as code that writes/reads individual properties and the structure as a whole. It is then the job of the format to map those operations to/from a data format.
The intended purpose of kotlinx.serialization.Properties is to support serializing a Kotlin class to/from a java.util.Properties-like structure. It is fairly simple in setup: every nested property is serialized by prepending the parent property's name (the dotted properties syntax).
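As a rough illustration of that flattening, here is a hypothetical sketch (plain Java on a nested map, not the library's actual implementation, which operates on serializer descriptors rather than maps):

```java
import java.util.*;

class DottedPropertiesSketch {
    // Flatten a nested map by prepending parent names with dots,
    // roughly what kotlinx.serialization.Properties does for nested classes.
    static void flatten(String prefix, Map<String, Object> node, Map<String, Object> out) {
        for (Map.Entry<String, Object> e : node.entrySet()) {
            String key = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
            if (e.getValue() instanceof Map) {
                @SuppressWarnings("unchecked")
                Map<String, Object> child = (Map<String, Object>) e.getValue();
                flatten(key, child, out); // recurse into nested structure
            } else {
                out.put(key, e.getValue()); // leaf: primitive-like value
            }
        }
    }

    public static void main(String[] args) {
        Map<String, Object> address = new LinkedHashMap<>();
        address.put("city", "Vienna");
        Map<String, Object> person = new LinkedHashMap<>();
        person.put("name", "John");
        person.put("address", address);
        Map<String, Object> root = new LinkedHashMap<>();
        root.put("person", person);

        Map<String, Object> flat = new LinkedHashMap<>();
        flatten("", root, flat);
        System.out.println(flat);
    }
}
```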
Unfortunately, deserializing from this format does indeed require knowing the expected types; it doesn't just read from strings. However, it is possible to determine the structure: you can use the descriptor property of the serializer to introspect what it expects.
From my perspective this format is a bit simpler than it should be, but it is a good example of a custom format. A key distinction between formats is whether they are intended just to provide a storage format, or whether the output is intended to (be able to) represent a well-designed API. The latter need to be more complex.
I'm not so much seeking a specific implementation but trying to figure out the proper terms for what I'm trying to do so I can properly research the topic.
I have a bunch of interfaces, and those interfaces are implemented by controllers, repositories, services and whatnot. Somewhere in the start-up process of the application we're using the Castle.MicroKernel.Registration.Component class to register the classes to use for a particular interface. For instance:
Component.For<IPaginationService>().ImplementedBy<PaginationService>().LifeStyle.Transient
Recently I became interested in creating an audit trail of every class and method call. There are a few hundred of these classes, so writing a proxy class for each one by hand isn't very practical. I could use a template to generate the code, but I'd rather not blow up our code base with all that.
So I'm curious if there's some kind of on-the-fly solution. I know NHibernate creates proxy classes at some point which overlay all the entity classes. Can someone give me some guidance on how I might be able to do something similar here?
Something like:
Component.For<IPaginationService>().ImplementedBy<ProxyFor<PaginationService>>().LifeStyle.Transient
Obviously that won't work because I can only use generics to generalize the types of methods but not the methods themselves. Is there some tricky reflection approach I can use to do this?
You are looking for what Castle Windsor calls interceptors. It's an aspect-oriented way to tackle cross-cutting concerns, and auditing is certainly one of them. See the documentation, or an article about the approach:
Aspect-oriented programming is an approach that effectively “injects” pieces of code before or after an existing operation. This works by defining an Interceptor that wraps the logic being invoked, then registering it to run whenever a particular set/sub-set of methods is called.
If you want to apply it to many registered services, read more about interceptor selection mechanisms: IModelInterceptorsSelector helps there.
Using PostSharp, things like this can even be done at compile time. This can speed up the resulting application, but when used correctly, interceptors are not slow anyway.
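The question is about Castle Windsor on .NET, but the interception idea itself is easy to see with the JVM's built-in java.lang.reflect.Proxy. The sketch below is an illustrative analogue, not Windsor code; the service names mirror the question's example:

```java
import java.lang.reflect.*;

class AuditProxySketch {
    interface IPaginationService {
        int pageCount(int items, int pageSize);
    }

    static class PaginationService implements IPaginationService {
        public int pageCount(int items, int pageSize) {
            return (items + pageSize - 1) / pageSize; // ceiling division
        }
    }

    // Wrap any interface implementation in an auditing proxy,
    // analogous to registering a Windsor interceptor for it.
    @SuppressWarnings("unchecked")
    static <T> T audited(Class<T> iface, T target) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[] { iface },
                (proxy, method, args) -> {
                    // cross-cutting concern: log every call before delegating
                    System.out.println("AUDIT: " + method.getName());
                    return method.invoke(target, args);
                });
    }

    public static void main(String[] args) {
        IPaginationService svc = audited(IPaginationService.class, new PaginationService());
        System.out.println(svc.pageCount(45, 10));
    }
}
```

The key point carries over: the proxy is generated at runtime from the interface alone, so no per-class proxy code needs to be written.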
I'm interested to know what data is held in the Metadata annotation added to each Kotlin class.
But most fields give no more detail than
"Metadata in a custom format. The format may be different (or even absent) for different kinds."
https://github.com/JetBrains/kotlin/blob/master/libraries/stdlib/jvm/runtime/kotlin/Metadata.kt
Is there a reference somewhere that explains how to interpret this data?
kotlin.Metadata contains information about Kotlin symbols, such as their names, signatures, relations between types, etc. Some of this information is already present in the JVM signatures in the class files, but a lot is not, since there are quite a few Kotlin-specific things which JVM class files cannot represent properly: type nullability, mutable/read-only collection interfaces, declaration-site variance, and others.
No specific actions were taken to make the schema of the data encoded in this annotation public, because for most users such data is needed to introspect a program at runtime, and the Kotlin reflection library provides a nice API for that.
If you need to inspect Kotlin-specific stuff which is not exposed via the reflection API, or you're just generally curious what else is stored in that annotation, you can take a look at the implementation of kotlinx.reflect.lite. It's a light-weight library, the core of which is the protobuf-generated schema parser. There's not much supported there at the moment, but there are schemas available which you can use to read any other data you need.
UPD (August 2018): since this was answered, we've published a new (experimental and unstable) library, which is designed to be the intended way for reading and modifying the metadata: https://discuss.kotlinlang.org/t/announcing-kotlinx-metadata-jvm-library-for-reading-modifying-metadata-of-kotlin-jvm-class-files/7980
I'm working on a project using Mule and RabbitMQ. The scenario is that my application listens to a queue, processes the message, and then responds to it. My question is whether receiving a Java object, using Mule's byte-array-to-object-transformer, and returning the response object would perform better than receiving JSON, transforming it into the corresponding object, then transforming the response again and returning the JSON back. I think it depends on both the RabbitMQ mechanism and the Mule transformers.
Rule #1 of performance issues: You do not have a performance issue until you can prove you have one.
This said, here is an interesting bit from https://github.com/RichardHightower/json-parsers-benchmark/wiki/Serialization-Jackson-vs.-Boon-vs.-Java-Serialization :
Most people assume that Java object serialization is faster than Jackson JSON serialization because Jackson is using JSON and Java object serialization is binary. But most people are wrong. Jackson JSON serialization is much faster than built-in Java object serialization.
Personally, as a rule of thumb, I try to avoid sending serialized Java objects over the wire because things can break down in horrible ways. It is far more robust to send data over the wire, for example as JSON. Sending data instead of serialized objects allows you to be very lax in how you deal with it, for example by gracefully handling new/unexpected fields instead of dying in a fire because of binary incompatibility.
We use RabbitMQ with serialization (provided by a separate lib). It gives better performance than JSON in terms of message length, but that might not be very important in your case.
A minus of the serialization approach is that all your objects (and all their non-transient fields) must be Serializable, which is not always possible. Also, when using serialization, you should always make sure that both the sending and receiving parts speak the same language, i.e. have the same versions of the classes.
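The two caveats above (non-transient fields must survive the wire, and both sides must agree on the class version) can be sketched like this; the class and field names are made up for illustration:

```java
import java.io.*;

class WireSerializationSketch {
    static class Message implements Serializable {
        // Pin the version so sender and receiver can detect class drift:
        // if the receiver's class declares a different serialVersionUID,
        // deserialization fails with InvalidClassException.
        private static final long serialVersionUID = 1L;
        String payload;
        transient String cachedDigest; // transient: not sent over the wire
        Message(String payload, String cachedDigest) {
            this.payload = payload;
            this.cachedDigest = cachedDigest;
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate the sending side.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(new Message("hello", "abc123"));
        }
        // Simulate the receiving side: the transient field comes back null.
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()))) {
            Message m = (Message) in.readObject();
            System.out.println(m.payload + " / " + m.cachedDigest);
        }
    }
}
```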
If you decide to go with serialization, check out FST; that's what we are using, and it's a great replacement for standard Java serialization. It's really easy to use and shows great results for both speed and output size.
I'm very new to Akka. I'm designing a modular system in Akka, and I'm looking for a way to define an API for each module.
Normally in Java I'd write a bunch of beans and some interfaces that accept and produce those beans. From what I gather, in Akka, message types replace the beans, and there seems to be no equivalent of an interface (or something to help the compiler enforce a "what happens when" or "what can happen when" contract).
I would welcome any advice or best practice on what is the best way to write the most coherent API. If the compiler can understand it, it's a serious bonus.
The API exposed by an Actor (or a collection of collaborating Actors) is defined by the set of message types that it accepts. My recommendation is to keep these message classes close to the Actor, e.g. as static inner classes of the UntypedActor class (for Java) or in the Actor’s companion object (for Scala). For larger actor hierarchies implementing a single interface, I would recommend placing all the message classes with the “head actor” (the entry point to the hierarchy) or in a separate class that has a descriptive name and is otherwise empty. Having them as top-level classes can easily lead to name clashes that, in Java, can only be resolved by using fully-qualified class names.
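A minimal Java sketch of the "messages as static inner classes" recommendation; no Akka dependency is needed to show the shape, and the names are hypothetical:

```java
import java.util.Collections;
import java.util.List;

// A "protocol" class that is otherwise empty, holding the message types
// accepted by one actor (or actor hierarchy).
class PaginatorProtocol {
    // Immutable message types as static inner classes.
    public static final class GetPage {
        public final int pageNumber;
        public GetPage(int pageNumber) { this.pageNumber = pageNumber; }
    }

    public static final class PageResult {
        public final List<String> items;
        public PageResult(List<String> items) {
            // defensive immutability: messages should never be mutated
            this.items = Collections.unmodifiableList(items);
        }
    }

    public static void main(String[] args) {
        // Callers refer to messages as PaginatorProtocol.GetPage etc.,
        // which avoids name clashes between actors' protocols.
        GetPage msg = new GetPage(3);
        System.out.println("GetPage " + msg.pageNumber);
    }
}
```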
The compiler currently cannot help you avoid the mistake of sending the wrong message to an ActorRef, because that reference is oblivious to the kind of actor it represents. We are researching ways to tackle this problem; you can take a look at the TypedChannels experiment (Scala only), and later this year we will start working on a simpler solution that also supports Java (codename “Akka Gålbma”, see the roadmap).