I need to serialize a complex type (template.Template). It has many unexported fields, and gob doesn't want to work with them. Any suggestions?
P.S. Actually, I'm trying to put a parsed template into memcache on App Engine.
The short answer is that there's usually a reason for unexported fields (template.Template, for instance, contains information which changes during parsing), so I'd advise against serializing them yourself with reflect. However, the GobEncoder and GobDecoder interfaces were recently added to gob; if you need to serialize a complex struct with unexported fields, encourage the author of the package to implement these interfaces. Even better, implement them yourself (it shouldn't be hard for template.Template) and contribute your patch.
If the type is from another package (such as template) this can't be done with any of the current serialization libs for Go (gob, json, bson, etc.). Nor should it be done, because the fields are unexported.
However, if you really need to, you can write your own serializer using package reflect, specifically Value.Field() and friends to get the unexported fields. Then you just need to store them in a way that you can decode later.
I'm currently struggling with the experimental KXS-properties serialization backend, mainly for two reasons:
I can't find any documentation for it (I think there is none)
KXS-properties only includes a serializer / deserializer, but no encoder / decoder
The endpoint provided by the framework is essentially Map<String, Any>, but the map is flat and the keys already use the usual dot-separated properties syntax. So the step that I have to take is to encode the map into a single string that can be written to a .properties file AND to decode a single string from a .properties file back into the map. I'm generally following the Properties format spec from https://docs.oracle.com/javase/10/docs/api/java/util/Properties.html#load(java.io.Reader); it's not as easy as one might think.
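To make the encoding half concrete, this is roughly the shape of what I'm implementing (a simplified sketch with hypothetical helper names; it only handles the most common escapes from the spec):

```kotlin
// Flat map -> .properties text. Only the common escapes are covered here;
// the full spec (unicode escapes, line continuations, leading whitespace)
// needs more work.
fun encodeToPropertiesText(map: Map<String, Any>): String =
    map.entries.joinToString("\n") { (key, value) ->
        "${escapeKey(key)}=${escapeValue(value.toString())}"
    }

private fun escapeKey(key: String): String =
    key.replace("\\", "\\\\") // escape backslashes first
        .replace(" ", "\\ ")
        .replace("=", "\\=")
        .replace(":", "\\:")

private fun escapeValue(value: String): String =
    value.replace("\\", "\\\\")
        .replace("\n", "\\n")
        .replace("\r", "\\r")
        .replace("\t", "\\t")
```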
The problem is that I can't use java.util.Properties right away, because KXS is multiplatform, and restricting it to the JVM just because I use java.util.Properties would kind of defeat the purpose. If I were to use it, the solution would be pretty simple, like this: https://gist.github.com/RaphaelTarita/748e02c06574b20c25ab96c87235096d
So I'm trying to implement my own encoder / decoder, following the rough structure of kotlinx.serialization.json.Json.kt. Although it's pretty tedious, it has gone well so far, but now I've stumbled upon a new problem:
As far as I know (I am not sure, because there is no documentation), the map only contains primitives (or primitive equivalents, as Kotlin does not really have primitives). I suspect this because when you write your own KSerializers for the KXS frontend, you can specify encoding to any primitive by invoking the encodeXXX() functions of the Encoder interface. Now the problem is: when I try to decode into the map that should contain primitives, how do I even know which primitives are expected by the model class?
I once wrote my own serializer / deserializer in Java to learn about the topic, but in that implementation the backend was much more tightly coupled to the frontend, so I could query the expected primitive type from the model class in the backend. In my current situation, I don't have access to the model class and have no clue how to retrieve the expected types.
As you can see, I've tried multiple approaches, but none of them worked right away. If you can help me to get any of these to work, that would be very much appreciated
Thank you!
The way it works in kotlinx.serialization is that there are serializers that describe classes and structures, and that contain the code that writes/reads the properties as well as the structure itself. It is then the job of the format to map those operations to/from a data format.
The intended purpose of kotlinx.serialization.Properties is to support serializing a Kotlin class to/from a java.util.Properties-like structure. It is fairly simple in setup, in that every nested property is serialized by prepending the parent property's name (the dotted properties syntax).
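For example (a minimal sketch, assuming the kotlinx-serialization-properties artifact and the serialization compiler plugin):

```kotlin
import kotlinx.serialization.ExperimentalSerializationApi
import kotlinx.serialization.Serializable
import kotlinx.serialization.properties.Properties
import kotlinx.serialization.properties.encodeToMap

@Serializable
data class Server(val host: String, val port: Int)

@Serializable
data class Config(val name: String, val server: Server)

@OptIn(ExperimentalSerializationApi::class)
fun main() {
    // Nested properties are flattened into dotted keys.
    val map = Properties.encodeToMap(Config("demo", Server("localhost", 8080)))
    println(map) // {name=demo, server.host=localhost, server.port=8080}
}
```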
Unfortunately it is indeed the case that deserializing from this format requires knowing the expected types; it doesn't just read from strings. However, it is possible to determine the structure: you can use the descriptor property of the serializer to introspect the expectations.
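For instance, a sketch of that introspection using the public descriptor API (collections and nullable types would need extra handling, which is omitted here):

```kotlin
import kotlinx.serialization.ExperimentalSerializationApi
import kotlinx.serialization.descriptors.PrimitiveKind
import kotlinx.serialization.descriptors.SerialDescriptor

// Walk a class descriptor and collect the primitive kind expected at each
// dotted key, so the plain strings read from a .properties file can be
// converted to the right types before decoding.
@OptIn(ExperimentalSerializationApi::class)
fun expectedKinds(
    descriptor: SerialDescriptor,
    prefix: String = "",
    out: MutableMap<String, PrimitiveKind> = linkedMapOf()
): Map<String, PrimitiveKind> {
    for (i in 0 until descriptor.elementsCount) {
        val name = descriptor.getElementName(i)
        val key = if (prefix.isEmpty()) name else "$prefix.$name"
        val child = descriptor.getElementDescriptor(i)
        val kind = child.kind
        if (kind is PrimitiveKind) out[key] = kind else expectedKinds(child, key, out)
    }
    return out
}
```

For the Config example above, expectedKinds(serializer<Config>().descriptor) reports STRING for name and server.host and INT for server.port, which is enough to convert the raw strings before handing the map to decodeFromMap.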
From my perspective this format is a bit simpler than it should be, but it is a good example of a custom format. A key distinction between formats is whether they are intended just to provide a storage format, or whether the output is intended to (be able to) represent a well-designed API. The latter need to be more complex.
I'm trying to do some polymorphic deserialization of JSON using Jackson; however, the list of subclasses is unknown at compile time, so I can't use a @JsonSubTypes annotation on the base class.
Instead I want to use a TypeIdResolver class to perform the conversion to and from a property value.
The list of possible subclasses I might encounter is dynamic, but they are all registered at run time with a registry. So my TypeIdResolver object needs a reference to that registry class. It has to operate in what is essentially a dependency injection environment (i.e. I can't have a singleton class that the TypeIdResolver consults), so I think I need to inject the registry class into the TypeIdResolver. The kind of code I think I want to write is:
```java
ObjectMapper mapper = new ObjectMapper();
mapper.something(new MyTypeIdResolver(subclassRegistry));
mapper.readValue(...)
```
However, I can't find a way of doing the bit in the middle. The only methods I can find use Java annotations to specify what the TypeIdResolver is going to be.
This question, "Is there a way to specify @JsonTypeIdResolver on mapper config instead of annotation?", is the same, though the motivation is different, and the answer is to use an annotation mixin, which won't work here.
SimpleModule has the method registerSubtypes(), with which you can register subtypes. If you only pass Classes, the simple class name is used as the type id, but you can also pass NamedTypes to define the type id to use for each subclass.
So, if you do know the full set at runtime, just build a SimpleModule and register it with the mapper, along the lines of the sketch below.
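In Kotlin, for instance (a sketch; subclassRegistry is a stand-in for the asker's runtime registry of type ids to classes):

```kotlin
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.databind.jsontype.NamedType
import com.fasterxml.jackson.databind.module.SimpleModule

// Build a mapper with every dynamically discovered subtype registered
// under the type id it was registered with.
fun buildMapper(subclassRegistry: Map<String, Class<*>>): ObjectMapper {
    val module = SimpleModule()
    for ((typeId, clazz) in subclassRegistry) {
        module.registerSubtypes(NamedType(clazz, typeId))
    }
    return ObjectMapper().registerModule(module)
}
```

This works together with @JsonTypeInfo(use = JsonTypeInfo.Id.NAME) on the base class, so no custom TypeIdResolver should be needed for the name lookup.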
Otherwise, if this does not work, you may need to resort to sharing data via a static singleton instance (if applicable), or even a ThreadLocal.
Note that in the end what I did was abandon Jackson and write my own much simpler framework based on javax.json, one that did just the kinds of serialisation I wanted in a more straightforward fashion. I was only dealing with simple DTO (data transfer object) classes, so rolling my own was the easier option.
I'm interested to know what data is held in the @Metadata annotation added to each Kotlin class.
But most fields give no more detail than
"Metadata in a custom format. The format may be different (or even absent) for different kinds."
https://github.com/JetBrains/kotlin/blob/master/libraries/stdlib/jvm/runtime/kotlin/Metadata.kt
Is there a reference somewhere that explains how to interpret this data?
kotlin.Metadata contains information about Kotlin symbols, such as their names, signatures, relations between types, etc. Some of this information is already present in the JVM signatures in the class files, but a lot is not, since there are quite a few Kotlin-specific things which JVM class files cannot represent properly: type nullability, mutable/read-only collection interfaces, declaration-site variance, and others.
No specific actions were taken to make the schema of the data encoded in this annotation public, because for most users such data is needed to introspect a program at runtime, and the Kotlin reflection library provides a nice API for that.
If you need to inspect Kotlin-specific stuff which is not exposed via the reflection API, or you're just generally curious what else is stored in that annotation, you can take a look at the implementation of kotlinx.reflect.lite. It's a lightweight library, the core of which is the protobuf-generated schema parser. There's not much supported there at the moment, but there are schemas available which you can use to read any other data you need.
UPD (August 2018): since this was answered, we've published a new (experimental and unstable) library which is intended to be the supported way of reading and modifying the metadata: https://discuss.kotlinlang.org/t/announcing-kotlinx-metadata-jvm-library-for-reading-modifying-metadata-of-kotlin-jvm-class-files/7980
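For example, a rough sketch with that library, kotlinx-metadata-jvm (it is experimental and its API has shifted between releases, so treat the exact names as assumptions about one particular version):

```kotlin
import kotlinx.metadata.jvm.KotlinClassHeader
import kotlinx.metadata.jvm.KotlinClassMetadata

// Read the @Metadata annotation from a compiled class and print the names
// of the Kotlin functions declared in it.
fun printKotlinFunctions(clazz: Class<*>) {
    val meta = clazz.getAnnotation(Metadata::class.java) ?: return
    val header = KotlinClassHeader(
        kind = meta.kind,
        metadataVersion = meta.metadataVersion,
        data1 = meta.data1,
        data2 = meta.data2,
        extraString = meta.extraString,
        packageName = meta.packageName,
        extraInt = meta.extraInt
    )
    val parsed = KotlinClassMetadata.read(header)
    if (parsed is KotlinClassMetadata.Class) {
        parsed.toKmClass().functions.forEach { println(it.name) }
    }
}
```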
I've been working with NHibernate for quite a while and have come to realize that my architecture might be a bit...dated. This is for an NHibernate library that is living behind several apps that are related to each other.
First off, I have a DomainEntity base class with an ID and an EntityID (which is a GUID that I use when I expose an item to the web or through an API, instead of the internal integer id; I think I had a reason for it at one point, but I'm not sure it's still valid). I have a Repository<T> base class, where T inherits from DomainEntity, that provides a set of generalized search methods. The inheritors of DomainEntity may also implement several interfaces that track things like created date, created by, etc., which are largely a log of the most recent changes to the object. I'm not fond of using a repository pattern for this, but it wraps the logic of setting those values when an object is saved (provided the object is saved through the repository and not saved as part of saving something else).
I would like to rid myself of the repositories. They don't make me happy and really seem like clutter these days, especially now that I'm not querying with HQL and can use extension methods on the Session object. But how do I hook up this sort of functionality cleanly? Let's assume for the purposes of discussion that I have something like StructureMap set up and capable of returning an object that exposes context information (current user and the like); what would be a good, flexible way of providing this functionality outside of the repository structure? Bonus points if this can be hooked up along with a convention-based mapping setup (I'm looking into replacing the XML files as well).
If you dislike the fact that repositories can become bloated over time, then you may want to use something like Query Objects.
The basic idea is that you break down a single query into an individual object that you can then apply to the database.
Some example implementation links here.
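For illustration, a minimal sketch of the pattern (in Kotlin, since the pattern itself is language-agnostic; Session and User are stand-ins for the ORM session and a mapped entity):

```kotlin
import java.time.LocalDate

// Stand-ins: in NHibernate these would be ISession and a mapped domain class.
data class User(val name: String, val lastLogin: LocalDate)

class Session(private val users: List<User>) {
    fun allUsers(): List<User> = users // a real session would issue a query
}

// The Query Object pattern: one query, one small object.
interface Query<T> {
    fun execute(session: Session): List<T>
}

class ActiveUsersSince(private val cutoff: LocalDate) : Query<User> {
    override fun execute(session: Session): List<User> =
        session.allUsers().filter { it.lastLogin >= cutoff }
}

fun main() {
    val session = Session(listOf(
        User("ada", LocalDate.of(2024, 1, 10)),
        User("bob", LocalDate.of(2020, 3, 2))
    ))
    println(ActiveUsersSince(LocalDate.of(2023, 1, 1)).execute(session))
}
```

Each query lives in its own class, so the repository no longer accumulates one method per query.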
I have created an application, using ARC, that parses data from an online XML file. I am able to get everything I need using one class and one call to the API that provides the XML data. Due to the large XML file, I have a lot of variables, IBOutlets, and IBActions associated with this class.
But there are two approaches to this:
1) create a class which parses the XML data and also uses that data for your application, i.e. one class that does everything (as I have already done)
or
2) create a class which parses the XML data and create other classes which handle the data obtained from the XML parser class, i.e. one class does the parsing and another class implements that data
Note that some APIs that provide XML data track the number of calls per minute or per day to their service, so you would not want several classes calling the API; it is better to make one request that receives all the data you need.
So is it better to use several smaller classes to handle the xml data or is it fine to just use one large class to do everything?
When in doubt, smaller classes are better.
2) create a class which parses the XML data and create other classes which handle the data obtained from the XML parser class, i.e. one class does the parsing and another class implements that data
One key advantage of this is that the thing that the latter class models is separate from the parsing work that the former class does. This becomes important:
As Peter Willsey said, when your XML parser changes. For example, if you switch from stream-based to document-based parsing, or vice versa, or if you switch from one parsing library to another.
When your XML input changes. If you want to add support for a new format or a new version of a format, or kill off support for an obsolete format, you can simply add/remove parsing classes; the model class can remain unchanged (or receive only small and obvious improvements to support new functionality in new/improved formats).
When you add support for non-XML inputs. For example, JSON, plists, keyed archives, or custom proprietary formats. Again, you can simply add/remove parsing classes; the model class need not change much, if at all.
Even if none of these things ever happen, they're still better separated than mashed together. Parsing input and modeling the user's data are two different jobs; mashing them together makes them hard or impossible to reason about separately. Keep them separate, and you can change one without having to step around the other.
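To make the split concrete, here is a minimal sketch (in Kotlin for brevity, though the question is about Objective-C; the toy regex "parser" is a stand-in for a real XML library):

```kotlin
// The model knows nothing about XML.
data class Article(val title: String, val author: String)

// Anything that can produce articles; an XML parser is just one source.
interface ArticleSource {
    fun fetchArticles(): List<Article>
}

// Toy XML "parser" for illustration only.
class XmlArticleSource(private val xml: String) : ArticleSource {
    override fun fetchArticles(): List<Article> =
        Regex("""<article title="(.*?)" author="(.*?)"/>""")
            .findAll(xml)
            .map { Article(it.groupValues[1], it.groupValues[2]) }
            .toList()
}
```

Adding a JSON source later means adding one new class; Article stays untouched.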
I guess it depends on your application. Something to consider is: what if you have to change the XML parser you are using? You will have to rewrite your monolithic class, and you could break a lot of unrelated functionality. If you abstracted the XML parser, it would just be a matter of rewriting that particular class's implementation. Or what if the scope of your application changes and suddenly you have several views? Will you be able to reuse code elsewhere without violating the DRY (don't repeat yourself) principle?
What you want to strive for is low coupling and high cohesion, meaning classes should not depend on each other and should have well-defined responsibilities with highly related methods.