How to achieve scope singleton with Toothpick annotations? - kotlin

I have a non-Android app with many similar shard objects, and I want all objects inside each shard (DB client, DAOs...) to be singletons.
For this purpose, I have created a ShardSingleton annotation:
@Scope
@Retention(AnnotationRetention.RUNTIME)
annotation class ShardSingleton
and I am creating each shard object inside its own scope:
var shard1: Shard = KTP.openScopes("app", "shard1")
.supportScopeAnnotation(ShardSingleton::class.java)
.getInstance(Shard::class.java)
For my DAO to actually be a singleton inside its shard, I have to annotate it with both @ShardSingleton and @Singleton:
@Singleton // the FooDAO is not a singleton without this annotation
@ShardSingleton
@InjectConstructor
class FooDAO(val dbClient: DBClient)
At first sight (and probably out of ignorance), I thought @ShardSingleton alone would be enough.
Is this expected behaviour?
Here is a gist demonstrating the behavior:
https://gist.github.com/bfreuden/a866b21c5a6342a3ce1ed26aa636f9f6

Related

Safe initialization of property using other abstract property

I want an abstract class declaring abstract val values: List<String> and val totalLen: Int. I'd like to initialize totalLen like this:
abstract class MyClass {
    abstract val values: List<String>
    val totalLen: Int = values.sumOf { it.length }
}
My editor warns me that I'm "Accessing non-final property values in constructor". Is this something I can ignore, or will it come back and bite me?
Using an init block yields the same warning.
Putting values in the primary constructor removes the warning. I'd like to avoid that so that the code extending the class would be prettier.
// prettier
abstract class MyClass {
    abstract val values: List<String>
    val totalLen = values.sumOf { it.length }
}
class Child : MyClass() {
    override val values = listOf("a", "b")
    ...
}
// uglier
abstract class MyClassOther(val values: List<String>) {
    val totalLen = values.sumOf { it.length }
}
class ChildOther : MyClassOther(listOf("a", "b")) {
    ...
}
Here the aesthetic difference isn't big, but if I had more abstract and non-abstract properties it would result in a crowded call to the super-class constructor in the extending class.
Also, using a sealed class for the parent doesn't help to remove the IDE warning.
EDIT: I found out about the lazy property delegate. It solves this, but I'd still like to know if there is another solution without using delegates.
The issue is a subtle one, but it can have serious consequences down the line.
Object construction works on the general principle that a superclass should be fully initialised before the subclass is initialised. (That includes any code in the constructor itself, along with any property initialisers and init blocks.)
This means that if you call anything overridable from a constructor, initialisers, or init blocks, it will run before the class is fully initialised, and before anything in any subclasses is initialised. Properties might not have their initial values set, invariants might not be established, and you can't rely on anything working.
The only safe course is never to call anything overridable from a constructor or initialiser. That's what your IDE is warning you about.
(In practice, if you control all the code, you may be able to get away with calling something that has no dependency on any subclass's state, nor on anything in the superclass that's not yet set up. But that's fragile, and can break if someone adds some innocent-looking code in a subclass.)
Have you tried to run your ‘prettier’ case? That demonstrates the problem very neatly, by throwing a NullPointerException when you try to construct a Child! Here's the order in which things would happen:
External code calls the Child() constructor.
That calls the MyClass constructor.
That initialises totalLen.
That gets the value of values. ← NullPointerException
Having completed the MyClass construction, it continues with the Child() constructor.
That initialises values.
I hope you can see the problem: when the MyClass initialiser runs, values is not yet initialised. At that point, its value is undefined; on the JVM, it'll always be null (or 0, or false, depending on the type) — on other platforms, it might be something random or impossible.
Your ‘uglier’ case works around that well: it creates the list before calling the superclass constructor, so you know it's safe to use. But, as you say, it would be pretty unclear if there were multiple properties and/or multiple levels of inheritance along the way.
Unfortunately, I don't know of any general workarounds for this.
As you say, making superclass initialisers lazy is one approach, but it doesn't fit every case.
Spring has a @PostConstruct annotation which lets you specify code to be run after the class (and any subclasses) have all been initialised (including any Spring autowiring and other magic), which can be a good solution if you're using Spring. Other frameworks might provide something similar.
You could do something similar in an ad-hoc way, if you know your lowest-level class(es) will be final: they could call a construction-completed superclass method at the end of their constructors. But of course that's not a general solution.
Ultimately, giving superclasses dependencies on their subclasses (instead of the other way around) seems like a bit of a code smell, and this is only one of the problems it can cause. You might look at your code and see whether inheritance is really needed, or whether composition might be a better approach.

What are sealed classes in Kotlin?

I'm a beginner in Kotlin and recently read about sealed classes, but from the docs the only thing I actually get is that they exist.
The docs state that they represent "restricted class hierarchies". Besides that, I found a statement that they are enums with superpowers. Both aspects are still not clear to me.
So can you help me with the following questions:
What are sealed classes and what is the idiomatic way of using them?
Does such a concept exist in other languages like Python, Groovy or C#?
UPDATE:
I carefully checked this blog post and still can't wrap my head around that concept. As stated in the post
Benefit
The feature allows us to define class hierarchies that are
restricted in their types, i.e. subclasses. Since all subclasses need
to be defined inside the file of the sealed class, there’s no chance
of unknown subclasses which the compiler doesn’t know about.
Why doesn't the compiler know about other subclasses defined in other files? Even the IDE knows that: just press Ctrl+Alt+B in IDEA on, for instance, the List<> definition and all implementations will be shown, even those in other source files. And if a subclass can be defined in some third-party framework that is not used in the application, why should we care about that?
Say you have a domain (your pets) where you know there is a definite enumeration (count) of types. For example, you have two and only two pets (which you will model with a class called MyPet). Meowsi is your cat and Fido is your dog.
Compare the following two implementations of that contrived example:
sealed class MyPet
class Meowsi : MyPet()
class Fido : MyPet()
Because you have used sealed classes, when you need to perform an action depending on the type of pet, then the possibilities of MyPet are exhausted in two and you can ascertain that the MyPet instance will be exactly one of the two options:
fun feed(myPet: MyPet): String {
    return when (myPet) {
        is Meowsi -> "Giving cat food to Meowsi!"
        is Fido -> "Giving dog biscuit to Fido!"
    }
}
If you don't use sealed classes, the possibilities are not exhausted in two and you need to include an else statement:
open class MyPet
class Meowsi : MyPet()
class Fido : MyPet()
fun feed(myPet: MyPet): String {
    return when (myPet) {
        is Meowsi -> "Giving cat food to Meowsi!"
        is Fido -> "Giving dog biscuit to Fido!"
        else -> "Giving food to someone else!" // else branch required here, or the compiler reports an error
    }
}
In other words, without sealed classes there is not exhaustion (complete coverage) of possibility.
Note that you could achieve exhaustion of possibility with a Java enum, however enums are not fully-fledged classes. For example, an enum cannot be a subclass of another class, only implement an interface (thanks EpicPandaForce).
What is the use case for complete exhaustion of possibilities? To give an analogy, imagine you are on a tight budget and your feed is very precious and you want to ensure you don't end up feeding extra pets that are not part of your household.
Without the sealed class, someone else in your home/application could define a new MyPet:
class TweetiePie : MyPet() //a bird
And this unwanted pet would be fed by your feed method as it is included in the else statement:
else -> "Giving food to someone else!" //feeds any other subclass of MyPet including TweetiePie!
Likewise, in your program exhaustion of possibility is desirable because it reduces the number of states your application can be in and reduces the possibility of bugs occurring where you have a possible state where behaviour is poorly defined.
Hence the need for sealed classes.
Mandatory else
Note that you only get the mandatory else statement if when is used as an expression. As per the docs:
If [when] is used as an expression, the value of the satisfied branch becomes the value of the overall expression [... and] the else branch is mandatory, unless the compiler can prove that all possible cases are covered with branch conditions
This means you won't get the benefit of sealed classes for something like this:
fun feed(myPet: MyPet): Unit {
    when (myPet) {
        is Meowsi -> println("Giving cat food to Meowsi!") // not an expression, so we can forget about Fido
    }
}
To get exhaustion for this scenario, you would need to turn the statement into an expression with a return type.
Some have suggested an extension function like this would help:
val <T> T.exhaustive: T
get() = this
Then you can do:
fun feed(myPet: MyPet): Unit {
    when (myPet) {
        is Meowsi -> println("Giving cat food to Meowsi!")
    }.exhaustive // compiler error because we forgot about Fido
}
Others have suggested that an extension function pollutes the namespace and other workarounds (like compiler plugins) are required.
See here for more about this problem.
Sealed classes are easier to understand when you understand the kinds of problems they aim to solve. First I'll explain the problems, then I'll introduce the class hierarchies and the restricted class hierarchies step by step.
We'll take a simple example of an online delivery service where we use three possible states Preparing, Dispatched and Delivered to display the current status of an online order.
Problems
Tagged class
Here we use a single class for all the states. Enums are used as type markers. They are used for tagging the states Preparing, Dispatched and Delivered:
class DeliveryStatus(
    val type: Type,
    val trackingId: String? = null,
    val receiversName: String? = null) {

    enum class Type { PREPARING, DISPATCHED, DELIVERED }
}
The following function checks the state of the currently passed object with the help of enums and displays the respective status:
fun displayStatus(state: DeliveryStatus) = when (state.type) {
PREPARING -> print("Preparing for dispatch")
DISPATCHED -> print("Dispatched. Tracking ID: ${state.trackingId ?: "unavailable"}")
DELIVERED -> print("Delivered. Receiver's name: ${state.receiversName ?: "unavailable"}")
}
As you can see, we are able to display the different states properly. We also get to use an exhaustive when expression, thanks to enums. But there are various problems with this pattern:
Multiple responsibilities
The class DeliveryStatus has multiple responsibilities of representing different states. So it can grow bigger, if we add more functions and properties for different states.
More properties than needed
An object has more properties than it actually needs in a particular state. For example, in the function above, we don't need any property for representing the Preparing state. The trackingId property is used only for the Dispatched state and the receiversName property is concerned only with the Delivered state. The same is true for functions. I haven't shown functions associated with states to keep the example small.
No guarantee of consistency
Since these unused properties can be set from unrelated states, it's hard to guarantee the consistency of a particular state. For example, one can set the receiversName property on the Preparing state. In that case, Preparing becomes an illegal state, because we can't have a receiver's name for a shipment that hasn't been delivered yet.
Need to handle null values
Since not all properties are used for all states, we have to keep the properties nullable. This means we also need to check for nullability. In the displayStatus() function we check for null using the ?: (Elvis) operator and show "unavailable" if the property is null. This complicates our code and reduces readability. Also, due to the possibility of a nullable value, the guarantee of consistency is reduced further, because a null receiversName in the Delivered state is an illegal state.
Introducing Class Hierarchies
Unrestricted class hierarchy: abstract class
Instead of managing all the states in a single class, we separate the states in different classes. We create a class hierarchy from an abstract class so that we can use polymorphism in our displayStatus() function:
abstract class DeliveryStatus
object Preparing : DeliveryStatus()
class Dispatched(val trackingId: String) : DeliveryStatus()
class Delivered(val receiversName: String) : DeliveryStatus()
The trackingId is now only associated with the Dispatched state and receiversName is only associated with the Delivered state. This solves the problems of multiple responsibilities, unused properties, lack of state consistency and null values.
Our displayStatus() function now looks like the following:
fun displayStatus(state: DeliveryStatus) = when (state) {
is Preparing -> print("Preparing for dispatch")
is Dispatched -> print("Dispatched. Tracking ID: ${state.trackingId}")
is Delivered -> print("Delivered. Received by ${state.receiversName}")
else -> throw IllegalStateException("Unexpected state passed to the function.")
}
Since we got rid of null values, we can be sure that our properties will always have some values. So now we don't need to check for null values using the ?:(elvis) operator. This improves code readability.
So we solved all the problems mentioned in the tagged class section by introducing a class hierarchy. But the unrestricted class hierarchies have the following shortcomings:
Unrestricted Polymorphism
By unrestricted polymorphism I mean that our function displayStatus() can be passed a value of unlimited number of subclasses of the DeliveryStatus. This means we have to take care of the unexpected states in displayStatus(). For this, we throw an exception.
Need for the else branch
Due to unrestricted polymorphism, we need an else branch to decide what to do when an unexpected state is passed. If we use some default state instead of throwing an exception and then forget to take care of any newly added subclass, then that default state will be displayed instead of the state of the newly created subclass.
No exhaustive when expression
Since the subclasses of an abstract class can exist in different packages and compilation units, the compiler doesn't know all the possible subclasses of the abstract class. So it won't flag an error at compile time, if we forget to take care of any newly created subclasses in the when expression. In that case, only throwing an exception can help us. Unfortunately, we'll know about the newly created state only after the program crashes at runtime.
Sealed Classes to the Rescue
Restricted class hierarchy: sealed class
Using the sealed modifier on a class does two things:
It makes that class an abstract class. Since Kotlin 1.5, you can use a sealed interface too.
It makes it impossible to extend that class outside of that file. Since Kotlin 1.5 the same-file restriction has been relaxed: the class can now be extended in other files too, but they need to be in the same compilation unit and in the same package as the sealed type.
sealed class DeliveryStatus
object Preparing : DeliveryStatus()
class Dispatched(val trackingId: String) : DeliveryStatus()
class Delivered(val receiversName: String) : DeliveryStatus()
Our displayStatus() function now looks cleaner:
fun displayStatus(state: DeliveryStatus) = when (state) {
is Preparing -> print("Preparing for Dispatch")
is Dispatched -> print("Dispatched. Tracking ID: ${state.trackingId}")
is Delivered -> print("Delivered. Received by ${state.receiversName}")
}
Sealed classes offer the following advantages:
Restricted Polymorphism
By passing an object of a sealed class to a function, you are also sealing that function, in a sense. For example, now our displayStatus() function is sealed to the limited forms of the state object, that is, it will either take Preparing, Dispatched or Delivered. Earlier it was able to take any subclass of DeliveryStatus. The sealed modifier has put a limit on polymorphism. As a result, we don't need to throw an exception from the displayStatus() function.
No need for the else branch
Due to restricted polymorphism, we don't need to worry about other possible subclasses of DeliveryStatus and throw an exception when our function receives an unexpected type. As a result, we don't need an else branch in the when expression.
Exhaustive when expression
Just like all the possible values of an enum class are contained inside the same class, all the possible subtypes of a sealed class are contained inside the same package and the same compilation unit. So, the compiler knows all the possible subclasses of this sealed class. This helps the compiler to make sure that we have covered(exhausted) all the possible subtypes in the when expression. And when we add a new subclass and forget to cover it in the when expression, it flags an error at compile time.
Note that in the latest Kotlin versions, exhaustiveness is checked for when statements as well as when expressions.
Why in the same file?
The same file restriction has been removed since Kotlin 1.5. Now you can define the subclasses of the sealed class in different files but the files need to be in the same package and the same compilation unit. Before 1.5, the reason that all the subclasses of a sealed class needed to be in the same file was that it had to be compiled together with all of its subclasses for it to have a closed set of types. If the subclasses were allowed in other files, the build tools like Gradle would have to keep track of the relations of files and this would affect the performance of incremental compilation.
IDE feature: Add remaining branches
When you just type when (status) { } and press Alt + Enter, Enter, the IDE automatically generates all the possible branches for you like the following:
when (state) {
is Preparing -> TODO()
is Dispatched -> TODO()
is Delivered -> TODO()
}
In our small example there are just three branches but in a real project you could have hundreds of branches. So you save the effort of manually looking up which subclasses you have defined in different files and writing them in the when expression one by one in another file. Just use this IDE feature. Only the sealed modifier enables this.
That's it! Hope this helps you understand the essence of sealed classes.
If you've ever used an enum with an abstract method just so that you could do something like this:
public enum ResultTypes implements ResultServiceHolder {
    RESULT_TYPE_ONE {
        @Override
        public ResultOneService getService() {
            return serviceInitializer.getResultOneService();
        }
    },
    RESULT_TYPE_TWO {
        @Override
        public ResultTwoService getService() {
            return serviceInitializer.getResultTwoService();
        }
    },
    RESULT_TYPE_THREE {
        @Override
        public ResultThreeService getService() {
            return serviceInitializer.getResultThreeService();
        }
    };
When in reality what you wanted is this:
val service = when (resultType) {
    RESULT_TYPE_ONE -> resultOneService
    RESULT_TYPE_TWO -> resultTwoService
    RESULT_TYPE_THREE -> resultThreeService
}
And you only made it an enum abstract method to get a compile-time guarantee that you always handle this assignment when a new enum type is added. Then you'll love sealed classes, because a sealed class used in an assignment like that when expression produces a "when should be exhaustive" compilation error, which forces you to handle all cases instead of accidentally handling only some of them.
So now you cannot end up with something like:
switch(...) {
    case ...:
        ...
    default:
        throw new IllegalArgumentException("Unknown type: " + enum.name());
}
Also, enums cannot extend classes, only interfaces; while sealed classes can inherit fields from a base class. You can also create multiple instances of them (and you can technically use object if you need the subclass of the sealed class to be a singleton).

Retrofit/Gson how to create type adapters dynamically

I learned from one example, so I am not sure if this is the optimal way or not, but anyway, I use the following code. I create one Retrofit instance and use it for all requests.
Since there are many methods, there are many types of data. It seems that I can create adapters (JSON -> my data class) automatically simply by adding annotations. But I needed more control (inheritance: the data classes have shared fields; dependency: some fields may not exist depending on other fields' values), so I created a custom adapter for each of the classes. So, currently my code is like this:
if (instance == null) {
    val gson = GsonBuilder()
        .registerTypeAdapter(myClass1::class.java, myClassDeserialiser1())
        .... (tens of this) ....
        .registerTypeAdapter(myClass30::class.java, myClassDeserialiser30())
        .create()
    instance = Retrofit.Builder()
        .addConverterFactory(GsonConverterFactory.create(gson))
        .build()
        .create(MyAPIs::class.java)
}
return instance
The problem above is that I am creating instances of all parsers at the same time in advance. This may be inefficient. I wish I could create them when they are first needed. Is that possible?
You can add a @JsonAdapter annotation to your class definition in lieu of calling registerTypeAdapter. I can't comment on its efficiency, but it keeps the adapter info tied to the object instead of the creation of your Gson object, which sounds like what you want.

Are serializers the right spot to remove shared state from Akka messages?

I am working on a distributed algorithm and decided to use Akka to scale it across machines. The machines need to exchange messages very frequently and these messages reference some immutable objects that exist on every machine. Hence, it seems sensible to "compress" the messages in the sense that the shared, replicated objects should not be serialized in the messages. Not only would this save network bandwidth, it would also avoid creating duplicate objects on the receiver side whenever a message is deserialized.
Now, my question is how to do this properly. So far, I could think of two options:
Handle this on the "business layer", i.e., convert my original message objects to some reference objects that replace references to the shared, replicated objects with symbolic references. Then, I would send those reference objects rather than the original messages. Think of it as replacing some actual web resource with a URL. Doing this seems rather straightforward in terms of coding, but it also drags serialization concerns into the actual business logic.
Write custom serializers that are aware of the shared, replicated objects. In my case, it would be okay that this solution would introduce the replicated, shared objects as global state to the actor systems via the serializers. However, the Akka documentation does not describe how to programmatically add custom serializers, which would be necessary to weave the shared objects into the serializer. Also, I could imagine that there are a couple of reasons why such a solution would be discouraged. So, I am asking here.
Thanks a lot!
It's possible to write your own, custom serializers and let them do all sorts of weird things, then you can bind them at the config level as usual:
class MyOwnSerializer extends Serializer {

  // If you need logging here, introduce a constructor that takes an ExtendedActorSystem:
  //   class MyOwnSerializer(actorSystem: ExtendedActorSystem) extends Serializer
  // and get a logger using:
  //   private val logger = Logging(actorSystem, this)

  // This is whether "fromBinary" requires a "clazz" or not
  def includeManifest: Boolean = true

  // Pick a unique identifier for your Serializer,
  // you've got a couple of billions to choose from,
  // 0 - 40 is reserved by Akka itself
  def identifier = 1234567

  // "toBinary" serializes the given object to an Array of Bytes
  def toBinary(obj: AnyRef): Array[Byte] = {
    // Put the code that serializes the object here
    Array[Byte]()
  }

  // "fromBinary" deserializes the given array,
  // using the type hint (if any, see "includeManifest" above)
  def fromBinary(bytes: Array[Byte], clazz: Option[Class[_]]): AnyRef = {
    // Put your code that deserializes here
    null
  }
}
But this raises an important question: if your messages all reference data that is already shared on the machines, why would you want to put a pointer to the object in the message (very bad! messages should be immutable, and a pointer isn't!), rather than some sort of immutable string objectId (kinda your option 1)? That is a much better option when it comes to preserving the immutability of the messages, and there is little change in your business logic (just put a wrapper over the shared state storage).
For more info, see the documentation.
I finally went with the solution proposed by Diego and want to share some more details on my reasoning and solution.
First of all, I am also in favor of option 1 (handling the "compaction" of messages in the business layer) for those reasons:
Serializers are global to the actor system. Making them stateful is actually a severe violation of Akka's very philosophy, as it goes against the encapsulation of behaviour and state in actors.
Serializers have to be created upfront anyway (even when adding them "programmatically").
Design-wise, one can argue that message compaction is not a responsibility of the serializer either. In a strict sense, serialization is merely the transformation of runtime-specific data into a compact, exchangeable representation. Changing what to serialize is not a task of a serializer, though.
Having settled upon this, I still strived for a clear separation of "message compaction" and the actual business logic in the actors. I came up with a neat way to do this in Scala, which I want to share here. The basic idea is to make the message itself look like a normal case class but still allow these messages to "compactify" themselves. Here is an abstract example:
class Sender extends Actor {
  def context: SharedContext = ... // This is the shared data present on every node.
  // ...
  def someBusinessLogic(receiver: ActorRef) {
    val someData = computeData
    receiver ! MyMessage(someData)
  }
}

class Receiver extends Actor {
  implicit def context: SharedContext = ... // This is the shared data present on every node.
  def receive = {
    case MyMessage(someData) =>
      // ...
  }
}

object Receiver {
  object MyMessage {
    def apply(someData: SomeData) = MyCompactMessage(someData)
    def unapply(myCompactMessage: MyCompactMessage)(implicit context: SharedContext): Option[SomeData] =
      Some(myCompactMessage.someData(context))
  }
}
As you can see, the sender and receiver code feels just like using a case class and in fact, MyMessage could be a case class.
However, by implementing apply and unapply manually, one can insert one's own "compactification" logic and also implicitly inject the shared data necessary to do the "uncompactification", without touching the sender and receiver code. For defining MyCompactMessage, I found Protocol Buffers to be especially well suited, as it is already a dependency of Akka and efficient in terms of space and computation, but any other solution would do.

Guava - Can a Multimap be serialized?

I am looking at the ArrayListMultimap API, which implements the Serializable interface. Does that mean I can serialize this object? Are all Multimap objects serializable?
The meaning of Serializable is always the same: if an object isn't serializable, it can't be serialized. If it is, it may or may not work... Especially in the case of collections (including maps and multimaps), it depends on their content.
As an example, you can surely serialize an ArrayList<String>, as ArrayList.class is serializable and so is each member of the list. OTOH, trying to serialize an ArrayList<Object> may or may not work: if all contained objects are e.g. strings, it will work; if any member is not serializable, you'll get an exception.
Does it mean I can serialize this object?
If all keys and values are serializable, you can.
Are all Multimap objects serializable?
No, the Multimap interface doesn't extend Serializable, so there may be non-serializable implementations. Indeed, you can get such an instance via e.g. Multimaps.filterEntries.
ArrayListMultimap and HashMultimap are Serializable BUT the Collection views (in asMap() for example) are not.
This problem is answered here:
To use the map returned by asMap(), you can re-create a new map and wrap the Multimap collection views in other collections (for example a Set); that will make the new map Serializable:
Multimap<MyClass, MyOtherClass> myMultiMap = HashMultimap.create();
// ... build your multimap
Map<MyClass, Set<MyOtherClass>> map = myMultiMap.asMap().entrySet()
        .stream()
        .collect(Collectors.toMap(
                Map.Entry::getKey,
                (entry) -> ImmutableSet.copyOf(entry.getValue())
        ));
Or Java 7-compliant code:
Multimap<MyClass, MyOtherClass> myMultiMap = HashMultimap.create();
// ... build your multimap
Map<MyClass, Set<MyOtherClass>> map = Maps.newHashMap();
for (Map.Entry<MyClass, Collection<MyOtherClass>> entry : myMultiMap.asMap().entrySet()) {
    map.put(entry.getKey(), ImmutableSet.copyOf(entry.getValue()));
}