How many coroutines is too many?

I need to speed up a search over some collection with millions of elements.
The search predicate needs to be passed as an argument.
I have been wondering whether the simplest solution (at least for now) wouldn't be to just use coroutines for the task.
The question I am facing right now is how many coroutines I can actually create at once. :D As a side note, there might be more than one such search running concurrently.
Can I make millions of coroutines (one for every item) for every such search? Should I decide on some workload per coroutine (for example, 1000 items per coroutine)? Should I also set a cap on the number of coroutines?
I have a rough understanding of coroutines and how they work, but I have no idea what the performance limitations of this feature are.
Thanks!

The memory weight of a coroutine scales with the depth of the call trace from the coroutine builder block to the suspension point. Each suspend fun call adds another Continuation object to a linked list and this is retained while the coroutine is suspended. A rough figure for one Continuation instance is 100 bytes.
So, if you have a call trace depth of, say, 5, that amounts to 500 bytes per item. A million items is 500 MB.
However, unless your search code involves blocking operations that would leave a thread idle, you aren't gaining anything from coroutines. Your task looks more like an instance of data parallelism, and you can solve it very efficiently using the java.util.stream API (as noted by user marstran in the comments).
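For illustration, here is a hedged sketch of both approaches (the names items, predicate and the chunk size are placeholders, not from the question):

import kotlinx.coroutines.*
import java.util.stream.Collectors

// Data-parallel search with java.util.stream, as suggested above:
fun <T> streamSearch(items: List<T>, predicate: (T) -> Boolean): List<T> =
    items.parallelStream().filter { predicate(it) }.collect(Collectors.toList())

// If you do want coroutines, give each coroutine a chunk of work rather than a
// single item, so the coroutine count stays small regardless of collection size:
suspend fun <T> coroutineSearch(
    items: List<T>,
    chunkSize: Int = 10_000,
    predicate: (T) -> Boolean
): List<T> = coroutineScope {
    items.chunked(chunkSize)
        .map { chunk -> async(Dispatchers.Default) { chunk.filter(predicate) } }
        .awaitAll()
        .flatten()
}

Either way, the useful parallelism is bounded by the number of worker threads (roughly the CPU core count), not by the number of items.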

According to the Kotlin coroutines starter guide, the example there launches 100K coroutines. I believe what you intend to do is exactly what Kotlin coroutines are designed for.
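For reference, a slightly adapted version of that example (assuming the kotlinx.coroutines library): it launches 100,000 coroutines that each just suspend and print a dot, something that would be hopeless with 100,000 threads.

import kotlinx.coroutines.*

fun main() = runBlocking {
    repeat(100_000) {
        launch {
            delay(5000L)   // suspends without holding a thread
            print(".")
        }
    }
}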

If you won't be modifying your collection much, just store it in a HashMap; otherwise store it in a TreeMap. Then search for items there. I believe the lookup methods implemented there are optimized enough to handle a million items in a blink. I would not use coroutines in this case.
Documentation (for Kotlin):
HashMap: https://developer.android.com/reference/kotlin/java/util/HashMap
TreeMap: https://developer.android.com/reference/kotlin/java/util/TreeMap
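A minimal sketch of what this answer suggests, assuming the search can be expressed as a key lookup (the Item class below is hypothetical); note that a map only helps for lookups by key, not for arbitrary predicates:

data class Item(val id: Long, val name: String)

fun main() {
    val items = listOf(Item(1, "one"), Item(42, "forty-two"))
    // Index once by key, then look items up in (amortized) constant time.
    val byId = HashMap(items.associateBy { it.id })
    println(byId[42L])   // Item(id=42, name=forty-two)
}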

Coroutines, understanding suspend

I'm trying to understand a passage in Hands-On Design Patterns with Kotlin, Chapter 8, Threads and Coroutines.
Why is it that when we rewrite the function as suspend, "we can serve 20 times more users, all thanks to the smart way Kotlin has rewritten our code".
fun profile(id: String): Profile {
    val bio = fetchBioOverHttp(id)        // takes 1s
    val picture = fetchPictureFromDb(id)  // takes 100ms
    val friends = fetchFriendsFromDb(id)  // takes 500ms
    return Profile(bio, picture)
}
I've attached the two relevant pages but basically, it says "if we have a thread pool of 10 threads, the first 10 requests will get into the pool and the 11th will get stuck until the first one finishes. This means we can serve three users simultaneously, and the fourth one will wait until the first one gets his/her results."
I think I understand this point. 3 threads execute the three methods in parallel, then another 3, then another 3, which gives us 9 threads actively executing code. The 10th thread executes the first fetchBioOverHttp method, and we're out of threads until thread #1 finishes its fetchBioOverHttp call.
However, how does rewriting these methods as suspend methods result in serving 20 times more users? I guess I'm not understanding the path of execution here.
To be honest, I don't like this example.
The author meant that after the rewrite, httpCall() doesn't wait for the result - it schedules processing in the background, registers a callback and then immediately returns. The caller thread is freed and can start handling another request while the first one is being processed. By using this technique we can process multiple requests while using even a single thread.
I don't like this explanation, because it ignores how coroutines really work internally. Instead, it tries to compare them to something the reader could be familiar with - asynchronous callback-based APIs. Normally, this is good as it helps to understand. However, in this case the problem is that in most cases coroutines internally... create a thread pool and use it to schedule blocking IO operations. Therefore, both provided solutions are pretty much the same and the main difference is that we created a pool of 10 threads and by default coroutines use 64 threads.
The Kotlin compiler does not cut the function into two. There is still a single function with a lot of additional code inside. I agree it can be interpreted as two functions calling each other, but this is not what the compiler does. If that wasn't explained in the book, I think this is misleading.
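To make the freed-thread idea concrete, here is a hedged sketch of a suspend version of the profile() function from the question. The fetch stubs and the Profile class below are stand-ins, not the book's code; delay() simulates the latency without holding a thread, and async lets the three fetches overlap so the whole call takes roughly 1s instead of 1.6s:

import kotlinx.coroutines.*

data class Profile(val bio: String, val picture: String, val friends: List<String>)

// Stand-ins for the book's blocking calls, rewritten as suspend functions.
suspend fun fetchBioOverHttp(id: String): String { delay(1000); return "bio-$id" }
suspend fun fetchPictureFromDb(id: String): String { delay(100); return "pic-$id" }
suspend fun fetchFriendsFromDb(id: String): List<String> { delay(500); return emptyList() }

suspend fun profile(id: String): Profile = coroutineScope {
    val bio = async { fetchBioOverHttp(id) }
    val picture = async { fetchPictureFromDb(id) }
    val friends = async { fetchFriendsFromDb(id) }
    // While the fetches are suspended, the underlying thread is free to serve other requests.
    Profile(bio.await(), picture.await(), friends.await())
}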

How do you know when you need to yield()?

Take Kotlin channels, for example:
for (msg in channel) {
    // do stuff
    yield() // maybe?
}
How do you know if yield is required? I assume that Channels are built in a way that yielding happens automatically behind the scenes in the iterator but I'm not sure. In general, how do you know you need manual yields when using the rest of Kotlin's coroutine library that might do it for you automatically?
In most cases you should not need to use yield() at all or be concerned with it. Coroutines can switch automatically whenever we get to a suspension point, which usually happens pretty often.
yield() is needed only if our code does not suspend for a prolonged time. That usually means we are performing intensive CPU calculations. In your example, receiving from the channel is a suspending operation, so you don't need yield() here.
You only need to call yield if you want to artificially add a suspension point when you have none in a piece of code. Suspension points are calls to suspend functions.
If you don't know which functions are suspend off the top of your head, you can quickly identify them in IntelliJ IDEA, for instance, because every suspend function call is marked with an icon in the gutter.
So in your case you would see it on the iteration over the channel.
You only really need to manually add a yield if you have loops or extended pieces of code that exclusively use regular functions, or more generally if you want to ensure other coroutines have a chance to run at a particular point in time (for instance in tests). This shouldn't happen often.
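For contrast, a hedged sketch of the case where yield() does earn its keep: a CPU-bound loop with no suspension points of its own (expensiveTransform is a made-up stand-in for real work). Without the periodic yield(), this loop would hog its dispatcher thread and would not respond to cancellation until it finished:

import kotlinx.coroutines.*

suspend fun crunchNumbers(data: List<Int>): Long {
    var sum = 0L
    for ((i, value) in data.withIndex()) {
        sum += expensiveTransform(value)   // plain, non-suspending work
        if (i % 10_000 == 0) yield()       // artificial suspension point (also checks cancellation)
    }
    return sum
}

// Hypothetical placeholder for CPU-heavy work.
fun expensiveTransform(x: Int): Long = (1..1_000).fold(x.toLong()) { acc, n -> acc + n }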

Synchronized collection that blocks on every method

I have a collection that is commonly used between different threads. In one thread I need to add items, remove items, retrieve items and iterate over the list of items. What I am looking for is a collection that blocks access to any of its read/write/remove methods whenever any of these methods are already being called. So if one thread retrieves an item, another thread has to wait until the reading has completed before it can remove an item from the collection.
Kotlin doesn't appear to provide this. However, I could create a wrapper class that provides the synchronization I'm looking for. Java does appear to offer the synchronizedList class but from what I read, this is really for blocking calls on a single method, meaning that no two threads can remove an item at the same time but one can remove while the other reads an item (which is what I am trying to avoid).
Are there any other solutions?
A wrapper such as the one returned by synchronizedList synchronizes calls to every method, using the wrapper itself as the lock. So one thread would be blocked from calling get(), say, while another thread is currently calling put(). (This is what the question seems to ask for.)
However, as the docs to that method point out, this does nothing to protect sequences of calls, such as you might use when iterating through a collection. If another thread changes the collection in between your calls to next(), then anything could happen. (This is what I think the question is really about!)
To handle that safely, your options include:
Manual synchronization. Surround each sequence of calls to the collection in a synchronized block that synchronises on the collection, e.g.:
import java.util.Collections

val list = Collections.synchronizedList(mutableListOf<String>())
// …
synchronized(list) {
    for (i in list) {
        // …
    }
}
This is straightforward, and relatively easy to do if the collection is under your control. But if you miss any sequences, then you could get unexpected behaviour. Also, you'll need to keep your sequences short, to avoid holding the lock for an extended time and affecting performance.
Use a concurrent collection implementation which provides primitives letting you do all the processing you need in a single call, avoiding iteration and other sequences.
For maps, Java provides very good support with its ConcurrentMap interface, and high-performance implementations such as ConcurrentHashMap. These have methods allowing you to iterate, update single or multiple mappings, search, reduce, and many other whole-map operations in a single call, avoiding any concurrency problems.
For sets (as per this question) you can use a ConcurrentSkipListSet, or you can create one from a ConcurrentHashMap with newKeySet() (a small sketch of this appears below, after the options).
For lists (as per this question), there are fewer options. (I think concurrent lists are much less commonly needed.) If you don't need random access, ConcurrentLinkedQueue may suffice. Or if modification is much less common than iteration, CopyOnWriteArrayList could work.
There are many other concurrent classes in the java.util.concurrent package, so it's well worth looking through to see if any of those is a better match for your particular case.
If you have specialised requirements, you could write your own collection implementation which supports them. Obviously this is more work, and only worthwhile if none of the above approaches does what you want.
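As a small sketch of the second option above (the values are placeholders): a concurrent set created from ConcurrentHashMap, which can be iterated and modified from multiple threads without external locking, plus an atomic whole-map update.

import java.util.concurrent.ConcurrentHashMap

fun main() {
    val tags = ConcurrentHashMap.newKeySet<String>()
    tags.add("alpha")
    tags.add("beta")

    // Safe to iterate while other threads add or remove; the iterator is weakly
    // consistent rather than throwing ConcurrentModificationException.
    for (tag in tags) println(tag)

    // Single-call updates avoid unsafe check-then-act sequences across several calls:
    val counts = ConcurrentHashMap<String, Int>()
    counts.merge("hits", 1, Int::plus)   // atomically insert-or-increment one mapping
    println(counts)
}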
In general, I think it's well worth stepping back and seeing whether iteration is really needed. Historically, in imperative languages all the way from FORTRAN through BASIC and C up to Java, the for loop has traditionally been the tool of choice (sometimes the only structure) for operating on collections of data — and for those of us who grew up on those languages, it's what we reach for instinctively. But the functional programming paradigm provides alternative tools, and so in languages like Kotlin which provide some of them, it's good to stop and ask ourselves “What am I ultimately trying to achieve here?” (Often what we want is actually to update all entries, or map to a new structure, or search for an element, or find the maximum — all of which have better approaches in Kotlin than low-level iteration.)
After all, if you can tell the compiler what you want to do, instead of how to do it, then your program is likely to be shorter and easier to read and maintain, freeing you to think about more important things!

Do we need to lock the immutable list in kotlin?

var list = listOf("one", "two", "three")

fun One() {
    list.forEach { result ->
        // Does something here
    }
}

fun Two() {
    list = listOf("four", "five", "six")
}
Can function One() and Two() run simultaneously? Do they need to be protected by locks?
No, you don't need to lock the variable. Even if the function One() is still running while you change the variable, the forEach is iterating over the first list. What could happen is that the assignment in Two() happens before forEach is called, but forEach will either loop over one list or the other and will not switch lists because of the assignment.
If you had a println(result) in your forEach, your program would output either
one
two
three
or
four
five
six
depending on whether the assignment happens first or the forEach starts first.
What will NOT happen is something like:
one
two
five
six
Can function One() and Two() run simultaneously?
There are two ways that that could happen:
One of those functions could call the other.  This could happen directly (where the code represented by // Does something here in One()⁽¹⁾ explicitly calls Two()), or indirectly (it could call something else which ends up calling Two() — or maybe the list property has a custom setter which does something that calls One()).
One thread could be running One() while a different thread is running Two().  This could happen if your program launches a new thread directly, or a library or framework could do so.  For example, GUI frameworks tend to have one thread for dispatching events, and others for doing work that could take time; and web server frameworks tend to use different threads for servicing different requests.
If neither of those could apply, then there would be no opportunity for the functions to run simultaneously.
Do they need to be protected by locks?
If there's any possibility of them being run on multiple threads, then yes, they need to be protected somehow.
99.999% of the time, the code would do exactly what you'd expect; you'd either see the old list or the new one.  However, there's a tiny but non-zero chance that it would behave strangely — anything from giving slightly wrong results to crashing.  (The risk depends on things like the OS, CPU/cache topology, and how heavily loaded the system is.)
Explaining exactly why is hard, though, because at a low level the Java Virtual Machine⁽²⁾ does an awful lot of stuff that you don't see.  In particular, to improve performance it can re-order operations within certain limits, as long as the end result is the same — as seen from that thread.  Things may look very different from other threads — which can make it really hard to reason about multi-threaded code!
Let me try to describe one possible scenario…
Suppose Thread A is running One() on one CPU core, and Thread B is running Two() on another core, and that each core has its own cache memory.⁽³⁾
Thread B will create a List instance (holding references to strings from the constant pool), and assign it to the list property; both the object and the property are likely to be written to its cache first.  Those cache lines will then get flushed back to main memory — but there's no guarantee about when, nor about the order in which that happens.  Suppose the list reference gets flushed first; at that point, main memory will have the new list reference pointing to a fresh area of memory where the new object will go — but since the new object itself hasn't been flushed yet, who knows what's there now?
So if Thread A starts running One() at that precise moment, it will get the new list reference⁽⁴⁾, but when it tries to iterate through the list, it won't see the new strings.  It might see the initial (empty) state of the list object before it was constructed, or part-way through construction⁽⁵⁾.  (I don't know whether it's possible for it to see any of the values that were in those memory locations before the list was created; if so, those might represent an entirely different type of object, or even not a valid object at all, which would be likely to cause an exception or error of some kind.)
In any case, if multiple threads are involved, it's possible for one to see list holding neither the original list nor the new one.
So, if you want your code to be robust and not fail occasionally⁽⁶⁾, then you have to protect against such concurrency issues.
Using @Synchronized and @Volatile is traditional, as is using explicit locks.  (In this particular case, I think that making list volatile would fix the problem.)
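A minimal sketch of that volatile fix, with the property moved into a shared holder object just to keep the example self-contained:

object Shared {
    // The volatile write in two() happens-before any later read in one(), so a
    // reader sees either the old fully-constructed list or the new one - never
    // a partially-published object.
    @Volatile
    var list: List<String> = listOf("one", "two", "three")
}

fun one() = Shared.list.forEach { println(it) }

fun two() {
    Shared.list = listOf("four", "five", "six")
}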
But those low-level constructs are fiddly and hard to use well; luckily, in many situations there are better options.  The example in this question has been simplified too much to judge what might work well (that's the down-side of minimal examples!), but work queues, actors, executors, latches, semaphores, and of course Kotlin's coroutines are all useful abstractions for handling concurrency more safely.
Ultimately, concurrency is a hard topic, with a lot of gotchas and things that don't behave as you'd expect.
There are many source of further information, such as:
These other questions cover some of the issues.
Chapter 17: Threads And Locks from the Java Language Specification is the ultimate reference on how the JVM behaves.  In particular, it describes what's needed to ensure a happens-before relationship that will ensure full visibility.
Oracle has a tutorial on concurrency in Java; much of this applies to Kotlin too.
The java.util.concurrent package has many useful classes, and its summary discusses some of these issues.
Concurrent Programming In Java: Design Principles And Patterns by Doug Lea was at one time the best guide to handling concurrency, and these excerpts discuss the Java memory model.
Wikipedia also covers the Java memory model
(1) According to Kotlin coding conventions, function names should start with a lower-case letter; that makes them easier to distinguish from class/object names.
(2) In this answer I'm assuming Kotlin/JVM.  Similar risks are likely to apply to other platforms too, though the details differ.
(3) This is of course a simplification; there may be multiple levels of caching, some of which may be shared between cores/processors; and some systems have hardware which tries to ensure that the caches are consistent…
(4) References themselves are atomic, so a thread will either see the old reference or the new one — it can't see a bit-pattern comprising parts of the old and new ones, pointing somewhere completely random.  So that's one problem we don't have!
(5) Although the reference is immutable, the object gets mutated during construction, so it might be in an inconsistent state.
(6) And the more heavily loaded your system is, the more likely it is for concurrency issues to occur, which means that things will probably fail at the worst possible time!

What are the specifics about the continuations upon which Raku(do) relies?

The topic of delimited continuations was barely discussed among programming language enthusiasts in the 1990s and 2000s. It has recently been re-emerging as a major thing in programming language discussions.
My hope is that someone can at least authoritatively say whether the continuations underlying Rakudo (as contrasted with Raku) do or don't have each of the six characteristics listed below. I say a bit more about the sort of answer I'm hoping for after the list.
Quoting verbatim (with a formatting touch up) from an online message[1] written by the person driving the work on adding continuations to the JVM:
Asymmetric: When the continuation suspends or yields, the execution returns to the caller (of Continuation.run()). Symmetric continuations don't have the notion of a caller. When they yield, they must specify another continuation to transfer the execution to. Neither symmetric nor asymmetric continuations are more powerful than one another, and each could be used to simulate the other.
Stackful: The continuation can be suspended at any depth in the call-stack, rather than in the same subroutine where the delimited context begins when the continuation is stackless (as is the case in C#). I.e the continuation has its own stack rather than just a single subroutine frame. Stackful continuations are more powerful than stackless ones.
Delimited: The continuation captures the execution context that starts with a specific call (in our case, the body of a certain runnable) rather than the entire execution state all the way up to main(). Delimited continuations are strictly more powerful than undelimited ones (http://okmij.org/ftp/continuations/undelimited.html), the latter considered "not practically useful" (http://okmij.org/ftp/continuations/against-callcc.html).
Multi-prompt: Continuations can be nested, and anywhere in the call stack, any of the enclosing continuations can be suspended. This is similar to nesting of try/catch blocks, and throwing an exception of a certain type that unwinds the stack up to the nearest catch that handles it rather than just the nearest catch. An example of nested continuations can be using a Python-like generator inside a virtual thread. The generator code can do a blocking IO call, which will suspend the enclosing thread continuation, and not just the generator: https://youtu.be/9vupFNsND6o?t=2188
One-shot/non-reentrant: Every time we continue a suspended continuation its state is mutated, and we cannot continue it from the same suspension state multiple times (i.e we can't go back in time). This is unlike reentrant continuations where every time we suspend them, a new immutable continuation object that represents a particular suspension point is returned. I.e. the continuation is a single point in time, and every time we continue it we go back to that state. Reentrant continuations are strictly more powerful than non-reentrant ones; i.e. they can do things that are strictly impossible with just one-shot continuations.
Cloneable: If we are able to clone a one-shot continuation we can provide the same ability as reentrant continuations. Even though the continuation is mutated every time we continue it, we can clone its state before continuing to create a snapshot of that point in time that we can return to later.
Aiui continuations aren't directly exposed in Raku, so perhaps the correct answer related to Raku (as against Rakudo) would be "there are no continuations". But that's not clear to me so in the following, in which I describe what I'm hoping might be in an answer if I'm very lucky, I'll pretend it makes some sense to talk about them in the context of both Raku and Rakudo as two distinct realms.
Here's the sort of answer I'm imagining would be possible (though I'm just somewhat wildly guessing at what is actually true):
"As a "100 year" language design, Raku's current underlying semantic [execution?] model requires, at minimum, stackless one-shot multi prompt delimited continuations.
From a theoretic pov, Raku's design can never expand to require that continuations are cloneable but it could theoretically expand to require they are stackful.
Rakudo implements the currently required continuation semantics.
MoarVM has support for these semantics built in, and could realistically track the theoretically possible expansions of requirements if Raku's design ever so expands.
The JVM and JS backends have suitable shims that achieve the same thing, albeit at a cost to performance. It seems plausible that the JVM backend could switch to using continuations that are native to the JVM if it comes to pass that it gets them, provided of course that they meet requirements, but my current impression is that it would likely realistically be perhaps a decade away, or more, before we would need to consider crossing that bridge."
(Or something vaguely like that.)
If an answer also provided a bit more detail on something like the above, perhaps some code links, that would be a particularly awesome addition.
Similarly, if an answer included a couple brief examples of how this continuation power surfaces in current Raku features, and a speculation about how it might one day, say 10 years from now, surface in other features, that would make an answer an over-the-top brilliant one.
PS. Thank you to @Larry who understood things deeply enough to know continuations needed to be part of the picture; to Stefan O'Rear for his contributions, including the initial implementations of what I think are one-shot multi prompt delimited continuations; and to jnthn for making the dream come true.
Footnotes
1 There is work underway to introduce continuations as a first class construct to the JVM. A key driver of this effort is Ron Pressler. The above is based on a message he wrote in November.
Rakudo uses continuations as an implementation strategy for two features:
gather/take - for implementing lazy iterators
Making await on the thread pool non-blocking
The characteristics of the continuations implemented follow the requirements of these language features. I'll go through them in a slightly different order than above because it eases explaining.
Stackful - yes, because we need to be able to do the take or await at any depth in the callstack relative to the gather or the thread pool worker's work loop. For example, you could write a recursive graph traversal algorithm inside of a gather and then take each encountered node. For await, this is at the heart of the difference between Raku's await and await as seen in many other languages: you don't have to refactor all the way up the call stack.
Delimited - yes. The continuation reset operation installs a tag (or "prompt"), and when we do a continuation control operation, we slice the stack at this delimiter. I can't imagine how you'd implement the Raku features involved without them being delimited.
Multi-prompt - yes, this is required because you can be iterating one data source provided by a gather inside of another gather's implementation, or do an await inside of a gather.
Asymmetric - after the continuation has been taken, execution continues after the reset instruction. In the await case, we go and find another task in the worker task queue, and in the take case we're back in the pull-one method of the iterator and can return the taken value. I think this approach fits well in a language where only a few features use continuations.
One-shot/non-reentrant - yes, and at least in MoarVM the memory safety of the runtime depends on this property. It is enforced by an atomic compare and swap operation, so if two threads were to race to invoke the continuation, only one could ever succeed. No Raku features need the additional complexity that reentrant continuations would imply.
Cloneable - no, because no Raku features need it. In theory this isn't too awful to implement in MoarVM in terms of saying "yes, we can do it", but I suspect it raises a lot of questions like "how deep should the clone go?". If you just cloned all the invocation records and similar, you'd still share Scalar containers, Arrays, etc. between the clones.
As I understand it - though I follow from a distance - the JVM continuations are at least partly aimed at the same design space that the Raku await mechanism is in, and so I'd be surprised if they didn't end up providing what Raku needs. This would clearly simplify compilation of Raku code to the JVM (currently it does the global CPS transform as it does code generation, which curiously turned out simpler than I expected), and it'd almost certainly perform much better too, because the transform required probably obscures quite a few things from the perspective of the JIT compiler.
So far as code goes, you can see the current continuations implementation, which uses the continuation data structure which in turn has various bits of memory management. At the time of writing, these have all been significantly refactored as part of the new callstack representation required by ongoing dispatcher work; those changes do make working with continuations more efficient, but don't change the overall set of operations.