Is there a replacement for actor yet? (Kotlin)

I find actor to be extremely convenient for sequential execution of background work and safe private state. Unfortunately, it is marked as obsolete and has been for some time. I can't find any information on a replacement, though there have been many flow additions to Kotlin Coroutines.
Is there a way to achieve the same benefits as actor with the new coroutine library additions as of today? What would code that achieves this look like?
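While actor stays obsolete, one way to keep its two benefits (sequential processing and state confined to a single coroutine) is a plain Channel consumed by one coroutine. The following is only a rough sketch assuming kotlinx.coroutines; the message types and names are invented for illustration:

import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

// Hypothetical message type for a simple counter "actor".
sealed class CounterMsg {
    object Increment : CounterMsg()
    class GetValue(val response: CompletableDeferred<Int>) : CounterMsg()
}

fun CoroutineScope.counterActor(): Channel<CounterMsg> {
    val mailbox = Channel<CounterMsg>(capacity = Channel.UNLIMITED)
    launch {
        var counter = 0            // private state, touched only by this coroutine
        for (msg in mailbox) {     // messages are processed strictly one at a time
            when (msg) {
                is CounterMsg.Increment -> counter++
                is CounterMsg.GetValue  -> msg.response.complete(counter)
            }
        }
    }
    return mailbox
}

fun main() = runBlocking {
    val mailbox = counterActor()
    repeat(1000) { mailbox.send(CounterMsg.Increment) }
    val reply = CompletableDeferred<Int>()
    mailbox.send(CounterMsg.GetValue(reply))
    println("Counter = ${reply.await()}")   // Counter = 1000
    mailbox.close()                         // stops the consuming coroutine
}

The state lives entirely inside the consuming coroutine, so no locking is needed, and closing the channel shuts the "actor" down.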


How to understand coroutine cancellation is cooperative

In Kotlin, coroutine cancellation is cooperative. How should I understand it?
Link to Kotlin documentation.
If you have a Java background, you may be familiar with the thread interruption mechanism. Any thread can call thread.interrupt() and the receiving thread will get a signal in the form of a Boolean isInterrupted flag becoming true. The receiving thread may check the flag at any time with Thread.currentThread().isInterrupted() — or it may ignore it completely. That's why this mechanism is said to be cooperative.
Kotlin's coroutine cancellation mechanism is an exact replica of this: you have a coroutineContext.isActive flag that you (or a function you call) may check.
In both cases some well-known functions, for example Thread.sleep() in Java and delay() in Kotlin, check this flag and throw an InterruptedException or a CancellationException, respectively. These methods/functions are said to be "interruptible" / "cancellable".
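A rough sketch of that cooperation (the loop and timings are invented): the loop below only stops because it checks isActive itself; a plain while (true) with no suspension point would keep running after cancel():

import kotlinx.coroutines.*

fun main() = runBlocking {
    val job = launch(Dispatchers.Default) {
        var i = 0
        while (isActive) {   // cooperative check of the cancellation flag
            i++
            // a while (true) loop here would never notice the cancellation
        }
        println("stopped after $i iterations")
    }
    delay(100)               // let it spin for a moment
    job.cancelAndJoin()      // isActive becomes false; the loop notices and exits
}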
I'm not 100% sure whether I understand your question, but maybe this helps:
Coroutines are usually executed on the same thread you start them from. You can use different dispatchers, but they are designed to work when started from the same thread. There's no extra scheduling happening.
You can compare this with scheduling mechanisms in an OS. Coroutines behave similarly to cooperative scheduling. You find similar concepts in many frameworks and languages for dealing with async operations. Ruby, for example, has fibers which behave similarly.
Basically this means that if a coroutine is hogging your CPU in a busy loop, you cannot cancel it (unless you kill the whole process). Instead, your coroutine has to regularly check for cancellation and also add waits/delays/yields so that other coroutines can work.
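As an invented illustration of that, two coroutines on the single runBlocking thread only take turns because each one yields:

import kotlinx.coroutines.*

// Both coroutines run on the single runBlocking thread. Without the yield()
// calls, worker A would finish all of its steps before worker B ever starts.
fun main() = runBlocking {
    launch {
        repeat(5) { i ->
            println("worker A step $i")
            yield()   // suspension point: lets other coroutines run and checks for cancellation
        }
    }
    launch {
        repeat(5) { i ->
            println("worker B step $i")
            yield()
        }
    }
}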
This also defines when coroutines are most helpful: when running in a single-threaded context, it doesn't help to use coroutines for local-only calculations. I used them mostly for processing async calls like interactions with databases or web servers.
This article also has some explanations on how coroutines work - maybe it helps you with any additional questions: https://antonioleiva.com/coroutines/

How many coroutines is too many?

I need to speed up a search over some collection with millions of elements.
Search predicate needs to be passed as argument.
I have been wondering whether the simplest solution (at least for now) wouldn't be to just use coroutines for the task.
The question I am facing right now is how many coroutines I can actually create at once. :D As a side note, there might be more than one such search running concurrently.
Can I make millions of coroutines (one for every item) for every such search? Should I decide on some workload per coroutine (for example, 1000 items per coroutine)? Should I also decide on some cap on the number of coroutines?
I have a rough understanding of coroutines and how they actually work; however, I have no idea what the performance limitations of this feature are.
Thanks!
The memory weight of a coroutine scales with the depth of the call trace from the coroutine builder block to the suspension point. Each suspend fun call adds another Continuation object to a linked list and this is retained while the coroutine is suspended. A rough figure for one Continuation instance is 100 bytes.
So, if you have a call trace depth of, say, 5, that amounts to 500 bytes per item. A million items is 500 MB.
However, unless your search code involves blocking operations that would leave a thread idle, you aren't gaining anything from coroutines. Your task looks more like an instance of data parallelism, and you can solve it very efficiently using the java.util.stream API (as noted by user marstran in the comment).
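For example, a hedged sketch of such a data-parallel search with java.util.stream (assuming the elements already sit in an in-memory List):

import java.util.stream.Collectors

// Sketch: data-parallel predicate search with java.util.stream, no coroutines involved.
fun <T> findMatches(items: List<T>, predicate: (T) -> Boolean): List<T> =
    items.parallelStream()
        .filter { predicate(it) }
        .collect(Collectors.toList())

fun main() {
    val items = (1..1_000_000).toList()
    val matches = findMatches(items) { it % 99_999 == 0 }
    println(matches)   // the elements that satisfy the predicate
}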
According to the Kotlin coroutines starter guide, the example launches 100K coroutines. I believe what you intend to do is exactly what Kotlin coroutines are designed for.
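If you do want to stay with coroutines, the chunking you mention is a reasonable middle ground: one coroutine per batch of items rather than per item. A sketch, assuming kotlinx.coroutines and an arbitrary chunk size of 1000:

import kotlinx.coroutines.*

// Sketch: one coroutine per chunk of items instead of one per item.
suspend fun <T> searchInChunks(
    items: List<T>,
    chunkSize: Int = 1000,
    predicate: (T) -> Boolean
): List<T> = coroutineScope {
    items.chunked(chunkSize)
        .map { chunk -> async(Dispatchers.Default) { chunk.filter(predicate) } }
        .awaitAll()
        .flatten()
}

fun main() = runBlocking {
    val items = (1..1_000_000).toList()
    val matches = searchInChunks(items) { it % 99_999 == 0 }
    println("found ${matches.size} matches")
}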
If you will not make many modifications to your collection, then just store it in a HashMap;
otherwise store it in a TreeMap. Then just look items up there. I believe the lookup methods implemented there are optimized enough to handle a million items in a blink. I would not use coroutines in this case (see the sketch after the links below).
Documentation (for Kotlin):
HashMap: https://developer.android.com/reference/kotlin/java/util/HashMap
TreeMap: https://developer.android.com/reference/kotlin/java/util/TreeMap
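A sketch of what that keyed lookup could look like (the key type and data are invented; note this helps when you can search by a key, not by an arbitrary predicate):

import java.util.TreeMap

fun main() {
    // Invented data: items keyed by an id; a TreeMap keeps the keys sorted.
    val itemsById = TreeMap<Int, String>()
    for (i in 1..1_000_000) {
        itemsById[i] = "item-$i"
    }

    // Keyed lookups are O(log n) in a TreeMap (O(1) on average in a HashMap).
    println(itemsById[123_456])        // direct lookup
    println(itemsById.subMap(10, 15))  // range query, only possible on the sorted map
}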

Kotlin Coroutines: Do we need to synchronize shared state?

From the official guide and samples from the web, I didn't see any mention of locking or synchronization, or of how safe it is to modify a shared variable from multiple launch or async calls.
Coroutines bring a concurrent programming model that may result in simultaneously executed code. Just as you know it from thread-based libraries, you have to take care of synchronization, as noted in the docs:
Coroutines can be executed concurrently using a multi-threaded dispatcher like the Dispatchers.Default. It presents all the usual concurrency problems. The main problem being synchronization of access to shared mutable state. Some solutions to this problem in the land of coroutines are similar to the solutions in the multi-threaded world, but others are unique.
With Kotlin Coroutines you can make use of acquainted strategies like using thread-safe data structures, confining execution to a single thread or using locks (e.g. Mutex).
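For example, a minimal sketch of the Mutex approach (the counter and the numbers are invented):

import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

fun main() = runBlocking {
    val mutex = Mutex()
    var counter = 0                 // shared mutable state

    // 100 coroutines on a multi-threaded dispatcher, each incrementing 1000 times.
    coroutineScope {
        repeat(100) {
            launch(Dispatchers.Default) {
                repeat(1000) {
                    mutex.withLock { counter++ }   // only one coroutine mutates at a time
                }
            }
        }
    }
    println("Counter = $counter")   // always 100000 thanks to the Mutex
}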
Besides the common patterns, Kotlin coroutines encourage a "share by communication" style. Concretely, an "actor" can be shared between coroutines, which may send messages to it and receive messages from it. Also have a look at Channels.

What is the implementation difference between dojo.aspect and dojox.lang.aspect?

While implementing aspect-oriented programming I am getting confused.
Why does Dojo have two different aspect library files?
When should I use dojo.aspect and when dojox.lang.aspect?
I have never heard about dojox.lang.aspect before, but according to the git log, its latest commit dates to 2008.
You can find an explanation why dojox.lang.aspect exists in an article by its author Eugene Lazutkin: AOP aspect of JavaScript with Dojo.
While it doesn't make much sense in some cases, we should realize that the primary role of dojo.connect() is to process DOM events.
[...]
To sum it up: events != AOP. Events cannot simulate all aspects of AOP, and AOP cannot be used in lieu of events.
So AOP in JavaScript sounds simple! Well, in reality there are several sticky places:
Implementing every single advice as a function is expensive.
Usually iterations are more efficient than recursive calls.
The size of call stack is limited.
If we have chained calls, it'll be impossible to rearrange them. So if we want to remove one advice in the middle, we are out of luck.
The “around” advice will “eat” all previously attached “before” and “after” advices changing the order of their execution. We cannot guarantee anymore that “before” advices run before all “around” advices, and so on.
Usually in order to decouple an “around” advice from the original function, the proceed() function is used. Calling it will result in calling the next-in-chain around advice method or the original method.
Our aspect is an object but in some cases we want it to be a static object, in other cases we want it to be a dynamic object created taking into account the state of the object we operate upon.
These and some other problems were resolved in dojox.lang.aspect (currently in the trunk, will be released in Dojo 1.2).
As of the latest Dojo 1.7, there is a strong tendency to differentiate between events and aspects, i.e. between dojo/on and dojo/aspect (both were implemented via dojo.connect before).
From a usage standpoint, dojo/aspect is a very simplified version of dojox/lang/aspect.
With dojo/aspect, you can create aspects corresponding to a named function (e.g. the "get" method in the "xhr" class), allowing you to create a before, after or around advice anytime xhr.get is called.
On the other hand, (IMHO) only dojox/lang/aspect provides enough features to play with AOP.
It allows you to define your pointcuts with regular expressions, therefore allowing things like "execute an around advice for any functions whose name starts with get on any object"...
You can even pass-in an array of function names or regular expressions to which your aspects will be applied.
The blog post pointed to by phusick gives good examples of that.

Futures for Objective-C?

Has anyone implemented Futures in Objective-C? I (hopefully not naively) assume that it should be reasonably simple to wrap NSInvocations in a nice API?
Mike Ash has implemented Futures using Blocks:
http://www.mikeash.com/pyblog/friday-qa-2010-02-26-futures.html
http://www.mikeash.com/pyblog/friday-qa-2010-03-05-compound-futures.html
PromiseKit seems quite popular. There's my Collapsing Futures library. There's also RXPromise. And plenty more.
Some notes comparing those three:
PromiseKit has Swift support.
Each can be installed via CocoaPods.
Each automatically flattens doubly-future values into singly-future values.
Each is thread-safe.
RXPromise and PromiseKit act like Promises/A+ from JavaScript.
They differ in how futures are controlled. In collapsing futures there's a FutureSource, which has-a future instead of is-a future. In RXPromise and PromiseKit, a future is its own source.
They differ in how futures are cancelled. In RXPromise, the consumer calls cancel on the future itself. In collapsing futures, the producer cancels a token it gave to the method that made the future. I don't know what PromiseKit does.
All have excellent documentation on each method.
I'm biased towards collapsing futures, since I wrote it and so clearly prefer the design decisions it made. I think keeping control separate is hugely important because it helps prevent self-sustaining reference cycles (not an issue in JS, but definitely an issue in Obj-C when working with blocks). I also think cancel tokens simply make things easier. On the other hand, acting like a well-known spec from a well-known language would be really nice.
MPWFoundation has futures based on Higher Order Messaging:
Assuming you have a regular computation with a message computeResult:
result = [someObject computeResult];
prefixing that message with the future message will compute the result in the background:
result = [[someObject future] computeResult];
The object in result is a proxy that will block when messages are sent to it until the value is received.
Apple's documentation on blocks in Grand Central Dispatch may be of interest.