Kotlin flows as a message queue between coroutines

I'm attempting to use Kotlin's Flow class as a message queue to transfer data from a producer (a camera) to a set of workers (image analyzers) running on separate coroutines.
The producer in my case is a camera, and will run substantially faster than the workers. Back pressure should be handled by dropping data so that the image analyzers are always operating on the latest images from the camera.
The channel-based solution below works, but it feels messy and does not give me an easy way to transform the data between the camera and the analyzers (the way flow.map would).
class ImageAnalyzer<Result> {
    fun analyze(image: Bitmap): Result {
        // perform some work on the image and return a Result. This can take a long time.
    }
}
class CameraAdapter {
    private val imageChannel = Channel<Bitmap>(capacity = Channel.RENDEZVOUS)
    private val imageReceiveMutex = Mutex()

    // additional code to make this camera work and listen to lifecycle events of the enclosing activity.

    protected fun sendImageToStream(image: CameraOutput) {
        // use channel.offer to ensure the latest images are processed
        runBlocking { imageChannel.offer(image) }
    }

    @OnLifecycleEvent(Lifecycle.Event.ON_DESTROY)
    fun onDestroy() {
        runBlocking { imageChannel.close() }
    }

    /**
     * Get the stream of images from the camera.
     */
    fun getImageStream(): ReceiveChannel<Bitmap> = imageChannel
}
class ImageProcessor<Result>(private val workers: List<ImageAnalyzer<Result>>) {
    private val analysisResults = Channel<Result>(capacity = Channel.RENDEZVOUS)
    private val cancelMutex = Mutex()

    var finished = false // this can be set elsewhere when enough images have been analyzed

    fun subscribeTo(channel: ReceiveChannel<Bitmap>, processingCoroutineScope: CoroutineScope) {
        // omit some checks to make sure this is not already subscribed
        processingCoroutineScope.launch {
            val workerScope = this
            workers.forEachIndexed { index, worker ->
                launch(Dispatchers.Default) {
                    startWorker(channel, workerScope, index, worker)
                }
            }
        }
    }

    private suspend fun startWorker(
        channel: ReceiveChannel<Bitmap>,
        workerScope: CoroutineScope,
        workerId: Int,
        worker: ImageAnalyzer<Result>
    ) {
        for (bitmap in channel) {
            analysisResults.send(worker.analyze(bitmap))
            cancelMutex.withLock {
                if (finished && workerScope.isActive) {
                    workerScope.cancel()
                }
            }
        }
    }
}
class ExampleApplication : CoroutineScope {
    private val cameraAdapter: CameraAdapter = ...
    private val imageProcessor: ImageProcessor<Result> = ...

    fun analyzeCameraStream() {
        imageProcessor.subscribeTo(cameraAdapter.getImageStream(), this)
    }
}
What's the proper way to do this? I would like to use a ChannelFlow instead of a Channel to pass data between the camera and the ImageProcessor. This would allow me to call flow.map to add metadata to the images before they're sent to the analyzers. However, when doing so, each ImageAnalyzer gets a copy of the same image instead of processing different images in parallel. Is it possible to use a Flow as a message queue rather than a broadcaster?

I got this working with flows! It was important to keep the flows backed by a channel throughout this sequence so that each worker would pick up unique images to operate on. I've confirmed this functionality through unit tests.
Here's my updated code for posterity:
class ImageAnalyzer<Result> {
    fun analyze(image: Bitmap): Result {
        // perform some work on the image and return a Result. This can take a long time.
    }
}
class CameraAdapter {
    private val imageChannel = Channel<Bitmap>(capacity = Channel.RENDEZVOUS)
    private val imageReceiveMutex = Mutex()

    // additional code to make this camera work and listen to lifecycle events of the enclosing activity.

    protected fun sendImageToStream(image: CameraOutput) {
        // use channel.offer to enforce the drop back pressure strategy
        runBlocking { imageChannel.offer(image) }
    }

    @OnLifecycleEvent(Lifecycle.Event.ON_DESTROY)
    fun onDestroy() {
        runBlocking { imageChannel.close() }
    }

    /**
     * Get the stream of images from the camera.
     */
    fun getImageStream(): Flow<Bitmap> = imageChannel.receiveAsFlow()
}
class ImageProcessor<Result>(private val workers: List<ImageAnalyzer<Result>>) {
    private val analysisResults = Channel<Result>(capacity = Channel.RENDEZVOUS)
    private val cancelMutex = Mutex()

    var finished = false // this can be set elsewhere when enough images have been analyzed

    fun subscribeTo(flow: Flow<Bitmap>, processingCoroutineScope: CoroutineScope): Job {
        // omit some checks to make sure this is not already subscribed
        return processingCoroutineScope.launch {
            val workerScope = this
            workers.forEachIndexed { index, worker ->
                launch(Dispatchers.Default) {
                    startWorker(flow, workerScope, index, worker)
                }
            }
        }
    }

    private suspend fun startWorker(
        flow: Flow<Bitmap>,
        workerScope: CoroutineScope,
        workerId: Int,
        worker: ImageAnalyzer<Result>
    ) {
        while (workerScope.isActive) {
            flow.collect { bitmap ->
                analysisResults.send(worker.analyze(bitmap))
                cancelMutex.withLock {
                    if (finished && workerScope.isActive) {
                        workerScope.cancel()
                    }
                }
            }
        }
    }

    fun getAnalysisResults(): Flow<Result> = analysisResults.receiveAsFlow()
}
class ExampleApplication : CoroutineScope {
    private val cameraAdapter: CameraAdapter = ...
    private val imageProcessor: ImageProcessor<Result> = ...

    fun analyzeCameraStream() {
        imageProcessor.subscribeTo(cameraAdapter.getImageStream(), this)
    }
}
It appears that, so long as the flow is backed by a channel, the subscribers will each get a unique image.
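As a side note on the metadata question above: operators like map can be layered on top of the channel-backed flow without losing this fan-out behavior, because every collector still pulls from the same underlying channel. Here is a minimal sketch, where TimestampedImage is a hypothetical wrapper type that is not part of the code above:

// Hypothetical wrapper type for attaching metadata to each frame.
data class TimestampedImage(val image: Bitmap, val capturedAt: Long)

// Inside CameraAdapter: decorate the channel-backed flow before exposing it.
// Each worker collecting this flow still receives a different image, because
// map only wraps the upstream receiveAsFlow(); the channel semantics remain.
fun getTimestampedImageStream(): Flow<TimestampedImage> =
    imageChannel.receiveAsFlow()
        .map { TimestampedImage(it, System.currentTimeMillis()) }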

Related

CUBA Platform push messages from backend to UI

I was wondering if it is possible to send messages from the backend (for example, a running task that receives information from an external system) to the UI. In my case it needs to go to a specific session (no broadcast) and only to a specific screen.
Plan B would be polling the backend frequently, but I was hoping for something more "realtime".
I was trying to work something out like this, but I keep getting a NotSerializableException.
@Push
class StorageAccess : Screen(), MessageListener {
    @Inject
    private lateinit var stationWSService: StationWebSocketService

    @Inject
    private lateinit var notifications: Notifications

    @Subscribe
    private fun onInit(event: InitEvent) {
    }

    @Subscribe("stationPicker")
    private fun onStationPickerValueChange(event: HasValue.ValueChangeEvent<StorageUnit>) {
        val current = AppUI.getCurrent()
        current.userSession.id ?: return
        val prevValue = event.prevValue
        if (prevValue != null) {
            stationWSService.remove(current.userSession.id)
        }
        val value = event.value ?: return
        stationWSService.listen(current.userSession.id, value, this)
    }

    override fun messageReceived(message: String) {
        val current = AppUI.getCurrent()
        current.access {
            notifications.create().withCaption(message).show()
        }
    }

    @Subscribe
    private fun onAfterDetach(event: AfterDetachEvent) {
        val current = AppUI.getCurrent()
        current.userSession.id ?: return
        stationWSService.remove(current.userSession.id)
    }
}
-- The callback interface
interface MessageListener : Serializable {
    fun messageReceived(message: String)
}
-- The listen method of my backend service
private val listeners: MutableMap<String, MutableMap<UUID, MessageListener>> = HashMap()

override fun listen(id: UUID, storageUnit: StorageUnit, callback: MessageListener) {
    val unitStationIP: String = storageUnit.unitStationIP ?: return
    if (!listeners.containsKey(unitStationIP))
        listeners[unitStationIP] = HashMap()
    listeners[unitStationIP]?.set(id, callback)
}
The exception I get is NotSerializableException: com.haulmont.cuba.web.sys.WebNotifications, and it happens while adding the listener to the backend: stationWSService.listen(current.userSession.id, value, this)
As far as I understand, this is the point where the UI sends the information to the backend, and with it the entire state of the StorageAccess class, including all its members.
Is there an elegant solution to this?
Regards
There is an add-on that solves exactly this problem: https://github.com/cuba-platform/global-events-addon

Android Kotlin Coroutines: what is the difference between flow, callbackFlow, channelFlow,... other flow constructors

I have code that should turn SharedPreferences into observable storage using a flow, so I have code like this:
internal val onKeyValueChange: Flow<String> = channelFlow {
    val callback = SharedPreferences.OnSharedPreferenceChangeListener { _, key ->
        coroutineScope.launch {
            //send(key)
            offer(key)
        }
    }
    sharedPreferences.registerOnSharedPreferenceChangeListener(callback)
    awaitClose {
        sharedPreferences.unregisterOnSharedPreferenceChangeListener(callback)
    }
}
or this
internal val onKeyValueChange: Flow<String> = callbackFlow {
    val callback = SharedPreferences.OnSharedPreferenceChangeListener { _, key ->
        coroutineScope.launch {
            send(key)
            //offer(key)
        }
    }
    sharedPreferences.registerOnSharedPreferenceChangeListener(callback)
    awaitClose {
        sharedPreferences.unregisterOnSharedPreferenceChangeListener(callback)
    }
}
Then I observe these preferences for token, userId, and companyId and log in, but something odd happens: I need to build the app three times. The first time, changing the token doesn't cause tokenFlow to emit anything; the second time, the new userId doesn't cause userIdFlow to emit anything; after the third login I can log out and log in and it works. On logout I clear all three properties stored in prefs: token, userId, companyId.
For callbackFlow:
You cannot use emit() inside a callback the way you can in a plain flow { } builder, because emit() is a suspend function. callbackFlow therefore offers you a synchronized, non-suspending way to do it: trySend().
Example:
fun observeData() = flow {
    myAwesomeInterface.addListener { result ->
        emit(result) // NOT ALLOWED
    }
}
So, coroutines offer you the option of callbackFlow:
fun observeData() = callbackFlow {
    myAwesomeInterface.addListener { result ->
        trySend(result) // ALLOWED
    }
    awaitClose { myAwesomeInterface.removeListener() }
}
For channelFlow:
The main difference between it and the basic Flow is described in the documentation:
A channel with the default buffer size is used. Use the buffer operator on the resulting flow to specify a user-defined value and to control what happens when data is produced faster than consumed, i.e. to control the back-pressure behavior.
trySend() still stands for the same thing here: it is just a synchronized (non-suspending) alternative to emit() or send().
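For instance, here is a minimal sketch of the buffer operator from the quote above being used to drop data when the producer outpaces the collector; the capacity and overflow strategy are just illustrative choices:

// A channelFlow producer that emits faster than a typical collector consumes.
val fastTicker: Flow<Int> = channelFlow {
    var i = 0
    while (isActive) {
        send(i++)
        delay(10)
    }
}
    // keep only the most recent value when the collector falls behind;
    // conflate() is a similar shorthand that always keeps just the latest value
    .buffer(capacity = 1, onBufferOverflow = BufferOverflow.DROP_OLDEST)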
I suggest you check Roman Elizarov's blog for more detailed information, especially this post.
Regarding your code: with callbackFlow you won't need the coroutine launch:
coroutineScope.launch {
    send(key)
    //trySend(key)
}
Just use trySend() directly.
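Applied to your SharedPreferences listener, a minimal sketch of that correction looks like this (assuming sharedPreferences is in scope, as in your snippet; the null check on key is just defensive):

internal val onKeyValueChange: Flow<String> = callbackFlow {
    val callback = SharedPreferences.OnSharedPreferenceChangeListener { _, key ->
        // trySend can be called directly from the non-suspending listener callback
        key?.let { trySend(it) }
    }
    sharedPreferences.registerOnSharedPreferenceChangeListener(callback)
    awaitClose {
        sharedPreferences.unregisterOnSharedPreferenceChangeListener(callback)
    }
}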
Another, more concrete example:
private fun test() {
    lifecycleScope.launch {
        someFlow().collectLatest {
            Log.d("TAG", "Finally we received the result: $it")
            // Cancel this listener, so it will not be subscribed anymore to the callbackFlow. awaitClose() will be triggered.
            // cancel()
        }
    }
}

/**
 * Define a callbackFlow.
 */
private fun someFlow() = callbackFlow {
    // A dummy class which runs some business logic and sends results back to listeners through the ApiCallback methods.
    val service = ServiceTest() // a REST API class, for example

    // A simple callback interface which will be called from ServiceTest.
    val callback = object : ApiCallback {
        override fun someApiMethod(data: String) {
            // Sending method used by callbackFlow. In a plain Flow we have emit(...); in a channelFlow we have send(...).
            trySend(data)
        }

        override fun anotherApiMethod(data: String) {
            // Sending method used by callbackFlow. In a plain Flow we have emit(...); in a channelFlow we have send(...).
            trySend(data)
        }
    }

    // Register the ApiCallback for later usage by ServiceTest.
    service.register(callback)

    // Dummy sample usage of the callback flow.
    service.execute(1)
    service.execute(2)
    service.execute(3)
    service.execute(4)

    // When a listener subscribed through .collectLatest {} calls cancel(), awaitClose will be executed.
    awaitClose {
        service.unregister()
    }
}
interface ApiCallback {
    fun someApiMethod(data: String)
    fun anotherApiMethod(data: String)
}

class ServiceTest {
    private var callback: ApiCallback? = null

    fun unregister() {
        callback = null
        Log.d("TAG", "Unregister the callback in the service class")
    }

    fun register(callback: ApiCallback) {
        Log.d("TAG", "Register the callback in the service class")
        this.callback = callback
    }

    fun execute(value: Int) {
        CoroutineScope(Dispatchers.IO).launch {
            if (value < 2) {
                callback?.someApiMethod("message sent through someApiMethod: $value.")
            } else {
                callback?.anotherApiMethod("message sent through anotherApiMethod: $value.")
            }
        }
    }
}

Unit testing Kotlin's ConflatedBroadcastChannel behavior

In the new project I'm currently working on I have no RxJava dependency at all, because until now I didn't need it - coroutines solve the threading problem pretty gracefully.
At this point I stumbled upon a requirement for BehaviorSubject-like behavior, where one can subscribe to a stream of data and receive the latest value upon subscription. As I've learned, Channels provide very similar behavior in Kotlin, so I decided to give them a try.
From this article I've learned that ConflatedBroadcastChannel is the type of channel that mimics BehaviorSubject, so I declared the following:
class ChannelSender {
    val channel = ConflatedBroadcastChannel<String>()

    fun sendToChannel(someString: String) {
        GlobalScope.launch(Dispatchers.Main) { channel.send(someString) }
    }
}
For listening to the channel I do this:
class ChannelListener(val channelSender: ChannelSender) {
    fun listenToChannel() {
        channelSender.channel.consumeEach { someString ->
            if (someString == "A") foo.perform()
            else bar.perform()
        }
    }
}
This works as expected, but at this point I'm having difficulties understanding how to unit test ChannelListener.
I've tried to find something related here, but none of the example-channel-**.kt classes were helpful.
Any help, suggestion or correction related to my incorrect assumptions is appreciated. Thanks.
With Alexey's help I ended up with the following code, which answers the question:
class ChannelListenerTest {
    private val channelSender: ChannelSender = mock()
    private val sut = ChannelListener(channelSender)
    private val broadcastChannel = ConflatedBroadcastChannel<String>()
    private val timeLimit = 1_000L
    private val endMarker = "end"

    @Test
    fun `some description here`() = runBlocking {
        whenever(channelSender.channel).thenReturn(broadcastChannel)

        val sender = launch(Dispatchers.Default) {
            broadcastChannel.offer("A")
            yield()
        }
        val receiver = launch(Dispatchers.Default) {
            while (isActive) {
                val i = waitForEvent()
                if (i == endMarker) break
                yield()
            }
        }
        try {
            withTimeout(timeLimit) {
                sut.listenToChannel()
                sender.join()
                broadcastChannel.offer(endMarker) // last event to signal receivers termination
                receiver.join()
            }
            verify(foo).perform()
        } catch (e: CancellationException) {
            println("Test timed out $e")
        }
    }

    private suspend fun waitForEvent(): String =
        with(broadcastChannel.openSubscription()) {
            val value = receive()
            cancel()
            value
        }
}

produce<Type> vs Channel<Type>()

Trying to understand channels. I want to channelify the Android BluetoothLeScanner. Why does this work:
fun startScan(filters: List<ScanFilter>, settings: ScanSettings = defaultSettings): ReceiveChannel<ScanResult?> {
    val channel = Channel<ScanResult>()
    scanCallback = object : ScanCallback() {
        override fun onScanResult(callbackType: Int, result: ScanResult) {
            channel.offer(result)
        }
    }
    scanner.startScan(filters, settings, scanCallback)
    return channel
}
But not this:
fun startScan(scope: CoroutineScope, filters: List<ScanFilter>, settings: ScanSettings = defaultSettings): ReceiveChannel<ScanResult?> = scope.produce {
    scanCallback = object : ScanCallback() {
        override fun onScanResult(callbackType: Int, result: ScanResult) {
            offer(result)
        }
    }
    scanner.startScan(filters, settings, scanCallback)
}
It tells me the channel was closed when it tries to call offer for the first time.
EDIT1: According to the docs: "The channel is closed when the coroutine completes." That makes sense. I know we can use suspendCoroutine with resume as a one-shot callback replacement. This, however, is a listener/stream situation; I don't want the coroutine to complete.
Using produce, you introduce a scope for your Channel. This means the code that produces the items streamed over the channel can be cancelled.
It also means that the lifetime of your Channel starts when the produce lambda starts and ends when that lambda ends.
In your example, the lambda of your produce call ends almost immediately, which means your Channel is closed almost immediately.
Change your code to something like this:
fun CoroutineScope.startScan(filters: List<ScanFilter>, settings: ScanSettings = defaultSettings): ReceiveChannel<ScanResult?> = produce {
    scanCallback = object : ScanCallback() {
        override fun onScanResult(callbackType: Int, result: ScanResult) {
            offer(result)
        }
    }
    scanner.startScan(filters, settings, scanCallback)

    // now suspend this lambda forever (until its scope is canceled)
    suspendCancellableCoroutine<Nothing> { cont ->
        cont.invokeOnCancellation {
            scanner.stopScan(...)
        }
    }
}
...
val channel = scope.startScan(filter)
...
...
scope.cancel() // cancels the channel and stops the scanner.
I added the line suspendCancellableCoroutine<Nothing> { ... } to make it suspend 'forever'.
Update: Using produce and handling errors in a structured way (allows for Structured Concurrency):
fun CoroutineScope.startScan(filters: List<ScanFilter>, settings: ScanSettings = defaultSettings): ReceiveChannel<ScanResult?> = produce {
    // Suspend this lambda forever (until its scope is canceled)
    suspendCancellableCoroutine<Nothing> { cont ->
        val scanCallback = object : ScanCallback() {
            override fun onScanResult(callbackType: Int, result: ScanResult) {
                offer(result)
            }

            override fun onScanFailed(errorCode: Int) {
                cont.resumeWithException(MyScanException(errorCode))
            }
        }
        scanner.startScan(filters, settings, scanCallback)
        cont.invokeOnCancellation {
            scanner.stopScan(...)
        }
    }
}
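A short usage sketch for this updated version (the filters value and the enclosing scope are assumed to exist, as in the earlier usage example). A scan failure closes the channel with MyScanException, so the receive loop ends with that exception, and cancelling the job stops the scanner through invokeOnCancellation:

val scanJob = scope.launch {
    startScan(filters).consumeEach { result ->
        result?.let { /* process each ScanResult here */ }
    }
}

// later, e.g. when the screen goes away:
scanJob.cancel() // cancels the produce coroutine, which stops the scanner via invokeOnCancellation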

How to achieve mutex on method in Kotlin and prioritize one thread before another?

I have two Kafka topics, my_priorized_topic and my_not_so_priorized_topic. I want mutual exclusion on EventProcessor.doLogic, and messages from my_priorized_topic should always be handled before messages from my_not_so_priorized_topic.
Can anyone give me some pointers on how to solve this in Kotlin, maybe with coroutines?
class EventProcessor {
    fun doLogic(message: String) {
        ... // code which cannot be parallelized
    }
}

class KafkaConsumers(private val eventProcessor: EventProcessor) {
    @KafkaConsumer(topic = "my_priorized_topic")
    fun consumeFromPriorizedTopic(message: String) {
        eventProcessor.doLogic(message)
    }

    @KafkaConsumer(topic = "my_not_so_priorized_topic")
    fun consumeFromNotSoPrioritizedTopic(message: String) {
        eventProcessor.doLogic(message)
    }
}
You could create two Channels, one for the high-priority and one for the low-priority tasks. Then, to consume the events from the channels, use coroutines' select expression and put the high-priority channel's clause first.
Example (the String is the event):
fun process(value: String) {
    // do what you want with the event
}

suspend fun selectFromHighAndLow(highPriorityChannel: ReceiveChannel<String>, lowPriorityChannel: ReceiveChannel<String>): String =
    select<String> {
        highPriorityChannel.onReceive { value ->
            value
        }
        lowPriorityChannel.onReceive { value ->
            value
        }
    }

val highPriorityChannel = Channel<String>()
val lowPriorityChannel = Channel<String>()

while (true) {
    process(selectFromHighAndLow(highPriorityChannel, lowPriorityChannel))
}
To send stuff to those channels, you can use channel.send(event).
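Here is a minimal sketch of how the pieces could be wired together, using the selectFromHighAndLow helper above; the unlimited channel capacity, the injected scope, and the use of trySend from the (potentially blocking) Kafka listener threads are assumptions for illustration:

class KafkaConsumers(private val eventProcessor: EventProcessor, scope: CoroutineScope) {
    // Unlimited capacity so trySend from the listener threads never drops a message.
    private val highPriorityChannel = Channel<String>(Channel.UNLIMITED)
    private val lowPriorityChannel = Channel<String>(Channel.UNLIMITED)

    init {
        // A single consumer coroutine: doLogic is never run concurrently (the mutual-exclusion
        // requirement), and select is biased toward its first clause, so the prioritized topic
        // wins whenever both channels have messages ready.
        scope.launch {
            while (isActive) {
                eventProcessor.doLogic(selectFromHighAndLow(highPriorityChannel, lowPriorityChannel))
            }
        }
    }

    @KafkaConsumer(topic = "my_priorized_topic")
    fun consumeFromPriorizedTopic(message: String) {
        highPriorityChannel.trySend(message)
    }

    @KafkaConsumer(topic = "my_not_so_priorized_topic")
    fun consumeFromNotSoPrioritizedTopic(message: String) {
        lowPriorityChannel.trySend(message)
    }
}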