Trying to understand channels. I want to channelify the Android BluetoothLeScanner. Why does this work:
fun startScan(filters: List<ScanFilter>, settings: ScanSettings = defaultSettings): ReceiveChannel<ScanResult?> {
    val channel = Channel<ScanResult>()
    scanCallback = object : ScanCallback() {
        override fun onScanResult(callbackType: Int, result: ScanResult) {
            channel.offer(result)
        }
    }
    scanner.startScan(filters, settings, scanCallback)
    return channel
}
But not this:
fun startScan(scope: CoroutineScope, filters: List<ScanFilter>, settings: ScanSettings = defaultSettings): ReceiveChannel<ScanResult?> = scope.produce {
    scanCallback = object : ScanCallback() {
        override fun onScanResult(callbackType: Int, result: ScanResult) {
            offer(result)
        }
    }
    scanner.startScan(filters, settings, scanCallback)
}
It tells me the Channel was closed when it tries to call offer for the first time.
EDIT 1: According to the docs, "The channel is closed when the coroutine completes," which makes sense. I know we can use suspendCoroutine with resume as a one-shot callback replacement. This, however, is a listener/stream situation: I don't want the coroutine to complete.
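For reference, the one-shot pattern I mean looks roughly like this (a sketch against a made-up single-result API, SingleShotApi, not the scanner):

import kotlin.coroutines.resume
import kotlin.coroutines.suspendCoroutine

// Hypothetical one-shot API, purely for illustration.
fun interface SingleShotApi {
    fun request(onResult: (ScanResult) -> Unit)
}

// The one-shot pattern: resume exactly once, then the coroutine completes.
suspend fun SingleShotApi.await(): ScanResult =
    suspendCoroutine { cont ->
        request { result -> cont.resume(result) }
    }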
Using produce, you introduce a scope for your Channel. This means the code that produces the items streamed over the channel can be cancelled.
It also means that the lifetime of your Channel starts when the lambda passed to produce starts and ends when that lambda ends.
In your example, the lambda of your produce call ends almost immediately, which means your Channel is closed almost immediately.
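You can see this with a minimal, self-contained sketch (no Bluetooth involved):

import kotlinx.coroutines.*
import kotlinx.coroutines.channels.*

fun main() = runBlocking {
    // The produce lambda registers nothing and returns immediately,
    // so the returned channel is closed right after the producer runs.
    val channel = produce<Int> { }
    val attempt = runCatching { channel.receive() }
    println(attempt.exceptionOrNull()) // ClosedReceiveChannelException: Channel was closed
}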
Change your code to something like this:
fun CoroutineScope.startScan(filters: List<ScanFilter>, settings: ScanSettings = defaultSettings): ReceiveChannel<ScanResult?> = produce {
    scanCallback = object : ScanCallback() {
        override fun onScanResult(callbackType: Int, result: ScanResult) {
            offer(result)
        }
    }
    scanner.startScan(filters, settings, scanCallback)

    // now suspend this lambda forever (until its scope is cancelled)
    suspendCancellableCoroutine<Nothing> { cont ->
        cont.invokeOnCancellation {
            scanner.stopScan(...)
        }
    }
}
...
val channel = scope.startScan(filter)
...
...
scope.cancel() // cancels the channel and stops the scanner.
I added the line suspendCancellableCoroutine<Nothing> { ... } to make it suspend 'forever'.
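As a side note, on a reasonably recent kotlinx.coroutines version the "suspend until cancelled, then clean up" part can also be written with awaitClose inside produce. This is only a sketch and only covers cleanup; it does not give you the error propagation shown in the update below:

fun CoroutineScope.startScan(filters: List<ScanFilter>, settings: ScanSettings = defaultSettings): ReceiveChannel<ScanResult?> = produce {
    val callback = object : ScanCallback() {
        override fun onScanResult(callbackType: Int, result: ScanResult) {
            offer(result) // trySend(result) on newer versions
        }
    }
    scanner.startScan(filters, settings, callback)
    // suspends until the channel is cancelled or closed, then runs the block
    awaitClose { scanner.stopScan(callback) }
}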
Update: Using produce and handling errors in a structured way (allows for Structured Concurrency):
fun CoroutineScope.startScan(filters: List<ScanFilter>, settings: ScanSettings = defaultSettings): ReceiveChannel<ScanResult?> = produce {
    // Suspend this lambda forever (until its scope is cancelled)
    suspendCancellableCoroutine<Nothing> { cont ->
        val scanCallback = object : ScanCallback() {
            override fun onScanResult(callbackType: Int, result: ScanResult) {
                offer(result)
            }

            override fun onScanFailed(errorCode: Int) {
                cont.resumeWithException(MyScanException(errorCode))
            }
        }
        scanner.startScan(filters, settings, scanCallback)
        cont.invokeOnCancellation {
            scanner.stopScan(...)
        }
    }
}
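A consumption sketch (handle is a placeholder): receiving is just iterating over the returned channel. If onScanFailed fires, the channel is closed with MyScanException and, through structured concurrency, the failure also cancels the scope that called startScan.

// Hypothetical consumer, just to show the receiving side.
val channel = scope.startScan(filters)
scope.launch {
    for (result in channel) {   // rethrows the close cause if the scan fails
        handle(result)          // placeholder for whatever you do with a ScanResult
    }
}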
Can someone tell me why this Job.join() never completes?
private fun handleNotification() {
    val job = Job()
    val scope = viewModelScope + job
    scope.launch {
        getUserInfoUseCase().collectLatest { result ->
            _state.value = when (result) {
                is ResultWrapper.Success -> _state.value.copy(user = result.value)
                is ResultWrapper.Error -> ...
                ResultWrapper.NetworkError -> ...
            }
        }
    }
    viewModelScope.launch {
        job.children.forEach { it.join() }
        Timber.d("NOTIFICATION :: User = ${state.value.user}")
        Timber.d("NOTIFICATION :: Notification = $notification")
    }
}
I will want to add some extra work after it. Could the cause be that getUserInfoUseCase() is still collecting?
If getUserInfoUseCase() is like most real-world application Flows, it is an infinite flow. That means that if you call collect on it, collect suspends forever. Therefore, calling join on a Job that is waiting for such a collect call will also suspend forever.
For example, a Flow returned by Room monitors the database for changes forever. It's always possible some future change could be coming that will result in another emission, so the Flow has no end.
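A minimal sketch of the effect (nothing Room-specific, just an endless flow):

import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

fun main() = runBlocking {
    val endless = flow {
        var i = 0
        while (true) {
            emit(i++)
            delay(100)
        }
    }
    val job = launch { endless.collect { } } // collect suspends forever: the flow never completes
    job.join() // therefore join also suspends forever, like in the question
}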
If you just want to wait for the flow's first emission, maybe you could use a channel like this.
private fun handleNotification() {
    val firstFlowItemChannel = Channel<Unit>(1)
    viewModelScope.launch {
        getUserInfoUseCase().collectLatest { result ->
            _state.value = when (result) {
                is ResultWrapper.Success -> _state.value.copy(user = result.value)
                is ResultWrapper.Error -> ...
                ResultWrapper.NetworkError -> ...
            }
            firstFlowItemChannel.send(Unit)
        }
    }
    viewModelScope.launch {
        firstFlowItemChannel.receive()
        Timber.d("NOTIFICATION :: User = ${state.value.user}")
        Timber.d("NOTIFICATION :: Notification = $notification")
    }
}
Using Kotlin, I have this code:
fun fetchRemoteDataApi(): Single<RemoteDataResponse> = networkApi.getData()

// it is just a Retrofit call
@GET(".../api/getData")
fun getData(): Single<RemoteDataResponse>

fun mergeApiWithDb(): Completable = fetchRemoteDataApi()
    .zipWith(localDao.getAll())
    .flatMapCompletable { (remoteData, localData) ->
        doMerge(remoteData, localData) // <== returns a Completable
    }
The code flow:
val mergeApiDbCall = mergeApiWithDb().onErrorComplete().cache() // <=== would like to do some inspection at this level

PublishSubject.create<Unit>().toFlowable(BackpressureStrategy.LATEST)
    .compose(Transformers.flowableIO())
    .switchMap {
        // merge DB with api, or local default value first then listen to DB change
        mergeApiDbCall.andThen(listAllTopics())
            .concatMapSingle { topics -> remoteTopicUsers.map { topics to it } }
    }
    .flatMapCompletable { (topics, user) ->
        // do something, return Completable
    }
    .subscribe({
        ...
    }, { throwable ->
        ...
    })
And when making the call
val mergeApiDbCall = mergeApiWithDb().onErrorComplete().cache()
the question is: if I would like to inspect the Single<RemoteDataResponse> returned from fetchRemoteDataApi() (i.e. use Log.i(...) to print out the content of RemoteDataResponse, etc.), in either the error or the success case, how do I do it?
/// the functions
fun listAllTopics(): Flowable<List<String>> = localRepoDao.getAllTopics()

// which is a DAO:
@Query("SELECT topic FROM RemoteDataTable WHERE read = 1")
fun getAllTopics(): Flowable<List<String>>

///
private val remoteTopicUsers: Single<List<User>>
    get() {
        return Single.create {
            networkApi.getTopicUsers(object : ICallback.IGetTopicUsersCallback {
                override fun onSuccess(result: List<User>) = it.onSuccess(result)
                override fun onError(errorCode: Int, errorMsg: String?) = it.onError(Exception(errorCode, errorMsg))
            })
        }
    }
You cannot extract information about elements from the Completable. Though you can use doOnComplete() on a Completable, it will not give you any information about the elements.
You can inspect elements if you call doOnSuccess() on your Single, so you need to incorporate this call earlier in your chain. To inspect errors you can use doOnError() on both Completable and Single.
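For example, a sketch of where such hooks could go in your code (the log tag and messages are placeholders):

fun mergeApiWithDb(): Completable = fetchRemoteDataApi()
    .doOnSuccess { response -> Log.i("MergeApiDb", "remote data: $response") }      // inspect the Single's value
    .doOnError { throwable -> Log.i("MergeApiDb", "remote call failed", throwable) } // inspect the error
    .zipWith(localDao.getAll())
    .flatMapCompletable { (remoteData, localData) ->
        doMerge(remoteData, localData)
    }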
I'm attempting to use Kotlin's Flow class as a message queue to transfer data from a producer (a camera) to a set of workers (image analyzers) running on separate coroutines.
The producer in my case is a camera, and will run substantially faster than the workers. Back pressure should be handled by dropping data so that the image analyzers are always operating on the latest images from the camera.
When using channels, this solution works, but seems messy and does not provide an easy way for me to translate the data between the camera and the analyzers (like flow.map).
class ImageAnalyzer<Result> {
    fun analyze(image: Bitmap): Result {
        // perform some work on the image and return a Result. This can take a long time.
    }
}

class CameraAdapter {
    private val imageChannel = Channel<Bitmap>(capacity = Channel.RENDEZVOUS)
    private val imageReceiveMutex = Mutex()

    // additional code to make this camera work and listen to lifecycle events of the enclosing activity.

    protected fun sendImageToStream(image: CameraOutput) {
        // use channel.offer to ensure the latest images are processed
        runBlocking { imageChannel.offer(image) }
    }

    @OnLifecycleEvent(Lifecycle.Event.ON_DESTROY)
    fun onDestroy() {
        runBlocking { imageChannel.close() }
    }

    /**
     * Get the stream of images from the camera.
     */
    fun getImageStream(): ReceiveChannel<Bitmap> = imageChannel
}

class ImageProcessor<Result>(private val workers: List<ImageAnalyzer<Result>>) {
    private val analysisResults = Channel<Result>(capacity = Channel.RENDEZVOUS)
    private val cancelMutex = Mutex()
    var finished = false // this can be set elsewhere when enough images have been analyzed

    fun subscribeTo(channel: ReceiveChannel<Bitmap>, processingCoroutineScope: CoroutineScope) {
        // omit some checks to make sure this is not already subscribed
        processingCoroutineScope.launch {
            val workerScope = this
            workers.forEachIndexed { index, worker ->
                launch(Dispatchers.Default) {
                    startWorker(channel, workerScope, index, worker)
                }
            }
        }
    }

    private suspend fun startWorker(
        channel: ReceiveChannel<Bitmap>,
        workerScope: CoroutineScope,
        workerId: Int,
        worker: ImageAnalyzer<Result>
    ) {
        for (bitmap in channel) {
            analysisResults.send(worker.analyze(bitmap))
            cancelMutex.withLock {
                if (finished && workerScope.isActive) {
                    workerScope.cancel()
                }
            }
        }
    }
}

class ExampleApplication : CoroutineScope {
    private val cameraAdapter: CameraAdapter = ...
    private val imageProcessor: ImageProcessor<Result> = ...

    fun analyzeCameraStream() {
        imageProcessor.subscribeTo(cameraAdapter.getImageStream(), this)
    }
}
What's the proper way to do this? I would like to use a ChannelFlow instead of a Channel to pass data between the camera and the ImageProcessor. This would allow me to call flow.map to add metadata to the images before they're sent to the analyzers. However, when doing so, each ImageAnalyzer gets a copy of the same image instead of processing different images in parallel. Is it possible to use a Flow as a message queue rather than a broadcaster?
I got this working with flows! It was important to keep the flows backed by a channel throughout this sequence so that each worker would pick up unique images to operate on. I've confirmed this functionality through unit tests.
Here's my updated code for posterity:
class ImageAnalyzer<Result> {
    fun analyze(image: Bitmap): Result {
        // perform some work on the image and return a Result. This can take a long time.
    }
}

class CameraAdapter {
    private val imageChannel = Channel<Bitmap>(capacity = Channel.RENDEZVOUS)
    private val imageReceiveMutex = Mutex()

    // additional code to make this camera work and listen to lifecycle events of the enclosing activity.

    protected fun sendImageToStream(image: CameraOutput) {
        // use channel.offer to enforce the drop back pressure strategy
        runBlocking { imageChannel.offer(image) }
    }

    @OnLifecycleEvent(Lifecycle.Event.ON_DESTROY)
    fun onDestroy() {
        runBlocking { imageChannel.close() }
    }

    /**
     * Get the stream of images from the camera.
     */
    fun getImageStream(): Flow<Bitmap> = imageChannel.receiveAsFlow()
}

class ImageProcessor<Result>(private val workers: List<ImageAnalyzer<Result>>) {
    private val analysisResults = Channel<Result>(capacity = Channel.RENDEZVOUS)
    private val cancelMutex = Mutex()
    var finished = false // this can be set elsewhere when enough images have been analyzed

    fun subscribeTo(flow: Flow<Bitmap>, processingCoroutineScope: CoroutineScope): Job {
        // omit some checks to make sure this is not already subscribed
        return processingCoroutineScope.launch {
            val workerScope = this
            workers.forEachIndexed { index, worker ->
                launch(Dispatchers.Default) {
                    startWorker(flow, workerScope, index, worker)
                }
            }
        }
    }

    private suspend fun startWorker(
        flow: Flow<Bitmap>,
        workerScope: CoroutineScope,
        workerId: Int,
        worker: ImageAnalyzer<Result>
    ) {
        while (workerScope.isActive) {
            flow.collect { bitmap ->
                analysisResults.send(worker.analyze(bitmap))
                cancelMutex.withLock {
                    if (finished && workerScope.isActive) {
                        workerScope.cancel()
                    }
                }
            }
        }
    }

    fun getAnalysisResults(): Flow<Result> = analysisResults.receiveAsFlow()
}

class ExampleApplication : CoroutineScope {
    private val cameraAdapter: CameraAdapter = ...
    private val imageProcessor: ImageProcessor<Result> = ...

    fun analyzeCameraStream() {
        imageProcessor.subscribeTo(cameraAdapter.getImageStream(), this)
    }
}
It appears that, so long as the flow is backed by a channel, the subscribers will each get a unique image.
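A small standalone sketch of that fan-out behaviour (not the camera code, just a channel-backed flow with two collectors):

import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.flow.*

fun main() = runBlocking {
    val channel = Channel<Int>()
    val sharedWork = channel.receiveAsFlow()

    // Two collectors on the same channel-backed flow: each element goes to exactly one of them.
    repeat(2) { id ->
        launch {
            sharedWork.collect { println("worker $id got $it") }
        }
    }

    (1..6).forEach { channel.send(it) }
    channel.close() // lets both collect calls complete
}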
I have code that should turn SharedPreferences into observable storage with Flow, so I have code like this:
internal val onKeyValueChange: Flow<String> = channelFlow {
    val callback = SharedPreferences.OnSharedPreferenceChangeListener { _, key ->
        coroutineScope.launch {
            //send(key)
            offer(key)
        }
    }
    sharedPreferences.registerOnSharedPreferenceChangeListener(callback)
    awaitClose {
        sharedPreferences.unregisterOnSharedPreferenceChangeListener(callback)
    }
}
or this
internal val onKeyValueChange: Flow<String> = callbackFlow {
    val callback = SharedPreferences.OnSharedPreferenceChangeListener { _, key ->
        coroutineScope.launch {
            send(key)
            //offer(key)
        }
    }
    sharedPreferences.registerOnSharedPreferenceChangeListener(callback)
    awaitClose {
        sharedPreferences.unregisterOnSharedPreferenceChangeListener(callback)
    }
}
Then I observe these preferences for token, userId, and companyId and log in, but there is an odd thing: I need to build the app three times. The first time, changing the token does not cause tokenFlow to emit anything; the second time, a new userId does not cause userIdFlow to emit anything; after the 3rd login I can log out/log in and it works. On logout I clear all 3 properties stored in prefs: token, userId, companyId.
For callbackFlow:
You cannot use emit() inside a callback the way you would in a simple flow { } builder, because emit() is a suspend function. That is why callbackFlow offers you a synchronized, non-suspending way to do it: trySend().
Example:
fun observeData() = flow {
    myAwesomeInterface.addListener { result ->
        emit(result) // NOT ALLOWED
    }
}
So, coroutines offer you the option of callbackFlow:
fun observeData() = callbackFlow {
    myAwesomeInterface.addListener { result ->
        trySend(result) // ALLOWED
    }
    awaitClose { myAwesomeInterface.removeListener() }
}
For channelFlow:
The main difference between it and the basic Flow is described in the documentation:
"A channel with the default buffer size is used. Use the buffer operator on the resulting flow to specify a user-defined value and to control what happens when data is produced faster than consumed, i.e. to control the back-pressure behavior."
The trySend() still stands for the same thing. It's just a synchronized (non-suspending) way to emit() or send().
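For example, a sketch of tuning that back-pressure behaviour on the consuming side (the flow name and scope are taken from your snippet and assumed to be in scope):

scope.launch {
    onKeyValueChange
        .conflate()   // same as buffer(Channel.CONFLATED): keep only the latest key when the consumer is slow
        .collect { key -> println("changed: $key") }
}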
I suggest you check Roman Elizarov's blog for more detailed information, especially this post.
Regarding your code: with callbackFlow you won't be needing a coroutine launch:
coroutineScope.launch {
    send(key)
    //trySend(key)
}
Just use trySend().
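Putting that together, your property could look roughly like this (a sketch based on your snippet, assuming a kotlinx.coroutines version with trySend and that sharedPreferences is in scope):

internal val onKeyValueChange: Flow<String> = callbackFlow {
    val callback = SharedPreferences.OnSharedPreferenceChangeListener { _, key ->
        trySend(key) // non-suspending, safe to call directly from the listener
    }
    sharedPreferences.registerOnSharedPreferenceChangeListener(callback)
    awaitClose {
        sharedPreferences.unregisterOnSharedPreferenceChangeListener(callback)
    }
}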
Another example, maybe more concrete:
private fun test() {
    lifecycleScope.launch {
        someFlow().collectLatest {
            Log.d("TAG", "Finally we received the result: $it")
            // Cancel this listener so it will no longer be subscribed to the callbackFlow. awaitClose() will be triggered.
            // cancel()
        }
    }
}

/**
 * Define a callbackFlow.
 */
private fun someFlow() = callbackFlow {
    // A dummy class which runs some business logic and sends the result back to listeners through the ApiCallback methods.
    val service = ServiceTest() // a REST API class, for example

    // A simple callback interface which will be called from ServiceTest.
    val callback = object : ApiCallback {
        override fun someApiMethod(data: String) {
            // Sending method used by callbackFlow. In a Flow we have emit(...); for a ChannelFlow we have send(...).
            trySend(data)
        }

        override fun anotherApiMethod(data: String) {
            // Sending method used by callbackFlow. In a Flow we have emit(...); for a ChannelFlow we have send(...).
            trySend(data)
        }
    }

    // Register the ApiCallback for later usage by ServiceTest.
    service.register(callback)

    // Dummy sample usage of the callback flow.
    service.execute(1)
    service.execute(2)
    service.execute(3)
    service.execute(4)

    // When a listener subscribed through .collectLatest {} calls cancel(), the awaitClose block gets executed.
    awaitClose {
        service.unregister()
    }
}

interface ApiCallback {
    fun someApiMethod(data: String)
    fun anotherApiMethod(data: String)
}

class ServiceTest {
    private var callback: ApiCallback? = null

    fun unregister() {
        callback = null
        Log.d("TAG", "Unregister the callback in the service class")
    }

    fun register(callback: ApiCallback) {
        Log.d("TAG", "Register the callback in the service class")
        this.callback = callback
    }

    fun execute(value: Int) {
        CoroutineScope(Dispatchers.IO).launch {
            if (value < 2) {
                callback?.someApiMethod("message sent through someApiMethod: $value.")
            } else {
                callback?.anotherApiMethod("message sent through anotherApiMethod: $value.")
            }
        }
    }
}
I'm trying to learn a bit of functional programming using Kotlin and Arrow, and along the way I've read some blog posts like this one: https://jorgecastillo.dev/kotlin-fp-1-monad-stack, which is good. I understand the main idea, but when creating a program, I can't figure out how to run it.
Let me be more explicit:
I have the following piece of code:
typealias EitherIO<A, B> = EitherT<ForIO, A, B>

sealed class UserError(
    val message: String,
    val status: Int
) {
    object AuthenticationError : UserError(HttpStatus.UNAUTHORIZED.reasonPhrase, HttpStatus.UNAUTHORIZED.value())
    object UserNotFound : UserError(HttpStatus.NOT_FOUND.reasonPhrase, HttpStatus.NOT_FOUND.value())
    object InternalServerError : UserError(HttpStatus.INTERNAL_SERVER_ERROR.reasonPhrase, HttpStatus.INTERNAL_SERVER_ERROR.value())
}

@Component
class UserAdapter(
    private val myAccountClient: MyAccountClient
) {
    @Lazy
    @Inject
    lateinit var subscriberRepository: SubscriberRepository

    fun getDomainUser(ssoId: Long): EitherIO<UserError, User?> {
        val io = IO.fx {
            val userResource = getUserResourcesBySsoId(ssoId, myAccountClient).bind()
            userResource.fold(
                { error -> Either.Left(error) },
                { success ->
                    Either.right(composeDomainUserWithSubscribers(success, getSubscribersForUserResource(success, subscriberRepository).bind()))
                })
        }
        return EitherIO(io)
    }

    fun composeDomainUserWithSubscribers(userResource: UserResource, subscribers: Option<Subscribers>): User? {
        return subscribers.map { userResource.toDomainUser(it) }.orNull()
    }
}

private fun getSubscribersForUserResource(userResource: UserResource, subscriberRepository: SubscriberRepository): IO<Option<Subscribers>> {
    return IO {
        val msisdnList = userResource.getMsisdnList()
        Option.invoke(subscriberRepository.findAllByMsisdnInAndDeletedIsFalse(msisdnList).associateBy(Subscriber::msisdn))
    }
}

private fun getUserResourcesBySsoId(ssoId: Long, myAccountClient: MyAccountClient): IO<Either<UserError, UserResource>> {
    return IO {
        val response = myAccountClient.getUserBySsoId(ssoId)
        if (response.isSuccessful) {
            val userResource = JacksonUtils.fromJsonToObject(response.body()?.string()!!, UserResource::class.java)
            Either.Right(userResource)
        } else {
            when (response.code()) {
                401 -> Either.Left(UserError.AuthenticationError)
                404 -> Either.Left(UserError.UserNotFound)
                else -> Either.Left(UserError.InternalServerError)
            }
        }
    }.handleError { Either.Left(UserError.InternalServerError) }
}
which, as you can see, accumulates some results into an IO monad. I should run this program using unsafeRunSync() from Arrow, but the javadoc states the following: **NOTE** this function is intended for testing; it should never appear in your mainline production code!
I should mention that I know about unsafeRunAsync, but in my case I want to be synchronous.
Thanks!
Instead of running unsafeRunSync, you should favor unsafeRunAsync.
If you have myFun(): IO<A> and want to run this, then you call myFun().unsafeRunAsync(cb) where cb: (Either<Throwable, A>) -> Unit.
For instance, if your function returns IO<List<Int>> then you can call
myFun().unsafeRunAsync { /* it: Either<Throwable, List<Int>> -> */
    it.fold(
        { Log.e("Foo", "Error! $it") },
        { println(it) })
}
This will run the program contained in the IO asynchronously and pass the result safely to the callback, which will log an error if the IO threw, and otherwise it will print the list of integers.
You should avoid unsafeRunSync for a number of reasons, discussed here. It's blocking, it can cause crashes, it can cause deadlocks, and it can halt your application.
If you really want to run your IO as a blocking computation, then you can precede this with attempt() to have your IO<A> become an IO<Either<Throwable, A>> similar to the unsafeRunAsync callback parameter. At least then you won't crash.
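A sketch of that blocking variant, assuming myFun(): IO<List<Int>> as above and an Arrow version where IO exposes attempt() and unsafeRunSync():

// Blocking, but crash-safe: a thrown error is reified into the Either instead of escaping.
val result: Either<Throwable, List<Int>> = myFun()
    .attempt()        // IO<List<Int>> -> IO<Either<Throwable, List<Int>>>
    .unsafeRunSync()  // runs the program on the current thread

result.fold(
    { Log.e("Foo", "Error! $it") },
    { println(it) }
)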
But unsafeRunAsync is preferred. Also, make sure the callback you pass to unsafeRunAsync won't throw any errors, as it's assumed it won't. Docs.