Inside a coroutine I am doing an HTTP request with OkHttpClient. The request is made from a function that has the suspend keyword:
suspend fun doSomethingFromHttp(someParam:String): Something {
...
val response = HttpReader.get(url)
return unmarshalSomething(response)!!
}
I assume that the function can be suspended on entry since it has the suspend keyword, but will the coroutine also be suspended while doing the HTTP request? What about other kinds of blocking IO?
There's no automagic going on with Kotlin coroutines. If you call a blocking function like HttpReader.get(), the coroutine won't be suspended and instead the call will block. You can easily assure yourself that a given function won't cause the coroutine to suspend: if it's not a suspend function, it cannot possibly do it, whether or not it's called from a suspend function.
If you want to turn an existing blocking API into non-blocking, suspendable calls, you must submit the blocking calls to a thread pool. The easiest way to achieve it is as follows:
val response = withContext(Dispatchers.IO) { HttpReader.get(url) }
withContext is a suspend fun that will suspend the coroutine, submit the provided block to another coroutine dispatcher (here Dispatchers.IO) and resume when that block is done and has come up with its result.
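Applied to the function from the question, a minimal sketch looks like this (HttpReader, unmarshalSomething and Something come from the question's code; buildUrl is a hypothetical stand-in for the elided URL construction):
suspend fun doSomethingFromHttp(someParam: String): Something {
    val url = buildUrl(someParam) // hypothetical; stands in for the question's elided code
    // the blocking call runs on an IO worker thread; this coroutine suspends in the meantime
    val response = withContext(Dispatchers.IO) { HttpReader.get(url) }
    return unmarshalSomething(response)!!
}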
You can also easily instantiate your own ExecutorService and use it as a coroutine dispatcher:
val myPool = Executors.newCachedThreadPool().asCoroutineDispatcher()
Now you can write
val response = withContext(myPool) { HttpReader.get(url) }
This PR has example code for proper OkHttp coroutines support
https://github.com/square/okhttp/pull/4129/files
It uses the thread pools of OkHttp to do the work. The key bit of code is this generic library code.
suspend fun OkHttpClient.execute(request: Request): Response {
val call = this.newCall(request)
return call.await()
}
suspend fun Call.await(): Response {
return suspendCancellableCoroutine { cont ->
cont.invokeOnCancellation {
cancel()
}
enqueue(object : Callback {
override fun onFailure(call: Call, e: IOException) {
if (!cont.isCancelled) {
cont.resumeWithException(e)
}
}
override fun onResponse(call: Call, response: Response) {
if (!cont.isCancelled) {
cont.resume(response)
}
}
})
}
}
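With those extensions in place, a call site might look like this (a usage sketch, not part of the PR; the URL is a placeholder):
val client = OkHttpClient()
val request = Request.Builder()
    .url("https://example.com/") // placeholder URL
    .build()

// from inside a coroutine:
val response = client.execute(request) // suspends until OkHttp's own threads deliver the response
println(response.isSuccessful)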
There are two types of IO libraries in the Java world: those built on blocking IO and those built on NIO.
You can find more documentation at https://dzone.com/articles/high-concurrency-http-clients-on-the-jvm
The ones using NIO can provide true non-blocking suspension, unlike the blocking-IO ones, which can only offload the task to a separate thread.
NIO uses a small number of dispatcher threads in the JVM to handle the IO sockets via multiplexing (the Reactor design pattern). The way it works is that we ask the NIO dispatchers to read or write something and they give us back a future reference; such code can be turned into coroutines easily.
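For example, the JDK's asynchronous HttpClient (NIO-based) returns a CompletableFuture, which the kotlinx-coroutines-jdk8 await() extension turns into a suspending call. A minimal sketch, assuming that dependency is available:
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import kotlinx.coroutines.future.await

suspend fun fetch(url: String): String {
    val client = HttpClient.newHttpClient()
    val request = HttpRequest.newBuilder(URI.create(url)).build()
    // sendAsync returns a CompletableFuture; await() suspends the coroutine
    // without blocking a thread until the future completes
    return client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
        .await()
        .body()
}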
For blocking-IO based libraries, a coroutine wrapper is not truly non-blocking. It still blocks one of the threads, just as in plain Java; the usual pattern is to use Dispatchers.IO, a thread pool intended for such blocking IO tasks.
Instead of using OkHttpClient, I would recommend using https://ktor.io/docs/client.html
Related
I have a situation in an app where there are a lot of network calls to the same endpoint (with different parameters) at the same time. This can cause other calls to be blocked.
The setup uses Retrofit + Kotlin Coroutines.
One solution I can think of is to run the calls with different instances of Retrofit+OkHttp using separate thread pools.
However, I'd prefer a single thread pool (and Retrofit instance) defining limitations via different kotlin coroutine dispatchers and the use of limitedParallelism().
See this code snippet:
class NetworkApi(
private val retrofitWebserviceApi: RetrofitWebserviceApi,
threadPoolExecutor: ThreadPoolExecutor,
private val dispatcher: CoroutineDispatcher = threadPoolExecutor.asCoroutineDispatcher()
.limitedParallelism(CoroutineDispatcherConfig.ioDispatcherLimit),
// A separate IO dispatcher pool so the many calls to getEntries don't block other calls
private val noParallelismDispatcher: CoroutineDispatcher = dispatcher.limitedParallelism(1),
) {
/**
* Represents an endpoint, which needs to be called with a lot of different
* parameters at the same time (about 1000 times).
* It's important these calls don't block the whole thread pool.
*/
suspend fun getEntries(description: String) = withContext(noParallelismDispatcher) {
retrofitWebserviceApi.getEntries(description)
}
/**
* This call should not be blocked by [getEntries] calls, but be executed shortly after it is called.
*/
suspend fun getCategories() = withContext(dispatcher) {
retrofitWebserviceApi.getCategories()
}
}
Full executable JVM code sample here: github sample code - question branch
So the idea here is to limit parallel requests using Kotlin Coroutine Dispatchers.
However, the project logs show that OkHttp uses its own OkHttp Dispatcher.
Is there a way to de-activate the OkHttp Dispatcher and just run a network call in the current thread (defined by a Coroutine Dispatcher here)?
Is this possible without losing the possibility to cancel requests?
Thanks for your help!
To use another dispatcher, I think you need to remove the suspend modifiers from the functions in RetrofitWebserviceApi that you want to dispatch yourself:
internal interface RetrofitWebserviceApi {
@GET("entries")
fun getEntries(@Query("description") description: String): EntriesResponse
@GET("categories")
fun getCategories(): CategoriesResponse
}
A custom Dispatcher can be set on the OkHttpClient as below:
private fun createDispatcher(): Dispatcher {
val dispatcher = Dispatcher(Executors.newCachedThreadPool())
dispatcher.maxRequests = 100
dispatcher.maxRequestsPerHost = 100
return dispatcher
}
private fun getOkHttpClient() = OkHttpClient.Builder()
.addInterceptor(getLoggingInterceptor())
.dispatcher(createDispatcher())
.build()
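For completeness, the client with this dispatcher can then be plugged into the Retrofit instance. A wiring sketch (the base URL and converter are placeholders, not taken from the answer):
private fun createRetrofit(): Retrofit = Retrofit.Builder()
    .baseUrl("https://example.com/") // placeholder base URL
    .client(getOkHttpClient()) // uses the custom OkHttp Dispatcher from above
    .addConverterFactory(MoshiConverterFactory.create()) // or whatever converter the project uses
    .build()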
Short answer:
Yes, the OkHttp dispatcher is ignored if Retrofit calls are executed in a synchronous way.
Long answer:
I went the same way Sergio suggested.
Besides removing the suspend keyword, it's necessary to wrap the result type in Call:
internal interface RetrofitWebserviceApi {
@GET("entries")
fun getEntries(@Query("description") description: String): Call<EntriesResponse>
@GET("categories")
fun getCategories(): Call<CategoriesResponse>
}
Defining a Call<T> return type is the canonical way to define Retrofit interfaces and provides two options:
Synchronous execution by calling execute() on the Call object. This returns Response<T>.
Asynchronous execution by calling enqueue(). This delivers the Response<T> to a callback.
I needed to go with option 1.
Now, the OkHttp thread pool is ignored. The caller side is now responsible for dispatching the execution of the network call to a background thread.
That was my original intention.
The functions in NetworkApi now additionally need to call execute() and body() to obtain the result:
suspend fun getEntries(description: String) =
retrofitWebserviceApi.getEntries(description)
.executeWithDispatcher(noParallelismDispatcher)
suspend fun getCategories() =
retrofitWebserviceApi.getCategories()
.executeWithDispatcher(dispatcher)
private suspend fun <T> Call<T>.executeWithDispatcher(dispatcher: CoroutineDispatcher): T =
withContext(dispatcher) {
val response = execute()
if (response.isSuccessful) {
checkNotNull(response.body())
} else {
throw HttpException(response)
}
}
Full solution code sample
Suppose I have a blocking function because of some third party library. Something along these lines:
fun useTheLibrary(arg: String): String {
val result = BlockingLibrary.doSomething(arg)
return result
}
Invocations to BlockingLibrary.doSomething should run on a separate ThreadPoolExecutor.
What's the proper way (assuming there is a way) of achieving this with kotlin?
Note: I've read this thread, but it seems pretty outdated.
If the blocking code is blocking because of CPU use, you should use Dispatchers.Default. If it is network- or disk-bound, use Dispatchers.IO. You can make this into a suspending function and wrap the blocking call in withContext to allow this function to properly suspend when called from a coroutine:
suspend fun useTheLibrary(arg: String): String = withContext(Dispatchers.Default) {
BlockingLibrary.doSomething(arg)
}
If you need to use a specific ThreadPoolExecutor because of API requirements, you can use asCoroutineDispatcher().
val myDispatcher = myExecutor.asCoroutineDispatcher()
//...
suspend fun useTheLibrary(arg: String): String = withContext(myDispatcher) {
BlockingLibrary.doSomething(arg)
}
If your library contains a callback-based way to run the blocking code, you can convert it into a suspend function using suspendCoroutine() or suspendCancellableCoroutine(). In this case, you don't need to worry about executors or dispatchers, because it's handled by the library's own thread pool. Here's an example in the Retrofit library, where they convert their own callback-based API into a suspend function.
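For illustration, wrapping a hypothetical callback-based variant of the library could look like this (doSomethingAsync and its Callback interface are made up for the example):
import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException
import kotlinx.coroutines.suspendCancellableCoroutine

suspend fun useTheLibraryAsync(arg: String): String =
    suspendCancellableCoroutine { cont ->
        // hypothetical callback API; the library's own threads do the work
        BlockingLibrary.doSomethingAsync(arg, object : BlockingLibrary.Callback {
            override fun onSuccess(result: String) = cont.resume(result)
            override fun onError(error: Throwable) = cont.resumeWithException(error)
        })
    }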
I'm new to coroutines
This is a popular example:
suspend fun findBigPrime(): BigInteger =
withContext(Dispatchers.IO) {
BigInteger.probablePrime(4096, Random())
}
However, it could just as well be written as:
suspend fun findBigPrime(): BigInteger =
suspendCancellableCoroutine {
it.resume(BigInteger.probablePrime(4096, Random()))
}
What's the real difference?
What's the real difference?
There's hardly any relationship, in fact.
suspendCancellableCoroutine {
it.resume(BigInteger.probablePrime(4096, Random()))
}
This does nothing but add useless overhead above the simple direct call
BigInteger.probablePrime(4096, Random())
If you resume the continuation while still inside the suspendCancellableCoroutine block, the coroutine doesn't suspend at all.
withContext(Dispatchers.IO) {
BigInteger.probablePrime(4096, Random())
}
This suspends the coroutine and launches an internal coroutine on another thread. When the internal coroutine completes, it resumes the current one with the result.
Use of suspendCancellableCoroutine here is a "BIG NO".
withContext changes the context in which the block (coroutine) runs; here the Dispatcher, which dispatches the coroutine to a specific thread, is overridden. suspendCoroutine/suspendCancellableCoroutine, on the other hand, are used for wrapping asynchronous callback-based APIs that do not block the calling thread, because the work runs on threads of their own.
Usually the work wrapped by suspendCoroutine/suspendCancellableCoroutine is non-blocking and completes quite quickly, and the continuation is resumed once the work has completed, typically from another thread.
If you put blocking code in there, the point of the coroutine is lost: it will just block the thread it is running on.
Use of suspendCoroutine/suspendCancellableCoroutine:
// This function immediately creates and starts a thread, then returns
fun findBigPrimeInNewThread(after: (BigInteger) -> Unit) {
Thread {
BigInteger.probablePrime(4096, Random()).also(after)
}.start()
}
// This just wraps the async function so that when the result is ready resume the coroutine
suspend fun findBigPrime(): BigInteger =
suspendCancellableCoroutine { cont ->
findBigPrimeInNewThread {
cont.resume(it)
}
}
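Either way, the call site is the same. A usage sketch:
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    // the coroutine suspends here until findBigPrime resumes it with the result
    val prime = findBigPrime()
    println("Found a ${prime.bitLength()}-bit prime")
}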
I'm new to Kotlin/Coroutines and I've noticed two different ways to use CoroutineScope.
Option 1 is as follows, within any function:
CoroutineScope(Dispatchers.Default).launch {
expensiveOperation()
}
Option 2 is implementing the CoroutineScope interface in your class and overriding coroutineContext; then you can launch coroutines easily with launch or async:
@Service
class ServiceImpl() : CoroutineScope {
override val coroutineContext: CoroutineContext
get() = Dispatchers.Default + Job()
fun someFunction() {
launch {
expensiveOperation()
}
}
}
I am currently developing a backend endpoint that will do the following:
take a request
save the request context to a database
launch a non-blocking coroutine in the background to perform an expensive/lengthy operation on the request, and immediately return an HTTP 200 (essentially, once we have the context saved, we can return a response and let the request be processed in the background)
What is the difference in the two use cases, and for this scenario, which is the preferred method for obtaining a CoroutineScope?
This endpoint may receive multiple requests per second, and the lengthy operation will take a minute or two, so there will definitely be multiple requests processing at the same time, originating from various requests.
Also, if it's option 2, do I want to pass the scope/context to the function that does the heavy processing? Or is that unnecessary? For example:
class ServiceImpl() : CoroutineScope {
override val coroutineContext: CoroutineContext
get() = Dispatchers.Default + Job()
fun someFunction() {
launch {
expensiveOperation(CoroutineScope(coroutineContext))
}
}
private fun expensiveOperation(scope: CoroutineScope)
{
// perform expensive operation
}
}
This is a Spring Boot app, and I'm using version 1.3 of Kotlin.
Please let me know if you have any thoughts/suggestions on how to best structure this service class. Thanks
I would recommend option 2. It gives you a chance to clearly define a parent Job for all of your coroutines, which also lets you shut down the whole execution correctly.
There are several more coroutine context keys worth including - CoroutineName, CoroutineExceptionHandler and so on.
Lastly, structured concurrency may work better if you pass the CoroutineScope and the associated Job explicitly.
https://medium.com/@elizarov/structured-concurrency-722d765aa952
Also, take a look at the explanation of that from Roman Elizarov:
https://medium.com/@elizarov/coroutine-context-and-scope-c8b255d59055
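A minimal sketch of option 2 along those lines (class and function names come from the question; the SupervisorJob, CoroutineName and handler details are one possible choice, not the only one):
@Service
class ServiceImpl : CoroutineScope {

    private val job = SupervisorJob()
    private val exceptionHandler = CoroutineExceptionHandler { _, throwable ->
        // log it; without a handler, failures in fire-and-forget launches are easy to miss
        println("Background work failed: $throwable")
    }

    override val coroutineContext: CoroutineContext =
        Dispatchers.Default + job + CoroutineName("service-impl") + exceptionHandler

    fun someFunction() {
        launch {
            expensiveOperation()
        }
    }

    // call this on shutdown to cancel every coroutine started in this scope
    fun destroy() = job.cancel()

    private suspend fun expensiveOperation() {
        // lengthy background work
    }
}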
I'm writing an app using coroutines (the code below is greatly simplified). Recently I watched the Coroutines in Practice talk and got a little confused. It turns out I don't know when to use a CoroutineScope extension function and when to use a suspending function.
I have a mediator (Presenter/ViewModel/Controller/etc) that implements CoroutineScope:
class UiMediator : CoroutineScope {
private val lifecycleJob: Job = Job()
override val coroutineContext = lifecycleJob + CoroutineDispatchersProvider.MAIN
// cancel parent Job somewhere
fun getChannel() {
launch {
val channel = useCase.execute()
view.show(channel)
}
}
}
Business logic (Interactor/UseCase):
class UseCase {
suspend fun execute(): RssChannel = repository.getRssChannel()
}
And a repository:
class Repository {
suspend fun getRssChannel(): RssChannel {
// `getAllChannels` is a suspending fun that uses `withContext(IO)`
val channels = localStore.getAllChannels()
if (channels.isNotEmpty()) {
return channels[0]
}
// `fetchChannel` is a suspending fun that uses `suspendCancellableCoroutine`
// `saveChannel` is a suspending fun that uses `withContext(IO)`
return remoteStore.fetchChannel()
.also { localStore.saveChannel(it) }
}
}
So I have a few questions:
1. Should I declare Repository#getRssChannel as an extension function of CoroutineScope (because it calls other suspending functions: getAllChannels, fetchChannel, saveChannel)? How could I then use it in the UseCase?
2. Should I just wrap the body of Repository#getRssChannel in a coroutineScope function, so that all the suspending functions it calls become children of the latter?
3. Or maybe it's already fine and I should change nothing. When should a function be declared as an extension of CoroutineScope then?
A suspending function should return once it has completed its task: it executes something, possibly taking some time while not blocking the UI, and when it's done it returns.
A CoroutineScope extension function, by contrast, is for a fire-and-forget scenario: you call it, it launches a coroutine and returns immediately, while the task continues to execute.
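Side by side, the two shapes look like this (a sketch using names from the question's code; View stands for whatever view abstraction the mediator talks to):
// 1) Suspending function: the caller waits for the result; nothing outlives the call
suspend fun getChannel(useCase: UseCase): RssChannel = useCase.execute()

// 2) CoroutineScope extension: fire-and-forget; it returns immediately while the launched work continues
fun CoroutineScope.showChannel(view: View, useCase: UseCase) = launch {
    view.show(useCase.execute())
}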
Answer to question 1:
No, you should not declare Repository#getRssChannel as an extension function of CoroutineScope, because it only invokes suspend functions and does not start (launch/async) new jobs. As @Francesc explained, an extension function of CoroutineScope should only start new jobs; it cannot return a result immediately and should not itself be declared as suspend.
Answer to question 2:
No, you should not wrap Repository#getRssChannel in a coroutineScope block. Wrapping only makes sense if you start (launch/async) new coroutines in this method. The new jobs would be children of the current job, and the outer method would only return after all parallel jobs have finished. In your case you have sequential invocations of other suspending functions, so there is no need for a new scope.
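To illustrate when such wrapping does pay off, here is a sketch (unrelated to the repository) whose body does start parallel children; fetchA/fetchB and the types A and B are hypothetical:
suspend fun fetchBoth(): Pair<A, B> = coroutineScope {
    // both children run concurrently; coroutineScope returns only after both have completed
    val a = async { fetchA() }
    val b = async { fetchB() }
    a.await() to b.await()
}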
Answer to question 3:
Yes, you can keep your code as it is. If you needed the functionality of UiMediator#getChannel in more than one place, then that method would be a candidate for an extension function on CoroutineScope.