
viewModelScope blocks UI in Jetpack Compose
I know viewModelScope.launch(Dispatchers.IO) {} can avoid this problem, but how should I use viewModelScope.launch(Dispatchers.IO) {} here?
This is my UI-level code:
@Composable
fun CountryContent(viewModel: CountryViewModel) {
    SingleRun {
        viewModel.getCountryList()
    }
    val pagingItems = viewModel.countryGroupList.collectAsLazyPagingItems()
    // ...
}
Here is my ViewModel; Pager handles my pagination:
@HiltViewModel
class CountryViewModel @Inject constructor() : BaseViewModel() {
    var countryGroupList = flowOf<PagingData<CountryGroup>>()
    private val config = PagingConfig(pageSize = 26, prefetchDistance = 1, initialLoadSize = 26)

    fun getCountryList() {
        countryGroupList = Pager(config) {
            CountrySource(api)
        }.flow.cachedIn(viewModelScope)
    }
}
This is the small helper:
@Composable
fun SingleRun(onClick: () -> Unit) {
    val execute = rememberSaveable { mutableStateOf(true) }
    if (execute.value) {
        onClick()
        execute.value = false
    }
}

I don't use Compose much yet, so I could be wrong, but this stood out to me.
I don't think your thread is being blocked. I think you subscribed to an empty flow before replacing it, so there is no data to show.
You shouldn't use a var property for your flow, because the empty original flow could be collected before the new one replaces it. Also, it defeats the purpose of using cachedIn because the flow could be replaced multiple times.
You should eliminate the getCountryList() function and just directly assign the flow. Since it is a cachedIn flow, it doesn't do work until it is first collected anyway. See the documentation:
It won't execute any unnecessary code unless it is being collected.
So your view model should look like:
@HiltViewModel
class CountryViewModel @Inject constructor() : BaseViewModel() {
    private val config = PagingConfig(pageSize = 26, prefetchDistance = 1, initialLoadSize = 26)

    val countryGroupList = Pager(config) {
        CountrySource(api)
    }.flow.cachedIn(viewModelScope)
}
...and you can remove the SingleRun block from your Composable.
You are not doing anything that would require you to specify dispatchers. The default of Dispatchers.Main is fine here because you are not calling any blocking functions directly anywhere in your code.
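If you later do call a blocking API, the dispatcher belongs next to that call (for example inside the paging source's load function), not in the ViewModel. Below is a minimal sketch only; CountryApi and api.fetchCountries() are assumptions of mine, not part of your code, and a suspending Retrofit/Ktor call would not need withContext at all:
import androidx.paging.PagingSource
import androidx.paging.PagingState
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

// Sketch only: CountryApi, fetchCountries() and the page-key scheme are hypothetical.
class CountrySource(private val api: CountryApi) : PagingSource<Int, CountryGroup>() {

    override suspend fun load(params: LoadParams<Int>): LoadResult<Int, CountryGroup> {
        val page = params.key ?: 0
        return try {
            // Shift the (hypothetical) blocking call off the main thread.
            val groups = withContext(Dispatchers.IO) {
                api.fetchCountries(page = page, pageSize = params.loadSize)
            }
            LoadResult.Page(
                data = groups,
                prevKey = if (page == 0) null else page - 1,
                nextKey = if (groups.isEmpty()) null else page + 1
            )
        } catch (e: Exception) {
            LoadResult.Error(e)
        }
    }

    override fun getRefreshKey(state: PagingState<Int, CountryGroup>): Int? =
        state.anchorPosition?.let { anchor ->
            state.closestPageToPosition(anchor)?.prevKey?.plus(1)
                ?: state.closestPageToPosition(anchor)?.nextKey?.minus(1)
        }
}
Either way, the Pager/cachedIn setup in the ViewModel stays exactly as shown above.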

Related

Issue IDE warning if annotated member is not surrounded with a particular block

I have a data structure which has members that are not thread safe and the caller needs to lock the resource for reading and writing as appropriate. Here's a minimal code sample:
class ExampleResource : LockableProjectItem {
    override val readWriteLock: ReadWriteLock = ReentrantReadWriteLock()

    @RequiresReadLock
    val nonThreadSafeMember: String = ""
}
interface LockableProjectItem {
    val readWriteLock: ReadWriteLock
}
fun <T : LockableProjectItem, Out> T.readLock(block: T.() -> Out): Out {
    try {
        readWriteLock.readLock().lock()
        return block(this)
    } finally {
        readWriteLock.readLock().unlock()
    }
}

fun <T : LockableProjectItem, Out> T.writeLock(block: T.() -> Out): Out {
    try {
        readWriteLock.writeLock().lock()
        return block(this)
    } finally {
        readWriteLock.writeLock().unlock()
    }
}
annotation class RequiresReadLock
A call to ExampleResource.nonThreadSafeMember might then look like this:
val resource = ExampleResource()
val readResult = resource.readLock { nonThreadSafeMember }
To make sure that the caller is aware that the resource needs to be locked, I would like the IDE to issue a warning for any members that are annotated with @RequiresReadLock and are not surrounded with a readLock block. Is there any way to do this in IntelliJ without writing a custom plugin for the IDE?
I think this is sort of a hack, but using context receivers might work. I don't think they are intended to be used in this way though.
You can declare a dummy object to act as the context receiver, and add that as a context receiver to the property:
object ReadLock

class ExampleResource : LockableProjectItem {
    override val readWriteLock: ReadWriteLock = ReentrantReadWriteLock()

    // Properties with context receivers cannot have a backing field, so we need to declare this explicitly.
    private val nonThreadSafeMemberField: String = ""

    context(ReadLock)
    val nonThreadSafeMember: String
        get() = nonThreadSafeMemberField
}
Then in readLock, you pass the object:
fun <T : LockableProjectItem, Out> T.readLock(block: context(ReadLock) T.() -> Out): Out {
    try {
        readWriteLock.readLock().lock()
        return block(ReadLock, this)
    } finally {
        readWriteLock.readLock().unlock()
    }
}
Notes:
This will give you an error if you try to access nonThreadSafeMember without the context receiver:
val resource = ExampleResource()
val readResult = resource.nonThreadSafeMember //error
You can still access nonThreadSafeMember without acquiring a read lock by doing e.g.
with(ReadLock) { // with(ReadLock) doesn't acquire the lock, it just provides the context receiver
    resource.nonThreadSafeMember // no error
}
But it's way harder to accidentally write something like this, which I think is what you are trying to prevent.
If you call another function inside readLock, and you want to access nonThreadSafeMember inside that function, you should mark that function with context(ReadLock) too. e.g.
fun main() {
    val resource = ExampleResource()
    val readResult = resource.readLock {
        foo(this)
    }
}

context(ReadLock)
fun foo(x: ExampleResource) {
    x.nonThreadSafeMember
}
The context receiver is propagated through.

Flow working incorrectly. Called again when it shouldn't, but LiveData works correctly

I use Jetpack Compose and have 2 screens. When I open the second screen and go back to the first, the flow variable is collected again and the UI updates again. I don't understand why... When I used LiveData it worked perfectly.
My code with LiveData:
class MainViewModel(private val roomRepository: Repository, private val sPref: SharedPreferences) : ViewModel() {
    val words: LiveData<List<WordModel>> by lazy {
        roomRepository.getAllWords()
    }
    ...
}
MainScreen.kt:
@ExperimentalMaterialApi
@Composable
fun MainScreen(viewModel: MainViewModel) {
    ...
    val words: List<WordModel> by viewModel
        .words
        .observeAsState(listOf())
    ...
    WordList(
        words = words,
        onNoticeClick = { viewModel.onWordClick(it) },
        state = textState,
        lazyState = viewModel.listState!!
    )
    ...
}
@Composable
private fun WordList(
    words: List<WordModel>,
    onNoticeClick: (WordModel) -> Unit,
    state: MutableState<TextFieldValue>,
    lazyState: LazyListState
) {
    var filteredCountries: List<WordModel>
    LazyColumn(state = lazyState) {
        val searchedText = state.value.text
        filteredCountries = if (searchedText.isEmpty()) {
            words
        } else {
            words.filter {
                it.word.lowercase().contains(searchedText) || it.translation.lowercase()
                    .contains(searchedText)
            }
        }
        items(count = filteredCountries.size) { noteIndex ->
            val note = filteredCountries[noteIndex]
            Word(
                word = note,
                onWordClick = onNoticeClick
            )
        }
    }
}
WordDao.kt:
@Dao
interface WordDao {
    @Query("SELECT * FROM WordDbModel")
    fun getAll(): LiveData<List<WordDbModel>>
}
RoomRepositoryImpl.kt:
class RoomRepositoryImpl(
    private val wordDao: WordDao,
    private val noticeDao: NoticeDao,
    private val dbMapper: DbMapper
) : Repository {
    override fun getAllWords(): LiveData<List<WordModel>> =
        Transformations.map(wordDao.getAll()) { dbMapper.mapWords(it) }
    ...
}
DbMapperImpl.kt:
class DbMapperImpl : DbMapper {
    ...
    override fun mapWords(words: List<WordDbModel>): List<WordModel> =
        words.map { word -> mapWord(word, listOf<NoticeModel>()) }
}
My code with Flow, which is triggered again every time I open the first screen:
class MainViewModel(private val roomRepository: Repository, private val sPref: SharedPreferences) : ViewModel() {
    val words: Flow<List<WordModel>> = flow {
        emitAll(repository.getAllWords())
    }
}
MainScreen.kt:
@ExperimentalMaterialApi
@Composable
fun MainScreen(viewModel: MainViewModel) {
    ...
    val words: List<WordModel> by viewModel
        .words
        .collectAsState(initial = listOf())
    ...
}
WordDao.kt:
@Dao
interface WordDao {
    @Query("SELECT * FROM WordDbModel")
    fun getAll(): Flow<List<WordDbModel>>
}
RoomRepositoryImpl.kt:
class RoomRepositoryImpl(
    private val wordDao: WordDao,
    private val noticeDao: NoticeDao,
    private val dbMapper: DbMapper
) : Repository {
    override fun getWords(): Flow<List<WordModel>> = wordDao.getAll().map { dbMapper.mapWords(it) }
}
And my router from MainRouting.kt:
sealed class Screen {
    object Main : Screen()
    object Notice : Screen()
    object Your : Screen()
    object Favorites : Screen()
}

object MainRouter {
    var currentScreen: Screen by mutableStateOf(Screen.Main)
    var beforeScreen: Screen? = null

    fun navigateTo(destination: Screen) {
        beforeScreen = currentScreen
        currentScreen = destination
    }
}
And MainActivity.kt:
class MainActivity : ComponentActivity() {
    ...
    @Composable
    @ExperimentalMaterialApi
    private fun MainActivityScreen(viewModel: MainViewModel) {
        Surface {
            when (MainRouter.currentScreen) {
                is Screen.Main -> MainScreen(viewModel)
                is Screen.Your -> MainScreen(viewModel)
                is Screen.Favorites -> MainScreen(viewModel)
                is Screen.Notice -> NoticeScreen(viewModel = viewModel)
            }
        }
    }
    ...
}
Perhaps someone knows why no visible redraw happens with LiveData (or it happens so quickly that it isn't noticeable), while with Flow the redrawing of the list is clearly visible.
You're passing the viewModel around, which is poor practice in a framework like Compose. The ViewModel is like a waiter: it hangs around you, serves you water, and does its job while you make the order. As you get distracted talking, it leaves; when it comes back, it is not the same waiter you had earlier. It wears the same uniform, with the same characteristics, but is essentially a different object. When you pass the model around, it gets destroyed in the process of navigation.
With Flow you are also comparing unequal setups: notice how you do a manual lazy initialization for the LiveData, but a plain eager initialization for the Flow? That seems like the only logical explanation for the inconsistency you observed. If you want to use Flow in your calls instead of LiveData, just convert it at the site of initialization in the ViewModel. Your symptoms should go away.
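One way to apply that advice is a minimal sketch like the following; it assumes Repository.getWords() is the Room-backed Flow shown above, and the stateIn conversion is my suggestion rather than something from the question:
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.flow.SharingStarted
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.flow.stateIn

class MainViewModel(
    private val roomRepository: Repository,
    private val sPref: SharedPreferences
) : ViewModel() {

    // Convert the cold Room flow into a single hot StateFlow at the site of initialization.
    // It is created once per ViewModel, so navigating back to the screen re-collects the
    // cached value instead of re-running the query from scratch.
    val words: StateFlow<List<WordModel>> = roomRepository.getWords()
        .stateIn(
            scope = viewModelScope,
            started = SharingStarted.Lazily,
            initialValue = emptyList()
        )
}
In MainScreen you can then keep collectAsState(); it now starts from the cached value instead of an empty list each time.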

Alternative solution to injecting dispatchers to make the code testable

I ran into a problem while writing tests for a ViewModel. The problem occurred when I was trying to verify LiveData that is updated from a channelFlow running on Dispatchers.IO.
I created a simple project to show the issue.
There is a data provider class that provides 10 numbers:
val numbersFlow: Flow<Int> = channelFlow {
    var i = 0
    while (i < 10) {
        delay(100)
        send(i)
        i++
    }
}.flowOn(Dispatchers.IO)
A simple ViewModel that collects the data:
class NumbersViewModel : ViewModel() {
    private val _numbers: MutableLiveData<IntArray> = MutableLiveData(IntArray(0))
    val numbers: LiveData<IntArray> = _numbers

    val dataProvider = NumbersProvider()

    fun startCollecting() {
        viewModelScope.launch(Dispatchers.Main) {
            dataProvider.numbersFlow
                .onStart { println("start") }
                .onCompletion { println("end") }
                .catch { exception -> println(exception.message.orEmpty()) }
                .collect { data -> onDataRead(data) }
        }
    }

    fun onDataRead(data: Int) {
        _numbers.value = _numbers.value?.plus(data)
    }
}
and the test:
class NumbersViewModelTest {
    @get:Rule
    var instantTaskExecutorRule = InstantTaskExecutorRule()

    @get:Rule
    var mainCoroutineRule = MainCoroutineRule()

    private lateinit var viewModel: NumbersViewModel

    @Before
    fun setUp() {
        viewModel = NumbersViewModel()
    }

    @Test
    fun `provider_provides_10_values`() {
        viewModel.startCollecting()
        mainCoroutineRule.advanceTimeBy(2000)

        val numbers = viewModel.numbers.value
        assertThat(numbers?.size).isEqualTo(10)
    }
}
As it is, the numbers variable in the test is empty and the test fails. I know it is a problem with coroutine dispatchers. There is a common solution of swapping the Main dispatcher for tests, but... is there any good solution for dealing with the IO one?
I found a solution based on injecting dispatchers everywhere - similarly to how I would inject NumbersProvider using Hilt in a real app - which makes it possible to inject a test dispatcher when needed. It works, but now I have to inject dispatchers all over the code, and I don't really like that when it only serves to solve a testing problem.
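For reference, this is roughly what that injected-dispatchers variant looks like; the DispatcherProvider name and shape are my own, hypothetical, not from any library:
import kotlinx.coroutines.CoroutineDispatcher
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.channelFlow
import kotlinx.coroutines.flow.flowOn

// Hypothetical abstraction over the standard dispatchers.
interface DispatcherProvider {
    val main: CoroutineDispatcher
    val io: CoroutineDispatcher
    val default: CoroutineDispatcher
}

class DefaultDispatcherProvider : DispatcherProvider {
    override val main: CoroutineDispatcher get() = Dispatchers.Main
    override val io: CoroutineDispatcher get() = Dispatchers.IO
    override val default: CoroutineDispatcher get() = Dispatchers.Default
}

// In tests, every dispatcher can point at the same test dispatcher.
class TestDispatcherProvider(testDispatcher: CoroutineDispatcher) : DispatcherProvider {
    override val main = testDispatcher
    override val io = testDispatcher
    override val default = testDispatcher
}

// Production code then refers to dispatchers.io instead of Dispatchers.IO directly.
class NumbersProvider(private val dispatchers: DispatcherProvider = DefaultDispatcherProvider()) {
    val numbersFlow: Flow<Int> = channelFlow {
        var i = 0
        while (i < 10) {
            delay(100)
            send(i)
            i++
        }
    }.flowOn(dispatchers.io)
}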
I tried another solution and created a singleton that exposes all the standard dispatchers to production code and can be configured for tests (by setting every dispatcher to the test one). I like how the resulting source code looks - there is no additional code in ViewModels and data providers - but there is this singleton, and everyone keeps shouting 'Don't use singletons'.
Is there any better option to correctly test code with coroutines?

Access ApplicationCall in object without propagation

Is there a thread-safe way in Ktor to statically access the current ApplicationCall? I am trying to get the following simple example to work:
object Main {
    fun start() {
        val server = embeddedServer(Jetty, 8081) {
            intercept(ApplicationCallPipeline.Call) {
                // START: this will be more dynamic in the future, we don't want to pass ApplicationCall
                Addon.processRequest()
                // END: this will be more dynamic in the future, we don't want to pass ApplicationCall
                call.respondText(output, ContentType.Text.Html, HttpStatusCode.OK)
                return@intercept finish()
            }
        }
        server.start(wait = true)
    }
}

fun main(args: Array<String>) {
    Main.start()
}
object Addon {
    fun processRequest() {
        val call = RequestUtils.getCurrentApplicationCall()
        // processing of call.request.queryParameters
        // ...
    }
}

object RequestUtils {
    fun getCurrentApplicationCall(): ApplicationCall {
        // Here is where I am getting lost..
        return null
    }
}
I would like to be able to get the ApplicationCall for the current context to be available statically from the RequestUtils so that I can access information about the request anywhere. This of course needs to scale to be able to handle multiple requests at the same time.
I have done some experiments with dependency injection and ThreadLocal, but without success.
Well, the application call is passed to a coroutine, so it's really dangerous to try and get it "statically", because all requests are treated in a concurrent context.
The official Kotlin documentation talks about thread-locals in the context of coroutine execution. It uses the CoroutineContext concept to restore thread-local values in a specific/custom coroutine context.
However, if you are able to design a fully asynchronous API, you can bypass thread-locals by directly creating a custom CoroutineContext that embeds the request call.
EDIT: I've updated my example code to test 2 flavors:
async endpoint: Solution fully based on Coroutine contexts and suspend functions
blocking endpoint: Uses a thread-local to store application call, as referred in kotlin doc.
import io.ktor.server.engine.embeddedServer
import io.ktor.server.jetty.Jetty
import io.ktor.application.*
import io.ktor.http.ContentType
import io.ktor.http.HttpStatusCode
import io.ktor.response.respondText
import io.ktor.routing.get
import io.ktor.routing.routing
import kotlinx.coroutines.asContextElement
import kotlinx.coroutines.launch
import kotlin.coroutines.AbstractCoroutineContextElement
import kotlin.coroutines.CoroutineContext
import kotlin.coroutines.coroutineContext

/**
 * Thread-local in which you'll inject the application call.
 */
private val localCall: ThreadLocal<ApplicationCall> = ThreadLocal()

object Main {
    fun start() {
        val server = embeddedServer(Jetty, 8081) {
            routing {
                // Solution requiring fully coroutine-based / suspendable execution.
                get("/async") {
                    // Ktor will launch this block of code in a coroutine, so you can create a subroutine with
                    // an overloaded context providing the needed information.
                    launch(coroutineContext + ApplicationCallContext(call)) {
                        PrintQuery.processAsync()
                    }
                }
                // Solution based on a thread-local, not requiring suspending functions.
                get("/blocking") {
                    launch(coroutineContext + localCall.asContextElement(value = call)) {
                        PrintQuery.processBlocking()
                    }
                }
            }
            intercept(ApplicationCallPipeline.ApplicationPhase.Call) {
                call.respondText("Hé ho", ContentType.Text.Plain, HttpStatusCode.OK)
            }
        }
        server.start(wait = true)
    }
}

fun main() {
    Main.start()
}

interface AsyncAddon {
    /**
     * Asynchronicity propagates in order to properly access coroutine execution information.
     */
    suspend fun processAsync()
}

interface BlockingAddon {
    fun processBlocking()
}

object PrintQuery : AsyncAddon, BlockingAddon {
    override suspend fun processAsync() = processRequest("async", fetchCurrentCallFromCoroutineContext())

    override fun processBlocking() = processRequest("blocking", fetchCurrentCallFromThreadLocal())

    private fun processRequest(prefix: String, call: ApplicationCall?) {
        println("$prefix -> Query parameter: ${call?.parameters?.get("q") ?: "NONE"}")
    }
}

/**
 * Custom coroutine context element that provides information about the request execution.
 */
private class ApplicationCallContext(val call: ApplicationCall) : AbstractCoroutineContextElement(Key) {
    companion object Key : CoroutineContext.Key<ApplicationCallContext>
}

/**
 * This is your RequestUtils rewritten as a top-level function. It is declared as suspending;
 * otherwise, you won't be able to access coroutineContext.
 */
suspend fun fetchCurrentCallFromCoroutineContext(): ApplicationCall? {
    return coroutineContext.get(ApplicationCallContext.Key)?.call
}

fun fetchCurrentCallFromThreadLocal(): ApplicationCall? {
    return localCall.get()
}
You can test it in your navigator:
http://localhost:8081/blocking?q=test1
http://localhost:8081/blocking?q=test2
http://localhost:8081/async?q=test3
server log output:
blocking -> Query parameter: test1
blocking -> Query parameter: test2
async -> Query parameter: test3
The key mechanism you want to use for this is the CoroutineContext. This is the place where you can set key-value pairs to be used in any child coroutine or suspending function call.
I will try to lay out an example.
First, let us define a CoroutineContextElement that will let us add an ApplicationCall to the CoroutineContext.
class ApplicationCallElement(var call: ApplicationCall?) : AbstractCoroutineContextElement(ApplicationCallElement) {
    companion object Key : CoroutineContext.Key<ApplicationCallElement>
}
Now we can define some helpers that will add the ApplicationCall on one of our routes. (This could be done as some sort of Ktor plugin that listens to the pipeline, but I don't want to add too much noise here.)
suspend fun PipelineContext<Unit, ApplicationCall>.withCall(
    bodyOfCall: suspend PipelineContext<Unit, ApplicationCall>.() -> Unit
) {
    val pipeline = this
    val appCallContext = buildAppCallContext(this.call)
    withContext(appCallContext) {
        pipeline.bodyOfCall()
    }
}

internal suspend fun buildAppCallContext(call: ApplicationCall): CoroutineContext {
    var context = coroutineContext
    val callElement = ApplicationCallElement(call)
    context = context.plus(callElement)
    return context
}
And then we can use it all together like in this test case below where we are able to get the call from a nested suspending function:
suspend fun getSomethingFromCall(): String {
    val call = coroutineContext[ApplicationCallElement.Key]?.call ?: throw Exception("Element not set")
    return call.parameters["key"] ?: throw Exception("Parameter not set")
}

fun Application.myApp() {
    routing {
        route("/foo") {
            get {
                withCall {
                    call.respondText(getSomethingFromCall())
                }
            }
        }
    }
}

class ApplicationCallTest {
    @Test
    fun `we can get the application call in a nested function`() {
        withTestApplication({ myApp() }) {
            with(handleRequest(HttpMethod.Get, "/foo?key=bar")) {
                assertEquals(HttpStatusCode.OK, response.status())
                assertEquals("bar", response.content)
            }
        }
    }
}

coroutine scope and async - right approach?

I love the concept of coroutines and I've been using them in my Android projects. Currently I'm working on a JVM module which I'll be including in a Ktor project, and I know Ktor has support for coroutines.
(find the attached code snippet)
Just wanted to know is this the right approach?
How do i use async with recursion?
Any resources that you can recommend which can help me grasp more in-depth knowledge of co-routines would be helpful.
Thanks in advance!
override suspend fun processInstruction(args.. ): List<Any> = coroutineScope {
    val dataWithFields = async {
        listOfFields.fold(mutableListOf()) { acc, field ->
            val data = someProcess(field)
            val nested = processInstruction(...nestedField) // nested call
            acc.addAll(data)
            acc.addAll(nested)
            acc
        }
    }
    return@coroutineScope postProcessData(dataWithFields.await())
}
If you want to process all nested calls in parallel, you should wrap each of them in async (async should be inside the loop). Then, after the loop, you should await all the results. (In your code you call await right after a single async, so there is no parallel execution.)
For example, if you have Element:
interface Element {
    val subElements: List<Element>
    suspend fun calculateData(): SomeData
}

interface SomeData
And you want to calculateData of all subElements in parallel, you can do it like this:
suspend fun Element.calculateAllData(): List<SomeData> = coroutineScope {
    val data = async { calculateData() }
    val subData = subElements.map { sub -> async { sub.calculateAllData() } }
    return@coroutineScope listOf(data.await()) + subData.awaitAll().flatten()
}
As you said in the comments, you need the parent data to calculate the sub-data, therefore the first thing calculateAllData() should do is calculate the parent data:
suspend fun Element.calculateAllData(
    parentData: SomeData = defaultParentData()
): List<SomeData> = coroutineScope {
    val data = calculateData(parentData)
    val subData = subElements.map { sub -> async { sub.calculateAllData(data) } }
    return@coroutineScope listOf(data) + subData.awaitAll().flatten()
}
Now you may wonder how fast it works. Consider the following Element implementation:
class ElementImpl(override val subElements: List<Element>) : Element {
    override suspend fun calculateData(parentData: SomeData): SomeData {
        delay(1000)
        return SomeData()
    }
}

fun elmOf(vararg elements: Element) = ElementImpl(listOf(*elements))
And the following test:
println(measureTime {
    elmOf(
        elmOf(),
        elmOf(
            elmOf(),
            elmOf(
                elmOf(),
                elmOf(),
                elmOf()
            )
        ),
        elmOf(
            elmOf(),
            elmOf()
        ),
        elmOf()
    ).calculateAllData()
})
If parent-data isn't needed to calculate sub-data, it prints 1.06s, since in this case, all the data is calculated in parallel. Otherwise, it prints 4.15s, since elements tree height is 4.