I'm currently looking at having a KMM application backed by SQLDelight for all domain-related operations.
SQLDelight seems to provide really nice interfaces; however, it seems like all the write calls (insert/update/delete) are implemented as blocking calls, so I'm worried that this would hurt the responsiveness of the app by blocking the main thread a lot.
Is there a recommended way to perform such operations without blocking the main thread?
The app would have to work on iOS as well, so I can't afford freezing too much.
A bit late to answer but it might be useful for others:
You should use withContext(Dispatchers.Default), assuming you are using the native-mt version of the coroutines library. That allows you to ensure insert/update/delete calls are not executed on the main thread.
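For illustration, a minimal sketch of that, where the generated Database/personQueries names are hypothetical stand-ins for whatever your .sq files produce:

import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

// Hypothetical generated database and query names; substitute your own.
suspend fun insertPerson(database: Database, name: String) {
    withContext(Dispatchers.Default) {
        // The blocking SQLDelight write now runs off the main thread.
        database.personQueries.insert(name)
    }
}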
You can also use SQLDelight's coroutines-extensions library to return a Flow from your queries and observe changes in your database.
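A rough sketch of that too, assuming the SQLDelight 1.x coroutines-extensions artifact (in 2.x the package moves to app.cash.sqldelight.coroutines and mapToList takes a context argument); the query and row names are again hypothetical:

import com.squareup.sqldelight.runtime.coroutines.asFlow
import com.squareup.sqldelight.runtime.coroutines.mapToList
import kotlinx.coroutines.flow.Flow

// Hypothetical generated names; substitute your own.
fun observePeople(database: Database): Flow<List<Person>> =
    database.personQueries.selectAll()
        .asFlow()      // re-emits whenever the underlying table changes
        .mapToList()   // runs the query, off the main thread by default

Collecting the Flow on the main thread is fine; the query itself executes on Dispatchers.Default unless you pass a different context to mapToList.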
Related
I have a program I'm writing in vb.net that has ballooned into the most complicated thing I've ever written. Because of some complex math and image rendering that's happening constantly, I've been delving into multithreading for the first time to improve overall performance. Things have honestly been running really smoothly, but we've just added more functionality that's causing me some trouble.
The new functionality comes from a pair of DLLs that are each processing a video stream from a USB camera and looking for moving objects. When I start my program I initiate the DLLs and they start viewing the cameras and processing the videos. I then periodically ping them to see if they have detected anything. This is how I start and stop them:
Declare Function StartLeftCameraDetection Lib "DetectorLibLeft.dll" Alias "StartCameraDetection" () As Integer
Declare Function StopLeftCameraDetection Lib "DetectorLibLeft.dll" Alias "StopCameraDetection" () As Integer
When I need to check if they've found any objects I use several functions like this:
Declare Function LeftDetectedObjectLeft Lib "DetectorLibLeft.dll" Alias "DetectedObjectLeft" () As Integer
All of that works really well. The problem is, I've started to notice some significant lag in my UI and I'm thinking it may be coming from the DLLs. Forgive my ignorance on this, but as I said I'm new to using multiple threads (and incorporating DLLs too, if I'm honest). It seems to me that when I start a DLL, it's running its background tasks on my main thread and just waiting for me to ping it for information. Is that the case? If so, is it possible to have the DLL running on a separate thread so it doesn't affect my UI?
I've tried a few different things but I can't seem to address the lag. I moved the code that pings the DLL and processes whatever information it gets into a separate thread, but that hasn't made any difference. I also tried calling StartLeftCameraDetection from a separate thread, but that didn't seem to help either. Again, I'm guessing that's because the real culprit is the DLL itself running these constant background tasks on my main thread, no matter what thread I actually call its functions from.
Thanks in advance for any help you might be able to offer!
There's a lot to grok when it comes to threading, but I'll try to write a concise summary that hits the high points with enough details to cover what you need to know.
Multi-threaded synchronization is hard, so you should try to avoid it as much as possible. That doesn't mean avoiding multi-threading at all, it just means avoiding doing much more than sending a self-contained task off to a thread to run to completion and getting the results back when it's done.
Recognizing that multi-threaded synchronization is hard, it's even worse when it involves UI elements. So in .NET, the design is that any access to UI elements will only occur through one thread, typically referred to as the UI thread. If you are not explicitly writing multi-threaded code, then all of your code runs on the UI thread. And, while your code is running, the UI is blocked.
This also extends to external routines that you run through Declare Function. It's not really accurate to say that they are doing anything with "background tasks on the main thread"; if they are doing anything with "background tasks" they are almost certainly implementing their own threading. More likely, they aren't doing any task breakdown at all, and all of their work is being done on whichever thread you use to call them (the UI thread, if you're not doing anything else).
If the work being done in these routines is CPU-bound, then it would definitely make sense to push it off onto a worker thread. Based on your comments on what you already tried:
I moved the code that pings the DLL and processes whatever information it gets into a separate thread, but that hasn't made any difference. I also tried calling StartLeftCameraDetection from a separate thread but that didn't seem to help either.
I think the most likely problem is that you're blocking in the UI thread waiting for a result from the background thread.
The best way to avoid this depends on exactly what the routines are doing and how they produce results. If they do some sort of extended process and return everything in function results, then I would suggest that using Await would work well. This will basically return control to the UI until the operation finishes, then resume whatever the rest of the calling routine was going to do.
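The question and answer here are VB.NET, but purely to illustrate the shape being described (await the background work, then resume on the UI side), here is a rough Kotlin-coroutine analogue; detectObjects and the scope are hypothetical stand-ins, not the asker's code:

import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

// Hypothetical stand-in for the native detection call.
fun detectObjects(): Int = 0

fun onCheckDetectionsClicked(uiScope: CoroutineScope) {
    uiScope.launch {
        // The UI stays responsive while the work runs on a worker thread...
        val result = withContext(Dispatchers.Default) { detectObjects() }
        // ...and execution resumes here, back in the calling scope, with the result.
        println("Detected: $result")
    }
}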
Note that if you do this, the user will have full interaction with the UI, and you should react accordingly. You might need to disable some (or all) operations until it's done.
There are a lot of resources on Async and Await. I'd particularly recommend reading Stephen Cleary's blog articles to get a better understanding of how they work and potential pitfalls that you might encounter.
I usually write imperative code in Java/Spring MVC, but now my team is implementing a project on WebFlux. I tried to research the topic, but I can't find an answer to my question about locks.
It's normal to have code that should only ever be executed by one thread at a time, or that takes a lock based on some condition (for example, code that should not be executed concurrently for the same entity). These locks can be distributed, for example through Redis.
But how is this problem solved in Project Reactor? As far as I understand, it would be a bad idea to use a synchronized block or ReentrantLock, because they block threads, which is exactly what we are trying to avoid.
It turns out that we need to design the application in such a way that there is no need for locks, which is not always possible.
Or is there any solution? I will be grateful for any information.
There is no official implementation; here are some resources for reference.
How to trigger Mono execution after another Mono terminates
https://github.com/chenggangpro/reactive-lock
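Not from the linked resources, but as one concrete illustration: if your Reactor code is written in (or can call into) Kotlin, kotlinx-coroutines' Mutex suspends waiting callers instead of parking a thread, and the kotlinx-coroutines-reactor bridge turns the suspending block back into a Mono. Note this is an in-process lock only; coordinating across instances (e.g. via Redis) still needs a distributed mechanism.

import kotlinx.coroutines.reactor.mono
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock
import reactor.core.publisher.Mono

// A single shared, in-process lock; waiters suspend rather than block a thread.
private val entityMutex = Mutex()

// `updateEntity` is a hypothetical suspending operation that must not run concurrently.
fun updateExclusively(updateEntity: suspend () -> Unit): Mono<Unit> =
    mono {
        entityMutex.withLock {
            updateEntity()
        }
    }

For the per-entity case you would typically keep a map of such mutexes keyed by entity id rather than one global lock.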
As I understand it, if you start a Kotlin coroutine via launch or async (or use GlobalScope), by default it runs in CommonPool. CommonPool is a ForkJoinPool, and a ForkJoinPool by default is in non-async mode, so it executes tasks in LIFO order. That seems like a very bad choice for something like asynchronous web server applications where we'd want fair scheduling: we don't want the poor sucker who hit our web server first to wait for all calls that came later.
However, Kotlin coroutines add an additional wrinkle here in that there's some bit of code from the Kotlin standard library that will arrange to have those coroutines executed (some variation of the standard async select/epoll loop, as I understand it). So maybe the LIFO thing isn't a concern?
I could certainly run some experiments and/or step into the code in a debugger to see how this works, but I suspect others have the same question and I bet somebody "just knows" the answer...
Per discussion on Kotlin Discuss, CommonPool is no longer the default; coroutines now default to a "mostly fair" scheduler. Details are in the linked discussion.
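A quick way to see this for yourself (not from the linked discussion): a coroutine launched in GlobalScope without an explicit dispatcher runs on Dispatchers.Default's workers, not on ForkJoinPool.commonPool().

import kotlinx.coroutines.DelicateCoroutinesApi
import kotlinx.coroutines.GlobalScope
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

@OptIn(DelicateCoroutinesApi::class)
fun main() = runBlocking {
    GlobalScope.launch {
        // Prints something like "DefaultDispatcher-worker-1" on current kotlinx.coroutines.
        println(Thread.currentThread().name)
    }.join()
}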
This shouldn't be a concern, because ForkJoinPool is not really LIFO.
That is, it's LIFO for a single thread in the pool, but that's where the work-stealing part gets interesting. Each thread's task queue is a double-ended queue, so what is LIFO for the owning thread is FIFO for another thread that has become free and steals from the other end.
In general, ForkJoinPool is a great solution for small tasks, and your coroutines will usually be small if you use suspending functions wisely.
Also, you can read more about asyncMode in the documentation, as it's not all that "async": https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ForkJoinPool.html
asyncMode - if true, establishes local first-in-first-out scheduling mode for forked tasks that are never joined. This mode may be more appropriate than default locally stack-based mode in applications in which worker threads only process event-style asynchronous tasks. For default value, use false.
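If you did want that FIFO behaviour explicitly, a small sketch of opting into asyncMode yourself and handing the pool to coroutines as a dispatcher (the parallelism and task count are arbitrary example values):

import java.util.concurrent.ForkJoinPool
import kotlinx.coroutines.asCoroutineDispatcher
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val fifoPool = ForkJoinPool(
        4,                                               // parallelism
        ForkJoinPool.defaultForkJoinWorkerThreadFactory,
        null,                                            // no custom exception handler
        true                                             // asyncMode: local FIFO for never-joined tasks
    )
    val fifoDispatcher = fifoPool.asCoroutineDispatcher()

    repeat(3) { i ->
        launch(fifoDispatcher) {
            println("task $i on ${Thread.currentThread().name}")
        }
    }
}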
In my app, I want to create a class that receives a certain type of notification, begins its work, and sends out notifications when it's done. I think that later I may need to use concurrency to optimize the app (so the work that the class does would run in separate threads), but right now I don't have any knowledge or experience of working with concurrency and I don't want to spend time on premature optimization. However, if I understand correctly, the default usage of notifications doesn't mix with concurrency so well.
Is there a way that I can just follow few simple rules with notifications right now without diving into concurrency, and avoid rewriting all that code later?
Yes, you can avoid a rewrite.
I would write your work/background tasks inside blocks and use GCD (Grand Central Dispatch). This works fine and is easy to use in the non-parallel case, but will also allow you to easily parallelize your work later.
I'd look into NSBlockOperation and NSOperationQueue and/or dispatch_async()
Or, equivalently, how would you design such an API? Expected/example usage would be illustrative as well.
My curiosity comes directly from the comments (and subsequent editing on my part) of this answer. Similar questions/discussions in the past provided a bit of inspiration for actually asking it.
Executive summary:
I don't feel a multithreaded UI API is possible in a meaningful way, nor particularly desirable. This view seems somewhat contentious, and being a (relatively) humble man I'd like to see the error of my ways, if they actually are erroneous.
*Multithreaded* is defined pretty loosely in this context; treat it however makes sense to you.
Since this is pretty free-form, I'll be accepting whichever answer has the most coherent and well supported answer in my opinion; regardless of whether I agree with it.
Ok, perhaps more clarification is necessary.
Pretty much every serious application has more than one thread. At the very least, they'll spin up an additional thread to do some background task in response to a UI event.
I do not consider this a multithreaded UI.
All the UI work is still being done on a single thread. I'd say, at a basic level, a multithreaded UI API would have to do away with (in some way) thread-based ownership of UI objects or dispatching events to a single thread.
Remember, this is about the UI API itself, not the applications that make use of it.
I don't see how a multithreaded UI API would differ much from existing ones. The major differences would be:
(If using a non-GC'd language like C++) Object lifetimes are tracked by reference-counted pointer wrappers such as std::tr1::shared_ptr. This ensures you don't race with a thread trying to delete an object.
All methods are reentrant, thread-safe, and guaranteed not to block on event callbacks (therefore, event callbacks shall not be invoked while holding locks)
A total order on locks would need to be specified; for example, the implementation of a method on a control would only be allowed to invoke methods on child controls, except by scheduling an asynchronous callback to run later or on another thread.
With those changes, you can apply this to almost any GUI framework you like. There's not really a need for massive changes; however, the additional locking overhead will slow it down, and the restrictions on lock ordering will make designing custom controls somewhat more complex.
Since this is usually a lot more trouble than it's worth, most GUI frameworks strike a middle ground: UI objects can generally only be manipulated from the UI thread (some systems, such as Win32, allow there to be multiple UI threads with separate UI objects), and to communicate between threads there is a thread-safe method to schedule a callback to be invoked on the UI thread.
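As a small sketch of that last mechanism (the thread-safe "schedule a callback on the UI thread" hand-off), here it is in Kotlin against Swing, whose single UI thread is the Event Dispatch Thread; the label is a hypothetical control:

import javax.swing.JLabel
import javax.swing.SwingUtilities
import kotlin.concurrent.thread

// `statusLabel` stands in for some control owned by the UI thread.
fun startBackgroundWork(statusLabel: JLabel) {
    thread {
        // Worker thread: do the heavy work here, but never touch UI objects.
        val result = (1..1_000_000).sumOf { it.toLong() }
        SwingUtilities.invokeLater {
            // This block is marshalled back onto the EDT, where UI access is safe.
            statusLabel.text = "Done: $result"
        }
    }
}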
Most GUIs are multithreaded, at least in the sense that the GUI is running in a separate thread from the rest of the application, and often there is one more thread for an event handler. This has the obvious benefit that complicated backend work and synchronous IO can't bring the GUI to a screeching halt, and vice versa.
Adding more threads tends to be a proposition of diminishing returns, unless you're handling things like multi-touch or multi-user. However, most multi-touch input seems to be handled threaded at the driver level, so there's usually no need for it at the GUI level. For the most part you only need 1:1 thread to user ratio plus some constant number depending on what exactly you're doing.
For example, pre-caching threads are popular. The thread can burn any extra CPU cycles doing predictive caching, to make things run faster in general. Animation threads: if you have intensive animations but you want to maintain responsiveness, you can put the animation in a lower-priority thread than the rest of the UI. Event handler threads are also popular, as mentioned above, but are usually provided transparently to the users of the framework.
So there are definitely uses for threads, but there's no point in spawning large numbers of threads for a GUI. However, if you were writing your own GUI framework you would definitely have to implement it using a threaded model.
There is nothing wrong with, nor particularly special about, multithreaded UI apps. All you need is some sort of synchronization between threads and a way to update the UI across thread boundaries (BeginInvoke in C#, SendMessage in a plain Win32 app, etc.).
As for uses, pretty much everything you see is multithreaded, from Internet Browsers (they have background threads downloading files while a main thread is taking care of displaying the parts downloaded - again, making use of heavy synchronization) to Office apps (the save function in Microsoft Office comes to mind) to games (good luck finding a single threaded big name game). In fact the C# WinForms UI spawns a new thread for the UI out of the box!
What specifically do you think is not desirable or hard to implement about it?
I don't see any benefit, really. Let's say the average app has 3 primary goals:
Rendering
User input / event handlers
Number crunching / Network / Disk / Etc
Dividing these into one thread each (several for #3) would be pretty logical, and I would call #1 and #2 the UI.
You could say that #1 is already multithreaded, divided across tons of shader processors on the GPU. I don't know if adding more threads on the CPU would really help (at least if you are using standard shaders; IIRC some software ray tracers and other CGI renderers use several threads, but I would put such applications under #3).
The user input methods, #2, should only ever be really short and invoke stuff from #3 if more time is needed, so adding more threads here wouldn't be of any use.