I am working with Bluetooth on Android in Kotlin. I found the BLESSED library, and its BluetoothPeripheral class uses a ConcurrentLinkedQueue. I don't understand the use of
private val commandQueue: Queue<Runnable> = ConcurrentLinkedQueue()
I am looking at this enqueue function and I cannot understand the use case here. What is the author trying to achieve?
This enqueue function is called in different places, e.g. readCharacteristic. What is the use case of this function?
Thanks
Building on #broot's comment:
ConcurrentLinkedQueue is part of the java.util.concurrent package, which is all about collections that are thread-safe.
A Queue is a kind of collection designed for efficient adding and removal. Typically, queues offer First-In-First-Out (FIFO) ordering.
If you have an application that deals with a high throughput of tasks, producers put items in a queue and consumers take them. Depending on which side is faster, you may have more producer threads than consumer threads, or the other way around. You isolate producers from consumers by using a thread-safe queue such as ConcurrentLinkedQueue.
Some Queue implementations have bounded capacity, but a queue like ConcurrentLinkedQueue is based on a linked list, so it typically has far greater capacity; the trade-off is that some operations, such as searching, may perform less well.
There is also a Deque (double-ended queue), which is a Queue from which you can easily remove items at both ends.
I have no idea what the Bluetooth application is about or why it needs ConcurrentLinkedQueue, so I cannot comment on whether it is the "best option to use in the Bluetooth case".
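For context on the Bluetooth case, though: Android's BLE stack only allows one outstanding GATT operation at a time, which is why BLE libraries typically queue commands and run them strictly one after another. Here is a minimal sketch of that pattern in Kotlin (hypothetical names, not the library's actual code):

import java.util.Queue
import java.util.concurrent.ConcurrentLinkedQueue

// Sketch of a command serializer: callers enqueue Runnables from any thread;
// commands execute one at a time, in FIFO order.
class CommandSerializer {
    private val commandQueue: Queue<Runnable> = ConcurrentLinkedQueue()
    private var commandQueueBusy = false

    @Synchronized
    fun enqueue(command: Runnable) {
        commandQueue.add(command)
        nextCommand()
    }

    // The completion callback of the current command (e.g. onCharacteristicRead)
    // must call this so the next queued command can start.
    @Synchronized
    fun completedCommand() {
        commandQueueBusy = false
        commandQueue.poll()          // remove the command that just finished
        nextCommand()
    }

    @Synchronized
    private fun nextCommand() {
        if (commandQueueBusy) return
        val command = commandQueue.peek() ?: return   // nothing queued
        commandQueueBusy = true
        command.run()                // e.g. the actual gatt.readCharacteristic() call
    }
}

Under that reading, readCharacteristic and friends don't talk to the hardware directly; they wrap the operation in a Runnable and enqueue it, which would explain why enqueue is called from so many places.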
I'm recently studying and reading a lot about Flow and Kotlin Coroutines. But I still get confused about when I should use Flow and when I should use Channel.
At the beginning it looked simpler. Working with hot streams of data? Channel. Cold ones? Flow. The same goes if you need to listen to a stream of data from more than a single place; if that's the case, Channel is the choice. There are still a lot of examples and questions.
But recently FlowChannels were introduced, together with tons of methods and classes that encourage the use of Flow, facilitating the transformation of Channels into Flows and so on. With all this new stuff coming in each Kotlin release, I am getting more and more confused. So the question is:
When should I use Channel and when should I use Flow?
For many use cases where the best tool so far was Channel, Flow has become the new best tool.
As a specific example, callbackFlow is now the best approach to receiving data from a 3rd-party API's callback. This works especially well in a GUI setting. It couples the callback, a channel, and the associated receiving coroutine all in the same self-contained Flow instance. The callback is registered only while the flow is being collected. Cancellation of the flow automatically propagates into closing the channel and deregistering the callback. You just have to provide the callback-deregistering code once.
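For illustration, here is a minimal callbackFlow sketch (the sensor API below is hypothetical, standing in for any 3rd-party register/unregister callback pair):

import kotlinx.coroutines.channels.awaitClose
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.callbackFlow

// Hypothetical 3rd-party API with a register/unregister callback pair.
interface TemperatureSensor {
    fun register(listener: (Double) -> Unit)
    fun unregister(listener: (Double) -> Unit)
}

// The callback is registered only while the flow is collected, and
// cancelling the collector automatically runs awaitClose and deregisters it.
fun TemperatureSensor.temperatures(): Flow<Double> = callbackFlow {
    val listener: (Double) -> Unit = { value -> trySend(value) }
    register(listener)
    awaitClose { unregister(listener) }
}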
You should look at Channel as a lower-level primitive that Flow uses in its implementation. Consider working with it directly only after you realize Flow doesn't fit your requirements.
In my opinion, a great explanation is in Roman Elizarov's post Cold flows, hot channels:
Channels are a great fit to model data sources that are intrinsically hot, data sources that exist without application’s requests for them: incoming network connections, event streams, etc.
Channels, just like futures, are synchronization primitives. You shall use a channel when you need to send data from one coroutine to another coroutine in the same or in a different process
But what if we don’t need either concurrency or synchronization, but need just non-blocking streams of data? We did not have a type for that until recently, so welcome Kotlin Flow type...
Unlike channels, flows do not inherently involve any concurrency. They are non-blocking, yet sequential. The goal of flows is to become for asynchronous data streams what suspending functions are for asynchronous operations — convenient, safe, easy to learn and easy to use.
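To make the quoted distinction concrete, here is a minimal sketch of the synchronization role a Channel plays: two coroutines running concurrently, handing values across a channel (names are illustrative):

import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val channel = Channel<Int>()
    launch {                                  // producer coroutine
        for (x in 1..3) channel.send(x)
        channel.close()                       // lets the consumer's loop end
    }
    launch {                                  // consumer coroutine
        for (x in channel) println("received $x")
    }
}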
I need to speed up a search over some collection with millions of elements.
Search predicate needs to be passed as argument.
I have been wondering whether the simplest solution (at least for now) wouldn't be to just use coroutines for the task.
The question I am facing right now is how many coroutines I can actually create at once. :D As a side note, there might be more than one such search running concurrently.
Can I make millions of coroutines (one for every item) for every such search? Should I decide on some workload per coroutine (for example, 1000 items per coroutine)? Should I also decide on some cap on the number of coroutines?
I have a rough understanding of coroutines and how they actually work; however, I have no idea what the performance limitations of this feature are.
Thanks!
The memory weight of a coroutine scales with the depth of the call trace from the coroutine builder block to the suspension point. Each suspend fun call adds another Continuation object to a linked list and this is retained while the coroutine is suspended. A rough figure for one Continuation instance is 100 bytes.
So, if you have a call trace depth of, say, 5, that amounts to 500 bytes per item. A million items is 500 MB.
However, unless your search code involves blocking operations that would leave a thread idle, you aren't gaining anything from coroutines. Your task looks more like an instance of data parallelism, and you can solve it very efficiently using the java.util.stream API (as noted by user marstran in the comments).
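A minimal sketch of that data-parallel approach (the function and names here are illustrative, not from your code): parallelStream() splits the work across the common ForkJoinPool, roughly one worker per CPU core, instead of one coroutine per item.

import java.util.stream.Collectors

fun <T> parallelSearch(items: List<T>, predicate: (T) -> Boolean): List<T> =
    items.parallelStream()
        .filter { predicate(it) }             // predicate is passed as an argument
        .collect(Collectors.toList())

fun main() {
    val items = (1..1_000_000).toList()
    println(parallelSearch(items) { it % 250_000 == 0 })  // [250000, 500000, 750000, 1000000]
}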
According to the Kotlin coroutines starter guide, the example launches 100K coroutines. I believe what you intend to do is exactly what Kotlin coroutines are designed for.
If you will not make many modifications to your collection, then just store it in a HashMap;
otherwise store it in a TreeMap. Then just search for items there. I believe the search methods implemented there are optimized enough to handle a million items in a blink. I would not use coroutines in this case.
Documentation (for Kotlin):
HashMap: https://developer.android.com/reference/kotlin/java/util/HashMap
TreeMap: https://developer.android.com/reference/kotlin/java/util/TreeMap
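For illustration, a tiny sketch of this suggestion (note it helps only if the search can be phrased as a key-based lookup; a predicate that must inspect every element is O(n) regardless of container):

import java.util.TreeMap

fun main() {
    val map = TreeMap<Int, String>()
    for (i in 0 until 1_000_000) map[i] = "item $i"
    println(map[123_456])             // O(log n) lookup by key
    println(map.ceilingKey(999_998))  // ordered queries are TreeMap's strength
}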
For my macOS application, I'd like to use concurrent map and queue data structures that can be shared between multiple threads and support parallel operations.
After some research I've found what I need, but unfortunately those are only implemented on Windows:
concurrency::concurrent_unordered_map<key,value>
concurrency::concurrent_queue<key>
Perhaps there are analogous internal implementations in macOS, in Core Foundation or another framework that comes with the Xcode SDK (disregarding the implementation language)?
thanks,
Perhaps there are analogous internal implementations in macOS, in Core Foundation or another framework that comes with the Xcode SDK (disregarding the implementation language)?
Nope. You must roll your own or source one elsewhere.
The Apple-provided collections are not thread-safe; however, the common recommendation is to combine them with Grand Central Dispatch (GCD) to build lightweight thread-safe wrappers, and this is quite easy to do.
Here is an outline of one way to do it for NSMutableDictionary, which you would use for your concurrent map:
Write a subclass, say ThreadSafeDictionary, of NSMutableDictionary. This will allow your thread-safe version to be passed anywhere an NSMutableDictionary is accepted.
The subclass should have a private instance of a standard NSMutableDictionary, say actualDictionary.
To subclass NSMutableDictionary you just need to override 2 methods from NSMutableDictionary and 4 methods from NSDictionary. Each of these methods should invoke the same method on actualDictionary after meeting any concurrency requirements.
To handle the concurrency requirements, the subclass should first create a private concurrent dispatch queue using dispatch_queue_create() and save it in an instance variable, say operationQueue.
For operations which read from the dictionary, the override uses dispatch_sync() to schedule the read on actualDictionary and return the value. As operationQueue is concurrent, this allows multiple concurrent readers.
For operations which write to the dictionary, the override uses dispatch_barrier_async() to schedule the write on actualDictionary. The async means the writer is allowed to continue without waiting for any other writers, and the barrier ensures there are no other concurrent operations mutating the dictionary.
You repeat the above outline to implement the concurrent queue based on one of the other collection types.
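The pattern itself is language-agnostic; purely for illustration, here is the same readers-writer idea sketched in Kotlin, with a ReentrantReadWriteLock standing in for the concurrent queue plus barrier (a rough analog of the GCD design, not an implementation of it):

import java.util.concurrent.locks.ReentrantReadWriteLock
import kotlin.concurrent.read
import kotlin.concurrent.write

class ThreadSafeMap<K, V> {
    private val lock = ReentrantReadWriteLock()
    private val actualMap = HashMap<K, V>()

    // Reads run concurrently, like dispatch_sync() on a concurrent queue.
    operator fun get(key: K): V? = lock.read { actualMap[key] }

    // Writes get exclusive access, like dispatch_barrier_async() (except
    // that this version blocks the writer instead of letting it continue).
    operator fun set(key: K, value: V) {
        lock.write { actualMap[key] = value }
    }
}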
If after studying the documentation you get stuck on the design or implementation ask a new question, show what you have, describe the issue, include a link back to this question so people can follow the thread, and someone will undoubtedly help you take the next step.
HTH
I am working with Contiki and trying to understand the terminology used in it.
I keep seeing certain words such as yield and stackless here and there on the internet. Some examples:
PROCESS_EVENT_CONTINUE : This event is sent by the kernel to a process that is waiting in a PROCESS_YIELD() statement.
PROCESS_YIELD(); // Wait for any event, equivalent to PROCESS_WAIT_EVENT().
PROCESS_WAIT_UNTIL(); // Wait for a given condition; may not yield the process.
Does yielding a process mean executing the process in Contiki? Also, what does it mean that Contiki is stackless?
Contiki uses so-called protothreads (a Contiki-specific term) to support multiple application-level processes in this OS. Protothread is just a fancy name for a programming abstraction known in computer science as a coroutine.
"Yield" in this context is short for "yield execution" (i.e. give up execution). It means "let other protothreads execute until an event appears that is addressed to the current protothread". Such events can be generated both by other protothreads and by interrupt handler functions. The "wait" macros are similar, but allow the protothread to yield and wait for specific events or conditions.
Contiki protothreads are stackless in the sense that they all share the same global execution stack, as opposed to "real" threads, which typically get their own stack space. As a consequence, the values of local variables are not preserved across yields in Contiki protothreads. For example, doing this is undefined behavior:
int i = 1;
PROCESS_YIELD();      // the shared stack unwinds here, so the value of i is lost
printf("i=%d\n", i);  // <- prints garbage
The traditional Contiki way to deal with this limitation is to declare all protothread-local variables as static:
static int i = 1;     // static: allocated in global memory, so it survives yields
PROCESS_YIELD();
printf("i=%d\n", i);  // prints 1
Another option is to use global variables, of course, but having a lot of global variables is bad programming style. The benefit of static variables declared inside protothread functions is that they are hidden from other functions (including other protothreads), even though at the low level they are allocated in the global static memory region.
In the general case, to "yield" in any OS means to synchronously invoke the scheduler (i.e. on demand, rather than through an interrupt) in order to give some other thread the opportunity to run. In an RTOS such a feature would only affect threads of the same priority, and may be used in addition to, or instead of, pre-emptive round-robin scheduling. Most RTOSs do not have an explicit yield function, though in some cases (such as VxWorks) the same effect can be achieved using a zero-length delay.
In a cooperative scheduler such as Contiki's, such a function is necessary to allow other threads to run in an otherwise non-blocking thread. A thread always has control until it calls a blocking or yielding function.
The cooperative nature of Contiki's scheduler means that it cannot be classified as an RTOS. It may be possible to achieve real-time behaviour suitable for a specific application, but only through careful and appropriate application design, rather than through intrinsic scheduler behaviour.
Good day all,
I'm having a hell of a time figuring out which multithreading approach to utilize in my current work project. Since I've never written a multithreaded app in my life, this is all confusing and very overwhelming. Without further ado, here's my background story:
I've been assigned to take over work on a control application for a piece of test equipment in my company's R&D lab. The program has to send and receive serial communications with three different devices semi-concurrently. The original program was written in VB 6 (no multithreading), and I planned to just mod it to work with the newer products that need to be tested, until it posed a safety hazard: the UI locked up due to excessive serial communications during a test, and part of the tester hardware blew up. So I decided to try rewriting the app in VB.Net, as I'm more comfortable with it and because I thought multithreading might help solve this problem.
My plan was to send commands to the other pieces of equipment from the main app thread and spin the receiving ends off into their own threads so that the main thread wouldn't lock up when timing is critical. However, I've yet to come to terms with my options. To add to my problems, I need to display the received communications in separate rich text boxes as they're received, while the data from one particular device needs to be parsed by the main program, but only the text that results from the most recent test (the text boxes need to contain all received data, though).
So far, I've investigated delegates, handling the threads myself, and have just begun looking into BackgroundWorkers. I tried to use delegates earlier today, but couldn't figure out a way to update the text boxes. Would I need to use a callback function to do this, since I can't do it in the body of the delegate function itself? The problem I see with handling threads myself is figuring out how to pass data back and forth between the thread and the rest of the program. BackgroundWorkers, as I said, I've just started investigating, so I'm not sure what to think of them yet.
I should also note that the plan was for the spawned threads to run continuously until somehow triggered to stop. Is this possible with any of the above options? Are there other options I haven't discovered yet?
Sorry for the length and the fact that I seem to ramble disjointed bits of info, but I'm on a tight deadline and stressed out to the point I can't think straight! Any advice/info/links are more than appreciated. I just need help weighing the options so I can pick a direction and move forward. Thanks to everybody who took the time to read this mess!
OK: serial ports, inter-thread comms, displaying stuff in GUI components like RichTextBox, and a need to parse incoming data quickly to decode the protocol and fire it into a state-machine.
Are all three serial ports going to fire into the same 'processControl' state-machine?
If so, then you should probably do this by assembling event/data objects and queueing them to the state-machine run by one thread (see BlockingCollection). This is hugely safer and easier to understand and debug than locking up the state-engine with a mutex.
Define a 'comms' class to hold data and carry it around the system. It should have:
a 'command' enum, so that threads that receive one can do the right thing by switching on the enum;
an 'Event' member that can be set to whatever is used by the state-engine;
a 'bool loadChar(char inChar)' method that can have char-by-char data thrown into it and returns 'true' only when a complete, validated protocol unit has been assembled, checked, and parsed into data members;
a 'string textify()' method that dumps info about the contained data in text form;
a general 'status' string to hold text;
an 'errorMess' string and an Exception member.
You probably get the idea - this comms class can transport anything around the system. It's encapsulated, so a thread can use its data and methods without reference to any other comms instance - it does not need any locking. It can be queued to worker threads on a BlockingCollection and BeginInvoked to the GUI thread for displaying stuff.
In the serialPort objects, create a comms instance at startup and load a member with the serialPort instance. When the DataReceived event fires, get the data from the args a char at a time and fire it into comms.loadChar(). If the loadChar call returns true, queue the comms instance to the state-machine's input BlockingCollection, then immediately create another comms and start loading the new one with data. Just keep doing that forever: loading up comms instances with chars until they hold a validated protocol unit, then queueing them to the state-machine. It may be that each serial port has its own protocol - OK, so you may need three comms descendants that override loadChar to correctly decode their own protocols.
In the state-machine thread, just Take() comms objects from the input collection and do the state-engine thing, using the current state and the Event from the comms object. If the SM action routine decides to display something, BeginInvoke the comms to the GUI thread with the command set to 'displaySomeStuff'. When the GUI thread gets the comms, it can switch on the command to decide what to display.
Anyway, that's how I build all my process-control type apps. Data flows around the system in 'comms' object instances, and no comms object is ever operated on by more than one thread at a time. It's all done by message-passing on BlockingCollection (or similar) queues, or BeginInvoke() when going to the GUI thread.
The only locks are in the queues, and so are encapsulated. There are no explicit locks at all, which means there can be no explicit deadlocks at all. I do get headaches, but I don't get lockups.
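Purely for illustration, here is that message-passing shape sketched in Kotlin (LinkedBlockingQueue standing in for .NET's BlockingCollection; all names are made up):

import java.util.concurrent.LinkedBlockingQueue

enum class Command { DATA_RECEIVED, DISPLAY_SOME_STUFF, STOP }

class Comms(val command: Command, val payload: String = "") {
    fun textify() = "command=$command payload='$payload'"
}

fun main() {
    val stateMachineQueue = LinkedBlockingQueue<Comms>()

    // One thread owns the state-machine; no mutex is needed because each
    // comms instance is only ever handled by one thread at a time.
    Thread {
        while (true) {
            val msg = stateMachineQueue.take()   // blocks until a message arrives
            if (msg.command == Command.STOP) break
            println("state machine got: ${msg.textify()}")
        }
    }.start()

    // Producers (e.g. serial-port receive handlers) just queue messages.
    stateMachineQueue.put(Comms(Command.DATA_RECEIVED, "protocol unit 1"))
    stateMachineQueue.put(Comms(Command.DISPLAY_SOME_STUFF, "show this"))
    stateMachineQueue.put(Comms(Command.STOP))
}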
Oh - don't go near 'Thread.Join()'.