What does 'yield' mean in Contiki RTOS processes?

I am working with Contiki and trying to understand the terminology used in it.
I keep seeing certain words such as yield and stackless here and there on the internet. Some examples:
PROCESS_EVENT_CONTINUE : This event is sent by the kernel to a process that is waiting in a PROCESS_YIELD() statement.
PROCESS_YIELD(); // Wait for any event, equivalent to PROCESS_WAIT_EVENT().
PROCESS_WAIT_UNTIL(); // Wait for a given condition; may not yield the process.
Does yielding a process mean executing a process in Contiki? Also, what does it mean that Contiki is stackless?

Contiki uses so-called protothreads (a Contiki-specific term) in order to support multiple application-level processes in this OS. "Protothread" is essentially a fancy name for a programming abstraction known in computer science as a coroutine.
"Yield" in this context is short for "yield execution" (i.e. give up execution). It means "let other protothreads execute until an event appears that is addressed to the current protothread". Such events can be generated both by other protothreads and by interrupt handler functions. The "wait" macros are similar, but yield and wait for specific events or conditions.
Contiki protothreads are stackless in the sense that they all share the same global execution stack, as opposed to "real" threads, which typically get their own stack space. As a consequence, the values of local variables are not preserved across yields in Contiki protothreads. For example, doing this is undefined behavior:
int i = 1;
PROCESS_YIELD();
printf("i=%d\n", i); // <- prints garbage
The traditional Contiki way to deal with this limitation is to declare all protothread-local variables as static:
static int i = 1;
PROCESS_YIELD();
printf("i=%d\n", i);
Another option is to use global variables, of course, but having a lot of global variables is bad programming style. The benefit of static variables declared inside protothread functions is that they are hidden from other functions (including other protothreads), even though at the low level they are allocated in the global static memory region.
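Putting it together, here is a minimal sketch of a complete Contiki process (the process name and output are illustrative, not from any real application) that yields in a loop and uses a static local so its value survives the yields:
#include "contiki.h"
#include <stdio.h>

PROCESS(example_process, "Example process");
AUTOSTART_PROCESSES(&example_process);

PROCESS_THREAD(example_process, ev, data)
{
  /* static: the value must survive the yields below */
  static int count = 0;

  PROCESS_BEGIN();

  while(1) {
    PROCESS_YIELD();   /* give up execution until any event is delivered */
    count++;
    printf("got event %d, %d events so far\n", ev, count);
  }

  PROCESS_END();
}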

In the general case, to "yield" in any OS means to synchronously invoke the scheduler (i.e. on demand rather than through an interrupt) in order to give some other thread the opportunity to run. In an RTOS such a feature would only affect threads of the same priority, and may be used in addition to, or instead of, pre-emptive round-robin scheduling. Most RTOSes do not have an explicit yield function; in some cases (such as VxWorks) the same effect can be achieved using a zero-length delay.
In a cooperative scheduler such as Contiki's, such a function is necessary to allow other threads to run in an otherwise non-blocking thread. A thread always has control until it calls a blocking or yielding function.
The cooperative nature of Contiki's scheduler means that it cannot be classified as an RTOS. It may be possible to achieve real-time behaviour suitable for a specific application, but only through careful and appropriate application design, rather than through intrinsic scheduler behaviour.

What is the use of ConcurrentLinkedQueue?

I am working with Bluetooth in Android Kotlin. I found the blessed library, and its class BluetoothPeripheral uses a ConcurrentLinkedQueue. I don't understand the use of
private val commandQueue: Queue<Runnable> = ConcurrentLinkedQueue()
I am looking at the enqueue function and cannot understand its use case. What is the author trying to achieve here?
This enqueue function is called in different places, e.g. readCharacteristic. What is its use in that function?
Thanks
Building on @broot's comment:
ConcurrentLinkedQueue is part of the java.util.concurrent package, which is all about collections that are thread-safe.
A Queue is a kind of collection that is designed for efficient adding and removal. Typically queues offer first-in-first-out (FIFO) ordering.
If you have an application that deals with a high throughput of tasks, producers put items in a queue and consumers take them (see the sketch after this answer). Depending on which side is faster, you may have more producer threads than consumer threads, or the other way around. You achieve isolation between producer and consumer threads by using a thread-safe queue, such as ConcurrentLinkedQueue.
Some Queue implementations have bounded capacity, but a queue like ConcurrentLinkedQueue is based on a linked list, so it is effectively unbounded; the trade-off is that some operations, such as searching, may perform less well.
There is also a Deque, which is a Queue that allows items to be added and removed easily at both ends.
I have no idea what the Bluetooth application is about and why it needs ConcurrentLinkedQueue, so I cannot comment on whether it is the "best option to use in the Bluetooth case".
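For illustration only (this is not the blessed library's actual code), a minimal producer/consumer sketch around a ConcurrentLinkedQueue might look like this:
import java.util.Queue
import java.util.concurrent.ConcurrentLinkedQueue

val commandQueue: Queue<Runnable> = ConcurrentLinkedQueue()

fun main() {
    // Producer: enqueues commands; add() never blocks on this unbounded queue.
    val producer = Thread {
        for (i in 1..5) {
            commandQueue.add(Runnable { println("running command $i") })
        }
    }
    // Consumer: drains the queue; poll() returns null when the queue is empty.
    val consumer = Thread {
        var executed = 0
        while (executed < 5) {
            commandQueue.poll()?.let { it.run(); executed++ }
        }
    }
    producer.start(); consumer.start()
    producer.join(); consumer.join()
}
Both threads touch the queue concurrently without any explicit lock; the queue's own thread-safe implementation makes the hand-off safe.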

Do we need to lock the immutable list in Kotlin?

var list = listOf("one", "two", "three")

fun One() {
    list.forEach { result ->
        // Does something here
    }
}

fun Two() {
    list = listOf("four", "five", "six")
}
Can function One() and Two() run simultaneously? Do they need to be protected by locks?
No, you don't need to lock the variable. Even if One() is still running while you change the variable, its forEach keeps iterating over the first list. What could happen is that the assignment in Two() happens before forEach is called, but forEach will loop over either one list or the other; it will not switch lists mid-iteration because of the assignment.
If you had a println(result) in your forEach, your program would output either
one
two
three
or
four
five
six
depending on whether the assignment or the forEach call happens first.
What will NOT happen is something like
one
two
five
six
Can function One() and Two() run simultaneously?
There are two ways that that could happen:
One of those functions could call the other.  This could happen directly (where the code represented by // Does something here in One()⁽¹⁾ explicitly calls Two()), or indirectly (it could call something else which ends up calling Two() — or maybe the list property has a custom setter which does something that calls One()).
One thread could be running One() while a different thread is running Two().  This could happen if your program launches a new thread directly, or a library or framework could do so.  For example, GUI frameworks tend to have one thread for dispatching events, and others for doing work that could take time; and web server frameworks tend to use different threads for servicing different requests.
If neither of those could apply, then there would be no opportunity for the functions to run simultaneously.
Do they need to be protected by locks?
If there's any possibility of them being run on multiple threads, then yes, they need to be protected somehow.
99.999% of the time, the code would do exactly what you'd expect; you'd either see the old list or the new one.  However, there's a tiny but non-zero chance that it would behave strangely — anything from giving slightly wrong results to crashing.  (The risk depends on things like the OS, CPU/cache topology, and how heavily loaded the system is.)
Explaining exactly why is hard, though, because at a low level the Java Virtual Machine⁽²⁾ does an awful lot of stuff that you don't see.  In particular, to improve performance it can re-order operations within certain limits, as long as the end result is the same — as seen from that thread.  Things may look very different from other threads — which can make it really hard to reason about multi-threaded code!
Let me try to describe one possible scenario…
Suppose Thread A is running One() on one CPU core, and Thread B is running Two() on another core, and that each core has its own cache memory.⁽³⁾
Thread B will create a List instance (holding references to strings from the constant pool), and assign it to the list property; both the object and the property are likely to be written to its cache first.  Those cache lines will then get flushed back to main memory — but there's no guarantee about when, nor about the order in which that happens.  Suppose the list reference gets flushed first; at that point, main memory will have the new list reference pointing to a fresh area of memory where the new object will go — but since the new object itself hasn't been flushed yet, who knows what's there now?
So if Thread A starts running One() at that precise moment, it will get the new list reference⁽⁴⁾, but when it tries to iterate through the list, it won't see the new strings.  It might see the initial (empty) state of the list object before it was constructed, or part-way through construction⁽⁵⁾.  (I don't know whether it's possible for it to see any of the values that were in those memory locations before the list was created; if so, those might represent an entirely different type of object, or even not a valid object at all, which would be likely to cause an exception or error of some kind.)
In any case, if multiple threads are involved, it's possible for one to see list holding neither the original list nor the new one.
So, if you want your code to be robust and not fail occasionally⁽⁶⁾, then you have to protect against such concurrency issues.
Using @Synchronized and @Volatile is traditional, as is using explicit locks.  (In this particular case, I think that making list volatile would fix the problem.)
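As a minimal sketch of that volatile fix (assuming Kotlin/JVM):
@Volatile
var list = listOf("one", "two", "three")

fun One() {
    // The volatile read of `list` is ordered after the volatile write in Two()
    // (a happens-before edge), so this thread sees either the fully-constructed
    // old list or the fully-constructed new one -- never a half-built object.
    list.forEach { result ->
        println(result)
    }
}

fun Two() {
    list = listOf("four", "five", "six")
}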
But those low-level constructs are fiddly and hard to use well; luckily, in many situations there are better options.  The example in this question has been simplified too much to judge what might work well (that's the down-side of minimal examples!), but work queues, actors, executors, latches, semaphores, and of course Kotlin's coroutines are all useful abstractions for handling concurrency more safely.
Ultimately, concurrency is a hard topic, with a lot of gotchas and things that don't behave as you'd expect.
There are many sources of further information, such as:
These other questions cover some of the issues.
Chapter 17: Threads And Locks from the Java Language Specification is the ultimate reference on how the JVM behaves.  In particular, it describes what's needed to ensure a happens-before relationship that will ensure full visibility.
Oracle has a tutorial on concurrency in Java; much of this applies to Kotlin too.
The java.util.concurrent package has many useful classes, and its summary discusses some of these issues.
Concurrent Programming In Java: Design Principles And Patterns by Doug Lea was at one time the best guide to handling concurrency, and these excerpts discuss the Java memory model.
Wikipedia also covers the Java memory model.
(1) According to Kotlin coding conventions, function names should start with a lower-case letter; that makes them easier to distinguish from class/object names.
(2) In this answer I'm assuming Kotlin/JVM.  Similar risks likely apply to other platforms too, though the details differ.
(3) This is of course a simplification; there may be multiple levels of caching, some of which may be shared between cores/processors; and some systems have hardware which tries to ensure that the caches are consistent…
(4) References themselves are atomic, so a thread will either see the old reference or the new one — it can't see a bit-pattern comprising parts of the old and new ones, pointing somewhere completely random.  So that's one problem we don't have!
(5) Although the reference is immutable, the object gets mutated during construction, so it might be in an inconsistent state.
(6) And the more heavily loaded your system is, the more likely it is for concurrency issues to occur, which means that things will probably fail at the worst possible time!

What are the specifics about the continuations upon which Raku(do) relies?

The topic of delimited continuations was barely discussed among programming language enthusiasts in the 1990s and 2000s. It has recently been re-emerging as a major thing in programming language discussions.
My hope is that someone can at least authoritatively say whether the continuations underlying Rakudo (as contrasted with Raku) do or don't have each of the six characteristics listed below. I say a bit more about the sort of answer I'm hoping for after the list.
Quoting verbatim (with a formatting touch up) from an online message[1] written by the person driving the work on adding continuations to the JVM:
Asymmetric: When the continuation suspends or yields, the execution returns to the caller (of Continuation.run()). Symmetric continuations don't have the notion of a caller. When they yield, they must specify another continuation to transfer the execution to. Neither symmetric nor asymmetric continuations are more powerful than one another, and each could be used to simulate the other.
Stackful: The continuation can be suspended at any depth in the call stack, rather than in the same subroutine where the delimited context begins when the continuation is stackless (as is the case in C#). I.e. the continuation has its own stack rather than just a single subroutine frame. Stackful continuations are more powerful than stackless ones.
Delimited: The continuation captures the execution context that starts with a specific call (in our case, the body of a certain runnable) rather than the entire execution state all the way up to main(). Delimited continuations are strictly more powerful than undelimited ones (http://okmij.org/ftp/continuations/undelimited.html), the latter considered "not practically useful" (http://okmij.org/ftp/continuations/against-callcc.html).
Multi-prompt: Continuations can be nested, and anywhere in the call stack, any of the enclosing continuations can be suspended. This is similar to nesting of try/catch blocks, and throwing an exception of a certain type that unwinds the stack up to the nearest catch that handles it rather than just the nearest catch. An example of nested continuations can be using a Python-like generator inside a virtual thread. The generator code can do a blocking IO call, which will suspend the enclosing thread continuation, and not just the generator: https://youtu.be/9vupFNsND6o?t=2188
One-shot/non-reentrant: Every time we continue a suspended continuation its state is mutated, and we cannot continue it from the same suspension state multiple times (i.e. we can't go back in time). This is unlike reentrant continuations where every time we suspend them, a new immutable continuation object that represents a particular suspension point is returned. I.e. the continuation is a single point in time, and every time we continue it we go back to that state. Reentrant continuations are strictly more powerful than non-reentrant ones; i.e. they can do things that are strictly impossible with just one-shot continuations.
Cloneable: If we are able to clone a one-shot continuation we can provide the same ability as reentrant continuations. Even though the continuation is mutated every time we continue it, we can clone its state before continuing to create a snapshot of that point in time that we can return to later.
AIUI, continuations aren't directly exposed in Raku, so perhaps the correct answer related to Raku (as against Rakudo) would be "there are no continuations". But that's not clear to me, so in the following, in which I describe what I'm hoping might be in an answer if I'm very lucky, I'll pretend it makes some sense to talk about them in the context of both Raku and Rakudo as two distinct realms.
Here's the sort of answer I'm imagining would be possible (though I'm just somewhat wildly guessing at what is actually true):
"As a "100 year" language design, Raku's current underlying semantic [execution?] model requires, at minimum, stackless one-shot multi prompt delimited continuations.
From a theoretic pov, Raku's design can never expand to require that continuations are cloneable but it could theoretically expand to require they are stackful.
Rakudo implements the currently required continuation semantics.
MoarVM has support for these semantics built in, and could realistically track the theoretically possible expansions of requirements if Raku's design ever so expands.
The JVM and JS backends have suitable shims that achieve the same thing, albeit at a cost to performance. It seems plausible that the JVM backend could switch to using continuations that are native to the JVM if it comes to pass that it gets them, provided of course that they meet requirements, but my current impression is that it would likely realistically be perhaps a decade away, or more, before we would need to consider crossing that bridge."
(Or something vaguely like that.)
If an answer also provided a bit more detail on something like the above, perhaps some code links, that would be a particularly awesome addition.
Similarly, if an answer included a couple brief examples of how this continuation power surfaces in current Raku features, and a speculation about how it might one day, say 10 years from now, surface in other features, that would make an answer an over-the-top brilliant one.
PS. Thank you to @Larry, who understood things deeply enough to know continuations needed to be part of the picture; to Stefan O'Rear for his contributions, including the initial implementations of what I think are one-shot multi-prompt delimited continuations; and to jnthn for making the dream come true.
Footnotes
1 There is work underway to introduce continuations as a first class construct to the JVM. A key driver of this effort is Ron Pressler. The above is based on a message he wrote in November.
Rakudo uses continuations as an implementation strategy for two features:
gather/take - for implementing lazy iterators
Making await on the thread pool non-blocking
The characteristics of the continuations implemented follow the requirements of these language features. I'll go through them in a slightly different order than above, because that makes the explanation easier.
Stackful - yes, because we need to be able to do the take or await at any depth in the call stack relative to the gather or the thread pool worker's work loop. For example, you could write a recursive graph traversal algorithm inside of a gather and then take each encountered node (there's a sketch of this after this list). For await, this is at the heart of the difference between Raku's await and await as seen in many other languages: you don't have to refactor all the way up the call stack.
Delimited - yes. The continuation reset operation installs a tag (or "prompt"), and when we do a continuation control operation, we slice the stack at this delimiter. I can't imagine how you'd implement the Raku features involved without them being delimited.
Multi-prompt - yes, this is required because you can be iterating one data source provided by a gather inside of another gather's implementation, or do an await inside of a gather.
Asymmetric - after the continuation has been taken, execution continues after the reset instruction. In the await case, we go and find another task in the worker task queue, and in the take case we're back in the pull-one method of the iterator and can return the taken value. I think this approach fits well in a language where only a few features use continuations.
One-shot/non-reentrant - yes, and at least in MoarVM the memory safety of the runtime depends on this property. It is enforced by an atomic compare and swap operation, so if two threads were to race to invoke the continuation, only one could ever succeed. No Raku features need the additional complexity that reentrant continuations would imply.
Cloneable - no, because no Raku features need it. In theory this isn't too awful to implement in MoarVM in terms of saying "yes, we can do it", but I suspect it raises a lot of questions, like "how deep should the clone go?". If you just cloned all the invocation records and similar, you'd still share Scalar containers, Arrays, etc. between the clones.
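To illustrate the stackful point from ordinary Raku code (this example is mine, not from the Rakudo sources): take suspends the enclosing gather even when it is called several frames deep in a recursion:
sub walk($node, %graph) {
    take $node;                       # suspends the whole gather, however deep we are
    walk($_, %graph) for %graph{$node}.list;
}

my %graph = a => <b c>, b => <d>, c => (), d => ();
my $nodes := gather walk('a', %graph);
say $nodes;                           # (a b d c)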
As I understand it - though I follow from a distance - the JVM continuations are at least partly aimed at the same design space that the Raku await mechanism is in, and so I'd be surprised if they didn't end up providing what Raku needs. This would clearly simplify compilation of Raku code to the JVM (currently it does the global CPS transform as it does code generation, which curiously turned out simpler than I expected), and it'd almost certainly perform much better too, because the transform required probably obscures quite a few things from the perspective of the JIT compiler.
So far as code goes, you can see the current continuations implementation, which uses the continuation data structure which in turn has various bits of memory management. At the time of writing, these have all been significantly refactored as part of the new callstack representation required by ongoing dispatcher work; those changes do make working with continuations more efficient, but don't change the overall set of operations.

How to understand that coroutine cancellation is cooperative

In Kotlin, coroutine cancellation is cooperative. How should I understand it?
Link to Kotlin documentation.
If you have a Java background, you may be familiar with the thread interruption mechanism: any thread can call thread.interrupt(), and the receiving thread will get a signal in the form of a Boolean isInterrupted flag becoming true. The receiving thread may check the flag at any time with Thread.currentThread().isInterrupted() — or it may ignore it completely. That's why this mechanism is said to be cooperative.
Kotlin's coroutine cancellation mechanism is an exact replica of this: you have a coroutineContext.isActive flag that you (or a function you call) may check.
In both cases some well-known functions, for example Thread.sleep() in Java and delay() in Kotlin, check this flag and throw InterruptedException and CancellationException, respectively. These methods/functions are said to be "interruptible" / "cancellable".
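A minimal sketch of both sides of the cooperation (using kotlinx.coroutines; the loop body is arbitrary):
import kotlinx.coroutines.*

fun main() = runBlocking {
    val job = launch(Dispatchers.Default) {
        var i = 0L
        // This busy loop is cancellable only because it checks isActive itself.
        // Without the check, cancel() would set the flag and nothing would happen.
        while (isActive) {
            i++
        }
        println("loop observed cancellation after $i iterations")
    }
    delay(100)           // let the loop spin for a moment
    job.cancelAndJoin()  // flips the flag; the loop exits at its next check
    println("done")
}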
I'm not 100% sure whether I understand your question, but maybe this helps:
Coroutines are usually executed on the same thread you start them with. You can use different dispatchers, but by design there's no extra preemptive scheduling happening.
You can compare this with scheduling mechanisms in an OS: coroutines behave like cooperative scheduling. You find similar concepts in many frameworks and languages for dealing with async operations; Ruby, for example, has fibers, which behave similarly.
Basically this means that if a coroutine is hogging your CPU in a busy loop, you cannot cancel it (unless you kill the whole process). Instead, your coroutine has to regularly check for cancellation and also add waits/delays/yields so that other coroutines can work.
This also determines where coroutines help the most: in a single-threaded context, it doesn't help to use coroutines for local-only calculations. I have used them mostly for processing async calls, like interactions with databases or web servers.
This article also has some explanations on how coroutines work - maybe it helps you with any additional questions: https://antonioleiva.com/coroutines/

(stm32f4) GPIOx_BSRR vs GPIOx_ODR

I am learning stm32f4.
Why do we have the GPIO port output data register (GPIOx_ODR) when the GPIO port bit set/reset register (GPIOx_BSRR) also exists?
The main reason is to have atomic access to GPIO bits.
With the ODR register, if you want to change only one bit, you have to use a read-modify-write sequence, which is not atomic and is slower. It is also unsafe if you control some GPIOs from different threads or from an interrupt handler, because a race condition can occur.
A write to the BSRR register is atomic, which has a clear advantage: with a single write you can set or clear specific output(s) without reading and modifying anything beforehand. It is faster and thread-safe.
The only disadvantage of BSRR is when you want to toggle a bit without knowing its current state: to keep the access atomic, you have to remember the current value yourself.
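For illustration, a minimal sketch using the CMSIS-style register names from the STM32F4 device header (pin PA5 chosen arbitrarily):
#include "stm32f4xx.h"  /* CMSIS device header: defines GPIOA, GPIO_TypeDef, ... */

void set_pa5_rmw(void)
{
    /* Read-modify-write on ODR: NOT atomic. If an interrupt handler also
       writes ODR between the read and the write, its update is lost. */
    GPIOA->ODR |= (1U << 5);
}

void set_pa5_atomic(void)
{
    /* One write to BSRR: bits 0..15 set pins, bits 16..31 reset them.
       No read is needed, so nothing can interleave with it. */
    GPIOA->BSRR = (1U << 5);          /* set PA5 */
}

void clear_pa5_atomic(void)
{
    GPIOA->BSRR = (1U << (5 + 16));   /* reset PA5 */
}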