Does it make sense to poll `pthread_mutex_trylock`? - locking

Consider a multicore system with two threads running: thread A and thread B, which share some data. Thread A needs to do its job as fast as possible, so we want it to be awake as often as possible.
Using spinlocks (either pthread's or ones implemented on top of atomic primitives) is ruled out because we prefer thread B to sleep while waiting for the lock.
Would it be an acceptable solution to mix busy waiting (spinning) and "sleepy waiting" in the following way?
pthread_mutex_t mutex; // Already initialized somewhere
SharedData data;       // Structure for interthread communication

// Thread A (high throughput needed => spin)
while (appRunning) {
    while (pthread_mutex_trylock(&mutex) != 0) { } // Controversial point
    // Read/write to data
    pthread_mutex_unlock(&mutex);
}

// Thread B (should sleep during wait => standard locking)
while (appRunning) {
    pthread_mutex_lock(&mutex);
    // Read/write to data
    pthread_mutex_unlock(&mutex);
}
PS: I've tried to be generic, but if it matters, the actual scenario is that thread A is filling low-latency audio buffers according to what thread B asks for via the shared data, and writes some results there. Thread B reacts to user input, and it's acceptable for it to take a while to react as long as thread A doesn't underrun.

Using a lock (mutex or spin lock) is not the right solution for low-latency scenarios.
It'd be best if you could share only a small amount of data, so that you could exchange it in a single atomic instruction.
Another solution is to have two instances of the data and atomically exchange a pointer to them.
If the above cannot work for you and you have to fall back to a mutex or spin lock, you should prefer a spin lock.
With a spin lock the wake-up time will be shorter, as there is no context switch.
Thread A will have to wait (spinning) while thread B is holding the lock, but you can limit how often thread B acquires it.
Thread B can try to acquire the lock no more than every so often, and sleep for a bit if the attempt fails.
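For illustration, here is a minimal sketch of the pointer-exchange idea using C11 atomics. The names (AudioParams, shared_slot, publish_params, current_params) are invented for this example, and safely recycling the old instance (so the writer never rewrites a buffer the reader is still using) is left out:

#include <stdatomic.h>

typedef struct { int gain; int freq; } AudioParams;  // hypothetical shared data

static AudioParams buffers[2];                        // two instances of the data
static _Atomic(AudioParams *) shared_slot = &buffers[0];

// Thread B (may be slow): fill the spare instance, then publish it by
// swapping the pointer in a single atomic store.
void publish_params(AudioParams *spare)
{
    // ... fill *spare with the new values ...
    atomic_store_explicit(&shared_slot, spare, memory_order_release);
}

// Thread A (low latency): grab the current pointer without ever blocking.
const AudioParams *current_params(void)
{
    return atomic_load_explicit(&shared_slot, memory_order_acquire);
}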

Related

How do I call some blocking method with a timeout in Obj-C?

Is there a standard, nice way to call a blocking method with a timeout in Objective-C? I want to be able to do:
// call [something blockingMethod];
// if it hasn't come back within 2 seconds, forget it
Thanks.
It is not possible to interrupt a function that is not designed to be interrupted. Doing so would generally cause data corruption and resource leaks.
The standard way to achieve what you're describing is to redesign blockingMethod so that it accepts a timeout or other cancelation mechanism.
If that's not possible, and it is required that you time out blockingMethod, the standard approach is to fork a child process to run blockingMethod and kill it (usually by sending SIGTERM) if it doesn't finish by the timeout. This is somewhat complex to implement in ObjC, and you'll also need to implement a mechanism to send the results back to the parent process. Since the operating system manages resources (memory, file handles, etc.) at the process level, the only way to forcibly interrupt a function is to run it in a separate process. This can still lead to data corruption depending on what blockingMethod does, but it will work for a much larger set of problems.
Note that it's not generally possible to fork a process from non-Apple code on iOS, so this can't be done there.
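As a rough C sketch of the fork-and-kill approach (run_with_timeout and do_blocking_work are made-up names, and the pipe or other IPC needed to send results back to the parent is omitted):

#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

void run_with_timeout(void (*do_blocking_work)(void), unsigned timeout_seconds)
{
    pid_t child = fork();
    if (child == 0) {                 // child: run the blocking work, then exit
        do_blocking_work();
        _exit(0);
    }

    // parent: poll for the child, killing it if the deadline passes
    for (unsigned waited = 0; waited < timeout_seconds * 10; waited++) {
        int status;
        if (waitpid(child, &status, WNOHANG) == child)
            return;                   // finished in time
        usleep(100000);               // 100 ms
    }
    kill(child, SIGTERM);             // timed out: forget it
    waitpid(child, NULL, 0);          // reap the child
}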
As an example of what I mean by "data corruption," consider some simple code like:
[self.cache lock];
[self.cache removeObject: object];
[self.cache decrementCountOfObjects];
[self.cache unlock];
Now imagine that the process were forcibly terminated in the middle of this operation. What should happen? How does the cache get unlocked? How are the cache contents and the count reconciled? It's even possible that the object would be in the middle of being copied; then what? How would the system automatically deal with all of these issues unless blockingMethod were written with cancelation in mind?
How about using a semaphore? A semaphore can be waited on in one thread and signaled from another, so you can do something like:
dispatch_semaphore_t s = dispatch_semaphore_create( 0 );

// On a different thread or queue, fire up some task and
// signal the semaphore when it is done:
dispatch_async( dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_DEFAULT, 0 ), ^{
    // ... some task ...
    dispatch_semaphore_signal( s );
} );

// This waits up to 2 seconds for the semaphore.
// Note the timeout is a dispatch_time_t, not a plain number of seconds.
if ( dispatch_semaphore_wait( s, dispatch_time( DISPATCH_TIME_NOW, 2 * NSEC_PER_SEC ) ) )
{
    // ... it hasn't come back within 2 seconds, so 'forget it'
}
else
{
    // ... the semaphore was signaled within 2 seconds, so 'do it'
}

// This waits forever, just for reference
dispatch_semaphore_wait( s, DISPATCH_TIME_FOREVER );

priority control with semaphore

Suppose I have a semaphore to control access to a dispatch_queue_t.
I wait for the semaphore (dispatch_semaphore_wait) before scheduling a block on the dispatch queue.
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
dispatch_async(queue, ^{ /* do work */ dispatch_semaphore_signal(semaphore); });
Suppose I have work waiting in several separate locations. Some of the "work" has higher priority than other "work".
Is there a way to control which of the "work" will be scheduled next?
Additional information: using a serial queue without a semaphore is not an option for me, because each piece of "work" consists of its own queue with several blocks. All of a work queue has to run, or none of it. No work queues can run simultaneously. I have all of this working fine, except for the priority control.
Edit: (in response to Jeremy, moved from comments)
Ok, suppose you have a device/file/whatever like a printer. A print job consists of multiple function calls/blocks (print header, then print figure, then print text,...) grouped together in a transaction. Put these blocks on a serial queue. One queue per transaction.
However you can have multiple print jobs/transactions. Blocks from different print jobs/transactions can not be mixed. So how do you ensure that a transaction queue runs all of its jobs and that a transaction queue is not started before another queue has finished? (I am not printing, just using this as an example).
Semaphores are used to regulate the use of finite resources.
https://www.mikeash.com/pyblog/friday-qa-2009-09-25-gcd-practicum.html
Concurrency Programming Guide
The next step I am trying to figure out is how to run one transaction before another.
You are misusing the API here. You should not be using semaphores to control what gets scheduled to dispatch queues.
If you want to serialize execution of blocks on the queue, then use a serial queue rather than a concurrent queue.
If different blocks that you are enqueuing have different priorities, then you should express those priorities using the QOS mechanisms added in OS X 10.10 and iOS 8.0. If you need to run on older systems, then you can use the global concurrent queues of different priorities for the appropriate work. Beyond that, there isn't much control on older systems.
Furthermore, semaphores inherently work against priority inheritance: since there is no way for the system to determine who will signal the semaphore, you can easily end up in a situation where a higher-priority thread is blocked for a long time waiting for a lower-priority thread to signal the semaphore. This is called priority inversion.
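As a rough sketch of the QOS mechanism mentioned above (not a drop-in solution for the transaction logic in the question; the queue names and work functions are invented), you can create serial queues with different QOS classes and let GCD prioritize between them:

#include <dispatch/dispatch.h>
#include <stdio.h>

static void high_work(void *ctx) { puts("high-priority work"); }
static void low_work(void *ctx)  { puts("low-priority work"); }

int main(void)
{
    // Two serial queues with different quality-of-service classes; GCD gives
    // CPU preference to work submitted to the higher-QOS queue.
    dispatch_queue_t hi = dispatch_queue_create("com.example.hi",
        dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_SERIAL,
                                                QOS_CLASS_USER_INITIATED, 0));
    dispatch_queue_t lo = dispatch_queue_create("com.example.lo",
        dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_SERIAL,
                                                QOS_CLASS_UTILITY, 0));

    dispatch_async_f(hi, NULL, high_work);
    dispatch_async_f(lo, NULL, low_work);

    dispatch_main();   // keep the process alive so the queues can run
}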

what's different between the Blocked and Busy Waiting?

I know how busy waiting is implemented. It's an endless loop, like this:
//main thread
while (true) {
    msg = msgQueue.next();
    msg.runnable.run();
}

//....msg queue
public Message next() {
    while (true) {
        if (!queue.isEmpty()) {
            return queue.dequeue();
        }
    }
}
So the method next() just looks blocked; it actually runs all the time.
This is what the book calls "busy waiting".
And what is a "blocked" process? What are its implementation details?
Is it an endless loop too? Or something else, like a signal mechanism?
For instance:
cat xxx | grep "abc"
process "cat" read a file and output them.
process "grep" waiting for input from "cat".
so before the "cat" output data, "grep" should be blocked, waiting for input and go on.
what details about this "blocked", a death loop read the input stream all the time? or really stop running, waiting a signal to wake up it to run?
The difference is basically in what happens to the process:
1. Busy Waiting
A process that is busy waiting is essentially continuously running, asking "Are we there yet? Are we there yet? How about now, are we there yet?" which consumes 100% of CPU cycles with this question:
bool are_we_there = false;
while (!are_we_there)
{
    // ask if we're there (without blocking)
    are_we_there = ask_if_we_are_there();
}
2. A process that is blocked (or that blocks)
A process that is blocked is suspended by the operating system and will be automatically notified when the data that it is waiting on becomes available. This cannot be accomplished without assistance from the operating system.
An example is a process that is waiting for a long-running I/O operation, or waiting for a timer to expire:
// use a system call to create a waitable timer
var timer = CreateWaitableTimer()
// use another system call that waits on a waitable object
WaitFor(timer); // this will block the current thread until the timer is signaled
// .. some time in the future, the timer expires and its object is signaled,
// causing the WaitFor(timer) call to resume operation
UPDATE
Waitable objects may be implemented in different ways at the operating system level, but generally it's probably going to be a combination of hardware timers, interrupts, and lists of waitable objects that are registered with the operating system by client code. When an interrupt occurs, the operating system's interrupt handler is called, which in turn will scan through any waitable objects associated with that event and invoke certain callbacks, which in turn will eventually signal the waitable objects (put them in a signaled state). This is an over-simplification, but if you'd like to learn more you could read up on interrupts and hardware timers.
When you say "a process is blocked" you actually mean "a thread is blocked", because those are the only schedulable entities getting CPU time. When a thread is busy waiting, it wastes CPU time in a loop. When a thread is blocked, the kernel code inside the system call sees that the data or lock is not immediately available, so it marks the thread as waiting. It then jumps to the scheduler, which picks another thread that is ready for execution. Such code in a blocking system call might look like this:
100: if (data_available()) {
101: return;
102: } else {
103: jump_to_scheduler();
104: }
Later on the thread is rescheduled and restarts at line 100, but if the data is still unavailable it immediately takes the else branch and gets off the CPU again. When data becomes available, the system call finally returns.
Don't take this verbatim, it's my guess based on what I know about operating systems, but you should get the idea.
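To make the cat | grep example concrete, here is a small C sketch (assuming POSIX) contrasting a blocking read with a busy-waiting non-blocking read on standard input:

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];

    // Blocking: the kernel suspends this process until input arrives,
    // so it consumes no CPU while waiting (this is what grep relies on).
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);

    // Busy waiting: make stdin non-blocking and keep asking, burning CPU.
    fcntl(STDIN_FILENO, F_SETFL, fcntl(STDIN_FILENO, F_GETFL) | O_NONBLOCK);
    do {
        n = read(STDIN_FILENO, buf, sizeof buf);
        // n < 0 with errno == EAGAIN means "nothing yet"; loop and ask again
    } while (n < 0 && errno == EAGAIN);

    return 0;
}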

Strategy for feeding a watchdog in a multitask environment

Having moved some embedded code to FreeRTOS, I'm left with an interesting dilemma about the watchdog. The watchdog timer is a must for our application. Using FreeRTOS has been a huge boon for us too. When the application was more single-tasked, it fed the watchdog at timely points in its logic flow so that we could make sure the task was making logical progress in a timely fashion.
With multiple tasks though, that's not easy. One task could be bound up for some reason, not making progress, but another is doing just fine and making enough progress to keep the watchdog fed happily.
One thought was to launch a separate task solely to feed the watchdog, and have the other tasks regularly increment counters; when the watchdog task ticks, it would check that all the counters show progress on all the other tasks, and if so, go ahead and feed the watchdog.
I'm curious what others have done in situations like this?
A watchdog task that monitors the status of all the other tasks is a good solution. But instead of a counter, consider using a status flag for each task. The status flag should have three possible values: UNKNOWN, ALIVE, and ASLEEP. When a periodic task runs, it sets its flag to ALIVE. Tasks that block on an asynchronous event should set their flag to ASLEEP before they block and back to ALIVE when they run. When the watchdog monitor task runs, it should kick the watchdog if every task is either ALIVE or ASLEEP. Then the watchdog monitor task should set all of the ALIVE flags to UNKNOWN. (ASLEEP flags should remain ASLEEP.) The tasks whose flag is UNKNOWN must run and set their flags to ALIVE or ASLEEP again before the monitor task will kick the watchdog again.
See the "Multitasking" section of this article for more details: http://www.embedded.com/design/debug-and-optimization/4402288/Watchdog-Timers
This is indeed a big pain with watchdog timers.
My boards have an LED on a GPIO line, so I flash that in a while/sleep loop (750 ms on, 250 ms off) in a next-to-lowest-priority thread (the lowest is the idle thread, which just goes into low-power mode in a loop). I have put a watchdog feed in the LED-flash thread.
This helps with complete crashes and higher-priority threads that loop on the CPU, but doesn't help if the system deadlocks. Luckily, my message-passing designs do not deadlock (well, not often, anyway :).
Do not forget to handle the possible situation where tasks are deleted or dormant for longer periods of time. If those tasks have previously checked in with the watchdog task, they also need to have a 'check out' mechanism.
In other words, the list of tasks for which the watchdog task is responsible should be dynamic, and it should be organized so that some wild code cannot easily delete a task from the list.
I know, easier said than done...
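Continuing the hypothetical sketch above, a check-in/check-out mechanism could be as simple as a registration flag per task, so that dormant or deleted tasks don't keep the monitor from kicking the watchdog:

#define MAX_TASKS 8
static volatile int task_registered[MAX_TASKS];   // 1 = currently being monitored

void wd_check_in(int id)  { task_registered[id] = 1; task_status[id] = TASK_ALIVE; }
void wd_check_out(int id) { task_registered[id] = 0; }   // task going dormant or exiting

// In watchdog_monitor(), only registered tasks are required to report:
//     if (task_registered[i] && task_status[i] == TASK_UNKNOWN) all_ok = 0;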
I've designed a solution using the FreeRTOS timers:
A SystemSupervisor SW timer feeds the HW watchdog; a FreeRTOS failure causes a reset.
Each task creates "its own" SW timer whose callback is a SystemReset function.
Each task is responsible for "manually" reloading its timer before it expires.
The SystemReset function saves data before committing suicide (resetting the system).
Here is a pseudo-code listing:
//---------------------------------
//
// System WD
//
void WD_init(void)
{
    HW_WD_Init();
    // Read saved failure data, send to monitor
    // Create and start the monitor timer
    TimerHandle_t sysTimer = xTimerCreate(
            "System WD",          // Name
            HW_WD_INTERVAL / 2,   // Period (reload value)
            pdTRUE,               // Auto reload
            0,                    // Timer ID (data per timer)
            SYS_WD_Feed);         // Callback
    xTimerStart(sysTimer, 0);
}

void SYS_WD_Feed(TimerHandle_t xTimer)
{
    HW_WD_Feed();
}

//-------------------------
// Tasks WD
//
TimerHandle_t WD_Create(void)
{
    return xTimerCreate(
            "",                           // Name
            100,                          // Dummy period, set by Task_WD_Feed
            pdFALSE,                      // One-shot (no auto reload)
            xTaskGetCurrentTaskHandle(),  // Timer ID (data per timer): owning task
            Task_WD_Reset);               // Callback fired if the task stops feeding
}

void Task_WD_Reset(TimerHandle_t pxTimer)
{
    TaskHandle_t th = pvTimerGetTimerID(pxTimer);
    // Save task name and status
    // Reset the system
}

void Task_WD_Feed(TimerHandle_t xWDTimer, uint32_t ms)
{
    // Changing the period also (re)starts the timer
    xTimerChangePeriod(xWDTimer, ms / portTICK_PERIOD_MS, 100);
}
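A task using this scheme might then look roughly as follows (the task body, period, and names are made up for illustration):

void SomeTask(void *pvParameters)
{
    TimerHandle_t wd = WD_Create();
    for (;;)
    {
        Task_WD_Feed(wd, 1000);               // expect to be back here within ~1 s
        // ... one iteration of the task's real work ...
        vTaskDelay(100 / portTICK_PERIOD_MS);
    }
}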

What happens when a thread makes the kernel disable interrupts and then that thread goes to sleep

I have this kernel code where I disable interrupts to make the lock-acquire operation atomic. But if you look at the last else branch, i.e. when the lock is not available, the thread goes to sleep, and interrupts are re-enabled only after the thread comes back from sleep. My question is: are interrupts disabled for the whole OS until this thread comes out of sleep?
void Lock::Acquire()
{
    IntStatus oldLevel = interrupt->SetLevel(IntOff); // Disable interrupts to make the following statements atomic
    if (lockOwnerThread == currentThread)             // Check if the requesting thread already owns the lock
    {
        //printf("SM:error:%s already owns the lock\n",currentThread->getName());
        DEBUG('z', "SM:error:%s already owns the lock\n", currentThread->getName());
        (void) interrupt->SetLevel(oldLevel);
        return;
    }
    if (lockOwnerThread == NULL)
    {
        lockOwnerThread = currentThread;              // Lock ownership is given to the current thread
        DEBUG('z', "SM:The ownership of the lock %s is given to %s \n", name, currentThread->getName());
    }
    else
    {
        DEBUG('z', "SM:Adding thread %s to request queue and putting it to sleep\n", currentThread->getName());
        queueForLock->Append((void *)currentThread);  // Lock is busy, so add the thread to the queue
        currentThread->Sleep();                       // And go to sleep
    }
    (void) interrupt->SetLevel(oldLevel);             // Re-enable the interrupts
}
I don't know Nachos and I would not make assumptions on my own, so you will have to test it.
The idea is simple. If this interrupt enable/disable functionality is local to the current process context, then the following should happen when you call Sleep():
The process is marked as not running, i.e. it is excluded from the list of processes the scheduler will consider for CPU time. Then the Sleep() function forces the scheduler to do its regular work: to find a process to run. If the list of runnable processes is not empty, the scheduler picks the next available process and makes a context switch to it. After this, the state of interrupt management is restored from the new context.
If there are no processes to run, the scheduler enters its idle loop and usually enables the interrupts. While it is in the idle loop it keeps polling the queue of runnable processes until it gets something to schedule.
Your process will get control again when it is marked as running. This could happen when some other process calls WakeUp() (or the like; as I mentioned, the API is unknown to me).
When the scheduler picks your process to switch to, it performs the usual (for your system) context switch, which has the interrupts-enabled flag set to false, so execution continues at the statement after the Sleep() call with interrupts disabled.
If the assumptions above are incorrect and the interrupts-enabled flag is global, then there are two possibilities: either the system hangs because it can't service interrupts, or it has some workaround for such situations.
So you need to try it. The best way, of course, is to read the kernel sources, if you have access to them. :)