Sure, the while loop around the function call blocks inside your app's scope, but something outside still has to be looping, right? Does it finally lead down to some hardware blocking event? How else can the CPU not be pegged at 100%?
Remember that the operating system is in charge of the CPU. Your code only gets to run when the operating system calls it.
If you ask the operating system to wait for something, the operating system won't call your code until that thing happens.
Imagine the operating system scheduler as a loop like this:
while (true)
{
    for (Process *p : all_processes)
    {
        RunSomeCodeInProcess(p);
    }
}
This would always use 100% CPU, even if your process wasn't running. But actually, the loop is more like this: (still simplified)
while (true)
{
    bool all_processes_blocked = true;
    for (Process *p : all_processes)
    {
        if (!IsProcessBlocked(p))
        {
            all_processes_blocked = false;
            RunSomeCodeInProcess(p);
        }
    }
    if (all_processes_blocked)
    {
        StopCPU();
    }
}
The OS will not bother running processes that are blocked. It will skip over your process and only run other processes. If all processes are blocked (note: this is normal) then the OS will stop the CPU. When the CPU is stopped, it uses way less power, creates way less heat, and it doesn't execute instructions. That means StopCPU won't return.
... until the CPU gets an interrupt from some hardware device, like a mouse saying it got moved. Then the CPU automatically starts up again and runs the interrupt handler. When the interrupt handler returns, it goes back to StopCPU, so StopCPU returns and the OS checks for unblocked processes again. The hardware interrupt probably unblocked one of the processes. For example, if the interrupt was because the computer got a network packet, then now the process that was waiting for the packet is unblocked. If it was because the user pressed a key on the keyboard, then the process that was waiting for the key is unblocked, and so on.
So there are two main advantages to using blocking I/O instead of polling:
You don't waste CPU time that other processes could get.
If all processes are blocked (this is most of the time!) the CPU can save power and heat.
This is also how sleep works. There's a hardware timer that counts down and then sends an interrupt. When you do sleep(1), the OS sets the timer to one second, then blocks the process. When the interrupt comes in, it unblocks the process.
There's only one timer, but if more than one process is sleeping, the OS sets the timer to the one that wakes up first, and then when the interrupt comes in, it unblocks the first process and sets the timer for the next one. This technique is called a "timer queue".
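As a minimal illustration of the idea (not any particular kernel's implementation), here is a user-space sketch of a timer queue: sleepers are kept sorted by wake-up time, the hardware timer is always armed for the earliest one, and each timer interrupt unblocks that sleeper and re-arms the timer for the next. set_hardware_timer() and unblock() are stand-in stubs for what the OS would really do.

/* Sketch of a "timer queue" kept sorted by wake-up time. */
#include <stdio.h>
#include <stdlib.h>

typedef struct Sleeper {
    int pid;
    unsigned long wake_at;      /* absolute wake-up time in ms */
    struct Sleeper *next;
} Sleeper;

static Sleeper *queue = NULL;   /* kept sorted by wake_at */

static void set_hardware_timer(unsigned long when) {
    printf("hardware timer armed for t=%lu ms\n", when);
}

static void unblock(int pid) {
    printf("process %d unblocked\n", pid);
}

/* Called by sleep(): insert the process in wake-up order. */
void timer_queue_add(int pid, unsigned long wake_at) {
    Sleeper *s = malloc(sizeof *s);
    s->pid = pid;
    s->wake_at = wake_at;
    Sleeper **pp = &queue;
    while (*pp && (*pp)->wake_at <= wake_at)
        pp = &(*pp)->next;
    s->next = *pp;
    *pp = s;
    set_hardware_timer(queue->wake_at);   /* always armed for the earliest sleeper */
}

/* Called from the timer interrupt handler. */
void timer_interrupt(void) {
    Sleeper *s = queue;
    if (!s) return;
    queue = s->next;
    unblock(s->pid);                      /* wake the earliest sleeper */
    free(s);
    if (queue)
        set_hardware_timer(queue->wake_at);  /* re-arm for the next one */
}

int main(void) {
    timer_queue_add(1, 1000);   /* sleep(1) in process 1 */
    timer_queue_add(2, 500);    /* sleep(0.5) in process 2 */
    timer_interrupt();          /* wakes process 2, re-arms for process 1 */
    timer_interrupt();          /* wakes process 1 */
    return 0;
}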
I am building a small embedded device. It has a reset switch, and when this is pressed for more than 5 seconds, the whole device should reset, clear all its data, and go back to the factory state.
I know what to clear when this event happens. What I want to know is how to raise this event: when the switch is pressed, how do I design the system to know that 5 seconds have elapsed and that it has to reset now? I need a high-level design with any timers and interrupts. Can someone please help me?
It depends on the device, but here are a few rough ideas:
The device manual may tell you how many interrupts per second are produced by holding the switch down ("switch down"). If you have this value, you can easily count up to the 5 seconds.
If not, you will need to use a timer as well: start it when you get the first "switch down" interrupt and count up to 5 seconds.
Note that you should also monitor for "switch up", that is, the release of the switch. Hopefully there will be an interrupt for that too (possibly with a different status value).
You should then break out of the above loop (and not do the reset) when you see this interrupt.
Hope this helps.
Interrupt-driven means low level, close to the hardware. An interrupt-driven solution, with for example a bare metal microcontroller, would look like this:
As when reading any other switch, sample the switch a number of times and filter out the signal bounce (and potential EMI).
Start a hardware timer. Usually the on-chip timers are far too fast to count a whole 5 seconds, even when set to run as slowly as possible, so you need to configure the timer with a prescaler value picked so that one whole timer cycle equals a known time unit (for example 10 milliseconds).
Upon timer overflow, trigger an interrupt. Inside the interrupt, check that the switch is still pressed, then increment a counter. When the counter reaches a given value, execute the reset code. For example, if you get a timer overflow every 10 milliseconds, your counter should count up to 5000 ms / 10 ms = 500.
If the switch is released before the time has elapsed, reset the counter and stop the timer interrupt.
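A minimal sketch of that interrupt handler, assuming a 10 ms timer overflow; switch_is_pressed() and do_factory_reset() are placeholders for your own hardware access, not a specific vendor API:

#include <stdint.h>
#include <stdbool.h>

#define HOLD_TICKS (5000u / 10u)           /* 5 s at one overflow per 10 ms = 500 */

extern bool switch_is_pressed(void);       /* placeholder: read the debounced GPIO */
extern void do_factory_reset(void);        /* placeholder: your reset code, does not return */

static volatile uint16_t hold_ticks = 0;

/* Runs every 10 ms, on timer overflow. */
void timer_overflow_isr(void)
{
    if (switch_is_pressed()) {
        if (++hold_ticks >= HOLD_TICKS)
            do_factory_reset();            /* 5 seconds of continuous press */
    } else {
        hold_ticks = 0;                    /* released early: start over */
    }
}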
How to reset the system is highly system-specific. You should put the system in a safe state, then overwrite your current settings by rewriting the NVM where the settings are stored with default factory settings stored elsewhere in NVM. Once that is done, force the processor to reset itself and reboot with the new settings in place.
This means that you must have a system with electrically erasable NVM. Depending on the size of the data, this NVM could either be on-chip data flash in a microcontroller or some external memory circuit.
Detecting a 5 s or 30 s timeout can be done using a GPIO interrupt.
If you are using an RTOS:
- The interrupt wakes a thread from sleep and disables itself.
- All the thread does is count how long the switch has been pressed (scanning the switch at regular intervals).
- If the switch is pressed for the desired time, set a global variable/setting in EEPROM which will trigger the factory reset function.
- Otherwise, enable the interrupt again and put the thread back to sleep.
- Also, use a de-bounce circuit to avoid issues.
Also, define what you mean by "factory reset". There are two kinds in general; in both cases an EEPROM helps:
Revert all configurations (low cost, easier)
In this case you partition the EEPROM into a working configuration and a factory configuration. You copy the factory configuration over the working partition and perform a software reset (see the sketch after this answer).
Restore the complete firmware (costly, needs more testing)
This is more tricky, but it can be done with the help of bootloaders that allow flashing from EEPROM or an SD card. In this case the binary firmware blob is also stored, together with the factory configuration, in the safe partition, and it is used to reflash the controller's flash and configuration.
It all depends on the size/memory and cost; this can be designed in many more ways, I am just laying out the simplest examples.
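A rough sketch of the first option (reverting the configuration), assuming hypothetical eeprom_read()/eeprom_write() helpers and a made-up partition layout:

#include <stdint.h>

/* Hypothetical partition layout and EEPROM helpers -- adjust to your part. */
#define FACTORY_CFG_ADDR  0x0000u
#define WORKING_CFG_ADDR  0x0400u
#define CFG_SIZE          0x0400u

extern void eeprom_read(uint16_t addr, uint8_t *buf, uint16_t len);
extern void eeprom_write(uint16_t addr, const uint8_t *buf, uint16_t len);
extern void system_software_reset(void);

void factory_reset(void)
{
    uint8_t buf[64];

    /* Copy the factory configuration over the working configuration in
     * small chunks, then reboot so the firmware starts with defaults. */
    for (uint16_t off = 0; off < CFG_SIZE; off += sizeof buf) {
        eeprom_read(FACTORY_CFG_ADDR + off, buf, sizeof buf);
        eeprom_write(WORKING_CFG_ADDR + off, buf, sizeof buf);
    }
    system_software_reset();
}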
I created some products with a combined reset switch too. I did so by using a capacitor to initiate a reset pulse on the reset pin of the device (current and voltage levels limited by some resistors and/or diodes). At start-up I monitor the state of the input pin connected to the switch. I simply wait until this pin goes high, with a time-out of 5 seconds. In case of a time-out, I reset my configuration to the defaults.
I know how busy waiting is implemented. It's an endless loop like this:
// main thread
while (true) {
    msg = msgQueue.next();
    msg.runnable.run();
}

// ... msg queue
public Message next() {
    while (true) {
        if (!queue.isEmpty()) {
            return queue.dequeue();
        }
    }
}
So the method next() just looks like it blocks; actually it runs all the time.
This is what the book calls "busy waiting".
And what is a "blocked" process? What are its implementation details?
Is it an endless loop too, or something else, like a signal mechanism?
For instance:
cat xxx | grep "abc"
process "cat" read a file and output them.
process "grep" waiting for input from "cat".
so before the "cat" output data, "grep" should be blocked, waiting for input and go on.
what details about this "blocked", a death loop read the input stream all the time? or really stop running, waiting a signal to wake up it to run?
The difference is basically in what happens to the process:
1. Busy Waiting
A process that is busy waiting is essentially continuously running, asking "Are we there yet? Are we there yet? How about now, are we there yet?" which consumes 100% of CPU cycles with this question:
bool are_we_there = false;
while (!are_we_there)
{
    // ask if we're there (without blocking)
    are_we_there = ask_if_we_are_there();
}
2. A process that is blocked (or that blocks)
A process that is blocked is suspended by the operating system and will be automatically notified when the data that it is waiting on becomes available. This cannot be accomplished without assistance from the operating system.
An example is a process that is waiting for a long-running I/O operation, or waiting for a timer to expire:
// use a system call to create a waitable timer
var timer = CreateWaitableTimer()
// use another system call that waits on a waitable object
WaitFor(timer); // this will block the current thread until the timer is signaled
// .. some time in the future, the timer might expire and its object will be signaled
// causing the WaitFor(timer) call to resume operation
UPDATE
Waitable objects may be implemented in different ways at the operating system level, but generally it's probably going to be a combination of hardware timers, interrupts, and lists of waitable objects that are registered with the operating system by client code. When an interrupt occurs, the operating system's interrupt handler is called, which in turn will scan through any waitable objects associated with that event and invoke certain callbacks, which eventually signal the waitable objects (put them in a signaled state). This is an over-simplification, but if you'd like to learn more you could read up on interrupts and hardware timers.
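Real kernels differ in the details, but as a rough user-space analogy of "block until the object is signaled" (not the OS implementation itself), a POSIX condition variable shows the same pattern: the waiter sleeps without spinning, and whoever completes the event signals it.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool signaled = false;

/* The "WaitFor(object)" side: blocks (no CPU used) until signaled. */
void *waiter(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!signaled)                  /* no busy loop: the thread sleeps here */
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
    puts("object signaled, waiter resumed");
    return NULL;
}

/* The "completion" side: puts the object in the signaled state and wakes waiters. */
void signal_object(void)
{
    pthread_mutex_lock(&lock);
    signaled = true;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, waiter, NULL);
    signal_object();
    pthread_join(t, NULL);
    return 0;
}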
When you say "a process is blocked" you actually mean "a thread is blocked" because those are the only schedulable entities getting CPU time. When a thread is busy waiting, it wastes CPU time in a loop. When a thread is blocked, the kernel code inside the system call sees that data or lock is not immediately available so it marks the thread as waiting. It then jumps to the scheduler which picks up another thread ready for execution. Such a code in a blocking system call might look like this:
100: if (data_available()) {
101: return;
102: } else {
103: jump_to_scheduler();
104: }
Later on, the thread is rescheduled and restarts at line 100, but if the data is still not available it immediately takes the else branch and gets off the CPU again. When the data becomes available, the system call finally returns.
Don't take this verbatim, it's my guess based on what I know about operating systems, but you should get the idea.
Having moved some embedded code to FreeRTOS, I'm left with an interesting dilemma about the watchdog. The watchdog timer is a must for our application. Using FreeRTOS has been a huge boon for us too. When the application was more single-tasked, it fed the watchdog at timely points in its logic flow so that we could make sure the task was making logical progress in a timely fashion.
With multiple tasks though, that's not easy. One task could be bound up for some reason, not making progress, but another is doing just fine and making enough progress to keep the watchdog fed happily.
One thought was to launch a separate task solely to feed the watchdog, and to use counters that the other tasks increment regularly; when the watchdog task ticks, it would check that all the counters indicate progress on all the other tasks, and if so, feed the watchdog.
I'm curious what others have done in situations like this?
A watchdog task that monitors the status of all the other tasks is a good solution. But instead of a counter, consider using a status flag for each task. The status flag should have three possible values: UNKNOWN, ALIVE, and ASLEEP. When a periodic task runs, it sets its flag to ALIVE. Tasks that block on an asynchronous event should set their flag to ASLEEP before they block and to ALIVE when they run. When the watchdog monitor task runs, it should kick the watchdog if every task is either ALIVE or ASLEEP. Then the watchdog monitor task should set all of the ALIVE flags to UNKNOWN. (ASLEEP flags should remain ASLEEP.) The tasks whose flag is UNKNOWN must run and set their flags to ALIVE or ASLEEP again before the monitor task will kick the watchdog again.
See the "Multitasking" section of this article for more details: http://www.embedded.com/design/debug-and-optimization/4402288/Watchdog-Timers
This is indeed a big pain with watchdog timers.
My boards have an LED on a GPIO line, so I flash that in a while/sleep loop (750 ms on, 250 ms off) in a next-to-lowest-priority thread (the lowest is the idle thread, which just drops into low-power mode in a loop). I have put a watchdog feed in the LED-flash thread.
This helps with complete crashes and with higher-priority threads that loop on the CPU, but it doesn't help if the system deadlocks. Luckily, my message-passing designs do not deadlock (well, not often, anyway :).
Do not forget to handle the possible situation where tasks are deleted or dormant for longer periods of time. If those tasks previously checked in with the watchdog task, they also need a 'check out' mechanism.
In other words, the list of tasks for which a watchdog task is responsible should be dynamic, and it should be organized so that some wild code cannot easily delete the task from the list.
I know, easier said than done...
I've designed a solution using FreeRTOS timers:
- A SystemSupervisor SW timer feeds the HW watchdog; a FreeRTOS failure causes a reset.
- Each task creates "its own" SW timer with a SystemReset function.
- Each task is responsible for "manually" reloading its timer before it expires.
- The SystemReset function saves data before committing suicide (forcing the reset).
Here is some pseudo-code listing:
//---------------------------------
//
// System WD
//
void WD_init(void)
{
    HW_WD_Init();
    // Read saved failure data, send to monitor
    // Create and start the monitor timer
    TimerHandle_t t = xTimerCreate("System WD",        // Name
                                   HW_WD_INTERVAL / 2, // Reload value
                                   pdTRUE,             // Auto reload
                                   0,                  // Timer ID (data per timer)
                                   SYS_WD_Feed);       // Callback
    xTimerStart(t, 0);  // the created timer must also be started
}
void SYS_WD_Feed(TimerHandle_t xTimer)
{
    (void)xTimer;
    HW_WD_Feed();
}
//-------------------------
// Tasks WD
//
TimerHandle_t WD_Create(void)
{
    return xTimerCreate("",                          // Name
                        100,                         // Dummy reload value
                        pdFALSE,                     // Auto reload
                        xTaskGetCurrentTaskHandle(), // Timer ID (data per timer)
                        Task_WD_Reset);              // Callback
}
void Task_WD_Reset(TimerHandle_t pxTimer)
{
    TaskHandle_t th = (TaskHandle_t)pvTimerGetTimerID(pxTimer);
    // Save task name and status
    // Reset
}
void Task_WD_Feed(TimerHandle_t xWdTimer, uint32_t ms)
{
    xTimerChangePeriod(xWdTimer, ms / portTICK_PERIOD_MS, 100);
}
I have this kernel code where I disable interrupts to make this lock acquire operation atomic, but if you look at the last else branch, i.e. when the lock is not available, the thread goes to sleep and interrupts are enabled only after the thread comes back from sleep. My question is: are interrupts disabled for the whole OS until this thread comes out of sleep?
void Lock::Acquire()
{
IntStatus oldLevel = interrupt->SetLevel(IntOff); // Disabling the interrupts to make the following statements atomic
if(lockOwnerThread == currentThread) //Checking if the requesting thread already owns lock
{
//printf("SM:error:%s already owns the lock\n",currentThread->getName());
DEBUG('z', "SM:error:%s already owns the lock\n",currentThread->getName());
(void) interrupt->SetLevel(oldLevel);
return;
}
if(lockOwnerThread==NULL)
{
lockOwnerThread = currentThread; // Lock ownership is given to current thread
DEBUG('z', "SM:The ownership of the lock %s is given to %s \n",name,currentThread->getName());
}
else
{
DEBUG('z', "SM:Adding thread %s to request queue and putting it to sleep\n",currentThread->getName());
queueForLock->Append((void *)currentThread); // Lock is busy so add the thread to queue;
currentThread->Sleep(); // And go to sleep
}
(void) interrupt->SetLevel(oldLevel); // Enable the interrupts
}
I don't know NACHOS and I would not make any assumptions of my own, so you will have to test it.
The idea is simple. If this interrupt enable/disable functionality is local to the current process context, then the following should happen when you call Sleep():
The process is marked as not-running, i.e. it is excluded from the list of processes the scheduler will consider for CPU time. Then the Sleep() function forces the scheduler to do its regular work: find a process to run. If the list of runnable processes is not empty, the scheduler picks the next available process and makes a context switch to it. After this, the state of interrupt management is restored from the new context.
If there are no processes to run, the scheduler enters the idle loop and usually enables interrupts. While the scheduler is in the idle loop, it keeps polling the queue of runnable processes until it gets something to schedule.
Your process will get control again when it is marked as running. This could happen if some other process calls WakeUp() (or the like; as I mentioned, the API is unknown to me).
When the scheduler picks up your process to switch to, it performs the usual (for your system) context switch, which has the interrupts-enabled flag set to false, so execution continues at the statement after the Sleep() call with interrupts disabled.
If the assumptions above are incorrect and the interrupts-enabled flag is global, then there are two possibilities: either the system hangs because it can't service interrupts, or it has some workaround for such situations.
So you need to try it. The best way is to read the kernel sources, of course, if you have access.
In my Cocoa project, I communicate with a device connected to a serial port. I am waiting for the serial device to send a particular message of some bytes. For the read operation (and the reaction once the desired message has been received), I created a new thread. On user request, I want to be able to cancel the thread.
As Apple suggests in the docs, I added a flag to the thread dictionary, periodically check if the flag has been set and if so, call [NSThread exit]. This works fine.
Now, the thread may be stuck waiting for the serial device to finally send the 12 byte message. The read call looks like this:
numBytes = read(fileDescriptor, buffer, 12);
Once the thread starts reading from the device but no data comes in, I can set the flag to tell the thread to finish, but the thread is not going to read the flag until it has finally received at least 12 bytes of data and continues processing.
Is there a way to kill a thread that currently performs a read operation on a serial device?
Edit for clarification:
I do not insist in creating a separate thread for the I/O operations with the serial device. If there is a way to encapsulate the operations such that I am able to "kill" them if the user presses a cancel button, I am perfectly happy.
I am developing a Cocoa application for desktop Mac OS X, so no restrictions regarding mobile devices and their capabilities apply.
A workaround would be to make the read function return immediately if there are no bytes to read. How can I do this?
Use select or poll with a timeout to detect when the descriptor is ready for reading.
Set the timeout to (say) half a second and call it in a loop while checking to see if your thread should exit.
Asynchronous thread cancellation is almost always a bad idea. Try to stick with event-driven interfaces (and, if necessary, timeouts).
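For example, something along these lines (should_exit standing in for the flag you already keep in the thread dictionary):

#include <stdbool.h>
#include <sys/select.h>
#include <sys/types.h>
#include <unistd.h>

extern volatile bool should_exit;   /* stand-in for your cancel flag */

/* Returns the number of bytes read, 0 on cancel, -1 on error. */
ssize_t read_with_cancel(int fd, void *buf, size_t want)
{
    size_t got = 0;
    while (got < want) {
        if (should_exit)
            return 0;                          /* user pressed cancel */

        fd_set readfds;
        struct timeval tv = { 0, 500000 };     /* 0.5 s timeout */
        FD_ZERO(&readfds);
        FD_SET(fd, &readfds);

        int r = select(fd + 1, &readfds, NULL, NULL, &tv);
        if (r < 0)
            return -1;                         /* error */
        if (r == 0)
            continue;                          /* timeout: re-check the flag */

        ssize_t n = read(fd, (char *)buf + got, want - got);
        if (n <= 0)
            return -1;                         /* error or device closed */
        got += (size_t)n;
    }
    return (ssize_t)got;
}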
This is exactly what the pthread_cancel interface was designed for. You'll want to wrap the block with read in pthread_cleanup_push and pthread_cleanup_pop in order that you can safely clean up if the thread is cancelled, and also disable cancellation (with pthread_setcancelstate) in other code that runs in this thread that you don't want to be cancellable. This can be a pain if proper cleanup would involve multiple call frames; it essentially forces you to use pthread_cleanup_push at every call level and structure your thread code like C++ or Java with try/catch style exception handling.
An alternative approach would be to install a signal handler for an otherwise-unused signal (like SIGUSR1 or one of the realtime signals) without the SA_RESTART flag, so that it interrupts syscalls with EINTR. The signal handler itself can be a complete no-op; the only purpose of it is to interrupt things. Then you can use pthread_kill to interrupt the read (or any other syscall) in a particular thread. This has the advantage that you don't have to switch your code to using C++/Java-type idioms. You can handle the EINTR error by checking a flag (indicating whether the thread was requested to abort) and resume the read if the flag is not set, or return an error code that causes the caller to clean up and eventually pthread_exit.
If you do use interrupting signal handlers, make sure all your syscalls that can return EINTR are wrapped in loops that retry (or check the abort flag and optionally retry) on EINTR. Otherwise things can break badly.
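A sketch of that signal-based approach; SIGUSR1, the flag name, and the helper functions are just illustrative choices:

#include <errno.h>
#include <pthread.h>
#include <signal.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

static volatile sig_atomic_t abort_requested = 0;

/* No-op handler: its only purpose is to interrupt the blocking read. */
static void wakeup_handler(int sig) { (void)sig; }

void install_wakeup_handler(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = wakeup_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;                 /* deliberately no SA_RESTART */
    sigaction(SIGUSR1, &sa, NULL);
}

/* Reader thread: retry on EINTR unless the abort flag is set. */
ssize_t interruptible_read(int fd, void *buf, size_t len)
{
    for (;;) {
        ssize_t n = read(fd, buf, len);
        if (n >= 0)
            return n;
        if (errno == EINTR && !abort_requested)
            continue;                /* spurious interruption: retry */
        return -1;                   /* real error, or we were asked to abort */
    }
}

/* Called from another thread to cancel the read. */
void cancel_reader(pthread_t reader)
{
    abort_requested = 1;
    pthread_kill(reader, SIGUSR1);   /* interrupts the blocked read with EINTR */
}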