In my Cocoa project, I communicate with a device connected to a serial port. I am currently waiting for the serial device to send a particular message of a few bytes. For the read operation (and for reacting once the desired message has been received), I created a new thread. On user request, I want to be able to cancel that thread.
As Apple suggests in the docs, I added a flag to the thread dictionary; the thread periodically checks whether the flag has been set and, if so, calls [NSThread exit]. This works fine.
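For reference, the check in the worker thread looks roughly like this (a sketch of the pattern from Apple's Threading Programming Guide; the dictionary key name is illustrative):

NSMutableDictionary *threadDict = [[NSThread currentThread] threadDictionary];
if ([[threadDict objectForKey:@"ThreadShouldExitNow"] boolValue]) {
    // clean up anything the thread owns, then bail out
    [NSThread exit];
}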
Now, the thread may be stuck waiting for the serial device to finally send the 12-byte message. The read call looks like this:
numBytes = read(fileDescriptor, buffer, 12);
Once the thread has started reading from the device but no data comes in, I can set the flag to tell the thread to finish, but the thread will not check the flag until it has finally received at least 12 bytes of data and continues processing.
Is there a way to kill a thread that currently performs a read operation on a serial device?
Edit for clarification:
I do not insist on creating a separate thread for the I/O operations with the serial device. If there is a way to encapsulate the operations such that I am able to "kill" them when the user presses a cancel button, I am perfectly happy.
I am developing a Cocoa application for desktop Mac OS X, so no restrictions regarding mobile devices and their capabilities apply.
A workaround would be to make the read function return immediately if there are no bytes to read. How can I do this?
Use select or poll with a timeout to detect when the descriptor is ready for reading.
Set the timeout to (say) half a second and call it in a loop while checking to see if your thread should exit.
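A minimal sketch of that loop, assuming the same fileDescriptor as above and a shouldExit flag that the main thread sets when the user cancels (the names are illustrative, not from the original code):

#include <sys/select.h>
#include <unistd.h>
#include <errno.h>
#include <stdbool.h>

/* Loop until the full 12-byte message has arrived or the cancel flag is set. */
ssize_t readMessage(int fileDescriptor, unsigned char *buffer, volatile bool *shouldExit)
{
    size_t total = 0;
    while (total < 12 && !*shouldExit) {
        fd_set readSet;
        FD_ZERO(&readSet);
        FD_SET(fileDescriptor, &readSet);

        struct timeval timeout = { 0, 500000 };   /* wake up every 0.5 s to re-check the flag */
        int ready = select(fileDescriptor + 1, &readSet, NULL, NULL, &timeout);
        if (ready < 0) {
            if (errno == EINTR)
                continue;                         /* interrupted by a signal, just retry */
            return -1;                            /* real error */
        }
        if (ready == 0)
            continue;                             /* timeout: loop around and check shouldExit */

        ssize_t n = read(fileDescriptor, buffer + total, 12 - total);
        if (n <= 0)
            return -1;                            /* device error or EOF */
        total += (size_t)n;
    }
    return (ssize_t)total;                        /* may be short if the thread was asked to exit */
}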
Asynchronous thread cancellation is almost always a bad idea. Try to stick with event-driven interfaces (and, if necessary, timeouts).
This is exactly what the pthread_cancel interface was designed for. You'll want to wrap the code containing the read call in pthread_cleanup_push and pthread_cleanup_pop so that you can clean up safely if the thread is cancelled, and also disable cancellation (with pthread_setcancelstate) in other code that runs in this thread that you don't want to be cancellable. This can be a pain if proper cleanup would involve multiple call frames; it essentially forces you to use pthread_cleanup_push at every call level and structure your thread code like C++ or Java with try/catch-style exception handling.
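A rough sketch of that structure, assuming the reader thread was started with pthread_create and the cancel button simply calls pthread_cancel on its thread ID (all names here are illustrative):

#include <pthread.h>
#include <unistd.h>
#include <stdlib.h>

/* Cleanup handler: runs if the thread is cancelled while the read is in flight. */
static void cleanup_buffer(void *arg)
{
    free(arg);
}

static void *reader_thread(void *arg)
{
    int fd = *(int *)arg;
    unsigned char *buffer = malloc(12);

    /* Everything between push and pop is cancellable; on cancellation, cleanup_buffer() runs. */
    pthread_cleanup_push(cleanup_buffer, buffer);
    ssize_t n = read(fd, buffer, 12);             /* read() is a cancellation point */
    pthread_cleanup_pop(0);                       /* 0 = don't run the handler on the normal path */

    if (n == 12) {
        int oldstate;
        pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &oldstate);
        /* ... process the 12-byte message here, safe from cancellation ... */
        pthread_setcancelstate(oldstate, NULL);
    }
    free(buffer);
    return NULL;
}

/* Elsewhere, on the user's cancel request: pthread_cancel(readerThreadId); */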
An alternative approach would be to install a signal handler for an otherwise-unused signal (like SIGUSR1 or one of the realtime signals) without the SA_RESTART flag, so that it interrupts syscalls with EINTR. The signal handler itself can be a complete no-op; the only purpose of it is to interrupt things. Then you can use pthread_kill to interrupt the read (or any other syscall) in a particular thread. This has the advantage that you don't have to switch your code to using C++/Java-type idioms. You can handle the EINTR error by checking a flag (indicating whether the thread was requested to abort) and resume the read if the flag is not set, or return an error code that causes the caller to clean up and eventually pthread_exit.
If you do use interrupting signal handlers, make sure all your syscalls that can return EINTR are wrapped in loops that retry (or check the abort flag and optionally retry) on EINTR. Otherwise things can break badly.
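A sketch of the signal-based variant, assuming SIGUSR1 is otherwise unused in the application (again, the names are illustrative):

#include <pthread.h>
#include <signal.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>

static volatile sig_atomic_t abortRequested = 0;

/* The handler does nothing; its only purpose is to make blocking syscalls fail with EINTR. */
static void wakeup_handler(int signo) { (void)signo; }

static void install_wakeup_handler(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = wakeup_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;                              /* deliberately no SA_RESTART */
    sigaction(SIGUSR1, &sa, NULL);
}

/* Used in the reader thread: retry on EINTR unless an abort was requested. */
static ssize_t interruptible_read(int fd, void *buf, size_t len)
{
    for (;;) {
        ssize_t n = read(fd, buf, len);
        if (n >= 0)
            return n;
        if (errno == EINTR && !abortRequested)
            continue;                             /* spurious interruption, retry */
        return -1;                                /* abort requested or a real error */
    }
}

/* From the main thread, to cancel the read:
 *   abortRequested = 1;
 *   pthread_kill(readerThreadId, SIGUSR1);
 */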
I'm new to threading, so there are a few things I'm trying to grasp correctly.
I have a windows form application that uses threading to keep my UI responsive while some server shenanigans are going on.
My question is: when I quit my application, what happens to ongoing threads? Will they run to completion or will they abruptly be interrupted?
If they are interrupted, what can I do to make sure they at least don't get interrupted in a way that would corrupt data on my server (i.e. force them to run to a safe place in the code where I know it's OK to interrupt the execution)?
You will want to keep a reference to said threads and call .Abort() on them when you want to terminate. Then you put your thread's code in a try/catch block and handle the ThreadAbortException. This lets you clean up what you are doing and terminate the thread cleanly at your own pace. In the main thread, after you have called .Abort(), you wait until the thread is no longer running (by polling the .IsAlive property of the Thread object) and close your application afterwards.
A thread needs a process to run in. The process won't be able to terminate if you don't terminate all the non-background threads you have started; threads marked as background threads will be aborted.
So the behavior is entirely up to your implementation. If you want to close the application, you could wait for all threads to terminate by themselves, you could set an event asking them to terminate and then wait, or you could just kill the threads.
The UI thread will terminate by itself because it runs a message loop that stops when requested by the operating system.
I am working on a project using a Zynq (Picozed devboard). The application is run bare-metal, uses lwIP TCP in RAW mode and basically behaves like this:
Receive a batch of data via Ethernet, which is stored in RAM.
Process the batch of data.
Send back the processed data via Ethernet.
The problem is, I need to measure the execution time of the processing part. However, running lwIP in RAW mode forces me to call tcp_fasttmr() and tcp_slowtmr() every 250/500 ms, which makes accurate measurement pretty hard. Whenever I'm not calling the tcp_tmr() functions for some time, I start repeatedly receiving error messages via UART ("unable to alloc pbuf in recv_handler"). It seems this is called from some ISR related to error handling, but I cannot really find the exact location.
My question is, how do I suspend the network functionality so I don't need to call tcp_tmr() periodically? I tried closing the connection and disabling the interface (netif_set_down()) and disabling the timer interrupt, but it still seems to have no effect on my problem.
I don't know anything about that devboard or the microcontroller on it, but you should have an ethernetif.c (lwIP port) file which should contain the processing of an Ethernet receive interrupt or similar. This should be calling the lwIP function netif->input with a packet to process.
Disabling the interface won't stop this behaviour; it will just stop the higher-level processing of the packet. If you are only measuring the execution time for debugging, you could try disabling the Ethernet receive interrupt and stop calling tcp_tmr() until you have processed the packets.
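A very rough sketch of that idea for the timing run; the interrupt-masking and timer calls below are placeholders (not real Xilinx API names), since the exact calls depend on your BSP and lwIP port:

/* Hypothetical sketch: disable_emac_rx_interrupt(), read_cycle_counter(), etc.
 * are placeholders for whatever your BSP actually provides. */
void process_and_measure(void)
{
    disable_emac_rx_interrupt();                        /* placeholder: stop new frames from arriving */

    unsigned long long start = read_cycle_counter();    /* placeholder: read a free-running timer */
    process_batch();                                    /* the processing step being measured */
    unsigned long long elapsed = read_cycle_counter() - start;

    enable_emac_rx_interrupt();                         /* placeholder: resume normal reception */

    print_elapsed_via_uart(elapsed);                    /* placeholder: report the result */
    /* resume calling tcp_fasttmr()/tcp_slowtmr() on their normal schedule from here on */
}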
I'm using GPUImageFilter in a chain, and most of the time it works OK. I've recently come across a few random crashes that match the symptoms in this github issue (albeit I'm using GPUImageFilter not live capture or video). I'm trying to find a suitable method that can ensure I've cleared the frame buffer and any other GPUImage-related activities in willResignActive.
Currently I have:
[[GPUImageContext sharedFramebufferCache] purgeAllUnassignedFramebuffers];
Is this sufficient? Should I use something else instead, or in addition?
As indicated there, seeing gpus_ReturnNotPermittedKillClient in a stack trace is almost always due to OpenGL ES operations being performed while your application is in the background or is just about to go to the background.
To deal with this, you need to guarantee that all GPUImage-related work is finished before your application heads to the background. You'll want to listen for delegate notifications that your application is heading to the background, and make sure all processing is complete before that delegate callback exits. The suggestion there by henryl is one way to ensure this. Add the following near the end of your delegate callback:
runSynchronouslyOnVideoProcessingQueue(^{
// Do some operation
});
What that will do is inject a synchronous block into the video processing pipeline (which runs on a background queue). Your delegate callback will block the main thread at that point until this block has a chance to execute, guaranteeing that all processing blocks before it have finished. That will make sure all pending operations are done (assuming you don't add new ones) before your application heads to the background.
There is a slight chance of this introducing a deadlock in your application, but I don't think any of my code in the processing pipeline calls back into the main queue. You might want to watch out for that, because if I do still have something in there that does, this will lock up your application. That internal code would need to be fixed if so.
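As a concrete sketch, assuming an iOS-style app delegate (per the willResignActive mention in the question), the barrier could sit at the end of the callback like this:

- (void)applicationWillResignActive:(UIApplication *)application {
    // Stop triggering any new GPUImage work before this point.
    runSynchronouslyOnVideoProcessingQueue(^{
        // Intentionally empty: by the time this block runs, every previously
        // queued GPUImage operation has finished, so no GL work can reach the
        // GPU after the application enters the background.
    });
}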
So I am running into a race condition and I have a few possible solutions for fixing the issue. I am new to threading, so obviously my opinion and research are limited. I have a large number of asynchronous calls that can happen if a user receives certain messages from the server. Thus, my design is poor due to the dependent nature of my objects.
Let's say I have two methods along these lines:

- (void)addUser:(NSString *)s {
    // does some asynchronous activity
}

- (void)messageUser:(NSString *)s {
    // does some more asynchronous activity
}
If a user were to receive a message telling it to addUser "Ryan", it would then create a thread and proceed with looking up Ryan and storing him. However, if the user has the application in suspended mode, and the buffer of messages waiting to be received contains both an addUser request and a messageUser request, a race condition occurs because it takes longer to complete addUser than it does to complete messageUser. Thus, if messageUser is called and (in our example) "Ryan" has not been fully added, it throws an error.
What would be a possible solution to this issue? I looked into locks and semaphores, and what I am trying to do is: when messageUser receives a call, check to make sure there is no thread currently processing addUser. If there is none, proceed; else wait, then proceed after it has finished.
Well it depends on how the messages are being issued in the first place and what the async response events are.
If the operations have dependencies (ordering requirements) then perhaps a background serial queue would be appropriate? That is a simple way to ensure the messages are processed in order.
If the async operations take completion blocks, then you could have the completion block issue the request for the next operation to be performed, though you may not know about that ahead of time.
If you need to solve this in a more general way then you need some kind of system for tracking prerequisites so you can skip work items that don't have their prerequisites met yet. That probably means your own background thread that monitors a list of waiting tasks and receives notification of all task completions so it can scan for items waiting on that completion and issue them.
It seems really complicated though... I suspect you don't really have such strong async parallel processing requirements and a much simpler design would be just as effective. Given your situation where you are receiving messages from a server, I think a serial queue would be the best option. Then you can process messages in the order the server sent them and keep things simple.
//do this once at app startup
dispatch_queue_t queue = dispatch_queue_create("com.example.myapp", NULL);
//handle server responses
dispatch_async(queue, ^{
//handle server message here, one at a time
});
In reality, depending on how you connect to your server, you might be able to just move the entire connection handling to the background queue and communicate with it via messages from the UI, and update the UI by dispatching to dispatch_get_main_queue(), which is the UI thread.
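For example, a sketch of that round trip (handleServerMessage: and updateUIWithResult: are hypothetical methods, and message stands for whatever object your connection code produces):

dispatch_async(queue, ^{
    //handle the server message on the background queue, one at a time
    NSDictionary *result = [self handleServerMessage:message];
    dispatch_async(dispatch_get_main_queue(), ^{
        //hop back to the main/UI thread to update the display
        [self updateUIWithResult:result];
    });
});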
I have an app where the network activity is done in its separate thread (and the network thread continuously gets data from the server and updates the display - the display calls are made back on the main thread). When the user logs out, the main thread calls a disconnect method on the network thread as follows:
[self performSelector:@selector(disconnectWithErrorOnNetworkThread:) onThread:nThread withObject:e waitUntilDone:YES];
This selector gets called most of the time and everything works fine. However, there are times (maybe two out of ten) when this call never returns (in other words, the selector never gets executed) and the thread and the app just hang. Does anyone know why performSelector is behaving erratically?
Please note that I need to wait until the call gets executed, that's why waitUntilDone is YES, so changing that to NO is not an option for me. Also the network thread has its run loop running (I explicitly start it when the thread is created).
Please also note that due to the continuous nature of the data transfer, I need to explicitly use NSThreads and not GCD or Operations queues.
That'll hang if:
it is attempting to perform a selector on the same thread the method was called from, or
the selector is being performed on a thread that is itself blocked in a synchronous call back to the calling thread, i.e. the synchronous call that triggered this performSelector in the first place (see the sketch after this list).
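A contrived sketch of the second case (the main-thread selector name is made up for illustration):

// On the network thread, something blocks waiting for the main thread...
[self performSelectorOnMainThread:@selector(willLogOut) withObject:nil waitUntilDone:YES];

// ...while the main thread is already blocked here, waiting for the network thread:
[self performSelector:@selector(disconnectWithErrorOnNetworkThread:)
             onThread:nThread
           withObject:e
        waitUntilDone:YES];

// Neither thread's run loop can service the other's request, so both wait forever.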
When your program is hung, have a look at the backtraces of all threads.
Note that when implementing any kind of networking concurrency, it is generally really bad to have synchronous calls from the networking code into the UI layers or onto other threads. The networking thread needs to be very responsive and, thus, just as blocking the main thread is bad, anything that can block the networking thread is bad, too.
Note also that some APIs with callbacks don't necessarily guarantee which thread the callback will be delivered on. This can lead to intermittent lockups, as described.
Finally, don't do any active polling. Your networking thread should be fully quiescent unless some event arrives. Any kind of looped polling is bad for battery life and responsiveness.