I have a static object that needs to initialize an imaging API. The allocated resources of this imaging API need to be released by the same thread.
So I'm starting a thread in my static object that initializes everything and then waits for a counter to reach zero. When that happens, the thread cleans everything up and finishes.
This is an unmanaged class inside a managed library, so I can't use System::Threading::Thread (needs a managed static member function) or std::thread (compiler error, not supported with /clr).
So I have to start my thread like:
CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)&Initialize, this, 0, 0);
Everything works fine: the init is done and the API functions work. But when I close the application I see that the usage counter of my static object reaches zero, yet the cleanup function is never called by the thread, as if the thread had been killed. Is there a way to make sure the thread keeps running and executes until its end?
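Roughly, the thread routine looks like this (simplified, with placeholder names rather than my actual code):

#include <windows.h>

// Sketch only: MyImagingWrapper, InitImagingApi, ReleaseImagingApi and
// m_useCount (a volatile LONG usage counter) are placeholder names.
static DWORD WINAPI Initialize(LPVOID param)
{
    MyImagingWrapper* self = static_cast<MyImagingWrapper*>(param);

    self->InitImagingApi();                 // acquire the API resources on this thread

    // Wait until the usage counter of the static object drops to zero.
    while (InterlockedCompareExchange(&self->m_useCount, 0, 0) != 0)
        Sleep(10);                          // polling kept simple; an event would do as well

    // Clean up on the same thread that did the init. At process exit this
    // is exactly the part that never runs, because the thread gets killed.
    self->ReleaseImagingApi();
    return 0;
}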
After turning this around every possible way and adding events etc., I guess this is not possible, so I'll have to restructure my code: encapsulate the unmanaged class inside a managed class and put the thread in the managed class.
I think you could proceed in one of two ways:
Wrap the resources in RAII-style classes and refactor so that the objects live on the stack of your created thread; their destructors are then called when the thread routine exits, without any additional cleanup call. If there is no issue with the thread returning correctly when your counter reaches 0, this is the simplest and cleanest way of addressing it.
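Something along these lines (just a sketch; ImagingApiSession and WaitUntilCounterReachesZero are placeholders for whatever your code actually does):

#include <windows.h>

// Sketch only: the wrapper acquires the API in its constructor and
// releases it in its destructor, so simply returning from the thread
// routine performs the cleanup on the right thread.
class ImagingApiSession
{
public:
    ImagingApiSession()  { /* initialize the imaging API here */ }
    ~ImagingApiSession() { /* release the imaging API here */ }
};

static DWORD WINAPI WorkerThread(LPVOID param)
{
    ImagingApiSession session;              // acquired on this thread

    WaitUntilCounterReachesZero(param);     // block until the usage counter hits 0

    return 0;                               // ~ImagingApiSession() runs here,
                                            // on the same thread that did the init
}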
I'm thinking you could intercept the WM_CLOSE message with a window procedure, do the necessary cleanup and then pass the message on, effectively "stalling" it until you are ready to close. Note that even though you are in a DLL you can still set up a window procedure and a message pump; you don't need a GUI for that. I am, however, not 100% sure whether you'll receive the WM_CLOSE message that concerns the application that "owns" your DLL; it's not something I've tried yet.
You will have to implement some form of signalling (events, for example) inside your thread's loop, because the WindowProc will be called on a different thread, so that your worker knows when to run the cleanup procedure.
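The windowless message pump part would look roughly like this (a sketch only; g_cleanupRequested is an illustrative event your worker loop would wait on, and whether the host application's WM_CLOSE ever reaches a window created inside your DLL is exactly the part I'm unsure about):

#include <windows.h>

// Sketch: a message-only window plus a message pump on the worker thread.
// g_cleanupRequested is an illustrative manual-reset event that the worker
// loop waits on before running its cleanup.
static HANDLE g_cleanupRequested = CreateEvent(NULL, TRUE, FALSE, NULL);

static LRESULT CALLBACK CleanupWndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    if (msg == WM_CLOSE)
        SetEvent(g_cleanupRequested);       // tell the worker to clean up first
    return DefWindowProc(hwnd, msg, wp, lp);
}

static void RunMessagePump()
{
    WNDCLASS wc = {};
    wc.lpfnWndProc   = CleanupWndProc;
    wc.hInstance     = GetModuleHandle(NULL);
    wc.lpszClassName = TEXT("DllCleanupListener");
    RegisterClass(&wc);

    // HWND_MESSAGE creates a message-only window: it receives messages
    // sent or posted to it, but never shows up on screen.
    CreateWindowEx(0, wc.lpszClassName, NULL, 0, 0, 0, 0, 0,
                   HWND_MESSAGE, NULL, wc.hInstance, NULL);

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}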
I'm also not very familiar with the CLR, so there might be a simpler way of interacting with those APIs than raw C++ calls and handles.
I have BBController instances (my custom objects), where some may need to wait for a few others to complete first (dependencies). I have decided to have each controller lock some synchronisation object at initialisation, let's call it a Padlock, and then unlock it when it's done processing. When it's unlocked, any controllers that depend on (or were waiting for) the aforementioned controller can then continue. So this is not about protecting a section of code by allowing only one thread in, but about telling anything that depends on an output to wait until that output is available.
I have experience with semaphores in Objective-C, so I thought I could use those here by having each controller initialise its semaphore with a value of 0 and then, when finished, signal it with a value of infinity or max. While that would work, I'm sure there is a better locking object to use, since the count of a semaphore is of no use here: any number of BBControllers should be able to continue once it is signalled. I am new to VB.NET.
I am currently interning at a company and just starting to get into their code. I noticed that they have tasks that use singleton classes, and inside the singleton class there is a Future that is used to fetch thread dumps.
The code goes something like this:
class SingletonClass {
    private static final SingletonClass INSTANCE = new SingletonClass();
    private final ExecutorService x = Executors.newFixedThreadPool(1);

    private SingletonClass() {}

    static SingletonClass getInstance() { return INSTANCE; }

    String methodThatFetchesThreadDumps() throws Exception {
        Future<String> future = x.submit(() -> /* collect the dumps */ "...");
        return future.get();   // the Future is used here
    }
}
Is it a good idea to use a future inside a singleton? What happens if the task using this singleton runs twice and overlaps? Wouldn’t using the singleton multiple times cause the future to give unexpected behavior?
This isn't necessarily a bad thing. The Future makes sure that the objects it returns are safely visible across threads. The thread pool is fixed at a size of 1, so if there are concurrent requests, the second one blocks until the single worker thread becomes available, and by that time the worker has already handed off the result of the previous task through its Future. No overlap should occur.
I have written a Core Data abstraction class which holds the persistent store, object model and object context. To make multithreading easier, I have written the accessor for the object context so that it returns an instance that is only available to the current thread, using [NSThread currentThread] to identify the threads.
This works perfectly as long as I don't use GCD, which I want to use as a replacement for the old NSThreads. So my question is: how do I identify a GCD thread? The question applies to both iOS and Mac OS X, but I guess it's the same on both platforms.
You could check whether dispatch_get_current_queue() returns anything. I like Jeremy's idea of moving from a Core-Data-context-per-thread to a context-per-queue model using the queue's context storage, though.
Perhaps you could store the Core Data context for each queue in the GCD queue's context using dispatch_set_context().
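Something like this, stripped of the Core Data part (a sketch only; a plain struct stands in for the managed object context, and the queue itself is passed as the argument so the worker function can look the context back up):

// Mechanics of dispatch_set_context / dispatch_get_context only.
#include <dispatch/dispatch.h>
#include <cstdio>

struct QueueLocalContext {
    int id;                             // stand-in for per-queue state
};

static void work(void *arg) {
    // The queue is passed in as the argument, so we can read back
    // whatever pointer was attached to it with dispatch_set_context().
    dispatch_queue_t q = static_cast<dispatch_queue_t>(arg);
    QueueLocalContext *ctx =
        static_cast<QueueLocalContext *>(dispatch_get_context(q));
    std::printf("running with context %d\n", ctx->id);
}

static void noop(void *) {}

int main() {
    dispatch_queue_t q = dispatch_queue_create("com.example.worker", NULL);

    QueueLocalContext ctx = { 42 };
    dispatch_set_context(q, &ctx);      // attach per-queue state

    dispatch_async_f(q, q, work);       // dispatch_async_f avoids the blocks extension
    dispatch_sync_f(q, NULL, noop);     // drain: serial queue, so work has run by now

    dispatch_release(q);
    return 0;
}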
The contextForCurrentThread helper method in MagicalRecord is very similar to what you said (i.e. keep one context per thread). But a GCD block, even though it is submitted to a single queue, can run on any thread GCD manages, which will cause some random crashes. Check this article: http://saulmora.com/2013/09/15/why-contextforcurrentthread-doesn-t-work-in-magicalrecord/
Was wondering if anyone knows, or has pointers to good documentation that discusses, the low-level implementation details of Cocoa's 'performSelectorOnMainThread:' method.
My best guess, and one I think is probably pretty close, is that it uses Mach ports, or an abstraction on top of them, to provide inter-thread communication, passing the selector information along as part of the Mach message.
Right? Wrong? Thanks!
Update 09:39 AM PST
Thank you Evan DiBiase and Mecki for the answers, but to clarify: I understand what happens in the run loop; what I'm looking for an answer to is "where is the method getting queued? How is the selector information getting passed into the queue?" Looking for more than Apple's doc info: I've read 'em.
Update 14:21 PST
Chris Hanson brings up a good point in a comment: my objective here is not to learn the underlying mechanisms in order to take advantage of them in my own code. Rather, I'm just interested in a better conceptual understanding of how one thread signals another to execute code. As I said, my own research leads me to believe that it takes advantage of Mach messaging for IPC to pass selector information between threads, but I'm specifically looking for concrete information on what is happening, so I can be sure I'm understanding things correctly. Thanks!
Update 03/06/09
I've opened a bounty on this question because I'd really like to see it answered, but if you are trying to collect, please make sure you read everything, including all currently posted answers, the comments on those answers and on my original question, and the update text I posted above. I'm looking for the lowest-level details of the mechanism used by performSelectorOnMainThread: and the like, and as I mentioned earlier, I suspect it has something to do with Mach ports, but I'd really like to know for sure. The bounty will not be awarded unless I can confirm the answer given is correct. Thanks everyone!
Yes, it does use Mach ports. What happens is this:
A block of data encapsulating the perform info (the target object, the selector, the optional object argument to the selector, etc.) is enqueued in the thread's run loop info. This is done using @synchronized, which ultimately uses pthread_mutex_lock.
CFRunLoopSourceSignal is called to signal that the source is ready to fire.
CFRunLoopWakeUp is called to let the main thread's run loop know it's time to wake up. This is done using mach_msg.
From the Apple docs:
Version 1 sources are managed by the run loop and kernel. These sources use Mach ports to signal when the sources are ready to fire. A source is automatically signaled by the kernel when a message arrives on the source’s Mach port. The contents of the message are given to the source to process when the source is fired. The run loop sources for CFMachPort and CFMessagePort are currently implemented as version 1 sources.
I'm looking at a stack trace right now, and this is what it shows:
0 mach_msg
1 CFRunLoopWakeUp
2 -[NSThread _nq:]
3 -[NSObject(NSThreadPerformAdditions) performSelector:onThread:withObject:waitUntilDone:modes:]
4 -[NSObject(NSThreadPerformAdditions) performSelectorOnMainThread:withObject:waitUntilDone:]
Set a breakpoint on mach_msg and you'll be able to confirm it.
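You can reproduce the same signal-then-wake pattern yourself with the public C API if you want to poke at it (just a sketch, using a version 0 source that you signal manually rather than the Mach-port-backed version 1 sources Foundation uses internally):

// Sketch: a version 0 run loop source on the main thread, signalled and
// woken up from a second thread. CFRunLoopSourceSignal + CFRunLoopWakeUp
// is the same pattern; the wake-up goes through mach_msg under the hood.
#include <CoreFoundation/CoreFoundation.h>
#include <thread>
#include <cstdio>

static void performOnMain(void * /*info*/) {
    std::printf("perform() running on the main thread\n");
    CFRunLoopStop(CFRunLoopGetMain());      // we're done, stop the loop
}

int main() {
    CFRunLoopSourceContext ctx = {};        // version 0 source
    ctx.perform = performOnMain;

    CFRunLoopSourceRef source = CFRunLoopSourceCreate(NULL, 0, &ctx);
    CFRunLoopAddSource(CFRunLoopGetMain(), source, kCFRunLoopDefaultMode);

    std::thread worker([source] {
        // The "performSelectorOnMainThread" moment: mark the source as
        // ready to fire, then wake the main run loop.
        CFRunLoopSourceSignal(source);
        CFRunLoopWakeUp(CFRunLoopGetMain());
    });

    CFRunLoopRun();                         // sleeps until woken, then fires the source
    worker.join();
    CFRelease(source);
    return 0;
}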
One More Edit:
To answer the question from the comment:
what IPC mechanism is being used to pass info between threads? Shared memory? Sockets? Mach messaging?
NSThread internally stores a reference to the main thread, and via that reference you can get a reference to that thread's NSRunLoop. An NSRunLoop is internally a linked list, and adding an NSTimer object to the run loop creates a new list element and appends it to the list. So you could say it's shared memory: the linked list, which actually belongs to the main thread, is simply modified from within a different thread. There are mutexes/locks (possibly even NSLock objects) that make sure editing the linked list is thread-safe.
Pseudo code:
// Main thread
for (;;) {
    lock(runloop->runloopLock);
    task = NULL;
    do {
        task = getNextTask(runloop);
        if (!task) {
            // The function below unlocks the lock and atomically
            // sends the thread to sleep. When the thread is woken
            // up again, it re-acquires the lock before it continues
            // running. See "man pthread_cond_wait" for an example
            // of a function that works this way.
            wait_for_notification(runloop->newTasks, runloop->runloopLock);
        }
    } while (!task);
    unlock(runloop->runloopLock);
    processTask(task);
}

// Other thread, perform selector on main thread.
// selector is a char * containing the selector name,
// object is a void * reference to the target object.
timer = createTimerInPast(selector, object);
runloop = getRunloopOfMainThread();
lock(runloop->runloopLock);
addTask(runloop, timer);
wake_all_sleeping(runloop->newTasks);
unlock(runloop->runloopLock);
Of course this is oversimplified; most details are hidden inside the functions used here. For example, getNextTask will only return a timer if that timer should already have fired. If the fire date of every timer is still in the future and there is no other event to process (like a keyboard or mouse event from the UI, or a posted notification), it returns NULL.
I'm still not sure what the question is. A selector is basically nothing more than a C string containing the name of the method being called. Every method is a normal C function, and there is a table mapping method names (as strings) to function pointers. Those are the very basics of how Objective-C actually works.
As I wrote below, an NSTimer object is created that gets a pointer to the target object and a pointer to a C string containing the method name; when the timer fires, it finds the right C function to call by looking up that name in the target object's table (hence it needs the method name as a string, and hence it needs a reference to the target object).
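You can watch that lookup happen with the plain C runtime functions (a small illustration of mine, compiled as C/C++ and linked against libobjc; nothing in it is specific to performSelectorOnMainThread:):

// A selector is just an interned method name; the runtime maps it to a
// function pointer (IMP) via the class's method table.
#include <objc/runtime.h>
#include <cstdio>

int main() {
    SEL sel = sel_registerName("description");
    Class cls = objc_getClass("NSObject");

    std::printf("selector name: %s\n", sel_getName(sel));
    std::printf("NSObject responds: %d\n",
                (int)class_respondsToSelector(cls, sel));

    IMP imp = class_getMethodImplementation(cls, sel);
    std::printf("implementation lives at %p\n", (void *)imp);
    return 0;
}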
Not exactly the implementation, but pretty close to it:
Every thread in Cocoa has an NSRunLoop (it's always there; you never need to create one for a thread). performSelectorOnMainThread: creates an NSTimer-like object that fires only once and whose fire time is already in the past (so it needs to fire immediately), then gets the main thread's NSRunLoop and adds the timer object to it. As soon as the main thread goes idle, it searches its run loop for the next event to process (or goes to sleep if there is nothing to process, to be woken up again as soon as an event is added) and performs it. Either the main thread is busy when you schedule the call, in which case it processes the timer event as soon as it has finished its current task, or it is sleeping at that moment, in which case adding the event wakes it up and it processes the event immediately.
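If you want to see that behaviour with the plain C API, here is a sketch (my own illustration, not Apple's code): a one-shot CFRunLoopTimer whose fire date is already in the past, scheduled on the main run loop from a second thread.

// Sketch: an already-due one-shot timer added to the main run loop from
// another thread fires as soon as the main thread enters its run loop.
#include <CoreFoundation/CoreFoundation.h>
#include <thread>
#include <cstdio>

static void fired(CFRunLoopTimerRef, void *) {
    std::printf("timer fired on the main thread\n");
    CFRunLoopStop(CFRunLoopGetMain());
}

int main() {
    std::thread worker([] {
        CFAbsoluteTime past = CFAbsoluteTimeGetCurrent() - 1.0;   // already due
        CFRunLoopTimerRef timer =
            CFRunLoopTimerCreate(NULL, past, 0, 0, 0, fired, NULL);

        // Schedule on the *main* run loop, then poke it in case it is asleep.
        CFRunLoopAddTimer(CFRunLoopGetMain(), timer, kCFRunLoopCommonModes);
        CFRunLoopWakeUp(CFRunLoopGetMain());
        CFRelease(timer);                     // the run loop retains it
    });
    worker.join();     // "main thread busy" while the call is being scheduled

    CFRunLoopRun();    // enters the loop, finds an overdue timer, fires it at once
    return 0;
}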
A good place to look at how Apple is most likely doing it (nobody can say for sure, since it is, after all, closed source) is GNUstep. GCC can compile Objective-C (it's not an extension only Apple ships; the standard GCC handles it), but Obj-C without all the basic classes Apple ships is rather useless, so the GNU community re-implemented the most common Obj-C classes you use on the Mac, and their implementation is open source.
Here you can download a recent source package.
Unpack it and have a look at the implementations of NSThread, NSObject and NSTimer for the details. I guess Apple is not doing it much differently; I could probably prove that using gdb, but why would they deviate much from this approach? It's a clever approach that works very well :)
The documentation for NSObject's performSelectorOnMainThread:withObject:waitUntilDone: method says:
This method queues the message on the run loop of the main thread using the default run loop modes—that is, the modes associated with the NSRunLoopCommonModes constant. As part of its normal run loop processing, the main thread dequeues the message (assuming it is running in one of the default run loop modes) and invokes the desired method.
As Mecki said, a more general mechanism that could be used to implement -performSelectorOn… is NSTimer.
NSTimer is toll-free bridged to CFRunLoopTimer. An implementation of CFRunLoopTimer – although not necessarily the one actually used for normal processes in OS X – can be found in CFLite (the open-source subset of CoreFoundation; package CF-476.14 in the Darwin 9.4 source code). (CF-476.15, corresponding to OS X 10.5.5, is not yet available.)