I have BBController instances (my custom objects), where some may need to wait for a few others to complete first (dependencies). I have decided to have each controller lock some synchronisation object at initialisation, let's call it a Padlock, and then unlock it when it's done processing. When it's unlocked, any controllers that depend on (or were waiting for) the aforementioned controller can then continue. So this is not about protecting a section of code by allowing only one thread in at a time, but instead about telling anything that depends on an output to wait until that output is available.
I have experience with Semaphores in Objective-C, so I thought I could use those here by having each controller initialise its semaphore with a value of 0, and then, when finished, signal it with a value of infinite or max. While that would work, I'm sure there is a better locking object to make use of, since the count of a semaphore is of no use here: any number of BBControllers should be able to continue once it is signalled. I am new to VB.Net.
Related
I have a static object that needs to initialize an imaging API. The allocated resources of this imaging API need to be released by the same thread.
So I'm starting a thread in my static object that initializes everything and then waits for a counter to reach zero. When this happens the thread cleans all up and finishes.
This is an unmanaged class inside a managed library, so I can't use System::Threading::Thread (needs a managed static member function) or std::thread (compiler error, not supported with /clr).
So I have to start my thread like:
CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)&Initialize, this, 0, 0);
All works fine: the init is done and the API functions work. But when I close the application, I see that the usage counter of my static object reaches zero, yet the cleanup function is never called by the thread, as if the thread was killed. Is there a way to make sure the thread will continue to exist and execute until its end?
After turning this around in all possible ways and adding events etc., I guess this is not possible, so I'll have to change the structure of my code: encapsulate the unmanaged class inside a managed class, and add the thread to the managed class.
I think you could proceed in one of two ways:
Wrap the resources in RAII-style classes, and refactor to have the objects' lifetimes be on the stack of your created thread, ensuring their destructors get called when the thread loop exits without having to call any additional cleanup. If there is no issue with the thread returning correctly when your counter reaches 0, this should be the simplest and cleanest way of addressing this.
I'm thinking you could intercept the WM_CLOSE message using a window procedure, perform the necessary cleanup, and then pass the message on, effectively "stalling" it until you are ready to close. Note that even though you are in a DLL, you can still set up a window procedure and message pump; you don't need a GUI to do that. I am, however, not 100% sure whether you'll receive the WM_CLOSE message that concerns the application that "owns" your DLL; it's not something I've tried out yet.
However, you will have to implement some form of messaging through events within your thread's loop, since the WindowProc will be called on a different thread, so that you know when to call the cleanup procedure.
I am also not very familiar with the CLR, so there might be a simpler way of interacting with those APIs than raw C++ calls and handles.
Let's say I have a complex calculation running in an NSOperation block. I have paused it, closed the app, then restarted the app. Can I recover the last state and continue from there?
Is there an existing solution for such a problem, or can it only be custom built for specific purposes?
The question is a bit vague, so it's hard to say without knowing all of the code in play. With that said, I may approach the problem by:
Option 1. In your subclass of NSOperation, add your own atomic, KVO-compliant "isPaused" property. Within the operation itself, observe that property and handle it accordingly if it ever changes (see the sketch at the end of this answer).
Option 2. Are you ever suspending the Operation Queue itself? If so, consider observing that property from within your operations, and each one independently can take action if that value changes.
Option 3. Cancel all operations in the queue, and if the view appears again, just restart with new operations.
Overall, though, there is no magic bullet for pausing operations already in progress. You'll have to bake your own solution. The damage shouldn't be too bad though.
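For Option 1, here is a minimal sketch of what such an operation might look like; the class name is invented, and for brevity it polls the flag between chunks of work rather than using KVO:

// Illustrative only: a pausable operation that checks its own flag.
@interface PausableOperation : NSOperation
@property (atomic, getter=isPaused) BOOL paused;
@end

@implementation PausableOperation
- (void)main {
    for (NSUInteger i = 0; i < 1000 && !self.isCancelled; i++) {
        // Wait while paused; a real implementation would use a condition
        // variable rather than polling with a sleep.
        while (self.isPaused && !self.isCancelled) {
            [NSThread sleepForTimeInterval:0.1];
        }
        // ... perform one chunk of the calculation here ...
    }
}
@end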
Suspending and Resuming Queues
If you want to issue a temporary halt to the execution of operations, you can suspend the corresponding operation queue using the setSuspended: method.
Suspending a queue does not cause already executing operations to pause in the middle of their tasks. It simply prevents new operations from being scheduled for execution. You might suspend a queue in response to a user request to pause any ongoing work, because the expectation is that the user might eventually want to resume that work.
For more detail, refer to the Apple docs: http://developer.apple.com/library/mac/#documentation/General/Conceptual/ConcurrencyProgrammingGuide/OperationObjects/OperationObjects.html
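In code, assuming myQueue is the NSOperationQueue in question, pausing and resuming is just:

[myQueue setSuspended:YES];   // already-running operations keep going; nothing new starts
// ... later, when the user resumes ...
[myQueue setSuspended:NO];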
I have a situation where a session of background processing can finish by timing out, by the user asynchronously cancelling, or by the session completing. Any of those completion events can run a single-shot completion method. The completion method must only be run once. Assume that the session is an instance of an object, so any synchronisation must use instance constructs.
Currently I'm using an Atomic Compare and Swap operation on a completion state variable so that each event can test and set the completion state when it runs. The first completion event to fire gets to set the completed state and run the single shot method and the remaining events fail. This works nicely.
However I can't help feeling that I should be able to do this in a higher level way. I tried using a Lock object (NSLock as I'm writing this with Cocoa) but then got a warning that I was releasing a lock that was still in the locked state. This is what I want of course. The lock gets locked once and never unlocked but I was afraid that system resources representing the lock might get leaked.
Anyway, I'm just interested in whether anyone knows of a more high-level way to achieve a single-shot method like this.
Sample code for any of the completion events:
// completed is an instance variable, e.g. volatile int completed;
// OSAtomicCompareAndSwapInt is declared in <libkern/OSAtomic.h>
if (OSAtomicCompareAndSwapInt(0, 1, &completed))
{
    self.completionCallback();
}
Doing a CAS is almost certainly the right thing to do. Locks are not designed for what you need; they are likely to be much more expensive and are semantically a poor match anyway -- the completion is not "locked", it is "done". A boolean flag is the right representation, and doing a CAS ensures that it is manipulated safely in concurrent scenarios. In C++, I'd use std::atomic_flag for this; maybe check whether Cocoa has anything similar (this just wraps the CAS in a nicer interface, so that you never accidentally use a non-CAS test on the variable, which would be racy).
(edit: in pthreads, there's a function called pthread_once which does what you want, but I wouldn't know about Cocoa; the pthread_once interface is quite unwieldy anyway, in my opinion...)
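For reference, if your toolchain supports C11's <stdatomic.h>, the test-and-set idea looks like this from (Objective-)C as well; the function name is invented, and the flag is file-scope here only to keep the sketch short:

#include <stdatomic.h>

static atomic_flag completed = ATOMIC_FLAG_INIT;

void runCompletionOnce(void (^completionCallback)(void))
{
    // atomic_flag_test_and_set returns the flag's previous value,
    // so only the very first caller gets false back and runs the callback.
    if (!atomic_flag_test_and_set(&completed)) {
        completionCallback();
    }
}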
I have an iOS application using NSThreads for concurrency tasks. I will try to migrate it to Grand Central Dispatch (GCD) for handling concurrency.
The problem is that the app needs information regarding how many threads have been created since a given time, and how many of the threads spawned since that time are currently running.
At the moment this is done by creating a category that does method swizzling on the -main method of NSThread. The swizzled method simply increments the total number of running threads and then decrements the same variable before returning.
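For context, the swizzling category looks roughly like this (the counter and the method names are illustrative):

#import <objc/runtime.h>
#import <libkern/OSAtomic.h>

static volatile int32_t runningThreads = 0;

@interface NSThread (BBCounting)
- (void)bb_countingMain;
@end

@implementation NSThread (BBCounting)
+ (void)load {
    // Swap -main with the counting wrapper below.
    Method original = class_getInstanceMethod(self, @selector(main));
    Method swizzled = class_getInstanceMethod(self, @selector(bb_countingMain));
    method_exchangeImplementations(original, swizzled);
}

- (void)bb_countingMain {
    OSAtomicIncrement32(&runningThreads);
    [self bb_countingMain];   // after the exchange, this invokes the original -main
    OSAtomicDecrement32(&runningThreads);
}
@end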
The problem is that when I use GCD's dispatch_async it does not create an NSThread, so my category approach does not work. How can I achieve the same thing while using GCD to handle concurrency?
What I would like to detect is when a new block is added to GCD, and when that block has been executed.
Any suggestions on how to achieve this are very welcome.
EDIT
Many thanks to #ipmcc and #RyanR for helping me out on this. :) I believe I need to tell some more about the background and what I am trying to accomplish.
What I am actually trying to do is extend the iOS testing framework Frank. Frank embeds a small web server within a given app, which makes it possible to send HTTP requests to the iOS application and thereby simulate events, for example a swipe or a tap gesture.
I would like to extend it in a way that enables it to wait until all work triggered by a specific simulated event has finished before responding to a request.
However, I found it hard to detect exactly what work was triggered by the received event. That's how I came to the solution of simply resetting a thread counter, incrementing it for every thread created after the event was simulated, decrementing it when those threads finish, and then blocking until the count reaches zero again. I know this approach is not perfect either, and it won't work with GCD.
Is there any other way to achieve this? Another possible solution I have thought of is to specify that everything must run synchronously except the thread handling the HTTP request. However, I don't know if this is possible.
Any suggestions on how to achieve blocking after each simulated event until work triggered by that event has completed?
The problem is that the app needs information regarding how many threads have been created since a given time, and how many of the threads spawned since that time are currently running.
You will not be able to get this information from GCD. One of the points of GCD is that you do not manage the thread pool. It is opaque. You'll note that even pthreads, the underlying threading library on which NSThread and GCD are built, does not have a (public) means to enumerate all existing threads or get the number of running threads. This is not going to be doable without hard core low level hackery. If you need to control or know the number of threads, then you need to be the one to spawn and manage them, and GCD is the wrong abstraction for you.
At the moment this is done by creating a category that does method swizzling on the -main method of NSThread. The swizzled method simply increments the total number of running threads and then decrements the same variable before returning.
Note that this only tells you the number of threads started using NSThread. As mentioned, NSThread is a fairly high level abstraction on top of pthreads. There is nothing to prevent library code from spawning its own threads using the pthreads API that will be invisible to your count.
The problem is that when I use GCD's dispatch_async it does not create an NSThread, so my category approach does not work. How can I achieve the same thing while using GCD to handle concurrency?
In short, you can't. If you want to go forth and patch functions all over the various frameworks, then you should look up a library called mach_override. (But please don't.)
What I would like to detect is when a new block is added to GCD, and when that block has been executed.
Since GCD uses thread pools, the act of adding a block does not imply a new thread. (And that's sorta the whole point.)
If you have some limited resource whose consumption you need to manage, the traditional way to do that would be with a limiting semaphore, but that is just one option.
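For example, a GCD counting semaphore can cap how many blocks touch a scarce resource at once (the limit of 4 below is arbitrary, purely for illustration):

dispatch_semaphore_t limit = dispatch_semaphore_create(4);

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    dispatch_semaphore_wait(limit, DISPATCH_TIME_FOREVER);   // blocks if 4 are already in flight
    // ... use the limited resource ...
    dispatch_semaphore_signal(limit);
});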
This whole question just reeks of a poor design. Like the number of pthreads, GCD's queue widths are opaque/non-public. Your previous solution was not particularly viable (as discussed), and further efforts are likely to yield similarly poor solutions. You should really rethink your architecture such that knowing how many threads are running isn't important.
EDIT: Thanks for the clarification. There's not really a generic way, from the outside, to tell when all the "work" is done. What if an action sets up a timer that won't call back for ten minutes? At the extreme, consider this: the main runloop continues to spin for the entire life of the app, and as long as the main runloop is spinning, "work" could be being done on it.
In order to detect "doneness" your app has to signal doneness. In order to signal doneness, the app has to have some way (internal to itself) to know it's done. Put differently, the app can't tell something else (i.e. Frank) something it doesn't know. One way to go about this would be to encapsulate all the work you do in your app in NSOperations. NSOperation/NSOperationQueue provide good ways of reporting "doneness." At the simplest level, you could wrap the code where you kickoff work in an NSBlockOperation, then add a completion block to that operation that signals something else when it's done, and enqueue it to an NSOperationQueue for execution. (You could also do this with dispatch_group and dispatch_group_notify if you prefer working in the GCD style.)
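To illustrate the dispatch_group variant (the queue choice and the final notification body are placeholders, not Frank API):

dispatch_group_t eventWork = dispatch_group_create();

// Wherever the app kicks off work in response to the simulated event:
dispatch_group_async(eventWork, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // ... do the work triggered by the event ...
});

// Runs once every block entered into the group has finished:
dispatch_group_notify(eventWork, dispatch_get_main_queue(), ^{
    // signal the waiting web-server thread that the app is idle again
});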
If you have specific questions about how to package up your app's work into NSOperations, I would suggest starting a new question.
You can hook into the dispatch introspection functions (introspection.h, methods all start with dispatch_introspection), but you have to link with that library which is supposed to be only for debugging. I don't think you can include that in a release build. Your best bet would be to encapsulate GCD into your own object, so all your code submits blocks to execute through that object and it submits them to GCD after tracking whatever you're interested in. You won't be able to track thread consumption though, because GCD intentionally abstracts that and reuses threads.
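A rough sketch of such a wrapper (the class and method names are made up; it only counts blocks, which is the part GCD itself won't tell you):

#import <libkern/OSAtomic.h>

@interface BBTrackingDispatcher : NSObject {
    volatile int32_t _pending;   // blocks submitted but not yet finished
}
- (void)dispatchAsync:(dispatch_block_t)block onQueue:(dispatch_queue_t)queue;
- (int32_t)pendingBlockCount;
@end

@implementation BBTrackingDispatcher
- (void)dispatchAsync:(dispatch_block_t)block onQueue:(dispatch_queue_t)queue {
    OSAtomicIncrement32(&_pending);          // count on submission
    dispatch_async(queue, ^{
        block();
        OSAtomicDecrement32(&_pending);      // uncount when the block completes
    });
}
- (int32_t)pendingBlockCount {
    return _pending;
}
@end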
I am in the middle of creating a cloud integration framework for iOS. It lets you save, query, count and remove, with synchronous and asynchronous variants, using selector/callback and block implementations. What is the correct practice: running the completion blocks on the main thread or on a background thread?
For simple cases, I just parameterize it and do all the work I can on secondary threads (a minimal sketch follows the points below):
By default, callbacks will be made on any thread (where it is most efficient and direct - typically once the operation has completed). This is the default because messaging via main can be quite costly.
The client may optionally specify that the message must be made on the main thread. This way, it requires one line or argument. If safety is more important than efficiency, then you may want to invert the default value.
You could also attempt to batch and coalesce some messages, or simply use a timer on the main run loop to vend.
Consider both joined and detached models for some of your work.
If you can reduce the task to a result (remove the capability for incremental updates, if not needed), then you can simply run the task, do the work, and provide the result (or error) when complete.
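Here is that parameterised-callback idea in sketch form; the method name, parameters, and queue choice are invented for illustration:

// Hypothetical framework method: the client chooses where the callback runs.
- (void)saveObject:(id)object
        completion:(void (^)(NSError *error))completion
      onMainThread:(BOOL)callbackOnMain
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSError *error = nil;
        // ... perform the save against the cloud service, filling in error ...
        if (completion) {
            if (callbackOnMain) {
                dispatch_async(dispatch_get_main_queue(), ^{ completion(error); });
            } else {
                completion(error);   // call back directly on the worker thread
            }
        }
    });
}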
Apple's NSURLConnection class calls back to its delegate methods on the thread from which it was initiated, while doing its work on a background thread. That seems like a sensible procedure. It's likely that a user of your framework will not enjoy having to worry about thread safety when writing a simple callback block, as they would if you created a new thread to run it on.
The two sides of the coin: If the callback touches the GUI, it has to be run on the main thread. On the other hand, if it doesn't, and is going to do a lot of work, running it on the main thread will block the GUI, causing frustration for the end user.
It's probably best to put the callback on a known, documented thread, and let the app programmer make the determination of the effect on the GUI.