I have created an NSMetadataQuery to search for all audio available through Spotlight, modelled on the following command, which returns plenty of results:
mdfind 'kMDItemContentTypeTree == "public.audio"'
Here is the code I'm using:
NSMetadataQuery *q = [[[NSMetadataQuery alloc] init] autorelease];
[q setPredicate:[NSPredicate predicateWithFormat:@"kMDItemContentTypeTree == 'public.audio'"]];
NSLog(@"%@", [[q predicate] predicateFormat]);
if ([q startQuery]) {
    while ([q isGathering]) {
        NSLog(@"Polling results: %lu", (unsigned long)[q resultCount]);
        [NSThread sleepForTimeInterval:0.1];
    }
    [q stopQuery];
}
For some reason, the query seems to remain in the gathering phase indefinitely, and never gets a single result. I would like to know why this is the case, and whether there would be a more elegant way to block the thread while waiting for a result, preferably avoiding polling.
My application is actually not based on Cocoa but only on Foundation, and thus far it has no event loop. I realize that the conventional approach to dealing with Spotlight queries is to subscribe to a notification, but I don't know how to block while waiting for one, and that approach seems a little like overkill for my purposes.
To phrase my question as simply as possible, can I block my thread while waiting for the NSMetadataQuery to conclude the initial gathering phase? If so, how?
Instead of [NSThread sleepForTimeInterval:0.1] try:
[[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.1]];
The former actually stops the thread altogether, which means the query can't make any progress (NSMetadataQuery does its work and delivers its results via the run loop). The latter is kind of like sleeping, except that it also allows run loop event sources to fire.
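Applied to the code in the question, the loop might look like this (a minimal sketch; q is the NSMetadataQuery from above):

if ([q startQuery]) {
    while ([q isGathering]) {
        // Let the current run loop process events (including the query's) for up to 0.1 s.
        [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.1]];
        NSLog(@"Polling results so far: %lu", (unsigned long)[q resultCount]);
    }
    [q stopQuery];
}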
This is the way I am trying to manage my thread:
- (void)ExecuteThread {
    @autoreleasepool {
        bInsideDifferentThread = YES;
        //some code...
        bInsideDifferentThread = NO;
    }
    [NSThread exit];
}
- (void)ThreadCallerEvent {
    NSThread *myThread = [[NSThread alloc] initWithTarget:self
                                                 selector:@selector(ExecuteThread)
                                                   object:nil];
    if (!bInsideDifferentThread)
        [myThread start];
    else
        [myThread cancel];
}
I do it this way because I don't want the thread to be started again until it has finished working. The problem is that this generates leaks from memory allocated in [NSThread init] that is never released.
Any ideas of how to fix this problem?
I ran a fragment similar to yours and wasn't able to detect leaks; but the context is almost certainly different. I'm really not sure what ARC is doing with myThread in your example when it goes out of scope. A typical pattern for using NSThread would be:
[NSThread detachNewThreadSelector:@selector(executeThread)
                         toTarget:self
                       withObject:nil];
in which case you're not responsible for dealing directly with the thread you are detaching. (Note: I changed your method name to use camel case which is the preferred method and variable naming convention in Cocoa.)
All of the above said, managing threads is no longer the preferred way of achieving concurrent design. It's perfectly acceptable, but Apple is encouraging developers to migrate to GCD. It is far better to think in terms of units of work that need to be performed concurrently.
Without understanding your needs more deeply in this case, it's hard to know what advantages, if any, working directly with threads might offer you; but I would consider looking at concurrent queues/GCD more closely. Perhaps you could simply use something like:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
// do your background work
});
and achieve both a clearer design for concurrency and avoid whatever memory management issues you are now seeing.
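As a rough sketch of what that could look like for your case (the queue name is illustrative, workQueue is an assumed property on your class, and this assumes ARC with a deployment target where dispatch objects are managed by ARC, i.e. iOS 6 / OS X 10.8 or later):

// Created once, e.g. in init, and kept around:
// self.workQueue = dispatch_queue_create("com.example.work", DISPATCH_QUEUE_SERIAL);

- (void)threadCallerEvent {
    // A serial queue runs one block at a time, so a second call can never
    // overlap with work that is still in progress; it simply waits its turn.
    dispatch_async(self.workQueue, ^{
        // some code... (the body of your old ExecuteThread)
    });
}

If you would rather drop the extra calls entirely instead of queueing them, keep a flag like your bInsideDifferentThread: set it before dispatching, clear it at the end of the block, and return early when it is already set.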
I've not found any decent documentation that explains the threading process for NSStream. To be specific, let's go for NSInputStream. Threading in Objective-C to me is currently a mystery simply because it appears to be so simple.
My question primarily refers to this line:
[inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
You can specify the run loop that the input stream will run in, which I thought was quite cool. The thing is, if I want the input and output streams to run in their own threads, and both are instantiated in a single class, say Connection, then how do you get them to run in their own threads?
The reason I ask is because of delegates. Previously we would've done [inputStream setDelegate:self], which means we have to implement stream:handleEvent: to handle incoming/outgoing data.
So ultimately my question is, if you have one class which sets up the input and output stream, how do you both thread each stream and delegate responsibility for handling stream events to the current class?
Here's some code to chomp on:
[inputStream setDelegate:self];
[outputStream setDelegate:self];
[inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[outputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[inputStream open];
[outputStream open];
I'm thinking the following:
1. You can't delegate responsibility for both threads to the current class; you'd have to delegate to separate objects.
2. One thread would do for both streams? (I don't personally think so, because input and output will run concurrently.)
3. I'm thinking this through wrong, and you can create a separate run loop and call scheduleInRunLoop:forMode: against some separate thread's run loop?
Any ideas?
Note the differences between heap and stack.
Each thread has its own stack but all threads access the same heap.
The input stream needs its own thread because a read that has not yet reached EOF blocks while it waits for new bytes to arrive. To keep that wait from blocking your application, give the input stream a separate thread.
Also, since delegate callbacks are instance methods rather than static functions, you have to copy, or at least synchronise access to, any buffer you use to hand results back. Remember: each thread has its own stack, but all threads access the same heap.
NSStreamDelegate is a protocol that lets you specify which object will handle events from a stream, which in turn lets you separate the code that sets up streams from the code that handles their events. You can think of delegates as pointers to functions, so you have to make sure the delegate object exists at runtime, which is why delegates are usually declared and used via protocols. But calling a method on a delegate does not mean you are calling into another thread; you are simply sending a message to another object that must exist at runtime.
Apple's NSThread and NSRunLoop classes make this easy, but also confusing, because a run loop is not the same thing as a thread.
Each thread can have a run loop, which runs at least once and returns immediately when there is nothing left to do. With [NSRunLoop currentRunLoop] you are asking for the run loop of the thread you are currently on; you are not creating another thread.
So asking for the current run loop twice from the same thread gives you the same run loop, and everything scheduled on it runs on that one thread. If one part blocks, it blocks the thread, and everything else scheduled on that run loop has to wait as well.
When input and output streams are supposed to work at the 'same' time, more than one socket port is usually involved. NSInputStream is read-only and NSOutputStream is write-only.
It is wise to give the input stream its own thread, because the data and its timing come from a remote sender that is out of your control. You are responsible for deciding whether a run loop (and therefore its thread) should stay active after being run once.
If you do that, the output stream remains on the other thread, the one whose current run loop you asked for.
You cannot create or manage run loops directly; you just ask for one, because each thread has one, and if it doesn't yet, one is created for you.
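If you do give a stream its own thread, that thread's run loop has to be kept running, or the scheduled stream will never be serviced. A minimal sketch of the usual pattern (the method name and the shouldStop flag are illustrative, not a specific API):

- (void)streamThreadMain {
    @autoreleasepool {
        // Add a dummy port so the run loop has at least one input source
        // and runMode:beforeDate: does not return immediately.
        [[NSRunLoop currentRunLoop] addPort:[NSMachPort port] forMode:NSDefaultRunLoopMode];
        while (!self.shouldStop) {
            [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode
                                     beforeDate:[NSDate dateWithTimeIntervalSinceNow:1.0]];
        }
    }
}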
On iOS you have several options for implementing your own design for simultaneous input and output streams.
You could go with NSThread, or with -performSelectorInBackground:withObject:, which is actually an extension of NSObject declared in NSThread.h. The latter, however, does not let you specify a particular run loop mode.
If you do not want to create a custom NSThread subclass for your needs, here is a simple solution mentioned elsewhere that may also work for you:
iOS how can i perform multiple NSInputStream
Here [NSRunLoop currentRunLoop] is the run loop of the thread that is detached. It is also possible to detach a new thread with a block:
id<NSStreamDelegate> streamDelegate = ...; // object that conforms to the protocol
[NSThread detachNewThreadWithBlock:^(void){
    NSInputStream *inputStream = ...;      // define your stream here
    [inputStream setDelegate:streamDelegate];
    [inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop]
                           forMode:NSRunLoopCommonModes];
    [inputStream open];
    // Keep this thread's run loop alive so the scheduled stream's
    // delegate events are actually delivered.
    [[NSRunLoop currentRunLoop] run];
}];
PS: the example from @Ping is a switch that reacts to events of both streams; by itself it does not make streaming in and out simultaneous. You can still use it for both streams and their events whether they are simultaneous or not; it is typical NSStreamDelegate stuff.
- (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode
{
    switch (eventCode) {
        case NSStreamEventNone:
            break;
        case NSStreamEventOpenCompleted:
            break;
        case NSStreamEventHasBytesAvailable:
            [self _readData];
            break;
        case NSStreamEventHasSpaceAvailable:
            [self _writeData];
            break;
        case NSStreamEventErrorOccurred:
            break;
        case NSStreamEventEndEncountered:
            break;
        default:
            break;
    }
}
After using Instruments, I have found a spot in my code that is long-running and blocks my UI: lots of Core Data fetches (it's part of a process that ingests a large JSON packet and builds up managed objects while making sure that objects have not been duplicated).
While my intentions are to break up this request into smaller pieces and process them serially, that only means I'll be spreading out those fetches - I anticipate the effect will be small bursts of jerkiness in the app instead of one long hiccup.
Everything I've read both in Apple's docs and online at various blog posts indicates that Core Data and concurrency is akin to poking a beehive. So, timidly I've sat down to give it the ol' college try. Below is what I've come up with, and I would appreciate someone wiser pointing out any errors I'm sure I've written.
The code posted below works. What I have read has me intimidated that I've surely done something wrong; I feel like I've pulled the pin out of a grenade and am just waiting for it to go off unexpectedly!
__block NSArray *containers = nil; // shared between the operation and its completion block

NSBlockOperation *downloadAllObjectContainers = [NSBlockOperation blockOperationWithBlock:^{
    containers = [webServiceAPI findAllObjectContainers];
}];

[downloadAllObjectContainers setCompletionBlock:^{
    NSManagedObjectContext *backgroundContext = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
    [backgroundContext setPersistentStoreCoordinator:[_managedObjectContext persistentStoreCoordinator]];

    [[NSNotificationCenter defaultCenter] addObserverForName:NSManagedObjectContextDidSaveNotification
                                                      object:backgroundContext
                                                       queue:[NSOperationQueue mainQueue]
                                                  usingBlock:^(NSNotification *note) {
                                                      [_managedObjectContext mergeChangesFromContextDidSaveNotification:note];
                                                  }];

    Builder *builder = [[Builder alloc] init];
    [builder setManagedObjectContext:backgroundContext];

    for (ObjectContainer *objCont in containers) { // This is the long-running piece; it's roughly O(N^2), yuck!
        [builder buildCoreDataObjectsFromContainer:objCont];
    }

    NSError *backgroundContextSaveError = nil;
    if ([backgroundContext hasChanges]) {
        [backgroundContext save:&backgroundContextSaveError];
    }
}];

NSOperationQueue *background = [[NSOperationQueue alloc] init];
[background addOperation:downloadAllObjectContainers];
Since you are using NSPrivateQueueConcurrencyType you must be targeting iOS 5, so you do not have to go through all the trouble of creating a context on a background thread and merging it into the main thread's context.
All you need to do is create a managed object context with concurrency type NSPrivateQueueConcurrencyType on the main thread and do all operations on managed objects inside a block passed to NSManagedObjectContext's performBlock: method.
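Roughly, that could look like this (a sketch, not tested; webServiceAPI, Builder, and _managedObjectContext are taken from your question, everything else is illustrative):

NSManagedObjectContext *backgroundContext =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
[backgroundContext setPersistentStoreCoordinator:[_managedObjectContext persistentStoreCoordinator]];

[backgroundContext performBlock:^{
    // Everything in this block runs on the context's own private queue.
    NSArray *containers = [webServiceAPI findAllObjectContainers];
    Builder *builder = [[Builder alloc] init];
    [builder setManagedObjectContext:backgroundContext];
    for (ObjectContainer *objCont in containers) {
        [builder buildCoreDataObjectsFromContainer:objCont];
    }
    NSError *error = nil;
    if ([backgroundContext hasChanges] && ![backgroundContext save:&error]) {
        NSLog(@"Background save failed: %@", error);
    }
}];

// Merging the saved changes into the main context still works the same way
// as the did-save notification observer in your question.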
I recommend you take a look at WWDC2011 session 303 - What's New in Core Data on iOS.
Also, take a look at Core Data Release Notes for iOS5.
Here's a quote from the release notes:
NSManagedObjectContext now provides structured support for concurrent operations. When you create a managed object context using initWithConcurrencyType:, you have three options for its thread (queue) association
Confinement (NSConfinementConcurrencyType).
This is the default. You promise that context will not be used by any thread other than the one on which you created it. (This is exactly the same threading requirement that you've used in previous releases.)
Private queue (NSPrivateQueueConcurrencyType).
The context creates and manages a private queue. Instead of you creating and managing a thread or queue with which a context is associated, here the context owns the queue and manages all the details for you (provided that you use the block-based methods as described below).
Main queue (NSMainQueueConcurrencyType).
The context is associated with the main queue, and as such is tied into the application’s event loop, but it is otherwise similar to a private queue-based context. You use this queue type for contexts linked to controllers and UI objects that are required to be used only on the main thread.
Concurrency
Concurrency is the ability to work with the data on more than one queue at the same time. If you choose to use concurrency with Core Data, you also need to consider the application environment. For the most part, AppKit and UIKit are not thread-safe; in OS X in particular, Cocoa bindings and controllers are not thread-safe.
See also: "Core Data, Multithreading, and the Main Thread".
I'm writing a multi threaded application for iOS.
I am new to Objective-C, so I haven't played around with threads in iPhone before.
Normally when using Java, I create a thread and pass "self" as an object to it, so that from the thread I can call back into the main thread.
How is this done in Objective-C?
How can I call the main thread from a secondary thread? I have been trying with NSNotificationCenter, but I get a SIGABRT error :/
This is how the thread is started:
NSArray *extraParams = [NSArray arrayWithObjects:savedUserName, serverInfo, nil]; // parameters to pass to the thread's target
NSThread *myThread = [[NSThread alloc] initWithTarget:statusGetter                 // new thread targeting statusGetter
                                             selector:@selector(getStatusFromServer:) // method to run on statusGetter
                                               object:extraParams];                // parameters passed as an NSArray
[myThread start]; // thread started
activityContainerView.hidden = NO;
[activityIndicator startAnimating];
Any help would be great!
You accomplish this by adding a message to the main thread's run loop.
Foundation provides some conveniences for this, notably -[NSObject performSelectorOnMainThread:withObject:waitUntilDone:] and NSInvocation.
Using the former, you can simply write something like:
[self performSelectorOnMainThread:@selector(updateUI) withObject:nil waitUntilDone:NO];
Also note that a notification is delivered on the thread it is posted from (often a secondary thread), not automatically on the main thread.
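If you do want to stick with notifications, one option (a sketch; the notification name is made up) is to register a block observer on the main queue so your UI work always lands on the main thread, no matter which thread posts the notification:

[[NSNotificationCenter defaultCenter] addObserverForName:@"StatusDidUpdate"
                                                  object:nil
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
    // Safe: this block is executed on the main thread.
    [activityIndicator stopAnimating];
}];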
You can either use performSelectorOnMainThread:withObject:waitUntilDone: or, if you're targeting iOS 4 and later, Grand Central Dispatch, which doesn't require you to implement a method just to synchronize with the main thread:
dispatch_async(dispatch_get_main_queue(), ^ {
// Do stuff on the main thread here...
});
This often makes your code easier to read.
Though this is not a direct answer to your question I would highly recommend you take a look at Grand Central Dispatch. It generally gives better performance than trying to use threads directly.
As Justin pointed out, you can always perform a method on the main thread by calling performSelectorOnMainThread:withObject:waitUntilDone:, if you really need to.
I'm working with some code that does a bunch of asynchronous operations with various callbacks; Snow Leopard has made this incredibly easy with blocks and GCD.
I'm calling NSTask from an NSBlockOperation like so:
[self.queue addOperationWithBlock:^{
    NSTask *task = [NSTask new];
    NSPipe *newPipe = [NSPipe new];
    NSFileHandle *readHandle = [newPipe fileHandleForReading];
    NSData *inData = nil;

    [task setStandardOutput:newPipe]; // connect the pipe so readHandle sees the task's output
    [task setLaunchPath:path];
    [task setArguments:arguments];
    [task launch];

    while ((inData = [readHandle availableData]) && [inData length]) {
        [[NSOperationQueue mainQueue] addOperationWithBlock:^{
            // callback
        }];
    }
    [task waitUntilExit];
}];
This approach works perfectly. It's like magic, as long as my callbacks handle the concurrency correctly.
Now, I want to be able to coalesce some of these calls; this is inside a model object's "refresh" method and may take a long time to complete. Having the user pound on the refresh button shouldn't tie up the machine and all that.
I can see an implementation dilemma here. I could make a whole bunch of queues, one per call type, set each queue's maxConcurrentOperationCount to 1, and then call -cancelAllOperations whenever it's time for a new call.
Alternately, I could do some more manual bookkeeping on which calls are currently happening and manage a single queue per model object (as I'm doing) or I could go even further and use a global queue.
How heavy is NSOperationQueue? Is creating a lot of queues a bad architecture decision? Is there a better way to coalesce these tasks?
If you're concerned about performance, don't guess: measure and then fix any bottlenecks you find. Adding queues is simple; try it and see what Instruments tells you about the effect on performance.
The main reason for creating multiple queues is in case you have some reason for wanting to start and stop them. If you just want to get the benefits of libdispatch, you can get that by just adding operations to the main queue.
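For example, a small sketch of the start/stop point: a queue you keep around can be suspended and resumed.

NSOperationQueue *refreshQueue = [[NSOperationQueue alloc] init];
[refreshQueue setSuspended:YES];   // operations added now wait in the queue
// ... later ...
[refreshQueue setSuspended:NO];    // queued operations begin executing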
You can add multiple blocks to an NSBlockOperation, which will be executed concurrently and can be canceled by canceling the containing operation. As long as your individual tasks don't have to be serialized, this may work.
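A hedged sketch of that idea (self.queue is the operation queue from the question; the work blocks are placeholders):

NSBlockOperation *refreshOp = [NSBlockOperation blockOperationWithBlock:^{
    // first unit of work
}];
[refreshOp addExecutionBlock:^{
    // second unit of work; may run concurrently with the first
}];
[self.queue addOperation:refreshOp];

// If the user hits refresh again before this one finishes:
[refreshOp cancel];

// Note: cancellation is cooperative, so long-running blocks should check
// the operation's isCancelled state periodically.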
Just use as many operation queues as you like. They are here to separate logical parts of your program. I don't think you should be too concerned about the performance as long as you aren't allocating hundreds of queues per second.