Sleep or semaphore for background thread - objective-c

I have third party code which creates a lot of threads with code like this:
while (true) {
    {
        my::Lock lock(&mMutex); // mutex implementation in C++
        if (!reseting) {
            // some code
            break;
        }
    }
    usleep(1000 / 20); // 20 times per second
}
I can rewrite this code with a semaphore. Which should I use, a semaphore or sleep? As I understand it, they work much the same way, but a semaphore should be slightly faster because the thread can continue immediately when reseting changes.
Or maybe you have another idea of how to do this better?
Implementation of my::Lock:
Lock::Lock(pthread_mutex_t *mutex) {
    _mutex = mutex;
    pthread_mutex_lock(_mutex);
}

Lock::~Lock() {
    pthread_mutex_unlock(_mutex);
}

You are correct that polling is an inefficient approach. This mutex lock implementation only makes it worse.
You ask whether a semaphore might be a better pattern: it probably is, but I suspect you can do even better. Specifically, three asynchronous patterns come to mind:
The "completion handler" pattern, where the API call takes a block parameter which is a block of code that will be called when the asynchronous task is complete. This is ideal when you need a simple interface for informing the caller of the completion of the asynchronous task.
The "delegate-protocol" pattern, where the API would have a delegate property to specify who to inform and a protocol that defines what methods the delegate may or must implement. This is useful pattern where the interface for communicating updates is more complicated (e.g. not simply when the task is complete, but perhaps various progress updates, too).
The "notification" pattern (using NSNotificationCenter to inform other object(s) of status changes). This is useful if it's possible that more than one object may want to be informed of the completion of the task.
Frankly, the choice may be dictated by the details of this third-party library. It's hard to assess on the basis of the information provided.
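For illustration of the completion-handler pattern, here is a minimal sketch, assuming a hypothetical ResetManager object that owns the reset work (the class and method names are made up, not part of the third-party library):

#import <Foundation/Foundation.h>

@interface ResetManager : NSObject
- (void)resetWithCompletion:(void (^)(NSError *error))completion;
@end

@implementation ResetManager

- (void)resetWithCompletion:(void (^)(NSError *error))completion {
    // Do the reset work off the main thread; nobody has to poll a flag.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSError *error = nil;
        // ... perform the reset here, setting error on failure ...

        // Tell the caller the moment the work is done.
        dispatch_async(dispatch_get_main_queue(), ^{
            if (completion) {
                completion(error);
            }
        });
    });
}

@end

The waiting code then becomes a single call such as [manager resetWithCompletion:^(NSError *error) { /* continue here */ }]; instead of a polling loop.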

does boost::asio co_spawn create an actual thread?

When looking through the Boost.Asio co_spawn documentation (https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/reference/co_spawn/overload6.html), I see the statement "Spawn a new coroutined-based thread of execution". However, my understanding is that co_spawn does not create an actual thread, but instead uses the threads that run the boost::asio::io_context. It is a "coroutine-based thread of execution" in the sense that this coroutine is the root of all coroutines spawned from inside it.
Is my understanding correct, or is an actual thread created whenever co_spawn is used like this:
::boost::asio::co_spawn(io_ctx, [&]() -> ::boost::asio::awaitable<void> {
    // do something
}, ::boost::asio::detached);
thanks!
It does not. See The Proactor Design Pattern: Concurrency Without Threads and https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/overview/core/threads.html
What does detached mean/do? The documentation says:
The detached_t class is used to indicate that an asynchronous operation is detached. That is, there is no completion handler waiting for the operation's result.
It comes down to the same thing as writing a no-op completion handler, but with (a) less work and (b) more room for the library to optimize.
Another angle to look at this from: if the execution context behind the executor (io_ctx) is never run or polled, nothing will ever happen. As always with Boost.Asio, you decide where the service runs (e.g. whether and how many threads you use).

How does KVC deal with speed and errors?

I've been reading about KVC and Cocoa Scripting, and how properties can be used for this. I have a model class in mind, but the element/property data has to be obtained from the Internet. But the design of properties and KVC looks like it assumes fast & in-memory retrieval, while network calls can be slow and/or error-prone. How can these be reconciled?
For speed, do we just say "screw it" and post a waiting icon? (Of course, we should keep things multi-threaded so the UI doesn't stop while we wait.)
If a property is supposed to always be available, we could set it to nil when the resource call fails, but then we would have no way to get the specifics of the error. Worse is a property that supports "missing values": nil would already represent that, and we would have no spare state left over for errors.
Although Apple events support error handling, I couldn't use it, because between my potentially error-generating model calls and the Apple event, the KVC layer would drop the error to the floor (of oblivion). The Scripting Bridge API's designers saw this problem and added a secret protocol to handle errors.
Am I wrong? Is there a way to handle errors with KVC-based designs?
Addendum
I forgot to mention exceptions. Objective-C now supports them, but the little I read about them implies that they're meant for catastrophic "instead of straight crashing" use, not for regular error handling like in C++. Except for that, they could've been useful here....
I think I understand what you're asking now. I would say that using KVC (or property getters) is not a good way to accomplish what you're trying to do. If the code gets called on the main thread, you will block that thread, which you don't want to do. As you have discovered, you'll also have a hard time returning other state information, such as errors.
Instead, you should use block syntax to create an asynchronous method that operates on a background queue. Here is a basic template for what this might look like:
// called from the main thread
- (void)fetchDataInBackgroundWithCompletionHandler:(void (^)(id responseData, NSError *error))handler
{
    // perform the work in the background
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // perform your background operation here, producing a result or an error
        id responseData = nil;  // filled in by your fetch
        NSError *error = nil;   // set if the fetch fails

        // call the completion block on the main thread
        dispatch_async(dispatch_get_main_queue(), ^{
            if (error) // whatever your error case is
            {
                handler(nil, error);
            }
            else // success
            {
                handler(responseData, nil);
            }
        });
    });
}
This also gives you the benefit of being able to pass in as many other parameters as you want, as well as return as many values as you want in the completion block.
Some very good examples of this pattern can be seen in AFNetworking, one of the more popular networking libraries written for iOS. All of the calls in the library can be made from the main queue and return on the main queue asynchronously while performing all networking in the background.
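For illustration, a caller on the main thread might use the method above like this (the model property and log message are made up for the example):

[self fetchDataInBackgroundWithCompletionHandler:^(id responseData, NSError *error) {
    // Already back on the main thread here, so UI updates are safe.
    if (error) {
        NSLog(@"fetch failed: %@", error);
    } else {
        self.model = responseData; // hypothetical property on this class
    }
}];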

Is it worth it to refactor common HTTP request code, performance-wise?

I've been developing an iPhone app that was handed over to me by an experienced developer. I'm just an apprentice programmer and still struggling with practical Objective-C/iOS application development (I have learned Java and PHP on my own, but Objective-C is nothing like them to me).
Our app is just another "web-centric" app (I don't even know if that word is appropriate) that relies heavily on server-side operations, making frequent HTTP POST requests (for things such as tracking user locations, sending messages to other users, etc.).
When I was assigned to this app, I saw that every single HTTP request was written out inside each method. Each request was made by dispatching onto another thread, and the handling of each response was written inline accordingly.
E.g.
- (void)methodA {
    // Making HTTP request headers...
    // Dispatch another thread
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_async(queue, ^{
        // Send synchronous request and handle the response...
    });
}

- (void)methodB {
    // Making HTTP request headers...
    // Dispatch another thread
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_async(queue, ^{
        // Send synchronous request and handle the response...
    });
}
Code like the above appears everywhere the app needs to send a request to the server.
I'm wondering why he didn't create a class that handles HTTP requests.
In Java, you could create a class that makes a synchronous request to the server:
public class ClassHttpRequest {
    public int makePost() {
        // Send synchronous request and return result...
    }
}
Then make an instance of this class and execute its instance method (in this case, makePost) inside a thread:
public void methodA() {
    Thread t = new Thread(new Runnable() {
        public void run() {
            ClassHttpRequest requestHandler = new ClassHttpRequest();
            if (success == requestHandler.makePost()) {
                // Handle response...
            }
        }
    });
    t.start();
}
Is there any performance penalty or other issue in creating a class and letting it handle frequent HTTP requests in Objective-C? Or is it simply not "recommended" or something? I have heard that in Objective-C it is not common to use try/catch for exception handling, because it consumes a lot of resources. I have read several iOS and Objective-C books (and googled), but this kind of "practical" answer for real application development is hard to find, and most of the time it's rather confusing to a beginner like me.
I should ask him why he didn't create such a class, but he's away now and I can't get in touch with him. Also, I believe the professionals here on Stack Overflow can provide much more accurate and concise answers than my predecessor. (I have asked several questions and already got what I wanted to know.)
Thanks in advance.
Normal rules of object-oriented design apply: if it makes sense to represent an HTTP request as a tangible object - in particular, if there's a bunch of boilerplate code that's necessary and would otherwise be copy-pasted - then it's probably a good idea to use a class. Otherwise, there's no need. Though in this specific case, is there a reason you're not just using the standard, asynchronous system APIs - NSURLRequest, NSURLConnection, NSURLDownload, etc.?
@try/@catch are by definition used for exception handling, and should be used as necessary. If you skimp on them your code may fail in unnecessarily interesting ways (e.g. leaving locks dangling) or to an unnecessary degree (e.g. crashing completely instead of simply failing a specific operation). What you shouldn't do is use them for flow control - unlike other languages, Objective-C exceptions are for programmer errors, "impossible" conditions, and other such events. Unfortunately a lot of existing Objective-C code is not exception-safe, so while you should utilise them you shouldn't rely on them.
They're not particularly expensive in any of the runtimes you're likely to use these days - the @try itself is very cheap, almost free. Only if an exception is thrown is there any significant work done, and since you should only be seeing exceptions in very bad situations - i.e. not frequently - the performance cost is irrelevant.
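To make the "leaving locks dangling" point concrete, here is a minimal sketch of using @try/@finally so a lock is always released even when an exception is raised; the lock, array, and index are made up for the illustration:

NSArray *items = @[@"a", @"b"];
NSUInteger index = 5; // deliberately out of bounds to show the failure mode
NSRecursiveLock *tableLock = [[NSRecursiveLock alloc] init];

[tableLock lock];
@try {
    // This raises NSRangeException, which is a programmer error.
    NSLog(@"%@", [items objectAtIndex:index]);
}
@finally {
    // Runs whether or not an exception was raised, so the lock is never left dangling.
    [tableLock unlock];
}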
Refactoring the code is a question of balance. The current code is verbose and a bit repetitive, but refactoring it into a separate class will introduce a new indirection, an intermediate API. It's probably worth it if the new API has decent semantics, for example if you can create a SomeNetworkService class with methods like postStatus, listItems and such. The interface should be asynchronous, something like this:
typedef void (^StatusCompletionBlock)(BOOL success, NSError *error);
- (void) postStatus: (NSString*) status withCompletion: (StatusCompletionBlock) completion;
This should make the code more readable, more DRY and even more testable, since you can replace the whole SomeNetworkService object with a stub. So it would certainly be worth it.
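For illustration, here is a minimal sketch of how postStatus:withCompletion: might look inside the SomeNetworkService implementation, wrapping the existing dispatch-based code; the endpoint URL and the request details are assumptions made up for the example:

- (void) postStatus: (NSString*) status withCompletion: (StatusCompletionBlock) completion
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Build the request (URL and body are placeholders for the example).
        NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:
            [NSURL URLWithString:@"https://example.com/status"]];
        request.HTTPMethod = @"POST";
        request.HTTPBody = [status dataUsingEncoding:NSUTF8StringEncoding];

        // Send it synchronously; we are already on a background queue.
        NSError *error = nil;
        NSURLResponse *response = nil;
        NSData *data = [NSURLConnection sendSynchronousRequest:request
                                             returningResponse:&response
                                                         error:&error];

        // Report the result back on the main queue.
        dispatch_async(dispatch_get_main_queue(), ^{
            if (completion) {
                completion(data != nil, error);
            }
        });
    });
}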
The performance hit of sending one extra message is not worth mentioning. Generally speaking, people worry about performance too much. If you can sacrifice performance for better readability, 99 times out of 100 it’s worth it.

Locking an object from being accessed by multiple threads - Objective-C

I have a question regarding thread safety in Objective-C. I've read a couple of other answers, some of the Apple documentation, and still have some doubts regarding this, so thought I'd ask my own question.
My question is three fold:
Suppose I have an array, NSMutableArray *myAwesomeArray;
Fold 1:
Now correct me if I'm mistaken, but from what I understand, using @synchronized(myAwesomeArray) {...} will prevent two threads from accessing the same block of code. So, basically, if I have something like:
- (void)doSomething {
    @synchronized(myAwesomeArray) {
        // some read/write operation on myAwesomeArray
    }
}
then, if two threads access the same method at the same time, that block of code will be thread safe. I'm guessing I've understood this part properly.
Fold 2:
What do I do if myAwesomeArray is being accessed by multiple threads from different methods?
If I have something like:
- (void)readFromArrayAccessedByThreadOne {
    // thread 1 reads from myAwesomeArray
}

- (void)writeToArrayAccessedByThreadTwo {
    // thread 2 writes to myAwesomeArray
}
Now, both the methods are accessed by two different threads at the same time. How do I ensure that myAwesomeArray won't have problems? Do I use something like NSLock or NSRecursiveLock?
Fold 3:
Now, in the above two cases, myAwesomeArray was an ivar in memory. What if I have a database file that I don't always keep in memory? I create a DatabaseManager instance whenever I want to perform database operations, and release it once I'm done. Thus, basically, different classes can access the database. Each class creates its own instance of DatabaseManager, but they are all using the same, single database file. How do I ensure that data is not corrupted due to race conditions in such a situation?
This will help me clear out some of my fundamentals.
Fold 1
Generally your understanding of what @synchronized does is correct. However, technically, it doesn't make any code "thread-safe". It prevents different threads from acquiring the same lock at the same time, but you need to ensure that you always use the same synchronization token when performing critical sections. If you don't, you can still find yourself in a situation where two threads perform critical sections at the same time. Check the docs.
Fold 2
Most people would probably advise you to use NSRecursiveLock. If I were you, I'd use GCD. Here is a great document showing how to migrate from thread programming to GCD programming; I think this approach to the problem is a lot better than one based on NSLock. In a nutshell, you create a serial queue and dispatch your tasks onto that queue. This way you ensure that your critical sections are handled serially, so only one critical section is performed at any given time.
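For example, a minimal sketch of the serial-queue approach, assuming the queue and the array are ivars of the same object (the names are made up for the example):

// Created once, e.g. in -init:
//   arrayQueue = dispatch_queue_create("com.example.myAwesomeArray", DISPATCH_QUEUE_SERIAL);
//   myAwesomeArray = [NSMutableArray array];

- (void)readFromArrayAccessedByThreadOne {
    // dispatch_sync, because the caller wants the value before it continues
    __block id lastItem = nil;
    dispatch_sync(arrayQueue, ^{
        lastItem = [myAwesomeArray lastObject];
    });
    NSLog(@"read %@", lastItem);
}

- (void)writeToArrayAccessedByThreadTwo {
    // dispatch_async: the write is queued and performed serially with all other accesses
    dispatch_async(arrayQueue, ^{
        [myAwesomeArray addObject:@"newObject"];
    });
}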
Fold 3
This is the same as Fold 2, only more specific. A database is a resource; in many ways it's the same as the array or any other shared thing. If you want to see the GCD-based approach in a database programming context, take a look at the FMDB implementation. It does exactly what I described in Fold 2.
As a side note to Fold 3, I don't think that instantiating a DatabaseManager each time you want to use the database and then releasing it is the correct approach. I think you should create one single database connection and retain it throughout your application session. This way it's easier to manage. Again, FMDB is a great example of how this can be achieved.
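A minimal sketch of that idea, assuming a hypothetical DatabaseManager that owns one connection and one serial queue (the names are made up; FMDB's FMDatabaseQueue does essentially this for you):

#import <Foundation/Foundation.h>

@interface DatabaseManager : NSObject
+ (instancetype)sharedManager;
- (void)performDatabaseBlock:(void (^)(void))block;
@end

@implementation DatabaseManager {
    dispatch_queue_t _databaseQueue; // one serial queue guards the one connection
}

+ (instancetype)sharedManager {
    static DatabaseManager *shared = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        shared = [[DatabaseManager alloc] init];
    });
    return shared;
}

- (instancetype)init {
    if ((self = [super init])) {
        _databaseQueue = dispatch_queue_create("com.example.database", DISPATCH_QUEUE_SERIAL);
        // ... open the single database file here and keep the connection for the whole session ...
    }
    return self;
}

- (void)performDatabaseBlock:(void (^)(void))block {
    // Every caller's database work funnels through the same serial queue.
    dispatch_sync(_databaseQueue, block);
}

@end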
Edit
If you don't want to use GCD then yes, you will need some kind of locking mechanism, and yes, NSRecursiveLock will prevent deadlocks if you use recursion in your methods, so it's a good choice (it is what @synchronized uses). However, there may be one catch. If it's possible that many threads will wait for the same resource and the order in which they get access matters, then NSRecursiveLock is not enough. You can still manage this situation with NSCondition, but trust me, you will save a lot of time by using GCD in this case. If the order of the threads is not relevant, you are safe with locks.
As shown in WWDC 2016 Session 720, Concurrent Programming With GCD in Swift 3, you should use a queue:
class MyObject {
    private var internalState: Int = 0
    private let internalQueue = DispatchQueue(label: "internalQueue")

    var state: Int {
        get {
            return internalQueue.sync { internalState }
        }
        set(newValue) {
            internalQueue.sync { internalState = newValue }
        }
    }
}
Subclass NSMutableArray to provide locking for the accessor (read and write) methods. Something like:
@interface MySafeMutableArray : NSMutableArray {
    NSRecursiveLock *lock; // create this in -init
}
@end

@implementation MySafeMutableArray

- (void)addObject:(id)obj {
    [lock lock];
    [super addObject:obj];
    [lock unlock];
}

// ...

@end
This approach encapsulates the locking as part of the array. Users don't need to change their calls (but may need to be aware that they could block/wait for access if the access is time critical). A significant advantage to this approach is that if you decide that you prefer not to use locks you can re-implement MySafeMutableArray to use dispatch queues - or whatever is best for your specific problem. For example, you could implement addObject as:
- (void)addObject:(id)obj {
    // 'queue' here would be a serial dispatch queue owned by the array subclass
    dispatch_sync(self.queue, ^{ [super addObject:obj]; });
}
Note: if using locks, you'll surely need NSRecursiveLock, not NSLock, because you don't know whether the Objective-C implementations of addObject, etc., are themselves recursive.

Should my block based methods return on the main thread or not when creating an iOS cloud integration framework?

I am in the middle of creating a cloud integration framework for iOS. We allow you to save, query, count and remove, with both synchronous and asynchronous (selector/callback and block-based) implementations. What is the correct practice: running the completion blocks on the main thread or on a background thread?
For simple cases, I just parameterize it and do all the work I can on secondary threads:
By default, callbacks will be made on any thread (wherever it is most efficient and direct, typically as soon as the operation has completed). This is the default because messaging via the main thread can be quite costly.
The client may optionally specify that the callback must be made on the main thread. This way, it requires only one extra line or argument (a rough sketch of this follows below). If safety is more important than efficiency, then you may want to invert the default.
You could also attempt to batch and coalesce some messages, or simply use a timer on the main run loop to vend the results.
Consider both joined and detached models for some of your work.
If you can reduce the task to a result (remove the capability for incremental updates, if not needed), then you can simply run the task, do the work, and provide the result (or error) when complete.
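As a rough sketch of that parameterization (the method name, parameter names, and save operation are assumptions, not part of any particular framework):

// Callers pass nil to be called back on whatever thread finished the work,
// or dispatch_get_main_queue() if they need the main thread.
- (void)saveObject:(id)object
     callbackQueue:(dispatch_queue_t)queue
        completion:(void (^)(BOOL success, NSError *error))completion
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSError *error = nil;
        BOOL success = YES; // ... perform the actual save of 'object' here ...

        void (^callback)(void) = ^{ completion(success, error); };
        if (queue) {
            dispatch_async(queue, callback);
        } else {
            callback(); // most efficient: deliver right where the work finished
        }
    });
}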
Apple's NSURLConnection class calls back to its delegate methods on the thread from which it was initiated, while doing its work on a background thread. That seems like a sensible procedure. It's likely that a user of your framework will not enjoy having to worry about thread safety when writing a simple callback block, as they would if you created a new thread to run it on.
The two sides of the coin: If the callback touches the GUI, it has to be run on the main thread. On the other hand, if it doesn't, and is going to do a lot of work, running it on the main thread will block the GUI, causing frustration for the end user.
It's probably best to put the callback on a known, documented thread, and let the app programmer make the determination of the effect on the GUI.