Can I do anything during a callback? A basic Objective-C issue - objective-c

Sorry for the many posts here regarding this issue, but I am making progress.
I have a callback function, written in C, which is called whenever a new buffer arrives.
I was told here not to do ANYTHING in that callback: no malloc, nothing.
Now I want to send my new buffer to another class (which will create a circular buffer and save many buffers).
BUT here is the basic thing I don't get: if I call another function from the callback, it's the same as doing the work in the callback itself, because that function does the DSP and takes time, and it all runs serially.
Proof:
I am sending the data to another function in another class, and that's OK, but if I try to NSLog it in there, I AGAIN get these memory leaks.
here is the other class that i call from the callback :
- (id)init
{
    self = [super init];
    if (self)
    {
        data = malloc(sizeof(SInt16) * 4000);
    }
    return self;
}

- (void)sendNewBuffer:(SInt16 *)buffer
{
    data = buffer;
    NSLog(@"data arrived size is : %lu", sizeof(data));
    for (int i = 0; i < sizeof(data); i++)
    {
        NSLog(@"%d", data[i]);
    }
}
ONLY when I comment out the log does it work without memory leaks.
That means the callback is waiting for it!
How would I process that data somewhere else, in parallel?
I have been spending a week on this now.
Thanks.

One possibility for the memory leak when using Objective-C objects, such as the NSString in the NSLog, is that those objects may be autoreleased (or may internally use autoreleased objects).
Your callback may be called from a different thread. You can confirm this by putting a breakpoint in the callback and checking in the debugger whether you are on the main thread or a secondary thread.
Any secondary thread must have its own autorelease pool. The system creates one automatically for the main thread, but you must create one explicitly on any secondary thread you create.
Also, one reason for not allocating in a callback is performance. A callback often needs to be kept to a minimum to avoid blocking the thread that calls it.
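If the callback really is arriving on a pool-less secondary thread, wrapping its body in an autorelease pool is the minimal fix. A sketch, assuming a hypothetical callback signature and receiver class:
static void MyBufferCallback(void *userData, SInt16 *buffer) // hypothetical signature
{
    @autoreleasepool {
        // Autoreleased objects created below (for example by NSLog's
        // format-string machinery) are released when the pool exits,
        // instead of accumulating on a thread that has no pool.
        MyReceiver *receiver = (MyReceiver *)userData; // hypothetical class
        [receiver sendNewBuffer:buffer];
    }
}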

I would suggest you read a C tutorial. There are at least two problems with your code which we can't really help you with:
data = buffer;: this leaks the previous value of data. You need to copy into data (memcpy) or release the old memory first (free) and then keep the pointer... unless the buffer goes out of scope after the callback, in which case your only option is to copy.
sizeof(data): this can't work. data is a pointer; it doesn't know the amount of data being pointed at.
The second means that you can't correctly implement the callback, at least not without further information. (Either the buffer carries some indication of the volume of data, or it's a constant size; the sketch below assumes the former.)
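A corrected sketch of the receiving method, assuming the callback can pass the sample count along (the count: parameter is an assumption; the real API must supply the length somehow):
- (void)sendNewBuffer:(SInt16 *)buffer count:(size_t)count
{
    // Copy into the storage malloc'd in -init instead of overwriting
    // the pointer (which leaked the old allocation and kept a pointer
    // the caller may invalidate).
    // Assumes count <= 4000, the capacity allocated in -init.
    memcpy(data, buffer, count * sizeof(SInt16));
    NSLog(@"data arrived, %zu samples", count);
    for (size_t i = 0; i < count; i++)
    {
        NSLog(@"%d", data[i]);
    }
}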

If I had to guess (and I suppose I do), the callback is probably called in an interrupt context, hence malloc etc. would possibly be fatal.
What I would do is copy (i.e. memcpy) the data to a buffer, and schedule/signal the handling code to run later (e.g. using condition variables, a runloop source, etc.).
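One way to sketch that (the names and the fixed capacity are assumptions; the circular buffer you planned is the robust version, since a single scratch buffer can be overwritten while it is still being processed):
static SInt16 scratch[4000];            // assumed capacity
static size_t scratchCount;
static dispatch_semaphore_t dataReady;  // created once: dispatch_semaphore_create(0)

// The callback does nothing but copy and signal; no allocation here.
static void MyBufferCallback(SInt16 *buffer, size_t count)
{
    memcpy(scratch, buffer, count * sizeof(SInt16));
    scratchCount = count;
    dispatch_semaphore_signal(dataReady);
}

// A worker thread blocks here and does the heavy DSP.
- (void)processLoop
{
    for (;;) {
        dispatch_semaphore_wait(dataReady, DISPATCH_TIME_FOREVER);
        @autoreleasepool {
            // ... DSP and logging on scratch[0..scratchCount) ...
        }
    }
}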

Related

Usage of autorelease pool in objectAtIndex swizzling for NSMutableArray

- (nullable id)myObjectAtIndex:(NSUInteger)index {
    @autoreleasepool {
        id value = nil;
        if (index < self.count)
        {
            value = [self myObjectAtIndex:index];
        }
        return value;
    }
}
I have no idea about the purpose of using autoreleasepool here. Can someone give me a hand?
Unless I'm missing the obvious, which is always possible, we can only guess:
There is a stack of autorelease pools, the top of the stack being the one in use. When the @autoreleasepool { ... } construct is entered, a new pool is created and pushed onto the stack; on exit from the construct the pool is drained and popped off the stack.
The reason to create local pools is given in the NSAutoreleasePool docs (emphasis added):
The Application Kit creates an autorelease pool on the main thread at the beginning of every cycle of the event loop, and drains it at the end, thereby releasing any autoreleased objects generated while processing an event. If you use the Application Kit, you therefore typically don’t have to create your own pools. If your application creates a lot of temporary autoreleased objects within the event loop, however, it may be beneficial to create “local” autorelease pools to help to minimize the peak memory footprint.
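As a concrete illustration of that advice, a minimal sketch (the per-iteration work is a stand-in; -processLine: is hypothetical):
for (NSUInteger i = 0; i < 1000000; i++) {
    @autoreleasepool {
        // The temporary autoreleased string is destroyed at the pool's
        // closing brace each iteration, instead of piling up until the
        // event loop's own pool drains.
        NSString *line = [NSString stringWithFormat:@"item %lu", (unsigned long)i];
        [self processLine:line]; // hypothetical consumer
    }
}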
So what is the purpose in the code you are looking at? Some guesses:
Either the original author knows/believes that the called methods count and objectAtIndex (post-swizzle) add a significant number of objects to the autorelease pool and wishes to clean these up; or
The original author was/is planning to add future code to myObjectAtIndex which will add a significant number of objects to the autorelease pool and wishes to clean these up; or
He wishes to be able to call objectAtIndex and ensure there is no impact on the memory used for live objects (e.g. maybe they were measuring memory use of something else); or
Who knows, except the original author (hopefully!)
HTH
There is no scientific reason.
The whole code is an example of "this app crashes and I do not know why" panic code:
Obviously the author had a problem with guaranteeing correct indexes, which would have been the correct approach. Therefore he wrote a special method to "repair" it. The naming ("my") shows that he thought: I can do it better (instead of ensuring correct indexes).
Moreover, adding an ARP to a piece of code that obviously does not create a bigger number of objects is a sure sign that he could no longer oversee his own code.
Move the whole code to /dev/null.

Enforcing one-at-a-time access to a pointer from a primitive wrapper

I've read a fair amount on thread-safety, and have been using GCD to keep math-heavy code off the main thread for a while now (I learned about it before NSOperation, and it still seems to be the easier option). However, I wonder if I could improve the part of my code that currently uses a lock.
I have an Objective-C++ class that is a wrapper for a C++ vector. (Reasons: primitive floats are added constantly without knowing a limit beforehand, the container must be contiguous, and the reason for using a vector vs NSMutableData is "just cause": it's what I settled on, and NSMutableData would still suffer from the same "expired" pointer when it resizes itself.)
The class has instance methods to add data points that are processed and added to the vector (vector.push_back). After new data is added I need to analyze it (in a different object). That processing happens on a background thread, and it uses a pointer directly into the vector. Currently the wrapper has a getter method that first locks the instance (it suspends a local serial queue used for the writes) and then returns the pointer. For those who don't know: this is done because when the vector runs out of space, push_back moves the vector in memory to make room for the new entries, invalidating the pointer that was handed out. Upon completion, the math-heavy code calls unlock on the wrapper, and the wrapper resumes the queue so the queued writes finish.
I don't see a way to pass the pointer along, for an unknown length of time, without using some type of lock or making a local copy, which would be prohibitively expensive.
Basically: is there a better way to pass a primitive pointer to a vector (or NSMutableData, for those who are getting hung up on the vector), such that while the pointer is in use any additions to the vector are queued, and when the consumer of the pointer is done the vector is automatically "unlocked" and the write queue is processed?
Current Implementation
Classes:
DataArray: a wrapper for a C++ vector
DataProcessor: Takes the most raw data and cleans it up before sending it to the 'DataArray'
DataAnalyzer: Takes the 'DataArray' pointer and does analysis on array
Worker: owns and initializes all 3; it also coordinates the actions (it does other things as well that are beyond the scope here). It is also a delegate of the processor and analyzer
What happens:
Worker is listening for new data from another class that handles external devices
When it receives an NSNotification with the data packet, it passes that on to DataProcessor via -(void)checkNewData:(NSArray*)data
DataProcessor, working on a background thread, cleans up the data (and keeps partial data), then tells DataArray to -(void)addRawData:(float)data (shown below)
DataArray then stores that data
When DataProcessor is done with the current chunk it tells Worker
When Worker is notified processing is done it tells DataAnalyzer to get started on the new data by -(void)analyzeAvailableData
DataAnalyzer does some prep work, including asking DataArray for the pointer by - (float*)dataPointer (shown below)
DataAnalyzer does a dispatch_async to a global thread and starts the heavy-lifting. It needs access to the dataPointer the entire time.
When done, it does a dispatch_async to the main thread to tell DataArray to unlock the array.
DataArray is accessed by other objects for read-only purposes as well, but those other reads are super quick.
Code snips from DataArray
-(void)addRawData:(float)data {
    // quick sanity check
    dispatch_async(addDataQueue, ^{
        rawVector.push_back(data);
    });
}

- (float *)dataPointer {
    [self lock];
    return &rawVector[0];
}

- (void)lock {
    if (!locked) {
        locked = YES;
        dispatch_suspend(addDataQueue);
    }
}

- (void)unlock {
    if (locked) {
        dispatch_resume(addDataQueue);
        locked = NO;
    }
}
Code snip from DataAnalyzer
-(void)analyzeAvailableData {
    // do some prep work
    const float *rawArray = [self.dataArray dataPointer];
    dispatch_async(global_queue, ^{
        // lots of analysis
        // done
        dispatch_async(main_queue, ^{
            // tell `Worker` analysis is done
            [self.dataArray unlock];
        });
    });
}
If you have a shared resource (your vector) which will be concurrently accessed through reads and writes from different tasks, you may associate a dedicated dispatch queue with this resource, on which these tasks will run exclusively.
That is, every access to this resource (read or write) will be executed exclusively on that dispatch queue. Let's name this queue "sync_queue".
This "sync_queue" may be a serial queue or a concurrent queue.
If it's a serial queue, it should be immediately obvious that all accesses are thread-safe.
If it's a concurrent queue, you can allow read accesses to happen simultaneously; that is, you simply call dispatch_async(sync_queue, block):
dispatch_async(sync_queue, ^{
    if (_shared_value == 0) {
        dispatch_async(otherQueue, block);
    }
});
If that read access "moves" the value to a call-site executing on a different execution context, you should use the synchronous version:
__block int x;
dispatch_sync(sync_queue, ^{
    x = _shared_value;
});
return x;
Any write access requires exclusive access to the resource. With a concurrent queue, you accomplish this by using a barrier:
dispatch_barrier_async(sync_queue, ^{
    _shared_value = 0;
    dispatch_async(mainQueue, ^{
        NSLog(@"value %d", _shared_value);
    });
});
It really depends on what you're doing; most of the time I drop back to the main queue (or a specifically designated queue) using dispatch_async() or dispatch_sync().
Async is obviously better, if you can do it.
It's going to depend on your specific use case, but there are times when dispatch_async/dispatch_sync is multiple orders of magnitude faster than creating a lock.
The entire point of Grand Central Dispatch (and NSOperationQueue) is to take away many of the bottlenecks found in traditional threaded programming, including locks.
Regarding your comment about NSOperation being harder to use... that's true; I don't use it very often either. But it does have useful features: for example, if you need to be able to terminate a task halfway through execution, or before it has even started executing, NSOperation is the way to go.
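A minimal sketch of that cancellation facility (the chunked work loop is a placeholder):
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
NSBlockOperation *op = [[NSBlockOperation alloc] init];
__weak NSBlockOperation *weakOp = op; // avoid retaining the operation in its own block
[op addExecutionBlock:^{
    for (int chunk = 0; chunk < 1000; chunk++) {
        if (weakOp.isCancelled) return; // cooperative early exit mid-task
        // ... process one chunk of the math-heavy work ...
    }
}];
[queue addOperation:op];
// Later, from anywhere:
[op cancel]; // an operation that hasn't started yet simply never runs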
There is a simple way to get what you need even without locking. The idea is that you have either shared, immutable data or exclusive, mutable data. The reason you don't need a lock for shared, immutable data is that it is read-only, so no race conditions can occur during writing.
All you need to do is to switch between both depending on what you currently need:
When you are adding samples to your storage, you need exclusive access to the data. If you already have a "working copy" of the data, you can just extend it as you need. If you only have a reference to the shared data, you create a working copy which you then keep for later exclusive access.
When you want to evaluate your samples, you need read-only access to the shared data. If you already have a shared copy, you just use that. If you only have an exclusive-access working copy, you convert that to a shared one.
Both of these operations are performed on demand. Assuming C++, you could use std::shared_ptr<vector const> for the shared, immutable data and std::unique_ptr<vector> for the exclusive-access, mutable data. For the older C++ standard those would be boost::shared_ptr<..> and std::auto_ptr<..> instead. Note the use of const in the shared version, and that you can convert from the exclusive version to the shared one easily, but the inverse is not possible: in order to get a mutable from an immutable vector, you have to copy.
Note that I'm assuming that copying the sample data is possible and doesn't explode the complexity of your algorithm. If that doesn't work, your approach with the scrap space that is used while the background operations are in progress is probably the best way to go. You can automate a few things using a dedicated structure that works similarly to a smart pointer, though.
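A sketch of that switch in Objective-C++ terms (the type alias and function names are mine; the switching itself must still happen on a single coordinating thread or queue):
#include <memory>
#include <vector>

using Samples = std::vector<float>;

static std::shared_ptr<const Samples> shared; // immutable view, safe to hand out
static std::unique_ptr<Samples> working;      // exclusive, mutable copy

// Writer side: lazily create the working copy, then extend it freely.
static void addSample(float s) {
    if (!working) {
        // Copy-on-write: clone the shared data before mutating.
        working.reset(new Samples(shared ? *shared : Samples()));
    }
    working->push_back(s);
}

// Reader side: publish the working copy as immutable and hand it out.
static std::shared_ptr<const Samples> snapshot() {
    if (working) {
        shared = std::shared_ptr<const Samples>(std::move(working));
    }
    return shared;
}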

Must I copy a block here?

I understand that you must copy blocks in order for them to stick around after a stack frame exits. But, how does that apply to stack-allocated blocks used within a nested block as in the following code example:
- (void)doSomethingFunkyThenCall:(void (^)(int someValue))callback
{
    [[NSOperationQueue currentQueue] addOperationWithBlock:^{
        // ... do some work here, potentially nesting into further blocks ...
        callback(result);
    }];
}
Obviously, the doSomethingFunkyThenCall: stack frame will terminate before the callback is executed, so it will have to be copied. But will this happen automatically in the call to addOperationWithBlock: or do I have to do it manually?
Most likely, it will happen automatically. Cocoa's design principles imply, in general, that you're not responsible for objects you haven't created (their memory management, passing blocks [which are, in fact, implemented as proper Objective-C objects], etc.). So you can just pass down the block you received as a parameter, and the runtime will manage it as per its needs.
Yes, you should do a callback = [[callback copy] autorelease]; at the top of this method.
Objects used in blocks are retained automatically, but sending a stack block retain actually does nothing (because the semantics of retain require it to return the receiver), so it will be gone once we leave the frame it was created in.
Sources:
http://cocoawithlove.com/2009/10/how-blocks-are-implemented-and.html
http://thirdcog.eu/pwcblocks/#objcblocks
EDIT: It turns out I'm wrong. @bbum points out below that Block_copy will copy recursively, and since addOperationWithBlock: copies its block, the callback is also copied.
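For reference, the defensive pattern from the retracted answer is only relevant under manual reference counting when calling an API that is not documented to copy its block argument (the C function here is hypothetical):
- (void)doSomethingFunkyThenCall:(void (^)(int someValue))callback
{
    // Promote the stack block to the heap before it can outlive this frame.
    callback = [[callback copy] autorelease];
    run_later(callback); // hypothetical API that merely retains its argument
}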

Dealing with infinite loops in Objective C

I've recently joined an iPad project. In looking through the code base I've come across some unusual things, one of which is this (I've meta-coded this):
while (something.isAlwaysTrue) {
    // Wait for something to happen, i.e. get data from an internet connection.
    // Respond to something.
}
I didn't notice any problems with this until I started memory-profiling the app and found that these loops leak memory massively. The reason is that because they never end, any autoreleased instances created inside them are never freed: the method never returns, so the autorelease pools never get a chance to free them up.
After talking it through with the developers who wrote the code I came up with the following technique:
-(void)queueTask {
    // Using GCD (or performSelector:withObject:afterDelay:), schedule the
    // next pass so control returns to the run loop between passes.
    dispatch_async(dispatch_get_main_queue(), ^{
        [self process];
    });
}
-(void)process {
    // Wait and/or do stuff.
    [self queueTask];
}
The basic idea is that by queuing each pass through GCD or the run loop, the autorelease pool gets a chance to execute and clean up autoreleased instances. This appears to work just fine.
My question is - is this the best way to go about dealing with these loops? or is there a better way to do it?
Two points:
Minimizing Heap Growth
Anyway, here's how to minimize memory growth:
while (something.isAlwaysTrue) {
    NSAutoreleasePool *pool = [NSAutoreleasePool new];
    // Wait for something to happen, i.e. get data from an internet connection.
    // Respond to something.
    [pool release], pool = nil;
}
or if you prefer the bleeding edge:
while (something.isAlwaysTrue) {
    @autoreleasepool {
        // Wait for something to happen, i.e. get data from an internet connection.
        // Respond to something.
    }
}
Autorelease pools operate like thread-local stacks: when you push a pool, autoreleased objects are added to the top pool of the current thread; when you pop the pool, it sends a release message for each recorded autorelease.
Using GCD as a substitute for an autorelease pool is odd; it's similar to using an NSArray of single-character NSStrings where you should simply use a single NSString.
Multithreaded Program Flow
The infinite loop is a very suspicious program design. It suggests you may be trying to reinvent run loops. The main run loop is of course common; a secondary thread 1) with a run loop 2) that never ends is unusual.
You should reconsider how the program flows. Typically you act on events, rather than holding your breath and polling until they complete. You may be trying to break away from that situation in the program you proposed, but I don't have enough detail.
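As an illustration of acting on events instead of polling, a sketch using a GCD dispatch source (the socket descriptor is an assumption):
// React to readable data on a socket instead of spinning in a loop.
dispatch_source_t source = dispatch_source_create(DISPATCH_SOURCE_TYPE_READ,
                                                  socketFD /* assumed descriptor */,
                                                  0,
                                                  dispatch_get_main_queue());
dispatch_source_set_event_handler(source, ^{
    // Runs only when data is actually available; the main run loop's
    // autorelease pool drains between events as usual.
    // ... read from socketFD and respond ...
});
dispatch_resume(source);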

Why is this Objective-C code allocating GBs of RAM, releasing it later, and not reporting any leaks?

I have inherited some code, and it looks like this:
- (bool)makeOneLevel:(int)nummines objects:(int)numobjects
{
    [field release];
    state = gameWait;
    field = [[MineField alloc] createLevel:nummines objects:numobjects];
    if ([field rating] == -1)
    {
        return false;
    }
    ...
There is always one MineField allocated. Whenever you make a new field, the first thing the function does is release the old one. If the function succeeds in making a MineField, then it returns true.
I also have this:
while (numsaved < self.makeNumber)
{
    while (![mineView makeOneLevel:self.makeNumMines objects:self.makeNumObjects])
    {
    }
    {
        // saving code here
    }
    numsaved++;
}
This calls the function until it creates a valid MineField. It all works, but it allocates GBs of RAM while doing it. Yet the Leaks tool finds no leaks, and when the outer while finishes and the OS gets control back, all that RAM is deallocated just fine.
Am I doing something wrong with the MineField allocation, or should I be looking elsewhere in the creation process?
Without knowing the internals it's impossible to say for sure, but the behavior you're describing sounds like -[MineView makeOneLevel:objects:] is internally allocating and autoreleasing objects. Since the AppKit default event loop creates and cleans up an autorelease pool for each event it processes, everything does end up going away eventually, but not until the event has finished processing (i.e., after your method exits).
The easiest solution will be to wrap your own autorelease pool around the while() loop and drain it either every time around the loop or periodically. If you aren't too scared of the internals of the method you're calling in the loop, though, you may be better off finding where it autoreleases objects and fixing it (by making it explicitly release objects when appropriate).
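A sketch of the wrapping approach, draining once per attempt (pre-ARC style, to match the question's code):
while (numsaved < self.makeNumber)
{
    bool made = false;
    while (!made)
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        made = [mineView makeOneLevel:self.makeNumMines objects:self.makeNumObjects];
        [pool drain]; // frees each failed attempt's autoreleased garbage immediately
    }
    // saving code here
    numsaved++;
}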
If you do not get any better answers, try using the heap profiler from Google perftools to track down where the huge allocations are happening.