How to free memory after using NSJSONSerialization? - objective-c

I'm reading a large JSON file saved on my iPhone using this code:
dispatch_async(kBgQueue, ^{
    // do something that takes a long time with receivedData here
    NSError *error;
    NSString *filePath = [self getFilePathWithName:LOCAL_JSON];
    NSData *jsonData = [NSData dataWithContentsOfFile:filePath];
    NSDictionary *dict = (NSDictionary *)[NSJSONSerialization JSONObjectWithData:jsonData options:kNilOptions error:&error];
    dispatch_async(dispatch_get_main_queue(), ^{
        HideHUD;
        DLog(@"Reading done");
        ShowMemoryUsageWithTitle(@"after reading json");
        return;
    });
    dict = nil;
});
After I'm done with reading, memory usage has increased by about 30 MB, so total memory usage is, for example, 60 MB. When I read again it goes to 90 MB, and after that 120 MB.
So my conclusion is that the memory is not being released; every time I read, memory usage increases. I'm using ARC.
How can I free the memory after I'm done reading the JSON file?
UPDATE 1:
I tried putting the "hard" work inside @autoreleasepool {}, but there was no change in memory usage.
UPDATE 2:
I tried without dispatch_async.
UPDATE 3:
I've tried using a third-party library such as https://github.com/johnezang/JSONKit, but I get the same problem. When I'm testing, I'm trying only this one thing, nothing else.
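For clarity, this is roughly how I placed the autorelease pool in UPDATE 1 (a sketch; getFilePathWithName: and the HUD/logging macros are my own helpers):
dispatch_async(kBgQueue, ^{
    @autoreleasepool {
        NSError *error;
        NSString *filePath = [self getFilePathWithName:LOCAL_JSON];
        NSData *jsonData = [NSData dataWithContentsOfFile:filePath];
        NSDictionary *dict = (NSDictionary *)[NSJSONSerialization JSONObjectWithData:jsonData options:kNilOptions error:&error];
        // ... work with dict here ...
        dict = nil;
    } // everything parsed inside the pool should be released here
    dispatch_async(dispatch_get_main_queue(), ^{
        HideHUD;
        DLog(@"Reading done");
        ShowMemoryUsageWithTitle(@"after reading json");
    });
});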

Related

Coredata: Performance issue when saving a lot of data

I want to fill my app's database with data from a CSV file (50'000 entries) during initial startup. However, I run into performance issues: it is way too slow right now and takes about 2-3 minutes in the simulator. My context hierarchy is as follows:
parentContext (separate thread)
context (main thread)
importContext (separate thread)
The code:
for (NSString *row in rows) {
    columns = [row componentsSeparatedByString:@";"];
    // Step 1
    Airport *newAirport = [Airport addAirportWithIcaoCode:[columns[0] stringByTrimmingCharactersInSet:
                                                              [NSCharacterSet whitespaceCharacterSet]]
                                                     name:columns[1]
                                                 latitude:[columns[2] doubleValue]
                                                longitude:[columns[3] doubleValue]
                                                elevation:[columns[4] doubleValue]
                                                continent:columns[5]
                                                  country:columns[6]
                                              andIataCode:columns[7]
                                   inManagedObjectContext:self.cdh.importContext];
    rowCounter++;
    // Increment progress
    dispatch_async(dispatch_get_main_queue(), ^{
        [progress setProgress:rowCounter/totalRows animated:YES];
        [progress setNeedsDisplay];
    });
    // STEP 2: Save new objects to the parent context (context).
    //if (fmodf(rowCounter, batchSize) == 0) {
        NSError *error;
        if (![self.cdh.importContext save:&error]) {
            NSLog(@"error saving %@", error);
        }
    //}
    // STEP 3: Turn objects into faults to save memory
    [self.cdh.importContext refreshObject:newAirport mergeChanges:NO];
}
If I activate the modulo check with a batch size of 2000 in step 2, then of course a save only happens every 2000th row, and performance is fast. But as written above it is super slow, and I wonder why? My import context is on a separate thread and it still lags very much...
Normally, it is enough to issue a single save after processing all your entries; you don't need to save this often. Just do it after the for loop. I see that you try to save memory by turning the objects into faults each time, which might require a save (I have not tried this before).
I suggest you use @autoreleasepool instead and let the system decide where to save memory:
for (NSString *row in rows) {
    @autoreleasepool {
        // Just STEP 1 here
        ...
    }
}
NSError *error;
if (![self.cdh.importContext save:&error]) {
    NSLog(@"error saving %@", error);
}
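If memory is still a concern with very large imports, a batched variant can combine both ideas: save and reset the context every batchSize rows inside the pool. This is only a sketch reusing your own names (batchSize, the Airport helper, and self.cdh.importContext are assumed to exist as in your code):
NSUInteger savedRows = 0;
for (NSString *row in rows) {
    @autoreleasepool {
        // STEP 1: parse the row and create the Airport object exactly as in your code
        ...
        savedRows++;
        // Save and reset every batchSize rows so in-memory objects don't accumulate
        if (savedRows % batchSize == 0) {
            NSError *error = nil;
            if (![self.cdh.importContext save:&error]) {
                NSLog(@"error saving %@", error);
            }
            [self.cdh.importContext reset]; // drops all managed objects held by the context
        }
    }
}
// Final save for the last partial batch
NSError *error = nil;
if (![self.cdh.importContext save:&error]) {
    NSLog(@"error saving %@", error);
}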

Can objective c calls be optimized using stack variables?

Am I missing something, or do I really need to create three NS objects on the heap like this just to call sendData? Are these even created on the heap? Is there any way to create them on the stack instead? This seems inefficient!
NSData *data = [NSData dataWithBytes:packet->data length:packet->dataLength];
if (!data)
    return -5;
NSString *player = [[NSString alloc] initWithCString:(char *)peer->data encoding:NSASCIIStringEncoding];
if (!player)
    return -6;
NSArray *to = [NSArray arrayWithObject:player];
if (!to)
    return -7;
NSError *error;
BOOL success = [[GCHelper sharedInstance].match sendData:data toPlayers:to withDataMode:GKMatchSendDataReliable error:&error];
if (!success) {
    printf("Error sending packet %08x %d\n", packet->data, packet->dataLength);
    return -8;
}
Can I do something like this instead?
NSData data;
[data dataWithBytes:packet->data length:packet->dataLength];
NSString player;
[player initWithCString:(char*)peer->data encoding:NSASCIIStringEncoding];
NSArray to;
[to arrayWithObject:player];
Sorry for my ignorance; I am well versed in C++ but new to Objective-C.
An alternative for you if performance is an issue. As you rightly point out in response to my comment on the question, you unfortunately cannot add your own method that takes your data as C pointers, since you are calling into a framework. However, you can do a similar thing one level up: you can create the NSData and NSString without copying your data itself to the heap:
NSData *data = [NSData dataWithBytesNoCopy:packet->data
                                    length:packet->dataLength
                              freeWhenDone:NO];
if (!data)
    return -5;
NSString *player = [[NSString alloc] initWithBytesNoCopy:peer->data
                                                  length:strlen((char *)peer->data)
                                                encoding:NSASCIIStringEncoding
                                            freeWhenDone:NO];
if (!player)
    return -6;
NSError *error;
BOOL success = [[GCHelper sharedInstance].match sendData:data
                                               toPlayers:@[player] // array literal
                                            withDataMode:GKMatchSendDataReliable
                                                   error:&error];
if (!success)
{
    printf("Error sending packet %08x %d\n", packet->data, packet->dataLength);
    return -8;
}
This still wraps your data in heap objects, but now both the NSData and NSString objects reference your data directly instead of copying it. You must make sure your data stays alive for as long as it is needed, of course!
Note: if you are getting into Objective-C and will need this functionality often, you can wrap the above code up as a category on GKMatch - that is left as an exercise :-)
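For illustration only, such a category might look roughly like this (a sketch under my own naming; sendBytes:length:toPlayer:error: and the RawSend category name are made up, and error handling is left to the caller):
// GKMatch+RawSend.h (hypothetical category)
#import <GameKit/GameKit.h>

@interface GKMatch (RawSend)
// Sends raw bytes reliably to a single player without copying the payload.
- (BOOL)sendBytes:(const void *)bytes
           length:(NSUInteger)length
         toPlayer:(NSString *)playerID
            error:(NSError **)error;
@end

// GKMatch+RawSend.m
@implementation GKMatch (RawSend)
- (BOOL)sendBytes:(const void *)bytes
           length:(NSUInteger)length
         toPlayer:(NSString *)playerID
            error:(NSError **)error
{
    // Wrap the caller's buffer without copying; the caller must keep it alive
    // until the send has completed.
    NSData *data = [NSData dataWithBytesNoCopy:(void *)bytes
                                        length:length
                                  freeWhenDone:NO];
    return [self sendData:data
                toPlayers:@[playerID]
             withDataMode:GKMatchSendDataReliable
                    error:error];
}
@end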
Can I do something like this instead?
No. Besides the fact that -init and friends do no actual initialization (take a look at NSObject.mm; all it does is return self;), you're just messaging nil with those calls. +alloc exists solely to provide an implementation-independent allocator function, one that happens to allocate objects on the heap. If you are worried about the performance of Objective-C itself, then you don't have to use it: you can drop back to C and C++ at any time and return to the land of stack-allocated variables and complex pointer arithmetic that you know and love. Objective-C is still a performant language, despite its "inefficiencies."
Remember, though: while C and C++ were designed for embedded systems and mission-critical applications where memory and processor efficiency are king, Objective-C is designed to run on fairly consistent, performant, and (relatively) memory-unconstrained hardware.

Issue with ELC Image Picker when too many images are selected iPhone

I am using ELC Image Picker in my project, and I am running into an issue: when I select around 20 images the picker works fine, but when I select around 32 images (selected-images count) my app crashes before the controller is even dismissed, and I get the error:
Program received signal: “0”. Data Formatters temporarily
unavailable, will re-try after a 'continue'. (Unknown error loading
shared library "/Developer/usr/lib/libXcodeDebuggerSupport.dylib")
And also I am getting:
Received memory warning. Level=1
NOTE: what happens is that the first time I select 32 images it works fine, but when I select the same number of images again it crashes.
I've also tried the example project from the ELCImagePickerController GitHub repository.
Can anyone give me an answer on how to overcome this?
From the error you can see that it is a memory issue.
So you have two options:
set a limit on the number of images that can be chosen
in the background, save the images to a temp folder
OR
customize the ELC picker code so that when a person selects an image, it keeps only the image path (asset URL) rather than the image content,
and when they are done, run a loop to load those images into your app (see the sketch below).
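A rough sketch of that deferred-loading idea, assuming you collected the asset reference URLs into an array while picking (selectedAssetURLs and saveImageToTempFolder: below are hypothetical names):
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
for (NSURL *assetURL in self.selectedAssetURLs) {
    [library assetForURL:assetURL
             resultBlock:^(ALAsset *asset) {
                 @autoreleasepool {
                     ALAssetRepresentation *rep = [asset defaultRepresentation];
                     UIImage *image = [UIImage imageWithCGImage:[rep fullScreenImage]];
                     // Hand the image off (e.g. write it to a temp file) and let it go
                     // as soon as this block finishes, so the full-size images are not
                     // all held in memory at once.
                     [self saveImageToTempFolder:image]; // hypothetical helper
                 }
             }
            failureBlock:^(NSError *error) {
                NSLog(@"Error loading asset: %@", error);
            }];
}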
@SteveGear, the following code will solve your problem. Just provide the UIImagePickerControllerReferenceURL and you will get the NSData. It has been a long time, but it may still help others.
ALAssetsLibrary *assetLibrary = [[ALAssetsLibrary alloc] init];
NSURL *assetURL = [infoObject objectForKey:UIImagePickerControllerReferenceURL];
__block NSData *assetData;
[assetLibrary assetForURL:assetURL resultBlock:^(ALAsset *asset) // substitute assetURL with your url
 {
     ALAssetRepresentation *rep = [asset defaultRepresentation];
     Byte *buffer = (Byte *)malloc((long)rep.size);
     NSUInteger buffered = [rep getBytes:buffer fromOffset:0 length:(NSUInteger)rep.size error:nil];
     assetData = [NSData dataWithBytesNoCopy:buffer length:buffered freeWhenDone:YES]; // this is the NSData you need
     //[assetData writeToFile:filePath atomically:YES]; // Uncomment this if you want to store the data as a file.
 }
            failureBlock:^(NSError *err) {
                NSLog(@"Error: %@", [err localizedDescription]);
            }];
Here assetData is what you need.

NSFileHandle & Writing Asynch to a file in iOS

I have a situation where I receive byte data through a web service request and want to write it to a file on my iOS device. I used to append all the data (until the end of the data) to an in-memory variable and, at the end, write it to a file on my iOS device using NSStream, in the method:
- (void)stream:(NSStream *)theStream handleEvent:(NSStreamEvent)streamEvent
This works fine for small amounts of data, but the problem is that data received via the web service can be a big chunk (a couple of MBs), and I don't want to collect it all in memory before writing it to the file. To make this efficient, I think I have to switch to NSFileHandle and write the data to the same file in several small chunks. Now my question is: what is the best approach to do this? I mean, how can I write to the file in the BACKGROUND using NSFileHandle? I use code like this:
- (void)setUpAsynchronousContentSave:(NSData *)data {
    NSString *newFilePath = [NSHomeDirectory() stringByAppendingPathComponent:@"/Documents/MyFile.xml"];
    if (![[NSFileManager defaultManager] fileExistsAtPath:newFilePath]) {
        [[NSFileManager defaultManager] createFileAtPath:newFilePath contents:nil attributes:nil];
    }
    if (!fileHandle_writer) {
        fileHandle_writer = [NSFileHandle fileHandleForWritingAtPath:newFilePath];
    }
    [fileHandle_writer seekToEndOfFile];
    [fileHandle_writer writeData:data];
}
But when passing 1-2 MB of data to the above method, do I need to make it run in the background? FYI, I'm currently writing on the main thread.
Maybe you can try Grand Central Dispatch.
I spent some time trying it; below is my way of doing it.
According to Apple's documentation, if our program needs to execute only one task at a time, we should create a serial dispatch queue. So, first declare a queue as an ivar:
dispatch_queue_t queue;
Create the serial dispatch queue in init or viewDidLoad using:
if (!queue)
{
    queue = dispatch_queue_create("yourOwnQueueName", NULL);
}
When data arrives, call your method:
- (void)setUpAsynchronousContentSave:(NSData *)data {
    NSString *newFilePath = [NSHomeDirectory() stringByAppendingPathComponent:@"/Documents/MyFile.xml"];
    NSFileManager *fileManager = [[NSFileManager alloc] init];
    if (![fileManager fileExistsAtPath:newFilePath]) {
        [fileManager createFileAtPath:newFilePath contents:nil attributes:nil];
    }
    if (!fileHandle_writer) {
        self.fileHandle_writer = [NSFileHandle fileHandleForWritingAtPath:newFilePath];
    }
    dispatch_async(queue, ^{
        // execute asynchronously
        [fileHandle_writer seekToEndOfFile];
        [fileHandle_writer writeData:data];
    });
}
Finally, we need to release the queue in viewDidUnload or dealloc:
if (queue)
{
    dispatch_release(queue);
}
I combined this code with ASIHTTPRequest, and it works.
Hope it helps.

Constantly growing memory allocation while fetching images over HTTP in iOS

I am implementing an iOS app that needs to fetch a huge number of images over HTTP. I've tried several approaches, but regardless of what I do, Instruments shows constantly increasing memory allocations and the app crashes sooner or later when I run it on a device. No leaks are shown by Instruments.
So far I have tried the following approaches:
Fetch the images using a synchronous NSURLConnection within an NSOperation
Fetch the images using an asynchronous NSURLConnection within an NSOperation
Fetch the images using [NSData dataWithContentsOfURL:url] on the main thread
Fetch the images using a synchronous ASIHTTPRequest within an NSOperation
Fetch the images using an asynchronous ASIHTTPRequest and adding it to an NSOperationQueue
Fetch the images using an asynchronous ASIHTTPRequest and using a completionBlock
The Call Tree in Instruments shows that the memory is consumed while processing the HTTP response. In the case of the asynchronous NSURLConnection this is in:
- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
    [receivedData appendData:data];
}
In case of the synchronous NSURLConnection, Instruments shows a growing CFData (store) entry.
The problem with ASIHTTPRequest seems to be the same as with the asynchronous NSURLConnection, at an analogous code position. The [NSData dataWithContentsOfURL:url] approach shows an increasing amount of total memory allocation at exactly that statement.
I am using an NSAutoreleasePool when the request is done on a separate thread, and I have tried to free up memory with [[NSURLCache sharedURLCache] removeAllCachedResponses] - no success.
Any ideas/hints to solve the problem? Thanks.
Edit:
The behaviour only shows up if I persist the images using Core Data. Here is the code I run as an NSInvocationOperation:
- (void)_fetchAndSave:(NSString *)imageId {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSString *url = [NSString stringWithFormat:@"%@%@", kImageUrl, imageId];
    HTTPResponse *response = [SimpleHTTPClient GET:url headerOrNil:nil];
    NSData *data = [response payload];
    if (data && [data length] > 0) {
        UIImage *thumbnailImage = [UIImage imageWithData:data];
        NSData *thumbnailData = UIImageJPEGRepresentation([thumbnailImage scaleToSize:CGSizeMake(55, 53)], 0.5); // UIImagePNGRepresentation(thumbnail);
        [self performSelectorOnMainThread:@selector(_save:) withObject:[NSArray arrayWithObjects:imageId, data, thumbnailData, nil] waitUntilDone:NO];
    }
    [pool release];
}
All Core Data related work is done on the main thread here, so there should not be any Core Data multithreading issue. However, if I persist the images, Instruments shows constantly increasing memory allocations at the positions described above.
Edit II:
CoreData related code:
- (void)_save:(NSArray *)args {
    NSString *imageId = [args objectAtIndex:0];
    NSData *data = [args objectAtIndex:1];
    NSData *thumbnailData = [args objectAtIndex:2];
    Image *image = (Image *)[[CoreDataHelper sharedSingleton] createObject:@"Image"];
    image.timestamp = [NSNumber numberWithDouble:[[NSDate date] timeIntervalSince1970]];
    image.data = data;
    Thumbnail *thumbnail = (Thumbnail *)[[CoreDataHelper sharedSingleton] createObject:@"Thumbnail"];
    thumbnail.data = thumbnailData;
    thumbnail.timestamp = image.timestamp;
    [[CoreDataHelper sharedSingleton] save];
}
From CoreDataHelper (self.managedObjectContext picks the NSManagedObjectContext usable on the current thread):
- (NSManagedObject *)createObject:(NSString *)entityName {
    return [NSEntityDescription insertNewObjectForEntityForName:entityName inManagedObjectContext:self.managedObjectContext];
}
We had a similar problem. While fetching lots of images over HTTP, there was huge growth and a sawtooth pattern in the memory allocation. We'd see the system clean up, more or less, as it went, but slowly and not predictably. Meanwhile the downloads were streaming in, piling up on whatever was holding onto the memory. Memory allocation would crest around 200 MB and then we'd die.
The problem was an NSURLCache issue. You stated that you tried [[NSURLCache sharedURLCache] removeAllCachedResponses]. We tried that, too, but then tried something a little different.
Our downloads are done in groups of N images/movies, where N was typically 50 to 500. It was important that we get all of N as an atomic operation.
Before we started our group of http downloads, we did this:
NSURLCache *sharedCache = [[NSURLCache alloc] initWithMemoryCapacity:0 diskCapacity:0 diskPath:nil];
[NSURLCache setSharedURLCache:sharedCache];
We then fetch each image in N over HTTP with a synchronous call. We do this group download in an NSOperation, so we're not blocking the UI.
NSData *movieReferenceData = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];
Finally, after each individual image download, and after we're done with our NSData object for that image, we call:
[sharedCache removeAllCachedResponses];
Our memory allocation peak behavior dropped to a very comfortable handful of megabytes, and stopped growing.
In this case, you're seeing exactly what you're supposed to see. -[NSMutableData appendData:] increases the size of its internal buffer to hold the new data. Since an NSMutableData is always located in memory, this causes a corresponding increase in memory usage. What were you expecting?
If the ultimate destination for these images is on disk, try using an NSOutputStream instead of NSMutableData. If you then want to display the image, you can create a UIImage pointing to the file when you're done.
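A minimal sketch of that idea for the asynchronous NSURLConnection case (the file path, the fileStream/imagePath properties, and the simplified write call are assumptions for illustration; a real implementation should check the return value of write:maxLength:):
// Assumed properties: @property NSOutputStream *fileStream; @property NSString *imagePath;
- (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response {
    // Open a stream that writes to a file instead of accumulating an NSMutableData.
    self.imagePath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"image.tmp"];
    self.fileStream = [NSOutputStream outputStreamToFileAtPath:self.imagePath append:NO];
    [self.fileStream open];
}

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
    // Write each chunk straight to disk; nothing piles up in memory.
    [self.fileStream write:(const uint8_t *)[data bytes] maxLength:[data length]];
}

- (void)connectionDidFinishLoading:(NSURLConnection *)connection {
    [self.fileStream close];
    // Load the image lazily, only when it is actually needed for display.
    UIImage *image = [UIImage imageWithContentsOfFile:self.imagePath];
    // ... use or cache image ...
}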