I'm using CIDetector as follows multiple times:
-(NSArray *)detect:(UIImage *)inimage
{
    UIImage *inputimage = inimage;
    UIImageOrientation exifOrientation = inimage.imageOrientation;
    NSNumber *orientation = [NSNumber numberWithInt:exifOrientation];
    NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];
    CIImage *ciimage = [CIImage imageWithCGImage:inputimage.CGImage options:imageOptions];

    NSDictionary *detectorOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];
    NSArray *features = [self.detector featuresInImage:ciimage options:detectorOptions];
    if (features.count == 0)
    {
        PXLog(@"no face found");
    }

    ciimage = nil;
    NSMutableArray *returnArray = [NSMutableArray new];

    for (CIFaceFeature *feature in features)
    {
        CGRect rect = feature.bounds;
        CGRect r = CGRectMake(rect.origin.x, inputimage.size.height - rect.origin.y - rect.size.height, rect.size.width, rect.size.height);
        FaceFeatures *ff = [[FaceFeatures alloc] initWithLeftEye:CGPointMake(feature.leftEyePosition.x, inputimage.size.height - feature.leftEyePosition.y)
                                                        rightEye:CGPointMake(feature.rightEyePosition.x, inputimage.size.height - feature.rightEyePosition.y)
                                                           mouth:CGPointMake(feature.mouthPosition.x, inputimage.size.height - feature.mouthPosition.y)];
        Face *ob = [[Face alloc] initFaceInRect:r withFaceFeatures:ff];
        [returnArray addObject:ob];
    }

    features = nil;
    return returnArray;
}
-(CIContext *)context
{
    if (!_context) {
        _context = [CIContext contextWithOptions:nil];
    }
    return _context;
}

-(CIDetector *)detector
{
    if (!_detector)
    {
        // 1 for high 0 for low
#warning not checking for fast/slow detection operation
        NSString *str = @"fast"; //[SettingsFunctions retrieveFromUserDefaults:@"face_detection_accuracy"];
        if ([str isEqualToString:@"slow"])
        {
            //DDLogInfo(@"faceDetection: -I- Setting accuracy to high");
            _detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil
                                           options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];
        } else {
            //DDLogInfo(@"faceDetection: -I- Setting accuracy to low");
            _detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil
                                           options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyLow forKey:CIDetectorAccuracy]];
        }
    }
    return _detector;
}
After running this I hit various memory issues, and according to Instruments the array returned by NSArray *features = [self.detector featuresInImage:ciimage options:detectorOptions]; never seems to be released.
Is there a memory leak in my code?
I came across the same issue and it seems to be a bug (or maybe by design, for caching purposes) with reusing a CIDetector.
I was able to get around it by not reusing the CIDetector, instead instantiating one as needed and then releasing it (or, in ARC terms, just not keeping a reference around) when the detection is completed. There is some cost to doing this, but if you are doing the detection on a background thread as you said, that cost is probably worth it when compared to unbounded memory growth.
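To illustrate, here is a minimal sketch of that workaround against the code in the question (the facesFromFeatures:inImage: helper is hypothetical and just stands in for the loop that builds the Face/FaceFeatures objects):

-(NSArray *)detect:(UIImage *)inimage
{
    // Create the detector locally instead of caching it in a property;
    // under ARC it is released once it goes out of scope.
    NSDictionary *opts = [NSDictionary dictionaryWithObject:CIDetectorAccuracyLow
                                                     forKey:CIDetectorAccuracy];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:opts];

    CIImage *ciimage = [CIImage imageWithCGImage:inimage.CGImage options:nil];
    NSArray *features = [detector featuresInImage:ciimage options:nil];

    // ...build the return array exactly as in the question...
    return [self facesFromFeatures:features inImage:inimage]; // hypothetical helper
}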
Perhaps a better solution would be, if you are detecting multiple images in a row, to create one detector and use it for all of them (or, if the growth is too large, release it and create a new one every N images; you'll have to experiment to see what N should be).
I've filed a Radar bug about this issue with Apple: http://openradar.appspot.com/radar?id=6645353252126720
I have fixed this problem; you should wrap the call to the detect method in an autorelease pool, like this in Swift:
autoreleasepool(invoking: {
    let result = self.detect(image: image)
    // do other things
})
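The Objective-C equivalent at the call site would be along these lines (a sketch, assuming the detect: method from the question):

// Drain an autorelease pool around each detection call.
@autoreleasepool {
    NSArray *faces = [self detect:image];
    // do other things with faces
}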
Problem
I'm trying out the Metal Performance Shaders for the first time and ran into a runtime problem. The MTLTexture that MTKTextureLoader returns seems to be incompatible with Metal Performance Shaders' MPSImageFindKeypoints kernel.
The only hint I have found so far is @warrenm's MPS sample code, which specifies MTKTextureLoaderOptions just as I do. I did not find any other mentions in the docs.
Any help is highly appreciated.
Error
/BuildRoot/Library/Caches/com.apple.xbs/Sources/MetalImage/MetalImage-121.0.2/MPSImage/Filters/MPSKeypoint.mm:166: failed assertion `Source 0x282ce8fc0 texture type (80) is unsupported
where 0x282ce8fc0 is the MTLTexture from the texture loader.
As far as I can see there is no MTLTexture type 80; the enum only ranges up to 8 or so (and the 80 is not hex).
Code
CGFloat w = CGImageGetWidth(_image);
CGFloat h = CGImageGetHeight(_image);

id<MTLDevice> device = MTLCreateSystemDefaultDevice();
id<MTLCommandQueue> commandQueue = [device newCommandQueue];

NSDictionary *textureOptions = @{ MTKTextureLoaderOptionSRGB: @NO };
id<MTLTexture> texture = [[[MTKTextureLoader alloc] initWithDevice:device] newTextureWithCGImage:_image
                                                                                          options:textureOptions
                                                                                            error:nil];

id<MTLBuffer> keypointDataBuffer;
id<MTLBuffer> keypointCountBuffer;

MTLRegion region = MTLRegionMake2D(0, 0, w, h);
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];

MPSImageKeypointRangeInfo rangeInfo = { 100, 0.5 };
MPSImageFindKeypoints *imageFindKeypoints = [[MPSImageFindKeypoints alloc] initWithDevice:device
                                                                                     info:&rangeInfo];
[imageFindKeypoints encodeToCommandBuffer:commandBuffer
                            sourceTexture:texture
                                  regions:&region
                          numberOfRegions:1
                      keypointCountBuffer:keypointCountBuffer
                keypointCountBufferOffset:0
                       keypointDataBuffer:keypointDataBuffer
                 keypointDataBufferOffset:0];
[commandBuffer commit];

NSLog(@"%@", keypointCountBuffer);
NSLog(@"%@", keypointDataBuffer);
Edit
After converting my image to the correct pixel format I am now initialising the buffers like so:
id<MTLBuffer> keypointDataBuffer = [device newBufferWithLength:maxKeypoints*(sizeof(MPSImageKeypointData)) options:MTLResourceOptionCPUCacheModeDefault];
id<MTLBuffer> keypointCountBuffer = [device newBufferWithLength:sizeof(int) options:MTLResourceOptionCPUCacheModeDefault];
There is no error anymore, but how can I read the contents now?
((MPSImageKeypointData *)[keypointDataBuffer contents])[0].keypointCoordinate returns (0,0) for every index. I also don't know how to read the keypointCountBuffer: its contents, interpreted as an int, show a value higher than the defined maxKeypoints, and I don't see where the docs say what format the count buffer has.
Finally the code is running, and just for completeness' sake I thought I should post the whole code as an answer.
Code
// w and h are the source image dimensions, as in the question:
// CGFloat w = CGImageGetWidth(_lopoImage), h = CGImageGetHeight(_lopoImage);
id<MTLDevice> device = MTLCreateSystemDefaultDevice();
id<MTLCommandQueue> commandQueue = [device newCommandQueue];

// init textures
NSDictionary *textureOptions = @{ MTKTextureLoaderOptionSRGB: @NO };
id<MTLTexture> texture = [[[MTKTextureLoader alloc] initWithDevice:device] newTextureWithCGImage:_lopoImage
                                                                                          options:textureOptions
                                                                                            error:nil];
MTLTextureDescriptor *descriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatR8Unorm width:w height:h mipmapped:NO];
descriptor.usage = (MTLTextureUsageShaderRead | MTLTextureUsageShaderWrite);
id<MTLTexture> unormTexture = [device newTextureWithDescriptor:descriptor];

// init arrays and buffers for the keypoint finder
int maxKeypoints = w * h;
id<MTLBuffer> keypointDataBuffer = [device newBufferWithLength:sizeof(MPSImageKeypointData) * maxKeypoints options:MTLResourceOptionCPUCacheModeWriteCombined];
id<MTLBuffer> keypointCountBuffer = [device newBufferWithLength:sizeof(int) options:MTLResourceOptionCPUCacheModeWriteCombined];
MTLRegion region = MTLRegionMake2D(0, 0, w, h);

// init colorspace converter (sRGB -> linear gray)
CGColorSpaceRef srcColorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
CGColorSpaceRef dstColorSpace = CGColorSpaceCreateWithName(kCGColorSpaceLinearGray);
CGColorConversionInfoRef conversionInfo = CGColorConversionInfoCreate(srcColorSpace, dstColorSpace);
MPSImageConversion *conversion = [[MPSImageConversion alloc] initWithDevice:device
                                                                    srcAlpha:MPSAlphaTypeAlphaIsOne
                                                                   destAlpha:MPSAlphaTypeNonPremultiplied
                                                             backgroundColor:nil
                                                              conversionInfo:conversionInfo];

// init keypoint finder
MPSImageKeypointRangeInfo rangeInfo = { maxKeypoints, 0.75 };
MPSImageFindKeypoints *imageFindKeypoints = [[MPSImageFindKeypoints alloc] initWithDevice:device
                                                                                     info:&rangeInfo];

// encode command buffer
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
[conversion encodeToCommandBuffer:commandBuffer sourceTexture:texture destinationTexture:unormTexture];
[imageFindKeypoints encodeToCommandBuffer:commandBuffer
                            sourceTexture:unormTexture
                                  regions:&region
                          numberOfRegions:1
                      keypointCountBuffer:keypointCountBuffer
                keypointCountBufferOffset:0
                       keypointDataBuffer:keypointDataBuffer
                 keypointDataBufferOffset:0];

// run command buffer
[commandBuffer commit];
[commandBuffer waitUntilCompleted];

// read keypoints
int count = ((int *)[keypointCountBuffer contents])[0];
MPSImageKeypointData *keypointDataArray = (MPSImageKeypointData *)[keypointDataBuffer contents];
for (int i = 0; i < count; i++) {
    simd_ushort2 coordinate = keypointDataArray[i].keypointCoordinate;
    NSLog(@"color:%f | at:(%u,%u)", keypointDataArray[i].keypointColorValue, coordinate[0], coordinate[1]);
}
I guess there should be a more clever way to allocate the keypoint buffers with [device newBufferWithBytesNoCopy:...], so that you would not need to copy the contents back into your own arrays; I just couldn't figure out how to align the buffer correctly.
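As a rough, untested sketch of what I mean: newBufferWithBytesNoCopy:length:options:deallocator: requires a page-aligned pointer and a length that is a multiple of the page size, so the backing memory would have to be allocated accordingly, for example with posix_memalign:

// Untested sketch: a page-aligned, shared backing store so the CPU can read
// the keypoint data directly, without copying out of a separate MTLBuffer.
size_t pageSize = (size_t)getpagesize();                        // <unistd.h>
size_t dataLength = sizeof(MPSImageKeypointData) * maxKeypoints;
size_t alignedLength = ((dataLength + pageSize - 1) / pageSize) * pageSize; // round up to a page multiple
void *backing = NULL;
posix_memalign(&backing, pageSize, alignedLength);              // <stdlib.h>
id<MTLBuffer> sharedKeypointBuffer =
    [device newBufferWithBytesNoCopy:backing
                              length:alignedLength
                             options:MTLResourceStorageModeShared
                         deallocator:^(void *pointer, NSUInteger length) { free(pointer); }];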
Also, I should mention that you will usually already have a grayscale texture when doing this kind of feature detection, so the image conversion part will often not be necessary.
I am trying to create a non-Document-based application for Mac OS X that randomizes cards for the game of Dominion.
Of the things I have tried, the only one I cannot seem to do is limit the number of sets picked from a selection made by the user; apart from that the program worked pretty well, but now I am having issues.
I am trying to get the results to print in a custom view, but every time I look at the print preview nothing shows except the header text specified in an NSMutableString.
This piece of code is what is being used to print and is found in MasterViewController:
- (IBAction)print:(id)sender
{
    NSMutableString *content = [[NSMutableString alloc] initWithString:@"Cards\r\n\r\n"];
    for (int i = 0; i < [supply.game count]; i++)
    {
        [content appendFormat:@"Card: %@ Set: %@ Cost: %d\r\n", [supply.game[i] name], [supply.game[i] collection], [supply.game[i] cost]];
    }
    [content appendFormat:@"\r\n\r\nRequired\r\n\r\n"];
    for (int i = 0; i < [[setup supply] count]; i++)
    {
        NSDictionary *current = [setup supply][i];
        NSString *key = [current allKeys][0]; // get the key of the current dictionary; index must be 0, as there is only one key
        int value = [[current valueForKey:key] integerValue]; // variable to hold key value
        if (value > 0) {
            [content appendFormat:@"%@: %@", key, @"Yes"];
        }
        else
        {
            [content appendFormat:@"%@: %@", key, @"No"];
        }
    }
    printView.content = [NSMutableString stringWithString:content];
    [printView print:sender];
}
The data initially gets filled into some table views, which display the correct content, and supply.game is the exact array that contains the cards used for games.
setup is a property that refers to a view controller that populates a table with the kinds of cards that may be required for games (e.g. shelters, colonies, ruins, spoils, and potions), and its supply method is supposed to return the array that view controller creates; that array is not empty, since its table populates properly.
printView is a property assigned to a custom view found in MainMenu.xib; it is the actual view being printed from.
The PrintView class looks like this:
header:
#import <Cocoa/Cocoa.h>

@interface PrintView : NSView
{
    NSMutableString *content;
}

@property NSMutableString *content;

- (void)drawStringInRect:(NSRect)rect; // method to draw string to page
- (void)print:(id)sender;              // method to print

@end
implementation:
#import "PrintView.h"
#implementation PrintView
#synthesize content;
- (BOOL)acceptsFirstResponder
{
return YES;
}
- (void)print:(id)sender
{
[[NSPrintOperation printOperationWithView:self] runOperation];
}
- (void)drawRect:(NSRect)dirtyRect {
NSGraphicsContext *context = [NSGraphicsContext currentContext];
if ([context isDrawingToScreen])
{
}
else
{
[[NSColor whiteColor] set];
NSRect bounds = [self bounds];
if (content == nil || [content length] == 0)
{
NSRectFill(bounds);
}
else
{
[self drawStringInRect:bounds];
}
}
}
- (void)drawStringInRect:(NSRect)rect
{
NSSize strSize; // variable to hold string size
NSPoint strOrigin; // variable used to position text
NSMutableDictionary *attributes = [[NSMutableDictionary alloc] init];
[attributes setObject:[NSFont fontWithName:#"Helvetica" size:12] forKey:NSFontAttributeName];
[attributes setObject:[NSColor blackColor] forKey:NSForegroundColorAttributeName];
strSize = [content sizeWithAttributes:attributes];
strOrigin.x = rect.origin.x + (rect.size.width - strSize.width)/2;
strOrigin.y = rect.origin.y + (rect.size.height - strSize.height)/2;
[content drawAtPoint:strOrigin withAttributes:attributes];
}
#end
When I check the array sizes during the print operation, both arrays report a size of zero, which is what causes my current problem.
If you need more code, it is on GitHub, but I do not have the experimental branch up there, which is where the above code came from, though it should not be too different.
The MasterViewController will show how the supply.game array is made and SetupViewController houses the code that is used to determine what is needed in the game, as well as show how the supply array from [setup supply] is being produced.
MasterViewController has also been added as an object to MainMenu.xib, so I do not know if that affects anything.
Any idea of what I need to do?
Edit: Added in info that might be relevant
First of all, I'm an Objective-C novice. So I'm not very familiar with OS X or iOS development. My experience is mostly in Java.
I'm creating an agent-based modeling framework. I'd like to display the simulations, so I'm writing a little application for that. First, a little bit about the framework. The framework has a World class with a start method, which iterates over all agents and has them perform their tasks. At the end of one "step" of the world (i.e., after all the agents have done their thing), the start method calls the intercept method of an object that implements InterceptorProtocol. This object was previously passed in via the initializer. Using the interceptor, anyone can get a hook into the state of the world. This is useful for logging, or in the scenario I'm trying to accomplish: displaying the information graphically. The call to intercept is synchronous.
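Concretely, the hook boils down to a protocol roughly like this (a simplified sketch; the signature matches the intercept: method of my view shown further down):

// Simplified sketch of the interceptor hook: the World calls this
// synchronously at the end of every step.
@class World;

@protocol InterceptorProtocol <NSObject>
- (void)intercept:(World *)world;
@end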
Now as far as the GUI app is concerned, it is pretty simple. I have a controller that initializes a custom view. This custom view also implements InterceptorProtocol so that it can listen in to what happens in the world. I create a World object and pass in the view as the interceptor. The view maintains a reference to the world through a private property, so once I have initialized the world, I set the view's world property to the world I have just created (I realize that this creates a cycle, but I need a reference to the world in the view's drawRect method, and the only way I can have it is to keep a reference to it from the class).
Since the world's start method is synchronous, I don't start the world up immediately. In the drawRect method I check to see if the world is running. If it is not, I start it up in a background thread. If it is, I examine the world and display all the graphics that I need to.
In the intercept method (which gets called from start, running on the background thread), I call setNeedsDisplay: with YES. Since the world's start method is running in a separate thread, I also have a lock object that I use to synchronize, so that I'm not working on the World object while it's being mutated (this part is kind of janky and probably not working the way I expect it to; there are more than a few rough spots, and I'm simply trying to get a little bit working, with plans to clean up later).
My problem is that the view renders some stuff, and then it pretty much locks up. I can see that the NSLog statements are being called and so the code is running, but nothing is getting updated on the view.
Here's some of the pertinent code:
MasterViewController
#import "MasterViewController.h"
#import "World.h"
#import "InfectableBug.h"
#interface MasterViewController ()
#end
#implementation MasterViewController
- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
if (self) {
_worldView = [[WorldView alloc] init];
World* world = [[World alloc] initWithName: #"Bhumi"
rows: 100
columns: 100
iterations: 2000
snapshotInterval: 1
interceptor: _worldView];
for(int i = 0; i < 999; i++) {
NSMutableString* name = [NSMutableString stringWithString: #"HealthyBug"];
[name appendString: [[NSNumber numberWithInt: i] stringValue]];
[world addBug: [[InfectableBug alloc] initWithWorld: world
name: name
layer: #"FirstLayer"
infected: NO
infectionRadius: 1
incubationPeriod: 10
infectionStartIteration: 0]];
}
NSLog(#"Added all bugs. Going to add infected");
[world addBug: [[InfectableBug alloc] initWithWorld: world
name: #"InfectedBug"
layer: #"FirstLayer"
infected: YES
infectionRadius: 1
incubationPeriod: 10
infectionStartIteration: 0]];
[_worldView setWorld: world];
//[world start];
}
return self;
}
- (NSView*) view {
return self.worldView;
}
#end
WorldView
#import "WorldView.h"
#import "World.h"
#import "InfectableBug.h"
#implementation WorldView
#synthesize world;
- (id) initWithFrame:(NSRect) frame {
self = [super initWithFrame:frame];
if (self) {
// Initialization code here.
}
return self;
}
- (void) drawRect:(NSRect) dirtyRect {
CGContextRef myContext = [[NSGraphicsContext currentContext] graphicsPort];
CGContextClearRect(myContext, CGRectMake(0, 0, 1024, 768));
NSUInteger rows = [world rows];
NSUInteger columns = [world columns];
NSUInteger cellWidth = 1024 / columns;
NSUInteger cellHeight = 768 / rows;
if([world running]) {
#synchronized (_lock) {
//Ideally we would need layers, but for now let's just get this to display
NSArray* bugs = [world bugs];
NSEnumerator* enumerator = [bugs objectEnumerator];
InfectableBug* bug;
while ((bug = [enumerator nextObject])) {
if([bug infected] == YES) {
CGContextSetRGBFillColor(myContext, 128, 0, 0, 1);
} else {
CGContextSetRGBFillColor(myContext, 0, 0, 128, 1);
}
NSLog(#"Drawing bug %# at %lu, %lu with width %lu and height %lu", [bug name], [bug x] * cellWidth, [bug y] * cellHeight, cellWidth, cellHeight);
CGContextFillRect(myContext, CGRectMake([bug x] * cellWidth, [bug y] * cellHeight, cellWidth, cellHeight));
}
}
} else {
[world performSelectorInBackground: #selector(start) withObject: nil];
}
}
- (BOOL) isFlipped {
return YES;
}
- (void) intercept: (World *) aWorld {
struct timespec time;
time.tv_sec = 0;
time.tv_nsec = 500000000L;
//nanosleep(&time, NULL);
#synchronized (_lock) {
[self setNeedsDisplay: YES];
}
}
#end
start method in World.m:
- (void)start {
    running = YES;
    while (currentIteration < iterations) {
        @autoreleasepool {
            [bugs shuffle];
            NSEnumerator *bugEnumerator = [bugs objectEnumerator];
            Bug *bug;
            while ((bug = [bugEnumerator nextObject])) {
                NSString *originalLayer = [bug layer];
                NSUInteger originalX = [bug x];
                NSUInteger originalY = [bug y];
                //NSLog(@"Bug %@ is going to act and location %i:%i is %@", [bug name], [bug x], [bug y], [self isOccupied: [bug layer] x: [bug x] y: [bug y]] ? @"occupied" : @"not occupied");
                [bug act];
                //NSLog(@"Bug has acted");
                if (![originalLayer isEqualToString: [bug layer]] || originalX != [bug x] || originalY != [bug y]) {
                    //NSLog(@"Bug has moved");
                    [self moveBugFrom: originalLayer atX: originalX atY: originalY toLayer: [bug layer] atX: [bug x] atY: [bug y]];
                    //NSLog(@"Updated bug position");
                }
            }
            if (currentIteration % snapshotInterval == 0) {
                [interceptor intercept: self];
            }
            currentIteration++;
        }
    }
    //NSLog(@"Done.");
}
Please let me know if you'd like to see any other code. I realize that the code is not pretty; I was just trying to get stuff to work and I plan on cleaning it up later. Also, if I'm violating any Objective-C best practices, please let me know!
Stepping out for a bit; sorry if I don't respond immediately!
Whew, quite a question for probably a simple answer. ;)
UI updates have to be performed on the main thread
If I read your code correctly, you call the start method on a background thread. The start method contains stuff like moveBugFrom:... and also the intercept: method. The intercept method thus calls setNeedsDisplay: on a background thread.
Have all UI-related work performed on the main thread. Your best bet is to use Grand Central Dispatch, unless you need to support iOS < 4 or OS X < 10.6 (GCD was introduced in Mac OS X 10.6 and iOS 4), like this:
dispatch_async(dispatch_get_main_queue(), ^{
// perform UI updates
});
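Applied to your intercept: method, that would look roughly like this (a sketch; once the display call runs on the main thread you may not even need the @synchronized block there):

// Forward the redraw request from the background thread to the main thread.
- (void)intercept:(World *)aWorld {
    dispatch_async(dispatch_get_main_queue(), ^{
        [self setNeedsDisplay:YES];
    });
}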
The iPad app I am creating has to be able to create tiles for a 4096x2992 image that is generated earlier in the app.
The 4096x2992 image I'm testing with isn't very complex, and when written to file in PNG format it is approximately 600 KB.
On the simulator this code seems to work fine, but when I run the app under tighter memory conditions (on my iPad) the process quits because it runs out of memory.
I've been using the same code in the app previously and it was working fine (it was only creating tiles for 3072x2244 images, however).
Either I must be doing something stupidly wrong or my @autoreleasepool blocks aren't working as they should (I think I mentioned that I'm using ARC). When running in Instruments I can see the memory used climb up to ~500 MB, where it then crashes!
I've analyzed the code and not found a single memory leak related to this part of my app, so I'm really confused about why this is crashing on me.
Just a little history on how my function gets called, so you know what's happening: the app uses Core Graphics to render a UIView (4096x2992) with some UIImageViews inside it, then it passes that UIImage into my buildFromImage: method (below), which begins cutting up/resizing the image to create my file.
Here is the buildFromImage: code. The memory builds up inside the main loop under NSLog(@"LOG ------------> Begin tile loop ");
-(void)buildFromImage:(UIImage *)__image {
    NSLog(@"LOG ------------> Begin Build ");

    //if the __image is over 4096 width or 2992 height then we must resize it! (stop crashes etc.)
    if (__image.size.width > __image.size.height) {
        if (__image.size.width > 4096) {
            __image = [self resizeImage:__image toSize:CGSizeMake(4096, (__image.size.height * 4096 / __image.size.width))];
        }
    } else {
        if (__image.size.height > 2992) {
            __image = [self resizeImage:__image toSize:CGSizeMake((__image.size.width * 2992 / __image.size.height), 2992)];
        }
    }

    //create preview image (if landscape, no more than 748 high... if portrait, no more than 1004 high) must keep scale
    NSString *temp_archive_store = [[NSString alloc] initWithFormat:@"%@/%i-temp_imgdat.zip", NSTemporaryDirectory(), arc4random()];
    NSString *temp_tile_store = [[NSString alloc] initWithFormat:@"%@/%i-temp_tilestore/", NSTemporaryDirectory(), arc4random()];

    //create the temp dir for the tile store
    [[NSFileManager defaultManager] createDirectoryAtPath:temp_tile_store withIntermediateDirectories:YES attributes:nil error:nil];

    //create each tile and add it to the compressor once it's made
    //size of tile
    CGSize tile_size = CGSizeMake(256, 256);

    //the scales that we will be generating the tiles for
    NSMutableArray *scales = [[NSMutableArray alloc] initWithObjects:[NSNumber numberWithInt:1000], [NSNumber numberWithInt:500], [NSNumber numberWithInt:250], [NSNumber numberWithInt:125], nil]; //scales to loop over

    NSLog(@"LOG ------------> Begin tile loop ");
    @autoreleasepool {
        //loop through the scales
        for (NSNumber *scale in scales) {
            //scale the image
            UIImage *imageForScale = [self resizedImage:__image scale:[scale intValue]];

            //calculate number of rows...
            float rows = ceil(imageForScale.size.height / tile_size.height);
            //calculate number of columns
            float cols = ceil(imageForScale.size.width / tile_size.width);

            //loop through rows and cols
            for (int row = 0; row < rows; row++) {
                for (int col = 0; col < cols; col++) {
                    NSLog(@"LOG ------> Creating Tile (%i,%i,%i)", col, row, [scale intValue]);

                    //build name for tile...
                    NSString *tile_name = [NSString stringWithFormat:@"%@_%i_%i_%i.png", @"image", [scale intValue], col, row];

                    @autoreleasepool {
                        //build tile for this coordinate
                        UIImage *tile = [self tileForRow:row column:col size:tile_size image:imageForScale];
                        //convert image to png data
                        NSData *tile_data = UIImagePNGRepresentation(tile);
                        [tile_data writeToFile:[NSString stringWithFormat:@"%@%@", temp_tile_store, tile_name] atomically:YES];
                    }
                }
            }
        }
    }
}
Here are my resizing/cropping functions too, as these could also be causing the issue.
-(UIImage *)resizeImage:(UIImage *)inImage toSize:(CGSize)scale {
    @autoreleasepool {
        CGImageRef inImageRef = [inImage CGImage];
        CGColorSpaceRef clrRf = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL, ceil(scale.width), ceil(scale.height), CGImageGetBitsPerComponent(inImageRef), CGImageGetBitsPerPixel(inImageRef) * ceil(scale.width), clrRf, kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(clrRf);
        CGContextDrawImage(ctx, CGRectMake(0, 0, scale.width, scale.height), inImageRef);
        CGImageRef img = CGBitmapContextCreateImage(ctx);
        UIImage *image = [[UIImage alloc] initWithCGImage:img scale:1 orientation:UIImageOrientationUp];
        CGImageRelease(img);
        CGContextRelease(ctx);
        return image;
    }
}

- (UIImage *)tileForRow:(int)row column:(int)col size:(CGSize)tileSize image:(UIImage *)inImage
{
    @autoreleasepool {
        //get the selected tile
        CGRect subRect = CGRectMake(col * tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
        CGImageRef inImageRef = [inImage CGImage];
        CGImageRef tiledImage = CGImageCreateWithImageInRect(inImageRef, subRect);
        UIImage *tileImage = [[UIImage alloc] initWithCGImage:tiledImage scale:1 orientation:UIImageOrientationUp];
        CGImageRelease(tiledImage);
        return tileImage;
    }
}
I never used to be that good with memory management, so I took the time to read up on it and also converted my project to ARC to see if that could address my issues (that was a while ago). But from the results I get after profiling in Instruments, I must be doing something stupidly wrong for the memory to grow as badly as it does, and I just can't see what I'm doing wrong.
If anybody can point out anything I may be doing wrong it would be great!
Thanks
Liam
(let me know if you need more info)
I use "UIImage+Resize". Inside #autoreleasepool {} it works fine with ARC in a loop.
https://github.com/AliSoftware/UIImage-Resize
-(void)compress:(NSString *)fullPathToFile {
    @autoreleasepool {
        UIImage *fullImage = [[UIImage alloc] initWithContentsOfFile:fullPathToFile];
        UIImage *compressedImage = [fullImage resizedImageByHeight:1024];
        NSData *compressedData = UIImageJPEGRepresentation(compressedImage, 0.75);
        [compressedData writeToFile:fullPathToFile atomically:NO];
    }
}
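A minimal usage sketch in a loop (the paths array here is hypothetical):

// Hypothetical list of image files to compress in place.
NSArray *paths = @[@"/path/to/one.jpg", @"/path/to/two.jpg"];
for (NSString *path in paths) {
    // Each call drains its own @autoreleasepool, so peak memory stays bounded.
    [self compress:path];
}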
I have an MKMapView (obviously) that shows housing locations around the user.
I have a radius tool; when a selection is made, annotations should be added or removed based on their distance from the user.
The add/remove works fine, but for some reason the new annotations won't show up until I zoom in or out.
This is the method that adds/removes the annotations based on distance. I have tried two different variations of the method.
1. Adds the new annotations to an array, then adds them all to the map with [mapView addAnnotations:NSArray].
2. Adds the annotations one by one as it finds them, using [mapView addAnnotation:MKMapAnnotation].
1.
- (void)updateBasedDistance:(NSNumber *)distance {
    //Setup increment for HUD animation loading
    float hudIncrement = (1.0f / [[[[self appDelegate] rssParser] rssItems] count]);

    //Remove all the current annotations from the map
    [self._mapView removeAnnotations:self._mapView.annotations];

    //Hold all the new annotations to add to the map
    NSMutableArray *tempAnnotations;

    /*
     I have an array that holds all the annotations on the map because
     a lot of filtering/searching happens. So for memory reasons it is
     more efficient to load annotations once, then add/remove as needed.
     */
    for (int i = 0; i < [annotations count]; i++) {
        //Current annotation's location
        CLLocation *tempLoc = [[CLLocation alloc] initWithLatitude:[[annotations objectAtIndex:i] coordinate].latitude longitude:[[annotations objectAtIndex:i] coordinate].longitude];

        //Distance of current annotation from user location, converted to miles
        CLLocationDistance miles = [self._mapView.userLocation.location distanceFromLocation:tempLoc] * 0.000621371192;

        //If distance is less than user selection, add it to the map.
        if (miles <= [distance floatValue]) {
            if (tempAnnotations == nil)
                tempAnnotations = [[NSMutableArray alloc] init];
            [tempAnnotations addObject:[annotations objectAtIndex:i]];
        }

        //For some reason, even with ARC, this helps a little with memory consumption
        tempLoc = nil;

        //Update a progress HUD I use.
        HUD.progress += hudIncrement;
    }

    //Add the new annotations to the map
    if (tempAnnotations != nil)
        [self._mapView addAnnotations:tempAnnotations];
}
2.
- (void)updateBasedDistance:(NSNumber *)distance {
    //Setup increment for HUD animation loading
    float hudIncrement = (1.0f / [[[[self appDelegate] rssParser] rssItems] count]);

    //Remove all the current annotations from the map
    [self._mapView removeAnnotations:self._mapView.annotations];

    /*
     I have an array that holds all the annotations on the map because
     a lot of filtering/searching happens. So for memory reasons it is
     more efficient to load annotations once, then add/remove as needed.
     */
    for (int i = 0; i < [annotations count]; i++) {
        //Current annotation's location
        CLLocation *tempLoc = [[CLLocation alloc] initWithLatitude:[[annotations objectAtIndex:i] coordinate].latitude longitude:[[annotations objectAtIndex:i] coordinate].longitude];

        //Distance of current annotation from user location, converted to miles
        CLLocationDistance miles = [self._mapView.userLocation.location distanceFromLocation:tempLoc] * 0.000621371192;

        //If distance is less than user selection, add it to the map.
        if (miles <= [distance floatValue])
            [self._mapView addAnnotation:[annotations objectAtIndex:i]];

        //For some reason, even with ARC, this helps a little with memory consumption
        tempLoc = nil;

        //Update a progress HUD I use.
        HUD.progress += hudIncrement;
    }
}
I have also attempted at the end of the above method:
[self._mapView setNeedsDisplay];
[self._mapView setNeedsLayout];
Also, to force a refresh (I saw somewhere that this might work):
self._mapView.showsUserLocation = NO;
self._mapView.showsUserLocation = YES;
Any help would be very much appreciated and as always, thank you for taking the time to read.
I'm going to guess that updateBasedDistance: gets called from a background thread. Check with NSLog(@"Am I in the UI thread? %d", [NSThread isMainThread]);. If it prints 0, then you should move the removeAnnotations: and addAnnotation: calls into a performSelectorOnMainThread: invocation, or run them in GCD blocks on the main thread.
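For the GCD route, a minimal sketch wrapping just the map mutations from your method (variable names as in your code):

// Perform the annotation changes on the main queue.
dispatch_async(dispatch_get_main_queue(), ^{
    [self._mapView removeAnnotations:self._mapView.annotations];
    [self._mapView addAnnotations:tempAnnotations];
});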