I am developing an iPhone game with cocos2d-iphone.
I am particularly interested in how much memory CCSpriteFrameCache is "holding" at the moment. Is there a way to know that, without using any Xcode tools?
Perhaps there is a variable that would already give me an estimated memory-consumption value for my app?
Generally speaking the problem you are posing is not easy to solve.
In the case of CCSpriteFrameCache, since this class holds an NSMutableDictionary of sprite frames, which are backed by textures, you could iterate the dictionary and accumulate the texture dimensions (multiplied by the size of each pixel).
Another approach would be converting the dictionary into NSData like this:
NSData *data = [NSPropertyListSerialization dataFromPropertyList:spriteFrameDictionary
                                                           format:NSPropertyListBinaryFormat_v1_0
                                                 errorDescription:NULL];
NSLog(@"size: %lu", (unsigned long)[data length]);
but this would require you to implement the NSCoding protocol for the CCSpriteFrame class.
About accumulating the texture sizes: you multiply width by height by the pixel size, and the pixel size depends on the pixel format (RGBA8888 is 32 bits, RGB565 is 16 bits). You also have to take into account that OpenGL textures come only in power-of-two dimensions: 256x256, 512x512, 1024x512, etc.
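For example, here is a minimal sketch of that estimate (nextPOT and estimatedTextureBytes are illustrative helpers of mine, not cocos2d API):

// Round a dimension up to the next power of two, matching how OpenGL ES
// pads texture dimensions on older hardware.
static NSUInteger nextPOT(NSUInteger x) {
    NSUInteger pot = 1;
    while (pot < x) {
        pot <<= 1;
    }
    return pot;
}

// Estimate the memory one texture occupies, given its pixel dimensions and
// bits per pixel (32 for RGBA8888, 16 for RGB565, and so on).
static NSUInteger estimatedTextureBytes(NSUInteger width, NSUInteger height,
                                        NSUInteger bitsPerPixel) {
    return nextPOT(width) * nextPOT(height) * bitsPerPixel / 8;
}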
Actually, if you are concerned about the memory consumption of your textures, note that they are stored in CCTextureCache. There is a CCTextureCache (Debug) category with a dumpCachedTextureInfo method. I have not tried it myself, but here it is:
@implementation CCTextureCache (Debug)

- (void)dumpCachedTextureInfo
{
    NSUInteger count = 0;
    NSUInteger totalBytes = 0;
    for (NSString *texKey in textures_) {
        CCTexture2D *tex = [textures_ objectForKey:texKey];
        NSUInteger bpp = [tex bitsPerPixelForFormat];
        // Each texture takes up width * height * bytesPerPixel bytes.
        NSUInteger bytes = tex.pixelsWide * tex.pixelsHigh * bpp / 8;
        totalBytes += bytes;
        count++;
        CCLOG(@"cocos2d: \"%@\" rc=%lu id=%lu %lu x %lu @ %ld bpp => %lu KB",
              texKey,
              (unsigned long)[tex retainCount],
              (unsigned long)tex.name,
              (unsigned long)tex.pixelsWide,
              (unsigned long)tex.pixelsHigh,
              (long)bpp,
              (unsigned long)bytes / 1024);
    }
    CCLOG(@"cocos2d: CCTextureCache dumpDebugInfo: %ld textures, for %lu KB (%.2f MB)",
          (long)count, (unsigned long)totalBytes / 1024, totalBytes / (1024.0f * 1024.0f));
}

@end
You want to compute the bits per pixel for each texture individually, since different texture formats can be stored in the cache, depending on your current needs. The last line gives you a summary of the cache contents, including the total memory consumed.
I've been working on getting a clean sine wave sound that can change frequencies when different notes are played. From what I've understood, I need to resize the buffer's frameLength relative to the frequency to avoid those popping sounds caused when the frame ends on a sine's peak.
So on every iteration, I set the frameLength and then populate buffer with the signal.
AVAudioPlayerNode *audioPlayer = [[AVAudioPlayerNode alloc] init];
AVAudioPCMBuffer *buffer =
    [[AVAudioPCMBuffer alloc] initWithPCMFormat:[audioPlayer outputFormatForBus:0]
                                  frameCapacity:44100 * 10];
AVAudioChannelCount channelCount = buffer.format.channelCount;
float * const *floatChannelData = buffer.floatChannelData;

while (YES) {
    AVAudioFrameCount frameCount = ceil(44100.0 / osc.frequency);
    [buffer setFrameLength:frameCount];
    [audioPlayer scheduleBuffer:buffer
                         atTime:nil
                        options:AVAudioPlayerNodeBufferLoops
              completionHandler:nil];
    for (int i = 0; i < [buffer frameLength]; i++) {
        for (int channelNumber = 0; channelNumber < channelCount; channelNumber++) {
            float * const channelBuffer = floatChannelData[channelNumber];
            channelBuffer[i] = [self getSignalOnFrame:i];
        }
    }
}
where the signal is generated from:
- (float)getSignalOnFrame:(int)i {
    float sampleRate = 44100.0;
    return [osc amplitude] * sinf([osc frequency] * i * 2.0 * M_PI / sampleRate);
}
The starting tone sounds fine, and there are no popping sounds when notes change, but the notes themselves sound like they're being turned into sawtooth waves or something.
Any ideas on what I might be missing here?
Or should I just create a whole new audioPlayer with a fresh buffer for each note played?
Thanks for any advice!
If the buffers are contiguous, then a better method to avoid discontinuities in sine wave generation is to remember the phase of the sine wave at the end of one buffer, and use that phase as the starting point (angle) when generating the next buffer.
If the buffers are not contiguous, then a common way to avoid clicks is to gradually taper the first and last few milliseconds of each buffer from full gain to zero. A linear gain taper will do, but a raised cosine taper is a slightly smoother taper.
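For example, here is a minimal sketch of the phase-accumulator idea (the function and variable names are illustrative, not AVFoundation API):

// The phase is carried across calls so each buffer starts exactly where the
// previous one ended, avoiding any discontinuity at the seam.
static double phase = 0.0;

static void fillSineBuffer(float *samples, int count, double frequency,
                           double amplitude, double sampleRate) {
    const double phaseIncrement = 2.0 * M_PI * frequency / sampleRate;
    for (int i = 0; i < count; i++) {
        samples[i] = (float)(amplitude * sin(phase));
        phase += phaseIncrement;
        // Wrap to keep the accumulator small and numerically precise.
        if (phase >= 2.0 * M_PI) {
            phase -= 2.0 * M_PI;
        }
    }
}

When the frequency changes, only phaseIncrement changes; the wave continues from its current angle, so there is no click.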
Is there a way in AppKit to measure the width of a large number of NSString objects (say, a million) really fast? I have tried three different ways to do this:
[NSString sizeWithAttributes:]
[NSAttributedString size]
NSLayoutManager (get text width instead of height)
Here are some performance metrics (all times in seconds):

Count       sizeWithAttributes   NSAttributedString   NSLayoutManager
1,000       0.057                0.031                0.007
10,000      0.329                0.325                0.064
100,000     3.06                 3.14                 0.689
1,000,000   29.5                 31.3                 7.06
NSLayoutManager is clearly the way to go, but there are two problems:
1. High memory footprint (more than 1 GB according to the profiler), because of the creation of heavyweight NSTextStorage objects.
2. High creation time. All of the time taken is during creation of the above strings, which is a dealbreaker in itself. (Subsequently measuring NSTextStorage objects which already have glyphs created and laid out takes only about 0.0002 seconds.)
7 seconds is still too slow for what I am trying to do. Is there a faster way? To measure a million strings in about a second?
In case you want to play around, here is the GitHub project.
Here are some ideas I haven't tried.
Use Core Text directly. The other APIs are built on top of it.
Parallelize. All modern Macs (and even all modern iOS devices) have multiple cores. Divide up the string array into several subarrays. For each subarray, submit a block to a global GCD queue. In the block, create the necessary Core Text or NSLayoutManager objects and measure the strings in the subarray. Both APIs can be used safely this way. (Core Text) (NSLayoutManager)
Regarding “High memory footprint”: Use Local Autorelease Pool Blocks to Reduce Peak Memory Footprint.
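For example, a sketch of measuring in chunks with a local pool (measureStrings and the chunk size are placeholders for your own measurement code):

NSUInteger chunkSize = 10000;
for (NSUInteger start = 0; start < strings.count; start += chunkSize) {
    @autoreleasepool {
        NSRange range = NSMakeRange(start, MIN(chunkSize, strings.count - start));
        // Temporaries created while measuring this chunk are released when
        // the pool drains, so peak memory stays bounded by the chunk size.
        measureStrings([strings subarrayWithRange:range]);
    }
}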
Regarding “All of the time taken is during creation of the above strings, which is a dealbreaker in itself”: Are you saying all the time is spent in these lines:
double random = (double)arc4random_uniform(1000) / 1000;
NSString *randomNumber = [NSString stringWithFormat:@"%f", random];
Formatting a floating-point number is expensive. Is this your real use case? If you just want to format a random rational of the form n/1000 for 0 ≤ n < 1000, there are faster ways. Also, in many fonts, all digits have the same width, so that it's easy to typeset columns of numbers. If you pick such a font, you can avoid measuring the strings in the first place.
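For example, here is a sketch that formats n/1000 with pure integer math, avoiding %f entirely (assuming that really is your use case):

uint32_t n = arc4random_uniform(1000);
// Produces "0.000" through "0.999" without any floating-point formatting.
NSString *randomNumber = [NSString stringWithFormat:@"0.%03u", n];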
UPDATE
Here's the fastest code I've come up with using Core Text. The dispatched version is almost twice as fast as the single-threaded version on my Core i7 MacBook Pro. My fork of your project is here.
static CGFloat maxWidthOfStringsUsingCTFramesetter(
    NSArray *strings, NSRange range) {
    NSString *bigString =
        [[strings subarrayWithRange:range] componentsJoinedByString:@"\n"];
    NSAttributedString *richText =
        [[NSAttributedString alloc]
            initWithString:bigString
            attributes:@{ NSFontAttributeName: (__bridge NSFont *)font }];
    CGPathRef path =
        CGPathCreateWithRect(CGRectMake(0, 0, CGFLOAT_MAX, CGFLOAT_MAX), NULL);
    CGFloat width = 0.0;
    CTFramesetterRef setter =
        CTFramesetterCreateWithAttributedString(
            (__bridge CFAttributedStringRef)richText);
    CTFrameRef frame =
        CTFramesetterCreateFrame(
            setter, CFRangeMake(0, bigString.length), path, NULL);
    NSArray *lines = (__bridge NSArray *)CTFrameGetLines(frame);
    for (id item in lines) {
        CTLineRef line = (__bridge CTLineRef)item;
        width = MAX(width, CTLineGetTypographicBounds(line, NULL, NULL, NULL));
    }
    CFRelease(frame);
    CFRelease(setter);
    CFRelease(path);
    return width;
}
static void test_CTFramesetter() {
    runTest(__func__, ^{
        return maxWidthOfStringsUsingCTFramesetter(
            testStrings, NSMakeRange(0, testStrings.count));
    });
}

static void test_CTFramesetter_dispatched() {
    runTest(__func__, ^{
        dispatch_queue_t gatherQueue = dispatch_queue_create(
            "test_CTFramesetter_dispatched result-gathering queue", nil);
        dispatch_queue_t runQueue =
            dispatch_get_global_queue(QOS_CLASS_UTILITY, 0);
        dispatch_group_t group = dispatch_group_create();
        __block CGFloat gatheredWidth = 0.0;
        const size_t Parallelism = 16;
        const size_t totalCount = testStrings.count;
        // Force unsigned long to get 64-bit math to avoid overflow for
        // large totalCounts.
        for (unsigned long i = 0; i < Parallelism; ++i) {
            NSUInteger start = (totalCount * i) / Parallelism;
            NSUInteger end = (totalCount * (i + 1)) / Parallelism;
            NSRange range = NSMakeRange(start, end - start);
            dispatch_group_async(group, runQueue, ^{
                double width =
                    maxWidthOfStringsUsingCTFramesetter(testStrings, range);
                dispatch_sync(gatherQueue, ^{
                    gatheredWidth = MAX(gatheredWidth, width);
                });
            });
        }
        dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
        return gatheredWidth;
    });
}
So I'm working on processing audio with Objective-C, and am attempting to write a gain change function. I have limited the accepted audio formats to 16-bit AIFF files only for now. The process I am using is straightforward: I grab the audio data from my AIFF object, skip to the point in the audio where I want to process (if x1 is 10 and x2 is 20, the goal is to change the amplitude of the samples from 10 seconds into the audio until 20 seconds in), and then step through the samples, applying the gain change through multiplication. The problem is that after I write the processed samples to a new NSMutableData, and then write a new AIFF file using that sound data, the processed samples are completely messed up, and the audio is basically just noise.
- (NSMutableData *)normalizeAIFF:(AIFFAudio *)audio x1:(int)x1 x2:(int)x2 {
    // obtain audio data bytes from AIFF object
    SInt16 *bytes = (SInt16 *)[audio.ssndData bytes];
    NSUInteger length = [audio.ssndData length] / sizeof(SInt16);
    NSMutableData *newAudio = [[NSMutableData alloc] init];
    int loudestSample = [self findLoudestSample:audio.ssndData];

    // skip offset and blocksize in SSND data and proceed to user-selected point
    // For 16-bit, 44.1 kHz audio, each second of sound data holds 88.2 thousand samples
    int skipTo = 4 + (x1 * 88200);
    int processChunk = ((x2 - x1) * 88200) + skipTo;

    for (int i = skipTo; i < processChunk; i++) {
        // convert to float format for processing
        Float32 sampleFloat = (Float32)bytes[i];
        sampleFloat = sampleFloat / 32768.0;

        // This is where I would change the amplitude of the sample
        // sampleFloat = sampleFloat + (sampleFloat * 0.5);

        // make sure not clipping
        if (sampleFloat > 1.0) {
            sampleFloat = 1.0;
        } else if (sampleFloat < -1.0) {
            sampleFloat = -1.0;
        }

        // convert back to SInt16
        sampleFloat = sampleFloat * 32768.0;
        if (sampleFloat > 32767.0) {
            sampleFloat = 32767.0;
        } else if (sampleFloat < -32768.0) {
            sampleFloat = -32768.0;
        }
        bytes[i] = (SInt16)sampleFloat;
    }
    [newAudio appendBytes:bytes length:length];
    return newAudio;
}
Where in this process could I be going wrong? Is it the conversion of the sample from SInt16 -> float -> SInt16? Printing the data before, during, and after this conversion seems to show that nothing is going wrong there. It seems to happen after I pack it back into an NSMutableData object, but I'm not too sure.
Any help is appreciated.
EDIT: I also want to mention that when I send audio through this function with the gain change factor set to 0, such that the resulting waveform is identical to the input, there are no issues. The waveform comes out looking and sounding exactly the same. The problem appears only when the gain change factor is set to a value that actually changes the samples.
EDIT2: I changed the code to use a pointer and a type cast rather than memcpy(). I still get weird results when multiplying the floating-point representation of the sample by any number. When I multiply the sample as an SInt16 by an integer, though, I get the proper result. This leads me to believe my problem lies in the way I am doing the floating-point arithmetic. Does anyone see anything in the commented-out floating-point line that could be leading to errors?
The problem turned out to be an endianness issue as Zaph alluded to. I thought I was handling the conversion of big-endian to little-endian correctly when I was not. Now the code looks like:
- (NSMutableData *)normalizeAIFF:(AIFFAudio *)audio x1:(int)x1 x2:(int)x2 {
    // obtain audio data bytes from AIFF object
    SInt16 *bytes = (SInt16 *)[audio.ssndData bytes];
    NSUInteger length = [audio.ssndData length];
    NSMutableData *newAudio = [[NSMutableData alloc] init];

    // skip offset and blocksize in SSND data and proceed to user-selected point
    // For 16-bit, 44.1 kHz audio, each second of sound data holds 88.2 thousand samples
    int skipTo = 4 + (x1 * 88200);
    int processChunk = ((x2 - x1) * 88200) + skipTo;

    for (int i = skipTo; i < processChunk; i++) {
        // AIFF sample data is big-endian; swap to host order, scale, swap back.
        SInt16 sample = CFSwapInt16BigToHost(bytes[i]);
        bytes[i] = CFSwapInt16HostToBig(sample * 0.5);
    }
    [newAudio appendBytes:bytes length:length];
    return newAudio;
}
The gain change factor of 0.5 is a placeholder, and I still have to actually normalize the data relative to the sample with the greatest amplitude in the selection, but the issue I had is solved. When writing the new waveform out to a file, it sounds and looks as expected.
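For that remaining normalization step, one possible sketch (assuming findLoudestSample returns the peak absolute sample value in host byte order):

// Scale the selection so its loudest sample reaches full scale.
SInt16 peak = [self findLoudestSample:audio.ssndData];
float gain = (peak > 0) ? 32767.0f / peak : 1.0f;
for (int i = skipTo; i < processChunk; i++) {
    SInt16 sample = CFSwapInt16BigToHost(bytes[i]);
    float scaled = sample * gain;
    // Clamp before narrowing back to 16 bits.
    if (scaled > 32767.0f) scaled = 32767.0f;
    if (scaled < -32768.0f) scaled = -32768.0f;
    bytes[i] = CFSwapInt16HostToBig((SInt16)scaled);
}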
I have a JPG file. I need to convert it to pixel data and then change the color of some pixels. I do it like this:
NSString *string = [[NSBundle mainBundle] pathForResource:@"pic" ofType:@"jpg"];
NSData *data = [NSData dataWithContentsOfFile:string];
const unsigned char *bytesArray = data.bytes;
NSUInteger bytesLength = data.length;

// -------- pixels to array
NSMutableArray *array = [[NSMutableArray alloc] initWithCapacity:bytesLength];
for (int i = 0; i < bytesLength; i++) {
    [array addObject:[NSNumber numberWithUnsignedChar:bytesArray[i]]];
}
Here I try to change the color of pixels 95 through 154:
NSNumber *number = [NSNumber numberWithInt:200];
for (int i = 95; i < 155; i++) {
    [array replaceObjectAtIndex:i withObject:number];
}
But when I convert the array back to an image, I get a blurred picture. I don't understand why I can't affect just those pixels, and why I affect the picture as a whole.
The process of accessing pixel-level data is a little more complicated than your question might suggest because, as Martin pointed out, JPEG is a compressed image format. Apple discusses the approved technique for getting pixel data in Technical Q&A QA1509.
Bottom line, to get the uncompressed pixel data for a UIImage, you would:
1. Get the CGImage for the UIImage.
2. Get the data provider for that CGImageRef via CGImageGetDataProvider.
3. Get the binary data associated with that data provider via CGDataProviderCopyData.
4. Extract some of the information about the image, so you know how to interpret that buffer.
Thus:
UIImage *image = ...

CGImageRef imageRef = image.CGImage;                                // get the CGImageRef
NSAssert(imageRef, @"Unable to get CGImageRef");

CGDataProviderRef provider = CGImageGetDataProvider(imageRef);      // get the data provider
NSAssert(provider, @"Unable to get provider");

NSData *data = CFBridgingRelease(CGDataProviderCopyData(provider)); // get copy of the data
NSAssert(data, @"Unable to copy image data");

// some other interesting details about the image
NSInteger bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
NSInteger bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
NSInteger bytesPerRow = CGImageGetBytesPerRow(imageRef);
NSInteger width = CGImageGetWidth(imageRef);
NSInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorspace = CGImageGetColorSpace(imageRef);
Given that you want to manipulate this, you presumably want some mutable pixel buffer. The easiest approach would be to make a mutableCopy of that NSData object and manipulate it there, but in these cases, I tend to fall back to C, creating a void *outputBuffer, into which I copy the original pixels and manipulate using traditional C array techniques.
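If you do take the mutableCopy route, it is as simple as (a minimal sketch):

NSMutableData *mutablePixels = [data mutableCopy];
uint8_t *pixels = mutablePixels.mutableBytes; // writable pixel bytes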
To create the buffer:
void *outputBuffer = malloc(width * height * bitsPerPixel / 8);
NSAssert(outputBuffer, @"Unable to allocate buffer");
For the precise details on how to manipulate it, you have to look at bitmapInfo (which will tell you whether it's RGBA or ARGB; whether it's floating point or integer) and bitsPerComponent (which will tell you whether it's 8 or 16 bits per component, etc.). For example, a very common JPEG format is 8 bits per component, four components, RGBA (i.e. red, green, blue, and alpha, in that order). But you really need to check those various properties we extracted from the CGImageRef to make sure. See the discussion in the Quartz 2D Programming Guide - Bitmap Images and Image Masks for more information. I personally find "Figure 11-2" to be especially illuminating.
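For example, assuming the common 8-bits-per-component RGBA layout described above (the coordinates here are purely illustrative), you might copy the pixels and write one of them like this:

// Copy the original pixels into the output buffer. This assumes no row
// padding, i.e. bytesPerRow == width * bitsPerPixel / 8; otherwise copy
// row by row.
memcpy(outputBuffer, data.bytes, width * height * bitsPerPixel / 8);

// Write pixel (x, y), assuming 8 bits per component in RGBA order.
NSInteger x = 10, y = 20;
uint8_t *pixel = (uint8_t *)outputBuffer + y * bytesPerRow + x * (bitsPerPixel / 8);
pixel[0] = 255; // red
pixel[1] = 0;   // green
pixel[2] = 0;   // blue
pixel[3] = 255; // alpha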
The next logical question is when you're done manipulating the pixel data, how to create a UIImage for that. In short, you'd reverse the above process, e.g. create a data provider, create a CGImageRef, and then create a UIImage:
CGDataProviderRef outputProvider = CGDataProviderCreateWithData(
    NULL, outputBuffer, width * height * bitsPerPixel / 8, releaseData);
CGImageRef outputImageRef = CGImageCreate(width,
                                          height,
                                          bitsPerComponent,
                                          bitsPerPixel,
                                          bytesPerRow,
                                          colorspace,
                                          bitmapInfo,
                                          outputProvider,
                                          NULL,
                                          NO,
                                          kCGRenderingIntentDefault);
UIImage *outputImage = [UIImage imageWithCGImage:outputImageRef];
CGImageRelease(outputImageRef);
CGDataProviderRelease(outputProvider);
Where releaseData is a C function that simply calls free() on the pixel buffer associated with the data provider:
void releaseData(void *info, const void *data, size_t size)
{
    free((void *)data);
}
In a GPS app that allows the user to display a list of complex location paths that we call tracks on various types of map, each track can consist of between 2k and 10k location points. The tracks are copiously clipped, pruned, and path-simplified when they are rendered on non-Google map types. This keeps memory usage down and performance up. We typically wind up submitting far fewer than a thousand (aggregate) transformed location points to the OpenGL pipeline, even in the worst cases.
In integrating the Google Maps SDK for iOS, we initially attempted to continue to leverage our own OpenGL track rendering system, but ran into issues with conflicting OpenGL context usage (rendering worked, but we couldn't get GMSMapView and our own internal OpenGL resources to both release without something touching deleted memory).
So we are trying to leverage the GMSPolyline constructs and just let the Google SDK do the track rendering, but we've run into major memory usage issues, and are looking for guidance in working around them.
Using Xcode Instruments, we've monitored memory usage when creating about 25 poly lines with about 23k location points total (not each). Over the course of poly line creation, app memory usage grows from about 14 MB to about 172 MB, a net peak of about 158 MB. Shortly after all the poly lines are created, memory usage finally drops back down to around 19 MB and seems stable, for a cumulative net of around 5 MB, so it seems each location point requires around 220 bytes (5 MB / 23k points) to store.
What hurts us is the peak memory usage. While our laboratory test only used 23k location points, in the real world there are often many more, and iOS seems to jettison our application after Google Maps has consumed around 450 MB on an iPhone 5 (whereas our internal poly line rendering system peaks at around 12 MB for the same test case).
Clearly the GMSPolyline construct is not intended for the heavyweight usage that we require.
We tried wrapping some of the polyline creation loops with separate autorelease pools, and then draining those at appropriate points, but this had no impact on memory use. The peak memory use after the polylines were created and control returned to the main run loop didn't change at all. Later it became clear why: the Google Maps system doesn't release resources until the first DisplayLink callback after the polylines are created.
Our next effort will be to manually throttle the amount of data we're pushing at GMSPolyline, probably using our own bounds testing, clipping, pruning & minimization, rather than relying on Google Maps to do this efficiently.
The drawback here is that many more GMSPolyline objects will be allocated and deallocated, potentially while the user is panning/zooming around the map. Each of these objects will have far fewer location points, yet we're still concerned about unforeseen consequences of this approach, and about the hidden overhead of many GMSPolyline allocations and deallocations.
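A sketch of that throttled approach, rebuilding a simplified polyline whenever the camera settles (the trackPolyline property and the downsampling helper are our own code, not part of the SDK):

// Called by the GMSMapViewDelegate once the camera stops moving.
- (void)mapView:(GMSMapView *)mapView idleAtCameraPosition:(GMSCameraPosition *)position {
    // Remove the previous, now-stale polyline from the map.
    self.trackPolyline.map = nil;

    // Clip and downsample our raw track for the visible region; this helper
    // is our own pruning code, not a Google Maps SDK facility.
    NSArray *visiblePoints =
        [self simplifiedPointsForRegion:mapView.projection.visibleRegion];

    GMSMutablePath *path = [GMSMutablePath path];
    for (CLLocation *location in visiblePoints) {
        [path addCoordinate:location.coordinate];
    }

    GMSPolyline *polyline = [GMSPolyline polylineWithPath:path];
    polyline.map = mapView;
    self.trackPolyline = polyline;
}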
So the question is: what is the best approach for dealing with this situation, and can someone from Google shed some light on any GMSPolyline best practices, upper bounds, bottlenecks, etc.?
Why don't you try using the Google Directions API, based on basic HTTP requests? https://developers.google.com/maps/documentation/directions/ (check the conditions on licensing and the number of requests).
Then plot the data with iOS's MKPolyline. I'm sure you will have better performance, and you will only depend on Google for the positioning data.
To convert the response from the Google API to coordinates, use the well-known method below (taken from another post):
- (NSMutableArray *)parseResponse:(NSDictionary *)response
{
    NSArray *routes = [response objectForKey:@"routes"];
    NSDictionary *route = [routes lastObject];
    if (route) {
        NSString *overviewPolyline =
            [[route objectForKey:@"overview_polyline"] objectForKey:@"points"];
        return [self decodePolyLine:overviewPolyline];
    }
    return nil;
}
- (NSMutableArray *)decodePolyLine:(NSString *)encodedStr
{
    NSMutableString *encoded =
        [[NSMutableString alloc] initWithCapacity:[encodedStr length]];
    [encoded appendString:encodedStr];
    [encoded replaceOccurrencesOfString:@"\\\\"
                             withString:@"\\"
                                options:NSLiteralSearch
                                  range:NSMakeRange(0, [encoded length])];
    NSInteger len = [encoded length];
    NSInteger index = 0;
    NSMutableArray *array = [[NSMutableArray alloc] init];
    NSInteger lat = 0;
    NSInteger lng = 0;
    while (index < len) {
        NSInteger b;
        NSInteger shift = 0;
        NSInteger result = 0;
        do {
            b = [encoded characterAtIndex:index++] - 63;
            result |= (b & 0x1f) << shift;
            shift += 5;
        } while (b >= 0x20);
        NSInteger dlat = ((result & 1) ? ~(result >> 1) : (result >> 1));
        lat += dlat;
        shift = 0;
        result = 0;
        do {
            b = [encoded characterAtIndex:index++] - 63;
            result |= (b & 0x1f) << shift;
            shift += 5;
        } while (b >= 0x20);
        NSInteger dlng = ((result & 1) ? ~(result >> 1) : (result >> 1));
        lng += dlng;
        NSNumber *latitude = [[NSNumber alloc] initWithFloat:lat * 1e-5];
        NSNumber *longitude = [[NSNumber alloc] initWithFloat:lng * 1e-5];
        CLLocation *location =
            [[CLLocation alloc] initWithLatitude:[latitude floatValue]
                                       longitude:[longitude floatValue]];
        [array addObject:location];
    }
    return array;
}
I had a similar performance problem with the Google SDK, and this worked for me.