Is there an API to obtain the NSDate or NSTimeInterval representing the time the system booted? Some APIs such as [NSProcessInfo systemUptime] and Core Motion return time since boot. I need to precisely correlate these uptime values with NSDates, to about a millisecond.
Time since boot ostensibly provides more precision, but it's easy to see that NSDate already provides precision on the order of 100 nanoseconds, and anything under a microsecond is just measuring interrupt latency and PCB clock jitter.
The obvious thing is to subtract the uptime from the current time [NSDate date]. But that assumes that time does not change between the two system calls, which is, well, hard to accomplish. Moreover if the thread is preempted between the calls, everything is thrown off. The workaround is to repeat the process several times and use the smallest result, but yuck.
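For concreteness, a hedged sketch of that repeat-and-take-the-smallest workaround (NSProcessInfo's systemUptime and NSDate are the real APIs; the attempt count and the averaging of the two uptime reads are arbitrary choices):
#import <Foundation/Foundation.h>
#include <float.h>
// Sample the wall clock bracketed by two uptime reads; keep the pair with the
// smallest bracket, which is least likely to have been disturbed by preemption.
NSTimeInterval bestOffset = 0;
NSTimeInterval bestGap = DBL_MAX;
for (int attempt = 0; attempt < 10; attempt++) {
    NSTimeInterval up1 = [[NSProcessInfo processInfo] systemUptime];
    NSTimeInterval now = [[NSDate date] timeIntervalSinceReferenceDate];
    NSTimeInterval up2 = [[NSProcessInfo processInfo] systemUptime];
    if (up2 - up1 < bestGap) {
        bestGap = up2 - up1;
        bestOffset = now - (up1 + up2) / 2.0;   // approximate reference-date time of boot
    }
}
NSDate *approxBootDate = [NSDate dateWithTimeIntervalSinceReferenceDate:bestOffset];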
NSDate must have a master offset it uses to generate objects with the current time from the system uptime, is there really no way to obtain it?
In OSX you could use sysctl(). This is how the OSX Unix utility uptime does it. Source code is available - search for boottime.
Fair warning though: I have no idea whether this works on iOS.
UPDATE: found some code :)
#include <sys/types.h>
#include <sys/sysctl.h>
#define MIB_SIZE 2
int mib[MIB_SIZE];
size_t size;
struct timeval boottime;
mib[0] = CTL_KERN;
mib[1] = KERN_BOOTTIME;
size = sizeof(boottime);
if (sysctl(mib, MIB_SIZE, &boottime, &size, NULL, 0) != -1)
{
// successful call
NSDate* bootDate = [NSDate dateWithTimeIntervalSince1970:
boottime.tv_sec + boottime.tv_usec / 1.e6];
}
see if this works...
The accepted answer, using sysctl, works, but the values returned by sysctl for KERN_BOOTTIME, at least in my testing (Darwin Kernel Version 11.4.2), are always in whole seconds (the microseconds field, tv_usec, is 0). This means the resulting time may be up to a second off, which is not very accurate.
Also, having compared that value to one derived experimentally from the difference between the REALTIME_CLOCK and CALENDAR_CLOCK, they sometimes differ by a couple of seconds, so it's not clear whether the KERN_BOOTTIME value corresponds exactly to the time basis for the uptime clocks.
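For reference, a hedged sketch of that experimental derivation using the Mach clock services (host_get_clock_service and clock_get_time are the real APIs; SYSTEM_CLOCK counts time since boot and CALENDAR_CLOCK counts wall-clock time; the two reads are not atomic, so the same preemption caveat from the question applies):
#include <mach/mach.h>
#include <mach/clock.h>
clock_serv_t calClock, sysClock;
mach_timespec_t calNow, sysNow;
host_get_clock_service(mach_host_self(), CALENDAR_CLOCK, &calClock);
host_get_clock_service(mach_host_self(), SYSTEM_CLOCK, &sysClock);
clock_get_time(calClock, &calNow);   // seconds/nanoseconds since 1970
clock_get_time(sysClock, &sysNow);   // seconds/nanoseconds since boot
mach_port_deallocate(mach_task_self(), calClock);
mach_port_deallocate(mach_task_self(), sysClock);
NSTimeInterval bootSince1970 =
    ((double)calNow.tv_sec - (double)sysNow.tv_sec) +
    ((double)calNow.tv_nsec - (double)sysNow.tv_nsec) / 1e9;
NSDate *derivedBootDate = [NSDate dateWithTimeIntervalSince1970:bootSince1970];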
There is another way; it can give a result slightly different (smaller or larger) than the accepted answer. Comparing the two, I got a difference of -7 seconds on OS X 10.9.3 and +2 seconds on iOS 7.1.1. As I understand it, this approach gives the same result even if the wall clock is changed, whereas the accepted answer gives a different result when the wall clock changes.
Here is the code:
#include <sys/types.h>
#include <sys/sysctl.h>
#include <assert.h>
#define COUNT_ARRAY_ELEMS(arr) (sizeof(arr)/sizeof(arr[0]))
static CFAbsoluteTime getKernelTaskStartTime(void) {
    enum { MICROSECONDS_IN_SEC = 1000 * 1000 };
    struct kinfo_proc info;
    bzero(&info, sizeof(info));
    // Initialize mib, which tells sysctl the info we want; in this case
    // we're looking for information about a specific process, PID 0 (the kernel task).
    int mib[] = {CTL_KERN, KERN_PROC, KERN_PROC_PID, 0};
    // Call sysctl.
    size_t size = sizeof(info);
    const int sysctlResult = sysctl(mib, COUNT_ARRAY_ELEMS(mib), &info, &size, NULL, 0);
    assert(sysctlResult != -1);   // sysctl returns -1 on failure
    const struct timeval *timeVal = &(info.kp_proc.p_starttime);
    NSTimeInterval result = -kCFAbsoluteTimeIntervalSince1970;
    result += timeVal->tv_sec;
    result += timeVal->tv_usec / (double)MICROSECONDS_IN_SEC;
    return result;
}
Refer to this category
NSDate+BootTime.h
#import <Foundation/Foundation.h>
@interface NSDate (BootTime)
+ (NSDate *)bootTime;
+ (NSTimeInterval)bootTimeTimeIntervalSinceReferenceDate;
@end
NSDate+BootTime.m
#import "NSDate+BootTime.h"
#include <sys/types.h>
#include <sys/sysctl.h>
@implementation NSDate (BootTime)
+ (NSDate *)bootTime {
return [NSDate dateWithTimeIntervalSinceReferenceDate:[NSDate bootTimeTimeIntervalSinceReferenceDate]];
}
+ (NSTimeInterval)bootTimeTimeIntervalSinceReferenceDate {
return getKernelTaskStartTime();
}
////////////////////////////////////////////////////////////////////////
#pragma mark - Private
////////////////////////////////////////////////////////////////////////
#define COUNT_ARRAY_ELEMS(arr) sizeof(arr)/sizeof(arr[0])
static CFAbsoluteTime getKernelTaskStartTime(void) {
enum { MICROSECONDS_IN_SEC = 1000 * 1000 };
struct kinfo_proc info;
bzero(&info, sizeof(info));
// Initialize mib, which tells sysctl the info we want, in this case
// we're looking for information about a specific process ID = 0.
int mib[] = {CTL_KERN, KERN_PROC, KERN_PROC_PID, 0};
// Call sysctl.
size_t size = sizeof(info);
const int sysctlResult = sysctl(mib, COUNT_ARRAY_ELEMS(mib), &info, &size, NULL, 0);
if (sysctlResult != -1) {
const struct timeval * timeVal = &(info.kp_proc.p_starttime);
NSTimeInterval result = -kCFAbsoluteTimeIntervalSince1970;
result += timeVal->tv_sec;
result += timeVal->tv_usec / (double)MICROSECONDS_IN_SEC;
return result;
}
return 0;
}
@end
The routines inside mach/mach_time.h are guaranteed to be monotonically increasing, unlike NSDate.
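If a monotonic timestamp (rather than an actual boot date) is all you need, a minimal sketch of that approach, assuming only mach_absolute_time and mach_timebase_info:
#include <mach/mach_time.h>
#include <stdint.h>
// Monotonically increasing time in seconds; unaffected by wall-clock changes.
static double monotonicSeconds(void) {
    static mach_timebase_info_data_t timebase;   // numer/denom convert ticks to nanoseconds
    if (timebase.denom == 0) {
        mach_timebase_info(&timebase);
    }
    uint64_t ticks = mach_absolute_time();
    return (double)ticks * timebase.numer / timebase.denom / 1e9;
}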
When using CFSet and CFDictionary configured with custom callbacks to use integers as their keys, I've noticed some wildly varying performance of their internal hashing implementation. I'm using 64 bit integers (int64_t) with a range of roughly 1 - 1,000,000.
While profiling my application, I noticed that every so often a certain combination of factors would produce unusually poor performance. Looking at Instruments, CFBasicHash was taking much longer than usual.
After a bunch of investigating, I finally narrowed things down to a set of 400,000 integers that, when added to a CFSet or CFDictionary cause terrible performance with hashing.
The hashing implementation in CFBasicHash.m is beyond my understanding for a problem like this, so I was wondering if anyone had any idea why such a completely random set of integers could cause such dreadful performance.
The following test application will output an average iteration time of 37ms for adding sequential integers to a set, but an average run time of 3622ms when adding the same number of integers but from the problematic data set.
(And if you insert the same number of completely random integers, then performance is much closer to 37ms. As well, adding these problematic integers to an std::map or std:set produces acceptable performance.)
#import <Foundation/Foundation.h>
extern uint64_t dispatch_benchmark(size_t count, void (^block)(void));
int main(int argc, char *argv[]) {
@autoreleasepool {
NSString *data = [NSString stringWithContentsOfFile:@"Integers.txt" encoding:NSUTF8StringEncoding error:NULL];
NSArray *components = [data componentsSeparatedByString:@","];
NSInteger count = components.count;
int64_t *numbers = (int64_t *)malloc(sizeof(int64_t) * count);
int64_t *sequentialNumbers = (int64_t *)malloc(sizeof(int64_t) * count);
for (NSInteger c = 0; c < count; c++) {
numbers[c] = [components[c] integerValue];
sequentialNumbers[c] = c;
}
NSLog(#"Beginning test with %# numbers...", #(count));
// Test #1 - Loading sequential integers
uint64_t t1 = dispatch_benchmark(10, ^{
CFMutableSetRef mutableSetRef = CFSetCreateMutable(NULL, 0, NULL);
for (NSInteger c = 0; c < count; c++) {
CFSetAddValue(mutableSetRef, (const void *)sequentialNumbers[c]);
}
NSLog(#"Sequential iteration completed with %# items in set.", #(CFSetGetCount(mutableSetRef)));
CFRelease(mutableSetRef);
});
NSLog(#"Sequential Numbers Average Runtime: %llu ms", t1 / NSEC_PER_MSEC);
NSLog(#"-----");
// Test #2 - Loading data set
uint64_t t2 = dispatch_benchmark(10, ^{
CFMutableSetRef mutableSetRef = CFSetCreateMutable(NULL, 0, NULL);
for (NSInteger c = 0; c < count; c++) {
CFSetAddValue(mutableSetRef, (const void *)numbers[c]);
}
NSLog(#"Dataset iteration completed with %# items in set.", #(CFSetGetCount(mutableSetRef)));
CFRelease(mutableSetRef);
});
NSLog(#"Dataset Average Runtime: %llu ms", t2 / NSEC_PER_MSEC);
free(sequentialNumbers);
free(numbers);
}
}
Example output:
Sequential Numbers Average Runtime: 37 ms
Dataset Average Runtime: 3622 ms
The integers are available here:
Gist (Integers.txt) or Dropbox (Integers.txt)
Can anyone help explain what is "special" about the given integers that might cause such a degradation in the hashing implementation used by CFSet and CFDictionary?
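I can't explain the slowdown itself, but one hedged thing to experiment with is supplying your own CFSetCallBacks whose hash callback mixes the integer bits before the table uses them; mixedIntegerHash and integerEqual below are illustrative names, not part of any API, and this is a diagnostic sketch rather than an answer:
#include <CoreFoundation/CoreFoundation.h>
#include <stdint.h>
// The integer key is stored directly in the pointer value, as in the test code above.
static CFHashCode mixedIntegerHash(const void *value) {
    uint64_t x = (uint64_t)value;
    x *= 0x9E3779B97F4A7C15ULL;           // Fibonacci-style multiplicative mixing
    return (CFHashCode)(x ^ (x >> 32));
}
static Boolean integerEqual(const void *a, const void *b) {
    return a == b;                        // plain integer comparison via the pointer values
}
// Usage sketch:
// CFSetCallBacks callbacks = { 0, NULL, NULL, NULL, integerEqual, mixedIntegerHash };
// CFMutableSetRef mutableSetRef = CFSetCreateMutable(NULL, 0, &callbacks);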
I have these functions in Objective-C:
-(void)emptyFunction
{
NSTimeInterval startTime = [[NSDate date] timeIntervalSinceReferenceDate];
float b;
for (int i=0; i<1000000; i++) {
b = [self returnNr:i];
}
NSTimeInterval endTime = [[NSDate date] timeIntervalSinceReferenceDate];
double elapsedTime = endTime - startTime;
NSLog(#"1. %f", elapsedTime);
}
-(float)returnNr:(float)number
{
return number;
}
and
-(void)sqrtFunction
{
NSTimeInterval startTime = [[NSDate date] timeIntervalSinceReferenceDate];
float b;
for (int i=0; i<1000000; i++) {
b = sqrtf(i);
}
NSTimeInterval endTime = [[NSDate date] timeIntervalSinceReferenceDate];
double elapsedTime = endTime - startTime;
NSLog(#"2. %f", elapsedTime);
}
When I call them, in any order, it prints in console the following:
2014-01-13 12:23:00.458 RapidTest[443:70b] 1. 0.011970
2014-01-13 12:23:00.446 RapidTest[443:70b] 2. 0.006308
How is this happening? How can sqrtf() be twice as fast as a function that just returns a value? I know sqrtf() works at the bit level with assembly and such, but faster than just a return? How is that possible?
Calling [self returnNr:i] is not the same as simply calling a C function. Instead you're sending a message to self, which gets translated to the equivalent in C:
objc_msgSend(self, @selector(returnNr:), i);
This will eventually call your returnNr: implementation, but there's some overhead involved. For more details on what's going on inside objc_msgSend, see objc_msgSend tour or Let's build objc_msgSend.
[edit]
Also see An Illustrated History of objc_msgSend, which shows how the implementation changed over time. Executing the code from your Q will have resulted in different results on different platform versions because of improvements / trade-offs made during the evolution of objc_msgSend.
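If you want the measurement to isolate the arithmetic from the dynamic dispatch, one hedged option is to resolve the IMP once and call it through a function pointer (methodForSelector: is the real API; the cast must match the method's actual signature):
// Resolve the implementation once, outside the loop, then call it directly.
typedef float (*ReturnNrIMP)(id, SEL, float);
ReturnNrIMP returnNrFunc = (ReturnNrIMP)[self methodForSelector:@selector(returnNr:)];
float b = 0;
for (int i = 0; i < 1000000; i++) {
    b = returnNrFunc(self, @selector(returnNr:), (float)i);   // no objc_msgSend on each iteration
}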
Your benchmark is flawed.
First, in both cases a compiler may optimize your loop as follows:
for (int i=0; i<1000000-1; i++) {
[self returnNr:i];
}
float b = [self returnNr:i];
respectively:
for (int i=0; i<1000000-1; i++) {
sqrtf(i);
}
float b = sqrtf(i);
Then, IFF the compiler can deduce that the statement sqrtf(i) has no side effect (other than returning a value), it may further optimize as follows:
float b = sqrtf(1000000-1);
It's very likely that clang will apply this optimization, though it's implementation dependent.
See also: Do C and C++ optimizers typically know which functions have no side effects?
In the case of Objective-C method invocations the compiler has far fewer optimization opportunities; it will very likely assume that a method call may have side effects and must always be performed. Thus, the second optimization will likely not be applied in an optimized build.
Additionally, your approach to measuring the elapsed time is not nearly accurate enough. You should use the mach timer to get a precise absolute time, execute a number of runs, and take the minimum of the runs.
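A hedged sketch of that measurement approach (mach_absolute_time and mach_timebase_info are the real APIs; the run count and the block being timed are placeholders):
#include <mach/mach_time.h>
#include <stdint.h>
// Time a block several times and report the fastest run, in seconds.
static double timeFastestRun(int runs, void (^block)(void)) {
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);
    uint64_t best = UINT64_MAX;
    for (int r = 0; r < runs; r++) {
        uint64_t start = mach_absolute_time();
        block();
        uint64_t elapsed = mach_absolute_time() - start;
        if (elapsed < best) best = elapsed;           // the minimum filters out preemption noise
    }
    return (double)best * tb.numer / tb.denom / 1e9;  // convert ticks to seconds
}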
Your Obj-C method
-(float)returnNr:(float)number
{
return number;
}
first gets compiled to a C function and is then executed through the runtime, whereas sqrtf() is already a plain C function.
Hence C functions will generally be faster than Objective-C method calls.
How can I get the boot time of iOS in Objective-C?
Is there a way to get it?
Don't know if this will work in iOS, but in OS X (which is essentially the same OS) you would use sysctl(). This is how the OS X Unix utility uptime does it. Source code is available - search for "boottime".
#include <sys/types.h>
#include <sys/sysctl.h>
// ....
#define MIB_SIZE 2
int mib[MIB_SIZE];
size_t size;
struct timeval boottime;
mib[0] = CTL_KERN;
mib[1] = KERN_BOOTTIME;
size = sizeof(boottime);
if (sysctl(mib, MIB_SIZE, &boottime, &size, NULL, 0) != -1)
{
// successful call
NSDate* bootDate = [NSDate dateWithTimeIntervalSince1970:boottime.tv_sec];
}
The restricted nature of programming in the iOS sandboxed environment might make it not work, I don't know, I haven't tried it.
I took JeremyP's answer, gave the result the full microsecond precision, clarified the names of local variables, improved the order, and put it into a method:
#include <sys/types.h>
#include <sys/sysctl.h>
// ....
+ (nullable NSDate *)bootDate
{
// nameIntArray and nameIntArrayLen
int nameIntArrayLen = 2;
int nameIntArray[nameIntArrayLen];
nameIntArray[0] = CTL_KERN;
nameIntArray[1] = KERN_BOOTTIME;
// boot_timeval
struct timeval boot_timeval;
size_t boot_timeval_size = sizeof(boot_timeval);
if (sysctl(nameIntArray, nameIntArrayLen, &boot_timeval, &boot_timeval_size, NULL, 0) == -1)
{
return nil;
}
// bootSince1970TimeInterval
NSTimeInterval bootSince1970TimeInterval = (NSTimeInterval)boot_timeval.tv_sec + ((NSTimeInterval)boot_timeval.tv_usec / 1000000);
// return
return [NSDate dateWithTimeIntervalSince1970:bootSince1970TimeInterval];
}
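As a usage sketch tied back to correlating uptime values with dates (hedged, since KERN_BOOTTIME is not guaranteed to share exactly the same basis as the uptime clock): once you have the boot date, an uptime-based timestamp can be converted to a wall-clock date by adding it to that boot date. The receiver class below is a placeholder for wherever the bootDate method lives.
NSDate *bootDate = [SomeClass bootDate];                                    // the method above; class name is illustrative
NSTimeInterval uptimeStamp = [[NSProcessInfo processInfo] systemUptime];    // or e.g. a Core Motion timestamp
NSDate *eventDate = [bootDate dateByAddingTimeInterval:uptimeStamp];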
If I have an unsigned char *data pointer and I want to check whether the first size_t length bytes at that pointer are all zero, what would be the fastest way to do that? In other words, what's the fastest way to make sure a region of memory is blank?
I am implementing in iOS, so you can assume iOS frameworks are available, if that helps. On the other hand, simple C approaches (memcmp and the like) are also OK.
Note, I am not trying to clear the memory, but rather trying to confirm that it is already clear (I am trying to find out whether there is anything at all in some bitmap data, if that helps). For example, I think the following would work, though I have not tried it yet:
- (BOOL)data:(unsigned char *)data isNullToLength:(size_t)length {
unsigned char tester[length];   // a VLA cannot take an initializer, hence the memset below
memset(tester, 0, length);
if (memcmp(tester, data, length) != 0) {
return NO;
}
return YES;
}
I would rather not create a tester array, though, because the source data may be quite large and I'd rather avoid allocating memory for the test, even temporarily. But I may just being too conservative there.
UPDATE: Some Tests
Thanks to everyone for the great responses below. I decided to create a test app to see how these performed, the answers surprised me, so I thought I'd share them. First I'll show you the version of the algorithms I used (in some cases they differ slightly from those proposed) and then I'll share some results from the field.
The Tests
First I created some sample data:
size_t length = 1024 * 768;
unsigned char *data = (unsigned char *)calloc(sizeof(unsigned char), (unsigned long)length);
int i;
int count;
long check;
int loop = 5000;
Each test consisted of a loop run loop times. During the loop some random data was added to and removed from the data byte stream. Note that half the time there was actually no data added, so half the time the test should not find any non-zero data. Note the testZeros call is a placeholder for calls to the test routines below. A timer was started before the loop and stopped after the loop.
count = 0;
for (i=0; i<loop; i++) {
int r = random() % length;
if (random() % 2) { data[r] = 1; }
if (! testZeros(data, length)) {
count++;
}
data[r] = 0;
}
Test A: nullToLength. This was more or less my original formulation above, debugged and simplified a bit.
- (BOOL)data:(void *)data isNullToLength:(size_t)length {
void *tester = (void *)calloc(sizeof(void), (unsigned long)length);
int test = memcmp(tester, data, length);
free(tester);
return (! test);
}
Test B: allZero. Proposal by Carrotman.
BOOL allZero (unsigned char *data, size_t length) {
bool allZero = true;
for (int i = 0; i < length; i++){
if (*data++){
allZero = false;
break;
}
}
return allZero;
}
Test C: is_all_zero. Proposed by Lundin.
BOOL is_all_zero (unsigned char *data, size_t length)
{
BOOL result = TRUE;
unsigned char* end = data + length;
unsigned char* i;
for(i=data; i<end; i++) {
if(*i > 0) {
result = FALSE;
break;
}
}
return result;
}
Test D: sumArray. This is the top answer from the nearly duplicate question, proposed by vladr.
BOOL sumArray (unsigned char *data, size_t length) {
int sum = 0;
for (int i = 0; i < length; ++i) {
sum |= data[i];
}
return (sum == 0);
}
Test E: lulz. Proposed by Steve Jessop.
BOOL lulz (unsigned char *data, size_t length) {
if (length == 0) return 1;
if (*data) return 0;
return memcmp(data, data+1, length-1) == 0;
}
Test F: NSData. This is a test using NSData object I discovered in the iOS SDK while working on all of these. It turns out Apple does have an idea of how to compare byte streams that is designed to be hardware independent.
- (BOOL)nsdTestData: (NSData *)nsdData length: (NSUInteger)length {
void *tester = (void *)calloc(sizeof(void), (unsigned long)length);
NSData *nsdTester = [NSData dataWithBytesNoCopy:tester length:(NSUInteger)length freeWhenDone:NO];
int test = [nsdData isEqualToData:nsdTester];
free(tester);
return (test);
}
Results
So how did these approaches compare? Here are two sets of data, each representing 5000 loops through the check. First I tried this on the iPhone Simulator running on a relatively old iMac, then I tried this running on a first generation iPad.
On the iPhone 4.3 Simulator running on an iMac:
// Test A, nullToLength: 0.727 seconds
// Test F, NSData: 0.727
// Test E, lulz: 0.735
// Test C, is_all_zero: 7.340
// Test B, allZero: 8.736
// Test D, sumArray: 13.995
On a first generation iPad:
// Test A, nullToLength: 21.770 seconds
// Test F, NSData: 22.184
// Test E, lulz: 26.036
// Test C, is_all_zero: 54.747
// Test B, allZero: 63.185
// Test D, sumArray: 84.014
These are just two samples; I ran the test many times with only slightly varying results. The order of performance was always the same: A and F very close, E just behind, then C, B, and D. I'd say that A, F, and E are virtual ties; on iOS I'd prefer F because it takes advantage of Apple's protection from processor-change issues, but A and E are very close. The memcmp approach clearly wins over the simple loop approach, close to ten times faster in the simulator and twice as fast on the device itself. Oddly enough, D, the winning answer from the other thread, performed very poorly in this test, probably because it does not break out of the loop when it hits the first difference.
I think you should do it with an explicit loop, but just for lulz:
if (length == 0) return 1;
if (*pdata) return 0;
return memcmp(pdata, pdata+1, length-1) == 0;
Unlike memcpy, memcmp does not require that the two data sections don't overlap.
It may well be slower than the loop, though, because the un-alignedness of the input pointers means there probably isn't much the implementation of memcmp can do to optimize, plus it's comparing memory with memory rather than memory with a constant. Easy enough to profile it and find out.
Not sure if it's the best, but I probably would do something like this:
bool allZero = true;
for (size_t i = 0; i < length; i++){   // length = number of bytes to check
if (*data++){
//Roll back so data points to the non-zero char
data--;
//Do whatever is needed if it isn't zero.
allZero = false;
break;
}
}
If you've just allocated this memory, you can always call calloc rather than malloc (calloc guarantees that all the data is zeroed out). (Edit: reading your comment on the first post, you don't really need this. I'll leave it here just in case.)
If you're allocating the memory yourself, I'd suggest using the calloc() function. It's just like malloc(), except it zeros out the buffer first. It's what's used to allocate memory for Objective-C objects and is the reason that all ivars default to 0.
On the other hand, if this is a statically declared buffer, or a buffer you're not allocating yourself, memset() is the easy way to do this.
Logic to get a value, check it, and set it will be at least as expensive as just setting it. You want it to be null, so just set it to null using memset().
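For completeness, a minimal sketch of that suggestion (memset is the standard C call; data and length are the buffer and byte count from the question):
#include <string.h>
memset(data, 0, length);   // zero the whole region instead of checking it first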
This would be the preferred way to do it in C:
BOOL is_all_zero (const unsigned char* data, size_t length)
{
BOOL result = TRUE;
const unsigned char* end = data + length;
const unsigned char* i;
for(i=data; i<end; i++)
{
if(*i > 0)
{
result = FALSE;
break;
}
}
return result;
}
(Though note that, strictly and formally speaking, a memory cell containing a NULL pointer need not necessarily be 0, as long as casting a null pointer yields the value zero and casting a zero to a pointer yields a NULL pointer. In practice this shouldn't matter, as all known compilers use 0 or (void*)0 for NULL.)
Note the edit to the initial question above. I did some tests and it is clear that the memcmp approach or using Apple's NSData object and its isEqualToData: method are the best approaches for speed. The simple loops are clearer to me, but slower on the device.
I have been working on reading in an audio asset using AVAssetReader so that I can later play back the audio with an AUGraph that has an AudioUnit callback. I have the AUGraph and AudioUnit callback working, but it reads files from disk, and if the file is too big it takes up too much memory and crashes the app. So instead I am reading the asset directly, and only a limited amount at a time; I will then manage it as a double buffer and give the AUGraph what it needs when it needs it.
(Note: I would love to know whether I can use Audio Queue Services and still use an AUGraph with an AudioUnit callback, so that memory is managed for me by the iOS frameworks.)
My problem is that I do not have a good understanding of arrays, structs, and pointers in C. The part where I need help is taking the individual AudioBufferList, which holds onto a single AudioBuffer, and adding that data to another AudioBufferList which holds onto all of the data to be used later. I believe I need to use memcpy, but it is not clear how to use it or even how to initialize an AudioBufferList for my purposes. I am using MixerHost for reference, which is the sample project from Apple that reads in the file from disk.
I have uploaded my work in progress if you would like to load it up in Xcode. I've figured out most of what I need to get this done and once I have the data being collected all in one place I should be good to go.
Sample Project: MyAssetReader.zip
In the header you can see I declare the bufferList as a pointer to the struct.
@interface MyAssetReader : NSObject {
BOOL reading;
signed long sampleTotal;
Float64 totalDuration;
AudioBufferList *bufferList; // How should this be handled?
}
Then I allocate bufferList this way, largely borrowing from MixerHost...
UInt32 channelCount = [asset.tracks count];
if (channelCount > 1) {
NSLog(#"We have more than 1 channel!");
}
bufferList = (AudioBufferList *) malloc (
sizeof (AudioBufferList) + sizeof (AudioBuffer) * (channelCount - 1)
);
if (NULL == bufferList) {NSLog (@"*** malloc failure for allocating bufferList memory"); return;}
// initialize the mNumberBuffers member
bufferList->mNumberBuffers = channelCount;
// initialize the mBuffers member to 0
AudioBuffer emptyBuffer = {0};
size_t arrayIndex;
for (arrayIndex = 0; arrayIndex < channelCount; arrayIndex++) {
// set up the AudioBuffer structs in the buffer list
bufferList->mBuffers[arrayIndex] = emptyBuffer;
bufferList->mBuffers[arrayIndex].mNumberChannels = 1;
// How should mData be initialized???
bufferList->mBuffers[arrayIndex].mData = malloc(sizeof(AudioUnitSampleType));
}
Finally I loop through the reads.
int frameCount = 0;
CMSampleBufferRef nextBuffer;
while (assetReader.status == AVAssetReaderStatusReading) {
nextBuffer = [assetReaderOutput copyNextSampleBuffer];
AudioBufferList localBufferList;
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(nextBuffer, NULL, &localBufferList, sizeof(localBufferList), NULL, NULL,
kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);
// increase the total number of bytes
bufferList->mBuffers[0].mDataByteSize += localBufferList.mBuffers[0].mDataByteSize;
// carefully copy the data into the buffer list
memcpy(bufferList->mBuffers[0].mData + frameCount, localBufferList.mBuffers[0].mData, sizeof(AudioUnitSampleType));
// get information about duration and position
//CMSampleBufferGet
CMItemCount sampleCount = CMSampleBufferGetNumSamples(nextBuffer);
Float64 duration = CMTimeGetSeconds(CMSampleBufferGetDuration(nextBuffer));
Float64 presTime = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(nextBuffer));
if (isnan(duration)) duration = 0.0;
if (isnan(presTime)) presTime = 0.0;
//NSLog(#"sampleCount: %ld", sampleCount);
//NSLog(#"duration: %f", duration);
//NSLog(#"presTime: %f", presTime);
self.sampleTotal += sampleCount;
self.totalDuration += duration;
frameCount++;
free(nextBuffer);
}
I am unsure about the way I handle mDataByteSize and mData, especially with memcpy. Since mData is a void pointer, this is an extra tricky area.
memcpy(bufferList->mBuffers[0].mData + frameCount, localBufferList.mBuffers[0].mData, sizeof(AudioUnitSampleType));
In this line I think it should be copying the value from the data in localBufferList to the position in the bufferList plus the number of frames to position the pointer where it should write the data. I have a couple of ideas on what I need to change to get this to work.
Since pointer arithmetic on a void pointer advances one byte at a time rather than by sizeof(AudioUnitSampleType), I may need to multiply the offset by sizeof(AudioUnitSampleType) to get the memcpy writing to the right position (see the sketch below).
I may not be using malloc properly to prepare mData, but since I am not sure how many frames there will be, I am not sure what to do to initialize it.
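For illustration only, a hedged sketch of that byte-offset idea (not the project's actual code): keep a running byte count, copy each incoming buffer's full mDataByteSize, and assume mData was malloc'd large enough up front.
// Sketch: append one incoming AudioBuffer onto the accumulating buffer.
size_t bytesSoFar = bufferList->mBuffers[0].mDataByteSize;          // bytes already copied
size_t incomingBytes = localBufferList.mBuffers[0].mDataByteSize;   // bytes in this sample buffer
memcpy((uint8_t *)bufferList->mBuffers[0].mData + bytesSoFar,       // byte-accurate destination offset
       localBufferList.mBuffers[0].mData,
       incomingBytes);                                              // copy the whole buffer, not one sample
bufferList->mBuffers[0].mDataByteSize = (UInt32)(bytesSoFar + incomingBytes);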
Currently when I run this app it ends this function with an invalid pointer for bufferList.
I appreciate your help with making me better understand how to manage an AudioBufferList.
I've come up with my own answer. I decided to use an NSMutableData object which allows me to appendBytes from the CMSampleBufferRef after calling CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer to get an AudioBufferList.
[data appendBytes:localBufferList.mBuffers[0].mData length:localBufferList.mBuffers[0].mDataByteSize];
Once the read loop is done I have all of the data in my NSMutableData object. I then create and populate the AudioBufferList this way.
audioBufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList));
if (NULL == audioBufferList) {
NSLog (#"*** malloc failure for allocating audioBufferList memory");
[data release];
return;
}
audioBufferList->mNumberBuffers = 1;
audioBufferList->mBuffers[0].mNumberChannels = channelCount;
audioBufferList->mBuffers[0].mDataByteSize = [data length];
audioBufferList->mBuffers[0].mData = (AudioUnitSampleType *)malloc([data length]);
if (NULL == audioBufferList->mBuffers[0].mData) {
NSLog (#"*** malloc failure for allocating mData memory");
[data release];
return;
}
memcpy(audioBufferList->mBuffers[0].mData, [data mutableBytes], [data length]);
[data release];
I'd appreciate a little code review on how I use malloc to create the struct and populate it. I am getting an EXC_BAD_ACCESS error sporadically, but I cannot pinpoint where the error is just yet. Since I am using malloc on the struct, I should not have to retain it anywhere. I do call free to release the child elements within the struct, and finally the struct itself, everywhere that I use malloc.
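For reference, a hedged sketch of that cleanup order (free the child mData buffer before the struct itself; the names match the code in this answer):
// Free in reverse order of allocation: mData first, then the AudioBufferList struct.
if (audioBufferList != NULL) {
    if (audioBufferList->mBuffers[0].mData != NULL) {
        free(audioBufferList->mBuffers[0].mData);
        audioBufferList->mBuffers[0].mData = NULL;
    }
    free(audioBufferList);
    audioBufferList = NULL;   // avoid reusing a dangling pointer, a common source of EXC_BAD_ACCESS
}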