How do you convert CMSampleBufferRef to NSData?
I've managed to get the data for an MPMediaItem by following Erik Aigner's answer on this thread; however, the data is of type CMSampleBufferRef.
I know CMSampleBufferRef is a struct and is defined in the CMSampleBuffer Reference in the iOS Dev Library, but I don't think I fully understand what it is. None of the CMSampleBuffer functions seem to be an obvious solution.
Here you go: this works for an audio sample buffer, which is what you are looking at. If you want to see the whole process (getting all the audio data from an MPMediaItem into a file), check out this question.
CMSampleBufferRef ref = [output copyNextSampleBuffer];
// NSLog(@"%@", ref);
if (ref == NULL)
    break;

//copy data to file
//read next one
AudioBufferList audioBufferList;
NSMutableData *data = [[NSMutableData alloc] init];
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(ref, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
// NSLog(@"%@", blockBuffer);

for (int y = 0; y < audioBufferList.mNumberBuffers; y++)
{
    AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
    Float32 *frame = (Float32 *)audioBuffer.mData;
    [data appendBytes:frame length:audioBuffer.mDataByteSize];
}

CFRelease(blockBuffer);
CFRelease(ref);
ref = NULL;
blockBuffer = NULL;
[data release];
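If you want the same thing as a reusable helper, here is a minimal sketch under manual reference counting. The function name is just illustrative, and it assumes the reader is producing interleaved linear PCM, as in the question linked above.

static NSData *NSDataFromAudioSampleBuffer(CMSampleBufferRef sampleBuffer)
{
    AudioBufferList audioBufferList;
    CMBlockBufferRef blockBuffer = NULL;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL,
        &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

    NSMutableData *data = [NSMutableData data];   // autoreleased
    for (UInt32 y = 0; y < audioBufferList.mNumberBuffers; y++)
    {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        [data appendBytes:audioBuffer.mData length:audioBuffer.mDataByteSize];
    }

    CFRelease(blockBuffer);   // release the retained block buffer before returning
    return data;
}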
Related
I'm playing around with the AVScreenShack example from Apple's website (Xcode project) which captures the desktop and displays the capture in a window in quasi real-time.
I have modified the project a little bit and implemented this delegate method:
-(void)captureOutput:(AVCaptureOutput*) captureOutput didOutputSampleBuffer:(CMSampleBufferRef) sampleBuffer fromConnection:(AVCaptureConnection*) connection
{
...
}
My question is: how do I convert the CMSampleBufferRef instance to a CGImageRef?
Thank you.
Here is how you can create a UIImage from a CMSampleBufferRef. This worked for me when responding to captureStillImageAsynchronouslyFromConnection:completionHandler: on AVCaptureStillImageOutput.
// CMSampleBufferRef imageDataSampleBuffer;
CMBlockBufferRef buff = CMSampleBufferGetDataBuffer(imageDataSampleBuffer);
size_t len = CMBlockBufferGetDataLength(buff);
char * data = NULL;
CMBlockBufferGetDataPointer(buff, 0, NULL, &len, &data);
NSData * d = [[NSData alloc] initWithBytes:data length:len];
UIImage * img = [[UIImage alloc] initWithData:d];
It looks like the data coming out of CMBlockBufferGetDataPointer is JPEG data.
UPDATE: To fully answer your question, you can call CGImage on img from my code to actually get a CGImageRef.
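For the AVScreenShack case, though, the delegate above is an AVCaptureVideoDataOutput callback, so the sample buffer should carry an uncompressed CVPixelBuffer rather than JPEG data. Under that assumption, a sketch that goes through Core Image would be:

// Sketch: CMSampleBufferRef -> CGImageRef via Core Image, assuming the buffer
// wraps an uncompressed CVPixelBuffer (as AVCaptureVideoDataOutput delivers).
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *ciImage = [CIImage imageWithCVImageBuffer:pixelBuffer];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:ciImage fromRect:[ciImage extent]];
// ... draw or save cgImage here ...
CGImageRelease(cgImage);   // createCGImage:fromRect: returns a +1 reference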
I'm a new developer in Obj-C and I'm trying to make a fill-color app. When I touch an image, the color changes, but I get a memory leak in this function and need your help:
-(void) updateImageWithColorSelected:(int) pos {
    CGImageRef imageRef = self.basicImage.CGImage;
    NSData *data = (NSData *)CGDataProviderCopyData(CGImageGetDataProvider(imageRef)); //leak here
    Byte *pixels = (Byte *)[data bytes];

    //change color...
    for (int i = 0; i < IMG_SIZE; i++) {
        pixels[i] = 255;
    }

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, [data length], NULL);
    CGImageRef newImageRef = CGImageCreate(w, h, ...);
    self.basicImage = [UIImage imageWithCGImage:newImageRef];

    //release newImageRef
    CGImageRelease(newImageRef);

    // set basic image to img
    [self.img setImage:self.basicImage];

    data = nil;
    [data release];
}
I tried removing all the code except the NSData *data = CGDataProviderCopyData(...) line, and the app still leaks.
Do you guys have any idea how to release "data"?
Thank you in advance.
// set basic image to img
[self.img setImage:self.basicImage];
data = nil;
[data release];
}
You're sending release to a nil pointer.
[data release];
data = nil;
}
This will do better.
Edit: the issue with CGDataProviderCreateWithData
When data is released, the data pointer you passed to CGDataProviderCreateWithData becomes invalid. This is expected. Proper use of this function requires you to allocate a buffer for the data and to provide a callback that releases that buffer when the provider is released.
The best solution for you is to use CGDataProviderCreateWithCFData instead, taking advantage of the toll-free bridging between Foundation and CoreFoundation objects.
Use:
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
Note that at present the data provider created by the call to CGDataProviderCreateWithData() or CGDataProviderCreateWithCFData() is also being leaked, and should be released by calling CGDataProviderRelease(). (This leak is undoubtedly minor compared to the original leaked data.)
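Putting those pieces together, a corrected version of the method in the question might look roughly like this. It is only a sketch under manual reference counting; the bitmap parameters are simply copied from the source image, and the fill logic is the same placeholder as in the question:

-(void) updateImageWithColorSelected:(int) pos {
    // NOTE: pos is unused here; the real color-selection logic was elided in the question.
    CGImageRef imageRef = self.basicImage.CGImage;

    // Copy the pixel data into a mutable buffer we own (CGDataProviderCopyData follows
    // the Copy rule, so the returned object must be released).
    NSData *data = (NSData *)CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
    NSMutableData *pixelData = [[data mutableCopy] autorelease];
    [data release];

    Byte *pixels = (Byte *)[pixelData mutableBytes];
    for (NSUInteger i = 0; i < [pixelData length]; i++) {
        pixels[i] = 255;   // placeholder fill, as in the question
    }

    // Wrap the mutable data; the provider retains it, so no manual buffer management.
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)pixelData);
    CGImageRef newImageRef = CGImageCreate(CGImageGetWidth(imageRef),
                                           CGImageGetHeight(imageRef),
                                           CGImageGetBitsPerComponent(imageRef),
                                           CGImageGetBitsPerPixel(imageRef),
                                           CGImageGetBytesPerRow(imageRef),
                                           CGImageGetColorSpace(imageRef),
                                           CGImageGetBitmapInfo(imageRef),
                                           provider, NULL, NO,
                                           CGImageGetRenderingIntent(imageRef));
    CGDataProviderRelease(provider);   // the new image retains the provider

    self.basicImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);       // the UIImage retains the CGImage

    [self.img setImage:self.basicImage];
}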
Currently, I'm doing a little test project to see if I can get samples from an AVAssetReader to play back using an AudioQueue on iOS.
I've read this:
( Play raw uncompressed sound with AudioQueue, no sound )
and this: ( How to correctly read decoded PCM samples on iOS using AVAssetReader -- currently incorrect decoding ),
both of which actually did help. Before reading them, I was getting no sound at all. Now I'm getting sound, but the audio is playing SUPER fast. This is my first foray into audio programming, so any help is greatly appreciated.
I initialize the reader thusly:
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
    [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
    [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
    [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
    [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
    [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
    [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
    nil];

output = [[AVAssetReaderAudioMixOutput alloc] initWithAudioTracks:uasset.tracks audioSettings:outputSettings];
[reader addOutput:output];
...
And I grab the data thusly:
CMSampleBufferRef ref = [output copyNextSampleBuffer];
// NSLog(@"%@", ref);
if (ref == NULL)
    return;

//copy data to file
//read next one
AudioBufferList audioBufferList;
NSMutableData *data = [NSMutableData data];
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(ref, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
// NSLog(@"%@", blockBuffer);

if (blockBuffer == NULL)
{
    [data release];
    return;
}
if (&audioBufferList == NULL)
{
    [data release];
    return;
}

//stash data in same object
for (int y = 0; y < audioBufferList.mNumberBuffers; y++)
{
    // NSData *throwData;
    AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
    [self.delegate streamer:self didGetAudioBuffer:audioBuffer];
    /*
    Float32 *frame = (Float32 *)audioBuffer.mData;
    throwData = [NSData dataWithBytes:audioBuffer.mData length:audioBuffer.mDataByteSize];
    [self.delegate streamer:self didGetAudioBuffer:throwData];
    [data appendBytes:audioBuffer.mData length:audioBuffer.mDataByteSize];
    */
}
which eventually brings us to the audio queue, set up in this way:
//Apple's own code for canonical PCM
audioDesc.mSampleRate = 44100.0;
audioDesc.mFormatID = kAudioFormatLinearPCM;
audioDesc.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
audioDesc.mBytesPerPacket = 2 * sizeof (AudioUnitSampleType);  // 8
audioDesc.mFramesPerPacket = 1;
audioDesc.mBytesPerFrame = 1 * sizeof (AudioUnitSampleType);   // 4
audioDesc.mChannelsPerFrame = 2;
audioDesc.mBitsPerChannel = 8 * sizeof (AudioUnitSampleType);  // 32

err = AudioQueueNewOutput(&audioDesc, handler_OSStreamingAudio_queueOutput, self, NULL, NULL, 0, &audioQueue);
if (err) {
    #pragma warning handle error
    //never errs, am using breakpoint to check
    return;
}
and we enqueue thusly
while (inNumberBytes)
{
    size_t bufSpaceRemaining = kAQDefaultBufSize - bytesFilled;
    if (bufSpaceRemaining < inNumberBytes)
    {
        AudioQueueBufferRef fillBuf = audioQueueBuffer[fillBufferIndex];
        fillBuf->mAudioDataByteSize = bytesFilled;
        err = AudioQueueEnqueueBuffer(audioQueue, fillBuf, 0, NULL);
    }

    bufSpaceRemaining = kAQDefaultBufSize - bytesFilled;
    size_t copySize;
    if (bufSpaceRemaining < inNumberBytes)
    {
        copySize = bufSpaceRemaining;
    }
    else
    {
        copySize = inNumberBytes;
    }

    if (bytesFilled > packetBufferSize)
    {
        return;
    }

    AudioQueueBufferRef fillBuf = audioQueueBuffer[fillBufferIndex];
    memcpy((char*)fillBuf->mAudioData + bytesFilled, (const char*)(inInputData + offset), copySize);

    bytesFilled += copySize;
    packetsFilled = 0;
    inNumberBytes -= copySize;
    offset += copySize;
}
}
I tried to include as much code as possible to make it easy for everyone to point out where I'm being a moron. That said, I have a feeling my problem occurs either in the output settings declaration for the track reader or in the actual declaration of the AudioQueue (where I describe to the queue what kind of audio I'm going to be sending it). The fact of the matter is, I don't really know how to mathematically generate those numbers (bytes per packet, frames per packet, what have you). An explanation of that would be greatly appreciated, and thanks for the help in advance.
Not sure how much of an answer this is, but there will be too much text and links for a comment and hopefully it will help (maybe guide you to your answer).
First off, I know from my current project that adjusting the sample rate will affect the speed of the sound, so you can try playing with those settings. But 44k is what I see in most default implementations, including the Apple example SpeakHere. However, I would spend some time comparing your code to that example, because there are quite a few differences, like checking before enqueueing.
First check out this posting https://stackoverflow.com/a/4299665/530933
It talks about how you need to know the audio format, specifically how many bytes in a frame, and casting appropriately
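One hedged way to find those numbers at run time, rather than guessing, is to read the stream description straight off the sample buffer (a sketch using the ref variable from your read loop above):

// Sketch: read the real stream format off a sample buffer so bytes-per-frame,
// frames-per-packet, and bits-per-channel come from the data rather than guesses.
CMAudioFormatDescriptionRef formatDesc =
    (CMAudioFormatDescriptionRef)CMSampleBufferGetFormatDescription(ref);
const AudioStreamBasicDescription *asbd =
    CMAudioFormatDescriptionGetStreamBasicDescription(formatDesc);
if (asbd != NULL) {
    NSLog(@"bytes/frame %u, frames/packet %u, bits/channel %u, channels %u",
          (unsigned)asbd->mBytesPerFrame, (unsigned)asbd->mFramesPerPacket,
          (unsigned)asbd->mBitsPerChannel, (unsigned)asbd->mChannelsPerFrame);
}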
Also, good luck. I have had quite a few questions posted here, on the Apple forums, and on the iOS forum (not the official one), with very few responses and little help. To get where I am today (audio recording and streaming in ulaw) I ended up having to open an Apple Dev Support ticket, which, prior to tackling audio, I never knew existed (dev support). One good thing is that if you have a valid dev account you get 2 incidents for free! Core Audio is not fun. Documentation is sparse, and besides SpeakHere there are not many examples. One thing I did find is that the framework headers have some good info, and so does this book. Unfortunately I have only started the book; otherwise I might be able to help you further.
You can also check some of my own postings which I have tried to answer to the best of my abilities.
This is my main audio question, on which I have spent a lot of time compiling all the pertinent links and code.
using AQRecorder (audioqueue recorder example) in an objective c class
trying to use AVAssetWriter for ulaw audio (2)
For some reason, even though every example I've seen of the audio queue using LPCM had
ASBD.mBitsPerChannel = 8* sizeof (AudioUnitSampleType);
For me it turns out I needed
ASBD.mBitsPerChannel = 2*bytesPerSample;
for a description of:
ASBD.mFormatID = kAudioFormatLinearPCM;
ASBD.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
ASBD.mBytesPerPacket = bytesPerSample;
ASBD.mBytesPerFrame = bytesPerSample;
ASBD.mFramesPerPacket = 1;
ASBD.mBitsPerChannel = 2*bytesPerSample;
ASBD.mChannelsPerFrame = 2;
ASBD.mSampleRate = 48000;
I have no idea why this works, which bothers me a great deal... but hopefully I can figure it all out eventually.
If anyone can explain to me why this works, I'd be very thankful.
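For reference, here is how those fields normally relate for the interleaved 16-bit stereo PCM that the AVAssetReader output settings in the original question request. This is only a sketch under that assumption, not a verified fix, but note that if the queue is described as 32-bit (8.24 canonical) while the data is really 16-bit, it will consume each buffer roughly twice as fast as it should, which is one plausible cause of sped-up playback.

// Sketch: an ASBD matching 16-bit signed, interleaved, 2-channel, 44.1 kHz PCM.
// For linear PCM: mFramesPerPacket is always 1,
// mBytesPerFrame = mChannelsPerFrame * (mBitsPerChannel / 8),
// and mBytesPerPacket = mBytesPerFrame * mFramesPerPacket.
AudioStreamBasicDescription audioDesc = {0};
audioDesc.mSampleRate       = 44100.0;
audioDesc.mFormatID         = kAudioFormatLinearPCM;
audioDesc.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioDesc.mChannelsPerFrame = 2;                          // stereo
audioDesc.mBitsPerChannel   = 16;                         // matches AVLinearPCMBitDepthKey
audioDesc.mFramesPerPacket  = 1;                          // always 1 for linear PCM
audioDesc.mBytesPerFrame    = 2 * (16 / 8);               // 4 bytes per interleaved frame
audioDesc.mBytesPerPacket   = audioDesc.mBytesPerFrame;   // 4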
For extracting the RAW image buffer from an uncompressed v210 (with pixel format kCVPixelFormatType_422YpCbCr10) I tried to follow this great post:
Reading samples via AVAssetReader
The problem is that when my assetReader calls startReading, I get AVAssetReaderStatusFailed (this is with the output settings dictionary setting kCVPixelBufferPixelFormatTypeKey to kCVPixelFormatType_422YpCbCr10). If I leave the outputSettings nil, it parses each frame, but the CMSampleBufferRef buffers are empty.
I tried lots of the pixel formats such as
kCVPixelFormatType_422YpCbCr10
kCVPixelFormatType_422YpCbCr8
kCVPixelFormatType_422YpCbCr16
kCVPixelFormatType_422YpCbCr8_yuvs
kCVPixelFormatType_422YpCbCr8FullRange
kCVPixelFormatType_422YpCbCr_4A_8BiPlanar
but none of them work.
What am I doing wrong?
Any comments welcome...
Here is my code:
/* construct an AVAssetReader based on a file-based URL */
NSError *error = [[NSError alloc] init];
NSString *filePathString = [[NSString alloc]
    initWithString:@"/Users/johann/Raw12bit.mov"];
NSURL *movieUrl = [[NSURL alloc] initFileURLWithPath:filePathString];
AVURLAsset *movieAsset = [[AVURLAsset alloc] initWithURL:movieUrl options:nil];

/* determine image dimensions of images stored in movie asset */
CGSize size = [movieAsset naturalSize];
NSLog(@"movie asset natural size: size.width=%f size.height=%f",
    size.width, size.height);

/* allocate assetReader */
AVAssetReader *assetReader = [[AVAssetReader alloc] initWithAsset:movieAsset
                                                             error:&error];

/* get video track(s) from movie asset */
NSArray *videoTracks = [movieAsset tracksWithMediaType:AVMediaTypeVideo];

/* get first video track, if there is any */
AVAssetTrack *videoTrack0 = [videoTracks objectAtIndex:0];

/* set the desired video frame format into attribute dictionary */
NSDictionary *dictionary = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kCVPixelFormatType_422YpCbCr10],
    (NSString *)kCVPixelBufferPixelFormatTypeKey,
    nil];

/* construct the actual track output and add it to the asset reader */
AVAssetReaderTrackOutput *assetReaderOutput = [[AVAssetReaderTrackOutput alloc]
    initWithTrack:videoTrack0
    outputSettings:dictionary]; //nil or dictionary

/* main parser loop */
NSInteger i = 0;
if ([assetReader canAddOutput:assetReaderOutput]) {
    [assetReader addOutput:assetReaderOutput];
    NSLog(@"asset added to output.");

    /* start asset reader */
    if ([assetReader startReading] == YES) {
        /* read off the samples */
        CMSampleBufferRef buffer;
        while ([assetReader status] == AVAssetReaderStatusReading) {
            buffer = [assetReaderOutput copyNextSampleBuffer];
            i++;
            NSLog(@"decoding frame #%ld done.", i);
        }
    }
    else {
        NSLog(@"could not start reading asset.");
        NSLog(@"reader status: %ld", [assetReader status]);
    }
}
else {
    NSLog(@"could not add asset to output.");
}
Best Regards,
Johann
Try kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange; it worked fine for me.
The system only supports these formats:
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
kCVPixelFormatType_32BGRA
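Once the reader is delivering non-empty buffers, something along these lines (a sketch assuming the biplanar 4:2:0 format above was set in the output settings) can be dropped into the copyNextSampleBuffer loop to get at the raw plane data:

// Sketch: pull the raw Y and CbCr planes out of a biplanar 4:2:0 pixel buffer.
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(buffer);
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

void *lumaPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);    // Y plane
void *chromaPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);  // interleaved CbCr plane
size_t lumaBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
size_t chromaBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
// ... copy or inspect the plane data here ...

CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
CFRelease(buffer);   // copyNextSampleBuffer returns a +1 reference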
I have been working on reading in an audio asset using AVAssetReader so that I can later play the audio back with an AUGraph and an AudioUnit callback. I have the AUGraph and AudioUnit callback working, but it reads files from disk, and if the file is too big it takes up too much memory and crashes the app. So instead I am reading the asset directly, and only a limited amount at a time. I will then manage it as a double buffer and give the AUGraph what it needs when it needs it.
(Note: I would love know if I can use Audio Queue Services and still use an AUGraph with AudioUnit callback so memory is managed for me by the iOS frameworks.)
My problem is that I do not have a good understanding of arrays, structs and pointers in C. The part where I need help is taking the individual AudioBufferList, which holds a single AudioBuffer, and adding that data to another AudioBufferList which holds all of the data to be used later. I believe I need to use memcpy, but it is not clear how to use it, or even how to initialize an AudioBufferList for my purposes. I am using MixerHost for reference, which is the sample project from Apple that reads in the file from disk.
I have uploaded my work in progress if you would like to load it up in Xcode. I've figured out most of what I need to get this done and once I have the data being collected all in one place I should be good to go.
Sample Project: MyAssetReader.zip
In the header you can see I declare the bufferList as a pointer to the struct.
@interface MyAssetReader : NSObject {
    BOOL reading;
    signed long sampleTotal;
    Float64 totalDuration;
    AudioBufferList *bufferList; // How should this be handled?
}
Then I allocate bufferList this way, largely borrowing from MixerHost...
UInt32 channelCount = [asset.tracks count];
if (channelCount > 1) {
    NSLog(@"We have more than 1 channel!");
}

bufferList = (AudioBufferList *) malloc (
    sizeof (AudioBufferList) + sizeof (AudioBuffer) * (channelCount - 1)
);
if (NULL == bufferList) {NSLog (@"*** malloc failure for allocating bufferList memory"); return;}

// initialize the mNumberBuffers member
bufferList->mNumberBuffers = channelCount;

// initialize the mBuffers member to 0
AudioBuffer emptyBuffer = {0};
size_t arrayIndex;
for (arrayIndex = 0; arrayIndex < channelCount; arrayIndex++) {
    // set up the AudioBuffer structs in the buffer list
    bufferList->mBuffers[arrayIndex] = emptyBuffer;
    bufferList->mBuffers[arrayIndex].mNumberChannels = 1;
    // How should mData be initialized???
    bufferList->mBuffers[arrayIndex].mData = malloc(sizeof(AudioUnitSampleType));
}
Finally I loop through the reads.
int frameCount = 0;
CMSampleBufferRef nextBuffer;
while (assetReader.status == AVAssetReaderStatusReading) {
    nextBuffer = [assetReaderOutput copyNextSampleBuffer];

    AudioBufferList localBufferList;
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(nextBuffer, NULL, &localBufferList, sizeof(localBufferList), NULL, NULL,
        kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);

    // increase the number of total bytes
    bufferList->mBuffers[0].mDataByteSize += localBufferList.mBuffers[0].mDataByteSize;

    // carefully copy the data into the buffer list
    memcpy(bufferList->mBuffers[0].mData + frameCount, localBufferList.mBuffers[0].mData, sizeof(AudioUnitSampleType));

    // get information about duration and position
    //CMSampleBufferGet
    CMItemCount sampleCount = CMSampleBufferGetNumSamples(nextBuffer);
    Float64 duration = CMTimeGetSeconds(CMSampleBufferGetDuration(nextBuffer));
    Float64 presTime = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(nextBuffer));

    if (isnan(duration)) duration = 0.0;
    if (isnan(presTime)) presTime = 0.0;

    //NSLog(@"sampleCount: %ld", sampleCount);
    //NSLog(@"duration: %f", duration);
    //NSLog(@"presTime: %f", presTime);

    self.sampleTotal += sampleCount;
    self.totalDuration += duration;

    frameCount++;
    free(nextBuffer);
}
I am unsure about the way I handle mDataByteSize and mData, especially with memcpy. Since mData is a void pointer, this is an extra tricky area.
memcpy(bufferList->mBuffers[0].mData + frameCount, localBufferList.mBuffers[0].mData, sizeof(AudioUnitSampleType));
In this line I think it should be copying the data from localBufferList to the position in bufferList's data plus the number of frames already written, so that the pointer lands where the new data should go. I have a couple of ideas on what I need to change to get this to work (see the sketch below).
Since arithmetic on a void pointer advances one byte at a time, not by the size of an AudioUnitSampleType, I may need to multiply the offset by sizeof(AudioUnitSampleType) to get the memcpy into the right position.
I may not be using malloc properly to prepare mData, but since I am not sure how many frames there will be, I am not sure what to do to initialize it.
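For example, one way to keep that arithmetic in plain bytes (a sketch of the first idea, not a drop-in fix, and it still assumes bufferList->mBuffers[0].mData was malloc'd large enough for the whole asset):

// Sketch: track a running byte offset rather than adding a frame count to a void pointer.
size_t bytesCopied = 0;   // declared once, before the read loop

// ... inside the read loop, after CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer ...
size_t chunkSize = localBufferList.mBuffers[0].mDataByteSize;
memcpy((char *)bufferList->mBuffers[0].mData + bytesCopied,   // cast to char * for byte arithmetic
       localBufferList.mBuffers[0].mData,
       chunkSize);
bytesCopied += chunkSize;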
Currently when I run this app it ends this function with an invalid pointer for bufferList.
I appreciate your help with making me better understand how to manage an AudioBufferList.
I've come up with my own answer. I decided to use an NSMutableData object which allows me to appendBytes from the CMSampleBufferRef after calling CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer to get an AudioBufferList.
[data appendBytes:localBufferList.mBuffers[0].mData length:localBufferList.mBuffers[0].mDataByteSize];
Once the read loop is done I have all of the data in my NSMutableData object. I then create and populate the AudioBufferList this way.
audioBufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList));
if (NULL == audioBufferList) {
    NSLog(@"*** malloc failure for allocating audioBufferList memory");
    [data release];
    return;
}

audioBufferList->mNumberBuffers = 1;
audioBufferList->mBuffers[0].mNumberChannels = channelCount;
audioBufferList->mBuffers[0].mDataByteSize = [data length];
audioBufferList->mBuffers[0].mData = (AudioUnitSampleType *)malloc([data length]);
if (NULL == audioBufferList->mBuffers[0].mData) {
    NSLog(@"*** malloc failure for allocating mData memory");
    [data release];
    return;
}

memcpy(audioBufferList->mBuffers[0].mData, [data mutableBytes], [data length]);

[data release];
I'd appreciate a little code review of how I use malloc to create the struct and populate it. I am getting an EXC_BAD_ACCESS error sporadically, but I cannot pinpoint where the error is just yet. Since I am using malloc on the struct I should not have to retain it anywhere. I do call free to release the child elements within the struct, and finally the struct itself, everywhere that I use malloc.
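Since every malloc needs a matching free, a minimal cleanup sketch for the struct as allocated above (one buffer, with mData malloc'd separately) might look like this:

// Sketch: free the child buffer first, then the struct, and NULL the pointer so a
// stray later access fails loudly instead of reading freed memory.
if (audioBufferList != NULL) {
    if (audioBufferList->mBuffers[0].mData != NULL) {
        free(audioBufferList->mBuffers[0].mData);
        audioBufferList->mBuffers[0].mData = NULL;
    }
    free(audioBufferList);
    audioBufferList = NULL;
}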