This occurs in my attempts to render Metal content with a CAMetalLayer, and in a lot of the 'Metal By Example' sample code I've downloaded. The problem seems to be with the texture. Here's some code; I can't provide all of it, but I'll try to provide the most relevant parts. Creating the texture from the descriptor fails, printing this to the console:
failed assertion `MTLTextureDescriptor: Depth, Stencil, DepthStencil, and Multisample textures must be allocated with the MTLStorageModePrivate or MTLStorageModeMemoryless storage mode.'
- (void)buildDepthTexture
{
    CGSize drawableSize = self.layer.drawableSize;
    MTLTextureDescriptor *descriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatDepth32Float
                                                                                          width:drawableSize.width
                                                                                         height:drawableSize.height
                                                                                      mipmapped:NO];
    self.depthTexture = [self.device newTextureWithDescriptor:descriptor]; // Thread 1: signal SIGABRT
    [self.depthTexture setLabel:@"Depth Texture"];
}
Again, this is sample code that presumably worked at some point, but no longer does. So I thought, OK, let's allocate it with the private storage mode: descriptor.storageMode = MTLStorageModePrivate;
But when I do that, the render command encoder can't be created in draw.
failed assertion `Texture at depthAttachment has usage (0x01) which doesn't specify MTLTextureUsageRenderTarget (0x04)'
MTLRenderPassDescriptor *renderPass = [self newRenderPassWithColorAttachmentTexture:[drawable texture]];
id<MTLCommandBuffer> commandBuffer = [self.commandQueue commandBuffer];
id<MTLRenderCommandEncoder> commandEncoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPass]; //Thread 1: signal SIGABRT
Here's the code for newRenderPassWithColorAttachmentTexture.
- (MTLRenderPassDescriptor *)newRenderPassWithColorAttachmentTexture:(id<MTLTexture>)texture
{
    MTLRenderPassDescriptor *renderPass = [MTLRenderPassDescriptor new];
    renderPass.colorAttachments[0].texture = texture;
    renderPass.colorAttachments[0].loadAction = MTLLoadActionClear;
    renderPass.colorAttachments[0].storeAction = MTLStoreActionStore;
    renderPass.colorAttachments[0].clearColor = MBEClearColor;

    renderPass.depthAttachment.texture = self.depthTexture;
    renderPass.depthAttachment.loadAction = MTLLoadActionClear;
    renderPass.depthAttachment.storeAction = MTLStoreActionStore;
    renderPass.depthAttachment.clearDepth = 1.0;

    return renderPass;
}
So basically, it seems two different stages of rendering require two mutually exclusive conditions to hold. If one works, the other doesn't, and vice versa. That seems impossible. Seriously, what am I supposed to do? What gives?
You should set the texture usage on the descriptor:
- (void)buildDepthTexture
{
    CGSize drawableSize = self.layer.drawableSize;
    MTLTextureDescriptor *descriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatDepth32Float
                                                                                          width:drawableSize.width
                                                                                         height:drawableSize.height
                                                                                      mipmapped:NO];
    descriptor.storageMode = MTLStorageModePrivate;
    descriptor.usage = MTLTextureUsageRenderTarget | MTLTextureUsageShaderRead | MTLTextureUsageShaderWrite;
    self.depthTexture = [self.device newTextureWithDescriptor:descriptor];
    [self.depthTexture setLabel:@"Depth Texture"];
}
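The ShaderRead/ShaderWrite flags above are only needed if you also sample or write the depth texture from shaders. If the depth buffer only ever lives inside the render pass, a leaner setup should work (a sketch of that variant; it requires the depth attachment's storeAction to be MTLStoreActionDontCare rather than MTLStoreActionStore, and on iOS the memoryless mode avoids allocating backing memory at all):
descriptor.usage = MTLTextureUsageRenderTarget;
#if TARGET_OS_IPHONE
descriptor.storageMode = MTLStorageModeMemoryless;   // attachment-only, no backing allocation
#else
descriptor.storageMode = MTLStorageModePrivate;      // GPU-only memory on macOS
#endif
self.depthTexture = [self.device newTextureWithDescriptor:descriptor];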
I've been learning Vulkan, following vulkan-tutorial, and right now I'm at the Texture mapping part. I'm loading an image and uploading it to host memory, but I'm having trouble understanding the layout transitions and barriers.
Consider this (pseudo)code for loading and transitioning an image (inspired by this), which will be sampled in a fragment shader:
auto texture = loadTexture(filePath);
auto stagingBuffer = createStagingBuffer(texture.pixels, texture.size);
// Create image with:
// usage - VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_SAMPLED_BIT
// properties - VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT
auto imageBuffer = createImage();
// -- begin single usage command buffer --
auto cb = beginCommandBuffer();
VkImageMemoryBarrier preCopyBarrier {
// ...
.image = imageBuffer.image,
.srcAccessMask = 0,
.dstAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
.oldLayout = VK_IMAGE_LAYOUT_UNDEFINED,
.newLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
// ...
};
// PipelineBarrier (preCopyBarrier):
// srcStageMask = VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT
// dstStageMask = VK_PIPELINE_STAGE_TRANSFER_BIT
// imageMemoryBarrier = &preCopyBarrier
vkCmdPipelineBarrier(cb, ...);
// Copy stagingBuffer.buffer to imageBuffer.image
// dstImageLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL
vkCmdCopyBufferToImage(cb, ...);
VkImageMemoryBarrier postCopyBarrier {
// ...
.image = imageBuffer.image,
.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
.dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
.oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
// ...
};
// PipelineBarrier (postCopyBarrier):
// srcStageMask = VK_PIPELINE_STAGE_TRANSFER_BIT
// dstStageMask = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT
// imageMemoryBarrier = &postCopyBarrier
vkCmdPipelineBarrier(cb, ...);
endAndSubmitCommandBuffer(cb);
The preCopyBarrier is there because of the vkCmdCopyBufferToImage(...) command and will be "used"/"activated" only once, during that command(?).
The postCopyBarrier is there because the image will be sampled in the fragment shader, so the layout transition
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL -> VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL
has to happen every single time a frame is rendered (? Please correct me if I'm wrong).
But (assuming I'm correct, which I'm probably not) I'm having trouble wrapping my head around the fact that I'm creating a preCopyBarrier, which will be used only once, and a postCopyBarrier, which will be used continuously. If I were to load, for example, 200 textures, I'd have a bunch of their single-use preCopyBarriers lying around. Isn't this a... waste?
This might be a stupid question and I'm probably missing/misunderstanding something important, but I feel like I shouldn't move on without understanding this concept correctly.
I'm writing an application in Xcode on a Mac mini (Late 2012).
It's an app where I load some QuartzComposer files (with the QCRenderer class), render those files into video memory, and read them back using glReadPixels to get all the pixel data. This pixel data is then pushed to a DeckLink frame (I'm using the Blackmagic DeckLink SDK) for playback onto a DeckLink Quad. Everything is working great; it's even possible to render 3 outputs without dropping frames in HD (1080i50). But after a while (like 5 minutes) it starts dropping frames, even when I'm only rendering 1 output.
So I think there are 2 possible reasons. First: when the completed-frame callback fires (the frame did play out), I'm receiving bmdOutputFrameDisplayedLate from the SDK, which means the frame was not played at the time it was scheduled for. So when this happens I push the next frame 1 frame further into the future.
Second: I've set a frame buffer size (3 frames are rendered out before playback is started). So maybe after a while the rendering falls behind the scheduling, which would cause the dropped/late frames. Maybe I'm not doing the OpenGL rendering process like it should be done?
So here's my code:
-> first I'm loading a QCRenderer into memory
- (id)initWithPath:(NSString*)path forResolution:(NSSize)resolution
{
if (self = [super init])
{
NSOpenGLPixelFormatAttribute attributes[] = {
NSOpenGLPFAPixelBuffer,
NSOpenGLPFANoRecovery,
NSOpenGLPFAAccelerated,
NSOpenGLPFADepthSize, 24,
(NSOpenGLPixelFormatAttribute) 0
};
NSOpenGLPixelFormat* format = [[NSOpenGLPixelFormat alloc] initWithAttributes:attributes];
quartzPixelBuffer = nil;
quartzPixelBuffer = [[NSOpenGLPixelBuffer alloc] initWithTextureTarget:GL_TEXTURE_2D textureInternalFormat:GL_RGBA textureMaxMipMapLevel:0 pixelsWide:resolution.width pixelsHigh:resolution.height];
if(quartzPixelBuffer == nil)
{
NSLog(#"Cannot create OpenGL pixel buffer");
}
//Create the OpenGL context to render with (with color and depth buffers)
quartzOpenGLContext = [[NSOpenGLContext alloc] initWithFormat:format shareContext:nil];
if(quartzOpenGLContext == nil)
{
NSLog(#"Cannot create OpenGL context");
}
[quartzOpenGLContext setPixelBuffer:quartzPixelBuffer cubeMapFace:0 mipMapLevel:0 currentVirtualScreen:[quartzOpenGLContext currentVirtualScreen]];
//Create the QuartzComposer Renderer with that OpenGL context and the specified composition file
NSString* correctPath = [path substringWithRange:NSMakeRange(0, path.length - 1)];
quartzRenderer = [[QCRenderer alloc] initWithOpenGLContext:quartzOpenGLContext pixelFormat:format file:correctPath];
if(quartzRenderer == nil)
{
NSLog(#"Cannot create QCRenderer");
}
}
return self;
}
-> next step is to render 3 frames (BUFFER_DEPTH is set to 3 currently) before starting playback
- (void) preRollFrames;
{
// reset scheduled
[self resetScheduled];
totalFramesScheduled = 0;
if (isRunning == TRUE)
{
[self stopPlayback];
}
@autoreleasepool
{
for (double i = 0.0; i < ((1.0 / framesPerSecond) * BUFFER_DEPTH); i += 1.0/framesPerSecond)
{
// render image at given time
[self createVideoFrame:TRUE];
}
}
}
-> this is the createVideoFrame function. When scheduleBlack is set to true, a black frame is rendered. If false, the renderFrameAtTime function of the QCRenderer class is called. The result of this function is then passed to the DeckLink video frame object. Next, this frame is pushed into the schedule queue of the DeckLink card (SDK).
- (void) createVideoFrame:(BOOL)schedule
{
@autoreleasepool
{
// get displaymode
IDeckLinkDisplayMode* decklinkdisplaymode = (IDeckLinkDisplayMode*)CurrentRes;
// create new videoframe on output
if (deckLinkOutput->CreateVideoFrame((int)decklinkdisplaymode->GetWidth(), (int)decklinkdisplaymode->GetHeight(), (int)decklinkdisplaymode->GetWidth() * 4, bmdFormat8BitARGB, bmdFrameFlagFlipVertical, &videoFrame) != S_OK)
{
// failed to create new video frame on output
// display terminal message
sendMessageToTerminal = [[mogiTerminalMessage alloc] initWithSendNotification:@"terminalErrorMessage" forMessage:[NSString stringWithFormat:@"DeckLink: Output %d -> Failed to create new videoframe", outputID]];
}
unsigned frameBufferRowBytes = ((int)decklinkdisplaymode->GetWidth() * 4 + 63) & ~63;
void* frameBufferPtr = valloc((int)decklinkdisplaymode->GetHeight() * frameBufferRowBytes);
// set videoframe pointer
if (videoFrame != NULL)
{
videoFrame->GetBytes((void**)&frameBufferPtr);
}
// fill pointer with pixel data
if (scheduleBlack == TRUE)
{
[qClear renderFrameAtTime:1.0 forBuffer:(void**)frameBufferPtr forScreen:0];
// render first frame qRenderer
if (qRender != NULL)
{
[qRender renderFirstFrame];
}
}
else
{
[qRender renderFrameAtTime:totalSecondsScheduled forBuffer:(void**)frameBufferPtr forScreen:screenID];
schedule = TRUE;
}
// if playback -> schedule frame
if (schedule == TRUE)
{
// schedule frame
if (videoFrame != NULL)
{
if (deckLinkOutput->ScheduleVideoFrame(videoFrame, (totalFramesScheduled * frameDuration), frameDuration, frameTimescale) != S_OK)
{
// failed to schedule new frame
// display message to terminal
sendMessageToTerminal = [[mogiTerminalMessage alloc] initWithSendNotification:@"terminalErrorMessage" forMessage:[NSString stringWithFormat:@"DeckLink: Output %d -> Failed to schedule new videoframe", outputID]];
}
else
{
// increase totalFramesScheduled
totalFramesScheduled ++;
// increase totalSecondsScheduled
totalSecondsScheduled += 1.0/framesPerSecond;
}
// clear videoframe
videoFrame->Release();
videoFrame = NULL;
}
}
}
}
-> render frameAtTime function from QCRenderer class
- (void) renderFrameAtTime:(double)time forBuffer:(void*)frameBuffer forScreen:(int)screen
{
@autoreleasepool
{
CGLContextObj cgl_ctx = [quartzOpenGLContext CGLContextObj];
// render frame at time
[quartzRenderer renderAtTime:time arguments:NULL];
glReadPixels(0, 0, [quartzPixelBuffer pixelsWide], [quartzPixelBuffer pixelsHigh], GL_BGRA, GL_UNSIGNED_INT_8_8_8_8, frameBuffer);
}
}
-> After prerolling the frames, playback is started. Each time a frame has been played out, this callback method is called (DeckLink SDK). If there's a late frame, I push totalFrames 1 frame into the future.
PlaybackDelegate::PlaybackDelegate (DecklinkDevice* owner)
{
pDecklinkDevice = owner;
}
HRESULT PlaybackDelegate::ScheduledFrameCompleted (IDeckLinkVideoFrame* completedFrame, BMDOutputFrameCompletionResult result)
{
if (result == bmdOutputFrameDisplayedLate)
{
// if displayed late bump scheduled time further into the future by one frame
[pDecklinkDevice increaseScheduledFrames];
NSLog(#"bumped %d", [pDecklinkDevice getOutputID]);
}
if ([pDecklinkDevice getIsRunning] == TRUE)
{
[pDecklinkDevice createVideoFrame:TRUE];
}
return S_OK;
}
So my question: am I doing the OpenGL rendering process correctly? Maybe that is what's causing the delay after some minutes. Or am I handling the displayedLate frames incorrectly, so that the timing of the scheduling queue gets messed up after some time?
Thx!
Thomas
When late, try advancing the frame counter according to the completion result specified by the ScheduledFrameCompleted callback. Consider an extra increment of two.
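Roughly, in the existing ScheduledFrameCompleted callback, that could look like the following (a sketch of the idea, not tested; increaseScheduledFramesBy: is a hypothetical variant of the existing increaseScheduledFrames method that takes a count, and the exact increments are guesses):
HRESULT PlaybackDelegate::ScheduledFrameCompleted (IDeckLinkVideoFrame* completedFrame, BMDOutputFrameCompletionResult result)
{
    switch (result)
    {
        case bmdOutputFrameDisplayedLate:
            // Displayed, but after its scheduled time: push subsequent frames further out.
            [pDecklinkDevice increaseScheduledFramesBy:2];
            break;
        case bmdOutputFrameDropped:
            // Never displayed at all: treat it at least as severely as a late frame.
            [pDecklinkDevice increaseScheduledFramesBy:2];
            break;
        default:
            break;
    }
    if ([pDecklinkDevice getIsRunning] == TRUE)
    {
        [pDecklinkDevice createVideoFrame:TRUE];
    }
    return S_OK;
}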
At least on Windows and for many years now, only workstation boards provide unthrottled pixel readback when using NVidia's products. My iMac has a GeForce series card but I haven't measured its performance. I wouldn't be surprised if glReadPixels is throttled.
Also try using GL_BGRA_EXT and GL_UNSIGNED_INT_8_8_8_8_REV.
You should have precise timing metrics for glReadPixels and for the writes to the hardware. I assume you're reading back progressive or interlaced frames, not fields. Ideally, pixel readback should take less than 10 ms. And obviously, the entire render cycle needs to be faster than the video hardware's frame rate.
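One simple way to get that number, using the asker's renderFrameAtTime: method as the anchor point and the suggested read format (a sketch; CACurrentMediaTime comes from QuartzCore, and the 10 ms threshold is just the rule of thumb above):
#import <QuartzCore/QuartzCore.h>
- (void) renderFrameAtTime:(double)time forBuffer:(void*)frameBuffer forScreen:(int)screen
{
    @autoreleasepool
    {
        CGLContextObj cgl_ctx = [quartzOpenGLContext CGLContextObj]; // kept as in the original method
        [quartzRenderer renderAtTime:time arguments:NULL];

        CFTimeInterval start = CACurrentMediaTime();
        glReadPixels(0, 0, [quartzPixelBuffer pixelsWide], [quartzPixelBuffer pixelsHigh],
                     GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, frameBuffer);
        double readbackMs = (CACurrentMediaTime() - start) * 1000.0;

        // At 1080i50 the whole cycle has to finish inside 20 ms, so log anything suspicious.
        if (readbackMs > 10.0)
            NSLog(@"glReadPixels took %.2f ms", readbackMs);
    }
}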
I am trying to generate a musical note that will play through the iPhone speakers using Objective-C and MIDI. I have the code below but it is not doing anything. What am I doing wrong?
MIDIPacketList packetList;
packetList.numPackets = 1;
MIDIPacket* firstPacket = &packetList.packet[0];
firstPacket->timeStamp = 0; // send immediately
firstPacket->length = 3;
firstPacket->data[0] = 0x90;
firstPacket->data[1] = 80;
firstPacket->data[2] = 120;
MIDIPacketList pklt=packetList;
MIDISend(MIDIGetSource(0), MIDIGetDestination(0), &pklt);
You've got three problems:
1. Declaring a MIDIPacketList doesn't allocate memory or initialize the structure.
2. You're passing the result of MIDIGetSource (which returns a MIDIEndpointRef) as the first parameter to MIDISend, where it expects a MIDIPortRef instead. (You probably ignored a compiler warning about this. Never ignore compiler warnings.)
3. Sending a MIDI note in iOS doesn't make any sound. If you don't have an external MIDI device connected to your iOS device, you need to set up something with CoreAudio that will generate sounds. That's beyond the scope of this answer.
So this code will run, but it won't make any sounds unless you've got external hardware:
//Look to see if there's anything that will actually play MIDI notes
NSLog(#"There are %lu destinations", MIDIGetNumberOfDestinations());
// Prepare MIDI Interface Client/Port for writing MIDI data:
MIDIClientRef midiclient = 0;
MIDIPortRef midiout = 0;
OSStatus status;
status = MIDIClientCreate(CFSTR("Test client"), NULL, NULL, &midiclient);
if (status) {
NSLog(#"Error trying to create MIDI Client structure: %d", (int)status);
}
status = MIDIOutputPortCreate(midiclient, CFSTR("Test port"), &midiout);
if (status) {
NSLog(#"Error trying to create MIDI output port: %d", (int)status);
}
Byte buffer[128];
MIDIPacketList *packetlist = (MIDIPacketList *)buffer;
MIDIPacket *currentpacket = MIDIPacketListInit(packetlist);
NSInteger messageSize = 3; //Note On is a three-byte message
Byte msg[3] = {0x90, 80, 120};
MIDITimeStamp timestamp = 0;
currentpacket = MIDIPacketListAdd(packetlist, sizeof(buffer), currentpacket, timestamp, messageSize, msg);
MIDISend(midiout, MIDIGetDestination(0), packetlist);
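If you want to guard against there being no destinations at all (not part of the original answer), the final MIDISend call can be wrapped like this, since MIDIGetDestination(0) returns 0 when none exist:
if (MIDIGetNumberOfDestinations() > 0) {
    MIDISend(midiout, MIDIGetDestination(0), packetlist);
} else {
    NSLog(@"No MIDI destinations available; nothing to send to");
}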
I'm playing a MIDI sequence via MusicPlayer which I loaded from a MIDI file and I want to change the sequence to another while playback.
When I try this:
MusicPlayerSetSequence(_player, sequence);
MusicSequenceSetAUGraph(sequence, _processingGraph);
it stops the playback. So I start it back again and set the time with
MusicPlayerSetTime(_player, currentTime);
so it plays again where the previous sequence stopped, but there is a little delay.
I've tried to add the time interval to currentTime, which I got by obtaining the time before stopping and after starting again. But there is still a delay.
I was wondering if there is an alternative to stopping -> changing sequence -> starting again.
You definitely need to manage the AUSamplers if you are adding and removing tracks or switching sequences. It probably is cleaner to dispose of the AUSampler and create a new one for each new track but it is also possible to 'recycle' AUSamplers but that means you will need to keep track of them.
Managing AUSamplers means that when you are no longer using an instance of one (for example if you delete or replace a MusicTrack), you need to disconnect it from the AUMixer instance, remove it from the AUGraph instance, and then update the AUGraph.
There are lots of ways to handle all this. For convenience in keeping track of AUSampler instances' bus number, the sound font loaded, and some other stuff, I use a subclass of NSObject named SamplerAudioUnit to contain all the needed properties and methods. Same for MusicTracks - I have a Track class - but this may not be needed in your project.
The gist though is that AUSamplers need to be managed for performance and memory. If an instance is no longer being used it should be removed and the AUMixer bus input freed up.
BTW - I checked the docs and there is apparently no technical limit to the number of mixer busses - but the number does need to be specified.
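Specifying that bus count usually looks something like the following (a sketch; mixerUnit is assumed to have been fetched from the mixer node with AUGraphNodeInfo):
UInt32 busCount = 8; // however many input busses you plan to use
OSStatus result = AudioUnitSetProperty(mixerUnit,
                                       kAudioUnitProperty_ElementCount,
                                       kAudioUnitScope_Input,
                                       0,
                                       &busCount,
                                       sizeof(busCount));
if (result) {[self printErrorMessage: @"AudioUnitSetProperty (ElementCount)" withStatus: result];}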
// this is not cut and paste code - just an example of managing the AUSampler instance
- (OSStatus)deleteTrack:(Track*) trackObj
{
OSStatus result = noErr;
// turn off MP if playing
BOOL MPstate = [self isPlaying];
if (MPstate){
MusicPlayerStop(player);
}
//-disconnect node from mixer + update list of mixer buses
SamplerAudioUnit * samplerObj = trackObj.sampler;
UInt32 busNumber = samplerObj.busNumber;
result = AUGraphDisconnectNodeInput(graph, mixerNode, busNumber);
if (result) {[self printErrorMessage: #"AUGraphDisconnectNodeInput" withStatus: result];}
[self clearMixerBusState: busNumber]; // routine that keeps track of available busses
result = MusicSequenceDisposeTrack(sequence, trackObj.track);
if (result) {[self printErrorMessage: #"MusicSequenceDisposeTrack" withStatus: result];}
// remove AUSampler node
result = AUGraphRemoveNode(graph, samplerObj.samplerNode);
if (result) {[self printErrorMessage: #"AUGraphRemoveNode" withStatus: result];}
result = AUGraphUpdate(graph, NULL);
if (result) {[self printErrorMessage: #"AUGraphUpdate" withStatus: result];}
samplerObj = nil;
trackObj = nil;
if (MPstate){
MusicPlayerStart(player);
}
// CAShow(graph);
// CAShow(sequence);
return result;
}
Because
MusicPlayerSetSequence(_player, sequence);
MusicSequenceSetAUGraph(sequence, _processingGraph);
will still cause the player to stop, it is still possible to hear a little break.
So rather than swapping out the musicSequence, I went ahead and changed the contents of the tracks instead, which doesn't cause any breaks:
MusicTrack currentTrack;
MusicTrack currentTrack2;
MusicSequenceGetIndTrack(musicSequence, 0, &currentTrack);
MusicSequenceGetIndTrack(musicSequence, 1, &currentTrack2);
MusicTrackClear(currentTrack, 0, _trackLen);
MusicTrackClear(currentTrack2, 0, _trackLen);
MusicSequence tmpSequence;
switch (number) {
case 0:
tmpSequence = musicSequence1;
break;
case 1:
tmpSequence = musicSequence2;
break;
case 2:
tmpSequence = musicSequence3;
break;
case 3:
tmpSequence = musicSequence4;
break;
default:
tmpSequence = musicSequence1;
break;
}
MusicTrack tmpTrack;
MusicTrack tmpTrack2;
MusicSequenceGetIndTrack(tmpSequence, 0, &tmpTrack);
MusicSequenceGetIndTrack(tmpSequence, 1, &tmpTrack2);
MusicTimeStamp trackLen = 0;
UInt32 trackLenLenLen = sizeof(trackLen);
MusicTrackGetProperty(tmpTrack, kSequenceTrackProperty_TrackLength, &trackLen, &trackLenLenLen);
_trackLen = trackLen;
MusicTrackCopyInsert(tmpTrack, 0, _trackLen, currentTrack, 0);
MusicTrackCopyInsert(tmpTrack2, 0, _trackLen, currentTrack2, 0);
No disconnection of nodes, no updating the graph, no stopping the player.
I have a Multichannel Mixer audio unit playing back audio files in an iOS app, and I need to figure out how to update the app's UI and perform a reset when the render callback hits the end of the longest audio file (which is set up to run on bus 0). As my code below shows, I am trying to use KVO to achieve this (using the boolean variable tapesUnderway; the autorelease pool is necessary because this Objective-C code runs outside of its normal domain, see http://www.cocoabuilder.com/archive/cocoa/57412-nscfnumber-no-pool-in-place-just-leaking.html).
static OSStatus tapesRenderInput(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
SoundBufferPtr sndbuf = (SoundBufferPtr)inRefCon;
UInt32 bufferFrames = sndbuf[inBusNumber].numFrames;
AudioUnitSampleType *in = sndbuf[inBusNumber].data;
// These mBuffers are the output buffers and are empty; these two lines are just setting the references to them (via outA and outB)
AudioUnitSampleType *outA = (AudioUnitSampleType *)ioData->mBuffers[0].mData;
AudioUnitSampleType *outB = (AudioUnitSampleType *)ioData->mBuffers[1].mData;
UInt32 sample = sndbuf[inBusNumber].sampleNum;
// --------------------------------------------------------------
// Set the start time here
if(inBusNumber == 0 && !tapesFirstRenderPast)
{
printf("Tapes first render past\n");
tapesStartSample = inTimeStamp->mSampleTime;
tapesFirstRenderPast = YES; // MAKE SURE TO RESET THIS ON SONG RESTART
firstPauseSample = tapesStartSample;
}
// --------------------------------------------------------------
// Now process the samples
for(UInt32 i = 0; i < inNumberFrames; ++i)
{
if(inBusNumber == 0)
{
// ------------------------------------------------------
// Bus 0 is the backing track, and is always playing back
outA[i] = in[sample++];
outB[i] = in[sample++]; // For stereo set desc.SetAUCanonical to (2, true) and increment samples in both output calls
lastSample = inTimeStamp->mSampleTime + (Float64)i; // Set the last played sample in order to compensate for pauses
// ------------------------------------------------------
// Use this logic to mark end of tune
if(sample >= (bufferFrames * 2) && !tapesEndPast)
{
// USE KVO TO NOTIFY METHOD OF VALUE CHANGE
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
FuturesEPMedia *futuresMedia = [FuturesEPMedia sharedFuturesEPMedia];
NSNumber *boolNo = [[NSNumber alloc] initWithBool: NO];
[futuresMedia setValue: boolNo forKey: @"tapesUnderway"];
[boolNo release];
[pool release];
tapesEndPast = YES;
}
}
else
{
// ------------------------------------------------------
// The other buses are the open sections, and are synched through the tapesSectionsTimes array
Float64 sectionTime = tapesSectionTimes[inBusNumber] * kGraphSampleRate; // Section time in samples
Float64 currentSample = inTimeStamp->mSampleTime + (Float64)i;
if(!isPaused && !playFirstRenderPast)
{
pauseGap += currentSample - firstPauseSample;
playFirstRenderPast = YES;
pauseFirstRenderPast = NO;
}
if(currentSample > (tapesStartSample + sectionTime + pauseGap) && sample < (bufferFrames * 2))
{
outA[i] = in[sample++];
outB[i] = in[sample++];
}
else
{
outA[i] = 0;
outB[i] = 0;
}
}
}
sndbuf[inBusNumber].sampleNum = sample;
return noErr;
}
At the moment when this variable is changed it triggers a method in self, but this leads to an unacceptable delay (20-30 seconds) when executed from this render callback (I am thinking because it is Objective-C code running in the high priority audio thread?). How do I effectively trigger such a change without the delay? (The trigger will change a pause button to a play button and call a reset method to prepare for the next play.)
Thanks
Yes. Don't use Objective-C code in the render thread, since it's high priority. Store state in memory (a pointer or struct), then use a timer in the main thread to poll (check) the value(s) in memory. The timer need not be anywhere near as fast as the render thread and will be very accurate.
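A minimal sketch of that pattern, assuming a plain C flag shared between the render callback and the main thread (the names here are illustrative, not from the question):
static volatile int32_t tapesFinishedFlag = 0; // written by the render thread, read by the main thread

// In the render callback, replace the KVO block with a plain store:
//     if (sample >= (bufferFrames * 2) && !tapesEndPast) {
//         tapesFinishedFlag = 1;
//         tapesEndPast = YES;
//     }

// On the main thread, set up once:
- (void)startPollingForTapeEnd
{
    // A tenth of a second is plenty; the timer only has to notice the flag,
    // it does not have to keep up with the audio thread.
    [NSTimer scheduledTimerWithTimeInterval:0.1
                                     target:self
                                   selector:@selector(checkTapesFinished:)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)checkTapesFinished:(NSTimer *)timer
{
    if (tapesFinishedFlag) {
        tapesFinishedFlag = 0;
        [timer invalidate];
        // Safe to touch the UI and KVO here: flip the pause button back to play
        // and call the reset method to prepare for the next play.
    }
}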
Try this.
Global :
BOOL FlgTotalSampleTimeCollected = NO;
Float64 HigestSampleTime = 0 ;
Float64 TotalSampleTime = 0;
in -(OSStatus) setUpAUFilePlayer:
AudioStreamBasicDescription fileASBD;
// get the audio data format from the file
UInt32 propSize = sizeof(fileASBD);
CheckError(AudioFileGetProperty(inputFile, kAudioFilePropertyDataFormat,
&propSize, &fileASBD),
"couldn't get file's data format");
UInt64 nPackets;
UInt32 propsize = sizeof(nPackets);
CheckError(AudioFileGetProperty(inputFile, kAudioFilePropertyAudioDataPacketCount,
&propsize, &nPackets),
"AudioFileGetProperty[kAudioFilePropertyAudioDataPacketCount] failed");
Float64 sTime = nPackets * fileASBD.mFramesPerPacket;
if (HigestSampleTime < sTime)
{
HigestSampleTime = sTime;
}
In RenderCallBack :
if (*actionFlags & kAudioUnitRenderAction_PreRender)
{
if (!THIS->FlgTotalSampleTimeCollected)
{
[THIS setFlgTotalSampleTimeCollected:TRUE];
[THIS setTotalSampleTime:(inTimeStamp->mSampleTime + THIS->HigestSampleTime)];
}
}
else if (*actionFlags & kAudioUnitRenderAction_PostRender)
{
if (inTimeStamp->mSampleTime > THIS->TotalSampleTime)
{
NSLog(#"inTimeStamp->mSampleTime :%f",inTimeStamp->mSampleTime);
NSLog(#"audio completed");
[THIS callAudioCompletedMethodHere];
}
}
This worked for me.
Test on a device.