I'm trying to write four uint32 values into the flash memory of my STM32F767ZI. I've looked at some examples and at the reference manual, but I still can't get it to work. My goal is to write four uint32 values into flash, read them back, compare them with the original data, and light different LEDs depending on the result of the comparison.
My code is as follows:
void flash_write(uint32_t offset, uint32_t *data, uint32_t size) {
    FLASH_EraseInitTypeDef EraseInitStruct = {0};
    uint32_t SectorError = 0;
    HAL_StatusTypeDef st;

    HAL_FLASH_Unlock();

    EraseInitStruct.TypeErase = FLASH_TYPEERASE_SECTORS;
    EraseInitStruct.VoltageRange = FLASH_VOLTAGE_RANGE_3;
    EraseInitStruct.Sector = FLASH_SECTOR_11;
    EraseInitStruct.NbSectors = 1;
    //EraseInitStruct.Banks = FLASH_BANK_1; // or FLASH_BANK_2 or FLASH_BANK_BOTH

    st = HAL_FLASHEx_Erase(&EraseInitStruct, &SectorError);
    if (st == HAL_OK) {
        for (int i = 0; i < size; i += 4) {
            st = HAL_FLASH_Program(FLASH_TYPEPROGRAM_WORD, FLASH_USER_START_ADDR + offset + i, *(data + i)); //This is what's giving me trouble
            if (st != HAL_OK) {
                // handle the error
                break;
            }
        }
    } else {
        // handle the error
    }

    HAL_FLASH_Lock();
}
void flash_read(uint32_t offset, uint32_t *data, uint32_t size) {
    for (int i = 0; i < size; i += 4) {
        *(data + i) = *(__IO uint32_t*)(FLASH_USER_START_ADDR + offset + i);
    }
}
int main(void) {
    uint32_t data[] = {'a', 'b', 'c', 'd'};
    uint32_t read_data[] = {0, 0, 0, 0};

    HAL_Init();
    SystemClock_Config();
    MX_GPIO_Init();

    flash_write(0, data, sizeof(data));
    flash_read(0, read_data, sizeof(read_data));

    if (compareArrays(data, read_data, 4)) {
        HAL_GPIO_WritePin(GPIOB, GPIO_PIN_7, SET);
    } else {
        HAL_GPIO_WritePin(GPIOB, GPIO_PIN_14, SET);
    }
    return 0;
}
The problem is that before writing data I must erase a sector, and when I do that with HAL_FLASHEx_Erase(&EraseInitStruct, &SectorError), the program always crashes, and sometimes it even corrupts my code space, forcing me to re-flash the firmware. I've selected the sector farthest from the code space, but it still crashes when I try to erase it.
I've read the following in the reference manual:
Any attempt to read the Flash memory while it is being written or erased, causes the bus to
stall. Read operations are processed correctly once the program operation has completed.
This means that code or data fetches cannot be performed while a write/erase operation is
ongoing.
which I believe means the code should ideally run from RAM while we operate on the flash. However, I've seen other people online who don't have this issue, so I'm wondering whether that's my only problem. With that in mind, can someone confirm whether this is my only issue, or am I doing something else wrong?
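For reference, this is what I assume "running from RAM" would look like with a GCC toolchain, given a linker script that copies a .RamFunc section to RAM at startup (CubeMX-generated scripts do this); I have not verified that this fixes the crash:

/* Sketch only: place the flash routine in RAM so no flash fetches occur
   while the erase/program operation is in progress. Assumes GCC and a
   .RamFunc section in the linker script. */
__attribute__((section(".RamFunc")))
void flash_write_from_ram(uint32_t offset, uint32_t *data, uint32_t size)
{
    /* same body as flash_write above */
}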
In your loop you are adding multiples of 4 to i, but then you are adding i to data. When you add an integer to a pointer, it is automatically scaled by the size of the pointed-to type, so you are actually advancing in steps of 16 bytes and reading past the end of your input buffer.
Also, make sure you initialize all members of EraseInitStruct: uncomment the Banks line and set the correct value!
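A corrected write loop, as a minimal sketch (indexing by word and scaling the flash address in bytes; everything else as in your code):

for (uint32_t i = 0; i < size / 4; i++) {
    st = HAL_FLASH_Program(FLASH_TYPEPROGRAM_WORD,
                           FLASH_USER_START_ADDR + offset + i * 4,
                           data[i]); // data[i] advances 4 bytes per element
    if (st != HAL_OK) {
        // handle the error
        break;
    }
}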
I am trying to implement simple playback from a file, and it appears that my callback function is never called. It doesn't really make sense, because all of the OSStatus values come back 0 and the other numbers all appear correct as well (like the packets-read output from AudioFileReadPackets).
Here is the setup:
OSStatus stat;
stat = AudioFileOpenURL(
(CFURLRef)urlpath, kAudioFileReadPermission, 0, &aStreamData->aFile
);
UInt32 dsze = 0;
stat = AudioFileGetPropertyInfo(
aStreamData->aFile, kAudioFilePropertyDataFormat, &dsze, 0
);
stat = AudioFileGetProperty(
aStreamData->aFile, kAudioFilePropertyDataFormat, &dsze, &aStreamData->aDescription
);
stat = AudioQueueNewOutput(
&aStreamData->aDescription, bufferCallback, aStreamData, NULL, NULL, 0, &aStreamData->aQueue
);
aStreamData->pOffset = 0;
for(int i = 0; i < NUM_BUFFERS; i++) {
stat = AudioQueueAllocateBuffer(
aStreamData->aQueue, aStreamData->aDescription.mBytesPerPacket, &aStreamData->aBuffer[i]
);
bufferCallback(aStreamData, aStreamData->aQueue, aStreamData->aBuffer[i]);
}
stat = AudioQueuePrime(aStreamData->aQueue, 0, NULL);
stat = AudioQueueStart(aStreamData->aQueue, NULL);
(Not shown is where I'm checking the value of stat in between the functions; it just comes back normal.)
And the callback function:
void bufferCallback(void *uData, AudioQueueRef queue, AudioQueueBufferRef buffer) {
UInt32 bread = 0;
UInt32 pread = buffer->mAudioDataBytesCapacity / player->aStreamData->aDescription.mBytesPerPacket;
OSStatus stat;
stat = AudioFileReadPackets(
player->aStreamData->aFile, false, &bread, NULL, player->aStreamData->pOffset, &pread, buffer->mAudioData
);
buffer->mAudioDataByteSize = bread;
stat = AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
player->aStreamData->pOffset += pread;
}
Where aStreamData is my user data struct (typedef'd so I can use it as a class property) and player is a static instance of the controlling Objective-C class. If any other code is wanted, please let me know; I am a bit at my wit's end. Printing any of the numbers involved here yields the correct result, including inside bufferCallback when I call it myself in the allocation loop. It just never gets called thereafter. The startup method returns and nothing happens.
Also, anecdotally, I am using a peripheral device (an MBox Pro 3) to play the sound, which CoreAudio only boots up when it is about to output; i.e., if I start iTunes or something, the speakers pop faintly and there is an LED that goes from blinking to solid. The device boots up as usual, so CA is definitely doing something. (Also, I've of course tried it with the onboard MacBook sound, sans the device.)
I've read other solutions to problems that sound similar, and they don't work. Things like using multiple buffers, which I am doing now and which doesn't appear to make any difference.
I basically assume I am doing something obviously wrong somehow, but I'm not sure what it could be. I've read the relevant documentation, looked at the available code examples, and scoured the net a bit for answers, and it appears that this is all I need to do and it should just work.
At the very least, is there anything else I can do to investigate?
My first answer was not good enough, so I put together a minimal example that plays a 2-channel, 16-bit WAV file.
The main difference from your code is that I made a property listener listening for play start and stop events.
As for your code, it seems legit at first glance. Two things I will point out, though:
1. It seems you are allocating buffers with TOO SMALL a buffer size. I have noticed that AudioQueues won't play if the buffers are too small, which seems to fit your problem.
2. Have you verified the properties returned?
Back to my code example:
Everything is hard coded, so it is not exactly good coding practice, but it shows how you can do it.
AudioStreamTest.h
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>
uint32_t bufferSizeInSamples;
AudioFileID file;
UInt32 currentPacket;
AudioQueueRef audioQueue;
AudioQueueBufferRef buffer[3];
AudioStreamBasicDescription audioStreamBasicDescription;
@interface AudioStreamTest : NSObject
- (void)start;
- (void)stop;
@end
AudioStreamTest.m
#import "AudioStreamTest.h"
@implementation AudioStreamTest
- (id)init
{
self = [super init];
if (self) {
bufferSizeInSamples = 441;
file = NULL;
currentPacket = 0;
audioStreamBasicDescription.mBitsPerChannel = 16;
audioStreamBasicDescription.mBytesPerFrame = 4;
audioStreamBasicDescription.mBytesPerPacket = 4;
audioStreamBasicDescription.mChannelsPerFrame = 2;
audioStreamBasicDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioStreamBasicDescription.mFormatID = kAudioFormatLinearPCM;
audioStreamBasicDescription.mFramesPerPacket = 1;
audioStreamBasicDescription.mReserved = 0;
audioStreamBasicDescription.mSampleRate = 44100;
}
return self;
}
- (void)start {
AudioQueueNewOutput(&audioStreamBasicDescription, AudioEngineOutputBufferCallback, (__bridge void *)(self), NULL, NULL, 0, &audioQueue);
AudioQueueAddPropertyListener(audioQueue, kAudioQueueProperty_IsRunning, AudioEnginePropertyListenerProc, NULL);
AudioQueueStart(audioQueue, NULL);
}
- (void)stop {
AudioQueueStop(audioQueue, YES);
AudioQueueRemovePropertyListener(audioQueue, kAudioQueueProperty_IsRunning, AudioEnginePropertyListenerProc, NULL);
}
void AudioEngineOutputBufferCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
if (file == NULL) return;
UInt32 bytesRead = bufferSizeInSamples * 4;
UInt32 packetsRead = bufferSizeInSamples;
AudioFileReadPacketData(file, false, &bytesRead, NULL, currentPacket, &packetsRead, inBuffer->mAudioData);
inBuffer->mAudioDataByteSize = bytesRead;
currentPacket += packetsRead;
if (bytesRead == 0) {
AudioQueueStop(inAQ, false);
}
else {
AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
}
void AudioEnginePropertyListenerProc (void *inUserData, AudioQueueRef inAQ, AudioQueuePropertyID inID) {
//We are only interested in the property kAudioQueueProperty_IsRunning
if (inID != kAudioQueueProperty_IsRunning) return;
//Get the status of the property
UInt32 isRunning = false;
UInt32 size = sizeof(isRunning);
AudioQueueGetProperty(inAQ, kAudioQueueProperty_IsRunning, &isRunning, &size);
if (isRunning) {
currentPacket = 0;
NSString *fileName = @"/Users/roy/Documents/XCodeProjectsData/FUZZ/03.wav";
NSURL *fileURL = [[NSURL alloc] initFileURLWithPath: fileName];
AudioFileOpenURL((__bridge CFURLRef) fileURL, kAudioFileReadPermission, 0, &file);
for (int i = 0; i < 3; i++){
AudioQueueAllocateBuffer(audioQueue, bufferSizeInSamples * 4, &buffer[i]);
UInt32 bytesRead = bufferSizeInSamples * 4;
UInt32 packetsRead = bufferSizeInSamples;
AudioFileReadPacketData(file, false, &bytesRead, NULL, currentPacket, &packetsRead, buffer[i]->mAudioData);
buffer[i]->mAudioDataByteSize = bytesRead;
currentPacket += packetsRead;
AudioQueueEnqueueBuffer(audioQueue, buffer[i], 0, NULL);
}
}
else {
if (file != NULL) {
AudioFileClose(file);
file = NULL;
for (int i = 0; i < 3; i++) {
AudioQueueFreeBuffer(audioQueue, buffer[i]);
buffer[i] = NULL;
}
}
}
}
-(void)dealloc {
AudioQueueDispose(audioQueue, true);
audioQueue = NULL;
[super dealloc];
}
@end
Lastly, I want to include some research I have done today to test the robustness of AudioQueues.
I have noticed that if you make the AudioQueue buffers too small, it won't play at all. That made me play around a bit to see why.
If I try a buffer size that can hold only 150 samples, I get no sound at all.
If I try a buffer size that can hold 175 samples, it plays the whole song through, but with a lot of distortion. 175 samples amounts to a tad less than 4 ms of audio.
AudioQueue keeps asking for new buffers as long as you keep supplying them, regardless of whether it is actually playing your buffers or not.
If you supply a buffer of size 0, the buffer is lost and a kAudioQueueErr_BufferEmpty error is returned for that enqueue request. You will never see AudioQueue ask you to fill that buffer again. If this happens to the last buffer you have enqueued, AudioQueue stops asking you to fill any more buffers, and you will not hear any more audio for that session.
To see why AudioQueues is not playing anything with smaller buffer sizes, I made a test to see if my callback is called at all, even when there is no sound. The answer is that the callback gets called all the time, as long as AudioQueues is playing and needs data.
So if you keep feeding buffers to the queue, no buffer is ever lost. It doesn't happen, unless there is an error, of course.
So why is no sound playing?
I tested to see if 'AudioQueueEnqueueBuffer()' returned any errors. It did not. No other errors within my play routine either. The data returned from reading from file is also good.
Everything is normal, buffers are good, data re-enqueued is good, there is just no sound.
So my last test was to slowly increase buffer size till I could hear anything. I finally heard faint and sporadic distortion.
Then it came to me...
It seems the problem is that the system tries to keep the stream in sync with time, so if you enqueue audio and the time at which it should have played has already passed, it simply skips that part of the buffer. If the buffer size becomes too small, more and more data is dropped or skipped until the audio system is back in sync, which never happens if the buffer size stays too small. (You can hear this as distortion if you choose a buffer size that is barely large enough to support continuous play.)
If you think about it, it is the only way the audio queue can work, but it is a good realisation when you are clueless like me and "discover" how it really works.
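If you want to size the buffers deliberately rather than by trial and error, here is a small helper sketch of my own (not an API call) that derives a byte count from the ASBD for a target duration:

// Sketch: bytes needed for 'seconds' of the linear PCM that 'asbd' describes.
UInt32 bufferBytesForDuration(const AudioStreamBasicDescription *asbd, Float64 seconds) {
    UInt32 frames = (UInt32)(asbd->mSampleRate * seconds);
    return frames * asbd->mBytesPerFrame; // for PCM, one frame per packet
}
// e.g. bufferBytesForDuration(&audioStreamBasicDescription, 0.5) holds half a second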
I decided to take a look at this again and was able to solve it by making the buffers larger. I've accepted the answer by @RoyGal since it was their suggestion, but I wanted to provide the actual code that works, since I guess others are having the same problem (the question has a few favorites that aren't mine at the moment).
One thing I tried was making the packet size larger:
aData->aDescription.mFramesPerPacket = 512; // or some other number
aData->aDescription.mBytesPerPacket = (
aData->aDescription.mFramesPerPacket * aData->aDescription.mBytesPerFrame
);
This does NOT work: it causes AudioQueuePrime to fail with an "AudioConverterNew returned -50" message. I guess it wants mFramesPerPacket to be 1 for PCM.
(I also tried setting the kAudioQueueProperty_DecodeBufferSizeFrames property which didn't seem to do anything. Not sure what it's for.)
The solution seems to be to only allocate the buffer(s) with the specified size:
AudioQueueAllocateBuffer(
aData->aQueue,
aData->aDescription.mBytesPerPacket * N_BUFFER_PACKETS / N_BUFFERS,
&aData->aBuffer[i]
);
And the size has to be sufficiently large. I found the magic number is:
mBytesPerPacket * 1024 / N_BUFFERS
(Where N_BUFFERS is the number of buffers and should be > 1 or playback is choppy.)
Here is an MCVE demonstrating the issue and solution:
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>
#import <AudioToolbox/AudioQueue.h>
#import <AudioToolbox/AudioFile.h>
#define N_BUFFERS 2
#define N_BUFFER_PACKETS 1024
typedef struct AStreamData {
AudioFileID aFile;
AudioQueueRef aQueue;
AudioQueueBufferRef aBuffer[N_BUFFERS];
AudioStreamBasicDescription aDescription;
SInt64 pOffset;
volatile BOOL isRunning;
} AStreamData;
void printASBD(AudioStreamBasicDescription* desc) {
printf("mSampleRate = %d\n", (int)desc->mSampleRate);
printf("mBytesPerPacket = %d\n", desc->mBytesPerPacket);
printf("mFramesPerPacket = %d\n", desc->mFramesPerPacket);
printf("mBytesPerFrame = %d\n", desc->mBytesPerFrame);
printf("mChannelsPerFrame = %d\n", desc->mChannelsPerFrame);
printf("mBitsPerChannel = %d\n", desc->mBitsPerChannel);
}
void bufferCallback(
void *vData, AudioQueueRef aQueue, AudioQueueBufferRef aBuffer
) {
AStreamData* aData = (AStreamData*)vData;
UInt32 bRead = 0;
UInt32 pRead = (
aBuffer->mAudioDataBytesCapacity / aData->aDescription.mBytesPerPacket
);
OSStatus stat;
stat = AudioFileReadPackets(
aData->aFile, false, &bRead, NULL, aData->pOffset, &pRead, aBuffer->mAudioData
);
if(stat != 0) {
printf("AudioFileReadPackets returned %d\n", stat);
}
if(pRead == 0) {
aData->isRunning = NO;
return;
}
aBuffer->mAudioDataByteSize = bRead;
stat = AudioQueueEnqueueBuffer(aQueue, aBuffer, 0, NULL);
if(stat != 0) {
printf("AudioQueueEnqueueBuffer returned %d\n", stat);
}
aData->pOffset += pRead;
}
AStreamData* beginPlayback(NSURL* path) {
static AStreamData* aData;
aData = malloc(sizeof(AStreamData));
OSStatus stat;
stat = AudioFileOpenURL(
(CFURLRef)path, kAudioFileReadPermission, 0, &aData->aFile
);
printf("AudioFileOpenURL returned %d\n", stat);
UInt32 dSize = 0;
stat = AudioFileGetPropertyInfo(
aData->aFile, kAudioFilePropertyDataFormat, &dSize, 0
);
printf("AudioFileGetPropertyInfo returned %d\n", stat);
stat = AudioFileGetProperty(
aData->aFile, kAudioFilePropertyDataFormat, &dSize, &aData->aDescription
);
printf("AudioFileGetProperty returned %d\n", stat);
printASBD(&aData->aDescription);
stat = AudioQueueNewOutput(
&aData->aDescription, bufferCallback, aData, NULL, NULL, 0, &aData->aQueue
);
printf("AudioQueueNewOutput returned %d\n", stat);
aData->pOffset = 0;
for(int i = 0; i < N_BUFFERS; i++) {
// change YES to NO for stale playback
if(YES) {
stat = AudioQueueAllocateBuffer(
aData->aQueue,
aData->aDescription.mBytesPerPacket * N_BUFFER_PACKETS / N_BUFFERS,
&aData->aBuffer[i]
);
} else {
stat = AudioQueueAllocateBuffer(
aData->aQueue,
aData->aDescription.mBytesPerPacket,
&aData->aBuffer[i]
);
}
printf(
"AudioQueueAllocateBuffer returned %d for aBuffer[%d] with capacity %d\n",
stat, i, aData->aBuffer[i]->mAudioDataBytesCapacity
);
bufferCallback(aData, aData->aQueue, aData->aBuffer[i]);
}
UInt32 numFramesPrepared = 0;
stat = AudioQueuePrime(aData->aQueue, 0, &numFramesPrepared);
printf("AudioQueuePrime returned %d with %d frames prepared\n", stat, numFramesPrepared);
stat = AudioQueueStart(aData->aQueue, NULL);
printf("AudioQueueStart returned %d\n", stat);
UInt32 pSize = sizeof(UInt32);
UInt32 isRunning;
stat = AudioQueueGetProperty(
aData->aQueue, kAudioQueueProperty_IsRunning, &isRunning, &pSize
);
printf("AudioQueueGetProperty returned %d\n", stat);
aData->isRunning = !!isRunning;
return aData;
}
void endPlayback(AStreamData* aData) {
OSStatus stat = AudioQueueStop(aData->aQueue, NO);
printf("AudioQueueStop returned %d\n", stat);
}
NSString* getPath() {
// change NO to YES and enter path to hard code
if(NO) {
return @"";
}
char input[512];
printf("Enter file path: ");
scanf("%[^\n]", input);
return [[NSString alloc] initWithCString:input encoding:NSASCIIStringEncoding];
}
int main(int argc, const char* argv[]) {
NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init];
NSURL* path = [NSURL fileURLWithPath:getPath()];
AStreamData* aData = beginPlayback(path);
if(aData->isRunning) {
do {
printf("Queue is running...\n");
[NSThread sleepForTimeInterval:1.0];
} while(aData->isRunning);
endPlayback(aData);
} else {
printf("Playback did not start\n");
}
[pool drain];
return 0;
}
When I bypass the speex encode/decode steps, the raw audio output is correct. What I'd like is for the entire buffer captured by my recording callback to be encoded, decoded, and sent back to the playback loop. The few items I'm unsure of are:
What size to allocate for the enc_buffer and dec_buffer
What length to specify in speex_bits_read_from(SpeexBits* bits,char* bytes,int len)
What max size to specify in int speex_bits_write(SpeexBits* bits,char* bytes,int max_len)
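From the Speex headers, it looks like speex_bits_nbytes() reports how many bytes are pending in a SpeexBits struct, which would at least cover the write side; a sketch of how I assume it would be used (unverified):

// My assumption, not verified: after encoding into enc_bits, ask how many
// bytes speex_bits_write() will need instead of guessing a maximum.
int needed = speex_bits_nbytes(&enc_bits);
int written = speex_bits_write(&enc_bits, enc_buffer, needed);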
Here is my speex codec initialization:
#define SAMPLE_RATE 8000
#define MAX_FRAMES 100
#define FRAME_SIZE 160
enc_state = speex_encoder_init(&speex_nb_mode);
dec_state = speex_decoder_init(&speex_nb_mode);
spx_int32_t tmp;
tmp=5;
speex_encoder_ctl(enc_state, SPEEX_SET_QUALITY, &tmp);
tmp=1;
speex_encoder_ctl(enc_state, SPEEX_SET_COMPLEXITY, &tmp);
speex_encoder_ctl(enc_state, SPEEX_GET_FRAME_SIZE, &enc_frame_size );
speex_decoder_ctl(dec_state, SPEEX_GET_FRAME_SIZE, &dec_frame_size );
tmp = SAMPLE_RATE;
speex_encoder_ctl(enc_state, SPEEX_SET_SAMPLING_RATE, &tmp);
speex_decoder_ctl(dec_state, SPEEX_SET_SAMPLING_RATE, &tmp);
speex_bits_init(&enc_bits);
speex_bits_init(&dec_bits);
//Unsure of this allocation size
enc_buffer = (char*)malloc(sizeof(char)*enc_frame_size*MAX_FRAMES);
dec_buffer = (spx_int16_t*)malloc(sizeof(spx_int16_t)*dec_frame_size*MAX_FRAMES);
My encode/decode methods:
-(char*)encodeAudioBuffer:(spx_int16_t*)audioBuffer withByteSize:(int)numberOfFrames andWriteSizeTo:(int*)inSize{
speex_bits_reset(&enc_bits);
speex_encode_int(enc_state, audioBuffer, &enc_bits);
//Unsure of this third argument. 'numberOfFrames' is the stored number of input frames from my recording callback.
*inSize = speex_bits_write(&enc_bits, enc_buffer, numberOfFrames*enc_frame_size);
return enc_buffer;
}
-(spx_int16_t*)decodeSpeexBits:(char*)encodedAudio withEncodedSize:(int)encodedSize andDecodedSize:(int)decodedSize{
//Unsure of this third argument. 'encodedSize' is the number written to *inSize in the encode method
speex_bits_read_from(&dec_bits, encodedAudio, encodedSize*dec_frame_size);
speex_decode_int(dec_state, &dec_bits, dec_buffer);
return dec_buffer;
}
And they are called like this:
- (void)encodeBufferList:(AudioBufferList*)bufferList withNumberOfFrames:(int)numberOfFrames{
AudioBuffer sourceBuffer = bufferList->mBuffers[0];
int speexSize = 0;
char* encodedAudio = [speexCodec encodeAudioBuffer:(spx_int16_t*)sourceBuffer.mData withByteSize:numberOfFrames andWriteSizeTo:&speexSize];
spx_int16_t* decodedAudio = [speexCodec decodeSpeexBits:encodedAudio withEncodedSize:speexSize andDecodedSize:sourceBuffer.mDataByteSize];
memcpy(audioBuffer.mData, sourceBuffer.mData, numberOfFrames * sizeof(SInt32));
}
where "bufferList" is that returned from my recording/playback callbacks. Can someone verify that I am filling my buffer properly? I saw a similar problem reported here, but couldn't see where in my code I could be doing it wrong:
static OSStatus recordingCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
AudioBuffer buffer;
OSStatus status;
AudioStreamer *input = (__bridge AudioStreamer*) inRefCon;
buffer.mDataByteSize = inNumberFrames * sizeof(SInt16);
buffer.mNumberChannels = 1;
buffer.mData = malloc( inNumberFrames * sizeof(SInt16));
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = buffer;
status = AudioUnitRender([input rioAUInstance], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
[input encodeBufferList:&bufferList withNumberOfFrames:inNumberFrames];
return noErr;
}
static OSStatus playbackCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
AudioStreamer* input = (__bridge AudioStreamer*)inRefCon;
UInt32 size = MIN(ioData->mBuffers[0].mDataByteSize, [input audioBuffer].mDataByteSize);
memcpy(ioData->mBuffers[0].mData, input.audioBuffer.mData, size);
return noErr;
}
The noise produced by the encode/decode as it stands is a grainy static hiss, but it's not completely random information: when I blow into the microphone, I can hear it behind the noise.
Any help putting this issue to bed would be greatly appreciated. I'll probably end up blogging about it once I get everything sorted out; it seems like lots of folks are running into various trivial issues setting up this codec.
So it was a problem in the encode/decode functions: I needed to call speex_encode_int once per frame, as it only handles one frame at a time, and then write them all to the encode buffer like this:
-(char*)encodeAudioBuffer:(spx_int16_t*)audioBuffer withNumberOfFrames:(int)numberOfFrames andWriteSizeTo:(int*)inSize{
speex_bits_reset(&enc_bits);
for(int i = 0; i < numberOfFrames; ++i){
speex_encode_int(enc_state, audioBuffer+i, &enc_bits);
}
*inSize = speex_bits_write(&enc_bits, enc_buffer, numberOfFrames);
return enc_buffer;
}
And similarly for decoding: speex_bits_read_from the encoded buffer, then iterate across dec_bits for each frame, writing into the decoded buffer:
-(spx_int16_t*)decodeSpeexBits:(char*)encodedAudio withEncodedSize:(int)encodedSize andNumberOfFrames:(int)numberOfFrames{
speex_bits_read_from(&dec_bits, encodedAudio, encodedSize);
for(int i = 0; i < numberOfFrames; ++i){
speex_decode_int(dec_state, &dec_bits, dec_buffer+i);
}
return dec_buffer;
}
This still runs quite slowly for me. Even after configuring the Speex library to use fixed-point instead of floating-point calculations, it still runs slower than my audio loop (causing a new sort of choppiness). Any leads on how to get this running faster?
In both of your loops you're passing the audio buffer without taking the frame size into consideration:
for(int i = 0; i < numberOfFrames; ++i){
speex_encode_int(enc_state, audioBuffer+i, &enc_bits);
}
and it should be:
for(int i = 0; i < numberOfFrames; ++i){
speex_encode_int(enc_state, audioBuffer + (i * enc_frame_size), &enc_bits);
}
Hope that helps.
I'm designing a rhythm game which is driven by a MIDI track. The MIDI messages trigger the release of on-screen elements. I am loading the MIDI data from a file and then playing it using a MusicSequence and MusicPlayer.
I understand that MIDI files contain time and key signature information as meta messages at the start of the file. However I've not found a way to retrieve this information from either the MusicPlayer or the MusicSequence.
The information I need is the number of seconds it takes to play a quaver, crotchet, etc. I would expect this to be affected by the time signature and the MusicPlayerPlayRateScalar value.
It looks like this information can be found in the CoreAudio clock, but I've not been able to work out how to access it for a particular music sequence.
Are there any CoreAudio experts out there who know how to do this?
You need to get the tempo track of the midi file and then iterate through it to get the tempo(s).
To get the sequence length you need to find the longest track:
- (MusicTimeStamp)getSequenceLength:(MusicSequence)sequence {
UInt32 tracks;
MusicTimeStamp len = 0.0f;
if (MusicSequenceGetTrackCount(sequence, &tracks) != noErr)
return len;
for (UInt32 i = 0; i < tracks; i++) {
MusicTrack track = NULL;
MusicTimeStamp trackLen = 0;
UInt32 trackLenLen = sizeof(trackLen);
MusicSequenceGetIndTrack(sequence, i, &track);
MusicTrackGetProperty(track, kSequenceTrackProperty_TrackLength, &trackLen, &trackLenLen);
if (len < trackLen)
len = trackLen;
}
return len;
}
// - get the tempo track:
OSStatus result = noErr;
MusicTrack tempoTrack;
result = MusicSequenceGetTempoTrack(sequence, &tempoTrack);
if (noErr != result) {[self printErrorMessage: @"MusicSequenceGetTempoTrack" withStatus: result];}
MusicEventIterator iterator = NULL;
NewMusicEventIterator(tempoTrack, &iterator);
MusicTimeStamp timestamp = 0;
MusicEventType eventType = 0;
const void *eventData = NULL;
UInt32 eventDataSize = 0;
MusicEventIteratorGetEventInfo(iterator, &timestamp, &eventType, &eventData, &eventDataSize);
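Tempo events on the tempo track carry the beats per minute; a sketch of pulling the value out (assuming the ExtendedTempoEvent layout from MusicPlayer.h):

// Tempo events have type kMusicEventType_ExtendedTempo; their data is an
// ExtendedTempoEvent whose bpm field is quarter notes per minute.
if (eventType == kMusicEventType_ExtendedTempo) {
    const ExtendedTempoEvent *tempoEvent = (const ExtendedTempoEvent *)eventData;
    Float64 secondsPerCrotchet = 60.0 / tempoEvent->bpm; // quarter note
    Float64 secondsPerQuaver = secondsPerCrotchet / 2.0;  // eighth note
}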
I have a Multichannel Mixer audio unit playing back audio files in an iOS app, and I need to figure out how to update the app's UI and perform a reset when the render callback hits the end of the longest audio file (which is set up to run on bus 0). As my code below shows, I am trying to use KVO to achieve this, via the boolean variable tapesUnderway. The NSAutoreleasePool is necessary because this Objective-C code runs outside of its normal domain; see http://www.cocoabuilder.com/archive/cocoa/57412-nscfnumber-no-pool-in-place-just-leaking.html.
static OSStatus tapesRenderInput(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
SoundBufferPtr sndbuf = (SoundBufferPtr)inRefCon;
UInt32 bufferFrames = sndbuf[inBusNumber].numFrames;
AudioUnitSampleType *in = sndbuf[inBusNumber].data;
// These mBuffers are the output buffers and are empty; these two lines are just setting the references to them (via outA and outB)
AudioUnitSampleType *outA = (AudioUnitSampleType *)ioData->mBuffers[0].mData;
AudioUnitSampleType *outB = (AudioUnitSampleType *)ioData->mBuffers[1].mData;
UInt32 sample = sndbuf[inBusNumber].sampleNum;
// --------------------------------------------------------------
// Set the start time here
if(inBusNumber == 0 && !tapesFirstRenderPast)
{
printf("Tapes first render past\n");
tapesStartSample = inTimeStamp->mSampleTime;
tapesFirstRenderPast = YES; // MAKE SURE TO RESET THIS ON SONG RESTART
firstPauseSample = tapesStartSample;
}
// --------------------------------------------------------------
// Now process the samples
for(UInt32 i = 0; i < inNumberFrames; ++i)
{
if(inBusNumber == 0)
{
// ------------------------------------------------------
// Bus 0 is the backing track, and is always playing back
outA[i] = in[sample++];
outB[i] = in[sample++]; // For stereo set desc.SetAUCanonical to (2, true) and increment samples in both output calls
lastSample = inTimeStamp->mSampleTime + (Float64)i; // Set the last played sample in order to compensate for pauses
// ------------------------------------------------------
// Use this logic to mark end of tune
if(sample >= (bufferFrames * 2) && !tapesEndPast)
{
// USE KVO TO NOTIFY METHOD OF VALUE CHANGE
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
FuturesEPMedia *futuresMedia = [FuturesEPMedia sharedFuturesEPMedia];
NSNumber *boolNo = [[NSNumber alloc] initWithBool: NO];
[futuresMedia setValue: boolNo forKey: @"tapesUnderway"];
[boolNo release];
[pool release];
tapesEndPast = YES;
}
}
else
{
// ------------------------------------------------------
// The other buses are the open sections, and are synched through the tapesSectionsTimes array
Float64 sectionTime = tapesSectionTimes[inBusNumber] * kGraphSampleRate; // Section time in samples
Float64 currentSample = inTimeStamp->mSampleTime + (Float64)i;
if(!isPaused && !playFirstRenderPast)
{
pauseGap += currentSample - firstPauseSample;
playFirstRenderPast = YES;
pauseFirstRenderPast = NO;
}
if(currentSample > (tapesStartSample + sectionTime + pauseGap) && sample < (bufferFrames * 2))
{
outA[i] = in[sample++];
outB[i] = in[sample++];
}
else
{
outA[i] = 0;
outB[i] = 0;
}
}
}
sndbuf[inBusNumber].sampleNum = sample;
return noErr;
}
At the moment, when this variable is changed it triggers a method in self, but this leads to an unacceptable delay (20-30 seconds) when executed from this render callback (I am thinking because it is Objective-C code running in the high-priority audio thread?). How do I effectively trigger such a change without the delay? (The trigger will change a pause button to a play button and call a reset method to prepare for the next play.)
Thanks
Yes: don't use Objective-C code in the render thread, since it runs at high priority. Instead, store state in memory (a pointer or a struct) and set up a timer on the main thread to poll (check) the value(s) in memory. The timer need not be anywhere near as fast as the render thread and will still be very accurate.
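A minimal sketch of what I mean, using a plain flag and a GCD timer on the main queue (the names here are made up; adapt them to your code):

// The render thread only writes this flag; no Objective-C in the callback.
static volatile BOOL tapesEnded = NO;

// In tapesRenderInput, instead of the KVO block:
//     tapesEnded = YES;

// On the main thread, e.g. after starting the graph:
dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER,
                                                 0, 0, dispatch_get_main_queue());
dispatch_source_set_timer(timer, DISPATCH_TIME_NOW,
                          100 * NSEC_PER_MSEC, 10 * NSEC_PER_MSEC);
dispatch_source_set_event_handler(timer, ^{
    if (tapesEnded) {
        tapesEnded = NO;
        // swap the pause button for a play button and call the reset method
    }
});
dispatch_resume(timer);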
Try this.
Global :
BOOL FlgTotalSampleTimeCollected = NO;
Float64 HighestSampleTime = 0;
Float64 TotalSampleTime = 0;
in -(OSStatus) setUpAUFilePlayer:
AudioStreamBasicDescription fileASBD;
// get the audio data format from the file
UInt32 propSize = sizeof(fileASBD);
CheckError(AudioFileGetProperty(inputFile, kAudioFilePropertyDataFormat,
&propSize, &fileASBD),
"couldn't get file's data format");
UInt64 nPackets;
UInt32 propsize = sizeof(nPackets);
CheckError(AudioFileGetProperty(inputFile, kAudioFilePropertyAudioDataPacketCount,
&propsize, &nPackets),
"AudioFileGetProperty[kAudioFilePropertyAudioDataPacketCount] failed");
Float64 sTime = nPackets * fileASBD.mFramesPerPacket;
if (HighestSampleTime < sTime)
{
HighestSampleTime = sTime;
}
In RenderCallBack :
if (*actionFlags & kAudioUnitRenderAction_PreRender)
{
if (!THIS->FlgTotalSampleTimeCollected)
{
[THIS setFlgTotalSampleTimeCollected:TRUE];
[THIS setTotalSampleTime:(inTimeStamp->mSampleTime + THIS->HighestSampleTime)];
}
}
else if (*actionFlags & kAudioUnitRenderAction_PostRender)
{
if (inTimeStamp->mSampleTime > THIS->TotalSampleTime)
{
NSLog(@"inTimeStamp->mSampleTime :%f",inTimeStamp->mSampleTime);
NSLog(@"audio completed");
[THIS callAudioCompletedMethodHere];
}
}
This worked for me.
Test it on a device.