I am using the SoXR library's variable rate feature to dynamically change the sampling rate of an audio stream in real time. Unfortunately, I have noticed that an unwanted clicking noise is present when changing the rate from 1.0 to a larger value (e.g. 1.01) when testing with a sine wave. I have not noticed any unwanted artifacts when changing from a value larger than 1.0 back to 1.0. I looked at the waveform it was producing, and it appears that a few samples right at the rate change are transposed incorrectly.
Here's a picture of an example: a stereo 440 Hz sine wave stored as signed 16-bit interleaved samples:
I was also unable to find any documentation covering the variable rate feature beyond the fifth code example. Here is my initialization code:
bool DynamicRateAudioFrameQueue::initialize(uint32_t sampleRate, uint32_t numChannels)
{
    mSampleRate = sampleRate;
    mNumChannels = numChannels;
    mRate = 1.0;
    mGlideTimeInMs = 0;

    // Initialize buffer
    size_t initialBufferSize = 100 * sampleRate * numChannels / 1000; // 100 ms
    pFifoSampleBuffer = new FiFoBuffer<int16_t>(initialBufferSize);

    soxr_error_t error;

    // Use signed int16 with interleaved channels
    soxr_io_spec_t ioSpec = soxr_io_spec(SOXR_INT16_I, SOXR_INT16_I);

    // "When creating a var-rate resampler, q_spec must be set as follows:" - example code
    // Using SOXR_VR makes sense, but I'm not sure if the quality can be altered when using var-rate
    soxr_quality_spec_t qualitySpec = soxr_quality_spec(SOXR_HQ, SOXR_VR);

    // The var-rate io-spec is undocumented beyond a single code example, which states:
    // "The ratio of the given input rate and output rates must equate to the
    // maximum I/O ratio that will be used."
    // My tests show this is not true.
    double inRate = 1.0;
    double outRate = 1.0;

    mSoxrHandle = soxr_create(inRate, outRate, mNumChannels, &error, &ioSpec, &qualitySpec, NULL);
    if (error == 0) // soxr_error_t == 0: no error
    {
        mInitialized = true;
        return true;
    }
    else
    {
        return false;
    }
}
Any idea what may be causing this? Or do you have a suggestion for an alternative library capable of variable-rate audio resampling in real time?
After speaking with the developer of the SoXR library, I was able to resolve this issue by adjusting the maximum-ratio parameters in the soxr_create method call. The developer's response can be found here.
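For reference, here is a minimal sketch of the resulting setup, based on SoXR's bundled variable-rate example (examples/5-variable-rate.c) and my understanding of the developer's suggestion; the maxRatio value and the slew lengths below are illustrative assumptions, not the values from my application:

// Create the var-rate resampler so that inRate/outRate equals the largest
// I/O ratio that will ever be requested (maxRatio = 2.0 is an assumption).
double maxRatio = 2.0;
soxr_io_spec_t ioSpec = soxr_io_spec(SOXR_INT16_I, SOXR_INT16_I);
soxr_quality_spec_t qualitySpec = soxr_quality_spec(SOXR_HQ, SOXR_VR);
soxr_error_t error;
soxr_t soxr = soxr_create(maxRatio, 1.0, numChannels, &error, &ioSpec, &qualitySpec, NULL);

// Set the initial ratio, then change it on the fly during processing.
// The third argument is the slew length: the number of output samples
// over which the ratio glides to its new value (0 = immediate).
soxr_set_io_ratio(soxr, 1.0, 0);
// ... later, e.g. when speeding playback up slightly:
soxr_set_io_ratio(soxr, 1.01, 1000);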
I am unable to compile the examples hello_world_arcada and micro_speech_arcada, shown below, from the Adafruit website found here, on my Circuit Playground Bluefruit microcontroller.
I installed the Adafruit_Tensorflow_Lite library as mentioned on the site; however, it turns out that the examples cannot compile because they have numerous missing files. So I downloaded this TensorFlow GitHub repo and then transferred the missing files into the Adafruit_Tensorflow_Lite library.
I am now facing an error for the missing files am_bsp.h, am_mcu_apollo.h, and am_util.h; I cannot locate these files in the repo or on Google. [Note: I have found the am_bsp.h file in this repo,
but it still doesn't compile.]
Can anyone assist me in locating these files, or suggest a way to compile the example code mentioned on the Adafruit website?
The error about the missing file am_bsp.h when compiling with the Arduino IDE is shown in the picture below:
My code is shown below:
#include <TensorFlowLite.h>
#include "Adafruit_TFLite.h"
#include "Adafruit_Arcada.h"
#include "output_handler.h"
#include "sine_model_data.h"

// Create an area of memory to use for input, output, and intermediate arrays.
// Finding the minimum value for your model may require some trial and error.
const int kTensorAreaSize (2 * 1024);

// This constant represents the range of x values our model was trained on,
// which is from 0 to (2 * Pi). We approximate Pi to avoid requiring additional
// libraries.
const float kXrange = 2.f * 3.14159265359f;

// Will need tuning for your chipset
const int kInferencesPerCycle = 200;
int inference_count = 0;

Adafruit_Arcada arcada;
Adafruit_TFLite ada_tflite(kTensorAreaSize);

// The name of this function is important for Arduino compatibility.
void setup() {
  Serial.begin(115200);
  //while (!Serial) yield();

  arcada.arcadaBegin();
  // If we are using TinyUSB we will have the filesystem show up!
  arcada.filesysBeginMSD();
  arcada.filesysListFiles();

  // Set the display to be on!
  arcada.displayBegin();
  arcada.setBacklight(255);
  arcada.display->fillScreen(ARCADA_BLUE);

  if (! ada_tflite.begin()) {
    arcada.haltBox("Failed to initialize TFLite");
    while (1) yield();
  }

  if (arcada.exists("model.tflite")) {
    arcada.infoBox("Loading model.tflite from disk!");
    if (! ada_tflite.loadModel(arcada.open("model.tflite"))) {
      arcada.haltBox("Failed to load model file");
    }
  } else if (! ada_tflite.loadModel(g_sine_model_data)) {
    arcada.haltBox("Failed to load default model");
  }
  Serial.println("\nOK");

  // Keep track of how many inferences we have performed.
  inference_count = 0;
}

// The name of this function is important for Arduino compatibility.
void loop() {
  // Calculate an x value to feed into the model. We compare the current
  // inference_count to the number of inferences per cycle to determine
  // our position within the range of possible x values the model was
  // trained on, and use this to calculate a value.
  float position = static_cast<float>(inference_count) /
                   static_cast<float>(kInferencesPerCycle);
  float x_val = position * kXrange;

  // Place our calculated x value in the model's input tensor
  ada_tflite.input->data.f[0] = x_val;

  // Run inference, and report any error
  TfLiteStatus invoke_status = ada_tflite.interpreter->Invoke();
  if (invoke_status != kTfLiteOk) {
    ada_tflite.error_reporter->Report("Invoke failed on x_val: %f\n",
                                      static_cast<double>(x_val));
    return;
  }

  // Read the predicted y value from the model's output tensor
  float y_val = ada_tflite.output->data.f[0];

  // Output the results. A custom HandleOutput function can be implemented
  // for each supported hardware target.
  HandleOutput(ada_tflite.error_reporter, x_val, y_val);

  // Increment the inference_counter, and reset it if we have reached
  // the total number per cycle
  inference_count += 1;
  if (inference_count >= kInferencesPerCycle) inference_count = 0;
}
Try installing the library from the link below; it should solve your problem:
https://github.com/tensorflow/tflite-micro-arduino-examples#how-to-install
I have an application using PETSc. For performance monitoring of (near-)production runs, I'd like to log a small number of values, some generated by PETSc, some not.
Now I wonder: how can I read the time value out of PetscEventPerfInfo to write it into my file? I can't find a documentation entry for PetscEventPerfInfo, so I'm not sure whether I'm even supposed to touch it.
However, I found the following method which basically reveals the structure of PetscEventPerfInfo:
PetscErrorCode EventPerfInfoClear(PetscEventPerfInfo *eventInfo)
{
  PetscFunctionBegin;
  eventInfo->id            = -1;
  eventInfo->active        = PETSC_TRUE;
  eventInfo->visible       = PETSC_TRUE;
  eventInfo->depth         = 0;
  eventInfo->count         = 0;
  eventInfo->flops         = 0.0;
  eventInfo->flops2        = 0.0;
  eventInfo->flopsTmp      = 0.0;
  eventInfo->time          = 0.0;
  eventInfo->time2         = 0.0;
  eventInfo->timeTmp       = 0.0;
  eventInfo->numMessages   = 0.0;
  eventInfo->messageLength = 0.0;
  eventInfo->numReductions = 0.0;
  PetscFunctionReturn(0);
}
I have a strong guess that it's just eventInfo->time, but I'm absolutely not sure whether it's safe to read it, or whether there's an "official" way to read from that structure.
So, what should I do if I just want to read the time value into a variable for further usage?
Good news: there is an example in PETSc, src/dm/impls/plex/examples/tests/ex9.c, which makes use of the time field of PetscEventPerfInfo to print performance-related logs!
The structure PetscEventPerfInfo is defined in petsc-3.7.6/include/petsclog.h.
There are plenty of comments in the file.
The field time is indeed defined as:
PetscLogDouble time, time2, timeTmp; /* The time and time^2 taken for this event */
There are clues in eventlog.c that time contains the execution time. Indeed, in the function PetscLogEventBeginComplete() there is an eventPerfLog->eventInfo[event].time -= curTime;, which matches the eventPerfLog->eventInfo[event].time += curTime; of PetscLogEventEndComplete().
Consequently, the value of the time field is meaningful if and only if both PetscLogEventBeginComplete() and PetscLogEventEndComplete() have been called, or both PetscLogEventBeginDefault() and PetscLogEventEndDefault().
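If you'd rather not reach into the log data structures by hand, PETSc also exposes a query function, PetscLogEventGetPerfInfo(), that copies an event's performance info out of the log. A minimal sketch (the class and event names are illustrative; it assumes PetscInitialize() has been called and logging is active, e.g. via -log_view or PetscLogDefaultBegin()):

// Sketch: read the accumulated wall-clock time of a logged event.
PetscClassId       classid;
PetscLogEvent      myEvent;
PetscEventPerfInfo info;

PetscClassIdRegister("MyClass", &classid);            // illustrative class
PetscLogEventRegister("MyEvent", classid, &myEvent);  // illustrative event

PetscLogEventBegin(myEvent, 0, 0, 0, 0);
/* ... code to be timed ... */
PetscLogEventEnd(myEvent, 0, 0, 0, 0);

// Query the current stage; info.time then holds the accumulated time in
// seconds for this event, ready to be written to your own file.
PetscLogEventGetPerfInfo(PETSC_DETERMINE, myEvent, &info);
PetscPrintf(PETSC_COMM_WORLD, "MyEvent took %g s\n", (double)info.time);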
I'm trying to control my LED with 256 (0-255) different levels of brightness. My controller is set to 80 MHz and runs an RTOS. I'm setting the clock module to interrupt every 5 microseconds and the brightness, for example, to 150. The LED is dimming, but I'm not sure whether this really produces 256 distinct levels.
int counter = 1;
int brightness = 0;

void SetUp(void)
{
    SysCtlClockSet(SYSCTL_SYSDIV_2_5|SYSCTL_USE_PLL|SYSCTL_OSC_MAIN|SYSCTL_XTAL_16MHZ);
    GPIOPinTypeGPIOOutput(PORT_4, PIN_1);

    Clock_Params clockParams;
    Clock_Handle myClock;
    Error_Block eb;
    Error_init(&eb);
    Clock_Params_init(&clockParams);
    clockParams.period = 400; // every 5 microseconds
    clockParams.startFlag = TRUE;
    myClock = Clock_create(myHandler1, 400, &clockParams, &eb);
    if (myClock == NULL) {
        System_abort("Clock create failed");
    }
}

void myHandler1 (){
    brightness = 150;
    while(1){
        counter = (++counter) % 256;
        if (counter < brightness){
            GPIOPinWrite(PORT_4, PIN_1, PIN_1);
        }else{
            GPIOPinWrite(PORT_4, PIN_1, 0);
        }
    }
}
A 5 microsecond interrupt is a tall ask for an 80 MHz processor and will leave little time for other work. If you are not doing other work, you need not use interrupts at all - you could simply poll the clock counter; even then, it would be a lot of processor to throw at a rather trivial task, and the RTOS is overkill too.
A better way to perform your task is to use the timer's PWM (Pulse Width Modulation) feature. You will then be able to accurately control the brightness with zero software overhead; leaving your processor to do more interesting things.
Using a PWM you could manage with a far lower performance processor if LED control is all it will do.
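For illustration, here is a minimal sketch of hardware PWM using TivaWare driverlib (your SysCtlClockSet call suggests a TM4C/Stellaris part); the PWM module, generator, and pin mapping below are assumptions for the example, not your actual wiring - check the datasheet for the pin your LED is really on:

// Sketch: hardware PWM via TivaWare driverlib; module/generator/pin are
// illustrative (PB6 routed to M0PWM0 on a TM4C123, for example).
#include <stdint.h>
#include <stdbool.h>
#include "inc/hw_memmap.h"
#include "driverlib/sysctl.h"
#include "driverlib/gpio.h"
#include "driverlib/pin_map.h"
#include "driverlib/pwm.h"

void PwmLedInit(void)
{
    SysCtlPeripheralEnable(SYSCTL_PERIPH_PWM0);
    SysCtlPeripheralEnable(SYSCTL_PERIPH_GPIOB);    // assumed LED port
    GPIOPinConfigure(GPIO_PB6_M0PWM0);              // assumed pin mux
    GPIOPinTypePWM(GPIO_PORTB_BASE, GPIO_PIN_6);

    PWMGenConfigure(PWM0_BASE, PWM_GEN_0,
                    PWM_GEN_MODE_DOWN | PWM_GEN_MODE_NO_SYNC);
    PWMGenPeriodSet(PWM0_BASE, PWM_GEN_0, 256);     // 256-step resolution
    PWMGenEnable(PWM0_BASE, PWM_GEN_0);
    PWMOutputState(PWM0_BASE, PWM_OUT_0_BIT, true);
}

void PwmLedSetBrightness(uint8_t brightness)
{
    // Duty cycle = brightness/256 of the period; no interrupts involved.
    PWMPulseWidthSet(PWM0_BASE, PWM_OUT_0, brightness);
}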
If you must use an interrupt/GPIO approach (for example, your timer does not support PWM generation, or the LED is not connected to a PWM-capable pin), then it would be more efficient to set the timer incrementally. For example, for a mark:space ratio of 150:105, you would set the timer for 150*5us (750us), toggle the GPIO on the interrupt, then set the timer to 105*5us (525us).
A major problem with your solution is the interrupt handler does not return - interrupts must run to completion and be as short as possible and preferably deterministic in execution time.
Without using hardware PWM, the following based on your code fragment is probably closer to what you need:
#define PWM_QUANTA 400 // 5us

static volatile uint8_t brightness = 150 ;
static Clock_Handle myClock ;

void setBrightness( uint8_t br )
{
    brightness = br ;
}

void SetUp(void)
{
    SysCtlClockSet(SYSCTL_SYSDIV_2_5|SYSCTL_USE_PLL|SYSCTL_OSC_MAIN|SYSCTL_XTAL_16MHZ);
    GPIOPinTypeGPIOOutput(PORT_4, PIN_1);

    Clock_Params clockParams;
    Error_Block eb;
    Error_init(&eb);
    Clock_Params_init(&clockParams);
    clockParams.period = brightness * PWM_QUANTA ;
    clockParams.startFlag = TRUE;
    myClock = Clock_create(myHandler1, 400, &clockParams, &eb);
    if (myClock == NULL)
    {
        System_abort("Clock create failed");
    }
}

void myHandler1(void)
{
    static int pin_state = 1 ;

    // Toggle pin state and timer period
    if( pin_state == 0 )
    {
        pin_state = 1 ;
        Clock_setPeriod( myClock, brightness * PWM_QUANTA ) ;
    }
    else
    {
        pin_state = 0 ;
        Clock_setPeriod( myClock, (255 - brightness) * PWM_QUANTA ) ;
    }

    // Set pin state (GPIOPinWrite takes the pin mask, not 0/1)
    GPIOPinWrite(PORT_4, PIN_1, pin_state ? PIN_1 : 0) ;
}
At the urging of Clifford, I am elaborating on an alternate strategy for reducing the load of software dimming, as servicing interrupts every 400 clock cycles may prove difficult. The preferred solution should, of course, be to use hardware pulse-width modulation whenever available.
One option is to set interrupts only at the PWM edges. Unfortunately, this strategy tends to introduce races and drift as time elapses while adjustments are taking place, and it scales poorly to multiple channels.
Alternatively, we may switch from pulse-width to delta-sigma modulation. There is a fair bit of theory behind the concept, but in this context it boils down to toggling the pin on and off as quickly as possible while maintaining an average on-time proportional to the dimming level. As a consequence, the interrupt frequency may be reduced without bringing the overall switching frequency down to visible levels.
Below is an example implementation:
// Brightness to display. More than 8 bits are required to handle the full
// 257-step range; the resolution could of course be increased if desired.
volatile unsigned int brightness = 150;

void Interrupt(void) {
    // Increment the accumulator by the desired brightness
    static uint8_t accum;
    unsigned int addend = brightness;
    accum += addend;

    // Light the LED pin on overflow, relying on native integer wraparound.
    // Consequently, higher brightness values translate to keeping the LED
    // lit more often. (GPIOPinWrite expects the pin mask, so map the
    // overflow flag to PIN_1.)
    GPIOPinWrite(PORT_4, PIN_1, (accum < addend) ? PIN_1 : 0);
}
A limitation is that the switching frequency decreases with the distance from 50% brightness. Thus the final N steps may need to be clamped to 0 or 256 to prevent visible flicker.
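For instance, a minimal sketch of such clamping (the threshold of 4 steps is an arbitrary assumption; pick it based on the flicker you can tolerate):

// Clamp the extremes: levels whose switching frequency would drop into
// the visible range snap to fully off or fully on instead.
unsigned int level = requested_level;   // 0..256, hypothetical input
if (level < 4)
    level = 0;                          // fully off
else if (level > 256 - 4)
    level = 256;                        // fully on
brightness = level;                     // feed the accumulator above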
Oh, and take care if switching losses are a concern in your application.
I'm doing some audio programming for a client and I've come across an issue which I just don't understand.
I have a render callback which is called repeatedly by CoreAudio. Inside this callback I have the following:
// Get the audio sample data
AudioSampleType *outA = (AudioSampleType *)ioData->mBuffers[0].mData;
Float32 data;

// Loop over the samples
for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
    // Convert from SInt16 to Float32 just to prove it's possible
    data = (Float32) outA[frame] / (Float32) 32768;

    // Convert back to SInt16 to show that everything works as expected
    outA[frame] = (SInt16) round(data * 32768);
}
This works as expected which shows there aren't any unexpected rounding errors.
The next thing I want to do is add a small delay. I add a global variable to the class:
i.e. below the #implementation line
Float32 last = 0;
Then I use this variable to get a one frame delay:
// Get the audio sample data
AudioSampleType *outA = (AudioSampleType *)ioData->mBuffers[0].mData;
Float32 data;
Float32 next;

// Loop over the samples
for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
    // Convert from SInt16 to Float32 just to prove it's possible
    data = (Float32) outA[frame] / (Float32) 32768;

    next = last;
    last = data;

    // Convert back to SInt16 to show that everything works as expected
    outA[frame] = (SInt16) round(next * 32768);
}
This time round there's a strange audio distortion on the signal.
I just can't see what I'm doing wrong! Any advice would be greatly appreciated.
It seems that you have unintentionally introduced a phaser effect on your audio.
This is because you're only delaying one channel of your audio, so the left channel ends up one frame behind the right channel. That causes odd frequency cancellations/amplifications, which would fit your description of "a strange audio distortion".
Try applying the effect to both channels:
AudioSampleType *outA = (AudioSampleType *)ioData->mBuffers[0].mData;
AudioSampleType *outB = (AudioSampleType *)ioData->mBuffers[1].mData;
// apply the same effect to outB as you did to outA
This assumes that you are working with stereo audio (i.e ioData->mNumberBuffers == 2)
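To make that concrete, here is a minimal sketch of the loop with the one-frame delay applied to both channels, assuming two non-interleaved SInt16 buffers; the per-channel state names lastA/lastB are illustrative:

// Sketch: per-channel one-frame delay. Assumes ioData->mNumberBuffers == 2
// (non-interleaved stereo); lastA/lastB hold each channel's previous sample.
static Float32 lastA = 0;
static Float32 lastB = 0;

AudioSampleType *outA = (AudioSampleType *)ioData->mBuffers[0].mData;
AudioSampleType *outB = (AudioSampleType *)ioData->mBuffers[1].mData;

for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
    Float32 dataA = (Float32) outA[frame] / (Float32) 32768;
    Float32 dataB = (Float32) outB[frame] / (Float32) 32768;

    // Output the previous frame's sample, then remember the current one
    Float32 nextA = lastA;  lastA = dataA;
    Float32 nextB = lastB;  lastB = dataB;

    outA[frame] = (SInt16) round(nextA * 32768);
    outB[frame] = (SInt16) round(nextB * 32768);
}

(And, per the note below, state like lastA/lastB is better carried in via inRefCon than kept in globals or statics.)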
As a matter of style, it's (IMO) a bad idea to use a global like your last variable in a render callback. Use the inRefCon to pass in proper context (either as a single variable or as a struct if necessary). This likely isn't related to the problem you're having, though.
OSStatus SetupBuffers(BG_FileInfo *inFileInfo)
{
    int numBuffersToQueue = kNumberBuffers;
    UInt32 maxPacketSize;
    UInt32 size = sizeof(maxPacketSize);

    // we need to calculate how many packets we read at a time, and how big a buffer we need
    // we base this on the size of the packets in the file and an approximate duration for each buffer
    // first check to see what the max size of a packet is - if it is bigger
    // than our allocation default size, that needs to become larger
    OSStatus result = AudioFileGetProperty(inFileInfo->mAFID, kAudioFilePropertyPacketSizeUpperBound, &size, &maxPacketSize);
    AssertNoError("Error getting packet upper bound size", end);

    bool isFormatVBR = (inFileInfo->mFileFormat.mBytesPerPacket == 0 || inFileInfo->mFileFormat.mFramesPerPacket == 0);
    CalculateBytesForTime(inFileInfo->mFileFormat, maxPacketSize, 0.5/*seconds*/, &mBufferByteSize, &mNumPacketsToRead);

    // if the file is smaller than the capacity of all the buffer queues, always load it at once
    if ((mBufferByteSize * numBuffersToQueue) > inFileInfo->mFileDataSize)
        inFileInfo->mLoadAtOnce = true;

    if (inFileInfo->mLoadAtOnce)
    {
        UInt64 theFileNumPackets;
        size = sizeof(UInt64);
        result = AudioFileGetProperty(inFileInfo->mAFID, kAudioFilePropertyAudioDataPacketCount, &size, &theFileNumPackets);
        AssertNoError("Error getting packet count for file", end); // ***>>>> this is where Xcode says undefined <<<<***

        mNumPacketsToRead = (UInt32)theFileNumPackets;
        mBufferByteSize = inFileInfo->mFileDataSize;
        numBuffersToQueue = 1;
    }
Here is the exact error:
label 'end' used but not defined
I get that error twice.
If you look at the SoundEngine.cpp source that the snippet comes from, you'll see it's defined on the very next line:
end:
    return result;
It's a label that execution jumps to when there's an error.
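For context, AssertNoError in SoundEngine.cpp is a macro that checks the result of the previous call and jumps to the label passed as its second argument; a sketch of that pattern follows (the exact wording of the macro in your copy of SoundEngine.cpp may differ):

// Sketch of the assert-and-goto error-handling pattern used in
// SoundEngine.cpp: if the previous call failed, log and jump to the label.
#define AssertNoError(inMessage, inHandler) \
    if (result != noErr) { \
        printf("%s: %d\n", inMessage, (int)result); \
        goto inHandler; \
    }

// So SetupBuffers needs a matching label before it returns:
//     end:
//         return result;
// Without that label in scope, the compiler reports
// "label 'end' used but not defined".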
Hmm, the only place I can find AssertNoError is in Technical Note TN2113, and it has a completely different format: AssertNoError(theError, "couldn't unregister the ABL"); Where is AssertNoError defined?
User @Jeremy P mentions this document as well.