I am building a Flutter app that needs to record audio and predict some label using a TFLite model I built. To link the audio recording with TFLite, I use the flutter_tflite_audio plugin (https://github.com/Caldarie/flutter_tflite_audio).
The TensorFlow model works on Colab, but when I launch the app and inference runs, i.e., when it calls interpreter.invoke(), the following error occurs:
TensorFlow Lite Error: tensorflow/lite/kernels/reshape.cc:58 stretch_dim != -1 (0 != -1)
TensorFlow Lite Error: Node number 26 (RESHAPE) failed to prepare.
Failed to invoke the interpreter with error: Must call allocateTensors().
Fatal error: Unexpectedly found nil while implicitly unwrapping an Optional value: file tflite_audio/SwiftTfliteAudioPlugin.swift, line 290
* thread #2, queue = 'conversionQueue', stop reason = Fatal error: Unexpectedly found nil while implicitly unwrapping an Optional value
frame #0: 0x00000001a672ee08 libswiftCore.dylib`_swift_runtime_on_report
libswiftCore.dylib`_swift_runtime_on_report:
-> 0x1a672ee08 <+0>: ret
libswiftCore.dylib`_swift_reportToDebugger:
0x1a672ee0c <+0>: b 0x1a672ee08 ; _swift_runtime_on_report
libswiftCore.dylib`_swift_shouldReportFatalErrorsToDebugger:
0x1a672ee10 <+0>: adrp x8, 341475
0x1a672ee14 <+4>: ldrb w0, [x8, #0x7c8]
Target 0: (Runner) stopped.
Lost connection to device.
This error message appears even though I added a call to allocateTensors() in the SwiftTfliteAudioPlugin.swift file, here:
var interval: TimeInterval!
var outputTensor: Tensor!
do {
    // Copy the `[Int16]` buffer data as an array of Floats to the audio buffer input Tensor.
    let audioBufferData = Data(copyingBufferOf: buffer.map { Float($0) / maxInt16AsFloat32 })
    try interpreter.copy(audioBufferData, toInputAt: 0)

    // I added this line
    try interpreter.allocateTensors()

    // Calculate inference time
    let startDate = Date()
    try interpreter.invoke() // required!!! Do not touch
    interval = Date().timeIntervalSince(startDate) * 1000

    // Get the output `Tensor` to process the inference results.
    outputTensor = try interpreter.output(at: 0)
    print(outputTensor as Any)
} catch let error {
    print("Failed to invoke the interpreter with error: \(error.localizedDescription)")
}
Here is the problematic node of the TFLite model, as viewed in Netron.
It looks like it is only squeezing the first dimension, so maybe it fails because, as you can see in the summary of my model, the first dimension is None. I tried some tricks to avoid having this None, but I am not familiar enough with TensorFlow to be sure the operations I am doing are valid.
I have boiled the model down to a minimal size, and this node sits between these two lines of code, so I suspect tf.signal.stft is doing the reshaping, but I have no idea:
spectrograms = tf.signal.stft(waveforms,
                              frame_length=self.fft_size,
                              frame_step=self.hop_size,
                              pad_end=False)
magnitude_spectrograms = tf.abs(spectrograms)
Can anyone help on this issue?
Thanks!
As stated in the error message, you need to call allocateTensors() first, i.e., before copying the input data into the interpreter, not after.
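In the Swift code above, that means moving the allocateTensors() call ahead of the copy, roughly like this (a sketch using the question's own variables and the TensorFlow Lite Swift API):

do {
    // Allocate buffers for all tensors BEFORE writing the input data.
    try interpreter.allocateTensors()
    let audioBufferData = Data(copyingBufferOf: buffer.map { Float($0) / maxInt16AsFloat32 })
    try interpreter.copy(audioBufferData, toInputAt: 0)
    try interpreter.invoke()
    outputTensor = try interpreter.output(at: 0)
} catch let error {
    print("Failed to invoke the interpreter with error: \(error.localizedDescription)")
}

If the RESHAPE error persists, it is most likely the None (dynamic) batch dimension you describe: the converted model then contains a reshape whose first dimension is unknown at runtime. A common fix is to freeze the batch dimension to 1 at conversion time; a minimal sketch, assuming a Keras model named model and a hypothetical fixed input length NUM_SAMPLES:

import tensorflow as tf

# Wrap the model in a tf.function with a fully specified input signature,
# so the batch dimension is 1 instead of None.
run_model = tf.function(lambda x: model(x))
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec([1, NUM_SAMPLES], tf.float32))

converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
tflite_model = converter.convert()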
I am unable to compile the examples hello_world_arcada and micro_speech_arcada, shown below, from the Adafruit website found here, on my Circuit Playground Bluefruit microcontroller.
I installed the Adafruit_Tensorflow_Lite library as mentioned on the site; however, it turns out the examples cannot compile because they have numerous missing files. So I downloaded this TensorFlow GitHub repo and transferred the missing files into the Adafruit_Tensorflow_Lite library.
I am now facing errors for these missing files: am_bsp.h, am_mcu_apollo.h, am_util.h. I cannot locate them in the repo or on Google. [Note: I have found the am_bsp.h file in this repo, but it still doesn't compile.]
Can anyone assist me in locating these files, or with a way to compile the example code mentioned on the Adafruit website?
The error about the missing file am_bsp.h when compiling with the Arduino IDE is shown in the pic below.
My code is shown below:
#include <TensorFlowLite.h>
#include "Adafruit_TFLite.h"
#include "Adafruit_Arcada.h"
#include "output_handler.h"
#include "sine_model_data.h"
// Create an area of memory to use for input, output, and intermediate arrays.
// Finding the minimum value for your model may require some trial and error.
const int kTensorAreaSize (2 * 1024);
// This constant represents the range of x values our model was trained on,
// which is from 0 to (2 * Pi). We approximate Pi to avoid requiring additional
// libraries.
const float kXrange = 2.f * 3.14159265359f;
// Will need tuning for your chipset
const int kInferencesPerCycle = 200;
int inference_count = 0;
Adafruit_Arcada arcada;
Adafruit_TFLite ada_tflite(kTensorAreaSize);
// The name of this function is important for Arduino compatibility.
void setup() {
  Serial.begin(115200);
  //while (!Serial) yield();
  arcada.arcadaBegin();
  // If we are using TinyUSB we will have the filesystem show up!
  arcada.filesysBeginMSD();
  arcada.filesysListFiles();
  // Set the display to be on!
  arcada.displayBegin();
  arcada.setBacklight(255);
  arcada.display->fillScreen(ARCADA_BLUE);
  if (! ada_tflite.begin()) {
    arcada.haltBox("Failed to initialize TFLite");
    while (1) yield();
  }
  if (arcada.exists("model.tflite")) {
    arcada.infoBox("Loading model.tflite from disk!");
    if (! ada_tflite.loadModel(arcada.open("model.tflite"))) {
      arcada.haltBox("Failed to load model file");
    }
  } else if (! ada_tflite.loadModel(g_sine_model_data)) {
    arcada.haltBox("Failed to load default model");
  }
  Serial.println("\nOK");
  // Keep track of how many inferences we have performed.
  inference_count = 0;
}
// The name of this function is important for Arduino compatibility.
void loop() {
  // Calculate an x value to feed into the model. We compare the current
  // inference_count to the number of inferences per cycle to determine
  // our position within the range of possible x values the model was
  // trained on, and use this to calculate a value.
  float position = static_cast<float>(inference_count) /
                   static_cast<float>(kInferencesPerCycle);
  float x_val = position * kXrange;
  // Place our calculated x value in the model's input tensor
  ada_tflite.input->data.f[0] = x_val;
  // Run inference, and report any error
  TfLiteStatus invoke_status = ada_tflite.interpreter->Invoke();
  if (invoke_status != kTfLiteOk) {
    ada_tflite.error_reporter->Report("Invoke failed on x_val: %f\n",
                                      static_cast<double>(x_val));
    return;
  }
  // Read the predicted y value from the model's output tensor
  float y_val = ada_tflite.output->data.f[0];
  // Output the results. A custom HandleOutput function can be implemented
  // for each supported hardware target.
  HandleOutput(ada_tflite.error_reporter, x_val, y_val);
  // Increment the inference_counter, and reset it if we have reached
  // the total number per cycle
  inference_count += 1;
  if (inference_count >= kInferencesPerCycle) inference_count = 0;
}
Try installing the library from the link below; it should solve your problem:
https://github.com/tensorflow/tflite-micro-arduino-examples#how-to-install
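If you prefer the command line, the README linked above describes (at the time of writing) cloning the repo straight into your Arduino libraries folder; assuming the default sketchbook location, that would look something like:

cd ~/Arduino/libraries
git clone https://github.com/tensorflow/tflite-micro-arduino-examples Arduino_TensorFlowLite

Then restart the Arduino IDE and open the examples from File > Examples.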
I have trained a model based on the Keras lstm_text_generation example, and I would like to perform predictions on this model with front-end JavaScript.
First I tried keras.js; however, it only takes 1-dimensional Float32Array vectors in its prediction function, so I am unable to use it, since the lstm_text_generation example uses a multidimensional array of shape (1, maxlen, len(chars)).
Next I tried tensorflow.js, using this tutorial to port my Keras model to a model.json file. Everything seems to work fine up to the point where I perform the actual prediction, where it freezes and gives me the warning: Orthogonal initializer is being called on a matrix with more than 2000 (65536) elements: Slowness may result.
I noticed that in many of the tensorflow.js examples people convert their arrays to tensor2d, but I did this and it had no effect on the performance of my code.
For anyone curious, here is the javascript code I wrote...
async function predict_from_model() {
    const model = await tf.loadModel('https://raw.githubusercontent.com/98mprice/death-grips-lyrics-generator/master/model.json');
    try {
        var seed = "test test test test test test test test"
        var maxlen = 40
        for (var i = 0; i < 1; i++) {
            var x_pred = nj.zeros([1, maxlen, 61]).tolist()
            for (var j = 0; j < seed.length; j++) {
                x_pred[0][j][char_indices[seed.charAt(j)]] = 1
            }
            console.log("about to predict")
            const preds = model.predict(x_pred) // gets stuck here
            console.log("prediction done")
        }
    } catch (err) {
        // handle error
    }
}
...to perform the same function as on_epoch_end() in the lstm_text_generation.py example. The output of x_pred is the same in both the Python and JavaScript code, so I don't think the issue lies there.
I think I need to make some optimisations in tensorflow.js, but I'm not sure what. Does anyone know how to fix any of the issues above, and/or another JavaScript library that would work for my purpose?
x_pred needs to be a tensor. The simplest way to create a tensor with custom values is tf.buffer(), which can be initialized with a TypedArray, or modified via .set(). The latter is better for you, because most of your values are 0 and buffers are filled with zeros by default. To turn a buffer into a tensor, just use .toTensor().
So it would look something like this:
var x_pred = tf.buffer([1, maxlen, 61]);
for (var j = 0; j < seed.length; j++) {
    x_pred.set(1, 0, j, char_indices[seed.charAt(j)]);
}
console.log("about to predict")
const preds = model.predict(x_pred.toTensor());
console.log("prediction done")
One of my devices (an Nvidia GeForce GT 650M GPU) keeps giving me a weird ptxas application error saying "Arguments mismatch for instruction 'mov'" when I try to build a cl_program on that device. It's the only one of my 3 devices that gives me this error; my CPU and my other GPU (Intel HD 4000) do not give me this error at all.
Here's an example of a function that causes this error to happen. It's a helper function I use inside one of my kernels:
// Calculate the dot product of two vectors
float Dot(Vector v1, Vector v2)
{
    return (v1.x*v2.x + v1.y*v2.y + v1.z*v2.z);
}
First I tried splitting up the work into something like this:
// Calculate the dot product of two vectors
float Dot(Vector v1, Vector v2)
{
    float a = v1.x*v2.x;
    float b = v1.y*v2.y;
    float c = v1.z*v2.z;
    float result = a + b + c;
    return result;
}
But that also gives me the same error. Interestingly enough, if I simply set result = 5.0f and return that, it magically compiles and runs:
// THIS WILL COMPILE AND RUN
float Dot(Vector v1, Vector v2)
{
    float a = v1.x*v2.x;
    float b = v1.y*v2.y;
    float c = v1.z*v2.z;
    float result = 5.0f; // IGNORE THE CALCULATION. JUST MAKE IT 5
    return result;
}
So I have no idea what's going on. My Dot function isn't the only one affected, but one of several. Is my Nvidia card defective?
EDIT: Here is the log I get from clGetProgramBuildInfo after the build fails:
ptxas application ptx input, line 703; error : Arguments mismatch for instruction 'mov'
ptxas application ptx input, line 703; error : Unknown symbol 'LIntersection_2E_n'
ptxas application ptx input, line 703; error : Label expected for forward reference of 'LIntersection_2E_n'
ptxas fatal : Ptx assembly aborted due to errors
Although more errors are printed than just the 'mov' one I described, they all go away when I make the above change to result = 5.0f;
According to the LLVM developers, this is a bug in the nvptx back-end.
LLVMdev forum message discussing this error
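Until that is fixed in the driver, a possible workaround (untested on my end) is to lean on OpenCL's built-in vector types and dot() instead of the hand-rolled struct math, so the compiler emits a different instruction sequence for the same computation:

// Sketch of a workaround: store the coordinates in OpenCL's built-in float3
// (instead of the custom Vector struct) and use the built-in dot().
float Dot(float3 v1, float3 v2)
{
    return dot(v1, v2);
}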
I am just beginning to learn audio programming using SuperCollider.
When I play a sound I am able to hear it on the speakers but not on the headphones.
I get the following message on starting the server:
booting 57110
localhost
JackDriver: client name is 'SuperCollider'
SC_AudioDriver: sample rate = 48000.000000, driver's block size = 1024
JackDriver: connected system:capture_1 to SuperCollider:in_1
JackDriver: connected system:capture_2 to SuperCollider:in_2
JackDriver: connected SuperCollider:out_1 to system:playback_1
JackDriver: connected SuperCollider:out_2 to system:playback_2
SuperCollider 3 server ready.
JackDriver: max output latency 42.7 ms
Receiving notification messages from server localhost
Shared memory server interface initialized
I went through some forums, and they suggested looking for output device options and setting them, so I ran
ServerOptions.devices;
to look at the device list, but I got the following error in the post window:
ERROR: A primitive was not bound. 0 676
Instance of Method { (0x21199c0, gc=01, fmt=00, flg=11, set=04)
instance variables [15]
raw1 : Float 0.000000 00000000 0080000C
raw2 : Float 0.000000 00000300 03020003
code : instance of Int8Array (0x2119cc0, size=4, set=2)
selectors : nil
constants : nil
prototypeFrame : instance of Array (0x2119c00, size=3, set=2)
context : nil
argNames : instance of SymbolArray (0x2119b40, size=3, set=2)
varNames : nil
sourceCode : nil
ownerClass : class Meta_ServerOptions (0x21113c0)
name : Symbol 'prListDevices'
primitiveName : Symbol '_ListAudioDevices'
filenameSymbol : Symbol '/usr/share/SuperCollider/SCClassLibrary/Common/Control/Server.sc'
charPos : Integer 4025
}
ERROR: Primitive 'none' failed.
Failed.
RECEIVER:
nil
CALL STACK:
MethodError:reportError 0x3601498
arg this =
Nil:handleError 0x1f730f8
arg this = nil
arg error =
Thread:handleError 0x35fcfd8
arg this =
arg error =
Object:throw 0x3980c58
arg this =
Object:primitiveFailed 0x33395a8
arg this = nil
Interpreter:interpretPrintCmdLine 0x3d061e8
arg this =
var res = nil
var func =
var code = "ServerOptions.devices;"
var doc = nil
var ideClass =
Process:interpretPrintCmdLine 0x3443c08
arg this =
^^ The preceding error dump is for ERROR: Primitive 'none' failed.
Failed.
RECEIVER: nil
I am new to SuperCollider and am having a hard time figuring out the reason for this error. Please suggest how I can resolve it.
Thanks in advance.
I was having a similar problem (no output from SuperCollider at all, just complete and total silence), and this post ultimately led me to the right solution. I think it will be helpful to you and others.
From the ServerOptions documentation, I found that I could configure how SC talks to JACK with environment variables.
In my case, I start scsynth with the relevant environment variables like so:
SC_JACK_DEFAULT_INPUTS="system:capture_1" SC_JACK_DEFAULT_OUTPUTS="system" scsynth -u 57110 &
It seems this can also be done from within sclang like so:
"SC_JACK_DEFAULT_INPUTS".setenv("system:capture_1");
"SC_JACK_DEFAULT_OUTPUTS".setenv("system");
In your case, where you are connecting to the wrong outputs, you might want to start scsynth like this:
SC_JACK_DEFAULT_OUTPUTS="system:playback_3,system:playback_4" scsynth -u 57110 &
Another alternative that will let you play with these connections and find what works for you is to use the jack_lsp, jack_connect, and jack_disconnect commands.
To see all of the ins/outs of your jack server as well as the current connections, run
jack_lsp -c
From your post, I think you will see something like
system:capture_1
   SuperCollider:in_1
system:capture_2
   SuperCollider:in_2
system:playback_1
   SuperCollider:out_1
system:playback_2
   SuperCollider:out_2
system:playback_3
system:playback_4
SuperCollider:out_1
   system:playback_1
SuperCollider:out_2
   system:playback_2
To make SuperCollider output to your headphones as well as the speakers, you could connect out_1 and out_2 to playback_3 and playback_4 (assuming those are your headphones) like so:
jack_connect SuperCollider:out_1 system:playback_3
jack_connect SuperCollider:out_2 system:playback_4
To disconnect from the speakers, you could do
jack_disconnect SuperCollider:out_1 system:playback_1
jack_disconnect SuperCollider:out_2 system:playback_2
Run jack_lsp -c again to see if your system is setup how you want!
I had the same problem. I discovered the solution by using Catia from KXStudio. See Catia.
Catia is a JACK patchbay. (Other patchbays are available; QjackCtl and Patchage are examples.) On my system (Ubuntu 14.04 on a Dell Studio laptop), SuperCollider maps its first 4 outputs to the 4 system playbacks. The first 2 system playbacks are the speakers; system playbacks 3 and 4 are the headphones. By remapping out_1 and out_2 from SC to playback_3 and playback_4, I hear it through the headphones. So, get hold of a patchbay for JACK, and see what you see.
Hope this helps.
After struggling countless times with this issue, I managed to get it working with the following:
Add your user to the audio Linux group (see the command below).
Use Cadence to start JACK.
An additional resource that could be helpful: https://wiki.archlinux.org/index.php/JACK_Audio_Connection_Kit
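For the first step, adding your user to the audio group typically looks like this (assuming the group is named audio; log out and back in afterwards for it to take effect):

sudo usermod -a -G audio $USER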
Firstly: the big long error message saying "A primitive was not bound" is unpleasant, but in this case it just means you typed the wrong command. I don't know where you got the command ServerOptions.devices from, but it's just wrong. Maybe the message was intended to tell you to type s.options.device, which is more sensible, but it's NOT what you need to do. Forget that, and forget that long error message.
Secondly: the message you see when you boot the server is good; it tells you that the server has booted and connected to JACK. SuperCollider is happy. If you hear sound out of the speakers but not from the headphones (I take it you mean when you plug the headphones in!), this is NOT a SuperCollider problem, but a standard operating-system issue of setting the volume on your headphones.
It appears that you're using Linux, so run the command alsamixer in a terminal; that's a good way to check whether the headphone output is muted. Use man alsamixer to understand how to use it, if it's not familiar to you.
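For example, something like the following (the control name 'Headphone' is an assumption; the names vary by sound card, so list them first):

amixer scontrols                      # list the available mixer controls
amixer sset 'Headphone' 100% unmute   # raise and unmute the headphone output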
I am basically trying to obtain the samples produced by an AUGraph, using a GenericOutput node and a call to AudioUnitRender. As a starting point for my program I used Apple's MixerHost example and changed the outputNode as follows:
AudioComponentDescription iOUnitDescription;
iOUnitDescription.componentType = kAudioUnitType_Output;
iOUnitDescription.componentSubType = kAudioUnitSubType_GenericOutput;
iOUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
iOUnitDescription.componentFlags = 0;
iOUnitDescription.componentFlagsMask = 0;
Later when I want to obtain my samples, I call
AudioUnitRenderActionFlags ioActionFlags = kAudioOfflineUnitRenderAction_Render;
AudioTimeStamp inTimeStamp = {0};
inTimeStamp.mHostTime = mach_absolute_time();
inTimeStamp.mFlags = kAudioTimeStampSampleHostTimeValid;
result = AudioUnitRender (
    ioUnit,
    &ioActionFlags,
    &inTimeStamp,
    1,
    1024,
    ioData
);
which yields a
"-10877 / Invalid Element"
error. My assumption is that the error comes from not setting the inTimeStamp.mSampleTime field correctly. To be honest, I have not found a way to find out the sample time other than AudioQueueDeviceGetCurrentTime, which I cannot use, since I do not use an AudioQueue. However, changing the ioActionFlags to kAudioTimeStampHostTimeValid does not change the error behaviour.
The error pertaining to the element (AKA 'bus') refers to the 4th argument (1) in your AudioUnitRender call. The Generic Output unit has only one element/bus, number 0, which has an input, output, and global scope. If you pass 0 instead of 1 for the element number, that error should disappear.
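In other words, keeping everything else from your code the same, the call becomes:

result = AudioUnitRender (
    ioUnit,
    &ioActionFlags,
    &inTimeStamp,
    0,        // element/bus 0: the Generic Output's only bus
    1024,
    ioData
);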