TensorFlow Lite: error on AllocateTensors() when I make a .tflite using flatc - tensorflow

I am trying to test my model using TensorFlow Lite on an x86_64 PC.
I wrote a C++ test program and succeeded in interpreting the given MobileNet model and running inference.
I wanted to change some operations in the model to my custom operation.
Before doing that, I checked whether I could convert a .tflite to JSON and back correctly.
What I did was convert mobilenet.lite to mobilenet.json using flatc and TensorFlow Lite's schema (schema.fbs), and then convert mobilenet.json back to mobilenet_new.lite.
However, when I tested mobilenet_new.lite, an error occurred, as below: tensorflow/contrib/lite/kernels/kernel_util.cc:35 std::abs(input_product_scale - bias_scale) <= 1e-6 * std::min(input_product_scale, bias_scale) was not true.
When I converted mobilenet_new.lite back to mobilenet_new.json, the two JSON files were identical, without any difference. Why does this error happen? If the parameter values are the same, how is this possible?
If you have any knowledge about this, please help me.
Thanks

I have resolved this.
When I debugged the problem, it turned out to be a FlatBuffers issue:
FlatBuffers converts each float to a string when it generates the JSON file,
so the value becomes a fixed-point decimal with precision 6, which rounds the float values.
So, when I converted tflite -> json -> tflite, there were some changes between the two tflite files, even though the JSON representations looked identical.
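A minimal sketch of that rounding (the scale value here is hypothetical): serializing a float32 with only six decimal digits and parsing it back does not recover the original value, which is enough to break the strict scale check quoted above.

import numpy as np

scale = np.float32(0.0235294122248888)   # hypothetical quantization scale
as_json = '%.6f' % float(scale)          # what a six-digit JSON dump stores
restored = np.float32(as_json)

print(as_json)             # 0.023529
print(scale == restored)   # False

The kernel check allows a relative error of only 1e-6 between input_product_scale and bias_scale, while rounding both to six decimal places can introduce a much larger relative difference.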

Related

Error: This ONNX model includes dimension variables

I converted my model from HDF5 to .onnx format using the tf2onnx library, and I now want to further convert it into .nnoir format using onnx2nnoir. My ONNX model is named tm1.onnx and I wanted to convert it to tm1.nnoir using the following command:
onnx2nnoir -o tm1.nnoir tm1.onnx
But I am getting the following error:
Error: This ONNX model includes dimension variables.
{'unk__6'}
Set the values with the `--fix_dimension` option.
I understand that I should fix the dimension values, but I have found no guidelines instructing me how. I entered some values in the command using the `--fix_dimension` keyword, but it was not fruitful, as it shows: onnx2nnoir: error: argument --fix_dimension: invalid parse_assign value: error.
I would be really grateful if someone could guide me on this.
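Judging by the parser name parse_assign in the error message, the option most likely expects name=value assignments for each reported dimension variable. An unverified guess (the name unk__6 comes from the error output; the value 1 is an arbitrary batch size):
onnx2nnoir -o tm1.nnoir --fix_dimension unk__6=1 tm1.onnx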

Convert a .npy file to .wav following Tacotron2 training

I am training the Tacotron2 model using TensorflowTTS for a new language.
I managed to train the model (performed pre-processing and normalization, and decoded the few generated output files).
The files in the output directory are .npy files, which makes sense as they are mel-spectrograms.
I am trying to find a way to convert these files to a .wav file in order to check whether my work has been fruitful.
I used this:
import librosa
import scipy.signal
import soundfile as sf

sample_rate = 22050
melspectrogram = librosa.feature.melspectrogram(
    "/content/prediction/tacotron2-0/paol_wavpaol_8-norm-feats.npy", sr=sample_rate,
    window=scipy.signal.hanning, n_fft=1024, hop_length=256)
print('melspectrogram.shape', melspectrogram.shape)
print(melspectrogram)
audio_signal = librosa.feature.inverse.mel_to_audio(
    melspectrogram, sr=sample_rate, n_fft=1024, hop_length=256,
    window=scipy.signal.hanning)
print(audio_signal, audio_signal.shape)
sf.write('test.wav', audio_signal, sample_rate)
But it gives me this error: Audio data must be of type numpy.ndarray.
Although I am already giving it a numpy.ndarray file.
Does anyone know where the issue might be, or a better way to do this?
I'm not sure what your error is, but the output of a Tacotron 2 system is log-Mel spectral features, and you can't just apply the inverse Fourier transform to get a waveform, because you are missing the phase information and because the features are not invertible. You can learn about why this is at places like Speech.Zone (https://speech.zone/courses/).
Instead of using librosa as you are doing, you need to use a vocoder like HiFi-GAN (https://github.com/jik876/hifi-gan) that is trained to reconstruct a waveform from log-Mel spectral features. You can use a pre-trained model and most off-the-shelf vocoders, but make sure that the sample rate, Mel range, FFT size, hop size, and window size are all the same between your Tacotron2 feature-prediction network and whatever vocoder you choose, otherwise you'll just get noise!
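As a side note, the TypeError itself most likely comes from passing the file path where librosa expects an array; the saved features need to be loaded first. A minimal sketch (the transpose is an assumption, based on TensorflowTTS typically storing features as (frames, n_mels) while librosa expects (n_mels, frames)):

import numpy as np

feats = np.load("/content/prediction/tacotron2-0/paol_wavpaol_8-norm-feats.npy")
print(feats.shape)   # expected (frames, n_mels)
mel = feats.T        # librosa-style (n_mels, frames)

Even with the loading fixed, Griffin-Lim style inversion of these normalized features will sound poor for the reasons above; a trained vocoder is the proper route.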

GluonCV inference with finetuned model - “Please make sure source and target networks have the same prefix” error

I used GluonCV to finetune an object detection model in order to recognize some custom classes, mostly following the related tutorial.
I tried using both “ssd_512_resnet50_v1_coco” and “ssd_512_mobilenet1.0_coco” as base models, and the training process ended successfully (the accuracy on the validation dataset is reasonably high).
The problem is, I tried running inference with the newly trained model, by using for example:
classes = ["CML_mug", "person"]
net = gcv.model_zoo.get_model('ssd_512_mobilenet1.0_custom',
classes=classes,
pretrained_base=False,
ctx=ctx)
net.load_params("saved_weights/-0070.params", ctx=ctx)
but I get the error:
AssertionError: Parameter 'mobilenet0_conv0_weight' is missing in file: saved_weights/CML_mobilenet_00/-0000.params, which contains parameters: 'ssd0_ssd0_mobilenet0_conv0_weight', 'ssd0_ssd0_mobilenet0_batchnorm0_gamma', 'ssd0_ssd0_mobilenet0_batchnorm0_beta', ..., 'ssd0_ssd0_ssdanchorgenerator2_anchor_2', 'ssd0_ssd0_ssdanchorgenerator3_anchor_3', 'ssd0_ssd0_ssdanchorgenerator4_anchor_4', 'ssd0_ssd0_ssdanchorgenerator5_anchor_5'. Please make sure source and target networks have the same prefix.
So, it seems the network parameters are named differently in the .params file and in the model I’m using for inference. Specifically, in the .params file, the names of the network weights are prefixed by the string “ssd0_ssd0_”, which leads to the error when invoking net.load_parameters.
I did this whole procedure a few times in the past without problems; did anything change? I’m running it on Ubuntu 18.04, with mxnet-mkl (1.6.0) and gluoncv (0.7.0).
I tried loading the .params file with:
from mxnet import nd
model = nd.load("saved_weights/-0070.params")
and I wanted to modify it and remove the “ssd0_ssd0_” string that is causing the problem.
I’m trying to navigate the dictionary, but among the keys I only found:
ssd0_resnetv10_conv0_weight
so, slightly different from what is indicated in the error.
Anyway, this way of fixing the issue would be a little cumbersome; I’d prefer a more direct way.
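For reference, a sketch of that renaming workaround (hedged: it assumes nd.load returns a plain name-to-NDArray dictionary, as the inspection above suggests, and uses the path from the error message):

from mxnet import nd

params = nd.load("saved_weights/CML_mobilenet_00/-0000.params")
fixed = {key.replace("ssd0_ssd0_", "", 1): value for key, value in params.items()}
nd.save("saved_weights/CML_mobilenet_00/-0000-fixed.params", fixed)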
Ok, fixed it. Basically, during training I was saving the .params file by using:
net.export(param_file)
and, as I said, loading it during inference with:
net.load_parameters(param_file)
However, it doesn’t work this way; it does work if, instead of export, I use:
net.save_parameters(param_file)
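This is consistent with how Gluon separates the two formats: export() writes a symbol file plus parameters named with the full network prefix (hence names like ssd0_ssd0_... in the error), while save_parameters() and load_parameters() form a matched pair keyed by the block’s own parameter names. A sketch (the path is hypothetical):

net.save_parameters("saved_weights/-0070.params")           # during training

net.load_parameters("saved_weights/-0070.params", ctx=ctx)  # at inference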

unsigned int overflow error in converting image to MNIST format

I'm a newbie to deep learning using TensorFlow.
I want to make my own model to predict my custom images, which are grayscale.
But the only thing I know is the MNIST example using TensorFlow.
So I used a converting module from this repo, but an unsigned int overflow error occurred.
The images to convert consist of 80,680 training images and 20,170 test images.
I really don't know why this error occurs.
Please help me.
The script you're referring to doesn't correctly set up the headers for the MNIST format. This was addressed in a previous GitHub issue that has since been deleted, but here is my modification, changing:
header = array('B')
header.extend([0,0,8,1,0,0])
header.append(int('0x'+hexval[2:][:2],16))
header.append(int('0x'+hexval[2:][2:],16))
to
header = array('B')
header.extend([0,0,8,1])                      # IDX1 magic number (labels)
header.append(int('0x'+hexval[2:][:2],16))    # item count as four big-endian
header.append(int('0x'+hexval[4:][:2],16))    # bytes instead of two, so counts
header.append(int('0x'+hexval[6:][:2],16))    # above 65535 no longer overflow
header.append(int('0x'+hexval[8:][:2],16))    # the unsigned-byte array
should get it working. Hope this helps!
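For context, the overflow happens because the original header stored the item count in only two bytes, and 80,680 does not fit in 16 bits, so one of the hexval slices yields a value above 255 and the unsigned-byte array rejects it. A self-contained sketch of building the same four count bytes with shifts instead of string slicing (the count is the one from the question):

from array import array

n = 80680                          # training-image count from the question
header = array('B', [0, 0, 8, 1])  # IDX1 magic number (labels)
header.extend((n >> shift) & 0xFF for shift in (24, 16, 8, 0))
print(list(header))                # [0, 0, 8, 1, 0, 1, 59, 40]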

TensorFlow Lite export looks like it does not add weights and adds unsupported operations

I want to reload some of my model variables from the saved weights in the checkpoint and then export the model to a .tflite file.
The question is a bit tricky without seeing code, so I made this Colab notebook with the complete code to explain it better (all the code is working; you can copy it into a new Colab and change it if you want):
https://colab.research.google.com/drive/1wSor4CxEz36LgElVi4y_N8uiSt4-j9b2#scrollTo=XKBQzoW_wd4A
I got it working, but with two issues:
The exported .tflite file is about 3 KB, so I do not believe the entire model with its weights is in it; the input alone is a 128x128x3 image, and one weight per input element is already more than 3 KB.
When I finally import the model in Android, I get this error: "Didn't find custom op for name 'VariableV2' \n Didn't find custom op for name 'ReorderAxes' \n Registration failed."
Maybe the last error is caused by the save/restore operations? They seem to be there when I save the graph definition.
Thanks in advance.
I realized my problem: I was trying to convert a model to TFLite without previously freezing it, and TFLite does not allow "VariableV2" nodes because they should not be there.
The whole problem is corrected by freezing the model like this:
from tensorflow.python.framework import graph_util
output_graph_def = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), ["output"])
I lost some time looking for that; hope it helps.
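For completeness, a sketch of the full freeze-then-convert flow under TF 1.x (the tensor names "input" and "output" and the file paths are hypothetical; "output" is the node name used above):

import tensorflow as tf
from tensorflow.python.framework import graph_util

with tf.Session() as sess:
    saver = tf.train.Saver()
    saver.restore(sess, "checkpoints/model.ckpt")  # reload trained weights

    # Freezing folds VariableV2 nodes into constants, which also pulls the
    # weights into the graph (fixing the suspiciously small file as well).
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ["output"])

with tf.gfile.GFile("frozen.pb", "wb") as f:
    f.write(frozen.SerializeToString())

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "frozen.pb", input_arrays=["input"], output_arrays=["output"])
with open("model.tflite", "wb") as f:
    f.write(converter.convert())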