How to use customvision.ai to create an object detection model for TensorFlow Lite?

I have an object detection model that I've created in https://customvision.ai. If I export it as a TensorFlow Lite model, I get a model that expects FLOAT32 [1, 416, 416, 3] as input and returns FLOAT32 [1, 13, 13, 35] as output (as per TensorFlow Lite's visualize.py).
I would like to use that model in an Android app. I've tried to load the .tflite model file into the TensorFlow Lite object detection sample app, but it expects a different format, and I get the following exception when running the app:
java.lang.IllegalArgumentException: Cannot copy between a TensorFlowLite tensor with shape [1, 13, 13, 35] and a Java object with shape [1, 10, 4].
Is it feasible to adapt the sample app to use the model from customvision.ai?
How should I interpret the shape [1, 13, 13, 35]?
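For context, here is a minimal Python sketch that loads the exported model, confirms the tensor shapes, and views the raw output the way a YOLO-style decoder would. The file name model.tflite and the 2-tag assumption are placeholders of mine, not anything Custom Vision guarantees:

import numpy as np
import tensorflow as tf

# Sketch: inspect the exported Custom Vision model. 'model.tflite' is a placeholder path.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]   # expect FLOAT32 [1, 416, 416, 3]
out = interpreter.get_output_details()[0]  # expect FLOAT32 [1, 13, 13, 35]

interpreter.set_tensor(inp['index'], np.zeros(inp['shape'], dtype=np.float32))
interpreter.invoke()
raw = interpreter.get_tensor(out['index'])

# Assuming a 2-tag project with 5 anchor boxes, 35 = 5 * (4 box coords +
# 1 objectness + 2 class scores), i.e. one prediction per anchor per grid cell:
grid = raw.reshape(1, 13, 13, 5, 7)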
Thanks in advance!

Related

In YOLO, can I continue training from the final .weights file with different classes of images?

I want to train on a few products for image classification in YOLO. Let's say I have trained with 10 products (i.e. 10 classes) and saved the best weights file. Now I want to add more data for other products (some new class names and images). Can I load the previously trained model and then train again with only the new classes of images? If not, is there any way to retrain my model, not from scratch, but from the last trained file with the new classes of images? Please reply.
I have tried this code:
!python train.py --workers 8 --batch-size 16 --data products/data.yaml --img 640 640 --cfg cfg/training/yolov7_custom.yaml --epochs 5 --weights 'runs/train/products/weights/last.pt' --name yolov7_custom_newClass --hyp data/hyp.scratch.custom.yaml --device 0
Here, "last.pt" is the previous trained file, and "data.yaml" is the new data yaml file, whose number of classes is different as new images with new class names, after running the code, it show me this error:
RuntimeError: Error(s) in loading state_dict for Model:
size mismatch for model.105.m.0.weight: copying a param with shape torch.Size([30, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([21, 256, 1, 1]).
size mismatch for model.105.m.0.bias: copying a param with shape torch.Size([30]) from checkpoint, the shape in current model is torch.Size([21]).
size mismatch for model.105.m.1.weight: copying a param with shape torch.Size([30, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([21, 512, 1, 1]).
size mismatch for model.105.m.1.bias: copying a param with shape torch.Size([30]) from checkpoint, the shape in current model is torch.Size([21]).
size mismatch for model.105.m.2.weight: copying a param with shape torch.Size([30, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([21, 1024, 1, 1]).
size mismatch for model.105.m.2.bias: copying a param with shape torch.Size([30]) from checkpoint, the shape in current model is torch.Size([21]).
size mismatch for model.105.im.0.implicit: copying a param with shape torch.Size([1, 30, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 21, 1, 1]).
size mismatch for model.105.im.1.implicit: copying a param with shape torch.Size([1, 30, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 21, 1, 1]).
size mismatch for model.105.im.2.implicit: copying a param with shape torch.Size([1, 30, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 21, 1, 1]).
It looks like the error occurs because the number of classes in the new data.yaml differs from the previous model's. Any solution?
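One commonly suggested workaround is to transfer every tensor whose shape still matches and let the class-dependent detection head re-initialise. The following is a sketch only, run from the yolov7 repo root; the checkpoint keys and the nc value (21 = 3 anchors * (2 + 5) in your error, so nc=2 here) are my guesses from the error message, not tested against this exact setup:

import torch
from models.yolo import Model  # from the yolov7 repo root (assumption)

# Build a model with the NEW class count, then copy over only the tensors
# whose shapes match; the detection head (model.105 here) stays freshly
# initialised because its channel count depends on nc.
model = Model('cfg/training/yolov7_custom.yaml', ch=3, nc=2)
ckpt = torch.load('runs/train/products/weights/last.pt', map_location='cpu')
old_state = ckpt['model'].float().state_dict()

new_state = model.state_dict()
matched = {k: v for k, v in old_state.items()
           if k in new_state and v.shape == new_state[k].shape}
print(f'transferring {len(matched)}/{len(new_state)} tensors')
new_state.update(matched)
model.load_state_dict(new_state)  # strict load now succeeds

# Save in the shape train.py appears to expect, then pass --weights surgery.pt
# (the extra keys are the ones train.py looks for, as far as I can tell).
torch.save({'model': model, 'optimizer': None, 'epoch': -1}, 'surgery.pt')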

Different behavior of the Sequential API and functional API for a TensorFlow embedding

When I tried using the Sequential API and the functional API in TensorFlow to apply the same simple embedding function, I saw different results.
The result is as follows:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers  # use the Keras bundled with TensorFlow
inputs = np.random.randint(0, 99, [32, 100, 1])
myLayer = layers.Embedding(input_dim=100, output_dim=8)
# Sequential API
sm = keras.Sequential()
sm.add(myLayer)
sm_out = sm(inputs)
sm_out.shape # Shape of sm_out is: TensorShape([32, 100, 8])
# Functional API
fm_out = myLayer(inputs)
fm_out.shape # Shape of fm_out is: TensorShape([32, 100, 1, 8])
Is it intended or a bug?
First of all, your second call is not a functional API call. You need to wrap your layer output (with a tf.keras.layers.Input) in a tf.keras.models.Model for this to be a functional API call.
Secondly, when you're calling the sequential model, it is smart enough to detect that the last dimension is 1 and ignore it when looking up embeddings (I'm not sure where exactly this is handled; maybe someone else can point to it). So when you pass in a tensor of shape [32, 100, 1], what the embedding layer really sees is a [32, 100] sized array. This, after the lookup, becomes a [32, 100, 8] sized tensor.
In your second call, when calling the layer directly, it doesn't do this, so it simply converts the [32, 100, 1] sized input to a [32, 100, 1, 8] sized output.
You can get the same result from both these methods if you set your inputs shape to [32, 100] or [32, 100, 2] (last dimension != 1).
I guess the lesson here is always use the input_shape argument (to the first layer of the Sequential model) to prevent such unexpected behaviors.
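To illustrate the first point, a true functional-API version of the same embedding would look like the sketch below; declaring the Input shape up front is what keeps the rank consistent:

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Functional API: the expected input rank is declared explicitly.
inputs = keras.Input(shape=(100,), dtype='int32')  # [batch, 100], no trailing 1
outputs = layers.Embedding(input_dim=100, output_dim=8)(inputs)
model = keras.Model(inputs, outputs)

x = np.random.randint(0, 99, [32, 100])
print(model(x).shape)  # (32, 100, 8), matching the Sequential result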

I made a tflite model with eager_few_shot_od_training_tflite.ipynb but can't use it in a Flutter project

I made a tflite model with
https://github.com/tensorflow/models/blob/master/research/object_detection/colab_tutorials/eager_few_shot_od_training_tflite.ipynb
The problem is that when I export the .tflite file I successfully created in the link above and use it in my Flutter object detection app,
https://github.com/hiennguyen92/flutter_realtime_object_detection
running it with my tflite model causes the following error:
Caused by: java.lang.IllegalArgumentException: Cannot copy from a TensorFlowLite tensor (StatefulPartitionedCall:1) with shape [1, 10] to a Java object with shape [1, 10, 4].
I think it's because of the outputs of my tflite model, like below...?
How can I resolve the above error?
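One way to see what the app needs to allocate is to enumerate the model's output tensors in Python, as in the sketch below (the file name is a placeholder). Detection models converted by that notebook typically expose several outputs (boxes, classes, scores, count), and the plugin's buffer order has to match their indices; your error suggests a [1, 10] output (scores or classes) is being copied into the [1, 10, 4] buffer meant for boxes:

import tensorflow as tf

# Sketch: list every output tensor so the app's buffers can be matched by
# name/shape instead of assuming a fixed order. 'model.tflite' is a placeholder.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
for d in interpreter.get_output_details():
    print(d['index'], d['name'], d['shape'])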

Why does the model.fit() method of Keras not accept a tensor as the feature or label argument, while it accepts NumPy arrays?

Last time, when I was training a DNN model, I noticed that when I try to train my model with tensors (dtype=float64) it always gives an error, but when I train the model with NumPy arrays with the same specs (shape, values, dtype) as the tensors, it shows no error. Why is that?
Code
To use tensors as the features and labels, replace the NumPy arrays in the second script with:
celsius_q = tf.Variable([-40, -10, 0, 8, 15, 22, 38], tf.float64)
fahrenheit_a = tf.Variable([-40, 14, 32, 46, 59, 72, 100], tf.float64)
When using feature and label as tensor it shows this error:
Error: ValueError: Failed to find data adapter that can handle input:
<class 'tensorflow.python.ops.resource_variable_ops.ResourceVariable'>,
<class 'tensorflow.python.ops.resource_variable_ops.ResourceVariable'>
Use tf.constant to create an input tensor in TensorFlow.
A tf.Variable can be changed later, so that kind of tensor is not good for model input. Please refer to this answer: https://stackoverflow.com/a/44746203/20388268
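For completeness, a minimal sketch of the fix; the model itself is a guess at the question's unseen second script (the classic one-unit Celsius-to-Fahrenheit example):

import tensorflow as tf
from tensorflow import keras

# tf.constant tensors (and NumPy arrays) work with model.fit;
# ResourceVariable objects do not.
celsius_q = tf.constant([-40, -10, 0, 8, 15, 22, 38], dtype=tf.float64)
fahrenheit_a = tf.constant([-40, 14, 32, 46, 59, 72, 100], dtype=tf.float64)

model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer=keras.optimizers.Adam(0.1), loss='mean_squared_error')
model.fit(celsius_q, fahrenheit_a, epochs=10, verbose=0)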

Classify a batch of images using the TensorFlow MobileNet retrain example

I have trained a classification net using the TensorFlow MobileNet retrain.py example file (in https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/image_retraining/retrain.py).
When I use the trained net, I manage to run only a single image at a time (the input is a TensorFlow 4-D array of shape [1, 128, 128, 3]).
Don't know if it means anything, but the training process used the batch flag --train_batch_size=100.
When I try to classify a batch of images (for example, a TensorFlow 4-D array of shape [2, 128, 128, 3] fed to the 'input' layer), I get the following error:
ValueError: Cannot feed value of shape (2, 128, 128, 3) for Tensor 'input:0', which has shape '(1, 128, 128, 3)'
(Before running this trained net's session, I use a preprocessing TensorFlow session that prepares the images for this net: resize, normalise, etc.)
Does anyone know what I should do in order to run a batch of images through such a net, or how I can configure retrain.py so that it creates a net that allows batch runs?
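One workaround is to re-import the frozen graph with a batch-flexible placeholder mapped over the fixed-shape input. The following is a TF1-style sketch; 'final_result:0' is retrain.py's default final tensor name and 'input:0' is this question's input name, so verify both against your graph:

import numpy as np
import tensorflow as tf

# Sketch: wrap the frozen retrain.py graph so the batch dimension is None.
graph_def = tf.GraphDef()
with tf.gfile.GFile('output_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as g:
    batch_input = tf.placeholder(tf.float32, [None, 128, 128, 3], name='batch_input')
    # Every op downstream of 'input:0' now reads from the new placeholder.
    tf.import_graph_def(graph_def, input_map={'input:0': batch_input}, name='')
    output = g.get_tensor_by_name('final_result:0')

images = np.zeros((2, 128, 128, 3), dtype=np.float32)  # placeholder batch
with tf.Session(graph=g) as sess:
    preds = sess.run(output, feed_dict={batch_input: images})
print(preds.shape)  # one row of class scores per image, if every downstream op accepts a variable batch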