Is it possible to run two TFLite models at the same time on a Flutter App? / make Teachable Machine recognize when an object is not present? - tensorflow

I am using a Teachable Machine model which I trained to recognize some specific objects. The issue, however, is that it does not recognize when there is nothing there; it always assumes that one of the objects is present. One potential solution I am considering is combining it with a second model, such as the YOLO v2 TFLite model, in the same app. Would this even be possible/efficient? If it is, what would be the best way to do it?
If anyone knows a way to get Teachable Machine to recognize when the object is not present, that would probably be a much better solution.

Your problem can be solved by building a model ensemble: train a classifier that learns to tell whether any of your specific objects are in view at all, and only then run your detection model.
However, I really recommend uploading your model to an online service and consuming it via an API. As far as I know, the tflite package only works well with MobileNet-based models.
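A rough Python sketch of that two-stage idea with the TFLite interpreter is shown below; the model file names, the preprocessing, and the 0.5 threshold are assumptions, and in the Flutter app itself the same flow would be written in Dart with a TFLite plugin.

```python
import numpy as np
import tensorflow as tf

# Load both models: a "gate" classifier that only decides whether any target
# object is present at all, and the main Teachable Machine classifier.
gate = tf.lite.Interpreter(model_path="object_present.tflite")      # assumed file
main = tf.lite.Interpreter(model_path="teachable_machine.tflite")   # assumed file
gate.allocate_tensors()
main.allocate_tensors()

def run(interpreter, image):
    """Run one TFLite interpreter on an already preprocessed image batch."""
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], image)
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])[0]

def classify(image):
    """image: float32 array shaped to match both models' expected input."""
    present_score = run(gate, image)[0]
    if present_score < 0.5:           # threshold is an assumption; tune it
        return "none"
    scores = run(main, image)
    return int(np.argmax(scores))     # index of the most likely object class
```

Running both interpreters on every frame roughly doubles the inference time, so the extra "none" class suggested below is usually the cheaper option.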

I had the same problem. Just create another class called whatever you want (for example, "none"), put some unrelated images in it, and then train the model.
Now, whenever there is nothing in the frame, it should output "none".

Related

Continue training CoreML Model

I'm trying to get a better understanding of how to create object detection models in Turi Create (for use in CoreML). I'm trying to create a model that detects custom images I designed and printed myself. To avoid having to take a huge number of photos, I figured I'd use the one-shot object detection feature provided by Turi Create. So far so good: I feed the algorithm two starter images, and it successfully generates the synthetic data set and creates a somewhat reliable model.
Now I'm wondering what happens when I want to add a third category. I could of course add a third starter image and run the code again, but this feels like two-thirds of the work is redundant...
Is there a way to continue training a previously trained model, or to combine multiple models, so I don't have to retrain my models from scratch every time I add a category? If not, are there any other ways to get this done (e.g. in TensorFlow)?
Turi Create is rather limited in the options it offers for retraining (none, basically). If you want more control over the process, using a tool such as TensorFlow is the better choice.
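For the TensorFlow route, a hedged Keras sketch of reusing a previously trained network when a category is added could look like the following; the saved-model path, the layer indexing, and the classification-style head are illustrative assumptions (a real object detector would retrain its detection head rather than a softmax layer).

```python
import tensorflow as tf

# Load a model previously trained on the first two categories (assumed path).
base = tf.keras.models.load_model("two_category_model")

# Freeze the feature-extraction layers so their earlier training is preserved.
base.trainable = False

# Attach a new head with room for the third category; this assumes the
# second-to-last layer produces the feature vector worth reusing.
features = base.layers[-2].output
outputs = tf.keras.layers.Dense(3, activation="softmax", name="new_head")(features)
model = tf.keras.Model(inputs=base.input, outputs=outputs)

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Only the new head is trained, so adding a category does not start from scratch.
# model.fit(new_images, new_labels, epochs=5)
```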

Using tensorflow hub with go

I want to use pre-trained models in my Go application, especially the Inception-ResNet-v2 model.
This model seems to be available only via TensorFlow Hub (https://www.tensorflow.org/hub/).
However, I could not find any documentation on how to use TensorFlow Hub with the Go language bindings for TensorFlow.
How can I download and use these models in Go?
So, after a lot of work over the past few days, I finally found a way.
At first I wanted to just use Python for all the TensorFlow work and then provide the results via a REST service. However, it turned out that the number of models provided by TensorFlow Hub is very small. This was a problem for me because I had to try out different models and compare them.
Thus I switched to using models from https://github.com/tensorflow/models. There are several tutorials on how to export the graphs to .pb files. Those files can then be loaded in Go using gocv.
Converting the files requires a lot of work, but in the end I think this is the best way to use TensorFlow models in Go.
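The export step on the Python side can look roughly like this TF1-style sketch; the checkpoint path and the output node name are assumptions that depend on which model you pick from that repository.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

with tf.Session() as sess:
    # Restore the graph structure and the trained weights from a checkpoint.
    saver = tf.train.import_meta_graph("./model.ckpt.meta")   # assumed path
    saver.restore(sess, "./model.ckpt")

    # Bake the variables into constants so the graph is self-contained.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=["output"])   # assumed node name

    # Write the frozen GraphDef to a .pb file that can be read from Go.
    with tf.io.gfile.GFile("frozen_model.pb", "wb") as f:
        f.write(frozen.SerializeToString())
```

On the Go side, the resulting .pb file is then loaded through gocv's OpenCV DNN bindings.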

Tensorflow Stored Learning

I haven't tried TensorFlow yet, but I'm still curious: how, and in what form (data type, file type), does it store the acquired learning of a machine learning program for later use?
For example, TensorFlow was used to sort cucumbers in Japan. The computer took a long time to learn, from the example images given, what good cucumbers look like. In what form was that learning saved for future use?
I think it would be inefficient if the program had to re-learn from the images every time it needs to sort cucumbers.
Ultimately, a high-level way to think about a machine learning model is as three components: the code for the model, the data (weights) for that model, and the metadata needed to make the model run.
In TensorFlow, the code for the model is written in Python and is saved in what is known as a GraphDef. This uses a serialization format created at Google called Protobuf; other libraries commonly use formats such as Python's native Pickle.
The main reason you write this code is to "learn" from some training data, and the result of that learning is ultimately a large set of matrices full of numbers. These are the "weights" of the model, and they too are stored using Protobuf, although other formats like HDF5 exist.
TensorFlow also stores metadata associated with the model: for instance, what the input should look like (e.g. an image? some text?) and what the output should be (e.g. a class of image, say cucumber 1 or 2? with scores, or without?). This too is stored in Protobuf.
At prediction time, your code loads up the graph, the weights, and the metadata, and takes some input data to produce an output. More information here.
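As a small illustration of those three pieces being stored together, the TF 2.x Keras API writes the graph, the weights, and the input/output metadata into one SavedModel directory; the tiny model, random data, and directory name below are made up, and newer Keras releases may require an explicit .keras or export format instead.

```python
import numpy as np
import tensorflow as tf

# The "code" of the model: a tiny, made-up classifier (illustrative only).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# The "weights": learned here from random, purely illustrative training data.
x = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 2, size=(32,))
model.fit(x, y, epochs=1, verbose=0)

# Graph, weights, and input/output metadata are written out together as a
# SavedModel directory (Protobuf files plus a variables folder).
model.save("cucumber_sorter")

# Prediction time: load everything back and predict without retraining.
restored = tf.keras.models.load_model("cucumber_sorter")
print(restored.predict(x[:1]))
```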
Are you talking about the symbolic math library, or the idea of tensor flow in general? Please be more specific here.
Here are some resources that discuss the library and tensor flow
These are some tutorials
And here is some background on the field
And this is the github page
If you want a more specific answer, please give more details as to what sort of work you are interested in.
Edit: So I'm presuming your question is more about the general field of tensor flow than any particular application. Your question is still too vague for this website, but I'll try to point you toward a few resources you might find interesting.
The TensorFlow used in image recognition often operates on an ANN (Artificial Neural Network). What this means is that the TensorFlow library handles the number crunching for the neural network, which I'm sure you can read all about with a quick Google search.
The point is that TensorFlow isn't a form of machine learning itself; it serves more as a useful number-crunching library, similar to NumPy in Python, for large-scale deep learning simulations. You should read more here.
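A two-line comparison makes that "number-crunching library" point concrete (the values are purely illustrative):

```python
import numpy as np
import tensorflow as tf

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0], [6.0]])

print(np.matmul(a, b))                             # NumPy matrix product
print(tf.matmul(tf.constant(a), tf.constant(b)))   # same operation as a TensorFlow op
```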

Deep Learning with TensorFlow on Compute Engine VM

I'm actually new to machine learning, but this topic is very interesting to me, so I'm using TensorFlow to classify some images from the MNIST dataset. I run this code on a Compute Engine VM on Google Cloud, because my computer is too weak for this. The code actually runs well, but the problem is that each time I log into my VM and run the same code, I have to wait while my CNN model trains before I can run tests, experiment with my data, plot results, or import external images to improve my accuracy, etc.
Is there some way to save the result of training the model just once, somewhere, so that when I decide to log into the same VM tomorrow, for example, I don't have to wait for the model to train again? Is it possible to do this?
Or is there maybe another way to do something similar?
You can save a trained model in TensorFlow and then use it later by loading it; that way you only have to train your model once and can use it as many times as you want. To do that, you can follow the TensorFlow documentation on the topic, which explains how to save and load models. In short, you use the SavedModelBuilder class to define the type and location of your saved model, and then add the MetaGraphs and variables you want to save. Loading the saved model for later use is even easier, as you only have to run a command pointing to the location in which the model was exported.
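A minimal sketch of that SavedModelBuilder flow (the TF1-style API referenced above) might look like the following; the export directory, the stand-in variable, and the serving tag are illustrative assumptions.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

export_dir = "./mnist_saved_model"   # assumed location on the VM

# --- after training: save the session's graph and variables once ---
with tf.Session() as sess:
    weights = tf.get_variable("weights", shape=[784, 10])   # stand-in for the CNN
    sess.run(tf.global_variables_initializer())
    # ... training would happen here ...

    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING])
    builder.save()

# --- in a later session (e.g. tomorrow): load instead of retraining ---
tf.reset_default_graph()
with tf.Session() as sess:
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], export_dir)
    # run inference or further experiments with sess.run(...) here
```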
On the other hand, I would strongly recommend changing your working environment so that it is more cost-effective for you. In Google Cloud there is the Cloud ML Engine service, which might be a good fit for the type of work you are doing. It allows you to train your models and run predictions without needing an instance running all the required software. I happen to have worked a bit with TensorFlow recently; at first I was also working on a virtualized instance, but after following some tutorials I was able to save some money by migrating my work to ML Engine, as you are only charged for what you use. If you are using your VM only for that purpose, take a look at it.
You can of course consult all the available documentation, but as a first quickstart, if you are interested in ML Engine, I recommend having a look at how to train your models and how to get your predictions.

Trained models for tensorflow ocr

I have started the TensorFlow course on Udacity, and at the same time I am looking around the web for material on the topic.
I suppose the typical use cases are already well solved, and better than I could manage on my own. In other words, trained models for the usual cases should exist somewhere, ready to use. I found the model zoo, which, if I understand it properly, is the kind of thing I am looking for, but I can't believe there is no published OCR model that can recognize a number in an image:
image example
Do I need to train my own model? Is there a repository that I don't know about?