Can TensorBoard or other tools visualize TensorFlow.js models?

In https://www.tensorflow.org/js/guide/save_load, it describes the format for saving model files as one that uses model.json and the corresponding model.weights.bin. I'm not sure if there's a name for talking about this format (I think it's the same as https://js.tensorflow.org/api/latest/#class:LayersModel but not entirely certain), and I'm wondering if there's a way to visualize them as a graph.
I was expecting to be able to load and view them in TensorBoard but don't see any way to do this with its "Graphs" tool, so perhaps no one has made anything like this yet.

One minimal way to inspect a loaded model is the summary method, which logs a layer-by-layer table to the console.
In the example I tried, this output doesn't match what I'd expect from looking at the model.json, though. It looks like summary might only describe the outermost Sequential layer, but I didn't look more closely.
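A hedged workaround for graph-level inspection, assuming the tensorflowjs pip package is installed and the saved model really is a layers model: convert it back to a Keras HDF5 file with tensorflowjs_converter --input_format=tfjs_layers_model --output_format=keras model.json model.h5, then inspect it from Python. The paths below are placeholders:

import tensorflow as tf

# Load the Keras model produced by tensorflowjs_converter above.
model = tf.keras.models.load_model('model.h5')

# Layer-by-layer text summary, analogous to model.summary() in TensorFlow.js.
model.summary()

# Optionally render the layer graph to an image (requires pydot and graphviz).
tf.keras.utils.plot_model(model, to_file='model.png', show_shapes=True)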

You can examine models, layers, tensors, etc. via the Visor API (https://js.tensorflow.org/api_vis/latest/). You just need to install it with npm i @tensorflow/tfjs-vis in order to use it.


Is it possible to add custom entity labels to Spacy 3.0 config file?

I'm working on a custom NER model with spacy-transformers and RoBERTa. I'm really only using the CLI for this and am trying to alter my spaCy config.cfg file to account for custom entity labels in the pipeline.
I'm new to spaCy, but I've gathered that people usually use ner.add_label to accomplish this. I wonder if I might be able to change something in [initialize.components.ner.labels] of the config, but I haven't come across a good way to do that.
I can't seem to find any options to alter the config file in a similar fashion - does anyone know if this is possible, or what the most succinct way to achieve those custom labels might be?
Edited for clarity: my issue may be different from my config theory. Right now I am getting output, but instead of text labels they are numeric labels, such as:
('Oct',383) ('2019',383) ('February',383)
Thank you in advance for your help!
If you are working with the config-based training, generally you should not have to specify the labels anywhere - spaCy will look at the training data and get the list of labels from there.
There are a few cases where this won't work.
You have labels that aren't in your training data. These can't be learned, so I would generally consider this an error, but sometimes you have to work with the data you've been given.
Your training data is very large. In this case, reading over all the training data to get a complete list of labels can be an issue. You can use the init labels command to generate the label data once so that the input data doesn't have to be scanned every time you start training.
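As an aside on the numeric labels in the question's edit: pairs like ('Oct', 383) usually mean the integer attribute is being printed instead of its string counterpart; spaCy returns hash IDs from attributes without the trailing underscore. A minimal sketch (the pipeline path is a placeholder):

import spacy

# Hypothetical path to a pipeline produced by `spacy train`.
nlp = spacy.load("output/model-best")
doc = nlp("The report was filed in February 2019.")

for token in doc:
    if token.ent_type:                      # integer hash, e.g. 383
        print(token.text, token.ent_type_)  # string label, e.g. "DATE"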

Is it possible to run two TFLite models at the same time on a Flutter App? / make Teachable Machine recognize when an object is not present?

I am using a Teachable Machine model which I trained to recognize some specific objects. The issue with it, however, is that it does not recognize when there is nothing there; it basically always assumes that one of the objects is present. One potential solution I am considering is combining two models, such as adding the YOLO v2 TFLite model to the same app. Would this even be possible/efficient? If it is, what would be the best way to do it?
If anyone knows a way to get Teachable Machine to recognize when the object is not present, that would probably be a much better solution.
Your problem can be solved by building a model ensemble: train a classifier that learns whether your specific objects are absent from the visual space, and then use your detection model.
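A minimal sketch of that two-stage ensemble at inference time; the model files, the sigmoid-style presence output, and the 0.5 threshold are all assumptions, not part of the original answer:

import numpy as np
import tensorflow as tf

# Hypothetical models trained separately: a binary "is any object present?"
# classifier and the existing object classifier.
presence_model = tf.keras.models.load_model('presence_classifier.h5')
object_model = tf.keras.models.load_model('object_classifier.h5')

def predict(image_batch):
    # Stage 1: gate on the presence classifier (assumed shape (N, 1), sigmoid).
    present = presence_model.predict(image_batch)[:, 0]
    labels = []
    for image, p in zip(image_batch, present):
        if p < 0.5:
            labels.append('none')
        else:
            # Stage 2: only ask the object classifier when something is there.
            scores = object_model.predict(image[np.newaxis, ...])[0]
            labels.append(int(np.argmax(scores)))  # class index; map to names as needed
    return labels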
However, I really recommend that you upload your model to an online service and consume it via an API. As far as I know, the tflite package only supports MobileNet-based models well.
I had the same problem; just create another class called whatever you want (for example, none) and put some unrelated images in it, then train the model.
Now whenever there is nothing in the field of view, it should output none.

Tensorflow serving using the client

I successfully created a server that receives a TF saved_model, but now I want to send it queries and get predictions.
However, I'm having a hard time understanding how the client works and how to implement it.
All I found online is the basic tutorial, but it only gives the client code for MNIST, and that doesn't fit my own model.
So can anyone show me how to use or implement a client for a different model?
Thanks
I am really thankful to Google for making TensorFlow Serving open source; it is so helpful for people like me who want to put prediction models into production. But I have to admit TensorFlow Serving is documented poorly, or rather, it assumes its users already have pretty good knowledge of TensorFlow. I was stuck for a long time trying to understand how it works. Their website introduces the concepts and examples well, but there is something missing in between.
I recommend the tutorial here. This is the first part, and you can also follow the second part; the link is in that article.
In general, when you export your .ckpt files to a servable model (a .pb file and a variables folder), you have to define the input, output, and method name of your model and save them as a signature with tf.saved_model.signature_def_utils.build_signature_def.
In the article, you will find what I said above in this part:
prediction_signature = tf.saved_model.signature_def_utils.build_signature_def(
    inputs={'images': predict_tensor_inputs_info},
    outputs={'scores': predict_tensor_scores_info},
    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
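To complete the picture, the signature is attached when the model is saved. A continuation sketch in the same TF1-style API as the snippet above; it assumes a `sess` holding the trained graph and the `prediction_signature` built above, and the export directory and signature name 'predict_images' are placeholders:

# Attach the signature and write out the servable SavedModel.
builder = tf.saved_model.builder.SavedModelBuilder('/tmp/exported_model/1')
builder.add_meta_graph_and_variables(
    sess,
    [tf.saved_model.tag_constants.SERVING],
    signature_def_map={'predict_images': prediction_signature})
builder.save()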
You can follow how the author defines the input and output in the article, and do the same thing for your customized model.
After that, your client script has to call the signature and feed the input to the server; the server will then recognize which method to use and return the output. You can check how the author wrote the client script and find the corresponding parts that call the signature and feed the input.
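For a concrete starting point, here is a minimal gRPC client sketch. The host/port, model name, signature name, and the 'images'/'scores' keys all have to match how your own model was exported; the ones below are placeholders echoing the signature above:

import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

batch = np.zeros((1, 28, 28), dtype=np.float32)  # replace with a real input batch

request = predict_pb2.PredictRequest()
request.model_spec.name = 'my_model'                  # name the server was started with
request.model_spec.signature_name = 'predict_images'  # signature you exported
request.inputs['images'].CopyFrom(tf.make_tensor_proto(batch, shape=batch.shape))

response = stub.Predict(request, timeout=10.0)
scores = tf.make_ndarray(response.outputs['scores'])
print(scores)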

Object detection using CNTK

I am very new to CNTK.
I wanted to train a set of images (to detect objects like alcohol glasses/bottles) using CNTK ResNet/Fast R-CNN.
I am trying to follow the documentation below from GitHub; however, it does not appear to be a straightforward procedure. https://github.com/Microsoft/CNTK/wiki/Object-Detection-using-Fast-R-CNN
I cannot find proper documentation on generating ROIs for images with different sizes and shapes, or on how to create object labels based on the trained models. Can someone point me to proper documentation or a training link that I can use to work on the CNTK model? Please see the attached image, in which I was able to load a sample image with default ROIs in the script. How do I properly set the size and label the object in the image? Thanks in advance!
[Image: sample image loaded for training]
Not sure what you mean by proper documentation. This is an implementation of the paper https://arxiv.org/pdf/1504.08083.pdf. It looks like you are trying to generate ROIs; you can look through the helper functions documented at the site to find what you need:
To run the toy example, make sure that in PARAMETERS.py the datasetName is set to "grocery".
Run A1_GenerateInputROIs.py to generate the input ROIs for training and testing.
Run A2_RunCntk_py3.py to train a Fast R-CNN model using the CNTK Python API and compute test results.
The algorithm will work on several candidate regions and then generate two outputs: one for the classes of the objects and another for the bounding boxes of the objects belonging to those classes. Please refer to the code for the details of the implementation.
Can someone point me to proper documentation or a training link that I can use to work on the CNTK model?
You can take a look at my repository on GitHub.
It will guide you through all the steps required to train your own model for object detection and classification with CNTK.
But in short, the proper steps should look something like this:
Setup environment
Prepare data
Tag images (ground truth)
Download pretrained model and create mappings for your custom dataset
Run training
Evaluate the model on test set

Trained models for tensorflow ocr

I started the TensorFlow course on Udacity, and at the same time I am looking around the web for the topic.
I suppose that the typical use cases are already well solved, in a better way than I could achieve on my own. In other words, somewhere there exist trained models for the usual cases, ready to use. I found the model zoo, which, if I understand properly, is the kind of thing I'm looking for, but I can't believe there is no published OCR model that can recognize a number in an image:
[Image: image example]
Do I need to train my own model? Is there a repository that I don't know about?