Tensorboard processing for DLC File - tensorflow

I'm looking at a DLC file which represents the graph used for a neural network inside of the Snapdragon Neural Processing Engine.
https://developer.qualcomm.com/docs/snpe/model_conv_tensorflow.html
I would like to visualize this model in something like TensorBoard. My understanding is that TensorBoard requires a .pb file, which is the format TensorFlow uses to save graphs.
Is there any way to convert a DLC file to a TensorFlow .pb for visualization, or is there another way to achieve this aim?

The NPE SDK does not provide a tool to convert a DLC file to a .pb or to any other framework's model format.
A platform like TensorBoard, which helps in debugging and visualizing the model you have created, is not available from the NPE SDK.
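That said, if you still have the original TensorFlow frozen graph (the .pb the DLC was converted from), you can inspect that one in TensorBoard. A minimal sketch, assuming a file called original_model.pb (the path is a placeholder):

import tensorflow as tf

# Load the frozen graph (the original .pb, not the DLC).
with tf.gfile.GFile("original_model.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph and write an event file TensorBoard can read.
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    writer = tf.summary.FileWriter("./logdir", graph)
    writer.close()

Then point TensorBoard at the log directory with: tensorboard --logdir ./logdir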

Related

How to visualize the TensorFlow model of Text-to-speech synthesis https://github.com/Rayhane-mamah/Tacotron-2

I have been trying to visualize and see the layers/parameters/FLOPs for this TensorFlow text-to-speech synthesis model: https://github.com/Rayhane-mamah/Tacotron-2
Is there a way I can visualize or see the graph of Tacotron-2, with all its RNN/LSTM layers, using TensorFlow?
Do I need to train the model first before being able to print the model, or is there a way to simply see what ops are in each layer without training?
I'm having a hard time figuring this out as I'm new to the TF/PyTorch frameworks. It seems to me one should be able to just print the model, since the GitHub repo has the .py source, but I just don't know how these simple/basic things work in Python or how to do them.

How do I convert a Tensorflow model to .mlmodel?

I want to convert a Tensorflow model with the following structure to a .mlmodel file for use in an iOS app:
cub_image_experiment/
    logdir/
        val_summaries/
        test_summaries/
        finetune/
            val_summaries/
    cmds.txt
    config_train.yaml
    config_test.yaml
I'm following this tutorial: https://github.com/visipedia/tf_classification/wiki/CUB-200-Image-Classification
However, I'm having trouble understanding the structure of the project. Which files are important, and how do I convert all the separate config files and everything else into a single .mlmodel file that I can use in my application?
I've looked online and all I could find was how to convert a .caffemodel or a .pb file to a .mlmodel. Those are all single files, whereas my project has multiple files. I found a tutorial on how to convert a TF model into a single .pb file, but that model's structure was different and it did not contain any YAML files. My project is not focused on creating a model at the moment, but merely on integrating a model into an iOS app. I found this model interesting for an app idea and wanted to know whether it can be integrated. If there are any tutorials out there that might help with this sort of problem, please let me know.
None of that stuff is used by the Core ML model; the YAML files etc. are used only to train the TF model.
All you need to provide is a frozen graph (a .pb file), which you then convert to an .mlmodel using tfcoreml.
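A minimal sketch of that tfcoreml step, assuming a frozen graph named frozen_model.pb; the input/output tensor names and shape below are placeholders you would replace with the real ones from your graph:

import tfcoreml

# Convert a frozen TensorFlow graph (.pb) into a Core ML model (.mlmodel).
tfcoreml.convert(
    tf_model_path="frozen_model.pb",
    mlmodel_path="model.mlmodel",
    output_feature_names=["Softmax:0"],                    # placeholder output tensor name
    input_name_shape_dict={"input:0": [1, 224, 224, 3]},   # placeholder input name and shape
)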
It looks like your project doesn't have a frozen graph, only checkpoints. There is a TF utility you can use to convert a checkpoint into a frozen graph; see https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py
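As a rough sketch, that utility can also be called from Python; the paths, checkpoint prefix, and output node name below are placeholders for whatever your training run produced:

from tensorflow.python.tools import freeze_graph

# Fold the checkpoint weights into the graph definition and write one frozen .pb.
freeze_graph.freeze_graph(
    input_graph="logdir/graph.pbtxt",       # graph structure saved during training
    input_saver="",
    input_binary=False,
    input_checkpoint="logdir/model.ckpt",   # checkpoint prefix holding the weights
    output_node_names="output_node",        # placeholder; use your real output op
    restore_op_name="save/restore_all",
    filename_tensor_name="save/Const:0",
    output_graph="frozen_model.pb",
    clear_devices=True,
    initializer_nodes="",
)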

How can I convert TRT optimized model to saved model?

I would like to convert a TRT-optimized frozen model to a SavedModel for TensorFlow Serving. Are there any suggestions or sources to share?
Or are there any other ways to deploy a TRT-optimized model in TensorFlow Serving?
Thanks.
Assuming you have a TRT-optimized model (i.e., the model is already represented in UFF), you can simply follow the steps outlined here: https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#python_topics. Pay special attention to sections 3.3 and 3.4, since these are where you actually build the TRT engine and then save it to a file for later use. From that point forward, you can just re-use the serialized engine (a.k.a. a PLAN file) to do inference.
Basically, the workflow looks something like this (a short code sketch of steps 3-5 follows the list):
Build/train model in TensorFlow.
Freeze model (you get a protobuf representation).
Convert model to UFF so TensorRT can understand it.
Use the UFF representation to build a TensorRT engine.
Serialize the engine and save it to a PLAN file.
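As promised, a rough sketch of steps 3-5 with the TensorRT Python API; the file names, input/output node names, and shape are placeholders, and the exact API differs a bit between TensorRT releases:

import tensorrt as trt
import uff

# Step 3: convert the frozen TensorFlow graph to UFF.
uff_model = uff.from_tensorflow_frozen_model("frozen_model.pb", ["output_node"])

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Steps 4 and 5: parse the UFF model, build an engine, and serialize it to a PLAN file.
with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.UffParser() as parser:
    parser.register_input("input", (3, 224, 224))   # placeholder input name and shape
    parser.register_output("output_node")
    parser.parse_buffer(uff_model, network)
    builder.max_workspace_size = 1 << 30
    engine = builder.build_cuda_engine(network)
    with open("model.plan", "wb") as f:
        f.write(engine.serialize())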
Once those steps are done (and you should have sufficient example code in the link I provided) you can just load the PLAN file and re-use it over and over again for inference operations.
If you are still stuck, there is an excellent example that is installed by default here: /usr/src/tensorrt/samples/python/end_to_end_tensorflow_mnist. You should be able to use that example to see how to get to the UFF format. Then you can just combine that with the example code found in the link I provided.
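And a minimal sketch of loading the PLAN file back later for inference (again, the file name is a placeholder):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the saved engine and create an execution context for inference.
with open("model.plan", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()
    # ... allocate and bind input/output buffers, then run context.execute(...)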

How to retrieve original TensorFlow frozen graph from .tflite?

Basically, I am trying to use Google's pre-trained Speaker-id model for speaker detection, but since it is a TensorFlow Lite model, I can't use it on my Linux PC. So I am trying to find a converter back to its frozen-graph (.pb) form.
Any help with such a converter, or any direct way to use TensorFlow Lite pre-trained models on a desktop, would be appreciated.
You can use the same converter that generates .tflite models (toco) to convert a model back to a .pb file, if that is what you're looking for.
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/toco/g3doc/cmdline_examples.md

Is there a C/C++ API for Tensorflow object detection

Is there a C/C++ API, pre-trained on the ImageNet dataset, for detection?
I have tried Yolo, with
./darknet -i 0 detector demo cfg/imagenet1k.data extraction.cfg extraction.weights
But it gives me the error
Last layer must produce detections
And for TensorFlow, it looks like there is only a Python API:
https://github.com/tensorflow/models/tree/master/research/object_detection
When you develop a model in TensorFlow, it can be output as a protobuf file (usually with a .pb extension; for more details on protobuf in TensorFlow, check out this page). This protobuf file can then be used in different applications written in languages that TensorFlow has bindings for. A simple tutorial on how to accomplish this for a C++ application can be found here.
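For reference, a minimal sketch of writing such a protobuf file from a TensorFlow session; the tiny graph below is just a stand-in for a real model, and the node names are placeholders:

import tensorflow as tf

# Stand-in graph; replace with your real model.
x = tf.placeholder(tf.float32, [None, 4], name="input")
w = tf.Variable(tf.ones([4, 2]))
y = tf.identity(tf.matmul(x, w), name="output_node")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Fold the variable weights into constants and write a single .pb file.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["output_node"])
    tf.train.write_graph(frozen, ".", "model.pb", as_text=False)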
Regarding Yolo, you can generate a protobuf file from the Yolo script like this:
flow --model cfg/yolo.cfg --load bin/yolo.weights --savepb
(Further details on other parameters that can be passed to Yolo can be found on the Github readme page).
The output protobuf file can then be loaded into your C++ application to perform object detection.