Cannot find some functions in Python API in tensorflow - tensorflow

I'm trying to convert a model from TensorFlow to ONNX. The process is as follows:
1. Save a graph_def and a checkpoint for the weights in TensorFlow.
2. Inspect the graph_def to check that its structure is valid and to determine what the inputs and outputs are.
3. Freeze both of them together into a frozen TensorFlow graph.
4. Convert that graph to an ONNX model.
The problem is in step 2. To inspect the graph definition, I tried to invoke summarize_graph from the Graph Transform Tool, but it didn't work properly. I then found the documentation for the Graph Transform Tool. According to the documentation, it uses Bazel, a build-and-test tool similar to Maven. Does this mean that I cannot use this function with a TensorFlow installed from the pip package manager? Is the only way to use it to install TensorFlow from source and build it with Bazel?

You should be perfectly able to use these features with TensorFlow installed from pip. Bazel is used to manage build procedures; you don't need it unless you want to compile TensorFlow from source yourself.
Try removing your current installation and reinstalling from pip, paying attention to choose the right Python setup in case you have multiple Python distributions on your machine.
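For step 2 specifically, the core of what summarize_graph reports can be sketched in plain Python. This toy version uses dicts in place of GraphDef nodes (the node names are made up) so it runs without TensorFlow: inputs are Placeholder ops, and likely outputs are nodes that no other node consumes.

```python
# Rough sketch of what summarize_graph computes, with dicts standing in
# for GraphDef nodes. Node names are illustrative only.
nodes = [
    {"name": "x",       "op": "Placeholder", "input": []},
    {"name": "weights", "op": "Const",       "input": []},
    {"name": "matmul",  "op": "MatMul",      "input": ["x", "weights"]},
    {"name": "softmax", "op": "Softmax",     "input": ["matmul"]},
]

# GraphDef input strings may carry ":0" suffixes or "^" control-dependency
# prefixes, so strip those before comparing names.
consumed = {inp.split(":")[0].lstrip("^") for n in nodes for inp in n["input"]}

inputs = [n["name"] for n in nodes if n["op"] == "Placeholder"]
outputs = [n["name"] for n in nodes if n["name"] not in consumed]

print("inputs:", inputs)
print("outputs:", outputs)
```

On a real frozen graph you would iterate over `graph_def.node` the same way after parsing the .pb file.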

Related

Can I import tensorflow and keras in Maya or Blender?

I am participating in a workshop where we need to automatically rig characters, and we will perhaps use deep learning methods. The task is to recognize body parts. My question: is there a way to connect tensorflow and keras, or other neural network libraries, with 3D software?
For Blender you can follow this tutorial:
https://www.youtube.com/watch?v=J7Iu1rfwbds
I tested it in Blender 2.81 with Python 3.7 by importing pytorch, opencv, sklearn, etc., and the test code provided in the video works correctly. You do not need to follow the pandas installation and git cloning shown in the tutorial; let pandas install along with the other bigger packages, or install it with conda.
Conda environment creation, https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html.
conda create -n MyNamedEnv python=3.7
After the environment is created, install your required packages. If you have multiple environments, they are usually in the Anaconda3/envs folder. Command to make the link:
mklink /j python C:\Users\computer\Anaconda3\envs\MyNamedEnv
To test that it is working, go to the Scripting tab in Blender 2.81 and delete everything (A to select all, Del to delete). Paste the code from the link below into the Text Editor and run the script.
https://github.com/virtualdvid/MachineLearning/blob/master/blender/iris_blender.py
Tensorflow and keras should work similarly: install them in the conda environment and call them from Blender.
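As a quicker smoke test than the full iris script, a snippet like the following (paste it into Blender's Text Editor, or run it in a plain Python shell) shows which interpreter is active and whether the conda-installed packages are importable; the package names are just examples.

```python
# Sanity check: which Python is Blender actually running, and can it see
# the packages installed into the linked conda environment?
import sys

print("interpreter:", sys.executable)

report = {}
for name in ("tensorflow", "keras", "sklearn"):
    try:
        mod = __import__(name)
        report[name] = getattr(mod, "__version__", "unknown version")
    except Exception:
        # Import can fail for missing packages or broken dependencies.
        report[name] = None
    print(name, "->", report[name] or "not importable")
```

If the interpreter path still points at Blender's bundled Python rather than the conda environment, the mklink junction step did not take effect.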

How can I use the TensorFlow Embeddings Projector inside my own GCP VM or JupyterLab instance?

Is there a way of running the Embedding Projector inside my GCP Jupyterlab instance (or through any other GCP service) as opposed to using the public https://projector.tensorflow.org ?
The TensorFlow documentation mentions that Embeddings Projector can be run inside Tensorboard, but doesn't provide any links or details.
Unfortunately there is no Google Cloud product that provides the projector functionality specifically, but you can run the projector TensorBoard plugin locally in AI Notebooks (JupyterLab).
Here's the source repository for TensorBoard's projector plugin, and here's a step-by-step guide where the projector plugin is used for the specific use case you mentioned. Bear in mind that this step-by-step guide is written for TensorFlow 1.x, not 2.0.0.
If you want to use TensorFlow 2.0.0 you will need to import the plugin like this:
from tensorboard.plugins import projector
and then migrate all the TensorFlow 1.x code to >= 2.0 as described in the guide in order to get the same log files. If you already have the necessary files for your custom projector, you just need to select the plugin inside the TensorBoard UI.
Tensorboard Projector plugin selection
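A minimal TF 2.x sketch of that setup might look like the following; the log directory, variable shape, and metadata file name are illustrative, and this assumes tensorflow and tensorboard are installed. It writes the projector_config.pbtxt and checkpoint that the plugin reads.

```python
import os
import tensorflow as tf
from tensorboard.plugins import projector

log_dir = "logs/projector"
os.makedirs(log_dir, exist_ok=True)

# Save the embedding variable in a checkpoint TensorBoard can read.
embedding = tf.Variable(tf.random.normal([100, 16]), name="embedding")
ckpt = tf.train.Checkpoint(embedding=embedding)
ckpt.save(os.path.join(log_dir, "embedding.ckpt"))

# Write projector_config.pbtxt so the plugin knows which tensor to show.
config = projector.ProjectorConfig()
emb = config.embeddings.add()
emb.tensor_name = "embedding/.ATTRIBUTES/VARIABLE_VALUE"
emb.metadata_path = "metadata.tsv"
projector.visualize_embeddings(log_dir, config)
# then: tensorboard --logdir logs/projector
```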
You can also embed the public TensorBoard tool in an IFrame (I understand that this is not your case, but it might be helpful to other people searching for an alternative solution). Opening an AI Notebook and pasting the following code will do the job.
import IPython
url = 'https://projector.tensorflow.org/'
IPython.display.IFrame(url, width=1333, height=900)
Remember to change the width and height values if you need to.

Tensorflow Serving Developer environment

I can't seem to find any documentation that describes which parts of TF and TFS need to be installed/built to create a servable. Can anyone shed light on the subject?
I'm not sure that documentation exists. The approach I would take is to create a new blank environment, in conda or whatever you prefer, and then install TensorFlow and TensorFlow Serving into it, which will pull the dependencies into the environment as well.
Then just run pip list or conda list (or the equivalent) and see which libraries got installed. That should give you a list of the base libraries needed to use TF and TF Serving.
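If you want to capture that list programmatically rather than via the pip CLI, a stdlib-only equivalent of `pip list` is sketched below; run it inside the fresh environment after installing the two packages.

```python
# List every distribution installed in the active environment, pinned
# pip-requirements style, using only the standard library (Python 3.8+).
from importlib import metadata

installed = sorted(
    (dist.metadata["Name"] or "", dist.version)
    for dist in metadata.distributions()
)
for name, version in installed:
    print(f"{name}=={version}")
```

Redirecting this output to a file gives you a reproducible snapshot of the base dependency set.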

Running Tensorboard without CUDA support

Is it possible to run Tensorboard on a machine without CUDA support?
I'm working at a computation center (via ssh) which has two major clusters:
CPU-Cluster which is a general workhorse without CUDA support (no dedicated GPU)
GPU-Cluster with dedicated GPUs e.g. for running neural networks with tensorflow-gpu.
Access to the GPU cluster is limited to training etc., so I can't afford to run Tensorboard on a machine with CUDA support. Instead, I'd like to run Tensorboard on the CPU cluster.
With the TF bundled Tensorboard I get import errors due to missing CUDA support.
It seems reasonable that the official Tensorboard should have a CPU-only mode. Is this true?
I've also found an unofficial standalone Tensorboard version (github.com/dmlc/tensorboard); does this work without CUDA support?
Solved my problem: just install tensorflow instead of tensorflow-gpu.
This didn't work for me for a while because my virtual environment (conda) hadn't properly removed tensorflow-gpu.
Tensorboard is not limited by whether a machine has a GPU or not.
As far as I know, all Tensorboard does is parse event pb files and display them in the browser. There is no computation involved, so it doesn't need a GPU.
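The fix described above amounts to something like the following sketch (the log path is a placeholder; run it inside the conda environment so the GPU build is actually removed):

```shell
# Swap the GPU build for the CPU build inside the active environment.
pip uninstall -y tensorflow-gpu
pip install tensorflow

# TensorBoard only parses event files, so this runs fine on the
# CPU cluster; point it at the directory holding the event pb files.
tensorboard --logdir /path/to/logs --port 6006
```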

How to ship TensorFlow with my app?

I'm building an app with C++ and I want to include an ML classifier model that I've built with TensorFlow. The app will be built for different operating systems. Is there any way to ship TensorFlow with my app so people don't have to install TensorFlow themselves on their machines?
My other option is to make my own neural network implementation in C++ and just read weights and biases from saved TensorFlow model.
I recommend using freeze_graph to package your GraphDef and weights into a single file, and then following the label_image C++ example for how to load and run the resulting GraphDef file:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/label_image/main.cc#L140
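A typical freeze_graph invocation looks roughly like the sketch below; the file names and output node name are placeholders you would replace with your own, and this assumes the TF 1.x tools are available (the module also ships inside TF 2.x as tensorflow.python.tools.freeze_graph).

```shell
# Bake the checkpoint weights into the graph as constants, producing a
# single .pb file your C++ app can load.
python -m tensorflow.python.tools.freeze_graph \
  --input_graph=graph.pbtxt \
  --input_checkpoint=model.ckpt \
  --output_graph=frozen_graph.pb \
  --output_node_names=softmax
```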