Freeze TensorFlow graph to use in iOS app - tensorflow

I have the below files:
1. retrained_graph.pb
2. retrained_labels.txt
3. _retrain_checkpoint.meta
4. _retrain_checkpoint.index
5. _retrain_checkpoint.data-00000-of-00001
6. checkpoint
Command Executed:
python freeze_graph.py \
  --input_graph=/Users/saurav/Desktop/example/tmp/retrained_graph.pb \
  --input_checkpoint=./_retrain_checkpoint \
  --output_graph=/Users/saurav/Desktop/example/tmp/frozen_graph.pb \
  --output_node_names=softmax
Getting error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 44: invalid start byte

Finally I found the answer. To freeze a graph you need to build the freeze_graph tool with Bazel.
1. Install Bazel using Homebrew: brew install bazel
2. If you don't have Homebrew, install it first:
/usr/bin/ruby -e "$(curl -fsSL \
https://raw.githubusercontent.com/Homebrew/install/master/install)"
Clone TensorFlow: git clone https://github.com/tensorflow/tensorflow
Change directory to tensorflow in the terminal.
Run ./configure. It asks a few questions; answer according to your needs (for most of them you can answer "no"). When it asks for the default path to Python, specify the path or just hit Enter.
Now build bazel for freeze_graph using command:
bazel build tensorflow/python/tools:freeze_graph
Keep the retrained graph and checkpoints in one folder.
Run the Bazel-built binary to freeze the graph (freeze_graph also needs --output_node_names; here that is softmax, as in the original command):
bazel-bin/tensorflow/python/tools/freeze_graph \
  --input_graph=YourDirectory/retrained_graph.pb \
  --input_checkpoint=YourDirectory/_retrain_checkpoint \
  --output_graph=YourDirectory/frozen_graph.pb \
  --output_node_names=softmax
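Once it succeeds, a quick sanity check (a minimal sketch using the TF 1.x API, assuming your output node is named softmax as above) is to load the frozen GraphDef in Python and look up the output op:

import tensorflow as tf

# Load the frozen GraphDef produced by freeze_graph.
with tf.gfile.GFile("YourDirectory/frozen_graph.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph and confirm the output node exists.
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    print(graph.get_operation_by_name("softmax"))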

Related

Tensorflow Serving Compiling Failure For CPU AVX AVX2

I used the method from the TFX official documentation to build the TensorFlow Serving devel image with Docker. The OS is macOS, on an Intel CPU.
Here is the docker build script:
#!/bin/bash
USER=$1
TAG=$2
TF_SERVING_VERSION_GIT_BRANCH="2.4.1"
git clone --branch="${TF_SERVING_VERSION_GIT_BRANCH}" https://github.com/tensorflow/serving
TF_SERVING_BUILD_OPTIONS="--copt=-mavx --local_ram_resources=4096"
cd serving && \
docker build --pull -t $USER/tensorflow-serving-devel:$TAG \
--build-arg TF_SERVING_VERSION_GIT_BRANCH="${TF_SERVING_VERSION_GIT_BRANCH}" \
--build-arg TF_SERVING_BUILD_OPTIONS="${TF_SERVING_BUILD_OPTIONS}" \
-f tensorflow_serving/tools/docker/Dockerfile.devel .
Then I ran the shell script; after more than 3 hours it failed.
I can't see the details because the builder clips Docker's log output.
Has anyone met a similar problem and can help on this topic?
Thanks a lot in advance!
These instruction sets are not available on all machines, especially with older processors.
If you'd like to apply generally recommended optimizations, including utilizing platform-specific instruction sets for your processor, you can add --config=nativeopt to Bazel build commands when building TensorFlow Serving.
tools/run_in_docker.sh bazel build --config=nativeopt tensorflow_serving/...
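Before kicking off another multi-hour build, it may be worth confirming what the build host's CPU actually supports. A minimal sketch (assumes Linux, e.g. run inside the build container, since it reads /proc/cpuinfo):

# Check whether the CPU advertises the AVX/AVX2 instruction sets.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

print("avx: ", "avx" in flags)
print("avx2:", "avx2" in flags)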

Problems at running ImageDataBunch in Deepnote

I'm having trouble running this line of code in Deepnote; does anyone know why?
data = ImageDataBunch.from_folder(path, train="train", valid ="test",ds_tfms=get_transforms(), size=(256,256), bs=32, num_workers=4).normalize()
The error says:
NameError: name 'ImageDataBunch' is not defined
I had previously imported the fastai library, so I don't get it!
The FastAI setup in Deepnote is not that straightforward. It's best to use a custom environment where you set things up in a Dockerfile; everything then works in the notebook afterwards. I'm not sure whether ImageDataBunch or whatever you're trying to do works the same way in FastAI v1 and v2, but here are the details for v1.
This is a Dockerfile which sets up the FastAI environment via conda:
# This is Dockerfile
FROM deepnote/python:3.9
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh
RUN bash ~/miniconda.sh -b -p $HOME/miniconda
ENV PATH $HOME/miniconda/bin:$PATH
ENV PYTHONPATH $HOME/miniconda
RUN $HOME/miniconda/bin/conda install python=3.9 ipykernel -y
RUN $HOME/miniconda/bin/conda install -c fastai -c pytorch fastai -y
RUN $HOME/miniconda/bin/python -m ipykernel install --user --name=conda
ENV DEFAULT_KERNEL_NAME "conda"
After that, you can test the fastai imports in the notebook:
import fastai
from fastai.vision import *
print(fastai.__version__)
ImageDataBunch
And if you download and unpack this sample MNIST dataset, you should be able to load the data as you suggested:
data = ImageDataBunch.from_folder(path, train="train", valid ="test",ds_tfms=get_transforms(), size=(256,256), bs=32, num_workers=4).normalize()
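If the linked archive gives you trouble, fastai v1 also bundles a small MNIST sample you can fetch with its own helper; note that this sample ships train and valid folders (not test), so the folder names differ slightly from the call above. A sketch:

from fastai.vision import *

# Download fastai's bundled MNIST sample (contains train/ and valid/).
path = untar_data(URLs.MNIST_SAMPLE)

# Same pipeline as above, adjusted to the sample's folder layout.
data = (ImageDataBunch
        .from_folder(path, train="train", valid="valid",
                     ds_tfms=get_transforms(), size=(256, 256),
                     bs=32, num_workers=4)
        .normalize())
print(data.classes)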
Feel free to check out or clone my Deepnote project to continue working on this.

'tflite_convert' is not recognized as an internal or external command (in windows)

I'm trying to convert my saved_model.pb (from the Object Detection API) to .tflite for ML Kit, but when I execute the command in cmd:
tflite_convert \
--output_file=/saved_model/maonani.tflite \
--saved_model_dir=/saved_model/saved_model
I get a response saying:
C:\Users\LENOVO-PC\tensorflow> tflite_convert \ --output_file=/saved_model/maonani.tflite \ --saved_model_dir=/saved_model/saved_model
'tflite_convert' is not recognized as an internal or external command,
operable program or batch file.
What should I do to make this work?
Below is an answer for a Linux system; hope it gives you ideas for Windows.
1. Locate the binary:
locate tflite_convert
2. If the output is
<TFLITE_CONVERT_PATH>/tflite_convert
where <TFLITE_CONVERT_PATH> depends on your TF installation, execute step 3.
3. Run the converter via its full path:
<TFLITE_CONVERT_PATH>/tflite_convert \
  --output_file=/saved_model/maonani.tflite \
  --saved_model_dir=/saved_model/saved_model
4. If the output is "not found", reinstall TF; in my case it is TF v1.9.
5. If step 3 works, you might add PATH=$PATH:<TFLITE_CONVERT_PATH> to ~/.bashrc.
6. If the locate command is not recognized, install it with:
sudo apt-get install locate
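If the tflite_convert console script never ends up on PATH on Windows, you can also sidestep it and drive the converter from Python. A minimal sketch, assuming TensorFlow 1.13 or later, where tf.lite.TFLiteConverter is available:

import tensorflow as tf

# Convert the SavedModel directly, bypassing the missing console script.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model/saved_model")
tflite_model = converter.convert()

# Write the flatbuffer next to the SavedModel.
with open("saved_model/maonani.tflite", "wb") as f:
    f.write(tflite_model)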

Installing tensorboard built from source

This is about a TensorBoard built from source, not the pip-installed one.
I could build it successfully:
$ git clone https://github.com/tensorflow/tensorboard.git
$ cd tensorboard/
$ bazel build //tensorboard
Starting local Bazel server and connecting to it...
......................................
: (log messages here)
Target //tensorboard:tensorboard up-to-date:
bazel-bin/tensorboard/tensorboard
INFO: Elapsed time: 326.553s, Critical Path: 187.92s
INFO: 619 processes: 456 linux-sandbox, 12 local, 151 worker.
INFO: Build completed successfully, 1268 total actions
Then yes I can run it as documented in tensorboard/README.md, and it works.
$ ./bazel-bin/tensorboard/tensorboard --logdir path/to/logs
The problem is, I'd like to run it as if installed via pip like this:
$ tensorboard --logdir path/to/logs
But as far as I can tell, no script is provided to create a .whl file so that we can pip-install it locally, the way tensorflow provides one:
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
$ sudo pip install /tmp/tensorflow_pkg/tensorflow-1.8.0-py2-none-any.whl
So... can anybody show how to do that? Writing a packaging script would solve this, but one should already exist somewhere, given that tensorboard is distributed via pip anyway. :)
My workaround so far is not clean enough:
$ ln -s /my/build/folder/tensorboard/bazel-bin/tensorboard/tensorboard ~/bin
$ ln -s /my/build/folder/tensorboard/bazel-bin/tensorboard/tensorboard.runfiles ~/bin
I appreciate your suggestions, thanks!
Update July-21:
Thanks to W JC, I found the instructions are already there in tensorboard/pip_package/BUILD:
# rm -rf /tmp/tensorboard
# bazel run //tensorboard/pip_package:build_pip_package
# pip install -U /tmp/tensorboard/*py2*.pip
Unfortunately it shows an error in my environment; I guess it's a local issue, maybe because I'm using Anaconda.
But the problem is basically resolved: it should work as long as it runs in a supported environment.
It seems there is a script in /tensorboard/pip_package that tries to build wheels:
bazel run //tensorboard/pip_package:build_pip_package did generate the wheel, but in the folder that bazel-bin points to. In my case it was generated at ~/.cache/bazel/_bazel_peijia/b64ba42719633ff75eec6880decefcd3/execroot/org_tensorflow_tensorboard/bazel-out/k8-fastbuild/bin/tensorboard/pip_package/build_pip_package.runfiles/org_tensorflow_tensorboard/tensorboard-2.10.0a0-py3-none-any.whl
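To locate the wheel without memorizing that cache path, one option is to walk the bazel-bin symlink tree after the build; a small sketch (assumes the bazel run command above has completed):

import os

# bazel-bin is a symlink into Bazel's cache, so follow links while walking.
for root, _dirs, files in os.walk("bazel-bin/tensorboard/pip_package", followlinks=True):
    for name in files:
        if name.endswith(".whl"):
            print(os.path.join(root, name))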

installing TensorFlow on the IBM power8

I have access to a large IBM Power8 machine, and would like to install TensorFlow on it. Naturally, I tried the quick pip install, but it failed:
sudo pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.6.0-cp27-none-linux_x86_64.whl
tensorflow-0.6.0-cp27-none-linux_x86_64.whl is not a supported wheel on this platform.
Storing debug log for failure in /home/pv/.pip/pip.log
Unfortunately, pip.log contains little useful info.
/usr/bin/pip run on Sat Feb 6 17:29:34 2016
tensorflow-0.6.0-cp27-none-linux_x86_64.whl is not a supported wheel on this platform.
Exception information:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 283, in run
InstallRequirement.from_line(name, None))
File "/usr/lib/python2.7/dist-packages/pip/req.py", line 168, in from_line
raise UnsupportedWheel("%s is not a supported wheel on this platform." % wheel.filename)
UnsupportedWheel: tensorflow-0.6.0-cp27-none-linux_x86_64.whl is not a supported wheel on this platform.
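The rejection is expected: the wheel's platform tag says linux_x86_64, but a Power8 machine reports a different architecture, so pip refuses it. You can confirm what your interpreter reports:

import platform

# Prints 'ppc64le' on Power8, not the 'x86_64' the wheel was built for.
print(platform.machine())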
The next thing I tried was to build TensorFlow from source. To no avail: all my attempts ended with some "cannot execute binary file: Exec format error" message, e.g.:
/usr/local/bin/bazel: line 86: /usr/local/lib/bazel/bin/bazel-real: cannot execute binary file: Exec format error
So then I tried to compile Bazel from source, which also resulted in a similar hard error.
me@machine:~/bazel-0.1.5$ ./compile.sh
INFO: You can skip this first step by providing a path to the bazel binary as second argument:
INFO: ./compile.sh compile /path/to/bazel
🍃 Building Bazel from scratch.
Compiling Java stubs for protocol buffers...
third_party/protobuf/protoc-linux-x86_32.exe -Isrc/main/protobuf/ --java_out=/tmp/bazel.T9C83cNa/src src/main/protobuf/android_studio_ide_info.proto
scripts/bootstrap/buildenv.sh: line 63: third_party/protobuf/protoc-linux-x86_32.exe: cannot execute binary file: Exec format error
pv@sardonis:~/bazel-0.1.5$ ^C
However, I found this link http://www.cnblogs.com/rodenpark/p/5007744.html that explains how to build the protobuf compiler from source on the Power8 machine. This worked, and after the modifications described in his other post http://www.cnblogs.com/rodenpark/p/5007846.html I managed to at least get the compilation process started. But now it crashes with a ton of errors, each of which seems less severe on its own, but the sheer number of them makes it look hopeless; I posted them on http://pastebin.com/KjkseaGx for reference.
So... I'm running out of inspiration. What can I do to make TensorFlow work on the Power8 machine?
Install bazel 0.2.0-ppc
tf@ubuntu16:~$ git clone https://github.com/ibmsoe/bazel
tf@ubuntu16:~/bazel$ git checkout v0.2.0-ppc
tf@ubuntu16:~/bazel$ ./compile.sh
Install tensorflow
tf@ubuntu16:~$ git clone --recurse-submodules https://github.com/tensorflow/tensorflow
tf@ubuntu16:~/tensorflow$ git checkout v0.10.0rc0
tf@ubuntu16:~/tensorflow$ git commit -m"v0.10.0rc0"
tf@ubuntu16:~/tensorflow$ git cherry-pick ce70f6cf842a46296119337247c24d307e279fa0
tf@ubuntu16:~/tensorflow$ git cherry-pick f1acb3bd828a73b15670fc8019f06a5cd51bd564
tf@ubuntu16:~/tensorflow$ git cherry-pick 9b6215a691a2eebaadb8253bd0cf706f2309a0b8
tf@ubuntu16:~/tensorflow$ ./configure
tf@ubuntu16:~/tensorflow$ bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
Here you'll encounter an error, something like this:
ERROR: /home/tf/.cache/bazel/_bazel_tf/b2f766da603b0bed56d4c1d0b178456a/external/farmhash_archive/BUILD:5:1: Executing genrule #farmhash_archive//:configure failed: bash failed: error executing command /bin/bash -c ... (remaining 1 argument(s) skipped): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1.
/home/tf/.cache/bazel/_bazel_tf/b2f766da603b0bed56d4c1d0b178456a/tensorflow/external/farmhash_archive/farmhash-34c13ddfab0e35422f4c3979f360635a8c050260 /home/tf/.cache/bazel/_bazel_tf/b2f766da603b0bed56d4c1d0b178456a/tensorflow
/tmp/tmp.XdCPQefJyZ /home/tf/.cache/bazel/_bazel_tf/b2f766da603b0bed56d4c1d0b178456a/tensorflow/external/farmhash_archive/farmhash-34c13ddfab0e35422f4c3979f360635a8c050260 /home/tf/.cache/bazel/_bazel_tf/b2f766da603b0bed56d4c1d0b178456a/tensorflow
You'll have to edit config.guess as below to insert a stanza for ppc64le
tf@ubuntu16:~/.cache/bazel/_bazel_tf/b2f766da603b0bed56d4c1d0b178456a/external/farmhash_archive/farmhash-34c13ddfab0e35422f4c3979f360635a8c050260$ vi config.guess
*:BSD/OS:*:*)
echo ${UNAME_MACHINE}-unknown-bsdi${UNAME_RELEASE}
exit ;;
+ ppc64le:Linux:*:*)
+ echo powerpc64le-unknown-linux-gnu
+ exit ;;
*:FreeBSD:*:*)
case ${UNAME_MACHINE} in
tf@ubuntu16:~/tensorflow$ bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
tf@ubuntu16:~/tensorflow$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
tf@ubuntu16:~/tensorflow$ sudo pip install /tmp/tensorflow_pkg/tensorflow*.whl
tf@ubuntu16:~/tensorflow/bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfi
tf@ubuntu16:~/tensorflow$ mkdir _python_build
tf@ubuntu16:~/tensorflow$ cd _python_build
tf@ubuntu16:~/tensorflow/_python_build$ ln -s ~/tensorflow/bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles/tensorflow/* .
tf@ubuntu16:~/tensorflow/_python_build$ ln -s ~/tensorflow/tools/* .
tf@ubuntu16:~/tensorflow/_python_build$ python setup.py develop
Using miniconda:
Installing miniconda:
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-ppc64le.sh -O miniconda.sh
bash miniconda.sh
Accept the license terms and allow conda to be added to PATH.
rm miniconda.sh
echo export IBM_POWERAI_LICENSE_ACCEPT=yes >> ~/.bashrc
source ~/.bashrc
This should add (base) to your terminal prompt. Add the correct channel as first priority:
conda config --add default_channels https://repo.anaconda.com/pkgs/main
conda config --prepend channels https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/
Create an environment (it is good practice not to install packages in base):
conda create -n ai python=3.7
conda activate ai
conda install --strict-channel-priority tensorflow-gpu
For more information on miniconda on IBM Power 8 and Anaconda: IBM Source & Anaconda Source
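Whichever route you take, a quick import check confirms the install worked:

import tensorflow as tf

# If the build or conda install succeeded, this prints the version.
print(tf.__version__)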