Static library TensorFlow - tensorflow

I'm new to C++, and I want to use a TensorFlow Lite static library to load and run a TensorFlow model from a C++ program. But after I built the library and tried to compile the program (with libtensorflow-lite.a included), there is a problem:
./libtensorflow-lite.a:1:1: error: expected unqualified-id
!<arch>
^
./libtensorflow-lite.a:2:3: error: invalid filename for line marker directive
#1/20 1595980892 0 0 100644 664268 `
^
[....]
If you could give me a clue about how to import this library without errors, it would be awesome.
PS: I built the library with ./build_lib.sh, which is in the TensorFlow repository at tensorflow/tensorflow/lite/tools/make/build_lib.sh

The errors show that the .a archive is being #included as if it were a source file, so the compiler is trying to parse the archive's binary contents (the !<arch> magic bytes). A static library is consumed by the linker, not the preprocessor: #include only the TensorFlow Lite headers in your code, and pass libtensorflow-lite.a on the link line. To load and run a TFLite model in C++, use the TFLite C++ Inference API.
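A minimal sketch of that API, assuming a float32 model at the placeholder path model.tflite; the include paths depend on where you built TensorFlow Lite:

#include <cstdio>
#include <memory>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the model from disk ("model.tflite" is a placeholder).
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  if (!model) { std::fprintf(stderr, "Failed to load model\n"); return 1; }

  // Wire the model to an interpreter with the built-in op resolver.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  interpreter->AllocateTensors();

  // Fill the first input tensor (assumes a float32 input).
  float* input = interpreter->typed_input_tensor<float>(0);
  input[0] = 1.0f;  // ...fill the rest of the input here

  // Run inference and read the first output tensor.
  interpreter->Invoke();
  float* output = interpreter->typed_output_tensor<float>(0);
  std::printf("output[0] = %f\n", output[0]);
  return 0;
}

Build it by linking the archive instead of including it, for example (the -I paths are guesses for a make-based build; adapt them to your tree):
g++ -std=c++11 main.cpp -I/path/to/tensorflow -I/path/to/tensorflow/tensorflow/lite/tools/make/downloads/flatbuffers/include libtensorflow-lite.a -lpthread -ldl -o main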

Related

Grappler optimization failed. Error: Op type not registered 'FusedBatchNormV3' in Tensorflow Serving

I am serving a model using Tensorflow Serving.
TensorFlow ModelServer: 1.13.0-rc1+dev.sha.fd92d2f
TensorFlow Library: 1.13.0-rc1
I sanity-tested with load_model and predict(...) in a notebook, and it makes the expected predictions. The model is a ResNet50 with a custom head (fine-tuned).
If I try to submit the request as instructed in
https://www.tensorflow.org/tfx/tutorials/serving/rest_simple
I get the error:
2022-02-10 22:22:09.120103: W external/org_tensorflow/tensorflow/core/kernels/partitioned_function_ops.cc:197] Grappler optimization failed. Error: Op type not registered 'FusedBatchNormV3' in binary running on tensorflow-no-gpu-20191205-rec-eng. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
2022-02-10 22:22:09.137225: W external/org_tensorflow/tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at partitioned_function_ops.cc:118 : Not found: Op type not registered 'FusedBatchNormV3' in binary running on tensorflow-no-gpu-20191205-rec-eng. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
Any idea how to resolve this? I will provide more details upon request.
I found out what was wrong: the ModelServer version. The model was exported with a newer TensorFlow than the 1.13 server binary, which does not have the FusedBatchNormV3 op registered. Just ensure the server reports:
TensorFlow ModelServer: 2.8.0-rc1+dev.sha.9400ef1
TensorFlow Library: 2.8.0
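One way to pin the server version is the official Docker image; a sketch, assuming a SavedModel at the placeholder path /path/to/my_model served under the name my_model:
docker pull tensorflow/serving:2.8.0
docker run -p 8501:8501 \
  --mount type=bind,source=/path/to/my_model,target=/models/my_model \
  -e MODEL_NAME=my_model -t tensorflow/serving:2.8.0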

How to let TensorFlow XLA know the CUDA path

I installed the TensorFlow nightly build via the command
pip install tf-nightly-gpu --prefix=/tf/install/path
When I try to run any XLA example, TensorFlow fails with the error "Unable to find libdevice dir. Using '.' Failed to compile ptx to cubin. Will attempt to let GPU driver compile the ptx. Not found: /usr/local/cuda-10.0/bin/ptxas not found".
So apparently TensorFlow cannot find my CUDA path. On my system, CUDA is installed in /cm/shared/apps/cuda/toolkit/10.0.130. Since I didn't build TensorFlow from source, XLA by default searches the folder /usr/local/cuda-*; since I do not have this folder, it issues the error.
Currently my workaround is to create a symbolic link. I checked the TensorFlow source code in tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc. There is a comment in the file: "// CUDA location explicitly specified by user via --xla_gpu_cuda_data_dir has highest priority." So how do I pass a value to this flag? I tried the following two environment variables, but neither of them works:
export XLA_FLAGS="--xla_gpu_cuda_data_dir=/cm/shared/apps/cuda10.0/toolkit/10.0.130/"
export TF_XLA_FLAGS="--xla_gpu_cuda_data_dir=/cm/shared/apps/cuda10.0/toolkit/10.0.130/"
So how do I use the "--xla_gpu_cuda_data_dir" flag? Thanks.
You can run export XLA_FLAGS=--xla_gpu_cuda_data_dir=/path/to/cuda in the terminal before launching Python.
There is a code change for this issue, but it is not clear how to use it; see https://github.com/tensorflow/tensorflow/issues/23783
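A sketch of both options, using /cm/shared/apps/cuda/toolkit/10.0.130 as a stand-in for your actual CUDA root (the question quotes two slightly different paths, so double-check which one exists):
# Option 1: point XLA at the toolkit before starting Python
export XLA_FLAGS="--xla_gpu_cuda_data_dir=/cm/shared/apps/cuda/toolkit/10.0.130"
python your_xla_script.py  # your_xla_script.py is a placeholder

# Option 2: symlink the toolkit into the default search location
sudo ln -s /cm/shared/apps/cuda/toolkit/10.0.130 /usr/local/cuda-10.0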

ImportError: cannot import name 'preprocessor_pb2' during training after a successful installation

I installed the Object Detection API using https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md and checked it by running model_builder_test.py, which gave an OK result.
Then I moved on to running the train.py on my dataset using the following command
python train.py --logtostderr --pipeline_config_path=pipeline.config --train_dir=train_file
And I am getting the error ImportError: cannot import name 'preprocessor_pb2'
This particular preprocessor_pb2.py exists in the path it is looking in, i.e.
C:\Users\SP-TestMc\Anaconda3\envs\tensorflow\models-master\models-master\research\object_detection\protos
What could be the reason for this error then?
See https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md#protobuf-compilation. It sounds like you need to compile the protos before using the Python scripts.
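From the linked guide, the compilation step is run from the research directory (assuming protoc is installed and on PATH):
# From models-master/research/
protoc object_detection/protos/*.proto --python_out=.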

Unable to import tensorflow in Spyder; it shows the following error

Error importing tensorflow. Unless you are using bazel,
you should not try to import tensorflow from its source directory;
please exit the tensorflow source tree, and relaunch your python interpreter
from there.
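As the message itself suggests, the usual fix is to launch the interpreter from a directory outside the cloned TensorFlow source tree, so Python picks up the installed package rather than the source folder; a sketch:
cd ~   # any directory that is not inside the tensorflow repo
python -c 'import tensorflow as tf; print(tf.__version__)'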

Python configuration error when building retrain.py with Bazel, following the Google doc

I am learning transfer learning following How to Retrain Inception's Final Layer for New Categories. However, when I build retrain.py using Bazel, I get the following error:
python configuration error:'PYTHON_BIN_PATH' environment variable is not set and referenced by '//third_party/py/numpy:headers'
I am sorry, I did my best to attach a screenshot of the error, but failed.
I use Python 2.7, Anaconda 2, Bazel 0.6.1, and TensorFlow 1.3.
I'd appreciate any reply!
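The error means Bazel cannot find the PYTHON_BIN_PATH environment variable referenced by the numpy headers rule. One way to provide it is to export it (or re-run ./configure, which sets it interactively) before building; a sketch, assuming the Anaconda python is the one you want:
export PYTHON_BIN_PATH=$(which python)  # or the full path to your Anaconda python
bazel build tensorflow/examples/image_retraining:retrain  # the build target named in that tutorial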