compiling inception_client.cc without bazel - tensorflow

I need a simple function for my C++ program to be able to pull predictions from the TensorFlow model server, so I decided to use the inception client as a starting point.
I've followed the advice from here:
https://github.com/tensorflow/tensorflow/issues/2412
I'm currently getting various compile errors such as:
"undefined reference to tensorflow::serving::PredictRequest::_slow_mutable_model_spec()" and "undefined reference to tensorflow::serving::PredictRequest::~PredictRequest()"
Am I missing a library or something?
Is there a simpler way to write the function without needing TensorFlow in C++?

How to build numpy from source and be able to debug it in a live application?

I am currently reading the documentation for NumPy; however, to get a more thorough understanding of the library, it would be helpful if there were a way to debug its workflow as I call a particular function.
I have tried debugging while NumPy was imported as a third-party module. However, when I try to step into it, the debugger actually steps over it.
Therefore, I am building it from source locally in an attempt to run and debug it.
I find the developer documentation on the NumPy website a bit vague for beginners like me.
I would highly appreciate any pointers that would set me on the right path, as I have tried everything I know of.
Thanks!
I am currently reading the documentation for NumPy; however, to get a more thorough understanding of the library, it would be helpful if there were a way to debug its workflow as I call a particular function.
Unless you plan to fix a bug in NumPy, help the NumPy developers, or become a contributor, you should not debug NumPy directly.
I have tried debugging while NumPy was imported as a third-party module. However, when I try to step into it, the debugger actually steps over it.
By default, NumPy enables compiler optimizations like -O2 or -O3, or even uses annotations in the code to tell the compiler to use a given optimization level (for example, so it vectorizes better). Such optimizations tend to make debugging harder and less reliable. The maximum optimization level for debugging should be -Og and the minimum is -O0; using -O1/-O2/-O3 tends to cause issues. You also need to enable debugging information with -g.
The standard way to run and debug NumPy is to use gdb --args python runtests.py -g --python mytest.py. The -g flag should compile NumPy with the options -O0 -ggdb. Adding --debug-info may help you check that everything is built correctly. For more information see this and that. You can also find the above information in the runtests.py script.
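Before attaching gdb, it is also worth confirming that the interpreter is loading your locally built NumPy rather than the site-packages copy. A minimal sanity check:

```python
import numpy as np

# The module path shows which build the interpreter actually imports; when
# debugging a local build, it should point into your source tree rather than
# into site-packages.
print(np.__file__)
print(np.__version__)
```

If the path still points into site-packages, gdb will be stepping through the wrong binaries no matter how NumPy was compiled.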
If you still have issues with the above method, the last, desperate option is to add printf calls directly in the code (taking care to flush stdout frequently). It is not very clean, and it forces NumPy to be recompiled frequently, which is a bit slow, but it is a pretty good solution when gdb is unstable (i.e. crashes or misbehaves), for example.
Thank you for contributing to NumPy.

Unexpected keyword argument 'show_dtype'

I am trying to plot my model together with the data types, using the following code:
plot_model(model, to_file='model/model.png', show_dtype=True, show_shapes=True, show_layer_names=True)
However, I get an error that show_dtype is not an acceptable parameter even though it appears on the TensorFlow documentation: https://www.tensorflow.org/api_docs/python/tf/keras/utils/plot_model
This is the first time I have run into this issue. It seems to be due to having an earlier release, e.g. if you installed it from Anaconda Forge rather than from somewhere else like pip. It is a simple fix, however.
Basically, you need to go into the library source file and update it to the current version shown on the TensorFlow documentation page.
The link to the GitHub page from which you can copy the Python code is here: https://github.com/tensorflow/tensorflow/blob/v2.5.0/tensorflow/python/keras/utils/vis_utils.py#L278-L348
Afterwards, head to your library path and paste that Python code there.
For example, my path is the following: C:/ProgramData/Anaconda3/envs/ml/Lib/site-packages/tensorflow/python/keras/utils/vis_utils.py. Yours should be something similar.
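A less invasive alternative to editing the library source is to drop any keyword arguments that the installed version of plot_model does not accept. A minimal sketch (call_with_supported_kwargs and fake_plot_model are hypothetical names; the stand-in function merely mimics an older signature that lacks show_dtype):

```python
import inspect

def call_with_supported_kwargs(func, *args, **kwargs):
    # Keep only the keyword arguments present in func's signature, so the
    # call degrades gracefully on older library versions.
    supported = inspect.signature(func).parameters
    filtered = {k: v for k, v in kwargs.items() if k in supported}
    return func(*args, **filtered)

# Stand-in for an older plot_model that does not accept show_dtype:
def fake_plot_model(model, to_file, show_shapes=False, show_layer_names=True):
    return {"to_file": to_file, "show_shapes": show_shapes}

result = call_with_supported_kwargs(
    fake_plot_model, "model", to_file="model.png",
    show_dtype=True, show_shapes=True)
print(result)  # show_dtype was silently dropped
```

The same wrapper can be used with the real tf.keras.utils.plot_model, so newer arguments are passed through only when the installed version supports them.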

Error using update_struct function in TensorFlow Federated

I'm attempting to run the Minimal Stand-Alone Implementation of Federated Averaging from the TensorFlow Federated GitHub repository but receiving the following error in the server_update function:
AttributeError: module 'tensorflow_federated.python.common_libs.structure' has no attribute 'update_struct'
I have some old TensorFlow Federated code that uses the update_state function from the tff.utils package in place of update_struct(), but according to a commit on GitHub that package is now empty. I'm using TensorFlow Federated version 0.18.0, and I had the same problem on Google Colab.
My question is: how can I fix this error?
Thanks, any help is appreciated.
I am assuming you hit the error you describe here.
It seems that the symbol is not in the 0.18 release. You can either depend on the nightly version (pip install tensorflow-federated-nightly) or modify the line to construct the new object directly instead of using the update_struct helper. That is, the linked line could change to:
return ServerState(model_weights,
                   server_optimizer.variables(),
                   server_state.round_num + 1)
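The same construct-it-directly pattern can be sketched with a stand-in container (the real ServerState in the tutorial is an attrs class, and the exact field names used here are assumptions):

```python
from typing import Any, NamedTuple

# Hypothetical stand-in for the tutorial's ServerState container.
class ServerState(NamedTuple):
    model: Any
    optimizer_state: Any
    round_num: int

old_state = ServerState(model="w0", optimizer_state=[], round_num=3)

# Instead of structure.update_struct(old_state, ...), build the new state
# directly from the fields of the old one:
new_state = ServerState("w1", old_state.optimizer_state,
                        old_state.round_num + 1)
print(new_state.round_num)  # 4
```

Because all fields are passed explicitly, nothing from the removed helper is needed, which is why this works on releases where update_struct is missing.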

How tf.contrib.seq2seq.TrainingHelper can be replaced in TensorFlow 2.2

I am trying to run a project from GitHub but am having trouble with TrainingHelper. I am stuck with it; I don't know how to convert it to TF 2. The console always returns an error like this:
AttributeError: module 'tensorflow_addons.seq2seq' has no attribute 'TrainingHelper'
Please help me!
The replacement seems to be https://www.tensorflow.org/addons/api_docs/python/tfa/seq2seq/TrainingSampler
The API is a bit different, though. Some parameters that were passed to the TrainingHelper constructor are passed to TrainingSampler.initialize() instead. There are also minimal differences in some of the return values, so you have to do some adaptation when migrating the code.
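The move of constructor parameters into initialize() can be illustrated with minimal stub classes (these stubs are assumptions that only mirror where the inputs and sequence_length arguments go; the real tfa.seq2seq.TrainingSampler.initialize() also returns initial finished/input tensors):

```python
# Stub mimicking the old tf.contrib.seq2seq.TrainingHelper (TF 1.x):
# the tensors were passed straight to the constructor.
class TrainingHelperStub:
    def __init__(self, inputs, sequence_length, time_major=False):
        self.inputs = inputs
        self.sequence_length = sequence_length

# Stub mimicking tfa.seq2seq.TrainingSampler (TF 2.x): the tensors are
# supplied to initialize() rather than to the constructor.
class TrainingSamplerStub:
    def __init__(self, time_major=False):
        self.time_major = time_major

    def initialize(self, inputs, sequence_length=None):
        self.inputs = inputs
        self.sequence_length = sequence_length

old_helper = TrainingHelperStub([[1, 2, 3]], [3])
new_sampler = TrainingSamplerStub()
new_sampler.initialize([[1, 2, 3]], sequence_length=[3])
print(old_helper.sequence_length == new_sampler.sequence_length)  # True
```

The practical migration step is therefore to delete the tensor arguments from the sampler's constructor call and move them to the initialize() call (or let BasicDecoder forward them).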

Did tensorflow at any point change 'tensorflow.sub' into 'tensorflow.subtract'?

I was testing some code I was given and got an error saying:
AttributeError: 'module' object has no attribute 'sub'
The module referred to is TensorFlow. To investigate this error I started looking into the TensorFlow source code and found a function 'tensorflow.subtract'. Replacing 'sub' by 'subtract' made the error go away.
However, I am still wondering why the error occurred in the first place. I can think of two reasons:
At some point TensorFlow renamed 'sub' to 'subtract' and the code I was given has not yet been updated to accommodate that change; changing 'sub' to 'subtract' simply brought the code up to the newer version of TensorFlow.
I have made some mistake in importing the wrong libraries, and TensorFlow does actually have a 'sub' function. This would mean that changing to 'subtract' potentially altered the workings of the program.
Can anyone give advice on what the most likely scenario is here?
The TensorFlow 1.0 release contained multiple breaking changes to the API, including the renaming of tf.sub to tf.subtract (likewise, tf.mul was renamed to tf.multiply, et cetera). Comprehensive lists of all the changes can be found here:
https://www.tensorflow.org/install/migration
https://github.com/tensorflow/tensorflow/releases/tag/v1.0.0
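If code has to run against both pre-1.0 and post-1.0 TensorFlow, a small compatibility shim can pick whichever name exists. A sketch using stand-in modules (subtract_compat is a hypothetical helper, and the SimpleNamespace objects merely mimic the old and new module attributes):

```python
import types

def subtract_compat(tf_module, x, y):
    # Prefer the TF >= 1.0 name and fall back to the pre-1.0 name.
    op = getattr(tf_module, "subtract", None) or getattr(tf_module, "sub", None)
    if op is None:
        raise AttributeError("module has neither 'subtract' nor 'sub'")
    return op(x, y)

# Stand-ins for the old and new TensorFlow modules:
old_tf = types.SimpleNamespace(sub=lambda x, y: x - y)
new_tf = types.SimpleNamespace(subtract=lambda x, y: x - y)
print(subtract_compat(old_tf, 5, 3), subtract_compat(new_tf, 5, 3))  # 2 2
```

In practice, though, upgrading the code to the tf.subtract name (as the question already did) is the cleaner long-term fix.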