How to load a tensorflow checkpoint by myself without the C++ API? - tensorflow

I am using tensorflow 1.0.
My production environment cannot build tensorflow-cpp because of its low gcc & glibc versions.
Is there any doc about how to load a checkpoint or frozen graph in C++ without the API?
1. How are the network parameters saved? (embeddings...)
2. How is the graph structure saved? (layers, weights...)

There is no documentation on doing this that I know of. Loading a checkpoint without the C++ runtime won't be very useful to you because you won't be able to run it.
The checkpoint by default does not include the graph structure, but if you export a metagraph you will get it in a serialized protocol buffer format. Implementing a parser for this (and the weights checkpoint) yourself sounds difficult to get right and likely to break in the future.

Related

Buffer deduplication procedure will be skipped when flatbuffer library is not properly loaded. (Tensorflow Lite)

Every time I convert a model to the tflite format, I receive this WARNING. I wonder whether this library would further reduce the model size; if so, I would like to use it. But I can't find relevant information on Google, and flatbuffer's documentation doesn't seem to mention how to install it so that tensorflow can invoke it.

Using custom StyleGAN2-ada network in GANSpace (.pkl to .pt conversion)

I trained a network using Nvidia's StyleGAN2-ada pytorch implementation. I now have a .pkl file. I would like to use the GANSpace code on my network. However, to use GANSpace with a custom model, you need to be able to give it a checkpoint to your model that should be uploaded somewhere (they suggest Google Drive) (checkpoint required in code here). I am not entirely sure how or why this works, but either way it seems I need a .pt file of my network, not the .pkl file I currently have.
I tried following this tutorial. The GANSpace code actually provides a file (models/stylegan2/convert_weight.py) that can do this conversion. However, the convert_weight.py file that was supposed to be there has been replaced by a link to a whole other repo. If I try to run the convert_weight.py file as below, it gives me the following error
python content/stylegan2-pytorch/convert_weight.py --repo="content/stylegan2-pytorch/" "content/fruits2_output/00000-fruits2-auto1/network-snapshot-025000.pkl"
ModuleNotFoundError: No module named 'dnnlib'
This makes sense because there is no such dnnlib module. If I instead change it to look for the dnnlib module somewhere that does have it (here) like this
python content/stylegan2-pytorch/convert_weight.py --repo="content/stylegan2/" "content/fruits2_output/00000-fruits2-auto1/network-snapshot-025000.pkl"
it previously gave me an error saying TensorFlow had not been installed (which in all fairness it hadn't, because I am using PyTorch), much like the error reported here. I then installed TensorFlow, but then it gives me this error.
ModuleNotFoundError: No module named 'torch_utils'
again the same as in the previous issue reported on GitHub. After installing torch_utils I get the same error as SamTransformer (ModuleNotFoundError: No module named 'torch_utils.persistence'). The response was "convert_weight.py does not supports stylegan2-ada-pytorch".
There is a lot I am not sure about, like why I need to convert a .pkl file to .pt in the first place. A lot of the stuff seems to talk about converting Tensorflow models to Pytorch ones, but mine was done in Pytorch originally, so why do I need to convert it? I just need a way to upload my own network to use in GANSpace - I don't really mind how, so any suggestions would be much appreciated.
Long story short, the conversion script provided was to convert weights from the official Tensorflow implementation of StyleGAN2 into Pytorch. As you mentioned, you already have a model in Pytorch, so it's reasonable for the conversion script to not work.
Instead of StyleGAN2 you used StyleGAN2-Ada, which isn't mentioned in the GANspace repo; most probably it didn't exist yet when the GANspace repo was created. As far as I know, StyleGAN2-Ada uses the same architecture as StyleGAN2, so as long as you manually modify your pkl file into the required pt format, you should be able to continue setup.
Looking at the source code for converting to Pytorch, GANspace requires the pt file to be a dict with keys: ['g', 'g_ema', 'd', 'latent_avg']. StyleGAN2-Ada saves a pkl containing a dict with the following keys: ['G', 'G_ema', 'D', 'augment_pipe']. You might be able to get things to work by loading the contents of your pkl file and resaving them in pt using these keys.
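A minimal sketch of that resaving step, assuming the key names above are the only mismatch. Strings stand in for the real network objects here, and `KEY_MAP` / `remap_checkpoint` are illustrative names; with real networks you would load the pkl through the Ada code's own loader and save the result with torch.save.

```python
# Hypothetical key mapping from the StyleGAN2-Ada pkl layout to the
# ['g', 'g_ema', 'd', 'latent_avg'] dict GANspace expects.
KEY_MAP = {"G": "g", "G_ema": "g_ema", "D": "d"}

def remap_checkpoint(ada_ckpt):
    """Rename StyleGAN2-Ada keys to the names GANspace looks for."""
    out = {new: ada_ckpt[old] for old, new in KEY_MAP.items()}
    # GANspace also wants 'latent_avg'. StyleGAN2-Ada keeps the running
    # average latent on the mapping network (G_ema.mapping.w_avg), so in
    # practice you would pull that tensor out here.
    out["latent_avg"] = None  # placeholder
    return out

# Demo with strings standing in for the real network objects:
ada = {"G": "G-net", "G_ema": "Gema-net", "D": "D-net", "augment_pipe": None}
converted = remap_checkpoint(ada)
print(sorted(converted))  # ['d', 'g', 'g_ema', 'latent_avg']

# With real networks: torch.save(converted, "network.pt") instead of
# keeping the dict in memory.
```

Note that 'augment_pipe' is simply dropped, since GANspace has no use for the training-time augmentation state.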

How can I convert TRT optimized model to saved model?

I would like to convert a TRT-optimized frozen model to a SavedModel for TensorFlow Serving. Are there any suggestions or sources to share?
Or are there any other ways to deploy a TRT optimized model in tensorflow serving?
Thanks.
Assuming you have a TRT-optimized model (i.e., the model is already represented in UFF) you can simply follow the steps outlined here: https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#python_topics. Pay special attention to sections 3.3 and 3.4, since in these sections you actually build the TRT engine and then save it to a file for later use. From that point forward, you can just re-use the serialized engine (aka a PLAN file) to do inference.
Basically, the workflow looks something like this:
Build/train model in TensorFlow.
Freeze model (you get a protobuf representation).
Convert model to UFF so TensorRT can understand it.
Use the UFF representation to build a TensorRT engine.
Serialize the engine and save it to a PLAN file.
Once those steps are done (and you should have sufficient example code in the link I provided) you can just load the PLAN file and re-use it over and over again for inference operations.
If you are still stuck, there is an excellent example that is installed by default here: /usr/src/tensorrt/samples/python/end_to_end_tensorflow_mnist. You should be able to use that example to see how to get to the UFF format. Then you can just combine that with the example code found in the link I provided.
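The pay-off of steps 4-5 above is that the expensive engine build happens once and the serialized result is reused. A runnable sketch of that cache-the-PLAN-file pattern, where `build_engine_bytes` and `PLAN_PATH` are illustrative stand-ins for the real TensorRT build (parsing the UFF file, building the engine, then serializing it):

```python
import os

PLAN_PATH = "model.plan"  # hypothetical output path

def build_engine_bytes():
    # Stand-in for the expensive real build: in actual TensorRT code this
    # is where you parse the UFF model, build the engine, and serialize it.
    return b"serialized-engine-bytes"

def load_or_build_plan():
    """Build the engine once; afterwards just reload the PLAN file."""
    if os.path.exists(PLAN_PATH):
        with open(PLAN_PATH, "rb") as f:
            return f.read()
    data = build_engine_bytes()
    with open(PLAN_PATH, "wb") as f:
        f.write(data)
    return data

first = load_or_build_plan()   # builds and writes model.plan
second = load_or_build_plan()  # hits the cached file
print(first == second)  # True
```

In the real pipeline you would deserialize those bytes back into an engine object before running inference, but the build-once/reload-many structure is the same.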

How to parse tensorflow model with C++ API

I want to parse a pre-trained model of tensorflow. For example, I want to get the full list of operation nodes, including the names and dependency given a model.
So, first I searched the Java API, and apparently little is supported by the Java interface. I then looked at the C++ API, but failed to find the right APIs.
The reason I don't use python is that I need to do this on android devices.
The TensorFlow graph is stored as a GraphDef protocol buffer. You should be able to build a Java version of this and use it to inspect the stored graph. This will have the lists of operations and their dependencies, but will not have the values of the weights.
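To make that concrete, here is a small sketch of the traversal. A plain dict mimics the parsed GraphDef; name, op, and input are the actual NodeDef fields you would read from the generated protobuf classes in Java or C++.

```python
# A GraphDef is essentially a repeated list of NodeDef messages; each
# NodeDef carries a name, an op type, and the names of its input nodes.
# A plain dict stands in here for the parsed protobuf message.
graph_def = {
    "node": [
        {"name": "x", "op": "Placeholder", "input": []},
        {"name": "w", "op": "Const", "input": []},
        {"name": "y", "op": "MatMul", "input": ["x", "w"]},
    ]
}

def list_ops(gd):
    """Full list of operation nodes: (name, op type, dependencies)."""
    return [(n["name"], n["op"], list(n["input"])) for n in gd["node"]]

for name, op, deps in list_ops(graph_def):
    print(name, op, deps)
```

The same loop over the generated Java GraphDef class gives you the node list and dependency information the question asks for, without needing the full runtime.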

Can Tensorflow read from HDFS on Mac?

I'm trying to coerce Tensorflow on OS/X to read from HDFS. The documentation
https://www.tensorflow.org/deploy/hadoop
does not clearly specify whether this is possible, and the code refers only to "posix" operating systems. The error I'm seeing when trying to use the HDFS is the following:
UnimplementedError (see above for traceback): File system scheme hdfs not implemented
[[Node: ReaderReadV2 = ReaderReadV2[_device="/job:localhost/replica:0/task:0/cpu:0"](TFRecordReaderV2, input_producer)]]
Here's what I've done up to this point:
brew installed Hadoop 2.7.2
separately compiled Hadoop 2.7.2 for the native libraries. Hadoop is installed on /usr/local/Cellar/hadoop/2.7.2/libexec on my system, and the native libraries (libhdfs.dylib) are in ~/Source/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-2.7.2/lib/native.
Edited the code at https://github.com/tensorflow/tensorflow/blob/v1.0.0/tensorflow/core/platform/hadoop/hadoop_file_system.cc#L113-L119 to read from libhdfs.dylib rather than libhdfs.so, recompiled, and reinstalled Tensorflow. (I have to admit this is pretty boneheaded, and I have no idea if it's all that's required to make this code work on Mac.)
Here is the code to reproduce.
test.sh:
set -x
export JAVA_HOME=$($(dirname $(which java | xargs readlink))/java_home)
export HADOOP_HOME=/usr/local/Cellar/hadoop/2.7.2/libexec
. $HADOOP_HOME/libexec/hadoop-config.sh
export HADOOP_HDFS_HOME=$(echo ~/Source/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-2.7.2)
export CLASSPATH=$($HADOOP_HDFS_HOME/bin/hdfs classpath --glob)
# Virtual environment with Tensorflow and necessary dependencies
. venv/bin/activate
python ./test.py
test.py:
import tensorflow as tf
_, example_bytes = tf.TFRecordReader().read(
    tf.train.string_input_producer(
        [
            "hdfs://localhost:9000/user/foo/feature_output/part-r-00000",
            "hdfs://localhost:9000/user/foo/feature_output/part-r-00001",
            "hdfs://localhost:9000/user/foo/feature_output/part-r-00002",
            "hdfs://localhost:9000/user/foo/feature_output/part-r-00003",
        ]
    )
)
with tf.Session().as_default() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    print(len(sess.run(example_bytes)))
The code path I'm seeing in the Tensorflow source seems to indicate to me that I'd receive a different error than the one above if the issue were really mac-specific, since some kind of handler is registered for the "hdfs" scheme regardless: https://github.com/tensorflow/tensorflow/blob/v1.0.0/tensorflow/core/platform/hadoop/hadoop_file_system.cc#L474 . Has anyone else succeeded in coercing Tensorflow to work with Mac? If it isn't supported, is there an easy place to patch it?
I'm also open to suggestions as to what might be a better approach. The high-level goal is to efficiently train a model in parallel, using shared parameter servers, considering that each worker will only read a subset of the data. This is readily accomplished using the local filesystem, but it's less clear how to scale beyond that. Even if I do succeed in making the code above work, the result could suffer from problems with data locality.
This thread https://github.com/tensorflow/tensorflow/issues/2218 suggests using pyspark.RDD.toLocalIterator to iterate over the data set with a placeholder in the graph. Aside from my concern about forcing each worker to iterate through the full dataset, I don't see a way to coerce Tensorflow's builtin Estimator class to accept a custom feed function along with a specified input_fn, and a custom input_fn appears necessary in order to take advantage of models like LinearClassifier (https://www.tensorflow.org/tutorials/linear) that are capable of learning from sparse, weighted features.
Any thoughts?
Did you enable HDFS support in ./configure when building? That's the error you would get if HDFS is disabled.
I think you made the correct change to make it work. Feel free to send a pull request to look for .dylib on macOS.
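For reference, on TensorFlow 1.x the HDFS question during the build setup looks roughly like this (exact wording may vary between versions):

```
$ ./configure
...
Do you wish to build TensorFlow with Hadoop File System support? [y/N] y
```

If that prompt was answered with the default N, the hdfs:// scheme handler is compiled out and you get exactly the UnimplementedError shown above.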