How to load tensorflow graph from memory address - tensorflow

I'm using the TensorFlow C++ API to load a graph from a file and execute it. Everything is working great, but I'd like to load the graph from memory rather than from a file (so that I can embed the graph into the binary for better portability). I have variables that reference both the binary data (as an unsigned char array) and the size of the data.
This is how I am currently loading my graph:
GraphDef graph_def;
ReadBinaryProto(tensorflow::Env::Default(), "./graph.pb", &graph_def);
Feels like this should be simple, but most of the discussion I can find is about the Python API. I did try looking for the source of ReadBinaryProto, but wasn't able to find it in the TensorFlow repo.

The following should work:
GraphDef graph_def;
if (!graph_def.ParseFromArray(data, len)) {
  // Handle error
}
...
This is because GraphDef is a subclass of google::protobuf::MessageLite, and thus inherits a variety of parsing methods.
Edit: Caveat: As of January 2017, the snippet above works only when the serialized graph is < 64MB, because of a default protocol buffer limit. For larger graphs, take inspiration from ReadBinaryProto's implementation, which raises the limit on a CodedInputStream before parsing.
FWIW, the code for ReadBinaryProto is in tensorflow/core/platform/env.cc

Related

Buffer deduplication procedure will be skipped when flatbuffer library is not properly loaded. (Tensorflow Lite)

Every time I convert a model to the tflite format, I receive this WARNING. I wonder whether this library would further reduce the model size; if so, I'd like to use it. But I can't find relevant information on Google, and the flatbuffer documentation doesn't seem to mention how to simply install it so that TensorFlow can invoke it.

Tensorflow Object Detection API model for use in TensorFlow.js

I am trying to use an object detection model, that was created using the TF Object Detection API, in TensorFlow.js.
I converted the model using:
tensorflowjs_converter --input_format=tf_saved_model inference_graph/saved_model inference_graph/web_model
It gets converted without any problems and loads in my javascript code.
Now I am a bit unsure about what my next steps should be.
I have to translate the Python into JavaScript, but certain areas I am unsure about.
With the Object Detection API in Python there are many steps: (1) preprocessing the image, such as converting to RGB, reshaping the numpy array, and expanding dimensions (I have an idea of how I would approach this), and (2) running inference for a single image, which I am not sure how to do in tensorflow.js.
I tried to find some general information about deploying an object detection model in tensorflow.js, but I could not find much, except with pre-trained models.
Any information about this topic would be great!
Thanks!
As mentioned by #edkeveked, you will need to perform similar input and output processing in JavaScript as is being done in Python. I can't say exactly what you will need to do, since I am not familiar with the model. However, you can find an example using a specific object detection model here:
https://github.com/vabarbosa/tfjs-model-playground/blob/master/object-detector/demo/object-detector.js
see also
https://medium.com/codait/bring-machine-learning-to-the-browser-with-tensorflow-js-part-iii-62d2b09b10a3
You would need to replicate the same preprocessing in JavaScript before feeding the image to the model. In JS, images use the RGB channels by default, so there is no need to make that conversion again.

Deserializing Tensorflow's protocol buffer MetaGraph file

My question is about protocol buffers. I understand that they serialize structured data. Is there a way to deserialize the data back into the original structured form?
For example, Tensorflow produces a MetaGraph file which stores a TensorFlow GraphDef as well as associated metadata necessary for running computation in a graph.
I have a metagraph of a GoogLeNet Inception network, and I would like to deserialize it to see the fields described in the link.
https://www.tensorflow.org/api_guides/python/meta_graph
That is a beautiful problem. Long story short: as far as I can see in the MetaGraph code, this is possible.
https://www.tensorflow.org/api_guides/python/meta_graph
In order for a Python object to be serialized to and from MetaGraphDef, the Python class must implement to_proto() and from_proto() methods.
This would mean that you need to implement those methods and define the corresponding properties in the proto files, and that should work. I never tried it myself.

Tensorflow Stored Learning

I haven't tried TensorFlow yet, but I'm still curious: how, and in what form (data type, file type), does it store the acquired learning of a machine learning program for later use?
For example, TensorFlow was used to sort cucumbers in Japan. The computer took a long time to learn, from the example images given, what good cucumbers look like. In what form was that learning saved for future use?
Because I think it would be inefficient if the program had to re-learn the images every time it needs to sort cucumbers.
Ultimately, a high-level way to think about a machine learning model is as three components: the code for the model, the data (weights) for that model, and the metadata needed to make the model run.
In TensorFlow, the code for the model is written in Python and is saved in what is known as a GraphDef. This uses a serialization format created at Google called Protobuf. Other libraries commonly use formats such as Python's native Pickle.
The main reason you write this code is to "learn" from some training data, which is ultimately a large set of matrices full of numbers. These are the "weights" of the model, and they too are stored using Protobuf, although other formats like HDF5 exist.
TensorFlow also stores metadata associated with the model: for instance, what the input should look like (e.g. an image? some text?) and the output (e.g. a class of image - cucumber 1 or 2? with scores, or without?). This too is stored in Protobuf.
At prediction time, your code loads up the graph, the weights, and the metadata, and takes some input data to produce an output. More information here.
Are you talking about the symbolic math library, or the idea of tensor flow in general? Please be more specific here.
Here are some resources that discuss the library and tensor flow
These are some tutorials
And here is some background on the field
And this is the github page
If you want a more specific answer, please give more details as to what sort of work you are interested in.
Edit: So I'm presuming your question is more related to the general field of tensor flow than any particular application. Your question still is too vague for this website, but I'll try to point you toward a few resources you might find interesting.
The TensorFlow used in image recognition often uses an ANN (Artificial Neural Network) as the object on which to act. What this means is that the TensorFlow library helps with the number crunching for the neural network, which I'm sure you can read all about with a quick Google search.
The point is that TensorFlow isn't a form of machine learning itself; it serves as a useful number-crunching library, similar to something like numpy in Python, in large-scale deep learning. You should read more here.

How to load a tensorflow checkpoint by myself without the C++ API?

I am using tensorflow 1.0.
My production environment cannot build the TensorFlow C++ library because of low gcc & glibc versions.
Is there any documentation on how to load a checkpoint or frozen graph in C++ without the API?
1. How are the network parameters saved? (embedding...)
2. How is the graph structure saved (layers, weights...)?
There is no documentation on doing this that I know of. Loading a checkpoint without the C++ runtime won't be very useful to you because you won't be able to run it.
The checkpoint by default does not include the graph structure, but if you export a metagraph you will get it in a serialized protocol buffer format. Implementing a parser for this (and the weights checkpoint) yourself sounds difficult to get right and likely to break in the future.