Does anyone know how to save a TensorFlow Keras checkpoint file into a .pb binary file for serving?
I am not 100% sure how to implement this.
Do I need to call tf.keras.backend.get_session() to save the graph and the .pb file?
In sum, I'm looking for either:
1. A method to save a .pb file via tf.keras
or
2. Any approach using tf.keras.estimator.model_to_estimator(keras_model=model)
Thanks
Here is a link on how to use Keras with TensorFlow. Basically you should get the TensorFlow session and graph from Keras (K.get_session() and K.get_session().graph) and use the TensorFlow method to save it.
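A minimal sketch of that idea for TF 1.x, freezing the graph behind a tf.keras model into a binary .pb (here `model` and the output path are illustrative):

import tensorflow as tf
from tensorflow.keras import backend as K

# `model` is assumed to be your already-built / loaded tf.keras model
sess = K.get_session()
output_node_names = [out.op.name for out in model.outputs]

# fold the variables into constants so the graph is self-contained
frozen_graph_def = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_node_names)

# write the frozen graph as a binary .pb
tf.train.write_graph(frozen_graph_def, './export', 'model.pb', as_text=False)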
What I tried so far:
1. pre-train a model with an unsupervised method in PyTorch and save the checkpoint file (using torch.save(state, filename))
2. convert the checkpoint file to ONNX format (using torch.onnx.export)
3. convert the ONNX model to a TensorFlow SavedModel (using onnx-tf)
4. try to load the variables in the saved_model folder as a checkpoint in my TensorFlow training code (using tf.train.init_from_checkpoint) for fine-tuning
But now I am getting stuck at step 4 because I notice that variables.index and variables.data#1 files are basically empty (probably because of this: https://github.com/onnx/onnx-tensorflow/issues/994)
Also, specifically, if I try to use tf.train.NewCheckpointReader to load the files and call ckpt_reader.get_variable_to_shape_map(), _CHECKPOINTABLE_OBJECT_GRAPH is empty (roughly the check shown below)
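Roughly the check I am running (the checkpoint prefix is just what onnx-tf wrote under saved_model/variables in my case):

import tensorflow as tf

ckpt_reader = tf.train.NewCheckpointReader('saved_model/variables/variables')
for name, shape in ckpt_reader.get_variable_to_shape_map().items():
    print(name, shape)   # only _CHECKPOINTABLE_OBJECT_GRAPH comes back, and it is empty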
Any suggestions/experience are appreciated :-)
What would be the easiest way to convert a .pb or a .h5 model to a .meta file?
I have 2 programs: one generates a model in Keras, and one loads the same model in TensorFlow 1.0. Can anyone help me with converting the Keras model to a MetaGraphDef and getting all trainable variables with their weights?
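A sketch of the direction I am considering (TF 1.x; 'name.h5' and the output prefix are just illustrative): load the .h5 with tf.keras and save the backing session with tf.train.Saver, which should write the .meta/.index/.data/checkpoint files.

import tensorflow as tf

model = tf.keras.models.load_model('name.h5')   # the Keras-saved model
sess = tf.keras.backend.get_session()           # session holding the weights
saver = tf.train.Saver()
saver.save(sess, './exported/model')            # writes model.meta, model.index, model.data-*, checkpoint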
Suppose I have a pre-trained model stored in a Tensorflow checkpoint. I'd like to convert it into a Keras model. I can load the checkpoint into a TF session alright but that's where I get stuck.
I think it's impossible to create a Keras model directly from a TF checkpoint, but you can copy its weights into an already-created Keras model.
Check out this: https://github.com/yuyang-huang/keras-inception-resnet-v2
extract_weights.py saves the TF weights to NumPy arrays, while load_weights.py loads the .npy files into the Keras model.
For more reference, this is how I implemented it: https://github.com/DableUTeeF/keras-efficientnet/tree/master/keras_efficientnet
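The core idea, as a minimal sketch (not the exact code from those repos; the checkpoint path and layer/variable names are made up):

import tensorflow as tf

reader = tf.train.NewCheckpointReader('path/to/model.ckpt')   # hypothetical checkpoint prefix
kernel = reader.get_tensor('conv1/kernel')                    # hypothetical variable names
bias = reader.get_tensor('conv1/bias')

# `keras_model` is the Keras model you built yourself with the matching architecture
keras_model.get_layer('conv1').set_weights([kernel, bias])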
I used Keras to build a model and trained it. Then I saved the model as an .h5 file, i.e. model.save('name.h5'). Now I want to reload the model in TensorFlow such that I have access to a .meta file, for example so I can import the computational graph from it, i.e. tf.train.import_meta_graph('name_of_the_file.meta').
So, the question is how to convert .h5 file of Keras to the following four files of TensorFlow:
.meta
checkpoint
.data-00000-of-00001
.index
You can use third-party packages, for example keras_to_tensorflow:
keras_to_tensorflow: General code to convert a trained keras model into an inference tensorflow model
The conversion can be done by
python3 keras_to_tensorflow.py -input_model_file model.h5
TensorFlow 2.x will do that automatically. The function you are using to save is:
save(
filepath,
overwrite=True,
include_optimizer=True,
save_format=None
)
The save_format argument lets you choose either 'h5' or 'tf'. However, this is not implemented for TensorFlow 1.x yet (and probably never will be). From the docs:
save_format: Either 'tf' or 'h5', indicating whether to save the model
to Tensorflow SavedModel or HDF5. The default is currently 'h5', but
will switch to 'tf' in TensorFlow 2.0. The 'tf' option is currently
disabled (use tf.keras.experimental.export_saved_model instead).
You can do as it says and use tf.keras.experimental.export_saved_model, but it will still not create the .meta file.
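For completeness, a short TF 2.x sketch of the two save formats (paths illustrative); note that neither produces a .meta file:

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')

model.save('exported_model', save_format='tf')   # SavedModel: saved_model.pb + variables/
model.save('model.h5', save_format='h5')         # single HDF5 file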
I want to train my own Word2Vec model for my text corpus. I can get the code from TensorFlow's tutorial. What I don't know is how to save this model so I can use it for CNN text classification later. Should I use pickle to save it and then read it later?
No, pickling is not the way to save a model in the case of TensorFlow.
TensorFlow provides TensorFlow Serving for serving models exported as protocol buffers. The way to save the model would be to save the TensorFlow session as:
saver.save(sess, 'my_test_model', global_step=1000)
Here's the link for the complete answer:
Tensorflow: how to save/restore a model?
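A slightly fuller sketch around that call (TF 1.x; the embedding variable is just a stand-in for whatever the word2vec tutorial builds, and 'my_test_model' is an illustrative checkpoint prefix):

import tensorflow as tf

embeddings = tf.Variable(tf.random_uniform([10000, 128], -1.0, 1.0), name='embeddings')
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... run the word2vec training loop here ...
    saver.save(sess, 'my_test_model', global_step=1000)   # writes .meta, .index, .data-* and checkpoint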
You can use pickle to save it to disk. Then when you are creating the CNN model, load the saved word embedding table and use it to initialize the TensorFlow variable that holds the word embeddings for your CNN classifier.
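A hedged sketch of that approach, assuming the trained embedding matrix from the word2vec code is a NumPy array named final_embeddings (as in the TensorFlow tutorial):

import pickle
import numpy as np
import tensorflow as tf

# after word2vec training: dump the learned (vocab_size, embed_dim) matrix
with open('embeddings.pkl', 'wb') as f:
    pickle.dump(final_embeddings, f)

# later, in the CNN text-classification code: reload it and use it to
# initialize the variable that holds the word embeddings
with open('embeddings.pkl', 'rb') as f:
    pretrained = pickle.load(f)

word_embeddings = tf.Variable(pretrained.astype(np.float32), name='word_embeddings')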