Assume a model trained in TensorFlow can be converted for TensorFlow.js in two ways:
1) Save the model in TensorFlow with tf.saved_model.simple_save, then load it in TensorFlow.js with tf.loadFrozenModel and predict with model.predict.
2) Save the model in TensorFlow with Keras (Sequential), then load it in TensorFlow.js with tf.loadModel and predict with model.predict.
If the same model is trained in TensorFlow but saved with these two different methods, and then loaded in TensorFlow.js with the corresponding load method, will there be a difference in prediction time?
If you have the same architecture in both TensorFlow.js and Keras, the inference time in TensorFlow.js will be essentially the same: the TensorFlow.js converter just reconstructs a graph from your topology and weights, so in both cases the processing time is roughly the same.
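For reference, here is a minimal sketch of the two save paths described in the question, using the TF 1.x API; the layer sizes, tensor names, and export paths are illustrative only and not taken from the original code:

import tensorflow as tf

# --- Method 1: SavedModel via tf.saved_model.simple_save (later converted and loaded with tf.loadFrozenModel) ---
tf.reset_default_graph()
x = tf.placeholder(tf.float32, shape=[None, 4], name='x')
y = tf.layers.dense(x, 2, name='y')
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    tf.saved_model.simple_save(sess, 'export/saved_model',
                               inputs={'x': x}, outputs={'y': y})

# --- Method 2: Keras HDF5 file (later converted and loaded with tf.loadModel) ---
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
model.compile(optimizer='sgd', loss='mse')
model.save('model.h5')

In both cases the converter only serializes the topology and the weights, which is why the loaded graphs behave the same at inference time.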
Related
I am using a custom Keras model in a Databricks environment.
For a custom Keras model, model.save('model.h5') does not work because the custom model is not serializable; instead it is recommended to use model.save_weights(path) as an alternative.
model.save_weights(pathDirectory) works and yields three files in pathDirectory: checkpoint, .data-00000-of-00001, and .index.
For loading the weights, the following mechanism works fine:
model = Model()
model.load_weights(path)
But I want to continue training my model from the pretrained weights I just saved, i.e. save the model weights and then resume training from them afterwards.
However, when I load the model weights and run the training loop, I get this error: TypeError: 'CheckpointLoadStatus' object is not callable.
After much research, I have found a workaround:
we can also save the model using
model.save("model.h5") and read the saved model back in Databricks.
model.h5 does not work for customized models, but it works for standard models.
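For reference, the usual resume-training pattern looks roughly like the sketch below; build_custom_model, the checkpoint path, and the dataset are hypothetical placeholders. A likely cause of the 'CheckpointLoadStatus' error is treating the return value of load_weights as the model, since load_weights returns a status object rather than the model itself.

import tensorflow as tf

# Rebuild the same architecture, then restore the saved weights and keep training.
model = build_custom_model()                         # hypothetical builder for the custom model

# load_weights is called for its side effect only; its return value is a
# CheckpointLoadStatus, NOT the model, so don't reassign it or call it.
model.load_weights('/dbfs/tmp/checkpoints/weights')  # hypothetical path

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(train_dataset, epochs=5)                   # hypothetical dataset; training resumes from the restored weights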
Is there a way to save a trained Gensim Word2Vec model as a SavedModel using TF 2.0's tf.saved_model.save? In other words, how can I save trained embedding vectors with a SavedModel signature so they can be used with TensorFlow 2.0? The following steps, unsurprisingly, do not work:
model = gensim.models.Word2Vec(...)
model.init_sims(..)
model.train(..)
model.save(..)
module = gensim.models.KeyedVectors.load_word2vec(...)
tf.saved_model.save(
    module,
    export_dir
)
EDIT:
This example helped me figure out how to do it: https://keras.io/examples/nlp/pretrained_word_embeddings/
Gensim does not use TensorFlow and it has its own methods for loading and saving models.
You would need to convert the Gensim embeddings into a TensorFlow model, which only makes sense if you plan to further use your embeddings within TensorFlow and possibly fine-tune them for your task.
A Gensim Word2Vec model corresponds to two steps in TensorFlow:
Vocabulary lookup: a table that assigns indices to tokens.
An embedding lookup layer that picks up the actual embeddings for those indices.
Then, you can save it as any other TensorFlow model.
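As a rough sketch of those two steps (assuming gensim 4.x, where KeyedVectors exposes index_to_key and vectors, and TF 2.6+ for tf.keras.layers.StringLookup; the paths are placeholders):

import numpy as np
import tensorflow as tf
from gensim.models import KeyedVectors

kv = KeyedVectors.load("word2vec.kv")         # hypothetical path to the trained vectors

vocab = list(kv.index_to_key)                 # gensim 4.x attribute
vectors = np.asarray(kv.vectors)              # shape: (vocab_size, dim)

# Step 1: vocabulary lookup (token -> index); index 0 is reserved for OOV tokens.
lookup = tf.keras.layers.StringLookup(vocabulary=vocab, mask_token=None)

# Step 2: embedding lookup (index -> vector); prepend a zero row for the OOV index.
embedding = tf.keras.layers.Embedding(
    input_dim=len(vocab) + 1,
    output_dim=vectors.shape[1],
    embeddings_initializer=tf.keras.initializers.Constant(
        np.vstack([np.zeros((1, vectors.shape[1])), vectors])),
    trainable=False)

tokens = tf.keras.Input(shape=(None,), dtype=tf.string)
module = tf.keras.Model(tokens, embedding(lookup(tokens)))

tf.saved_model.save(module, "export_dir")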
I have a pretrained Keras model (model.h5), and I want to prune it with TensorFlow's magnitude-based weight pruning with Keras. One curious thing is that my pretrained model is built with the original Keras package, i.e. not with tensorflow.keras. The magnitude-based weight pruning with Keras example shows how to do it with a tensorflow.keras model. Can I use their tool to prune my original-Keras pretrained model?
Inside their weight pruning toolkit there are two ways: one prunes the model layer by layer while training, and the second prunes the whole model. I tried the second way, pruning the whole pretrained model; below is my code.
For my original pretrained model, I load the weights from model.h5 and can call model.summary(), but after I apply prune_low_magnitude() none of the model's methods can be called, including model.summary(), and I get an error like AttributeError: 'NoneType' object has no attribute 'summary'.
import numpy as np
import tensorflow_model_optimization as tfm  # alias assumed by the code below

model = get_training_model(weight_decay)
model.load_weights('model/keras/model.h5')
model.summary()

epochs = 1
end_step = np.ceil(1.0 * 100 / 2).astype(np.int32) * epochs
print(end_step)

new_pruning_params = {
    'pruning_schedule': tfm.sparsity.keras.PolynomialDecay(initial_sparsity=0.1,
                                                           final_sparsity=0.90,
                                                           begin_step=40,
                                                           end_step=end_step,
                                                           frequency=30)
}

new_pruned_model = tfm.sparsity.keras.prune_low_magnitude(model, **new_pruning_params)
print(new_pruned_model.summary())
I hope this answer still helps: I recently had the same issue where prune_low_magnitude() returned an object of type 'None', and new_pruned_model.compile() would not work either.
The model I had been using was a pretrained model that could be imported from tensorflow.python.keras.applications.
For me this worked:
(0) Import the libraries:
from tensorflow_model_optimization.python.core.api.sparsity import keras as sparsity
from tensorflow.python.keras.applications.<network_type> import <network_type>
(1) Define the pretrained model architecture
# define model architecture
loaded_model = <model_type>()
loaded_model.summary()
(2) Compile the model architecture and load the pretrained weights
# compile the model (SGD, learn_rate and momentum are assumed to be defined/imported elsewhere,
# e.g. SGD from tf.keras.optimizers)
opt = SGD(lr=learn_rate, momentum=momentum)
loaded_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
loaded_model.load_weights('weight_file.h5')
(3) set pruning parameters and assign pruning schedule
# set pruning parameters
pruning_params = {
    'pruning_schedule': sparsity.PolynomialDecay(...)
}
# assign pruning schedule
model_pruned = sparsity.prune_low_magnitude(loaded_model, **pruning_params)
(4) compile model and show summary
# compile model
model_pruned.compile(
    loss=tf.keras.losses.categorical_crossentropy,
    optimizer='SGD',
    metrics=['accuracy'])
model_pruned.summary()
It was important to import the libraries specifically from tensorflow.python.keras and to use that Keras model from the TensorFlow library.
Also, it was important to use the TensorFlow beta release (pip install tensorflow==2.0.0b1); otherwise prune_low_magnitude would still return an object of type 'None'.
I am using PyCharm 2019.1.3 (x64) as IDE. Here is the link that led me to this solution: https://github.com/tensorflow/model-optimization/issues/12#issuecomment-526338458
I am using TensorFlow 1.3.0 to train a CNN classification model. However, I need access to the prelogits layer to evaluate my method (i.e. although this is cast as a classification problem, the method is not about classification but about extracting CNN features, i.e. producing a point in an N-dimensional vector space for an input test image).
I am using both the Dataset API (with TFRecord files) and the Estimator API to train the model. However, I don't see how I can access or return the prelogits value through the Estimator API, i.e. estimator.train(), .evaluate(), or .predict(), since model_fn() needs to return a specific tf.estimator.EstimatorSpec object.
Previously (i.e. using the standard sess = tf.Session() approach) I could train the model and access the prelogits layer while training (or by loading the model after training), feed the network a specific input, and get that layer's output with sess.run(specific_layer), as long as the layer was named specific_layer.
I have tried to use the prediction output of EstimatorSpec but it did not work. Any ideas/suggestions?
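One common approach is to return the prelogits tensor inside the predictions dict of the EstimatorSpec, so that estimator.predict() yields it per example. Below is a sketch only, not tested against the asker's code; build_cnn, the feature key, and the parameter names are hypothetical:

import tensorflow as tf

def model_fn(features, labels, mode, params):
    # build_cnn is a hypothetical helper returning the layer just before the logits
    prelogits = build_cnn(features['image'])
    logits = tf.layers.dense(prelogits, params['n_classes'], name='logits')

    if mode == tf.estimator.ModeKeys.PREDICT:
        # Expose prelogits alongside the class prediction; estimator.predict()
        # will then yield a dict with both entries for every input example.
        predictions = {
            'classes': tf.argmax(logits, axis=1),
            'prelogits': prelogits,
        }
        return tf.estimator.EstimatorSpec(mode, predictions=predictions)

    # ... TRAIN and EVAL branches (loss, train_op, metrics) go here as usual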
I quantized an inception-resnet-v2 model following https://www.tensorflow.org/performance/quantization#how_can_you_quantize_your_models.
The frozen graph (the input for quantization) is 224.6 MB and the quantized graph is 58.6 MB. I ran an accuracy test on a certain dataset: for the frozen graph the accuracy is 97.4%, whereas for the quantized graph it is 0%.
Is there a different way to quantize the model for inception-resnet versions, or is quantization not supported at all for inception-resnet models?
I think they transitioned from quantize_graph to graph_transforms. Try using this:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms
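If you use the Python wrapper rather than the bazel-built CLI, the call looks roughly like the sketch below for TF 1.x; the graph paths and node names are placeholders that must be replaced with your model's actual input/output nodes:

import tensorflow as tf
from tensorflow.tools.graph_transforms import TransformGraph

# Read the frozen graph that was the input to quantization (path is a placeholder).
with tf.gfile.GFile('frozen_inception_resnet_v2.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

transformed_graph_def = TransformGraph(
    graph_def,
    ['input'],                                    # input node name(s) of your graph
    ['InceptionResnetV2/Logits/Predictions'],     # output node name(s) of your graph
    ['quantize_weights'])                         # further transforms can be appended here

with tf.gfile.GFile('quantized_graph.pb', 'wb') as f:
    f.write(transformed_graph_def.SerializeToString())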
And what did you use for the input nodes/output nodes when testing?