how to properly train TensorFlow on one machine and evaluate on another? - tensorflow

I'm training a TensorFlow (1.2) model on one machine and attempting to evaluate it on another. Everything works fine when I stay local to one machine.
I am not using placeholders and feed-dict's to get data to the model but rather TF file queues and batch generators. I suspect with placeholders this would be much easier but I am trying to make the TF batch generator machinery work.
In my evaluation code I have lines like:
saver = tf.train.Saver()
ckpt = tf.train.get_checkpoint_state(os.path.dirname(ckpt_dir))
if ckpt and ckpt.model_checkpoint_path:
    saver.restore(sess, ckpt.model_checkpoint_path)
This produces errors like:
2017-08-16 12:29:06.387435: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Unsuccessful TensorSliceReader constructor: Failed to get matching files on /data/perdue/minerva/tensorflow/models/11/20170816/checkpoints-20: Not found: /data/perdue/minerva/tensorflow/models/11/20170816
The referenced directory (/data/...) exists on my training machine but not the evaluation machine. I have tried things like
saver = tf.train.import_meta_graph(
    '/local-path/checkpoints-XXX.meta',
    clear_devices=True
)
saver.restore(
    sess, '/local-path/checkpoints-XXX',
)
but this produces a different error:
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value train_file_queue/limit_epochs/epochs
or, if I explicitly call the initializer functions immediately after the restore,
AttributeError: 'Tensor' object has no attribute 'initializer'
Here, train_file_queue/limit_epochs/epochs is an element of the training graph that I would like the evaluation function to ignore (I have another, new element test_file_queue that is pointing at a different file queue with the evaluation data files in it).
I think in the second case, when I'm calling the initializers right after the restore, there is something in the local variables that doesn't work quite like a "normal" Tensor, but I'm not sure exactly what the issue is.
If I just use a generic Saver and restore, TF does the right thing on the original machine: it restores the model parameters and then uses my new file queue for evaluation. But I can't be restricted to that machine; I need to be able to evaluate the model on other machines.
I've also tried freezing a protobuf and a few other options and there are always difficulties associated with the fact that I need to use file queues as the most upstream inputs.
What is the proper way to train using TensorFlow's file queues and batch generators and then deploy the model on a different machine / in a different environment? I suspect if I were using feed-dict's to get data to the graph this would be fairly simple, but it isn't as clear when using the built in file queues and batch generators.
Thanks for any comments or suggestions!

At least part of this dilemma was addressed in TF 1.2 / 1.3. There is a new flag on the Saver() constructor:
saver = tf.train.Saver(save_relative_paths=True)
With this flag set, you can save the checkpoint directory, move it to another machine, and use it to restore() a model without errors about nonexistent paths (the paths from the old machine where training was performed).
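A minimal sketch of how the flag fits into the workflow (the directory names, sess, and step below are placeholders, not my real paths or variables): save with relative paths on the training machine, copy the checkpoint directory, then point get_checkpoint_state at the local copy on the evaluation machine.
# Training machine (placeholder paths and variables):
saver = tf.train.Saver(save_relative_paths=True)
saver.save(sess, '/train-machine/ckpt_dir/checkpoints', global_step=step)

# Evaluation machine, after copying ckpt_dir over:
ckpt = tf.train.get_checkpoint_state('/eval-machine/ckpt_dir')
if ckpt and ckpt.model_checkpoint_path:
    saver.restore(sess, ckpt.model_checkpoint_path)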
It isn't clear that my use of the API is really idiomatic in this case, but at least the code works and I can move trained models from one machine to another.

Related

I need to upload weights that were saved on tensorflow 1.x to an identical model in tensorflow 2.x

So I have an old model with tensorflow 1.x code, and it includes too much stuff I don't need. All I need is the model itself, and I created the new model in a way I'm almost certain is identical to the previous one (I checked a bunch of stuff).
I have the .data, the .index and a .meta file, and I have tried many different things; it says that "a few things weren't saved" and then lists all of the weights (but not really the entire list, because when the weights are too big it just adds three dots (...)).
I would LOVE to have someone tell me how I can use that in my new model
I tried:
model.load_weights
I tried:
tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.Session()
saver = tf.compat.v1.train.import_meta_graph('checkpoints/pix2pix-60.meta')
saver.restore( "checkpoints/pix2pix-60")
I tried:
tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.Session()
saver = tf.compat.v1.train.Checkpoint(model=gen)
saver.restore(tf.train.latest_checkpoint('checkpoints')).assert_consumed()
I tried:
ck_path = tf.train.latest_checkpoint('checkpoints')
gen.load_weights(ck_path)
I tried:
from tensorflow.python.training import checkpoint_utils as cp
ckpt = cp.load_checkpoint('checkpoints/pix2pix--60')
and then tried to see what I can do with that
and I honestly think I tried a bunch of other things too
I honestly won't mind if someone can even just tell me how I can read the .index or .data files, so that I can copy the weights out and deal with it from there
I would again really love some help,
Thanks!
It seems that your TF1.x model is saved in ckpt format, and to restore a ckpt model you need to get the graph before loading the weights.
To convert it to a TF2.x model, you may instantiate the original model and then save it in the recommended saved_model format using the 2.x API.
You can continue with your second attempt: use compat.v1 to instantiate a default Session, load the graph from the meta file, then load the weights; after this, your Session will contain your graph and the loaded weights.
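A minimal sketch of that load step, assuming the checkpoint prefix from the question ('checkpoints/pix2pix-60'); note that restore() takes the session as its first argument:
import tensorflow as tf

tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.Session()
# Rebuild the graph structure from the .meta file.
saver = tf.compat.v1.train.import_meta_graph('checkpoints/pix2pix-60.meta')
# Restore the weights into this session (the session goes first).
saver.restore(sess, 'checkpoints/pix2pix-60')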
To convert this to a 2.x model, you need to get the input and output tensors from the graph:
# you have loaded the graph and weights into sess
with sess.as_default():
    g = sess.graph
    # assuming that your input/output tensor names are "input:0" and "output:0"
    input_tensor = g.get_tensor_by_name("input:0")
    output_tensor = g.get_tensor_by_name("output:0")
    # then use the tf2.x Keras API to save a saved_model format model
    model = tf.keras.Model(input_tensor, output_tensor, name="tf2_model")
    model.save("your_saved_dir")
A saved_model stores the whole graph together with its weights; you can simply use
model = tf.saved_model.load("your_model_dir")
to load the model for use.
OK, so I think I figured it out, although it was quite tedious.
In the tensorflow 1.x model all variables were created with tf.name_scope, and in tensorflow 2.x there is no such thing, so the variable names didn't match. I pretty much had to manually change the names so they would fit, and then it really did load the weights, like this:
checkpoint = tf.train.Checkpoint(model=gen)
checkpoint.restore('checkpoints/pix2pix--60').assert_consumed()
this also seemed to work:
gen.load_weights('checkpoints/pix2pix--60')
However, something is still not working correctly, since the output is not what I am expecting (i.e. not what the output looks like in the tensorflow 1.x model).
It may have something to do with the batch_normalization weights that aren't being loaded, but I checked, and in my current tf 2.x model they are not trainable and are exactly equal to the weights that aren't being loaded.
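For reference, a quick way to see exactly which variable names and shapes are stored in a checkpoint (handy for hunting down name mismatches like this, and also one answer to the question above about reading the .index/.data files) is a small sketch like the following, using the checkpoint prefix from earlier:
import tensorflow as tf

# Print every variable name and shape stored under the checkpoint prefix.
for name, shape in tf.train.list_variables('checkpoints/pix2pix--60'):
    print(name, shape)

# Read the actual values of a single variable by name.
reader = tf.train.load_checkpoint('checkpoints/pix2pix--60')
# values = reader.get_tensor(some_name_printed_above)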
Another weird thing is that when I do gen.predict(x) it gives me a different outcome each time, so I guess the weights aren't being frozen or something...
So I have yet to understand what went wrong previously, but I do know that there have been many API changes from tf1 to tf2, including default parameters and more. What I eventually did, which worked perfectly, was this:
tf_upgrade_v2 \
  --intree my_project/ \
  --outtree my_project_v2/ \
  --reportfile report.txt
as explained here
you just put all the code you want to convert in a folder called my_project, and it creates a folder named my_project_v2 with the tf1 code converted to tf2

How to use feature_column v2 in Tensorflow (TF-Ranking)

I'm using TF-Ranking to train a recommendation engine. I have encountered a problem that seems to be a version incompatibility issue concerning tf.feature_column API.
The short version of my question is: what is a v2 feature column (TF 2.0?) (see this for instance), and how can I ensure that my feature columns are treated as v2 while I'm still using TF 1.14?
Here are the details:
I'm unable to shorten my code sufficiently to provide a reproducible example. But I will try to describe the problem in words.
TF Version: 1.14
OS: Ubuntu 18.04
I initially had two features in my model, user and item, both sparse categorical features, each wrapped in its own tf.feature_column.embedding_column. I was able to use the Estimator's train_and_evaluate method and export the model for serving.
Then I added a new feature, curr_item, which is only present during prediction (as a context feature). It shares its embeddings with item, so now I have a tf.feature_column.shared_embedding_columns that wraps both item and curr_item.
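A minimal sketch of the column setup just described (the bucket sizes and embedding dimension below are made up for illustration; the real columns may be built differently):
import tensorflow as tf

# Hypothetical categorical columns; bucket sizes are placeholders.
user_col = tf.feature_column.categorical_column_with_hash_bucket("user", hash_bucket_size=10000)
item_col = tf.feature_column.categorical_column_with_hash_bucket("item", hash_bucket_size=50000)
curr_item_col = tf.feature_column.categorical_column_with_hash_bucket("curr_item", hash_bucket_size=50000)

# user gets its own embedding table.
user_emb = tf.feature_column.embedding_column(user_col, dimension=32)

# item and curr_item share a single embedding table:
# one embedding column is returned per input, all backed by the same weights.
shared_embs = tf.feature_column.shared_embedding_columns(
    [item_col, curr_item_col], dimension=32)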
Now calling train_and_evaluate results in the following error (shortened messages):
ValueError: Could not load all requested variables from checkpoint. Please make sure your model_fn does not expect variables that were not saved in the checkpoint.
Key input_layer/user_embedding/embedding_weights not found in checkpoint
Note that calling the train method alone works fine. My understanding is that once it gets to evaluation, it tries to load the variables from the checkpoint, but that variable doesn't exist. I did a little debugging and found the reason:
When encode_listwise_features is called during training (which in turn calls encode_features) all features (user and item) are "V2" (not sure what that means) and so the following if statement holds:
https://github.com/tensorflow/ranking/blob/31fc134816cc4974a46a11e7bb2df0066d0a88f0/tensorflow_ranking/python/feature.py#L92
and both variables are named with an encoding_layer prefix (scope name?):
encoding_layer/user_embedding/embedding_weights
encoding_layer/item_embedding/embedding_weights
But when I call the same function for all three features (I'm a little confused whether this happens in eval or predict mode), some of them are not "V2" and we end up in the else branch of the above condition, which calls input_layer directly, and the variables are named with an input_layer prefix. Now TF is trying to restore
input_layer/user_embedding/embedding_weights
from the check-point, but that name doesn't exist in the checkpoint, because it was called
encoding_layer/user_embedding/embedding_weights
in training.
So:
1) How can I ensure that all my features are treated as v2 at all stages? I tried using tf.compat.v2.feature_column but that didn't help. There is already a ToDo note above that if statement for this.
2) Can encode_features be modified to avoid this situation, e.g. by raising an exception with a helpful message?

Can't save/export and load a keras model that uses eager execution

I'm following the RNN text-generation tutorial with eager execution pretty much line for line. I've trained the model on my own dataset and have saved a low-loss checkpoint. I'm able to load the weights and generate text, but I want to export/save the model so that I can learn how to deploy one using Flask. However, I can't figure out how. The version I'm using is '1.14.0-rc1'.
The tutorial: https://www.tensorflow.org/tutorials/sequences/text_generation
I have been able to save the model as an HDF5 file, but I cannot load it. I've also tried disabling eager execution, but that causes problems with running the code later on. I have tried the following and a few more snippets, but those led nowhere as well:
new_model = keras.models.load_model("/content/gdrive/My Drive/ColabNotebooks/ckpt4/my_model.h5")
However, I get
RuntimeError: tf.placeholder() is not compatible with eager execution.
Lastly I found this in another post and tried it as well but was met with another error:
tf.saved_model.save(model, "/content/gdrive/My Drive/Colab Notebooks/ckpt4/my_model.h5")
error:
AssertionError: Tried to export a function which references untracked object Tensor("StatefulPartitionedCall/args_2:0", shape=(), dtype=resource). TensorFlow objects (e.g. tf.Variable) captured by functions must be tracked by assigning them to an attribute of a tracked object or assigned to an attribute of the main object directly.

Estimator's model_fn includes params argument, but params are not passed to Estimator

I'm trying to run Object Detection API locally.
I believe I have everything set up as described in the TensorFlow Object Detection API documents. However, when I try to run model_main.py, this warning shows up and the model doesn't seem to train. (I can't really tell whether the model is training or not, because the process isn't terminated, but no further logs appear.)
WARNING:tensorflow:Estimator's model_fn (.model_fn at 0x0000024BDBB3D158>) includes
params argument, but params are not passed to Estimator.
The command I'm running is:
python tensorflow-models/research/object_detection/model_main.py \
--model_dir=training \
--pipeline_config_path=ssd_mobilenet_v1_coco.config \
--checkpoint_dir=ssd_mobilenet_v1_coco_2017_11_17/model.ckpt \
--num_train_steps=2000 \
--num_eval_steps=200 \
--alsologtostderr
What could be causing this warning?
Why would the code seem stuck?
Please help!
I ran into the same problem, and I found that this warning has nothing to do with the model not working; the model can train fine while this warning is shown.
My mistake was that I had misunderstood this line in the running_locally.md document:
"${MODEL_DIR} points to the directory in which training checkpoints and events will be written to"
I changed the MODEL_DIR to the {project directory}/models/model where the structure of the directory is:
+ data
  - label_map file
  - train TFRecord file
  - eval TFRecord file
+ models
  + model
    - pipeline config file
    + train
    + eval
And it worked. Hoping this can help you.
Edit: while this may work, it only does so because this model_dir does not contain any saved checkpoint files; if you stop the training after some checkpoint files have been saved and restart, the training would still be skipped. The doc specifies a recommended directory structure, but it is not necessary to use exactly that structure, since all paths to TFRecords and pretrained checkpoints can be configured in the config file.
The actual reason is that when model_dir contains checkpoint files which have already reached NUM_TRAIN_STEPS, the script assumes the training is finished and exits. Removing the checkpoint files and restarting the training will work.
In my case, I had the same error because the folder containing my .ckpt files also held the checkpoint of the pre-trained model.
Removing that file, which came from inside the .tar.gz archive, made the training work.
I also received this error, and it was because I had previously trained a model on a different dataset/model/config file, and the previous ckpt files still existed in the directory I was working in; moving the old ckpt training data to a different directory fixed the issue.
Your script seems fine.
One thing we should note is that the new model_main.py does not print the training log (training step, learning rate, loss, and so on). It only prints the evaluation result after one or more epochs, which can take a long time.
So "the process isn't terminated, but no further logs appear" is normal. You can confirm it is running by using "nvidia-smi" to check the GPU, or by checking TensorBoard.
I also encountered this warning message. I checked nvidia-smi and it seemed training hadn't started. I also tried re-organizing the output directory, but that didn't work out. After checking Configuring the Object Detection Training Pipeline (tensorflow official docs), I found it was a configuration problem. I solved it by adding load_all_detection_checkpoint_vars: true.
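For context, that option goes in the train_config block of the pipeline .config file. A minimal sketch, with the checkpoint path and step count taken from the question's command and everything else omitted (your config will have many more fields):
train_config: {
  # Pre-trained checkpoint from the question; adjust to your own path.
  fine_tune_checkpoint: "ssd_mobilenet_v1_coco_2017_11_17/model.ckpt"
  from_detection_checkpoint: true
  # The setting mentioned above.
  load_all_detection_checkpoint_vars: true
  num_steps: 2000
}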

Using saved model for prediction in tensorflow

I use this code to restore my model, but I don't know how to predict after restoring it. Which function can I use? I'm a beginner in tensorflow and I have no idea which parameters or functions were saved.
Restoring from the meta graph:
sess = tf.Session()
saver = tf.train.import_meta_graph("/home/MachineLearning/model.ckpt.meta")
saver.restore(sess,tf.train.latest_checkpoint('./'))
print("Model restored with success ")
x_predict,y_predict= load_svmlight_file('/MachineLearning/to_predict.csv')
x_predict = x_predict.toarray()
sess.run([] ,feed_dict ) #i don't know how to use predict function
These are the results:
$python predict.py
Model restored with success
Traceback (most recent call last):
File "predict.py", line 23, in <module>
sess.run([] ,feed_dict )
NameError: name 'feed_dict' is not defined
You're almost there. TensorFlow is simply a math library. Your graph is a collection of math operations with their associated dependencies (a DAG, specifically).
When you loaded the graph and associated variables (weights) you loaded all the definitions. Now you need to ask tensorflow to compute some value in the graph. There are lots of values it could compute, the one you want is often named logits (a typical name for the output layer of a neural network). But note that it could be named anything (especially if this isn't a neural network model), you need to understand the model. You might also want to compute an operation named accuracy which is defined to compute the accuracy of a particular batch of inputs (again depends on your model).
Note that you will need to provide tensorflow with whatever it needs to perform these computations. There is generally a placeholder where you pass in your data (and during training a placeholder for your labels which you don't need for prediction because none of the operations you will ask tensorflow to compute depend on it).
But you will need to get references to these various operations (logits, and accuracy) and placeholders (x is a typical name). Since you loaded your graph from disk you don't have the references (note that an alternative way of loading the model is to re-run the code that builds the model, which gives you easy access to the references you need).
In order to get the right references you can look them up by name. Here's how you would get a list of all the operations:
List of tensor names in graph in Tensorflow
Then to get a specific OP (operation) by name:
How to get a tensorflow op by name?
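For instance, a quick way to dump all operation names in the restored graph so you can spot the ones you need (standard TF 1.x graph inspection, nothing specific to this model):
for op in tf.get_default_graph().get_operations():
    print(op.name)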
So you'll have something like this:
logits = tf.get_default_graph().get_tensor_by_name("logits:0")
x = tf.get_default_graph().get_tensor_by_name("x:0")
accuracy = tf.get_default_graph().get_tensor_by_name("accuracy:0")
Note that the :0 is the output index appended to an op's name to identify a specific tensor (get_tensor_by_name expects names with :0, while get_operation_by_name expects the bare op name). Now you have all the references you need, and you can use sess.run to perform a specific computation, providing the input data and the tensors you'd like to have computed:
sess.run([logits, accuracy], feed_dict={x:your_input_data_in_numpy_format})
The names of these elements will vary in your implementation, I've used the most common names. If they weren't given pretty names it'll be hard to identify them and you'll need to look through the original code that produced the graph. In fact if they weren't named properly looking them up by name is so painful that it's probably better to just re-run the code that produced the original graph rather than import the meta graph. Notice that saver.restore only restores the actual data, import_meta_graph is the optional piece which can be replaced by simply re-building the graph programmatically.