TensorBoard callback in model.evaluate() - tensorflow

According to the official Keras documentation, the TensorBoard callback can be used with Model.evaluate:
When used in Model.evaluate, in addition to epoch summaries, there will be a summary that records evaluation metrics vs Model.optimizer.iterations written. The metric names will be prepended with evaluation, with Model.optimizer.iterations being the step in the visualized TensorBoard.
I would assume that passing a TensorBoard callback to Model.evaluate would create a folder eval that contains the log files. But nothing happens when I add the callback:
Model.evaluate(xTest, yTest, callbacks=[TensorBoard(log_dir="./Logs/")])
The TensorBoard callback works just fine in Model.fit.
Does anyone know how to visualize the model evaluation with TensorBoard?

Could you try again, specifying a separate evaluate folder inside the logs folder to store the model.evaluate() metrics logs:
model.evaluate(x_test, y_test, callbacks=[TensorBoard(log_dir="logs/evaluate/")])
and then visualize the model.evaluate() results in TensorBoard:
%tensorboard --logdir logs/evaluate/
Please find the working gist for the same. Thank you.
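For completeness, a minimal end-to-end sketch of that suggestion (assuming TF 2.x; the toy model, random data, and log paths are placeholder assumptions, not the asker's actual setup):
import numpy as np
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard

# Hypothetical toy data standing in for xTest/yTest from the question.
x_train, y_train = np.random.rand(256, 10), np.random.randint(0, 2, 256)
x_test, y_test = np.random.rand(64, 10), np.random.randint(0, 2, 64)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Separate log directories keep the training and evaluation summaries apart.
model.fit(x_train, y_train, epochs=3,
          callbacks=[TensorBoard(log_dir="logs/fit/")])
model.evaluate(x_test, y_test,
               callbacks=[TensorBoard(log_dir="logs/evaluate/")])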

Related

Evaluation loss in tensorboard is always 0 when training music_vae

I'm training my own music_vae models, and I've noticed that the evaluation loss and accuracy are always 0 when viewed in TensorBoard. This is strange because I follow a similar process to train the RNNs, and the evaluation loss and accuracy look fine for the RNNs.
Here is what I'm seeing in tensorboard:
[Screenshots: TensorBoard legend, evaluation loss plot, accuracy plot]
Finally, here is what I'm seeing inside the eval folder. As you can see there is some data there:
[Screenshot: eval folder contents]
Any help on this issue would be appreciated! Thanks

View logs from Keras Tuner in Tensorboard

I am tuning a neural net with Keras Tuner.
I am creating logs this way:
tuner = RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=5,
    executions_per_trial=3,
    directory='my_dir',
    project_name='helloworld')
This gives me this directory of log files:
/my_dir/helloworld/
-trial_xxxxx
-trial_yyyy
-trial_zzzz
-oracle.json
-tuner0.json
I can get the summary by writing
tuner.results_summary()
or even get the best model using
tuner.get_best_models(num_models=1)[0]
But I also want to explore the runs in more detail and see if there are any patterns. For that I want to use TensorBoard, but if I write:
%tensorboard --logdir="my_dir/helloworld"
I only get an empty TensorBoard. I guess the problem is that Keras Tuner and TensorBoard write logs in different file formats.
My question is: has anyone been able to run hyperparameter optimization in Keras Tuner and then inspect the log files in TensorBoard afterwards?
TensorBoard needs separate logging through callbacks:
Before running tuner.search(), add
tensorboard = TensorBoard(log_dir='tensorboard_log_dir')
and pass the TensorBoard callback to tuner.search():
tuner.search(X_train, y_train, callbacks=[tensorboard])
then you can run
%tensorboard --logdir='tensorboard_log_dir'
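As a fuller, hedged sketch of that answer (assuming the keras_tuner package and TF 2.x; build_model, X_train/y_train, and X_val/y_val are placeholders from the question, and the epoch count is arbitrary):
import keras_tuner as kt
from tensorflow.keras.callbacks import TensorBoard

tuner = kt.RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=5,
    executions_per_trial=3,
    directory='my_dir',
    project_name='helloworld')

# Keras Tuner forwards the callbacks to each model.fit() call inside a trial,
# so every trial writes its own summaries under the TensorBoard log directory.
tensorboard = TensorBoard(log_dir='tensorboard_log_dir')
tuner.search(X_train, y_train,
             validation_data=(X_val, y_val),
             epochs=10,
             callbacks=[tensorboard])
With a val_accuracy objective the search needs validation data, hence the validation_data argument here.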

Outputting multiple loss components to tensorboard from tensorflow estimators

I am pretty new to tensorflow and I am struggling to get tensorboard to display some of my custom metrics. The model I am working with is a tf.estimator.Estimator, with an associated EstimatorSpec. The first new metric I am trying to log is from my loss function, which is composed of two components: a loss for an age prediction (tf.float32) and a loss for a class prediction (one-hot/multiclass), which I add together to determine a total loss (my model is predicting both a class and an age). The total loss is output just fine during training and shows up on tensorboard, but I would like to track the individual age and the class prediction loss components as well.
I think the intended solution is to add an eval_metric_ops argument to the EstimatorSpec, as described here (Custom eval_metric_ops in Estimator in Tensorflow). I have not been able to make this approach work, however. I defined a custom metric function that looks like this:
def age_loss_function(labels, ages_pred, ages_true):
    per_sample_age_loss = get_age_loss_per_sample(ages_pred, ages_true)  # works fine
    # The error happens on this line:
    mean_abs_age_diff, age_loss_update_fn = tf.metrics.Mean(per_sample_age_loss)
    return mean_abs_age_diff, age_loss_update_fn

eval_metric_ops = {"age_loss": age_loss_function}  # want to use this in EstimatorSpec
The instructions seem to say that I need both the error metric and the update function, which should both be returned from the tf.metrics call, as in examples like the one I linked. But this call fails for me with the error message:
tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
I am probably just misusing the APIs. If someone can guide me on the proper usage I would really appreciate it. Thanks!
It looks like the problem was caused by a version change. I had updated to TensorFlow 2.0 while the instructions I was following were written for 1.x. Using tf.compat.v1.metrics.mean() instead gets past this problem.
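To make that fix concrete, here is a hedged sketch of how it could look inside a model_fn; ages_pred, total_loss, and train_op stand in for the poster's own graph, and the per-sample loss is a stand-in for their get_age_loss_per_sample helper, not a definitive implementation:
import tensorflow as tf

def model_fn(features, labels, mode, params):
    # ... build the network and compute ages_pred, class logits, and the
    # combined total_loss / train_op exactly as before (omitted here) ...
    per_sample_age_loss = tf.abs(ages_pred - labels["age"])  # stand-in helper

    # tf.compat.v1.metrics.mean returns the (value_op, update_op) pair that
    # eval_metric_ops expects, unlike the Keras-style tf.metrics.Mean class.
    eval_metric_ops = {
        "age_loss": tf.compat.v1.metrics.mean(per_sample_age_loss),
    }
    return tf.estimator.EstimatorSpec(
        mode=mode,
        loss=total_loss,
        train_op=train_op,
        eval_metric_ops=eval_metric_ops)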

why are my tensorflow events files empty?

I am running the TensorFlow Object Detection API with the SSD MobileNet model. I have the model.ckpt as well as the graph.pbtxt in my training dir, but the events files in that dir are empty. It seems that no data was written to them. Could anyone help me, please?
TensorFlow event files are generated from the summaries that you add in your code.
For example, suppose you are training a convolutional neural network for recognizing MNIST digits. You'd like to record how the learning rate varies over time, and how the objective function is changing. Collect these by attaching tf.summary.scalar ops to the nodes that output the learning rate and loss respectively. Then, give each scalar_summary a meaningful tag, like 'learning rate' or 'loss function'.
For example:
# Add a scalar summary for the snapshot loss.
tf.summary.scalar('loss', loss)
Please refer to the link below:
https://www.tensorflow.org/guide/summaries_and_tensorboard
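As a minimal, hedged sketch of how such summaries end up in the event files (TF1-style graph mode via tf.compat.v1; the scalar values and the 'training/' directory are synthetic placeholders, not the object detection pipeline itself):
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

loss = tf.placeholder(tf.float32, name='loss')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')

tf.summary.scalar('loss', loss)
tf.summary.scalar('learning_rate', learning_rate)
merged = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter('training/', sess.graph)
    for step in range(100):
        # Synthetic values; in a real run these come from the training loop.
        summary = sess.run(merged, feed_dict={loss: 1.0 / (step + 1),
                                              learning_rate: 0.001})
        writer.add_summary(summary, step)
    writer.close()  # flushing is what actually fills the event file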

TypeError in freeze_graph tool while trying to freeze a graph in Tensorflow 1.3

I train a tf.contrib.learn estimator (specifically, DNNLinearCombinedRegressor) in Python and save the model's parameters and graph by specifying model_dir when defining the estimator. After training is done, I try to freeze the graph using the CLI as mentioned in this post, and get this error:
TypeError: names_to_saveables must be a dict mapping string names to Tensors/Variables. Not a variable: Tensor("dnn/hiddenlayer_0/biases:0", shape=(10,), dtype=float32)
Any idea how to resolve this? Also, how do I make sure that after training, when I use predict_scores() of the estimator for prediction, the frozen graph file is used to create the model? I want to freeze the graph so that predict_scores() doesn't reload the graph and its variables every time a prediction is made. Thanks in advance.