I tried to visualize my training progress using TensorBoard, but when I ran the command to display it, nothing appeared and there was no error message. It just shows a blank page saying the site "took too long to respond". This is the callback code and magic command I used to display it:
#log directory
log_folder = "logs"
#callback
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_folder, histogram_freq=1)
#magic command
%tensorboard --logdir log_folder
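For reference, the standard notebook pattern is sketched below (note that the %tensorboard magic takes the directory name itself, e.g. logs, not the name of the Python variable log_folder; whether that is the actual cause of the blank page here is only a guess):

# Load the TensorBoard notebook extension once per session
%load_ext tensorboard

import tensorflow as tf

log_folder = "logs"
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_folder, histogram_freq=1)

# model.fit(..., callbacks=[tensorboard_callback]) must run first so that
# event files actually exist under "logs"

# Pass the directory name, not the variable name, to the magic:
%tensorboard --logdir logs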
I am tuning a Neural Net with Keras Tuner
I am creating logs this way:
tuner = RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=5,
    executions_per_trial=3,
    directory='my_dir',
    project_name='helloworld')
This gives me this directory of log files:
/my_dir/helloworld/
-trial_xxxxx
-trial_yyyy
-trial_zzzz
-oracle.json
-tuner0.json
I can get the summary by writing
tuner.results_summary()
or even get the best model using
tuner.get_best_models(num_models=1)[0]
But I also want to explore the runs in more detail and see if there are any patterns. For that I want to use TensorBoard, but if I write:
%tensorboard --logdir="my_dir/helloworld"
I only get an empty TensorBoard. I guess the problem here is that Keras Tuner and TensorBoard write logs in different file formats.
My question is still: has anyone been able to run hyperparameter optimization in Keras Tuner and then inspect the log files in TensorBoard afterwards?
TensorBoard needs separate logging through callbacks:
Before running tuner.search(), add
tensorboard = TensorBoard(log_dir='tensorboard_log_dir')
and pass the TensorBoard callback to tuner.search():
tuner.search(X_train, y_train, callbacks=[tensorboard])
Then you can run
%tensorboard --logdir='tensorboard_log_dir'
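Putting the answer together, a minimal sketch could look like the following (hedged: build_model, X_train and y_train are assumed to exist as in the question, the epochs/validation_split values are only placeholders, and depending on your version the package is imported as kerastuner or keras_tuner):

from kerastuner.tuners import RandomSearch
from tensorflow.keras.callbacks import TensorBoard

tuner = RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=5,
    executions_per_trial=3,
    directory='my_dir',
    project_name='helloworld')

# Every trial/execution started by the tuner writes its event files here.
tensorboard = TensorBoard(log_dir='tensorboard_log_dir')

tuner.search(X_train, y_train,
             epochs=5,                # placeholder
             validation_split=0.2,    # needed because the objective is val_accuracy
             callbacks=[tensorboard])

# In a notebook:
%tensorboard --logdir tensorboard_log_dir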
Here is the code I have written in Python 3.6. When I try to plot it using TensorBoard, I see two graphs, namely "main" and "auxiliary", but they do not seem to correspond to my code:
import tensorflow as tf

a = tf.constant(1.0)
b = tf.constant(2.0)
c = a * b

sess = tf.Session()
# Write the graph definition so TensorBoard's Graphs tab can display it.
writer = tf.summary.FileWriter("E:/python_prog", sess.graph)
print(sess.run(c))
writer.close()
sess.close()
I run the code in the Anaconda (Windows) prompt:
(tfp3.6) E:\python_prog>tensorboard --logdir="E:\python_prog"
Starting TensorBoard b'54' at http://DESKTOP-31KSN08:6006
(Press CTRL+C to quit)
I am trying to view my network graph with TensorBoard. I read the page https://www.tensorflow.org/get_started/summaries_and_tensorboard
My question is: can I visualize the graph without creating the summaries and the FileWriter?
Following http://ischlag.github.io/2016/06/04/how-to-use-tensorboard/ I added the following code after the session object was created:
writer = tf.summary.FileWriter("/tmp/tensorflow/", sess.graph)
Then I used the command in the blog:
tensorboard --logdir=run1:/tmp/tensorflow/ --port 6006
TensorBoard then prints the address you should open to visualize the graph:
"TensorBoard 0.1.6 at http://page:6006"
I am manually trying to link an embedding tensor with metadata.tsv, but I am getting the following error: "$LOG_DIR/metadata.tsv is not a file."
I am running TensorBoard with the following command:
tensorboard --logdir default/
and my projector_config.pbtxt file is the following:
embeddings {
  tensor_name: 'embedding/decoder_embedding_matrix'
  metadata_path: '$LOG_DIR/metadata.tsv'
}
I have checked my log directory and it contains all the files (screenshots of the LOG_DIR contents and of the error were attached).
TensorBoard cannot expand $LOG_DIR the way you have used it. Either edit projector_config.pbtxt manually to provide the full path, or use this in your code:
import os
embedding.metadata_path = os.path.join(LOG_DIR, 'metadata.tsv')
where again LOG_DIR should preferably be the full path (and not the relative one).
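For instance (a small sketch; 'default' stands in for whatever directory you pass to --logdir, and embedding is assumed to be your ProjectorConfig embeddings entry):

import os

LOG_DIR = 'default'  # the directory passed to --logdir
# Use the absolute path so the projector does not depend on the working directory:
embedding.metadata_path = os.path.join(os.path.abspath(LOG_DIR), 'metadata.tsv')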
I ran this code snippet:
import os
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.contrib.tensorboard.plugins import projector

LOG_DIR = 'logs'
metadata = os.path.join(LOG_DIR, 'metadata.tsv')

mnist = input_data.read_data_sets('MNIST_data')
input_1 = mnist.train.next_batch(10)
images = tf.Variable(input_1[0], name='images')

with open(metadata, 'w') as metadata_file:
    for row in input_1[1]:
        metadata_file.write('%d\n' % row)

with tf.Session() as sess:
    saver = tf.train.Saver([images])
    sess.run(images.initializer)
    saver.save(sess, os.path.join(LOG_DIR, 'images.ckpt'))

    config = projector.ProjectorConfig()
    # One can add multiple embeddings.
    embedding = config.embeddings.add()
    embedding.tensor_name = images.name
    # Link this tensor to its metadata file (e.g. labels).
    embedding.metadata_path = metadata
    # Saves a config file that TensorBoard will read during startup.
    projector.visualize_embeddings(tf.summary.FileWriter(LOG_DIR), config)
After this, I opened the TensorBoard embeddings tab and it showed "Parsing metadata". However, it kept loading that way endlessly. I tried another piece of code, and in that case it kept loading at "Fetching sprite image". Is there something wrong with my TensorBoard?
The problem is that TensorBoard couldn't find your metadata file, because it looks for the metadata file relative to the directory that you have specified with your '--logdir' parameter of the 'tensorboard' command.
So if you are opening TensorBoard with 'tensorboard --logdir logs', it will look for the metadata file in 'logs/logs/metadata.tsv'.
A possible fix for your code is to replace this line
embedding.metadata_path = metadata
with this one:
embedding.metadata_path = 'metadata.tsv'
In general, to debug TensorBoard errors you have to look at the error messages in your browser's console while viewing TensorBoard.
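Equivalently, if you prefer to keep building the path with os.path.join, you can strip the log-directory prefix before assigning it (a small sketch reusing the LOG_DIR and metadata variables from the snippet above):

import os

# TensorBoard resolves metadata_path relative to --logdir, so store only the
# part of the path below LOG_DIR:
embedding.metadata_path = os.path.relpath(metadata, LOG_DIR)  # -> 'metadata.tsv'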