I am implementing a simple linear regression in Google Colab and trying to visualize the results with TensorBoard using the following command:
%tensorboard --logdir=/tmp/lr-train
However, when I run this command, TensorBoard simply does not show up. Instead I just see the following message: Reusing TensorBoard on port 6012 (pid 3219), started 0:07:06 ago. (Use '!kill 3219' to kill it.)
How can I launch TensorBoard in my case? Here is the code I am trying to run:
%tensorflow_version 1.x
import tensorflow as tf
import numpy as np

N = 100
x_zeros = np.random.multivariate_normal(
    mean=np.array((-1, -1)), cov=0.1 * np.eye(2), size=(N//2))
y_zeros = np.zeros((N//2,))
x_ones = np.random.multivariate_normal(
    mean=np.array((1, 1)), cov=0.1 * np.eye(2), size=(N//2))
y_ones = np.ones((N//2,))  # note: np.ones presumably intended here (the original had np.zeros)
x_np = np.vstack([x_zeros, x_ones])
y_np = np.concatenate([y_zeros, y_ones])

with tf.name_scope("placeholders"):
    x = tf.placeholder(tf.float32, (N, 2))
    y = tf.placeholder(tf.float32, (N, 1))
with tf.name_scope("weights"):
    W = tf.Variable(tf.random_normal((2, 1)))
    b = tf.Variable(tf.random_normal((1,)))
with tf.name_scope("prediction"):
    y_pred = tf.matmul(x, W) + b
with tf.name_scope("loss"):
    l = tf.reduce_sum((y - y_pred)**2)
with tf.name_scope("optim"):
    train_op = tf.train.AdamOptimizer(0.05).minimize(l)
with tf.name_scope("summaries"):
    tf.summary.scalar("loss", l)
    merged = tf.summary.merge_all()

train_writer = tf.summary.FileWriter('/tmp/lr-train', tf.get_default_graph())
n_steps = 100
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Train model
    for i in range(n_steps):
        feed_dict = {x: x_np, y: y_np.reshape(-1, 1)}
        _, summary, loss = sess.run([train_op, merged, l], feed_dict=feed_dict)
        train_writer.add_summary(summary, i)  # write the scalar summary so the loss curve appears in TensorBoard
        if i % 10 == 0:
            print("step %d, loss: %f" % (i, loss))
I tried an example in TF 2, and TensorBoard launched without any issues using the same command.
I tried your code in Colab, reproduced what you describe, and found a solution that worked, as described below.
Use a space between --logdir and /tmp/lr-train instead of an =.
What did not work as mentioned in the question:
%load_ext tensorboard
%tensorboard --logdir=/tmp/lr-train
What did work:
%load_ext tensorboard
%tensorboard --logdir /tmp/lr-train
You can also terminate your active session and then rerun all cells afterward, or kill the running TensorBoard from within the session; the message itself tells you which PID to kill.
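For example, here is a minimal sketch of forcing a fresh instance from within the notebook (the PID 3219 is just the one reported in the message above; use whatever yours says):

!kill 3219                            # kill the stale TensorBoard instance reported in the message
%load_ext tensorboard
%tensorboard --logdir /tmp/lr-train   # note the space instead of '='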
In my training file (train.py) I write:
def deep_part(self):
    with tf.variable_scope("deep-part"):
        y_deep = tf.reshape(self.embeddings,
                            shape=[-1, self.field_size * self.factor_size])  # None * (F*K)
        # self.deep_layers = 2
        for i in range(0, len(self.deep_layers)):
            y_deep = tf.contrib.layers.fully_connected(
                y_deep, self.deep_layers[i],
                activation_fn=self.deep_layers_activation, scope='fc%d' % i)
        return y_deep
Now, in my prediction file (predict.py), I restore the checkpoint, but I don't know how to reload the "deep-part" network's weights and biases, because I think the fully_connected function might hide them.
I wrote a lengthy explanation here. A short summary:
Calling saver.save(sess, '/tmp/my_model') makes TensorFlow produce multiple files:
checkpoint
my_model.data-00000-of-00001
my_model.index
my_model.meta
The checkpoint file checkpoint is just a pointer to the latest version of our model weights; it is simply a plain text file containing
$ cat /tmp/checkpoint
model_checkpoint_path: "/tmp/my_model"
all_model_checkpoint_paths: "/tmp/my_model"
The others are binary files containing the graph (.meta) and weights (.data*).
You can see how this works by running:
import tensorflow as tf
import numpy as np

data = np.arange(9 * 1).reshape(1, 9).astype(np.float32)

plhdr = tf.placeholder(tf.float32, shape=[1, 9], name='input')
print(plhdr.name)
activation = tf.layers.dense(plhdr, 10, name='fc')
print(activation.name)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    expected = sess.run(activation, {plhdr: data})
    print(expected)

    saver = tf.train.Saver(tf.global_variables())
    saver.save(sess, '/tmp/my_model')

tf.reset_default_graph()

with tf.Session() as sess:
    # load the computation graph (the fully connected layer + placeholder)
    loader = tf.train.import_meta_graph('/tmp/my_model.meta')
    sess.run(tf.global_variables_initializer())

    plhdr = tf.get_default_graph().get_tensor_by_name('input:0')
    activation = tf.get_default_graph().get_tensor_by_name('fc/BiasAdd:0')

    # variables are freshly initialized here, so the output differs from the saved run
    actual = sess.run(activation, {plhdr: data})
    assert np.allclose(actual, expected) is False

    # now load the weights
    loader.restore(sess, '/tmp/my_model')
    actual = sess.run(activation, {plhdr: data})
    assert np.allclose(actual, expected) is True
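Applied to the original question, here is a minimal sketch, assuming the checkpoint has been restored as above and that the layers were created under the "deep-part" variable scope: fully_connected does not hide its parameters, it just creates variables with predictable names that you can look up (the exact variable names in the comments are what tf.contrib.layers.fully_connected typically produces, so treat them as an assumption):

# After loader.restore(sess, ...), list the restored variables under the scope
deep_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='deep-part')
for v in deep_vars:
    print(v.name, v.shape)            # e.g. deep-part/fc0/weights:0, deep-part/fc0/biases:0
weights_fc0 = sess.run(deep_vars[0])  # fetch the actual numpy values if you need them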
I'm trying to benchmark the inference-phase performance of my Keras model built with the TensorFlow backend. I was thinking that the TensorFlow Benchmark tool was the proper way to go.
I've managed to build and run the example on Desktop with the tensorflow_inception_graph.pb and everything seems to work fine.
What I can't seem to figure out is how to save the Keras model as a proper .pb model. I'm able to get the TensorFlow graph from the Keras model as follows:
import keras.backend as K
K.set_learning_phase(0)
trained_model = function_that_returns_compiled_model()
sess = K.get_session()
sess.graph # This works
# Get the input tensor name for TF Benchmark
trained_model.input
> <tf.Tensor 'input_1:0' shape=(?, 360, 480, 3) dtype=float32>
# Get the output tensor name for TF Benchmark
trained_model.output
> <tf.Tensor 'reshape_2/Reshape:0' shape=(?, 360, 480, 12) dtype=float32>
I've now been trying to save the model in a couple of different ways.
import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter
model = trained_model
export_path = "path/to/folder" # where to save the exported graph
export_version = 1 # version number (integer)
saver = tf.train.Saver(sharded=True)
model_exporter = exporter.Exporter(saver)
signature = exporter.classification_signature(input_tensor=model.input, scores_tensor=model.output)
model_exporter.init(sess.graph.as_graph_def(), default_graph_signature=signature)
model_exporter.export(export_path, tf.constant(export_version), sess)
Which produces a folder with some files I don't know what to do with.
I would now run the Benchmark tool with something like this
bazel-bin/tensorflow/tools/benchmark/benchmark_model \
--graph=tensorflow/tools/benchmark/what_file.pb \
--input_layer="input_1:0" \
--input_layer_shape="1,360,480,3" \
--input_layer_type="float" \
--output_layer="reshape_2/Reshape:0"
But no matter which file I try to use as what_file.pb, I get: Error during inference: Invalid argument: Session was not created with a graph before Run()!
So I got this to work. I just needed to convert all variables in the TensorFlow graph to constants and then save the graph definition.
Here's a small example:
import tensorflow as tf
from keras import backend as K
from tensorflow.python.framework import graph_util

K.set_learning_phase(0)
model = function_that_returns_your_keras_model()
sess = K.get_session()
output_node_name = "my_output_node"  # Name of your output node

with sess as sess:
    # note: re-running the initializer resets variables to their initial values;
    # that is fine for a pure speed benchmark, but for a trained model you would skip this
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    graph_def = sess.graph.as_graph_def()
    output_graph_def = graph_util.convert_variables_to_constants(
        sess,
        graph_def,
        output_node_name.split(","))
    tf.train.write_graph(output_graph_def,
                         logdir="my_dir",
                         name="my_model.pb",
                         as_text=False)
Now just call the TensorFlow Benchmark tool with my_model.pb as the graph.
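If you are unsure what to pass as output_node_name, one way (a small sketch based on the model object from the question, not part of the original answer) is to take the output tensor's name and drop the trailing :0 index:

# The question showed the output tensor as 'reshape_2/Reshape:0'; the node name drops ':0'
output_node_name = model.output.op.name   # e.g. 'reshape_2/Reshape'
input_node_name = model.input.op.name     # e.g. 'input_1', useful for --input_layer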
You're saving the parameters of this model and not the graph definition; to save the latter, use tf.get_default_graph().as_graph_def().SerializeToString() and write the result to a file.
That said, I don't think the benchmark tool will work, since it has no way to initialize the variables your model depends on.
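For illustration, here is a minimal sketch of that serialization (the output path is arbitrary):

# Serialize the graph structure only; variable values are not included
graph_def = tf.get_default_graph().as_graph_def()
with open('/tmp/graph_def.pb', 'wb') as f:
    f.write(graph_def.SerializeToString())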
The official way to visualize a TensorFlow graph is with TensorBoard, but sometimes I just want a quick look at the graph when I'm working in Jupyter.
Is there a quick solution, ideally based on TensorFlow tools, or standard SciPy packages (like matplotlib), but if necessary based on 3rd party libraries?
Here's a recipe I copied at some point from one of Alex Mordvintsev's deep dream notebooks:
from IPython.display import clear_output, Image, display, HTML
import tensorflow as tf
import numpy as np

def strip_consts(graph_def, max_const_size=32):
    """Strip large constant values from graph_def."""
    strip_def = tf.GraphDef()
    for n0 in graph_def.node:
        n = strip_def.node.add()
        n.MergeFrom(n0)
        if n.op == 'Const':
            tensor = n.attr['value'].tensor
            size = len(tensor.tensor_content)
            if size > max_const_size:
                tensor.tensor_content = ("<stripped %d bytes>" % size).encode('utf-8')
    return strip_def

def show_graph(graph_def, max_const_size=32):
    """Visualize TensorFlow graph."""
    if hasattr(graph_def, 'as_graph_def'):
        graph_def = graph_def.as_graph_def()
    strip_def = strip_consts(graph_def, max_const_size=max_const_size)
    code = """
        <script>
          function load() {{
            document.getElementById("{id}").pbtxt = {data};
          }}
        </script>
        <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
        <div style="height:600px">
          <tf-graph-basic id="{id}"></tf-graph-basic>
        </div>
    """.format(data=repr(str(strip_def)), id='graph' + str(np.random.rand()))

    iframe = """
        <iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
    """.format(code.replace('"', '&quot;'))
    display(HTML(iframe))
Then, to visualize the current graph:
show_graph(tf.get_default_graph().as_graph_def())
If your graph is saved as pbtxt, you could do:
from google.protobuf import text_format
gdef = tf.GraphDef()
text_format.Merge(open("tf_persistent.pbtxt").read(), gdef)
show_graph(gdef)
You'll see something like this
TensorFlow 2.0 now supports TensorBoard in Jupyter via magic commands (e.g. %tensorboard --logdir logs/train). Here's a link to tutorials and examples.
As @MiniQuark mentioned in a comment, we need to load the extension first (%load_ext tensorboard.notebook).
Below are usage examples covering graph mode, @tf.function, and tf.keras (in tensorflow==2.0.0-alpha0):
1. Example using graph mode in TF2 (via tf.compat.v1.disable_eager_execution())
%load_ext tensorboard.notebook
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
from tensorflow.python.ops.array_ops import placeholder
from tensorflow.python.training.gradient_descent import GradientDescentOptimizer
from tensorflow.python.summary.writer.writer import FileWriter

with tf.name_scope('inputs'):
    x = placeholder(tf.float32, shape=[None, 2], name='x')
    y = placeholder(tf.int32, shape=[None], name='y')

with tf.name_scope('logits'):
    layer = tf.keras.layers.Dense(units=2)
    logits = layer(x)

with tf.name_scope('loss'):
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
    loss_op = tf.reduce_mean(xentropy)

with tf.name_scope('optimizer'):
    optimizer = GradientDescentOptimizer(0.01)
    train_op = optimizer.minimize(loss_op)

FileWriter('logs/train', graph=train_op.graph).close()

%tensorboard --logdir logs/train
2. Same example as above, but now using the @tf.function decorator for the forward-backward pass and without disabling eager execution:
%load_ext tensorboard.notebook
import tensorflow as tf
import numpy as np

logdir = 'logs/'
writer = tf.summary.create_file_writer(logdir)
tf.summary.trace_on(graph=True, profiler=True)

@tf.function
def forward_and_backward(x, y, w, b, lr=tf.constant(0.01)):
    with tf.name_scope('logits'):
        logits = tf.matmul(x, w) + b
    with tf.name_scope('loss'):
        loss_fn = tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=y, logits=logits)
        reduced = tf.reduce_sum(loss_fn)
    with tf.name_scope('optimizer'):
        grads = tf.gradients(reduced, [w, b])
        _ = [x.assign(x - g*lr) for g, x in zip(grads, [w, b])]
    return reduced

# inputs
x = tf.convert_to_tensor(np.ones([1, 2]), dtype=tf.float32)
y = tf.convert_to_tensor(np.array([1]))

# params
w = tf.Variable(tf.random.normal([2, 2]), dtype=tf.float32)
b = tf.Variable(tf.zeros([1, 2]), dtype=tf.float32)

loss_val = forward_and_backward(x, y, w, b)

with writer.as_default():
    tf.summary.trace_export(
        name='NN',
        step=0,
        profiler_outdir=logdir)

%tensorboard --logdir logs/
3. Using tf.keras API:
%load_ext tensorboard.notebook
import tensorflow as tf
import numpy as np

x_train = [np.ones((1, 2))]
y_train = [np.ones(1)]

model = tf.keras.models.Sequential([tf.keras.layers.Dense(2, input_shape=(2,))])
model.compile(
    optimizer='sgd',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'])

logdir = "logs/"
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=logdir)

model.fit(x_train,
          y_train,
          batch_size=1,
          epochs=1,
          callbacks=[tensorboard_callback])

%tensorboard --logdir logs/
These examples will produce something like this below the cell:
I wrote a Jupyter extension for tensorboard integration. It can:
Start tensorboard just by clicking a button in Jupyter
Manage multiple tensorboard instances.
Seamless integration with Jupyter interface.
Github: https://github.com/lspvic/jupyter_tensorboard
I wrote a simple helper which starts TensorBoard from the Jupyter notebook. Just add this function somewhere at the top of your notebook:
def TB(cleanup=False):
    import webbrowser
    webbrowser.open('http://127.0.1.1:6006')
    !tensorboard --logdir="logs"
    if cleanup:
        !rm -R logs/
Then run TB() whenever you have generated your summaries. Instead of opening a graph in the same Jupyter window, it:
starts a tensorboard
opens a new tab with tensorboard
navigates you to this tab
After you are done exploring, just go back to the notebook tab and interrupt the kernel. If you want to clean up your log directory after the run, just run TB(1).
A TensorBoard / iframe-free version of this visualization, which admittedly gets cluttered quickly, can be built with pydot:
import pydot
from itertools import chain

def tf_graph_to_dot(in_graph):
    dot = pydot.Dot()
    dot.set('rankdir', 'LR')
    dot.set('concentrate', True)
    dot.set_node_defaults(shape='record')
    all_ops = in_graph.get_operations()
    all_tens_dict = {k: i for i, k in enumerate(set(chain(*[c_op.outputs for c_op in all_ops])))}
    for c_node in all_tens_dict.keys():
        node = pydot.Node(c_node.name)  # , label=label)
        dot.add_node(node)
    for c_op in all_ops:
        for c_output in c_op.outputs:
            for c_input in c_op.inputs:
                dot.add_edge(pydot.Edge(c_input.name, c_output.name))
    return dot
which can then be followed by
from IPython.display import SVG
# Define your model here, then grab its graph
graph = tf.get_default_graph()
tf_graph_to_dot(graph).write_svg('simple_tf.svg')
SVG('simple_tf.svg')
to render the graph as records in a static SVG file.
Code
def tb(logdir="logs", port=6006, open_tab=True, sleep=2):
    import subprocess
    proc = subprocess.Popen(
        "tensorboard --logdir={0} --port={1}".format(logdir, port), shell=True)
    if open_tab:
        import time
        time.sleep(sleep)
        import webbrowser
        webbrowser.open("http://127.0.0.1:{}/".format(port))
    return proc
Usage
tb() # Starts a TensorBoard server on the logs directory, on port 6006
# and opens a new tab in your browser to use it.
tb("logs2", 6007) # Starts a second server on the logs2 directory, on port 6007,
# and opens a new tab to use it.
Starting a server does not block Jupyter (except for 2 seconds to ensure the server has the time to start before opening a tab). All TensorBoard servers will stop when you interrupt the kernel.
Advanced usage
If you want more control, you can kill the servers programmatically like this:
server1 = tb()
server2 = tb("logs2", 6007)
# and later...
server1.kill() # stops the first server
server2.kill() # stops the second server
You can set open_tab=False if you don't want new tabs to open. You can also set sleep to some other value if 2 seconds is too much or not enough on your system.
If you prefer to pause Jupyter while TensorBoard is running, then you can call any server's wait() method. This will block Jupyter until you interrupt the kernel, which will stop this server and all the others.
server1.wait()
Prerequisites
This solution assumes you have installed TensorBoard (e.g., using pip install tensorboard) and that it is available in the environment you started Jupyter in.
Acknowledgment
This answer was inspired by @SalvadorDali's answer. His solution is nice and simple, but I wanted to be able to start multiple tensorboard instances without blocking Jupyter. Also, I prefer not to delete log directories. Instead, I start tensorboard on the root log directory, and each TensorFlow run logs to a different subdirectory.
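For instance, here is a small sketch of the per-run subdirectory idea (the naming scheme is just an illustration, using the TF 1.x FileWriter API seen elsewhere in this thread):

import datetime
import tensorflow as tf

# Each run writes to its own timestamped subdirectory under the root log directory
run_logdir = "logs/run_{}".format(datetime.datetime.now().strftime("%Y%m%d_%H%M%S"))
writer = tf.summary.FileWriter(run_logdir, tf.get_default_graph())
# ... add summaries during training ...
writer.close()
# TensorBoard is then started once on the root directory, e.g. tb("logs")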
Another quick option with TF 2.x is through the plot_model() function. It's already built into more recent versions of TF utilities. For example:
import tensorflow
from tensorflow.keras.utils import plot_model
plot_model(model, to_file=('output_filename.png'))
This function is nice because you can have it display the layer names, output at a high DPI, configure it to plot horizontally, and set other options. Here is the documentation for the function: https://www.tensorflow.org/api_docs/python/tf/keras/utils/plot_model
The plotting is very quick even for large models and works very well even with complex models that have multiple connections in and out.
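For instance, a quick sketch of the options mentioned above (the parameter names are the documented ones for tf.keras.utils.plot_model; the file name is arbitrary):

plot_model(model,
           to_file='model_horizontal.png',
           show_shapes=True,        # show layer output shapes
           show_layer_names=True,   # show layer names
           rankdir='LR',            # plot horizontally (left to right)
           dpi=200)                 # higher-resolution output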
TensorBoard Visualize Nodes - Architecture Graph
Animation of the graph visualization: https://www.tensorflow.org/images/graph_vis_animation.gif
I am experimenting with (learning) TensorBoard and use the following code, which I got from the internet (a simple regression function):
import tensorflow as tf
import numpy as np

#sess = tf.InteractiveSession() #define a session

# Create 100 phony x, y data points in NumPy, y = x * 0.1 + 0.3
x_data = np.random.rand(100).astype("float32")
y_data = x_data * 0.1 + 0.3

# Try to find values for W and b that compute y_data = W * x_data + b
# (We know that W should be 0.1 and b 0.3, but Tensorflow will
# figure that out for us.)
with tf.name_scope("calculatematmul") as scope:
    W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
    b = tf.Variable(tf.zeros([1]))
    y = W * x_data + b

# Minimize the mean squared errors.
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

# Before starting, initialize the variables. We will 'run' this first.
init = tf.initialize_all_variables()

# Launch the graph.
sess = tf.Session()
sess.run(init)

#### ----> ADD THIS LINE <---- ####
writer = tf.train.SummaryWriter('mnist_logs', sess.graph_def)

# Fit the line.
for step in xrange(201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(W), sess.run(b))
The code runs fine when I create a python file and run the file with
python test.py
It also runs fine in the Jupyter notebook.
However, while TensorBoard gets the information from running the python file (that is to say, it creates the xyz....home file), the interactive version does not create any info usable for TensorBoard.
Can somebody explain to me why, please?
Thanks
Peter
Be sure that you use the full path when starting tensorboard.
tensorboard --logdir='./somedirectory/mnist_logs'
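As a further note (my own assumption, not part of the original answer): in a long-lived notebook session the events may simply not have been flushed to disk yet. The writer created in the question's code has flush/close methods for exactly that, roughly:

writer = tf.train.SummaryWriter('mnist_logs', sess.graph_def)
# ... run the training loop ...
writer.flush()  # force pending events out to disk so TensorBoard can see them
writer.close()  # or close the writer when you are done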