I am a newbie and I have had some trouble using TensorBoard.
I started Spyder from the Anaconda prompt with
conda activate D:\Software\Anaconda\envs\tf
spyder
And this is my simple code in file trial.py
import tensorflow as tf
a = 2
b = 3
x = tf.add(a, b)
writer = tf.summary.create_file_writer('./graphs')
print(writer)
Then I ran this code in Spyder. I opened another Anaconda prompt and entered these lines:
conda activate D:\Software\Anaconda\envs\tf
tensorboard --logdir=D:\Dung\Maytinh\Python\graphs --port 6006
I opened Chrome and went to http://localhost:6006/. The result was as below:
I have tried many times and I cannot figure out where the error is. Many of the guides on the internet are for TensorFlow 1 and do not apply here. Please help me!
This is based on the documentation
import tensorflow as tf

a = 2
b = 3

@tf.function
def func(x, y):
    # The op has to run inside a tf.function for TF 2.x to record a graph
    return tf.add(x, y)

writer = tf.summary.create_file_writer('./graphs')

# Start tracing, run the function once, then export the trace
tf.summary.trace_on(graph=True, profiler=True)
z = func(a, b)
with writer.as_default():
    tf.summary.trace_export(
        name="trace",
        step=0,
        profiler_outdir='./graphs')
tensorboard --logdir ./graphs
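If you do not need the profiler output, a graph-only trace should also be enough to populate the Graphs tab (a minimal sketch, assuming TensorFlow 2.x; the names and paths are just placeholders):

import tensorflow as tf

@tf.function
def func(x, y):
    # tf.function gives TensorBoard a graph to trace
    return tf.add(x, y)

writer = tf.summary.create_file_writer('./graphs')

tf.summary.trace_on(graph=True)   # record the graph only, no profiler
z = func(2, 3)                    # run the traced function at least once
with writer.as_default():
    tf.summary.trace_export(name="trace", step=0)

After that, the same tensorboard --logdir ./graphs command above should show the graph under the Graphs tab.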
I'm trying to get used to TensorBoard, and I write my models using PyTorch.
However, when I try to see my model using the add_graph() function, I get this:
With this as the test code:
import numpy as np
import torch
import torchvision.transforms as transforms
import torch.nn as nn
import torch.optim as optim
from torch.utils.tensorboard import SummaryWriter
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.linear = nn.Linear(2, 1)

    def forward(self, x):
        x = self.linear(x)
        return x

writer = SummaryWriter('runs_pytorch/test')
net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
writer.add_graph(net, torch.zeros([4, 2], dtype=torch.float))
writer.close()
On the other hand, if I try to see a graph using TensorFlow, everything seems fine:
with this as the test code this time:
import tensorflow as tf
tf.Variable(42, name='foo')
w = tf.summary.FileWriter('runs_tensorflow/test')
w.add_graph(tf.get_default_graph())
w.flush()
w.close()
In case you are wondering, I'm using this command to start tensorboard:
tensorboard --logdir runs_pytorch
Something I noticed is that when I point it at the directory for my TensorFlow test, I get the usual message with the address, but if I do the same thing with --logdir runs_pytorch I get something more:
W1010 15:19:24.225109 15308 plugin_event_accumulator.py:294] Found more than one graph event per run, or there was a metagraph containing a graph_def, as well as one or more graph events. Overwriting the graph with the newest event.
W1010 15:19:24.226075 15308 plugin_event_accumulator.py:322] Found more than one "run metadata" event with tag step1. Overwriting it with the newest event.
I'm on Windows, and I tried different browsers (Chrome, Firefox...).
I have tensorflow 1.14.0, torch 1.2.0, Python 3.7.3
Thank you very much for your help, it's driving me crazy!
There are two ways to solve it:
1. Update PyTorch to 1.3.0 or above:
conda way:
conda install pytorch torchvision cudatoolkit=9.2 -c pytorch
pip way:
pip3 install torch==1.3.0+cu92 torchvision==0.4.1+cu92 -f https://download.pytorch.org/whl/torch_stable.html
2. install tensorboardX instead:
uninstall tensorboard:
if your tensorboard is installed by pip:
pip uninstall tensorboard
if your tensorboard is installed by anaconda:
conda uninstall tensorboard
install tensorboardX
pip install tensorboardX
When writing your script, change
from torch.utils.tensorboard import SummaryWriter
to
from tensorboardX import SummaryWriter
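For reference, a minimal end-to-end sketch of option 2 could look like the following (assuming the same Net model and log directory as in the question; tensorboardX's SummaryWriter exposes the same add_graph/close calls):

import torch
import torch.nn as nn
from tensorboardX import SummaryWriter  # swapped in for torch.utils.tensorboard

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.linear = nn.Linear(2, 1)

    def forward(self, x):
        return self.linear(x)

writer = SummaryWriter('runs_pytorch/test')
net = Net()
# add_graph traces the model on a dummy input so TensorBoard can draw it
writer.add_graph(net, torch.zeros([4, 2], dtype=torch.float))
writer.close()

The same tensorboard --logdir runs_pytorch command from the question should then pick the graph up.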
This might have been caused by this known problem, and it seems that it was solved in PyTorch 1.3, which was released yesterday - check out Bug Fixes in the release notes.
I'm trying to profile a Tensorflow graph in Tensorboard, but despite recording runtime metadata, the "Compute time" colour option is greyed out. A simple example as follows:
import tensorflow as tf
sess = tf.Session()
x = tf.constant(1.0)
y = tf.constant(2.0)
z = x + y
writer = tf.summary.FileWriter('logs', sess.graph)
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_meta = tf.RunMetadata()
res = sess.run(z, options=run_options, run_metadata=run_meta)
writer.add_run_metadata(run_meta, "metadata")
writer.close()
I then run Tensorboard from the terminal:
$ tensorboard --logdir logs
I then navigate to http://localhost:6006 in Chrome, and can see the TF graph visualisation, but no performance stats. Am I missing something obvious?
Thanks,
Chris
Specs: OSX Mojave, Anaconda Python 3.6.8, Tensorflow 1.14.
Question answered! The piece I was missing was the dropdown "Tag" menu in the TensorBoard navigation panel on the left-hand side of the browser window. I clicked it, and hey presto, I have "Compute Time" available to me.
Thanks all,
Chris
Here's the code I've written in Python 3.6. When I try to plot using TensorBoard I see two graphs, namely main and auxiliary, but they do not seem to correspond to my code:
import tensorflow as tf
a = tf.constant(1.0)
b = tf.constant(2.0)
c = a * b
sess = tf.Session()
writer = tf.summary.FileWriter("E:/python_prog", sess.graph)
print(sess.run(c))
writer.close()
sess.close()
I run the code in the Anaconda (Windows) prompt:
(tfp3.6) E:\python_prog>tensorboard --logdir="E:\python_prog"
Starting TensorBoard b'54' at http://DESKTOP-31KSN08:6006
(Press CTRL+C to quit)
I run the following code to see the graph generated.
import tensorflow as tf

logs_dir = "/home/sukriti/Desktop/GIS/new_code"

a = tf.constant(2)
b = tf.constant(3)
x = tf.add(a, b)

with tf.Session() as sess:
    writer = tf.summary.FileWriter(logs_dir, sess.graph)
    print(sess.run(x))
    writer.close()
I ran the following commands:
$ python demo.py
$ tensorboard --logdir="logs_dir"
Starting TensorBoard 54 at http://drsbhattac:6006 (Press CTRL+C to quit)
Now, clicking on the above link (Graph tab), I found the following message:
No graph definition files were found
Can you please help me figure out what I am missing here?
Thanks in advance!!
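Note that in the command above, --logdir="logs_dir" passes the literal string logs_dir rather than the value of the Python variable, so TensorBoard looks in a directory that does not hold the event file. Assuming the directory from the script is the intended one, pointing TensorBoard at the actual path should work:
tensorboard --logdir /home/sukriti/Desktop/GIS/new_code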
I am reading a book on TensorFlow and I found this code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
const1 = tf.constant(2)
const2 = tf.constant(3)
add_opp = tf.add(const1,const2)
mul_opp = tf.mul(add_opp, const2)
with tf.Session() as sess:
    result, result2 = sess.run([mul_opp, add_opp])
    print(result)
    print(result2)
    tf.train.SummaryWriter('./', sess.graph)
So it is very simple, nothing fancy, and it is supposed to produce some output that can be visualized with TensorBoard.
So I run the script; it prints the results, but apparently SummaryWriter produces nothing.
I run tensorboard -logdir='./' and of course there is no graph.
What could I be doing wrong?
Also, how do you terminate TensorBoard? I tried Ctrl-C and Ctrl-Z and it does not work. (Also, I am on a Japanese keyboard, so there is no backslash, just in case.)
The tf.train.SummaryWriter must be closed (or flushed) in order to ensure that data, including the graph, have been written to it. The following modification to your program should work:
writer = tf.train.SummaryWriter('./', sess.graph)
writer.close()
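If you would rather keep the writer open for later summaries, flushing it should work as well; flush() forces the pending events, including the graph, onto disk without closing the file:
writer.flush()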
A very weird thing happened to me.
I am learning to work with TensorFlow.
import tensorflow as tf
a = tf.constant(3)
b = tf.constant(4)
c = a+b
with tf.Session() as sess:
    File_Writer = tf.summary.FileWriter('/home/theredcap/Documents/CS/Personal/Projects/Test/tensorflow/tensorboard/', sess.graph)
    print(sess.run(c))
In order to see the graph on TensorBoard, I typed
tensorboard --logdir = "the above mentioned path"
But nothing was displayed on TensorBoard.
Then I went to the GitHub README page:
https://github.com/tensorflow/tensorboard/blob/master/README.md
And it said to run the command in this manner
tensorboard --logdir path/to/logs
I did the same, and finally I was able to view my graph.
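For completeness, with the log directory used in the script above, the working form of the command would be something like:
tensorboard --logdir /home/theredcap/Documents/CS/Personal/Projects/Test/tensorflow/tensorboard/
i.e. no spaces between the --logdir flag and its value.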