I am doing the MNIST tutorial, and fully_connected_feed.py works and saves an events.out.tfevents.1447186888 file to ~..\data\
When I try to open TensorBoard like this
python ~/tensorflow/tensorflow/tensorboard/tensorboard.py --logdir=~/tensorflow/tensorflow/g3doc/tutorials/mnist/data
or like this
tensorboard --logdir=~/tensorflow/tensorflow/g3doc/tutorials/mnist/data
It opens, but then I see "No scalar summary tags were found."
Try using
tensorboard --logdir=/home/$USER/tensorflow/tensorflow/g3doc/tutorials/mnist/data
or
tensorboard --logdir=${PWD} in that directory
This is because TensorBoard checks path existence with os.path.exists(), which does not expand ~.
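You can see the problem in plain Python (a minimal sketch of the behavior, not TensorBoard's actual code):
import os
print(os.path.exists('~/tensorflow'))                      # False: ~ is not expanded
print(os.path.exists(os.path.expanduser('~/tensorflow')))  # True, if the directory exists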
Regarding that, I would like to set alias tensorboard='tensorboard --logdir=${PWD}' for convenience.
I was following this tutorial which comes with this notebook.
I plan to use Tensorflow for my project, so I followed this tutorial and added the line
tokenized_datasets = tokenized_datasets["train"].to_tf_dataset(columns=["input_ids"], shuffle=True, batch_size=16, collate_fn=data_collator)
to the end of the notebook.
However, when I ran it, I got the following error:
RuntimeError: Index put requires the source and destination dtypes match, got Float for the destination and Long for the source.
Why didn't this work? How can I use the collator?
The issue is not your code, but how the collator is set up: by default it does not return TensorFlow tensors.
If you look at this, you'll see that their collator uses the return_tensors="tf" argument. If you add this to your collator, your code for using the collator will work.
In short, your collator creation should look like
data_collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15, return_tensors="tf")
This will fix the issue.
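Putting it together, a minimal sketch (assuming the tokenizer and tokenized_datasets from the tutorial notebook):
from transformers import DataCollatorForLanguageModeling
# return_tensors="tf" makes the collator emit TensorFlow tensors instead of PyTorch ones
data_collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15, return_tensors="tf")
tf_dataset = tokenized_datasets["train"].to_tf_dataset(columns=["input_ids"], shuffle=True, batch_size=16, collate_fn=data_collator)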
I am trying to work with the quite recently published tensorflow_dataset API to train a Keras model on the Open Images Dataset. The dataset is about 570 GB in size. I downloaded the data with the following code:
import tensorflow_datasets as tfds
import tensorflow as tf
open_images_dataset = tfds.image.OpenImagesV4()
open_images_dataset.download_and_prepare(download_dir="/notebooks/dataset/")
After the download was complete, the connection to my Jupyter notebook was somehow interrupted, but the extraction seemed to have finished as well; at least all downloaded files had a counterpart in the "extracted" folder. However, I am not able to access the downloaded data now:
tfds.load(name="open_images_v4", data_dir="/notebooks/open_images_dataset/extracted/", download=False)
This only gives the following error:
AssertionError: Dataset open_images_v4: could not find data in /notebooks/open_images_dataset/extracted/. Please make sure to call dataset_builder.download_and_prepare(), or pass download=True to tfds.load() before trying to access the tf.data.Dataset object.
When I call the function download_and_prepare() it only downloads the whole dataset again.
Am I missing something here?
Edit:
After the download the folder under "extracted" has 18 .tar.gz files.
This is with tensorflow-datasets 1.0.1 and tensorflow 2.0.
The folder hierarchy should be like this:
/notebooks/open_images_dataset/extracted/open_images_v4/0.1.0
All datasets have a version, so with that layout the data can be loaded like this:
ds = tfds.load('open_images_v4', data_dir='/notebooks/open_images_dataset/extracted', download=False)
I didn't have open_images_v4 data. I put cifar10 data into a folder named open_images_v4 to check what folder structure tensorflow_datasets was expecting.
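If you want to inspect your own layout, a quick sanity check in plain Python (paths as in the question):
import os
base = '/notebooks/open_images_dataset/extracted'
# tfds expects <data_dir>/<dataset_name>/<version>/ containing the prepared files
for name in sorted(os.listdir(base)):
    sub = os.path.join(base, name)
    print(name, '->', sorted(os.listdir(sub)) if os.path.isdir(sub) else '(file)')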
The solution to this was to also use the "data_dir" parameter when initializing the dataset:
builder = tfds.image.OpenImagesV4(data_dir="/raid/openimages/dataset")
builder.download_and_prepare(download_dir="/raid/openimages/dataset")
This way the dataset is downloaded and extracted into the same directory. Before, it was (for me unnoticeably) extracting to the default directory under /home/.../. That's what caused the error, as there wasn't enough space left under my home directory.
After the extraction, the folder structure is exactly as Manoj-Mohan described.
The above solution didn't work for me, but this did:
builder = tfds.builder(name='folder_name', data_dir=data_dir)
builder.download_and_prepare(download_dir="/home/...")
ds = builder.as_dataset()
I used the following code in Pycharm:
import tensorflow as tf
sess = tf.Session()
a = tf.constant(value=5, name='input_a')
b = tf.constant(value=3, name='input_b')
c = tf.multiply(a,b, name='mult_c')
d = tf.add(a,b, name='add_d')
e = tf.add(c,d, name='add_e')
print(sess.run(e))
writer = tf.summary.FileWriter("./tb_graph", sess.graph)
writer.close()  # flush and close the event file so TensorBoard can read the graph
Then I pasted the following line into the Anaconda Prompt:
tensorboard --logdir=="tb_graph"
I tried both "" and '', as proposed in Tensorboard: No graph definition files were found. It does nothing for me.
I had a similar issue. It occurred when I specified the 'logdir' folder inside single quotes instead of double quotes. Hope this may be helpful to you.
e.g. tensorboard --logdir='my_graph' -> TensorBoard didn't detect the graph
tensorboard --logdir="my_graph" -> TensorBoard detected the graph
I checked the code on a laptop with Ubuntu 16.04 and on another one with Win10, so it probably isn't a system-based error.
I also tried adding and removing --host=127.0.0.1 in the Anaconda Prompt and checking several times both http://localhost:6006/ and http://desktop-.......:6006/.
Still same error:
No graph definition files were found.
To store a graph, create a tf.summary.FileWriter and pass the graph either via the constructor, or by calling its add_graph() method. You may want to check out the graph visualizer tutorial.
....
Please tell me what is wrong in the code/prompt command.
EDIT: On Ubuntu I used the normal terminal, of course.
EDIT2: I used both = and == in command prompt
The answer to my question is:
1) change "./new1_dir" into ".\\new1_dir"
and
2) pass the full path to the Anaconda Prompt: --logdir="C:\Users\Admin\Documents\PycharmProjects\try_tb\new1_dir"
Thanks @BugKiller for your help!
EDIT: Working only on Windows for me, but still better than nothing
EDIT2: Works on Ubuntu 16.04 too
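A portable way to sidestep the separator problem is to build the log directory path in Python and hand the printed absolute path to --logdir (a sketch, not part of the original answer; new1_dir as above):
import os
log_dir = os.path.abspath("new1_dir")  # absolute path with the right separators for your OS
writer = tf.summary.FileWriter(log_dir, sess.graph)
writer.close()
print(log_dir)  # paste this exact value into tensorboard --logdir=...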
I use tf.summary.tensor_summary in my code, following this: https://www.tensorflow.org/api_docs/python/tf/summary/tensor_summary
But I didn't see anything new in tensorboard, and tensorboard also printed some warnings:
W0321 12:50:47.244003 Reloader tf_logging.py:121] This summary with tag 'component_3_finalize/wealth_tensor' is oddly not associated with a plugin.
How do I make this work? Do I need to install some plugin? I didn't find any docs on this.
UPDATE:
Here is how I create my summary:
def finalize_graph(self, graph_building_context):
    tf.summary.scalar('loss', loss)
    # the wealth tensor is of shape [B], where B is the batch size at runtime
    tf.summary.scalar('wealth', tf.reduce_mean(wealth))
    # uhmm, this tensor_summary doesn't work yet
    tf.summary.tensor_summary('wealth_tensor', wealth)
Then I use a MonitoredTrainingSession, which saves the summaries by default. I can see my loss and wealth scalar summaries, but not this wealth_tensor summary.
After following this tutorial on summaries and TensorBoard, I've been able to successfully save and look at data with TensorBoard. Is it possible to open this data with something other than TensorBoard?
By the way, my application is to do off-policy learning. I'm currently saving each state-action-reward tuple using SummaryWriter. I know I could manually store/train on this data, but I thought it'd be nice to use TensorFlow's built in logging features to store/load this data.
As of March 2017, the EventAccumulator tool has been moved from Tensorflow core to the Tensorboard Backend. You can still use it to extract data from Tensorboard log files as follows:
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator
event_acc = EventAccumulator('/path/to/summary/folder')
event_acc.Reload()
# Show all tags in the log file
print(event_acc.Tags())
# E. g. get wall clock, number of steps and value for a scalar 'Accuracy'
w_times, step_nums, vals = zip(*event_acc.Scalars('Accuracy'))
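If you prefer the scalars as a table, you can put them straight into a pandas DataFrame (a small addition on top of the code above; pandas is an extra dependency, not required by the accumulator):
import pandas as pd
df = pd.DataFrame({'wall_time': w_times, 'step': step_nums, 'value': vals})
print(df.head())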
Easy: the data can actually be exported to a .csv file within TensorBoard under the Events tab, and then e.g. loaded into a Pandas dataframe in Python. Make sure you check the Data download links box.
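Loading such an export back takes one line (the filename here is hypothetical; TensorBoard names the download after the run and the tag):
import pandas as pd
df = pd.read_csv('run_.-tag-Accuracy.csv')  # columns: Wall time, Step, Value
print(df.head())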
For a more automated approach, check out the TensorBoard readme:
If you'd like to export data to visualize elsewhere (e.g. iPython
Notebook), that's possible too. You can directly depend on the
underlying classes that TensorBoard uses for loading data:
python/summary/event_accumulator.py (for loading data from a single
run) or python/summary/event_multiplexer.py (for loading data from
multiple runs, and keeping it organized). These classes load groups of
event files, discard data that was "orphaned" by TensorFlow crashes,
and organize the data by tag.
As another option, there is a script
(tensorboard/scripts/serialize_tensorboard.py) which will load a
logdir just like TensorBoard does, but write all of the data out to
disk as json instead of starting a server. This script is setup to
make "fake TensorBoard backends" for testing, so it is a bit rough
around the edges.
I think the data are protobufs encoded in RecordReader format. To get serialized strings out of the files you can use py_record_reader or build a graph with a TFRecordReader op, and to deserialize those strings to protobuf use the Event schema. If you get a working example, please update this question, since we seem to be missing documentation on this.
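For scalars, a working sketch along these lines (tf.train.summary_iterator does the record reading and Event parsing for you; under TF 2.x it lives at tf.compat.v1.train.summary_iterator):
import tensorflow as tf
for event in tf.train.summary_iterator('path/to/events.out.tfevents.xxx'):
    for value in event.summary.value:
        if value.HasField('simple_value'):  # keep only scalar summaries
            print(event.step, value.tag, value.simple_value)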
I did something along these lines for a previous project. As mentioned by others, the main ingredient is TensorFlow's event accumulator:
from tensorflow.python.summary import event_accumulator as ea
acc = ea.EventAccumulator("folder/containing/summaries/")
acc.Reload()
# Print tags of contained entities, use these names to retrieve entities as below
print(acc.Tags())
# E. g. get all values and steps of a scalar called 'l2_loss'
xy_l2_loss = [(s.step, s.value) for s in acc.Scalars('l2_loss')]
# Retrieve images, e. g. the first one labeled as 'generator'
img = acc.Images('generator/image/0')[0]  # Images() returns a list of image events
with open('img_{}.png'.format(img.step), 'wb') as f:
    f.write(img.encoded_image_string)
You can also use tf.train.summary_iterator: to extract events from a ./logs folder where only classic scalars lr, acc, loss, val_acc and val_loss are present, you can use this GIST: tensorboard_to_csv.py
Chris Cundy's answer works well when you have fewer than 10000 data points in your tfevent file. However, when you have a large file with over 10000 data points, Tensorboard will automatically sample them and give you at most 10000 points. This is quite annoying underlying behavior, as it is not well documented. See https://github.com/tensorflow/tensorboard/blob/master/tensorboard/backend/event_processing/event_accumulator.py#L186.
To get around it and retrieve all data points, a somewhat hacky way is:
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# A dict lookalike that claims to contain every key and returns 0
# ("keep everything") for each, disabling the downsampling entirely.
class FalseDict(object):
    def __getitem__(self, key):
        return 0
    def __contains__(self, key):
        return True

event_acc = EventAccumulator('path/to/your/tfevents', size_guidance=FalseDict())
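A less hacky alternative, assuming your TensorBoard version still ships the constant, is the module's own keep-everything size guidance:
from tensorboard.backend.event_processing import event_accumulator
# STORE_EVERYTHING_SIZE_GUIDANCE maps every tag type to 0, i.e. no downsampling
event_acc = event_accumulator.EventAccumulator('path/to/your/tfevents', size_guidance=event_accumulator.STORE_EVERYTHING_SIZE_GUIDANCE)
event_acc.Reload()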
It looks like for tb version >=2.3 you can streamline the process of converting your tb events to a pandas dataframe using tensorboard.data.experimental.ExperimentFromDev().
It requires you to upload your logs to TensorBoard.dev, though, which is public. There are plans to expand the capability to locally stored logs in the future.
https://www.tensorflow.org/tensorboard/dataframe_api
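A minimal sketch of that API (the experiment ID is a placeholder; you get the real one from the TensorBoard.dev URL after uploading your logs):
from tensorboard.data.experimental import ExperimentFromDev
experiment = ExperimentFromDev('YOUR_EXPERIMENT_ID')  # hypothetical ID
df = experiment.get_scalars()  # long-form DataFrame: one row per (run, tag, step)
print(df.head())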
You can also use the EventFileLoader to iterate through a tensorboard file
from tensorboard.backend.event_processing.event_file_loader import EventFileLoader

for event in EventFileLoader('path/to/events.out.tfevents.xxx').Load():
    print(event)
Surprisingly, the Python package tbparse has not been mentioned yet.
From documentation:
Installation:
pip install tensorflow  # or tensorflow-cpu
pip install -U tbparse  # requires Python >= 3.7
Note: If you don't want to install TensorFlow, see Installing without TensorFlow.
We suggest using an additional virtual environment for parsing and plotting the tensorboard events. So no worries if your training code uses Python 3.6 or older versions.
Reading one or more event files with tbparse only requires 5 lines of code:
from tbparse import SummaryReader
log_dir = "<PATH_TO_EVENT_FILE_OR_DIRECTORY>"
reader = SummaryReader(log_dir)
df = reader.scalars
print(df)