Visualize results with TensorBoard - tensorflow

I have a list in which each element is composed of 4 entries: episode, reward, exploration_rate, and running average. I want to visualize these results with TensorBoard.
Is there some way to visualize these results?
It should be mentioned that I already have the results, so I can't use callbacks. Currently, I have the results as a Matplotlib plot (presented in the figure). However, I want to use TensorBoard.
Thanks.

Start by activating the environment in which TensorFlow is installed:
conda activate <tensorflow_env>
Then run the following command, passing your log directory as the parameter:
tensorboard --logdir <log directory>
For more details, see:
https://www.tensorflow.org/tensorboard/image_summaries
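Since you already have the results, you can also write them to a log directory yourself with the tf.summary API and then point TensorBoard at that directory. A minimal sketch, assuming TensorFlow 2.x; the results list and the logs/run1 directory are made-up names:
import tensorflow as tf

# Hypothetical list of (episode, reward, exploration_rate, running_average) tuples.
results = [(0, 1.0, 0.9, 1.0), (1, 3.0, 0.8, 2.0)]

writer = tf.summary.create_file_writer("logs/run1")  # the <log directory> you pass to tensorboard --logdir
with writer.as_default():
    for episode, reward, exploration_rate, running_avg in results:
        tf.summary.scalar("reward", reward, step=episode)
        tf.summary.scalar("exploration_rate", exploration_rate, step=episode)
        tf.summary.scalar("running_average", running_avg, step=episode)
writer.flush()
Each value then shows up as a scalar curve over episodes under TensorBoard's Scalars tab.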


Get bounding box coordinates from Object Detection API after eval

I used the Object Detection API from TensorFlow to train my model, and the eval script also worked fine.
However, I want to get the coordinates of the bounding boxes in the eval images. Is it possible to do that in the eval script? How can I do it?
I used this script.
A few examples (how many is set by num_visualizations in the config file) are shown on TensorBoard.
If you want more than that, I suggest exporting the model and running inference.
See the TF OD tutorial notebook for how to use an exported model for inference and how to visualize the results.
You can refer to the following link.
Although it is not related to the eval.py script, it gets the job done: if you specify a directory containing the test images, you can get the bounding box coordinates of all the detected objects using the model you have trained.
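For reference, a minimal sketch of the export-and-infer route, assuming the TF 2.x Object Detection API and its standard output keys (detection_boxes, detection_scores); the paths and the 0.5 score threshold are placeholders:
import numpy as np
import tensorflow as tf
from PIL import Image

# Load the exported SavedModel (placeholder path).
detect_fn = tf.saved_model.load("/path/to/exported_model/saved_model")

image = np.array(Image.open("/path/to/test_image.jpg"))
input_tensor = tf.convert_to_tensor(image)[tf.newaxis, ...]  # shape [1, H, W, 3]

detections = detect_fn(input_tensor)
boxes = detections["detection_boxes"][0].numpy()    # normalized [ymin, xmin, ymax, xmax]
scores = detections["detection_scores"][0].numpy()
print(boxes[scores > 0.5])  # coordinates of the confident detections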

Tensorboard projector will compute PCA endlessly

I have just over 100k word embeddings that I created using gensim, each originally with 200 dimensions. I've been trying to visualize them in TensorBoard's projector, but so far I have only failed.
My problem is that TensorBoard seems to freeze while computing PCA. At first, I left the page open for 16 hours, imagining that it was just too much to calculate, but nothing happened. At that point, I started to test different scenarios in case all I needed was more time and I was trying to rush things. The following is a list of my tests so far, all of which failed at the same spot, computing PCA:
I plotted only 10 points of 200 dimensions;
I retrained my gensim model so that I could reduce its dimensionality to 100;
Then I reduced it to 10;
Then to 2;
Then I tried plotting only 2 points, i.e. 2 two-dimensional points;
I am using TensorFlow 1.11.
You can find my last saved TensorFlow session here; would you mind trying it out?
I am still a beginner, so I used a couple of tutorials to get me started; I have used Sud Harsan's work so far.
Any help is much appreciated. Thanks.
Updates:
A) I've found someone else dealing with the same problem; I tried the solution provided, but it didn't change anything.
B) I thought it could have something to do with my installation, so I tried uninstalling TensorFlow and installing it again; no luck. I then created a new environment dedicated to TensorFlow, and that also didn't work.
C) Assuming there was something wrong with my code, I ran TensorFlow's basic embedding tutorial to check whether I could open its projector results. And guess what?! I still can't get past "Calculating PCA".
Now, I did visit the online projector example and that loads perfectly.
Again, any help would be more than appreciated. Thanks!
I have the same problem with word2vec_basic.py
My environment: win10, conda, python 3.6.7, tensorflow 1.11, tensorboard 1.11
That may not be your fault, because I rolled back TensorFlow & TensorBoard from 1.11 to 1.7,
and guess what?! The projector appeared after just a few seconds!
reference
Update 10/11
TensorBoard & TensorFlow 1.12 are available in conda today; I gave them a try and this problem seems to be fixed.
As mentioned by Bluedrops, updating TensorBoard and TensorFlow seems to fix the problem.
I created a new environment with conda, installed the newest versions of TensorFlow, TensorBoard and their dependencies, and that fixed the issue.

Is it possible to make a TensorFlow graph summary?

I'm aware of TensorBoard and how awesome it is, but I think that a simple console output with a summary of the current graph is better (and faster) for prototyping purposes.
I also know that I can generate a TensorBoard graph by simply running a session with the last network node, as shown here.
What I'm looking for is something similar to model.summary() from Keras.
In other words: how do I iterate over a TensorFlow graph and print out only the custom high-level layers, with their shapes and dtypes, in the same order in which those layers were created?
It's certainly possible. If you are using the tf.keras wrapper to build your model, you can easily visualize the graph, even before the model.compile() method executes.
It's a Keras built-in function called plot_model().
Note: this method depends on the graphviz and pydot libraries.
For pydot installation: pip install pydot
For graphviz installation, follow the steps on this page; you will probably also have to restart the machine, because the installer creates system environment variables.
For a tutorial on how to use this method, follow this link.
To plot your model with shapes and dtypes before training, you could use:
tf.keras.utils.plot_model(model, show_shapes=True, expand_nested=True, show_dtype=True)
where "model" is your built model; the output is rendered as a graph image of the model.
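For the console side of the question, model.summary() already prints a Keras-style table; here is a small, hypothetical tf.keras model just to show both calls together (show_dtype needs a reasonably recent TF release):
import tensorflow as tf

# Toy model, purely for illustration.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.summary()  # layer names, output shapes and parameter counts in the console
tf.keras.utils.plot_model(model, to_file="model.png",
                          show_shapes=True, show_dtype=True, expand_nested=True)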

JupyterLab output doesn't show visualization

I encountered this issue with 2 different visualization libraries:
pyLDAvis and displaCy (spaCy).
When I execute code in JupyterLab (Python 3 kernel), the expected output is for the notebook to show the graph or web content. But my notebook doesn't show any graph / dependency image; I only see textual output in JupyterLab.
e.g.
displacy.serve(doc, style='dep')
I'm using the Kaggle Docker image, which has JupyterLab, and on top of that I have updated to the latest packages.
Any pointers on whether this is JupyterLab-related or an issue with the underlying packages?
I can only really comment on the spaCy part of this, but one thing I noticed is that you are using displacy.serve instead of displacy.render, which would be the correct method to call from within a Jupyter environment (see the spaCy visualizer docs for a full example and more details). The reason behind this is that displacy.serve will start a web server to show the visualization in a browser – all of which is not necessary if you're already in a Jupyter Notebook. So when you call displacy.render, it will detect your Jupyter environment, and wrap the visualization accordingly. You can also set jupyter=True to force this behaviour.
try
from spacy import displacy
displacy.render(doc, style="dep", jupyter=True, options={'distance': 140})
or
displacy.render(doc, style="ent", jupyter=True, options={'distance': 140})
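The pyLDAvis side of the question is similar: it also has to be told to render inline rather than serve a page. A hedged sketch assuming a fitted gensim LDA model; lda_model, corpus and dictionary are placeholders, and the helper module is pyLDAvis.gensim or pyLDAvis.gensim_models depending on your pyLDAvis version:
import pyLDAvis
import pyLDAvis.gensim_models as gensimvis  # older pyLDAvis versions: import pyLDAvis.gensim as gensimvis

pyLDAvis.enable_notebook()  # embed the visualization in the notebook output
panel = gensimvis.prepare(lda_model, corpus, dictionary)  # your fitted gensim objects (placeholders)
pyLDAvis.display(panel)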

Tensorflow, equivalent of Theano's pydotprint?

In Theano, I can use pydotprint to generate a nice graph of my model. Very useful for debugging, and for presenting too. Is there an equivalent for TensorFlow?
As @JHafdahl points out, TensorBoard provides graph visualization for TensorFlow graphs, which includes support for summarizing complex nested subgraphs.
To visualize a graph, build a TensorFlow graph as normal, then add the following statements to your Python program:
import tensorflow as tf
# In TF 1.x this class was renamed: use tf.summary.FileWriter("/path/to/logs", tf.get_default_graph()).
writer = tf.train.SummaryWriter("/path/to/logs", tf.get_default_graph().as_graph_def())
writer.flush()
Then, in a separate terminal, run TensorBoard to visualize your graph:
$ tensorboard --logdir=/path/to/logs --port 6006
Finally, connect to TensorBoard by opening http://localhost:6006 in your web browser. Clicking on the "Graph" tab will show the visualization of your graph; see the graph visualization tutorial for more details.
Look into TensorBoard, which ships with TensorFlow. I use it to track the performance of my models and make sure they are converging.