I am currently training a CycleGAN on the horse2zebra dataset using Colab.
It seems that the logs are being written to a .txt file. The original repository for CycleGAN uses visdom to visualize error logs, but since Colab doesn't support visdom, I can't use it to visualize the logs.
With that in mind, can I visualize the same log.txt file using TensorBoard? And if yes, how should I go about it?
Is it the same as
tensorboard --logdir path to log.txt
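Not quite: TensorBoard can't read a plain .txt log; `tensorboard --logdir` expects event files written by a summary writer. One workaround is to parse log.txt yourself and replay the losses as scalar summaries. A minimal sketch, assuming the loss_log.txt line format used by the original CycleGAN repository (the regex and the iters-per-epoch value are assumptions you may need to adjust for your run):

```python
import re

# Assumed log-line format from the CycleGAN repo's loss_log.txt, e.g.:
# (epoch: 1, iters: 100, time: 0.123, data: 0.045) D_A: 0.250 G_A: 0.300
LINE_RE = re.compile(r"\(epoch: (\d+), iters: (\d+)[^)]*\)\s*(.*)")
LOSS_RE = re.compile(r"(\w+): ([-\d.]+)")

def parse_log_line(line):
    """Return (epoch, iters, {loss_name: value}) or None for non-loss lines."""
    m = LINE_RE.match(line.strip())
    if not m:
        return None
    losses = {name: float(v) for name, v in LOSS_RE.findall(m.group(3))}
    return int(m.group(1)), int(m.group(2)), losses

def export_to_tensorboard(log_path, logdir, iters_per_epoch):
    """Replay parsed losses as TensorBoard scalar summaries (needs TF 2.x)."""
    import tensorflow as tf  # Colab ships with TensorFlow preinstalled
    writer = tf.summary.create_file_writer(logdir)
    with open(log_path) as f, writer.as_default():
        for line in f:
            parsed = parse_log_line(line)
            if parsed is None:
                continue
            epoch, iters, losses = parsed
            # Turn (epoch, iters) into a single monotonically increasing step.
            step = (epoch - 1) * iters_per_epoch + iters
            for name, value in losses.items():
                tf.summary.scalar(name, value, step=step)
```

After running export_to_tensorboard('log.txt', 'runs/cyclegan', iters_per_epoch=...) with iters_per_epoch set to your training-set size, point TensorBoard at the directory: tensorboard --logdir runs/cyclegan.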
I have a couple of TFRecord files that I made myself.
They work perfectly in TF1; I have used them in several projects.
However, if I use them with the TensorFlow Object Detection API on TF2 (running the model_main_tf2.py script), I see the following in TensorBoard:
[screenshot: TensorBoard Images tab]
It totally messes up the images.
(Running the /work/tfapi/research/object_detection/model_main.py script, or even legacy_train, the images look fine.)
Is TF2 using a different kind of encoding in TFRecords?
Or what else could cause such results?
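As far as I know, the on-disk TFRecord container format did not change between TF1 and TF2, so the files themselves are unlikely to be at fault; a mismatch between your feature keys and what the TF2 pipeline expects (e.g. image/format) is a more plausible suspect. To rule out file-level differences you can walk the records with the documented framing using only the standard library (a sketch that skips the CRC checks):

```python
import struct

def read_tfrecords(path):
    """Yield raw serialized record payloads from a TFRecord file.

    TFRecord framing: 8-byte little-endian length, 4-byte masked CRC of
    the length, the payload, then a 4-byte masked CRC of the payload.
    """
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if not header:
                return
            (length,) = struct.unpack("<Q", header)
            f.read(4)             # length CRC (not verified here)
            yield f.read(length)  # serialized tf.train.Example bytes
            f.read(4)             # payload CRC (not verified here)
```

Each yielded payload can then be parsed with tf.train.Example.FromString(raw) so you can inspect keys such as image/encoded and image/format against what the TF2 input pipeline expects.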
I'm looking at a DLC file which represents the graph used for a neural network inside of the Snapdragon Neural Processing Engine.
https://developer.qualcomm.com/docs/snpe/model_conv_tensorflow.html
I would like to visualize this model in something like TensorBoard. My understanding is that TensorBoard requires a PB file, which is what TensorFlow uses to save graphs.
Is there any way to convert a DLC file to a Tensorflow PB for visualization or another way to achieve this aim?
The SNPE SDK does not provide a tool to convert a DLC file to a PB or to any other framework-supported model format.
A platform like TensorBoard, which helps with debugging and visualizing the created model, is not available from the SNPE SDK.
I want to use TensorBoard with a non-TensorFlow app. I can see how to build the graph using GraphDef and associated classes, but I am not sure how to write it out so that TensorBoard will read it. This means that I have the graph in a serialized form, not as the Python graph class from TensorFlow.
To see a graph in TensorBoard, you need the weights, the tensor names, and the structure of the graph.
I don't totally understand your question, but if you are able to create a graph.pb file, then it is simple to view it in TensorBoard.
Here we create an empty graph structure using
graph_def = graph_pb2.GraphDef()
and then load our .pb file to set all the weights and names:
with open('path/to/graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def)
Let me know more details so that I can help you in a better way.
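Since you already have the graph in serialized form from a non-TensorFlow app, one way to get it in front of TensorBoard is to parse it into a GraphDef and write it out as an event file. A sketch, assuming TF 2.x (tf.summary.graph, which accepts a serialized GraphDef, was added around TF 2.5):

```python
def write_graph_for_tensorboard(graphdef_path, logdir):
    """Write a serialized GraphDef out as a TensorBoard event file (TF 2.x)."""
    import tensorflow as tf
    graph_def = tf.compat.v1.GraphDef()
    with open(graphdef_path, "rb") as f:
        graph_def.ParseFromString(f.read())  # bytes from your non-TF app
    writer = tf.summary.create_file_writer(logdir)
    with writer.as_default():
        tf.summary.graph(graph_def)  # assumed available in TF >= 2.5
    writer.close()
```

Then launch tensorboard --logdir with the same directory and open the Graphs tab.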
I've been using TensorFlow image recognition. I've built many scripts which interact with classify_image.py.
I also retrained the model using retrain.py with my own dataset.
How can I use the two generated files, output_graph.pb and output_labels.txt, with classify_image.py?
Ah, the docs say
If you'd like to use the retrained model in a Python program this example from #eldor4do shows what you'll need to do.
I just copied/edited that one file locally and ran python .\edited-retraining-example.py. That was easy.
Note that if you're on Windows, change all examples of /tmp/... to c:/tmp/....
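If you would rather wire it up directly than adapt that example, the essence is: load output_graph.pb into a graph, read output_labels.txt (one label per line), and run the final tensor on an encoded image. A sketch assuming the tensor names the classic retrain.py produced (DecodeJpeg/contents:0 and final_result:0 are assumptions; verify them against your graph):

```python
def load_labels(path):
    """output_labels.txt contains one label per line."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def classify(image_path, graph_path="output_graph.pb",
             labels_path="output_labels.txt"):
    """Run a retrained graph on one JPEG image (TF 1.x-style session API)."""
    import tensorflow as tf
    graph_def = tf.compat.v1.GraphDef()
    with open(graph_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    labels = load_labels(labels_path)
    with tf.compat.v1.Session() as sess:
        tf.import_graph_def(graph_def, name="")
        # Tensor names assumed from the classic retrain.py output.
        with open(image_path, "rb") as img:
            preds = sess.run("final_result:0",
                             {"DecodeJpeg/contents:0": img.read()})
    # Pair each label with its score, best first.
    return sorted(zip(labels, preds[0]), key=lambda p: -p[1])
```

On Windows, the same /tmp-to-c:/tmp substitution applies to the default graph and label paths here.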
I am starting to play with Distributed TensorFlow. I am able to distribute the training across different servers successfully, but I cannot see any summaries in TensorBoard.
Does anyone know if there are any limitation or caveat with this?
Thanks
There is a caveat, which is that TensorBoard doesn't support replicated summary writers. Other than that, it will work.
Choose one TensorFlow worker to be the summary writer, and have it write summaries to disk. Then, launch TensorBoard pointing to the summary files that you've saved (the simplest would be to launch TensorBoard on the same server that the summary worker is on - alternatively, you could copy the files off that server onto your machine, etc).
Note, in the special case where you are using Google Cloud, TensorBoard can read directly from GCS paths.