I am trying to view my network graph with TensorBoard. I have read the page https://www.tensorflow.org/get_started/summaries_and_tensorboard
My question is: can I visualize the graph without creating the summaries and the FileWriter?
Following http://ischlag.github.io/2016/06/04/how-to-use-tensorboard/ I added the following code after the session object was created:
writer = tf.summary.FileWriter("/tmp/tensorflow/", sess.graph)
Then I used the command in the blog:
tensorboard --logdir=run1:/tmp/tensorflow/ --port 6006
TensorBoard then prints the address you should open to visualize the graph:
"TensorBoard 0.1.6 at http://page:6006"
Related
For a given graph, how can we visualize it in TensorBoard using tf.compat.v1?
Sharing this here after searching everywhere: most of the documentation covers tf.keras, not tf.compat.v1 static graphs.
First, export the graph to a logdir that TensorBoard can use:
import tensorflow as tf
# Get the default graph
graph = tf.compat.v1.get_default_graph()
writer = tf.compat.v1.summary.FileWriter("logs", graph)
After that, simply open TensorBoard on the specified directory (here logs/):
python -m tensorboard.main --logdir logs/
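For a self-contained example, here is a minimal sketch (the tiny two-placeholder graph is purely illustrative, assuming TF 2.x with the v1 compatibility layer):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# A tiny illustrative graph
a = tf.compat.v1.placeholder(tf.float32, name="a")
b = tf.compat.v1.placeholder(tf.float32, name="b")
c = tf.add(a, b, name="sum")

# Export the default graph so TensorBoard can render it
writer = tf.compat.v1.summary.FileWriter("logs", tf.compat.v1.get_default_graph())
writer.close()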
I tried to visualize my training progress with TensorBoard, but when I ran the command to display it, nothing appeared and there was no error message: just a blank page saying the site took too long to respond. This is the callback code and the magic command I used:
#log directory
log_folder = "logs"
#callback
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_folder, histogram_freq=1)
#magic command
%tensorboard --logdir log_folder
and this is the image of the problem:
[Screenshot: blank TensorBoard page]
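One thing worth checking (an assumption on my part; the screenshot alone doesn't show the cause): the %tensorboard line magic does not expand a bare Python variable name, so --logdir log_folder points at a directory literally called log_folder rather than logs. Passing the actual path, or interpolating the variable, avoids this:

#magic command: pass the literal path...
%tensorboard --logdir logs

#...or interpolate the Python variable
%tensorboard --logdir {log_folder}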
I need to export a custom object detection model, fine-tuned on a custom dataset, to TensorFlow Lite, so that it can run on Android devices.
I'm using TensorFlow 2.4.1 on Ubuntu 18.04, and so far this is what I did:
1. I fine-tuned an 'ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8' model on a dataset of new images, using the 'model_main_tf2.py' script from the repository;
2. I exported the model using 'exporter_main_v2.py':
python exporter_main_v2.py --input_type image_tensor --pipeline_config_path .\models\custom_model\pipeline.config --trained_checkpoint_dir .\models\custom_model\ --output_directory .\exported-models\custom_model
which produced a Saved Model (.pb file);
3. I tested the exported model for inference, and everything works fine. In the detection routine, I used:
def get_model_detection_function(model):
    """Get a tf.function for detection."""

    @tf.function
    def detect_fn(image):
        """Detect objects in image."""
        image, shapes = model.preprocess(image)
        prediction_dict = model.predict(image, shapes)
        detections = model.postprocess(prediction_dict, shapes)
        return detections, prediction_dict, tf.reshape(shapes, [-1])

    return detect_fn
and the shape of the produced image object is 640x640, as expected.
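For context, the function is used roughly like this (a sketch: detection_model is the model restored from the checkpoint, and input_tensor is a batched image tensor of shape [1, 640, 640, 3]):

detect_fn = get_model_detection_function(detection_model)
detections, prediction_dict, shapes = detect_fn(input_tensor)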
Then, I tried to convert this .pb model to tflite.
After updating to the nightly version of TensorFlow (with the stable version, I got an error), I was actually able to produce a .tflite file by using this code:
import tensorflow as tf
from tflite_support import metadata as _metadata

saved_model_dir = 'exported-models/custom_model/'

# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()

# Save the model
with open('tflite/custom_model.tflite', 'wb') as f:
    f.write(tflite_model)
I tried to use this model in Android Studio, following the instructions given here.
However, I'm getting a couple of errors:
an error regarding 'Not a valid TensorFlow Lite model' (I still have to look into this further);
the error:
java.lang.IllegalArgumentException: Cannot copy to a TensorFlowLite tensor (serving_default_input_tensor:0) with 3 bytes from a Java Buffer with 270000 bytes.
The second error seems to indicate there's something weird with the input expected from the tflite model.
I examined the file with Netron, and the input appears to be expected to have a 1x1x1x3 shape. Or am I misinterpreting the graph?
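The same information can be read programmatically; a minimal sketch with the TFLite Interpreter, pointed at the file produced above:

import tensorflow as tf

# Load the converted model and inspect its expected inputs
interpreter = tf.lite.Interpreter(model_path='tflite/custom_model.tflite')
interpreter.allocate_tensors()

for detail in interpreter.get_input_details():
    print(detail['name'], detail['shape'], detail['dtype'])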
Should I somehow set the tensor input size when using the tflite exporter?
Anyway, what is the right way to export my custom model so that it can run on Android?
TF ops are supported via the Flex delegate. I bet that is the problem. To check whether it is, you can do the following (a consolidated command sequence follows the list):
1. Download the benchmark binary with Flex delegate support for TF ops. You can find it in the section Native benchmark binary at https://www.tensorflow.org/lite/performance/measurement. For example, for Android it is https://storage.googleapis.com/tensorflow-nightly-public/prod/tensorflow/release/lite/tools/nightly/latest/android_aarch64_benchmark_model_plus_flex
2. Connect your phone to your computer and, from wherever you downloaded the binary, run adb push <binary_name> /data/local/tmp
3. Push your model: adb push <tflite_model> /data/local/tmp
4. Open a shell with adb shell and go to the folder with cd /data/local/tmp. Then run the app with ./<binary_name> --graph=<tflite_model>
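Put together, the sequence looks like this (a sketch: the binary name comes from the download URL above and the model name from the converter step; the chmod follows the measurement docs):

adb push android_aarch64_benchmark_model_plus_flex /data/local/tmp
adb push custom_model.tflite /data/local/tmp
adb shell chmod +x /data/local/tmp/android_aarch64_benchmark_model_plus_flex
adb shell
cd /data/local/tmp
./android_aarch64_benchmark_model_plus_flex --graph=custom_model.tflite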
Info from:
https://www.tensorflow.org/lite/guide/ops_select
https://www.tensorflow.org/lite/performance/measurement
I am tuning a neural net with Keras Tuner.
I am creating logs this way:
tuner = RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=5,
    executions_per_trial=3,
    directory='my_dir',
    project_name='helloworld')
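Here build_model is the usual Keras Tuner model-building function; a minimal sketch for context, assuming a simple dense classifier (the layer sizes are arbitrary):

from tensorflow import keras

def build_model(hp):
    # One tunable hidden layer; 'units' is searched over by the tuner
    model = keras.Sequential([
        keras.layers.Dense(hp.Int('units', min_value=32, max_value=512, step=32),
                           activation='relu'),
        keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model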
This gives me this directory of log files:
/my_dir/helloworld/
-trial_xxxxx
-trial_yyyy
-trial_zzzz
-oracle.json
-tuner0.json
I can get the summary by writing
tuner.result_summary()
or even get the best model using
tuner.get_best_models(num_models=1)[0]
But I also want to explore the runs in more detail and see if there are any patterns. For that I want to use TensorBoard, but if I write:
%tensorboard --logdir="my_dir/helloworld"
I only get an empty TensorBoard. I guess the problem here is that Keras Tuner and TensorBoard write logs in different file formats.
My question is: has anyone been able to run hyperparameter optimization in Keras Tuner and then inspect the log files in TensorBoard afterwards?
TensorBoard needs separate logging through callbacks:
Before running tuner.search(), add
from tensorflow.keras.callbacks import TensorBoard

tensorboard = TensorBoard(log_dir='tensorboard_log_dir')
and pass the TensorBoard callback to tuner.search():
tuner.search(X_train, y_train, callbacks=[tensorboard])
Then you can run
%tensorboard --logdir='tensorboard_log_dir'
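One note on what to expect (version-dependent, as far as I can tell): recent Keras Tuner releases redirect a TensorBoard callback into a per-trial subdirectory of its log_dir, so each trial shows up as a separate run in TensorBoard and the trials can be compared side by side.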
I want to visualize a complex model with the TensorBoard graph view. I run the code below to restore the graph from a ckpt.meta file. However, the data flow is not displayed properly: I can't get useful information like input channels or image size, components are not connected explicitly, and inputs are represented as mul0-704, which looks like an intermediate variable.
I didn't run a training session, and the original training was run with tf.train.MonitoredTrainingSession(), which I don't have a firm grasp of yet. Why does this graph show up as a mess? I am a rookie with TensorBoard, and I want to refactor the code with PyTorch. Any help would be appreciated. Thanks in advance.
[Screenshot: disconnected graph, mul0-704]
import tensorflow as tf

# Restore the graph structure from the .meta checkpoint file
tf.train.import_meta_graph('meta_file_dir')
export_dir = 'export_log_dir'

# Print every node in the restored graph
for n in tf.get_default_graph().as_graph_def().node:
    print(n)

# Write the graph so TensorBoard can display it
with tf.Session() as sess:
    writer = tf.summary.FileWriter(export_dir, sess.graph)
    writer.close()
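To view the result, TensorBoard is then pointed at the export directory, as in the answers above:

tensorboard --logdir export_log_dir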