View logs from Keras Tuner in TensorBoard

I am tuning a neural net with Keras Tuner.
I am creating the logs this way:
tuner = RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=5,
    executions_per_trial=3,
    directory='my_dir',
    project_name='helloworld')
This gives me this directory of log files:
/my_dir/helloworld/
-trial_xxxxx
-trial_yyyy
-trial_zzzz
-oracle.json
-tuner0.json
I can get the summary by writing
tuner.results_summary()
or even get the best model using
tuner.get_best_models(num_models=1)[0]
But I also want to explore the runs in more detail and see if there are any patterns. For that I want to use TensorBoard, but if I write:
%tensorboard --logdir="my_dir/helloworld"
I only get an empty TensorBoard. I guess the problem here is that Keras Tuner and TensorBoard write logs in different file formats.
My question is still: has anyone been able to run hyperparameter optimization in Keras Tuner and then view the log files in TensorBoard afterwards?

TensorBoard needs separate logging through callbacks.
Before running tuner.search(), add
tensorboard = TensorBoard(log_dir='tensorboard_log_dir')
and pass the TensorBoard callback to tuner.search():
tuner.search(X_train, y_train, callbacks=[tensorboard])
Then you can run
%tensorboard --logdir='tensorboard_log_dir'
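For reference, here is a minimal end-to-end sketch of this approach. The dummy data, the build_model definition and the log directory name are illustrative assumptions (and the import path may be keras_tuner instead of kerastuner in newer releases):

import numpy as np
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from kerastuner.tuners import RandomSearch

# Dummy data purely for illustration.
X_train, y_train = np.random.rand(100, 20), np.random.randint(0, 10, 100)
X_val, y_val = np.random.rand(20, 20), np.random.randint(0, 10, 20)

# A hypothetical build_model; substitute your own.
def build_model(hp):
    model = keras.Sequential([
        keras.layers.Dense(hp.Int('units', 32, 256, step=32), activation='relu'),
        keras.layers.Dense(10, activation='softmax')])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

tuner = RandomSearch(build_model,
                     objective='val_accuracy',
                     max_trials=5,
                     executions_per_trial=3,
                     directory='my_dir',
                     project_name='helloworld')

# The TensorBoard callback writes event files alongside the tuner's own JSON logs,
# and those event files are what %tensorboard can display.
tensorboard = TensorBoard(log_dir='tensorboard_log_dir')
tuner.search(X_train, y_train,
             validation_data=(X_val, y_val),
             epochs=5,
             callbacks=[tensorboard])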

TensorBoard callback in model.evaluate()

According to the official Keras documentation, the TensorBoard callback can be used with Model.evaluate:
When used in Model.evaluate, in addition to epoch summaries, there will be a summary that records evaluation metrics vs Model.optimizer.iterations written. The metric names will be prepended with evaluation, with Model.optimizer.iterations being the step in the visualized TensorBoard.
I would assume that passing a TensorBoard callback to Model.evaluate would create a folder eval that contains the log files. But nothing happens when I add the callback:
Model.evaluate(xTest, yTest, callbacks=[TensorBoard(log_dir="./Logs/")])
The TensorBoard callback works just fine in Model.fit.
Does anyone know how to visualize the model evaluation with TensorBoard?
Could you try again by specifying an evaluate folder inside the logs folder to store the model.evaluate() metrics logs:
model.evaluate(x_test, y_test, callbacks=[TensorBoard(log_dir="logs/evaluate/")])
and then visualize the model.evaluate() results on TensorBoard:
%tensorboard --logdir logs/evaluate/
Please find the working gist for the same. Thank you.
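A minimal sketch of that layout, with a toy model and data that are only for illustration; training and evaluation logs go to separate subfolders so both runs show up when TensorBoard is pointed at the parent logs directory:

import numpy as np
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard

# Toy model and data purely for illustration.
x_train, y_train = np.random.rand(100, 8), np.random.randint(0, 2, 100)
x_test, y_test = np.random.rand(20, 8), np.random.randint(0, 2, 20)

model = keras.Sequential([keras.layers.Dense(16, activation='relu'),
                          keras.layers.Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Training summaries go to logs/fit/, evaluation summaries to logs/evaluate/.
model.fit(x_train, y_train, epochs=5,
          callbacks=[TensorBoard(log_dir="logs/fit/")])
model.evaluate(x_test, y_test,
               callbacks=[TensorBoard(log_dir="logs/evaluate/")])

Running %tensorboard --logdir logs then shows both the training and the evaluation run.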

How can I train with my own dataset with darkflow?

I'm a beginner with some programming experience. I'm trying to train darkflow with my own dataset. I'm following these instructions:
https://github.com/thtrieu/darkflow
So far I have done the following steps.
installed darkflow and the relevant modules
created test images and made annotations (Pascal VOC).
https://ibb.co/y4HmtGz
https://ibb.co/GkxLshK
If I have understood correctly, darkflow training requires Pascal VOC annotations?
My problem is that I don't know how to start the training. How can I start the training process, and how can I test if the neural net is working? Am I supposed to get weights as a result of training?
You can choose to use pre-trained weights from here. Download cfg and weights.
Assuming you have darkflow installed, you can train your network like this:
flow --model cfg/<your-config-filename>.cfg --load bin/<filename>.weights --train --annotation train/Annotations --dataset train/Images --epoch 100 --gpu 1.0
If you want to train your network from scratch w/o using any pre-trained weights,
you can do this:
flow --model cfg/<your-config-filename>.cfg --train --annotation train/Annotations --dataset train/Images --epoch 100 --gpu 1.0
After training starts, model checkpoints are saved inside the ckpt directory. You can load the latest checkpoint and test it on sample images.
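For testing, a short sketch using darkflow's Python API; the config filename and the sample image path are placeholders, and "load": -1 picks up the most recent checkpoint from the ckpt directory:

from darkflow.net.build import TFNet
import cv2

options = {"model": "cfg/<your-config-filename>.cfg",
           "load": -1,        # -1 = most recent checkpoint in ckpt/
           "threshold": 0.3,
           "gpu": 1.0}        # omit this line to run on CPU

tfnet = TFNet(options)
img = cv2.imread("sample_img/sample.jpg")
predictions = tfnet.return_predict(img)
print(predictions)  # list of dicts with label, confidence and bounding box coordinates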

Deploying model

I just finished training a categorizer model exactly the way described in https://github.com/GoogleCloudPlatform/MiniCat but I am not sure how to use the model to make predictions.
Trained model in the directory Train
Data in the directory Data
I'm really new to this and I don't know where to start. I read something about deploying models in https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models, but how do I even create a SavedModel?
Any answers will be appreciated.
So in the folder where you have the trained model, you just need to load that model in your session. First create a saver (you can also use it for loading):
train_saver = tf.train.Saver()
Now inside your session:
train_saver.restore(sess, 'path/to/model/doc_classifier_cnn_model.ckpt')
Then just feed the tensors with feed_dict.
Another option is to create a protobuf file (.pb), but to do that you will still have to load the model as described above.
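If the goal is to deploy on ML Engine, one possible route is to restore the checkpoint and export a SavedModel. Below is a sketch assuming TensorFlow 1.x; the input and output tensor names are placeholders you would replace with the actual names from the doc classifier graph:

import tensorflow as tf

with tf.Session() as sess:
    # Rebuild the graph from the .meta file and restore the trained weights.
    saver = tf.train.import_meta_graph('path/to/model/doc_classifier_cnn_model.ckpt.meta')
    saver.restore(sess, 'path/to/model/doc_classifier_cnn_model.ckpt')

    graph = tf.get_default_graph()
    inputs = graph.get_tensor_by_name('input:0')    # placeholder name is an assumption
    outputs = graph.get_tensor_by_name('scores:0')  # output name is an assumption

    # Write a SavedModel directory that ML Engine can deploy.
    tf.saved_model.simple_save(sess, 'export/1',
                               inputs={'input': inputs},
                               outputs={'scores': outputs})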

TypeError in freeze_graph tool while trying to freeze a graph in Tensorflow 1.3

I train a tf.contrib.learn estimator (specifically, DNNLinearCombinedRegressor) in Python and save the model's parameters and graph by specifying model_dir when defining the estimator. After training is done, I try to freeze the graph using the CLI as mentioned in this post, and get this error:
TypeError: names_to_saveables must be a dict mapping string names to Tensors/Variables. Not a variable: Tensor("dnn/hiddenlayer_0/biases:0", shape=(10,), dtype=float32)
Any idea how to resolve this? Also, how do I make sure that after training, when I use the estimator's predict_scores() for prediction, the frozen graph file is used to create the model? I want to freeze the graph so that predict_scores() doesn't reload the graph and its variables every time for prediction. Thanks in advance.

keras model structure visualization

I want to see my Keras model like this. I used K.get_session().graph and got
tensorflow.python.framework.ops.Graph at 0x7f2a8b809400
but I want to see this graph and save it. I am using the TensorFlow backend.
Install tensorboard
Import it
from keras.callbacks import TensorBoard
Load it into a variable
tbCallBack = TensorBoard(log_dir='Graph',
                         histogram_freq=10,
                         write_graph=True,
                         write_images=True)
And then use that as a callback at training:
model.fit(x, y, ...,
          callbacks=[tbCallBack])
Make sure you have made a directory called 'Graph', or whatever you want. Then before training, run in a terminal:
tensorboard --logdir Graph
And then you can see your graph in your browser.
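Putting the steps together, a minimal self-contained sketch; the toy model and data are only for illustration (note that histogram_freq needs validation data to log histograms):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import TensorBoard

# Toy model and data purely for illustration.
model = Sequential([Dense(32, activation='relu', input_shape=(8,)),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

x = np.random.rand(200, 8)
y = np.random.randint(0, 2, 200)

tbCallBack = TensorBoard(log_dir='Graph',
                         histogram_freq=10,
                         write_graph=True,
                         write_images=True)
model.fit(x, y, epochs=20, validation_split=0.2, callbacks=[tbCallBack])

After that, tensorboard --logdir Graph serves the run and the model structure appears under the Graphs tab.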