Is there a doc explaining the options for tensors_to_detections_calculator.cc? - mediapipe

I'm having difficulty figuring out the best values to set for the options of tensors_to_detections_calculator.cc, especially x_scale/y_scale and apply_exponential_on_box_size.
Is there a doc explaining how to set these options?
Thanks!

It looks like the settings correspond to the training configuration of my SSD model (the box coder section).
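For reference, a minimal sketch of how the two configs might line up, assuming a standard SSD setup from the TF Object Detection API; the field values below are illustrative and must come from your own training config:

# Training side (TF Object Detection API pipeline.config), box_coder section:
# faster_rcnn_box_coder {
#   y_scale: 10.0
#   x_scale: 10.0
#   height_scale: 5.0
#   width_scale: 5.0
# }
# Matching MediaPipe graph node; apply_exponential_on_box_size: true mirrors
# the exponential decoding of box sizes that this box coder uses:
node {
  calculator: "TensorsToDetectionsCalculator"
  input_stream: "TENSORS:detection_tensors"
  input_side_packet: "ANCHORS:anchors"
  output_stream: "DETECTIONS:detections"
  options: {
    [mediapipe.TensorsToDetectionsCalculatorOptions.ext] {
      num_classes: 90      # from your label map
      num_boxes: 1917      # number of anchors your model outputs
      num_coords: 4
      x_scale: 10.0        # must match the box coder's x_scale
      y_scale: 10.0        # must match the box coder's y_scale
      w_scale: 5.0         # must match the box coder's width_scale
      h_scale: 5.0         # must match the box coder's height_scale
      apply_exponential_on_box_size: true
    }
  }
}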

Related

How to set only some detected classes in FasterRCNN+InceptionResNet V2 Model? (tensorflow hub)

In TensorFlow Hub (FasterRCNN + openimages_v4), I am trying to keep only the detected classes that are related to 'fashion'. My thinking is that if I exclude the unrelated detected classes (e.g. airplane, human, etc.), I will end up with a faster model than the original FasterRCNN.
I have seen the schema example given on Google TensorFlow, but unfortunately it did not help.
Please show me how to restrict the TF Hub FasterRCNN to only some classes.
Thanks.
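For what it's worth, a minimal post-filtering sketch, assuming the openimages_v4 FasterRCNN module from TF Hub and some made-up 'fashion' entity names. Note that this only filters the detector's outputs; the model still scores every class internally, so it will not actually run faster than the original:

import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load(
    "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1"
).signatures["default"]

# Hypothetical 'fashion' classes; Open Images entity names are byte strings.
wanted = {b"Dress", b"Handbag", b"Footwear"}

def detect_fashion(image):
    # image: float32 tensor of shape [1, height, width, 3], values in [0, 1]
    result = detector(image)
    keep = [i for i, name in enumerate(result["detection_class_entities"].numpy())
            if name in wanted]
    # Keep only boxes/scores/entities belonging to the wanted classes.
    return {key: tf.gather(value, keep) for key, value in result.items()}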

Reload Keras-Tuner Trials from the directory

I'm trying to reload or access the Keras-Tuner Trials after the Tuner's search has completed for inspecting the results. I'm not able to find any documentation or answers related to this issue.
For example, I set up BayesianOptimization to search for the best hyper-parameters as follows:
## Build Hyper Parameter Search
tuner = kt.BayesianOptimization(build_model,
                                objective='val_categorical_accuracy',
                                max_trials=10,
                                directory='kt_dir',
                                project_name='lstm_dense_bo')
tuner.search((X_train_seq, X_train_num), y_train_cat,
             epochs=30,
             batch_size=64,
             validation_data=((X_val_seq, X_val_num), y_val_cat),
             callbacks=[callbacks.EarlyStopping(monitor='val_loss', patience=3,
                                                restore_best_weights=True)])
I see that this creates trial files in the directory kt_dir under the project name lstm_dense_bo.
Now, if I restart my Jupyter kernel, how can I reload these trials into a Tuner object and subsequently inspect the best model or the best hyperparameters or the best trial?
I'd very much appreciate your help. Thank you
I was trying to do the same thing. I was looking into the keras docs for an easier way than this but could not find one - so if any other SO-ers have a better idea, please let us know!
Load the previous tuner. Make sure overwrite=False or else you'll delete your trials.
workdir = "mlp_202202151345"
obj = "val_recall"
tuner = kt.Hyperband(
    hypermodel=build_model,
    metrics=metrics,
    objective=kt.Objective(obj, direction="max"),
    executions_per_trial=1,
    overwrite=False,
    directory=workdir,
    project_name="keras_tuner",
)
Look for a trial you want to load. Note that TensorBoard works really well for this. In this example, I'm loading 1a38ebaba07b77501999cb1c4ab9413e.
Here's the part that I could not find in the Keras docs. This might be dependent on the tuner you use (I am using Hyperband):
trial = tuner.oracle.get_trial('1a38ebaba07b77501999cb1c4ab9413e')
This returns a Trial object (also something I could not find in the docs). The Trial object has a hyperparameters attribute that will return that trial's hyperparameters. Now:
model = tuner.hypermodel.build(trial.hyperparameters)
gives you the trial's model for training, evaluation, predictions, etc.
NOTE: This seems convoluted and hacky; I would love to see a better way.
j7skov has correctly mentioned that you need to reload the previous tuner and set the parameter overwrite=False (so that the tuner will not overwrite the already generated trials).
Further, if you want to load the K best models, use the tuner's get_best_models method as below:
# This will load 10 best hyper tuned models with the weights
# corresponding to their best checkpoint (at the end of the best epoch of best trial).
best_model_count = 10
bo_tuner_best_models = tuner.get_best_models(num_models=best_model_count)
Then you can access a specific best model as below
best_model_id = 7
model = bo_tuner_best_models[best_model_id]
This method is for querying the models trained during the search. For best performance, it is recommended to retrain your Model on the full dataset using the best hyperparameters found during search, which can be obtained using tuner.get_best_hyperparameters().
tuner_best_hyperparameters = tuner.get_best_hyperparameters(num_trials=best_model_count)
best_hp = tuner_best_hyperparameters[best_model_id]
model = tuner.hypermodel.build(best_hp)
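To actually retrain on the full dataset with those hyperparameters, here is a sketch assuming the multi-input arrays from the question (concatenating the train and validation splits):

import numpy as np

# Rebuild from the best hyperparameters, then fit on train + validation data.
X_seq = np.concatenate([X_train_seq, X_val_seq])
X_num = np.concatenate([X_train_num, X_val_num])
y_cat = np.concatenate([y_train_cat, y_val_cat])

model = tuner.hypermodel.build(best_hp)
model.fit([X_seq, X_num], y_cat, epochs=30, batch_size=64)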
If you just want to display the hyperparameters of the K best models, use the tuner's results_summary method as below:
tuner.results_summary(num_trials=best_model_count)
For further reference visit this page.
Inspired by j7skov, I found that the models can be reloaded by manipulating tuner.oracle.trials and tuner.load_model.
By assigning tuner.oracle.trials to a variable, we can see that it is a dict containing all the relevant trials in the tuning process.
The keys of the dictionary are the trial_ids, and the values of the dictionary are instances of the Trial object.
Alternatively, we can return the best few trials by using tuner.oracle.get_best_trials.
To inspect the hyperparameters of the trial, we can use the summary method of the instance.
To load the model, we can pass the trial instance to tuner.load_model.
Beware that different versions can lead to incompatibilities.
For example the directory structure is a little different between keras-tuner==1.0 and keras-tuner==1.1 as far as I know.
Using your example, the working flow may be summarized as follows.
# Recreate the tuner object
tuner = kt.BayesianOptimization(build_model,
                                objective='val_categorical_accuracy',
                                max_trials=10,
                                directory='kt_dir',
                                project_name='lstm_dense_bo',
                                overwrite=False)

# Return all trials from the oracle
trials = tuner.oracle.trials

# Print out the ID and the score of all trials
for trial_id, trial in trials.items():
    print(trial_id, trial.score)

# Return the best 5 trials
best_trials = tuner.oracle.get_best_trials(num_trials=5)
for trial in best_trials:
    trial.summary()
    model = tuner.load_model(trial)
    # Do some stuff to the model
Using
tuner = kt.BayesianOptimization(build_model,
                                objective='val_categorical_accuracy',
                                max_trials=10,
                                directory='kt_dir',
                                project_name='lstm_dense_bo')
will load the tuner again (overwrite defaults to False, so the existing trials in kt_dir are picked up).

Visualizing dataset on Tensorboard

I am reading tutorials about TensorFlow visualization and found TensorBoard. I would like to know how I can visualize, for example, the Iris dataset taken from the UCI Machine Learning repository. I have been able to get TensorBoard showing on a specified port on localhost, but I do not know how to visualize a locally stored dataset there. I searched on Google but really could not find out how. Could you help me, please?
If I understand you correctly, then you wish to use tf.summary.image. The documentation is here: https://www.tensorflow.org/api_docs/python/tf/summary/image
Some example usage from my code is:
x_pl=tf.placeholder(tf.float32, [None,height,width,channels], name="ImageIn")
tf.summary.image('input', x_pl, 10)
x_pl is where I feed my image data in.
In my summary declaration I say that I want to create a summary called 'input' and to take 10 images from x_pl.
Read the summary-writer example/tutorial here: https://www.tensorflow.org/get_started/summaries_and_tensorboard
You will need to merge your summaries:
merged = tf.summary.merge_all()
You will need to declare a summary-writer a bit like this:
train_writer = tf.summary.FileWriter(FLAGS.summaries_dir + '/train',sess.graph)
See the above tutorial/example to understand how TensorBoard works. Note that you will want to replace the summaries in that example with image summaries for your purposes.
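Putting those pieces together, a minimal end-to-end sketch using the TF1-style API (the image dimensions and the random batch are placeholders for your own data):

import numpy as np
import tensorflow as tf  # TF1-style API; use tf.compat.v1 under TensorFlow 2

height, width, channels = 28, 28, 1  # assumed dimensions, replace with yours

x_pl = tf.placeholder(tf.float32, [None, height, width, channels], name="ImageIn")
tf.summary.image('input', x_pl, 10)
merged = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter('/tmp/summaries/train', sess.graph)
    batch = np.random.rand(10, height, width, channels)  # stand-in for real images
    summary = sess.run(merged, feed_dict={x_pl: batch})
    writer.add_summary(summary, global_step=0)
    writer.close()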

How is tf.summary.tensor_summary meant to be used?

TensorFlow provides a tf.summary.tensor_summary() function that appears to be a multidimensional variant of tf.summary.scalar():
tf.summary.tensor_summary(name, tensor, summary_description=None, collections=None)
I thought it could be useful for summarizing inferred probabilities per class ... somewhat like
op_summary = tf.summary.tensor_summary('classes', some_tensor)
# ...
summary = sess.run(op_summary)
writer.add_summary(summary)
However it appears that TensorBoard doesn't provide a way to display these summaries at all. How are they meant to be used?
I cannot get it to work either. It seems that this feature is still under development. See this video from the TensorFlow Dev Summit, which states that tensor_summary is still under development (starting at 9:17): https://youtu.be/eBbEDRsCmv4?t=9m17s. It will probably be better defined, and examples should be provided, in the future.

Multivariate time-series RNN using Tensorflow. Is this possible with an LSTM cell or similar?

I am looking for examples of how to build a multivariate time-series RNN using Tensorflow. Is this possible with an LSTM cell or similar?
e.g. the data might look something like this:
Time,A,B,C,...
0,3.5,4.5,7.7,...
1,2.1,6.4,8.2,...
...
Any help much appreciated. Thanks, John
It depends on exactly what you mean, but yes, it should be possible. If you describe more specifically what your input and target data look like, somebody may be able to help. You can generally have sequential continuous or categorical input data and sequential continuous or categorical output data, or a mix of those. I would suggest you look at the tutorials and try out a few things, then ask again here.
Thanks. I have figured it out now. I misunderstood the docs 'inputs: A length T list of inputs, each a vector with shape [batch_size].'
The following link was useful:
https://m.reddit.com/r/MachineLearning/comments/3sok8k/tensorflow_basic_rnn_example_with_variable_length/
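For anyone landing here later, a minimal sketch of a multivariate-input RNN in modern tf.keras (a newer API than the one discussed above; shapes are assumed from the CSV sample in the question):

import numpy as np
import tensorflow as tf

# Toy data: 100 windows of 10 time steps, each step with 3 features (A, B, C).
X = np.random.rand(100, 10, 3).astype(np.float32)
y = np.random.rand(100, 1).astype(np.float32)  # e.g. the next value of A

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(10, 3)),  # (timesteps, features)
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=2, batch_size=16)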