I just ran the command: './bin/bob pad vanilla-pad \ /bob/paper/cross_modal_focal_loss_cvpr2021/config/Method/Pipeline.py -o <folder_to_save_results> -vvv'. I followed the 'Running experiments with the trained model' section of 'running_cmfl_hqwmca.rst' to update the paths to the preprocessed files, annotations, protocol, and CNN model in config/Method/Pipeline.py. My understanding is that this command works on preprocessed files, so why do you say there is a preprocessing part in Pipeline.py?
How do I convert a net to OpenVINO when both the .pb and .pbtxt formats are used to read the net? Which of the two should be used?
import cv2

# Frozen TensorFlow graph (binary) and its text graph description
frozen_graph = "detection/240x180_depth0.75_ssd_mobilenetv1/frozen_inference_graph.pb"
text_graph = "detection/240x180_depth0.75_ssd_mobilenetv1/graph.pbtxt"
cvNet = cv2.dnn.readNetFromTensorflow(frozen_graph, text_graph)
Which of the .pb and .pbtxt files do I use above?
That is, how does one complement the other?
The link https://medium.com/@prasadpal107/saving-freezing-optimizing-for-inference-restoring-of-tensorflow-models-b4146deb21b5 will give you an understanding of the different files associated with a model. In short, .pbtxt files are human readable and hold only the structure of the graph. They help you check whether some nodes are missing, which is useful for debugging.
.pb files hold much more detail and in most cases also contain the weights and biases of the different layers. Hence you need to use the .pb file. The link http://answers.opencv.org/question/187904/readnetfromtensorflow-when-loading-customized-model/ will give you some additional details.
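To illustrate (a minimal sketch using the file names from the question), readNetFromTensorflow takes the weight-carrying .pb as its required argument, while the .pbtxt is an optional second argument that only describes the graph structure:
import cv2

# The frozen .pb holds the weights and biases, so it can often be loaded on its own
net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb")

# For customized or object-detection graphs, also pass the .pbtxt so OpenCV
# knows how to interpret the graph structure
net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb", "graph.pbtxt")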
Only frozen_inference_graph.pb is needed in your case to convert the topology to an OpenVINO model. You will also need the pipeline.json file for the model.
Go to the Model Optimizer folder and run:
python mo_tf.py \
--input_model <PATH_TO_MODEL>/frozen_inference_graph.pb \
--tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json \
--tensorflow_object_detection_api_pipeline_config <PATH_TO_MODEL>/pipeline.json \
--input_shape [1,180,240,3]
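Once the Model Optimizer has produced the IR (an .xml topology file and a .bin weights file), a minimal sketch of loading it back in OpenCV could look like the following; the file names assume the default output prefix, and the backend/target constants depend on your OpenCV build:
import cv2

# Load the OpenVINO IR produced by mo_tf.py
net = cv2.dnn.readNet("frozen_inference_graph.xml", "frozen_inference_graph.bin")

# Run it through the Inference Engine backend on the CPU
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

# Input size matches the --input_shape [1,180,240,3] used above (width 240, height 180)
blob = cv2.dnn.blobFromImage(cv2.imread("frame.jpg"), size=(240, 180), swapRB=True)
net.setInput(blob)
detections = net.forward()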
I am trying to convert a fine-tuned TensorFlow inception_v3 model to UFF format so that it can be run on NVIDIA's Jetson TX2. For conversion to UFF, certain ops are supported and some are not. I am able to successfully freeze the inception_v3 model with the ImageNet checkpoint provided by TensorFlow and convert it to UFF. However, if I fine-tune the model, additional ops like Floor and RandomUniform are added to the new graph, and these are not yet supported. These layers remain even after freezing the model. This also happens with the fine-tuning-for-flowers sample provided on the TensorFlow site.
I want to understand why additional ops are added to the graph, when fine-tuning is only supposed to modify the final layer to match the required number of outputs.
If they are added during training, how can I get rid of them? What post-processing steps did the TensorFlow team follow before releasing the inception_v3 model for ImageNet?
I can share the .pbtxt files if needed. For now, details of the model layers are uploaded at https://github.com/shrutim90/TF_to_UFF_Issue. I am using TensorFlow 1.6 with GPU.
I am following the steps to freeze or fine-tune the model from: https://github.com/tensorflow/models/tree/master/research/slim#Pretrained. As described in that link, to reproduce the issue, install the TF-Slim image models library and follow these steps:
1. python export_inference_graph.py \
--alsologtostderr \
--model_name=inception_v3 \
--output_file=/tmp/inception_v3_inf_graph.pb
2. python freeze_graph.py \
--input_graph=/tmp/inception_v3_inf_graph.pb \
--input_checkpoint=/tmp/checkpoints/inception_v3.ckpt \
--input_binary=true --output_graph=/tmp/frozen_inception_v3.pb \
--output_node_names=InceptionV3/Predictions/Reshape_1
3. DATASET_DIR=/tmp/flowers
TRAIN_DIR=/tmp/flowers-models/inception_v3
CHECKPOINT_PATH=/tmp/my_checkpoints/inception_v3.ckpt
python train_image_classifier.py \
--train_dir=$TRAIN_DIR \
--dataset_dir=$DATASET_DIR \
--dataset_name=flowers \
--dataset_split_name=train \
--model_name=inception_v3 \
--checkpoint_path=${CHECKPOINT_PATH} \
--checkpoint_exclude_scopes=InceptionV3/Logits,InceptionV3/AuxLogits \
--trainable_scopes=InceptionV3/Logits,InceptionV3/AuxLogits
4. python freeze_graph.py \
--input_graph=/tmp/graph.pbtxt \
--input_checkpoint=/tmp/checkpoints/model.ckpt-2539 \
--input_binary=false --output_graph=/tmp/frozen_inception_v3_flowers.pb \
--output_node_names=InceptionV3/Predictions/Reshape_1
To check the layers, you can inspect the .pbtxt file or use NVIDIA's convert-to-uff utility.
Run the training script -> export_inference_graph -> freeze_graph. This gets rid of all the extra nodes, and the model can then easily be converted to UFF.
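One way to verify that the re-export actually dropped the unsupported ops is to list the op types in the frozen graph before running convert-to-uff. A minimal TF 1.x sketch (the path matches step 4 above; adjust as needed):
import tensorflow as tf

# Parse the frozen GraphDef and collect the op types it contains
graph_def = tf.GraphDef()
with tf.gfile.GFile("/tmp/frozen_inception_v3_flowers.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

op_types = sorted({node.op for node in graph_def.node})
print(op_types)

# Ops such as Floor or RandomUniform (leftovers from dropout) indicate the graph was
# frozen from the raw training graph rather than from a re-exported inference graph
print("unsupported ops still present:",
      [op for op in op_types if op in ("Floor", "RandomUniform")])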
I am new to TensorFlow and Keras.
I trained a CNN for sentence classification using Keras and exported the model using the following code:
# Imports assumed for a TF 1.x / standalone Keras setup; `model` is the trained
# Keras model and `export_path` the output directory
from keras import backend as K
from keras.models import Sequential
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants, signature_constants
from tensorflow.python.saved_model.signature_def_utils_impl import predict_signature_def

K.set_learning_phase(0)

# Rebuild the model so the inference graph is created with learning phase 0
config = model.get_config()
weights = model.get_weights()
new_model = Sequential.from_config(config)
new_model.set_weights(weights)

builder = saved_model_builder.SavedModelBuilder(export_path)
signature = predict_signature_def(
    inputs={'input': new_model.inputs[0]},
    outputs={'prob': new_model.outputs[0]})

with K.get_session() as sess:
    builder.add_meta_graph_and_variables(
        sess=sess,
        tags=[tag_constants.SERVING],
        clear_devices=True,
        signature_def_map={
            signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})

builder.save()
I got variables.data-00000-of-00001 and variables.index in the variables folder, plus saved_model.pb.
I want to combine these files into one file before deploying for prediction.
In the end I want to quantize the model, as the variables file is really huge, and I think that before using the quantize functionality from TensorFlow I need to have my model frozen into a .pb file.
Please help
You can use the freeze_graph.py tool to combine your files into a single file.
This will output a single GraphDef file that holds all of the weights and architecture.
You'd use it like this:
bazel build tensorflow/python/tools:freeze_graph && \
bazel-bin/tensorflow/python/tools/freeze_graph \
--input_graph=some_graph_def.pb \
--input_checkpoint=model.ckpt-8361242 \
--output_graph=/tmp/frozen_graph.pb --output_node_names=softmax
Where input_graph is your saved_model.pb file.
And input_checkpoint points to the variables in your variables folder; they might look like this:
/tmp/model/model-chkpt-8361242.data-00000-of-00002
/tmp/model/model-chkpt-8361242.data-00001-of-00002
/tmp/model/model-chkpt-8361242.index
/tmp/model/model-chkpt-8361242.meta
Note that you refer to the model checkpoint by its prefix, model-chkpt-8361242 in this case.
That is, when using the freeze_graph.py tool you pass the common prefix of the checkpoint files you have there.
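If you would rather not build the bazel tool, a roughly equivalent TF 1.x sketch freezes the graph directly in Python with graph_util.convert_variables_to_constants. This is only a sketch: it would need to run while the Keras session from the export code is still open (i.e. before the with K.get_session() block closes it), and the output node name is taken from the model itself rather than from the 'prob' signature key:
import tensorflow as tf
from tensorflow.python.framework import graph_util
from keras import backend as K

sess = K.get_session()

# The real output op name (e.g. something like 'dense_2/Softmax'); the 'prob'
# signature key is only an alias, not the tensor name
output_node_names = [new_model.outputs[0].op.name]

# Bake variable values into constants so one GraphDef holds weights and architecture
frozen_graph_def = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_node_names)

with tf.gfile.GFile("/tmp/frozen_graph.pb", "wb") as f:
    f.write(frozen_graph_def.SerializeToString())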
How are you planning to serve your model? TensorFlow Serving supports the SavedModel format natively, without requiring the freeze_graph.py step.
If you still want to manually combine the graph and the variables (and use freeze_graph.py), you'll likely need to use the older ExportModel format, as Clarence demonstrates above.
Also, you'll likely want to switch to the Estimator API at this point.
Here are some examples using all of the above: https://github.com/pipelineai/pipeline
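For reference, a minimal TF 1.x sketch of loading the exported SavedModel directly, which is essentially what TensorFlow Serving does for you; the path is a placeholder and the 'input'/'prob' keys come from the export code in the question:
import tensorflow as tf
from tensorflow.python.saved_model import tag_constants, signature_constants

export_path = '/tmp/exported_model'  # directory passed to SavedModelBuilder

with tf.Session(graph=tf.Graph()) as sess:
    # Loads saved_model.pb and the variables/ folder together -- no freezing required
    meta_graph_def = tf.saved_model.loader.load(
        sess, [tag_constants.SERVING], export_path)

    # Recover the tensors recorded under the 'input' / 'prob' signature keys
    sig = meta_graph_def.signature_def[
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
    x = sess.graph.get_tensor_by_name(sig.inputs['input'].name)
    prob = sess.graph.get_tensor_by_name(sig.outputs['prob'].name)

    # predictions = sess.run(prob, feed_dict={x: batch_of_inputs})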
So I would like to use the TF Object Detection API to calculate mAP scores for YOLO.
Currently I'm training my YOLO model and it is producing ckpt files/frozen graph files.
I would like to take my YOLO model and evaluate it using the TF Object Detection API. I know eval.py says:
2) Three configuration files may be provided: a model_pb2.DetectionModel
configuration file to define what type of DetectionModel is being evaluated, an
input_reader_pb2.InputReader file to specify what data the model is evaluating
and an eval_pb2.EvalConfig file to configure evaluation parameters.
Example usage:
./eval \
--logtostderr \
--checkpoint_dir=path/to/checkpoint_dir \
--eval_dir=path/to/eval_dir \
--eval_config_path=eval_config.pbtxt \
--model_config_path=model_config.pbtxt \
--input_config_path=eval_input_config.pbtxt
However, with darkflow there is no model_pb2.DetectionModel configuration file. Is this possible?
Is it possible to use the freeze_graph.py tool with models saved via saver.save in TensorFlow v1? If so, how?
I have code that looks roughly like this:
supervisor = tf.train.Supervisor(logdir=output_directory_path)
with supervisor.managed_session() as session:
    # train the model here
    supervisor.saver.save(session, output_directory_path)
This produces a directory containing:
checkpoint
output
output-16640.data-00000-of-00001
output-16640.index
output-16640.meta
Where output is a directory containing the files for intermediate training steps. The rest are files.
My understanding is that this is a meta graph (the .meta file) and its variables (the .data* file) in saver V2 format. These files contain the data needed by the freeze_graph.py tool, but it is unclear how to tell freeze_graph.py to load the data from them.
All of these attempts produce the error message Input checkpoint '...' doesn't exist!
python freeze_graph.py --input_checkpoint checkpoint --output_graph /tmp/out
python freeze_graph.py --input_checkpoint . --output_graph /tmp/out
python freeze_graph.py --input_checkpoint output-16640 --output_graph /tmp/out
The freeze_graph.py code includes the comment 'input_checkpoint' may be a prefix if we're using Saver V2 format next to where the --input_checkpoint argument is used, so I had thought the third of the above attempts would work, but alas, no.
As @mrry pointed out in a comment, the answer to this particular question is to prefix the checkpoint prefix with ./. When this was done I discovered it is also necessary to provide values for the --input_graph and --output_node_names arguments.
The command now looks like:
python freeze_graph.py \
--input_graph output/graph.pbtxt \
--input_checkpoint ./output-16640 \
--output_graph /tmp/out \
--output_node_names <name>
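To double-check what to pass for --input_checkpoint and --output_node_names, here is a small TF 1.x sketch (run from the output directory used in the question) that resolves the latest checkpoint prefix and lists candidate node names from the graph:
import tensorflow as tf

# Resolve the checkpoint prefix recorded in the 'checkpoint' file, e.g. './output-16640'
ckpt_prefix = tf.train.latest_checkpoint(".")
print("checkpoint prefix:", ckpt_prefix)

# Import the meta graph and list node names; the last few are usually good
# candidates for --output_node_names
saver = tf.train.import_meta_graph(ckpt_prefix + ".meta")
node_names = [n.name for n in tf.get_default_graph().as_graph_def().node]
print(node_names[-10:])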
Unfortunately my graph contains variables for pre-loaded data which causes freeze_graph.py to fail with a message like Attempting to use uninitialized value ...; solving this subsequent problem is beyond the scope of this question.