Object detection using TensorFlow 1.8

(tensorflow2) C:\tensorflow2\models\research\object_detection>python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config
WARNING:tensorflow:From C:\Users\Nischay\Anaconda3\envs\tensorflow2\lib\site-packages\tensorflow\python\platform\app.py:124: main (from __main__) is deprecated and will be removed in a future version.
Instructions for updating:
Use object_detection/model_main.py.
WARNING:tensorflow:From C:\tensorflow1\models\research\object_detection\legacy\trainer.py:266: create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Please switch to tf.train.create_global_step
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
Traceback (most recent call last):
File "train.py", line 184, in <module>
tf.app.run()
File "C:\Users\Nischay\Anaconda3\envs\tensorflow2\lib\site-packages\tensorflow\python\platform\app.py", line 124, in run
_sys.exit(main(argv))
File "C:\Users\Nischay\Anaconda3\envs\tensorflow2\lib\site-packages\tensorflow\python\util\deprecation.py", line 136, in new_func
return func(*args, **kwargs)
File "train.py", line 180, in main
graph_hook_fn=graph_rewriter_fn)
File "C:\tensorflow1\models\research\object_detection\legacy\trainer.py", line 291, in train
clones = model_deploy.create_clones(deploy_config, model_fn, [input_queue])
File "C:\tensorflow1\models\research\slim\deployment\model_deploy.py", line 193, in create_clones
outputs = model_fn(*args, **kwargs)
File "C:\tensorflow1\models\research\object_detection\legacy\trainer.py", line 204, in _create_losses
prediction_dict = detection_model.predict(images, true_image_shapes)
File "C:\tensorflow1\models\research\object_detection\meta_architectures\faster_rcnn_meta_arch.py", line 821, in predict
prediction_dict = self._predict_first_stage(preprocessed_inputs)
File "C:\tensorflow1\models\research\object_detection\meta_architectures\faster_rcnn_meta_arch.py", line 872, in _predict_first_stage
image_shape) = self._extract_rpn_feature_maps(preprocessed_inputs)
File "C:\tensorflow1\models\research\object_detection\meta_architectures\faster_rcnn_meta_arch.py", line 1250, in _extract_rpn_feature_maps
feature_map_shape[2])]))
File "C:\tensorflow1\models\research\object_detection\core\anchor_generator.py", line 103, in generate
anchors_list = self._generate(feature_map_shape_list, **params)
File "C:\tensorflow1\models\research\object_detection\anchor_generators\grid_anchor_generator.py", line 111, in _generate
with tf.init_scope():
AttributeError: module 'tensorflow' has no attribute 'init_scope'
With TensorFlow 1.8, CUDA 9.0, and cuDNN 7.4.2, I am getting this error while training the model.

TensorFlow 1.8 does not define a function called init_scope(). You can look at all the functions available in the r1.8 API. Consider upgrading to a later version of TensorFlow, such as r1.13, where init_scope() is defined.
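As a quick sanity check (a minimal sketch, assuming a pip-managed environment), you can confirm which TensorFlow version is installed and whether it exposes init_scope before starting training:

# Minimal version check: tf.init_scope is absent in 1.8 and present in 1.13+.
import tensorflow as tf
print(tf.__version__)
print(hasattr(tf, "init_scope"))
# To upgrade, run in the shell (not inside Python), for example:
#   pip install --upgrade "tensorflow-gpu==1.13.1"
# Note that newer TF releases may also require a newer CUDA/cuDNN pairing.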

Related

Error when training model Faster-RCNN: NotImplementedError: Cannot convert a symbolic Tensor (cond_2/strided_slice:0) to a numpy array

I am trying out a Faster R-CNN tutorial on Colab:
https://colab.research.google.com/drive/1U3fkRu6-hwjk7wWIpg-iylL2u5T9t7rr#scrollTo=uQCnYPVDrsgx
But in the training part, I received this error.
The code is:
!python3 /content/models/research/object_detection/model_main.py \
--pipeline_config_path={pipeline_fname} \
--model_dir={model_dir} \
--alsologtostderr \
--num_train_steps={num_steps} \
--num_eval_steps={num_eval_steps}
The output:
Using TensorFlow backend.
Cause: module 'gast' has no attribute 'Index'
Traceback (most recent call last):
File "/content/models/research/object_detection/model_main.py", line 109, in
tf.app.run()
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 312, in run
_run_main(main, args)
File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 258, in _run_main
sys.exit(main(argv))
File "/content/models/research/object_detection/model_main.py", line 105, in main
tf_estimator.train_and_evaluate(estimator, train_spec, eval_specs[0])
File "/tensorflow-1.15.2/python3.7/tensorflow_estimator/python/estimator/training.py", line 473, in train_and_evaluate
return executor.run()
File "/tensorflow-1.15.2/python3.7/tensorflow_estimator/python/estimator/training.py", line 613, in run
return self.run_local()
File "/tensorflow-1.15.2/python3.7/tensorflow_estimator/python/estimator/training.py", line 714, in run_local
saving_listeners=saving_listeners)
File "/tensorflow-1.15.2/python3.7/tensorflow_estimator/python/estimator/estimator.py", line 370, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/tensorflow-1.15.2/python3.7/tensorflow_estimator/python/estimator/estimator.py", line 1161, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/tensorflow-1.15.2/python3.7/tensorflow_estimator/python/estimator/estimator.py", line 1188, in _train_model_default
input_fn, ModeKeys.TRAIN))
File "/tensorflow-1.15.2/python3.7/tensorflow_estimator/python/estimator/estimator.py", line 1025, in _get_features_and_labels_from_input_fn
self._call_input_fn(input_fn, mode))
File "/tensorflow-1.15.2/python3.7/tensorflow_estimator/python/estimator/estimator.py", line 1116, in _call_input_fn
return input_fn(**kwargs)
File "/content/models/research/object_detection/inputs.py", line 765, in _train_input_fn
params=params)
File "/content/models/research/object_detection/inputs.py", line 908, in train_input
reduce_to_frame_fn=reduce_to_frame_fn)
File "/content/models/research/object_detection/builders/dataset_builder.py", line 252, in build
input_reader_config)
File "/content/models/research/object_detection/builders/dataset_builder.py", line 237, in dataset_map_fn
fn_to_map, num_parallel_calls=num_parallel_calls)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/util/deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/data/ops/dataset_ops.py", line 1950, in map_with_legacy_function
use_legacy_function=True))
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/data/ops/dataset_ops.py", line 3472, in init
use_legacy_function=use_legacy_function)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/data/ops/dataset_ops.py", line 2689, in init
self._function.add_to_graph(ops.get_default_graph())
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/function.py", line 545, in add_to_graph
self._create_definition_if_needed()
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/function.py", line 377, in _create_definition_if_needed
self._create_definition_if_needed_impl()
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/function.py", line 408, in _create_definition_if_needed_impl
capture_resource_var_by_value=self._capture_resource_var_by_value)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/function.py", line 944, in func_graph_from_py_func
outputs = func(*func_graph.inputs)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/data/ops/dataset_ops.py", line 2681, in wrapper_fn
ret = _wrapper_helper(*args)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/data/ops/dataset_ops.py", line 2652, in _wrapper_helper
ret = autograph.tf_convert(func, ag_ctx)(*nested_args)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/autograph/impl/api.py", line 237, in wrapper
raise e.ag_error_metadata.to_exception(e)
NotImplementedError: in converted code:
/content/models/research/object_detection/data_decoders/tf_example_decoder.py:580 decode
default_groundtruth_weights)
/tensorflow-1.15.2/python3.7/tensorflow_core/python/util/deprecation.py:507 new_func
return func(*args, **kwargs)
/tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/control_flow_ops.py:1235 cond
orig_res_f, res_f = context_f.BuildCondBranch(false_fn)
/tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/control_flow_ops.py:1061 BuildCondBranch
original_result = fn()
/content/models/research/object_detection/data_decoders/tf_example_decoder.py:573 default_groundtruth_weights
dtype=tf.float32)
/tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/array_ops.py:2560 ones
output = _constant_if_small(one, shape, dtype, name)
/tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/array_ops.py:2295 _constant_if_small
if np.prod(shape) < 1000:
<__array_function__ internals>:6 prod
/usr/local/lib/python3.7/dist-packages/numpy/core/fromnumeric.py:3052 prod
keepdims=keepdims, initial=initial, where=where)
/usr/local/lib/python3.7/dist-packages/numpy/core/fromnumeric.py:86 _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/ops.py:736 array
" array.".format(self.name))
NotImplementedError: Cannot convert a symbolic Tensor (cond_2/strided_slice:0) to a numpy array.
I have tried
pip install numpy==1.19.5
but it does not work, and I received another error:
Traceback (most recent call last):
File "/content/models/research/object_detection/model_main.py", line 26, in
from object_detection import model_lib
File "/content/models/research/object_detection/model_lib.py", line 30, in
from object_detection import eval_util
File "/content/models/research/object_detection/eval_util.py", line 35, in
from object_detection.metrics import coco_evaluation
File "/content/models/research/object_detection/metrics/coco_evaluation.py", line 25, in
from object_detection.metrics import coco_tools
File "/content/models/research/object_detection/metrics/coco_tools.py", line 51, in
from pycocotools import coco
File "/usr/local/lib/python3.7/dist-packages/pycocotools/coco.py", line 52, in
from . import mask as maskUtils
File "/usr/local/lib/python3.7/dist-packages/pycocotools/mask.py", line 3, in
import pycocotools._mask as _mask
File "pycocotools/_mask.pyx", line 1, in init pycocotools._mask
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject
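One possible interpretation (a hedged guess, not a confirmed fix): that follow-up ValueError usually means pycocotools was compiled against a newer numpy ABI than the 1.19.5 just installed, so it has to be rebuilt from source against the pinned numpy, for example:

!pip install "numpy<1.20"
!pip install --force-reinstall --no-binary pycocotools pycocotools
# Restart the Colab runtime afterwards so the reinstalled packages are picked up.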

Tensorflow 2.0 freeze_graph failing on tf.keras saved model

I'm converting a trained tf.keras model into a TF frozen graph to use with the C++ API. I'm running into an error while freezing the model on TF 2.0:
import tensorflow as tf

model_path = '/home/Desktop/model.hdf5'
model = tf.keras.models.load_model(model_path)
tf.keras.experimental.export_savedmodel(model, newdir)  # newdir is the target export directory
After this, a variables folder with the files [checkpoint, variables.data-00000-of-00001, variables.index], a saved_model.pb file, and an assets folder are created in newdir.
I am trying to use the saved_model.pb and variables.data-00000-of-00001 files to get a single frozen_graph.pb.
python /home/tensorflow/python/tools/freeze_graph.py --input_graph=/home/Desktop/tf_models/saved_model.pb --input_checkpoint=/home/Desktop/tf_models/variables/variables.data-00000-of-00001 --output_graph=/home/Desktop/tf_models/frozen_graph.pb --output_node_names=classes --input_binary=true
I expected a single frozen .pb file, but instead I'm running into an error like the one below:
Traceback (most recent call last):
File "/home/vsrira10/anaconda2/envs/tf2/lib/python3.6/site-packages/tensorflow/python/tools/freeze_graph.py", line 492, in <module>
run_main()
File "/home/vsrira10/anaconda2/envs/tf2/lib/python3.6/site-packages/tensorflow/python/tools/freeze_graph.py", line 489, in run_main
app.run(main=my_main, argv=[sys.argv[0]] + unparsed)
File "/home/vsrira10/anaconda2/envs/tf2/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/home/vsrira10/anaconda2/envs/tf2/lib/python3.6/site-packages/absl/app.py", line 300, in run
_run_main(main, args)
File "/home/vsrira10/anaconda2/envs/tf2/lib/python3.6/site-packages/absl/app.py", line 251, in _run_main
sys.exit(main(argv))
File "/home/vsrira10/anaconda2/envs/tf2/lib/python3.6/site-packages/tensorflow/python/tools/freeze_graph.py", line 488, in <lambda>
my_main = lambda unused_args: main(unused_args, flags)
File "/home/vsrira10/anaconda2/envs/tf2/lib/python3.6/site-packages/tensorflow/python/tools/freeze_graph.py", line 382, in main
flags.saved_model_tags, checkpoint_version)
File "/home/vsrira10/anaconda2/envs/tf2/lib/python3.6/site-packages/tensorflow/python/tools/freeze_graph.py", line 341, in freeze_graph
input_graph_def = _parse_input_graph_proto(input_graph, input_binary)
File "/home/vsrira10/anaconda2/envs/tf2/lib/python3.6/site-packages/tensorflow/python/tools/freeze_graph.py", line 252, in _parse_input_graph_proto
input_graph_def.ParseFromString(f.read())
google.protobuf.message.DecodeError: Error parsing message
I'm open to suggestions other than using the freeze_graph script. Thanks.
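Since freeze_graph.py predates the TF 2.x SavedModel layout, one alternative worth trying (a sketch, not a verified recipe; the paths are the ones from the question) is to trace the Keras model with tf.function and fold its variables into constants, which produces a single frozen .pb:

# Sketch: freeze a tf.keras model on TF 2.x without freeze_graph.py.
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

model = tf.keras.models.load_model('/home/Desktop/model.hdf5')

# Trace a concrete function for the model's call signature.
full_model = tf.function(lambda x: model(x))
concrete_func = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

# Fold the variables into constants and write a single frozen graph.
frozen_func = convert_variables_to_constants_v2(concrete_func)
tf.io.write_graph(frozen_func.graph.as_graph_def(),
                  logdir='/home/Desktop/tf_models',
                  name='frozen_graph.pb',
                  as_text=False)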

How to convert a CSV file to TFRecord

I am working with Miniconda and TensorFlow, trying to train a model for object detection. I'm facing a problem when running generate_tfrecord.py to convert the CSV to TFRecord; the error is:
"generate_tfrecord.py", line 90, in <module>
tf.app.run()
I used this link
https://github.com/datitran/raccoon_dataset/blob/master/generate_tfrecord.py
(tensorflow) C:\Users\OctaNet\Miniconda3\envs\tensorflow\models\research\object_detection>python generate_tfrecord.py --csv_input=images/train.csv --output_path=data/train.record --image_dir=images/train/
Iterating train
Traceback (most recent call last):
File "generate_tfrecord.py", line 90, in <module>
tf.app.run()
File "C:\Users\OctaNet\Miniconda3\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
_sys.exit(main(argv))
File "generate_tfrecord.py", line 80, in main
tf_example = create_tf_example(group, path)
File "generate_tfrecord.py", line 67, in create_tf_example
'image/object/class/label': dataset_util.int64_list_feature(classes),
File "C:\Users\OctaNet\Miniconda3\envs\tensorflow\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\utils\dataset_util.py", line 26, in int64_list_feature
return tf.train.Feature(int64_list=tf.train.Int64List(value=value))
TypeError: None has type NoneType, but expected one of: int, long
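A likely cause (a guess based on the linked generate_tfrecord.py, not something the traceback alone proves): the class_text_to_int helper in that script returns None for any label it does not recognize, and that None is exactly what int64_list_feature rejects. A minimal sketch of how the helper would need to be adapted, using hypothetical label names:

# Sketch of class_text_to_int from the linked script, adapted for hypothetical
# labels; replace 'cat'/'dog' with the exact strings found in the class column
# of train.csv.
def class_text_to_int(row_label):
    if row_label == 'cat':
        return 1
    elif row_label == 'dog':
        return 2
    else:
        # Returning None here is what produces
        # "TypeError: None has type NoneType, but expected one of: int, long".
        return None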

Does TensorFlow Slim assume TF version 1.4 for NASNet?

I am trying to train NASNet-A_Mobile_224 for two-class classification using train_image_classifier.py from slim with nasnet_mobile, but I get the error
TypeError: separable_convolution2d() got an unexpected keyword argument 'data_format'
I suspect that the new NASNet requires TF version 1.4. Can somebody confirm this? I'm using TensorFlow 1.3.
A more extensive error is given below:
Traceback (most recent call last):
File "train_image_classifier.py", line 574, in <module>
tf.app.run()
File "/home/sami/virenv/tensorflow_vanilla/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "train_image_classifier.py", line 474, in main
clones = model_deploy.create_clones(deploy_config, clone_fn, [batch_queue])
File "/home/sami/projects/Tools/models/research/slim/deployment/model_deploy.py", line 193, in create_clones
outputs = model_fn(*args, **kwargs)
File "train_image_classifier.py", line 457, in clone_fn
logits, end_points = network_fn(images)
File "/home/sami/projects/Tools/models/research/slim/nets/nets_factory.py", line 135, in network_fn
return func(images, num_classes, is_training=is_training, **kwargs)
File "/home/sami/projects/Tools/models/research/slim/nets/nasnet/nasnet.py", line 371, in build_nasnet_mobile
final_endpoint=final_endpoint)
File "/home/sami/projects/Tools/models/research/slim/nets/nasnet/nasnet.py", line 450, in _build_nasnet_base
net, cell_outputs = stem()
File "/home/sami/projects/Tools/models/research/slim/nets/nasnet/nasnet.py", line 445, in <lambda>
stem = lambda: _imagenet_stem(images, hparams, stem_cell)
File "/home/sami/projects/Tools/models/research/slim/nets/nasnet/nasnet.py", line 264, in _imagenet_stem
cell_num=cell_num)
File "/home/sami/projects/Tools/models/research/slim/nets/nasnet/nasnet_utils.py", line 326, in __call__
stride, original_input_left)
File "/home/sami/projects/Tools/models/research/slim/nets/nasnet/nasnet_utils.py", line 352, in _apply_conv_operation
net = _stacked_separable_conv(net, stride, operation, filter_size)
File "/home/sami/projects/Tools/models/research/slim/nets/nasnet/nasnet_utils.py", line 183, in _stacked_separable_conv
stride=stride)
File "/home/sami/virenv/tensorflow_vanilla/local/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 181, in func_with_args
return func(*args, **current_args)
TypeError: separable_convolution2d() got an unexpected keyword argument 'data_format'
Yes, it must be TensorFlow 1.4.0; the separable_convolution2d in TF 1.3's contrib layers does not accept a data_format argument, which is exactly what the traceback shows.
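If you want the script to fail fast with a clearer message, a small guard near the top of train_image_classifier.py could check the version first (a sketch; the 1.4 minimum is taken from the answer above):

# Hedged guard: NASNet in slim passes data_format to separable_convolution2d,
# which TF 1.3's contrib layers do not accept.
from distutils.version import LooseVersion
import tensorflow as tf

assert LooseVersion(tf.__version__) >= LooseVersion('1.4.0'), (
    'NASNet in slim needs TensorFlow >= 1.4, found %s' % tf.__version__)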

TensorFlow Slim inception_v3 model error

I am trying to use the TensorFlow inception_v3 model for a transfer learning project. I get the following error when building the model:
TypeError: Expected int32, got list containing Tensors of type '_Message' instead.
The same error does not arise when using the same script with the inception_v1 model.
The models are imported from slim.nets.
Running on CPU.
TensorFlow version: 0.12.1
Script
import tensorflow as tf
slim = tf.contrib.slim
import models.inception_v3 as inception_v3

print("initializing model")
inputs = tf.placeholder(tf.float32, shape=[32, 299, 299, 3])
with slim.arg_scope(inception_v3.inception_v3_arg_scope()):
    logits, endpoints = inception_v3.inception_v3(inputs, num_classes=1001, is_training=False)
trainable_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
for tvars in trainable_vars:
    print tvars.name
Full Error Message
Traceback (most recent call last):
File "test.py", line 8, in <module>
logits,endpoints = inception_v3.inception_v3(inputs, num_classes=1001, is_training=False)
File "/home/ashish/projects/python/fashion-language/models/inception_v3.py", line 576, in inception_v3
depth_multiplier=depth_multiplier)
File "/home/ashish/projects/python/fashion-language/models/inception_v3.py", line 181, in inception_v3_base
net = array_ops.concat([branch_0, branch_1, branch_2, branch_3], 3)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 1075, in concat
dtype=dtypes.int32).get_shape(
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 669, in convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 176, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 165, in constant
tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.py", line 367, in make_tensor_proto
_AssertCompatible(values, dtype)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.py", line 302, in _AssertCompatible
(dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected int32, got list containing Tensors of type '_Message' instead.
Found my mistake: I was importing the model from https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/slim/python/slim/nets, whereas the updated models are at https://github.com/tensorflow/models/tree/master/slim/nets.
I still haven't understood why there are two different repositories for the same classes. There must be a valid reason.
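For reference, a sketch of the working setup the answer implies (assuming the slim directory from tensorflow/models is on PYTHONPATH so that nets resolves to the updated copy rather than the one bundled under tf.contrib):

# Sketch: import inception_v3 from the tensorflow/models slim package instead
# of the copy under tensorflow.contrib.
import tensorflow as tf
from nets import inception_v3  # slim/nets from the tensorflow/models repository

slim = tf.contrib.slim
inputs = tf.placeholder(tf.float32, shape=[32, 299, 299, 3])
with slim.arg_scope(inception_v3.inception_v3_arg_scope()):
    logits, endpoints = inception_v3.inception_v3(inputs, num_classes=1001, is_training=False)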