Does TensorFlow Slim assume TF version 1.4 for NASNet? - tensorflow

I'm trying to train NASNet-A_Mobile_224 for two-class classification using train_image_classifier.py from slim with nasnet_mobile. However, I get the error
TypeError: separable_convolution2d() got an unexpected keyword argument 'data_format'
I suspect that the new NASNet code requires TF version 1.4. Can somebody confirm this? I'm using TensorFlow 1.3.
The full traceback is given below:
Traceback (most recent call last):
File "train_image_classifier.py", line 574, in <module>
tf.app.run()
File "/home/sami/virenv/tensorflow_vanilla/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "train_image_classifier.py", line 474, in main
clones = model_deploy.create_clones(deploy_config, clone_fn, [batch_queue])
File "/home/sami/projects/Tools/models/research/slim/deployment/model_deploy.py", line 193, in create_clones
outputs = model_fn(*args, **kwargs)
File "train_image_classifier.py", line 457, in clone_fn
logits, end_points = network_fn(images)
File "/home/sami/projects/Tools/models/research/slim/nets/nets_factory.py", line 135, in network_fn
return func(images, num_classes, is_training=is_training, **kwargs)
File "/home/sami/projects/Tools/models/research/slim/nets/nasnet/nasnet.py", line 371, in build_nasnet_mobile
final_endpoint=final_endpoint)
File "/home/sami/projects/Tools/models/research/slim/nets/nasnet/nasnet.py", line 450, in _build_nasnet_base
net, cell_outputs = stem()
File "/home/sami/projects/Tools/models/research/slim/nets/nasnet/nasnet.py", line 445, in <lambda>
stem = lambda: _imagenet_stem(images, hparams, stem_cell)
File "/home/sami/projects/Tools/models/research/slim/nets/nasnet/nasnet.py", line 264, in _imagenet_stem
cell_num=cell_num)
File "/home/sami/projects/Tools/models/research/slim/nets/nasnet/nasnet_utils.py", line 326, in __call__
stride, original_input_left)
File "/home/sami/projects/Tools/models/research/slim/nets/nasnet/nasnet_utils.py", line 352, in _apply_conv_operation
net = _stacked_separable_conv(net, stride, operation, filter_size)
File "/home/sami/projects/Tools/models/research/slim/nets/nasnet/nasnet_utils.py", line 183, in _stacked_separable_conv
stride=stride)
File "/home/sami/virenv/tensorflow_vanilla/local/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 181, in func_with_args
return func(*args, **current_args)
TypeError: separable_convolution2d() got an unexpected keyword argument 'data_format'

Yes, it must be TensorFlow 1.4.0. TF 1.3's separable_convolution2d does not accept the data_format keyword argument that the NASNet code passes, which is exactly the TypeError you are seeing.
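If you want to confirm which version you are running before upgrading, a minimal check (swap in tensorflow-gpu if you installed the GPU build):
import tensorflow as tf
print(tf.__version__)  # the slim NASNet code needs >= 1.4.0
# then upgrade with: pip install --upgrade tensorflow==1.4.0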

Related

Error when training model Faster-RCNN: NotImplementedError: Cannot convert a symbolic Tensor (cond_2/strided_slice:0) to a numpy array

I am trying out a Faster-RCNN tutorial on Colab:
https://colab.research.google.com/drive/1U3fkRu6-hwjk7wWIpg-iylL2u5T9t7rr#scrollTo=uQCnYPVDrsgx
But in the training part, I received this error.
The code is:
!python3 /content/models/research/object_detection/model_main.py \
    --pipeline_config_path={pipeline_fname} \
    --model_dir={model_dir} \
    --alsologtostderr \
    --num_train_steps={num_steps} \
    --num_eval_steps={num_eval_steps}
The output:
Using TensorFlow backend.
Cause: module 'gast' has no attribute 'Index'
Traceback (most recent call last):
File "/content/models/research/object_detection/model_main.py", line 109, in
tf.app.run()
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 312, in run
_run_main(main, args)
File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 258, in _run_main
sys.exit(main(argv))
File "/content/models/research/object_detection/model_main.py", line 105, in main
tf_estimator.train_and_evaluate(estimator, train_spec, eval_specs[0])
File "/tensorflow-1.15.2/python3.7/tensorflow_estimator/python/estimator/training.py", line 473, in train_and_evaluate
return executor.run()
File "/tensorflow-1.15.2/python3.7/tensorflow_estimator/python/estimator/training.py", line 613, in run
return self.run_local()
File "/tensorflow-1.15.2/python3.7/tensorflow_estimator/python/estimator/training.py", line 714, in run_local
saving_listeners=saving_listeners)
File "/tensorflow-1.15.2/python3.7/tensorflow_estimator/python/estimator/estimator.py", line 370, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/tensorflow-1.15.2/python3.7/tensorflow_estimator/python/estimator/estimator.py", line 1161, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/tensorflow-1.15.2/python3.7/tensorflow_estimator/python/estimator/estimator.py", line 1188, in _train_model_default
input_fn, ModeKeys.TRAIN))
File "/tensorflow-1.15.2/python3.7/tensorflow_estimator/python/estimator/estimator.py", line 1025, in _get_features_and_labels_from_input_fn
self._call_input_fn(input_fn, mode))
File "/tensorflow-1.15.2/python3.7/tensorflow_estimator/python/estimator/estimator.py", line 1116, in _call_input_fn
return input_fn(**kwargs)
File "/content/models/research/object_detection/inputs.py", line 765, in _train_input_fn
params=params)
File "/content/models/research/object_detection/inputs.py", line 908, in train_input
reduce_to_frame_fn=reduce_to_frame_fn)
File "/content/models/research/object_detection/builders/dataset_builder.py", line 252, in build
input_reader_config)
File "/content/models/research/object_detection/builders/dataset_builder.py", line 237, in dataset_map_fn
fn_to_map, num_parallel_calls=num_parallel_calls)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/util/deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/data/ops/dataset_ops.py", line 1950, in map_with_legacy_function
use_legacy_function=True))
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/data/ops/dataset_ops.py", line 3472, in init
use_legacy_function=use_legacy_function)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/data/ops/dataset_ops.py", line 2689, in init
self._function.add_to_graph(ops.get_default_graph())
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/function.py", line 545, in add_to_graph
self._create_definition_if_needed()
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/function.py", line 377, in _create_definition_if_needed
self._create_definition_if_needed_impl()
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/function.py", line 408, in _create_definition_if_needed_impl
capture_resource_var_by_value=self._capture_resource_var_by_value)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/function.py", line 944, in func_graph_from_py_func
outputs = func(*func_graph.inputs)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/data/ops/dataset_ops.py", line 2681, in wrapper_fn
ret = _wrapper_helper(*args)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/data/ops/dataset_ops.py", line 2652, in _wrapper_helper
ret = autograph.tf_convert(func, ag_ctx)(*nested_args)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/autograph/impl/api.py", line 237, in wrapper
raise e.ag_error_metadata.to_exception(e)
NotImplementedError: in converted code:
/content/models/research/object_detection/data_decoders/tf_example_decoder.py:580 decode
default_groundtruth_weights)
/tensorflow-1.15.2/python3.7/tensorflow_core/python/util/deprecation.py:507 new_func
return func(*args, **kwargs)
/tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/control_flow_ops.py:1235 cond
orig_res_f, res_f = context_f.BuildCondBranch(false_fn)
/tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/control_flow_ops.py:1061 BuildCondBranch
original_result = fn()
/content/models/research/object_detection/data_decoders/tf_example_decoder.py:573 default_groundtruth_weights
dtype=tf.float32)
/tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/array_ops.py:2560 ones
output = _constant_if_small(one, shape, dtype, name)
/tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/array_ops.py:2295 _constant_if_small
if np.prod(shape) < 1000:
<__array_function__ internals>:6 prod
/usr/local/lib/python3.7/dist-packages/numpy/core/fromnumeric.py:3052 prod
keepdims=keepdims, initial=initial, where=where)
/usr/local/lib/python3.7/dist-packages/numpy/core/fromnumeric.py:86 _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/ops.py:736 array
" array.".format(self.name))
NotImplementedError: Cannot convert a symbolic Tensor (cond_2/strided_slice:0) to a numpy array.
I have tried
pip install numpy==1.19.5
but it does not work, and I received another error:
Traceback (most recent call last):
File "/content/models/research/object_detection/model_main.py", line 26, in
from object_detection import model_lib
File "/content/models/research/object_detection/model_lib.py", line 30, in
from object_detection import eval_util
File "/content/models/research/object_detection/eval_util.py", line 35, in
from object_detection.metrics import coco_evaluation
File "/content/models/research/object_detection/metrics/coco_evaluation.py", line 25, in
from object_detection.metrics import coco_tools
File "/content/models/research/object_detection/metrics/coco_tools.py", line 51, in
from pycocotools import coco
File "/usr/local/lib/python3.7/dist-packages/pycocotools/coco.py", line 52, in
from . import mask as maskUtils
File "/usr/local/lib/python3.7/dist-packages/pycocotools/mask.py", line 3, in
import pycocotools._mask as _mask
File "pycocotools/_mask.pyx", line 1, in init pycocotools._mask
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject
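That second ValueError is a classic sign of binary incompatibility: the installed pycocotools was compiled against a different NumPy than the one now pinned. A common workaround (a sketch, assuming the Colab environment from the question) is to force pycocotools to be rebuilt from source against the pinned NumPy:
!pip install numpy==1.19.5
!pip install --force-reinstall --no-binary pycocotools pycocotools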

Visualize proposal regions from RPN head in Faster R-CNN with Tensorflow Object Detection API

I'm trying to debug my trained Faster R-CNN model using the Tensorflow Object Detection API, and I want to visualize the proposal regions of the RPN on an image. Can anyone tell me how to do it?
I found a post here but it hasn't been answered. I tried to export the model using exporter_main_v2.py with only the RPN head as said here, and this is the message I got when I deleted the second_stage.
Traceback (most recent call last):
File "exporter_main_v2.py", line 165, in <module>
app.run(main)
File "E:\Anaconda\envs\TFOD\lib\site-packages\absl\app.py", line 312, in run
_run_main(main, args)
File "E:\Anaconda\envs\TFOD\lib\site-packages\absl\app.py", line 258, in _run_main
sys.exit(main(argv))
File "exporter_main_v2.py", line 158, in main
exporter_lib_v2.export_inference_graph(
File "E:\Anaconda\envs\TFOD\lib\site-packages\object_detection\exporter_lib_v2.py", line 245, in export_inference_graph
detection_model = INPUT_BUILDER_UTIL_MAP['model_build'](
File "E:\Anaconda\envs\TFOD\lib\site-packages\object_detection\builders\model_builder.py", line 1226, in build
return build_func(getattr(model_config, meta_architecture), is_training,
File "E:\Anaconda\envs\TFOD\lib\site-packages\object_detection\builders\model_builder.py", line 665, in _build_faster_rcnn_model
second_stage_box_predictor = box_predictor_builder.build_keras(
File "E:\Anaconda\envs\TFOD\lib\site-packages\object_detection\builders\box_predictor_builder.py", line 991, in build_keras
raise ValueError(
ValueError: Unknown box predictor for Keras: None
I tried again to export the model without deleting the second_stage. And this is the message I got
INFO:tensorflow:depth of additional conv before box predictor: 0
I0802 20:55:13.930429 1996 convolutional_keras_box_predictor.py:153] depth of additional conv before box predictor: 0
Traceback (most recent call last):
File "exporter_main_v2.py", line 165, in <module>
app.run(main)
File "E:\Anaconda\envs\TFOD\lib\site-packages\absl\app.py", line 312, in run
_run_main(main, args)
File "E:\Anaconda\envs\TFOD\lib\site-packages\absl\app.py", line 258, in _run_main
sys.exit(main(argv))
File "exporter_main_v2.py", line 158, in main
exporter_lib_v2.export_inference_graph(
File "E:\Anaconda\envs\TFOD\lib\site-packages\object_detection\exporter_lib_v2.py", line 271, in export_inference_graph
concrete_function = detection_module.__call__.get_concrete_function()
File "E:\Anaconda\envs\TFOD\lib\site-packages\tensorflow\python\eager\def_function.py", line 1299, in get_concrete_function
concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
File "E:\Anaconda\envs\TFOD\lib\site-packages\tensorflow\python\eager\def_function.py", line 1205, in _get_concrete_function_garbage_collected
self._initialize(args, kwargs, add_initializers_to=initializers)
File "E:\Anaconda\envs\TFOD\lib\site-packages\tensorflow\python\eager\def_function.py", line 725, in _initialize
self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
File "E:\Anaconda\envs\TFOD\lib\site-packages\tensorflow\python\eager\function.py", line 2969, in _get_concrete_function_internal_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
File "E:\Anaconda\envs\TFOD\lib\site-packages\tensorflow\python\eager\function.py", line 3361, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "E:\Anaconda\envs\TFOD\lib\site-packages\tensorflow\python\eager\function.py", line 3196, in _create_graph_function
func_graph_module.func_graph_from_py_func(
File "E:\Anaconda\envs\TFOD\lib\site-packages\tensorflow\python\framework\func_graph.py", line 990, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "E:\Anaconda\envs\TFOD\lib\site-packages\tensorflow\python\eager\def_function.py", line 634, in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
File "E:\Anaconda\envs\TFOD\lib\site-packages\tensorflow\python\framework\func_graph.py", line 977, in wrapper
raise e.ag_error_metadata.to_exception(e)
tensorflow.python.autograph.pyct.error_utils.KeyError: in user code:
E:\Anaconda\envs\TFOD\lib\site-packages\object_detection\exporter_lib_v2.py:163 call_func *
return self._run_inference_on_images(images, true_shapes, **kwargs)
E:\Anaconda\envs\TFOD\lib\site-packages\object_detection\exporter_lib_v2.py:129 _run_inference_on_images *
detections[classes_field] = (
KeyError: 'detection_classes'
Found the solution!
In the config file, add number_of_stages: 1.
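For reference, that flag lives inside the faster_rcnn block of the pipeline config (a sketch; leave the rest of your config unchanged):
model {
  faster_rcnn {
    number_of_stages: 1
    # ... other faster_rcnn settings stay as they are
  }
}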
Instead of using exporter_main_v2.py, I wrote code that builds the model from the checkpoint file:
import os
import tensorflow as tf
from object_detection.utils import config_util
from object_detection.builders import model_builder

# Load pipeline config and build a detection model
configs = config_util.get_configs_from_pipeline_file(path_to_config)
model_config = configs['model']
detection_model = model_builder.build(model_config=model_config, is_training=False)
# Restore checkpoint
ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
ckpt.restore(os.path.join(path_to_ckpt, 'ckpt-0')).expect_partial()
Then I feed the image I want to inspect to the model, and use object_detection.utils.visualization_utils.visualize_boxes_and_labels_on_image_array to draw the proposal boxes.
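A minimal sketch of that inference and visualization step (image_np and category_index are assumed to be loaded elsewhere; with number_of_stages: 1 the returned detection_boxes are the RPN proposals, so the class labels are placeholders):
import numpy as np
import tensorflow as tf
from object_detection.utils import visualization_utils as viz_utils

# image_np: HxWx3 uint8 numpy array of the image to inspect
input_tensor = tf.convert_to_tensor(image_np[np.newaxis, ...], dtype=tf.float32)
image, shapes = detection_model.preprocess(input_tensor)
prediction_dict = detection_model.predict(image, shapes)
detections = detection_model.postprocess(prediction_dict, shapes)

viz_utils.visualize_boxes_and_labels_on_image_array(
    image_np,
    detections['detection_boxes'][0].numpy(),
    detections['detection_classes'][0].numpy().astype(np.int32) + 1,  # label map ids are 1-based
    detections['detection_scores'][0].numpy(),
    category_index,
    use_normalized_coordinates=True,
    max_boxes_to_draw=100,
    min_score_thresh=0.0)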

How to convert a CSV file to TFRecord

I am working with Miniconda and TensorFlow, trying to train a model for object detection, and I'm facing a problem when running generate_tfrecord.py to convert the CSV to TFRecord. The error is:
"generate_tfrecord.py", line 90, in <module>
tf.app.run()
I used the script from this link:
https://github.com/datitran/raccoon_dataset/blob/master/generate_tfrecord.py
(tensorflow) C:\Users\OctaNet\Miniconda3\envs\tensorflow\models\research\object_detection>python generate_tfrecord.py --csv_input=images/train.csv --output_path=data/train.record --image_dir=images/train/
Iterating train
Traceback (most recent call last):
File "generate_tfrecord.py", line 90, in <module>
tf.app.run()
File "C:\Users\OctaNet\Miniconda3\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
_sys.exit(main(argv))
File "generate_tfrecord.py", line 80, in main
tf_example = create_tf_example(group, path)
File "generate_tfrecord.py", line 67, in create_tf_example
'image/object/class/label': dataset_util.int64_list_feature(classes),
File "C:\Users\OctaNet\Miniconda3\envs\tensorflow\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\utils\dataset_util.py", line 26, in int64_list_feature
return tf.train.Feature(int64_list=tf.train.Int64List(value=value))
TypeError: None has type NoneType, but expected one of: int, long
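In that script, the usual cause of this TypeError is the class_text_to_int function returning None for a label string it does not recognize, and that None then reaches int64_list_feature. A sketch of the fix, assuming your copy matches the linked raccoon_dataset version:
# in generate_tfrecord.py: every label in the CSV must map to an integer id
def class_text_to_int(row_label):
    if row_label == 'raccoon':  # replace with the class name(s) used in your CSV
        return 1
    # fail loudly instead of returning None, which triggers the TypeError above
    raise ValueError('unknown label: %s' % row_label)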

no kernel image is available for execution on the device

I am training Mask R-CNN. It trains with tf-1.2, but with tf-1.5 it does not train.
The error is as follows:
Caused by op u'pyramid_1/AssignGTBoxes/Where_6', defined at:
File "/home/zhouzd2/letrain/applications/letrain.py", line 349, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 124, in run
_sys.exit(main(argv))
File "/home/zhouzd2/letrain/applications/letrain.py", line 346, in main
LeTrain().model_train(user_mode)
File "/home/zhouzd2/letrain/platform/base_train.py", line 1228, in model_train
cluster=self.cluster_spec)
File "/home/zhouzd2/letrain/platform/deployment/model_deploy.py", line 226, in create_clones
outputs, feed_ops,verify_model_loss = model_fn(*args, **kwargs)
File "/home/zhouzd2/letrain/platform/base_train.py", line 1195, in clone_fn
model_loss, end_points, feed_ops = network_fn(data_direct, data_batch, int_network_fn)
File "/home/zhouzd2/letrain/applications/letrain.py", line 214, in get_loss
FLAGS.batch_size)
File "/home/zhouzd2/letrain/applications/fmrcnn/get_fmrcnn_loss.py", line 23, in model_fn
loss_weights=[0.2, 0.2, 1.0, 0.2, 1.0])
File "/home/zhouzd2/letrain/applications/fmrcnn/libs/nets/pyramid_network.py", line 580, in build
is_training=is_training, gt_boxes=gt_boxes)
File "/home/zhouzd2/letrain/applications/fmrcnn/libs/nets/pyramid_network.py", line 263, in build_heads
assign_boxes(rois, [rois, batch_inds], [2, 3, 4, 5])
File "/home/zhouzd2/letrain/applications/fmrcnn/libs/layers/wrapper.py", line 173, in assign_boxes
inds = tf.where(tf.equal(assigned_layers, l))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 2538, in where
return gen_array_ops.where(condition=condition, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 6087, in where
"Where", input=condition, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3160, in create_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1625, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InternalError (see above for traceback): WhereOp: Could not launch cub::DeviceReduce::Sum to count number of true / nonzero indices. temp_storage_bytes: 1, status: no kernel image is available for execution on the device
[[Node: pyramid_1/AssignGTBoxes/Where_6 = Where[T=DT_BOOL, _device="/job:worker/replica:0/task:0/device:GPU:0"](pyramid_1/AssignGTBoxes/Equal_6_S9493)]]
[[Node: pyramid_1/AssignGTBoxes/Reshape_8_G1028 = _Recv[client_terminated=false, recv_device="/job:worker/replica:0/task:0/device:CPU:0", send_device="/job:worker/replica:0/task:0/device:GPU:0", send_device_incarnation=5407481677180697062, tensor_name="edge_1349_pyramid_1/AssignGTBoxes/Reshape_8", tensor_type=DT_INT64, _device="/job:worker/replica:0/task:0/device:CPU:0"]()]]
There is no problem when building the graph; the error is reported in sess.run().
Does anyone know how to solve this problem? Or does anyone know what function could replace tf.where?
Thank you!
If you are using Visual Studio:
Right click on the project > Properties > CUDA C/C++ > Device
and add the following to the Code Generation field:
compute_30,sm_30;compute_35,sm_35;compute_37,sm_37;compute_50,sm_50;compute_52,sm_52;compute_60,sm_60;compute_61,sm_61;compute_70,sm_70;compute_75,sm_75;
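Whatever the build system, the error means the TensorFlow binary was not compiled with kernels for your GPU's compute capability, so you need a build that targets the right architecture. A minimal check of what your GPU reports, assuming a working TF 1.x install:
from tensorflow.python.client import device_lib
# each GPU entry's physical_device_desc string includes "compute capability: X.Y"
for d in device_lib.list_local_devices():
    if d.device_type == 'GPU':
        print(d.physical_device_desc)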

Error in 'ValidationMonitor' when passing 'metrics' parameter

I'm using the following code to log accuracy as the validation measure (TensorFlow 0.10):
validation_metrics = {"accuracy": tf.contrib.metrics.streaming_accuracy}
validation_monitor = tf.contrib.learn.monitors.ValidationMonitor(
input_fn=input_fn_eval,
every_n_steps=FLAGS.eval_every,
# metrics=validation_metrics,
early_stopping_rounds=500,
early_stopping_metric="loss",
early_stopping_metric_minimize=True)
While it runs, every 'every_n_steps' steps, I see the following line in the output:
INFO:tensorflow:Validation (step 1000): loss = 1.04875, global_step = 900
The problem is that when the metrics=validation_metrics parameter is uncommented in the code above, I get the following error in the validation phase:
INFO:tensorflow:Error reported to Coordinator: <type 'exceptions.TypeError'>, Input 'y' of 'Equal' Op has type int64 that does not match type float32 of argument 'x'.
E tensorflow/core/client/tensor_c_api.cc:485] Enqueue operation was cancelled
[[Node: read_batch_features_train/file_name_queue/file_name_queue_EnqueueMany = QueueEnqueueMany[Tcomponents=[DT_STRING], _class=["loc:@read_batch_features_train/file_name_queue"], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](read_batch_features_train/file_name_queue, read_batch_features_train/file_name_queue/RandomShuffle)]]
E tensorflow/core/client/tensor_c_api.cc:485] Enqueue operation was cancelled
[[Node: read_batch_features_train/random_shuffle_queue_EnqueueMany = QueueEnqueueMany[Tcomponents=[DT_STRING, DT_STRING], _class=["loc:@read_batch_features_train/random_shuffle_queue"], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](read_batch_features_train/random_shuffle_queue, read_batch_features_train/read/ReaderReadUpTo, read_batch_features_train/read/ReaderReadUpTo:1)]]
Traceback (most recent call last):
File "udc_train.py", line 74, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "udc_train.py", line 70, in main
estimator.fit(input_fn=input_fn_train, steps=None, monitors=[validation_monitor])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 240, in fit
max_steps=max_steps)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 578, in _train_model
max_steps=max_steps)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/graph_actions.py", line 280, in _supervised_train
None)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/supervised_session.py", line 270, in run
run_metadata=run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/recoverable_session.py", line 54, in run
run_metadata=run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/coordinated_session.py", line 70, in run
self._coord.join(self._coordinated_threads_to_join)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/coordinator.py", line 357, in join
six.reraise(*self._exc_info_to_raise)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/coordinated_session.py", line 66, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/monitored_session.py", line 107, in run
induce_stop = monitor.step_end(monitors_step, monitor_outputs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/monitors.py", line 396, in step_end
return self.every_n_step_end(step, output)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/monitors.py", line 687, in every_n_step_end
steps=self.eval_steps, metrics=self.metrics, name=self.name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 356, in evaluate
name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 630, in _evaluate_model
eval_dict = self._get_eval_ops(features, targets, metrics)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 877, in _get_eval_ops
result[name] = metric(predictions, targets)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/metrics/python/ops/metric_ops.py", line 432, in streaming_accuracy
is_correct = math_ops.to_float(math_ops.equal(predictions, labels))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 708, in equal
result = _op_def_lib.apply_op("Equal", x=x, y=y, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 468, in apply_op
inferred_from[input_arg.type_attr]))
TypeError: Input 'y' of 'Equal' Op has type int64 that does not match type float32 of argument 'x'.
This looks like a problem with your input_fn and your estimator: they return different types for the label, so the Equal op inside streaming_accuracy is comparing a float32 prediction against an int64 label.
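A minimal sketch of one way to reconcile the types, assuming the labels come out of input_fn as int64 while the model predicts float32 (make_input_fn is a hypothetical helper, not part of the API):
import tensorflow as tf

def make_input_fn(input_fn):
    # wrap an existing input_fn so its labels match the float32 predictions
    def wrapped():
        features, labels = input_fn()
        return features, tf.cast(labels, tf.float32)
    return wrapped

# then pass make_input_fn(input_fn_eval) to ValidationMonitor instead of input_fn_eval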