tensorflow tutorial mnist_with_summary throws TypeError - tensorflow

I am running the mnist_with_summaries tutorial to see how TensorBoard works. It throws a TypeError right away:
Extracting /tmp/data/train-images-idx3-ubyte.gz
Extracting /tmp/data/train-labels-idx1-ubyte.gz
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
Traceback (most recent call last):
File "/Users/bruceho/workspace/TestTensorflow/mysrc/examples/tutorials/mnist/mnist_with_summaries.py", line 166, in <module>
tf.app.run()
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv[:1] + flags_passthrough))
File "/Users/bruceho/workspace/TestTensorflow/mysrc/examples/tutorials/mnist/mnist_with_summaries.py", line 163, in main
train()
File "/Users/bruceho/workspace/TestTensorflow/mysrc/examples/tutorials/mnist/mnist_with_summaries.py", line 110, in train
y = nn_layer(dropped, 500, 10, 'layer2', act=tf.nn.softmax)
File "/Users/bruceho/workspace/TestTensorflow/mysrc/examples/tutorials/mnist/mnist_with_summaries.py", line 104, in nn_layer
activations = act(preactivate, 'activation')
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/ops/nn_ops.py", line 582, in softmax
return _softmax(logits, gen_nn_ops._softmax, dim, name)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/ops/nn_ops.py", line 542, in _softmax
logits = _swap_axis(logits, dim, math_ops.sub(input_rank, 1))
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/ops/nn_ops.py", line 518, in _swap_axis
0, [math_ops.range(dim_index), [last_index],
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/ops/math_ops.py", line 991, in range
return gen_math_ops._range(start, limit, delta, name=name)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/ops/gen_math_ops.py", line 1675, in _range
delta=delta, name=name)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/framework/op_def_library.py", line 490, in apply_op
preferred_dtype=default_dtype)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/framework/ops.py", line 657, in convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/framework/constant_op.py", line 180, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/framework/constant_op.py", line 163, in constant
tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape))
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/framework/tensor_util.py", line 353, in make_tensor_proto
_AssertCompatible(values, dtype)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/tensorflow/python/framework/tensor_util.py", line 290, in _AssertCompatible
(dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected int32, got 'activation' of type 'str' instead.
I tried running it from inside Eclipse and from the command line, with the same results. Has anyone experienced the same problem?

I think you must have modified the original code somehow. The problem lies in this line:
activations = act(preactivate, 'activation')
If you check the API of tf.nn.softmax, you will find that the second positional argument is dim, not name. To fix the problem, change the line to:
activations = act(preactivate, name='activation')
Besides, check whether you have changed
diff = tf.nn.softmax_cross_entropy_with_logits(y, y_)
If not, you are probably applying softmax to the output twice, since softmax_cross_entropy_with_logits already applies softmax to the logits internally.
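For reference, here is a minimal sketch of nn_layer with the keyword fix applied (an illustration using the tutorial's names, not the exact tutorial code):
def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):
    with tf.name_scope(layer_name):
        weights = tf.Variable(tf.truncated_normal([input_dim, output_dim], stddev=0.1))
        biases = tf.Variable(tf.constant(0.1, shape=[output_dim]))
        preactivate = tf.matmul(input_tensor, weights) + biases
        # Pass the name as a keyword argument; the second positional argument
        # of tf.nn.softmax is dim, not name.
        activations = act(preactivate, name='activation')
        return activations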

Related

Tensorflow restore() function throwing incompatible shape error

I am modifying TensorFlow source code, specifically the call() function. Interestingly, the checkpoint restore() is giving a shape-mismatch error.
But from my understanding the flow of the code is: build() -> restore() -> call().
Although my change to the TensorFlow code is expected to alter shapes in the call phase, is it expected that restoring the checkpoint, which happens before it, gives a shape-mismatch error? Or must there be another reason for this error?
Traceback (most recent call last):
File "run_classifier.py", line 549, in <module>
app.run(main)
File "/home/arpitj/projects/lib/python3.8/site-packages/absl/app.py", line 300, in run
_run_main(main, args)
File "/home/arpitj/projects/lib/python3.8/site-packages/absl/app.py", line 251, in _run_main
sys.exit(main(argv))
File "run_classifier.py", line 542, in main
custom_main(custom_callbacks=None, custom_metrics=None)
File "run_classifier.py", line 485, in custom_main
checkpoint.restore(
File "/home/arpitj/projects/lib/python3.8/site-packages/tensorflow/python/training/tracking/util.py", line 2354, in restore
status = self.read(save_path, options=options)
File "/home/arpitj/projects/lib/python3.8/site-packages/tensorflow/python/training/tracking/util.py", line 2229, in read
result = self._saver.restore(save_path=save_path, options=options)
File "/home/arpitj/projects/lib/python3.8/site-packages/tensorflow/python/training/tracking/util.py", line 1366, in restore
base.CheckpointPosition(
File "/home/arpitj/projects/lib/python3.8/site-packages/tensorflow/python/training/tracking/base.py", line 254, in restore
restore_ops = trackable._restore_from_checkpoint_position(self) # pylint: disable=protected-access
File "/home/arpitj/projects/lib/python3.8/site-packages/tensorflow/python/training/tracking/base.py", line 983, in _restore_from_checkpoint_position
current_position.checkpoint.restore_saveables(
File "/home/arpitj/projects/lib/python3.8/site-packages/tensorflow/python/training/tracking/util.py", line 329, in restore_saveables
new_restore_ops = functional_saver.MultiDeviceSaver(
File "/home/arpitj/projects/lib/python3.8/site-packages/tensorflow/python/training/saving/functional_saver.py", line 339, in restore
restore_ops = restore_fn()
File "/home/arpitj/projects/lib/python3.8/site-packages/tensorflow/python/training/saving/functional_saver.py", line 323, in restore_fn
restore_ops.update(saver.restore(file_prefix, options))
File "/home/arpitj/projects/lib/python3.8/site-packages/tensorflow/python/training/saving/functional_saver.py", line 115, in restore
restore_ops[saveable.name] = saveable.restore(
File "/home/arpitj/projects/lib/python3.8/site-packages/tensorflow/python/training/saving/saveable_object_util.py", line 133, in restore
return resource_variable_ops.shape_safe_assign_variable_handle(
File "/home/arpitj/projects/lib/python3.8/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 309, in shape_safe_assign_variable_handle
shape.assert_is_compatible_with(value_tensor.shape)
File "/home/arpitj/projects/lib/python3.8/site-packages/tensorflow/python/framework/tensor_shape.py", line 1171, in assert_is_compatible_with
raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (10, 64, 768) and (12, 64, 768) are incompatible
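As the traceback shows, restore() assigns each saved tensor to an existing variable and checks that the shapes are compatible, so a variable whose shape changed when the model was built will fail here before call() ever runs. One way to narrow it down is to compare the shapes stored in the checkpoint with the variables the modified model creates; a minimal sketch (the checkpoint path is a placeholder):
import tensorflow as tf

ckpt_path = "/path/to/checkpoint"  # placeholder: your checkpoint prefix

# List every variable saved in the checkpoint together with its stored shape,
# then compare against the shapes of the variables the modified model builds.
for name, shape in tf.train.list_variables(ckpt_path):
    print(name, shape)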

RASA init error: tensorflow.python.framework.errors_impl.InvalidArgumentError: assertion failed: [0] [Op:Assert] name: EagerVariableNameReuse

I am new to Rasa. I installed Rasa 2.4.1 on my Windows 10 machine with Python 3.7.6 without any error, but when I initialise a Rasa project I get the following error. I tried multiple Rasa 2.x versions and multiple TensorFlow installations, but no luck. Any help to resolve this issue is appreciated.
File "D:\NLP\rasa_env\Scripts\rasa.exe\__main__.py", line 7, in <module>
File "d:\nlp\rasa_env\lib\site-packages\rasa\__main__.py", line 116, in main
cmdline_arguments.func(cmdline_arguments)
File "d:\nlp\rasa_env\lib\site-packages\rasa\cli\scaffold.py", line 234, in run
init_project(args, path)
File "d:\nlp\rasa_env\lib\site-packages\rasa\cli\scaffold.py", line 129, in init_project
print_train_or_instructions(args, path)
File "d:\nlp\rasa_env\lib\site-packages\rasa\cli\scaffold.py", line 68, in print_train_or_instructions
training_result = rasa.train(domain, config, training_files, output)
File "d:\nlp\rasa_env\lib\site-packages\rasa\train.py", line 109, in train
loop,
File "d:\nlp\rasa_env\lib\site-packages\rasa\utils\common.py", line 308, in run_in_loop
result = loop.run_until_complete(f)
File "c:\users\kni9kor\anaconda3\lib\asyncio\base_events.py", line 583, in run_until_complete
return future.result()
File "d:\nlp\rasa_env\lib\site-packages\rasa\train.py", line 174, in train_async
finetuning_epoch_fraction=finetuning_epoch_fraction,
File "d:\nlp\rasa_env\lib\site-packages\rasa\train.py", line 353, in _train_async_internal
finetuning_epoch_fraction=finetuning_epoch_fraction,
File "d:\nlp\rasa_env\lib\site-packages\rasa\train.py", line 396, in _do_training
finetuning_epoch_fraction=finetuning_epoch_fraction,
File "d:\nlp\rasa_env\lib\site-packages\rasa\train.py", line 818, in _train_nlu_with_validated_data
**additional_arguments,
File "d:\nlp\rasa_env\lib\site-packages\rasa\nlu\train.py", line 116, in train
interpreter = trainer.train(training_data, **kwargs)
File "d:\nlp\rasa_env\lib\site-packages\rasa\nlu\model.py", line 209, in train
updates = component.train(working_data, self.config, **context)
File "d:\nlp\rasa_env\lib\site-packages\rasa\nlu\classifiers\diet_classifier.py", line 810, in train
self.model = self._instantiate_model_class(model_data)
File "d:\nlp\rasa_env\lib\site-packages\rasa\nlu\classifiers\diet_classifier.py", line 1132, in _instantiate_model_class
config=self.component_config,
File "d:\nlp\rasa_env\lib\site-packages\rasa\nlu\classifiers\diet_classifier.py", line 1146, in __init__
super().__init__("DIET", config, data_signature, label_data)
File "d:\nlp\rasa_env\lib\site-packages\rasa\utils\tensorflow\models.py", line 705, in __init__
checkpoint_model=config[CHECKPOINT_MODEL],
File "d:\nlp\rasa_env\lib\site-packages\rasa\utils\tensorflow\models.py", line 91, in __init__
super().__init__(**kwargs)
File "d:\nlp\rasa_env\lib\site-packages\tensorflow\python\training\tracking\base.py", line 457, in _method_wrapper
result = method(self, *args, **kwargs)
File "d:\nlp\rasa_env\lib\site-packages\tensorflow\python\keras\engine\training.py", line 308, in __init__
self._init_batch_counters()
File "d:\nlp\rasa_env\lib\site-packages\tensorflow\python\training\tracking\base.py", line 457, in _method_wrapper
result = method(self, *args, **kwargs)
File "d:\nlp\rasa_env\lib\site-packages\tensorflow\python\keras\engine\training.py", line 317, in _init_batch_counters
self._train_counter = variables.Variable(0, dtype='int64', aggregation=agg)
File "d:\nlp\rasa_env\lib\site-packages\tensorflow\python\ops\variables.py", line 262, in __call__
return cls._variable_v2_call(*args, **kwargs)
File "d:\nlp\rasa_env\lib\site-packages\tensorflow\python\ops\variables.py", line 256, in _variable_v2_call
shape=shape)
File "d:\nlp\rasa_env\lib\site-packages\tensorflow\python\ops\variables.py", line 237, in <lambda>
previous_getter = lambda **kws: default_variable_creator_v2(None, **kws)
File "d:\nlp\rasa_env\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 2646, in default_variable_creator_v2
shape=shape)
File "d:\nlp\rasa_env\lib\site-packages\tensorflow\python\ops\variables.py", line 264, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "d:\nlp\rasa_env\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 1518, in __init__
distribute_strategy=distribute_strategy)
File "d:\nlp\rasa_env\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 1666, in _init_from_args
graph_mode=self._in_graph_mode)
File "d:\nlp\rasa_env\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 243, in eager_safe_variable_handle
shape, dtype, shared_name, name, graph_mode, initial_value)
File "d:\nlp\rasa_env\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 175, in _variable_handle_from_shape_and_dtype
math_ops.logical_not(exists), [exists], name="EagerVariableNameReuse")
File "d:\nlp\rasa_env\lib\site-packages\tensorflow\python\ops\gen_logging_ops.py", line 49, in _assert
_ops.raise_from_not_ok_status(e, name)
File "d:\nlp\rasa_env\lib\site-packages\tensorflow\python\framework\ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: assertion failed: [0] [Op:Assert] name: EagerVariableNameReuse
Possible solutions:
1. Kill concurrent Python programs (such as Jupyter notebooks) that are trying to access TensorFlow at the same time.
2. Setting the environment variable TF_FORCE_GPU_ALLOW_GROWTH to true seems to make this issue disappear:
import os
os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = "true"  # set before TensorFlow initialises any GPU devices
I have also attached the following similar issues for reference, which might help you out: link1, link2, link3

ValueError: Operation u'tpu_140462710602256/VarIsInitializedOp' has been marked as not fetchable

The code works fine on GPU and CPU, but when I use the keras_to_tpu_model function to make the model able to run on a TPU, this error occurs.
This is the full output on Colab: https://colab.research.google.com/gist/WangHexie/2252beb26f16354cb6e9ba2639970e5b/tpu-error.ipynb
Change the runtime type to TPU and I think this can be reproduced.
Code on GitHub: https://github.com/WangHexie/DHNE/blob/master/src/hypergraph_embedding.py#L60
You can test the code on a GPU by switching to the gpu branch.
Traceback
Traceback (most recent call last):
File "src/hypergraph_embedding.py", line 158, in <module>
h.train(dataset)
File "src/hypergraph_embedding.py", line 75, in train
epochs=self.options.epochs_to_train, verbose=1)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/engine/training.py", line 2177, in fit_generator
initial_epoch=initial_epoch)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/engine/training_generator.py", line 176, in fit_generator
x, y, sample_weight=sample_weight, class_weight=class_weight)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/engine/training.py", line 1940, in train_on_batch
outputs = self.train_function(ins)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/keras_support.py", line 1238, in __call__
infeed_manager)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/keras_support.py", line 1143, in _tpu_model_ops_for_input_specs
infeed_manager)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/keras_support.py", line 1053, in _specialize_model
_model_fn, inputs=[[]] * self._tpu_assignment.num_towers)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu.py", line 687, in split_compile_and_replicate
outputs = computation(*computation_inputs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/keras_support.py", line 959, in _model_fn
self.model.cpu_optimizer)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/keras_support.py", line 378, in _clone_optimizer
config = optimizer.get_config()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/optimizers.py", line 275, in get_config
'lr': float(K.get_value(self.lr)),
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/backend.py", line 2709, in get_value
return x.eval(session=get_session())
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/backend.py", line 469, in get_session
_initialize_variables(session)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/backend.py", line 731, in _initialize_variables
[variables_module.is_variable_initialized(v) for v in candidate_vars])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1137, in _run
self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 484, in __init__
self._assert_fetchable(graph, fetch.op)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 497, in _assert_fetchable
'Operation %r has been marked as not fetchable.' % op.name)
ValueError: Operation u'tpu_140276544043536/VarIsInitializedOp' has been marked as not fetchable.
I had the same issue, and it confused me for two days. The solution I found is to switch to tf.train.RMSPropOptimizer instead of using RMSprop from tensorflow.keras.optimizers.
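A minimal sketch of that swap when compiling the model before converting it for the TPU (the model and loss here are placeholders, not the code from the question):
import tensorflow as tf

# Tiny placeholder model; the point is only the optimizer choice.
model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(4,))])

# Compile with the tf.train optimizer instead of tf.keras.optimizers.RMSprop
# before passing the model to keras_to_tpu_model, so the TPU conversion does
# not need to clone a Keras optimizer via get_config().
model.compile(optimizer=tf.train.RMSPropOptimizer(learning_rate=0.001),
              loss='categorical_crossentropy')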

Error while running TensorFlow wide_n_deep Tutorial

I encountered the error:
AttributeError: 'NoneType' object has no attribute 'bucketize'
The full error is as follows:
Traceback (most recent call last):
File "wide_n_deep_tutorial_1.py", line 214, in <module>
train_and_eval()
File "wide_n_deep_tutorial_1.py", line 203, in train_and_eval
m.fit(input_fn=lambda: input_fn(df_train), steps=FLAGS.train_steps)
File "C:\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\dnn_linear_combined.py", line 711, in fit
max_steps=max_steps)
File "C:\Python35\lib\site-packages\tensorflow\python\util\deprecation.py", line 191, in new_func
return func(*args, **kwargs)
File "C:\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 355, in fit
max_steps=max_steps)
File "C:\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 699, in _train_model
train_ops = self._get_train_ops(features, labels)
File "C:\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 1052, in _get_train_ops
return self._call_model_fn(features, labels, model_fn_lib.ModeKeys.TRAIN)
File "C:\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 1019, in _call_model_fn
params=self.params)
File "C:\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\dnn_linear_combined.py", line 504, in _dnn_linear_combined_model_fn
scope=scope)
File "C:\Python35\lib\site-packages\tensorflow\contrib\layers\python\layers\feature_column_ops.py", line 526, in weighted_sum_from_feature_columns
transformed_tensor = transformer.transform(column)
File "C:\Python35\lib\site-packages\tensorflow\contrib\layers\python\layers\feature_column_ops.py", line 869, in transform
feature_column.insert_transformed_feature(self._columns_to_tensors)
File "C:\Python35\lib\site-packages\tensorflow\contrib\layers\python\layers\feature_column.py", line 1489, in insert_transformed_feature
name="bucketize")
File "C:\Python35\lib\site-packages\tensorflow\contrib\layers\python\ops\bucketization_op.py", line 48, in bucketize
return _bucketization_op.bucketize(input_tensor, boundaries, name=name)
AttributeError: 'NoneType' object has no attribute 'bucketize'
I got the same issue. It seems that on Windows we just get None (sourcecode).
Try running this code on Linux, or try removing the bucketization and the column crossing. For example, change the line:
flags.DEFINE_string("model_type","wide_n_deep","valid model types:{'wide','deep', 'wide_n_deep'")
to
flags.DEFINE_string("model_type","deep","valid model types:{'wide','deep', 'wide_n_deep'")
Follow this issue for updates: issue

type matching error in tensor

I am following a TensorFlow example. However, running the code gives me the following error message.
It seems to me that the code segment below causes the problem, or the last dimension of h_conv10 is float64, but there is no place where float64 is set explicitly. Thank you for any suggestions and hints.
# Deconvolution 1
W_deconv_1 = weight_variable_devonc([max_pool_size, max_pool_size, 8*chanel_root, 16*chanel_root])
b_deconv1 = bias_variable([8*chanel_root])
h_deconv1 = tf.nn.relu(deconv2d(h_conv10, W_deconv_1, max_pool_size) + b_deconv1)
h_deconv1_concat = crop_and_concat(h_conv8,h_deconv1,[n_image,(((nx-180)/2+4)/2+4)/2+4,(((ny-180)/2+4)/2+4)/2+4,8*chanel_root])
python u-net.py
Traceback (most recent call last):
File "/experiment/tfw/lib/python3.4/site-
packages/tensorflow/python/ops/op_def_library.py", line 377, in apply_op
as_ref=input_arg.is_ref)
File "/experiment/tfw/lib/python3.4/site-
packages/tensorflow/python/framework/ops.py", line 608, in convert_n_to_tensor
ret.append(convert_to_tensor(value, dtype=dtype, name=n, as_ref=as_ref))
File "/experiment/tfw/lib/python3.4/site-
packages/tensorflow/python/framework/ops.py", line 566, in convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/experiment/tfw/lib/python3.4/site-
packages/tensorflow/python/framework/ops.py", line 510, in
_TensorTensorConversionFunction
% (dtype.name, t.dtype.name, str(t)))
ValueError: Tensor conversion requested dtype int32 for Tensor with dtype
float64: 'Tensor("truediv:0", shape=(), dtype=float64)'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "u-net.py", line 193, in <module>
h_deconv1 = tf.nn.relu(deconv2d(h_conv10, W_deconv_1, max_pool_size) + b_deconv1)
File "u-net.py", line 77, in deconv2d
output_shape = tf.pack([tf.shape(x)[0], tf.shape(x)[1]*2, tf.shape(x)[2]*2, tf.shape(x)[3]/2])
File "/experiment/tfw/lib/python3.4/site-packages/tensorflow/python/ops/array_ops.py", line 241, in pack
return gen_array_ops._pack(values, name=name)
File "/experiment/tfw/lib/python3.4/site-packages/tensorflow/python/ops/gen_array_ops.py", line 916, in _pack
return _op_def_lib.apply_op("Pack", values=values, name=name)
File "/experiment/tfw/lib/python3.4/site-packages/tensorflow/python/ops/op_def_library.py", line 396, in apply_op
raise TypeError("%s that don't all match." % prefix)
TypeError: Tensors in list passed to 'values' of 'Pack' Op have types [int32, int32, int32, float64] that don't all match.
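The mismatch comes from tf.shape(x)[3]/2 in deconv2d: under Python 3, / is true division, so the last element passed to tf.pack is a float64 tensor while the first three are int32. A minimal sketch of a fix, keeping the question's variable names, is to use integer division for that dimension:
# Inside deconv2d: keep all packed dimensions int32 by using integer division
# for the channel dimension (tf.floordiv(tf.shape(x)[3], 2) on older versions).
output_shape = tf.pack([tf.shape(x)[0],
                        tf.shape(x)[1] * 2,
                        tf.shape(x)[2] * 2,
                        tf.shape(x)[3] // 2])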