Conv1D with data_format channels_first yields error on Keras - tensorflow

I am trying to define a basic ConvNet using one Conv1D operation as follows:
n_class = 10
# channels last (default)
input_shape = (100, 1)
inp = Input(shape=input_shape)
x = Conv1D(4, kernel_size=9, activation=activations.relu, padding='same')(inp)
x = MaxPool1D(pool_size=5)(x)
x = Flatten()(x)
out = Dense(n_class, activation='softmax')(x)
model = Model(inputs=inp, outputs=out)
and that works fine (it uses data_format='channels_last' by default). However, if I instead want to use data_format='channels_first':
# "channels_first" inputs with shape (batch, channels, length).
input_shape = (1, 100)
inp = Input(shape=input_shape)
x = Conv1D(4, kernel_size=9, activation=activations.relu, data_format='channels_first', padding='same')(inp)
x = MaxPool1D(pool_size=5, data_format='channels_first')(x)
x = Flatten()(x)
out = Dense(n_class, activation='softmax')(x)
model = Model(inputs=inp, outputs=out)
I get the following error when defining the Conv1D layer:
Traceback (most recent call last):
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevd.py", line 1596, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevd.py", line 974, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/edward/Desktop/baselines/cnn_mel.py", line 720, in <module>
model = get_1d_dummy_model(params_learn=params_learn, params_extract=params_extract)
File "/Users/edward/Desktop/baselines/architectures.py", line 71, in get_1d_dummy_model
x = Conv1D(4, kernel_size=9, activation=activations.relu, data_format='channels_first', padding='same')(inp)
File "/Users/edward/miniconda3/envs/baseline/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/Users/edward/miniconda3/envs/baseline/lib/python3.6/site-packages/keras/layers/convolutional.py", line 337, in __init__
**kwargs)
TypeError: __init__() got multiple values for keyword argument 'data_format'
Any insight as to what is being done wrong? Thanks!

It seems that you are not using the most recent code (not just the most recent release). For the Conv1D layer, data_format='channels_first' is not supported even in the most recent release, 2.1.6. You will need to clone and use the code from the master branch; support was added by this commit on 5/7/2018. The documentation is always synced to master, which can be confusing. The idea (from the Keras creator François Chollet) is that
versions only exist to force PyPI users to upgrade. They are not meaningful. You should be constantly synced to master.
You can find some old Keras documentation here.
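If you need to stay on the 2.1.6 release, one possible workaround (a minimal sketch, not part of the original answer) is to keep Conv1D in its default channels_last mode and permute the channels-first input in front of it:
from keras import activations
from keras.layers import Input, Permute, Conv1D, MaxPool1D, Flatten, Dense
from keras.models import Model

n_class = 10
# "channels_first"-style input with shape (channels, length)
inp = Input(shape=(1, 100))
# swap the two non-batch axes: (1, 100) -> (100, 1), i.e. channels_last
x = Permute((2, 1))(inp)
x = Conv1D(4, kernel_size=9, activation=activations.relu, padding='same')(x)
x = MaxPool1D(pool_size=5)(x)
x = Flatten()(x)
out = Dense(n_class, activation='softmax')(x)
model = Model(inputs=inp, outputs=out)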

Related

Why TensorFlow throws this exception when loading a model that was normalized like this?

All packages are at their latest versions as of the moment of this post.
tensorflow-gpu: 2.6.0
Python: 3.9.7
CUDA: 11.4.2
cuDNN: 8.2.4
As in the code below, a model built with a Normalization() layer created without arguments throws an exception when that model is later loaded by load_model(). Before loading, however, I can use the model without any apparent issues, which makes you think it's all good, since Normalization() did NOT complain and took care of the input shape. When loading a model that was built with Normalization(input_dim=5), no exception is thrown, since a known shape is specified. That is weird; it should warn you that if you don't pass arguments to Normalization(), you should expect an exception when loading the model.
I'm not sure if it's a bug, so I'm posting it here before reporting it on GitHub; maybe I'm missing some setup.
Here's my code:
import numpy as np
import tensorflow as tf

def main():
    train_data = np.array([[1, 2, 3, 4, 5]])
    train_label = np.array([123])

    # Uncomment this to load the model and comment the next model and normalizer related lines.
    #model = tf.keras.models.load_model('AI/test.h5')

    normalizer = tf.keras.layers.experimental.preprocessing.Normalization()
    normalizer.adapt(train_data)

    model = tf.keras.Sequential([normalizer, tf.keras.layers.Dense(units=1)])
    model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.1), loss='mean_absolute_error')
    model.fit(train_data, train_label, epochs=3000)
    model.save('AI/test.h5')

    unseen_data = np.array([[1, 2, 3, 4, 6]])
    prediction = model.predict(unseen_data)
    print(prediction)

if __name__ == "__main__":
    main()
It throws the following exception:
Traceback (most recent call last):
File "E:\Backup\Desktop\tensorflow_test.py", line 30, in <module>
main()
File "E:\Backup\Desktop\tensorflow_test.py", line 11, in main
model = tf.keras.models.load_model('AI/test.h5')
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\saving\save.py", line 200, in load_model
return hdf5_format.load_model_from_hdf5(filepath, custom_objects,
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\saving\hdf5_format.py", line 180, in load_model_from_hdf5
model = model_config_lib.model_from_config(model_config,
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\saving\model_config.py", line 52, in model_from_config
return deserialize(config, custom_objects=custom_objects)
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\layers\serialization.py", line 208, in deserialize
return generic_utils.deserialize_keras_object(
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\utils\generic_utils.py", line 674, in deserialize_keras_object
deserialized_obj = cls.from_config(
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\sequential.py", line 434, in from_config
model.add(layer)
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\training\tracking\base.py", line 530, in _method_wrapper
result = method(self, *args, **kwargs)
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\sequential.py", line 217, in add
output_tensor = layer(self.outputs[0])
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\base_layer.py", line 976, in __call__
return self._functional_construction_call(inputs, args, kwargs,
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\base_layer.py", line 1114, in _functional_construction_call
outputs = self._keras_tensor_symbolic_call(
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\base_layer.py", line 848, in _keras_tensor_symbolic_call
return self._infer_output_signature(inputs, args, kwargs, input_masks)
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\base_layer.py", line 886, in _infer_output_signature
self._maybe_build(inputs)
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\base_layer.py", line 2659, in _maybe_build
self.build(input_shapes) # pylint:disable=not-callable
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\layers\preprocessing\normalization.py", line 145, in build
raise ValueError(
ValueError: All `axis` values to be kept must have known shape. Got axis: (-1,), input shape: [None, None], with unknown axis at index: 1
Process finished with exit code 1
It looks like a bug.
Follow this link
if 'input_dim' in kwargs and 'input_shape' not in kwargs:
    # Backwards compatibility: alias 'input_dim' to 'input_shape'.
    kwargs['input_shape'] = (kwargs['input_dim'],)
if 'input_shape' in kwargs or 'batch_input_shape' in kwargs:
    # In this case we will later create an input layer
    # to insert before the current layer
    if 'batch_input_shape' in kwargs:
        batch_input_shape = tuple(kwargs['batch_input_shape'])
    elif 'input_shape' in kwargs:
        if 'batch_size' in kwargs:
            batch_size = kwargs['batch_size']
        else:
            batch_size = None
        batch_input_shape = (batch_size,) + tuple(kwargs['input_shape'])
    self._batch_input_shape = batch_input_shape
The error occurs because the Normalization layer could not get any shape information, which leads to self._batch_input_shape = (None, None).
But when the model is loaded (deserialized), the build function is called, and it requires a known shape on every axis that is kept:
# Sorted to avoid transposing axes.
self._keep_axis = sorted([d if d >= 0 else d + ndim for d in self.axis])
# All axes to be kept should have known shape.
for d in self._keep_axis:
    if input_shape[d] is None:
        raise ValueError(
            'All `axis` values to be kept must have known shape. Got axis: {}, '
            'input shape: {}, with unknown axis at index: {}'.format(
                self.axis, input_shape, d))
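Until that is fixed, a workaround consistent with the observation above is to give the Normalization layer a known input shape up front (for example via input_shape, or input_dim as in the question), so that the saved config carries a concrete shape and load_model() can rebuild the layer. A minimal sketch, assuming the same 5-feature toy data; details may differ between TensorFlow versions:
import numpy as np
import tensorflow as tf

train_data = np.array([[1, 2, 3, 4, 5]], dtype=np.float32)
train_label = np.array([123], dtype=np.float32)

# A known input shape means the layer's config serializes with a concrete axis size.
normalizer = tf.keras.layers.experimental.preprocessing.Normalization(input_shape=(5,))
normalizer.adapt(train_data)

model = tf.keras.Sequential([normalizer, tf.keras.layers.Dense(units=1)])
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.1), loss='mean_absolute_error')
model.fit(train_data, train_label, epochs=10, verbose=0)
model.save('AI/test.h5')

reloaded = tf.keras.models.load_model('AI/test.h5')  # loads without the ValueError
print(reloaded.predict(np.array([[1, 2, 3, 4, 6]], dtype=np.float32)))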

L2-normalization with Keras Backend?

I'd like to normalize the inputs going into my neural network but, since I'm defining my model in this way:
df = pd.read_csv(r'C:\Users\Davide Mori\PycharmProjects\pythonProject\Dataset.csv')
print(df)

target_column = ['W_mag', 'W_phase']
predictors = list(set(list(df.columns)) - set(target_column))
X = df[predictors].values
Y = df[target_column].values

def get_model(n_inputs, n_outputs):
    model = Sequential()
    model.add(Dense(1000, input_dim=n_inputs, activation='relu'))
    #model.add(Lambda(lambda x: K.l2_normalize(x, axis=1)))
    model.add(Dense(1000, activation='linear', activity_regularizer=regularizers.l1(0.0001)))
    model.add(Activation('relu'))
    model.add(Dense(n_outputs, activation='linear'))
    model.compile(optimizer="adam", loss="mean_squared_error", metrics=["mean_squared_error"])
    model.summary()
    return model

n_inputs, n_outputs = X.shape[1], Y.shape[1]
model = get_model(n_inputs, n_outputs)
# fit the model on all data
model.fit(X, Y, epochs=100, batch_size=1)
how do I apply the Lambda layer to my inputs? Isn't the commented line in the wrong position? If I put the Lambda layer there, I'm normalizing what has already been "transformed" by the first hidden layer, right? How can I solve this problem?
This is the error I get when putting the Lambda layer before everything else:
2020-10-12 15:08:46.036872: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "C:\Program Files\JetBrains\PyCharm 2020.2.2\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "C:\Program Files\JetBrains\PyCharm 2020.2.2\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/Davide Mori/PycharmProjects/pythonProject/prova_rete_sfs.py", line 60, in <module>
model = get_model(n_inputs, n_outputs)
File "C:/Users/Davide Mori/PycharmProjects/pythonProject/prova_rete_sfs.py", line 52, in get_model
model.summary()
File "C:\Users\Davide Mori\Anaconda3\envs\pythonProject\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 1302, in summary
raise ValueError('This model has not yet been built. '
ValueError: This model has not yet been built. Build the model first by calling `build()` or calling `fit()` with some data, or specify an `input_shape` argument in the first layer(s) for automatic build.
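For reference, a minimal sketch (not from the original thread) of moving the normalization in front of the network: the Lambda becomes the first layer and carries an input_shape, which is also what the "model has not yet been built" error is asking for. Imports are assumed to come from tf.keras:
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Dense, Lambda
from tensorflow.keras.models import Sequential

def get_model(n_inputs, n_outputs):
    model = Sequential()
    # normalize the raw inputs; input_shape lets Keras build the model so summary() works
    model.add(Lambda(lambda x: K.l2_normalize(x, axis=1), input_shape=(n_inputs,)))
    model.add(Dense(1000, activation='relu'))
    model.add(Dense(n_outputs, activation='linear'))
    model.compile(optimizer="adam", loss="mean_squared_error", metrics=["mean_squared_error"])
    model.summary()
    return model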

Training Keras models in a loop: "Tensor is not an element of this graph" when saving model after calling K.clear_session()

I'm trying to train multiple Keras models in a loop to evaluate different parameters. To avoid memory problems, I call K.clear_session() before building each model.
After adding the K.clear_session() call, I started getting this error when saving the second model:
raise ValueError("Tensor %s is not an element of this graph." % obj)
ValueError: Tensor Tensor("level1/kernel:0", shape=(3, 3, 3, 16), dtype=float32_ref) is not an element of this graph.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/gus/workspaces/wpy/cnn/srs/train_generators.py", line 286, in
train_models(model_defs)
File "/home/gus/workspaces/wpy/cnn/srs/train_generators.py", line 196, in train_models
model.save(file_path)
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/keras/engine/network.py", line 1090, in save
save_model(self, filepath, overwrite, include_optimizer)
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/keras/engine/saving.py", line 382, in save_model
_serialize_model(model, f, include_optimizer)
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/keras/engine/saving.py", line 97, in _serialize_model
weight_values = K.batch_get_value(symbolic_weights)
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2420, in batch_get_value
return get_session().run(ops)
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1137, in _run
self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles)
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 471, in init
self._fetch_mapper = _FetchMapper.for_fetch(fetches)
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 261, in for_fetch
return _ListFetchMapper(fetch)
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 370, in init
self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 370, in
self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 271, in for_fetch
return _ElementFetchMapper(fetches, contraction_fn)
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 307, in init
'Tensor. (%s)' % (fetch, str(e)))
ValueError: Fetch argument cannot be interpreted as a Tensor. (Tensor Tensor("level1/kernel:0", shape=(3, 3, 3, 16), dtype=float32_ref) is not an element of this graph.)
The code is basically:
while <models to train>:
    K.clear_session()
    model = modeldef.build()  # everything that has a tensor goes here and just here
    # create generators from directories
    opt = Adam(lr=0.001, decay=0.001 / epochs)
    model.compile(...)
    H = model.fit_generator(...)
    model.save(file_path)  # --> here it crashes
No matter how deep the network is, a super simple ConvNet like this makes the code fail when saving:
class SuperSimpleCNN:
    def __init__(self, img_size, depth):
        self.img_size = img_size
        self.depth = depth

    def build(self):
        init = Input(shape=(self.img_size, self.img_size, self.depth))
        x = Convolution2D(16, (3, 3), padding='same', name='level1')(init)
        x = Activation('relu')(x)
        out = Convolution2D(self.depth, (5, 5), padding='same', name='output')(x)
        model = Model(init, out)
        return model
Looking at similar problems, I understand the problem is due to the fact that Keras shares a global session, and graphs from different models can't be mixed.
But I don't understand why using K.clear_session() before each model makes the save operation fail when iteration > 1, nor why the difference between Tensor and Variable matters here:
<tf.Variable 'level1/kernel:0' shape=(3, 3, 3, 16) dtype=float32_ref> cannot be interpreted as a Tensor
Can anyone help?
Thank you.
My mistake, I was importing the wrong package:
from tensorflow.python.keras import backend as K
instead of
import keras.backend as K
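In other words, the backend import has to come from the same package as the layers and models. A minimal sketch of the corrected loop (the SuperSimpleCNN configurations below are hypothetical examples):
import keras.backend as K  # same "keras" package as the Model/layers used above

model_defs = [SuperSimpleCNN(64, 3), SuperSimpleCNN(128, 3)]  # hypothetical configs

for i, modeldef in enumerate(model_defs):
    K.clear_session()  # now clears the same graph/session the models actually live in
    model = modeldef.build()
    model.compile(optimizer='adam', loss='mse')
    # ... fit / fit_generator as before ...
    model.save('model_{}.h5'.format(i))  # no longer fails on iteration > 1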

Different learning rates cause error: no gradients provided for any variable

I am trying to first train a language model (class PTBModel) separately and then attach a classifier to bias the language model towards sentiment classification, approximately as they do in http://deeplearning.net/tutorial/lstm.html. I am able to do this when all variables are trained with the same learning rate, but I would like a lower learning rate for the language model, since it is already trained. I found a solution for this in another post, but when I separate the variables as below:
# Get trainable variables for the language model
with tf.variable_scope("RNN") as vs:
    lm_parameters = [v for v in tf.trainable_variables()
                     if v.name.startswith(vs.name)]

with tf.variable_scope("RNN_SOFTMAX") as vs:
    lm_parameters = [v for v in tf.trainable_variables()
                     if v.name.startswith(vs.name)]

# Get trainable variables for the classifier
with tf.variable_scope("output_projection") as vs:
    classifier_parameters = [v for v in tf.trainable_variables()
                             if v.name.startswith(vs.name)]
And then calculate and try to apply gradients.
gradients = tf.gradients(self.losses, lm_parameters + classifier_parameters)
clipped_gradients, norm = tf.clip_by_global_norm(gradients, self.max_gradient_norm)
with tf.name_scope("grad_norms") as scope:
    grad_summ = tf.scalar_summary("grad_norms", norm)
print(gradients)
print(clipped_gradients)
grads1 = clipped_gradients[:len(lm_parameters)]
grads2 = clipped_gradients[len(lm_parameters):]
train_op1 = opt_lm.apply_gradients(zip(grads1, lm_parameters))
train_op2 = opt_classifier.apply_gradients(zip(grads2, classifier_parameters))
self.update = train_op2
I get the following error:
[None, None, <tensorflow.python.framework.ops.Tensor object at 0x7fb6cded3f50>, <tensorflow.python.framework.ops.Tensor object at 0x7fb6cded3790>]
[None, None, <tensorflow.python.framework.ops.Tensor object at 0x7fb6cde92190>, <tensorflow.python.framework.ops.Tensor object at 0x7fb6cde92490>]
Traceback (most recent call last):
File "train.py", line 214, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/default/_app.py", line 30, in run
sys.exit(main(sys.argv))
File "train.py", line 210, in main
train_sentiment()
File "train.py", line 125, in train_sentiment
, is_training=True, config=config)
File "/home/seberik/workspace/lm-generator/models/sentiment.py", line 159, in __init__
train_op1 = opt_lm.apply_gradients(zip(grads1, lm_parameters))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 277, in apply_gradients
(grads_and_vars,))
ValueError: No gradients provided for any variable: ((None, <tensorflow.python.ops.variables.Variable object at 0x7fb6e0b14250>), (None, <tensorflow.python.ops.variables.Variable object at 0x7fb6e0b143d0>))
I don't know why this happens, since the following code works:
gradients = tf.gradients(self.losses, params)
clipped_gradients, norm = tf.clip_by_global_norm(gradients, self.max_gradient_norm)
with tf.name_scope("grad_norms") as scope:
    grad_summ = tf.scalar_summary("grad_norms", norm)
self.update = opt.apply_gradients(zip(clipped_gradients, params), global_step=self.global_step)
loss_summ = tf.scalar_summary("{0}_loss".format(self.str_summary_type), self.mean_loss)
acc_summ = tf.scalar_summary("{0}_accuracy".format(self.str_summary_type), self.accuracy)
self.merged = tf.merge_summary([loss_summ, acc_summ])
The entire code is in the following repository; I would greatly appreciate some input regarding this: https://bitbucket.org/briq/lm-generator/src
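Not an answer from the thread, but for context, the usual graph-mode pattern for two learning rates is sketched below: clip once over the combined parameter list, split the clipped gradients back into the two groups, and group both apply ops into a single training op. Symbols such as self.losses, self.max_gradient_norm, lm_parameters and classifier_parameters are assumed from the snippets above, and the learning rates are illustrative values:
import tensorflow as tf

# two optimizers: a slow one for the pre-trained LM, a faster one for the new classifier
opt_lm = tf.train.AdamOptimizer(learning_rate=1e-4)
opt_classifier = tf.train.AdamOptimizer(learning_rate=1e-3)

params = lm_parameters + classifier_parameters
gradients = tf.gradients(self.losses, params)
clipped_gradients, norm = tf.clip_by_global_norm(gradients, self.max_gradient_norm)

grads_lm = clipped_gradients[:len(lm_parameters)]
grads_clf = clipped_gradients[len(lm_parameters):]

train_op_lm = opt_lm.apply_gradients(zip(grads_lm, lm_parameters))
train_op_clf = opt_classifier.apply_gradients(zip(grads_clf, classifier_parameters))
self.update = tf.group(train_op_lm, train_op_clf)  # run both updates together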

Compute status: Not found: Tensor name "input_producer/limit_epochs/epochs" not found in checkpoint files

I'm using the CIFAR10 example. I trained the net as-is with the code provided, and the training completed successfully. Since I wanted to evaluate each example only once on my data set, I modified inputs in cifar10_input.py to the following:
def inputs(eval_data, data_dir, batch_size):
    filename = os.path.join(data_dir, TEST_FILE)
    filename_queue = tf.train.string_input_producer([filename], num_epochs=1)
    image, label = read_and_decode(filename_queue)
    float_image = tf.image.per_image_whitening(image)

    min_fraction_of_examples_in_queue = 0.4
    min_queue_examples = int(NUM_EXAMPLES_PER_EPOCH_FOR_EVAL *
                             min_fraction_of_examples_in_queue)

    images, label_batch = tf.train.batch(
        [image, label],
        batch_size=batch_size,
        num_threads=1,
        capacity=min_queue_examples + 3 * batch_size)

    tf.image_summary('images', images)
    return images, tf.reshape(label_batch, [batch_size])
I have isolated the problem to the following:
tf.train.string_input_producer([filename], num_epochs=1)
If I don't set num_epochs = 1, everything works fine as it is. If I do, I get the following error.
0x2cf2700 Compute status: Not found: Tensor name "input_producer/limit_epochs/epochs" not found in checkpoint files /home/jkschin/tensorflow/my_code/data/svhn/train/model.ckpt-8000
Thank you for your help!
EDIT 3 @mrry:
It still fails. Here's the trace.
Traceback (most recent call last):
File "cnn_eval.py", line 148, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/default/_app.py", line 30, in run
sys.exit(main(sys.argv))
File "cnn_eval.py", line 144, in main
evaluate()
File "cnn_eval.py", line 119, in evaluate
saver = tf.train.Saver([v for v in variables_to_restore if v.name != "input_producer/limit_epochs/epochs"])
AttributeError: 'unicode' object has no attribute 'name'
EDIT 4 @mrry:
softmax_linear/biases/ExponentialMovingAverage
conv2/biases/ExponentialMovingAverage
local4/biases/ExponentialMovingAverage
local3/biases/ExponentialMovingAverage
softmax_linear/weights/ExponentialMovingAverage
conv1/biases/ExponentialMovingAverage
local4/weights/ExponentialMovingAverage
conv2/weights/ExponentialMovingAverage
input_producer/limit_epochs/epochs
local3/weights/ExponentialMovingAverage
conv1/weights/ExponentialMovingAverage
Traceback (most recent call last):
File "cnn_eval.py", line 148, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/default/_app.py", line 30, in run
sys.exit(main(sys.argv))
File "cnn_eval.py", line 144, in main
evaluate()
File "cnn_eval.py", line 119, in evaluate
saver = tf.train.Saver([v for v in variables_to_restore if v != "input_producer/limit_epochs/epochs"])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 784, in __init__
restore_sequentially=restore_sequentially)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 437, in build
vars_to_save = self._ValidateAndSliceInputs(names_to_variables)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 340, in _ValidateAndSliceInputs
names_to_variables = self._VarListToDict(names_to_variables)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 314, in _VarListToDict
raise TypeError("Variable to save is not a Variable: %s" % var)
TypeError: Variable to save is not a Variable: Tensor("Const:0", shape=(), dtype=string)
EDIT 5 @mrry:
saver = tf.train.Saver([tf.Variable(0.0,validate_shape=False,name=v) for v in variables_to_restore if v != "input_producer/limit_epochs/epochs"])
0x21d0cb0 Compute status: Invalid argument: Assign requires shapes of both tensors to match. lhs shape= [] rhs shape= [10]
[[Node: save/Assign_8 = Assign[T=DT_FLOAT, use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/gpu:0"](softmax_linear/biases/ExponentialMovingAverage, save/restore_slice_8/_20)]]
TL;DR: In cifar10_eval.py, change the saver constructor so that it is:
saver = tf.train.Saver([v for v in variables_to_restore
                        if v != "input_producer/limit_epochs/epochs"])
This problem arises because tf.train.string_input_producer() internally creates a variable (called "input_producer/limit_epochs/epochs") when its num_epochs argument is not None. When a tf.train.Saver is created in cifar10_eval.py, it uses tf.all_variables(), which includes the implicitly created variable from tf.train.string_input_producer(). This list of variables determines the set of names that TensorFlow looks up in the checkpoint file.
Currently there isn't a great way to refer to implicitly created variables, other than by their name. Therefore, the best fix is to exclude the variable from the Saver constructor by name.
Another way of eliminating the implicit variable "input_producer/limit_epochs/epochs" is to only load the trainable variables:
saver = tf.train.Saver(tf.trainable_variables())
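For completeness, a hedged sketch of the same filtering when variables_to_restore is the name-to-variable dict produced by ExponentialMovingAverage.variables_to_restore() (which the 'unicode' error above suggests it is); the cifar10.MOVING_AVERAGE_DECAY constant is assumed from the tutorial, and details may differ between TensorFlow versions:
variable_averages = tf.train.ExponentialMovingAverage(cifar10.MOVING_AVERAGE_DECAY)
variables_to_restore = variable_averages.variables_to_restore()

# drop the implicit epoch counter by key before handing the dict to the Saver
saver = tf.train.Saver({name: var for name, var in variables_to_restore.items()
                        if name != "input_producer/limit_epochs/epochs"})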