I am trying to export a model that I trained using Keras with a TensorFlow backend to a .pb file so I can use it on Android, and I am getting the following error when I try to freeze my graph.
Traceback (most recent call last):
File "youtubeExample.py", line 102, in <module>
export_model(tf.train.Saver(), model, ["conv2d_1_input"], "output")
File "youtubeExample.py", line 49, in export_model
'out/frozen_' + MODEL_NAME + '.pb', True, "")
File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/tools/freeze_graph.py", line 244, in freeze_graph
saved_model_tags.split(","), checkpoint_version=checkpoint_version)
File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/tools/freeze_graph.py", line 153, in freeze_graph_with_def_protos
variable_names_blacklist=variable_names_blacklist)
File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/graph_util_impl.py", line 232, in convert_variables_to_constants
inference_graph = extract_sub_graph(input_graph_def, output_node_names)
File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/graph_util_impl.py", line 174, in extract_sub_graph
_assert_nodes_are_present(name_to_node, dest_nodes)
File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/graph_util_impl.py", line 133, in _assert_nodes_are_present
assert d in name_to_node, "%s is not in graph" % d
AssertionError: output is not in graph
The part of the code that I use to define my model is:
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(224,224,3)))
model.add(Flatten())
model.add(Dense(units=2, activation='softmax', name="output"))
model.compile(Adam(lr=0.0001), loss='categorical_crossentropy', metrics=['accuracy'])
I would appreciate any help with this issue, as I am relatively new to TensorFlow/Keras.
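The assertion means the freeze step cannot find a node literally named output in the TensorFlow graph: naming a Keras layer "output" does not name the underlying TensorFlow op that way, and the softmax output is typically an op like output/Softmax. A minimal sketch (assuming Keras 2 on a TF 1.x backend) for checking which node names actually exist before freezing:

from keras import backend as K

# The op names Keras actually created for the model's input and output tensors.
print(model.input.op.name)    # e.g. conv2d_1_input
print(model.output.op.name)   # e.g. output/Softmax, not just "output"

# List every node in the session graph whose name mentions "output".
graph_def = K.get_session().graph.as_graph_def()
print([n.name for n in graph_def.node if 'output' in n.name])

Passing the printed output op name (rather than the layer name) to the freeze step is usually what this assertion is asking for.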
My deep RNN model was working about a month ago. I left it alone as a different project took over. Now, coming back and trying to run training, I get the following error:
Traceback (most recent call last):
File "/home/matiss/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/201.7223.92/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_exec2.py", line 3, in Exec
exec(exp, global_vars, local_vars)
File "", line 1, in
File "/home/matiss/Documents/python_work/PycharmProjects/NectCleave/functions.py", line 358, in weighted_model
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1213, in fit
self._make_train_function()
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 314, in _make_train_function
training_updates = self.optimizer.get_updates(
File "/usr/local/lib/python3.8/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py", line 75, in symbolic_fn_wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/optimizers.py", line 504, in get_updates
grads = self.get_gradients(loss, params)
File "/usr/local/lib/python3.8/dist-packages/keras/optimizers.py", line 93, in get_gradients
raise ValueError('An operation has None for gradient. '
ValueError: An operation has None for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
My model architecture:
def make_model(metrics='', output_bias=None, timesteps=None, features=None):
    from keras import regularizers
    if output_bias is not None:
        output_bias = Constant(output_bias)
    K.clear_session()
    model = Sequential()
    # First LSTM layer
    model.add(
        Bidirectional(LSTM(units=50, return_sequences=True, recurrent_dropout=0.1),
                      input_shape=(timesteps, features)))
    model.add(Dropout(0.5))
    # Second LSTM layer
    model.add(Bidirectional(LSTM(units=50, return_sequences=True)))
    model.add(Dropout(0.5))
    # Third LSTM layer
    model.add(Bidirectional(LSTM(units=50, return_sequences=True)))
    model.add(Dropout(0.5))
    # Fourth LSTM layer
    model.add(Bidirectional(LSTM(units=50, return_sequences=False)))
    model.add(Dropout(0.5))
    # First Dense layer
    model.add(Dense(units=128, kernel_initializer='he_normal', activation='relu'))
    model.add(Dropout(0.5))
    # Adding the output layer
    if output_bias is None:
        model.add(Dense(units=1, activation='sigmoid',
                        kernel_regularizer=regularizers.l2(0.001)))
    else:
        model.add(Dense(units=1, activation='sigmoid',
                        bias_initializer=output_bias,
                        kernel_regularizer=regularizers.l2(0.001)))
    # https://keras.io/api/losses/
    model.compile(optimizer=Adam(lr=1e-3), loss=BinaryCrossentropy(), metrics=metrics)
    return model
Please help. Why is this happening?
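For reference, the function above gets called roughly like this (the metric list, bias value, and shapes below are hypothetical placeholders, not the actual training code):

import numpy as np

# Hypothetical usage of make_model(); the numbers are made up for illustration.
pos, neg = 100, 900                      # class counts in an imbalanced dataset
initial_bias = np.log(pos / neg)         # common choice of initial output bias
model = make_model(metrics=['accuracy'], output_bias=initial_bias,
                   timesteps=100, features=20)
model.summary()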
Okay, so after half a day of googling and checking things I could not find a solution.
Then I decided to just set up a new Python virtual environment, install all the required packages and boom: it works again.
I have no idea what the issue was or how it came about, but it works now.
Hope this saves some time for others with the same problem.
I am trying to build a CNN-LSTM model. My dataset is made up of single-channel images of size (n_spot, n_meas) = (100, 102). In this small instance I have only n_sim = 20 images, so that:
print(X_train.shape) -> (12, 100, 102, 1)
print(X_val.shape) -> (4, 100, 102, 1)
print(X_test.shape) -> (4, 100, 102, 1)
print(y_train.shape) -> (12, 200)
print(y_val.shape) -> (4, 200)
print(y_test.shape) -> (4, 200)
My code is the following:
# create model
model = Sequential()
# add model layers
model.add(TimeDistributed(Conv2D(filters=8, kernel_size=[3, 3], padding='same',
                                 activation='relu', input_shape=(None, n_spot, n_meas, 1))))
model.add(TimeDistributed(MaxPool2D(pool_size=(3, 3))))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(20, return_sequences=False))
model.add(Dense(2*n_spot, activation='linear'))
#compile model using accuracy to measure model performance
model.compile(optimizer='adam', loss='mean_squared_error')
model.summary() returns the following error:
'This model has not yet been built. '
ValueError: This model has not yet been built. Build the model first by calling build() or calling fit() with some data. Or specify input_shape or batch_input_shape in the first layer for automatic build.
while I get the following one whenever I try to train my model with:
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=30)
Using TensorFlow backend.
Traceback (most recent call last):
File "<ipython-input-1-1109305aff30>", line 1, in <module>
runfile('C:/Users/nle5266/Documents/positioning/reduced_data.py', wdir='C:/Users/nle5266/Documents/positioning')
File "C:\Users\nle5266\AppData\Local\conda\conda\envs\tensorflow_env\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 786, in runfile
execfile(filename, namespace)
File "C:\Users\nle5266\AppData\Local\conda\conda\envs\tensorflow_env\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/nle5266/Documents/positioning/reduced_data.py", line 338, in <module>
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=30, callbacks=[tensorboard])
File "C:\Users\nle5266\AppData\Local\conda\conda\envs\tensorflow_env\lib\site-packages\keras\engine\training.py", line 952, in fit
batch_size=batch_size)
File "C:\Users\nle5266\AppData\Local\conda\conda\envs\tensorflow_env\lib\site-packages\keras\engine\training.py", line 677, in _standardize_user_data
self._set_inputs(x)
File "C:\Users\nle5266\AppData\Local\conda\conda\envs\tensorflow_env\lib\site-packages\keras\engine\training.py", line 589, in _set_inputs
self.build(input_shape=(None,) + inputs.shape[1:])
File "C:\Users\nle5266\AppData\Local\conda\conda\envs\tensorflow_env\lib\site-packages\keras\engine\sequential.py", line 221, in build
x = layer(x)
File "C:\Users\nle5266\AppData\Local\conda\conda\envs\tensorflow_env\lib\site-packages\keras\engine\base_layer.py", line 457, in __call__
output = self.call(inputs, **kwargs)
File "C:\Users\nle5266\AppData\Local\conda\conda\envs\tensorflow_env\lib\site-packages\keras\layers\wrappers.py", line 248, in call
y = self.layer.call(inputs, **kwargs)
File "C:\Users\nle5266\AppData\Local\conda\conda\envs\tensorflow_env\lib\site-packages\keras\layers\convolutional.py", line 171, in call
dilation_rate=self.dilation_rate)
File "C:\Users\nle5266\AppData\Local\conda\conda\envs\tensorflow_env\lib\site-packages\keras\backend\tensorflow_backend.py", line 3650, in conv2d
data_format=tf_data_format)
File "C:\Users\nle5266\AppData\Local\conda\conda\envs\tensorflow_env\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 779, in convolution
data_format=data_format)
File "C:\Users\nle5266\AppData\Local\conda\conda\envs\tensorflow_env\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 828, in __init__
input_channels_dim = input_shape[num_spatial_dims + 1]
File "C:\Users\nle5266\AppData\Local\conda\conda\envs\tensorflow_env\lib\site-packages\tensorflow\python\framework\tensor_shape.py", line 616, in __getitem__
return self._dims[key]
IndexError: list index out of range
Googling around, I realized that this kind of problem is usually due to the input shape of the TimeDistributed Conv2D layer, which should be a 4-tuple. I tried (None, n_spot, n_meas, 1) with no success.
I'm using Keras 2.2.4 with TensorFlow 1.12.0. I tried upgrading TensorFlow to the latest nightly build, but it didn't solve the problem.
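For what it's worth, here is a minimal sketch of the usual fix, under the assumption that the intent is to feed one image per time step (not tested against this exact data): in a Sequential model the input shape has to be declared on the outermost layer, i.e. on the TimeDistributed wrapper rather than on the wrapped Conv2D, and the arrays fed to it need an explicit time axis, giving 5 dimensions (samples, timesteps, height, width, channels).

from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense, LSTM, TimeDistributed

n_spot, n_meas = 100, 102

model = Sequential()
# input_shape goes on the wrapper: (timesteps, height, width, channels), timesteps left as None
model.add(TimeDistributed(Conv2D(filters=8, kernel_size=[3, 3], padding='same', activation='relu'),
                          input_shape=(None, n_spot, n_meas, 1)))
model.add(TimeDistributed(MaxPool2D(pool_size=(3, 3))))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(20, return_sequences=False))
model.add(Dense(2*n_spot, activation='linear'))
model.compile(optimizer='adam', loss='mean_squared_error')
model.summary()

# The training arrays would also need the extra time dimension before calling fit(), e.g.
# X_train = X_train.reshape((-1, 1, n_spot, n_meas, 1))  # one time step per sample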
I am trying to train on data containing sequences of 43 records of 3-dimensional vectors. While trying to add this Conv1D layer:
model = Sequential()
model.add(Conv1D(input_shape=(43, 3),
filters=16,
kernel_size=4,
padding='same')) # This is line 24 of bcl_model_builder.py
model.add(BatchNormalization())
model.add(LeakyReLU())
model.add(Dropout(0.5))
I am getting the following error, and I have no clue what went wrong here:
Traceback (most recent call last):
File "/home/shx/programs/pycharm-community-2017.1.3/helpers/pydev/pydevd.py", line 1585, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "/home/shx/programs/pycharm-community-2017.1.3/helpers/pydev/pydevd.py", line 1015, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/shx/programs/pycharm-community-2017.1.3/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/shx/PycharmProjects/FBS/bcl/bcl_train_model.py", line 34, in <module>
model = mb.build_model()
File "/home/shx/PycharmProjects/FBS/bcl/bcl_model_builder.py", line 24, in build_model
padding='same'))
File "/home/shx/pyenvs/finainpy_env1/lib/python3.6/site-packages/keras/models.py", line 442, in add
layer(x)
File "/home/shx/pyenvs/finainpy_env1/lib/python3.6/site-packages/keras/engine/topology.py", line 602, in __call__
output = self.call(inputs, **kwargs)
File "/home/shx/pyenvs/finainpy_env1/lib/python3.6/site-packages/keras/layers/convolutional.py", line 156, in call
dilation_rate=self.dilation_rate[0])
File "/home/shx/pyenvs/finainpy_env1/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 3124, in conv1d
data_format=tf_data_format)
File "/home/shx/pyenvs/finainpy_env1/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py", line 672, in convolution
op=op)
File "/home/shx/pyenvs/finainpy_env1/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py", line 338, in with_space_to_batch
return op(input, num_spatial_dims, padding)
File "/home/shx/pyenvs/finainpy_env1/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py", line 664, in op
name=name)
File "/home/shx/pyenvs/finainpy_env1/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py", line 116, in _non_atrous_convolution
name=scope)
File "/home/shx/pyenvs/finainpy_env1/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py", line 2013, in conv1d
data_format=data_format)
File "/home/shx/pyenvs/finainpy_env1/lib/python3.6/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 397, in conv2d
data_format=data_format, name=name)
File "/home/shx/pyenvs/finainpy_env1/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 589, in apply_op
param_name=input_name)
File "/home/shx/pyenvs/finainpy_env1/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 60, in _SatisfiesTypeConstraint
", ".join(dtypes.as_dtype(x).name for x in allowed_list)))
TypeError: Value passed to parameter 'input' has DataType float64 not in list of allowed values: float16, float32
After giving a lot of thought to the float64 in the error log, I realized that Keras itself was configured that way on my machine:
{
"floatx": "float64",
"epsilon": 1e-07,
"backend": "tensorflow",
"image_data_format": "channels_last"
}
I just had to update the property floatx to float32.
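For anyone hitting the same error: that setting normally lives in ~/.keras/keras.json, and there are a couple of equivalent ways to deal with it (a sketch of the options, not a full script):

from keras import backend as K

# Option 1: edit ~/.keras/keras.json and set "floatx": "float32".
# Option 2: override it in code, before any layers are built.
K.set_floatx('float32')

# Option 3: cast the input arrays so they match whatever floatx is configured.
# X_train = X_train.astype('float32')   # hypothetical array name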
I want to feed the IMDB dataset into a multi-layer dynamic LSTM network, but it seems that the subsequent LSTM layers cannot parse the previous layer's output.
code:
net = tflearn.input_data([None, 100])
net = tflearn.embedding(net, input_dim=6819, output_dim=256)
net = tflearn.lstm(net, 256, weights_init="xavier", dynamic=True, return_seq=True)
net = tflearn.dropout(net, 0.8)
net = tflearn.lstm(net, 256, weights_init="xavier", dynamic=True)
net = tflearn.dropout(net, 0.8)
net = tflearn.fully_connected(net, 2, activation='softmax')
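(The training setup is not shown in the question; judging from the traceback, the network is presumably wrapped in the usual tflearn way, roughly like this, with hypothetical regression settings:)

net = tflearn.regression(net, optimizer='adam', learning_rate=0.001,
                         loss='categorical_crossentropy')
model = tflearn.DNN(net, tensorboard_verbose=0)
model.fit(train_x, train_y, validation_set=(test_x, test_y),
          show_metric=True, batch_size=32)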
error:
Traceback (most recent call last):
File "train.py", line 65, in <module>
model.fit(train_x, train_y, validation_set=(test_x, test_y), show_metric=True, batch_size=32)
File ".../tflearn/models/dnn.py", line 216, in fit
callbacks=callbacks)
File ".../tflearn/helpers/trainer.py", line 339, in fit
show_metric)
File ".../tflearn/helpers/trainer.py", line 818, in _train
feed_batch)
File ".../tensorflow/python/client/session.py", line 895, in run
run_metadata_ptr)
File ".../tensorflow/python/client/session.py", line 1100, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (32, 2) for Tensor u'TargetsData/Y:0', which has shape '(100, 2)'
I was following the standard CIFAR-10 Keras tutorial here: https://github.com/fchollet/keras/blob/master/examples/cifar10_cnn.py
I modified it to use my own training images. Each image replicates the dimensions of the CIFAR set, i.e. each is 32x32 with 3 channels.
Shape of each image:
(32,32,3)
However, I run into a ValueError as shown in the full output below.
X_train shape: (7200, 32, 32, 3)
7200 train samples
800 test samples
Using real-time data augmentation.
Epoch 1/200
Traceback (most recent call last):
File "<ipython-input-16-70ca8831e139>", line 162, in <module>
validation_data=(X_test, Y_test))
File "/storage/programfiles/anaconda3/lib/python3.5/site-packages/keras/models.py", line 651, in fit_generator
max_q_size=max_q_size)
File "/storage/programfiles/anaconda3/lib/python3.5/site-packages/keras/engine/training.py", line 1383, in fit_generator
class_weight=class_weight)
File "/storage/programfiles/anaconda3/lib/python3.5/site-packages/keras/engine/training.py", line 1167, in train_on_batch
outputs = self.train_function(ins)
File "/storage/programfiles/anaconda3/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 659, in __call__
updated = session.run(self.outputs + self.updates, feed_dict=feed_dict)
File "/storage/programfiles/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 372, in run
run_metadata_ptr)
File "/storage/programfiles/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 625, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (32, 32, 32, 3) for Tensor 'convolution2d_input_7:0', which has shape '(?, 3, 32, 32)'
Can anyone help me out? :)
EDIT:
I tried reshaping as follows:
X_train = X_train.reshape((7200,3,32,32))
X_test = X_test.reshape((-1,3,32,32))
It crashed instead.
You actually need to transpose your array to the correct ordering, not reshape it. A reshape just reinterprets the same buffer without moving any values, which scrambles the images, whereas a transpose actually moves the channel axis to where the model expects it:
X_train = np.transpose(X_train, (0, 3, 1, 2))
X_test = np.transpose(X_test, (0, 3, 1, 2))
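An alternative, assuming the model takes its input shape from the data rather than hardcoding (3, 32, 32), is to leave the arrays in channels-last order and switch Keras to TensorFlow-style image ordering instead. The layer name convolution2d_input suggests Keras 1.x, where this is controlled by the image_dim_ordering entry in ~/.keras/keras.json (set it to "tf"; in Keras 2.x the equivalent key is image_data_format set to "channels_last"). A quick way to check what the backend is currently using:

from keras import backend as K

# 'th' = Theano ordering (channels, rows, cols)     -> model expects (?, 3, 32, 32)
# 'tf' = TensorFlow ordering (rows, cols, channels) -> model expects (?, 32, 32, 3)
print(K.image_dim_ordering())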