Tensorflow tutorial estimator Failed to convert object of type <type 'dict'> to Tensor - tensorflow

I am running the tutorial code from "A Guide to TF Layers: Building a Convolutional Neural Network" on API r1.3:
https://www.tensorflow.org/tutorials/layers
My code is here.
https://gist.github.com/Po-Hsuan-Huang/91e31d59fd3aa07f40272b75fe2a924d
The error shows:
runfile('/Users/pohsuanhuang/Documents/workspace/tensorflow_models/NMIST/cnn_mnist.py', wdir='/Users/pohsuanhuang/Documents/workspace/tensorflow_models/NMIST')
Extracting MNIST-data/train-images-idx3-ubyte.gz
Extracting MNIST-data/train-labels-idx1-ubyte.gz
Extracting MNIST-data/t10k-images-idx3-ubyte.gz
Extracting MNIST-data/t10k-labels-idx1-ubyte.gz
INFO:tensorflow:Using default config.
INFO:tensorflow:Using config: {'_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_tf_random_seed': 1, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_save_checkpoints_steps': None, '_model_dir': '/tmp/mnist_convnet_model', '_save_summary_steps': 100}
Traceback (most recent call last):
File "<ipython-input-1-c9b70e26f791>", line 1, in <module>
runfile('/Users/pohsuanhuang/Documents/workspace/tensorflow_models/NMIST/cnn_mnist.py', wdir='/Users/pohsuanhuang/Documents/workspace/tensorflow_models/NMIST')
File "/Users/pohsuanhuang/miniconda/envs/tensorflow/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 866, in runfile
execfile(filename, namespace)
File "/Users/pohsuanhuang/miniconda/envs/tensorflow/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 94, in execfile
builtins.execfile(filename, *where)
File "/Users/pohsuanhuang/Documents/workspace/tensorflow_models/NMIST/cnn_mnist.py", line 129, in <module>
main(None)
File "/Users/pohsuanhuang/Documents/workspace/tensorflow_models/NMIST/cnn_mnist.py", line 117, in main
hooks=[logging_hook])
File "/Users/pohsuanhuang/miniconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 241, in train
loss = self._train_model(input_fn=input_fn, hooks=hooks)
File "/Users/pohsuanhuang/miniconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 630, in _train_model
model_fn_lib.ModeKeys.TRAIN)
File "/Users/pohsuanhuang/miniconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 615, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/Users/pohsuanhuang/Documents/workspace/tensorflow_models/NMIST/cnn_mnist.py", line 24, in cnn_model_fn
input_layer = tf.reshape(features, [-1, 28, 28, 1])
File "/Users/pohsuanhuang/miniconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 2619, in reshape
name=name)
File "/Users/pohsuanhuang/miniconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 493, in apply_op
raise err
TypeError: Failed to convert object of type <type 'dict'> to Tensor. Contents: {'x': <tf.Tensor 'random_shuffle_queue_DequeueMany:1' shape=(100, 784) dtype=float32>}. Consider casting elements to a supported type.
I traced it down a bit and found that the function estimator._call_input_fn() does not use the parameter 'mode' at all, so it cannot create a tuple comprising features and labels. Does the tutorial need to be modified, or is there a problem with this function? I don't understand why mode is unused here.
Thanks!
def _call_input_fn(self, input_fn, mode):
  """Calls the input function.

  Args:
    input_fn: The input function.
    mode: ModeKeys

  Returns:
    Either features or (features, labels) where features and labels are:
      features - `Tensor` or dictionary of string feature name to `Tensor`.
      labels - `Tensor` or dictionary of `Tensor` with labels.

  Raises:
    ValueError: if input_fn takes invalid arguments.
  """
  del mode  # unused
  input_fn_args = util.fn_args(input_fn)
  kwargs = {}
  if 'params' in input_fn_args:
    kwargs['params'] = self.params
  if 'config' in input_fn_args:
    kwargs['config'] = self.config
  with ops.device('/cpu:0'):
    return input_fn(**kwargs)

Your gist doesn't actually contain any of your code... Either way, from your error message I think you have just mistranscribed a bit of code from the tutorial.
Your error log indicates you have
"/Users/pohsuanhuang/Documents/workspace/tensorflow_models/NMIST/cnn_mnist.py", line 24, in cnn_model_fn
input_layer = tf.reshape(features, [-1, 28, 28, 1])
Whereas the tutorial has:
input_layer = tf.reshape(features["x"], [-1, 28, 28, 1])
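For reference, here is a minimal TF 1.x sketch (assuming the tutorial's numpy_input_fn setup) of why features arrives as a dict and why indexing the "x" key fixes the reshape:

import numpy as np
import tensorflow as tf

# Minimal sketch (TF 1.x): numpy_input_fn wraps the array under the key "x",
# so the model_fn receives a dict and must index features["x"] before reshaping.
train_data = np.zeros((1000, 784), dtype=np.float32)   # stand-in for MNIST pixels
train_labels = np.zeros(1000, dtype=np.int32)

train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": train_data},   # this dict key is what the model_fn later receives
    y=train_labels,
    batch_size=100,
    num_epochs=None,
    shuffle=True)

features, labels = train_input_fn()                       # features == {"x": <Tensor (100, 784)>}
input_layer = tf.reshape(features["x"], [-1, 28, 28, 1])  # works
# input_layer = tf.reshape(features, [-1, 28, 28, 1])     # fails: features is a dict, not a Tensor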

Why TensorFlow throws this exception when loading a model that was normalized like this?

All versions are the latest as of the moment of this post:
tensorflow-gpu: 2.6.0
Python: 3.9.7
CUDA: 11.4.2
cuDNN: 8.2.4
As in the code below, a model whose Normalization() layer was created without arguments throws an exception when it is loaded with load_model(). Before loading, however, the model can be used without any apparent issue, which makes you think everything is fine, since Normalization() did NOT complain and took care of the input shape. A model normalized with Normalization(input_dim=5) does NOT throw any exception when loaded, since a known shape is specified. That is weird; it should at least warn you that if you don't pass arguments to Normalization() you should expect an exception when loading the model.
I'm not sure if it's a bug, so I'm posting it here before reporting it on GitHub; maybe I'm just missing some setup.
Here's my code:
import numpy as np
import tensorflow as tf

def main():
    train_data = np.array([[1, 2, 3, 4, 5]])
    train_label = np.array([123])

    # Uncomment this to load the model and comment the next model and normalizer related lines.
    #model = tf.keras.models.load_model('AI/test.h5')

    normalizer = tf.keras.layers.experimental.preprocessing.Normalization()
    normalizer.adapt(train_data)

    model = tf.keras.Sequential([normalizer, tf.keras.layers.Dense(units=1)])
    model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.1), loss='mean_absolute_error')
    model.fit(train_data, train_label, epochs=3000)
    model.save('AI/test.h5')

    unseen_data = np.array([[1, 2, 3, 4, 6]])
    prediction = model.predict(unseen_data)
    print(prediction)

if __name__ == "__main__":
    main()
It throws the following exception:
Traceback (most recent call last):
File "E:\Backup\Desktop\tensorflow_test.py", line 30, in <module>
main()
File "E:\Backup\Desktop\tensorflow_test.py", line 11, in main
model = tf.keras.models.load_model('AI/test.h5')
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\saving\save.py", line 200, in load_model
return hdf5_format.load_model_from_hdf5(filepath, custom_objects,
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\saving\hdf5_format.py", line 180, in load_model_from_hdf5
model = model_config_lib.model_from_config(model_config,
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\saving\model_config.py", line 52, in model_from_config
return deserialize(config, custom_objects=custom_objects)
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\layers\serialization.py", line 208, in deserialize
return generic_utils.deserialize_keras_object(
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\utils\generic_utils.py", line 674, in deserialize_keras_object
deserialized_obj = cls.from_config(
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\sequential.py", line 434, in from_config
model.add(layer)
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\training\tracking\base.py", line 530, in _method_wrapper
result = method(self, *args, **kwargs)
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\sequential.py", line 217, in add
output_tensor = layer(self.outputs[0])
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\base_layer.py", line 976, in __call__
return self._functional_construction_call(inputs, args, kwargs,
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\base_layer.py", line 1114, in _functional_construction_call
outputs = self._keras_tensor_symbolic_call(
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\base_layer.py", line 848, in _keras_tensor_symbolic_call
return self._infer_output_signature(inputs, args, kwargs, input_masks)
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\base_layer.py", line 886, in _infer_output_signature
self._maybe_build(inputs)
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\base_layer.py", line 2659, in _maybe_build
self.build(input_shapes) # pylint:disable=not-callable
File "C:\Users\censored\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\layers\preprocessing\normalization.py", line 145, in build
raise ValueError(
ValueError: All `axis` values to be kept must have known shape. Got axis: (-1,), input shape: [None, None], with unknown axis at index: 1
Process finished with exit code 1
It looks like a bug.
Follow this link to the base-layer code that handles the input shape:
if 'input_dim' in kwargs and 'input_shape' not in kwargs:
  # Backwards compatibility: alias 'input_dim' to 'input_shape'.
  kwargs['input_shape'] = (kwargs['input_dim'],)
if 'input_shape' in kwargs or 'batch_input_shape' in kwargs:
  # In this case we will later create an input layer
  # to insert before the current layer
  if 'batch_input_shape' in kwargs:
    batch_input_shape = tuple(kwargs['batch_input_shape'])
  elif 'input_shape' in kwargs:
    if 'batch_size' in kwargs:
      batch_size = kwargs['batch_size']
    else:
      batch_size = None
    batch_input_shape = (batch_size,) + tuple(kwargs['input_shape'])
  self._batch_input_shape = batch_input_shape
The error occurs because the Normalization layer could not get any shape information, which leads to self._batch_input_shape = (None, None).
When the model is loaded (deserialized), the build function is called, and it requires a known shape on every axis that is kept:
# Sorted to avoid transposing axes.
self._keep_axis = sorted([d if d >= 0 else d + ndim for d in self.axis])
# All axes to be kept should have known shape.
for d in self._keep_axis:
  if input_shape[d] is None:
    raise ValueError(
        'All `axis` values to be kept must have known shape. Got axis: {}, '
        'input shape: {}, with unknown axis at index: {}'.format(
            self.axis, input_shape, d))
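One workaround, as the question itself hints, is to give the layer a known input shape up front so that build() sees a fully defined shape when the model is deserialized. A minimal sketch, assuming input_shape is handled the same way input_dim is aliased in the snippet above:

import numpy as np
import tensorflow as tf

# Workaround sketch: specify the input shape so _batch_input_shape becomes
# (None, 5) instead of (None, None), letting build() succeed on reload.
train_data = np.array([[1, 2, 3, 4, 5]], dtype=np.float32)
train_label = np.array([123], dtype=np.float32)

normalizer = tf.keras.layers.experimental.preprocessing.Normalization(input_shape=(5,))
normalizer.adapt(train_data)

model = tf.keras.Sequential([normalizer, tf.keras.layers.Dense(units=1)])
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.1), loss='mean_absolute_error')
model.fit(train_data, train_label, epochs=10, verbose=0)
model.save('test.h5')

reloaded = tf.keras.models.load_model('test.h5')   # no ValueError with a known shape
print(reloaded.predict(np.array([[1, 2, 3, 4, 6]], dtype=np.float32)))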

Training keras models in a loop: "Tensor is not an element of this graph" when saving model after calling K.clear_session()

I'm trying to train multiple Keras models in a loop to evaluate different parameters. To avoid memory problems, I call K.clear_session() before building each model.
After adding the K.clear_session() call, I started getting this error when saving the second model:
raise ValueError("Tensor %s is not an element of this graph." % obj)
ValueError: Tensor Tensor("level1/kernel:0", shape=(3, 3, 3, 16), dtype=float32_ref) is not an element of this graph.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/gus/workspaces/wpy/cnn/srs/train_generators.py", line 286, in
train_models(model_defs)
File "/home/gus/workspaces/wpy/cnn/srs/train_generators.py", line 196, in train_models
model.save(file_path)
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/keras/engine/network.py", line 1090, in save
save_model(self, filepath, overwrite, include_optimizer)
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/keras/engine/saving.py", line 382, in save_model
_serialize_model(model, f, include_optimizer)
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/keras/engine/saving.py", line 97, in _serialize_model
weight_values = K.batch_get_value(symbolic_weights)
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2420, in batch_get_value
return get_session().run(ops)
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1137, in _run
self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles)
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 471, in init
self._fetch_mapper = _FetchMapper.for_fetch(fetches)
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 261, in for_fetch
return _ListFetchMapper(fetch)
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 370, in init
self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 370, in
self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 271, in for_fetch
return _ElementFetchMapper(fetches, contraction_fn)
File "/home/gus/workspaces/venvs/dlcv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 307, in init
'Tensor. (%s)' % (fetch, str(e)))
ValueError: Fetch argument cannot be interpreted as a Tensor. (Tensor Tensor("level1/kernel:0", shape=(3, 3, 3, 16), dtype=float32_ref) is not an element of this graph.)
The code basically:
while <models to train>:
    K.clear_session()
    model = modeldef.build()  # everything that has a tensor goes here and just here
    # create generators from directories
    opt = Adam(lr=0.001, decay=0.001 / epochs)
    model.compile(...)
    H = model.fit_generator(...)
    model.save(file_path)  # --> here it crashes
No matter how deep the network is, a super simple ConvNet like this makes the code fail when saving:
class SuperSimpleCNN:
    def __init__(self, img_size, depth):
        self.img_size = img_size
        self.depth = depth

    def build(self):
        init = Input(shape=(self.img_size, self.img_size, self.depth))
        x = Convolution2D(16, (3, 3), padding='same', name='level1')(init)
        x = Activation('relu')(x)
        out = Convolution2D(self.depth, (5, 5), padding='same', name='output')(x)
        model = Model(init, out)
        return model
Looking at similar problems, I understand the issue is that Keras shares a global session and that graphs from different models can't be mixed.
But I don't understand why using K.clear_session() before each model makes the save operation fail when iteration > 1, nor the difference between Tensor and Variable:
<tf.Variable 'level1/kernel:0' shape=(3, 3, 3, 16) dtype=float32_ref> cannot be interpreted as a Tensor
Can anyone help?
Thank you.
My mistake, I was importing the wrong package:
from tensorflow.python.keras import backend as K
instead of
import keras.backend as K
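In other words, the backend you call clear_session() on has to come from the same Keras package the models are built with. A minimal self-contained sketch of the working loop, assuming standalone Keras is used throughout:

import numpy as np
import keras.backend as K                      # same package as the layers/Model below
from keras.layers import Input, Convolution2D, Activation
from keras.models import Model

def build_model(img_size=8, depth=3):
    inp = Input(shape=(img_size, img_size, depth))
    x = Convolution2D(16, (3, 3), padding='same', name='level1')(inp)
    x = Activation('relu')(x)
    out = Convolution2D(depth, (5, 5), padding='same', name='output')(x)
    return Model(inp, out)

data = np.random.rand(4, 8, 8, 3)
for i in range(2):                             # the second iteration used to crash on save
    K.clear_session()                          # resets the graph/session shared by Keras
    model = build_model()
    model.compile(optimizer='adam', loss='mse')
    model.fit(data, data, epochs=1, verbose=0)
    model.save('model_%d.h5' % i)              # saves fine with consistent imports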

tensorflow 2.0, variable_scope(), TypeError: __call__() got an unexpected keyword argument 'partition_info'

I have converted a CNN model from tf1.x to tf2.0 using tf_upgrade_v2, but when I use the converted model, I get this error:
File "/home/hsw/virtual_env/tf2.0/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 2492, in default_variable_creator
import_scope=import_scope, distribute_strategy=distribute_strategy)
File "/home/hsw/virtual_env/tf2.0/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 216, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "/home/hsw/virtual_env/tf2.0/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 422, in __init__
constraint=constraint)
File "/home/hsw/virtual_env/tf2.0/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 545, in _init_from_args
initial_value() if init_from_fn else initial_value,
File "/home/hsw/virtual_env/tf2.0/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 886, in <lambda>
shape.as_list(), dtype=dtype, partition_info=partition_info)
TypeError: __call__() got an unexpected keyword argument 'partition_info'
It seems like something is wrong in variables.py. The converted model looks like this:
with tf.compat.v1.variable_scope('backbone', reuse=tf.compat.v1.AUTO_REUSE):
    net = tf.compat.v1.layers.separable_conv2d(inputs, 16, 3, 1, 'same',
                                               activation=tf.nn.elu,
                                               depthwise_initializer=tf.keras.initializers.glorot_normal(),
                                               pointwise_initializer=tf.keras.initializers.glorot_normal(),
                                               name='conv1')
    net = tf.compat.v1.layers.max_pooling2d(net, 2, 2, padding='same')
    net = tf.compat.v1.layers.separable_conv2d(net, 32, 3, 1, 'same',
                                               activation=tf.nn.elu,
                                               depthwise_initializer=tf.keras.initializers.glorot_normal(),
                                               pointwise_initializer=tf.keras.initializers.glorot_normal(),
                                               name='conv2')
What should I do to solve this problem?
This is a bug, and it's been already filed on github. See the discussion there: https://github.com/tensorflow/tensorflow/issues/26665#issuecomment-472950222
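Until the fix lands, one possible workaround (my own sketch, assuming the error comes from the TF2 tf.keras initializers not accepting the partition_info argument that tf.compat.v1.layers pass) is to use the compat.v1 initializers instead:

import tensorflow as tf

# Workaround sketch: tf.compat.v1 initializers still accept partition_info,
# so they work with tf.compat.v1.layers where the tf.keras (TF2) ones may not.
def backbone(inputs):
    with tf.compat.v1.variable_scope('backbone', reuse=tf.compat.v1.AUTO_REUSE):
        net = tf.compat.v1.layers.separable_conv2d(
            inputs, 16, 3, 1, 'same',
            activation=tf.nn.elu,
            depthwise_initializer=tf.compat.v1.glorot_normal_initializer(),
            pointwise_initializer=tf.compat.v1.glorot_normal_initializer(),
            name='conv1')
        net = tf.compat.v1.layers.max_pooling2d(net, 2, 2, padding='same')
        return net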

Save Model for Serving but "ValueError: Both labels and logits must be provided." when trying to export model

I wanted to save a model to do some predictions on specific pictures. Here is my serving function:
def _serving_input_receiver_fn():
    # Note: only handles one image at a time
    feat = tf.placeholder(tf.float32, shape=[None, 120, 50, 1])
    return tf.estimator.export.TensorServingInputReceiver(features=feat, receiver_tensors=feat)
and here is where I export the model:
export_dir_base = os.path.join(FLAGS.model_dir, 'export')
export_dir = estimator.export_savedmodel(
    export_dir_base, _serving_input_receiver_fn)
But I get the following error:
ValueError: Both labels and logits must be provided.
I don't understand this error, since the serving function should just create a placeholder that I can later feed images through to make predictions with the saved model.
Here is the whole traceback:
Traceback (most recent call last):
File "/home/cezary/models/official/mnist/mnist_tpu.py", line 222, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/home/cezary/models/official/mnist/mnist_tpu.py", line 206, in main
export_dir_base, _serving_input_receiver_fn)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 650, in export_savedmodel
mode=model_fn_lib.ModeKeys.PREDICT)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 703, in _export_saved_model_for_mode
strip_default_attrs=strip_default_attrs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 811, in _export_all_saved_models
mode=model_fn_lib.ModeKeys.PREDICT)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 1971, in _add_meta_graph_for_mode
mode=mode)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 879, in _add_meta_graph_for_mode
config=self.config)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 1992, in _call_model_fn
features, labels, mode, config)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 1107, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2203, in _model_fn
features, labels, is_export_mode=is_export_mode)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 1131, in call_without_tpu
return self._call_model_fn(features, labels, is_export_mode=is_export_mode)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 1337, in _call_model_fn
estimator_spec = self._model_fn(features=features, **kwargs)
File "/home/cezary/models/official/mnist/mnist_tpu.py", line 95, in model_fn
cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_impl.py", line 156, in sigmoid_cross_entropy_with_logits
labels, logits)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 1777, in _ensure_xent_args
raise ValueError("Both labels and logits must be provided.")
ValueError: Both labels and logits must be provided.
Never mind the MNIST naming; I just reused the structure of the code and didn't rename it.
Thanks for any help!
(I can't comment with a brand new account.) I was able to replicate your error by setting features and receiver_tensors to the same value, but I don't think your _serving_input_receiver_fn is implemented correctly. Can you follow the example here?
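For illustration, here is a hedged sketch of what a serving input receiver typically looks like when the model_fn expects a dict of features; the 'images' and 'x' keys below are assumptions for the example, not taken from your code:

import tensorflow as tf

def _serving_input_receiver_fn():
    # Placeholder for a batch of images supplied at serving time
    # (shape copied from the question; the dict keys are assumptions).
    images = tf.placeholder(tf.float32, shape=[None, 120, 50, 1], name='images')
    receiver_tensors = {'images': images}   # what the SavedModel signature exposes
    features = {'x': images}                # what the model_fn expects to receive
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)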

Conv1D with data_format channels_first yields error on Keras

I am trying to define a basic ConvNet using one Conv1D operation as follows:
n_class = 10
# channels last (default)
input_shape = (100, 1)
inp = Input(shape=input_shape)
x = Conv1D(4, kernel_size=9, activation=activations.relu, padding='same')(inp)
x = MaxPool1D(pool_size=5)(x)
x = Flatten()(x)
out = Dense(n_class, activation='softmax')(x)
model = Model(inputs=inp, outputs=out)
This works fine (it uses data_format='channels_last' by default). However, if I instead want to use data_format='channels_first':
# "channels_first" inputs with shape (batch, channels, length).
input_shape = (1, 100)
inp = Input(shape=input_shape)
x = Conv1D(4, kernel_size=9, activation=activations.relu, data_format='channels_first', padding='same')(inp)
x = MaxPool1D(pool_size=5, data_format='channels_first')(x)
x = Flatten()(x)
out = Dense(n_class, activation='softmax')(x)
model = Model(inputs=inp, outputs=out)
I get the following error when defining the Conv1D layer:
Traceback (most recent call last):
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevd.py", line 1596, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevd.py", line 974, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/edward/Desktop/baselines/cnn_mel.py", line 720, in <module>
model = get_1d_dummy_model(params_learn=params_learn, params_extract=params_extract)
File "/Users/edward/Desktop/baselines/architectures.py", line 71, in get_1d_dummy_model
x = Conv1D(4, kernel_size=9, activation=activations.relu, data_format='channels_first', padding='same')(inp)
File "/Users/edward/miniconda3/envs/baseline/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/Users/edward/miniconda3/envs/baseline/lib/python3.6/site-packages/keras/layers/convolutional.py", line 337, in __init__
**kwargs)
TypeError: __init__() got multiple values for keyword argument 'data_format'
Any insight into what is being done wrong? Thanks!
It seems that you are not using the most recent code (not just the most recent release). For the Conv1D layer, data_format='channels_first' is not supported even in the most recent release, 2.1.6. You will need to clone and use the code from the master branch. Support was added by this commit on 5/7/2018. The documentation is always synced to master, which can be confusing. The idea (from the Keras creator François Chollet) is that
versions only exist to force PyPI users to upgrade. They are not meaningful. You should be constantly synced to master.
You can find some old Keras documentation here.
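If syncing to master is not an option, one workaround (my own sketch, not part of the original answer) is to accept channels_first data at the input but transpose it to channels_last inside the model with a Permute layer:

from keras import activations
from keras.layers import Input, Conv1D, MaxPool1D, Flatten, Dense, Permute
from keras.models import Model

n_class = 10
# "channels_first" input with shape (channels, length)
inp = Input(shape=(1, 100))
x = Permute((2, 1))(inp)   # -> (length, channels), i.e. channels_last
x = Conv1D(4, kernel_size=9, activation=activations.relu, padding='same')(x)
x = MaxPool1D(pool_size=5)(x)
x = Flatten()(x)
out = Dense(n_class, activation='softmax')(x)
model = Model(inputs=inp, outputs=out)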