How can a Tensorflow feature_column be used in conjunction with a Keras model?
E.g. for a Tensorflow estimator, we can use an embedding column from Tensorflow Hub:
embedded_text_feature_column = hub.text_embedding_column(
    key="sentence",
    module_spec="https://tfhub.dev/google/nnlm-en-dim128/1")

estimator = tf.estimator.DNNClassifier(
    hidden_units=[100],
    feature_columns=[embedded_text_feature_column],
    n_classes=2,
    optimizer=tf.train.AdamOptimizer(learning_rate=0.001))
However, I would like to use the TF Hub text_embedding_column as input to a Keras model. E.g.
net = tf.keras.layers.Input(...)  # use the embedding column here
net = tf.keras.layers.Flatten()(net)
net = Dense(100, activation='relu')(net)
net = Dense(2)(net)
Is this possible?
The answer seems to be that you don't use feature columns. Keras comes with its own set of preprocessing functions for images and text, so you can use those.
So basically tf.feature_column is geared toward the high-level Estimator API, while the tf.keras.preprocessing functions are used with tf.keras models.
Here is a link to the section on preprocessing data in the keras documentation.
https://keras.io/preprocessing/text/
Here is another Stackoverflow post that has an example of this approach.
Add Tensorflow pre-processing to existing Keras model (for use in Tensorflow Serving)
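For illustration, here is a minimal sketch of that Keras text-preprocessing route; the sentences, vocabulary size, and sequence length are made up for the example:
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Flatten, Dense

sentences = ["a small example sentence", "another example sentence"]  # placeholder data
labels = np.array([0, 1])

# Tokenize and pad the raw text instead of using a feature column
tokenizer = Tokenizer(num_words=1000)
tokenizer.fit_on_texts(sentences)
sequences = pad_sequences(tokenizer.texts_to_sequences(sentences), maxlen=10)

model = Sequential([
    Embedding(input_dim=1000, output_dim=128, input_length=10),
    Flatten(),
    Dense(100, activation='relu'),
    Dense(2, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(sequences, labels, epochs=1)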
The Keras functional API is a viable way to do this, but if you want to use feature_columns, this tutorial shows you how:
https://www.tensorflow.org/beta/tutorials/keras/feature_columns
Basically it's this DenseFeatures layer that does the job:
from tensorflow.keras import layers  # needed for layers.Dense below

feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
model = tf.keras.Sequential([
    feature_layer,
    layers.Dense(128, activation='relu'),
    layers.Dense(128, activation='relu'),
    layers.Dense(1, activation='sigmoid')
])
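For context, a rough sketch of how the feature_columns list might be defined (before the DenseFeatures layer above is created) and how the model could be fed; the column names and toy data here are made up for illustration:
import tensorflow as tf

# Hypothetical columns; in practice these describe your own dataset
feature_columns = [
    tf.feature_column.numeric_column("age"),
    tf.feature_column.indicator_column(
        tf.feature_column.categorical_column_with_vocabulary_list(
            "thal", ["fixed", "normal", "reversible"])),
]

# DenseFeatures expects a dict of feature name -> tensor, e.g. from tf.data
ds = tf.data.Dataset.from_tensor_slices((
    {"age": [25.0, 60.0], "thal": ["normal", "fixed"]},
    [0, 1],
)).batch(2)

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(ds, epochs=1)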
I'm adding data augmentation to my tensorflow model like so:
data_augmentation = keras.Sequential([
    layers.experimental.preprocessing.RandomRotation(factor=0.4, fill_mode="wrap"),
    layers.experimental.preprocessing.RandomTranslation(height_factor=0.2, width_factor=0.2, fill_mode="wrap"),
    layers.experimental.preprocessing.RandomFlip("horizontal"),
    layers.experimental.preprocessing.RandomContrast(factor=0.2),
    layers.experimental.preprocessing.RandomHeight(factor=0.2),
    layers.experimental.preprocessing.RandomWidth(factor=0.2)
])
input_shape = (299, 299, 3)
inceptionV3_base = tf.keras.applications.InceptionV3(
    input_shape=input_shape,
    include_top=False,
    weights='imagenet'
)
tf_model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=input_shape),
    data_augmentation,
    inceptionV3_base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1024, activation='relu'),
    tf.keras.layers.Dense(len(classes), activation='softmax')
])
Adding the data_augmentation layer to the model slows down training by 13x. Am I using the Keras preprocessing layers correctly?
I had the same issue; in my case ptxas was missing (installing the nvidia-cuda-toolkit package fixed it).
There are two ways of adding data augmentation:
1- Inside the model, just like the way you did.
2- Outside the model, before training, using tf.data.Dataset.map().
Maybe trying option 2 could make your training faster; a rough sketch is shown after the links below. Try it!
More details here: https://keras.io/guides/preprocessing_layers/
There are many other ways of optimizing training, such as the way you feed data into your model. Are you using tf.data.Dataset methods like cache() and prefetch()? More details can be found here: https://www.tensorflow.org/tutorials/load_data/images#configure_the_dataset_for_performance
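A minimal sketch of option 2 combined with cache() and prefetch(); train_ds is assumed to be an existing tf.data.Dataset of (image, label) pairs, and the augmentation layers are just examples:
import tensorflow as tf
from tensorflow.keras import layers

AUTOTUNE = tf.data.experimental.AUTOTUNE

data_augmentation = tf.keras.Sequential([
    layers.experimental.preprocessing.RandomFlip("horizontal"),
    layers.experimental.preprocessing.RandomRotation(factor=0.4),
])

# Apply augmentation in the input pipeline instead of inside the model,
# then overlap preprocessing and training with cache() and prefetch()
train_ds = (
    train_ds
    .cache()
    .map(lambda x, y: (data_augmentation(x, training=True), y),
         num_parallel_calls=AUTOTUNE)
    .prefetch(AUTOTUNE)
)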
I want to use ResNet50 with Imagenet weights.
The last layer of ResNet50 is (from here)
x = layers.Dense(1000, activation='softmax', name='fc1000')(x)
I need to keep the weights of this layer but remove the softmax function.
I want to manually change it so my last layer looks like this
x = layers.Dense(1000, name='fc1000')(x)
but the weights stay the same.
Currently I call my net like this
resnet = Sequential([
    Input(shape=(224,224,3)),
    ResNet50(weights='imagenet', input_shape=(224,224,3))
])
I need the Input layer because otherwise the model.compile says that placeholders aren't filled.
Generally there are two ways of achieving this:
Quick way - supported functions:
To change the final layer's activation function, you can pass the classifier_activation argument.
So in order to get rid of the activation altogether, the model can be built like this:
import tensorflow as tf

resnet = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224,224,3)),
    tf.keras.applications.ResNet50(
        weights='imagenet',
        input_shape=(224,224,3),
        pooling="avg",
        classifier_activation=None
    )
])
This, however, is not going to work if you want a different function that is not supported by the Keras classifier_activation parameter (e.g. a custom activation function).
In that case you can use the workaround below:
Long way - copy the model's weights
This solution proposes copying the original model's weights onto your custom one. This approach works because, apart from the activation function, you are not changing the model's architecture.
You need to:
1. Download original model.
2. Save its weights.
3. Declare your modified version of the model (in your case, without the activation function).
4. Set the weights of the new model.
The snippet below explains this concept in more detail:
import tensorflow as tf

# 1. Download the original ResNet
resnet = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224,224,3)),
    tf.keras.applications.ResNet50(
        weights='imagenet',
        input_shape=(224,224,3),
        pooling="avg"
    )
])

# 2. Hold the weights in memory:
imagenet_weights = resnet.get_weights()

# 3. Declare the model, but without softmax
resnet_no_softmax = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224,224,3)),
    tf.keras.applications.ResNet50(
        include_top=False,
        weights='imagenet',
        input_shape=(224,224,3),
        pooling="avg"
    ),
    tf.keras.layers.Dense(1000, name='fc1000')
])

# 4. Pass the ImageNet weights onto the second ResNet
resnet_no_softmax.set_weights(imagenet_weights)
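As an optional sanity check (a sketch, assuming both models are built as above), applying softmax to the new model's logits should reproduce the original model's probabilities:
import numpy as np

x = np.random.rand(1, 224, 224, 3).astype("float32")
probs = resnet.predict(x)              # original model: softmax probabilities
logits = resnet_no_softmax.predict(x)  # modified model: raw logits
print(np.allclose(probs, tf.nn.softmax(logits).numpy(), atol=1e-5))  # expect True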
Hope this helps!
When using SavedModels in Tensorflow 2.0, is it possible to access activations from intermediate layers? For example, with one of the models here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md, I can run, for example,
model = tf.saved_model.load('faster_rcnn_inception_v2_coco_2018_01_28/saved_model').signatures['serving_default']
outputs = model(input_tensor)
to get output predictions and bounding boxes. I would like to be able to access layers other than the outputs, but there doesn't seem to be any documentation for Tensorflow 2.0 on how to do this. The downloaded models also include checkpoint files, but there doesn't seem to be very good documentation for how to load those with Tensorflow 2.0 either...
If you are generating saved models using TensorFlow 2.0, it is possible to extract individual layers. But the model which you are referring to has been saved in TensorFlow 1.x. With TF 1.x saved models, you cannot individually extract layers.
Here is an example of how you can extract layers from a saved model in TensorFlow 2.0:
import tensorflow as tf
import numpy as np

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(100,)),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

# Compile and fit the model
model.save('save_model', save_format='tf')
Then load the model as shown.
model = tf.keras.models.load_model('save_model')
layer1 = model.get_layer(index=1)
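If what you actually need are the intermediate activations (as in the original question), one way, assuming a Keras model like the one above, is to build a small feature-extractor model on top of the loaded one:
import numpy as np

# Map the original inputs to the chosen layer's output
activation_model = tf.keras.Model(inputs=model.input,
                                  outputs=model.get_layer(index=1).output)

x = np.random.rand(4, 100).astype("float32")  # dummy batch matching input_shape=(100,)
intermediate_activations = activation_model(x)
print(intermediate_activations.shape)          # (4, 10)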
I followed "Tensorflow for poets" in 2017 and retrained my own collection of images and created "retrained_graph.pb" and "retrained_labels.txt"
Today I need to run this model on Tensorflow Serving.
There are two options to accomplish this:
Upgrade the old model, save it in the "saved_model" format, and use it with Tensorflow Serving - I found some SO postings on how to accomplish this (this or that).
Use the latest TensorFlow Hub with Keras (https://www.tensorflow.org/tutorials/images/hub_with_keras)
I am looking for the best option among these, or a new one.
In my opinion, using either TensorFlow Hub or the pre-trained models inside tf.keras.applications is preferable because, in either case, few code changes are required to save the model in a format compatible with TensorFlow Serving.
The code for reusing the pre-trained MobileNetV2 model from tf.keras.applications is shown below:
# Import MobileNetV2 with pre-trained weights AND exclude the fully connected layers
import tensorflow as tf
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras import Model

IMG_SIZE = 224
IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)

# Create the base model from the pre-trained model MobileNetV2
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')

# Add a Global Average Pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)

# Add an output layer
my_mobilenetv2_output = Dense(5, activation='softmax')(x)

# Combine into the whole neural network
my_mobilenetv2_model = Model(inputs=base_model.input, outputs=my_mobilenetv2_output)
You can save the model using the code given below:
import os

version = 1
MODEL_DIR = 'Image_Classification_Model'
export_path = os.path.join(MODEL_DIR, str(version))

tf.keras.models.save_model(model=my_mobilenetv2_model, filepath=export_path)
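For comparison, the TensorFlow Hub route mentioned above looks roughly like this; the module URL, the number of classes, and the export path are placeholders you would adapt:
import tensorflow as tf
import tensorflow_hub as hub

# Reuse a feature-vector module from TF Hub as a Keras layer
feature_extractor = hub.KerasLayer(
    "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4",
    input_shape=(224, 224, 3),
    trainable=False)

hub_model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(5, activation='softmax')
])

# Saved the same way, so it is directly usable with TensorFlow Serving
tf.keras.models.save_model(hub_model, 'Hub_Image_Classification_Model/1')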
I am using a bidirectional RNN in Keras and need to use TensorFlow's LazyAdamOptimizer. I need to do gradient normalization. How can I implement gradient normalization with TensorFlow's LazyAdamOptimizer and then keep using the functional Keras model?
I am training an unsupervised RNN to predict an input sequence of length 10. The problem is that I am using a Keras functional model. Because of the sparsity of the embedding layer I need to use TensorFlow's LazyAdamOptimizer, which is not a default optimizer in Keras. When using a default Keras optimizer I can do gradient normalization just by setting the argument clipnorm=1 in the optimizer function. Because I am using LazyAdam I need to do this with TensorFlow and then pass it back to my Keras model, but I can't get the code going.
# model architecture
model_input = Input(shape=(seq_len, ))
embedding_a = Embedding(len(port_fwd_dict), 50, input_length=seq_len, mask_zero=True)(model_input)
lstm_a = Bidirectional(GRU(25, return_sequences=True, implementation=2, reset_after=True, recurrent_activation='sigmoid'), merge_mode="concat")(embedding_a)
dropout_a = Dropout(0.2)(lstm_a)
lstm_b = Bidirectional(GRU(25, return_sequences=False, activation="relu", implementation=2, reset_after=True, recurrent_activation='sigmoid'), merge_mode="concat")(dropout_a)
dropout_b = Dropout(0.2)(lstm_b)
dense_layer = Dense(100, activation="linear")(dropout_b)
dropout_c = Dropout(0.2)(dense_layer)
model_output = Dense(len(port_fwd_dict)-1, activation="softmax")(dropout_c)

# trying to implement gradient normalization
optimizer = tf.contrib.opt.LazyAdamOptimizer()
optimizer = tf.contrib.estimator.clip_gradients_by_norm(optimizer, 1)
loss = tf.reduce_mean(categorical_crossentropy(model_input, model_output))
train_op = optimizer.minimize(loss, tf.train.get_global_step())

model = Model(inputs=model_input, outputs=model_output)
model.compile(optimizer=train_op, loss='categorical_crossentropy', metrics=['categorical_accuracy'])
history = model.fit(X_train, Y_train, epochs=epochs, batch_size=batch_size, validation_split=validation_split, class_weight='auto')
I get the following error: NameError: name 'categorical_crossentropy' is not defined
But even if this error is solved, I do not know if this code will work, because I need to use the Keras function model.compile, and in this function a loss needs to be specified. But when I do this in the TensorFlow part above, it is not working.
I need a way to do gradient normalization while still using my normal Keras functional model.
Maybe you can try my implementation of a lazy optimizer:
https://github.com/bojone/keras_lazyoptimizer
It is a pure Keras implementation, wrapping an existing optimizer to make a lazy version of it.