Keras: MLP on CPU Reproducibility - tensorflow

I'm building and testing a simple MLP model, but I'm running into an issue with Keras reproducibility: I am trying to set up my neural network so that the prediction outputs won't change between runs.
I have already followed the Keras guide online as well as this post (Reproducible results using Keras with TensorFlow backend). I am running Keras on my local machine with the TensorFlow backend and the following versions:
tensorflow 2.0.0-alpha0,
keras 2.2.4-tf,
numpy 1.16.0
import os
os.environ['PYTHONHASHSEED']=str(0)
import random
random.seed(0)
from numpy.random import seed
seed(1)
import tensorflow as tf
tf.compat.v1.set_random_seed(2)
from keras import backend as K
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
K.set_session(sess)
import numpy as np
from tensorflow.python.keras.layers import Dropout, BatchNormalization
from tensorflow.python.keras.optimizers import Adam
class Machine_Learning_Classifier_Keras(object):
    @classmethod
    def _get_classifier(cls, n_input_features=None, **params):
        KerasClassifier = tf.keras.wrappers.scikit_learn.KerasClassifier
        Dense = tf.keras.layers.Dense
        Sequential = tf.keras.models.Sequential

        sk_params = {"epochs": 200, "batch_size": 128, "shuffle": False}

        def create_model(optimizer='adam', init='he_normal'):
            # create model
            model = Sequential()
            model.add(BatchNormalization())
            model.add(Dropout(0.2))
            model.add(Dense(500, input_dim=4, kernel_initializer=init, activation='relu'))
            model.add(BatchNormalization())
            model.add(Dropout(0.2))
            model.add(Dense(250, kernel_initializer=init, activation='relu'))
            model.add(BatchNormalization())
            model.add(Dropout(0.2))
            model.add(Dense(500, kernel_initializer=init, activation='relu'))
            model.add(Dense(1, kernel_initializer=init, activation='sigmoid'))
            # Compile model
            model.compile(loss='binary_crossentropy', optimizer=Adam(lr=3e-3, decay=0.85), metrics=['accuracy'])
            return model

        return KerasClassifier(build_fn=create_model, **sk_params)

if __name__ == "__main__":
    X = np.asarray([[0.0, 0.0], [1.0, 1.0], [2.0, 2.5], [1.5, 1.6]])
    y = np.asarray([0, 0, 1, 1])

    nn = Machine_Learning_Classifier_Keras._get_classifier()
    nn.fit(X, y, sample_weight=np.asarray([0, 0, 1, 1]))

    values = np.asarray([[0.5, 0.5], [0.6, 0.5], [0.8, 1.0], [0.5, 0.5], [0.5, 0.5], [0.5, 0.5], [0.5, 0.5], [0.5, 0.5]])

    probas = nn.predict_proba(values)
    print(probas)
I would expect the predict_proba outputs to stay the same between runs; however, I am getting the following for two successive runs (results will vary):
Run 1:
[[0.9439231 0.05607685]
[0.91351616 0.08648387]
[0.06378722 0.9362128 ]
[0.9439231 0.05607685]
[0.9439231 0.05607685]
[0.9439231 0.05607685]
[0.94392323 0.05607677]
[0.94392323 0.05607677]]
Run 2:
[[0.94391584 0.05608419]
[0.91350436 0.08649567]
[0.06378281 0.9362172 ]
[0.94391584 0.05608419]
[0.94391584 0.05608419]
[0.94391584 0.05608419]
[0.94391584 0.05608416]
[0.94391584 0.05608416]]

I ended up figuring out what the issue is, but I'm not sure how to resolve it: it has something to do with the first BatchNormalization() layer, which is supposed to standardize the inputs. If you remove that layer the results are entirely reproducible, but something in the BatchNormalization() implementation leads to non-reproducible behavior.
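A minimal sketch of one possible workaround (my assumption, not a confirmed fix): standardize the inputs outside the model, for example with scikit-learn's StandardScaler, so the leading BatchNormalization()/Dropout pair in create_model can be dropped:
# Sketch, assuming the first BatchNormalization()/Dropout pair has been removed from create_model
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)        # fit the scaler on the training data only
values_scaled = scaler.transform(values)  # reuse the same statistics at prediction time

nn = Machine_Learning_Classifier_Keras._get_classifier()
nn.fit(X_scaled, y, sample_weight=np.asarray([0, 0, 1, 1]))
probas = nn.predict_proba(values_scaled)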

If you run the mentioned code twice, it will show the behavior you have just described, because the model is retrained every time and it is not guaranteed to reach the same local minimum on each run.
However, if you train your model only once, save the weights, and use those weights to predict the output, then you will get the same results every time for the same data.
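A minimal sketch of that idea, reusing the classifier from the question (the file name is just an example; the scikit-learn wrapper exposes the fitted Keras model as .model after fit):
# Train once and persist the fitted model
nn = Machine_Learning_Classifier_Keras._get_classifier()
nn.fit(X, y, sample_weight=np.asarray([0, 0, 1, 1]))
nn.model.save('trained_mlp.h5')

# In any later run: load the saved model instead of retraining
from tensorflow.keras.models import load_model
trained = load_model('trained_mlp.h5')
print(trained.predict(values))  # same outputs for the same inputs, run after run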


Great validation accuracy but terrible prediction

I have got the following CNN:
import os
import numpy as np
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.models import Sequential
from keras.utils import to_categorical
from sklearn.preprocessing import LabelEncoder
from tqdm import tqdm
# Load the data
data_dir = PATH_DIR
x_train = []
y_train = []
total_files = 0
for subdir in os.listdir(data_dir):
    subdir_path = os.path.join(data_dir, subdir)
    if os.path.isdir(subdir_path):
        total_files += len([f for f in os.listdir(subdir_path) if f.endswith('.npy')])

with tqdm(total=total_files, unit='file') as pbar:
    for subdir in os.listdir(data_dir):
        subdir_path = os.path.join(data_dir, subdir)
        if os.path.isdir(subdir_path):
            for image_file in os.listdir(subdir_path):
                if image_file.endswith('.npy'):
                    image_path = os.path.join(subdir_path, image_file)
                    image = np.load(image_path)
                    x_train.append(image)
                    y_train.append(subdir)
                    pbar.update()

x_train = np.array(x_train)
y_train = np.array(y_train)
# Preprocess the labels
label_encoder = LabelEncoder()
y_train = label_encoder.fit_transform(y_train)
y_train = to_categorical(y_train)
# Create the model
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(57, 57, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(8, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10)
model.save('GeneratedModels/units_model_np.h5')
And then the following function, which is called within a loop about 15 times a second, where image is a numpy array:
def guess_unit(image, classList):
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    model = tf.keras.models.load_model(MODEL_PATH)
    image = np.expand_dims(image, axis=0)
    prediction = model.predict(image, verbose=0)
    index = np.argmax(prediction)
    # Return the predicted unit
    return classList[index]
The problem is that when I train the model the accuracy is very high (99.99976%), but when I use predict the output is terribly wrong, to the point that it does not make any sense. Sometimes the image received will be the same, yet predict will return two different things.
I have no idea what I am doing wrong. It's the first time I am tinkering with neural networks.
I have tried to use model.predict with the images it was trained on and it always gets them right. It is only when it receives dynamic images that it is terribly wrong.
NOTE: I have 8 classes and it was trained using about 13000 images.
Generally, to get a realistic picture of performance beyond your training data, you have to split your data into training, testing and validation sets (which I see you haven't done). This can be done manually or by adding validation_split to your fit call.
Without seeing how your loss and accuracy behave over training, it's difficult to make firm suggestions. However, it might be the case that you are underfitting or overfitting your data (I would assume you are facing overfitting here). If you are overfitting, I would suggest adding some regularization or changing your model architecture, as the one used might not be appropriate. Options would be to add regularization via Dropout or to add regularization to your weights.
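A small sketch of both suggestions applied to your architecture (the 0.5 dropout rate, the 1e-4 L2 factor and the 20% validation split are placeholders to tune, not recommended values):
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.models import Sequential
from keras.regularizers import l2

model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(57, 57, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu', kernel_regularizer=l2(1e-4)))  # L2 weight regularization
model.add(Dropout(0.5))                                                # Dropout regularization
model.add(Dense(8, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Hold out 20% of the data for validation so the train/val curves can be compared
history = model.fit(x_train, y_train, epochs=10, validation_split=0.2)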

Why keras theano backend outperforms tensorflow?

I just dove into my first deep-learning project to get familiar with keras and tf.
It is a multi-label classification problem where I'm using a simple convolutional network to identify individual chemical compounds from XRD-patterns.
It took me some time, but the results really blew my mind. I did not expect my simple network to perform that well in terms of accuracy.
Then I realized that I had been using Theano as the backend.
So I gave it another try with TensorFlow, and the very same input data gave terrible results, even worse than a random forest classifier.
I'm pretty much clueless about what is going on there.
I guess I'm doing something wrong, but I just couldn't figure out what ...
Anyway, here's the code (the input data is provided below...)
Theano_vs_TF.py:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import os
os.environ['KERAS_BACKEND']='theano'
from keras.models import Model
from keras.models import Sequential
import keras as K
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
X=np.loadtxt('X.txt')
Y=np.loadtxt('Y.txt')
X3d = np.expand_dims(X, axis=-1)
# start with TF-model
tf_model = keras.Sequential()
wreg = tf.keras.regularizers.l2(l=0) # dont use for now
tf_model.add(layers.Conv1D(32, 8, strides=8, padding='same',
                           input_shape=(180, 1), activation='relu', kernel_regularizer=wreg))
tf_model.add(layers.Conv1D(32, 5, strides=5, padding='same',
                           activation='relu', kernel_regularizer=wreg))
tf_model.add(layers.Conv1D(16, 3, strides=3, padding='same',
                           activation='relu', kernel_regularizer=wreg))
tf_model.add(layers.Flatten())
tf_model.add(layers.Dense(512, activation='relu', kernel_regularizer=wreg))
tf_model.add(layers.Dense(29, activation='sigmoid', kernel_regularizer=wreg))
tf_optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
tf_model.compile(loss='binary_crossentropy',
                 optimizer=tf_optimizer,
                 metrics=['accuracy'])
tf_model.summary()
# train tf-model
tf_history = tf_model.fit(X3d, Y, batch_size=128,
                          epochs=50, verbose=1, validation_split=0.3)
plt.plot(tf_history.history['accuracy'])
plt.plot(tf_history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
# now compare to theano model
th_model = Sequential() #old way
wreg = K.regularizers.l2(l=0.0) # 1e-3 was way too much, no hints for overfitting so far
th_model.add(K.layers.Conv1D(32, 8, strides=8, padding='same',
                             input_shape=(180, 1), activation='relu',
                             kernel_regularizer=wreg))
th_model.add(K.layers.Conv1D(32, 5, strides=5,
                             padding='same', activation='relu', kernel_regularizer=wreg))
th_model.add(K.layers.Conv1D(16, 3, strides=3, padding='same', activation='relu', kernel_regularizer=wreg))
th_model.add(K.layers.Flatten())
th_model.add(K.layers.Dense(512, activation='relu', kernel_regularizer=wreg))
th_model.add(K.layers.Dense(29, activation='sigmoid'))
# Define optimizer
optimizer = K.optimizers.adam(lr=1e-3)
# Compile model
th_model.compile(loss='binary_crossentropy',
                 optimizer=optimizer,
                 metrics=['accuracy'])
th_model.summary()
# train th model and plot metrics
th_history = th_model.fit(X3d, Y, batch_size=128,
                          epochs=50, verbose=1, validation_split=0.3)
plt.plot(th_history.history['acc'])
plt.plot(th_history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
Uguu links (will expire after 24h)
X.tar.gz
Y.tar.gz
TensorFlow learning curve: (plot omitted)
Theano learning curve: (plot omitted)

SavedModel - TFLite - SignatureDef - TensorInfo - Get intermediate Layer outputs

I would like to get the intermediate layer outputs of a TFLite graph, something along the lines of the post below.
Visualize TFLite graph and get intermediate values of a particular node?
The above solution works on frozen graphs only. Since SavedModel is the preferred way of serializing the model in TF 2.0, I would like a solution based on a saved model. I tried passing --output_arrays to "toco" with savedModelDir as input, but that did not help.
From the documentation, it looks like SignatureDefs in SavedModel are the way to achieve this, but I could not get it working.
x = test_images[0:1]
output = model.predict(x, batch_size=1)
signature_def = signature_def_utils.build_signature_def(
inputs={name:"x:0", dtype: DT_FLOAT, tensor_shape: (1, 28,28, 1)})
outputs = {{name: "output:0", dtype: DT_FLOAT, tensor_shape: (1, 10)},
{name:"Dense_1:0", dtype: DT_FLOAT, tensor_shape: (1, 10)}})
tf.saved_model.save(model, './tf-saved-model-sigdefs', signature_def)
Can you share an example usage of SignatureDefs for this purpose?
BTW, I have been playing around with the below tutorial for this experiment.
https://www.tensorflow.org/beta/tutorials/images/intro_to_cnns
Below is the best solution that I have so far.
from __future__ import absolute_import, division, print_function
#!pip install -q tensorflow==2.0.0-alpha0
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
(train_images, train_labels), (test_images, test_labels) = datasets.mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1))
test_images = test_images.reshape((10000, 28, 28, 1))
# Normalize pixel values to be between 0 and 1
train_images, test_images = train_images / 255.0, test_images / 255.0
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
model.summary()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5)
test_loss, test_acc = model.evaluate(test_images, test_labels)
#Saving in saved_model format
tf.keras.experimental.export_saved_model(model, './tf-saved-model')
#Saving in TFLite FlatBuffer format
tflite_mnist_model = "mnist_model.tflite"
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open(tflite_mnist_model, "wb").write(tflite_model)
#cloning Keras model with debug outputs
outputs = [model.get_layer("dense").output, model.get_layer("dense_1").output]
model_debug = tf.keras.Model(inputs=model.inputs, outputs=outputs)
#Saving in TFLite FlatBuffer format with debug information
tflite_mnist_model = "mnist_model_debug.tflite"
converter = tf.lite.TFLiteConverter.from_keras_model(model_debug)
tflite_model = converter.convert()
open(tflite_mnist_model, "wb").write(tflite_model)
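A short usage sketch for the debug FlatBuffer produced above (this is my addition, not part of the original flow; it reuses the test_images array from the same script):
# Run the debug model and inspect both registered outputs
interpreter = tf.lite.Interpreter(model_path="mnist_model_debug.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

sample = test_images[0:1].astype('float32')
interpreter.set_tensor(input_details[0]["index"], sample)
interpreter.invoke()

for detail in output_details:
    # one entry for the intermediate "dense" layer and one for the final "dense_1" layer
    print(detail["name"], interpreter.get_tensor(detail["index"]))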
tf.lite.Interpreter has a new parameter as of TF 2.5: experimental_preserve_all_tensors. When set to True, it allows you to query the output of any node. Here is how I used it:
import numpy as np
import tensorflow as tf
import cv2
import argparse
parser = argparse.ArgumentParser(description='Use tensorflow framework to determine class')
parser.add_argument('--model-path', '-m', default='model.tflite', type=str, help='path to the model')
parser.add_argument('--nodes', '-n', default='concat,convert_scores', type=str, help='comma separated list of node names')
parser.add_argument('--image', '-i', default='test.jpg', type=str, help='image to process')
args = parser.parse_args()
# read in model
interpreter = tf.lite.Interpreter(
    args.model_path, experimental_preserve_all_tensors=True
)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# prepare image
img = cv2.imread(args.image)
inp = cv2.resize(img, tuple(input_details[0]["shape"][1:3]))
inp = inp[:, :, [2, 1, 0]]  # BGR2RGB
inp = inp[np.newaxis, :, :, :].astype(input_details[0]["dtype"])
# invoke model
interpreter.set_tensor(input_details[0]["index"], inp)
interpreter.invoke()
# write out layer output to files
for node in args.nodes.split(','):
    for tensor_details in interpreter.get_tensor_details():
        if tensor_details["name"] == node:
            tensor = interpreter.get_tensor(tensor_details["index"])
            np.save(node, tensor)
            break
You can use netron to graphically view the network described by your tflite model file; the node names can be found by clicking on a node.

How to ensure that Keras is using the GPU with the TensorFlow backend?

I've created a virtual notebook on Paperspace cloud infrastructure with a TensorFlow GPU P5000 virtual instance on the backend.
When I start to train my network, it works 2x SLOWER than on my MacBook Pro with a pure CPU runtime engine.
How can I ensure that the Keras NN is using the GPU instead of the CPU during the training process?
Please find my code below:
from tensorflow.contrib.keras.api.keras.models import Sequential
from tensorflow.contrib.keras.api.keras.layers import Dense
from tensorflow.contrib.keras.api.keras.layers import Dropout
from tensorflow.contrib.keras.api.keras import utils as np_utils
import numpy as np
import pandas as pd
# Read data
pddata= pd.read_csv('data/data.csv', delimiter=';')
# Helper function (prepare & test data)
def split_to_train_test (data):
    trainLenght = len(data) - len(data)//10
    trainData = data.loc[:trainLenght].sample(frac=1).reset_index(drop=True)
    testData = data.loc[trainLenght+1:].sample(frac=1).reset_index(drop=True)
    trainLabels = trainData.loc[:,"Label"].as_matrix()
    testLabels = testData.loc[:,"Label"].as_matrix()
    trainData = trainData.loc[:,"Feature 0":].as_matrix()
    testData = testData.loc[:,"Feature 0":].as_matrix()
    return (trainData, testData, trainLabels, testLabels)
# prepare train & test data
(X_train, X_test, y_train, y_test) = split_to_train_test (pddata)
# Convert labels to one-hot notation
Y_train = np_utils.to_categorical(y_train, 3)
Y_test = np_utils.to_categorical(y_test, 3)
# Define model in Keras
def create_model(init):
    model = Sequential()
    model.add(Dense(101, input_shape=(101,), kernel_initializer=init, activation='tanh'))
    model.add(Dense(101, kernel_initializer=init, activation='tanh'))
    model.add(Dense(101, kernel_initializer=init, activation='tanh'))
    model.add(Dense(101, kernel_initializer=init, activation='tanh'))
    model.add(Dense(3, kernel_initializer=init, activation='softmax'))
    return model
# Train the model
uniform_model = create_model("glorot_normal")
uniform_model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
uniform_model.fit(X_train, Y_train, batch_size=1, epochs=300, verbose=1, validation_data=(X_test, Y_test))
You need to run your network with log_device_placement = True set in the TensorFlow session (the line before the last in the sample code below). Interestingly enough, if you set that in a session, it will still apply when Keras does the fitting. So the code below (tested) does output the placement for each tensor. Please note, I've short-circuited the data reading because your data wasn't available, so I'm just running the network with random data. This way the code is self-contained and runnable by anyone. Another note: if you run this from a Jupyter Notebook, the output of log_device_placement will go to the terminal where the Jupyter Notebook was started, not to the notebook cell's output.
from tensorflow.contrib.keras.api.keras.models import Sequential
from tensorflow.contrib.keras.api.keras.layers import Dense
from tensorflow.contrib.keras.api.keras.layers import Dropout
from tensorflow.contrib.keras.api.keras import utils as np_utils
import numpy as np
import pandas as pd
import tensorflow as tf
# Read data
#pddata=pd.read_csv('data/data.csv', delimiter=';')
pddata = "foobar"
# Helper function (prepare & test data)
def split_to_train_test (data):
    return (
        np.random.uniform( size = ( 100, 101 ) ),
        np.random.uniform( size = ( 100, 101 ) ),
        np.random.randint( 0, size = ( 100 ), high = 3 ),
        np.random.randint( 0, size = ( 100 ), high = 3 )
    )
    trainLenght = len(data) - len(data)//10
    trainData = data.loc[:trainLenght].sample(frac=1).reset_index(drop=True)
    testData = data.loc[trainLenght+1:].sample(frac=1).reset_index(drop=True)
    trainLabels = trainData.loc[:,"Label"].as_matrix()
    testLabels = testData.loc[:,"Label"].as_matrix()
    trainData = trainData.loc[:,"Feature 0":].as_matrix()
    testData = testData.loc[:,"Feature 0":].as_matrix()
    return (trainData, testData, trainLabels, testLabels)
# prepare train & test data
(X_train, X_test, y_train, y_test) = split_to_train_test (pddata)
# Convert labels to one-hot notation
Y_train = np_utils.to_categorical(y_train, 3)
Y_test = np_utils.to_categorical(y_test, 3)
# Define model in Keras
def create_model(init):
    model = Sequential()
    model.add(Dense(101, input_shape=(101,), kernel_initializer=init, activation='tanh'))
    model.add(Dense(101, kernel_initializer=init, activation='tanh'))
    model.add(Dense(101, kernel_initializer=init, activation='tanh'))
    model.add(Dense(101, kernel_initializer=init, activation='tanh'))
    model.add(Dense(3, kernel_initializer=init, activation='softmax'))
    return model
# Train the model
uniform_model = create_model("glorot_normal")
uniform_model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
with tf.Session( config = tf.ConfigProto( log_device_placement = True ) ):
    uniform_model.fit(X_train, Y_train, batch_size=1, epochs=300, verbose=1, validation_data=(X_test, Y_test))
Terminal output (partial, it was way too long):
...
VarIsInitializedOp_13: (VarIsInitializedOp): /job:localhost/replica:0/task:0/device:GPU:0
2018-04-21 21:54:33.485870: I tensorflow/core/common_runtime/placer.cc:884]
VarIsInitializedOp_13: (VarIsInitializedOp)/job:localhost/replica:0/task:0/device:GPU:0
training/SGD/mul_18/ReadVariableOp: (ReadVariableOp): /job:localhost/replica:0/task:0/device:GPU:0
2018-04-21 21:54:33.485895: I tensorflow/core/common_runtime/placer.cc:884]
training/SGD/mul_18/ReadVariableOp: (ReadVariableOp)/job:localhost/replica:0/task:0/device:GPU:0
training/SGD/Variable_9/Read/ReadVariableOp: (ReadVariableOp): /job:localhost/replica:0/task:0/device:GPU:0
2018-04-21 21:54:33.485903: I tensorflow/core/common_runtime/placer.cc:884]
training/SGD/Variable_9/Read/ReadVariableOp: (ReadVariableOp)/job:localhost/replica:0/task:0/device:GPU:0
...
Note the GPU:0 at the end of many lines.
Tensorflow manual's relevant page: Using GPU: Logging Device Placement.
Put this near the top of your Jupyter notebook. Comment out what you don't need.
# confirm TensorFlow sees the GPU
from tensorflow.python.client import device_lib
assert 'GPU' in str(device_lib.list_local_devices())
# confirm Keras sees the GPU (for TensorFlow 1.X + Keras)
from keras import backend
assert len(backend.tensorflow_backend._get_available_gpus()) > 0
# confirm PyTorch sees the GPU
from torch import cuda
assert cuda.is_available()
assert cuda.device_count() > 0
print(cuda.get_device_name(cuda.current_device()))
NOTE: With the release of TensorFlow 2.0, Keras is now included as part of the TF API.
Originally answered here.
Considering that Keras has been built into TensorFlow since version 2.0:
import tensorflow as tf
tf.test.is_built_with_cuda()
tf.test.is_gpu_available(cuda_only = True)
NOTE: the latter method may take several minutes to run.
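On more recent TF 2.x releases, where tf.test.is_gpu_available is deprecated, a non-deprecated sketch of the same check would be (my addition, not from the original answer):
import tensorflow as tf

# An empty list means TensorFlow does not see any GPU
gpus = tf.config.list_physical_devices('GPU')
print(gpus)
assert len(gpus) > 0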

Calling a Keras model on a TensorFlow tensor but keep weights

In the Keras as a simplified interface to TensorFlow tutorial, they describe how one can call a Keras model on a TensorFlow tensor.
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(32, activation='relu', input_dim=784))
model.add(Dense(10, activation='softmax'))
# this works!
x = tf.placeholder(tf.float32, shape=(None, 784))
y = model(x)
They also say:
Note: by calling a Keras model, you are reusing both its architecture and its weights. When you are calling a model on a tensor, you are creating new TF ops on top of the input tensor, and these ops are reusing the TF Variable instances already present in the model.
I interpret this as meaning that the weights of the model will be the same in y as in model. However, to me it seems like the weights in the resulting TensorFlow node are reinitialized. A minimal example can be seen below:
import numpy as np
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense
# Create model with weight initialized to 1
model = Sequential()
model.add(Dense(1, input_dim=1, kernel_initializer='ones',
bias_initializer='zeros'))
model.compile(loss='binary_crossentropy', optimizer='adam',
metrics=['accuracy'])
# Save the weights
model.save_weights('file')
# Create another identical model except with weight initialized to 0
model2 = Sequential()
model2.add(Dense(1, input_dim=1, kernel_initializer='zeros',
bias_initializer='zeros'))
model2.compile(loss='binary_crossentropy', optimizer='adam',
metrics=['accuracy'])
# Load the weight from the first model
model2.load_weights('file')
# Call model with Tensorflow tensor
v = tf.Variable([[1, ], ], dtype=tf.float32)
node = model2(v)
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
print(sess.run(node), model2.predict(np.array([[1, ], ])))
# Prints (array([[ 0.]], dtype=float32), array([[ 1.]], dtype=float32))
Why I want to do this:
I want to use a trained network in another minimization scheme where the network "punishes" places in the search space that are not allowed. So if you have ideas not involving this specific approach, they are also very much appreciated.
Finally found the answer. There are two problems in the example from the question.
1:
The first and most obvious was that I called the tf.global_variables_initializer() function, which re-initializes all variables in the session. Instead I should have called tf.variables_initializer(var_list), where var_list is a list of variables to initialize.
2:
The second problem was that Keras did not use the same session as the native TensorFlow objects. This meant that to be able to run the TensorFlow object model2(v) with my session sess, it needed to be reinitialized. Again, Keras as a simplified interface to TensorFlow: Tutorial was able to help:
We should start by creating a TensorFlow session and registering it with Keras. This means that Keras will use the session we registered to initialize all variables that it creates internally.
import tensorflow as tf
sess = tf.Session()
from keras import backend as K
K.set_session(sess)
If we apply these changes to the example provided in my question we get the following code that does exactly what is expected from it.
import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense

sess = tf.Session()
# Register session with Keras
K.set_session(sess)
model = Sequential()
model.add(Dense(1, input_dim=1, kernel_initializer='ones',
bias_initializer='zeros'))
model.compile(loss='binary_crossentropy', optimizer='adam',
metrics=['accuracy'])
model.save_weights('test')
model2 = Sequential()
model2.add(Dense(1, input_dim=1, kernel_initializer='zeros',
bias_initializer='zeros'))
model2.compile(loss='binary_crossentropy', optimizer='adam',
metrics=['accuracy'])
model2.load_weights('test')
v = tf.Variable([[1, ], ], dtype=tf.float32)
node = model2(v)
init = tf.variables_initializer([v, ])
sess.run(init)
print(sess.run(node), model2.predict(np.array([[1, ], ])))
# prints: (array([[ 1.]], dtype=float32), array([[ 1.]], dtype=float32))
Conclusion:
The lesson is that when mixing Tensorflow and Keras, make sure everything uses the same session.
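A tiny sanity check for that, assuming the setup above (K.get_session() is the old TF 1.x-era Keras backend API):
from keras import backend as K

# If the registration worked, Keras hands back the very session we created
assert K.get_session() is sess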
Thanks for asking this question and answering it; it helped me! In addition to setting the same TF session in the Keras backend, it is also important to note that if you want to load a Keras model from a file, you need to run a global variable initializer op before you load the model.
import tensorflow as tf

sess = tf.Session()
# make sure keras has the same session as this code
tf.keras.backend.set_session(sess)
# Do this BEFORE loading a keras model
init_op = tf.global_variables_initializer()
sess.run(init_op)
model = tf.keras.models.load_model('path/to/your/model.h5')