Implementing a Variational Autoencoder using TensorFlow Probability, NOT on the MNIST data set - tensorflow

I know there are many questions related to Variational Autoencoders. However, this question differs from the existing ones in two aspects: 1) it is implemented using TensorFlow V2 and tensorflow_probability; 2) it does not use MNIST or any other image data set.
As for the problem itself:
I am trying to implement a VAE using tensorflow_probability and Keras, and I want to train and evaluate it on some synthetic data sets, as part of my research. The code is provided below.
The implementation works and the loss decreases during training, but as soon as I evaluate the trained model on my test set I run into different errors.
I am fairly confident that the issue is related to the input/output shapes, but unfortunately I have not managed to solve it.
Here is the code:
import numpy as np
import tensorflow as tf
import tensorflow.keras as tfk
import tensorflow_probability as tfp
from tensorflow.keras import layers as tfkl
from sklearn.datasets import make_classification
from tensorflow_probability import layers as tfpl
from sklearn.model_selection import train_test_split
tfd = tfp.distributions
n_epochs = 5
n_features = 2
latent_dim = 1
n_units = 4
learning_rate = 1e-3
n_samples = 400
batch_size = 32
# Generate synthetic data / load data sets
x_in, y_in = make_classification(n_samples=n_samples, n_features=n_features, n_informative=2, n_redundant=0,
                                 n_repeated=0, n_classes=2, n_clusters_per_class=2, weights=[0.5, 0.5],
                                 flip_y=0.01, class_sep=1.0, hypercube=True,
                                 shift=0.0, scale=1.0, shuffle=False, random_state=42)
x_in = x_in.astype('float32')
y_in = y_in.astype('float32') # .reshape(-1, 1)
x_train, x_test, y_train, y_test = train_test_split(x_in, y_in, test_size=0.4, random_state=42, shuffle=True)
x_test, x_val, y_test, y_val = train_test_split(x_test, y_test, test_size=0.5, random_state=42, shuffle=True)
print("shapes:", x_train.shape, y_train.shape, x_test.shape, y_test.shape, x_val.shape, y_val.shape)
prior = tfd.Independent(tfd.Normal(loc=[tf.zeros(latent_dim)], scale=1.), reinterpreted_batch_ndims=1)
train_dataset = tf.data.Dataset.from_tensor_slices(x_train).batch(batch_size)
valid_dataset = tf.data.Dataset.from_tensor_slices(x_val).batch(batch_size)
test_dataset = tf.data.Dataset.from_tensor_slices(x_test).batch(batch_size)
encoder = tf.keras.Sequential([
    tfkl.InputLayer(input_shape=[n_features, ], name='enc_input'),
    tfkl.Lambda(lambda x: tf.cast(x, tf.float32)),  # - 0.5
    tfkl.Dense(n_units, activation='relu', name='enc_dense1'),
    tfkl.Dense(int(n_units / 2), activation='relu', name='enc_dense2'),
    tfkl.Dense(tfpl.MultivariateNormalTriL.params_size(latent_dim),
               activation=None, name='mvn_triL1'),
    tfpl.MultivariateNormalTriL(
        # set weight >> num_train_samples, or anything other than 1, to turn the VAE into a beta-VAE
        latent_dim, activity_regularizer=tfpl.KLDivergenceRegularizer(prior, weight=1.), name='bottleneck'),
])
decoder = tf.keras.Sequential([
    tfkl.InputLayer(input_shape=latent_dim, name='dec_input'),
    # tfkl.Dense(n_units, activation='relu', name='dec_dense1'),
    # tfkl.Dense(int(n_units * 2), activation='relu', name='dec_dense2'),
    tfpl.IndependentBernoulli([n_features], tfd.Bernoulli.logits, name='dec_output'),
])
vae = tfk.Model(inputs=encoder.inputs, outputs=decoder(encoder.outputs), name='VAE')
print("enoder:", encoder)
print(" ")
print("encoder.inputs:", encoder.inputs)
print(" ")
print(" encoder.outputs:", encoder.outputs)
print(" ")
print("decoder:", decoder)
print(" ")
print("decoder:", decoder.inputs)
print(" ")
print("decoder.outputs:", decoder.outputs)
print(" ")
# negative log likelihood, i.e. the E_{S(eps)}[p(x|z)] term;
# the KL term was already added in the last layer of the encoder, i.e., via the activity_regularizer.
# this loss function takes two arguments, namely the original data points x and the output of the model,
# which we call rv_x (because it is a random variable)
negloglik = lambda x, rv_x: -rv_x.log_prob(x)
vae.compile(optimizer=tf.optimizers.Adam(learning_rate=learning_rate),
            loss=negloglik,)
vae.summary()
history = vae.fit(train_dataset, epochs=n_epochs, validation_data=valid_dataset,)
print("x.shape:", x_test.shape)
x_hat = vae(x_test)
print("original:")
print(x_test)
print(" ")
print("Decoded Random Samples:")
print(x_hat.sample())
print(" ")
print("Decoded Means:")
print(x_hat.mean())
The Questions:
With the above code I receive the following error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 80 values, but the requested shape has 160 [Op:Reshape]
As far as I know, we can add as many layers as we want in the decoder model before its output layer, as is done in convolutional VAEs, am I right?
If I uncomment the following two lines of code in the decoder:
# tfkl.Dense(n_units, activation='relu', name='dec_dense1'),
# tfkl.Dense(int(n_units * 2), activation='relu', name='dec_dense2'),
I see the following warnings, followed by an error:
WARNING:tensorflow:Gradients do not exist for variables ['dec_dense1/kernel:0', 'dec_dense1/bias:0', 'dec_dense2/kernel:0', 'dec_dense2/bias:0'] when minimizing the loss.
(the same warning is printed four times)
And the error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 640 values, but the requested shape has 160 [Op:Reshape]
Now the question is: why are the decoder layers not used during training, as the warning indicates?
P.S. I also tried passing x_train, x_val, and x_test directly during training and evaluation, but it did not help.
Any help would be greatly appreciated.
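Since the question suspects a shape problem, here is a small diagnostic sketch (an illustration for inspecting shapes, not a fix; it reuses encoder, decoder, prior, x_test, n_features and tfpl from the code above). tfpl.IndependentBernoulli([n_features]) expects params_size([n_features]) values per sample at its input, which can be compared against what the bottleneck actually emits and against the 80-vs-160 numbers in the error message:
# Diagnostic only: print the shapes the individual pieces produce and expect.
z_dist = encoder(x_test)  # a MultivariateNormalTriL distribution over the latent codes
print("prior   batch/event shape:", prior.batch_shape, prior.event_shape)
print("encoder batch/event shape:", z_dist.batch_shape, z_dist.event_shape)
print("latent sample shape:", z_dist.sample().shape)
print("decoder input spec:", decoder.inputs)
print("params per sample expected by IndependentBernoulli([n_features]):",
      tfpl.IndependentBernoulli.params_size([n_features]))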

Related

"No gradients provided for any variable" error when trying to use GradientTape mechanism

I'm trying to use GradientTape mechanism for the first time. I've looked at some examples but I'm getting the "No gradients provided for any variable" error and was wondering how to overcome this?
I want to define some complex loss functions, so I tried using GradientTape to produce its gradient for the CNN training. What was I doing wrong and can I fix it?
Attached is a runnable example that demonstrates my problem:
# imports
import numpy as np
import tensorflow as tf
import sklearn
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
tf.config.run_functions_eagerly(True)
# my loss function
def my_loss_fn(y_true, y_pred):
    # train SVM classifier
    VarC = 1E6
    VarGamma = 'scale'
    clf = SVC(kernel='rbf', C=VarC, gamma=VarGamma, probability=True)
    clf.fit(y_pred, y_true)
    y_pred = clf.predict_proba(y_pred)
    scce = tf.keras.losses.SparseCategoricalCrossentropy()
    return scce(y_true, y_pred)
# creating inputs for demonstration
X0=0.5*np.ones((12,12))
X0[2:12:4,:]=0
X0[3:12:4,:]=0
X1=0.5*np.ones((12,12))
X1[1:12:4,:]=0
X1[2:12:4,:]=0
X1=np.transpose(X1)
X=np.zeros((2000,12,12))
for i in range(0,1000):
    X[i]=X0+np.random.rand(12,12)
for i in range(1000,2000):
    X[i]=X1+np.random.rand(12,12)
y=np.zeros(2000, dtype=int)
y[1000:2000]=1
x_train, x_val, y_train, y_val = train_test_split(X, y, train_size=0.5)
x_val, x_test, y_val, y_test = train_test_split(x_val, y_val, train_size=0.5)
x_train = tf.convert_to_tensor(x_train)
x_val = tf.convert_to_tensor(x_val)
x_test = tf.convert_to_tensor(x_test)
y_train = tf.convert_to_tensor(y_train)
y_val = tf.convert_to_tensor(y_val)
y_test = tf.convert_to_tensor(y_test)
inputs = keras.Input((12,12,1), name='images')
x0 = tf.keras.layers.Conv2D(8,4,strides=4)(inputs)
x0 = tf.keras.layers.AveragePooling2D(pool_size=(3, 3), name='pooling')(x0)
outputs = tf.keras.layers.Flatten(name='predictions')(x0)
model = keras.Model(inputs=inputs, outputs=outputs)
optimizer=tf.keras.optimizers.Adam(learning_rate=0.001)
# Instantiate a loss function.
loss_fn = my_loss_fn
# Prepare the training dataset.
batch_size = 256
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
epochs = 100
for epoch in range(epochs):
    print('Start of epoch %d' % (epoch,))
    # Iterate over the batches of the dataset.
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        # Open a GradientTape to record the operations run
        # during the forward pass, which enables autodifferentiation.
        with tf.GradientTape() as tape:
            tape.watch(model.trainable_weights)
            # Run the forward pass of the layer.
            # The operations that the layer applies
            # to its inputs are going to be recorded
            # on the GradientTape.
            logits = model(x_batch_train, training=True)  # Logits for this minibatch
            # Compute the loss value for this minibatch.
            loss_value = loss_fn(y_batch_train, logits)
        # Use the gradient tape to automatically retrieve
        # the gradients of the trainable variables with respect to the loss.
        grads = tape.gradient(loss_value, model.trainable_weights)
        # Run one step of gradient descent by updating
        # the value of the variables to minimize the loss.
        optimizer.apply_gradients(zip(grads, model.trainable_weights))
        # Log every 200 batches.
        if step % 200 == 0:
            print('Training loss (for one batch) at step %s: %s' % (step, float(loss_value)))
            print('Seen so far: %s samples' % ((step + 1) * 64))
And when running, I get:
ValueError: No gradients provided for any variable: (['conv2d_2/kernel:0', 'conv2d_2/bias:0'],). Provided grads_and_vars is ((None, <tf.Variable 'conv2d_2/kernel:0' shape=(4, 4, 1, 8) dtype=float32, nump
If I use a standard loss function instead, for example with the following model and loss function:
inputs = keras.Input((12,12,1), name='images')
x0 = tf.keras.layers.Conv2D(8,4,strides=4)(inputs)
x0 = tf.keras.layers.AveragePooling2D(pool_size=(3, 3), name='pooling')(x0)
x0 = tf.keras.layers.Flatten(name='features')(x0)
x0 = layers.Dense(16, name='meta_features')(x0)
outputs = layers.Dense(2, name='predictions')(x0)
model = keras.Model(inputs=inputs, outputs=outputs)
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
Everything works fine and converges well.
What am I doing wrong and can I fix it?
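A likely explanation, stated as an assumption rather than a verified diagnosis: SVC.fit and predict_proba run in scikit-learn/NumPy, outside TensorFlow's graph, so the tape cannot trace any path from the loss back to the model's weights and returns None gradients. The following minimal sketch keeps the loss entirely in TF ops for the same kind of model (the loss here is just a differentiable stand-in, not the SVM-based loss the question asks for), and the tape then produces gradients:
import tensorflow as tf

def tf_only_loss(y_true, logits):
    # differentiable stand-in: softmax cross-entropy on raw logits
    return tf.reduce_mean(
        tf.keras.losses.sparse_categorical_crossentropy(y_true, logits, from_logits=True))

x = tf.random.normal((8, 12, 12, 1))          # dummy batch shaped like the question's images
y = tf.constant([0, 1] * 4)                   # dummy labels
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 4, strides=4, input_shape=(12, 12, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2),
])
with tf.GradientTape() as tape:
    loss_value = tf_only_loss(y, model(x, training=True))
grads = tape.gradient(loss_value, model.trainable_weights)
print([g is not None for g in grads])         # all True: gradients exist for every variable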

Keras Input Layer Shape On Input Layer Error

I am trying to learn AI algorithms by building things. I found a question on Stack Overflow, which is here.
I copied its code to try it out, and then modified it to this:
import numpy as np
import tensorflow as tf
from tensorflow import keras as keras
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.models import Sequential
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from tensorflow.python.keras import activations
# Importing the dataset
dataset = np.genfromtxt("data.txt", delimiter='')
X = dataset[:, :-1]
y = dataset[:, -1]
# Splitting the dataset into the Training set and Test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.08, random_state = 0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Initialising the ANN
#model = Sequential()
# Adding the input layer and the first hidden layer
#model.add(Dense(32, activation = 'relu', input_dim = 6))
# Adding the second hidden layer
#model.add(Dense(units = 32, activation = 'relu'))
# Adding the third hidden layer
#model.add(Dense(units = 32, activation = 'relu'))
# Adding the output layer
#model.add(Dense(units = 1))
#model = Sequential([
# keras.Input(shape= (6),name= "digits"),
# Dense(units = 32, activation = "relu"),
# Dense(units = 32, activation = "relu"),
# Dense(units = 1 , name = "predict")##
#])
#
input = keras.Input(shape= (6),name= "digits")
#x0 = Dense(units = 6)(input)
x1 = Dense(units = 32, activation = "relu")(input)
x2 = Dense(units = 32, activation = "relu")(x1)
output = Dense(units = 1 , name = "predict")(x2)
model = keras.Model(inputs = input , outputs= output)
#model.add(Dense(1))
# Compiling the ANN
#model.compile(optimizer = 'adam', loss = 'mean_squared_error')
# Fitting the ANN to the Training set
#model.fit(X_train, y_train, batch_size = 10, epochs = 200)
optimizer = keras.optimizers.Adam(learning_rate=1e-3)
loss = keras.losses.MeanSquaredError()
epochs = 200
for epoch in range(epochs):
    print("\nStart of epoch %d" % (epoch,))
    # Iterate over the batches of the dataset.
    for step in range(len(X_train)):
        # Open a GradientTape to record the operations run
        # during the forward pass, which enables auto-differentiation.
        with tf.GradientTape() as tape:
            # Run the forward pass of the layer.
            # The operations that the layer applies
            # to its inputs are going to be recorded
            # on the GradientTape.
            logits = model( X_train[step] , training=True) # Logits for this minibatch
            # Compute the loss value for this minibatch.
            loss_value = loss(y_train[step], logits)
        # Use the gradient tape to automatically retrieve
        # the gradients of the trainable variables with respect to the loss.
        grads = tape.gradient(loss_value, model.trainable_weights)
        # Run one step of gradient descent by updating
        # the value of the variables to minimize the loss.
        optimizer.apply_gradients(zip(grads, model.trainable_weights))
        # Log every 200 batches.
        if step % 200 == 0:
            print(
                "Training loss (for one batch) at step %d: %.4f"
                % (step, float(loss_value))
            )
            print("Seen so far: %s samples" % ((step + 1) * 64))
y_pred = model.predict(X_test)
plt.plot(y_test, color = 'red', label = 'Real data')
plt.plot(y_pred, color = 'blue', label = 'Predicted data')
plt.title('Prediction')
plt.legend()
plt.show()
I modified the code so that the data is created during processing. If I use model.fit, it uses the data I have given, but I want the data to be created from a simulation when the epochs start and then processed. (Sorry for my bad English if I could not explain it very well.)
When I run the code, I get the following exception at line 81:
Exception has occurred: ValueError
Input 0 of layer dense is incompatible with the layer: : expected min_ndim=2, found ndim=1. Full shape received: (6,)
I tried using shape=(6,), shape=(6,1), and similar, but it does not fix anything.
You need to add a batch dimension when calling the keras model:
logits = model( X_train[step][np.newaxis,:] , training=True) # Logits for this minibatch
A batch dimension is used to feed multiple samples to the network. By default, Keras assumes that the input has a batch dimension. To feed one sample, Keras expects a batch of 1 sample. In that case, it means a shape of (1,6). If you want to feed a batch of 2 samples, then the shape will be (2,6), etc.
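As a small illustration (the shape (6,) matches this question's feature vectors; the variable names are only for the demo), np.newaxis, np.expand_dims, and tf.expand_dims all add the batch dimension in the same way:
import numpy as np
import tensorflow as tf

sample = np.zeros(6, dtype=np.float32)          # one sample, shape (6,)
batch_of_one = sample[np.newaxis, :]            # shape (1, 6)
same_thing = np.expand_dims(sample, axis=0)     # shape (1, 6)
as_tensor = tf.expand_dims(sample, axis=0)      # shape (1, 6), as a tf.Tensor

print(batch_of_one.shape, same_thing.shape, as_tensor.shape)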

Get gradients with respect to inputs in Keras ANN model

bce = tf.keras.losses.BinaryCrossentropy()
ll=bce(y_test[0], model.predict(X_test[0].reshape(1,-1)))
print(ll)
<tf.Tensor: shape=(), dtype=float32, numpy=0.04165391>
print(model.input)
<tf.Tensor 'dense_1_input:0' shape=(None, 195) dtype=float32>
model.output
<tf.Tensor 'dense_3/Sigmoid:0' shape=(None, 1) dtype=float32>
grads=K.gradients(ll, model.input)[0]
print(grads)
None
So here I have trained a neural network with 2 hidden layers; the input has 195 features and the output has size 1. I want to feed the network the validation instances in X_test one by one, together with their correct labels in y_test, and for each instance calculate the gradients of the output with respect to the input. However, grads prints as None. Your help is appreciated.
One can do this using tf.GradientTape. I wrote the following code to learn a sine wave and get its derivative, in the spirit of this question. I think it should be possible to extend the code below to compute partial derivatives.
Importing the needed libraries:
import numpy as np
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import losses
import tensorflow as tf
Create the data:
x = np.linspace(0, 6*np.pi, 2000)
y = np.sin(x)
Defining a Keras NN:
def model_gen(Input_shape):
    X_input = Input(shape=Input_shape)
    X = Dense(units=64, activation='sigmoid')(X_input)
    X = Dense(units=64, activation='sigmoid')(X)
    X = Dense(units=1)(X)
    model = Model(inputs=X_input, outputs=X)
    return model
Training the model:
model = model_gen(Input_shape=(1,))
opt = Adam(lr=0.01, beta_1=0.9, beta_2=0.999, decay=0.001)
model.compile(loss=losses.mean_squared_error, optimizer=opt)
model.fit(x,y, epochs=200)
To obtain the gradient of the network w.r.t. the input:
x = list(x)
x = tf.constant(x)
with tf.GradientTape() as t:
    t.watch(x)
    y = model(x)
dy_dx = t.gradient(y, x)
dy_dx.numpy()
One can further visualise dy_dx to check how smooth the derivative is. Finally, note that one gets a smoother derivative when using a smooth activation (e.g. sigmoid) instead of ReLU, as noted here.
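To follow up on the visualisation suggestion, here is a short sketch (assuming matplotlib is available, and reusing x and dy_dx from the code above) that compares the learned derivative with the analytic derivative cos(x):
import numpy as np
import matplotlib.pyplot as plt

# x and dy_dx come from the answer above; cos(x) is the analytic derivative of sin(x)
x_np = x.numpy().ravel()
plt.plot(x_np, np.ravel(dy_dx.numpy()), label='network dy/dx')
plt.plot(x_np, np.cos(x_np), '--', label='analytic cos(x)')
plt.legend()
plt.title('Learned derivative vs. analytic derivative')
plt.show()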

Issue in RECURRENT NEURAL NETWORK

Please let us know if anything is wrong in the code below; we are not getting the desired result.
from numpy import sqrt
from numpy import asarray
from pandas import read_csv
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
import tensorflow as tf
from sklearn import metrics
from sklearn.model_selection import train_test_split
Assign the value 40 to the variable RANDOM_SEED, which will be the seed value. Set the random seed using the value stored in RANDOM_SEED.
RANDOM_SEED = 40
tf.random.set_seed(RANDOM_SEED)
Split a univariate sequence into samples:
def split_sequence(sequence, n_steps):
    X, y = list(), list()
    for i in range(len(sequence)):
        # find the end of this pattern
        end_ix = i + n_steps
        # check if we are beyond the sequence
        if end_ix > len(sequence)-1:
            break
        # gather input and output parts of the pattern
        seq_x, seq_y = sequence[i:end_ix], sequence[end_ix]
        X.append(seq_x)
        y.append(seq_y)
    return asarray(X), asarray(y)
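As a quick illustration of what split_sequence returns (a toy sequence made up for this example, not the airline data), a window of 3 over six values gives:
from numpy import asarray

seq = asarray([10, 20, 30, 40, 50, 60])
X_demo, y_demo = split_sequence(seq, n_steps=3)
print(X_demo)  # [[10 20 30] [20 30 40] [30 40 50]]
print(y_demo)  # [40 50 60]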
Read the dataset airline-passengers.csv, passing the parameter index_col=0, and save it in the variable df.
df = read_csv("airline-passengers.csv", index_col=0)
Convert the values of the dataframe df to float32 and save them in the variable values.
Assign the value 5 to the variable n_steps, which is the window size.
Split the samples using the function split_sequence, passing the parameters values and n_steps, and save the result in the variables X and y.
values = df.values.astype('float32')
n_steps = 5
X, y = split_sequence(values, n_steps)
Split the data X, y with sklearn's train_test_split function, with the parameters test_size=0.33 and random_state=RANDOM_SEED.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=RANDOM_SEED)
Construct a fully-connected network structure defined using the Dense class:
- Create a Sequential model and add an LSTM layer with 200 nodes, the relu activation function, and input shape (n_steps, 1).
- The first hidden layer has 100 nodes and uses the relu activation function.
- The second hidden layer has 50 nodes and uses the relu activation function.
- The output layer has 1 node.
model = Sequential()
model.add(LSTM(200, activation='relu', input_shape=(n_steps,1)))
model.add(Dense(100, activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(1))
While compiling the model, pass the following parameters:
- optimizer as Adam
- loss as mse
- metrics as mae
model.compile(optimizer='Adam', loss='mse', metrics=['mae'])
Fit the model with X_train, y_train, epochs=350, batch_size=32, verbose=0.
model.fit(X_train, y_train, epochs=350, batch_size=32, verbose=0)
Perform prediction on the test data (i.e., on X_test) and save the predictions in the variable y_pred.
row = ([X_test])
y_pred = model.predict(row)
Calculate the mean squared error on the variables y_test and y_pred using the mean_squared_error function in sklearn metrics and save it in variable MSE.
Calculate the root mean squared error on the variables y_test and y_pred by taking the square root of the above result and save it in the variable RMSE.
Calculate the mean absolute error on the variables y_test and y_pred using the mean_absolute_error function in sklearn metrics and save it in variable MAE.
MSE = metrics.mean_squared_error(y_test,y_pred)
RMSE = sqrt(metrics.mean_squared_error(y_test,y_pred))
MAE = metrics.mean_absolute_error(y_test,y_pred)
print('MSE: %.3f, RMSE: %.3f, MAE: %.3f' % (MSE, RMSE,MAE))
MSE: 665.522, RMSE: 25.798, MAE: 17.127 ... this is what we are getting, and it is wrong.
with open("MSE.txt", "w") as text_file:
MSE=str(MSE)
text_file.write(MSE)
with open("RMSE.txt", "w") as text_file:
RMSE=str(RMSE)
text_file.write(RMSE)
with open("MAE.txt", "w") as text_file:
MAE=str(MAE)
text_file.write(MAE)
# serialize model to JSON
model_json = model.to_json()
with open("model.json", "w") as json_file:
json_file.write(model_json)
airline-passengers.zip
RNN_Question.zip
After running all of this, we execute:
from hashlib import md5
f = open("MSE.txt", "r")
s=f.read()
s=float(s)
s=round(s,3)
f1=open("RMSE.txt","r")
s1=f1.read()
s1=float(s1)
s1=round(s1,3)
f2=open("MAE.txt","r")
s2=f2.read()
s2=float(s2)
s2=round(s2,3)
if (md5(str(s).encode()).hexdigest() == '51ad543f7ac467cb8b518f1a04cc06af') and (md5(str(s1).encode()).hexdigest() == '6ad48a76bec847ede2ad2c328978bcfa') and (md5(str(s2).encode()).hexdigest() == '64bd1e146726e9f8622756173ab27831'):
    print("Your MSE,RMSE and MAE Scores matched the expected output")
else:
    print("Your MSE,RMSE and MAE Scores does not match the expected output")
Here our output should match, but it comes out as unmatched.
Try this:
def square_it(n):
    global rv
    rv += str(n*n) + '\n'
    print(rv)
It will work, but it gives an extra line in the output; that does not matter.

Why is tensorflow having a worse accuracy than keras in direct comparison?

I made a direct comparison between TensorFlow and Keras with the same parameters and the same dataset (MNIST).
The strange thing is that Keras achieves 96% performance in 10 epochs, while TensorFlow achieves about 70% performance in 10 epochs. I have run this code many times in the same instance and this inconsistency always occurs.
Even setting 50 epochs for TensorFlow, the final performance reaches 90%.
Code:
import keras
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# One hot encoding
from keras.utils import np_utils
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
# Changing the shape of input images and normalizing
x_train = x_train.reshape((60000, 784))
x_test = x_test.reshape((10000, 784))
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
# Creating the neural network
model = Sequential()
model.add(Dense(30, input_dim=784, kernel_initializer='normal', activation='relu'))
model.add(Dense(30, kernel_initializer='normal', activation='relu'))
model.add(Dense(10, kernel_initializer='normal', activation='softmax'))
# Optimizer
optimizer = keras.optimizers.Adam()
# Loss function
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['acc'])
# Training
model.fit(x_train, y_train, epochs=10, batch_size=200, validation_data=(x_test, y_test), verbose=1)
# Checking the final accuracy
accuracy_final = model.evaluate(x_test, y_test, verbose=0)
print('Model Accuracy: ', accuracy_final)
TensorFlow code: (x_train, x_test, y_train, y_test are the same as the input for the Keras code above)
import tensorflow as tf
# Epochs parameters
epochs = 10
batch_size = 200
# Neural network parameters
n_input = 784
n_hidden_1 = 30
n_hidden_2 = 30
n_classes = 10
# Placeholders x, y
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
# Creating the first layer
w1 = tf.Variable(tf.random_normal([n_input, n_hidden_1]))
b1 = tf.Variable(tf.random_normal([n_hidden_1]))
layer_1 = tf.nn.relu(tf.add(tf.matmul(x,w1),b1))
# Creating the second layer
w2 = tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2]))
b2 = tf.Variable(tf.random_normal([n_hidden_2]))
layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1,w2),b2))
# Creating the output layer
w_out = tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
bias_out = tf.Variable(tf.random_normal([n_classes]))
output = tf.matmul(layer_2, w_out) + bias_out
# Loss function
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = output, labels = y))
# Optimizer
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Making predictions
predictions = tf.equal(tf.argmax(output, 1), tf.argmax(y, 1))
# Accuracy
accuracy = tf.reduce_mean(tf.cast(predictions, tf.float32))
# Variables that will be used in the training cycle
train_size = x_train.shape[0]
total_batches = train_size / batch_size
# Initializing the variables
init = tf.global_variables_initializer()
# Opening the session
with tf.Session() as sess:
    sess.run(init)
    # Training cycle
    for epoch in range(epochs):
        # Loop through all batch iterations
        for i in range(0, train_size, batch_size):
            batch_x = x_train[i:i + batch_size]
            batch_y = y_train[i:i + batch_size]
            # Fit training
            sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        # Running accuracy (with test data) on each epoch
        acc_val = sess.run(accuracy, feed_dict={x: x_test, y: y_test})
        # Showing results after each epoch
        print("Epoch: ", "{}".format((epoch + 1)))
        print("Accuracy_val = ", "{:.3f}".format(acc_val))
    print("Training Completed!")
    # Checking the final accuracy
    checking = tf.equal(tf.argmax(output, 1), tf.argmax(y, 1))
    accuracy_final = tf.reduce_mean(tf.cast(checking, tf.float32))
    print("Model Accuracy:", accuracy_final.eval({x: x_test, y: y_test}))
I'm running everything in the same instance. Can anyone explain this inconsistency?
I think it's the initialization that's the culprit. For example, one real difference is that you initialize bias in TF with random_normal which isn't the best practice, and in fact Keras defaults to initializing the bias to zero, which is the best practice. You don't override this, since you only set kernel_initializer, but not bias_initializer in your Keras code.
Furthermore, things are worse for the weight initializers. You are using RandomNormal for Keras, defined like so:
keras.initializers.RandomNormal(mean=0.0, stddev=0.05, seed=None)
But in TF you use tf.random.normal:
tf.random.normal(shape, mean=0.0, stddev=1.0, dtype=tf.dtypes.float32, seed=None, name=None)
I can tell you that using a standard deviation of 0.05 is reasonable for initialization, but using 1.0 is not.
I suspect that if you changed these parameters, things would look better. But if they don't, I'd suggest dumping the TensorFlow graph for both models and just checking by hand to see the differences. The graphs are small enough in this case to double-check.
To some extent this highlights the difference in philosophy between Keras and TF. Keras tries hard to set good defaults for NN training that correspond to what is known to work. But TensorFlow is completely agnostic - you have to know those practices and explicitly code them in. The standard deviation thing is a stellar example: of course it should be 1 by default in a mathematical function, but 0.05 is a good value if you know it will be used to initialize an NN layer.
Answer originally provided by Dmitriy Genzel on Quora.
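A minimal sketch of the change described above (an illustration under the answer's assumptions, using the TF1 API from the question; whether this alone closes the whole gap is untested here): draw the weights with a small standard deviation, matching Keras' RandomNormal(stddev=0.05), and initialize the biases to zero, matching Keras' default bias_initializer:
import tensorflow as tf

n_input, n_hidden_1, n_hidden_2, n_classes = 784, 30, 30, 10

# Weights: small standard deviation instead of the default stddev=1.0
w1 = tf.Variable(tf.random_normal([n_input, n_hidden_1], stddev=0.05))
w2 = tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], stddev=0.05))
w_out = tf.Variable(tf.random_normal([n_hidden_2, n_classes], stddev=0.05))

# Biases: zeros, as Keras does by default
b1 = tf.Variable(tf.zeros([n_hidden_1]))
b2 = tf.Variable(tf.zeros([n_hidden_2]))
bias_out = tf.Variable(tf.zeros([n_classes]))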