Fully Connected Layer with numbers - tensorflow

I am having some trouble understanding how to implement a fully connected layer. Can someone share an example with real values?
E.g. flattening the input, multiplying by the weights, adding the bias, and so on.

TensorFlow does most of the low-level work for you.
Have a look at one of the getting-started tutorials, which contain complete examples in just a few lines of code:
import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
link: https://www.tensorflow.org/tutorials
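To answer the literal question about the arithmetic, here is a minimal NumPy sketch of what a single fully connected (Dense) layer computes; the tiny 2x2 "image" and the weight values below are made up purely for illustration:

import numpy as np

# A made-up 2x2 "image", flattened to a 1-D vector of 4 values.
image = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
x = image.flatten()                      # [1. 2. 3. 4.], shape (4,)

# A fully connected layer with 3 units: a 4x3 weight matrix and 3 biases.
W = np.array([[0.1, 0.2, 0.3],
              [0.4, 0.5, 0.6],
              [0.7, 0.8, 0.9],
              [1.0, 1.1, 1.2]])
b = np.array([0.01, 0.02, 0.03])

z = x @ W + b                            # matrix multiply, then add bias
a = np.maximum(z, 0.0)                   # ReLU activation

print(z)  # [7.01 8.02 9.03]
print(a)  # identical here, since every value is already positive

Each Dense layer in the Keras model above does exactly this with learned values for W and b, and Flatten is the reshape step that turns the 28x28 image into a 784-element vector first.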

Related

Tensorflow Keras Shape mismatch

While trying to implement a standard MNIST digit recognizer that many tutorials use to introduce you to neural networks, I'm encountering the error
ValueError: Shape mismatch: The shape of labels (received (1,)) should equal the shape of logits except for the last dimension (received (28, 10)).
I would like to use from_tensor_slices to process the data, since I want to apply the code to another problem where the data comes from a CSV file. Anyway, here is the code producing the error in the line model.fit(...)
import tensorflow as tf

train_dataset, test_dataset = tf.keras.datasets.mnist.load_data()
train_images, train_labels = train_dataset
train_images = train_images / 255.0

train_dataset_tensor = tf.data.Dataset.from_tensor_slices((train_images, train_labels))
num_of_validation_data = 10000
validation_data = train_dataset_tensor.take(num_of_validation_data)
train_data = train_dataset_tensor.skip(num_of_validation_data)

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(100, activation='sigmoid'),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy']
)
model.fit(train_data, batch_size=50, epochs=5)
performance = model.evaluate(validation_data)
I don't understand where the shape (28, 10) of the logits comes from, I thought I was flattening the image, essentially making a 1D vector out of the 2D image? How can I prevent the error?
The problem is not the Flatten layer; it is that the dataset is never batched. tf.data.Dataset.from_tensor_slices yields one example at a time, so model.fit receives tensors of shape (28, 28) and treats the first dimension (the 28 image rows) as the batch dimension, which is where the (28, 10) logits come from. Batch the dataset and it works:

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Optional here (Flatten copes either way), but adding a channel
# dimension keeps the data ready for convolutional layers.
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]

train_ds = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train)).shuffle(10000).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(100, activation='sigmoid'),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    # The model ends in a softmax, so the loss must not expect raw logits.
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
    metrics=['accuracy']
)
model.fit(train_ds)
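Alternatively, the smallest change to the original code is to batch the existing datasets directly; note that the batch_size argument to fit() does not apply when the input is a tf.data dataset, so the batching has to happen on the dataset itself (and the from_logits=True flag should be dropped for the same reason as above):

model.fit(train_data.batch(50), epochs=5)
performance = model.evaluate(validation_data.batch(50))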

Simple Machine Learning example with handwritten digits does not work with conv2d and MaxPooling2D

I made a simple machine-learning example with TensorFlow 2 using this code, and everything works fine.
# Install TensorFlow
import tensorflow as tf
print(tf.__version__)

# Import matplotlib
import matplotlib.pyplot as plt
# Import numpy
import numpy as np

# Dataset
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)

print("Evaluation")
model.evaluate(x_test, y_test)

plt.imshow(x_train[6], cmap="gray")  # Show the image
plt.show()                           # Plot the image

predictions = model.predict(x_train)              # Make predictions
print("Prediction: ", np.argmax(predictions[6]))  # Print out the predicted digit
print("Correct is: ", y_train[6])
My problem is how to add feature-detecting layers like Conv2D and MaxPooling2D. Where do I have to add these layers, and does this influence my plotting and my predictions?
Before the input is passed to Conv2D and MaxPooling2D, it must have 4 dimensions.
x_train and x_test have shape [batch_size, 28, 28], but it needs to be [batch_size, 28, 28, 1], so we add a channel dimension at the end using np.expand_dims():

x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)

model = tf.keras.models.Sequential([
    # input_shape excludes the batch dimension
    tf.keras.layers.Conv2D(32, (3, 3), padding="same", input_shape=(28, 28, 1)),
    tf.keras.layers.Activation("relu"),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
Yes, it will influence your plotting and predictions.
A convolution layer uses far fewer weights than a dense layer, and max pooling then keeps only the largest value in each window, so the network works with a reduced set of features; on a task this small, that reduction may lower your accuracy.
For large images, however, say 500x500, convolution and max-pooling layers are exactly what you need to cut the features down to the important ones.
Flattening a 500x500 input straight into dense layers forces the program to initialize an enormous number of weights, and you can get an out-of-memory error.
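To make the size argument concrete, here is a small sketch (the 500x500 input and the layer widths are chosen purely for illustration) that lets Keras report the parameter counts for the two approaches:

import tensorflow as tf

# Dense directly on the flattened 500x500 input:
# 250,000 inputs * 128 units + 128 biases = 32,000,128 parameters.
dense_model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(500, 500, 1)),
    tf.keras.layers.Dense(128, activation='relu')
])
dense_model.summary()

# Conv2D with 32 filters of size 3x3:
# 3 * 3 * 1 * 32 + 32 biases = 320 parameters, independent of image size.
conv_model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu',
                           input_shape=(500, 500, 1)),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2))
])
conv_model.summary()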

Trying to understand how to add more hidden layers into a neural network using a for loop

I'm trying to figure out how to write a simple for loop that adds more hidden layers to the basic TensorFlow neural network below:
import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
Normally I would go ahead and change the following code:
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
and add more Dense layers, as follows:
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
Is it possible to create a simple for loop where I can input the number of hidden layers I want?
First, create a list and append a Flatten layer to it (Python lists use append(), not add()):

layer_list = [tf.keras.layers.Flatten(input_shape=(28, 28))]

Now use a loop to add one Dense layer per entry in a list of layer widths, finish with the output layer, and build the model from the list:

units = [64, 128, 256]
for n_units in units:
    layer_list.append(tf.keras.layers.Dense(n_units, activation='relu'))
layer_list.append(tf.keras.layers.Dense(10, activation='softmax'))

model = tf.keras.models.Sequential(layer_list)

The units list can hold any number of entries, one per hidden layer.
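If you want to type in just the number of hidden layers, here is a minimal variant (the fixed width of 128 units per layer is an arbitrary choice):

import tensorflow as tf

n = 3  # number of hidden layers you want

layer_list = [tf.keras.layers.Flatten(input_shape=(28, 28))]
for _ in range(n):
    layer_list.append(tf.keras.layers.Dense(128, activation='relu'))
layer_list.append(tf.keras.layers.Dense(10, activation='softmax'))

model = tf.keras.models.Sequential(layer_list)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])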

How to freeze weights in certain layer with Keras?

I am trying to freeze the weights of a certain layer in a prediction model with Keras and the MNIST dataset, but it does not work. The code is:
from keras.layers import Dense, Flatten
from keras.utils import to_categorical
from keras.models import Sequential, load_model
from keras.datasets import mnist
from keras.losses import categorical_crossentropy
import numpy as np

def load_data():
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train = x_train.astype('float32')
    x_test = x_test.astype('float32')
    x_train /= 255
    x_test /= 255
    y_train = to_categorical(y_train, num_classes=10)
    y_test = to_categorical(y_test, num_classes=10)
    return x_train, y_train, x_test, y_test

def run():
    x_train, y_train, x_test, y_test = load_data()
    model = Sequential([Flatten(input_shape=(28, 28)),
                        Dense(300, name='dense1', activation='relu'),
                        Dense(100, name='dense2', activation='relu'),
                        Dense(10, name='dense3', activation='softmax')])
    model.trainable = True
    model.compile(optimizer='Adam',
                  metrics=['accuracy'],
                  loss=categorical_crossentropy)
    print(model.summary())
    model.fit(x_train, y_train, epochs=5, verbose=2)
    print(model.evaluate(x_test, y_test))
    return model

def freeze(model):
    x_train, y_train, x_test, y_test = load_data()
    name = 'dense1'
    weightsAndBias = model.get_layer(name=name).get_weights()
    # freeze the weights of this layer
    model.get_layer(name=name).trainable = False
    # record the weights before retraining
    weights_before = weightsAndBias[0]
    # retrain
    model.fit(x_train, y_train, verbose=2, epochs=1)
    weights_after = model.get_layer(name=name).get_weights()[0]
    if (weights_before == weights_after).all():
        print('the weights did not change!!!')
    else:
        print('the weights changed!!!!')

if __name__ == '__main__':
    model = run()
    freeze(model)
The program outputs 'the weights changed!!!!'.
I do not understand why the weights of the layer named 'dense1' change after setting model.get_layer(name=name).trainable = False.
You can do it like this:

model = Sequential()
# Set trainable on the layer before the model is compiled
layer = Dense(64, kernel_initializer='glorot_uniform', input_shape=(784,))
layer.trainable = False
model.add(layer)
layer2 = Dense(784, activation='sigmoid', kernel_initializer='glorot_uniform')
layer2.trainable = True
model.add(layer2)
model.compile(loss='mse', optimizer='sgd', metrics=['mae'])

You need to compile the model after setting 'trainable'. In your freeze() function the layer is frozen but the model is never recompiled, which is why the weights keep changing.
Say I want to keep the layers up to the 5th layer frozen and the rest trainable. Here is simpler, more direct code:

for layer in model.layers[:5]:
    layer.trainable = False
for layer in model.layers[5:]:
    layer.trainable = True
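Either way, the new flags only take effect once the model is recompiled, e.g. with the same settings the question already uses:

from keras.losses import categorical_crossentropy

model.compile(optimizer='Adam',
              metrics=['accuracy'],
              loss=categorical_crossentropy)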

tensorflow keras save and load model

I ran this example, and I get the following error when I try to save the model.
import tensorflow as tf
import h5py

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=2)

val_loss, val_acc = model.evaluate(x_test, y_test)
print(val_loss, val_acc)

model.save('model.h5')
new_model = tf.keras.models.load_model('model.h5')
I get this error:
Traceback (most recent call last):
  File "/home/zneic/PycharmProjects/test/venv/test.py", line 23, in <module>
    model.save('model.h5')
  File "/home/zneic/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1359, in save
    'Currently `save` requires model to be a graph network. Consider '
NotImplementedError: Currently `save` requires model to be a graph network. Consider using `save_weights`, in order to save the weights of the model.
Your weights don't seem to be saved or loaded back into the session. Can you try saving the graph and the weights separately and loading them separately?
model_json = model.to_json()
with open("model.json", "w") as json_file:
    json_file.write(model_json)
model.save_weights("model.h5")
Then you can load them:
def loadModel(jsonStr, weightStr):
    json_file = open(jsonStr, 'r')
    loaded_nnet = json_file.read()
    json_file.close()

    serve_model = tf.keras.models.model_from_json(loaded_nnet)
    serve_model.load_weights(weightStr)
    # Recompile with the loss the model was trained with; the labels
    # are integers, so the sparse variant is the right one.
    serve_model.compile(optimizer=tf.train.AdamOptimizer(),
                        loss='sparse_categorical_crossentropy',
                        metrics=['accuracy'])
    return serve_model

model = loadModel('model.json', 'model.h5')
I had the same problem, and I solved it. You can modify the model like this, giving the first layer an explicit input shape:

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])

Knowing the input shape up front lets Keras build the model as a static graph network, which is presumably why save() then works.
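For completeness, the fallback that the error message itself suggests also works: save only the weights, rebuild the same architecture, and load them back. A sketch, with 'model_weights.h5' as an illustrative file name:

model.save_weights('model_weights.h5')

# Rebuild the same architecture, then load the saved weights into it.
new_model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
new_model.load_weights('model_weights.h5')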