How to force distributed training in TensorFlow to use more than 1 server?

I am following the official distributed training documentation, but the Ganglia metrics in Databricks show that only one server is being used:
https://www.tensorflow.org/tutorials/distribute/keras
strategy = tf.distribute.MirroredStrategy()
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
This outputs 2.
Why is only one server in use when there are multiple (2) servers available?
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10)
    ])
    model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  optimizer=tf.keras.optimizers.Adam(),
                  metrics=['accuracy'])
Is there a way I can force the training work to be split across the servers?
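For what it's worth, tf.distribute.MirroredStrategy only replicates across the devices (GPUs/CPUs) of a single machine, which is why Ganglia shows one busy server even with two replicas. Spanning multiple servers takes tf.distribute.MultiWorkerMirroredStrategy plus a TF_CONFIG environment variable on each worker. A minimal sketch (the host names and ports are placeholders, and the same script must run on every worker):

import json
import os

import tensorflow as tf

# Each worker sets TF_CONFIG with the full cluster layout and its own index.
# 'host1:12345' and 'host2:12345' are placeholder addresses for illustration.
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {'worker': ['host1:12345', 'host2:12345']},
    'task': {'type': 'worker', 'index': 0}  # index 1 on the second server
})

# Replicates the model on every worker and all-reduces gradients between them.
strategy = tf.distribute.MultiWorkerMirroredStrategy()
print('Number of replicas: {}'.format(strategy.num_replicas_in_sync))

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10)
    ])
    model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  optimizer=tf.keras.optimizers.Adam(),
                  metrics=['accuracy'])

Note that TF_CONFIG must be set before the strategy is created.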

Related

Why doesn't my CNN learn on the following dataset, when a simpler network was able to learn it better?

I have the following CNN topology -
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (BatchNormalization, Conv2D, Dense,
                                     Dropout, Flatten, MaxPooling2D)

model = Sequential()
#model.add(Lambda(standardize, input_shape=(28,28,1)))
model.add(Conv2D(filters=64, kernel_size=(3,3), activation="relu", input_shape=(32,32,3)))
model.add(Conv2D(filters=64, kernel_size=(3,3), activation="relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.50))
model.add(Conv2D(filters=128, kernel_size=(3,3), activation="relu"))
model.add(Conv2D(filters=128, kernel_size=(3,3), activation="relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.50))
model.add(Conv2D(filters=256, kernel_size=(3,3), activation="relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.50))
model.add(Flatten())
model.add(Dense(512, activation="relu"))
model.add(BatchNormalization())
model.add(Dense(10, activation="softmax"))
This network worked really well on just 500 training images of the MNIST dataset (https://www.tensorflow.org/datasets/catalog/mnist), input size (28,28,1).
However, on 500 training images of the SVHN dataset (http://ufldl.stanford.edu/housenumbers/), the model doesn't seem to learn: its validation accuracy maxed out at 0.1959. Input size is (32,32,3).
Another CNN I created, which is much simpler than this network, reaches ~70% validation accuracy on the SVHN dataset.
I'm failing to understand why this might be the case. Is it because the CNN doesn't work the same on RGB images? Is it because the CNN isn't complex enough to extract the features of the SVHN dataset?
Let me know if there is anything else I could provide to help you guys out with this problem. :)
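For what it's worth, CNNs handle RGB inputs the same way as grayscale; a common reason a net that fits MNIST fails on SVHN is that the standardization step (commented out above) is missing, so raw 0-255 pixels reach the first conv layer. A minimal sketch of per-channel standardization, with placeholder arrays standing in for the SVHN splits:

import numpy as np

# Placeholder arrays standing in for the SVHN splits (shape (N, 32, 32, 3)).
x_train = np.random.randint(0, 256, (500, 32, 32, 3), dtype=np.uint8)
x_test = np.random.randint(0, 256, (100, 32, 32, 3), dtype=np.uint8)

# Per-channel statistics, computed from the training split only.
mean = x_train.mean(axis=(0, 1, 2), keepdims=True).astype("float32")
std = x_train.std(axis=(0, 1, 2), keepdims=True).astype("float32") + 1e-7

def standardize(x):
    """Zero-mean, unit-variance scaling per colour channel."""
    return (x.astype("float32") - mean) / std

x_train, x_test = standardize(x_train), standardize(x_test)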

Train Keras model after Split for federated learning

Hi, I am trying to train a Keras model after splitting it. The simulation is that the parts of the Keras model are placed on different devices and each device has its own weights. The guest device passes its intermediate output to the host; the host receives the intermediate output and sends back the gradient. How can we train the split model? Here is an example of the split model for the MNIST dataset:
from tensorflow import keras
from tensorflow.keras import layers

input_shape = (28, 28, 1)  # MNIST images
num_classes = 10

model1 = keras.Sequential(
    [
        keras.Input(shape=input_shape),
        layers.Conv2D(32, kernel_size=2, activation="relu", strides=1, padding="same"),
        layers.MaxPooling2D(pool_size=2),
    ]
)
model2 = keras.Sequential(
    [
        layers.Conv2D(64, kernel_size=2, activation="relu", strides=1, padding="same"),
        layers.MaxPooling2D(pool_size=2),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ]
)
I am expecting a way to train these models on different devices, where the guest device has the images and model1 and sends the intermediate output to the host after a forward pass through model1.
The host contains model2 and the labels for the images; it receives the intermediate output, trains model2, and sends the gradient of the intermediate output back to the guest, which then trains model1.
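In a single process you can simulate this exchange with two tf.GradientTapes: the host computes the loss and the gradient with respect to the intermediate activation, and the guest uses that gradient (via output_gradients) to update model1. A minimal sketch under those assumptions, where x_batch/y_batch stand in for one MNIST batch:

import tensorflow as tf

optimizer1 = tf.keras.optimizers.Adam()  # guest optimizer for model1
optimizer2 = tf.keras.optimizers.Adam()  # host optimizer for model2
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def train_step(x_batch, y_batch):
    # --- Guest side: forward pass through model1 ---
    with tf.GradientTape() as guest_tape:
        intermediate = model1(x_batch, training=True)
    # The guest "sends" this tensor to the host.
    host_input = tf.identity(intermediate)

    # --- Host side: forward/backward pass through model2 ---
    with tf.GradientTape() as host_tape:
        host_tape.watch(host_input)
        preds = model2(host_input, training=True)
        loss = loss_fn(y_batch, preds)
    grad_input, grads2 = host_tape.gradient(
        loss, [host_input, model2.trainable_variables])
    optimizer2.apply_gradients(zip(grads2, model2.trainable_variables))

    # --- Guest side: backprop the received gradient through model1 ---
    grads1 = guest_tape.gradient(
        intermediate, model1.trainable_variables, output_gradients=grad_input)
    optimizer1.apply_gradients(zip(grads1, model1.trainable_variables))
    return loss

In a real federated setup the tensors host_input and grad_input would be serialized and sent over the network instead of passed in memory.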

How to use LayerNormalization layer in a Keras sequential Model?

I am just getting into Keras and TensorFlow.
I'm having a lot of problems adding an input normalization layer to a sequential model.
My current model is:
import tensorflow as tf

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(256, input_shape=(13,), activation='relu'))
model.add(tf.keras.layers.LayerNormalization(axis=-1, center=True, scale=True))
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(1))
model.summary()
My doubt is whether I should first call an adapt function, and how to use the layer in the sequential model.
Thanks to all!!
I'm trying to figure this out as well. According to this example, adapt is not necessary: LayerNormalization normalizes each sample at call time, so there are no statistics to learn up front.
model = tf.keras.models.Sequential([
    # Reshape into "channels last" setup.
    tf.keras.layers.Reshape((28,28,1), input_shape=(28,28)),
    tf.keras.layers.Conv2D(filters=10, kernel_size=(3,3), data_format="channels_last"),
    # LayerNorm layer
    tf.keras.layers.LayerNormalization(axis=3, center=True, scale=True),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_test, y_test)
Also, make sure you want a LayerNormalization. If I understand correctly, it normalizes each input sample on its own. Batch normalization may be more appropriate. See this for more info.
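If what you actually want is input normalization learned from the data (which is where adapt comes in), recent TF versions provide tf.keras.layers.Normalization (older versions have it as tf.keras.layers.experimental.preprocessing.Normalization). A minimal sketch, with hypothetical training data:

import numpy as np
import tensorflow as tf

# Hypothetical training data: 500 samples with 13 features each.
x_train = np.random.rand(500, 13).astype("float32")

# Learns the per-feature mean and variance from the data.
norm_layer = tf.keras.layers.Normalization(axis=-1)
norm_layer.adapt(x_train)

model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(13,)),
    norm_layer,  # standardizes each input feature with the adapted statistics
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(1),
])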

Alternative for tf.keras.layers.experimental.preprocessing.Resizing that works on embedded system (ARM6)

I use the following model for audio classification:
from tensorflow.keras import layers, models
from tensorflow.keras.layers.experimental import preprocessing

input_shape = (624, 129, 1)  # spectrogram size described below

model = models.Sequential([
    layers.Input(shape=input_shape),
    preprocessing.Resizing(64, 64),
    layers.Conv2D(64, 3, activation='relu', padding="same"),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),
    layers.Conv2D(32, 3, activation='relu'),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(2),
])
This works fine on a CPU. Now I would like to run the trained model on a TPU (Coral USB Accelerator via Raspberry Pi Zero (ARM6)). The Edge TPU compiler, which must be used to run a model on the Coral, does not accept tf.keras.layers.experimental.preprocessing.Resizing (third line in the code snippet); it only accepts a subset of TensorFlow Lite operations. I am therefore looking for an alternative way to resize my 624x129x1 spectrogram to 64x64x1 as a preprocessing step outside of models.Sequential(). Unfortunately, TensorFlow and PyTorch do not run on the RPi0, so I cannot use those libraries and need another way to preprocess the spectrogram to the right size.
Any help or advice is highly appreciated!
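One dependency-light option, sketched below, is to do the resize in plain NumPy (which does run on ARM6) with simple bilinear interpolation; Pillow's Image.resize would be an alternative if it installs on the Pi:

import numpy as np

def resize_bilinear(img, out_h, out_w):
    """Resize a 2-D array with bilinear interpolation (pure NumPy)."""
    in_h, in_w = img.shape
    # Source-image coordinates for every output pixel.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    # Weighted mix of the four neighbouring pixels.
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bottom = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bottom * wy

# Placeholder 624x129 spectrogram down to the 64x64x1 the model expects.
spectrogram = np.random.rand(624, 129).astype("float32")
small = resize_bilinear(spectrogram, 64, 64)[..., np.newaxis]
print(small.shape)  # (64, 64, 1)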

Why is my CNN/Image Classifier model accuracy so low?

I'm currently trying to build a CNN that can detect whether a patient has pneumonia caused by covid or not, and no matter what parameters I change the model accuracy stays at 49%/50%, so it's basically useless because it's the same as a coin flip. Here is my code; I thought I would try using the VGG-16 architecture.
from tensorflow import keras
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, MaxPool2D, GlobalAveragePooling2D
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Loading in the dataset
traindata = ImageDataGenerator(rescale=1/255)
trainingdata = traindata.flow_from_directory(
    directory="Covid-19CT/TrainingData",
    target_size=(224,224),
    batch_size=100,
    class_mode="binary")
testdata = ImageDataGenerator(rescale=1/255)
testingdata = testdata.flow_from_directory(
    directory="Covid-19CT/TestingData",
    target_size=(224,224),
    batch_size=100,
    class_mode="binary")
# Initialize the model w/ Sequential & add layers + input and output <- will refer to the VGG 16 model architecture
model = Sequential()
model.add(Conv2D(input_shape=(224,224,3),filters=64,kernel_size=(2,2),padding="same", activation="relu"))
model.add(Conv2D(filters=64, kernel_size=(3,3), padding="same", activation ="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=2))
model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=2))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=2))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=2))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=2))
model.add(GlobalAveragePooling2D())
model.add(Dense(units=4096, activation="relu"))
model.add(Dense(units=4096, activation="relu"))
model.add(Dense(units=1000, activation="relu"))
model.add(Dense(units=1, activation="softmax"))
# Compile the model
model_optimizer = Adam(lr=0.001)
model.compile(optimizer=model_optimizer, loss=keras.losses.binary_crossentropy, metrics=['accuracy'])
# Add the callbacks
checkpoint = ModelCheckpoint(filepath="Covid-19.hdf5", monitor='val_acc', verbose=1, save_best_only=True, save_weights_only=False, mode='auto')
early = EarlyStopping(monitor='val_acc', min_delta=0, patience=50, verbose=1, mode='auto')
fit = model.fit_generator(generator=trainingdata, steps_per_epoch=25,
                          validation_data=testingdata, validation_steps=10,
                          epochs=10, callbacks=[checkpoint, early])
This always gives:
Epoch 1/10
 6/25 [======>.......................] - ETA: 1:22:37 - loss: 7.5388 - accuracy: 0.5083
Well, it just always gives a really poor accuracy...
Additional info:
Some of the images in the dataset are JPG, others are PNG (not sure if this is the culprit).
The dataset has 2072 Covid CT images and 2098 non-Covid CT images for training.
The dataset has 576 Covid CT images and 532 non-Covid CT images for testing.
The file structure looks like this: Covid19ModelImages -> Training Data & Testing Data. Training Data has 2 subfolders, Covid19CT and noncovid19CT, and Testing Data also has 2 subfolders, Covid19CT and noncovid19CT.
Also: am I just being too impatient? I never let it run past the 1st epoch because I just assume it's never going to get better than 50%. Could it be that the model will improve in later epochs?
If anyone would be willing to help out, or if you need any other additional info to maybe help you gain a better understanding of the problem, please let me know!
Since you are using binary cross entropy, the activation function in the dense layer with 1 unit should be "sigmoid".

Since you are not using a GPU, you have very long training times per epoch. To see if the model is working correctly, you may want to reduce this time. There are a few things you could do. Try reducing the image size, say to 128 by 128: with 224 x 224 you have 50176 pixels to process versus 16384 for a 128 x 128 image, so you reduce the computation by about a factor of 3. You also have two dense layers with 4096 units each, which is computationally expensive and may lead to overfitting. Try your model initially without these layers and see how it performs.

I am not a fan of early stopping because it is a crutch to avoid dealing with the overfitting issue. If you encounter overfitting, add a dropout layer to help avoid it.

Finally, I recommend you use an adjustable learning rate. The ReduceLROnPlateau callback makes this easy to do. Set it to monitor validation loss; you can set the parameters to reduce the learning rate by a factor < 1 if the loss fails to decrease after "patience" consecutive epochs. I usually use factor=.5 and patience=1. This also enables you to use a larger initial learning rate for faster convergence. Documentation is here.

You need to let your model run for several epochs to see if the training loss and validation loss are decreasing.
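For reference, a sketch of the suggested callback with the factor and patience values mentioned above, reusing the names from the question:

from tensorflow.keras.callbacks import ReduceLROnPlateau

# Halve the learning rate whenever validation loss stalls for one epoch.
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5,
                              patience=1, verbose=1)

fit = model.fit(trainingdata, validation_data=testingdata,
                epochs=10, callbacks=[checkpoint, early, reduce_lr])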