Weird accuracy metric in verbose when using Tensorflow keras loss=tf.losses.sparse_softmax_cross_entropy - tensorflow

While testing training on the MNIST dataset with TensorFlow's Keras API, I see a weird accuracy reported when specifying loss=tf.losses.sparse_softmax_cross_entropy in the compile statement. I am simply trying out three different ways of specifying the loss function, viz.
loss='sparse_categorical_crossentropy'
loss=tf.losses.sparse_softmax_cross_entropy
loss=tf.losses.softmax_cross_entropy
Google Colab link to explain the point better
Here is the sample code
import tensorflow as tf
import numpy as np
from tensorflow import keras
from keras.datasets import mnist
from sklearn.metrics import f1_score
(x_train,y_train),(x_test,y_test) = mnist.load_data()
model = tf.keras.Sequential([
    keras.layers.Flatten(),
    keras.layers.Dense(units=128, activation=tf.nn.relu),
    keras.layers.Dense(10, activation=tf.nn.softmax)
])
# loss='sparse_categorical_crossentropy'
model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
hist = model.fit(x_train/255,y_train,epochs=10, verbose=0 )
y_pred1 = model.predict(x_test/255)
y_pred1 = np.argmax(y_pred1, axis=1)
results1 = (np.array([y_pred1 == y_test]).astype(int).reshape(-1,1))
acc1 = np.asscalar(sum(results1)/results1.shape[0])
print("case 1: loss='sparse_categorical_crossentropy' : "+ str(hist.history['acc'][9]))
print("calculated test acc : "+ str(acc1))
print("_________________________________________________")
# loss=tf.losses.sparse_softmax_cross_entropy
model.compile(optimizer=tf.train.AdamOptimizer(),
              loss=tf.losses.sparse_softmax_cross_entropy,
              metrics=['accuracy'])
hist = model.fit(x_train/255,y_train,epochs=10,verbose=1)
y_pred2 = model.predict(x_test/255)
y_pred2 = np.argmax(y_pred2, axis=1)
results2 = (np.array([y_pred2 == y_test]).astype(int).reshape(-1,1))
acc2 = np.asscalar(sum(results2)/results2.shape[0])
print("case 2: loss=tf.losses.sparse_softmax_cross_entropy : "+ str(hist.history['acc'][9]))
print("calculated test acc : "+ str(acc2))
print("_________________________________________________")
# loss=tf.losses.softmax_cross_entropy
model.compile(optimizer=tf.train.AdamOptimizer(),
              loss=tf.losses.softmax_cross_entropy,
              metrics=['accuracy'])
from keras.utils import to_categorical
y_train_onehot = to_categorical(y_train)
hist = model.fit(x_train/255,y_train_onehot,epochs=10,verbose=0)
y_pred3 = model.predict(x_test/255)
y_pred3 = np.argmax(y_pred3, axis=1)
results3 = (np.array([y_pred3 == y_test]).astype(int).reshape(-1,1))
acc3 = np.asscalar(sum(results3)/results3.shape[0])
print("case 3: loss=tf.losses.softmax_cross_entropy : "+ str(hist.history['acc'][9]))
print("calculated test acc : "+ str(acc3))
The output is as shown below
case 1: loss='sparse_categorical_crossentropy' : 0.99495
calculated test acc : 0.978
_________________________________________________
Epoch 1/10
60000/60000 [==============================] - 5s 79us/sample - loss: 1.4690 - acc: 0.0988
Epoch 2/10
60000/60000 [==============================] - 5s 79us/sample - loss: 1.4675 - acc: 0.0987
Epoch 3/10
60000/60000 [==============================] - 4s 75us/sample - loss: 1.4661 - acc: 0.0988
Epoch 4/10
60000/60000 [==============================] - 4s 74us/sample - loss: 1.4656 - acc: 0.0987
Epoch 5/10
60000/60000 [==============================] - 5s 77us/sample - loss: 1.4652 - acc: 0.0987
Epoch 6/10
60000/60000 [==============================] - 5s 78us/sample - loss: 1.4648 - acc: 0.0988
Epoch 7/10
60000/60000 [==============================] - 4s 75us/sample - loss: 1.4644 - acc: 0.0987
Epoch 8/10
60000/60000 [==============================] - 5s 76us/sample - loss: 1.4641 - acc: 0.0988
Epoch 9/10
60000/60000 [==============================] - 5s 79us/sample - loss: 1.4639 - acc: 0.0987
Epoch 10/10
60000/60000 [==============================] - 5s 76us/sample - loss: 1.4639 - acc: 0.0988
case 2: loss=tf.losses.sparse_softmax_cross_entropy : 0.09876667
calculated test acc : 0.9791
_________________________________________________
case 3: loss=tf.losses.softmax_cross_entropy : 0.99883336
calculated test acc : 0.9784
The accuracy reported in the verbose output for the second case, loss=tf.losses.sparse_softmax_cross_entropy, is 0.0987, which doesn't make sense, since evaluating the model on both the training and test data shows accuracy above 0.97.
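As a sanity check (a minimal sketch, not part of the original question), the metric reported in the progress bar can be compared against model.evaluate on the same data; this assumes it is run right after the case 2 fit above, while the model is still compiled with the sparse loss and integer labels:
# Cross-check the reported metric with model.evaluate (assumes `model`,
# `x_train`, `y_train`, `x_test`, `y_test` from the snippet above).
train_loss, train_acc = model.evaluate(x_train / 255, y_train, verbose=0)
test_loss, test_acc = model.evaluate(x_test / 255, y_test, verbose=0)
print("evaluate() train acc: " + str(train_acc))
print("evaluate() test acc : " + str(test_acc))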

Related

Keras : Simple 1 variable training to get the mean

I wrote a very basic model to train a single variable to approximate the mean value of a vector. But for some reason, it's not working properly.
I used this page describing a linear fit (2 variables):
https://www.tensorflow.org/guide/basic_training_loops
My code is as follows:
import tensorflow as tf
import numpy as np
class MyModel(tf.keras.Model):
    def __init__(self, **kwargs):
        super().__init__()
        self.b = tf.Variable(1.0, trainable=True)

    def call(self, x):
        return x - self.b
model = MyModel()
model.compile(optimizer=tf.optimizers.Adam(learning_rate=1e-3), loss='mae')
X = np.random.random((10000,1))
Y = np.zeros(X.shape)
model.fit(X, Y, batch_size=10, epochs=10)
b should be optimized so that sum(abs(X - b)) is as close to 0 as possible (i.e. b approaches the mean). However, when I fit the model it's not training at all and always reaches the solution b=0 (the real mean is around 0.5).
What do I do wrong?
This code is working fine. Please check the execution below and its output:
import tensorflow as tf
import numpy as np
class MyModel(tf.keras.Model):
    def __init__(self, **kwargs):
        super().__init__()
        self.b = tf.Variable(1.0, trainable=True)

    def call(self, x):
        return x - self.b
model = MyModel()
model.compile(optimizer=tf.optimizers.Adam(learning_rate=1e-3), loss='mae')
X = np.random.random((10000,1))
Y = np.zeros(X.shape)
model.fit(X, Y, batch_size=10, epochs=10)
Output:
Epoch 1/10
1000/1000 [==============================] - 2s 1ms/step - loss: 0.2991
Epoch 2/10
1000/1000 [==============================] - 1s 1ms/step - loss: 0.2476
Epoch 3/10
1000/1000 [==============================] - 1s 1ms/step - loss: 0.2476
Epoch 4/10
1000/1000 [==============================] - 1s 1ms/step - loss: 0.2476
Epoch 5/10
1000/1000 [==============================] - 1s 1ms/step - loss: 0.2476
Epoch 6/10
1000/1000 [==============================] - 1s 1ms/step - loss: 0.2476
Epoch 7/10
1000/1000 [==============================] - 2s 2ms/step - loss: 0.2476
Epoch 8/10
1000/1000 [==============================] - 2s 2ms/step - loss: 0.2476
Epoch 9/10
1000/1000 [==============================] - 2s 2ms/step - loss: 0.2476
Epoch 10/10
1000/1000 [==============================] - 2s 2ms/step - loss: 0.2477
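The loss settles at roughly 0.25, which is the smallest MAE achievable for Uniform(0, 1) data (reached at b ≈ 0.5), so the variable is in fact being trained towards the mean. As a quick check (a minimal sketch, assuming the model and X defined above), you can print the trained variable after fitting:
# Print the trained variable; it should end up close to 0.5, the mean of the data.
print("trained b:", float(model.b.numpy()))
print("data mean:", float(X.mean()))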

Saving and loading of Keras model not working

I've been trying to save and reload a model, and whenever I do that the accuracy always goes down.
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(64, kernel_size=3, activation='relu', input_shape=(IMG_SIZE,IMG_SIZE,3)))
model.add(tf.keras.layers.Conv2D(32, kernel_size=3, activation='relu'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(len(SURFACE_TYPES), activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['acc'])
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=EPOCHS,
    validation_steps=10)
Output:
Epoch 1/3
84/84 [==============================] - 2s 19ms/step - loss: 1.9663 - acc: 0.6258 - val_loss: 0.8703 - val_acc: 0.6867
Epoch 2/3
84/84 [==============================] - 1s 18ms/step - loss: 0.2865 - acc: 0.9105 - val_loss: 0.4494 - val_acc: 0.8667
Epoch 3/3
84/84 [==============================] - 1s 18ms/step - loss: 0.1409 - acc: 0.9574 - val_loss: 0.3614 - val_acc: 0.9000
This is followed by running the commands below, which produce the same training loss but different training accuracies. The weights and structures of the two models are also identical.
model.save("my_model2.h5")
model2 = load_model("my_model2.h5")
model2.evaluate(train_ds)
model.evaluate(train_ds)
Output:
84/84 [==============================] - 1s 9ms/step - loss: 0.0854 - acc: 0.0877
84/84 [==============================] - 1s 9ms/step - loss: 0.0854 - acc: 0.9862
[0.08536089956760406, 0.9861862063407898]
I have shared a reference link (click here); it covers all the formats for saving and loading your model.
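For reference, these are the standard tf.keras save/load patterns such a link would typically cover; a minimal sketch for TF 2.x with placeholder paths, assuming the `model` defined above:
# Whole model in HDF5 format (architecture + weights + optimizer state)
model.save("my_model2.h5")
restored_h5 = tf.keras.models.load_model("my_model2.h5")

# Whole model in the TensorFlow SavedModel format (a directory)
model.save("my_model_savedmodel")
restored_sm = tf.keras.models.load_model("my_model_savedmodel")

# Weights only; rebuild the same architecture before calling load_weights
model.save_weights("my_model_weights.h5")
# rebuilt_model.load_weights("my_model_weights.h5")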

Binary vs Multiclass Classification using TPU

I am using an EfficientNetB7 and EfficientNetB0 model for training my dataset, and am facing a major anomaly.
EfficientNetB7 gave 96.4 percent accuracy with 40 epochs, lr_callback, 4 nb_classes, and imagenet weights.
GCS_DS_PATH = KaggleDatasets().get_gcs_path('plant-pathology-2020-fgvc7')
path='../input/plant-pathology-2020-fgvc7/'
train = pd.read_csv(path + 'train.csv')
test = pd.read_csv(path + 'test.csv')
sub = pd.read_csv(path + 'sample_submission.csv')
train_paths = train.image_id.apply(lambda x : GCS_DS_PATH + '/images/' + x + '.jpg').values
test_paths = test.image_id.apply(lambda x : GCS_DS_PATH + '/images/' + x + '.jpg').values
train_labels = train.loc[:,'healthy':].values.astype(int)
train_labels_healthy = train.loc[:,'healthy'].values.astype(int)
train_labels_multiple_diseases = train.loc[:,'multiple_diseases'].values.astype(int)
train_labels_rust = train.loc[:,'rust'].values.astype(int)
train_labels_scab = train.loc[:,'scab'].values.astype(int)
train_dataset = (
    tf.data.Dataset
    .from_tensor_slices((train_paths, train_labels))
    .map(decode_image, num_parallel_calls=AUTO)
    .map(data_augment, num_parallel_calls=AUTO)
    .repeat()
    .shuffle(512)
    .batch(BATCH_SIZE)
    .prefetch(AUTO)
)
train_dataset1 = (
    tf.data.Dataset
    .from_tensor_slices((train_paths, train_labels_healthy_one_hot))
    .map(decode_image, num_parallel_calls=AUTO)
    .map(data_augment, num_parallel_calls=AUTO)
    .repeat()
    .shuffle(512)
    .batch(BATCH_SIZE)
    .prefetch(AUTO)
)
nb_classes = 4

def get_model():
    base_model = efn.EfficientNetB7(weights='imagenet', include_top=False, pooling='avg', input_shape=(img_size, img_size, 3))
    x = base_model.output
    predictions = Dense(nb_classes, activation="softmax")(x)
    return Model(inputs=base_model.input, outputs=predictions)

with strategy.scope():
    model = get_model()

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
model.fit(
    train_dataset,
    steps_per_epoch=train_labels.shape[0] // BATCH_SIZE,
    epochs=10
)
Output: Train for 28 steps
Epoch 1/10
28/28 [==============================] - 253s 9s/step - loss: 0.2862 - accuracy: 0.8951
Epoch 2/10
28/28 [==============================] - 15s 535ms/step - loss: 0.1453 - accuracy: 0.9520
Epoch 3/10
28/28 [==============================] - 34s 1s/step - loss: 0.1450 - accuracy: 0.9554
Epoch 4/10
28/28 [==============================] - 35s 1s/step - loss: 0.1271 - accuracy: 0.9587
Epoch 5/10
28/28 [==============================] - 35s 1s/step - loss: 0.0935 - accuracy: 0.9621
Epoch 6/10
28/28 [==============================] - 35s 1s/step - loss: 0.0951 - accuracy: 0.9621
Epoch 7/10
28/28 [==============================] - 35s 1s/step - loss: 0.0615 - accuracy: 0.9721
Epoch 8/10
28/28 [==============================] - 35s 1s/step - loss: 0.0674 - accuracy: 0.9833
Epoch 9/10
28/28 [==============================] - 35s 1s/step - loss: 0.0654 - accuracy: 0.9743
Epoch 10/10
28/28 [==============================] - 35s 1s/step - loss: 0.0435 - accuracy: 0.9821
So, I tried improving the accuracy by using 4 EfficientNetB0 models to predict the 4 classes independently, but the accuracy got stuck at 50 per cent. I tried varying the learning rate to see if it is stuck in a local minimum, but the accuracy is the same.
nb_classes = 1

def get_model():
    base_model = efn.EfficientNetB0(weights='imagenet', include_top=False, pooling='avg', input_shape=(img_size, img_size, 3))
    x = base_model.output
    predictions = Dense(nb_classes, activation="softmax")(x)
    return Model(inputs=base_model.input, outputs=predictions)
adam = Adam(learning_rate=0.05)  # Tried 0.0001, 0.001, 0.01, 0.05

with strategy.scope():
    model1 = get_model()
    # print('1')
    # model2 = get_model()
    # print('2')
    # model3 = get_model()
    # print('3')
    # model4 = get_model()
    # print('4')

model1.compile(optimizer=adam, loss='binary_crossentropy', metrics=['accuracy'])
# model2.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model3.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model4.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model1.summary()
# model2.summary()
# model3.summary()
# model4.summary()
model1.fit(
    train_dataset1,
    steps_per_epoch=train_labels_rust.shape[0] // BATCH_SIZE,
    epochs=10
)
Output: Train for 28 steps
Epoch 1/10
28/28 [==============================] - 77s 3s/step - loss: 7.6666 - accuracy: 0.5000
Epoch 2/10
28/28 [==============================] - 32s 1s/step - loss: 7.6666 - accuracy: 0.5000
Epoch 3/10
28/28 [==============================] - 33s 1s/step - loss: 7.6666 - accuracy: 0.5000
Epoch 4/10
28/28 [==============================] - 33s 1s/step - loss: 7.6666 - accuracy: 0.5000
Epoch 5/10
28/28 [==============================] - 33s 1s/step - loss: 7.6666 - accuracy: 0.5000
Epoch 6/10
28/28 [==============================] - 33s 1s/step - loss: 7.6666 - accuracy: 0.5000
Epoch 7/10
28/28 [==============================] - 33s 1s/step - loss: 7.6666 - accuracy: 0.5000
Epoch 8/10
28/28 [==============================] - 33s 1s/step - loss: 7.6666 - accuracy: 0.5000
Epoch 9/10
28/28 [==============================] - 33s 1s/step - loss: 7.6666 - accuracy: 0.5000
Epoch 10/10
28/28 [==============================] - 34s 1s/step - loss: 7.6666 - accuracy: 0.5000
I also tried other neural networks like ResNet50, but the accuracy remained stuck at 50 percent. Can anyone please tell me where I am making a mistake?
To predict more than 2 classes, use 'categorical_crossentropy' or 'sparse_categorical_crossentropy' as the loss with a softmax activation.
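To expand on this (not part of the original answer): with Dense(1, activation="softmax"), the softmax over a single unit always outputs 1.0, which would explain the per-class models being stuck at 50 percent accuracy; a single binary output needs a sigmoid instead. A minimal sketch of the two standard output heads, assuming the same efn, Dense, Model and img_size names as in the snippets above:
def get_multiclass_model(nb_classes=4):
    # Multiclass head: one softmax unit per class + categorical_crossentropy
    base = efn.EfficientNetB0(weights='imagenet', include_top=False,
                              pooling='avg', input_shape=(img_size, img_size, 3))
    out = Dense(nb_classes, activation='softmax')(base.output)
    model = Model(inputs=base.input, outputs=out)
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model

def get_binary_model():
    # Binary per-class head: a single sigmoid unit + binary_crossentropy.
    # Dense(1, activation='softmax') would always output 1.0.
    base = efn.EfficientNetB0(weights='imagenet', include_top=False,
                              pooling='avg', input_shape=(img_size, img_size, 3))
    out = Dense(1, activation='sigmoid')(base.output)
    model = Model(inputs=base.input, outputs=out)
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model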

Validation Loss Increases every iteration

Recently I have been trying to do multi-class classification. My dataset consists of 17 image categories. Previously I was using 3 conv layers and 2 hidden layers, which resulted in my model overfitting, with a huge validation loss around 11.0 and very low validation accuracy. So I decided to decrease the conv layers by 1 and the hidden layers by 1. I also removed dropout, and it still has the same problem: the validation is still overfitting, even though my training accuracy and loss keep getting better.
Here is my code for preparing the datasets:
import cv2
import numpy as np
import os
import pickle
import random
CATEGORIES = ["apple_pie", "baklava", "caesar_salad","donuts",
"fried_calamari", "grilled_salmon", "hamburger",
"ice_cream", "lasagna", "macaroni_and_cheese", "nachos", "omelette","pizza",
"risotto", "steak", "tiramisu", "waffles"]
DATALOC = "D:/Foods/Datasets"
IMAGE_SIZE = 50
data_training = []
def create_data_training():
    for category in CATEGORIES:
        path = os.path.join(DATALOC, category)
        class_num = CATEGORIES.index(category)
        for image in os.listdir(path):
            try:
                image_array = cv2.imread(os.path.join(path, image), cv2.IMREAD_GRAYSCALE)
                new_image_array = cv2.resize(image_array, (IMAGE_SIZE, IMAGE_SIZE))
                data_training.append([new_image_array, class_num])
            except Exception as exc:
                pass
create_data_training()
random.shuffle(data_training)
X = []
y = []
for features, label in data_training:
    X.append(features)
    y.append(label)
X = np.array(X).reshape(-1, IMAGE_SIZE, IMAGE_SIZE, 1)
y = np.array(y)
pickle_out = open("X.pickle", "wb")
pickle.dump(X, pickle_out)
pickle_out.close()
pickle_out = open("y.pickle", "wb")
pickle.dump(y, pickle_out)
pickle_out.close()
pickle_in = open("X.pickle","rb")
X = pickle.load(pickle_in)
Here is the code of my model:
import pickle
import tensorflow as tf
import time
from tensorflow.keras.models import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Activation, Conv2D, Dense, Dropout, Flatten, MaxPooling2D
NAME = "Foods-Model-{}".format(int(time.time()))
tensorboard = TensorBoard(log_dir='logs\{}'.format(NAME))
X = pickle.load(open("X.pickle","rb"))
y = pickle.load(open("y.pickle","rb"))
X = X/255.0
model = Sequential()
model.add(Conv2D(32,(3,3), input_shape = X.shape[1:]))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size =(2,2)))
model.add(Conv2D(64,(3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size =(2,2)))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation("relu"))
model.add(Dense(17))
model.add(Activation('softmax'))
model.compile(loss = "sparse_categorical_crossentropy", optimizer = "adam", metrics = ['accuracy'])
model.fit(X, y, batch_size = 16, epochs = 20 , validation_split = 0.1, callbacks = [tensorboard])
The result of the trained model:
Train on 7650 samples, validate on 850 samples
Epoch 1/20
7650/7650 [==============================] - 242s 32ms/sample - loss: 2.7826 - accuracy: 0.1024 - val_loss: 2.7018 - val_accuracy: 0.1329
Epoch 2/20
7650/7650 [==============================] - 241s 31ms/sample - loss: 2.5673 - accuracy: 0.1876 - val_loss: 2.5597 - val_accuracy: 0.2059
Epoch 3/20
7650/7650 [==============================] - 234s 31ms/sample - loss: 2.3529 - accuracy: 0.2617 - val_loss: 2.5329 - val_accuracy: 0.2153
Epoch 4/20
7650/7650 [==============================] - 233s 30ms/sample - loss: 2.0707 - accuracy: 0.3510 - val_loss: 2.6628 - val_accuracy: 0.2059
Epoch 5/20
7650/7650 [==============================] - 231s 30ms/sample - loss: 1.6960 - accuracy: 0.4753 - val_loss: 2.8143 - val_accuracy: 0.2047
Epoch 6/20
7650/7650 [==============================] - 230s 30ms/sample - loss: 1.2336 - accuracy: 0.6247 - val_loss: 3.3130 - val_accuracy: 0.1929
Epoch 7/20
7650/7650 [==============================] - 233s 30ms/sample - loss: 0.7738 - accuracy: 0.7715 - val_loss: 3.9758 - val_accuracy: 0.1776
Epoch 8/20
7650/7650 [==============================] - 231s 30ms/sample - loss: 0.4271 - accuracy: 0.8827 - val_loss: 4.7325 - val_accuracy: 0.1882
Epoch 9/20
7650/7650 [==============================] - 233s 30ms/sample - loss: 0.2080 - accuracy: 0.9519 - val_loss: 5.7198 - val_accuracy: 0.1918
Epoch 10/20
7650/7650 [==============================] - 233s 30ms/sample - loss: 0.1402 - accuracy: 0.9668 - val_loss: 6.0608 - val_accuracy: 0.1835
Epoch 11/20
7650/7650 [==============================] - 236s 31ms/sample - loss: 0.0724 - accuracy: 0.9872 - val_loss: 6.7468 - val_accuracy: 0.1753
Epoch 12/20
7650/7650 [==============================] - 232s 30ms/sample - loss: 0.0549 - accuracy: 0.9895 - val_loss: 7.4844 - val_accuracy: 0.1718
Epoch 13/20
7650/7650 [==============================] - 229s 30ms/sample - loss: 0.1541 - accuracy: 0.9591 - val_loss: 7.3335 - val_accuracy: 0.1553
Epoch 14/20
7650/7650 [==============================] - 231s 30ms/sample - loss: 0.0477 - accuracy: 0.9905 - val_loss: 7.8453 - val_accuracy: 0.1729
Epoch 15/20
7650/7650 [==============================] - 233s 30ms/sample - loss: 0.0346 - accuracy: 0.9908 - val_loss: 8.1847 - val_accuracy: 0.1753
Epoch 16/20
7650/7650 [==============================] - 231s 30ms/sample - loss: 0.0657 - accuracy: 0.9833 - val_loss: 7.8582 - val_accuracy: 0.1624
Epoch 17/20
7650/7650 [==============================] - 233s 30ms/sample - loss: 0.0555 - accuracy: 0.9830 - val_loss: 8.2578 - val_accuracy: 0.1553
Epoch 18/20
7650/7650 [==============================] - 230s 30ms/sample - loss: 0.0423 - accuracy: 0.9892 - val_loss: 8.6970 - val_accuracy: 0.1694
Epoch 19/20
7650/7650 [==============================] - 236s 31ms/sample - loss: 0.0291 - accuracy: 0.9927 - val_loss: 8.5275 - val_accuracy: 0.1882
Epoch 20/20
7650/7650 [==============================] - 234s 31ms/sample - loss: 0.0443 - accuracy: 0.9873 - val_loss: 9.2703 - val_accuracy: 0.1812
Thank you for your time. Any help and suggestions will be really appreciated.
Your model suggests early over-fitting.
Get rid of the dense layer completely and use global pooling.
model = Sequential()
model.add(Conv2D(32,(3,3), input_shape = X.shape[1:]))
model.add(Activation("relu"))
model.add(Conv2D(64,(3,3)))
model.add(Activation("relu"))
model.add(Conv2D(128,(3,3)))
model.add(Activation("relu"))
model.add(GlobalAveragePooling2D())
model.add(Dense(17))
model.add(Activation('softmax'))
model.summary()
Use SpatialDropout2D after conv layers.
ref: https://www.tensorflow.org/api_docs/python/tf/keras/layers/SpatialDropout2D
Use early stopping to get a balanced model.
Your output suggests categorical_crossentropy as a better-fit loss.
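A minimal sketch (not from the original answer) of how the SpatialDropout2D and early-stopping suggestions could be wired into the model above; the dropout rate and patience are illustrative values, X, y and tensorboard are the objects defined in the question, and the sparse loss is kept since y contains integer labels (switching to categorical_crossentropy would require one-hot labels):
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.layers import (Activation, Conv2D, Dense,
                                     GlobalAveragePooling2D, SpatialDropout2D)
from tensorflow.keras.models import Sequential

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=X.shape[1:]))
model.add(Activation("relu"))
model.add(SpatialDropout2D(0.2))   # illustrative rate
model.add(Conv2D(64, (3, 3)))
model.add(Activation("relu"))
model.add(SpatialDropout2D(0.2))
model.add(Conv2D(128, (3, 3)))
model.add(Activation("relu"))
model.add(GlobalAveragePooling2D())
model.add(Dense(17))
model.add(Activation('softmax'))
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=['accuracy'])

# Stop when the validation loss stops improving and keep the best weights.
early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
model.fit(X, y, batch_size=16, epochs=20, validation_split=0.1,
          callbacks=[tensorboard, early_stop])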

Why is the output layer simply zero at the end of the network?

I am trying to train a model that takes a 15x15 image and classifies each pixel into one of two classes (1/0).
This is my loss function:
smooth = 1

def tversky(y_true, y_pred):
    y_true_pos = K.flatten(y_true)
    y_pred_pos = K.flatten(y_pred)
    true_pos = K.sum(y_true_pos * y_pred_pos)
    false_neg = K.sum(y_true_pos * (1 - y_pred_pos))
    false_pos = K.sum((1 - y_true_pos) * y_pred_pos)
    alpha = 0.5
    return (true_pos + smooth) / (true_pos + alpha * false_neg + (1 - alpha) * false_pos + smooth)

def tversky_loss2(y_true, y_pred):
    return 1 - tversky(y_true, y_pred)
This is the model:
input_image = layers.Input(shape=(size, size, 1))
b2 = layers.Conv2D(128, (3,3), padding='same', activation='relu')(input_image)
b2 = layers.Conv2D(128, (3,3), padding='same', activation='relu')(b2)
b2 = layers.Conv2D(128, (3,3), padding='same', activation='relu')(b2)
output = layers.Conv2D(1, (1,1), activation='sigmoid', padding='same')(b2)
model = models.Model(input_image, output)
model.compile(optimizer='adam', loss=tversky_loss2, metrics=['accuracy'])
In the accompanying figure, the left column is the input, the middle column is the label, and the prediction in the right column is always zero.
The training performs really poorly:
Epoch 1/10
100/100 [==============================] - 4s 38ms/step - loss: 0.9269 - acc: 0.1825
Epoch 2/10
100/100 [==============================] - 3s 29ms/step - loss: 0.9277 - acc: 0.0238
Epoch 3/10
100/100 [==============================] - 3s 29ms/step - loss: 0.9276 - acc: 0.0239
Epoch 4/10
100/100 [==============================] - 3s 29ms/step - loss: 0.9270 - acc: 0.0241
Epoch 5/10
100/100 [==============================] - 3s 30ms/step - loss: 0.9274 - acc: 0.0240
Epoch 6/10
100/100 [==============================] - 3s 29ms/step - loss: 0.9269 - acc: 0.0242
Epoch 7/10
100/100 [==============================] - 3s 29ms/step - loss: 0.9270 - acc: 0.0241
Epoch 8/10
100/100 [==============================] - 3s 29ms/step - loss: 0.9271 - acc: 0.0241
Epoch 9/10
100/100 [==============================] - 3s 29ms/step - loss: 0.9276 - acc: 0.0239
Epoch 10/10
100/100 [==============================] - 3s 29ms/step - loss: 0.9266 - acc: 0.0242
This sounds like a very imbalanced dataset with very tiny true regions, which might indeed be hard to train.
You may want to increase alpha to penalize false negatives more than false positives. Even so, unless alpha is big enough, it's very normal that in the beginning your model goes to all negatives, because that is an easy way to decrease the loss.
Now, there is a conceptual mistake in that loss regarding how Keras works. You need to keep the "samples" separate; otherwise you are calculating the loss as if all images were a single image. (Thus, it's probable that images with many positives get a reasonable result while images with few positives don't, and the loss will still treat this as a good solution.)
Fix the loss as:
def tversky(y_true, y_pred):
    y_true_pos = K.batch_flatten(y_true)   # keep the batch dimension
    y_pred_pos = K.batch_flatten(y_pred)
    true_pos = K.sum(y_true_pos * y_pred_pos, axis=-1)   # don't sum over the batch dimension
    false_neg = K.sum(y_true_pos * (1 - y_pred_pos), axis=-1)
    false_pos = K.sum((1 - y_true_pos) * y_pred_pos, axis=-1)
    alpha = 0.5
    return (true_pos + smooth) / (true_pos + alpha * false_neg + (1 - alpha) * false_pos + smooth)
This way you have an individual loss value for each image, so the existence of images with many positives doesn't affect the results of images with few positives.
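For completeness, a minimal sketch (not part of the original answer) of plugging the corrected loss back into the same model; the wrapper simply reuses the batch-flattened tversky above, and a larger alpha inside tversky (e.g. 0.7, an illustrative value) would penalize false negatives more, as suggested earlier:
def tversky_loss2(y_true, y_pred):
    # One loss value per image; Keras averages them over the batch.
    return 1 - tversky(y_true, y_pred)

model.compile(optimizer='adam', loss=tversky_loss2, metrics=['accuracy'])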