I am a newbie in machine learning.
I have been using my U-Net code for image segmentation, training with one input image slice (192x912) and its corresponding binary mask image (192x192). The U-Net consists of several CNN layers.
The relevant code is below.
def get_unet():
    inputs = Input((img_rows, img_cols, 1))
    conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)
    conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    drop1 = Dropout(0.2)(pool1)
    conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(drop1)
    conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    drop2 = Dropout(0.2)(pool2)
    # ... (remaining layers and Model definition elided) ...
    return model
model.fit(imgs_train, imgs_mask_train, batch_size=32, epochs=100, verbose=2, shuffle=True, validation_split=0.1, callbacks=[model_checkpoint])
It works well. But now I would like to use multiple input images when training the network, so I added another training array and edited my code as below.
def get_unet():
    inputs = Input((img_rows, img_cols, 1))
    conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)
    conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    drop1 = Dropout(0.2)(pool1)
    conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(drop1)
    conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    drop2 = Dropout(0.2)(pool2)
    # ... (remaining layers and Model definition elided) ...
    return model
model.fit([imgs_train, imgs_other_train], imgs_mask_train, batch_size=32, epochs=100, verbose=2, shuffle=True, validation_split=0.1, callbacks=[model_checkpoint])
But when I run the training code, I get the error message below.
"ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 2 arrays: "
I think my U-Net needs to be changed to accept multiple inputs, but I don't know what to change.
Please help, and give me any comments.
Thanks.
This is actually rather easy: all you need to do, I believe, is adjust your input shape.
inputs = Input((img_rows, img_cols, size)) would work if you stack your images along the channel axis into a single array. Or, if you want to keep passing a list of arrays to fit, use one Input layer per array and concatenate them:
inputs = []
for _ in range(num_inputs):
    inputs.append(Input((self.input_height, self.input_width, self.input_features)))
x = concatenate(inputs)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
You can check something similar I implemented here.
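For completeness, here is a minimal self-contained sketch of the two-input variant (the names input_a/input_b, the 192x192 size, and the 1x1 output head are assumptions for illustration; adapt them to your data):

from tensorflow.keras.layers import Input, Conv2D, concatenate
from tensorflow.keras.models import Model

img_rows, img_cols = 192, 192  # assumed spatial size

# one Input layer per training array
input_a = Input((img_rows, img_cols, 1))
input_b = Input((img_rows, img_cols, 1))

# merge along the channel axis, then continue as in the single-input U-Net
x = concatenate([input_a, input_b])
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
# ... the remaining U-Net down/up-sampling layers go here ...
out = Conv2D(1, (1, 1), activation='sigmoid')(x)  # placeholder segmentation head

model = Model(inputs=[input_a, input_b], outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy')

# a list of two arrays now matches the two Input layers:
# model.fit([imgs_train, imgs_other_train], imgs_mask_train, ...)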
I want to use a pre-trained model (from Keras Applications), with weights, and append my (very simple) CNN model at the end. To this end I am trying to loosely follow the tutorial here under the sub-header 'Fine-tune InceptionV3 on a new set of classes'.
My original simple CNN model was this:
model = Sequential()
model.add(Rescaling(1.0 / 255))
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(256,256,3)))
model.add(MaxPool2D(pool_size=(2, 2), strides=2))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2), strides=2))
model.add(Flatten())
model.add(Dense(units=5, activation='softmax'))
As I'm following the tutorial, I've converted it as so:
x = base_model.output
x = Rescaling(1.0 / 255)(x)
x = Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(256,256,3))(x)
x = MaxPool2D(pool_size=(2, 2), strides=2)(x)
x = Conv2D(64, kernel_size=(3, 3), activation='relu')(x)
x = MaxPool2D(pool_size=(2, 2), strides=2)(x)
x = GlobalAveragePooling2D()(x)
predictions = Dense(units=5, activation='softmax')(x)
As you can see, the difference is that the top model is a Sequential() model while the bottom one is functional (I think?), and the Flatten() layer has been replaced with GlobalAveragePooling2D(). I did this because I kept getting shape-related errors and it wasn't compiling. I thought I had it once I replaced Flatten() with GlobalAveragePooling2D(), as this part of the code finally compiled. However, now that I'm trying to train the model, it gives me the following error:
ValueError: Exception encountered when calling layer "max_pooling2d_7" (type MaxPooling2D).
Negative dimension size caused by subtracting 2 from 1 for '{{node model/max_pooling2d_7/MaxPool}} = MaxPool[T=DT_FLOAT, data_format="NHWC", explicit_paddings=[], ksize=[1, 2, 2, 1], padding="VALID", strides=[1, 2, 2, 1]](model/conv2d_10/Relu)' with input shapes: [?,1,1,64].
Call arguments received:
• inputs=tf.Tensor(shape=(None, 1, 1, 64), dtype=float32)
I don't want to remove the MaxPooling layers, as I want this appended model to stay as close as possible to the 'simple CNN' I originally had, so that I can compare the two results. But I keep getting hit with these shape errors, which I don't really understand, and it's coming to the end of the day.
Is there a nice quick-fix that can enable this VGG16+simple CNN to work?
The first and most important technical problem in your model structure is that you are rescaling the images after they have passed through the base_model; the rescaling should happen just before the base model.
The second is that you defined input_shape on a convolution layer placed after the base model, even though the data passes through the base model first; you should define an Input layer before the base model and then pass its output through base_model and the other layers.
Here I've edited your code:
inputs = Input(shape=(256, 256, 3))
x = Rescaling(1.0 / 255)(inputs)
x = base_model(x)
x = Conv2D(32, kernel_size=(3, 3), activation='relu')(x)
x = MaxPool2D(pool_size=(2, 2), strides=2)(x)
x = Conv2D(64, kernel_size=(3, 3), activation='relu')(x)
x = MaxPool2D(pool_size=(2, 2), strides=2)(x)
x = GlobalAveragePooling2D()(x)
predictions = Dense(units=5, activation='softmax')(x)
model = keras.Model(inputs = [inputs], outputs = [predictions])
As for the error itself: with a 256x256 input, VGG16's convolutional base outputs 8x8 feature maps; a 'valid' 3x3 convolution shrinks that to 6x6, max pooling to 3x3, the next 'valid' convolution to 1x1, and the final 2x2 max pool then has nothing left to pool, hence the negative-dimension error. You could set the convolution layers' padding parameter to 'same', or resize the images to a larger size, to get around the problem.
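A minimal end-to-end sketch with that fix applied (assuming the base model is VGG16 with include_top=False, as mentioned in the question; the shape comments follow from the 256x256 input):

from tensorflow import keras
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import (Input, Rescaling, Conv2D, MaxPool2D,
                                     GlobalAveragePooling2D, Dense)

base_model = VGG16(weights='imagenet', include_top=False, input_shape=(256, 256, 3))

inputs = Input(shape=(256, 256, 3))
x = Rescaling(1.0 / 255)(inputs)
x = base_model(x)                                              # -> (None, 8, 8, 512)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)   # stays 8x8
x = MaxPool2D(pool_size=(2, 2), strides=2)(x)                  # -> 4x4
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)   # stays 4x4
x = MaxPool2D(pool_size=(2, 2), strides=2)(x)                  # -> 2x2
x = GlobalAveragePooling2D()(x)                                # -> (None, 64)
predictions = Dense(units=5, activation='softmax')(x)
model = keras.Model(inputs=inputs, outputs=predictions)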
I want to train an autoencoder for the purpose of GPR investigations.
The input data dimension is 149x8. When I try a deep (dense) autoencoder, it works fine:
input_img = Input(shape=(8,))
encoded1 = Dense(8, activation='relu')(input_img)
encoded2 = Dense(4, activation='relu')(encoded1)
encoded3 = Dense(2, activation='relu' )(encoded2)
decoded1 = Dense(2, activation='relu' )(encoded3)
decoded2 = Dense(4, activation='relu')(decoded1)
decoded3 = Dense(8, activation='relu' )(decoded2)
decoded = Dense(8, activation='linear')(decoded3)
autoencoder = Model(input_img, decoded)
sgd = optimizers.Adam(lr=0.001)
autoencoder.compile(optimizer=sgd, loss='mse')
autoencoder.summary()
But when I try to use a convolutional autoencoder for the same input,
it gives the error `ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=2`.
Can anybody suggest how to overcome this problem?
My code is:
input_img = Input(shape=(8,))
x = layers.Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
x = layers.MaxPooling2D((2, 2), padding='same')(x)
x = layers.Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = layers.MaxPooling2D((2, 2), padding='same')(x)
x = layers.Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = layers.MaxPooling2D((2, 2), padding='same')(x)
x = layers.Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = layers.UpSampling2D((2, 2))(x)
x = layers.Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = layers.UpSampling2D((2, 2))(x)
x = layers.Conv2D(16, (3, 3), activation='relu')(x)
x = layers.UpSampling2D((2, 2))(x)
decoded = layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = Model(input_img, decoded)
sgd = optimizers.Adam(lr=0.001)
autoencoder.compile(optimizer=sgd, loss='mse')
autoencoder.summary()
Wrong input shape:
This is because you are passing an input shape of (8,); with the batch dimension TensorFlow adds, the model sees 2-D input (ndim=2), while Conv2D expects 4-D input (ndim=4): one dimension for the batch size and three for the image itself (height, width, channels). E.g. a training array of shape
(number_of_rows, 28, 28, 1)
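A minimal sketch of one way to satisfy this for the 149x8 data (treating each 8-feature row as a 1x8 single-channel "image" is an assumption for illustration, not the only option; with spatial dimensions this small, the 2x2 pooling/upsampling stack above would also need adjusting):

import numpy as np
from tensorflow.keras.layers import Input

# hypothetical stand-in for the 149x8 GPR data
data = np.random.rand(149, 8).astype('float32')

# Conv2D wants (batch, height, width, channels), so add the two
# missing dimensions explicitly:
data_4d = data.reshape(-1, 1, 8, 1)   # -> (149, 1, 8, 1)
input_img = Input(shape=(1, 8, 1))    # matches the per-sample shape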
I have prepared a CNN model for image colorization:
"""Encoder - Input grayscale image (L)"""
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(256, 256, 1)))
...
"""Latent space"""
model.add(Conv2D(512, (3,3), activation='relu', padding='same'))
"""Decoder - output (A,B)"""
...
model.add(Conv2D(2, (3, 3), activation='tanh', padding='same'))
Now I want to use ResNet as a feature extractor and merge its output into the latent space.
I have already imported the ResNet model as:
resnet50_imagnet_model = tf.keras.applications.resnet.ResNet50(weights="imagenet",
                                                               include_top=False,
                                                               input_shape=(256, 256, 3),
                                                               pooling='max')
Encoder
"""Encoder - Input grayscale image (L)"""
encoder = Sequential()
encoder.add(Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(256, 256, 1)))
...
Decoder
"""Decoder - output (A,B)"""
decoder = Sequential()
...
Then use tf.keras.Sequential() to chain all of the models together:
comb_model = tf.keras.Sequential(
    [encoder, resnet50_imagnet_model, decoder]
)
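If the goal is to literally merge the ResNet features into the latent space (rather than chain the models end to end), a functional-API sketch along these lines may be closer to the stated intent; the encoder output shape of (32, 32, 256) and the decoder input are assumptions for illustration:

from tensorflow.keras.layers import Input, RepeatVector, Reshape, concatenate
from tensorflow.keras.models import Model

l_input = Input(shape=(256, 256, 1))        # grayscale L channel
enc = encoder(l_input)                      # assumed -> (None, 32, 32, 256)

rgb_input = Input(shape=(256, 256, 3))
feat = resnet50_imagnet_model(rgb_input)    # pooling='max' -> (None, 2048)

# tile the global feature vector across the spatial grid and fuse
feat = RepeatVector(32 * 32)(feat)          # -> (None, 1024, 2048)
feat = Reshape((32, 32, 2048))(feat)        # -> (None, 32, 32, 2048)
fused = concatenate([enc, feat])            # merged latent space

ab_output = decoder(fused)                  # assumed -> (None, 256, 256, 2)
comb_model = Model(inputs=[l_input, rgb_input], outputs=ab_output)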
I am trying to get the output of the latent/hidden layer to use as input for something else. I trained my model to minimize the loss so that it would learn latent features that reconstruct the image as closely as possible.
My model is
input_img = Input(shape=(28, 28, 1)) # adapt this if using `channels_first` image data format
#Encoder
x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
# Decoder
x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x) # opposite of Pooling
x = Conv2D(16, (3, 3), activation='relu')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
I want the output of the encoded layer as the output of the model. Is it possible? And if yes, please tell me how.
You can simply do it this way:
autoencoder.fit(...)
latent_model = Model(input_img, encoded)
latent_representation = latent_model.predict(X)
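This works because latent_model reuses the very same layer objects, and therefore the trained weights, as autoencoder, so no retraining is needed. For 28x28 inputs, latent_model.predict(X) returns the encoded representation of shape (len(X), 7, 7, 8).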
I'm using TensorFlow Serving to serve a SavedModel. I have two signatures: the 1st outputs the Keras model.output, and the 2nd outputs a post-processing of model.output. When I try a predict call against the 2nd signature on TensorFlow Serving, it gives me the error { "error": "Tensor name: prediction has no shape information " }
This is the code that builds the SavedModel:
shape1 = 92
shape2 = 92
reg=0.000001
learning_rate=0.001
sess = tf.Session()
K.set_session(sess)
K._LEARNING_PHASE = tf.constant(0)
K.set_learning_phase(0)
#preprocessing
x_input = tf.placeholder(tf.string, name='x_input', shape=[None])
reshaped = tf.reshape(x_input, shape=[])
image = tf.image.decode_jpeg(reshaped, channels=3)
image2 = tf.expand_dims(image,0)
resized = tf.image.resize_images(image2, (92,92))
meaned = tf.math.subtract(resized, tf.constant(116.0))
normalized = tf.math.divide(meaned, tf.constant(66.0))
#keras model
model = tf.keras.Sequential()
model.add(InputLayer(input_tensor=normalized))
model.add(Conv2D(32, (3, 3), padding='same', activation='relu', kernel_regularizer=l2(reg)))
model.add(Conv2D(32, (3, 3), padding='same', activation='relu', kernel_regularizer=l2(reg)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), padding='same', activation='relu', kernel_regularizer=l2(reg)))
model.add(Dropout(0.1))
model.add(Conv2D(64, (3, 3), padding='same', activation='relu', kernel_regularizer=l2(reg)))
model.add(Dropout(0.1))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), padding='same', activation='relu', kernel_regularizer=l2(reg)))
model.add(Dropout(0.2))
model.add(Conv2D(128, (3, 3), padding='same', activation='relu', kernel_regularizer=l2(reg)))
model.add(Dropout(0.2))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(256, (3, 3), padding='same', activation='relu', kernel_regularizer=l2(reg)))
model.add(Dropout(0.3))
model.add(Conv2D(256, (3, 3), padding='same', activation='relu', kernel_regularizer=l2(reg)))
model.add(Dropout(0.3))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(256, activation='relu', kernel_regularizer=l2(reg)))
model.add(Dropout(0.5))
model.add(Dense(256, activation='relu', kernel_regularizer=l2(reg)))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer=tf.train.RMSPropOptimizer(learning_rate=learning_rate),
              metrics=['accuracy'])
#post processing to output label
pred = tf.gather_nd(model.output, (0,0))
label = tf.cond(pred > 0.5, lambda: tf.constant('Dog', shape=[]), lambda: tf.constant('Cat', shape=[]))
model.load_weights(r'./checkpoints/4.ckpt')
export_path = './saved_models/1'
init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())
sess.run(init_op)
model.load_weights(r'./checkpoints/4.ckpt')
if os.path.isdir(export_path):
print('\nAlready saved a model, cleaning up\n')
print(subprocess.run(['rm', '-r', export_path]))
#first signature(this works)
x_info = tf.saved_model.utils.build_tensor_info(x_input)
y_info = tf.saved_model.utils.build_tensor_info(model.output)
sigmoid_signature = build_signature_def(inputs={"image": x_info}, outputs={"prediction":y_info}, method_name='tensorflow/serving/predict')
#2nd signature(this doesn't work)
x_info = tf.saved_model.utils.build_tensor_info(x_input)
y_info = tf.saved_model.utils.build_tensor_info(label)
label_signature = build_signature_def(inputs={"image": x_info}, outputs={"prediction":y_info}, method_name='tensorflow/serving/predict')
builder = tf.saved_model.builder.SavedModelBuilder(export_path)
legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
builder.add_meta_graph_and_variables(sess=sess,
                                     tags=["serve"],
                                     signature_def_map={'sigmoid': sigmoid_signature, 'label': label_signature})
builder.save()
This is the code that calls TF Serving:
imgs = ['./Dog/' + img for img in imgs]
img = open('./Dog/3.jpg', 'rb').read()
img = base64.b64encode(img).decode('utf-8')
data = json.dumps(
    {"signature_name": "label",
     "instances": [
         {'image': {'b64': img}}
     ]
    }
)
json_response = requests.post('http://localhost:8501/v1/models/pet:predict', data=data)
print(json_response.text)
Instead of getting a response of {"predictions": "Dog"}, I am getting the error { "error": "Tensor name: prediction has no shape information " }
I managed to fix this. I used tf.reshape on what I wanted to output and passed that into the signature builder.
#post processing to output label
pred = tf.gather_nd(model.output, (0,0))
label = tf.cond(pred > 0.5, lambda: tf.constant('Dog', shape=[]), lambda: tf.constant('Cat', shape=[]))
label_reshaped = tf.reshape(label, [-1])  # flatten the scalar to shape [1] so the output has shape information
...
#2nd signature (now using the reshaped tensor)
x_info = tf.saved_model.utils.build_tensor_info(x_input)
y_info = tf.saved_model.utils.build_tensor_info(label_reshaped)
label_signature = build_signature_def(inputs={"image": x_info}, outputs={"prediction":y_info}, method_name='tensorflow/serving/predict')
Reading the TensorFlow Serving documentation, you'll see that there are two ways to specify input tensors in your request: the row format (using instances, as in your example) and the column format (using inputs).
Since the row format requires that all inputs and outputs have the same 0th dimension, if you did not export the model with an explicit output shape, you cannot use the row format.
Therefore, in your case (without re-exporting the model with an explicit reshape, as the other answer does), you can send this payload instead:
data = json.dumps(
    {
        "signature_name": "label",
        "inputs": {'image': {'b64': img}}
    }
)
On the other hand, keep in mind that if you do want to send multiple b64-encoded images, your best bet is the row format with multiple instances (for example, if you want to run batch prediction on multiple images).
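As a usage note (same local endpoint as above): with the column ("inputs") format, TensorFlow Serving returns the result under an "outputs" key rather than "predictions":

import json
import requests

json_response = requests.post('http://localhost:8501/v1/models/pet:predict', data=data)
print(json.loads(json_response.text)['outputs'])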