Train a model in Keras with non-differentiable layer - tensorflow

How can I train a deep neural net in Keras (TensorFlow) with a quantization layer in the middle? i.e. I want the representation in a particular layer of the network to be quantized (using vector quantization) and then passed to the next layer.

You can use a Lambda layer.
from tensorflow.keras.layers import Lambda

def quantize(x):
    # Your vector quantization code goes here; return the quantized tensor
    return x

model.add(Lambda(quantize))
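For illustration, here is a minimal runnable sketch of that idea, assuming a small Sequential model; the rounding-based quantize_st function and the straight-through trick (x + tf.stop_gradient(q - x)) are my own assumptions, not part of the answer above, but they are a common way to let gradients flow past a non-differentiable step:
import tensorflow as tf
from tensorflow.keras import layers, models

def quantize_st(x):
    # Hypothetical quantizer: round to a fixed grid (a stand-in for your vector quantization code).
    q = tf.round(x * 16.0) / 16.0
    # Straight-through estimator: forward pass uses q, backward pass uses the identity gradient.
    return x + tf.stop_gradient(q - x)

model = models.Sequential([
    layers.Dense(64, activation='relu', input_shape=(32,)),
    layers.Lambda(quantize_st),  # the non-differentiable step wrapped in a Lambda layer
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')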

Related

How do I create a deep learning model by concatenating two hidden layers of the same output shape of Resnet and VGG-16 using TensorFlow?

I want to create a CNN model by concatenating hidden layers of two pretrained models, ResNet and VGG16.
After you define the models, inspect the layers of each pretrained model with model.summary(). Then, to use a particular layer, first get it with model.get_layer('layer_name'), take its output with layer.output, and finally concatenate the outputs of the layers you selected.
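A rough sketch of that recipe, under the assumption that the layer names 'conv4_block6_out' and 'block4_pool' are just examples (check model.summary() for the real names and make sure the spatial output shapes match before concatenating):
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50, VGG16

resnet = ResNet50(include_top=False, weights='imagenet', input_shape=(224, 224, 3))
vgg = VGG16(include_top=False, weights='imagenet', input_shape=(224, 224, 3))

# Sub-models that expose the chosen intermediate layers via get_layer(...).output
resnet_feat = Model(resnet.input, resnet.get_layer('conv4_block6_out').output)  # 14x14x1024
vgg_feat = Model(vgg.input, vgg.get_layer('block4_pool').output)                # 14x14x512

inp = layers.Input(shape=(224, 224, 3))
merged = layers.Concatenate()([resnet_feat(inp), vgg_feat(inp)])
x = layers.GlobalAveragePooling2D()(merged)
out = layers.Dense(10, activation='softmax')(x)  # placeholder classification head

model = Model(inputs=inp, outputs=out)
# Note: each backbone normally expects its own preprocess_input; that step is omitted here.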

Keras custom loss function with multiple output model

In a segmentation task I wanted my model to have two outputs because I implemented weight maps as suggested in the original U-Net paper https://arxiv.org/pdf/1505.04597.pdf.
As per that suggestion, I created weight maps that give some regions of the ground-truth mask higher weights. Now I have a model with:
weightmap = layers.Lambda(lambda x: x)(weight_map)  # a non-trainable layer that outputs the weight map as a tensor for the loss function
model = Model(inputs=[input, weight_map], outputs=[output, weightmap])
Now I need to compute the binary cross-entropy loss for this model:
def custom_loss(target, outputs):
    loss = K.binary_crossentropy(target, outputs[0])  # outputs[0] should be the model output
    loss = loss * outputs[1]                          # outputs[1] should be the weight map
    return loss
Slicing the model's output tensor as outputs[0] and outputs[1] doesn't work.
Is there anything I can do to use both outputs of the model in a single loss function?
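For what it is worth, here is a minimal sketch of one common workaround (not taken from the question itself): since Keras applies a loss function to each output separately rather than to the list of outputs, you can instead feed the target and the weight map in as extra inputs and attach the weighted loss with add_loss() inside a small custom layer. All names and shapes below are placeholders:
import tensorflow as tf
from tensorflow.keras import layers, Model

class WeightedBCE(layers.Layer):
    # Adds a weight-map-scaled binary cross-entropy as a model-level loss.
    def call(self, target, prediction, weight_map):
        bce = tf.keras.losses.binary_crossentropy(target, prediction)        # shape (batch, H, W)
        self.add_loss(tf.reduce_mean(bce * tf.squeeze(weight_map, axis=-1)))
        return prediction

image = layers.Input(shape=(128, 128, 1), name='image')
weight_map = layers.Input(shape=(128, 128, 1), name='weight_map')
target = layers.Input(shape=(128, 128, 1), name='target')

x = layers.Conv2D(16, 3, padding='same', activation='relu')(image)
prediction = layers.Conv2D(1, 1, activation='sigmoid')(x)
prediction = WeightedBCE()(target, prediction, weight_map)

model = Model(inputs=[image, weight_map, target], outputs=prediction)
model.compile(optimizer='adam')  # no per-output loss needed; add_loss already supplies it
# Train with model.fit([images, weight_maps, masks]) since the target is passed in as an input.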

SegNet for CT images pretrained weights

I'm trying to train a SegNet for a segmentation task on CT images (with Keras/TF).
I'm using VGG16 pretrained weights, but I had a problem with the first convolutional layer because I'm using grayscale images while VGG was trained on RGB ones.
I solved that using the second method of this (I can't use the first method because it requires too much memory).
However, it didn't help; the results are really bad (trained for 100 epochs).
Should I train the first convolutional layer from scratch?
You can try to add a Conv2D before the VGG. Something like:
> Your Input(shape=(height, width, 1))
> Conv2D(filters=3, kernel_size=1, padding='same', activation='relu')
> The VGG pretrained network (input = (height, width, 3))
The 1x1 convolution is interesting in your case because it is usually employed to change the depth (number of channels) of a tensor.
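A rough sketch of that suggestion, assuming a Keras functional model (the 256x256 size and the missing SegNet decoder are placeholders, not part of the answer):
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

height, width = 256, 256  # placeholder input size

gray_input = layers.Input(shape=(height, width, 1))
# 1x1 convolution maps the single grayscale channel to the 3 channels VGG16 expects.
rgb_like = layers.Conv2D(filters=3, kernel_size=1, padding='same', activation='relu')(gray_input)

vgg = VGG16(include_top=False, weights='imagenet', input_shape=(height, width, 3))
features = vgg(rgb_like)

model = Model(inputs=gray_input, outputs=features)  # attach your SegNet decoder on top of these features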

What are the uses of layers in keras/Tensorflow

So I am new to computer vision, and I do not really know what layers do in Keras. What is the use of adding layers (Dense, Conv2D, etc.) in Keras? What do they add to the model?
A convolutional neural network has 4 main steps: convolution, pooling, flattening, and full connection.
Conv2D(), Conv3D(), etc. are for feature extraction (they are convolution layers).
Pooling layers (MaxPool2D(), AvgPool2D(), etc.) are for feature extraction as well (they just perform a different operation).
Flattening layers (Flatten()) convert the extracted feature maps into a vector before they are fed into the fully connected layers (the Dense layers).
Dense layers implement the fully connected step and act as the classifier (the neural network classifies the features extracted by the convolution layers).
There are also layers such as Dropout(), BatchNormalization(), etc. that help regularize and stabilize training.
For more information, see the Keras documentation.
If you want to start learning about convolutional neural networks, this article may help.
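As a small illustration of where those layers typically sit (a minimal sketch of my own, with arbitrary layer sizes, not taken from the answer above):
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, 3, activation='relu', input_shape=(64, 64, 1)),  # convolution: feature extraction
    layers.BatchNormalization(),                                       # normalizes activations to stabilize training
    layers.MaxPool2D(),                                                # pooling: downsamples the feature maps
    layers.Flatten(),                                                  # feature maps -> vector
    layers.Dense(64, activation='relu'),                               # fully connected step
    layers.Dropout(0.5),                                               # randomly drops units to reduce overfitting
    layers.Dense(10, activation='softmax'),                            # classifier output
])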
A layer in an artificial neural network is a set of nodes grouped together at a specific depth in the network. Keras is a high-level API used on top of backends like TensorFlow or CNTK in order to simplify common tasks. A typical network comprises 3 main parts:
Input layer - contains the raw data.
Hidden layers - where the nodes learn aspects of the raw input data; stacking them forms the levels of abstraction in a neural network.
Output layer - often a single node whose output can be used for classification.
Keras, as a whole, offers many different types of layers. A convolutional layer creates a kernel that is convolved with the input over a spatial (or temporal) dimension to produce a set of outputs. Pooling layers downsample the feature maps by summarizing the features in each patch of a map. Max pooling and average pooling are the most commonly used pooling methods.
Other commonly used layers in Keras are embedding layers, noise layers, and core layers. A single NN layer can only represent a linearly separable function. Most prediction problems are more complicated than that, so more than one layer is required; this is where the multi-layer concept comes in.
I hope this clears your doubts; for any other queries you can look at https://www.tensorflow.org/api_docs/python/tf/keras
Neural networks are a great tool nowadays for automating classification problems. However, when it comes to computer vision, the amount of input data is too large to be handled efficiently by simple fully connected networks.
To reduce the network's workload, the data needs to be preprocessed and certain features need to be identified. To find features in images we can use filters (like the Sobel edge detector), which highlight the essential features needed for classification.
Again, the number of filters required to classify an image is too large to pick by hand, so the selection of those filters needs to be automated.
That's where the convolutional layer comes in.
We use a convolutional layer to generate multiple random (at first) filters that highlight certain features in an image. While the network trains, those filters are optimized to do a better job of highlighting features.
In TensorFlow we use Conv2D() to add one of those layers. An example of parameters is Conv2D(64, 3, activation='relu'): 64 is the number of filters used, 3 is the size of the filters (in this case 3x3), and activation='relu' is the activation function.
After the convolutional layer we use a pooling layer to further condense the features produced by the previous convolutional layer. In TensorFlow this is usually done with MaxPooling2D(), which slides a 2x2 window (by default) over the filtered image with a stride of 2, keeps the maximum value in each 2x2 area, and writes it to a new, smaller image.
We can stack several of these convolution-plus-pooling blocks to make the image easier for the network to work with.
After we are done with those layers, we need to pass the output to a conventional (Dense) neural network.
To do that, we first need to flatten the image data from a 2D tensor (matrix) to a 1D tensor (vector). This is done by adding a Flatten() layer.
Finally we add our Dense layers, which are trained on the flattened data. We do this by calling Dense(). An example of parameters is Dense(64, activation='relu'), where 64 is the number of nodes we are using.
Here is an example CNN structure I used recently:
# Build model
model = tf.keras.models.Sequential()
# Convolution and pooling layers
model.add(tf.keras.layers.Conv2D(64, 3, activation='relu', input_shape=(IMG_SIZE, IMG_SIZE, 1))) # Input layer
model.add(tf.keras.layers.MaxPooling2D())
model.add(tf.keras.layers.Conv2D(64, 3, activation='relu'))
model.add(tf.keras.layers.MaxPooling2D())
# Flattened layers
model.add(tf.keras.layers.Flatten())
# Dense layers
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(2, activation='softmax')) # Output layer
Of course this worked for a certain classification problem; the number of layers and the method parameters differ depending on the problem.
The YouTube channel The Coding Train has a very helpful video explaining the convolutional and pooling layers.

How can I modify the dropout rate in tensorflowjs after using loadFrozenModel?

I trained the model in TensorFlow; the model has a dropout layer. I then converted it to TensorFlow.js and loaded it with loadFrozenModel(). Can I modify the dropout rate after model = tf.loadFrozenModel?
Currently, frozen models cannot be trained further. You can of course use them as a base for a transfer learning task, but the variables inside such a model are frozen and not marked as updatable.
Using transfer learning, you can retrieve the layer before the dropout layer, replace the dropout layer, and train further.