How to implement wide one-dimensional convolution in CNTK

I'd like to implement the wide type of one-dimensional convolution (https://arxiv.org/pdf/1404.2188v1.pdf) in CNTK. Is there a built-in method for that, or how should I set the parameters of Convolution() to implement it?
Thanks!

You could try padding your data with zeros and then using C.ops.convolution().
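For a filter of width m sliding over a sequence of length n, the wide convolution in the paper produces n + m - 1 outputs, and you get exactly that by padding m - 1 zeros on each side and running an unpadded ("valid") convolution. Below is a minimal sketch of the idea, assuming CNTK's Python layers API (C.layers.Convolution1D, which wraps the lower-level convolution op) and CNTK's channels-first (features, length) layout; all sizes are toy values.

import numpy as np
import cntk as C

seq_len, emb_dim, filter_width, num_filters = 7, 4, 3, 2   # toy sizes

# One toy sequence, channels-first: emb_dim features per position.
x = np.random.randn(emb_dim, seq_len).astype(np.float32)

# Pad (filter_width - 1) zero positions on each side of the length axis.
pad = filter_width - 1
x_wide = np.pad(x, ((0, 0), (pad, pad)), mode='constant')

# An unpadded (valid) convolution over the padded sequence then yields
# seq_len + filter_width - 1 output positions, i.e. the "wide" convolution.
inp = C.input_variable(x_wide.shape)
conv = C.layers.Convolution1D(filter_width, num_filters, pad=False)(inp)

out = conv.eval({inp: [x_wide]})
print(out.shape)  # one batch item, num_filters maps of length seq_len + filter_width - 1 (= 9 here)

The zero padding can also be done inside the graph instead of in NumPy, if you prefer to feed the raw sequences.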

Related

Can I use a final pooling layer to find the best common features after concatenating a deep-features vector and a handcrafted-features vector?

I have two feature vectors. One is a deep-feature vector extracted by a CNN, and the other is a handcrafted feature vector extracted with uniform local binary patterns. I want to find the best common features after concatenating these two feature vectors, and I want to use a final pooling layer for this purpose. Is this possible?
After you have concatenated the two feature vectors, a final pooling layer would help in reducing the size of the combined vector.
Could you clarify what you aim to do and which pooling layer you want to use?
I'm not sure I understand exactly what you mean by "final pooling layer."
In my opinion, though, adding ONLY a pooling layer after the concatenation layer and before the output layer (e.g., Dense-softmax) may not help much in this case, because pooling layers have no learnable parameters and operate over each activation map independently, only reducing the size of the activation maps.
One simple feature-fusion approach I would suggest is to apply another subnet (a set of layers such as convolution, pooling, and dense) to the concatenated tensor, so the model can keep learning to enhance the good features; see the sketch below.
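For illustration, here is a minimal Keras sketch of that suggestion (all layer sizes, class counts, and names below are made up, not taken from the question): the two feature vectors are concatenated and followed by a small trainable dense subnet rather than a parameter-free pooling layer.

from tensorflow.keras import layers, Model

deep_feats = layers.Input(shape=(512,), name='cnn_features')   # hypothetical size
handcrafted = layers.Input(shape=(59,), name='lbp_features')   # hypothetical size

x = layers.Concatenate()([deep_feats, handcrafted])
# Fusion subnet: learnable layers after the concatenation, so the model can
# keep learning which combinations of the two feature sets are useful.
x = layers.Dense(256, activation='relu')(x)
x = layers.Dropout(0.5)(x)
x = layers.Dense(128, activation='relu')(x)
out = layers.Dense(10, activation='softmax')(x)                 # hypothetical class count

model = Model(inputs=[deep_feats, handcrafted], outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()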

Trouble with implementing local response normalization in TensorFlow

I'm trying to implement a local response normalization (LRN) layer in TensorFlow to be used in a Keras model.
Here is an image of the operation I am trying to implement:
Here is the paper link; please refer to Section 3.3 for the description of this layer.
I have a working NumPy implementation; however, it uses for loops and the built-in Python min and max operators to compute the summation. These Python-level operations will cause errors when defining a custom Keras layer, so I can't use that implementation.
The issue is that I need to iterate over all the elements in the feature map and generate a normalized value for each of them. Additionally, the upper and lower bounds on the summation change depending on which value I am currently normalizing. I can't think of a way to handle this without nested for loops, but that will not work in a custom Keras layer, since it isn't done with native TensorFlow operations.
Could anyone point me towards TensorFlow/Keras backend functions that could help me implement this layer?
EDIT: I know that this layer is available as a Keras layer, but I want to build intuition about custom layers, so I want to implement it using tensor ops.
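One way to vectorize the summation with plain tensor ops (no Python loop over feature-map elements) is to square the input, zero-pad the channel axis, and add n shifted slices; the changing summation bounds at the borders then fall out of the zero padding. Below is a rough sketch, assuming a channels-last (batch, height, width, channels) tensor and the usual formulation b = a / (k + alpha * sum of squares over n adjacent channels) ** beta with the commonly quoted constants (n = 5, k = 2, alpha = 1e-4, beta = 0.75); treat it as an illustration, not a reference implementation.

import tensorflow as tf

class LocalResponseNorm(tf.keras.layers.Layer):
    """Cross-channel LRN: b = a / (k + alpha * sum of a^2 over n adjacent channels) ** beta."""

    def __init__(self, n=5, k=2.0, alpha=1e-4, beta=0.75, **kwargs):
        super().__init__(**kwargs)
        self.n, self.k, self.alpha, self.beta = n, k, alpha, beta

    def call(self, x):
        half = self.n // 2
        sq = tf.square(x)
        # Zero-pad the channel axis so every channel sees a full window; the
        # out-of-range terms contribute nothing, matching the clipped sum bounds.
        padded = tf.pad(sq, [[0, 0], [0, 0], [0, 0], [half, half]])
        # Sliding-window sum over channels as a sum of n shifted slices.
        # (The Python loop runs only over the window width n, not over elements.)
        c = x.shape[-1]  # static channel count, assumed known
        window_sum = tf.add_n([padded[..., i:i + c] for i in range(self.n)])
        return x / tf.pow(self.k + self.alpha * window_sum, self.beta)

# Sanity check against the built-in op (its window is 2 * depth_radius + 1 = n).
x = tf.random.normal([1, 8, 8, 16])
y = LocalResponseNorm()(x)
ref = tf.nn.local_response_normalization(x, depth_radius=2, bias=2.0, alpha=1e-4, beta=0.75)
print(tf.reduce_max(tf.abs(y - ref)).numpy())  # should be close to zero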

How to decide a convolution layer's filter parameters?

When you build a convolutional network with hidden layers, how do you decide the parameters, such as filter size, stride, and even the number of convolution layers? I know the meaning of each parameter, but if I have to design a network from scratch, how should I choose them?
Please refer to the links below for a better understanding of CNNs and how to make use of them; a conventional starting-point architecture is sketched after the links.
http://cs231n.github.io/convolutional-networks/
https://medium.com/@RaghavPrabhu/understanding-of-convolutional-neural-network-cnn-deep-learning-99760835f148
https://www.analyticsvidhya.com/blog/2018/12/guide-convolutional-neural-network-cnn/
https://towardsdatascience.com/deciding-optimal-filter-size-for-cnns-d6f7b56f9363
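As a purely illustrative starting point (a common convention, not a rule taken from those links): small 3x3 filters with stride 1 and 'same' padding, 2x2 max pooling, and a filter count that doubles after each pooling stage. A Keras sketch with made-up input and class sizes:

from tensorflow.keras import layers, models

inputs = layers.Input(shape=(64, 64, 3))                    # hypothetical input size
x = layers.Conv2D(32, (3, 3), strides=1, padding='same', activation='relu')(inputs)
x = layers.MaxPooling2D((2, 2))(x)
x = layers.Conv2D(64, (3, 3), strides=1, padding='same', activation='relu')(x)
x = layers.MaxPooling2D((2, 2))(x)
x = layers.Conv2D(128, (3, 3), strides=1, padding='same', activation='relu')(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10, activation='softmax')(x)          # hypothetical class count

model = models.Model(inputs, outputs)
model.summary()

From there, the depth, filter counts, and strides are usually tuned empirically on a validation set.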

Tensorflow: How to create new neuron (Not perceptron neuron)

So TensorFlow is extremely useful for creating neural networks that involve perceptron neurons. However, if one wanted to use a new type of neuron instead of the classic perceptron, is this possible by augmenting the TensorFlow code? I can't seem to find an answer. I understand this would change the forward propagation and other mathematical calculations, and I am willing to change all the necessary areas.
I am also aware that I could code the layers and neurons I have in mind from scratch, but TensorFlow has GPU integration, so it is clearly more practical to modify its code than to build my own framework from scratch.
Has anyone experimented with this? My goal is to create neural network structures that use a different type of neuron than the classic perceptron.
If someone knows where in the TensorFlow code I could look to see where the perceptron neurons are initialized, I would very much appreciate it!
Edit:
To be more specific, is it possible to alter TensorFlow's code so that a different neuron type rather than the perceptron is used when invoking modules such as tf.layers or tf.nn (conv2d, batch norm, max pool, etc.)? I can figure out the details. I just need to know where (I'm sure there are a few locations) I would go about changing the code for this.
However, if one wanted to use a new type of neuron instead of the classic perceptron, is this possible by augmenting the TensorFlow code?
Yes. TensorFlow lets you define a computational graph and can then automatically calculate the gradients for it, so there is no need to do that yourself. This is the reason why you define the model symbolically. You might want to read the whitepaper or start with a tutorial.
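To illustrate the point with a hypothetical example (this "quadratic neuron" is made up for illustration, not an existing TensorFlow building block): you only write the new forward computation, for instance y = f(x^2 * W2 + x * W1 + b) instead of the perceptron's y = f(x * W + b), and TensorFlow derives the gradients for the new weights automatically. A minimal Keras-layer sketch:

import tensorflow as tf

class QuadraticNeuronLayer(tf.keras.layers.Layer):
    """A made-up 'quadratic neuron': y = f(x^2 @ W2 + x @ W1 + b)."""

    def __init__(self, units, activation='relu', **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.activation = tf.keras.activations.get(activation)

    def build(self, input_shape):
        dim = input_shape[-1]
        self.w1 = self.add_weight(name='w1', shape=(dim, self.units), initializer='glorot_uniform')
        self.w2 = self.add_weight(name='w2', shape=(dim, self.units), initializer='glorot_uniform')
        self.b = self.add_weight(name='b', shape=(self.units,), initializer='zeros')

    def call(self, x):
        # Only the forward computation is specified; TensorFlow differentiates it.
        return self.activation(tf.matmul(tf.square(x), self.w2) + tf.matmul(x, self.w1) + self.b)

# The custom "neuron" drops into a model next to the built-in layers.
model = tf.keras.Sequential([QuadraticNeuronLayer(16), tf.keras.layers.Dense(1)])
model.build(input_shape=(None, 20))
model.compile(optimizer='adam', loss='mse')
model.summary()

Built-in modules such as tf.layers and tf.nn are implemented on top of the same primitive ops, so using a different neuron type there generally means writing an analogous custom layer like the one above rather than editing a single "perceptron" definition inside TensorFlow.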

Define custom model/architecture in TensorFlow

From the little I have played around with TensorFlow, I see it has already-implemented architectures such as RNN/LSTM cells, ConvNets, etc. Is there a way to define one's own "custom" architecture (e.g., an "enhanced" LSTM network with a few convolutional layers)?
Yes, it is totally possible. The outputs of an LSTM or any other network are tensors, which can be used as the input of another network.
See how to combine them at https://github.com/jazzsaxmafia/show_and_tell.tensorflow.
You can find more examples at https://github.com/TensorFlowKR/awesome_tensorflow_implementations.
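As a small illustration of that idea (all sizes below are made up), here is a Keras-style sketch of such an "enhanced" LSTM, where a couple of 1-D convolutional layers produce a tensor that is fed straight into an LSTM:

from tensorflow.keras import layers, models

inputs = layers.Input(shape=(100, 8))                        # (timesteps, features), hypothetical
x = layers.Conv1D(32, kernel_size=5, padding='same', activation='relu')(inputs)
x = layers.Conv1D(32, kernel_size=5, padding='same', activation='relu')(x)
x = layers.MaxPooling1D(2)(x)
x = layers.LSTM(64)(x)                                       # the conv output tensor feeds the LSTM
outputs = layers.Dense(1, activation='sigmoid')(x)

model = models.Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()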