How to input scalar (non-image) values to CNNs?

The classification task is based on an image and a scalar value.
If I encode the scalar value as image pixels with that value (or a normalized version of it) and append it as another channel of the input image, I would be wasting convolutional computation cycles on that encoding just to get the information into the network.
On the other hand, I could feed it in as an extra neuron at the layer where the convolved feature maps are flattened. Another option would be to add it just before the output layer. (But how do I implement such a network in Keras or TensorFlow?)
Which is the best method to send in scalar values?
PS: Although this question is not specific to any framework, Keras examples would be great, as they are simple enough for most people to understand... Links to blog posts addressing the same topic are welcome too.

See this question and answer on the Cross Validated site: Combining image and scalar inputs into a neural network
In addition to the "bias" method suggested in the paper mentioned there (where the scalar is used as a bias to some convolution layer), and the other option suggested in that answer of appending the scalar to some flattened layer, you can also use an inner-product (fully connected, "Dense" in Keras) layer to learn the connectivity pattern between the N-D input and the scalar.
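For instance, here is a minimal sketch using the Keras functional API (the shapes, layer sizes, and names such as image_in / scalar_in are placeholders, not taken from the question): the scalar enters as a second input and is concatenated with the flattened convolutional features.

# Minimal sketch: concatenate a scalar input with the flattened CNN features.
from tensorflow.keras import layers, Model

image_in = layers.Input(shape=(64, 64, 3), name="image")   # placeholder image size
scalar_in = layers.Input(shape=(1,), name="scalar")        # the extra scalar value

x = layers.Conv2D(32, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)

# Inject the scalar at the point where the conv feature maps are flattened.
x = layers.Concatenate()([x, scalar_in])

x = layers.Dense(64, activation="relu")(x)
out = layers.Dense(10, activation="softmax")(x)            # placeholder: 10 classes

model = Model(inputs=[image_in, scalar_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy")

Training then takes both inputs, e.g. model.fit([images, scalars], labels). Moving the Concatenate to just before the output layer implements the other option mentioned in the question.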

Related

Convolutional network returning a matrix with an image as input

I have been trying to code a model that looks at an image with a grid and returns a matrix with the contents of that grid.
Here is an example of the input image:
[example input image]
And this should be the output:
[30202133333,
12022320321,
23103100322,
13103110301,
22221301212,
33100210001,
11012010320,
21230233011,
00330223230,
02121221220,
23133103321,
23110110330]
With 0: Blue, 1: Pink, 2: Lavender, 3: Green
I have a hard time finding resources on how to do this. What would be the simplest way?
Thanks in advance!
There could be multiple design choices to generate this type of output. I suggest using Autoencoders.
Here is some information about Autoencoders taken from Wikipedia -
An autoencoder is a type of artificial neural network used to learn
efficient codings of unlabeled data (unsupervised learning). The
encoding is validated and refined by attempting to regenerate the
input from the encoding. The autoencoder learns a representation
(encoding) for a set of data, typically for dimensionality reduction,
by training the network to ignore insignificant data (“noise”).
While autoencoders are typically used to reconstruct the input, you have a slightly different problem of mapping the input to a specific matrix.
You'd want to set up the architecture by providing images as input and the corresponding matrices as your "labels." The architecture can be further optimized by using Convolutional layers instead of MLP layers.
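As a rough illustration (not a tuned architecture), a minimal Keras sketch could flatten the convolutional features and predict one softmax per grid cell; the 12x11 grid and 4 colour classes come from the question, while the 128x128 input size and layer widths are assumptions.

from tensorflow.keras import layers, Model

ROWS, COLS, N_CLASSES = 12, 11, 4           # grid size and colours from the question

img_in = layers.Input(shape=(128, 128, 3))  # placeholder image size
x = layers.Conv2D(32, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
x = layers.Dense(256, activation="relu")(x)

# One softmax per grid cell: the output has shape (12, 11, 4).
x = layers.Dense(ROWS * COLS * N_CLASSES)(x)
x = layers.Reshape((ROWS, COLS, N_CLASSES))(x)
out = layers.Softmax(axis=-1)(x)

model = Model(img_in, out)
# Labels: integer class ids of shape (batch, 12, 11).
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")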

Training Tensorflow only one object

Following the corresponding TensorFlow documentation, I trained on 3 objects and got results (it can recognize these objects). When I show it other objects (not one of the 3), it doesn't work correctly.
I want to train on only one object (for example, a cup) and recognize only that object. Is this possible with TensorFlow?
Your question doesn't provide enough details, but my guess is that you trained the network with a softmax activation and a Categorical or SparseCategorical cross-entropy loss. If that guess is right, such a network always assigns its prediction to one of the three classes, regardless of the actual data, i.e. there is no "none of the above" option.
In order to train a network to recognize only one class of objects, use a single output with one channel and a sigmoid activation, and use a BinaryCrossentropy loss to train your model for the specific object. Provide a dataset that includes examples both with this object and without it.
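A minimal sketch of that setup in Keras (architecture and input size are placeholders) looks like this:

from tensorflow.keras import layers, Model

img_in = layers.Input(shape=(128, 128, 3))   # placeholder image size
x = layers.Conv2D(32, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
# Single channel + sigmoid: the output is the probability that the object is present.
out = layers.Dense(1, activation="sigmoid")(x)

model = Model(img_in, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Label 1 for images that contain the object (e.g. the cup), 0 for images that don't;
# at inference time, threshold the prediction (e.g. at 0.5).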

Tensorflow Embedding for training and inference

I am trying to code a simple neural machine translation model using TensorFlow, but I am a little stuck on understanding embeddings in TensorFlow:
I do not understand the difference between tf.contrib.layers.embed_sequence(inputs, vocab_size=target_vocab_size,embed_dim=decoding_embedding_size)
and
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
In which case should I use one rather than the other?
The second thing I do not understand is about tf.contrib.seq2seq.TrainingHelper and tf.contrib.seq2seq.GreedyEmbeddingHelper. I know that in the case of translation, we use mainly TrainingHelper for the training step (use the previous target to predict the next target) and GreedyEmbeddingHelper for the inference step (use the previous timestep to predict the next target).
But I do not understand how they work, in particular the different parameters used. For example, why do we need a sequence length in the case of TrainingHelper (why do we not use an EOS token)? Why does neither of them take the embedding_lookup or embed_sequence output as input?
I suppose that you're coming from this seq2seq tutorial. Even though this question is starting to get old, I'll try to answer for the people passing by like me:
For the first question, I looked at the source code behind tf.contrib.layers.embed_sequence, and it actually uses tf.nn.embedding_lookup. So it just wraps it, and creates the embedding matrix (tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))) for you. Although this is convenient and less verbose, with embed_sequence there doesn't seem to be a direct way to access the embeddings. If you want to, you have to query for the internal variable used as the embedding matrix, using the same name scope. I have to admit that the code in the tutorial above is confusing; I even suspect it uses different embeddings in the encoder and the decoder.
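To make the wrapping concrete, here is a TF 1.x sketch of the two approaches (the sizes are placeholders, and the scope name passed to embed_sequence is just an example); the second variant gives you the embedding matrix directly, which makes it easy to share between encoder and decoder:

import tensorflow as tf

vocab_size, embed_dim = 10000, 128                    # placeholder sizes
ids = tf.placeholder(tf.int32, shape=[None, None])    # [batch, time] token ids

# Option 1: embed_sequence creates the embedding variable internally.
embedded_a = tf.contrib.layers.embed_sequence(
    ids, vocab_size=vocab_size, embed_dim=embed_dim, scope="embedding")

# Option 2: create the matrix yourself, so you can inspect or reuse it directly.
embedding_matrix = tf.Variable(tf.random_uniform([vocab_size, embed_dim]))
embedded_b = tf.nn.embedding_lookup(embedding_matrix, ids)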
For the second question:
I guess using a sequence length or an EOS token is equivalent here.
TrainingHelper doesn't need the embedding_lookup, as it only forwards the inputs to the decoder; GreedyEmbeddingHelper does take the embedding as its first input, as mentioned in the documentation.
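For illustration, a TF 1.x sketch of how the two helpers are typically wired (reusing dec_embeddings and dec_embed_input from the question; batch_size, target_lengths, go_id, and eos_id are assumed to be defined elsewhere):

# Training: feed the already-embedded ground-truth targets plus their true lengths.
train_helper = tf.contrib.seq2seq.TrainingHelper(
    inputs=dec_embed_input,            # [batch, time, embed_dim]
    sequence_length=target_lengths)

# Inference: the helper embeds its own previous prediction at each step,
# which is why it takes the embedding matrix rather than embedded inputs.
infer_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
    embedding=dec_embeddings,
    start_tokens=tf.fill([batch_size], go_id),
    end_token=eos_id)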
If I understand you correctly, the first question is about the differences between tf.contrib.layers.embed_sequence and tf.nn.embedding_lookup.
According to the official docs (https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence),
Typical use case would be reusing embeddings between an encoder and decoder.
I think tf.contrib.layers.embed_sequence is designed for seq2seq models.
I found the following post:
https://github.com/tensorflow/tensorflow/issues/17417
where @ispirmustafa mentioned:
embedding_lookup doesn't support invalid ids.
Also, in another post: tf.contrib.layers.embed_sequence() is for what?
@user1930402 said:
When building a neural network model that has multiple gates that take features as input, using tensorflow.contrib.layers.embed_sequence lets you reduce the number of parameters in your network while preserving depth. For example, it eliminates the need for each gate of the LSTM to perform its own linear projection of the features.
It allows for arbitrary input shapes, which helps the implementation be simple and flexible.
For the second question, sorry, I haven't used TrainingHelper and can't answer your question.

Predict all probable trajectories in a grid structure using Keras

I'm trying to predict sequences of 2D coordinates. But I don't want only the most probable future path; I want all the probable paths, so that I can visualize them in a grid map.
For this I have training data consisting of 40,000 sequences. Each sequence consists of 10 2D coordinate pairs as input and 6 2D coordinate pairs as labels.
All the coordinates are in a fixed value range.
What would be my first step towards predicting all the probable paths? To get all probable paths I have to apply a softmax at the end, where each cell in the grid is one class, right? But how do I process the data to reflect this grid-like structure? Any ideas?
A softmax activation won't do the trick I'm afraid; if you have an infinite number of combinations, or even a finite number of combinations that do not already appear in your data, there is no way to turn this into a multi-class classification problem (or if you do, you'll have loss of generality).
The only way forward I can think of is a recurrent model employing variational encoding. To begin with, you have a lot of annotated data, which is good news; a recurrent network fed with a sequence X (10,2,) will definitely be able to predict a sequence Y (6,2,). But since you want not just one but rather all probable sequences, this won't suffice. Your implicit assumption here is that there is some probability space hidden behind your sequences, which affects how they play out over time; so to model the sequences properly, you need to model that latent probability space. A Variational Auto-Encoder (VAE) does just that; it learns the latent space, so that during inference the output prediction depends on sampling over that latent space. Multiple predictions over the same input can then result in different outputs, meaning that you can finally sample your predictions to empirically approximate the distribution of potential outputs.
Unfortunately, VAEs can't really be explained within a single paragraph over stackoverflow, and even if they could I wouldn't be the most qualified person to attempt it. Try searching the web for LSTM-VAE and arm yourself with patience; you'll probably need to do some studying but it's definitely worth it. It might also be a good idea to look into Pyro or Edward, which are probabilistic network libraries for python, better suited to the task at hand than Keras.
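If it helps as a starting point, a plain (non-variational) sequence-to-sequence baseline in Keras for the (10, 2) -> (6, 2) mapping could look like the sketch below (all layer sizes are placeholders); the variational part would then condition the decoder on a sampled latent code, so that repeated sampling yields different paths.

from tensorflow.keras import layers, Model

IN_STEPS, OUT_STEPS, DIM = 10, 6, 2

seq_in = layers.Input(shape=(IN_STEPS, DIM))
# Encode the observed trajectory into a single state vector.
state = layers.LSTM(64)(seq_in)
# Repeat the state and decode it into the 6 future coordinate pairs.
x = layers.RepeatVector(OUT_STEPS)(state)
x = layers.LSTM(64, return_sequences=True)(x)
out = layers.TimeDistributed(layers.Dense(DIM))(x)

model = Model(seq_in, out)
model.compile(optimizer="adam", loss="mse")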

Feed input to intermediate layer and then do back propagation in keras

I have looked around everywhere but could not find a way to do this.
Basically I want to feed input to some intermediate layer in a Keras model and do the backpropagation for the full graph (i.e. including the layers before the intermediate layer). To understand this, I refer you to the figure in the paper "Multi-view Convolutional Neural Networks for 3D Shape Recognition".
From the figure you can see that the features are max-pooled in the view-pooling layer and the resulting vector is then passed to the rest of the network.
In the paper they then do the backpropagation using the view-pooling features.
To achieve this I am trying a simple approach. There will not be any view-pooling layer in my model; I will do this pooling offline by taking the features for multiple views and then taking the max of them. Finally the aggregated feature will be passed to the rest of the network. However, I am not able to figure out how to do the backpropagation through the full network while passing input to an intermediate layer directly.
Any help would be appreciated. Thanks
If you have the code of the TensorFlow model, then this will be quite simple. The model would probably look like:
def model(cnns):
    viewpool_output = f(cnns)
    cnn2_output = cnn2(viewpool_output)
    ...
You would just need to change the model to
def model(viewpool_output):
    cnn2_output = cnn2(viewpool_output)
    ...
and instead of passing a "real" view-pool output, you just pass whatever feature tensor you want. But you haven't given any code, so we can only guess at what it looks like.
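For completeness, here is a hedged Keras sketch of the same workaround (the feature size and class count are placeholders): build a model whose input is the offline-pooled feature, so backpropagation only touches the layers after the pooling step.

from tensorflow.keras import layers, Model

pooled_in = layers.Input(shape=(512,))            # offline max-pooled multi-view feature
x = layers.Dense(256, activation="relu")(pooled_in)
out = layers.Dense(40, activation="softmax")(x)   # e.g. 40 shape classes

tail_model = Model(pooled_in, out)
tail_model.compile(optimizer="adam", loss="categorical_crossentropy")
# Note: with this setup the CNN layers before the pooling are not updated;
# to backpropagate through them as well, the view pooling has to stay inside the graph.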