I'm looking for a way to achieve multiple classifications for an input. The number of outputs is specified, and the class sets may or may not be the same for the outputs. The sample belongs to one class of each class set.
My question is, what should the target data and the output layer look like? What activation, loss and training functions could be used, and how should the layer be connected to the hidden layer? I'm not necessarily looking for an optimal solution, just a working one.
My current guess at what could work is to make the target data multiple concatenated one-hot vectors and to give the output layer one softmax group per vector. I don't know how the layers would be connected in that solution, or how the net would figure out the sizes of the class sets. I think a label powerset would not work for my needs.
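To make the guess concrete, here is roughly the kind of thing I have in mind (purely illustrative; the feature count and the two class-set sizes of 3 and 5 are made up):

from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense

inputs = Input(shape=(20,))                       # made-up feature count
hidden = Dense(64, activation='relu')(inputs)     # shared hidden layer
out_a = Dense(3, activation='softmax', name='set_a')(hidden)  # one softmax head per class set
out_b = Dense(5, activation='softmax', name='set_b')(hidden)
model = Model(inputs, [out_a, out_b])
model.compile(optimizer='adam', loss='categorical_crossentropy')  # same loss applied to each head
# targets would be a list of two one-hot arrays, one per class set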
I think the MATLAB patternnet function can create a net that does that, but I don't know how the resulting net works. Code for TensorFlow or Keras would be very welcome.
Maybe it's a bit late to respond to the question, but I am working on multi-label classification and just found a solution.
As for Keras, here's an example:
target label: [1, 0, 0, 1, 0]
output layer: Dense(5, activation='sigmoid')
loss: 'binary_crossentropy'
That will work well if the dataset is big enough.
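Putting those pieces together, a minimal end-to-end sketch (the input dimension and hidden layer size are placeholders I made up):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(64, activation='relu', input_shape=(20,)),  # placeholder input dimension
    Dense(5, activation='sigmoid'),                   # one independent probability per label
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# y is a multi-hot matrix whose rows look like [1, 0, 0, 1, 0]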
I have tabular data from a sensor measuring various features. When the sensor is "off" it reports zeros as values. I am training some machine learning models (kNN, XGBoost, and a neural net) for classification. Here's the issue I am facing: I can train and predict on a row-by-row basis; however, it would be better to classify a whole range of rows rather than each row individually. A further complication is that the range can vary in size. For a very basic example, please see this diagram illustrating the range.
I have a basic Keras model:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(100, activation='relu', input_shape=(20,)))  # 20 features per row
model.add(Dense(100, activation='relu'))
model.add(Dense(100, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))          # num_classes = 4
model.summary()
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
And the training data is shaped with 20 features and 4 classes. How would I:
1.) Format my training data
2.) Shape input data to classify as a "whole" rather than row by row
3.) While this discussion has been about Keras, can the same input shaping/training be applied to XGBoost or a kNN?
I assume that the blue line in that graph represents your targets. Here is a fundamental issue I see with something like predicting the range as a whole instead of sample by sample.
Assuming that there is some reasonable logic that could collapse the range of samples into one (taking the mean of each feature, or concatenation, or whatever...), you would obviously first need to identify the range itself. This range-identification step is, however, dependent on knowledge of the target (at least it seems that way based on the presented graph).
If the preprocessing step is dependent on the knowledge of the target, you would need to know the target for the test set as well before you could preprocess the data and make the predictions. In other words, you would need to know the outcome before you could make the prediction which would then be rather pointless.
You have stated that you are trying to perform classification but your target seems to be continuous. I don't know what your classes are or what patterns they are associated with but you would need to bin the target before you could start solving this as a classification problem. You would most likely lose a lot of information by doing this.
Therefore, I would start by solving it as a regression problem, trying to predict that continuous target for each sample. Once you have that, you can apply some pattern-matching logic to identify the class for a given sample/range (for example, you could slice the sequence of targets/predictions from the previous step, associate each slice with the desired class, and use this data as a new dataset for some classification algorithm).
As for the variable-length inputs: some deep learning architectures allow you to work with input of variable length, such as RNNs or adaptive pooling. You may try this once you know how to predict the continuous target as mentioned before. Non-deep-learning algorithms usually expect all samples to have the same shape, so there is no general/automatic way of reusing the same input between them and deep learning algorithms that work with variable-length input.
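As a rough sketch of the variable-length idea in Keras (the 20 features and 4 classes come from the question, everything else is assumed):

from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense, GlobalAveragePooling1D

inputs = Input(shape=(None, 20))               # None = variable number of rows per range
x = Dense(64, activation='relu')(inputs)       # applied to every row independently
x = GlobalAveragePooling1D()(x)                # collapses the whole range into one vector
outputs = Dense(4, activation='softmax')(x)
model = Model(inputs, outputs)
model.compile(loss='categorical_crossentropy', optimizer='adam')
# Ranges of different lengths can be fed one at a time (batch_size=1) or padded.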
I'm following the tutorial on the TensorFlow site (https://www.tensorflow.org/tutorials/text/word_embeddings#create_a_simple_model) to learn word embeddings, and one thing that confuses me is the purpose of the GlobalAveragePooling1D layer right after the embedding layer, as follows:
from tensorflow import keras
from tensorflow.keras import layers

# encoder and embedding_dim are defined earlier in the tutorial
model = keras.Sequential([
    layers.Embedding(encoder.vocab_size, embedding_dim),
    layers.GlobalAveragePooling1D(),
    layers.Dense(16, activation='relu'),
    layers.Dense(1)
])
I understand what pooling means and how it's done. If someone can explain why we need a pooling layer, and what would change if we didn't use it, I'd appreciate it.
The purpose of this tutorial is to get you to understand word-embeddings through a simple toy task: binary sentiment analysis.
To start with, they make you code a simple model: take the average of all embeddings in a sentence and add a feed-forward neural net to classify this aggregated input. GlobalAveragePooling1D does this averaging.
Obviously, in the real world you'd want to use more complex models such as RNNs, LSTMs, bidirectional models, atrous-convolution-based models or Transformers, but that's not the point of this tutorial.
The "simple model" they mention being a feed-forward neural net, it expects a fixed input dimension so when you have sequential data of variable length you need to address this somehow: averaging, padding, cropping etc. Here they average with this GlobalAveragePooling1D layer
I am trying to add dropout in convolutional layers (although it seems people don't do this a lot).
According to cs231n, it is recommended to drop entire activation maps instead of individual units within the maps (I think this makes sense, because each activation map extracts the same feature at different positions).
In TensorFlow, I can't find any API that does this directly, so how can I do it?
This is my first time asking a question on StackOverflow, and I would appreciate any advice and answers.
You can actually do this with the available dropout functions via the noise_shape argument. E.g. using the layers API:
x = tf.layers.dropout(x, noise_shape=[batch_size, 1, 1, features])
This would be for 2D convolution and channels_last format. We only generate a single noise value for image width/height which will be broadcast over the image dimensions. However, we still generate a different noise value for each feature/activation map.
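If you are using (tf.)Keras instead of the tf.layers API, SpatialDropout2D gives you the same behaviour out of the box; a minimal sketch, with made-up filter counts and rate:

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Conv2D(64, 3, activation='relu', input_shape=(32, 32, 3)),
    layers.SpatialDropout2D(0.2),   # drops entire feature maps, not individual units
    layers.Conv2D(64, 3, activation='relu'),
])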
So far I have trained a couple of different models in TensorFlow (with Keras), and I see that getting the batch_size right seems to be important not just for the speed of training but also for the resulting accuracy of the model.
What confuses me is the case where a model has an actual batch dimension as the first dimension of the input (and of the output as well). If my batch size is 32 but I'm always feeding in a single sample at run-time, then where does the batch dimension come into play? How could I utilize most of it if I'm inherently only using 1/batch_size of it in a forward pass?
If you are curious, the model I am researching is this one:
https://github.com/pierluigiferrari/ssd_keras/blob/master/models/keras_ssd300.py
see:
# Output shape of predictions: (batch, n_boxes_total, n_classes + 4 + 8)
predictions = Concatenate(axis=2, name='predictions')([mbox_conf_softmax, mbox_loc, mbox_priorbox])
The tensors have run through numerous other layers whose constants and weights were also pretrained with [batch_size]. To me it just seems like inputs at different batch indices would have to yield different results. Maybe I just need something incredibly obvious pointed out to me.
It would seem that after training you must recompile the model with a batch size of 1 and then transfer the weights from the training model to the new model for evaluation. The alternative is performing 'batch_size' predictions at once (which of course is not always feasible for every application). If there are other options (or if I read this wrong), please feel free to add an answer.
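A rough sketch of that weight transfer in Keras; build_model is a hypothetical helper that constructs the same architecture for a given fixed batch size:

# build_model, x_train, y_train and single_sample are placeholders for this sketch
train_model = build_model(batch_size=32)
train_model.fit(x_train, y_train, batch_size=32)

infer_model = build_model(batch_size=1)             # identical layers, batch dimension of 1
infer_model.set_weights(train_model.get_weights())  # copy the trained weights across
prediction = infer_model.predict(single_sample)     # single_sample has shape (1, ...)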
There are quite a few examples on how to use LSTMs alone in TF, but I couldn't find any good examples on how to train CNN + LSTM jointly.
From what I see, it is not quite straightforward how to do such training, and I can think of a few options here:
First, I believe the simplest (or the most primitive) solution would be to train the CNN independently to learn features and then train the LSTM on the CNN features without updating the CNN part, since one would probably have to extract and save these features to numpy and then feed them to the LSTM in TF. But in that scenario, one would probably have to use a differently labeled dataset for pretraining the CNN, which eliminates the advantage of end-to-end training, i.e. learning features for the final objective targeted by the LSTM (besides the fact that one has to have these additional labels in the first place).
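To illustrate what I mean by this first option, a rough sketch (the pretrained CNN, the frame size, and the variable names frames and num_classes are just placeholders):

import numpy as np
import tensorflow as tf

# frames is assumed to be an array of shape (num_sequences, timesteps, 96, 96, 3)
cnn = tf.keras.applications.MobileNetV2(include_top=False, pooling='avg',
                                        input_shape=(96, 96, 3), weights='imagenet')
flat = frames.reshape((-1, 96, 96, 3))                 # merge sequences and time steps
features = cnn.predict(flat).reshape((frames.shape[0], frames.shape[1], -1))
np.save('cnn_features.npy', features)                  # save features; the CNN is never updated

lstm = tf.keras.Sequential([
    tf.keras.layers.LSTM(128, input_shape=(None, features.shape[-1])),
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])
lstm.compile(loss='categorical_crossentropy', optimizer='adam')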
The second option would be to concatenate all time slices in the batch dimension (a 4-D tensor), feed that to the CNN, then somehow repack those features into the 5-D tensor needed for training the LSTM, and finally apply a cost function. My main concern is whether it is possible to do such a thing. Also, handling variable-length sequences becomes a little bit tricky: for example, in a prediction scenario you would only feed a single frame at a time. Thus, I would be really happy to see some examples if that is the right way of doing joint training. Besides that, this solution looks more like a hack, so if there is a better way to do it, it would be great if someone could share it.
Thank you in advance!
For joint training, you can consider using tf.map_fn as described in the documentation https://www.tensorflow.org/api_docs/python/tf/map_fn.
Let's assume that the CNN is built along similar lines to the one described here: https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10.py.
def joint_inference(sequence):
    # Run the CNN on every frame; inference() comes from the cifar10 example linked above.
    inference_fn = lambda image: inference(image)
    logit_sequence = tf.map_fn(inference_fn, sequence, dtype=tf.float32, swap_memory=True)
    # Feed the per-frame CNN features into an LSTM.
    lstm_cell = tf.contrib.rnn.LSTMCell(128)
    outputs, final_state = tf.nn.dynamic_rnn(cell=lstm_cell, inputs=logit_sequence,
                                             dtype=tf.float32)
    # Project each LSTM output onto class scores.
    projection_function = lambda state: tf.contrib.layers.linear(
        state, num_outputs=num_classes, activation_fn=tf.nn.sigmoid)
    projection_logits = tf.map_fn(projection_function, outputs)
    return projection_logits
Warning: you might have to look into device placement, as described here (https://www.tensorflow.org/tutorials/using_gpu), if your model is larger than the memory the GPU can allocate.
An alternative would be to flatten the video batch into an image batch, do a forward pass through the CNN, and reshape the features back into a sequence for the LSTM.
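For what it's worth, a minimal Keras sketch of that flatten-and-reshape idea (all shapes and layer sizes are made up); TimeDistributed takes care of merging the time axis into the batch axis and unmerging it afterwards:

import tensorflow as tf
from tensorflow.keras import layers

frames = layers.Input(shape=(None, 64, 64, 3))        # (time, height, width, channels)
cnn = tf.keras.Sequential([
    layers.Conv2D(32, 3, activation='relu', input_shape=(64, 64, 3)),
    layers.GlobalAveragePooling2D(),
])
x = layers.TimeDistributed(cnn)(frames)               # CNN applied per frame -> (batch, time, 32)
x = layers.LSTM(128)(x)
outputs = layers.Dense(10, activation='softmax')(x)
model = tf.keras.Model(frames, outputs)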