Are there any references for feature extraction using LSTM RNN in tensorflow?

Currently I am trying to use a pre-trained LSTM RNN model for feature extraction.
I stumbled across the following reference for feature extraction using deep neural nets, although it deals with images:
https://www.kernix.com/blog/image-classification-with-a-pre-trained-deep-neural-network_p11
In a similar fashion I would like to use an LSTM RNN, https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition/, for feature extraction. The code is implemented using TensorFlow's BasicLSTMCell.
Is there any way to get a layer like "pool_3:0", as described in the first reference link?
Any links or references would be helpful.
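For reference, the usual way to pull an intermediate tensor out of a restored TF1 graph looks roughly like the sketch below. The checkpoint path, input placeholder name, and LSTM output tensor name are placeholders of my own, not names taken from the linked repository.

import tensorflow as tf

with tf.Session() as sess:
    # rebuild the graph from the saved meta file and restore the weights
    saver = tf.train.import_meta_graph('model.ckpt.meta')
    saver.restore(sess, 'model.ckpt')
    graph = tf.get_default_graph()
    # the tensor names below are assumptions; inspect graph.get_operations()
    # to find the op that holds the LSTM representation you want as features
    inputs = graph.get_tensor_by_name('input:0')
    features_tensor = graph.get_tensor_by_name('lstm_output:0')
    # batch_x stands in for your input batch
    features = sess.run(features_tensor, feed_dict={inputs: batch_x})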

Related

General usefulness of Dense layers for different identification tasks

I'd like to ask: is it practical to apply embeddings and similarity metrics to any form of identification task? If I had a neural network trained to find different objects in a photo, would extracting the outputs of the fully-connected/Dense layers and clustering them be useful?
I've recently found the embedding projector tool from TensorFlow, which is very cool and useful. I know that there has been some work on word embeddings and how similar words cluster together. This is the case for faces as well.
Having said that, I want to apply the same methods to analyzing geological sites: can I train a model to create embeddings of a site's features and use clustering methods to classify them?
Yes, you can do that. You can use embeddings for images and visualize them in TensorBoard.
You can adapt the Fashion-MNIST embedding example found here for your use case.
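As a rough sketch of that workflow (the model file, layer choice, and data variables below are assumptions for illustration, not taken from the Fashion-MNIST example), you can read out a penultimate-layer embedding for each sample and cluster the embeddings:

from tensorflow import keras
from sklearn.cluster import KMeans

# load a classifier you have already trained (path is a placeholder)
trained = keras.models.load_model('site_classifier.h5')
# reuse everything up to the penultimate layer as an embedding model
embedder = keras.Model(inputs=trained.input,
                       outputs=trained.layers[-2].output)
embeddings = embedder.predict(site_images)                 # (num_samples, embedding_dim)
clusters = KMeans(n_clusters=10).fit_predict(embeddings)   # group similar sites

These embeddings can also be written out with the TensorBoard projector to inspect the clusters visually.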

How to add a layer which is implemented by TensorFlow into a PyTorch neural model?

I'd like to add a layer to a PyTorch-based neural model. Basically I am trying to combine two codebases.
But I notice that the layer I want to add is implemented in TensorFlow. Is there an easy way to integrate a TensorFlow layer into a PyTorch neural model?
The error is shown as:
module ‘torch.nn’ has no attribute ‘tensorflow_layer’

Differences between different attention layers for Keras

I am trying to add an attention layer for my text classification model. The inputs are texts (e.g. movie review), the output is a binary outcome (e.g. positive vs negative).
from keras.models import Sequential
from keras.layers import Embedding, Bidirectional, CuDNNGRU, Dense

model = Sequential()
model.add(Embedding(max_features, 32, input_length=maxlen))
model.add(Bidirectional(CuDNNGRU(16, return_sequences=True)))
##### add attention layer here #####
model.add(Dense(1, activation='sigmoid'))
After some searching, I found a couple of ready-to-use attention layers for Keras. There is the keras.layers.Attention layer that is built into Keras. There are also the SeqWeightedAttention and SeqSelfAttention layers in the keras-self-attention package. As someone relatively new to deep learning, I have a hard time understanding the mechanism behind these layers.
What does each of these layers do? Which one would be best for my model?
Thank you very much!
If you are using an RNN, I would not recommend using the keras.layers.Attention class.
While analysing the tf.keras.layers.Attention GitHub code to better understand how to use it, the first line I came across was: "This class is suitable for Dense or CNN networks, and not for RNN networks."
There is another open-source version maintained by CyberZHG called keras-self-attention. To the best of my knowledge it is NOT part of the Keras or TensorFlow library and seems to be an independent piece of code. It contains the two classes you mentioned, SeqWeightedAttention and SeqSelfAttention; the former returns a 2D value and the latter a 3D value, so SeqWeightedAttention should work for your situation. The former seems to be loosely based on Raffel et al. and can be used for sequence classification, while the latter seems to be a variation of Bahdanau attention.
In general, I would suggest you write your own sequence-to-classification model. The attention piece can be added in less than half a dozen lines of code (bare-bones essence), much less effort than integrating, debugging, or understanding the code in these external libraries. A rough sketch follows after the reference below.
Please refer: Create an LSTM layer with Attention in Keras for multi-label text classification neural network
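As a bare-bones illustration of that suggestion (a sketch only: the layer sizes, the tanh scoring, and the placeholder values for max_features and maxlen are my assumptions, not taken from the linked answer), a simple additive attention over the RNN outputs can be written with the functional API:

import tensorflow as tf
from tensorflow.keras import layers, models

max_features, maxlen = 20000, 100                      # assumed vocabulary size / sequence length
inputs = layers.Input(shape=(maxlen,))
h = layers.Embedding(max_features, 32)(inputs)
h = layers.Bidirectional(layers.GRU(16, return_sequences=True))(h)   # (batch, time, 32)
scores = layers.Dense(1, activation='tanh')(h)                       # one score per timestep
weights = layers.Softmax(axis=1)(scores)                             # attention weights over time
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, weights])  # weighted sum
outputs = layers.Dense(1, activation='sigmoid')(context)
model = models.Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

The Dense-score / softmax / weighted-sum trio is the whole attention mechanism here; everything else is the same classifier you already have.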

How to perform a multi label classification with tensorflow in purpose of auto tagging?

I'm new to tensorflow and would like to know if there is any tutorial or example of a multi-label classification with multiple network outputs.
I'm asking this because I have a collection of articles, in which, each article can have several tags.
Out of the box, tensorflow supports binary multi-label classification via the tf.nn.sigmoid_cross_entropy_with_logits loss function or the like (see the complete list in this question). If your tags are binary, in other words there is a predefined set of possible tags and each one can either be present or not, you can safely go with that: a single model classifies all labels at once. There are a lot of examples of such networks, e.g. the one from this question.
Unfortunately, multinomial multi-label classification is not supported in tensorflow. If that is your case, you'd have to build a separate classifier for each label, each using tf.nn.softmax_cross_entropy_with_logits or a similar loss.
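Here is a minimal sketch of the binary multi-label case (the labels, logits, and the three-tag vocabulary are made up for illustration): each tag gets its own logit, and the sigmoid cross-entropy treats every tag as an independent yes/no decision.

import tensorflow as tf

labels = tf.constant([[1., 0., 1.],                  # article 1 carries tags 0 and 2
                      [0., 1., 0.]])                 # article 2 carries tag 1
logits = tf.constant([[2.0, -1.0, 0.5],              # raw network outputs, one per tag
                      [-0.3, 1.2, -2.0]])
per_tag = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
loss = tf.reduce_mean(per_tag)                       # average over articles and tags
predicted_tags = tf.sigmoid(logits) > 0.5            # threshold the independent probabilities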

finetuning tensorflow seq2seq model

I've trained a seq2seq model for machine translation (DE-EN) and saved the trained model checkpoint. Now I'd like to fine-tune this checkpoint on some domain-specific data samples which were not seen in the previous training phase. Is there a way to achieve this in tensorflow, for example by modifying the embedding matrix somehow?
I couldn't find any relevant papers or works addressing this issue.
Also, I'm aware that the vocabulary files need to be updated according to the new sentence pairs. But do we then have to start training from scratch again? Isn't there an easy way to dynamically update the vocabulary files and embedding matrix according to the new samples and continue training from the latest checkpoint?
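For the continue-from-checkpoint part, the usual TF1 pattern is roughly the sketch below; the graph construction, train_op, loss, and the feed helper are placeholders rather than a specific seq2seq implementation, and growing the vocabulary would still require resizing or re-mapping the embedding matrix separately.

import tensorflow as tf

# ... rebuild exactly the same graph as in the original training script here,
# so the variable names match what is stored in the checkpoint ...
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints/'))   # path is an assumption
    for batch in domain_batches:                                      # new in-domain sentence pairs
        _, batch_loss = sess.run([train_op, loss], feed_dict=make_feed(batch))
    saver.save(sess, 'checkpoints/finetuned')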