I am currently studying TensorFlow. I have already written some simple models such as a CNN, an RNN, and an LSTM, and now I want to implement a convolutional LSTM. I read this paper and tried to implement it as an exercise, but as far as I searched there is no source code available on the internet. If someone knows where an available implementation is, please let me know.
Yes, this is done in the Neural GPU TensorFlow model by Łukasz Kaiser and Ilya Sutskever.
It uses GRUs rather than LSTMs, but those are very similar cell types. The model also differs a little from typical RNN implementations: it does not accept new inputs after the first time step. Instead, the input is fed into the initial cell state, and that "mental image" state then evolves through the timesteps. A rough sketch of this idea follows after the links below.
The paper is here.
The neural GPU model implementation in TensorFlow can be found here.
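To make the state-evolution idea concrete, here is a rough, hedged sketch in TensorFlow of a convolutional GRU-style update applied to an initial state derived from the input, with no further inputs after the first step. This is not the actual Neural GPU code; the layer sizes, kernel size, and number of steps are made-up placeholders.

import tensorflow as tf

# Toy sketch of the "mental image" recurrence described above: the input
# is embedded once into the initial state, then a convolutional GRU-style
# update (gates computed from the state alone, since there is no per-step
# input) is applied for a fixed number of steps with shared weights.

class ConvGRUCell(tf.keras.layers.Layer):
    def __init__(self, filters, kernel_size=3):
        super().__init__()
        def conv(activation):
            return tf.keras.layers.Conv2D(
                filters, kernel_size, padding="same", activation=activation)
        self.update_gate = conv("sigmoid")   # z_t
        self.reset_gate = conv("sigmoid")    # r_t
        self.candidate = conv("tanh")        # h~_t

    def call(self, state):
        z = self.update_gate(state)
        r = self.reset_gate(state)
        h = self.candidate(r * state)
        return z * state + (1.0 - z) * h     # GRU-style interpolation

embed = tf.keras.layers.Conv2D(32, 3, padding="same", activation="tanh")
cell = ConvGRUCell(filters=32)

x = tf.random.normal([1, 16, 16, 1])   # toy input "image"
state = embed(x)                        # initial "mental image" state
for _ in range(8):                      # unrolled recurrence, no new inputs
    state = cell(state)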
I saw PyTorch Lightning advertised as being PyTorch, but for people who don't want to worry so much about the underlying methodology. This narrative appears on the PyTorch Lightning website, but also here, for example.
For hardware reasons, does something similar exist for TensorFlow? I have a code example for neural nets here, written in PyTorch and PyTorch Lightning, but I am not sure how to rewrite it in TensorFlow.
Probably the closest analogue is Keras (formerly a separate project, but for some time now integrated into TF, so you can use Keras as a high-level API).
Note that you can also use the tensorflow_addons package (I personally enjoy working with it) and other libraries and wrappers that complement TensorFlow; since Keras is integrated into TF, you are very likely to use them alongside your Keras code as well.
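If it helps to see the analogy concretely, here is a minimal sketch of the Keras high-level workflow, which plays roughly the role that LightningModule plus Trainer play in PyTorch Lightning: the training loop, device placement, and metric tracking are handled for you. The toy data and layer sizes below are made up purely for illustration.

import numpy as np
import tensorflow as tf

# Define the model declaratively; Keras builds the weights for you.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1),
])

# compile() wires up the optimizer, loss, and metrics.
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Made-up regression data, just to show the fit() call.
x = np.random.rand(256, 20).astype("float32")
y = np.random.rand(256, 1).astype("float32")

# fit() runs the whole training loop, including validation.
model.fit(x, y, epochs=3, batch_size=32, validation_split=0.2)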
I want to use a pretrained AlexNet for transfer learning, but I don't see it available in the Keras library.
Am I missing something here?
The other alternative I see is to create the model and then either
- load pretrained weights, or
- train it from scratch.
Training from scratch on the ImageNet dataset is not possible for me due to resource constraints; loading pre-trained weights would work.
Could you provide any pointers for getting the pretrained weights for AlexNet?
Thanks,
As of right now, Keras does not (officially) seem to offer a pre-trained AlexNet model. PyTorch, on the other hand, does. If you are willing to use a different framework for the task, you can retrieve a pre-trained AlexNet like so:
import torchvision.models as models
alexnet = models.alexnet(pretrained=True)
You can find the list of available pre-trained models here, and a transfer learning tutorial for image classification here.
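Building on the snippet above, a hedged sketch of the transfer-learning step itself could look like the following; num_classes, the learning rate, and the choice of optimizer are placeholders, not something prescribed by torchvision.

import torch
import torch.nn as nn
import torchvision.models as models

num_classes = 10                                   # placeholder for your task
alexnet = models.alexnet(pretrained=True)          # newer torchvision uses weights=... instead

# Freeze the ImageNet feature extractor so only the new head is trained.
for param in alexnet.features.parameters():
    param.requires_grad = False

# AlexNet's classifier is an nn.Sequential; its last Linear layer (index 6)
# maps 4096 features to the 1000 ImageNet classes -- replace it for the new task.
alexnet.classifier[6] = nn.Linear(4096, num_classes)

# Optimize only the parameters of the new layer.
optimizer = torch.optim.SGD(alexnet.classifier[6].parameters(), lr=1e-3, momentum=0.9)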
Hope that answers your question!
I need to start an object detection project. Can anyone suggest a framework with good accuracy and speed? I have read about ImageNet, ResNet, MobileNet, YOLO, TensorFlow and dlib features. Can anyone give a comparison of them and suggest a good option?
A good overview is described in "Speed/accuracy trade-offs for modern convolutional object detectors" (https://arxiv.org/abs/1611.10012).
In order to save time, you may consider using the Google Object Detection API (https://github.com/tensorflow/models/tree/master/research/object_detection); they have a tutorial on how to train on your own dataset.
It is hard to say which object detection framework is the best. However, people usually stick to Faster R-CNN (for accuracy) and SSD or YOLOv2 (for speed).
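If you just want to get a feel for inference with one of these detectors before committing to a framework, a hedged sketch using a pre-trained SSD from TF Hub could look like the following. The model URL and the output dictionary keys follow the usual TF2 Object Detection API convention, but treat both as assumptions and check the model page for the exact signature.

import tensorflow as tf
import tensorflow_hub as hub

# Load a pre-trained SSD MobileNet detector (URL assumed; verify on tfhub.dev).
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

# The detector expects a uint8 image batch of shape [1, H, W, 3].
image = tf.zeros([1, 320, 320, 3], dtype=tf.uint8)   # placeholder image
outputs = detector(image)

boxes = outputs["detection_boxes"]      # [1, N, 4] normalized ymin, xmin, ymax, xmax
scores = outputs["detection_scores"]    # [1, N] confidence per detection
classes = outputs["detection_classes"]  # [1, N] COCO label ids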
I have been looking for AlexNet models written in TensorFlow, and all I found was code that already uses pre-trained weights.
Do you have any idea whether there is code in which the weights are built and trained during the execution of the model?
Thanks.
You can find a nice article here: Finetuning AlexNet with TensorFlow.
It contains the address of the GitHub code.
You can find a definition of the AlexNet model in TensorFlow in the path tensorflow/contrib/slim/python/slim/nets/alexnet.py of the TensorFlow repository (among the examples of what used to be TF-Slim and now is just tf.contrib.layers).
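For reference, a hedged TF 1.x sketch of building that slim AlexNet graph from scratch (weights randomly initialized at graph construction, nothing pre-trained loaded) might look like this; the placeholder shapes and class count are assumptions.

import tensorflow as tf

# tf.contrib.slim ships reference network definitions, including AlexNet.
slim_nets = tf.contrib.slim.nets

# alexnet_v2 expects 224x224 RGB inputs by default.
images = tf.placeholder(tf.float32, [None, 224, 224, 3])
logits, end_points = slim_nets.alexnet.alexnet_v2(
    images, num_classes=1000, is_training=True)

with tf.Session() as sess:
    # This creates and initializes fresh weights -- no pre-trained checkpoint.
    sess.run(tf.global_variables_initializer())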
Another alternative is here, with a link to the model. But you can always train from scratch and check for yourself.
Note: this only runs for 30 or so epochs (at least at the time of writing) and reaches lower accuracy than claimed in the paper, but you can always tweak the learning rate and run for more epochs to get better accuracy.
Is there a comprehensive CTC loss example with TensorFlow out there? The docs for tensorflow.contrib.ctc don't contain enough information for me. I know that there is one Stack Overflow post, but I can't get that to work.
Maybe someone has a complete (bidirectional) LSTM example with sample data that they could share. Thanks.
See here for an example with bidirectional LSTM and CTC implementations, training a phoneme recognition model on the TIMIT corpus. If you don't have access to TIMIT or another phoneme-transcribed data set, you probably won't get any decent performance with a single-layer model like this, but the basic structure should hold.
Update: If you don't have access to TIMIT, or you just want to see the thing run without formatting your inputs to make the code work, I've added an 8-sample toy data set that you can overfit to see the training in action.
Have you seen the unit tests for CTC? See the ctc_loss test and the ctc_decoder tests.
These contain examples of usage that may get you further along in understanding how to use the ops.
Chris Dinanth has provided a great example of CTC and RNNs used for speech recognition. His model recognizes speech using phonemes. The CTC loss used is tf.keras.backend.ctc_batch_cost (a minimal usage sketch follows below).
The code is at https://github.com/chrisdinant/speech
and a great explanation of what was done can be found at https://towardsdatascience.com/kaggle-tensorflow-speech-recognition-challenge-b46a3bca2501
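Since the loss mentioned above is tf.keras.backend.ctc_batch_cost, here is a minimal, self-contained sketch of its calling convention. The shapes and the random toy data are made up; it only illustrates the expected tensors, not a full training setup.

import numpy as np
import tensorflow as tf

batch, time_steps, num_symbols, max_label_len = 2, 50, 28, 10

# Per-frame symbol probabilities, e.g. the softmax output of a (bidirectional) RNN.
y_pred = tf.nn.softmax(tf.random.uniform([batch, time_steps, num_symbols]))

# Dense integer label sequences; index num_symbols - 1 is reserved for the CTC blank.
y_true = np.random.randint(1, num_symbols - 1,
                           size=(batch, max_label_len)).astype("float32")

input_length = np.full((batch, 1), time_steps, dtype="int32")    # frames per utterance
label_length = np.full((batch, 1), max_label_len, dtype="int32") # labels per utterance

loss = tf.keras.backend.ctc_batch_cost(y_true, y_pred, input_length, label_length)
print(loss.shape)   # (batch, 1): one CTC loss value per sequence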