Document Similarity using TensorFlow

I am new to both TensorFlow and also Document Similarity / Topic Modeling therefore I apologize if my questions don't make complete sense.
From my limited understanding, topic modelling is done using algorithms such as LSA, LDA, etc. I have seen code using gensim and LSA, but the training time is very high for the large set of documents I have in mind, and the CPU and RAM requirements are correspondingly heavy.
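For concreteness, the gensim-based LSA workflow I have seen looks roughly like this (a minimal sketch with a toy corpus, not my actual data; num_topics and the preprocessing are illustrative):

    # Minimal gensim LSA pipeline: bag-of-words -> TF-IDF -> LSI -> similarity index.
    from gensim import corpora, models, similarities

    raw_documents = [
        "the cat sat on the mat",
        "dogs and cats living together",
        "a brief history of topic models",
    ]  # placeholder corpus
    docs = [doc.lower().split() for doc in raw_documents]
    dictionary = corpora.Dictionary(docs)
    bow = [dictionary.doc2bow(d) for d in docs]
    tfidf = models.TfidfModel(bow)
    lsi = models.LsiModel(tfidf[bow], id2word=dictionary, num_topics=2)
    index = similarities.MatrixSimilarity(lsi[tfidf[bow]])

    # Query: project a new document into the LSI space and rank by cosine similarity.
    query = dictionary.doc2bow("cat on a mat".split())
    print(list(index[lsi[tfidf[query]]]))  # similarity to each corpus document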
TensorFlow doesn't seem to have a native LSA or LDA implementation.
I would appreciate an opinion on:
Would LDA implemented in TensorFlow perform better than it does in gensim?
Are there other TensorFlow primitives I should look at for document similarity, rather than LDA?
Once again I am sorry if my questions are too vague and do not cover sufficient information to give a proper response. I am new to this domain and I would appreciate any directions someone could point me to.
Thank you for your time.
Regards,
Jeetu

Related

TensorFlow 2 Detection Model Zoo metrics

I know it's a banality, but I'm really confused about what Speed (ms) and COCO mAP mean here.
I get the idea that lower speed and higher mAP are better, but what do those metrics actually mean?
I have to write a report about a project that uses one of the models listed in TensorFlow's GitHub model zoo, so I would like a technical description of those two if possible. I have already found something about COCO mAP and am trying to understand it, but nothing related to Speed. What does Speed measure?
I'm sorry about the stupid question, but I like to fully understand things.
It refers to inference speed: how much time it takes the network to provide an output based on your input.
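If you want a feel for it, a rough sketch of measuring inference latency yourself might look like this (the model choice, input shape, and run count here are arbitrary assumptions; the zoo's published numbers depend on the hardware they were measured on, so yours will differ):

    # Rough latency measurement: average the time of repeated forward passes.
    import time
    import numpy as np
    import tensorflow as tf

    model = tf.keras.applications.MobileNetV2(weights=None)  # any model works
    batch = np.random.rand(1, 224, 224, 3).astype("float32")

    model.predict(batch)  # warm-up run so one-time setup isn't timed
    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        model.predict(batch)
    print(f"avg latency: {(time.perf_counter() - start) / runs * 1000:.1f} ms")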

Deep learning for computer vision: What after MNIST stage?

I am trying to explore computer vision using deep learning techniques. I have gone through the basic literature, made a NN of my own to classify digits using MNIST data (without using any library like TF or Keras, and in the process understood concepts like loss functions, optimization, backpropagation, etc.), and then also explored Fashion MNIST using TF Keras.
I applied the knowledge gained so far to a Kaggle problem (identifying a plant type), but the results are not very encouraging.
So, what should my next step be? What should I do to improve my knowledge and models to solve more complex problems? What books, literature, etc. should I read to move beyond the beginner stage?
You should try hyperparameter tuning; it will help improve your model's performance. Feel free to read various articles on the topic. Fine-tuning your model is the natural next step, as you already have fundamental knowledge of how a model works.
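As a concrete starting point, a minimal sketch of automated hyperparameter tuning with the keras-tuner library might look like this (the search space values are illustrative, not recommendations):

    # Random search over layer width and learning rate on MNIST.
    import keras_tuner as kt
    import tensorflow as tf

    def build_model(hp):
        model = tf.keras.Sequential([
            tf.keras.layers.Flatten(input_shape=(28, 28)),
            tf.keras.layers.Dense(hp.Int("units", 32, 256, step=32),
                                  activation="relu"),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(
            optimizer=tf.keras.optimizers.Adam(hp.Choice("lr", [1e-2, 1e-3, 1e-4])),
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"],
        )
        return model

    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    tuner = kt.RandomSearch(build_model, objective="val_accuracy", max_trials=5)
    tuner.search(x_train / 255.0, y_train, epochs=3, validation_split=0.2)
    print(tuner.get_best_hyperparameters(1)[0].values)  # best settings found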

How to determine what types of layers I need for my deep learning model?

Suppose that I want to make a model that does something. When I search for the topic on Google or YouTube, I find many related tutorials, and it seems like some clever programmer has already implemented that model with deep learning.
But how do they know what types of layers, activation functions, loss functions, optimizers, numbers of units, etc. they need to solve that particular problem using deep learning?
Are there any techniques for knowing this, or is it just a matter of understanding and experience? Also, it would be very helpful if somebody could point me to some videos or articles answering my question.
This is more of a matter of understanding and experience. When building a model from scratch, you must understand which optimizer, loss, etc. makes sense for your particular problem. In order to choose these appropriately, you must understand the differences between the available optimizers, loss functions, etc.
As for choosing how many layers and nodes, what batch size, what learning rate, etc.: these are all hyperparameters that you will need to test and tune as you experiment with your model.
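To make the first point concrete, here is a small illustrative sketch of how the problem type usually drives the output layer and loss pairing in Keras (these are common conventions, not the only valid choices):

    import tensorflow as tf

    # Binary classification: sigmoid output + binary cross-entropy.
    binary = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    binary.compile(optimizer="adam", loss="binary_crossentropy")

    # Multi-class classification: softmax output + cross-entropy.
    multi = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(5, activation="softmax"),
    ])
    multi.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    # Regression: linear output + mean squared error.
    reg = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1),
    ])
    reg.compile(optimizer="adam", loss="mse")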
I have a Deep Learning Fundamentals YouTube playlist that you may find helpful. It covers the fundamental basics of each of these topics in short videos. Additionally, this Deep Learning with Keras playlist may also be beneficial if you're wanting to focus more on coding after getting the basic concepts down.
Thanks for the question.
The Stanford CS231n lectures on CNNs are the best for beginners; refer to the video lectures here, and the class notes are available here.
After watching the lectures and completing the assignments, you will get a basic idea of what deep learning is and which algorithms are available.
But when it comes to solving real-world problems this won't be sufficient, so take this course by Jeremy Howard, where he teaches more about how to approach a problem using the Kaggle platform.
Keep solving more problems and experimenting with new models and algorithms on platforms like HackerEarth, Kaggle, TopCoder, etc.

Which high-level TensorFlow APIs should I learn?

I have studied TensorFlow for about one month. I feel that creating a network with TensorFlow's primitive operations is very verbose. Then I found some high-level APIs, such as TF-Slim, TF Learn, and Keras, but the multiple choices confuse me, and I don't know which one I should learn.
TF-Slim is a lightweight library for defining, training, and evaluating complex models in TensorFlow, but as far as I have investigated, it is only for convnets. The networks Keras can build are more diverse.
Can anyone give a comparison between them so that I can choose which high-level API to learn? In terms of:
1. Popularity: which ones are the most popular?
2. Practicality: what kinds of networks can they build?
3. Performance: what is their training/inference performance?
... something else
Hope someone could give me a suggestion. Thanks.
I suggest you start with Keras.
It's very easy to learn, it has a broad user base (see Shobhit's link), there is a ton of reference code out there on GitHub and in tutorials / MOOCs / eBooks etc., and you can build almost anything with it. And I personally think it has good documentation (although some might disagree with that...).
Since it's an API that connects to TensorFlow, Theano, and CNTK (and possibly more frameworks in the future), you have even more flexibility.
Don't worry too much about performance. That's really not important while you're learning.
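To give a feel for how concise Keras is, here is a minimal MNIST classifier (written against the tf.keras namespace; standalone multi-backend Keras code looks essentially the same):

    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

    # Define, compile, train, and evaluate a small classifier in a few lines.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train / 255.0, y_train, epochs=3, validation_split=0.1)
    model.evaluate(x_test / 255.0, y_test)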

CNTK time series anomaly detection tutorial or documentation (RNN/LSTM)?

Problem
Do you have a tutorial for LSTM or RNN time series anomaly detection using deep learning with CNTK? If not, can you make one or suggest a series of simple steps here for us to follow?
I am a software developer and a member of a team investigating the use of deep learning on time series data we have for anomaly detection. We have not found anything in your Python docs that can help us. It seems most of the tutorials are for visual recognition problems and not specific to the problem domain of interest to us.
Using LSTM and RNN in Anomaly Detection
I have found the following:
This link references why we are trying to use time series for anomaly detection
This paper convinced us that the first link is a respected approach to the problem in general
This link also outlined the same approach
I looked around on CNTK here but didn't find any similar question, so I hope this question helps other developers in the future.
Additional Notes and Questions
My problem is that I am finding CNTK not as simple to use or as well documented as I had hoped. Frankly, our framework and stack are heavy on .NET and Microsoft technologies. So I repeat the question again for emphasis, with a few follow-ups:
Do you have any resources you feel you can recommend to developers learning neural networks, deep learning, and so on to help us understand what is going on under the hood with CNTK?
Build 2017 mentions that C# is supported by CNTK. Can you please point us to where the documentation and support for this are?
Most importantly, can you please help get us unstuck on doing anomaly analysis for time series using CNTK?
Thank you very much for your time and assistance in reading and answering this question.
Thanks for your feedback. Your suggestions help improve the toolkit.
First Bullet
I would suggest that you start with the CNTK tutorials.
https://github.com/Microsoft/CNTK/tree/master/Tutorials
They are numbered from CNTK 101 to 301, and I suggest that you work through them. Even though many of them use image data, the concepts and the models are amenable to building solutions with numerical data. The 101-103 series is great for understanding the basics of the train-test-predict workflow.
Second Bullet
Once you have trained the model (Python is recommended for training), model evaluation can be performed using different language bindings, C# being one of them.
https://github.com/Microsoft/CNTK/wiki/CNTK-Evaluation-Overview
Third Bullet
There are different approaches suggested in the papers you have cited. All of them are possible to do in CNTK with some changes to the code in the tutorials.
The key tutorials for you would be CNTK 106, CNTK 105, and CNTK 202.
Anomaly as classification: This would involve labeling your target value as one of N classes, with one of the classes being "anomaly". Then you can combine 106 with 202 to classify the prediction.
Anomaly as an autoencoder: You would need to study the 105 autoencoder tutorial. Instead of a dense network, you could apply the concept to recurrent networks. Train only on the normal data. Once trained, pass any data through the trained model: the difference between the input and its autoencoded version will be small for normal data but much larger for anomalies. The 105 tutorial uses images, but you can train these models with any numerical data.
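For illustration, here is a rough sketch of that autoencoder idea, written in Keras rather than CNTK since the approach carries over directly (the data, shapes, and threshold rule below are placeholder assumptions):

    # Train an autoencoder on normal data only, then flag inputs whose
    # reconstruction error is much larger than what was seen in training.
    import numpy as np
    import tensorflow as tf

    n_features = 30  # e.g., the length of a sliding window over the series
    normal_data = np.random.rand(1000, n_features).astype("float32")  # placeholder

    autoencoder = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(n_features,)),
        tf.keras.layers.Dense(8, activation="relu"),   # bottleneck
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(n_features),
    ])
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(normal_data, normal_data, epochs=20, verbose=0)

    # Set a threshold from reconstruction error on the normal training data.
    train_err = np.mean((normal_data - autoencoder.predict(normal_data)) ** 2, axis=1)
    threshold = train_err.mean() + 3 * train_err.std()  # illustrative rule

    new_data = np.random.rand(5, n_features).astype("float32")  # placeholder
    new_err = np.mean((new_data - autoencoder.predict(new_data)) ** 2, axis=1)
    print(new_err > threshold)  # True suggests an anomaly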
Hope you find these suggestions helpful.