Advantages and Disadvantages of MXNet compared to other Deep Learning APIs [closed] - tensorflow

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
Recently I decided to learn MXNet, as some code I need to use is written with this API.
However, I would like to know the advantages and disadvantages of MXNet compared to the other deep learning libraries out there.

Perhaps the biggest reason for considering MXNet is its high-performance imperative API, which is one of its most important advantages over other platforms. An imperative API with autograd makes it much easier and more intuitive to compose and debug a network. PyTorch also offers an imperative API, but MXNet is the only platform, AFAIK, that supports hybridization, which effectively lets an imperative model be converted to a symbol for performance similar to the symbolic API. Here is a link to tutorials on Gluon, MXNet's imperative API: http://gluon.mxnet.io/
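To make "imperative with autograd" concrete, here is a toy scalar sketch of the idea in plain Python (this is not MXNet's actual API, just an illustration of why the style is easy to debug: every operation runs eagerly and records how to push gradients back to its inputs):

```python
# Toy illustration of imperative execution with autograd (not MXNet's
# real API): values are computed immediately, so you can debug with
# ordinary print statements, while each op records its backward rule.

class Var:
    def __init__(self, value, grad_fn=None):
        self.value = value       # computed eagerly (imperatively)
        self.grad = 0.0
        self._grad_fn = grad_fn  # how to propagate gradients to inputs

    def __mul__(self, other):
        out = Var(self.value * other.value)
        def grad_fn(g):
            self.backward(g * other.value)
            other.backward(g * self.value)
        out._grad_fn = grad_fn
        return out

    def __add__(self, other):
        out = Var(self.value + other.value)
        def grad_fn(g):
            self.backward(g)
            other.backward(g)
        out._grad_fn = grad_fn
        return out

    def backward(self, g=1.0):
        self.grad += g
        if self._grad_fn:
            self._grad_fn(g)

x = Var(2.0)
y = x * x + x * Var(3.0)  # y = x^2 + 3x, evaluated step by step
y.backward()              # dy/dx = 2x + 3 = 7 at x = 2
print(y.value, x.grad)    # 10.0 7.0
```

MXNet's Gluon (and PyTorch) implement this recording at the tensor level; hybridization then lets MXNet trade the eager execution back for a compiled symbolic graph.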
Given that you are working from existing example code, that example may well have been written with the symbolic API. You may notice MXNet's advantage in the symbolic API when training on many GPUs; otherwise you won't notice much of a difference (except perhaps in memory usage).
TensorFlow does have a one-year head start over MXNet and as a result a larger user base, but it only supports the symbolic API (its imperative API is very new and meant only for experimentation), which makes a network significantly harder to debug when you run into issues. However, MXNet has quickly caught up in features, and with the 1.0 release I don't think there is anything in TF that MXNet doesn't support.

Related

What is the advantage of using tensorflow instead of scikit-learn for doing regression? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
I am new to machine learning and want to start with basic regression analysis. I saw that scikit-learn provides a simple way to do this, but why do people use TensorFlow for regression instead? Thanks!
If the only thing you are doing is regression, scikit-learn is good enough and will definitely do the job. TensorFlow is more of a deep learning framework for building deep neural networks.
Some people use TensorFlow for regression, perhaps out of personal interest or because they think TensorFlow is better known or more "advanced".
Tensorflow is a deep learning framework and involves far more complex decisions concerning algorithm design.
As a first step, scikit-learn is recommended, because you will get a first ML model working faster. Later you can move on to a deep learning model with TensorFlow. :-)
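To show how little code basic regression needs, here is the closed-form least-squares fit by hand in plain Python (made-up data; scikit-learn's LinearRegression gives you the same kind of fit in two lines, with no deep learning framework required):

```python
# Ordinary least squares for simple linear regression, y ≈ slope*x + intercept.
# Made-up data lying roughly on y = 2x.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.1, 8.0, 9.9]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = covariance(x, y) / variance(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(slope, intercept)  # slope ≈ 1.97, intercept ≈ 0.09
```

Anything at this level of complexity is comfortably within scikit-learn's territory; TensorFlow starts paying off when the model is a multi-layer network rather than a single linear fit.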

TensorFlow - Advantages to a certain language [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
I am researching into using TensorFlow and am trying to decide which language to write in.
I currently use Clojure in my day job (and since you can use Java, perhaps I should check out a Clojure wrapper). I have also started to learn Haskell; as TensorFlow is heavily mathematical, maybe Haskell would be a good fit.
I have read that TensorFlow is mainly written in C++ and so Python is the main language that people use.
Is it best to write TensorFlow in Python or does Clojure/Java or Haskell work just as well?
It depends on what your use case is.
For exploring machine learning concepts or for doing machine learning research Python is your best bet. It has the most features, documentation and user base to help with support.
Java and some other language bindings are geared more towards integrating models with existing code (production pipelines, apps, other production servers, etc.) and don't have the convenience or breadth of API that Python provides (quoting the API documentation: "The Python API is at present the most complete and the easiest to use, but other language APIs may be easier to integrate into projects and may offer some performance advantages in graph execution.")
The Haskell bindings might be interesting, but again, if you're exploring TensorFlow, you're probably better off with Python to begin with. If and when you need to integrate models into other software, you will be able to export them from Python and import/execute them in other languages.
Hope that helps.

Seek a considerably good performance deep learning architecture to run style transfer algorithm [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
Our boss found the idea of the paper "A Neural Algorithm of Artistic Style" amazing and thinks it could attract some of his customers, so he decided to set up a server providing a style-transfer service for them.
Several deep learning frameworks have implementations of this idea, such as TensorFlow, Torch, Caffe, etc. If we aim for the best performance, which implementation runs fastest? If we run the algorithm on a reasonably good CUDA device such as a GeForce GTX 1090 or better, is it possible to finish a pass through a VGG model in a few seconds? And if we wish to apply the state of the art of this idea, are all of these frameworks applicable?
Checking some benchmarks (https://github.com/soumith/convnet-benchmarks), I'd say that Nervana and Torch are the fastest frameworks.
If instead of speed we look at open-source contributions and paper implementations, I think Torch is the winner.
You can easily find neural-style algorithm implementations in Torch: Neural-Style and Fast Neural-Style

How dependency parse tree is used for sentiment analysis? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
With Google's announcement of the release of Parsey McParseface (SyntaxNet),
which is claimed to be the most accurate dependency parser, I want to understand how this parser can be used for more accurate sentiment analysis. Could someone share blogs, research papers, or tutorials that would help me understand the overall flow?
Good question. I'm no expert; in fact, I got intrigued when you asked it.
tl;dr: a more accurate dependency parser would allow one to propagate sentiment through a graph and thus better determine sentiment polarity, at least in theory.
From my reading, sentiment analysis using dependency-tree graphs propagates the prior sentiment of individual words (e.g. scores from a lexicon) up the tree to compose the overall sentiment polarity of the text.
This approach uses the composition of language (its grammatical structure) to determine sentiment. It is somewhat* opposed to a purely statistical approach (naive Bayes, logistic regression, neural networks) to understanding sentiment.
Here's the paper I scanned:
http://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS14/paper/viewFile/7869/7837
For a deeper exploration of what's possible, this might help:
https://arxiv.org/pdf/1401.6330.pdf
A more thorough introduction to dependency parsing, if you're interested: https://web.stanford.edu/~jurafsky/slp3/14.pdf
*Somewhat, in the sense that convolutional networks in particular, and RNNs as well, do learn a certain composition of language.
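The propagation idea described above can be sketched in a few lines of plain Python. The lexicon, the negation rule, and the tree below are all made up for illustration; real systems use much richer composition rules over the parser's actual output:

```python
# Propagate word-level prior sentiment up a dependency tree.
# A node is (word, [children]). Negation words flip the polarity of
# their subtree; every other node adds its own lexicon prior to the
# sum of its children's sentiment.

LEXICON = {"good": 1.0, "great": 2.0, "bad": -1.0}  # made-up priors
NEGATORS = {"not", "never"}

def sentiment(node):
    word, children = node
    score = LEXICON.get(word, 0.0) + sum(sentiment(c) for c in children)
    return -score if word in NEGATORS else score

# "not good movie": 'not' dominates 'good' in this toy tree,
# so the positive prior of 'good' is flipped.
tree = ("not", [("good", [("movie", [])])])
print(sentiment(tree))  # -1.0
```

This is where parser accuracy matters: if the parser attaches the negator to the wrong head, the flip is applied to the wrong subtree and the composed polarity comes out wrong.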

Is TensorFlow suitable for Recommendation Systems [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 7 years ago.
Improve this question
I read the blog post about TensorFlow being open-sourced.
In the tutorials and examples on the TensorFlow website, I see mostly classification problems (e.g. given an image, classify the digit written in it).
I am curious whether the software is also suitable for solving problems in recommendation systems.
For example, is it good for collaborative filtering / content-based filtering?
TensorFlow is great for deep learning, i.e. training large neural nets. Although it can be used for several other mathematical applications, such as PDEs, various classifiers, and recommendation systems, there does not seem to be much support for them yet.
This reddit thread might be a good place to start for searching libraries which are centred around recommendation systems.
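For a sense of what collaborative filtering involves, here is a minimal matrix-factorization sketch in NumPy (not TensorFlow; the ratings matrix, rank, and learning rate are all made up). The same squared-error objective is exactly the kind of loss TensorFlow's optimizers can minimize, which is why it is usable for recommendation even without dedicated libraries:

```python
import numpy as np

# Collaborative filtering as low-rank matrix factorization: learn user
# factors U and item factors V so that U @ V.T approximates the observed
# ratings. Made-up 4x3 ratings matrix; 0 marks "unrated" cells.
R = np.array([[5.0, 3.0, 0.0],
              [4.0, 0.0, 1.0],
              [1.0, 1.0, 5.0],
              [0.0, 1.0, 4.0]])
observed = R > 0

rng = np.random.default_rng(0)
k = 2  # number of latent factors
U = rng.normal(scale=0.1, size=(R.shape[0], k))
V = rng.normal(scale=0.1, size=(R.shape[1], k))

lr, reg = 0.02, 0.01
for _ in range(5000):
    # Squared-error gradient on observed cells only, plus L2 regularization.
    err = (U @ V.T - R) * observed
    U, V = (U - lr * (err @ V + reg * U),
            V - lr * (err.T @ U + reg * V))

final_loss = float((((U @ V.T - R) * observed) ** 2).sum())
print(final_loss)  # the observed ratings are closely approximated
```

Unobserved cells of `U @ V.T` then serve as predicted ratings. In TensorFlow one would express `U` and `V` as variables and let the framework compute these gradients automatically.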