How is a dependency parse tree used for sentiment analysis? [closed]

With Google's announcement of the release of Parsey McParseface (SyntaxNet), which is claimed to be the most accurate dependency parser, I want to understand how this parser can be used for more accurate sentiment analysis. Could someone share blogs, research papers, or tutorials that would help me understand the overall flow?

Good question. I'm not an expert; in fact I got intrigued when you asked the question.
tl;dr: a more accurate dependency parser would allow one to propagate sentiment through the parse graph and thus better determine sentiment polarity, at least in theory.
From my reading, sentiment analysis using dependency tree graphs propagates the independent (prior, i.e. the sentiment you might get from a lexicon) sentiment of words to compose the overall sentiment polarity of the text.
This approach uses the composition of language (its grammatical structure) to determine sentiment. This is somewhat* opposed to a statistical approach (naive Bayes, logistic regression, neural networks) to understanding sentiment.
Here's the paper I scanned:
http://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS14/paper/viewFile/7869/7837
For a deeper exploration of what's possible, this might help:
https://arxiv.org/pdf/1401.6330.pdf
A more thorough introduction to dependency parsing, if you're interested: https://web.stanford.edu/~jurafsky/slp3/14.pdf
*somewhat, in the sense that convolutional networks (and RNNs) do learn a certain composition of language.
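To make the idea concrete, here is a minimal sketch of propagating lexicon ("prior") sentiment up a dependency parse. spaCy is used as a stand-in parser and the toy lexicon values are made up for illustration; the same idea applies to any dependency parser, including SyntaxNet/Parsey McParseface.

```python
# Minimal sketch: compose sentence sentiment by propagating prior word
# sentiment up the dependency tree, flipping sign under negation.
import spacy

nlp = spacy.load("en_core_web_sm")

# Toy prior-polarity lexicon (assumed values, for illustration only).
LEXICON = {"good": 1.0, "great": 1.5, "bad": -1.0, "terrible": -1.5}

def node_sentiment(token):
    """A node's score is its own prior polarity plus its children's scores;
    a 'neg' dependent flips the sign of the whole subtree."""
    score = LEXICON.get(token.lemma_.lower(), 0.0)
    negated = any(child.dep_ == "neg" for child in token.children)
    score += sum(node_sentiment(child) for child in token.children
                 if child.dep_ != "neg")
    return -score if negated else score

def sentence_sentiment(text):
    doc = nlp(text)
    # Propagate from the root of each sentence's dependency tree.
    return sum(node_sentiment(sent.root) for sent in doc.sents)

print(sentence_sentiment("The movie was not good."))   # negative
print(sentence_sentiment("The acting was great."))     # positive
```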

Related

Is there a scientific field dedicated to the quantification of intelligent behavior? [closed]

One of the biggest struggles in ML research is the creation of objective functions that capture the researcher's goals. Especially when talking about generalizable AI, the definition of the objective function is very tricky.
This excellent paper for instance attempts to define an objective function to reward an agent's curiosity.
If we could measure intelligent behavior well, it would perhaps be possible to perform an optimization in which the parameters of a simulation such as a cellular automaton are optimized to maximize the emergence of increasingly intelligent behavior.
I vaguely remember having come across a group of cross-discipline researchers who were attempting to use the information theory concept of entropy to measure intelligent behavior but cannot find any resources about it now. So is there a scientific field dedicated to the quantification of intelligent behavior?
The field is called Integrated Information Theory, initially proposed by Giulio Tononi. It attempts to quantify the consciousness of systems by formally defining the phenomenological experience of consciousness and computing a value, Phi, meant as a proxy for "consciousness".

Advantages and Disadvantages of MXNet compared to other Deep Learning APIs [closed]

Recently I decided to learn MXNet, as some code I need to use is written with this API.
However, I would like to know the advantages and disadvantages of MXNet compared to the other deep learning libraries out there.
Perhaps the biggest reason for considering MXNet is its high-performance imperative API. This is one of the most important advantages of MXNet over other platforms. The imperative API with autograd makes it much easier and more intuitive to compose and debug a network. PyTorch also supports an imperative API, but MXNet is the only platform, AFAIK, that supports hybridization, which effectively allows your imperative model to be converted to a symbol for performance similar to the symbolic API. Here is a link to tutorials on Gluon, MXNet's imperative API: http://gluon.mxnet.io/
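For a sense of what hybridization looks like in practice, here is a minimal Gluon sketch (layer sizes and the input shape are arbitrary, chosen just for illustration):

```python
# Define a network imperatively with Gluon, then convert it to a cached
# symbolic graph with hybridize().
import mxnet as mx
from mxnet.gluon import nn

net = nn.HybridSequential()
net.add(nn.Dense(64, activation="relu"),
        nn.Dense(10))
net.initialize()

x = mx.nd.random.uniform(shape=(1, 20))
print(net(x))        # runs imperatively, easy to step through and debug

net.hybridize()      # compile to a symbolic graph for speed
print(net(x))        # same call, now executed through the cached graph
```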
Given that you're using example code, it is possible that the example was written using the symbolic API. You may notice MXNet's advantage in the symbolic API when training on many GPUs. Otherwise you won't notice much of a difference (except perhaps in some memory usage).
TensorFlow does have a one-year head start over MXNet and as a result has a larger user base, but it only supports the symbolic API (its imperative API is very new and only meant for experimentation), which makes it significantly harder to debug a network when you run into issues. However, MXNet has quickly caught up in features, and with the 1.0 release I don't think there is anything in TF that MXNet doesn't support.

Could someone explain the "programming" part of probabilistic programming more clearly? [closed]

Usually in the docs for probabilistic programming frameworks I can read a lot about MCMC but not very much about programming. Every example I see has only a very short and simple probabilistic program, usually about 5-10 lines of code if you don't count reading in the data and outputting the results. So it doesn't really look like programming.
As I understand it, I can write a probabilistic program to regularize the learning process, so the longer my probabilistic program is, the faster the calculation will be, the smaller the training data set I need, and the more correct the result I can get. Am I right?
For example, if I want to find a cat in a picture, I can write a probabilistic program that describes what cats look like and in what kinds of poses they can appear. And the more detailed my description is, the better the result will be?
Thanks,
Dmitry
To me, "probabilistic programming" just means you write your models down in a programming language with probability constructs. Stan gives you an imperative programming language with variables that denote random variables.
Stan's documentation has 200+ pages on programming in Stan, so I'm not sure what you're looking for. It covers everything from data types to parameterizations to user-defined functions. Like most intros and manuals, the examples tend to be short. If you want to see longer programs, look at the case studies or follow the user forums.
Larger models don't necessarily mean you need less data. The more information the model contains about the answer before you start (the prior), the less data you need. The more data you have, the finer-grained the inferences you can make.
I don't think you'll have much luck describing cats with a detailed hand-built model.
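For a concrete (if tiny) example of what such a program looks like, here is a sketch using PyMC as a stand-in for Stan; the data and prior values are made up for illustration. The point is that the variables denote random variables, and the program as a whole defines a joint distribution that the sampler then inverts against the data.

```python
# Minimal probabilistic program: priors, a likelihood, and MCMC inference.
import numpy as np
import pymc3 as pm

heights = np.array([1.72, 1.65, 1.80, 1.75, 1.68])  # toy observations

with pm.Model() as model:
    mu = pm.Normal("mu", mu=1.7, sigma=0.5)       # prior on the mean
    sigma = pm.HalfNormal("sigma", sigma=0.3)     # prior on the spread
    pm.Normal("obs", mu=mu, sigma=sigma, observed=heights)
    trace = pm.sample(1000, tune=1000)            # MCMC inference

print(pm.summary(trace))
```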

Difference between model-based testing and model-driven testing [closed]

After hours of searching on Google for the above topic, I am unable to contrast model-based testing and model-driven testing. Tons of definitions are out there, but there is no clear definition with a real-world example.
Can anyone please help me understand the difference between these two with the help of a real-world example?
I'm afraid there is no clear-cut difference between the two. First, because everybody uses a different terminology (there is no "standard" definition for these terms). Secondly, because IMO, both terms refer to the same concept (using models as part of the process of writing the tests for your system) and only differ regarding the importance of the role of models in the testing process.
To me, model-driven implies a stronger role of the models (i.e. models are used to derive the tests) than model-based (where models are used but maybe as an additional input in the test generation process).
At least, this is how I distinguish other "model-based" vs. "model-driven" concepts, which I tried to explain in more detail here: http://modeling-languages.com/clarifying-concepts-mbe-vs-mde-vs-mdd-vs-mda/

Project topic for a neural network for freshers? [closed]

I want to start working on a neural network for my final project. I want a topic that could be completed in 2-3 months of work and that is understandable for a fresher, as I am new to this topic and want to learn by doing this project. It should not be very tough to understand and start work on.
You could write a simple OCR using a Hopfield neural network.
A good start would be:
A comparative study of neural network algorithms applied to optical character recognition
Hopfield Networks: A Simple OCR Application
It is a relatively simple fun project.
It would be even easier if you could use Matlab and some of its modules. But even if you were to implement it in Java or some similar language, I think it should be doable in 3 months for a beginner.
In Matlab, you could start with the following:
Hopfield Neural Network
Hopfield Two Neuron Design
You will need the Neural Network Toolbox, which I think has to be purchased separately.
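If you'd rather prototype outside Matlab, here is a minimal Hopfield-network sketch in Python/NumPy; the two "patterns" are toy stand-ins for the character bitmaps you would use in a real OCR project.

```python
# Minimal Hopfield network: Hebbian training on bipolar (+1/-1) patterns,
# then recall of a noisy pattern by iterating the update rule.
import numpy as np

def train(patterns):
    """Hebbian learning: sum of outer products, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    """Synchronous updates until the state settles (or steps run out)."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

# Two toy "character" patterns of length 8.
patterns = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                     [ 1,  1,  1,  1, -1, -1, -1, -1]])
W = train(patterns)

noisy = patterns[0].copy()
noisy[0] *= -1                      # flip one bit to simulate noise
print(recall(W, noisy))             # should recover patterns[0]
```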
Your first question should be what you want to classify, not how. Depending on the problem, you can choose a fitting classifier. It's hard to decide on a detailed solution before knowing the actual problem.
Simple topics (depending on your personal background) are text, audio, or image analysis. OCR is quite typical (you can use the MNIST database for it; it's well researched, so you can compare your own results). To get a general idea of what applications are out there, you should also definitely have a look at the UCI database. It has all sorts of data.
The easiest type of neural network to understand and implement is a single-layer perceptron. To also classify non-linearly separable data (which is needed in most real-world scenarios), you can use a multi-layer perceptron with 3 layers (input/hidden/output).
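As a starting point, here is a minimal single-layer perceptron sketch in NumPy; the AND problem below is just a toy linearly separable dataset chosen for illustration.

```python
# Single-layer perceptron trained with the classic perceptron learning rule.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # inputs
y = np.array([0, 0, 0, 1])                        # AND labels

w = np.zeros(X.shape[1])
b = 0.0
lr = 0.1

for _ in range(20):                               # epochs
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        update = lr * (target - pred)             # perceptron learning rule
        w += update * xi
        b += update

print([(1 if xi @ w + b > 0 else 0) for xi in X])  # should match y
```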