Is there a scientific field dedicated to the quantification of intelligent behavior? [closed] - data-science

One of the biggest struggles in ML research is the creation of objective functions that capture the researcher's goals. Especially when talking about generalizable AI, defining the objective function is very tricky.
This excellent paper, for instance, attempts to define an objective function that rewards an agent's curiosity.
If we could measure intelligent behavior well, it would perhaps be possible to perform an optimization in which the parameters of a simulation such as a cellular automaton are optimized to maximize the emergence of increasingly intelligent behavior.
I vaguely remember having come across a group of cross-disciplinary researchers who were attempting to use the information-theoretic concept of entropy to measure intelligent behavior, but I cannot find any resources about it now. So, is there a scientific field dedicated to the quantification of intelligent behavior?

The field is called Integrated Information Theory, initially proposed by Giulio Tononi. It attempts to quantify the consciousness of systems by formally defining the phenomenological experience of consciousness and computing a value, Phi, meant as a proxy for "consciousness".
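For a flavor of what an entropy-based measure can look like in practice, here is a minimal sketch of my own (a toy illustration, not IIT's Phi, which is far harder to compute): it runs an elementary cellular automaton, much like the simulation the question imagines optimizing, and scores the run by the Shannon entropy of local state windows.

```python
import numpy as np

def shannon_entropy(samples: np.ndarray) -> float:
    """Shannon entropy (in bits) of the empirical distribution of samples."""
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def step(row: np.ndarray, rule: int = 110) -> np.ndarray:
    """One update of an elementary cellular automaton (Wolfram rule numbering)."""
    left, right = np.roll(row, 1), np.roll(row, -1)
    neighborhood = 4 * left + 2 * row + right      # 3-bit code, 0..7
    table = np.array([(rule >> i) & 1 for i in range(8)])
    return table[neighborhood]

rng = np.random.default_rng(0)
row = rng.integers(0, 2, size=256)
history = [row]
for _ in range(100):
    row = step(row)
    history.append(row)

# Score the run by the entropy of 8-cell windows: a crude complexity proxy.
grid = np.array(history)
windows = np.lib.stride_tricks.sliding_window_view(grid, 8, axis=1)
codes = windows.reshape(-1, 8) @ (2 ** np.arange(8))
print(f"window entropy: {shannon_entropy(codes):.2f} bits (max 8)")
```

A score like this could in principle serve as the objective in the parameter search the question describes, though raw entropy rewards noise as much as structure; measures such as Phi, or "empowerment" from the intrinsic-motivation literature, try to correct for exactly that.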


Could someone explain more clearly the "programming" part of probabilistic programming? [closed]

In the docs for probabilistic programming frameworks I can usually read a lot about MCMC but not very much about programming. Every example I see has only a very short and simple probabilistic program, usually about 5-10 lines of code if you don't count reading in the data and writing out the results. So it doesn't really look like programming.
As I understand it, I can write a probabilistic program to regularize the learning process, so the longer my probabilistic program is, the faster the calculation will be, the smaller the training data set I need, and the more correct a result I can get. Am I right?
For example, say I want to find a cat in a picture. I can write a probabilistic program that describes what cats look like and in what kinds of settings they can appear. And the more detailed my description is, the better the result will be?
Thanks,
Dmitry
To me, "probababilistic programming" just means you write your models down in a programming language with probabilitiy constructs. Stan gives you an imperative programming language with variables that denote random variables.
Stan's documentation has 200+ pages on programming in Stan, so I'm not sure what you're looking for. It covers everything from data types to parameterizations to user-defined functions. Like most intros and manuals, the examples tend to be short. If you want to see longer programs, look at the case studies or follow the user forums.
Larger models don't necessarily mean you need less data. The more information the model contains about the answer before you start (the prior), the less data you need. The more data you have, the finer-grained the inferences you can make.
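To make the prior-versus-data trade-off concrete, here is a small sketch of my own (not from the Stan manual) using the conjugate Beta-Binomial model, where the posterior after seeing `heads` successes in `flips` trials is Beta(a + heads, b + flips - heads):

```python
from scipy import stats

# Estimate a coin's bias theta after observing 7 heads in 10 flips.
heads, flips = 7, 10

priors = {"flat prior Beta(1,1)": (1, 1),
          "informative prior Beta(50,50)": (50, 50)}

for name, (a, b) in priors.items():
    posterior = stats.beta(a + heads, b + flips - heads)
    lo, hi = posterior.interval(0.95)
    print(f"{name}: mean {posterior.mean():.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
```

The informative prior yields a much tighter interval from the same ten flips; a longer program buys you nothing unless the extra lines encode genuine prior information.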
I don't think you'll have much luck describing cats with a detailed hand-built model.

How is a dependency parse tree used for sentiment analysis? [closed]

With Google's announcement of the release of Parsey McParseface (syntaxnet), which is claimed to be the most accurate dependency parser, I want to understand how this parser can be used for more accurate sentiment analysis. Could someone share some blogs, research papers, or tutorials that would help me understand the overall flow?
Good question. I'm not an expert; in fact, I got intrigued when you asked it.
tl;dr: more accurate dependency parsers would allow one to propagate sentiment through a graph and thus better determine sentiment polarity, at least in theory.
It seems from my reading that sentiment analysis using dependency tree graphs propagates the independent (prior, i.e. the sentiment you might get from a lexicon) sentiment of words to compose the overall sentiment polarity of the text.
This approach uses the composition of language (its grammatical structure) to determine sentiment. This is somewhat* opposed to a statistical (naive Bayes, logistic regression, neural networks) approach to understanding sentiment.
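As a toy illustration of that propagation idea (a sketch of my own, not the algorithm from the paper below), imagine lexicon priors composed bottom-up through a dependency tree, with a negator child flipping its head's polarity:

```python
# Toy lexicon of prior word sentiments; real lexicons are far larger.
LEXICON = {"good": 1.0, "great": 1.5, "bad": -1.0, "terrible": -1.5}
NEGATORS = {"not", "never", "no"}

class Node:
    def __init__(self, word, children=None):
        self.word = word
        self.children = children or []

def sentiment(node: Node) -> float:
    """Compose sentiment bottom-up: sum the children, flip on a negator child."""
    score = LEXICON.get(node.word, 0.0)   # prior sentiment of the head word
    flip = 1.0
    for child in node.children:
        if child.word in NEGATORS:
            flip = -1.0
        else:
            score += sentiment(child)
    return flip * score

# Dependency tree for "the movie was not good", with "not" attached to "good":
tree = Node("was", [Node("movie", [Node("the")]),
                    Node("good", [Node("not")])])
print(sentiment(tree))  # -1.0: negation flips the prior polarity of "good"
```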
Here's the paper I scanned:
http://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS14/paper/viewFile/7869/7837
For a deeper exploration of what's possible, this might help:
https://arxiv.org/pdf/1401.6330.pdf
A more thorough introduction to dependency parsing, if you're interested, is https://web.stanford.edu/~jurafsky/slp3/14.pdf
*Somewhat, in the sense that convolutional networks in particular do learn a certain composition of language, and so do RNNs.

Difference between model-based testing and model-driven testing [closed]

After hours of searching on Google for the above-mentioned topic, I am unable to contrast model-based testing and model-driven testing. There are tons of definitions, but no clear definition with a real-world example.
Can anyone please help me understand the difference between the two with the help of a real-world example?
I'm afraid there is no clear-cut difference between the two. First, because everybody uses different terminology (there is no "standard" definition for these terms). Second, because, IMO, both terms refer to the same concept (using models as part of the process of writing the tests for your system) and differ only in the importance of the role models play in the testing process.
To me, model-driven implies a stronger role of the models (i.e. models are used to derive the tests) than model-based (where models are used but maybe as an additional input in the test generation process).
At least, this is how I distinguish other "model-based" vs. "model-driven" concepts, as I tried to explain in more detail here: http://modeling-languages.com/clarifying-concepts-mbe-vs-mde-vs-mdd-vs-mda/
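To make the contrast concrete, here is a toy sketch of my own of the "model-driven" end of the spectrum: the model (a tiny turnstile state machine) is the single source from which test cases are derived mechanically.

```python
from itertools import product

# The model: (state, event) -> next state, for a coin-operated turnstile.
MODEL = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def derive_tests(model, start="locked", depth=2):
    """Enumerate every event sequence up to `depth`, with its expected state."""
    events = sorted({event for _, event in model})
    for sequence in product(events, repeat=depth):
        state = start
        for event in sequence:
            state = model[(state, event)]
        yield sequence, state

for sequence, expected in derive_tests(MODEL):
    print(f"events {sequence} -> expect state {expected!r}")
```

On the "model-based" end you might instead keep the same state machine only as an oracle, checking expected states inside tests you still write by hand.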

If I'm the only developer on a project, do I still need to use encapsulation? [closed]

I always hear that we need to encapsulate whenever we write object-oriented code. If I'm the only developer on a project, do I still need to use encapsulation?
One way to put an answer: Encapsulation, conceptually, exists for writing better, safer, less error-prone code. It doesn't exist, primarily, to facilitate teams working together on code (that might be a side effect, but that's not the purpose).
So the goods that encapsulation seeks to foster scale from one coder to many, and they do not really depend on the number of coders, although they may find stronger expression the larger the project and team are.
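To make that concrete, here is a minimal sketch (my own example) of the kind of bug encapsulation prevents even on a one-person project:

```python
class Account:
    def __init__(self, balance: int = 0):
        self._balance = balance  # private by convention: change it via methods

    def withdraw(self, amount: int) -> None:
        # The invariant (balance never goes negative) lives in ONE place.
        if amount <= 0 or amount > self._balance:
            raise ValueError("invalid withdrawal")
        self._balance -= amount

    @property
    def balance(self) -> int:
        return self._balance

acct = Account(100)
acct.withdraw(30)
print(acct.balance)  # 70
# Six months from now you cannot absent-mindedly set acct.balance = -50;
# the read-only property raises AttributeError, so the invariant holds.
```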
Encapsulation is there for a reason.
Someone has to maintain and manage your code after you are done, right? What if the project gets bigger and you get team members?
So, the answer is "yes", it is always best to use encapsulation whenever possible.
The fact that you are asking this question makes me wonder whether you have actually grasped the value of encapsulation as a means to reduce, and thus deal with, complexity.
My theoretical computer science professor used to tell me that, in the end, if you think of the whole binary representation of a program, any program is just a number. Very big indeed, but only a number. And that is true: every construct we use beyond 0 and 1 (C++, Java, Python, functional programming, object-oriented programming, aspect-oriented programming, etc.) exists only because we need more abstract means to arrive at the one number we need.

Procedural Design documentation strategies [closed]

After reading the definition of procedural design (http://en.wikipedia.org/wiki/Design_document) and searching for a few example diagrams, I have had trouble finding out more about what procedural design means beyond this diagram (http://www.kelso.scotborders.sch.uk/departments/computing/resources/mindmaps/Procedural%20program%20design.gif). Typically, when is this type of documentation necessary? Is it when there's a specific algorithm used in the application?
This is most often used when you have a few very similar constructs that are used really often. In a way, SQL is a "procedural design", since it limits you to tables and columns and a handful of operations that can be applied to the "data model" (= the database).
Code generators thrive in this area, since they have a large but simple input and generate a lot of code that would be extremely tedious and error-prone to write by hand. In a similar way, you can generate "documentation" for this, which is usually a big waste of time since it will be enormous in volume and contain very little information about how the system works.
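As a toy illustration of that expansion (my own sketch), a schema of a few lines can blow up into repetitive class definitions nobody should write, or document, by hand:

```python
# A large-but-simple input: entity names mapped to their fields.
SCHEMA = {
    "Customer": ["name", "email"],
    "Order": ["customer_id", "total"],
}

def generate_class(name: str, fields: list) -> str:
    """Emit the source of a plain data class for one schema entry."""
    lines = [f"class {name}:",
             f"    def __init__(self, {', '.join(fields)}):"]
    lines += [f"        self.{field} = {field}" for field in fields]
    return "\n".join(lines)

for name, fields in SCHEMA.items():
    print(generate_class(name, fields), end="\n\n")
```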
[EDIT] In information theory, the amount of information in a message is the amount of "surprise" you get per bit. So one page of "1,000-foot view" that is tightly packed with information, giving you a compressed introduction to how the system is designed and how to find your way around, is worth more than 1,000 pages of documentation generated from a data model.