Can anyone please explain the equivalence or similarity of entropy in physics and entropy in information systems in layman's terms? I'm no mathematician, but I'm still trying to understand the concepts. I have an idea of entropy in physics, but I don't understand what people mean by entropy in information systems, or what its uses and applications are. Thanks for your time.
Information entropy (also called Shannon Information) is the measure of "surprise" about a new bit of information. A system with high entropy has a large surprise. Low entropy, little surprise.
Systems with high entropy are difficult to compress, because every bit is surprising and so has to be recorded.
Systems with low entropy are easy to compress, because you can predict what comes next given what you've seen before.
Counter-intuitively, this means that a TV showing static (white noise) is presenting a lot of information because each frame is random, while a TV show has comparatively little information because most frames can be mostly predicted based on the previous frame. Similarly, a good random number generator is defined by having very high entropy/information/surprise.
It also means that the amount of entropy is highly dependent on context. The digits of pi have very high entropy because an arbitrary one is impossible to predict (assuming pi is normal). But if I know that you will be sending me the digits of pi, then the digits themselves have zero information because I could have computed all of them myself.
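To make the compressibility point concrete, here is a minimal Python sketch (my own illustration, not part of the explanation above) that estimates Shannon entropy in bits per byte from byte frequencies. Note that this simple frequency-based estimate ignores sequential structure, so it only approximates true predictability:

```python
import math
import os
from collections import Counter

def shannon_entropy_bits(data: bytes) -> float:
    """Estimate the Shannon entropy of a byte string, in bits per byte."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A repetitive (predictable) message has low entropy...
print(shannon_entropy_bits(b"aaaaaaaaaaaaaaab"))    # roughly 0.34 bits/byte

# ...while bytes from a good random source approach the 8 bits/byte maximum.
print(shannon_entropy_bits(os.urandom(100_000)))    # close to 8 bits/byte
```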
The reason all of this plays into cryptography is that the goal of a cryptographic system is to generate output that is indistinguishable from random, which is to say that it takes low-entropy information and produces high-entropy output. The output of a cryptographic algorithm can have no more entropy than its highest-entropy input. Systems whose highest-entropy input is a human-chosen password are going to be very poor crypto systems because passwords are very predictable (have little information; low entropy). A good crypto system will include a high-entropy value such as a well-seeded and unpredictable random number. To the extent that this random number is predictable (has low entropy), the system is weakened.
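As a small illustration of the "well-seeded and unpredictable random number" point (my example, not part of the answer), Python distinguishes between its cryptographic and non-cryptographic generators:

```python
import secrets
import random

# `secrets` draws from the OS CSPRNG and is intended for high-entropy
# cryptographic values such as keys and tokens.
key = secrets.token_bytes(32)

# `random` is a deterministic PRNG seeded from comparatively little entropy;
# its output is predictable if the state is known, so it is NOT suitable
# for cryptographic use.
weak = random.getrandbits(256)
```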
You must be careful at this point not to over-analogize between thermodynamic and information entropy. In particular, one is almost exclusively interested in entropy gradients in thermodynamics, while entropy is treated as an absolute value in information theory (measured in bits). Conversely, information entropy is sometimes incorrectly thought of as a form of energy that is "depleted" when generating random numbers. This is not true in any useful way, and certainly not like heat energy.
Also, how cryptographers use the word entropy isn't precisely the same as how Shannon used it. See Guesswork is not a substitute for Entropy for one discussion of this.
For how this does and doesn't apply to thermodynamics more broadly (and particularly how it applies to the famous Maxwell's Demon), I recommend the Wikipedia article comparing the two kinds of entropy.
Negative sampling in 'word2vec' obviously improves the training speed, but why does it "make the word representations significantly more accurate"?
I didn't find the relevant discussion or details. Can you help me?
It's hard to describe what the author of that claim may have meant, without the full context of where it appeared. For example, word-vectors can be optimized for different tasks, and the same options that make word-vectors better for one task might make them worse for another.
One popular way to evaluate word-vectors since Google's original paper & code release is a set of word-analogy problems. These give a nice repeatable summary 'accuracy' percentage, so the author might have meant that for a particular training corpus, on that particular problem, holding other things constant, the negative-sampling mode had a higher 'accuracy' score.
But that wouldn't mean it's always better, with any corpus, or for any other downstream evaluation of quality or accuracy-on-some-task.
Projects with larger corpuses, and especially larger vocabularies (more unique words), tend to prefer the negative-sampling mode. The hierarchical-softmax alternative mode becomes slower as the vocabulary becomes larger, while the negative-sampling mode does not.
And, having a large, diverse corpus, with many subtly-different usage examples of all interesting words, is the most important contributor to really good word-vectors.
So, simply by making larger corpuses manageable, within a limited amount of training time, negative-sampling could be seen as indirectly enabling improved word-vectors - because corpus size is such an important factor.
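If you want to compare the two modes yourself, here is a minimal sketch using gensim (my example, not from the paper; parameter names assume a recent gensim 4.x release and the toy corpus shipped with the library):

```python
from gensim.models import Word2Vec
from gensim.test.utils import common_texts  # tiny toy corpus bundled with gensim

# Negative-sampling mode: hs=0, negative > 0 (5-20 is typical for small corpora).
w2v_neg = Word2Vec(sentences=common_texts, vector_size=100, window=5,
                   min_count=1, sg=1, hs=0, negative=5, epochs=20)

# Hierarchical-softmax mode: hs=1, negative=0. Its per-example cost grows with
# vocabulary size, which is why it slows down on large vocabularies.
w2v_hs = Word2Vec(sentences=common_texts, vector_size=100, window=5,
                  min_count=1, sg=1, hs=1, negative=0, epochs=20)
```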
I'm trying to find an alternative to QR codes (I'd also be willing to accept an entirely novel solution and implement it myself) that meets certain specifications.
First, the codes will often end up on thin pipes, and so need to be readable around a cylinder. The advantage to this is that the effect on the image from wrapping it around a cylinder is easy to express geometrically, and the codes will never be placed on a very irregular shape.
Second, read accuracy must be very high, as any read mistake would be extremely costly. If this means larger codes with more redundancy for better error correction, so be it.
Third, the codes must be readable by the average smartphone camera from a few inches away.
Fourth, each code needs a storage capacity of around half a kilobyte.
Do you know of such a code?
The Data Matrix Rectangular Extension (DMRE) improves upon the standard set of rectangular Data Matrix symbol sizes in an algorithmically compatible manner, thus increasing the range of suitable applications with no real downsides.
Reliable cylindrical marking is a primary use case.
Regardless of symbology you will be unable to approach sufficient data density to achieve 0.5KB of binary data in a single compact, narrow symbol scanned using a standard camera phone. However, most 2D symbologies (DMRE included) support a feature called Structured Append that allows chaining of multiple symbols that can be scanned in any order to produce a single read when all components are accounted for.
If the data to be encoded is known to be highly structured (e.g. mostly numeric or alphanumeric), the internal encoding modes of Data Matrix pack it more densely than general binary data. For example, the largest DMRE symbol (26×64) holds up to 236 numeric characters, roughly 175 alphanumeric characters, but only 116 bytes of binary data.
If the default error recovery rate is insufficient then including a checksum in the data may be appropriate.
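As a rough back-of-the-envelope sketch (mine, using the 116-byte capacity figure quoted above and Python's standard library), here is how you might estimate the number of chained Structured Append symbols for a 512-byte payload and attach a CRC32 to each chunk:

```python
import math
import zlib

PAYLOAD_SIZE = 512          # ~0.5 KB per code, as required
CHUNK_CAPACITY = 116        # approx. binary capacity of the largest (26x64) DMRE symbol

payload = bytes(range(256)) * 2               # stand-in for the real 512-byte payload
n_symbols = math.ceil(PAYLOAD_SIZE / CHUNK_CAPACITY)
print(f"{n_symbols} chained symbols needed")  # 5 symbols for pure binary data

chunks = [payload[i:i + CHUNK_CAPACITY]
          for i in range(0, len(payload), CHUNK_CAPACITY)]
# Extra integrity check on top of the symbology's Reed-Solomon error correction.
# In practice the 4 checksum bytes per chunk must also fit within the capacity.
checksums = [zlib.crc32(chunk) for chunk in chunks]
```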
DMRE has just been voted to be accepted as an ISO/IEC project and will likely become an international standard enjoying broad hardware and software support in due course.
Another option may be to investigate PDF417, which has a broader range of symbol sizes; however, its data density is somewhat lower than Data Matrix.
DMRE references: AIM specification and explanatory notes.
I have a set of data, 2D matrices (like grey-scale pictures), and I use a CNN as a classifier.
I would like to know whether there is any study or experience on the accuracy impact of changing the encoding from the traditional encoding.
I suppose there is an impact; the question is rather which transformations of the encoding leave the accuracy invariant and which ones degrade it.
To clarify, this mainly concerns the quantization process that turns the raw data into input data.
EDIT:
Quantizing the raw data into input data is already a pre-processing step that adds or removes some features (even if minor ones). The impact of this quantization process on accuracy in real DNN computation does not seem very clear.
Maybe some research is available.
I'm not aware of any research specifically dealing with quantization of input data, but you may want to check out some related work on quantization of CNN parameters: http://arxiv.org/pdf/1512.06473v2.pdf. Depending on what your end goal is, the "Q-CNN" approach may be useful for you.
My own experience with using various quantizations of the input data for CNNs has been that there's a heavy dependency between the degree of quantization and the model itself. For example, I've played around with using various interpolation methods to reduce image sizes and reducing the color palette size, and in the end, I discovered that each variant required a different tuning of hyper-parameters to achieve optimal results. Generally, I found that minor quantization of data had a negligible impact, but there was a knee in the curve where throwing away additional information dramatically impacted the achievable accuracy. Unfortunately, I'm not aware of any way to determine what degree of quantization will be optimal without experimentation, and even deciding what's optimal involves a trade-off between efficiency and accuracy which doesn't necessarily have a one-size-fits-all answer.
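For concreteness, here is the kind of uniform grey-level quantization I mean, as a hypothetical numpy sketch (the stand-in data, level choices and helper function are mine, not from any study):

```python
import numpy as np

def quantize(images: np.ndarray, levels: int) -> np.ndarray:
    """Map uint8 images (0-255) onto `levels` evenly spaced grey values."""
    step = 256 / levels
    return (np.floor(images / step) * step + step / 2).astype(np.uint8)

rng = np.random.default_rng(0)
batch = rng.integers(0, 256, size=(8, 28, 28), dtype=np.uint8)  # stand-in data

for levels in (256, 64, 16, 4):
    q = quantize(batch, levels)
    # Train/evaluate the same CNN on `q` here and record accuracy per `levels`
    # to locate the knee in the accuracy-vs-quantization curve.
    print(levels, np.unique(q).size)
```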
On a theoretical note, keep in mind that CNNs need to be able to find useful, spatially-local features, so it's probably reasonable to assume that any encoding that disrupts the basic "structure" of the input would have a significantly detrimental effect on the accuracy achievable.
In usual practice -- a discrete classification task in a classic implementation -- it will have no effect. However, the critical point is in the initial computations for back-propagation. The classic definition depends only on strict equality of the predicted and ground-truth classes: a simple right/wrong evaluation. Changing the class coding has no effect on whether or not a prediction is equal to the training class.
However, this function can be altered. If you change the code to have something other than a right/wrong scoring, something that depends on the encoding choice, then encoding changes can most definitely have an effect. For instance, if you're rating movies on a 1-5 scale, you likely want 1 vs 5 to contribute a higher loss than 4 vs 5.
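A toy numeric illustration of that difference (hypothetical code, just to make the point explicit):

```python
# With 0/1 scoring, the class encoding is irrelevant; with a distance-based
# score on a 1-5 rating scale, the encoding choice starts to matter.
predicted, truth = 4, 5

zero_one_loss = 0 if predicted == truth else 1   # any wrong class costs the same
ordinal_loss = abs(predicted - truth)            # 4-vs-5 costs less than 1-vs-5

print(zero_one_loss, ordinal_loss)   # 1, 1  (but 1-vs-5 would give 1, 4)
```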
Does this reasonably deal with your concerns?
I see now. My answer above is useful ... but not for what you're asking. I had my eye on the classification encoding; you're wondering about the input.
Please note that asking for off-site resources is a classic off-topic question category. I am unaware of any such research -- for what little that is worth.
Obviously, there should be some effect, as you're altering the input data. The effect would be dependent on the particular quantization transformation, as well as the individual application.
I do have some limited-scope observations from general big-data analytics.
In our typical environment, where the data were scattered with some inherent organization within their natural space (F dimensions, where F is the number of features), we often use two simple quantization steps: (1) Scale all feature values to a convenient integer range, such as 0-100; (2) Identify natural micro-clusters, and represent all clustered values (typically no more than 1% of the input) by the cluster's centroid.
This speeds up analytic processing somewhat. Given the fine-grained clustering, it has little effect on the classification output. In fact, it sometimes improves the accuracy minutely, as the clustering provides wider gaps among the data points.
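A minimal sketch of those two steps, assuming scikit-learn is available (the function choices and cluster count here are illustrative, not our exact pipeline):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 8))        # stand-in data, F = 8 features

# Step 1: scale every feature to a convenient integer range (0-100).
X_scaled = np.rint(MinMaxScaler((0, 100)).fit_transform(X))

# Step 2: find micro-clusters (here ~1% of the points) and snap each point
# to its cluster's centroid.
kmeans = KMeans(n_clusters=100, n_init=10, random_state=0).fit(X_scaled)
X_quantized = kmeans.cluster_centers_[kmeans.labels_]
```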
Take with a grain of salt, as this is not the main thrust of our efforts.
This is for self-study of N-dimensional system of linear homogeneous ordinary differential equations of the form:
dx/dt=Ax
where A is the coefficient matrix of the system.
I have learned that you can check for stability by determining if the real parts of all the eigenvalues of A are negative. You can check for oscillations if there are any purely imaginary eigenvalues of A.
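For concreteness, that eigenvalue check is only a few lines in numpy (the example matrix below is an arbitrary choice of mine):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])            # coefficient matrix of dx/dt = A x

eigvals = np.linalg.eigvals(A)
stable = np.all(eigvals.real < 0)       # asymptotically stable?
oscillatory = np.any(np.isclose(eigvals.real, 0) & (eigvals.imag != 0))  # purely imaginary pair?

print(eigvals, stable, oscillatory)
```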
The author in the book I'm reading then introduces the Routh-Hurwitz criterion for detecting stability and oscillations of the system. This seems to be a more efficient computational short-cut than calculating eigenvalues.
What are the advantages of using Routh-Hurwitz criteria for stability and oscillations, when you can just find the eigenvalues quickly nowadays? For instance, will it be useful when I start to study nonlinear dynamics? Is there some additional use that I am completely missing?
Wikipedia entry on RH stability analysis has stuff about control systems, and ends up with a lot of equations in the s-domain (Laplace transforms), but for my applications I will be staying in the time-domain for the most part, and just focusing fairly narrowly on stability and oscillations in linear (or linearized) systems.
My motivation: it seems easy to calculate eigenvalues on my computer, and the Routh-Hurwitz criterion comes off as sort of anachronistic, the sort of thing that might save me a lot of time if I were doing this by hand, but not very helpful for doing analysis of small-fry systems via Matlab.
Edit: I've asked this at Math Exchange, which seems more appropriate:
https://math.stackexchange.com/questions/690634/use-of-routh-hurwitz-if-you-have-the-eigenvalues
There is an accepted answer there.
This is just legacy educational curriculum that fell way behind the actual computational age. Routh-Hurwitz gives a very nice theoretical basis for parametrizing root positions and links to much more abstract math.
However, for control purposes it is just a nice trick with no practical value, except maybe for simple transfer functions with one or two unknown parameters. It had real value when computing the roots of a polynomial was expensive or even manual. Today, even polynomial root finding is based on forming the companion matrix and computing its eigenvalues. In fact, you can basically form a meshgrid and check the stability surface by plotting the largest real part of the roots in a few minutes.
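As a sketch of that meshgrid idea (the characteristic polynomial and parameter ranges below are arbitrary illustrations of mine):

```python
# Sweep two unknown parameters (k1, k2) of a characteristic polynomial and
# plot the largest real part of its roots; the region where it is negative
# is the stable region.
import numpy as np
import matplotlib.pyplot as plt

k1, k2 = np.meshgrid(np.linspace(0, 10, 200), np.linspace(0, 10, 200))
largest_real = np.empty_like(k1)

for i in range(k1.shape[0]):
    for j in range(k1.shape[1]):
        # Example polynomial s^3 + 2 s^2 + k1 s + k2 = 0;
        # np.roots works by forming the companion matrix internally.
        roots = np.roots([1.0, 2.0, k1[i, j], k2[i, j]])
        largest_real[i, j] = roots.real.max()

plt.contourf(k1, k2, largest_real, levels=20)
plt.colorbar(label="max Re(root)")    # stable where max Re(root) < 0
plt.xlabel("k1")
plt.ylabel("k2")
plt.show()
```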
Decision problems are not suited for use in evolutionary algorithms since a simple right/wrong fitness measure cannot be optimized/evolved. So, what are some methods/techniques for converting decision problems to optimization problems?
For instance, I'm currently working on a problem where the fitness of an individual depends very heavily on the output it produces. Depending on the ordering of genes, an individual either produces no output or perfect output - no "in between" (and therefore, no hills to climb). One small change in an individual's gene ordering can have a drastic effect on the fitness of an individual, so using an evolutionary algorithm essentially amounts to a random search.
Some literature references would be nice if you know of any.
Apply the candidate to multiple inputs and examine the percentage of correct answers.
True, a right/wrong fitness measure cannot evolve towards more rightness, but an algorithm can nonetheless apply a mutable function to whatever input it takes to produce a decision which will be right or wrong. So, you keep mutating the algorithm, and for each mutated version of the algorithm you apply it to, say, 100 different inputs, and you check how many of them it got right. Then, you select those algorithms that gave more correct answers than others. Who knows, eventually you might see one which gets them all right.
There are no literature references, I just came up with it.
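Here is a minimal sketch of that idea (everything in it, including the toy decision rule, the genome layout and the `decide` helper, is hypothetical):

```python
import random

random.seed(0)
TESTS = [(x, x % 3 == 0) for x in range(100)]   # (input, correct yes/no answer)

def decide(genome, x):
    # Toy decision procedure parameterised by the genome: "x % modulus == remainder".
    modulus, remainder = genome
    return x % modulus == remainder

def fitness(genome):
    # Count correct answers over all 100 test inputs (0..100), not just right/wrong,
    # which gives the search a gradient to climb.
    return sum(decide(genome, x) == answer for x, answer in TESTS)

def mutate(genome):
    modulus, remainder = genome
    return [max(1, modulus + random.choice((-1, 0, 1))),
            max(0, remainder + random.choice((-1, 0, 1)))]

best = [7, 2]
for _ in range(2000):
    child = mutate(best)
    if fitness(child) >= fitness(best):   # accept equal fitness to allow drift
        best = child

print(best, fitness(best))   # a perfect rule here would be [3, 0] with fitness 100
```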
Well, I think you must work on your fitness function.
When you say that some individuals are closer to a perfect solution, can you identify those solutions based on their genetic structure?
If you can do that, a program could do it too, and so you should rate the individual not on its output but on its structure.