Pros and cons of binary representation in a genetic algorithm - optimization

I'm solving the problem of forming teams of players for a game, using a genetic algorithm.
Given a pool of players to split, I represent an individual in the population as a row of teams, where each team is a list or array of player numbers from the pool. I translate each number into a binary encoding, as in the "original" genetic algorithm. But now I've started to think about the advantages and disadvantages of a binary representation of the individuals.
My thinking was that with binary strings I can use the very common mutation and crossover operators: just flip bits and swap tails, respectively. But I can do the same with plain integers, simply by writing a manual mutation function.
So, the question is: what are the benefits and drawbacks of binary versus integer-valued representations? What is the most "true" way to use a GA?
I checked the original book by the GA's author, John Holland, and there he uses a binary representation, but I haven't found (yet, probably) information about which representation fits best.
In this answer, one drawback of using float numbers is shown: after crossover, the offspring of two parents can end up far away from them. But in my case I don't need to use float numbers.
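For concreteness, here is roughly what the two options look like in Python (a minimal sketch; the pool size, team size and the swap mutation are placeholders I made up for illustration, not part of the actual problem):

    import random

    POOL_SIZE = 12   # assumed pool of 12 players
    TEAM_SIZE = 4    # split into teams of 4; team boundaries are implicit

    def random_individual():
        """An individual: a permutation of player numbers; every TEAM_SIZE slice is one team."""
        players = list(range(POOL_SIZE))
        random.shuffle(players)
        return players

    def mutate_swap(individual, rate=0.1):
        """Integer-level mutation: swap two players between positions.
        This always yields a valid assignment, which a blind bit-flip on the
        binary encoding would not guarantee."""
        child = individual[:]
        if random.random() < rate:
            i, j = random.sample(range(POOL_SIZE), 2)
            child[i], child[j] = child[j], child[i]
        return child

    def to_binary(individual, bits=4):
        """The binary alternative: concatenate fixed-width encodings of each player number."""
        return [int(b) for idx in individual for b in format(idx, "0{}b".format(bits))]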

Related

Why does differential evolution work so well?

What is the idea behind the mutation in differential evolution and why should this kind of mutation perform well?
I do not see any good geometric reason behind it.
Could anyone point me to some technical explanation of this?
Like all evolutionary algorithms, DE uses a heuristic, so my explanation is going to be a bit hand-wavy. What DE is trying to do, like all evolutionary algorithms, is to do a random search that's not too random. DE's mutation operator first computes the vector between two random members of the population, then adds that vector to a third random member of the population. This works well because it uses the current population as a way of figuring out how large of a step to take, and in what direction. If the population is widely dispersed, then it's reasonable to take big steps; if it's tightly concentrated, then it's reasonable to take small steps.
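As a rough sketch of that operator (Python; the scale factor F and the list-of-lists population layout are just illustrative choices, not the only form DE takes):

    import random

    def de_rand_1_mutation(population, F=0.8):
        """DE 'rand/1' mutation as described above: take the difference between two
        random members and add it, scaled by F, to a third random member.
        population is a list of real-valued vectors (lists of floats)."""
        r1, r2, r3 = random.sample(population, 3)
        return [x3 + F * (x1 - x2) for x1, x2, x3 in zip(r1, r2, r3)]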
There are many reasons DE works better than Goldberg's GA, but focusing on the variation operators I'd say that the biggest difference is that DE uses real-coded variables and GA uses binary encoding. When optimizing on a continuous space, binary encoding is not a good choice. This has been known since the early 1990s, and one of the first things to come out of the encounter between the primarily German Evolution Strategy community and the primarily American Genetic Algorithm community was Deb's Simulated Binary Crossover. This operator acts like the GA's crossover operator, but on real-coded variables.
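For illustration, here is a minimal sketch of SBX on a single real variable (this is one common formulation, not a definitive implementation; the distribution index eta controls how close the children stay to their parents):

    import random

    def sbx(p1, p2, eta=2.0):
        """Simulated Binary Crossover (Deb) on one real-valued variable.
        The spread factor beta is drawn so that, much like one-point crossover
        on a binary string, offspring tend to stay near the parents."""
        u = random.random()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
        c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
        return c1, c2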

What is the right way to do crossover when using a GA to find the minimum of a one-variable function, like sin(x)^2?

I am encoding the interval [x:y] into binary codes like 10101111, so the population looks like [[1,0,1,1],[0,1,0,1]].
I defined the fitness function directly using the value of the function (sin(x)^2).
For selection I am using tournament selection, and for crossover just a simple exchange of part of the chromosome, like this: 1(10)0 and 0(01)1 -> 1(01)0 and 0(10)1.
For mutation I am using bit inversion.
The algorithm kind of works: it sometimes finds the global minimum and sometimes only a local one. But I don't see what crossover contributes in this problem, because the structure of 'x' gets broken every time (I think). I don't know why, and whether this is even the right way to code the crossover, or maybe the encoding part is wrong.
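For reference, this is roughly what my decode and fitness steps look like (a sketch; the interval endpoints below are placeholders for my actual [x:y]):

    import math

    def decode(bits, lo=-5.0, hi=5.0):
        """Map a bit list such as [1, 0, 1, 1] to a real x in [lo, hi].
        lo and hi are placeholders for the [x:y] interval;
        the resolution is (hi - lo) / (2**len(bits) - 1)."""
        value = int("".join(str(b) for b in bits), 2)
        return lo + (hi - lo) * value / (2 ** len(bits) - 1)

    def fitness(bits):
        """Value to minimise: sin(x)^2 of the decoded x."""
        return math.sin(decode(bits)) ** 2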
I'm afraid that there isn't a "right way" to do crossover.
There are many crossover operators (e.g. "Comparison of a Crossover Operator in Binary-coded Genetic Algorithms" by Stjepan Picek and Marin Golub) that can be used in a binary-coded genetic algorithm, but:
- depending on the properties of the problem, one or another crossover operator will give better results;
- every crossover operator has its advantages and drawbacks, so choosing one ultimately comes down to one's requirements and the experiments undertaken;
- in many situations uniform and two-point crossover are good choices (see the sketch at the end of this answer).
Crossover is the major exploratory mechanism of the genetic algorithm, but the driving force behind GA is the cooperation between selection, crossover and mutation (mutation prevents convergence of the population and introduces variation).
Usually a mutation-only approach doesn't have enough exploration strength to reach the minimum, and its success is largely due to the distribution of solutions in the initial population.
For continuous function optimization you should also check differential evolution.
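To make the uniform and two-point crossover suggestion concrete, here is a minimal sketch on equal-length bit lists (the operators are standard; the implementation details are just one reasonable choice):

    import random

    def two_point_crossover(a, b):
        """Swap the segment between two random cut points (needs length >= 3)."""
        i, j = sorted(random.sample(range(1, len(a)), 2))
        return a[:i] + b[i:j] + a[j:], b[:i] + a[i:j] + b[j:]

    def uniform_crossover(a, b, p=0.5):
        """For each position, independently decide whether to swap the parents' bits."""
        c1, c2 = [], []
        for x, y in zip(a, b):
            if random.random() < p:
                x, y = y, x
            c1.append(x)
            c2.append(y)
        return c1, c2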

Converting decision problems to optimization problems? (evolutionary algorithms)

Decision problems are not suited for use in evolutionary algorithms since a simple right/wrong fitness measure cannot be optimized/evolved. So, what are some methods/techniques for converting decision problems to optimization problems?
For instance, I'm currently working on a problem where the fitness of an individual depends very heavily on the output it produces. Depending on the ordering of genes, an individual either produces no output or perfect output - no "in between" (and therefore, no hills to climb). One small change in an individual's gene ordering can have a drastic effect on the fitness of an individual, so using an evolutionary algorithm essentially amounts to a random search.
Some literature references would be nice if you know of any.
Apply the algorithm to multiple inputs and examine the percentage of correct answers.
True, a right/wrong fitness measure cannot evolve towards more rightness, but an algorithm can nonetheless apply a mutable function to whatever input it takes to produce a decision which will be right or wrong. So, you keep mutating the algorithm, and for each mutated version of the algorithm you apply it to, say, 100 different inputs, and you check how many of them it got right. Then, you select those algorithms that gave more correct answers than others. Who knows, eventually you might see one which gets them all right.
There are no literature references; I just came up with it.
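A minimal sketch of that idea (candidate and test_cases are placeholders for whatever the real problem provides):

    def fraction_correct(candidate, test_cases):
        """Graded fitness for a decision problem: the share of inputs answered correctly.
        candidate is any callable mapping an input to True/False; test_cases is a
        list of (input, expected) pairs, e.g. the 100 inputs mentioned above."""
        correct = sum(1 for x, expected in test_cases if candidate(x) == expected)
        return correct / len(test_cases)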
Well, I think you must work on your fitness function.
When you say that some individuals are closer to a perfect solution, can you identify these solutions based on their genetic structure?
If you can do that, a program could do that too, so you shouldn't rate an individual based on its output but on its structure.

Difference between Gene Expression Programming and Cartesian Genetic Programming

Something pretty annoying in evolutionary computing is that mildly different and overlapping concepts tend to pick dramatically different names. My latest confusion because of this is that gene-expression-programming seems very similar to cartesian-genetic-programming.
(how) Are these fundamentally different concepts?
I've read that indirect encoding of GP instructions is an effective technique (both GEP and CGP do that). Has some sort of consensus been reached that indirect encoding has made classic tree-based GP outdated?
Well, it seems that there is some difference between gene expression programming (GEP) and cartesian genetic programming (CGP or what I view as classic genetic programming), but the difference might be more hyped up than it really ought to be. Please note that I have never used GEP, so all of my comments are based on my experience with CGP.
In CGP there is no distinction between genotype and phenotype; in other words, if you're looking at the "genes" of a CGP individual you're also looking at their expression. There is no encoding here, i.e. the expression tree is the gene itself.
In GEP the genotype is expressed into a phenotype, so if you're looking at the genes you will not readily know what the expression is going to look like. The "inventor" of GEP, Cândida Ferreira, has written a really good paper, and there are some other resources which try to give a shorter overview of the whole concept.
Ferreira says that the benefits are "obvious," but I really don't see anything that would necessarily make GEP better than CGP. Apparently GEP is multigenic, which means that multiple genes are involved in the expression of a trait (i.e. an expression tree). In any case, the fitness is calculated on the expressed tree, so it doesn't seem like GEP is doing anything to increase the fitness. What the author claims is that GEP increases the speed at which the fitness is reached (i.e. in fewer generations), but frankly speaking you can see dramatic performance shifts in a CGP just by using a different selection algorithm, a different tournament structure, splitting the population into tribes, migrating specimens between tribes, including diversity in the fitness, etc.
Selection:
random
roulette wheel
top-n
take half
etc.
Tournament Frequency:
once per epoch
once per every data instance
once per generation.
Tournament Structure:
Take 3, kill 1 and replace it with the child of the other two.
Sort all individuals in the tournament by fitness, kill the lower half and replace it with the offspring of the upper half (where lower is worse fitness and upper is better fitness).
Randomly pick individuals from the tournament to mate and kill the excess individuals.
Tribes
A population can be split into tribes that evolve independently of each other:
Migration: periodically, individual(s) from one tribe are moved to another tribe
The tribes are logically separated so that they're like their own separate populations running in separate environments.
Diversity Fitness
Incorporate diversity into the fitness: count how many individuals have the same fitness value (and thus are likely to have the same phenotype) and penalize their fitness by a proportionate amount: the more individuals with the same fitness value, the larger the penalty for those individuals. This way specimens with unique phenotypes are encouraged, so there is much less stagnation of the population.
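A rough sketch of that penalty (the linear deduction per duplicate is just one reasonable choice):

    from collections import Counter

    def diversity_adjusted(fitnesses, penalty=0.1):
        """Penalise individuals that share a fitness value (and thus, likely, a phenotype).
        fitnesses is a list of raw fitness values where higher is better; the more
        individuals share the same value, the larger the deduction each receives."""
        counts = Counter(fitnesses)
        return [f - penalty * (counts[f] - 1) for f in fitnesses]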
Those are just some of the things that can greatly affect the performance of a CGP, and when I say greatly I mean that it's of the same order or greater than the improvements Ferreira reports. So if Ferreira didn't tinker with those ideas too much, then she could have seen much slower performance from her CGPs... especially if she didn't do anything to combat stagnation. So I would be careful when reading performance statistics on GEP, because sometimes people fail to account for all of the "optimizations" available out there.
There seems to be some confusion in these answers that must be clarified. Cartesian GP is different from classic GP (aka tree-based GP), and GEP. Even though they share many concepts and take inspiration from the same biological mechanisms, the representation of the individuals (the solutions) varies.
In CGP the representation (the mapping between genotype and phenotype) is indirect; in other words, not all of the genes in a CGP genome will be expressed in the phenome (a concept also found in GEP and many others). The genotypes are coded in a grid or array of nodes, and the resulting program graph is the expression of the active nodes only.
In GEP the representation is also indirect, and similarly not all genes will be expressed in the phenotype. The representation in this case is much different from tree-based GP or CGP, but the genotypes are also expressed into a program tree. In my opinion GEP is a more elegant representation and easier to implement, but it also suffers from some defects: you have to find the appropriate tail and head size, which is problem-specific; the multigenic version is a bit of a forced glue between expression trees; and finally it has too much bloat.
Independently of which representation may be better in a specific problem domain, both are general purpose and can be applied to any domain as long as you can encode it.
In general, GEP is simpler than GP. Let's say you allow the following nodes in your program: constants, variables, +, -, *, /, if, ...
For each such node, with GP you must implement the following operations:
- randomize
- mutate
- crossover
- and probably other genetic operators as well
In GEP, for each such node only one operation needs to be implemented: deserialize, which takes an array of numbers (like double in C or Java) and returns a node. It resembles object deserialization in languages like Java or Python (the difference is that deserialization in programming languages uses byte arrays, whereas here we have arrays of numbers). Even this 'deserialize' operation doesn't have to be implemented by the programmer: it can be provided by a generic algorithm, just as it is in Java or Python deserialization.
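As a toy illustration of that single operation (the node alphabet and the modulo decoding rule below are invented for the example, not GEP's actual mapping):

    # Invented node alphabet: terminals and functions the program may use.
    NODES = ["x", "1.0", "+", "-", "*", "/"]

    def deserialize(genome):
        """Map an array of numbers (the genome) to a list of node symbols.
        Each gene is reduced modulo the alphabet size, so any numeric array decodes
        to something valid -- no per-node mutate/crossover code is needed."""
        return [NODES[int(g) % len(NODES)] for g in genome]

    print(deserialize([7, 2, 0, 1]))   # ['1.0', '+', 'x', '1.0']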
On the one hand this simplicity may make the search for the best solution less successful, but on the other it requires less work from the programmer, and simpler algorithms may execute faster (they are easier to optimize, more code and data fits in the CPU cache, and so on). So I would say that GEP is slightly better, but of course the definitive answer depends on the problem, and for many problems the opposite may be true.

Graphing imaginary numbers with vb.net

Does anyone have experience doing this? When I say imaginary I mean the square root of negative one. How would I graph this?
http://www.wolframalpha.com/input/?i=sqrt(-1)
Or more specifically, http://www.wolframalpha.com/input/?i=plot+sqrt(-1)
Complex numbers have many applications. They are useful for being able to store two properties (the real and imaginary parts) that behave sensibly when you apply standard math operators on them, like multiplication. Many problems become easy to solve by transforming them to the complex number domain, perform an operation on them that is easy to calculate, then transforming them back.
A good example is calculating the behavior of an electronic circuit that has reactive components. In the complex domain, the impedance of a coil is jwL and that of a capacitor is 1/(jwC), where w is the angular frequency omega. Driven with a signal in the complex domain, the response is easy to calculate. In this particular case, graphing the response is meaningful by mapping the real part on the X-axis and the imaginary part on the Y-axis. The length of the vector is the amplitude, the angle is the phase.
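For example, with a language that has a built-in complex type (a sketch in Python for brevity; in VB.NET the System.Numerics.Complex structure plays the same role; the component values and the 1 kHz drive are arbitrary):

    import math

    R, L, C = 100.0, 1e-3, 1e-6          # ohms, henries, farads (arbitrary values)
    w = 2 * math.pi * 1e3                # angular frequency for a 1 kHz drive

    z = R + 1j * w * L + 1 / (1j * w * C)   # series R, jwL and 1/(jwC)

    amplitude = abs(z)                                    # length of the vector
    phase = math.degrees(math.atan2(z.imag, z.real))      # angle
    print(z.real, z.imag, amplitude, phase)               # real part -> X-axis, imaginary -> Y-axis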
The Laplace transform is another complex-domain transformation, based on Euler's identity. It has a very useful graphical representation too: plotting the complex roots of a system's characteristic equation (inside the unit circle for the discrete-time case, or in the left half-plane for the continuous-time case) allows predicting the stability of a feedback system.
These kinds of transforms are popular because they simplify the math or because their graphical representations are easy to interpret. Whether yours is equally useful really depends on what the transform does.