I work on a very small research team creating/adapting a Genetic Algorithm library in Scala for distributed computation with a Scientific Workflow System; in our case we use the open source OpenMole software (http://www.openmole.org/).
Recently, I have been trying to understand and re-implement the SBX crossover operator from the JMetal metaheuristics library (http://jmetal.sourceforge.net/) in order to adapt it into a functional version for our Scala library.
I wrote some code, but I need your advice or validation regarding the SBX defined in the Java library, because the source code (src in svn) does not appear to match the original equation written here: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.33.7291&rep=rep1&type=pdf on page 30, in Annex A.
First question: I don't understand the Java version in JMetal. Why do they use two different beta values?
- beta1, which uses in its equation the first argument of min[(y1 - yL), ...], and
- beta2, which uses the second argument of min[..., (yu - y2)].
Beta1 and beta2 are then used to compute the alpha values (so here, as in JMetal, we also have two different alpha values, alpha1 and alpha2) ...
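For reference, here is how I read the bounded formulation in the JMetal source (my own transcription, written out so it is easier to compare with Deb's paper; it may contain a misunderstanding):

    \beta_1 = 1 + \frac{2\,(y_1 - y_L)}{y_2 - y_1}, \qquad
    \beta_2 = 1 + \frac{2\,(y_u - y_2)}{y_2 - y_1}, \qquad
    \alpha_i = 2 - \beta_i^{-(\eta_c + 1)}

where y_1 and y_2 are the smaller and larger of the two parent values, y_L and y_u are the lower and upper bounds of the variable, and \eta_c is the distribution index.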
The same problem/question applies to betaq: in JMetal (Java code) we have two computations of betaq, whereas in Deb's equation there is only one, the result of:
Second question: what is the meaning of the symbol used in procedures (2) and (3) of the SBX pseudo-algorithm, and how does it differ from the plain beta? Especially when we want to compute the children/offspring of the crossover parents, like here:
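For comparison, here are the offspring formulas as I read them in the JMetal code (and as I reproduce them in my Scala translation below); u is a single uniform random number shared by both children:

    \bar{\beta}_i =
      \begin{cases}
        (u\,\alpha_i)^{1/(\eta_c + 1)} & \text{if } u \le 1/\alpha_i \\
        \bigl(\tfrac{1}{2 - u\,\alpha_i}\bigr)^{1/(\eta_c + 1)} & \text{otherwise}
      \end{cases}
    \qquad
    c_1 = \tfrac{1}{2}\bigl[(y_1 + y_2) - \bar{\beta}_1\,(y_2 - y_1)\bigr], \quad
    c_2 = \tfrac{1}{2}\bigl[(y_1 + y_2) + \bar{\beta}_2\,(y_2 - y_1)\bigr]

My working assumption is that \bar{\beta} (the barred symbol) is the spread factor actually drawn from the SBX distribution, while the plain \beta only encodes the distance to the bounds through \alpha, but this is exactly the point I would like to have confirmed.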
Edit
Corrected a no-op if/else block.
The author of the code in JMetal gave me the link to the original source code of the NSGA-II algorithm, and explained to me that Deb's description of SBX differs from his implementation :/
http://www.iitk.ac.in/kangal/codes.shtml
I don't understand the difference between the description and the implementation (in JMetal and in the original source code). Do you have an explanation?
Corrected the if/else return for the map.
Start of the translation into Scala:
import scala.math.{abs, min, max, pow}
import scala.util.Random

class SBXBoundedCrossover[G <: GAGenome, F <: GAGenomeFactory[G]](rate: Random => Double = _.nextDouble) extends CrossOver[G, F] {

  def this(rate: Double) = this(_ => rate)

  def crossOver(genomes: IndexedSeq[G], factory: F)(implicit aprng: Random) = {
    val g1 = genomes.random
    val g2 = genomes.random
    val crossoverRate = rate(aprng)
    val EPS = 1.0e-14
    val numberOfVariables = g1.wrappedValues.size
    val distributionIndex = 2

    // decide, per variable, whether it takes part in the crossover
    val variableToMutate = (0 until g1.wrappedValues.size).map { x => !(aprng.nextDouble < 0.5) }

    // crossover probability
    val offspring = {
      if (aprng.nextDouble < crossoverRate) {
        (variableToMutate zip (g1.wrappedValues zip g2.wrappedValues)) map {
          case (b, (g1e, g2e)) =>
            if (b) {
              if (abs(g1e - g2e) > EPS) {
                val y1 = min(g1e, g2e)
                val y2 = max(g2e, g1e)
                val yL = 0.0 // g1e.getLowerBound
                val yu = 1.0 // g1e.getUpperBound
                val rand = aprng.nextDouble // ui

                val beta1 = 1.0 + (2.0 * (y1 - yL) / (y2 - y1))
                val alpha1 = 2.0 - pow(beta1, -(distributionIndex + 1.0))
                val betaq1 = computebetaQ(alpha1, distributionIndex, rand)

                // compute offspring 1 using betaq1, which corresponds to the "beta bar" of the paper
                var c1 = 0.5 * ((y1 + y2) - betaq1 * (y2 - y1))

                // -----------------------------------------------

                val beta2 = 1.0 + (2.0 * (yu - y2) / (y2 - y1))
                val alpha2 = 2.0 - pow(beta2, -(distributionIndex + 1.0))
                val betaq2 = computebetaQ(alpha2, distributionIndex, rand)

                // compute offspring 2 using betaq2
                var c2 = 0.5 * ((y1 + y2) + betaq2 * (y2 - y1))

                // clamp both children to the variable bounds
                if (c1 < yL) c1 = yL
                if (c1 > yu) c1 = yu
                if (c2 < yL) c2 = yL
                if (c2 > yu) c2 = yu

                if (aprng.nextDouble <= 0.5) (c2, c1) else (c1, c2)
              } else {
                (g1e, g2e)
              }
            } else {
              (g2e, g1e)
            }
        }
      } else {
        // not so good here ...
        (g1.wrappedValues zip g2.wrappedValues)
      }
    }
    (factory.buildGenome(offspring.map { _._1 }), factory.buildGenome(offspring.map { _._2 }))
  }

  def computebetaQ(alpha: Double, distributionIndex: Double, rand: Double): Double = {
    if (rand <= (1.0 / alpha)) {
      pow(rand * alpha, 1.0 / (distributionIndex + 1.0))
    } else {
      pow(1.0 / (2.0 - rand * alpha), 1.0 / (distributionIndex + 1.0))
    }
  }
}
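To double-check the math independently of our genome/factory plumbing (GAGenome, GAGenomeFactory and wrappedValues are specific to our library), here is a small self-contained sketch of the same per-variable computation; it should behave like the branch above for one pair of parent values (same formulas, just without the library types):

    import scala.math.{abs, min, max, pow}
    import scala.util.Random

    object SBXSketch {
      // Bounded SBX for a single variable pair, following the JMetal-style formulas above.
      def sbxPair(p1: Double, p2: Double, yL: Double, yu: Double,
                  etaC: Double, rng: Random, eps: Double = 1.0e-14): (Double, Double) =
        if (abs(p1 - p2) <= eps) (p1, p2)
        else {
          val y1 = min(p1, p2)
          val y2 = max(p1, p2)
          val u = rng.nextDouble // one uniform draw shared by both children

          // betaq for a given beta, i.e. the value drawn from the SBX distribution
          def betaq(beta: Double): Double = {
            val alpha = 2.0 - pow(beta, -(etaC + 1.0))
            if (u <= 1.0 / alpha) pow(u * alpha, 1.0 / (etaC + 1.0))
            else pow(1.0 / (2.0 - u * alpha), 1.0 / (etaC + 1.0))
          }

          val bq1 = betaq(1.0 + 2.0 * (y1 - yL) / (y2 - y1))
          val bq2 = betaq(1.0 + 2.0 * (yu - y2) / (y2 - y1))

          val c1 = 0.5 * ((y1 + y2) - bq1 * (y2 - y1))
          val c2 = 0.5 * ((y1 + y2) + bq2 * (y2 - y1))

          // clamp to the bounds
          (max(yL, min(yu, c1)), max(yL, min(yu, c2)))
        }
    }

Calling SBXSketch.sbxPair(0.2, 0.8, 0.0, 1.0, 2.0, new Random) a few times gives children mostly clustered near the two parents, which at least matches my intuition of what SBX should do.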
Thanks a lot for your advice or help with this problem.
SR
Reyman64, your question is the answer I was looking for. Thank you.
I took the paper you linked and the code of Deb's implementation and tried to understand both. To do that, I commented almost every line of the code. They differ only in the calculation of beta.
Since Deb used this code in his implementation of the NSGA-II, I'll stick to this version of the algorithm.
If anyone is in the same situation I was in (not understanding how to implement SBX), I left my comments in the following gist; take a look:
https://gist.github.com/Tiagoperes/1779d5f1c89bae0cfdb87b1960bba36d
I did an implementation of SBX (it stands for Simulated Binary Crossover, by the way) for HeuristicLab (C#). You can take a look at the implementation of our SimulatedBinaryCrossover. However, I took the description from a different reference (paper title: "Simulated binary crossover for continuous search space" from 1995). The full citation is given in the source code.
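If it helps to compare with the Scala code in the question, the basic (unbounded) form from that 1995 paper is roughly the following, written here in Scala to match the question (a sketch of my reading of the paper, not our HeuristicLab code):

    import scala.math.pow
    import scala.util.Random

    // Basic SBX as described in the 1995 paper, as I understand it:
    // no variable bounds, a single spread factor betaq shared by both children.
    def sbxUnbounded(p1: Double, p2: Double, etaC: Double, rng: Random): (Double, Double) = {
      val u = rng.nextDouble
      val betaq =
        if (u <= 0.5) pow(2.0 * u, 1.0 / (etaC + 1.0))
        else pow(1.0 / (2.0 * (1.0 - u)), 1.0 / (etaC + 1.0))
      val c1 = 0.5 * ((1.0 + betaq) * p1 + (1.0 - betaq) * p2)
      val c2 = 0.5 * ((1.0 - betaq) * p1 + (1.0 + betaq) * p2)
      (c1, c2)
    }

The bounded variant in the question differs mainly in that it folds the distance to the variable bounds into the alpha/betaq computation, one per child.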