How to optimize with differential evolution using the Julia package Evolutionary.jl?


I ran into this problem after specifying a differential evolution algorithm and an initial population of multi-layer perceptron (MLP) networks. The goal is to evolve a population of MLPs with DE. I tried the Evolutionary package but got stuck; I am a beginner in Julia. Can anyone help me with this problem, or suggest another way to evolve MLPs with DE? I learn best by adapting similar examples, but I can't find any Julia example of evolving an MLP by DE. The code is attached below.

Here are the code snippets:

begin
    features = Iris.features();
    slabels = Iris.labels();
    classes = unique(slabels)  # unique classes in the dataset
    nclasses = length(classes) # number of classes
    d, n = size(features)      # dimension and size of the dataset
end

Define the MLP:

model = Chain(Dense(d, 15, relu), Dense(15, nclasses))
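
For reference, the `fitness` that I pass to `optimize` below is along these lines (a sketch, not verified code; it scores a candidate network by its cross-entropy loss on the dataset, so lower is better):

```julia
# Sketch of the fitness function: cross-entropy loss of a candidate
# Chain on the Iris data, so smaller values are better.
# Assumes `features`, `slabels`, and `classes` from the first snippet.
function fitness(m::Chain)
    ŷ = m(Float32.(features))               # forward pass on the d×n inputs
    y = Flux.onehotbatch(slabels, classes)  # one-hot targets
    return Flux.logitcrossentropy(ŷ, y)     # scalar loss to minimize
end
```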

Rewrite `initial_population` to generate a group of MLPs:

begin
    import Evolutionary.initial_population
    function initial_population(method::M, individual::Chain;
                                rng::Random.AbstractRNG=Random.default_rng(),
                                kwargs...) where {M<:Evolutionary.AbstractOptimizer}
        θ, re = Flux.destructure(individual)
        [re(randn(rng, length(θ))) for i in 1:Evolutionary.population_size(method)]
    end
end

Define the DE algorithm (the parameters are just arbitrary choices):

algo2 = DE(
    populationSize = 150,
    F = 0.9,
    n = 1,
    K = 0.5 * 1.9,
    selection = rouletteinv
)
popu = initial_population(algo2, model)

From the source code of Evolutionary.jl, it seems that to use the `optimize()` function I need to pass a constraint, but I am not sure. I have tried every method of `optimize()`, and each reported an error. Worse, I don't know how to set the upper and lower bounds of a box constraint in this case, so I tried the no-constraint variant, and then a random box constraint just to get `optimize()` to run, but both failed. The reported error is in the picture attached.

cnst = BoxConstraints([0.5, 0.5], [2.0, 2.0])
res2 = Evolutionary.optimize(fitness,cnst,algo2,popu,opts)
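
One likely error source is the constraint itself: a box constraint needs one lower and one upper bound per entry of the flattened parameter vector, not two. A minimal sketch (the ±5 bounds are an arbitrary guess for illustration, not a recommendation):

```julia
# Build bounds matching the length of the flattened parameters.
θ, _ = Flux.destructure(model)     # `model` from the snippet above
nparams = length(θ)
lb = fill(-5.0, nparams)           # one lower bound per weight/bias
ub = fill(5.0, nparams)            # one upper bound per weight/bias
cnst2 = BoxConstraints(lb, ub)
```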

So far I have simply defined a DE algorithm, an initial population, and an MLP network. There is also a `uniform_mlp()`, which destructures two MLPs into vectors, applies a crossover operator, and reconstructs two new MLPs from the results:

function uniform_mlp(m1::T, m2::T; rng::Random.AbstractRNG=Random.default_rng()) where {T <: Chain}
    θ1, re1 = Flux.destructure(m1);
    θ2, re2 = Flux.destructure(m2);
    c1, c2 = UX(θ1,θ2; rng=rng)
    return re1(c1), re2(c2)
end

There is also a mutation function:

function gaussian_mlp(σ::Real = 1.0)
    vop = gaussian(σ)
    function mutation(recombinant::T; rng::Random.AbstractRNG=Random.default_rng()) where {T <: Chain}
        θ, re = Flux.destructure(recombinant)
        return re(convert(Vector{Float32}, vop(θ; rng=rng)))
    end
    return mutation
end
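
In case it helps: if your version of Evolutionary.jl's `DE` accepts a custom recombination operator (check `?DE`, since the keyword names may differ between versions), the crossover above could in principle be wired in like this (hypothetical, untested):

```julia
# Hypothetical wiring -- keyword names depend on the Evolutionary.jl
# version; check the DE docstring before relying on this.
algo3 = DE(
    populationSize = 150,
    F = 0.9,
    recombination = uniform_mlp
)
```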

1 Answer


The easiest way to do this is through Optimization.jl. There is an Evolutionary.jl wrapper, OptimizationEvolutionary.jl, that exposes it through the standardized Optimization.jl interface. It looks like:

using Optimization, OptimizationEvolutionary
rosenbrock(x, p) =  (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p  = [1.0, 100.0]
f = OptimizationFunction(rosenbrock)
prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0,-1.0], ub = [1.0,1.0])
sol = solve(prob, Evolutionary.DE())
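
The same pattern can be applied to the MLP question above by optimizing the flattened parameter vector and rebuilding the `Chain` afterwards. A sketch, where the loss, the data names (`features`, `slabels`, `classes`), and the ±5 bounds are assumptions carried over from the question rather than tested code:

```julia
using Optimization, OptimizationEvolutionary, Flux

# Flatten the network: θ0 is the parameter vector, re rebuilds a Chain.
θ0, re = Flux.destructure(model)

# Loss over the flat vector; p carries the features and one-hot labels.
loss(θ, p) = Flux.logitcrossentropy(re(Float32.(θ))(p.X), p.Y)

p = (X = Float32.(features), Y = Flux.onehotbatch(slabels, classes))
f = OptimizationFunction(loss)
prob = OptimizationProblem(f, Float64.(θ0), p,
                           lb = fill(-5.0, length(θ0)),
                           ub = fill(5.0, length(θ0)))
sol = solve(prob, Evolutionary.DE())
best_model = re(Float32.(sol.u))   # rebuild the evolved Chain
```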

That said, given previous measurements of global optimizer performance, we would also recommend BlackBoxOptim's methods; switching is as simple as changing the optimizer dispatch:

using Optimization, OptimizationBBO
sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited(), maxiters=100000, maxtime=1000.0)

This is also a DE method, but one with an adaptive radius and other refinements that performs much better on average.