Using JuMP inside a Turing model


My question is: Can one use a JuMP optimization problem inside a Turing model?

Below is a minimal example of the functionality that I'm trying to obtain:

using Turing
using JuMP
import Ipopt

function find_u(θ)
    # Build the inner optimization problem
    model = Model(Ipopt.Optimizer)
    set_silent(model)
    @variable(model, 0 <= u <= 10)
    @NLobjective(model, Max, θ * u - u^2)

    # Solve it
    JuMP.optimize!(model)

    # Return the optimal value of u
    return value(u)
end

@model function turing_model(y)
    # fixed parameters
    σ = 1e-3

    # prior
    θ ~ Uniform(0, 2)

    # likelihood: each observation is the optimum for this θ plus noise
    u = find_u(θ)
    for i in eachindex(y)
        y[i] ~ Normal(u, σ)
    end
end

# Simulated data, generated at the true parameter θs = 1
θs = 1
y = [find_u(θs) + rand(Normal(0, 1e-3)) for _ in 1:10]

# inference 
numberofsamples = 500
m_train = turing_model(y);
posterior_chain = sample(m_train, NUTS(100, 0.65), numberofsamples)

The error I get when running this code is MethodError: no method matching Float64(::Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1})

Based on similar errors I have seen, it seems this happens because the variable types are not correctly propagated during automatic differentiation.
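If my understanding is correct, the failure can be reproduced outside of Turing by calling find_u directly with a dual number (the tag in the error message differs, but I assume the underlying conversion is the same):

using ForwardDiff

# A θ carrying a derivative cannot be converted to the Float64 that JuMP
# stores inside the nonlinear objective, so this call throws a similar
# MethodError.
find_u(ForwardDiff.Dual(1.0, 1.0))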

Does anyone have any suggestions?

Thanks!

I have tried to make the find_u(θ) function compatible with the ForwardDiff package, for example by testing whether ForwardDiff.gradient(find_u, xin) with an input xin = 1 + Dual(0, 1) runs without error.

But I have not found a way to make it work.

1 Answer

The simplest answer is that you cannot use JuMP in a Turing model because JuMP models are not automatically differentiable.
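That said, for this specific example you can sidestep the solver entirely, because the inner problem has a closed-form optimum (a sketch under that assumption, not a general fix): maximizing θ * u - u^2 over 0 <= u <= 10 gives u*(θ) = clamp(θ / 2, 0, 10), which ForwardDiff handles without trouble.

# Differentiable replacement for find_u, valid only for this objective:
# d/du (θ*u - u^2) = θ - 2u = 0  =>  u* = θ/2, clamped to the bounds.
find_u_analytic(θ) = clamp(θ / 2, 0, 10)

With θ ~ Uniform(0, 2), the optimum θ/2 stays strictly inside the bounds, so the clamp is never active during sampling.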

I don't know whether Turing lets you provide explicit gradients, but you would need a way of computing the derivative of the optimal u with respect to θ, and that can be quite tricky for an arbitrary JuMP model.
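If you do want to keep the JuMP solve, one hypothetical sketch is to add a method of find_u for ForwardDiff.Dual that solves the model at the primal value and attaches a hand-derived sensitivity; here du*/dθ = 1/2 follows from the stationarity condition θ - 2u* = 0, and hard-coding it is only valid for this toy problem:

import ForwardDiff

# Hypothetical sketch: run the ordinary Float64 solve, then propagate the
# hand-derived sensitivity du*/dθ = 1/2 (valid while u* is strictly inside
# the bounds) through the dual number manually.
function find_u(θ::ForwardDiff.Dual{T}) where {T}
    u = find_u(ForwardDiff.value(θ))   # plain JuMP solve at the primal value
    du_dθ = 1 / 2                      # from θ - 2u* = 0
    return ForwardDiff.Dual{T}(u, du_dθ * ForwardDiff.partials(θ))
end

For an arbitrary JuMP model you would need to obtain that sensitivity some other way (for example via the implicit function theorem on the optimality conditions), which is exactly the tricky part.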