Destructuring, variable creation and code optimisation in Julia


This is more a question about how the Julia language works than a technical issue. I have discovered struct and tuple destructuring, but I am wondering about the underlying processes regarding memory allocation.

I am currently writing code to run simulations involving ODE solving. I store all my parameters in a tuple, which is used as an argument for the ODE function ODE_fun. I have read that destructuring tuples is a nice way to use a subset of the elements of a tuple and saves you from writing tuple.a each time you refer to parameter a, for instance. But does it create a new variable each time? If so, it looks less optimal than calling tuple.a, because it would create a new variable a at each iteration of the ODE solving algorithm. This may be significant with tuples containing dozens of parameters and vectors. Or does it create a kind of pointer to tuple.a? Is there a way to use destructuring optimally? Unfortunately, the documentation is quite unclear on this point.
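For concreteness, here is a minimal sketch of the pattern the question describes. All names here (ODE_fun, du, u, p, t, growth, capacity) are illustrative, not taken from the asker's actual code; the `(; a, b) = p` destructuring syntax assumes Julia 1.7 or later.

```julia
# Hypothetical ODE right-hand side in the in-place style used by
# DifferentialEquations.jl: du is mutated, p holds the parameters.
function ODE_fun(du, u, p, t)
    (; growth, capacity) = p          # destructure the parameter NamedTuple
    du[1] = growth * u[1] * (1 - u[1] / capacity)
    return nothing
end

p = (; growth = 0.5, capacity = 100.0)
du = zeros(1)
ODE_fun(du, [10.0], p, 0.0)
```

The destructuring line binds new local names, but as the answer below shows, for a NamedTuple of concrete type the compiler sees through those bindings.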

1

There is 1 solution below.

Answered by Przemyslaw Szufel

If the type of the NamedTuple argument is known at compile time, it does not matter whether you write myfun(argument.a, argument.b) or myfun(argument...).

Explanation

Since the tuple has a fixed type, Julia's compiler can handle that (provided that the type is not ambiguous).

Consider these two functions and a NamedTuple mypars:

function f(pars)
    +(pars...)
end
function f2(pars)
    +(pars.a, pars.b)
end

mypars = (;a=1, b=4)

It can be clearly seen that both are non-allocating:

julia> @btime f($mypars)
  1.400 ns (0 allocations: 0 bytes)

julia> @btime f2($mypars)
  1.400 ns (0 allocations: 0 bytes)
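The same holds for the property-destructuring syntax the question asks about. As a sketch (f3 is a name introduced here, not part of the answer above; the `(; a, b) = pars` form assumes Julia 1.7 or later), destructuring lowers to the same `getproperty` calls as f2, so the local bindings cost nothing after type inference:

```julia
function f3(pars)
    (; a, b) = pars   # binds locals a and b from the NamedTuple fields
    a + b
end

f3((; a = 1, b = 4))  # → 5
```

Benchmarking f3 with `@btime` should likewise report 0 allocations for a concretely typed NamedTuple.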

Let us now look at the compilation process. The lowered code is obviously different:

julia> @code_lowered f(mypars)
CodeInfo(
1 ─ %1 = Core._apply_iterate(Base.iterate, Main.:+, pars)
└──      return %1
)

julia> @code_lowered f2(mypars)
CodeInfo(
1 ─ %1 = Base.getproperty(pars, :a)
│   %2 = Base.getproperty(pars, :b)
│   %3 = %1 + %2
└──      return %3
)

However, during type inference the compiler realizes that these are basically the same:

julia> @code_typed f(mypars)
CodeInfo(
1 ─ %1 = Core.getfield(pars, 1)::Int64
│   %2 = Core.getfield(pars, 2)::Int64
│   %3 = Base.add_int(%1, %2)::Int64
└──      return %3
) => Int64

julia> @code_typed f2(mypars)
CodeInfo(
1 ─ %1 = Base.getfield(pars, :a)::Int64
│   %2 = Base.getfield(pars, :b)::Int64
│   %3 = Base.add_int(%1, %2)::Int64
└──      return %3
) => Int64

This in turn means that LLVM will get identical code to compile:

julia> @code_llvm f(mypars)
;  @ REPL[1]:1 within `f`
; Function Attrs: uwtable
define i64 @julia_f_702([2 x i64]* nocapture noundef nonnull readonly align 8 dereferenceable(16) %0) #0 {
top:
;  @ REPL[1]:2 within `f`
  %1 = getelementptr inbounds [2 x i64], [2 x i64]* %0, i64 0, i64 0
  %2 = getelementptr inbounds [2 x i64], [2 x i64]* %0, i64 0, i64 1
; ┌ @ int.jl:87 within `+`
   %unbox = load i64, i64* %1, align 8
   %unbox1 = load i64, i64* %2, align 8
   %3 = add i64 %unbox1, %unbox
; └
  ret i64 %3
}

julia> @code_llvm f2(mypars)
;  @ REPL[2]:1 within `f2`
; Function Attrs: uwtable
define i64 @julia_f2_704([2 x i64]* nocapture noundef nonnull readonly align 8 dereferenceable(16) %0) #0 {
top:
;  @ REPL[2]:2 within `f2`
; ┌ @ Base.jl:37 within `getproperty`
   %1 = getelementptr inbounds [2 x i64], [2 x i64]* %0, i64 0, i64 0
   %2 = getelementptr inbounds [2 x i64], [2 x i64]* %0, i64 0, i64 1
; └
; ┌ @ int.jl:87 within `+`
   %unbox = load i64, i64* %1, align 8
   %unbox1 = load i64, i64* %2, align 8
   %3 = add i64 %unbox1, %unbox
; └
  ret i64 %3
}

You can run @code_native f(mypars) and @code_native f2(mypars) to confirm that the resulting native code is identical.