This is more a question about how the Julia language works than a technical issue. I have discovered struct and tuple destructuring, but I am wondering about the underlying processes regarding memory allocation.
I am currently writing code to run simulations involving ODE solving. I store all my parameters in a tuple, which is used as an argument for the ODE function `ODE_fun`. I have read that destructuring tuples is a nice way to use a subset of the elements of a tuple, and it saves you from writing `tuple.a` every time you refer to parameter `a`, for instance. But does it create a new variable each time? If so, it looks less optimal than calling `tuple.a`, because it would create a new variable `a` at each iteration of the ODE solving algorithm. This may be significant with tuples containing dozens of parameters and vectors. Or does it create a kind of pointer to `tuple.a`? Is there a way to use destructuring in an optimal way? Unfortunately, the documentation is quite unclear...
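For concreteness, here is a minimal sketch of the two styles being compared; the function names `rhs1`/`rhs2` and the parameter values are hypothetical:

```julia
# Hypothetical parameter tuple for an ODE right-hand side
p = (a = 1.0, b = 2.0)

# Style 1: field access everywhere
rhs1(u, p, t) = -p.a * u + p.b

# Style 2: destructure the needed subset once (property
# destructuring syntax, available since Julia 1.7)
function rhs2(u, p, t)
    (; a, b) = p
    return -a * u + b
end
```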
If the type of the `NamedTuple` argument is known at compile time, it does not matter whether you do `myfun(argument.a, argument.b)` or `myfun(argument...)`.

**Explanation**
Since the tuple has a fixed type, Julia's compiler can handle this (provided the type is not ambiguous).
Consider these two functions and a `NamedTuple` `mypars`; it can be clearly seen that both are non-allocating:
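A minimal reconstruction of the kind of definitions being discussed, consistent with the names `f`, `f2`, and `mypars` used below (the helper `g` and the field values are assumptions):

```julia
g(a, b) = a + b                      # hypothetical helper taking plain arguments
const mypars = (a = 1.0, b = 2.0)

f(pars)  = g(pars.a, pars.b)         # explicit field access
f2(pars) = g(pars...)                # splatting the NamedTuple

# Both compile down to non-allocating code:
f(mypars); f2(mypars)                # warm up (trigger compilation)
@assert @allocated(f(mypars)) == 0
@assert @allocated(f2(mypars)) == 0
```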
Let us now look at the compilation process. The lowered codes are obviously different:
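The lowered IR can be inspected with `@code_lowered` (definitions repeated so the snippet stands alone; `g` is a hypothetical helper):

```julia
g(a, b) = a + b
const mypars = (a = 1.0, b = 2.0)
f(pars)  = g(pars.a, pars.b)
f2(pars) = g(pars...)

# f lowers to two Base.getproperty calls followed by a call to g;
# f2 lowers to Core._apply_iterate, the generic splatting machinery.
@code_lowered f(mypars)
@code_lowered f2(mypars)
```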
However, when inferring types, the compiler realizes that these are basically the same:
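This stage can be inspected with `@code_typed` (same assumed definitions as above, repeated for a self-contained snippet):

```julia
g(a, b) = a + b
const mypars = (a = 1.0, b = 2.0)
f(pars)  = g(pars.a, pars.b)
f2(pars) = g(pars...)

# After inference and inlining the splat in f2 is fully resolved:
# both bodies reduce to fetching the two Float64 fields and adding
# them, with an inferred return type of Float64.
@code_typed f(mypars)
@code_typed f2(mypars)
```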
This in turn means that LLVM will get identical code to compile:
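The LLVM IR can be compared with `@code_llvm` (again with the assumed definitions repeated):

```julia
g(a, b) = a + b
const mypars = (a = 1.0, b = 2.0)
f(pars)  = g(pars.a, pars.b)
f2(pars) = g(pars...)

# Both functions hand LLVM the same IR: extract the two Float64
# fields of the NamedTuple and perform a single floating-point add.
@code_llvm f(mypars)
@code_llvm f2(mypars)
```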
You can run `@code_native f(mypars)` and `@code_native f2(mypars)` to find out that the resulting binaries are identical.