Can I change the template argument deduction order for a generic variadic lambda?


Take the following code, which is a simplified example:

#include <iostream>

template <typename F>
void foo(F f) {
    //bool some = is_variadic_v<F>; // Scenario #1
    bool some = true;               // Scenario #2
    f(int(some), int(some));
}

int main() {
    auto some = [](int i, int j) {
        std::cout << i << " " << j << '\n';
    };
    
    foo([&some](auto... params) {
        some(params...);
    });
}

A function foo takes a generic variadic lambda and calls it with a fixed set of arguments. The lambda itself then just forwards to another function/lambda with a matching prototype. As one would expect, in scenario #2, when f is called inside foo, the compiler deduces params... to be the parameter pack {1, 1}.
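To spell out what happens in scenario #2: calling a generic lambda with two ints is equivalent to explicitly instantiating its templated call operator with <int, int>. A minimal, self-contained illustration:

#include <iostream>

int main() {
    auto f = [](auto... params) {
        std::cout << sizeof...(params) << '\n';
    };
    f(1, 1);                               // deduces params... as {1, 1}
    f.template operator()<int, int>(1, 1); // the equivalent explicit instantiation
}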

For scenario #1, I am using code from another Q&A to deduce the arity of a callable object. If such an object is callable with more than a predefined maximum number of arguments, however, it is considered "variadic". In detail, is_variadic_v employs a form of expression SFINAE that attempts to call the function object with a decreasing number of arguments of an "arbitrary type" that implicitly converts to anything.
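To illustrate the probing idea in isolation, here is a stripped-down sketch (hypothetical names; it only checks for exactly two arguments, whereas the real code below generalizes this over an index sequence):

#include <utility>

struct anything {
    // claims to convert to any type
    template <typename T> operator T&&();
    template <typename T> operator T&();
};

// selected if F is callable with two arguments of "arbitrary type"
template <typename F, typename = decltype(std::declval<F>()(anything{ }, anything{ }))>
constexpr bool takes_two(int) { return true; }

// fallback otherwise
template <typename F>
constexpr bool takes_two(...) { return false; }

int main() {
    auto l = [](int, double) { };
    static_assert(takes_two<decltype(l)>(0), "");
}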

The problem is that the compiler apparently deduces F (and with it the lambda's parameter pack) already during this metacode, and if the lambda is variadic (as in this case), it deduces F as if the lambda took the dummy arguments, i.e. something like main()::<lambda(arbitrary_type<0>, arbitrary_type<1>, arbitrary_type<2>, ..., arbitrary_type<N>)> where N is the "variadic limit" from above. params... is then deduced as arbitrary_type<0>, arbitrary_type<1>, ..., and correspondingly, the call some(params...) fails. This behaviour can be demonstrated with this little code example:

#include <utility>
#include <type_traits>
#include <iostream>

constexpr int max_arity = 12; // if a function takes more arguments than that, it will be considered variadic

struct variadic_type { };

// it is templated to be able to create a
// "sequence" of arbitrary_type's of a given size
// and hence to 'simulate' an arbitrary function signature.
template <auto>
struct arbitrary_type {
    // this type casts implicitly to anything,
    // thus, it can represent an arbitrary type.
    template <typename T>
    operator T&&();

    template <typename T>
    operator T&();
};

template <
    typename F, auto ...Ints,
    typename = decltype(std::declval<F>()(arbitrary_type<Ints>{ }...))
>
constexpr auto test_signature(std::index_sequence<Ints...>) {
    // the number of arguments F was successfully called with
    return std::integral_constant<int, sizeof...(Ints)>{ };
}

template <auto I, typename F>
constexpr auto arity_impl(int) -> decltype(test_signature<F>(std::make_index_sequence<I>{ })) {
    return { };
}

template <auto I, typename F, typename = std::enable_if_t<(I > 0)>>
constexpr auto arity_impl(...) {
    // try the int overload, which only works if F is
    // callable with I-1 arguments; otherwise this
    // overload is selected again and we retry with
    // one argument less.
    return arity_impl<I - 1, F>(0);
}

template <typename F, auto MaxArity>
constexpr auto arity_impl() {
    // start checking function signatures with MaxArity + 1 arguments
    constexpr auto tmp = arity_impl<MaxArity+1, F>(0);
    if constexpr (tmp == MaxArity+1) 
        return variadic_type{ }; // if that works, F is considered variadic
    else return tmp; // if not, tmp will be the correct arity of F
}

template <typename F, auto MaxArity = max_arity>
constexpr auto arity(F&&) { return arity_impl<std::decay_t<F>, MaxArity>(); }

template <typename F, auto MaxArity = max_arity>
constexpr auto arity_v = arity_impl<std::decay_t<F>, MaxArity>();

template <typename F, auto MaxArity = max_arity>
constexpr bool is_variadic_v = std::is_same_v<std::decay_t<decltype(arity_v<F, MaxArity>)>, variadic_type>;

template <typename F>
void foo(F f) {
    bool some = is_variadic_v<F>;
    //bool some = true;
    f(int(some), int(some));
}

int main() {
    auto some = [](int i, int j) {
        std::cout << i << " " << j << '\n';
    };
    
    foo([&some](auto... params) {
        some(params...);
    });
}
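For what it's worth, the arity machinery itself behaves as expected when queried directly (a quick sanity check, assuming the definitions above):

auto fixed    = [](int, double) { };
auto variadic = [](auto...) { };

static_assert(arity_v<decltype(fixed)> == 2, "");
static_assert(!is_variadic_v<decltype(fixed)>, "");
static_assert(is_variadic_v<decltype(variadic)>, "");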

Can I prevent this behaviour? Can I force the compiler to re-deduce the parameter pack at the actual call site?


EDIT:

An additional peculiarity is that the compiler seems to contradict itself. When I change the call in main to

foo([&some](auto... params) {
    // int foo = std::index_sequence<sizeof...(params)>{ };
    std::cout << sizeof...(params) << '\n';
});

the compiler creates a program that prints 2 in this example. If, however, I include the commented line (which, as it makes no sense, should trigger a compiler diagnostic saying exactly that), I am confronted with

error: cannot convert 'std::index_sequence<13>' {aka 'std::integer_sequence<long unsigned int, 13>'} to 'int' in initialization
   85 |         int foo = std::index_sequence<sizeof...(params)>{ };

So does the compiler now deduce sizeof...(params) to be 2 and 13 at the same time? Or did it change its mind and now choose 13 just because I added another statement to the lambda? Compilation also fails if I instead add a static_assert(2 == sizeof...(params));. So the compiler deduces sizeof...(params) == 2, except when I ask it whether it deduced 2, because then it didn't.

Apparently, what is written inside the lambda body is decisive for how the parameter pack is deduced. Is it just me, or does this behaviour look genuinely pathological?
