I'm confused about standard-conversion-sequence terminology. I came across the following quote in N3797 §8.5.3/5 [dcl.init.ref]:
> If the initializer expression
>
> — is an xvalue (but not a bit-field), class prvalue, array prvalue or function lvalue and “cv1 T1” is reference-compatible with “cv2 T2”, or
> — has a class type (i.e., T2 is a class type), where T1 is not reference-related to T2, and can be converted to an xvalue, class prvalue, or function lvalue of type “cv3 T3”, where “cv1 T1” is reference-compatible with “cv3 T3” (see 13.3.1.6),
>
> [...] In the second case, if the reference is an rvalue reference and the second standard conversion sequence of the user-defined conversion sequence includes an lvalue-to-rvalue conversion, the program is ill-formed.
What is the second standard conversion sequence here? I thought there was a single standard conversion sequence that included all the necessary conversions (user-defined and implicit).
[over.ics.user]:

> A user-defined conversion sequence consists of an initial standard conversion sequence followed by a user-defined conversion (12.3) followed by a second standard conversion sequence.
E.g. for a class `A` with a conversion function, passed to a function `foo` that takes a `short`:
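```cpp
struct A
{
    operator int();   // user-defined conversion: yields an int prvalue
};

void foo(short);

foo(A());             // A -> int (user-defined) -> short (second standard conversion sequence)
```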
The second standard conversion sequence converts the `int` prvalue to the parameter of `foo`, that is, `short`. The first standard conversion sequence converts the `A` object to `A` (the implicit object parameter of `operator int`), which constitutes an identity conversion.

For references, the rule you quoted provides an example.
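A minimal sketch (the names `X` and `r` are illustrative):

```cpp
struct X
{
    operator int&();  // conversion function returning an lvalue reference to int
};

int&& r = X();        // ill-formed: binding would require an lvalue-to-rvalue conversion
```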
Here, `operator int&` is selected by overload resolution. The return value is an lvalue reference to `int`. For that to initialize the reference, an lvalue-to-rvalue conversion would be necessary, and that's precisely what the quote prevents: lvalues shall not be bound to rvalue references.
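For contrast, a counterpart (again with illustrative names) where the conversion function yields a prvalue, so no lvalue-to-rvalue conversion is involved and the binding is well-formed:

```cpp
struct Y
{
    operator int();   // conversion function yielding an int prvalue
};

int&& r = Y();        // OK: the reference binds to a temporary initialized from the prvalue
```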