When I define, in the lexical analyzer,
typedef boost::mpl::vector<std::string, unsigned int, bool>
token_value_types;
lex::token_def<unsigned int> lit_uint("[0-9]+", token_ids::lit_uint);
and then use it in some grammar as
primary_expr =
lexer.lit_uint
| lexer.true_or_false
| identifier
| '(' > expr > ')'
;
How is the string converted to a value of the correct token value type (unsigned int in this case)? What happens if you specify a custom type or a floating-point type as the token value type? And where does the conversion routine live (I would guess something like a boost::iterator_range-to-double conversion)?
The way to accomplish what you want is to specialize assign_to_attribute_from_iterators. You can find an example with a custom type here. If you use double as the attribute in your token definition, Spirit internally uses qi::double_ to parse the value (you can find here the specialization for double and the rest of the fundamental types). As a silly example, I define a real token as anything that is not a ',' or a ';' to show the parsing of doubles.

Edit: I have very little experience with regular expressions, but I believe that the token definition equivalent to the grammar linked in the comment (which I believe should have
fractional_constant >> -exponent_part
instead of fractional_constant >> !exponent_part
) would be: