Boost tokenizer gives different results under Windows and Linux


I found some strange behaviour with boost::tokenizer (Boost versions 1.53.0 and 1.52.0):

    // vec is a std::vector<std::string>, line holds one CSV record
    tokenizer<escaped_list_separator<char> > tok(line);
    vec.assign( tok.begin(), tok.end() );
    ......
    double volume = lexical_cast<double>( vec[6] );  // seventh field (volume column)

This code crashes under Linux (Ubuntu 12.04.2, GCC 4.6.3) and works fine under Windows 7 (VS 2010 Express). The string (the 'line' variable) that I want to parse is:

    2012-12-03,09:30:00.000,35.3,35.5,35.26,35.47,26963

Under Linux:

     vec[6] is '26963\r'

and lexical_cast throws, which crashes the program.
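For reference, the failure can be reproduced in isolation; this is a minimal sketch of my own (not part of the original program) that feeds the Linux-side token straight into lexical_cast:

    // Minimal repro sketch (my own test, not from the original code):
    // lexical_cast does not skip whitespace, so the trailing '\r'
    // makes the conversion throw boost::bad_lexical_cast.
    #include <boost/lexical_cast.hpp>
    #include <iostream>

    int main()
    {
        try {
            double volume = boost::lexical_cast<double>("26963\r");
            std::cout << volume << "\n";
        } catch (const boost::bad_lexical_cast& e) {
            std::cout << "bad_lexical_cast: " << e.what() << "\n";
        }
        return 0;
    }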

Under Windows:

     vec[6] is '26963'

and it works fine.
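I suspect the data file has Windows CRLF line endings, so on Linux std::getline leaves the trailing '\r' in the last field. One workaround I could apply is to trim it before casting; a minimal sketch (assuming vec is a std::vector<std::string>):

    // Workaround sketch (assumption on my part, not in the original code):
    // strip a trailing '\r' left over from a CRLF line ending before casting.
    std::string field = vec[6];
    if (!field.empty() && field[field.size() - 1] == '\r')
        field.erase(field.size() - 1);
    double volume = boost::lexical_cast<double>(field);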

Is this correct behaviour?
