From: Joel de Guzman (djowel_at_[hidden])
Date: 2003-07-25 19:08:52
Hi,
Is there any advantage in using both the tokenizer and Spirit? I'm not a
tokenizer expert, but it seems that what you are trying to achieve can
be done with Spirit alone:
parse(first, last, uint_p[append(numbers)], space_p);
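
For example, a complete version of that call might look something like the
sketch below. It uses the same boost/spirit/core.hpp header as the sample
code further down; the collect_into functor is only hypothetical glue
standing in for append(numbers), and the Kleene star is added so that every
number in the input is gathered rather than just the first:

#include <boost/spirit/core.hpp>
#include <algorithm>
#include <iostream>
#include <iterator>
#include <list>
#include <string>

// Hypothetical semantic action standing in for append(numbers): classic
// Spirit's numeric parsers pass the parsed value to their action.
struct collect_into {
    explicit collect_into(std::list<unsigned int> &c) : numbers(c) {}
    void operator()(unsigned int n) const { numbers.push_back(n); }
    std::list<unsigned int> &numbers;
};

int main(void) {
    using namespace boost::spirit;

    const std::string data("55 99");
    std::list<unsigned int> numbers;

    // space_p as the skip parser eats the whitespace between the numbers,
    // so no separate tokenizing pass is needed.
    parse(data.begin(), data.end(),
          *uint_p[collect_into(numbers)],
          space_p);

    std::copy(numbers.begin(), numbers.end(),
              std::ostream_iterator<unsigned int>(std::cout, "\n"));
    return 0;
}

Compiled against a classic (2003-era) Spirit, this should print 55 and 99 on
separate lines.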
--
Joel de Guzman
joel at boost-consulting.com
http://www.boost-consulting.com
http://spirit.sf.net
Andrey Sverdlichenko <blaze_at_[hidden]> wrote:
> Friday, July 25, 2003, 3:47:41 AM, you wrote:
>
>>> Hello.
>>>
>>> Is there any "right way" to parse strings, already splitted with
>>> tokenizer? I wrote iterator that can work with this splitted
>>> strings, but i also need some "skip parser" to recognize token
>>> boundaries as whitespaces and can't design one.
>
>> Could you please be more specific?
>
> Here is some sample code. It parses both numbers in the data string as
> one single number, and I need to separate them.
>
> #include <boost/tokenizer.hpp>
> #include <boost/spirit/core.hpp>
> #include <algorithm>   // std::copy
> #include <iostream>
> #include <iterator>    // std::ostream_iterator
> #include <list>
> #include <string>
>
> typedef boost::char_separator<std::string::value_type> Separator;
> typedef boost::tokenizer<Separator> Tokenizer;
>
> // Adapter that walks the characters of each token in turn. Note that it
> // presents the tokens back to back: nothing marks the boundary between them.
> class tok_iterator : public std::iterator<std::forward_iterator_tag,
>                                           const std::string::value_type> {
> public:
>     explicit tok_iterator(const Tokenizer::iterator &curr)
>         : token(curr), offset(0) {}
>
>     bool operator ==(const tok_iterator &other) const
>     { return (token == other.token && offset == other.offset); }
>     bool operator !=(const tok_iterator &other) const
>     { return ! (*this == other); }
>     reference operator *(void) const
>     { return (*token)[offset]; }
>
>     tok_iterator &operator ++(void);
>
> private:
>     Tokenizer::iterator token;
>     size_t offset;
> };
>
> inline
> tok_iterator &
> tok_iterator::operator ++(void) {
>     if (++offset >= token->size()) {
>         // End of the current token: jump straight to the first
>         // character of the next one.
>         offset = 0;
>         ++token;
>     }
>     return *this;
> }
>
> int
> main(void) {
>     using namespace boost::spirit;
>
>     std::string data("55 99");
>     Separator sep;
>     Tokenizer tok(data, sep);
>     Tokenizer::iterator token = tok.begin();
>     std::list<unsigned int> numbers;   // unsigned int: u_int is a non-standard typedef
>
>     // The adapter never exposes the whitespace between the tokens, so the
>     // scanner sees the character sequence "5599" and uint_p matches it as
>     // one number.
>     parse_info<tok_iterator> info = parse(tok_iterator(token),
>                                           tok_iterator(tok.end()),
>                                           uint_p[append(numbers)]);
>
>     std::copy(numbers.begin(), numbers.end(),
>               std::ostream_iterator<unsigned int>(std::cout, "\n"));
>
>     return 0;
> }
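
On the "skip parser" part of the original question: the tok_iterator above
hides the whitespace completely, so there is nothing left for a skip parser
to recognize. One hypothetical way to keep the tokenizer in the picture,
not taken from this thread, is to have the adapter emit a synthetic ' '
after every token; plain space_p can then serve as the skipper. A rough,
self-contained sketch along those lines (the names sep_iterator and collect
are made up for illustration, and collect again stands in for
append(numbers)):

#include <boost/tokenizer.hpp>
#include <boost/spirit/core.hpp>
#include <algorithm>
#include <iostream>
#include <iterator>
#include <list>
#include <string>

typedef boost::char_separator<std::string::value_type> Separator;
typedef boost::tokenizer<Separator> Tokenizer;

// Like tok_iterator, but position token->size() within each token yields a
// synthetic ' ', so token boundaries are visible to a whitespace skipper.
class sep_iterator : public std::iterator<std::forward_iterator_tag,
                                          const std::string::value_type> {
public:
    explicit sep_iterator(const Tokenizer::iterator &curr)
        : token(curr), offset(0) {}

    bool operator ==(const sep_iterator &other) const
    { return (token == other.token && offset == other.offset); }
    bool operator !=(const sep_iterator &other) const
    { return ! (*this == other); }

    reference operator *(void) const {
        static const std::string::value_type space = ' ';
        return offset == token->size() ? space : (*token)[offset];
    }

    sep_iterator &operator ++(void) {
        if (offset++ == token->size()) {   // the separator was just consumed
            offset = 0;
            ++token;
        }
        return *this;
    }

private:
    Tokenizer::iterator token;
    size_t offset;
};

// Hypothetical semantic action: classic Spirit's numeric parsers pass the
// parsed value to their action, so this just pushes it onto the list.
struct collect {
    explicit collect(std::list<unsigned int> &c) : numbers(c) {}
    void operator()(unsigned int n) const { numbers.push_back(n); }
    std::list<unsigned int> &numbers;
};

int main(void) {
    using namespace boost::spirit;

    std::string data("55 99");
    Separator sep;
    Tokenizer tok(data, sep);
    std::list<unsigned int> numbers;

    parse(sep_iterator(tok.begin()), sep_iterator(tok.end()),
          *uint_p[collect(numbers)],
          space_p);                        // skips the synthetic separators

    std::copy(numbers.begin(), numbers.end(),
              std::ostream_iterator<unsigned int>(std::cout, "\n"));
    return 0;
}

With the synthetic separators in place, uint_p stops at the end of "55",
space_p consumes the boundary, and the second number is parsed on its own,
so the output should be 55 and 99 on separate lines.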