I am trying to learn how to use boost::spirit. To do so I wanted to create some simple lexers, combine them, and then start parsing with Spirit. I tried modifying the examples accordingly, but it does not run as expected (the result r is not true).
Here is the lexer:
#include <boost/spirit/include/lex_lexertl.hpp>
#include <string>

namespace lex = boost::spirit::lex;

template <typename Lexer>
struct lexer_identifier : lex::lexer<Lexer>
{
    lexer_identifier()
      : identifier("[a-zA-Z_][a-zA-Z0-9_]*")
      , white_space("[ \\t\\n]+")
    {
        using boost::spirit::lex::_start;
        using boost::spirit::lex::_end;

        this->self = identifier;         // identifiers go into the default ("INITIAL") state
        this->self("WS") = white_space;  // whitespace goes into a second state named "WS"
    }

    lex::token_def<> identifier;
    lex::token_def<> white_space;
    std::string identifier_name;
};
And here is the example I am trying to run:
#include "stdafx.h"
#include <boost/spirit/include/lex_lexertl.hpp>
#include "my_Lexer.h"
namespace lex = boost::spirit::lex;
int _tmain(int argc, _TCHAR* argv[])
{
typedef lex::lexertl::token<char const*,lex::omit, boost::mpl::false_> token_type;
typedef lex::lexertl::lexer<token_type> lexer_type;
typedef lexer_identifier<lexer_type>::iterator_type iterator_type;
lexer_identifier<lexer_type> my_lexer;
std::string test("adedvied das934adf dfklj_03245");
char const* first = test.c_str();
char const* last = &first[test.size()];
lexer_type::iterator_type iter = my_lexer.begin(first, last);
lexer_type::iterator_type end = my_lexer.end();
while (iter != end && token_is_valid(*iter))
{
++iter;
}
bool r = (iter == end);
return 0;
}
r is only true as long as there is just a single token in the string. Why is that the case?
Regards, Tobias
You have created a second lexer state, but you never invoke it.
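To see why that makes r false, here is a minimal sketch (assuming the header at the top is saved as "my_Lexer.h", as in your example). Because white_space only exists in the never-entered "WS" state, the default "INITIAL" state can match a single identifier but stalls at the first space, which is exactly why r is only true for a one-token input:

#include <boost/spirit/include/lex_lexertl.hpp>
#include <iostream>
#include <string>
#include "my_Lexer.h"

namespace lex = boost::spirit::lex;

int main()
{
    typedef lex::lexertl::token<char const*, lex::omit, boost::mpl::false_> token_type;
    typedef lex::lexertl::lexer<token_type> lexer_type;

    lexer_identifier<lexer_type> my_lexer;

    std::string one_token("adedvied");            // a single identifier
    std::string two_tokens("adedvied das934adf"); // identifier, space, identifier

    char const* f1 = one_token.c_str();
    char const* l1 = f1 + one_token.size();
    char const* f2 = two_tokens.c_str();
    char const* l2 = f2 + two_tokens.size();

    std::cout << std::boolalpha
              << lex::tokenize(f1, l1, my_lexer) << "\n"  // true: lexes completely
              << lex::tokenize(f2, l2, my_lexer) << "\n"; // false: stuck at the space
}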
Simplify and profit:
In most cases, the simplest way to get the intended behaviour is to use a single lexer state and mark the skippable tokens with the pass_ignore flag:
this->self += identifier
            | white_space [ lex::_pass = lex::pass_flags::pass_ignore ];
Note that this requires an actor_lexer, so that semantic actions are allowed:
typedef lex::lexertl::actor_lexer<token_type> lexer_type;
Full sample:
#include <boost/spirit/include/lex_lexertl.hpp>
#include <iostream>
#include <string>
namespace lex = boost::spirit::lex;

template <typename Lexer>
struct lexer_identifier : lex::lexer<Lexer>
{
    lexer_identifier()
      : identifier("[a-zA-Z_][a-zA-Z0-9_]*")
      , white_space("[ \\t\\n]+")
    {
        using boost::spirit::lex::_start;
        using boost::spirit::lex::_end;

        this->self += identifier
                    | white_space [ lex::_pass = lex::pass_flags::pass_ignore ];
    }

    lex::token_def<> identifier;
    lex::token_def<> white_space;
    std::string identifier_name;
};
int main(int argc, const char *argv[])
{
    typedef lex::lexertl::token<char const*, lex::omit, boost::mpl::false_> token_type;
    typedef lex::lexertl::actor_lexer<token_type> lexer_type;
    typedef lexer_identifier<lexer_type>::iterator_type iterator_type;

    lexer_identifier<lexer_type> my_lexer;

    std::string test("adedvied das934adf dfklj_03245");
    char const* first = test.c_str();
    char const* last = &first[test.size()];

    lexer_type::iterator_type iter = my_lexer.begin(first, last);
    lexer_type::iterator_type end = my_lexer.end();

    while (iter != end && token_is_valid(*iter))
    {
        ++iter;
    }

    bool r = (iter == end);
    std::cout << std::boolalpha << r << "\n";
}
This prints
true
"WS" as skipper state
It is also possible that you came across the second lexer state in samples that use it as the skipper during parsing (lex::tokenize_and_phrase_parse). Let me take a minute or 10 to create a working sample for that.
Update It took me slightly longer than 10 minutes (waaaah) :) Here is a comparative test that shows how the lexer states interact, and how to invoke the second lexer state as the skipper while parsing with Spirit:
#include <boost/spirit/include/qi.hpp>
#include <boost/spirit/include/lex_lexertl.hpp>
#include <iostream>
#include <string>

namespace lex = boost::spirit::lex;
namespace qi  = boost::spirit::qi;
template <typename Lexer>
struct lexer_identifier : lex::lexer<Lexer>
{
    lexer_identifier()
      : identifier("[a-zA-Z_][a-zA-Z0-9_]*")
      , white_space("[ \\t\\n]+")
    {
        this->self = identifier;
        this->self("WS") = white_space;
    }

    lex::token_def<> identifier;
    lex::token_def<lex::omit> white_space;
};
int main()
{
    typedef lex::lexertl::token<char const*, lex::omit, boost::mpl::true_> token_type;
    typedef lex::lexertl::lexer<token_type> lexer_type;
    typedef lexer_identifier<lexer_type>::iterator_type iterator_type;

    lexer_identifier<lexer_type> my_lexer;

    std::string test("adedvied das934adf dfklj_03245");

    {
        char const* first = test.c_str();
        char const* last = &first[test.size()];

        // cannot lex in just default WS state:
        bool ok = lex::tokenize(first, last, my_lexer, "WS");
        std::cout << "Starting state WS:\t" << std::boolalpha << ok << "\n";
    }

    {
        char const* first = test.c_str();
        char const* last = &first[test.size()];

        // cannot lex in just default state either:
        bool ok = lex::tokenize(first, last, my_lexer, "INITIAL");
        std::cout << "Starting state INITIAL:\t" << std::boolalpha << ok << "\n";
    }

    {
        char const* first = test.c_str();
        char const* last = &first[test.size()];

        // lex and parse in one go, using the "WS" state as the skipper
        bool ok = lex::tokenize_and_phrase_parse(
            first, last, my_lexer, *my_lexer.self, qi::in_state("WS")[my_lexer.self]);
        ok = ok && (first == last); // verify full input consumed
        std::cout << std::boolalpha << ok << "\n";
    }
}
The output is:
Starting state WS: false
Starting state INITIAL: false
true
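As a possible next step: the lexer above carries an identifier_name member, which suggests the eventual goal is to get at the matched identifier text. One way to do that is sketched below, assuming the single-state pass_ignore setup and keeping the default token value (no lex::omit), so every token stores the iterator range it matched; lexer_identifier_values and collect_names are illustrative names only.

#include <boost/spirit/include/lex_lexertl.hpp>
#include <iostream>
#include <string>
#include <vector>

namespace lex = boost::spirit::lex;

// Single-state lexer with pass_ignore'd whitespace, as in the full sample above.
template <typename Lexer>
struct lexer_identifier_values : lex::lexer<Lexer>
{
    lexer_identifier_values()
      : identifier("[a-zA-Z_][a-zA-Z0-9_]*")
      , white_space("[ \\t\\n]+")
    {
        this->self += identifier
                    | white_space [ lex::_pass = lex::pass_flags::pass_ignore ];
    }

    lex::token_def<> identifier;
    lex::token_def<> white_space;
};

// Functor invoked by lex::tokenize for every token that survives the skipping.
struct collect_names
{
    typedef bool result_type;

    explicit collect_names(std::vector<std::string>& names) : names_(names) {}

    template <typename Token>
    bool operator()(Token const& t) const
    {
        // t.value() is the iterator range of the matched input
        names_.push_back(std::string(t.value().begin(), t.value().end()));
        return true; // keep lexing
    }

    std::vector<std::string>& names_;
};

int main()
{
    // no lex::omit here, so token values (the matched ranges) are kept
    typedef lex::lexertl::token<char const*> token_type;
    typedef lex::lexertl::actor_lexer<token_type> lexer_type;

    lexer_identifier_values<lexer_type> my_lexer;

    std::string test("adedvied das934adf dfklj_03245");
    char const* first = test.c_str();
    char const* last  = first + test.size();

    std::vector<std::string> names;
    bool ok = lex::tokenize(first, last, my_lexer, collect_names(names));

    std::cout << std::boolalpha << ok << "\n";
    for (std::size_t i = 0; i != names.size(); ++i)
        std::cout << names[i] << "\n";
}

If this behaves like the pass_ignore sample above, ok should come out true and names should end up holding the three identifiers from the test string.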