robot.parsing.lexer package
Submodules
robot.parsing.lexer.blocklexers module
- class robot.parsing.lexer.blocklexers.BlockLexer(ctx: LexingContext)[source]
Bases:
Lexer, ABC
- class robot.parsing.lexer.blocklexers.FileLexer(ctx: LexingContext)[source]
Bases:
BlockLexer
- class robot.parsing.lexer.blocklexers.SectionLexer(ctx: LexingContext)[source]
Bases:
BlockLexer, ABC
- ctx: FileContext
- class robot.parsing.lexer.blocklexers.SettingSectionLexer(ctx: LexingContext)[source]
Bases:
SectionLexer
- class robot.parsing.lexer.blocklexers.VariableSectionLexer(ctx: LexingContext)[source]
Bases:
SectionLexer
- class robot.parsing.lexer.blocklexers.TestCaseSectionLexer(ctx: LexingContext)[source]
Bases:
SectionLexer
- class robot.parsing.lexer.blocklexers.TaskSectionLexer(ctx: LexingContext)[source]
Bases:
SectionLexer
- class robot.parsing.lexer.blocklexers.KeywordSectionLexer(ctx: LexingContext)[source]
Bases:
SettingSectionLexer
- class robot.parsing.lexer.blocklexers.CommentSectionLexer(ctx: LexingContext)[source]
Bases:
SectionLexer
- class robot.parsing.lexer.blocklexers.ImplicitCommentSectionLexer(ctx: LexingContext)[source]
Bases:
SectionLexer
- class robot.parsing.lexer.blocklexers.InvalidSectionLexer(ctx: LexingContext)[source]
Bases:
SectionLexer
- class robot.parsing.lexer.blocklexers.TestOrKeywordLexer(ctx: LexingContext)[source]
Bases:
BlockLexer, ABC
- name_type: str
- class robot.parsing.lexer.blocklexers.TestCaseLexer(ctx: SuiteFileContext)[source]
Bases:
TestOrKeywordLexer
- name_type: str = 'TESTCASE NAME'
- class robot.parsing.lexer.blocklexers.KeywordLexer(ctx: FileContext)[source]
Bases:
TestOrKeywordLexer
- name_type: str = 'KEYWORD NAME'
- class robot.parsing.lexer.blocklexers.NestedBlockLexer(ctx: TestCaseContext | KeywordContext)[source]
Bases:
BlockLexer, ABC
- ctx: TestCaseContext | KeywordContext
- class robot.parsing.lexer.blocklexers.ForLexer(ctx: TestCaseContext | KeywordContext)[source]
Bases:
NestedBlockLexer
- class robot.parsing.lexer.blocklexers.WhileLexer(ctx: TestCaseContext | KeywordContext)[source]
Bases:
NestedBlockLexer
- class robot.parsing.lexer.blocklexers.TryLexer(ctx: TestCaseContext | KeywordContext)[source]
Bases:
NestedBlockLexer
- class robot.parsing.lexer.blocklexers.GroupLexer(ctx: TestCaseContext | KeywordContext)[source]
Bases:
NestedBlockLexer
- class robot.parsing.lexer.blocklexers.IfLexer(ctx: TestCaseContext | KeywordContext)[source]
Bases:
NestedBlockLexer
- class robot.parsing.lexer.blocklexers.InlineIfLexer(ctx: TestCaseContext | KeywordContext)[source]
Bases:
NestedBlockLexer
robot.parsing.lexer.context module
- class robot.parsing.lexer.context.LexingContext(settings: Settings, languages: Languages)[source]
Bases:
object
- class robot.parsing.lexer.context.FileContext(lang: Languages | Language | str | Path | Iterable[Language | str | Path] | None = None)[source]
Bases:
LexingContext
- settings: FileSettings
- keyword_context() → KeywordContext [source]
- class robot.parsing.lexer.context.SuiteFileContext(lang: Languages | Language | str | Path | Iterable[Language | str | Path] | None = None)[source]
Bases:
FileContext
- settings: SuiteFileSettings
- test_case_context() → TestCaseContext [source]
- class robot.parsing.lexer.context.ResourceFileContext(lang: Languages | Language | str | Path | Iterable[Language | str | Path] | None = None)[source]
Bases:
FileContext
- settings: ResourceFileSettings
- class robot.parsing.lexer.context.InitFileContext(lang: Languages | Language | str | Path | Iterable[Language | str | Path] | None = None)[source]
Bases:
FileContext
- settings: InitFileSettings
- class robot.parsing.lexer.context.TestCaseContext(settings: TestCaseSettings)[source]
Bases:
LexingContext
- settings: TestCaseSettings
- property template_set: bool
- class robot.parsing.lexer.context.KeywordContext(settings: KeywordSettings)[source]
Bases:
LexingContext
- settings: KeywordSettings
- property template_set: bool
robot.parsing.lexer.lexer module
- robot.parsing.lexer.lexer.get_tokens(source: Path | str | TextIO, data_only: bool = False, tokenize_variables: bool = False, lang: Languages | Language | str | Path | Iterable[Language | str | Path] | None = None) → Iterator[Token] [source]
Parses the given source to tokens.
- Parameters:
- source – The source where to read the data. Can be a path to a source file as a string or as a pathlib.Path object, an already opened file object, or Unicode text containing the data directly. Source files must be UTF-8 encoded.
- data_only – When False (default), returns all tokens. When set to True, omits separators, comments, continuation markers, and other non-data tokens.
- tokenize_variables – When True, possible variables in keyword arguments and elsewhere are tokenized. See the tokenize_variables() method for details.
- lang – Additional languages to be supported during parsing. Can be a string matching any of the supported language codes or names, an initialized Language subclass, a list containing such strings or instances, or a Languages instance.
Returns a generator that yields Token instances.
- robot.parsing.lexer.lexer.get_resource_tokens(source: Path | str | TextIO, data_only: bool = False, tokenize_variables: bool = False, lang: Languages | Language | str | Path | Iterable[Language | str | Path] | None = None) → Iterator[Token] [source]
Parses the given source to resource file tokens.
Same as get_tokens() otherwise, but the source is considered to be a resource file. This affects, for example, what settings are valid.
- robot.parsing.lexer.lexer.get_init_tokens(source: Path | str | TextIO, data_only: bool = False, tokenize_variables: bool = False, lang: Languages | Language | str | Path | Iterable[Language | str | Path] | None = None) → Iterator[Token] [source]
Parses the given source to init file tokens.
Same as get_tokens() otherwise, but the source is considered to be a suite initialization file. This affects, for example, what settings are valid.
robot.parsing.lexer.settings module
- class robot.parsing.lexer.settings.Settings(languages: Languages)[source]
Bases:
ABC
- names: tuple[str, ...] = ()
- aliases: dict[str, str] = {}
- multi_use = ('Metadata', 'Library', 'Resource', 'Variables')
- single_value = ('Resource', 'Test Timeout', 'Test Template', 'Timeout', 'Template', 'Name')
- name_and_arguments = ('Metadata', 'Suite Setup', 'Suite Teardown', 'Test Setup', 'Test Teardown', 'Test Template', 'Setup', 'Teardown', 'Template', 'Resource', 'Variables')
- name_arguments_and_with_name = ('Library',)
- class robot.parsing.lexer.settings.SuiteFileSettings(languages: Languages)[source]
Bases:
FileSettings
- names: tuple[str, ...] = ('Documentation', 'Metadata', 'Name', 'Suite Setup', 'Suite Teardown', 'Test Setup', 'Test Teardown', 'Test Template', 'Test Timeout', 'Test Tags', 'Default Tags', 'Keyword Tags', 'Library', 'Resource', 'Variables')
- aliases: dict[str, str] = {'Force Tags': 'Test Tags', 'Task Setup': 'Test Setup', 'Task Tags': 'Test Tags', 'Task Teardown': 'Test Teardown', 'Task Template': 'Test Template', 'Task Timeout': 'Test Timeout'}
- class robot.parsing.lexer.settings.InitFileSettings(languages: Languages)[source]
Bases:
FileSettings
- names: tuple[str, ...] = ('Documentation', 'Metadata', 'Name', 'Suite Setup', 'Suite Teardown', 'Test Setup', 'Test Teardown', 'Test Timeout', 'Test Tags', 'Keyword Tags', 'Library', 'Resource', 'Variables')
- aliases: dict[str, str] = {'Force Tags': 'Test Tags', 'Task Setup': 'Test Setup', 'Task Tags': 'Test Tags', 'Task Teardown': 'Test Teardown', 'Task Timeout': 'Test Timeout'}
- class robot.parsing.lexer.settings.ResourceFileSettings(languages: Languages)[source]
Bases:
FileSettings
- names: tuple[str, ...] = ('Documentation', 'Keyword Tags', 'Library', 'Resource', 'Variables')
- class robot.parsing.lexer.settings.TestCaseSettings(parent: SuiteFileSettings)[source]
Bases:
Settings
- names: tuple[str, ...] = ('Documentation', 'Tags', 'Setup', 'Teardown', 'Template', 'Timeout')
- property template_set: bool
- class robot.parsing.lexer.settings.KeywordSettings(parent: FileSettings)[source]
Bases:
Settings
- names: tuple[str, ...] = ('Documentation', 'Arguments', 'Setup', 'Teardown', 'Timeout', 'Tags', 'Return')
robot.parsing.lexer.statementlexers module
- class robot.parsing.lexer.statementlexers.Lexer(ctx: LexingContext)[source]
Bases:
ABC
- class robot.parsing.lexer.statementlexers.StatementLexer(ctx: LexingContext)[source]
Bases:
Lexer, ABC
- token_type: str
- class robot.parsing.lexer.statementlexers.SingleType(ctx: LexingContext)[source]
Bases:
StatementLexer, ABC
- class robot.parsing.lexer.statementlexers.TypeAndArguments(ctx: LexingContext)[source]
Bases:
StatementLexer, ABC
- class robot.parsing.lexer.statementlexers.SectionHeaderLexer(ctx: LexingContext)[source]
Bases:
SingleType, ABC
- ctx: FileContext
- class robot.parsing.lexer.statementlexers.SettingSectionHeaderLexer(ctx: LexingContext)[source]
Bases:
SectionHeaderLexer
- token_type: str = 'SETTING HEADER'
- class robot.parsing.lexer.statementlexers.VariableSectionHeaderLexer(ctx: LexingContext)[source]
Bases:
SectionHeaderLexer
- token_type: str = 'VARIABLE HEADER'
- class robot.parsing.lexer.statementlexers.TestCaseSectionHeaderLexer(ctx: LexingContext)[source]
Bases:
SectionHeaderLexer
- token_type: str = 'TESTCASE HEADER'
- class robot.parsing.lexer.statementlexers.TaskSectionHeaderLexer(ctx: LexingContext)[source]
Bases:
SectionHeaderLexer
- token_type: str = 'TASK HEADER'
- class robot.parsing.lexer.statementlexers.KeywordSectionHeaderLexer(ctx: LexingContext)[source]
Bases:
SectionHeaderLexer
- token_type: str = 'KEYWORD HEADER'
- class robot.parsing.lexer.statementlexers.CommentSectionHeaderLexer(ctx: LexingContext)[source]
Bases:
SectionHeaderLexer
- token_type: str = 'COMMENT HEADER'
- class robot.parsing.lexer.statementlexers.InvalidSectionHeaderLexer(ctx: LexingContext)[source]
Bases:
SectionHeaderLexer
- token_type: str = 'INVALID HEADER'
- class robot.parsing.lexer.statementlexers.CommentLexer(ctx: LexingContext)[source]
Bases:
SingleType
- token_type: str = 'COMMENT'
- class robot.parsing.lexer.statementlexers.ImplicitCommentLexer(ctx: LexingContext)[source]
Bases:
CommentLexer
- ctx: FileContext
- class robot.parsing.lexer.statementlexers.SettingLexer(ctx: LexingContext)[source]
Bases:
StatementLexer
- ctx: FileContext
- class robot.parsing.lexer.statementlexers.TestCaseSettingLexer(ctx: LexingContext)[source]
Bases:
StatementLexer
- ctx: TestCaseContext
- class robot.parsing.lexer.statementlexers.KeywordSettingLexer(ctx: LexingContext)[source]
Bases:
StatementLexer
- ctx: KeywordContext
- class robot.parsing.lexer.statementlexers.VariableLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- ctx: FileContext
- token_type: str = 'VARIABLE'
- class robot.parsing.lexer.statementlexers.KeywordCallLexer(ctx: LexingContext)[source]
Bases:
StatementLexer
- ctx: TestCaseContext | KeywordContext
- class robot.parsing.lexer.statementlexers.ForHeaderLexer(ctx: LexingContext)[source]
Bases:
StatementLexer
- separators = ('IN', 'IN RANGE', 'IN ENUMERATE', 'IN ZIP')
- class robot.parsing.lexer.statementlexers.IfHeaderLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'IF'
- class robot.parsing.lexer.statementlexers.InlineIfHeaderLexer(ctx: LexingContext)[source]
Bases:
StatementLexer
- token_type: str = 'INLINE IF'
- class robot.parsing.lexer.statementlexers.ElseIfHeaderLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'ELSE IF'
- class robot.parsing.lexer.statementlexers.ElseHeaderLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'ELSE'
- class robot.parsing.lexer.statementlexers.TryHeaderLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'TRY'
- class robot.parsing.lexer.statementlexers.ExceptHeaderLexer(ctx: LexingContext)[source]
Bases:
StatementLexer
- token_type: str = 'EXCEPT'
- class robot.parsing.lexer.statementlexers.FinallyHeaderLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'FINALLY'
- class robot.parsing.lexer.statementlexers.WhileHeaderLexer(ctx: LexingContext)[source]
Bases:
StatementLexer
- token_type: str = 'WHILE'
- class robot.parsing.lexer.statementlexers.GroupHeaderLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'GROUP'
- class robot.parsing.lexer.statementlexers.EndLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'END'
- class robot.parsing.lexer.statementlexers.VarLexer(ctx: LexingContext)[source]
Bases:
StatementLexer
- token_type: str = 'VAR'
- class robot.parsing.lexer.statementlexers.ReturnLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'RETURN STATEMENT'
- class robot.parsing.lexer.statementlexers.ContinueLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'CONTINUE'
- class robot.parsing.lexer.statementlexers.BreakLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'BREAK'
- class robot.parsing.lexer.statementlexers.SyntaxErrorLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'ERROR'
robot.parsing.lexer.tokenizer module
robot.parsing.lexer.tokens module
- class robot.parsing.lexer.tokens.Token(type: str | None = None, value: str | None = None, lineno: int = -1, col_offset: int = -1, error: str | None = None)[source]
Bases:
object
Token representing a piece of Robot Framework data.
Each token has a type, value, line number, column offset and end column offset in the type, value, lineno, col_offset and end_col_offset attributes, respectively. Tokens representing an error also have their error message in the error attribute.
Token types are declared as class attributes such as SETTING_HEADER and EOL. Values of these constants have changed slightly in Robot Framework 4.0, and they may change again in the future. It is thus safer to use the constants, not their values, when types are needed. For example, use Token(Token.EOL) instead of Token('EOL') and token.type == Token.EOL instead of token.type == 'EOL'.
If value is not given and type is a special marker like IF or EOL, the value is set automatically.
- SETTING_HEADER = 'SETTING HEADER'
- VARIABLE_HEADER = 'VARIABLE HEADER'
- TESTCASE_HEADER = 'TESTCASE HEADER'
- TASK_HEADER = 'TASK HEADER'
- KEYWORD_HEADER = 'KEYWORD HEADER'
- COMMENT_HEADER = 'COMMENT HEADER'
- INVALID_HEADER = 'INVALID HEADER'
- FATAL_INVALID_HEADER = 'FATAL INVALID HEADER'
- TESTCASE_NAME = 'TESTCASE NAME'
- KEYWORD_NAME = 'KEYWORD NAME'
- SUITE_NAME = 'SUITE NAME'
- DOCUMENTATION = 'DOCUMENTATION'
- SUITE_SETUP = 'SUITE SETUP'
- SUITE_TEARDOWN = 'SUITE TEARDOWN'
- METADATA = 'METADATA'
- TEST_SETUP = 'TEST SETUP'
- TEST_TEARDOWN = 'TEST TEARDOWN'
- TEST_TEMPLATE = 'TEST TEMPLATE'
- TEST_TIMEOUT = 'TEST TIMEOUT'
- TEST_TAGS = 'TEST TAGS'
- FORCE_TAGS = 'TEST TAGS'
- DEFAULT_TAGS = 'DEFAULT TAGS'
- KEYWORD_TAGS = 'KEYWORD TAGS'
- LIBRARY = 'LIBRARY'
- RESOURCE = 'RESOURCE'
- VARIABLES = 'VARIABLES'
- SETUP = 'SETUP'
- TEARDOWN = 'TEARDOWN'
- TEMPLATE = 'TEMPLATE'
- TIMEOUT = 'TIMEOUT'
- TAGS = 'TAGS'
- ARGUMENTS = 'ARGUMENTS'
- RETURN = 'RETURN'
- RETURN_SETTING = 'RETURN'
- AS = 'AS'
- WITH_NAME = 'AS'
- NAME = 'NAME'
- VARIABLE = 'VARIABLE'
- ARGUMENT = 'ARGUMENT'
- ASSIGN = 'ASSIGN'
- KEYWORD = 'KEYWORD'
- FOR = 'FOR'
- FOR_SEPARATOR = 'FOR SEPARATOR'
- END = 'END'
- IF = 'IF'
- INLINE_IF = 'INLINE IF'
- ELSE_IF = 'ELSE IF'
- ELSE = 'ELSE'
- TRY = 'TRY'
- EXCEPT = 'EXCEPT'
- FINALLY = 'FINALLY'
- WHILE = 'WHILE'
- VAR = 'VAR'
- RETURN_STATEMENT = 'RETURN STATEMENT'
- CONTINUE = 'CONTINUE'
- BREAK = 'BREAK'
- OPTION = 'OPTION'
- GROUP = 'GROUP'
- SEPARATOR = 'SEPARATOR'
- COMMENT = 'COMMENT'
- CONTINUATION = 'CONTINUATION'
- CONFIG = 'CONFIG'
- EOL = 'EOL'
- EOS = 'EOS'
- ERROR = 'ERROR'
- FATAL_ERROR = 'FATAL ERROR'
- NON_DATA_TOKENS = frozenset({'COMMENT', 'CONTINUATION', 'EOL', 'EOS', 'SEPARATOR'})
- SETTING_TOKENS = frozenset({'ARGUMENTS', 'DEFAULT TAGS', 'DOCUMENTATION', 'KEYWORD TAGS', 'LIBRARY', 'METADATA', 'RESOURCE', 'RETURN', 'SETUP', 'SUITE NAME', 'SUITE SETUP', 'SUITE TEARDOWN', 'TAGS', 'TEARDOWN', 'TEMPLATE', 'TEST SETUP', 'TEST TAGS', 'TEST TEARDOWN', 'TEST TEMPLATE', 'TEST TIMEOUT', 'TIMEOUT', 'VARIABLES'})
- HEADER_TOKENS = frozenset({'COMMENT HEADER', 'INVALID HEADER', 'KEYWORD HEADER', 'SETTING HEADER', 'TASK HEADER', 'TESTCASE HEADER', 'VARIABLE HEADER'})
- ALLOW_VARIABLES = frozenset({'ARGUMENT', 'KEYWORD NAME', 'NAME', 'TESTCASE NAME'})
- type
- value
- lineno
- col_offset
- error
- property end_col_offset: int
- tokenize_variables() → Iterator[Token] [source]
Tokenizes possible variables in token value.
Yields the token itself if the token does not allow variables (see Token.ALLOW_VARIABLES) or its value does not contain variables. Otherwise, yields variable tokens as well as tokens before, after, or between variables so that they have the same type as the original token.
- class robot.parsing.lexer.tokens.EOS(lineno: int = -1, col_offset: int = -1)[source]
Bases:
Token
Token representing end of a statement.