robot.parsing package

Module implementing test data parsing.

Exposed API

The publicly exposed parsing entry points are the following:

- get_tokens(), get_resource_tokens() and get_init_tokens() for parsing data to tokens.
- get_model(), get_resource_model() and get_init_model() for parsing data into a higher level model.
- Token, the class whose instances the token-level parsers produce.
As with the rest of the public API, these functions and classes are also exposed via the robot.api package. When they are used by external code, it is recommended to import them like from robot.api import get_tokens.


The robot.parsing package was totally rewritten in Robot Framework 3.2, and all code using it needs to be updated accordingly. Depending on the use case, it may be possible to instead use the higher level TestSuiteBuilder(), which has only seen minor configuration changes.

Parsing data to tokens

Data can be parsed to tokens by using the get_tokens(), get_resource_tokens() or get_init_tokens() functions, depending on whether the data represents a test case (or task) file, a resource file, or a suite initialization file. In practice, the difference between these functions is which settings and sections are valid.

Typically the data is easier to inspect and modify by using the higher level model discussed in the next section, but in some cases the token stream can be enough. Tokens returned by the aforementioned functions are Token instances, and they have the token type, value, and position easily available as their attributes. Tokens also have a useful string representation, used by the example below:

from robot.api import get_tokens

path = 'example.robot'

for token in get_tokens(path):
    print(repr(token))

If the example.robot file used by the above example contained

*** Test Cases ***
Example
    Keyword    argument

Second example
    Keyword    xxx

*** Keywords ***
Keyword
    [Arguments]    ${arg}
    Log    ${arg}

then the beginning of the output generated by the earlier code would look like this:

Token(TESTCASE_HEADER, '*** Test Cases ***', 1, 0)
Token(EOL, '\n', 1, 18)
Token(EOS, '', 1, 19)
Token(TESTCASE_NAME, 'Example', 2, 0)
Token(EOL, '\n', 2, 7)
Token(EOS, '', 2, 8)
Token(SEPARATOR, '    ', 3, 0)
Token(KEYWORD, 'Keyword', 3, 4)
Token(SEPARATOR, '    ', 3, 11)
Token(ARGUMENT, 'argument', 3, 15)
Token(EOL, '\n', 3, 23)
Token(EOS, '', 3, 24)
Token(EOL, '\n', 4, 0)
Token(EOS, '', 4, 1)

The output shows the token type, value, line number and column offset. The EOL tokens denote the end of a line, and they include the newline character and possible trailing spaces. The EOS tokens denote the end of a logical statement. Typically a single line forms a statement, but when the ... syntax is used for continuation, a statement spans multiple lines. In special cases a single line can also contain multiple statements.

See the documentation of get_tokens() for details about the different ways to specify the data to be parsed, how to control whether all tokens or only data tokens are returned, and whether variables in keyword arguments and elsewhere should be tokenized.

Parsing data to model

Data can be parsed into a higher level model by using the get_model(), get_resource_model(), or get_init_model() functions, depending on the data type, the same way as when parsing data to tokens.

The model is represented as an abstract syntax tree (AST) implemented on top of Python’s standard ast.AST class. The ast module can also be used for inspecting and modifying the model. Most importantly, ast.NodeVisitor and ast.NodeTransformer ease traversing the model as explained in the sections below. The ast.dump() function, or the third-party astpretty module, can be used for debugging:

import ast
import astpretty    # third-party module
from robot.api import get_model

model = get_model('example.robot')
print(ast.dump(model))
print('-' * 72)
astpretty.pprint(model)

Running this code with the example.robot file from the previous section would produce so much output that it is not included here. If you are going to work with Robot Framework’s AST, you are recommended to try this on your own.

The model is built from blocks like File (the root of the model), TestCaseSection, and TestCase implemented in the blocks module and from statements like TestCaseSectionHeader, Documentation, and KeywordCall implemented in the statements module. Both blocks and statements are AST nodes based on ast.AST. Blocks can contain other blocks and statements as child nodes, whereas statements have only tokens. These tokens contain the actual data represented as Token instances.

Inspecting model

The easiest way to inspect what data a model contains is to implement a visitor based on ast.NodeVisitor with visit_NodeName methods as needed. The following example illustrates how to find what tests a certain test case file contains:

import ast
from robot.api import get_model

class TestNamePrinter(ast.NodeVisitor):

    def visit_File(self, node):
        print(f"File '{node.source}' has following tests:")
        # Must call `generic_visit` to visit also child nodes.
        self.generic_visit(node)

    def visit_TestCaseName(self, node):
        print(f"- {node.name} (on line {node.lineno})")

model = get_model('example.robot')
printer = TestNamePrinter()
printer.visit(model)

When the above code is run using the earlier example.robot, the output is this:

File 'example.robot' has following tests:
- Example (on line 2)
- Second example (on line 5)
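The visit_NodeName dispatch used above is standard ast.NodeVisitor behavior, so the mechanism can be tried without Robot Framework at all. This standalone sketch visits function definitions in plain Python source the same way:

```python
import ast

class FunctionNamePrinter(ast.NodeVisitor):

    def __init__(self):
        self.names = []

    def visit_FunctionDef(self, node):
        # Called automatically for every `FunctionDef` node in the tree.
        self.names.append(node.name)
        print(f"- {node.name} (on line {node.lineno})")

printer = FunctionNamePrinter()
printer.visit(ast.parse('def first():\n    pass\n\ndef second():\n    pass\n'))
```

The only Robot Framework specific parts are the node class names used in the method names and the data in the nodes themselves.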

Modifying token values

The model can be modified simply by modifying token values. If changes need to be saved, that is as easy as calling the save() method of the root model object. When just modifying token values, it is still possible to extend ast.NodeVisitor. The next section discusses adding and removing nodes, and then ast.NodeTransformer should be used instead.

Modifications to tokens obviously require finding the tokens to be modified. The first step is finding statements containing the tokens by implementing needed visit_StatementName methods. Then the exact token or tokens can be found using node’s get_token() or get_tokens() methods. If only token values are needed, get_value() or get_values() can be used as a shortcut. First finding statements and then the right tokens is illustrated by this example that renames keywords:

import ast
from robot.api import get_model, Token

class KeywordRenamer(ast.NodeVisitor):

    def __init__(self, old_name, new_name):
        self.old_name = self.normalize(old_name)
        self.new_name = new_name

    def normalize(self, name):
        return name.lower().replace(' ', '').replace('_', '')

    def visit_KeywordName(self, node):
        # Rename keyword definitions.
        if self.normalize(node.name) == self.old_name:
            token = node.get_token(Token.KEYWORD_NAME)
            token.value = self.new_name

    def visit_KeywordCall(self, node):
        # Rename keyword usages.
        if self.normalize(node.keyword) == self.old_name:
            token = node.get_token(Token.KEYWORD)
            token.value = self.new_name

model = get_model('example.robot')
renamer = KeywordRenamer('Keyword', 'New Name')
renamer.visit(model)
model.save()

If you run the above example using the earlier example.robot, you can see that the Keyword keyword has been renamed to New Name. Notice that a real keyword renamer would also need to take into account keywords used in setups, teardowns and templates.

When token values are changed, the column offsets of the other tokens on the same line are likely to be wrong. This does not affect saving the model or other typical usages, but if it is a problem, the caller needs to update the offsets separately.

Adding and removing nodes

Bigger changes to the model are somewhat more complicated than just modifying existing token values. For these kinds of changes, ast.NodeTransformer needs to be used instead of the ast.NodeVisitor used in the earlier examples.

Removing nodes is relatively easy and is accomplished by returning None from visit_NodeName methods. Remember to return the original node, or possibly a replacement node, from all such methods when you do not want a node to be removed.

Adding nodes is unfortunately not supported by the public robot.api interface and the needed block and statement nodes need to be imported via the robot.parsing.model package. That package is considered private and may change in the future. A stable public API can be added, and functionality related to adding nodes improved in general, if there are concrete needs for this kind of advanced usage.

The following example demonstrates both removing and adding nodes. If you run it against the earlier example.robot, you see that the first test gets a new keyword, the second test is removed, and a settings section with documentation is added.

import ast
from robot.api import get_model, Token
from robot.parsing.model import SettingSection, Statement

class TestModifier(ast.NodeTransformer):

    def visit_TestCase(self, node):
        # The matched `TestCase` node is a block with `header` and `body`
        # attributes. `header` is a statement with familiar `get_token` and
        # `get_value` methods for getting certain tokens or their value.
        name = node.header.get_value(Token.TESTCASE_NAME)
        # Returning `None` drops the node altogether i.e. removes this test.
        if name == 'Second example':
            return None
        # Construct a new keyword call statement from tokens.
        new_keyword = Statement.from_tokens([
            Token(Token.SEPARATOR, '    '),
            Token(Token.KEYWORD, 'New Keyword'),
            Token(Token.SEPARATOR, '    '),
            Token(Token.ARGUMENT, 'xxx'),
            Token(Token.EOL, '\n')
        ])
        # Add the keyword call to the test as the second item. `body` is a list.
        node.body.insert(1, new_keyword)
        # No need to call `generic_visit` because we are not modifying child
        # nodes. The node itself must be returned to avoid dropping it.
        return node

    def visit_File(self, node):
        # Create settings section with documentation.
        setting_header = Statement.from_tokens([
            Token(Token.SETTING_HEADER, '*** Settings ***'),
            Token(Token.EOL, '\n')
        ])
        documentation = Statement.from_tokens([
            Token(Token.DOCUMENTATION, 'Documentation'),
            Token(Token.SEPARATOR, '    '),
            Token(Token.ARGUMENT, 'This is getting pretty advanced'),
            Token(Token.EOL, '\n'),
            Token(Token.CONTINUATION, '...'),
            Token(Token.SEPARATOR, '    '),
            Token(Token.ARGUMENT, 'and this API definitely could be better.'),
            Token(Token.EOL, '\n')
        ])
        empty_line = Statement.from_tokens([
            Token(Token.EOL, '\n')
        ])
        body = [documentation, empty_line]
        settings = SettingSection(setting_header, body)
        # Add settings to the beginning of the file.
        node.sections.insert(0, settings)
        # Must call `generic_visit` to visit also child nodes.
        return self.generic_visit(node)

model = get_model('example.robot')
modifier = TestModifier()
modifier.visit(model)
model.save()

Executing model

It is possible to convert a parsed, and possibly modified, model into an executable TestSuite structure by using its from_model() class method. In this case the get_model() function should be given the curdir argument so that the possible ${CURDIR} variable is resolved correctly.

from robot.api import get_model, TestSuite

model = get_model('example.robot', curdir='/home/robot/example')
# modify model as needed
suite = TestSuite.from_model(model)

For more details about executing the created TestSuite object, see the documentation of its run() method. Notice also that if you do not need to modify the parsed model, it is easier to get the executable suite by using the from_file_system() class method.


robot.parsing.suitestructure module

class robot.parsing.suitestructure.SuiteStructure(source=None, init_file=None, children=None)[source]

Bases: object

class robot.parsing.suitestructure.SuiteStructureBuilder(included_extensions=('robot', ), included_suites=None)[source]

Bases: object

ignored_prefixes = ('_', '.')

ignored_dirs = ('CVS',)

class robot.parsing.suitestructure.SuiteStructureVisitor[source]

Bases: object