mistletoe.base_elements
Token
Bases: object
Base class of all mistletoe tokens.
name
Return the name of the element.
to_dict
Convert instantiated attributes to a dict.
walk
Traverse the syntax tree, recursively yielding children.
elements – filter children by certain token names.
depth – The depth to recurse into the tree.
include_self – whether to first yield this element.
Yields a container for each element, with its parent and depth.
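The traversal described above can be sketched as follows. This is a minimal, self-contained illustration of the assumed semantics, not the real implementation; the `WalkItem` and `Node` shapes here are hypothetical stand-ins.

```python
from dataclasses import dataclass
from typing import Iterator, List, Optional


@dataclass
class WalkItem:
    """Hypothetical container for an element, its parent and depth."""
    node: "Node"
    parent: Optional["Node"]
    depth: int


@dataclass
class Node:
    name: str
    children: List["Node"]

    def walk(self, elements=None, depth=None, include_self=False) -> Iterator[WalkItem]:
        """Traverse the tree, yielding WalkItem containers for children."""
        if include_self:
            yield WalkItem(self, None, 0)

        def _recurse(node, current_depth):
            # stop recursing once the requested depth is exceeded
            if depth is not None and current_depth > depth:
                return
            for child in node.children:
                # optionally filter by token name
                if elements is None or child.name in elements:
                    yield WalkItem(child, node, current_depth)
                yield from _recurse(child, current_depth + 1)

        yield from _recurse(self, 1)


tree = Node("doc", [Node("para", [Node("text", [])])])
names = [item.node.name for item in tree.walk()]
# names == ["para", "text"]
```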
expand_spans
Walk through children and process any SpanContainer.
BlockToken
Bases: mistletoe.base_elements.Token
Base class for block-level tokens. Recursively parse inner tokens.
Naming conventions:
lines denotes a list of (possibly unparsed) input lines, and is commonly used as the argument name for constructors.
BlockToken.children is a list of all the inner tokens (thus if a token has a children attribute, it is not a leaf node; if a token calls tokenize_span, it is the boundary between span-level and block-level tokens);
BlockToken.start takes a line from the document as argument, and returns a boolean representing whether that line marks the start of the current token. Every subclass of BlockToken must define a start function (see block_tokenizer.tokenize).
BlockToken.read takes the rest of the lines in the document as an iterator (including the start line), and consumes all the lines that should be read into this token.
By default, reading stops at an empty line.
Note that BlockToken.read returns a token (or None).
If BlockToken.read returns None, the read result is ignored, but the token class is responsible for resetting the iterator to a previous state. See SourceLines.anchor, SourceLines.reset.
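To make the start/read protocol and the anchor/reset interplay concrete, here is a toy tokenizer loop. All class and function bodies are simplified sketches under the assumptions stated in the docs above, not the real `block_tokenizer` code.

```python
class Lines:
    """Minimal stand-in for SourceLines with anchor/reset (assumed API)."""
    def __init__(self, lines):
        self._lines = list(lines)
        self._index = -1
        self._anchor = -1

    def __next__(self):
        if self._index + 1 >= len(self._lines):
            raise StopIteration
        self._index += 1
        return self._lines[self._index]

    def anchor(self):
        self._anchor = self._index

    def reset(self):
        self._index = self._anchor


class Paragraph:
    """Toy block token: reads consecutive non-empty lines."""
    def __init__(self, content):
        self.content = content

    @classmethod
    def start(cls, line):
        return line.strip() != ""

    @classmethod
    def read(cls, lines):
        collected = []
        while True:
            try:
                line = next(lines)
            except StopIteration:
                break
            if line.strip() == "":
                break  # default: stop at an empty line
            collected.append(line)
        return cls("\n".join(collected))


def tokenize(lines, block_classes):
    """Sketch of a block tokenizer: try each class's start(), then read()."""
    tokens = []
    while True:
        lines.anchor()
        try:
            line = next(lines)
        except StopIteration:
            break
        for cls in block_classes:
            if cls.start(line):
                lines.reset()  # read() re-consumes from the start line
                token = cls.read(lines)
                if token is not None:
                    tokens.append(token)
                break
    return tokens


doc = Lines(["first para", "still first", "", "second para"])
tokens = tokenize(doc, [Paragraph])
# tokens[0].content == "first para\nstill first"
```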
start
Takes a line from the document as argument, and returns a boolean representing whether that line marks the start of the current token. Every subclass of BlockToken must define a start function (see block_tokenizer.tokenize_main).
read
Takes the rest of the lines in the document as an iterator (including the start line), and consumes all the lines that should be read into this token.
The default is to stop at an empty line.
SpanToken
Base class for span-level tokens.
pattern – regex pattern to search for.
parse_inner – whether to do a nested parse of the content
parse_group – the group within the pattern match corresponding to the content
precedence – Alter the relative order by which the span token is assessed.
pattern
parse_inner
parse_group
precedence
__init__
Initialise basic span token.
content – raw string content of the token
children – list of child tokens
position – span position within the source text
Take a pattern match and return the instantiated token.
find
Find all tokens matching a pattern in the given string.
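A small sketch of how `pattern`, `parse_group` and `find` might fit together, using a made-up `Strong` token; the semantics here (full pattern locates the token, `parse_group` selects the content group) are assumed from the attribute descriptions above.

```python
import re


class Strong:
    """Hypothetical span token for **bold** text."""
    pattern = re.compile(r"\*\*(.+?)\*\*")
    parse_inner = True   # the content should itself be parsed for nested spans
    parse_group = 1      # group 1 of the match holds the content

    @classmethod
    def find(cls, string):
        """Find all matches of the pattern in the given string."""
        return list(cls.pattern.finditer(string))


matches = Strong.find("some **bold** and **more bold** text")
contents = [m.group(Strong.parse_group) for m in matches]
# contents == ["bold", "more bold"]
```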
SourceLines
A class for storing source lines and tracking current line index.
lines – the source lines
start_line – the position of the initial line within the full source text.
standardize_ends – standardize all lines to end with \n
metadata – any metadata associated with the lines
line_end_pattern
lineno
Return the line number in the source text (taking into account the start_line).
__next__
Progress the line index and return the line.
Raises StopIteration if the end of the source lines is reached.
anchor
Set an anchor for resetting the line index.
reset
Revert the line index to the set anchor (or 0).
peek
Return the next line, if it exists, without advancing the line index.
backstep
Step back the line index by 1.
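The iteration helpers above (`lineno`, `peek`, `backstep`) can be sketched with a minimal class; the exact index bookkeeping here is an assumption based on the method descriptions, not the real `SourceLines`.

```python
class Peekable:
    """Minimal sketch of a SourceLines-like iterator (assumed internals)."""
    def __init__(self, lines, start_line=0):
        self._lines = list(lines)
        self._index = -1
        self.start_line = start_line

    @property
    def lineno(self):
        # line number within the full source text, accounting for start_line
        return self.start_line + self._index + 1

    def __next__(self):
        if self._index + 1 >= len(self._lines):
            raise StopIteration
        self._index += 1
        return self._lines[self._index]

    def peek(self):
        """Next line, if it exists, without advancing the line index."""
        if self._index + 1 < len(self._lines):
            return self._lines[self._index + 1]
        return None

    def backstep(self):
        """Step back the line index by 1."""
        if self._index >= 0:
            self._index -= 1


src = Peekable(["a", "b"], start_line=10)
first = next(src)   # "a"
peeked = src.peek() # "b", index unchanged
src.backstep()
again = next(src)   # "a" again
```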
Position
Dataclass to store positional data of tokens, in relation to the source text.
line_start (int) – Initial line
line_end (int) – Final line (default: None)
uri (str) – The document (default: None)
data (dict) – Any additional data (default: empty dict)
from_source_lines
Create an instance from a SourceLines instance.
By default, the line is taken from lines.lineno
start_line – the index of the start line, if different to lines.lineno
make_loc_str
Create a location string <uri>:<line_start>:<line_end>
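A sketch of the Position fields and the location string, with the field shapes inferred from the parameter list above (the real class may differ, e.g. it appears to use attrs rather than dataclasses):

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Position:
    """Sketch of positional data for a token, relative to the source text."""
    line_start: int
    line_end: Optional[int] = None
    uri: Optional[str] = None
    data: dict = field(default_factory=dict)

    def make_loc_str(self) -> str:
        """Create a location string <uri>:<line_start>:<line_end>."""
        return f"{self.uri}:{self.line_start}:{self.line_end}"


pos = Position(line_start=3, line_end=5, uri="doc.md")
# pos.make_loc_str() == "doc.md:3:5"
```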
SpanContainer
A container for inline span text.
We use it in order to delay the assessment of span text, when parsing a document, so that all link definitions can be gathered first. After the initial block parse, we walk through the document and replace these span containers with the actual span tokens (see block_tokenizer.tokenize_main).
expand
Apply tokenize_span to text.
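The delayed-parse idea can be illustrated with a toy container: raw text is stored during the block parse and only tokenized later, once link definitions are available. The `expand` signature and the tokenizer stand-in below are assumptions for illustration, not the real API.

```python
class DelayedSpan:
    """Hypothetical container that defers span parsing of raw text."""
    def __init__(self, text):
        self.text = text

    def expand(self, tokenize_span):
        """Apply a span tokenizer to the stored text."""
        return tokenize_span(self.text)


def naive_tokenize_span(text):
    # stand-in for the real span tokenizer: just split on whitespace
    return text.split()


container = DelayedSpan("some raw span text")
tokens = container.expand(naive_tokenize_span)
# tokens == ["some", "raw", "span", "text"]
```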