antlr4ts
- Version 0.5.0-alpha.4
- 3.02 MB
- No dependencies
- BSD-3-Clause license
Install
npm i antlr4ts
yarn add antlr4ts
pnpm add antlr4ts
Overview
ANTLR 4 runtime for JavaScript, written in TypeScript
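A typical pipeline wires a character stream into a generated lexer, buffers the tokens, and feeds them to a generated parser. The sketch below is hypothetical: HelloLexer, HelloParser, and the greeting start rule stand in for whatever the antlr4ts tool generates from your grammar; only CharStreams and CommonTokenStream come from the runtime itself.

```typescript
// Sketch only: HelloLexer/HelloParser are hypothetical classes generated by
// the antlr4ts tool from a grammar; they are not part of the runtime.
import { CharStreams, CommonTokenStream } from "antlr4ts";
import { HelloLexer } from "./HelloLexer";
import { HelloParser } from "./HelloParser";

// Standard pipeline: char stream -> lexer -> token stream -> parser.
const inputStream = CharStreams.fromString("hello world");
const lexer = new HelloLexer(inputStream);
const tokenStream = new CommonTokenStream(lexer);
const parser = new HelloParser(tokenStream);

// `greeting` stands in for whatever start rule the grammar defines.
const tree = parser.greeting();
console.log(tree.toStringTree(parser));
```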
Index
Functions
Classes
BufferedTokenStream
- adjustSeekIndex()
- consume()
- fetch()
- fetchedEOF
- fill()
- filterForChannel()
- get()
- getHiddenTokensToLeft()
- getHiddenTokensToRight()
- getRange()
- getText()
- getTextFromRange()
- getTokens()
- index
- LA()
- lazyInit()
- LT()
- mark()
- nextTokenOnChannel()
- p
- previousTokenOnChannel()
- release()
- seek()
- setup()
- size
- sourceName
- sync()
- tokens
- tokenSource
- tryLB()
- tryLT()
DefaultErrorStrategy
- beginErrorCondition()
- constructToken()
- consumeUntil()
- endErrorCondition()
- errorRecoveryMode
- escapeWSAndQuote()
- getErrorRecoverySet()
- getExpectedTokens()
- getMissingSymbol()
- getSymbolText()
- getSymbolType()
- getTokenErrorDisplay()
- inErrorRecoveryMode()
- lastErrorIndex
- lastErrorStates
- nextTokensContext
- nextTokensState
- notifyErrorListeners()
- recover()
- recoverInline()
- reportError()
- reportFailedPredicate()
- reportInputMismatch()
- reportMatch()
- reportMissingToken()
- reportNoViableAlternative()
- reportUnwantedToken()
- reset()
- singleTokenDeletion()
- singleTokenInsertion()
- sync()
Lexer
- channel
- channelNames
- charIndex
- charPositionInLine
- DEFAULT_MODE
- DEFAULT_TOKEN_CHANNEL
- emit()
- emitEOF()
- getAllTokens()
- getCharErrorDisplay()
- getErrorDisplay()
- HIDDEN
- inputStream
- line
- MAX_CHAR_VALUE
- MIN_CHAR_VALUE
- mode()
- modeNames
- more()
- MORE
- nextToken()
- notifyListeners()
- popMode()
- pushMode()
- recover()
- reset()
- skip()
- SKIP
- sourceName
- text
- token
- tokenFactory
- type
Parser
- addContextToParseTree()
- addParseListener()
- buildParseTree
- compileParseTreePattern()
- consume()
- context
- createErrorNode()
- createTerminalNode()
- currentToken
- dumpDFA()
- enterLeftFactoredRule()
- enterOuterAlt()
- enterRecursionRule()
- enterRule()
- errorHandler
- exitRule()
- getATNWithBypassAlts()
- getDFAStrings()
- getErrorListenerDispatch()
- getExpectedTokens()
- getExpectedTokensWithinCurrentRule()
- getInvokingContext()
- getParseListeners()
- getRuleIndex()
- getRuleInvocationStack()
- inContext()
- inputStream
- isExpectedToken()
- isMatchedEOF
- isTrace
- match()
- matchedEOF
- matchWildcard()
- notifyErrorListeners()
- numberOfSyntaxErrors
- parseInfo
- precedence
- precpred()
- pushNewRecursionContext()
- removeParseListener()
- removeParseListeners()
- reset()
- ruleContext
- setProfile()
- sourceName
- tokenFactory
- triggerEnterRuleEvent()
- triggerExitRuleEvent()
- unrollRecursionContexts()
ParserInterpreter
- addDecisionOverride()
- atn
- atnState
- createInterpreterRuleContext()
- enterRecursionRule()
- grammarFileName
- overrideDecision
- overrideDecisionAlt
- overrideDecisionInputIndex
- overrideDecisionReached
- overrideDecisionRoot
- parse()
- pushRecursionContextStates
- recover()
- recoverInline()
- reset()
- rootContext
- ruleNames
- visitDecisionState()
- visitRuleStopState()
- visitState()
- vocabulary
TokenStreamRewriter
- catOpText()
- DEFAULT_PROGRAM_NAME
- delete()
- deleteProgram()
- getKindOfOps()
- getLastRewriteTokenIndex()
- getProgram()
- getText()
- getTokenStream()
- insertAfter()
- insertBefore()
- lastRewriteTokenIndexes
- MIN_TOKEN_INDEX
- PROGRAM_INIT_SIZE
- programs
- reduceToSingleOperationPerIndex()
- replace()
- replaceSingle()
- rollback()
- setLastRewriteTokenIndex()
- tokens
Interfaces
Enums
Namespaces
Functions
function RuleDependency
RuleDependency: ( dependency: DependencySpecification) => ( target: object, propertyKey: PropertyKey, propertyDescriptor: PropertyDescriptor) => void;
Declares a dependency upon a grammar rule, along with a set of zero or more dependent rules.
Version numbers within a grammar should be assigned on a monotonically increasing basis to allow for accurate tracking of dependent rules.
Sam Harwell
function RuleVersion
RuleVersion: ( version: number) => <T extends ParserRuleContext>( target: Parser, propertyKey: PropertyKey, propertyDescriptor: TypedPropertyDescriptor<(...args: any[]) => T>) => void;
Sam Harwell
Classes
class ANTLRInputStream
class ANTLRInputStream implements CharStream {}
Vacuum all input from a Reader/InputStream and then treat it like a char[] buffer. Can also pass in a String or char[] to use. If you need encoding, pass in a stream/reader with the correct encoding.
Deprecated
as of 4.7, please use the CharStreams interface.
constructor
constructor(input: string);
Copy data in string to a local char array
property data
protected data: string;
The data being scanned
property index
readonly index: number;
Return the current input symbol index 0..n where n indicates the last symbol has been read. The index is the index of char to be returned from LA(1).
property n
protected n: number;
How many characters are actually in the buffer
property name
name?: string;
What is name or source of this char stream?
property p
protected p: number;
0..n-1 index into string of next char
property size
readonly size: number;
property sourceName
readonly sourceName: string;
method consume
consume: () => void;
method getText
getText: (interval: Interval) => string;
method LA
LA: (i: number) => number;
method LT
LT: (i: number) => number;
method mark
mark: () => number;
mark/release do nothing; we have entire buffer
method release
release: (marker: number) => void;
method reset
reset: () => void;
Reset the stream so that it's in the same state it was when the object was created *except* the data array is not touched.
method seek
seek: (index: number) => void;
consume() ahead until p==index; can't just set p=index as we must update line and charPositionInLine. If we seek backwards, just set p
method toString
toString: () => string;
class BailErrorStrategy
class BailErrorStrategy extends DefaultErrorStrategy {}
This implementation of ANTLRErrorStrategy responds to syntax errors by immediately canceling the parse operation with a ParseCancellationException. The implementation ensures that the ParserRuleContext#exception field is set for all parse tree nodes that were not completed prior to encountering the error.
This error strategy is useful in the following scenarios.
* **Two-stage parsing:** This error strategy allows the first stage of two-stage parsing to immediately terminate if an error is encountered, and immediately fall back to the second stage. In addition to avoiding wasted work by attempting to recover from errors here, the empty implementation of BailErrorStrategy#sync improves the performance of the first stage.
* **Silent validation:** When syntax errors are not being reported or logged, and the parse result is simply ignored if errors occur, the BailErrorStrategy avoids wasting work on recovering from errors when the result will be ignored either way.
myparser.errorHandler = new BailErrorStrategy();
See Also
Parser.errorHandler
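The two-stage scenario described above can be sketched as follows. This is an illustrative sketch, not the library's prescribed API: the generic start-rule callback stands in for a generated parser's rule method, and the ParseCancellationException import path is an assumption about the package layout.

```typescript
import { BailErrorStrategy, DefaultErrorStrategy, Parser } from "antlr4ts";
// Assumed import path for the cancellation exception in antlr4ts.
import { ParseCancellationException } from "antlr4ts/misc/ParseCancellationException";

// `makeParser` builds a fresh generated parser over the same input;
// `invokeStartRule` calls its start rule (both hypothetical shapes).
function parseTwoStage<TParser extends Parser, TTree>(
  makeParser: () => TParser,
  invokeStartRule: (parser: TParser) => TTree,
): TTree {
  // Stage 1: bail on the first syntax error instead of recovering.
  const first = makeParser();
  first.errorHandler = new BailErrorStrategy();
  try {
    return invokeStartRule(first);
  } catch (e) {
    if (!(e instanceof ParseCancellationException)) {
      throw e; // not a syntax error; rethrow
    }
  }
  // Stage 2: parse again with full error recovery and reporting.
  const second = makeParser();
  second.errorHandler = new DefaultErrorStrategy();
  return invokeStartRule(second);
}
```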
method recover
recover: (recognizer: Parser, e: RecognitionException) => void;
Instead of recovering from exception e, re-throw it wrapped in a ParseCancellationException so it is not caught by the rule function catches. Use Exception#getCause to get the original RecognitionException.
method recoverInline
recoverInline: (recognizer: Parser) => Token;
Make sure we don't attempt to recover inline; if the parser successfully recovers, it won't throw an exception.
method sync
sync: (recognizer: Parser) => void;
Make sure we don't attempt to recover from problems in subrules.
class BufferedTokenStream
class BufferedTokenStream implements TokenStream {}
This implementation of TokenStream loads tokens from a TokenSource on-demand, and places the tokens in a buffer to provide access to any previous token by index.
This token stream ignores the value of Token#getChannel. If your parser requires the token stream to filter tokens to only those on a particular channel, such as Token#DEFAULT_CHANNEL or Token#HIDDEN_CHANNEL, use a filtering token stream such as CommonTokenStream.
constructor
constructor(tokenSource: TokenSource);
property fetchedEOF
protected fetchedEOF: boolean;
Indicates whether the Token#EOF token has been fetched from tokenSource and added to tokens. This field improves performance for the following cases:
* consume: The lookahead check in consume to prevent consuming the EOF symbol is optimized by checking the values of fetchedEOF and p instead of calling LA.
* fetch: The check to prevent adding multiple EOF symbols into tokens is trivial with this field.
property index
readonly index: number;
property p
protected p: number;
The index into tokens of the current token (next token to consume). tokens[p] should be LT(1). This field is set to -1 when the stream is first constructed or when the token source is changed, indicating that the first token has not yet been fetched from the token source. For additional information, see the documentation of IntStream for a description of Initializing Methods.
property size
readonly size: number;
property sourceName
readonly sourceName: string;
property tokens
protected tokens: Token[];
A collection of all tokens fetched from the token source. The list is considered a complete view of the input once fetchedEOF is set to true.
property tokenSource
tokenSource: TokenSource;
method adjustSeekIndex
protected adjustSeekIndex: (i: number) => number;
Allows derived classes to modify the behavior of operations which change the current stream position by adjusting the target token index of a seek operation. The default implementation simply returns i. If an exception is thrown in this method, the current stream index should not be changed.
For example, CommonTokenStream overrides this method to ensure that the seek target is always an on-channel token.
Parameter i
The target token index.
Returns
The adjusted target token index.
method consume
consume: () => void;
method fetch
protected fetch: (n: number) => number;
Add n elements to the buffer.
Returns
The actual number of elements added to the buffer.
method fill
fill: () => void;
Get all tokens from lexer until EOF.
method filterForChannel
protected filterForChannel: ( from: number, to: number, channel: number) => Token[];
method get
get: (i: number) => Token;
method getHiddenTokensToLeft
getHiddenTokensToLeft: (tokenIndex: number, channel?: number) => Token[];
Collect all tokens on specified channel to the left of the current token up until we see a token on Lexer#DEFAULT_TOKEN_CHANNEL. If channel is -1, find any non default channel token.
method getHiddenTokensToRight
getHiddenTokensToRight: (tokenIndex: number, channel?: number) => Token[];
Collect all tokens on specified channel to the right of the current token up until we see a token on Lexer#DEFAULT_TOKEN_CHANNEL or EOF. If channel is -1, find any non default channel token.
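These lookups are how hidden-channel content (such as comments) is recovered around a parsed token. A small hedged sketch; it assumes a grammar whose lexer routes comments to the hidden channel via -> channel(HIDDEN):

```typescript
import { CommonTokenStream, Token } from "antlr4ts";

// Returns the text of hidden-channel tokens (e.g. comments) that follow the
// token at `tokenIndex`. Assumes the lexer sends them to Token.HIDDEN_CHANNEL.
function trailingHiddenText(tokens: CommonTokenStream, tokenIndex: number): string[] {
  tokens.fill(); // ensure the buffer holds all tokens up to EOF
  const hidden = tokens.getHiddenTokensToRight(tokenIndex, Token.HIDDEN_CHANNEL);
  return hidden.map(t => t.text ?? "");
}
```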
method getRange
getRange: (start: number, stop: number) => Token[];
Get all tokens from start..stop inclusively.
method getText
getText: { (): string; (interval: Interval): string; (context: RuleContext): string;};
Get the text of all tokens in this buffer.
method getTextFromRange
getTextFromRange: (start: any, stop: any) => string;
method getTokens
getTokens: { (): Token[]; (start: number, stop: number): Token[]; (start: number, stop: number, types: Set<number>): Token[]; (start: number, stop: number, ttype: number): Token[];};
method LA
LA: (i: number) => number;
method lazyInit
protected lazyInit: () => void;
method LT
LT: (k: number) => Token;
method mark
mark: () => number;
method nextTokenOnChannel
protected nextTokenOnChannel: (i: number, channel: number) => number;
Given a starting index, return the index of the next token on channel. Return i if tokens[i] is on channel. Return the index of the EOF token if there are no tokens on channel between i and EOF.
method previousTokenOnChannel
protected previousTokenOnChannel: (i: number, channel: number) => number;
Given a starting index, return the index of the previous token on channel. Return i if tokens[i] is on channel. Return -1 if there are no tokens on channel between i and 0.
If i specifies an index at or after the EOF token, the EOF token index is returned. This is due to the fact that the EOF token is treated as though it were on every channel.
method release
release: (marker: number) => void;
method seek
seek: (index: number) => void;
method setup
protected setup: () => void;
method sync
protected sync: (i: number) => boolean;
Make sure index i in tokens has a token.
Returns
true if a token is located at index i, otherwise false.
See Also
#get(int i)
method tryLB
protected tryLB: (k: number) => Token | undefined;
method tryLT
tryLT: (k: number) => Token | undefined;
class CodePointBuffer
class CodePointBuffer {}
Wrapper for Uint8Array / Uint16Array / Int32Array.
constructor
constructor(buffer: Uint8Array | Uint16Array | Int32Array, size: number);
property position
position: number;
property remaining
readonly remaining: number;
method array
array: () => Uint8Array | Uint16Array | Int32Array;
method builder
static builder: (initialBufferSize: number) => CodePointBuffer.Builder;
method get
get: (offset: number) => number;
method withArray
static withArray: ( buffer: Uint8Array | Uint16Array | Int32Array) => CodePointBuffer;
class CodePointCharStream
class CodePointCharStream implements CharStream {}
Alternative to ANTLRInputStream which treats the input as a series of Unicode code points, instead of a series of UTF-16 code units.
Use this if you need to parse input which potentially contains Unicode values > U+FFFF.
constructor
protected constructor( array: Uint8Array | Uint16Array | Int32Array, position: number, remaining: number, name: string);
property index
readonly index: number;
property internalStorage
readonly internalStorage: Uint8Array | Uint16Array | Int32Array;
property size
readonly size: number;
property sourceName
readonly sourceName: string;
method consume
consume: () => void;
method fromBuffer
static fromBuffer: { (codePointBuffer: CodePointBuffer): CodePointCharStream; (codePointBuffer: CodePointBuffer, name: string): CodePointCharStream;};
Constructs a CodePointCharStream which provides access to the Unicode code points stored in codePointBuffer.
Constructs a named CodePointCharStream which provides access to the Unicode code points stored in codePointBuffer.
method getText
getText: (interval: Interval) => string;
Return the UTF-16 encoded string for the given interval
method LA
LA: (i: number) => number;
method mark
mark: () => number;
mark/release do nothing; we have entire buffer
method release
release: (marker: number) => void;
method seek
seek: (index: number) => void;
method toString
toString: () => string;
class CommonToken
class CommonToken implements WritableToken {}
constructor
constructor( type: number, text?: string, source?: { source?: TokenSource; stream?: CharStream }, channel?: number, start?: number, stop?: number);
property channel
channel: number;
property charPositionInLine
charPositionInLine: number;
property EMPTY_SOURCE
protected static readonly EMPTY_SOURCE: { source?: TokenSource; stream?: CharStream;};
An empty Tuple2 which is used as the default value of source for tokens that do not have a source.
property index
protected index: number;
This is the backing field for tokenIndex.
property inputStream
readonly inputStream: CharStream;
property line
line: number;
property source
protected source: { source?: TokenSource; stream?: CharStream };
This is the backing field for tokenSource and inputStream.
These properties share a field to reduce the memory footprint of CommonToken. Tokens created by a CommonTokenFactory from the same source and input stream share a reference to the same Tuple2 containing these values.
property start
protected start: number;
This is the backing field for startIndex.
property startIndex
startIndex: number;
property stopIndex
stopIndex: number;
property text
text: string;
property tokenIndex
tokenIndex: number;
property tokenSource
readonly tokenSource: TokenSource;
property type
type: number;
method fromToken
static fromToken: (oldToken: Token) => CommonToken;
Constructs a new CommonToken as a copy of another Token.
If oldToken is also a CommonToken instance, the newly constructed token will share a reference to the text field and the Tuple2 stored in source. Otherwise, text will be assigned from oldToken's text, and source will be constructed from the result of Token#getTokenSource and Token#getInputStream.
Parameter oldToken
The token to copy.
method toString
toString: { (): string; <TSymbol, ATNInterpreter extends ATNSimulator>( recognizer: Recognizer<TSymbol, ATNInterpreter> ): string;};
class CommonTokenFactory
class CommonTokenFactory implements TokenFactory {}
This default implementation of TokenFactory creates CommonToken objects.
constructor
constructor(copyText?: boolean);
Constructs a CommonTokenFactory with the specified value for copyText.
When copyText is false, the DEFAULT instance should be used instead of constructing a new instance.
Parameter copyText
The value for copyText.
property copyText
protected copyText: boolean;
Indicates whether CommonToken#setText should be called after constructing tokens to explicitly set the text. This is useful for cases where the input stream might not be able to provide arbitrary substrings of text from the input after the lexer creates a token (e.g. the implementation of CharStream#getText in UnbufferedCharStream throws an UnsupportedOperationException). Explicitly setting the token text allows Token#getText to be called at any time regardless of the input stream implementation.
The default value is false to avoid the performance and memory overhead of copying text for every token unless explicitly requested.
method create
create: ( source: { source?: TokenSource; stream?: CharStream }, type: number, text: string | undefined, channel: number, start: number, stop: number, line: number, charPositionInLine: number) => CommonToken;
method createSimple
createSimple: (type: number, text: string) => CommonToken;
class CommonTokenStream
class CommonTokenStream extends BufferedTokenStream {}
This class extends BufferedTokenStream with functionality to filter token streams to tokens on a particular channel (tokens where Token#getChannel returns a particular value).
This token stream provides access to all tokens by index or when calling methods like getText. The channel filtering is only used for code accessing tokens via the lookahead methods LA, LT, and LB.
By default, tokens are placed on the default channel (Token#DEFAULT_CHANNEL), but may be reassigned by using the ->channel(HIDDEN) lexer command, or by using an embedded action to call Lexer#setChannel.
Note: lexer rules which use the ->skip lexer command or call Lexer#skip do not produce tokens at all, so input text matched by such a rule will not be available as part of the token stream, regardless of channel.
constructor
constructor(tokenSource: TokenSource, channel?: number);
Constructs a new CommonTokenStream using the specified token source and filtering tokens to the specified channel. Only tokens whose Token#getChannel matches channel or have the Token.type equal to Token#EOF will be returned by the token stream lookahead methods.
Parameter tokenSource
The token source.
Parameter channel
The channel to use for filtering tokens.
property channel
protected channel: number;
Specifies the channel to use for filtering tokens.
The default value is Token#DEFAULT_CHANNEL, which matches the default channel assigned to tokens created by the lexer.
method adjustSeekIndex
protected adjustSeekIndex: (i: number) => number;
method getNumberOfOnChannelTokens
getNumberOfOnChannelTokens: () => number;
Count EOF just once.
method tryLB
protected tryLB: (k: number) => Token | undefined;
method tryLT
tryLT: (k: number) => Token | undefined;
class ConsoleErrorListener
class ConsoleErrorListener implements ANTLRErrorListener<any> {}
Sam Harwell
property INSTANCE
static readonly INSTANCE: ConsoleErrorListener;
Provides a default instance of ConsoleErrorListener.
method syntaxError
syntaxError: <T>( recognizer: Recognizer<T, any>, offendingSymbol: T, line: number, charPositionInLine: number, msg: string, e: RecognitionException | undefined) => void;
class DefaultErrorStrategy
class DefaultErrorStrategy implements ANTLRErrorStrategy {}
This is the default implementation of ANTLRErrorStrategy used for error reporting and recovery in ANTLR parsers.
property errorRecoveryMode
protected errorRecoveryMode: boolean;
Indicates whether the error strategy is currently "recovering from an error". This is used to suppress reporting multiple error messages while attempting to recover from a detected syntax error.
See Also
#inErrorRecoveryMode
property lastErrorIndex
protected lastErrorIndex: number;
The index into the input stream where the last error occurred. This is used to prevent infinite loops where an error is found but no token is consumed during recovery...another error is found, ad nauseum. This is a failsafe mechanism to guarantee that at least one token/tree node is consumed for two errors.
property lastErrorStates
protected lastErrorStates?: IntervalSet;
property nextTokensContext
protected nextTokensContext?: ParserRuleContext;
This field is used to propagate information about the lookahead following the previous match. Since prediction prefers completing the current rule to error recovery efforts, error reporting may occur later than the original point where it was discoverable. The original context is used to compute the true expected sets as though the reporting occurred as early as possible.
property nextTokensState
protected nextTokensState: number;
See Also
#nextTokensContext
method beginErrorCondition
protected beginErrorCondition: (recognizer: Parser) => void;
This method is called to enter error recovery mode when a recognition exception is reported.
Parameter recognizer
the parser instance
method constructToken
protected constructToken: ( tokenSource: TokenSource, expectedTokenType: number, tokenText: string, current: Token) => Token;
method consumeUntil
protected consumeUntil: (recognizer: Parser, set: IntervalSet) => void;
Consume tokens until one matches the given token set.
method endErrorCondition
protected endErrorCondition: (recognizer: Parser) => void;
This method is called to leave error recovery mode after recovering from a recognition exception.
Parameter recognizer
method escapeWSAndQuote
protected escapeWSAndQuote: (s: string) => string;
method getErrorRecoverySet
protected getErrorRecoverySet: (recognizer: Parser) => IntervalSet;
method getExpectedTokens
protected getExpectedTokens: (recognizer: Parser) => IntervalSet;
method getMissingSymbol
protected getMissingSymbol: (recognizer: Parser) => Token;
Conjure up a missing token during error recovery.
The recognizer attempts to recover from single missing symbols. But, actions might refer to that missing symbol. For example, x=ID {f($x);}. The action clearly assumes that there has been an identifier matched previously and that $x points at that token. If that token is missing, but the next token in the stream is what we want we assume that this token is missing and we keep going. Because we have to return some token to replace the missing token, we have to conjure one up. This method gives the user control over the tokens returned for missing tokens. Mostly, you will want to create something special for identifier tokens. For literals such as '{' and ',', the default action in the parser or tree parser works. It simply creates a CommonToken of the appropriate type. The text will be the token. If you change what tokens must be created by the lexer, override this method to create the appropriate tokens.
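As the paragraph above suggests, tailoring the conjured token is done by overriding this method in a DefaultErrorStrategy subclass. A hedged sketch; ID_TYPE is a hypothetical constant standing in for your generated lexer's identifier token type:

```typescript
import { CommonToken, DefaultErrorStrategy, Parser, Token } from "antlr4ts";

// Hypothetical: substitute your generated lexer's identifier token type.
const ID_TYPE = 1;

class PlaceholderErrorStrategy extends DefaultErrorStrategy {
  // Give missing identifiers a distinctive placeholder text so that
  // actions referencing the token's text see something recognizable.
  protected getMissingSymbol(recognizer: Parser): Token {
    const missing = super.getMissingSymbol(recognizer);
    if (missing.type === ID_TYPE && missing instanceof CommonToken) {
      missing.text = "<missing-id>";
    }
    return missing;
  }
}
```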
method getSymbolText
protected getSymbolText: (symbol: Token) => string | undefined;
method getSymbolType
protected getSymbolType: (symbol: Token) => number;
method getTokenErrorDisplay
protected getTokenErrorDisplay: (t: Token | undefined) => string;
How should a token be displayed in an error message? The default is to display just the text, but during development you might want to have a lot of information spit out. Override in that case to use t.toString() (which, for CommonToken, dumps everything about the token). This is better than forcing you to override a method in your token objects because you don't have to go modify your lexer so that it creates a new Java type.
method inErrorRecoveryMode
inErrorRecoveryMode: (recognizer: Parser) => boolean;
method notifyErrorListeners
protected notifyErrorListeners: ( recognizer: Parser, message: string, e: RecognitionException) => void;
method recover
recover: (recognizer: Parser, e: RecognitionException) => void;
method recoverInline
recoverInline: (recognizer: Parser) => Token;
method reportError
reportError: (recognizer: Parser, e: RecognitionException) => void;
method reportFailedPredicate
protected reportFailedPredicate: ( recognizer: Parser, e: FailedPredicateException) => void;
This is called by reportError when the exception is a FailedPredicateException.
Parameter recognizer
the parser instance
Parameter e
the recognition exception
See Also
#reportError
method reportInputMismatch
protected reportInputMismatch: ( recognizer: Parser, e: InputMismatchException) => void;
This is called by reportError when the exception is an InputMismatchException.
Parameter recognizer
the parser instance
Parameter e
the recognition exception
See Also
#reportError
method reportMatch
reportMatch: (recognizer: Parser) => void;
method reportMissingToken
protected reportMissingToken: (recognizer: Parser) => void;
This method is called to report a syntax error which requires the insertion of a missing token into the input stream. At the time this method is called, the missing token has not yet been inserted. When this method returns, recognizer is in error recovery mode.
This method is called when recoverInline identifies single-token insertion as a viable recovery strategy for a mismatched input error.
The default implementation simply returns if the handler is already in error recovery mode. Otherwise, it calls beginErrorCondition to enter error recovery mode, followed by calling Parser#notifyErrorListeners.
Parameter recognizer
Parameter recognizer
the parser instance
method reportNoViableAlternative
protected reportNoViableAlternative: ( recognizer: Parser, e: NoViableAltException) => void;
This is called by reportError when the exception is a NoViableAltException.
Parameter recognizer
the parser instance
Parameter e
the recognition exception
See Also
#reportError
method reportUnwantedToken
protected reportUnwantedToken: (recognizer: Parser) => void;
This method is called to report a syntax error which requires the removal of a token from the input stream. At the time this method is called, the erroneous symbol is the current LT(1) symbol and has not yet been removed from the input stream. When this method returns, recognizer is in error recovery mode.
This method is called when recoverInline identifies single-token deletion as a viable recovery strategy for a mismatched input error.
The default implementation simply returns if the handler is already in error recovery mode. Otherwise, it calls beginErrorCondition to enter error recovery mode, followed by calling Parser#notifyErrorListeners.
Parameter recognizer
the parser instance
method reset
reset: (recognizer: Parser) => void;
method singleTokenDeletion
protected singleTokenDeletion: (recognizer: Parser) => Token | undefined;
This method implements the single-token deletion inline error recovery strategy. It is called by recoverInline to attempt to recover from mismatched input. If this method returns undefined, the parser and error handler state will not have changed. If this method returns non-undefined, recognizer will *not* be in error recovery mode since the returned token was a successful match.
If the single-token deletion is successful, this method calls reportUnwantedToken to report the error, followed by Parser#consume to actually "delete" the extraneous token. Then, before returning, reportMatch is called to signal a successful match.
Parameter recognizer
the parser instance
Returns
the successfully matched Token instance if single-token deletion successfully recovers from the mismatched input, otherwise undefined
method singleTokenInsertion
protected singleTokenInsertion: (recognizer: Parser) => boolean;
This method implements the single-token insertion inline error recovery strategy. It is called by recoverInline if the single-token deletion strategy fails to recover from the mismatched input. If this method returns true, recognizer will be in error recovery mode.
This method determines whether or not single-token insertion is viable by checking if the LA(1) input symbol could be successfully matched if it were instead the LA(2) symbol. If this method returns true, the caller is responsible for creating and inserting a token with the correct type to produce this behavior.
Parameter recognizer
the parser instance
Returns
true if single-token insertion is a viable recovery strategy for the current mismatched input, otherwise false
method sync
sync: (recognizer: Parser) => void;
The default implementation of ANTLRErrorStrategy#sync makes sure that the current lookahead symbol is consistent with what we were expecting at this point in the ATN. You can call this anytime but ANTLR only generates code to check before subrules/loops and each iteration.
Implements Jim Idle's magic sync mechanism in closures and optional subrules. E.g.,
a : sync ( stuff sync )* ;
sync : {consume to what can follow sync} ;
At the start of a sub rule upon error, sync performs single token deletion, if possible. If it can't do that, it bails on the current rule and uses the default error recovery, which consumes until the resynchronization set of the current rule.
If the sub rule is optional ((...)?, (...)*, or a block with an empty alternative), then the expected set includes what follows the subrule.
During loop iteration, it consumes until it sees a token that can start a sub rule or what follows the loop. Yes, that is pretty aggressive. We opt to stay in the loop as long as possible.
**ORIGINS**
Previous versions of ANTLR did a poor job of their recovery within loops. A single mismatched token or missing token would force the parser to bail out of the entire rules surrounding the loop. So, for rule
classDef : 'class' ID '{' member* '}'
input with an extra token between members would force the parser to consume until it found the next class definition rather than the next member definition of the current class.
This functionality cost a little bit of effort because the parser has to compare token set at the start of the loop and at each iteration. If for some reason speed is suffering for you, you can turn off this functionality by simply overriding this method as a blank { }.
class DiagnosticErrorListener
class DiagnosticErrorListener implements ParserErrorListener {}
This implementation of ANTLRErrorListener can be used to identify certain potential correctness and performance problems in grammars. "Reports" are made by calling Parser#notifyErrorListeners with the appropriate message.
* **Ambiguities**: These are cases where more than one path through the grammar can match the input.
* **Weak context sensitivity**: These are cases where full-context prediction resolved an SLL conflict to a unique alternative which equaled the minimum alternative of the SLL conflict.
* **Strong (forced) context sensitivity**: These are cases where the full-context prediction resolved an SLL conflict to a unique alternative, *and* the minimum alternative of the SLL conflict was found to not be a truly viable alternative. Two-stage parsing cannot be used for inputs where this situation occurs.
Sam Harwell
constructor
constructor(exactOnly?: boolean);
Initializes a new instance of DiagnosticErrorListener, specifying whether all ambiguities or only exact ambiguities are reported.
Parameter exactOnly
true to report only exact ambiguities, otherwise false to report all ambiguities. Defaults to true.
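Wiring the listener up during grammar development might look like the following sketch; `parser` stands in for any generated parser instance (hypothetical here):

```typescript
import { DiagnosticErrorListener, Parser } from "antlr4ts";

// `parser` is a hypothetical generated parser instance.
declare const parser: Parser;

// Passing `false` reports all ambiguities; the default (`true`) reports
// only exact ambiguities.
parser.addErrorListener(new DiagnosticErrorListener(false));
```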
property exactOnly
protected exactOnly: boolean;
method getConflictingAlts
protected getConflictingAlts: ( reportedAlts: BitSet | undefined, configs: ATNConfigSet) => BitSet;
Computes the set of conflicting or ambiguous alternatives from a configuration set, if that information was not already provided by the parser.
Parameter reportedAlts
The set of conflicting or ambiguous alternatives, as reported by the parser.
Parameter configs
The conflicting or ambiguous configuration set.
Returns
reportedAlts if it is not undefined, otherwise returns the set of alternatives represented in configs.
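The documented fallback can be sketched like this. BitSet and ATNConfigSet here are simplified stand-ins for the real antlr4ts types, used only to show the "use the reported set, else collect the alternatives from the configs" logic.

```typescript
// Simplified stand-ins; the real antlr4ts BitSet and ATNConfigSet
// are richer types.
type BitSet = Set<number>;
interface ATNConfig { alt: number; }
interface ATNConfigSet { configs: ATNConfig[]; }

function getConflictingAlts(
  reportedAlts: BitSet | undefined,
  configs: ATNConfigSet,
): BitSet {
  if (reportedAlts !== undefined) {
    return reportedAlts; // the parser already computed the set
  }
  // Otherwise collect every alternative represented in the configuration set.
  const result: BitSet = new Set<number>();
  for (const config of configs.configs) {
    result.add(config.alt);
  }
  return result;
}
```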
method getDecisionDescription
protected getDecisionDescription: (recognizer: Parser, dfa: DFA) => string;
method reportAmbiguity
reportAmbiguity: ( recognizer: Parser, dfa: DFA, startIndex: number, stopIndex: number, exact: boolean, ambigAlts: BitSet | undefined, configs: ATNConfigSet) => void;
method reportAttemptingFullContext
reportAttemptingFullContext: ( recognizer: Parser, dfa: DFA, startIndex: number, stopIndex: number, conflictingAlts: BitSet | undefined, conflictState: SimulatorState) => void;
method reportContextSensitivity
reportContextSensitivity: ( recognizer: Parser, dfa: DFA, startIndex: number, stopIndex: number, prediction: number, acceptState: SimulatorState) => void;
method syntaxError
syntaxError: <T extends Token>( recognizer: Recognizer<T, any>, offendingSymbol: T | undefined, line: number, charPositionInLine: number, msg: string, e: RecognitionException | undefined) => void;
class FailedPredicateException
class FailedPredicateException extends RecognitionException {}
A semantic predicate failed during validation. Validation of predicates occurs when normally parsing the alternative just like matching a token. Disambiguating predicate evaluation occurs when we test a predicate during prediction.
constructor
constructor(recognizer: Parser, predicate?: string, message?: string);
property predicate
readonly predicate: string;
property predicateIndex
readonly predicateIndex: number;
property ruleIndex
readonly ruleIndex: number;
class InputMismatchException
class InputMismatchException extends RecognitionException {}
This signifies any kind of mismatched input exceptions such as when the current input does not match the expected token.
constructor
constructor(recognizer: Parser);
constructor
constructor(recognizer: Parser, state: number, context: ParserRuleContext);
class InterpreterRuleContext
class InterpreterRuleContext extends ParserRuleContext {}
This class extends ParserRuleContext by allowing the value of ruleIndex to be explicitly set for the context.
ParserRuleContext does not include field storage for the rule index since the context classes created by the code generator override ruleIndex to return the correct value for that context. Since the parser interpreter does not use the context classes generated for a parser, this class (with slightly more memory overhead per node) is used to provide equivalent functionality.
constructor
constructor(ruleIndex: number);
constructor
constructor( ruleIndex: number, parent: ParserRuleContext, invokingStateNumber: number);
Constructs a new InterpreterRuleContext with the specified parent, invoking state, and rule index.
Parameter ruleIndex
The rule index for the current context.
Parameter parent
The parent context.
Parameter invokingStateNumber
The invoking state number.
property ruleIndex
readonly ruleIndex: number;
class Lexer
abstract class Lexer extends Recognizer<number, LexerATNSimulator> implements TokenSource {}
A lexer is a recognizer that draws input symbols from a character stream. Lexer grammars result in a subclass of this object. A Lexer object uses simplified match() and error recovery mechanisms in the interest of speed.
constructor
constructor(input: CharStream);
property channel
channel: number;
property channelNames
abstract readonly channelNames: string[];
property charIndex
readonly charIndex: number;
The index of the current character of lookahead.
property charPositionInLine
charPositionInLine: number;
property DEFAULT_MODE
static readonly DEFAULT_MODE: number;
property DEFAULT_TOKEN_CHANNEL
static readonly DEFAULT_TOKEN_CHANNEL: number;
property HIDDEN
static readonly HIDDEN: number;
property inputStream
inputStream: CharStream;
property line
line: number;
property MAX_CHAR_VALUE
static readonly MAX_CHAR_VALUE: number;
property MIN_CHAR_VALUE
static readonly MIN_CHAR_VALUE: number;
property modeNames
abstract readonly modeNames: string[];
property MORE
static readonly MORE: number;
property SKIP
static readonly SKIP: number;
property sourceName
readonly sourceName: string;
property text
text: string;
Return the text matched so far for the current token or any text override.
property token
token: Token;
Override if emitting multiple tokens.
property tokenFactory
tokenFactory: TokenFactory;
property type
type: number;
method emit
emit: { (token: Token): Token; (): Token };
The standard method called to automatically emit a token at the outermost lexical rule. The token object should point into the char buffer start..stop. If there is a text override in 'text', use that to set the token's text. Override this method to emit custom Token objects or provide a new factory.
By default does not support multiple emits per nextToken invocation for efficiency reasons. Subclass and override this method, nextToken, and getToken (to push tokens into a list and pull from that list rather than a single variable as this implementation does).
method emitEOF
emitEOF: () => Token;
method getAllTokens
getAllTokens: () => Token[];
Return a list of all Token objects in input char stream. Forces load of all tokens. Does not include EOF token.
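The loop getAllTokens implements can be sketched against any token source. Tok and the nextToken callback here are simplified stand-ins for the real Token interface and TokenSource, used only to show the pull-until-EOF behavior.

```typescript
const EOF = -1; // mirrors Token.EOF

interface Tok { type: number; }

// Simplified sketch of the documented behavior: force-load tokens by
// pulling from a source until EOF, excluding the EOF token itself.
function collectAllTokens(nextToken: () => Tok): Tok[] {
  const tokens: Tok[] = [];
  for (let t = nextToken(); t.type !== EOF; t = nextToken()) {
    tokens.push(t);
  }
  return tokens;
}
```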
method getCharErrorDisplay
getCharErrorDisplay: (c: number) => string;
method getErrorDisplay
getErrorDisplay: (s: string | number) => string;
method mode
mode: (m: number) => void;
method more
more: () => void;
method nextToken
nextToken: () => Token;
Return a token from this source; i.e., match a token on the char stream.
method notifyListeners
notifyListeners: (e: LexerNoViableAltException) => void;
method popMode
popMode: () => number;
method pushMode
pushMode: (m: number) => void;
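pushMode and popMode maintain a simple stack of lexer modes: pushMode saves the current mode and switches to a new one, and popMode restores the previous one (throwing if the stack is empty). ModeStack below is a self-contained sketch of that behavior, not the antlr4ts implementation.

```typescript
// Illustrative mock of the lexer mode stack; not the real Lexer class.
class ModeStack {
  private stack: number[] = [];
  private current = 0; // Lexer.DEFAULT_MODE is 0

  get mode(): number { return this.current; }

  pushMode(m: number): void {
    this.stack.push(this.current); // remember where we came from
    this.current = m;
  }

  popMode(): number {
    const prev = this.stack.pop();
    if (prev === undefined) {
      throw new Error("cannot popMode from an empty mode stack");
    }
    this.current = prev;
    return this.current; // returns the mode we popped back to
  }
}
```

In a lexer grammar this is what `pushMode(STRING_MODE)` on a `"` token and `popMode` on the closing `"` rely on.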
method recover
recover: { (re: RecognitionException): void; (re: LexerNoViableAltException): void;};
Lexers can normally match any char in their vocabulary after matching a token, so do the easy thing and just kill a character and hope it all works out. You can instead use the rule invocation stack to do sophisticated error recovery if you are in a fragment rule.
method reset
reset: { (): void; (resetInput: boolean): void };
method skip
skip: () => void;
Instructs the lexer to skip creating a token for the current lexer rule and look for another token. nextToken() knows to keep looking when a lexer rule finishes with token set to SKIP_TOKEN. Recall that if token==undefined at the end of any token rule, it creates one for you and emits it.
class LexerInterpreter
class LexerInterpreter extends Lexer {}
constructor
constructor( grammarFileName: string, vocabulary: Vocabulary, ruleNames: string[], channelNames: string[], modeNames: string[], atn: ATN, input: CharStream);
property atn
readonly atn: ATN;
property channelNames
readonly channelNames: string[];
property grammarFileName
readonly grammarFileName: string;
property modeNames
readonly modeNames: string[];
property ruleNames
readonly ruleNames: string[];
property vocabulary
readonly vocabulary: Vocabulary;
class LexerNoViableAltException
class LexerNoViableAltException extends RecognitionException {}
constructor
constructor( lexer: Lexer, input: CharStream, startIndex: number, deadEndConfigs: ATNConfigSet);
property deadEndConfigs
readonly deadEndConfigs: ATNConfigSet;
property inputStream
readonly inputStream: CharStream;
property startIndex
readonly startIndex: number;
method toString
toString: () => string;
class ListTokenSource
class ListTokenSource implements TokenSource {}
Provides an implementation of TokenSource as a wrapper around a list of Token objects.
If the final token in the list is a Token#EOF token, it will be used as the EOF token for every call to nextToken after the end of the list is reached. Otherwise, an EOF token will be created.
constructor
constructor(tokens: Token[], sourceName?: string);
Constructs a new ListTokenSource instance from the specified collection of Token objects and source name.
Parameter tokens
The collection of Token objects to provide as a TokenSource.
Parameter sourceName
The name of the TokenSource. If this value is undefined, the source will attempt to infer the name from the next Token (or the previous token if the end of the input has been reached).
Throws
NullPointerException if tokens is undefined
property charPositionInLine
readonly charPositionInLine: number;
property eofToken
protected eofToken?: Token;
This field caches the EOF token for the token source.
property i
protected i: number;
The index into tokens of the token to return by the next call to nextToken. The end of the input is indicated by this value being greater than or equal to the number of items in tokens.
property inputStream
readonly inputStream: CharStream;
property line
readonly line: number;
property sourceName
readonly sourceName: string;
property tokenFactory
tokenFactory: TokenFactory;
property tokens
protected tokens: Token[];
The wrapped collection of Token objects to return.
method nextToken
nextToken: () => Token;
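The EOF handling described above can be sketched with a simplified token shape. SimpleToken and SimpleListTokenSource are illustrative mocks, not the real antlr4ts classes (the real Token interface carries line, channel, indexes, and more).

```typescript
const EOF = -1; // mirrors Token.EOF

interface SimpleToken { type: number; text: string; }

// Mock of the documented ListTokenSource behavior: serve tokens from a
// list, then keep serving a single cached EOF token forever after.
class SimpleListTokenSource {
  private i = 0; // index of the next token to return
  private eofToken?: SimpleToken; // cached EOF token

  constructor(private tokens: SimpleToken[]) {}

  nextToken(): SimpleToken {
    if (this.i < this.tokens.length) {
      return this.tokens[this.i++];
    }
    // Past the end: reuse a trailing EOF token if the list ended with
    // one, otherwise create (and cache) a synthetic EOF token.
    if (this.eofToken === undefined) {
      const last = this.tokens[this.tokens.length - 1];
      this.eofToken =
        last !== undefined && last.type === EOF
          ? last
          : { type: EOF, text: "<EOF>" };
    }
    return this.eofToken;
  }
}
```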
class NoViableAltException
class NoViableAltException extends RecognitionException {}
Indicates that the parser could not decide which of two or more paths to take based upon the remaining input. It tracks the starting token of the offending input and also knows where the parser was in the various paths when the error occurred. Reported by reportNoViableAlternative()
constructor
constructor(recognizer: Parser);
constructor
constructor( recognizer: Recognizer<Token, any>, input: TokenStream, startToken: Token, offendingToken: Token, deadEndConfigs: ATNConfigSet, ctx: ParserRuleContext);
property deadEndConfigs
readonly deadEndConfigs: ATNConfigSet;
property startToken
readonly startToken: Token;
class Parser
abstract class Parser extends Recognizer<Token, ParserATNSimulator> {}
This is all the parsing support code essentially; most of it is error recovery stuff.
constructor
constructor(input: TokenStream);
property buildParseTree
buildParseTree: boolean;
Gets whether or not a complete parse tree will be constructed while parsing. This property is true for a newly constructed parser.
Returns
true if a complete parse tree will be constructed while parsing, otherwise false
property context
context: ParserRuleContext;
property currentToken
readonly currentToken: Token;
Match needs to return the current input symbol, which gets put into the label for the associated token ref; e.g., x=ID.
property errorHandler
errorHandler: ANTLRErrorStrategy;
property inputStream
inputStream: TokenStream;
property isMatchedEOF
readonly isMatchedEOF: boolean;
property isTrace
isTrace: boolean;
Gets whether a TraceListener is registered as a parse listener for the parser.
property matchedEOF
protected matchedEOF: boolean;
Indicates the parser has match()ed the EOF token.
property numberOfSyntaxErrors
readonly numberOfSyntaxErrors: number;
Gets the number of syntax errors reported during parsing. This value is incremented each time notifyErrorListeners is called.
See Also
#notifyErrorListeners
property parseInfo
readonly parseInfo: Promise<ParseInfo>;
property precedence
readonly precedence: number;
Get the precedence level for the top-most precedence rule.
Returns
The precedence level for the top-most precedence rule, or -1 if the parser context is not nested within a precedence rule.
property ruleContext
readonly ruleContext: ParserRuleContext;
property sourceName
readonly sourceName: string;
property tokenFactory
readonly tokenFactory: TokenFactory;
method addContextToParseTree
protected addContextToParseTree: () => void;
method addParseListener
addParseListener: (listener: ParseTreeListener) => void;
Registers listener to receive events during the parsing process.
To support output-preserving grammar transformations (including but not limited to left-recursion removal, automated left-factoring, and optimized code generation), calls to listener methods during the parse may differ substantially from calls made by ParseTreeWalker#DEFAULT used after the parse is complete. In particular, rule entry and exit events may occur in a different order during the parse than after the parser. In addition, calls to certain rule entry methods may be omitted.
With the following specific exceptions, calls to listener events are *deterministic*, i.e. for identical input the calls to listener methods will be the same.
- Alterations to the grammar used to generate code may change the behavior of the listener calls.
- Alterations to the command line options passed to ANTLR 4 when generating the parser may change the behavior of the listener calls.
- Changing the version of the ANTLR Tool used to generate the parser may change the behavior of the listener calls.
Parameter listener
the listener to add
Throws
TypeError if listener is undefined
method compileParseTreePattern
compileParseTreePattern: { (pattern: string, patternRuleIndex: number): Promise<ParseTreePattern>; ( pattern: string, patternRuleIndex: number, lexer?: Lexer ): Promise<ParseTreePattern>;};
The preferred method of getting a tree pattern. For example, here's a sample use:
let t: ParseTree = parser.expr();
let p: ParseTreePattern = await parser.compileParseTreePattern("<ID>+0", MyParser.RULE_expr);
let m: ParseTreeMatch = p.match(t);
let id: string = m.get("ID");

The second overload is the same, but specifies a Lexer rather than trying to deduce it from this parser.
method consume
consume: () => Token;
Consume and return the current symbol (currentToken).
E.g., given the following input with A being the current lookahead symbol, this function moves the cursor to B and returns A.

A B
^

If the parser is not in error recovery mode, the consumed symbol is added to the parse tree using createTerminalNode, and ParseTreeListener#visitTerminal is called on any parse listeners. If the parser *is* in error recovery mode, the consumed symbol is added to the parse tree using createErrorNode then addErrorNode, and ParseTreeListener#visitErrorNode is called on any parse listeners.
method createErrorNode
createErrorNode: (parent: ParserRuleContext, t: Token) => ErrorNode;
How to create an error node, given a token, associated with a parent. Typically, the error node to create is not a function of the parent.
4.7
method createTerminalNode
createTerminalNode: (parent: ParserRuleContext, t: Token) => TerminalNode;
How to create a token leaf node associated with a parent. Typically, the terminal node to create is not a function of the parent.
4.7
method dumpDFA
dumpDFA: () => void;
For debugging and other purposes.
method enterLeftFactoredRule
enterLeftFactoredRule: ( localctx: ParserRuleContext, state: number, ruleIndex: number) => void;
method enterOuterAlt
enterOuterAlt: (localctx: ParserRuleContext, altNum: number) => void;
method enterRecursionRule
enterRecursionRule: ( localctx: ParserRuleContext, state: number, ruleIndex: number, precedence: number) => void;
method enterRule
enterRule: ( localctx: ParserRuleContext, state: number, ruleIndex: number) => void;
Always called by generated parsers upon entry to a rule. Access the context property to get the current context.
method exitRule
exitRule: () => void;
method getATNWithBypassAlts
getATNWithBypassAlts: () => ATN;
The ATN with bypass alternatives is expensive to create so we create it lazily.
Throws an error if the current parser does not implement the serializedATN property.
method getDFAStrings
getDFAStrings: () => string[];
For debugging and other purposes.
method getErrorListenerDispatch
getErrorListenerDispatch: () => ParserErrorListener;
method getExpectedTokens
getExpectedTokens: () => IntervalSet;
Computes the set of input symbols which could follow the current parser state and context, as given by the state and context properties, respectively.
See Also
ATN#getExpectedTokens(int, RuleContext)
method getExpectedTokensWithinCurrentRule
getExpectedTokensWithinCurrentRule: () => IntervalSet;
method getInvokingContext
getInvokingContext: (ruleIndex: number) => ParserRuleContext | undefined;
method getParseListeners
getParseListeners: () => ParseTreeListener[];
method getRuleIndex
getRuleIndex: (ruleName: string) => number;
Get a rule's index (i.e., the RULE_ruleName field) or -1 if not found.
method getRuleInvocationStack
getRuleInvocationStack: (ctx?: RuleContext) => string[];
Return List<String> of the rule names in your parser instance leading up to a call to the current rule. You could override if you want more details such as the file/line info of where in the ATN a rule is invoked.
This is very useful for error messages.
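The walk described above amounts to following parent links and mapping rule indexes to names. A minimal sketch with a mock context type (Ctx is not the real RuleContext, which carries much more):

```typescript
// Mock context node; the real RuleContext is far richer.
interface Ctx { ruleIndex: number; parent?: Ctx; }

// Collect rule names from the given context up to the root,
// innermost rule first.
function ruleInvocationStack(ruleNames: string[], ctx: Ctx | undefined): string[] {
  const stack: string[] = [];
  for (let p = ctx; p !== undefined; p = p.parent) {
    // Mirror the real method's habit of reporting "n/a" for invalid indexes.
    const ix = p.ruleIndex;
    stack.push(ix < 0 ? "n/a" : ruleNames[ix]);
  }
  return stack;
}
```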
method inContext
inContext: (context: string) => boolean;
method isExpectedToken
isExpectedToken: (symbol: number) => boolean;
Checks whether or not symbol can follow the current state in the ATN. The behavior of this method is equivalent to the following, but is implemented such that the complete context-sensitive follow set does not need to be explicitly constructed.

return getExpectedTokens().contains(symbol);

Parameter symbol
the symbol type to check
Returns
true if symbol can follow the current state in the ATN, otherwise false.
method match
match: (ttype: number) => Token;
Match current input symbol against ttype. If the symbol type matches, ANTLRErrorStrategy#reportMatch and consume are called to complete the match process.
If the symbol type does not match, ANTLRErrorStrategy#recoverInline is called on the current error strategy to attempt recovery. If buildParseTree is true and the token index of the symbol returned by ANTLRErrorStrategy#recoverInline is -1, the symbol is added to the parse tree by calling createErrorNode then addErrorNode.
Parameter ttype
the token type to match
Returns
the matched symbol
Throws an error if the current input symbol did not match ttype and the error strategy could not recover from the mismatched symbol
method matchWildcard
matchWildcard: () => Token;
Match current input symbol as a wildcard. If the symbol type matches (i.e. has a value greater than 0), ANTLRErrorStrategy#reportMatch and consume are called to complete the match process.
If the symbol type does not match, ANTLRErrorStrategy#recoverInline is called on the current error strategy to attempt recovery. If buildParseTree is true and the token index of the symbol returned by ANTLRErrorStrategy#recoverInline is -1, the symbol is added to the parse tree by calling createErrorNode then addErrorNode.
Returns
the matched symbol
Throws an error if the current input symbol did not match a wildcard and the error strategy could not recover from the mismatched symbol
method notifyErrorListeners
notifyErrorListeners: { (msg: string): void; (msg: string, offendingToken: Token, e: RecognitionException): void;};
method precpred
precpred: (localctx: RuleContext, precedence: number) => boolean;
method pushNewRecursionContext
pushNewRecursionContext: ( localctx: ParserRuleContext, state: number, ruleIndex: number) => void;
Like enterRule but for recursive rules. Make the current context the child of the incoming localctx.
method removeParseListener
removeParseListener: (listener: ParseTreeListener) => void;
Remove listener from the list of parse listeners.
If listener is undefined or has not been added as a parse listener, this method does nothing.
Parameter listener
the listener to remove
See Also
#addParseListener
method removeParseListeners
removeParseListeners: () => void;
Remove all parse listeners.
See Also
#addParseListener
method reset
reset: { (): void; (resetInput: boolean): void };
Reset the parser's state.
method setProfile
setProfile: (profile: boolean) => Promise<void>;
4.3
method triggerEnterRuleEvent
protected triggerEnterRuleEvent: () => void;
Notify any parse listeners of an enter rule event.
See Also
#addParseListener
method triggerExitRuleEvent
protected triggerExitRuleEvent: () => void;
Notify any parse listeners of an exit rule event.
See Also
#addParseListener
method unrollRecursionContexts
unrollRecursionContexts: (_parentctx: ParserRuleContext) => void;
class ParserInterpreter
class ParserInterpreter extends Parser {}
A parser simulator that mimics what ANTLR's generated parser code does. A ParserATNSimulator is used to make predictions via adaptivePredict but this class moves a pointer through the ATN to simulate parsing. ParserATNSimulator just makes us efficient rather than having to backtrack, for example.
This properly creates parse trees even for left recursive rules.
We rely on the left recursive rule invocation and special predicate transitions to make left recursive rules work.
See TestParserInterpreter for examples.
constructor
constructor(old: ParserInterpreter);
A copy constructor that creates a new parser interpreter by reusing the fields of a previous interpreter.
Parameter old
The interpreter to copy
4.5
constructor
constructor( grammarFileName: string, vocabulary: Vocabulary, ruleNames: string[], atn: ATN, input: TokenStream);
property atn
readonly atn: ATN;
property atnState
readonly atnState: ATNState;
property grammarFileName
readonly grammarFileName: string;
property overrideDecision
protected overrideDecision: number;
We need a map from (decision,inputIndex)->forced alt for computing ambiguous parse trees. For now, we allow exactly one override.
property overrideDecisionAlt
protected overrideDecisionAlt: number;
property overrideDecisionInputIndex
protected overrideDecisionInputIndex: number;
property overrideDecisionReached
protected overrideDecisionReached: boolean;
property overrideDecisionRoot
readonly overrideDecisionRoot: InterpreterRuleContext;
property pushRecursionContextStates
protected pushRecursionContextStates: BitSet;
This identifies StarLoopEntryState's that begin the (...)* precedence loops of left recursive rules.
property rootContext
readonly rootContext: InterpreterRuleContext;
Return the root of the parse, which can be useful if the parser bails out. You still can access the top node. Note that, because of the way left recursive rules add children, it's possible that the root will not have any children if the start rule immediately called a left recursive rule that fails.
4.5.1
property ruleNames
readonly ruleNames: string[];
property vocabulary
readonly vocabulary: Vocabulary;
method addDecisionOverride
addDecisionOverride: ( decision: number, tokenIndex: number, forcedAlt: number) => void;
Override this parser interpreter's normal decision-making process at a particular decision and input token index. Instead of allowing the adaptive prediction mechanism to choose the first alternative within a block that leads to a successful parse, force it to take the specified alternative, 1..n for n alternatives.
As an implementation limitation right now, you can only specify one override. This is sufficient to allow construction of different parse trees for ambiguous input. It means re-parsing the entire input in general because you're never sure where an ambiguous sequence would live in the various parse trees. For example, in one interpretation, an ambiguous input sequence would be matched completely in expression but in another it could match all the way back to the root.
s : e '!'? ; e : ID | ID '!' ;
Here, x! can be matched as (s (e ID) !) or (s (e ID !)). In the first case, the ambiguous sequence is fully contained only by the root. In the second case, the ambiguous sequence is fully contained within just e, as in: (e ID !).
Rather than trying to optimize this and make some intelligent decisions for optimization purposes, I settled on just re-parsing the whole input and then using Trees#getRootOfSubtreeEnclosingRegion to find the minimal subtree that contains the ambiguous sequence. I originally tried to record the call stack at the point the parser detected an ambiguity, but left recursive rules create a parse tree stack that does not reflect the actual call stack. That impedance mismatch was enough to make it challenging to restart the parser at a deeply nested rule invocation.
Only parser interpreters can override decisions so as to avoid inserting override checking code in the critical ALL(*) prediction execution path.
4.5
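The single-override limitation described above reads naturally as a (decision, inputIndex) → forcedAlt record that prediction consults before deciding. DecisionOverride below is a mock of that idea, not the interpreter's actual implementation, though the real class stores overrideDecision, overrideDecisionInputIndex, and overrideDecisionAlt in much the same shape.

```typescript
// Mock of the documented single-override mechanism; not the real
// ParserInterpreter fields or logic.
class DecisionOverride {
  private decision = -1;
  private inputIndex = -1;
  private forcedAlt = -1;

  // As documented, exactly one override may be active at a time;
  // calling this again simply replaces the previous override.
  addDecisionOverride(decision: number, tokenIndex: number, forcedAlt: number): void {
    this.decision = decision;
    this.inputIndex = tokenIndex;
    this.forcedAlt = forcedAlt;
  }

  // Returns the forced alternative at the overridden point, or
  // undefined to let adaptive prediction decide as usual.
  check(decision: number, tokenIndex: number): number | undefined {
    if (decision === this.decision && tokenIndex === this.inputIndex) {
      return this.forcedAlt;
    }
    return undefined;
  }
}
```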
method createInterpreterRuleContext
protected createInterpreterRuleContext: ( parent: ParserRuleContext | undefined, invokingStateNumber: number, ruleIndex: number) => InterpreterRuleContext;
Provide a simple "factory" for InterpreterRuleContexts.
4.5.1
method enterRecursionRule
enterRecursionRule: ( localctx: ParserRuleContext, state: number, ruleIndex: number, precedence: number) => void;
method parse
parse: (startRuleIndex: number) => ParserRuleContext;
Begin parsing at startRuleIndex
method recover
protected recover: (e: RecognitionException) => void;
Rely on the error handler for this parser but, if no tokens are consumed to recover, add an error node. Otherwise, nothing is seen in the parse tree.
method recoverInline
protected recoverInline: () => Token;
method reset
reset: (resetInput?: boolean) => void;
method visitDecisionState
protected visitDecisionState: (p: DecisionState) => number;
Method visitDecisionState() is called when the interpreter reaches a decision state (instance of DecisionState). It gives an opportunity for subclasses to track interesting things.
method visitRuleStopState
protected visitRuleStopState: (p: ATNState) => void;
method visitState
protected visitState: (p: ATNState) => void;
class ParserRuleContext
class ParserRuleContext extends RuleContext {}
A rule invocation record for parsing.
Contains all of the information about the current rule not stored in the RuleContext. It handles the parse tree children list, any ATN state tracing, and the default values available for rule invocations: start, stop, rule index, current alt number.
Subclasses made for each rule and grammar track the parameters, return values, locals, and labels specific to that rule. These are the objects that are returned from rules.
Note text is not an actual field of a rule return value; it is computed from start and stop using the input stream's toString() method. I could add a ctor to this so that we can pass in and store the input stream, but I'm not sure we want to do that. It would seem to be undefined to get the .text property anyway if the rule matches tokens from multiple input streams.
I do not use getters for fields of objects that are used simply to group values such as this aggregate. The getters/setters are there to satisfy the superclass interface.
constructor
constructor();
constructor
constructor(parent: ParserRuleContext, invokingStateNumber: number);
property childCount
readonly childCount: number;
property children
children?: ParseTree[];
If we are debugging or building a parse tree for a visitor, we need to track all of the tokens and rule invocations associated with this rule's context. This is empty when parsing without tree construction because we don't need to track the details about how we parse this rule.
property exception
exception?: RecognitionException;
The exception that forced this rule to return. If the rule successfully completed, this is
undefined
.
property parent
readonly parent: ParserRuleContext;
property ruleContext
readonly ruleContext: ParserRuleContext;
property sourceInterval
readonly sourceInterval: Interval;
property start
readonly start: Token;
Get the initial token in this context. Note that the range from start to stop is inclusive, so for rules that do not consume anything (for example, zero length or error productions) this token may exceed stop.
property stop
readonly stop: Token;
Get the final token in this context. Note that the range from start to stop is inclusive, so for rules that do not consume anything (for example, zero length or error productions) this token may precede start.
method addAnyChild
addAnyChild: <T extends ParseTree>(t: T) => T;
Add a parse tree node to this as a child. Works for internal and leaf nodes. Does not set parent link; other add methods must do that. Other addChild methods call this.
We cannot set the parent pointer of the incoming node because the existing interfaces do not have a setParent() method and I don't want to break backward compatibility for this.
4.7
method addChild
addChild: { (t: TerminalNode): void; (ruleInvocation: RuleContext): void; (matchedToken: Token): TerminalNode;};
Add a token leaf node child and force its parent to be this node.
Add a child to this node based upon matchedToken. It creates a TerminalNodeImpl rather than using createTerminalNode. I'm leaving this in for compatibility but the parser doesn't use this anymore.
Deprecated
Use another overload instead.
method addErrorNode
addErrorNode: { (errorNode: ErrorNode): ErrorNode; (badToken: Token): ErrorNode;};
Add an error node child and force its parent to be this node.
4.7
Add a child to this node based upon badToken. It creates an ErrorNode rather than using createErrorNode. I'm leaving this in for compatibility but the parser doesn't use this anymore.
Deprecated
Use another overload instead.
method copyFrom
copyFrom: (ctx: ParserRuleContext) => void;
COPY a ctx (I'm deliberately not using copy constructor) to avoid confusion with creating node with parent. Does not copy children (except error leaves).
This is used in the generated parser code to flip a generic XContext node for rule X to a YContext for alt label Y. In that sense, it is not really a generic copy function.
If we do an error sync() at start of a rule, we might add error nodes to the generic XContext so this function must copy those nodes to the YContext as well else they are lost!
method emptyContext
static emptyContext: () => ParserRuleContext;
method enterRule
enterRule: (listener: ParseTreeListener) => void;
method exitRule
exitRule: (listener: ParseTreeListener) => void;
method getChild
getChild: { (i: number): ParseTree; <T extends ParseTree>(i: number, ctxType: new (...args: any[]) => T): T;};
method getRuleContext
getRuleContext: <T extends ParserRuleContext>( i: number, ctxType: new (...args: any[]) => T) => T;
method getRuleContexts
getRuleContexts: <T extends ParserRuleContext>( ctxType: new (...args: any[]) => T) => T[];
method getToken
getToken: (ttype: number, i: number) => TerminalNode;
method getTokens
getTokens: (ttype: number) => TerminalNode[];
method removeLastChild
removeLastChild: () => void;
Used by enterOuterAlt to toss out a RuleContext previously added as we entered a rule. If we have a # label, we will need to remove the generic ruleContext object.
method toInfoString
toInfoString: (recognizer: Parser) => string;
Used for rule context info debugging during parse-time, not so much for ATN debugging
method tryGetChild
tryGetChild: <T extends ParseTree>( i: number, ctxType: new (...args: any[]) => T) => T | undefined;
method tryGetRuleContext
tryGetRuleContext: <T extends ParserRuleContext>( i: number, ctxType: new (...args: any[]) => T) => T | undefined;
method tryGetToken
tryGetToken: (ttype: number, i: number) => TerminalNode | undefined;
class ProxyErrorListener
class ProxyErrorListener<TSymbol, TListener extends ANTLRErrorListener<TSymbol>> implements ANTLRErrorListener<TSymbol> {}
This implementation of ANTLRErrorListener dispatches all calls to a collection of delegate listeners. This reduces the effort required to support multiple listeners.
Sam Harwell
constructor
constructor(delegates: TListener[]);
method getDelegates
protected getDelegates: () => ReadonlyArray<TListener>;
method syntaxError
syntaxError: <T extends TSymbol>( recognizer: Recognizer<T, any>, offendingSymbol: T | undefined, line: number, charPositionInLine: number, msg: string, e: RecognitionException | undefined) => void;
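The dispatch pattern can be sketched with a reduced listener interface. ErrorListener and SimpleProxyErrorListener below are mocks; the real ANTLRErrorListener#syntaxError takes more parameters (recognizer, offending symbol, and exception), but the forwarding logic is the same.

```typescript
// Reduced mock of ANTLRErrorListener, keeping only three parameters.
interface ErrorListener {
  syntaxError(line: number, charPositionInLine: number, msg: string): void;
}

// Proxy pattern as documented: every call is forwarded to each delegate.
class SimpleProxyErrorListener implements ErrorListener {
  constructor(private delegates: ErrorListener[]) {}

  syntaxError(line: number, charPositionInLine: number, msg: string): void {
    for (const d of this.delegates) {
      d.syntaxError(line, charPositionInLine, msg);
    }
  }
}
```

This is why a parser only ever talks to one listener internally: the proxy fans the notification out to however many listeners were registered.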
class ProxyParserErrorListener
class ProxyParserErrorListener extends ProxyErrorListener<Token, ParserErrorListener> implements ParserErrorListener {}
Sam Harwell
constructor
constructor(delegates: ParserErrorListener[]);
method reportAmbiguity
reportAmbiguity: ( recognizer: Parser, dfa: DFA, startIndex: number, stopIndex: number, exact: boolean, ambigAlts: BitSet | undefined, configs: ATNConfigSet) => void;
method reportAttemptingFullContext
reportAttemptingFullContext: ( recognizer: Parser, dfa: DFA, startIndex: number, stopIndex: number, conflictingAlts: BitSet | undefined, conflictState: SimulatorState) => void;
method reportContextSensitivity
reportContextSensitivity: ( recognizer: Parser, dfa: DFA, startIndex: number, stopIndex: number, prediction: number, acceptState: SimulatorState) => void;
class RecognitionException
class RecognitionException extends Error {}
The root of the ANTLR exception hierarchy. In general, ANTLR tracks just 3 kinds of errors: prediction errors, failed predicate errors, and mismatched input errors. In each case, the parser knows where it is in the input, where it is in the ATN, the rule invocation stack, and what kind of problem occurred.
constructor
constructor(lexer: Lexer, input: CharStream);
constructor
constructor( recognizer: Recognizer<Token, any>, input: IntStream, ctx: ParserRuleContext);
constructor
constructor( recognizer: Recognizer<Token, any>, input: IntStream, ctx: ParserRuleContext, message: string);
property context
readonly context: RuleContext;
Gets the RuleContext at the time this exception was thrown. If the context is not available, this property returns undefined.
Returns
The RuleContext at the time this exception was thrown, or undefined if the context is not available.
property expectedTokens
readonly expectedTokens: IntervalSet;
Gets the set of input symbols which could potentially follow the previously matched symbol at the time this exception was thrown. If the set of expected tokens is not known and could not be computed, this property returns undefined.
Returns
The set of token types that could potentially follow the current state in the ATN, or undefined if the information is not available.
property inputStream
readonly inputStream: IntStream;
Gets the input stream which is the symbol source for the recognizer where this exception was thrown. If the input stream is not available, this property returns undefined.
Returns
The input stream which is the symbol source for the recognizer where this exception was thrown, or undefined if the stream is not available.
property offendingState
readonly offendingState: number;
Get the ATN state number the parser was in at the time the error occurred. For NoViableAltException and LexerNoViableAltException exceptions, this is the DecisionState number. For others, it is the state whose outgoing edge we couldn't match.
If the state number is not known, this method returns -1.
property recognizer
readonly recognizer: Recognizer<any, any>;
Gets the Recognizer where this exception occurred. If the recognizer is not available, this property returns undefined.
Returns
The recognizer where this exception occurred, or undefined if the recognizer is not available.
method getOffendingToken
getOffendingToken: (recognizer?: Recognizer<Token, any>) => Token | undefined;
method setOffendingState
protected setOffendingState: (offendingState: number) => void;
method setOffendingToken
protected setOffendingToken: <TSymbol extends Token>( recognizer: Recognizer<TSymbol, any>, offendingToken?: TSymbol) => void;
class Recognizer
abstract class Recognizer<TSymbol, ATNInterpreter extends ATNSimulator> {}
property atn
readonly atn: ATN;
property EOF
static readonly EOF: number;
property grammarFileName
abstract readonly grammarFileName: string;
For debugging and other purposes, you might want the grammar name. ANTLR generates an implementation of this property.
property inputStream
abstract readonly inputStream: IntStream;
property interpreter
interpreter: ATNSimulator;
Get the ATN interpreter used by the recognizer for prediction.
Returns
The ATN interpreter used by the recognizer for prediction.
property parseInfo
readonly parseInfo: Promise<ParseInfo>;
If profiling during the parse/lex, this property returns DecisionInfo records for each decision in the recognizer, wrapped in a ParseInfo object.
4.3
property ruleNames
abstract readonly ruleNames: string[];
property serializedATN
readonly serializedATN: string;
If this recognizer was generated, it will have a serialized ATN representation of the grammar.
For interpreters, we don't know their serialized ATN despite having created the interpreter from it.
property state
state: number;
property vocabulary
abstract readonly vocabulary: Vocabulary;
Get the vocabulary used by the recognizer.
Returns
A Vocabulary instance providing information about the vocabulary used by the grammar.
method action
action: ( _localctx: RuleContext | undefined, ruleIndex: number, actionIndex: number) => void;
method addErrorListener
addErrorListener: (listener: ANTLRErrorListener<TSymbol>) => void;
Throws
NullPointerException if listener is undefined.
method getErrorHeader
getErrorHeader: (e: RecognitionException) => string;
What is the error header, normally line/character position information?
method getErrorListenerDispatch
getErrorListenerDispatch: () => ANTLRErrorListener<TSymbol>;
method getErrorListeners
getErrorListeners: () => Array<ANTLRErrorListener<TSymbol>>;
method getRuleIndexMap
getRuleIndexMap: () => ReadonlyMap<string, number>;
Get a map from rule names to rule indexes.
Used for XPath and tree pattern compilation.
method getTokenType
getTokenType: (tokenName: string) => number;
method getTokenTypeMap
getTokenTypeMap: () => ReadonlyMap<string, number>;
Get a map from token names to token types.
Used for XPath and tree pattern compilation.
method precpred
precpred: (localctx: RuleContext | undefined, precedence: number) => boolean;
method removeErrorListener
removeErrorListener: (listener: ANTLRErrorListener<TSymbol>) => void;
method removeErrorListeners
removeErrorListeners: () => void;
method sempred
sempred: ( _localctx: RuleContext | undefined, ruleIndex: number, actionIndex: number) => boolean;
class RewriteOperation
class RewriteOperation {}
constructor
constructor(tokens: TokenStream, index: number, instructionIndex: number);
constructor
constructor( tokens: TokenStream, index: number, instructionIndex: number, text: {});
property index
index: number;
Token buffer index.
property instructionIndex
readonly instructionIndex: number;
What index into rewrites List are we?
property text
text: {};
property tokens
protected readonly tokens: TokenStream;
method execute
execute: (buf: string[]) => number;
Execute the rewrite operation by possibly adding to the buffer. Return the index of the next token to operate on.
method toString
toString: () => string;
class RuleContext
class RuleContext extends RuleNode {}
A rule context is a record of a single rule invocation.
We form a stack of these context objects using the parent pointer. A parent pointer of undefined indicates that the current context is the bottom of the stack. The ParserRuleContext subclass adds a children list so that we can turn this data structure into a tree.
The root node always has an undefined parent pointer and an invokingState of -1.
Upon entry to parsing, the first invoked rule function creates a context object (a subclass specialized for that rule such as SContext) and makes it the root of a parse tree, recorded by field Parser._ctx.
public final SContext s() throws RecognitionException {
    SContext _localctx = new SContext(_ctx, state); <-- create new node
    enterRule(_localctx, 0, RULE_s);                <-- push it
    ...
    exitRule();                                     <-- pop back to _localctx
    return _localctx;
}
A subsequent rule invocation of r from the start rule s pushes a new context object for r whose parent points at s and whose invokingState is the state with r emanating as edge label.
The invokingState fields from a context object to the root together form a stack of rule invocation states, where the root (bottom of the stack) has a -1 sentinel value. If we invoke start symbol s and then call r1, which calls r2, the stack would look like this:
SContext[-1]   <- root node (bottom of the stack)
R1Context[p]   <- p in rule s called r1
R2Context[q]   <- q in rule r1 called r2
So the top of the stack, _ctx, represents a call to the current rule, and it holds the return address from the rule that invoked this rule. To invoke a rule, we must always have a current context.
The parent contexts are useful for computing lookahead sets and getting error information.
These objects are used during parsing and prediction. For the special case of parsers, we use the subclass ParserRuleContext.
See Also
ParserRuleContext
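The parent/invokingState stack can be illustrated with a minimal standalone context type. MiniContext below is a hypothetical stand-in for RuleContext, not an antlr4ts class:

```typescript
// Minimal illustration of the RuleContext parent chain. MiniContext is a
// hypothetical stand-in for RuleContext, not the antlr4ts class.
class MiniContext {
  constructor(
    public readonly parent: MiniContext | undefined,
    public readonly invokingState: number,
  ) {}

  // Mirrors RuleContext.isEmpty: no invoking state means nobody called us.
  get isEmpty(): boolean {
    return this.invokingState === -1;
  }

  // Mirrors RuleContext.depth(): number of contexts from here to the root.
  depth(): number {
    let n = 0;
    for (let p: MiniContext | undefined = this; p !== undefined; p = p.parent) {
      n++;
    }
    return n;
  }
}

// SContext[-1] <- R1Context[p] <- R2Context[q], matching the stack diagram.
const s = new MiniContext(undefined, -1); // root, -1 sentinel
const r1 = new MiniContext(s, 10);        // p = 10: state in s that called r1
const r2 = new MiniContext(r1, 20);       // q = 20: state in r1 that called r2
```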
constructor
constructor();
constructor
constructor(parent: RuleContext, invokingState: number);
property altNumber
altNumber: number;
For the rule associated with this parse tree internal node, return the outer alternative number used to match the input. The default implementation does not compute nor store this alt num. To set it, create a subclass of ParserRuleContext with a backing field and set the option contextSuperClass.
4.5.3
property childCount
readonly childCount: number;
property invokingState
invokingState: number;
property isEmpty
readonly isEmpty: boolean;
A context is empty if there is no invoking state, meaning nobody called the current context.
property parent
readonly parent: RuleContext;
property payload
readonly payload: RuleContext;
property ruleContext
readonly ruleContext: RuleContext;
property ruleIndex
readonly ruleIndex: number;
property sourceInterval
readonly sourceInterval: Interval;
property text
readonly text: string;
Return the combined text of all child nodes. This method only considers tokens which have been added to the parse tree.
Since tokens on hidden channels (e.g. whitespace or comments) are not added to the parse trees, they will not appear in the output of this method.
method accept
accept: <T>(visitor: ParseTreeVisitor<T>) => T;
method depth
depth: () => number;
method getChild
getChild: (i: number) => ParseTree;
method getChildContext
static getChildContext: ( parent: RuleContext, invokingState: number) => RuleContext;
method setParent
setParent: (parent: RuleContext) => void;
Since 4.7.
method toString
toString: { (): string; (recog: Recognizer<any, any>): string; (ruleNames: string[]): string; (recog: Recognizer<any, any>, stop: RuleContext): string; (ruleNames: string[], stop: RuleContext): string;};
method toStringTree
toStringTree: { (recog: Parser): string; (ruleNames: string[]): string; (): string;};
Print out a whole tree, not just a node, in LISP format (root child1 .. childN). Print just a node if this is a leaf. We have to know the recognizer so we can get rule names.
Print out a whole tree, not just a node, in LISP format (root child1 .. childN). Print just a node if this is a leaf.
class RuleContextWithAltNum
class RuleContextWithAltNum extends ParserRuleContext {}
A handy class for use with
options {contextSuperClass=org.antlr.v4.runtime.RuleContextWithAltNum;}
that provides a backing field / impl for the outer alternative number matched for an internal parse tree node.
I'm only putting this into the Java runtime as I'm certain I'm the only one that will really ever use this.
constructor
constructor();
constructor
constructor(parent: ParserRuleContext, invokingStateNumber: number);
property altNumber
altNumber: number;
class TokenStreamRewriter
class TokenStreamRewriter {}
Useful for rewriting out a buffered input token stream after doing some augmentation or other manipulations on it.
You can insert stuff, replace chunks, and delete chunks. Note that the operations are done lazily: nothing is rewritten until you convert the buffer to a String with getText(). This is very efficient because you are not moving data around all the time. As the buffer of tokens is converted to strings, the getText() method scans the input token stream and checks to see if there is an operation at the current index. If so, the operation is done and then normal String rendering continues on the buffer. This is like having multiple Turing machine instruction streams (programs) operating on a single input tape. :)
This rewriter makes no modifications to the token stream. It does not ask the stream to fill itself up nor does it advance the input cursor. TokenStream.index will return the same value before and after any call to getText().
The rewriter only works on tokens that you have in the buffer and ignores the current input cursor. If you are buffering tokens on-demand, calling getText() halfway through the input will only do rewrites for those tokens in the first half of the file.
Since the operations are done lazily at getText()-time, operations do not screw up the token index values. That is, an insert operation at token index i does not change the index values for tokens i+1..n-1.
Because operations never actually alter the buffer, you may always get the original token stream back without undoing anything. Since the instructions are queued up, you can easily simulate transactions and roll back any changes if there is an error just by removing instructions. For example,
CharStream input = new ANTLRFileStream("input");
TLexer lex = new TLexer(input);
CommonTokenStream tokens = new CommonTokenStream(lex);
T parser = new T(tokens);
TokenStreamRewriter rewriter = new TokenStreamRewriter(tokens);
parser.startRule();

Then in the rules, you can execute (assuming rewriter is visible):
Token t, u;
...
rewriter.insertAfter(t, "text to put after t");
rewriter.insertAfter(u, "text after u");
System.out.println(rewriter.getText());

You can also have multiple "instruction streams" and get multiple rewrites from a single pass over the input. Just name the instruction streams and use that name again when printing the buffer. This could be useful for generating a C file and also its header file--all from the same buffer:
rewriter.insertAfter("pass1", t, "text to put after t");
rewriter.insertAfter("pass2", u, "text after u");
System.out.println(rewriter.getText("pass1"));
System.out.println(rewriter.getText("pass2"));

If you don't use named rewrite streams, a "default" stream is used, as the first example shows.
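The lazy instruction-stream idea can be sketched in standalone TypeScript. ToyRewriter below only supports insertAfter over an array of token strings and is purely illustrative; the real TokenStreamRewriter API is far more general:

```typescript
// Toy sketch of the lazy rewrite idea: instructions are queued, the token
// buffer is never modified, and rendering applies the queue in one pass.
// Illustrative only, not the antlr4ts TokenStreamRewriter API.
class ToyRewriter {
  private readonly inserts = new Map<number, string[]>();

  constructor(private readonly tokens: string[]) {}

  insertAfter(index: number, text: string): void {
    const list = this.inserts.get(index) ?? [];
    list.push(text);
    this.inserts.set(index, list); // queue the instruction; buffer untouched
  }

  getText(): string {
    let out = "";
    for (let i = 0; i < this.tokens.length; i++) {
      out += this.tokens[i];
      for (const text of this.inserts.get(i) ?? []) {
        out += text; // apply queued inserts while rendering
      }
    }
    return out;
  }
}

const rw = new ToyRewriter(["a", "b", "c"]);
rw.insertAfter(0, "!");
rw.insertAfter(0, "?");        // inserts at the same index accumulate
const rendered = rw.getText(); // "a!?bc"; original token array unchanged
```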
constructor
constructor(tokens: TokenStream);
property DEFAULT_PROGRAM_NAME
static readonly DEFAULT_PROGRAM_NAME: string;
property lastRewriteTokenIndexes
protected lastRewriteTokenIndexes: Map<string, number>;
Map String (program name) → Integer index
property MIN_TOKEN_INDEX
static readonly MIN_TOKEN_INDEX: number;
property PROGRAM_INIT_SIZE
static readonly PROGRAM_INIT_SIZE: number;
property programs
protected programs: Map<string, RewriteOperation[]>;
You may have multiple, named streams of rewrite operations. I'm calling these things "programs." Maps String (name) → rewrite (List)
property tokens
protected tokens: TokenStream;
Our source stream
method catOpText
protected catOpText: (a: {}, b: {}) => string;
method delete
delete: { (index: number): void; (from: number, to: number): void; (indexT: Token): void; (from: Token, to: Token): void; (from: number, to: number, programName: string): void; (from: Token, to: Token, programName: string): void;};
method deleteProgram
deleteProgram: { (): void; (programName: string): void };
Reset the program so that no instructions exist
method getKindOfOps
protected getKindOfOps: <T extends RewriteOperation>( rewrites: Array<RewriteOperation | undefined>, kind: new (...args: any[]) => T, before: number) => T[];
Get all operations before an index of a particular kind
method getLastRewriteTokenIndex
protected getLastRewriteTokenIndex: { (): number; (programName: string): number;};
method getProgram
protected getProgram: (name: string) => RewriteOperation[];
method getText
getText: { (): string; (programName: string): string; (interval: Interval): string; (interval: Interval, programName: string): string;};
Return the text from the original tokens altered per the instructions given to this rewriter.
Return the text from the original tokens altered per the instructions given to this rewriter in programName.
4.5
Return the text associated with the tokens in the interval from the original token stream but with the alterations given to this rewriter. The interval refers to the indexes in the original token stream. We do not alter the token stream in any way, so the indexes and intervals are still consistent. Includes any operations done to the first and last token in the interval. So, if you did an insertBefore on the first token, you would get that insertion. The same is true if you do an insertAfter on the stop token.
method getTokenStream
getTokenStream: () => TokenStream;
method insertAfter
insertAfter: { (t: Token, text: {}): void; (index: number, text: {}): void; (t: Token, text: {}, programName: string): void; (index: number, text: {}, programName: string): void;};
method insertBefore
insertBefore: { (t: Token, text: {}): void; (index: number, text: {}): void; (t: Token, text: {}, programName: string): void; (index: number, text: {}, programName: string): void;};
method reduceToSingleOperationPerIndex
protected reduceToSingleOperationPerIndex: ( rewrites: Array<RewriteOperation | undefined>) => Map<number, RewriteOperation>;
We need to combine operations and report invalid operations (like overlapping replaces that are not completely nested). Inserts to the same index need to be combined, etc. Here are the cases:
I.i.u I.j.v                          leave alone, nonoverlapping
I.i.u I.i.v                          combine: Iivu

R.i-j.u R.x-y.v | i-j in x-y         delete first R
R.i-j.u R.i-j.v                      delete first R
R.i-j.u R.x-y.v | x-y in i-j         ERROR
R.i-j.u R.x-y.v | boundaries overlap ERROR

Delete special case of replace (text==undefined):
D.i-j.u D.x-y.v | boundaries overlap combine to max(min)..max(right)

I.i.u R.x-y.v | i in (x+1)-y         delete I (since insert before, we're not deleting i)
I.i.u R.x-y.v | i not in (x+1)-y     leave alone, nonoverlapping
R.x-y.v I.i.u | i in x-y             ERROR
R.x-y.v I.x.u                        R.x-y.uv (combine, delete I)
R.x-y.v I.i.u | i not in x-y         leave alone, nonoverlapping

I.i.u = insert u before op @ index i
R.x-y.u = replace x-y indexed tokens with u
First we need to examine replaces. For any replace op:
1. Wipe out any insertions before op within that range.
2. Drop any replace op before that is contained completely within that range.
3. Throw exception upon boundary overlap with any previous replace.
Then we can deal with inserts:
1. For any inserts to the same index, combine even if not adjacent.
2. For any prior replace with the same left boundary, combine this insert with the replace and delete this replace.
3. Throw exception if the index is in the same range as a previous replace.
Don't actually delete; make op undefined in list. Easier to walk list. Later we can throw as we add to index → op map.
Note that I.2 R.2-2 will wipe out I.2 even though, technically, the inserted stuff would be before the replace range. But, if you add tokens in front of a method body '{' and then delete the method body, I think the stuff before the '{' you added should disappear too.
Return a map from token index to operation.
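The insert-combining rule above (I.i.u followed by I.i.v becomes Iivu) can be sketched in standalone TypeScript. combineInserts below is illustrative, not the antlr4ts internals:

```typescript
// Standalone sketch of insert-combining: I.i.u followed by I.i.v becomes a
// single insert at index i whose text is v+u (the later instruction's text
// renders first, matching the "combine: Iivu" case above). Illustrative
// only, not the antlr4ts implementation.
interface InsertOp {
  index: number;
  text: string;
}

function combineInserts(ops: InsertOp[]): Map<number, InsertOp> {
  const byIndex = new Map<number, InsertOp>();
  for (const op of ops) {
    const prev = byIndex.get(op.index);
    if (prev === undefined) {
      byIndex.set(op.index, { ...op });
    } else {
      // The later insert's text goes in front of the earlier one's: Iivu.
      prev.text = op.text + prev.text;
    }
  }
  return byIndex;
}

const combined = combineInserts([
  { index: 2, text: "u" },
  { index: 2, text: "v" }, // same index: combines to "vu"
  { index: 5, text: "w" }, // different index: left alone
]);
```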
method replace
replace: { (from: number, to: number, text: {}): void; (from: Token, to: Token, text: {}): void; (from: number, to: number, text: {}, programName: string): void; (from: Token, to: Token, text: {}, programName: string): void;};
method replaceSingle
replaceSingle: { (index: number, text: {}): void; (indexT: Token, text: {}): void;};
method rollback
rollback: { (instructionIndex: number): void; (instructionIndex: number, programName: string): void;};
Rollback the instruction stream for a program so that the indicated instruction (via instructionIndex) is no longer in the stream. UNTESTED!
method setLastRewriteTokenIndex
protected setLastRewriteTokenIndex: (programName: string, i: number) => void;
class VocabularyImpl
class VocabularyImpl implements Vocabulary {}
This class provides a default implementation of the Vocabulary interface.
Sam Harwell
constructor
constructor( literalNames: string[], symbolicNames: string[], displayNames: string[]);
Constructs a new instance of VocabularyImpl from the specified literal, symbolic, and display token names.
Parameter literalNames
The literal names assigned to tokens, or an empty array if no literal names are assigned.
Parameter symbolicNames
The symbolic names assigned to tokens, or an empty array if no symbolic names are assigned.
Parameter displayNames
The display names assigned to tokens, or an empty array to use the values in
literalNames
andsymbolicNames
as the source of display names, as described in .See Also
#getLiteralName(int)
#getSymbolicName(int)
#getDisplayName(int)
property EMPTY_VOCABULARY
static readonly EMPTY_VOCABULARY: VocabularyImpl;
Gets an empty Vocabulary instance.
No literal or symbolic names are assigned to token types, so getDisplayName returns the numeric value for all tokens except Token#EOF.
property maxTokenType
readonly maxTokenType: number;
method getDisplayName
getDisplayName: (tokenType: number) => string;
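Per the constructor notes above, getDisplayName falls back from literal name to symbolic name to the numeric token type. A standalone sketch of that fallback (a simplification, not the antlr4ts implementation):

```typescript
// Standalone sketch of the getDisplayName fallback order documented for
// VocabularyImpl: literal name first, then symbolic name, then the numeric
// token type as a string. A simplification, not the antlr4ts code.
function displayName(
  tokenType: number,
  literalNames: Array<string | undefined>,
  symbolicNames: Array<string | undefined>,
): string {
  return (
    literalNames[tokenType] ??
    symbolicNames[tokenType] ??
    String(tokenType)
  );
}

const literals = [undefined, "';'", undefined];
const symbolics = [undefined, "SEMI", "ID"];
const d1 = displayName(1, literals, symbolics); // "';'" (literal wins)
const d2 = displayName(2, literals, symbolics); // "ID"  (symbolic fallback)
const d3 = displayName(3, literals, symbolics); // "3"   (numeric fallback)
```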
method getLiteralName
getLiteralName: (tokenType: number) => string | undefined;
method getSymbolicName
getSymbolicName: (tokenType: number) => string | undefined;
Interfaces
interface ANTLRErrorListener
interface ANTLRErrorListener<TSymbol> {}
property syntaxError
syntaxError?: <T extends TSymbol>( recognizer: Recognizer<T, any>, offendingSymbol: T | undefined, line: number, charPositionInLine: number, msg: string, e: RecognitionException | undefined) => void;
Upon syntax error, notify any interested parties. This is not how to recover from errors or compute error messages. ANTLRErrorStrategy specifies how to recover from syntax errors and how to compute error messages. This listener's job is simply to emit a computed message, though it has enough information to create its own message in many cases.
The RecognitionException is non-undefined for all syntax errors except when we discover mismatched token errors that we can recover from in-line, without returning from the surrounding rule (via the single token insertion and deletion mechanism).
Parameter recognizer
What parser got the error. From this object, you can access the context as well as the input stream.
Parameter offendingSymbol
The offending token in the input token stream, unless the recognizer is a lexer (then it's undefined). In the case of a no viable alternative error, e has the token at which we started production for the decision.
Parameter line
The line number in the input where the error occurred.
Parameter charPositionInLine
The character position within that line where the error occurred.
Parameter msg
The message to emit.
Parameter e
The exception generated by the parser that led to the reporting of an error. It is undefined in the case where the parser was able to recover in line without exiting the surrounding rule.
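A collecting listener can be sketched structurally in standalone TypeScript. The object below loosely mirrors the syntaxError parameter shape; in real use it would be registered via recognizer.addErrorListener and invoked by the runtime:

```typescript
// Standalone sketch of a collecting syntax-error listener. The parameter
// shape loosely mirrors syntaxError above; in real code this object would
// be passed to recognizer.addErrorListener and called by the runtime.
const errors: string[] = [];

const collectingListener = {
  syntaxError(
    _recognizer: unknown,
    _offendingSymbol: unknown,
    line: number,
    charPositionInLine: number,
    msg: string,
    _e: unknown,
  ): void {
    // "line x:y" follows the usual ANTLR error-header convention.
    errors.push(`line ${line}:${charPositionInLine} ${msg}`);
  },
};

// Simulate the runtime notifying the listener about one error.
collectingListener.syntaxError(undefined, undefined, 4, 12, "mismatched input", undefined);
```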
interface ANTLRErrorStrategy
interface ANTLRErrorStrategy {}
The interface for defining strategies to deal with syntax errors encountered during a parse by ANTLR-generated parsers. We distinguish between three different kinds of errors:
* The parser could not figure out which path to take in the ATN (none of the available alternatives could possibly match)
* The current input does not match what we were looking for
* A predicate evaluated to false
Implementations of this interface report syntax errors by calling Parser#notifyErrorListeners.
TODO: what to do about lexers
method inErrorRecoveryMode
inErrorRecoveryMode: (recognizer: Parser) => boolean;
Tests whether or not recognizer is in the process of recovering from an error. In error recovery mode, Parser#consume adds symbols to the parse tree by calling createErrorNode then addErrorNode instead of createTerminalNode.
Parameter recognizer
the parser instance
Returns
true if the parser is currently recovering from a parse error, otherwise false
method recover
recover: (recognizer: Parser, e: RecognitionException) => void;
This method is called to recover from exception e. It is called after reportError by the default exception handler generated for a rule method.
Parameter recognizer
the parser instance
Parameter e
the recognition exception to recover from
Throws
RecognitionException if the error strategy could not recover from the recognition exception
See Also
#reportError
method recoverInline
recoverInline: (recognizer: Parser) => Token;
This method is called when an unexpected symbol is encountered during an inline match operation, such as Parser#match. If the error strategy successfully recovers from the match failure, this method returns the Token instance which should be treated as the successful result of the match.
This method handles the consumption of any tokens - the caller should *not* call Parser#consume after a successful recovery.
Note that the calling code will not report an error if this method returns successfully. The error strategy implementation is responsible for calling Parser#notifyErrorListeners as appropriate.
Parameter recognizer
the parser instance
Throws
RecognitionException if the error strategy was not able to recover from the unexpected input symbol
method reportError
reportError: (recognizer: Parser, e: RecognitionException) => void;
Report any kind of RecognitionException. This method is called by the default exception handler generated for a rule method.
Parameter recognizer
the parser instance
Parameter e
the recognition exception to report
method reportMatch
reportMatch: (recognizer: Parser) => void;
This method is called when the parser successfully matches an input symbol.
Parameter recognizer
the parser instance
method reset
reset: (recognizer: Parser) => void;
Reset the error handler state for the specified
recognizer
.Parameter recognizer
the parser instance
method sync
sync: (recognizer: Parser) => void;
This method provides the error handler with an opportunity to handle syntactic or semantic errors in the input stream before they result in a RecognitionException.
The generated code currently contains calls to sync after entering the decision state of a closure block ((...)* or (...)+).
For an implementation based on Jim Idle's "magic sync" mechanism, see DefaultErrorStrategy#sync.
Parameter recognizer
the parser instance
Throws
RecognitionException if an error is detected by the error strategy but cannot be automatically recovered at the current state in the parsing process
See Also
DefaultErrorStrategy#sync
interface CharStream
interface CharStream extends IntStream {}
A source of characters for an ANTLR lexer.
method getText
getText: (interval: Interval) => string;
This method returns the text for a range of characters within this input stream. This method is guaranteed to not throw an exception if the specified interval lies entirely within a marked range. For more information about marked ranges, see IntStream#mark.
Parameter interval
an interval within the stream
Returns
the text of the specified interval
Throws
NullPointerException if interval is undefined
Throws
IllegalArgumentException if interval.a < 0, or if interval.b < interval.a - 1, or if interval.b lies at or past the end of the stream
Throws
UnsupportedOperationException if the stream does not support getting the text of the specified interval
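ANTLR intervals are inclusive on both ends; a standalone sketch of the getText semantics over a plain string (illustrative, not the antlr4ts CharStream):

```typescript
// Standalone sketch of CharStream.getText's inclusive-interval semantics
// over a plain string. Illustrative only, not the antlr4ts implementation.
interface SimpleInterval {
  a: number; // inclusive start index
  b: number; // inclusive stop index
}

function getTextOf(source: string, interval: SimpleInterval): string {
  if (interval.a < 0 || interval.b < interval.a - 1) {
    throw new RangeError("invalid interval"); // mirrors IllegalArgumentException
  }
  if (interval.b >= source.length) {
    throw new RangeError("interval past end of stream");
  }
  // ANTLR intervals include both endpoints, hence b + 1 for slice().
  return source.slice(interval.a, interval.b + 1);
}

const text = getTextOf("hello world", { a: 0, b: 4 }); // "hello"
```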
interface DependencySpecification
interface DependencySpecification {}
property dependents
readonly dependents?: Dependents[];
Specifies the set of grammar rules related to rule which the annotated element depends on. Even when absent from this set, the annotated element is implicitly dependent upon the explicitly specified rule, which corresponds to the Dependents.SELF element.
By default, the annotated element is dependent upon the specified rule and its Dependents.PARENTS, i.e. the rule within one level of context information. The parents are included since the most frequent assumption about a rule is where it's used in the grammar.
property recognizer
readonly recognizer: { new (...args: any[]): Parser;};
property rule
readonly rule: number;
property version
readonly version: number;
interface IntStream
interface IntStream {}
A simple stream of symbols whose values are represented as integers. This interface provides *marked ranges* with support for a minimum level of buffering necessary to implement arbitrary lookahead during prediction. For more information on marked ranges, see mark.
**Initializing Methods:** Some methods in this interface have unspecified behavior if no call to an initializing method has occurred after the stream was constructed. The following is a list of initializing methods:
* LA
* consume
* size
property index
readonly index: number;
Return the index into the stream of the input symbol referred to by LA(1).
The behavior of this method is unspecified if no call to an initializing method has occurred after this stream was constructed.
property size
readonly size: number;
Returns the total number of symbols in the stream, including a single EOF symbol.
Throws
UnsupportedOperationException if the size of the stream is unknown.
property sourceName
readonly sourceName: string;
Gets the name of the underlying symbol source. This method returns a non-undefined, non-empty string. If such a name is not known, this method returns UNKNOWN_SOURCE_NAME.
method consume
consume: () => void;
Consumes the current symbol in the stream. This method has the following effects:
* **Forward movement:** The value of index before calling this method is less than the value of index after calling this method.
* **Ordered lookahead:** The value of LA(1) before calling this method becomes the value of LA(-1) after calling this method.
Note that calling this method does not guarantee that index is incremented by exactly 1, as that would preclude the ability to implement filtering streams (e.g. CommonTokenStream which distinguishes between "on-channel" and "off-channel" tokens).
Throws
IllegalStateException if an attempt is made to consume the end of the stream (i.e. if LA(1)==EOF before calling consume).
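The consume/LA contract can be illustrated with a minimal array-backed stream. ArrayIntStream below is illustrative only (not an antlr4ts class) and represents EOF as -1:

```typescript
// Minimal array-backed sketch of IntStream's consume/LA contract.
// Illustrative only; EOF is represented as -1 in this sketch.
const EOF = -1;

class ArrayIntStream {
  private _index = 0;

  constructor(private readonly data: number[]) {}

  get index(): number {
    return this._index;
  }

  // LA(1) is the next symbol to be consumed; LA(-1) the previously read one.
  LA(i: number): number {
    if (i === 0) throw new Error("LA(0) is not valid");
    const pos = i > 0 ? this._index + i - 1 : this._index + i;
    return pos >= 0 && pos < this.data.length ? this.data[pos] : EOF;
  }

  consume(): void {
    if (this.LA(1) === EOF) {
      throw new Error("cannot consume EOF"); // mirrors IllegalStateException
    }
    this._index++; // forward movement: index strictly increases
  }
}

const stream = new ArrayIntStream([10, 20, 30]);
const before = stream.LA(1); // 10
stream.consume();
const after = stream.LA(-1); // 10: LA(1) before consume becomes LA(-1) after
```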
method LA
LA: (i: number) => number;
Gets the value of the symbol at offset i from the current position. When i==1, this method returns the value of the current symbol in the stream (which is the next symbol to be consumed). When i==-1, this method returns the value of the previously read symbol in the stream. It is not valid to call this method with i==0, but the specific behavior is unspecified because this method is frequently called from performance-critical code.
This method is guaranteed to succeed if any of the following are true:
* i>0
* i==-1 and index returns a value greater than the value of index after the stream was constructed and LA(1) was called in that order. Specifying the current index relative to the index after the stream was created allows for filtering implementations that do not return every symbol from the underlying source. Specifying the call to LA(1) allows for lazily initialized streams.
* LA(i) refers to a symbol consumed within a marked region that has not yet been released.
If i represents a position at or beyond the end of the stream, this method returns EOF.
The return value is unspecified if i<0 and fewer than -i calls to consume have occurred from the beginning of the stream before calling this method.
Throws
UnsupportedOperationException if the stream does not support retrieving the value of the specified symbol
method mark
mark: () => number;
A mark provides a guarantee that seek() operations will be valid over a "marked range" extending from the index where mark() was called to the current index. This allows the use of streaming input sources by specifying the minimum buffering requirements to support arbitrary lookahead during prediction.
The returned mark is an opaque handle (type int) which is passed to release() when the guarantees provided by the marked range are no longer necessary. When calls to mark()/release() are nested, the marks must be released in reverse order of which they were obtained. Since marked regions are used during performance-critical sections of prediction, the specific behavior of invalid usage is unspecified (i.e. a mark is not released, or a mark is released twice, or marks are not released in reverse order from which they were created).
The behavior of this method is unspecified if no call to an initializing method has occurred after this stream was constructed.
This method does not change the current position in the input stream.
The following example shows the use of mark(), release(), index, and seek() as part of an operation to safely work within a marked region, then restore the stream position to its original value and release the mark.
IntStream stream = ...;
int index = -1;
int mark = stream.mark();
try {
    index = stream.index;
    // perform work here...
} finally {
    if (index != -1) {
        stream.seek(index);
    }
    stream.release(mark);
}
Returns
An opaque marker which should be passed to release() when the marked range is no longer required.
method release
release: (marker: number) => void;
This method releases a marked range created by a call to mark(). Calls to release() must appear in the reverse order of the corresponding calls to mark(). If a mark is released twice, or if marks are not released in reverse order of the corresponding calls to mark(), the behavior is unspecified.
For more information and an example, see mark.
Parameter marker
A marker returned by a call to mark().
See Also
#mark
method seek
seek: (index: number) => void;
Set the input cursor to the position indicated by index. If the specified index lies past the end of the stream, the operation behaves as though index was the index of the EOF symbol. After this method returns without throwing an exception, at least one of the following will be true.
* index will return the index of the first symbol appearing at or after the specified index. Specifically, implementations which filter their sources should automatically adjust index forward the minimum amount required for the operation to target a non-ignored symbol.
* LA(1) returns IntStream#EOF
This operation is guaranteed to not throw an exception if index lies within a marked region. For more information on marked regions, see mark(). The behavior of this method is unspecified if no call to an initializing method has occurred after this stream was constructed.
Parameter index
The absolute index to seek to.
Throws
IllegalArgumentException if index is less than 0
Throws
UnsupportedOperationException if the stream does not support seeking to the specified index
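The mark/seek/release contract described above can be sketched with a toy, fully buffered stream. MinimalIntStream below is a hypothetical illustration, not part of antlr4ts; in a fully buffered stream, marks need no bookkeeping, so mark() can return a dummy handle.

```typescript
// Hypothetical minimal stream illustrating the mark()/seek()/release() contract.
// Not part of antlr4ts: a sketch over a fully buffered array, where marks are trivial.
const EOF = -1;

class MinimalIntStream {
  private p = 0; // current index
  constructor(private readonly data: number[]) {}

  get index(): number { return this.p; }
  get size(): number { return this.data.length; }

  LA(k: number): number {
    const i = this.p + k - 1;
    return i >= 0 && i < this.data.length ? this.data[i] : EOF;
  }

  consume(): void { this.p++; }

  // Fully buffered, so there is nothing to pin; return an opaque handle anyway.
  mark(): number { return -1; }
  release(_marker: number): void { /* nothing buffered lazily, nothing to release */ }

  // Seeking past the end behaves as though index were the index of the EOF symbol.
  seek(index: number): void {
    if (index < 0) throw new RangeError("index is less than 0");
    this.p = Math.min(index, this.data.length);
  }
}

// Safely work within a marked region, then restore the position and release the mark.
const stream = new MinimalIntStream([10, 20, 30]);
const saved = stream.index;
const m = stream.mark();
try {
  stream.consume();
  stream.consume();
} finally {
  stream.seek(saved);
  stream.release(m);
}
console.log(stream.index); // 0: back at the original position
console.log(stream.LA(1)); // 10
stream.seek(99);
console.log(stream.LA(1)); // -1: seeking past the end lands on EOF
```

The try/finally shape mirrors the IntStream example in the mark() documentation: the seek happens before the release so the restored position is guaranteed to lie within the still-marked range.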
interface ParserErrorListener
interface ParserErrorListener extends ANTLRErrorListener<Token> {}
How to emit recognition errors for parsers.
property reportAmbiguity
reportAmbiguity?: ( recognizer: Parser, dfa: DFA, startIndex: number, stopIndex: number, exact: boolean, ambigAlts: BitSet | undefined, configs: ATNConfigSet) => void;
This method is called by the parser when a full-context prediction results in an ambiguity.
Each full-context prediction which does not result in a syntax error will call either reportContextSensitivity or reportAmbiguity.
When ambigAlts is not undefined, it contains the set of potentially viable alternatives identified by the prediction algorithm. When ambigAlts is undefined, use ATNConfigSet#getRepresentedAlternatives to obtain the represented alternatives from the configs argument.
When exact is true, *all* of the potentially viable alternatives are truly viable, i.e. this is reporting an exact ambiguity. When exact is false, *at least two* of the potentially viable alternatives are viable for the current input, but the prediction algorithm terminated as soon as it determined that at least the *minimum* potentially viable alternative is truly viable.
When the PredictionMode#LL_EXACT_AMBIG_DETECTION prediction mode is used, the parser is required to identify exact ambiguities, so exact will always be true.
Parameter recognizer
the parser instance
Parameter dfa
the DFA for the current decision
Parameter startIndex
the input index where the decision started
Parameter stopIndex
the input index where the ambiguity was identified
Parameter exact
true if the ambiguity is exactly known, otherwise false. This is always true when PredictionMode#LL_EXACT_AMBIG_DETECTION is used.
Parameter ambigAlts
the potentially ambiguous alternatives, or undefined to indicate that the potentially ambiguous alternatives are the complete set of represented alternatives in configs
Parameter configs
the ATN configuration set where the ambiguity was identified
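A listener implementing this callback might look like the sketch below. To keep the example self-contained, the antlr4ts types (Parser, DFA, BitSet, ATNConfigSet) are stubbed with placeholders; in real code they come from the antlr4ts package and the listener is registered on the parser.

```typescript
// Sketch of a reportAmbiguity callback. The antlr4ts types are stubbed with
// placeholders (unknown / Set<number>) so the example runs standalone; in real
// code, import Parser, DFA, BitSet, and ATNConfigSet from antlr4ts instead.
type AmbiguityReport = {
  startIndex: number;
  stopIndex: number;
  exact: boolean;
  alternatives: number[] | undefined;
};

const reports: AmbiguityReport[] = [];

const ambiguityListener = {
  reportAmbiguity(
    _recognizer: unknown,               // Parser in antlr4ts
    _dfa: unknown,                      // DFA in antlr4ts
    startIndex: number,
    stopIndex: number,
    exact: boolean,
    ambigAlts: Set<number> | undefined, // BitSet in antlr4ts
    _configs: unknown                   // ATNConfigSet in antlr4ts
  ): void {
    reports.push({
      startIndex,
      stopIndex,
      exact,
      // When ambigAlts is undefined, the represented alternatives would be
      // recovered from the configs argument instead.
      alternatives: ambigAlts ? [...ambigAlts] : undefined,
    });
  },
};

// Simulate the parser reporting an exact ambiguity between alts 1 and 2
// over input tokens 3..7.
ambiguityListener.reportAmbiguity(null, null, 3, 7, true, new Set([1, 2]), null);
console.log(reports[0]);
```

Logging ambiguities this way (rather than throwing) matches the callback's role: an ambiguity report is diagnostic information, not a syntax error.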
property reportAttemptingFullContext
reportAttemptingFullContext?: ( recognizer: Parser, dfa: DFA, startIndex: number, stopIndex: number, conflictingAlts: BitSet | undefined, conflictState: SimulatorState) => void;
This method is called when an SLL conflict occurs and the parser is about to use the full context information to make an LL decision.
If one or more configurations in configs contain a semantic predicate, the predicates are evaluated before this method is called. The subset of alternatives which are still viable after predicates are evaluated is reported in conflictingAlts.
Parameter recognizer
the parser instance
Parameter dfa
the DFA for the current decision
Parameter startIndex
the input index where the decision started
Parameter stopIndex
the input index where the SLL conflict occurred
Parameter conflictingAlts
The specific conflicting alternatives. If this is undefined, the conflicting alternatives are all alternatives represented in configs.
Parameter conflictState
the simulator state when the SLL conflict was detected
property reportContextSensitivity
reportContextSensitivity?: ( recognizer: Parser, dfa: DFA, startIndex: number, stopIndex: number, prediction: number, acceptState: SimulatorState) => void;
This method is called by the parser when a full-context prediction has a unique result.
Each full-context prediction which does not result in a syntax error will call either reportContextSensitivity or reportAmbiguity.
For prediction implementations that only evaluate full-context predictions when an SLL conflict is found (including the default ParserATNSimulator implementation), this method reports cases where SLL conflicts were resolved to unique full-context predictions, i.e. the decision was context-sensitive. This report does not necessarily indicate a problem, and it may appear even in completely unambiguous grammars.
configs may have more than one represented alternative if the full-context prediction algorithm does not evaluate predicates before beginning the full-context prediction. In all cases, the final prediction is passed as the prediction argument.
Note that the definition of "context sensitivity" in this method differs from the concept in DecisionInfo#contextSensitivities. This method reports all instances where an SLL conflict occurred but LL parsing produced a unique result, whether or not that unique result matches the minimum alternative in the SLL conflicting set.
Parameter recognizer
the parser instance
Parameter dfa
the DFA for the current decision
Parameter startIndex
the input index where the decision started
Parameter stopIndex
the input index where the context sensitivity was finally determined
Parameter prediction
the unambiguous result of the full-context prediction
Parameter acceptState
the simulator state when the unambiguous prediction was determined
interface Token
interface Token {}
A token has properties: text, type, line, character position in the line (so we can ignore tabs), token channel, index, and source from which we obtained this token.
property channel
readonly channel: number;
Return the channel of this token. Each token can arrive at the parser on a different channel, but the parser only "tunes" to a single channel. The parser ignores everything not on DEFAULT_CHANNEL.
property charPositionInLine
readonly charPositionInLine: number;
The index of the first character of this token relative to the beginning of the line at which it occurs, 0..n-1
property inputStream
readonly inputStream: CharStream | undefined;
Gets the CharStream from which this token was derived.
property line
readonly line: number;
The line number on which the 1st character of this token was matched, line=1..n
property startIndex
readonly startIndex: number;
The starting character index of the token. This property is optional; return -1 if not implemented.
property stopIndex
readonly stopIndex: number;
The last character index of the token. This property is optional; return -1 if not implemented.
property text
readonly text: string | undefined;
Get the text of the token.
property tokenIndex
readonly tokenIndex: number;
An index from 0..n-1 of the token object in the input stream. This must be valid in order to print token streams and use TokenRewriteStream.
Return -1 to indicate that this token was conjured up since it doesn't have a valid index.
property tokenSource
readonly tokenSource: TokenSource | undefined;
Gets the TokenSource which created this token.
property type
readonly type: number;
Get the token type of the token
interface TokenFactory
interface TokenFactory {}
The default mechanism for creating tokens. It's used by default in Lexer and the error handling strategy (to create missing tokens). Notifying the parser of a new factory means that it notifies its token source and error strategy.
method create
create: ( source: { source?: TokenSource; stream?: CharStream }, type: number, text: string | undefined, channel: number, start: number, stop: number, line: number, charPositionInLine: number) => Token;
This is the method used to create tokens in the lexer and in the error handling strategy. If text !== undefined, then the start and stop positions are wiped to -1 and the text override is set in the created CommonToken.
method createSimple
createSimple: (type: number, text: string) => Token;
Generically useful
interface TokenSource
interface TokenSource {}
A source of tokens must provide a sequence of tokens via nextToken() and also must reveal its source of characters; CommonToken's text is computed from a CharStream; it only stores indices into the char stream.
Errors from the lexer are never passed to the parser. Either you want to keep going or you do not upon token recognition error. If you do not want to continue lexing then you do not want to continue parsing. Just throw an exception not under RecognitionException and Java will naturally toss you all the way out of the recognizers. If you want to continue lexing then you should not throw an exception to the parser--it has already requested a token. Keep lexing until you get a valid one. Just report errors and keep going, looking for a valid token.
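The "keep lexing until you get a valid token" advice can be sketched with a toy token source over a string. ToyTokenSource below is hypothetical, not the antlr4ts Lexer: on a bad character it records the error and keeps scanning, so the caller of nextToken() never sees a lexing failure.

```typescript
// Toy token source illustrating the documented recovery policy: never propagate
// lexing errors to the caller; report them and keep scanning for a valid token.
// Hypothetical sketch, not the antlr4ts Lexer implementation.
const TOKEN_EOF = -1;
const TOKEN_WORD = 1;

interface ToyToken { type: number; text: string; }

class ToyTokenSource {
  private pos = 0;
  readonly errors: string[] = [];
  constructor(private readonly input: string) {}

  nextToken(): ToyToken {
    while (this.pos < this.input.length) {
      const start = this.pos;
      if (/[a-z]/i.test(this.input[this.pos])) {
        // A run of letters is a valid WORD token.
        while (this.pos < this.input.length && /[a-z]/i.test(this.input[this.pos])) {
          this.pos++;
        }
        return { type: TOKEN_WORD, text: this.input.slice(start, this.pos) };
      }
      // Unrecognized character: report it and keep chewing on the input
      // instead of throwing to the caller.
      this.errors.push(`token recognition error at: '${this.input[this.pos]}'`);
      this.pos++;
    }
    return { type: TOKEN_EOF, text: "<EOF>" };
  }
}

const src = new ToyTokenSource("ab#cd");
console.log(src.nextToken()); // { type: 1, text: "ab" }
console.log(src.nextToken()); // { type: 1, text: "cd" }: the '#' was reported, not thrown
console.log(src.errors);      // one recorded recognition error for '#'
```

The parser-facing invariant is the point: nextToken() always returns a token (possibly EOF), and errors travel through a side channel such as an error listener.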
property charPositionInLine
readonly charPositionInLine: number;
Get the index into the current line for the current position in the input stream. The first character on a line has position 0.
Returns
The character position within the current line in the input stream, or -1 if the current token source does not track character positions.
property inputStream
readonly inputStream: CharStream | undefined;
Get the CharStream from which this token source is currently providing tokens.
Returns
The CharStream associated with the current position in the input, or
undefined
if no input stream is available for the token source.
property line
readonly line: number;
Get the line number for the current position in the input stream. The first line in the input is line 1.
Returns
The line number for the current position in the input stream, or 0 if the current token source does not track line numbers.
property sourceName
readonly sourceName: string;
Gets the name of the underlying input source. This method returns a non-undefined, non-empty string. If such a name is not known, this method returns IntStream#UNKNOWN_SOURCE_NAME.
property tokenFactory
tokenFactory: TokenFactory;
Gets or sets the TokenFactory this token source is currently using for creating Token objects from the input.
method nextToken
nextToken: () => Token;
Return a Token object from your input stream (usually a CharStream). Do not fail/return upon lexing error; keep chewing on the characters until you get a good one; errors are not passed through to the parser.
interface TokenStream
interface TokenStream extends IntStream {}
property tokenSource
readonly tokenSource: TokenSource;
Gets the underlying TokenSource which provides tokens for this stream.
method get
get: (i: number) => Token;
Gets the Token at the specified index in the stream. When the preconditions of this method are met, the return value is non-undefined.
The preconditions for this method are the same as the preconditions of IntStream#seek. If the behavior of seek(index) is unspecified for the current state and given index, then the behavior of this method is also unspecified.
The symbol referred to by index differs from seek() only in the case of filtering streams where index lies before the end of the stream. Unlike seek(), this method does not adjust index to point to a non-ignored symbol.
Throws
IllegalArgumentException if index is less than 0
Throws
UnsupportedOperationException if the stream does not support retrieving the token at the specified index
method getText
getText: { (interval: Interval): string; (): string; (ctx: RuleContext): string;};
Return the text of all tokens within the specified interval. This method behaves like the following code (including potential exceptions for violating preconditions of get), but may be optimized by the specific implementation.

    TokenStream stream = ...;
    String text = "";
    for (int i = interval.a; i <= interval.b; i++) {
        text += stream.get(i).text;
    }

Parameter interval
The interval of tokens within this stream to get text for.
Returns
The text of all tokens within the specified interval in this stream.
Throws
NullPointerException if interval is undefined
Return the text of all tokens in the stream. This method behaves like the following code, including potential exceptions from the calls to IntStream#size and get, but may be optimized by the specific implementation.

    TokenStream stream = ...;
    String text = stream.getText(new Interval(0, stream.size));

Returns
The text of all tokens in the stream.
Return the text of all tokens in the source interval of the specified context. This method behaves like the following code, including potential exceptions from the call to getText(interval), but may be optimized by the specific implementation.
If ctx.sourceInterval does not return a valid interval of tokens provided by this stream, the behavior is unspecified.

    TokenStream stream = ...;
    String text = stream.getText(ctx.sourceInterval);

Parameter ctx
The context providing the source interval of tokens to get text for.
Returns
The text of all tokens within the source interval of ctx.
method getTextFromRange
getTextFromRange: (start: any, stop: any) => string;
Return the text of all tokens in this stream between start and stop (inclusive).
If the specified start or stop token was not provided by this stream, or if the stop token occurred before the start token, the behavior is unspecified.
For streams which ensure that the Token.tokenIndex method is accurate for all of its provided tokens, this method behaves like the following code. Other streams may implement this method in other ways provided the behavior is consistent with this at a high level.

    TokenStream stream = ...;
    String text = "";
    for (int i = start.tokenIndex; i <= stop.tokenIndex; i++) {
        text += stream.get(i).text;
    }

Parameter start
The first token in the interval to get text for.
Parameter stop
The last token in the interval to get text for (inclusive).
Returns
The text of all tokens lying between the specified start and stop tokens.
Throws
UnsupportedOperationException if this stream does not support this method for the specified tokens
method LT
LT: (k: number) => Token;
Get the Token instance associated with the value returned by LA(k). This method has the same pre- and post-conditions as IntStream.LA. In addition, when the preconditions of this method are met, the return value is non-undefined and the value of LT(k).type === LA(k).
A RangeError is thrown if k < 0 and fewer than -k calls to consume() have occurred from the beginning of the stream before calling this method.
See
IntStream.LA
method tryLT
tryLT: (k: number) => Token | undefined;
Get the Token instance associated with the value returned by LA(k). This method has the same pre- and post-conditions as IntStream.LA. In addition, when the preconditions of this method are met, the return value is non-undefined and the value of tryLT(k).type === LA(k).
The return value is undefined if k < 0 and fewer than -k calls to consume() have occurred from the beginning of the stream before calling this method.
See
IntStream.LA
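The difference between the LT and tryLT contracts (throw vs return undefined on an invalid look-back) can be sketched over a toy fully buffered token stream. MiniTokenStream is a hypothetical illustration, not the antlr4ts BufferedTokenStream.

```typescript
// Sketch contrasting the LT/tryLT contracts over a toy fully buffered stream.
// Hypothetical MiniTokenStream, not part of antlr4ts.
interface MiniToken { type: number; text: string; }

class MiniTokenStream {
  private p = 0;
  constructor(private readonly tokens: MiniToken[]) {}

  consume(): void { this.p++; }

  // tryLT: returns undefined when looking back past the beginning of the stream.
  tryLT(k: number): MiniToken | undefined {
    const i = k < 0 ? this.p + k : this.p + k - 1;
    return i >= 0 && i < this.tokens.length ? this.tokens[i] : undefined;
  }

  // LT: same lookup, but an invalid look-back throws a RangeError instead.
  LT(k: number): MiniToken {
    const t = this.tryLT(k);
    if (t === undefined) {
      throw new RangeError(
        k < 0
          ? `cannot look back ${-k} tokens: not enough have been consumed`
          : `no token at lookahead ${k}`
      );
    }
    return t;
  }
}

const mts = new MiniTokenStream([
  { type: 1, text: "a" },
  { type: 2, text: "b" },
]);
console.log(mts.LT(1).text);  // "a": the next (not yet consumed) token
console.log(mts.tryLT(-1));   // undefined: nothing has been consumed yet
mts.consume();
console.log(mts.LT(-1).text); // "a": the previously consumed token
```

This mirrors why tryLT exists: prediction code that probes backwards can check for undefined instead of wrapping every look-back in a try/catch.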
interface Vocabulary
interface Vocabulary {}
This interface provides information about the vocabulary used by a recognizer.
See Also
Recognizer.vocabulary
Author: Sam Harwell
property maxTokenType
readonly maxTokenType: number;
Returns the highest token type value. It can be used to iterate from zero to that number, inclusively, thus querying all stored entries.
Returns
the highest token type value
method getDisplayName
getDisplayName: (tokenType: number) => string;
Gets the display name of a token type.
ANTLR provides a default implementation of this method, but applications are free to override the behavior in any manner which makes sense for the application. The default implementation returns the first result from the following list which produces a non-undefined result.
1. The result of getLiteralName
2. The result of getSymbolicName
3. The result of Integer#toString
Parameter tokenType
The token type.
Returns
The display name of the token type, for use in error reporting or other user-visible messages which reference specific token types.
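The fallback chain above (literal name, then symbolic name, then the decimal token type) can be sketched directly. MiniVocabulary is a hypothetical illustration, not the antlr4ts VocabularyImpl.

```typescript
// Sketch of the documented getDisplayName fallback chain: literal name first,
// then symbolic name, then the decimal token type as a last resort.
// Hypothetical MiniVocabulary, not the antlr4ts VocabularyImpl.
class MiniVocabulary {
  constructor(
    private readonly literalNames: ReadonlyArray<string | undefined>,
    private readonly symbolicNames: ReadonlyArray<string | undefined>
  ) {}

  getLiteralName(tokenType: number): string | undefined {
    return this.literalNames[tokenType];
  }

  getSymbolicName(tokenType: number): string | undefined {
    return this.symbolicNames[tokenType];
  }

  // First non-undefined result wins.
  getDisplayName(tokenType: number): string {
    return (
      this.getLiteralName(tokenType) ??
      this.getSymbolicName(tokenType) ??
      tokenType.toString()
    );
  }
}

// Token type 1 is THIS : 'this'; token type 2 is ID : [A-Z]+; (no literal form).
const vocab = new MiniVocabulary(
  [undefined, "'this'", undefined],
  [undefined, "THIS", "ID"]
);
console.log(vocab.getDisplayName(1)); // "'this'": literal name wins
console.log(vocab.getDisplayName(2)); // "ID": symbolic name fallback
console.log(vocab.getDisplayName(9)); // "9": numeric fallback
```

The literal-first ordering is what makes error messages read "expected ';'" rather than "expected SEMI" when a literal form exists.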
method getLiteralName
getLiteralName: (tokenType: number) => string | undefined;
Gets the string literal associated with a token type. The string returned by this method, when not undefined, can be used unaltered in a parser grammar to represent this token type.
The following table shows examples of lexer rules and the literal names assigned to the corresponding token types.

    Rule                Literal Name    Java String Literal
    THIS : 'this';      'this'          "'this'"
    SQUOTE : '\'';      '\''            "'\\''"
    ID : [A-Z]+;        n/a             undefined
Parameter tokenType
The token type.
Returns
The string literal associated with the specified token type, or
undefined
if no string literal is associated with the type.
method getSymbolicName
getSymbolicName: (tokenType: number) => string | undefined;
Gets the symbolic name associated with a token type. The string returned by this method, when not undefined, can be used unaltered in a parser grammar to represent this token type.
This method supports token types defined by any of the following methods:
* Tokens created by lexer rules.
* Tokens defined in a tokens{} block in a lexer or parser grammar.
* The implicitly defined EOF token, which has the token type Token#EOF.
The following table shows examples of lexer rules and the symbolic names assigned to the corresponding token types.

    Rule                Symbolic Name
    THIS : 'this';      THIS
    SQUOTE : '\'';      SQUOTE
    ID : [A-Z]+;        ID
Parameter tokenType
The token type.
Returns
The symbolic name associated with the specified token type, or
undefined
if no symbolic name is associated with the type.
interface WritableToken
interface WritableToken extends Token {}
property channel
channel: number;
property charPositionInLine
charPositionInLine: number;
property line
line: number;
property text
text: string | undefined;
property tokenIndex
tokenIndex: number;
property type
type: number;
Enums
enum Dependents
enum Dependents { SELF = 0, PARENTS = 1, CHILDREN = 2, ANCESTORS = 3, DESCENDANTS = 4, SIBLINGS = 5, PRECEEDING_SIBLINGS = 6, FOLLOWING_SIBLINGS = 7, PRECEEDING = 8, FOLLOWING = 9,}
Author: Sam Harwell
member ANCESTORS
ANCESTORS = 3
The element is dependent upon the set of the specified rule's ancestors (the transitive closure of PARENTS rules).
member CHILDREN
CHILDREN = 2
The element is dependent upon the set of the specified rule's children (rules which it directly references).
member DESCENDANTS
DESCENDANTS = 4
The element is dependent upon the set of the specified rule's descendants (the transitive closure of CHILDREN rules).
member FOLLOWING
FOLLOWING = 9
The element is dependent upon the set of the specified rule's following elements (rules which might start after the end of the specified rule while parsing). This is calculated by taking the FOLLOWING_SIBLINGS of the rule and each of its ANCESTORS, along with the DESCENDANTS of those elements.
member FOLLOWING_SIBLINGS
FOLLOWING_SIBLINGS = 7
The element is dependent upon the set of the specified rule's following siblings (the union of CHILDREN of its PARENTS which appear after a reference to the rule).
member PARENTS
PARENTS = 1
The element is dependent upon the set of the specified rule's parents (rules which directly reference it).
member PRECEEDING
PRECEEDING = 8
The element is dependent upon the set of the specified rule's preceding elements (rules which might end before the start of the specified rule while parsing). This is calculated by taking the PRECEEDING_SIBLINGS of the rule and each of its ANCESTORS, along with the DESCENDANTS of those elements.
member PRECEEDING_SIBLINGS
PRECEEDING_SIBLINGS = 6
The element is dependent upon the set of the specified rule's preceding siblings (the union of CHILDREN of its PARENTS which appear before a reference to the rule).
member SELF
SELF = 0
The element is dependent upon the specified rule.
member SIBLINGS
SIBLINGS = 5
The element is dependent upon the set of the specified rule's siblings (the union of CHILDREN of its PARENTS).
Namespaces
namespace CharStreams
namespace CharStreams {}
This class represents the primary interface for creating CharStreams from a variety of sources as of 4.7. The motivation was to support Unicode code points > U+FFFF. ANTLRInputStream and ANTLRFileStream are now deprecated in favor of the streams created by this interface.
WARNING: If you use both the deprecated and the new streams, you will see a nontrivial performance degradation. This speed hit is because the Lexer's internal code goes from a monomorphic to megamorphic dynamic dispatch to get characters from the input stream. Java's on-the-fly compiler (JIT) is unable to perform the same optimizations so stick with either the old or the new streams, if performance is a primary concern. See the extreme debugging and spelunking needed to identify this issue in our timing rig:
https://github.com/antlr/antlr4/pull/1781
The ANTLR character streams still buffer all the input when you create the stream, as they have done for ~20 years. If you need unbuffered access, please note that it becomes challenging to create parse trees. The parse tree has to point to tokens which will either point into a stale location in an unbuffered stream or you have to copy the characters out of the buffer into the token. That defeats the purpose of unbuffered input. Per the ANTLR book, unbuffered streams are primarily useful for processing infinite streams *during the parse.*
The new streams also use 8-bit buffers when possible so this new interface supports character streams that use half as much memory as the old ANTLRFileStream, which assumed 16-bit characters.
A big shout out to Ben Hamilton (github bhamiltoncx) for his superhuman efforts across all targets to get true Unicode 3.1 support for U+10FFFF.
Since: 4.7
function fromString
fromString: { (s: string): CodePointCharStream; (s: string, sourceName: string): CodePointCharStream;};
Creates a CharStream given a string.
Creates a CharStream given a string and the name of the source from which it came.
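The memory point above (8-bit buffers when the input allows) can be sketched as follows. This is a hypothetical illustration of the technique, not the actual CodePointCharStream/CodePointBuffer internals: pick the narrowest typed array that can hold every code point of the input.

```typescript
// Pick the narrowest typed array that can hold every code point of the input.
// Hypothetical sketch of the "8-bit buffers when possible" idea; the real
// CodePointCharStream implementation differs in detail.
function pickBuffer(s: string): Uint8Array | Uint16Array | Int32Array {
  // Iterate by code point (not UTF-16 code unit), so surrogate pairs count once.
  const codePoints = [...s].map((c) => c.codePointAt(0)!);
  const max = codePoints.length > 0 ? Math.max(...codePoints) : 0;
  if (max <= 0xff) return Uint8Array.from(codePoints);    // Latin-1 fits in 8 bits
  if (max <= 0xffff) return Uint16Array.from(codePoints); // BMP fits in 16 bits
  return Int32Array.from(codePoints);                     // supplementary planes need 32
}

console.log(pickBuffer("grammar") instanceof Uint8Array); // true: ASCII input
console.log(pickBuffer("héllo") instanceof Uint8Array);   // true: é is U+00E9
console.log(pickBuffer("日本語") instanceof Uint16Array);  // true: BMP code points
console.log(pickBuffer("𝕏") instanceof Int32Array);       // true: U+1D54F > U+FFFF
```

Since most grammars and inputs are ASCII-heavy, the 8-bit case dominates in practice, which is where the halved memory footprint relative to a 16-bit-per-character buffer comes from.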
namespace CodePointBuffer
namespace CodePointBuffer {}
class Builder
class Builder {}
constructor
constructor(initialBufferSize: number);
method append
append: (utf16In: Uint16Array) => void;
method build
build: () => CodePointBuffer;
method ensureRemaining
ensureRemaining: (remainingNeeded: number) => void;
namespace CommonTokenFactory
namespace CommonTokenFactory {}
variable DEFAULT
const DEFAULT: TokenFactory;
The default CommonTokenFactory instance.
This token factory does not explicitly copy token text when constructing tokens.
namespace IntStream
namespace IntStream {}
variable EOF
const EOF: number;
The value returned by LA() when the end of the stream is reached.
variable UNKNOWN_SOURCE_NAME
const UNKNOWN_SOURCE_NAME: string;
The value returned by sourceName when the actual name of the underlying source is not known.
namespace Token
namespace Token {}
variable DEFAULT_CHANNEL
const DEFAULT_CHANNEL: number;
All tokens go to the parser (unless skip() is called in that rule) on a particular "channel". The parser tunes to a particular channel so that whitespace etc... can go to the parser on a "hidden" channel.
variable EOF
const EOF: number;
variable EPSILON
const EPSILON: number;
During lookahead operations, this "token" signifies we hit rule end ATN state and did not follow it despite needing to.
variable HIDDEN_CHANNEL
const HIDDEN_CHANNEL: number;
Anything on different channel than DEFAULT_CHANNEL is not parsed by parser.
variable INVALID_TYPE
const INVALID_TYPE: number;
variable MIN_USER_CHANNEL_VALUE
const MIN_USER_CHANNEL_VALUE: number;
This is the minimum constant value which can be assigned to a user-defined token channel.
The non-negative numbers less than MIN_USER_CHANNEL_VALUE are assigned to the predefined channels DEFAULT_CHANNEL and HIDDEN_CHANNEL.
See Also
Token.channel
variable MIN_USER_TOKEN_TYPE
const MIN_USER_TOKEN_TYPE: number;
Package Files (46)
- ANTLRErrorListener.d.ts
- ANTLRErrorStrategy.d.ts
- ANTLRInputStream.d.ts
- BailErrorStrategy.d.ts
- BufferedTokenStream.d.ts
- CharStream.d.ts
- CharStreams.d.ts
- CodePointBuffer.d.ts
- CodePointCharStream.d.ts
- CommonToken.d.ts
- CommonTokenFactory.d.ts
- CommonTokenStream.d.ts
- ConsoleErrorListener.d.ts
- DefaultErrorStrategy.d.ts
- Dependents.d.ts
- DiagnosticErrorListener.d.ts
- FailedPredicateException.d.ts
- InputMismatchException.d.ts
- IntStream.d.ts
- InterpreterRuleContext.d.ts
- Lexer.d.ts
- LexerInterpreter.d.ts
- LexerNoViableAltException.d.ts
- ListTokenSource.d.ts
- NoViableAltException.d.ts
- Parser.d.ts
- ParserErrorListener.d.ts
- ParserInterpreter.d.ts
- ParserRuleContext.d.ts
- ProxyErrorListener.d.ts
- ProxyParserErrorListener.d.ts
- RecognitionException.d.ts
- Recognizer.d.ts
- RuleContext.d.ts
- RuleContextWithAltNum.d.ts
- RuleDependency.d.ts
- RuleVersion.d.ts
- Token.d.ts
- TokenFactory.d.ts
- TokenSource.d.ts
- TokenStream.d.ts
- TokenStreamRewriter.d.ts
- Vocabulary.d.ts
- VocabularyImpl.d.ts
- WritableToken.d.ts
- index.d.ts
Dependencies (0)
No dependencies.
Dev Dependencies (0)
No dev dependencies.
Peer Dependencies (0)
No peer dependencies.