A syntactic predicate specifies the syntactic validity of applying a production in a formal grammar and is analogous to a semantic predicate that specifies the semantic validity of applying a production. It is a simple and effective means of dramatically improving the recognition strength of an LL parser by providing arbitrary lookahead. In their original implementation, syntactic predicates had the form "( α )?" and could only appear on the left edge of a production. The required syntactic condition α could be any valid context-free grammar fragment.

More formally, a syntactic predicate is a form of production intersection, used in parser specifications or in formal grammars. In this sense, the term predicate has the meaning of a mathematical indicator function. If p1 and p2 are production rules, the language generated by both p1 and p2 is their set intersection.

As typically defined or implemented, syntactic predicates implicitly order the productions so that predicated productions specified earlier have higher precedence than predicated productions specified later within the same decision. This conveys an ability to disambiguate ambiguous productions because the programmer can simply specify which production should match.

Parsing expression grammars (PEGs), invented by Bryan Ford, extend these simple predicates by allowing "not" predicates and by permitting a predicate to appear anywhere within a production. Moreover, Ford invented packrat parsing to handle these grammars in linear time by employing memoization, at the cost of heap space.

It is possible to support linear-time parsing of predicates as general as those allowed by PEGs while reducing the memory cost associated with memoization by avoiding backtracking where some more efficient implementation of lookahead suffices. This approach is implemented by ANTLR version 3, which uses deterministic finite automata (DFAs) for lookahead; this may require testing a predicate in order to choose between transitions of the DFA.
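The core mechanism can be sketched as a speculative, side-effect-free scan that runs ahead of the committed parse. The toy recognizer below is an illustration only, not ANTLR's implementation, and all names and the grammar are invented for the example; it uses a predicate with unbounded lookahead to choose between two alternatives that no fixed-k LL(k) decision could separate:

```python
# Toy grammar:  stat -> (A 'b')? A 'b' | A 'c'   where A -> 'a'+
# The deciding 'b' or 'c' sits after arbitrarily many 'a's, so no fixed-k
# LL(k) lookahead can choose an alternative; the predicate scans ahead instead.

def predicate_A_b(s: str, pos: int) -> bool:
    """Syntactic predicate (A 'b')?: succeed iff 'a'+ 'b' matches at pos.
    Pure lookahead: consumes no input and triggers no semantic actions."""
    i = pos
    while i < len(s) and s[i] == 'a':
        i += 1
    return i > pos and i < len(s) and s[i] == 'b'

def parse_stat(s: str) -> str:
    """Choose an alternative by evaluating the predicate, then commit."""
    if predicate_A_b(s, 0):   # arbitrary lookahead decides the alternative
        return "A b"          # committed parse of alternative 1 would go here
    return "A c"              # likewise for alternative 2
```

For example, `parse_stat("aaab")` returns `"A b"`, while `parse_stat("aaac")` falls through to `"A c"`.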
Overview
Terminology
The term syntactic predicate was coined by Parr & Quong and differentiates this form of predicate from semantic predicates. Syntactic predicates have been called multi-step matching, parse constraints, and simply predicates in the literature. This article uses the term syntactic predicate throughout for consistency and to distinguish them from semantic predicates.
Formal closure properties
Bar-Hillel et al. show that the intersection of two regular languages is also a regular language, which is to say that the regular languages are closed under intersection. The intersection of a regular language and a context-free language is also a context-free language, but it has been known at least since Hartmanis that the intersection of two context-free languages is not necessarily a context-free language. This can be demonstrated easily using the canonical Type 1 language, {a^n b^n c^n : n ≥ 1}:

Let L1 = {a^m b^n c^n : m, n ≥ 1}
Let L2 = {a^n b^n c^m : m, n ≥ 1}
Let L3 = L1 ∩ L2

Given the strings 'aabbcc', 'aaabbcc', and 'aabbccc', it is clear that the only string that belongs to both L1 and L2 is 'aabbcc'.
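This construction can be checked mechanically for small strings. The snippet below is an illustration, not a proof: it tests membership in L1 = {a^m b^n c^n} and L2 = {a^n b^n c^m} using a regular-expression match plus length comparisons (the comparisons supply the counting that a regular expression alone cannot do):

```python
import re

def in_L1(s: str) -> bool:
    """L1 = { a^m b^n c^n : m, n >= 1 } -- the b and c runs must be equal."""
    m = re.fullmatch(r'(a+)(b+)(c+)', s)
    return bool(m) and len(m.group(2)) == len(m.group(3))

def in_L2(s: str) -> bool:
    """L2 = { a^n b^n c^m : m, n >= 1 } -- the a and b runs must be equal."""
    m = re.fullmatch(r'(a+)(b+)(c+)', s)
    return bool(m) and len(m.group(1)) == len(m.group(2))

# Only 'aabbcc' lies in the intersection L3 = L1 ∩ L2.
for s in ['aabbcc', 'aaabbcc', 'aabbccc']:
    print(s, in_L1(s) and in_L2(s))
```

Running this prints `True` only for `aabbcc`: `aaabbcc` is in L1 but not L2, and `aabbccc` is in L2 but not L1.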
Other considerations
In most formalisms that use syntactic predicates, the syntax of the predicate is noncommutative, which is to say that the operation of predication is ordered. For instance, using the above example, consider the following pseudo-grammar, where X ::= Y PRED Z is understood to mean: "Y produces X if and only if Y also satisfies predicate Z":

S    ::= a X
X    ::= Y PRED Z
Y    ::= a+ BNCN
Z    ::= ANBN c+
BNCN ::= b [BNCN] c
ANBN ::= a [ANBN] b

Given the string 'aaaabbbccc', in the case where Y must be satisfied first, S will generate aX and X in turn will generate 'aaabbbccc', thereby generating 'aaaabbbccc'. In the case where Z must be satisfied first, ANBN will fail to generate 'aaaabbb', and thus 'aaaabbbccc' is not generated by the grammar. Moreover, if either Y or Z specifies any action to be taken upon reduction, the order in which these productions match determines the order in which those side effects occur. Formalisms that vary over time may rely on these side effects.
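The side-effect ordering can be illustrated with a toy model (invented names, not any particular formalism): each of two recognizers appends to a log when it matches, so swapping the order of predication observably reorders the log even though both orders accept the same string.

```python
import re

log = []  # records the order in which the productions' actions fire

def match_Y(s: str) -> bool:
    """Y ::= a+ BNCN, i.e. a+ b^n c^n; fires an action on success."""
    m = re.fullmatch(r'(a+)(b+)(c+)', s)
    ok = bool(m) and len(m.group(2)) == len(m.group(3))
    if ok:
        log.append('Y')
    return ok

def match_Z(s: str) -> bool:
    """Z ::= ANBN c+, i.e. a^n b^n c+; fires an action on success."""
    m = re.fullmatch(r'(a+)(b+)(c+)', s)
    ok = bool(m) and len(m.group(1)) == len(m.group(2))
    if ok:
        log.append('Z')
    return ok

s = 'aaabbbccc'
match_Y(s) and match_Z(s)   # Y predicated by Z: log is ['Y', 'Z']
```

Evaluating `match_Z(s) and match_Y(s)` instead leaves `['Z', 'Y']` in the log: the predication is noncommutative with respect to its side effects.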
Examples of use
;ANTLR
Parr & Quong give this example of a syntactic predicate:

stat: (declaration)? declaration
    | expression
    ;

which is intended to satisfy the following informally stated constraint of C++: a phrase that can be parsed both as a declaration and as an expression is treated as a declaration.
In the first production of rule stat, the syntactic predicate (declaration)? indicates that declaration is the syntactic context that must be present for the rest of that production to succeed. We can interpret the use of (declaration)? as "I am not sure if declaration will match; let me try it out and, if it does not match, I shall try the next alternative." Thus, when encountering a valid declaration, the rule declaration will be recognized twice: once as a syntactic predicate and once during the actual parse to execute semantic actions. Of note in the above example is the fact that any code triggered by the acceptance of the declaration production will only occur if the predicate is satisfied.
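The "recognized twice" behaviour can be sketched as a speculate/commit pair. This is not ANTLR's generated code, and the test used to recognize a "declaration" is a deliberately crude, invented stand-in; the point is only that the predicate pass runs with actions disabled and the committed pass runs them:

```python
actions = []  # semantic actions executed by the committed parse

def declaration(s: str, speculating: bool) -> bool:
    """Recognize a 'declaration' (toy heuristic, purely illustrative)."""
    ok = s.endswith(';') and '=' not in s
    if ok and not speculating:
        actions.append(f'declared: {s}')  # action skipped while speculating
    return ok

def stat(s: str) -> str:
    if declaration(s, speculating=True):   # (declaration)? : first recognition
        declaration(s, speculating=False)  # second recognition, actions fire
        return 'declaration'
    return 'expression'
```

With this sketch, `stat('int x;')` matches the predicate, reparses, and records one action, while `stat('x = 1')` falls through to the expression alternative without touching the action list.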
Canonical examples
The language {a^n b^n c^n : n ≥ 1} can be represented in various grammars and formalisms as follows:

;Parsing Expression Grammars

S ← &(A !b) a+ B !c
A ← a A? b
B ← b B? c

;§-Calculus

Using a bound predicate:

S → {A} B
A → X 'c+'
X → 'a' [X] 'b'
B → 'a+' Y
Y → 'b' [Y] 'c'

Using two free predicates:

A → <'a+'>a <'b+'>b ΨX <'c+'>c ΨY
X → 'a' [X] 'b'
Y → 'b' [Y] 'c'

;Conjunctive Grammars

S → AB & DC
A → aA | ε
B → bBc | ε
C → cC | ε
D → aDb | ε

;Perl 6 rules

rule S { <?before <A> <!before b>> a+ <B> <!before c> }
rule A { a <A>? b }
rule B { b <B>? c }
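A PEG of this kind can be transcribed directly into recursive functions. The sketch below recognizes a^n b^n c^n from the grammar S ← &(A !b) a+ B !c, A ← a A? b, B ← b B? c; it is not a full packrat parser, which would additionally memoize (rule, position) pairs to guarantee linear time:

```python
# Each function returns the new input position on success or None on failure.
# The &(...) and !(...) predicates are calls whose resulting position is
# discarded: they test the input without consuming it.

def A(s: str, i: int):                # A <- a A? b
    if i < len(s) and s[i] == 'a':
        j = A(s, i + 1)
        k = j if j is not None else i + 1   # PEG '?': try A, else match empty
        if k < len(s) and s[k] == 'b':
            return k + 1
    return None

def B(s: str, i: int):                # B <- b B? c
    if i < len(s) and s[i] == 'b':
        j = B(s, i + 1)
        k = j if j is not None else i + 1
        if k < len(s) and s[k] == 'c':
            return k + 1
    return None

def S(s: str, i: int = 0):            # S <- &(A !b) a+ B !c
    j = A(s, i)                       # &-predicate: match A !b, then rewind
    if j is None or (j < len(s) and s[j] == 'b'):
        return None
    j = i
    while j < len(s) and s[j] == 'a': # a+ consumes the run of a's
        j += 1
    if j == i:
        return None                   # a+ requires at least one 'a'
    j = B(s, j)
    if j is None or (j < len(s) and s[j] == 'c'):
        return None                   # !c: no stray c may follow
    return j

def accepts(s: str) -> bool:
    """Full-input recognizer for a^n b^n c^n, n >= 1."""
    return S(s) == len(s)
```

The &(A !b) predicate checks that the a's and b's balance before any input is consumed, and the trailing !c rejects surplus c's; for instance `accepts('aabbcc')` holds while `accepts('aabbbccc')` does not.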
Parsers/formalisms using some form of syntactic predicate
Although by no means an exhaustive list, the following parsers and grammar formalisms employ syntactic predicates:
;ANTLR
;Augmented Pattern Matcher
;Parsing expression grammars
;§-Calculus
;Raku rules
;ProGrammar
;Conjunctive and Boolean Grammars