
Proposed Update Unicode Technical Standard #18

Unicode Regular Expressions

Version 10
Author Mark Davis (mark.davis@us.ibm.com)
Date 2004-10-12
This Version http://www.unicode.org/reports/tr18/tr18-10.html
Previous Version http://www.unicode.org/reports/tr18/tr18-9.html
Latest Version http://www.unicode.org/reports/tr18/
Tracking Number 10


Summary

This document describes guidelines for how to adapt regular expression engines to use Unicode.

Status

This document is a proposed update of a previously approved Unicode Technical Standard. Publication does not imply endorsement by the Unicode Consortium. This is a draft document which may be updated, replaced, or superseded by other documents at any time. This is not a stable document; it is inappropriate to cite this document as other than a work in progress.

A Unicode Technical Standard (UTS) is an independent specification. Conformance to the Unicode Standard does not imply conformance to any UTS.

Please submit corrigenda and other comments with the online reporting form [Feedback]. Related information that is useful in understanding this document is found in [References]. For the latest version of the Unicode Standard see [Unicode]. For a list of current Unicode Technical Reports see [Reports]. For more information about versions of the Unicode Standard, see [Versions].

Contents

0 Introduction
0.1 Notation
0.2 Conformance
1 Basic Unicode Support: Level 1
1.1 Hex notation
1.2 Properties
1.3 Subtraction and Intersection
1.4 Simple Word Boundaries
1.5 Simple Loose Matches
1.6 Line Boundaries
1.7 Supplementary Characters
2 Extended Unicode Support: Level 2
2.1 Canonical Equivalents
2.2 Default Grapheme Clusters
2.3 Default Word Boundaries
2.4 Default Loose Matches
2.5 Name Properties
2.6 Wildcard Properties
3 Tailored Support: Level 3
3.1 Tailored Punctuation
3.2 Tailored Grapheme Clusters
3.3 Tailored Word Boundaries
3.4 Tailored Loose Matches
3.5 Tailored Ranges
3.6 Context Matching
3.7 Incremental Matches
3.8 Unicode Set Sharing
3.9 Possible Match Sets
3.10 Folded Matching
3.11 Submatchers
Annex A. Character Blocks
Annex B. Sample Collation Character Code
Annex C. Compatibility Properties
References
Acknowledgments
Modifications

0 Introduction

The following describes general guidelines for extending regular expression engines (regex) to handle Unicode. There are three fundamental levels of Unicode support that can be offered by regular expression engines:

  1. Level 1: Basic Unicode Support. At this level, the regular expression engine provides support for Unicode characters as basic logical units, independent of the actual serialization (UTF-8, UTF-16, or UTF-32). This is a minimal level for useful Unicode support.
  2. Level 2: Extended Unicode Support. At this level, the regular expression engine also accounts for canonical equivalence, default grapheme clusters, default word boundaries, and default loose matches.
  3. Level 3: Tailored Support. At this level, the regular expression engine also provides for tailored treatment of characters, such as locale-dependent grapheme clusters, word boundaries, loose matches, and ranges.

One of the most important requirements for a regular expression engine is to document clearly what Unicode features are and are not supported. Even if higher-level support is not currently offered, provision should be made for the syntax to be extended in the future to encompass those features.

Note: Unicode is a constantly evolving standard: new characters will be added in the future. This means that a regular expression that tests for, say, currency symbols will have different results in Unicode 2.0 than in Unicode 2.1 (where the Euro currency sign, U+20AC, was added).

At any level, efficiently handling properties or conditions based on a large character set can take a lot of memory. A common mechanism for reducing the memory requirements — while still maintaining performance — is the two-stage table, discussed in Chapter 5 of The Unicode Standard [Unicode]. For example, the Unicode character properties required in RL1.2 Properties can be stored in memory in a two-stage table with only 7 or 8 Kbytes. Accessing those properties only takes a small amount of bit-twiddling and two array accesses.

Note: For ease of reference, the section ordering for this document is intended to be as stable as possible over successive versions. That may lead, in some cases, to the ordering of the sections being less than optimal.

0.1 Notation

In order to describe regular expression syntax, we will use an extended BNF form:

x y the sequence consisting of x then y
x* zero or more occurrences of x
x? zero or one occurrence of x
x | y either x or y
( x ) for grouping
"XYZ" terminal character(s)

The following syntax for character ranges will be used in successive examples.

Note: This is only a sample syntax for the purposes of examples in this document. Regular expression syntax varies widely: the issues discussed here would need to be adapted to the syntax of the particular implementation. However, it is important to have a concrete syntax to illustrate the different issues correctly. In general, the syntax here is similar to that of Perl Regular Expressions [Perl]. In some cases, this gives multiple syntactic constructs that provide for the same functionality.

LIST := "[" NEGATION? ITEM (SEP? ITEM)* "]"
ITEM := CODE_POINT2
     := CODE_POINT2 "-" CODE_POINT2 // range

CODE_POINT2 := ESCAPE CODE_POINT
            := CODE_POINT

NEGATION := "^"
SEP := ""  // no separator = union 
    := "|" // union
ESCAPE := "\"

CODE_POINT refers to any Unicode code point from U+0000 to U+10FFFF, although typically the only ones of interest will be those representing characters. Whitespace is allowed between any elements, but to simplify the presentation the many occurrences of " "* are omitted.

Code points that are syntax characters or whitespace are typically escaped. For more information see [Syntax]. In examples, we will sometimes use the syntax \s to mean whitespace. See also Annex C. Compatibility Properties.

Examples:

[a-z | A-Z | 0-9] Match ASCII alphanumerics
[a-z A-Z 0-9]
[a-zA-Z0-9]
[^a-z A-Z 0-9] Match anything but ASCII alphanumerics
[\] \- \ ] Match the literal characters ], -, <space>

Where string offsets are used in examples, they are from zero to n (the length of the string), and indicate positions between characters. Thus in "abcde", the substring from 2 to 4 includes the two characters "cd".

We also define the following notation for use here and in other Unicode documents:

\n As used within regular expressions, expands to the text matching the nth parenthesized group in the regular expression (à la Perl).
$n As used within replacement strings for regular expressions, expands to the text matching the nth parenthesized group in a corresponding regular expression. The value of $0 is the entire expression. (à la Perl)
$xyz As used within regular expressions or replacement strings, expands to an assigned variable value. The 'xyz' is of the form of an identifier. For example, given
$greek_lower = [[:greek:]&[:lowercase:]], the regular expression pattern "ab$greek_lower" is equivalent to "ab[[:greek:]&[:lowercase:]]".

Note: Since any character could occur as a literal in a regular expression, when regular expression syntax is embedded within other syntax it can be difficult to determine where the end of the regex expression is. Common practice is to allow the user to choose a delimiter like '/' in /ab(c)*/. The user can then simply choose a delimiter that is not in the particular regular expression.

0.2 Conformance

The following describes the possible ways that an implementation can claim conformance to this technical standard.

All syntax and API presented in this document is only for the purpose of illustration; there is absolutely no requirement to follow such syntax or API. Regular expression syntax varies widely: the features discussed here would need to be adapted to the syntax of the particular implementation. In general, the syntax in examples is similar to that of Perl Regular Expressions [Perl], but it may not be exactly the same. While the API examples generally follow Java style, it is again only for illustration.

C0. An implementation claiming conformance to this specification at any Level shall identify the version of this specification and the version of the Unicode Standard.
 
C1. An implementation claiming conformance to Level 1 of this specification shall meet the requirements described in the following sections:
RL1.1 Hex Notation
RL1.2 Properties
RL1.2a Compatibility Properties
RL1.3 Subtraction and Intersection
RL1.4 Simple Word Boundaries
RL1.5 Simple Loose Matches
RL1.6 Line Boundaries
RL1.7 Supplementary Characters
C2. An implementation claiming conformance to Level 2 of this specification shall satisfy C1, and meet the requirements described in the following sections:
RL2.1 Canonical Equivalents
RL2.2 Default Grapheme Clusters
RL2.3 Default Word Boundaries
RL2.4 Default Loose Matches
RL2.5 Name Properties
RL2.6 Wildcard Properties
C3. An implementation claiming conformance to Level 3 of this specification shall satisfy C1 and C2, and meet the requirements described in the following sections:
RL3.1 Tailored Punctuation
RL3.2 Tailored Grapheme Clusters
RL3.3 Tailored Word Boundaries
RL3.4 Tailored Loose Matches
RL3.5 Tailored Ranges
RL3.6 Context Matching
RL3.7 Incremental Matches
RL3.8 Unicode Set Sharing
RL3.9 Possible Match Sets
RL3.10 Folded Matching
RL3.11 Submatchers
C4. An implementation claiming partial conformance to this specification shall clearly indicate which levels are completely supported (C1-C3), plus any additional supported features from higher levels.

For example, an implementation may claim conformance to Level 1, plus Context Matching, and Incremental Matches. Another implementation may claim conformance to Level 1, except for Subtraction and Intersection.


1 Basic Unicode Support: Level 1

Regular expression syntax usually allows for an expression to denote a set of single characters, such as [a-z A-Z 0-9]. Since there are a very large number of characters in the Unicode standard, simple list expressions do not suffice.

1.1 Hex notation

The character set used by the regular expression writer may not be Unicode, and even where it is, it may not be possible to type all Unicode code points at a keyboard, so there needs to be a way to specify arbitrary code points.

RL1.1 Hex Notation
To meet this requirement, an implementation shall supply a mechanism for specifying any Unicode code point (from U+0000 to U+10FFFF).

A sample notation for specifying hex Unicode code points within strings is "\u" followed by four hex digits, or "\U" followed by eight hex digits. This would provide for the following addition:

<codepoint> := <character>
<codepoint> := ESCAPE U_SHORT_MARK
               HEX_CHAR HEX_CHAR HEX_CHAR HEX_CHAR

<codepoint> := ESCAPE U_LONG_MARK
               HEX_CHAR HEX_CHAR HEX_CHAR HEX_CHAR
               HEX_CHAR HEX_CHAR HEX_CHAR HEX_CHAR

U_SHORT_MARK := "u"
U_LONG_MARK := "U"

Examples:

[\u3040-\u309F \u30FC] Match Hiragana characters, plus prolonged sound sign
[\u00B2 \u2082] Match superscript and subscript 2
[a \U00010450] Match "a" or U+10450 SHAVIAN LETTER PEEP
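
For example, in Java's java.util.regex (used here purely to illustrate one engine's spelling), the short form is the same "\u" notation, while code points beyond four hex digits are written \x{...} rather than \U:

  import java.util.regex.Pattern;

  public class HexNotationDemo {
      public static void main(String[] args) {
          // U+3040..U+309F plus U+30FC, as in the first example above.
          Pattern hiragana = Pattern.compile("[\\u3040-\\u309F\\u30FC]");
          // "a" or U+10450 SHAVIAN LETTER PEEP; Java spells \U00010450 as \x{10450}.
          Pattern peep = Pattern.compile("[a\\x{10450}]");
          System.out.println(hiragana.matcher("\u3042").find());  // true: あ
          System.out.println(peep.matcher(new String(Character.toChars(0x10450))).find()); // true
      }
  }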

1.2 Properties

Since Unicode is a large character set, a regular expression engine needs to provide for the recognition of whole categories of characters as well as simply ranges of characters; otherwise the listing of characters becomes impractical and error-prone. This is done by providing syntax for sets of characters based on the Unicode character properties, and allowing them to be mixed with lists and ranges of individual code points.

There are a large number of Unicode Character Database properties. The official data mapping Unicode characters (and code points) to properties is the Unicode Character Database [UCD]. See also Chapter 4 in The Unicode Standard [Unicode].

The recommended names for UCD properties and property values are in PropertyAliases.txt [Prop] and PropertyValueAliases.txt [PropValue]. There are both abbreviated names and longer, more descriptive names. It is strongly recommended that both names be recognized, and that loose matching of property names be used, whereby the case distinctions, whitespace, hyphens, and underbar are ignored.

Note: it may be a useful implementation technique to load the Unicode tables that support properties and other features on demand, to avoid unnecessary memory overhead for simple regular expressions that don't use those properties.

Where a regular expression is expressed as much as possible in terms of higher-level semantic constructs such as Letter, it makes it practical to work with the different alphabets and languages in Unicode. Here is an example of a syntax addition that permits properties.

Notice that following Perl syntax, the p is lowercase to indicate a positive match, and uppercase to indicate a negative match.

ITEM := POSITIVE_SPEC | NEGATIVE_SPEC
POSITIVE_SPEC := ("\p{" PROP_SPEC "}") | ("[:" PROP_SPEC ":]")
NEGATIVE_SPEC := ("\P{" PROP_SPEC "}") | ("[:^" PROP_SPEC ":]")
PROP_SPEC := <binary_unicode_property>
PROP_SPEC := <unicode_property> (":" | "=") <unicode_property_value>
PROP_SPEC := <script_or_category_property_value>

Examples:

[\p{L} \p{Nd}] Match all letters and decimal digits
[\p{letter} \p{decimal number}]
\P{L} Match anything that is not a letter
\P{letter}
\p{East Asian Width:Narrow} Match anything that has the East Asian Width property value of Narrow
\p{Whitespace} Match anything that has the binary property Whitespace

Some properties are binary: they are either true or false for a given code point. In that case, only the property name is required. Others have multiple values, so for uniqueness both the property name and the property value need to be included. For example, Alphabetic is both a binary property and a value of the Line_Break enumeration, so \p{Alphabetic} would mean the binary property, and \p{Line Break:Alphabetic} or \p{Line_Break=Alphabetic} would mean the enumerated property. There are two exceptions to this: the properties Script and General Category commonly have the property name omitted. Thus \p{Not_Assigned} is equivalent to \p{General_Category = Not_Assigned}, and \p{Greek} is equivalent to \p{Script:Greek}.
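
For illustration, Java's java.util.regex (Java 7 and later) makes the same distinctions, spelling enumerated Script values with script= and binary properties with an Is prefix:

  import java.util.regex.Pattern;

  public class PropertyDemo {
      public static void main(String[] args) {
          Pattern letterOrDigit = Pattern.compile("[\\p{L}\\p{Nd}]");  // General Category values
          Pattern greek = Pattern.compile("\\p{script=Greek}");        // enumerated property: Script
          Pattern alphabetic = Pattern.compile("\\p{IsAlphabetic}");   // binary property: Alphabetic
          System.out.println(letterOrDigit.matcher("7").matches());    // true
          System.out.println(greek.matcher("\u03B1").matches());       // true: α
          System.out.println(alphabetic.matcher("\u5B57").matches());  // true: 字
      }
  }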

RL1.2 Properties
To meet this requirement, an implementation shall provide at least a minimal list of properties, consisting of the following: General_Category, Script, Alphabetic, Uppercase, Lowercase, White_Space, Noncharacter_Code_Point, Default_Ignorable_Code_Point, ANY, ASCII, and ASSIGNED.
RL1.2a  Compatibility Properties
  To meet this requirement, an implementation shall provide the properties listed in Annex C. Compatibility Properties, with the property values as listed there. Such an implementation shall document whether it is using the Standard Recommendation or POSIX-compatible properties.

Of the properties in RL1.2, only General Category and Script have multiple values; the rest are binary. An implementation that does not support non-binary enumerated properties can essentially "flatten" the enumerated type. Thus, for example, instead of \p{script=latin} the syntax could be \p{script_latin}.

General Category Property

The most basic overall character property is the General Category, which is a basic categorization of Unicode characters into: Letters, Punctuation, Symbols, Marks, Numbers, Separators, and Other. These property values each have a single letter abbreviation, which is the uppercase first character except for separators, which use Z. The official data mapping Unicode characters to the General Category value is in UnicodeData.txt [UData].

Each of these categories has different subcategories. For example, the subcategories for Letter are uppercase, lowercase, titlecase, modifier, and other (in this case, other includes uncased letters such as Chinese). By convention, the subcategory is abbreviated by the category letter (in uppercase), followed by the first character of the subcategory in lowercase. For example, Lu stands for Uppercase Letter.

Note: Since it is recommended that the property syntax be lenient as to spaces, casing, hyphens and underbars, any of the following should be equivalent: \p{Lu}, \p{lu}, \p{uppercase letter}, \p{Uppercase Letter}, \p{Uppercase_Letter}, and \p{uppercaseletter}.

The General Category property values are listed below. For more information on the meaning of these values, see UCD.html [UDataDoc].

Abb. Long form
L Letter
Lu Uppercase Letter
Ll Lowercase Letter
Lt Titlecase Letter
Lm Modifier Letter
Lo Other Letter
M Mark
Mn Non-Spacing Mark
Mc Spacing Combining Mark
Me Enclosing Mark
N Number
Nd Decimal Digit Number
Nl Letter Number
No Other Number
S Symbol
Sm Math Symbol
Sc Currency Symbol
Sk Modifier Symbol
So Other Symbol
P Punctuation
Pc Connector Punctuation
Pd Dash Punctuation
Ps Open Punctuation
Pe Close Punctuation
Pi Initial Punctuation
Pf Final Punctuation
Po Other Punctuation
Z Separator
Zs Space Separator
Zl Line Separator
Zp Paragraph Separator
C Other
Cc Control
Cf Format
Cs Surrogate
Co Private Use
Cn Not Assigned
- Any*
- Assigned*
- ASCII*
* The last few properties are not part of the General Category.
  • Any matches all code points. This could also be captured with [\u0000-\U0010FFFF], but with Tailored Ranges off. In some regular expression languages, \p{Any} may be expressed by a period, but that may exclude newline characters.
  • Assigned is equivalent to \P{Cn}, and matches all assigned characters (for the target version of Unicode). It also includes all private use characters. It is useful for avoiding confusing double negatives. Note that Cn includes noncharacters, so Assigned excludes them.
  • ASCII is equivalent to [\u0000-\u007F], but with Tailored Ranges off.

Script Property

A regular-expression mechanism may choose to offer the ability to identify characters on the basis of other Unicode properties besides the General Category. In particular, Unicode characters are also divided into scripts as described in UTR #24: Script Names [ScriptDoc] (for the data file, see Scripts.txt [ScriptData]). Using a property such as \p{Greek} allows people to test whether letters are Greek or not.

Note, however, that the usage model for the script property normally requires that people construct somewhat more complex regular expressions, because a great many characters are shared between scripts. Documentation should point users to the description in UTR #24.

Other Properties

Other useful properties are described in Section 2 of Unicode Normalization Forms [Norm], in Section 3.13 of The Unicode Standard, Version 4.0 [Case], and in the documentation for the Unicode Character Database [UCD].

The binary properties include:

The enumerated non-binary properties include:

The numeric properties include:

The string properties include:

A full list of the available UCD properties is on UCD Properties. Of those, the following are only useful in very restricted cases, such as in the internal implementation of normalization or case conversions:

Blocks

Unicode blocks can sometimes also be a useful enumerated property. However, there are some very significant caveats to the use of Unicode blocks for the identification of characters: see Annex A. Character Blocks. If blocks are used, some of the names can collide with Script names, so they should be distinguished, such as with syntax like \p{Greek Block} or \p{Block=Greek}.

1.3 Subtraction and Intersection

As discussed above, character properties are essential with a large character set. In addition, there needs to be a way to "subtract" characters from what is already in the list. For example, one may want to include all non-ASCII letters without having to list every character in \p{letter} that is not one of those 52.

RL1.3 Subtraction and Intersection
To meet this requirement, an implementation shall supply mechanisms for both intersection and set-difference of Unicode sets.


ITEM := "[" ITEM "]" // for grouping
SEP := ""            // no separator = union 
    := "|"           // union
    := "&"           // intersection
    := "-"           // removal = set difference

Implementations may also choose to offer other set operations, such as symmetric difference: [\p{letter} ⊖ \p{ascii}]

Note: In the sample syntax used here:

  1. The symbol "-" between two characters still means a range, not a set-difference. That is:
    • [\p{ascii} - aeiouy] is equivalent to [\p{ascii} - [aeiouy]]
    • [aeiouy & \p{ascii}] is equivalent to [[aeiouy] & \p{ascii}]
  2. Union binds more closely than intersection, which binds more closely than removal. Otherwise items bind from the left. (However, such binding or precedence may vary by regular expression engine.)

    For example, we get the following, where A..E stand for expressions, not characters:

    • [A|B|C-D|E] is the same as [[A|B|C] - [D|E]]
      • That is, it means form the union of A, B, and C, and then subtract the union of D and E.
    • [A - B - C - D & E] is the same as [[[A - B] - C] - [D & E]]
      • That is, take A, then remove all Bs, then all Cs, then remove the intersection of D and E.

Examples:

[\p{L} - QW] Match all letters but Q and W
[\p{N} - [\p{Nd} - 0-9]] Match all non-decimal numbers, plus 0-9.
[\u0000-\u007F - \P{letter}] Match all letters in the ASCII range, by subtracting non-letters.
[\p{Greek } - \N{GREEK SMALL LETTER ALPHA}] Match Greek letters except alpha
[\p{Assigned} - \p{Decimal Digit Number} - a-f A-F ａ-ｆ Ａ-Ｆ] Match all assigned characters except for hex digits (using a broad definition that includes the fullwidth compatibility forms).
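
For illustration, the same operations can be written in Java's java.util.regex, where intersection within a class is spelled && and set-difference is expressed as intersection with a negated class:

  import java.util.regex.Pattern;

  public class SetOperationsDemo {
      public static void main(String[] args) {
          // [\p{L} - \p{ASCII}] in the sample syntax: letters minus ASCII.
          Pattern nonAsciiLetter = Pattern.compile("[\\p{L}&&[^\\p{ASCII}]]");
          // Intersection: lowercase Greek letters, like [\p{Greek} & \p{Ll}].
          Pattern lowerGreek = Pattern.compile("[\\p{IsGreek}&&\\p{Ll}]");
          System.out.println(nonAsciiLetter.matcher("\u00E9").matches()); // true: é
          System.out.println(nonAsciiLetter.matcher("e").matches());      // false
          System.out.println(lowerGreek.matcher("\u03B1").matches());     // true: α
          System.out.println(lowerGreek.matcher("\u0391").matches());     // false: Α
      }
  }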


1.4 Simple Word Boundaries

Most regular expression engines allow a test for word boundaries (such as by "\b" in Perl). They generally use a very simple mechanism for determining word boundaries: one example of that would be having word boundaries between any pair of characters where one is a <word_character> and the other is not, or at the start and end of a string. This is not adequate for Unicode regular expressions.

RL1.4 Simple Word Boundaries
To meet this requirement, an implementation shall extend the word boundary mechanism so that:
  1. The class of <word_character> includes all the Alphabetic values from the Unicode character database, from UnicodeData.txt [UData], plus the U+200C ZERO WIDTH NON-JOINER and U+200D ZERO WIDTH JOINER. See also Annex C: Compatibility Properties.
  2. Non-spacing marks are never divided from their base characters, and are otherwise ignored in locating boundaries.

Level 2 provides more general support for word boundaries between arbitrary Unicode characters which may override this behavior.

1.5 Simple Loose Matches

The only loose matching that most regular expression engines offer is caseless matching. If the engine offers this, then it needs to account for the large number of cased Unicode characters outside of ASCII.

RL1.5 Simple Loose Matches
To meet this requirement, if an implementation provides for case-insensitive matching, then it shall provide at least the simple, default Unicode case-insensitive matching.

To meet this requirement, if an implementation provides for case conversions, then it shall provide at least the simple, default Unicode case conversion.

In addition, because of the vagaries of natural language, there are situations where two different Unicode characters have the same uppercase or lowercase form. To meet this requirement, implementations must handle these cases in accordance with the Unicode Standard. For example, the Greek U+03C3 "σ" small sigma, U+03C2 "ς" small final sigma, and U+03A3 "Σ" capital sigma all match.

Some caseless matches may match one character against two: for example, U+00DF "ß" matches the two characters "SS". And case matching may vary by locale. However, because many implementations are not set up to handle this, at Level 1 only simple case matches are necessary. To correctly implement a caseless match, see Chapter 3 of the Unicode Standard [Unicode]. The data file supporting caseless matching is CaseFolding.txt [CaseData].

To meet this requirement, if an implementation also offers case conversions, then these must also follow Chapter 3 of the Unicode Standard [Unicode]. The relevant data files are SpecialCasing.txt [SpecialCasing] and UnicodeData.txt [UData].
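
For illustration, in Java the simple default Unicode case-insensitive matching described here requires the UNICODE_CASE flag in addition to CASE_INSENSITIVE, which by itself folds only ASCII:

  import java.util.regex.Pattern;

  public class SigmaDemo {
      public static void main(String[] args) {
          Pattern smallSigma = Pattern.compile("\u03C3",  // σ small sigma
                  Pattern.CASE_INSENSITIVE | Pattern.UNICODE_CASE);
          System.out.println(smallSigma.matcher("\u03A3").matches()); // true: Σ capital sigma
          System.out.println(smallSigma.matcher("\u03C2").matches()); // true: ς final sigma
      }
  }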

1.6 Line Boundaries

Most regular expression engines also allow a test for line boundaries: end-of-line or start-of-line. This presumes that lines of text are separated by line (or paragraph) separators.

RL1.6 Line Boundaries
To meet this requirement, if an implementation provides for line-boundary testing, it shall recognize not only CRLF, LF, CR, but also NEL (U+0085), PS (U+2029) and LS (U+2028).

Formfeed (U+000C) also normally indicates an end-of-line. For more information, see Chapter 3 of [Unicode].

These characters should be uniformly handled in determining logical line numbers, start-of-line, end-of-line, and arbitrary-character implementations. Logical line number is useful for compiler error messages and the like. Regular expressions often allow for SOL and EOL patterns, which match certain boundaries. Often there is also a "non-line-separator" arbitrary character pattern that excludes line separator characters.

The behavior of these may also differ depending on whether one is in a "multiline" mode or not. For more information, see Anchors and Other "Zero-Width Assertions" in Chapter 3 of [Friedl].

A newline sequence is defined to be any of the following:

\u000A | \u000B | \u000C | \u000D | \u0085 | \u2028 | \u2029 | \u000D\u000A

  1. Logical line number
    • The line number is increased by one for each occurrence of a newline sequence.
    • Note that different implementations may call the first line either line zero or line one.
  2. Logical beginning of line (often "^")
    • SOL is at the start of a file or string, and also immediately following any occurrence of a newline sequence.
    • There is no empty line within the sequence \u000D\u000A.
    • Note that there may be a separate pattern for "beginning of text" for a multiline mode, one which matches only at the beginning of the first line, e.g., in Perl \A.
  3. Logical end of line (often "$")
    • EOL is at the end of a file or string, and also immediately preceding a final occurrence of a newline sequence.
    • There is no empty line within the sequence \u000D\u000A.
    • SOL and EOL are not symmetric because of multiline mode: EOL can be interpreted in at least three different ways:
      1. EOL matches at the end of the string
      2. EOL matches before final newline
      3. EOL matches before any newline
  4. Arbitrary character pattern (often ".")
    • Should not match any of the newline sequences.
    • In "multiline mode", these would match, and \u000D\u000A matches as if it were a single character.
    • Note that ^.*$ (an empty line pattern) should not match the empty string within the sequence \u000D\u000A, but should match the empty string within the reversed sequence \u000A\u000D.

It is strongly recommended that there be a regular expression meta-character, such as "\R", for matching all line ending characters and sequences listed above (e.g. in #1). It would thus be shorthand for ([\u000A\u000B\u000C\u000D\u0085\u2028\u2029] | \u000D\u000A).
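
For illustration, Java 8's java.util.regex defines \R with exactly this expansion, so splitting on it treats every newline sequence alike, including CRLF as a single unit:

  import java.util.regex.Pattern;

  public class NewlineDemo {
      public static void main(String[] args) {
          // \R is \u000D\u000A | [\u000A\u000B\u000C\u000D\u0085\u2028\u2029]
          String[] lines = Pattern.compile("\\R").split("one\r\ntwo\u0085three\u2028four");
          for (String line : lines) {
              System.out.println(line);  // one, two, three, four
          }
      }
  }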

For more information on line breaking, see [LineBreak].

1.7 Supplementary Characters

A fundamental requirement is that Unicode text be interpreted semantically by code point, not by code units.

RL1.7 Supplementary Characters
To meet this requirement, an implementation shall handle the full range of Unicode code points, including supplementary code points from U+10000 to U+10FFFF. In particular, where UTF-16 is used, a sequence consisting of a leading surrogate followed by a trailing surrogate shall be handled as a single code point in matching.

UTF-16 uses pairs of Unicode code units to express code points above U+FFFF. Surrogate pairs (or their equivalents in other encoding forms) are to be handled internally as single code point values. In particular, [\u0000-\U00010000] will match each of the following sequences of code units:

Code Point UTF-8 Code Units UTF-16 Code Units UTF-32 Code Units
7F 7F 007F 0000007F
80 C2 80 0080 00000080
7FF DF BF 07FF 000007FF
800 E0 A0 80 0800 00000800
FFFF EF BF BF FFFF 0000FFFF
10000 F0 90 80 80 D800 DC00 00010000
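
For illustration, a sketch of the distinction in Java, whose strings are sequences of UTF-16 code units but whose regular expressions match by code point:

  import java.util.regex.Pattern;

  public class SupplementaryDemo {
      public static void main(String[] args) {
          String peep = new String(Character.toChars(0x10450)); // U+10450, one surrogate pair
          System.out.println(peep.length());                          // 2 code units
          System.out.println(peep.codePointCount(0, peep.length()));  // 1 code point
          // A single "." must consume the whole pair, never half of it.
          System.out.println(Pattern.matches(".", peep));             // true
      }
  }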



2 Extended Unicode Support: Level 2

Level 1 support works well in many circumstances. However, it does not handle more complex languages or extensions to the Unicode Standard very well. Particularly important cases are canonical equivalence, word boundaries, default grapheme cluster boundaries, and loose matches. (For more information about boundary conditions, see The Unicode Standard, Section 5-15.)

Level 2 support matches user expectations for sequences of Unicode characters much more closely. It is still locale-independent and easily implementable. However, the implementation may be slower when supporting Level 2, and some expressions may require Level 1 matches. Thus it is often useful to have some sort of syntax that will turn Level 2 support on and off.

2.1 Canonical Equivalents

There are many instances where a character can be equivalently expressed by two different sequences of Unicode characters. For example, [ä] should match both "ä" and "a\u0308". (See UAX #15: Unicode Normalization [Norm] and Sections 2.5 and 3.9 of The Unicode Standard for more information.)

RL2.1 Canonical Equivalents
To meet this requirement, an implementation shall provide a mechanism for ensuring that all canonically equivalent literal characters match.

There are two main options for implementing this:

  1. Before (or during) processing, translate the text (and pattern) into a normalized form. This is the simplest to implement, since there are available code libraries for doing normalization.
  2. Expand the regular expression internally into a more generalized regular expression that takes canonical equivalence into account. For example, the expression [a-z ä] can be internally turned into [a-z ä] | (a \u0308). While this can be faster, it may also be substantially more difficult to generate expressions capturing all of the possible equivalent sequences.
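
Both options exist in practice. For illustration, Java provides java.text.Normalizer for the first and the Pattern.CANON_EQ flag for the second:

  import java.text.Normalizer;
  import java.util.regex.Pattern;

  public class CanonicalDemo {
      public static void main(String[] args) {
          String decomposed = "a\u0308"; // a + COMBINING DIAERESIS, canonically equivalent to ä
          // Option 1: normalize the text (and pattern) before matching.
          String nfc = Normalizer.normalize(decomposed, Normalizer.Form.NFC);
          System.out.println(Pattern.matches("\u00E4", nfc));   // true
          // Option 2: let the engine expand the pattern internally.
          Pattern p = Pattern.compile("\u00E4", Pattern.CANON_EQ);
          System.out.println(p.matcher(decomposed).matches());  // true
      }
  }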

It may be useful to distinguish a regular-expression engine from the larger software package which uses it. For example, the requirements of this section can be met by requiring the package to normalize text before supplying it to the regular expression engine. However, where the regular expression engine returns offsets into the text, the package may need to map those back to what the offsets would be in the original, unnormalized text.

Note: Combining characters are required for many languages. Even when text is in Normalization Form C, there may be combining characters in the text.

2.2 Default Grapheme Clusters

One or more Unicode characters may make up what the user thinks of as a character. To avoid ambiguity with the computer use of the term character, this is called a grapheme cluster. For example, "G" + acute-accent is a grapheme cluster: it is thought of as a single character by users, yet is actually represented by two Unicode characters.

Note: default grapheme clusters were previously referred to as "locale-independent graphemes". The term cluster has been added to emphasize that the term grapheme is used differently in linguistics. For simplicity and to align with UTS #10: Unicode Collation Algorithm [Collation], the terms "locale-independent" and "locale-dependent" have also been changed to "default" and "tailored" respectively.

Essentially, the default grapheme clusters do only two things: they keep Hangul syllables together, and they don't break before non-spacing marks.

These default grapheme clusters are not the same as tailored grapheme clusters, which are covered in Level 3, Tailored Grapheme Clusters. The default grapheme clusters are determined according to the rules in UTR #29: Text Boundaries [Boundaries].

RL2.2 Default Grapheme Clusters
To meet this requirement, an implementation shall provide a mechanism for matching against an arbitrary default grapheme cluster, a literal cluster, and matching default grapheme cluster boundaries.

For example, an implementation could interpret "\X" as matching any default grapheme cluster, while interpreting "." as matching any single code point. It could interpret "\h" as a zero-width match against any grapheme cluster boundary, and "\H" as the negation of that.
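
Where the engine has no such syntax, the surrounding package can approximate this with a boundary iterator. For illustration, a sketch using Java's java.text.BreakIterator, whose character instance approximates default grapheme clusters:

  import java.text.BreakIterator;

  public class ClusterDemo {
      public static void main(String[] args) {
          String s = "G\u0301o"; // G + combining acute accent, then o
          BreakIterator bi = BreakIterator.getCharacterInstance();
          bi.setText(s);
          for (int start = bi.first(), end = bi.next();
                  end != BreakIterator.DONE; start = end, end = bi.next()) {
              System.out.println(s.substring(start, end)); // "G\u0301", then "o"
          }
      }
  }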

Regular expression engines should also provide some mechanism for easily matching against literal clusters, since they are more likely to match user expectations for many languages. One mechanism for doing that is to have explicit syntax for literal clusters, as in the following. This syntax can also be used for tailored grapheme clusters (Tailored Grapheme Clusters).

ITEM := "\q{" CODE_POINT + "}"

Examples:

[a-z\q{x\u0323}] Match a-z, and x with an under-dot (used in American Indian languages)
[a-z\q{aa}] Match a-z, and aa (treated as a single character in Danish).
[a-z ñ \q{ch} \q{ll} \q{rr}] Match lowercase characters in traditional Spanish.

These can be expressed syntactically by breaking them into combinations of code point sets and other constructs:

Original: [a-z ñ \q{ch} \q{ll} \q{rr}]
Equivalent: ([[a-z ñ]-[clr]] | c(?!h) | ch | l(?!l) | ll | r(?!r) | rr)

This is a bit cumbersome for users: notice the use of the negative lookahead syntax ?! to force, for example, a "c" followed by "h" to match only as the cluster "ch". Not only is this cumbersome, it is also less convenient when other options are added, as in Level 3.

A typical implementation of the inverse of a set containing literal clusters simply removes those strings; thus [^a-z ñ \q{ch} \q{ll} \q{rr}] is equivalent to [^a-z ñ]. Without literal clusters, intersection and set-difference can be expressed simply as a combination of the other with inverse. However, this is not the case if the implementation of inverse simply removes the strings.

2.3 Default Word Boundaries

RL2.3 Default Word Boundaries
To meet this requirement, an implementation shall provide a mechanism for matching Unicode default word boundaries.

The simple Level 1 support using simple <word_character> classes is only a very rough approximation of user word boundaries. A much better method takes into account more context than just a single pair of letters. A general algorithm can take care of character and word boundaries for most of the world's languages. For more information, see UTR #29: Text Boundaries [Boundaries].
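
For illustration, a sketch using Java's java.text.BreakIterator word instance, which takes more context into account than a <word_character> class can:

  import java.text.BreakIterator;

  public class WordDemo {
      public static void main(String[] args) {
          String s = "can't stop";
          BreakIterator bi = BreakIterator.getWordInstance();
          bi.setText(s);
          for (int start = bi.first(), end = bi.next();
                  end != BreakIterator.DONE; start = end, end = bi.next()) {
              System.out.println("|" + s.substring(start, end) + "|"); // |can't|, | |, |stop|
          }
      }
  }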

Note: Word boundaries and "soft" line-break boundaries (where one could break in line wrapping) are not generally the same; line breaking has a much more complex set of requirements to meet the typographic requirements of different languages. See UAX #14: Line Breaking Properties [LineBreak] for more information. However, soft line breaks are not generally relevant to general regular expression engines.

A fine-grained approach to languages such as Chinese or Thai, which are written without spaces between words, requires information that is beyond the bounds of what a Level 2 algorithm can provide.

2.4 Default Loose Matches

RL2.4 Default Loose Matches
To meet this requirement:
  • if an implementation provides for case-insensitive matching, then it shall provide at least the full, default Unicode case-insensitive matching.
  • if an implementation provides for case conversions, then it shall provide at least the full, default Unicode case conversion.

At Level 1, caseless matches do not need to handle cases where one character matches against two. Level 2 includes caseless matches where one character may match against two (or more) characters. For example, U+00DF "ß" will match against the two characters "SS".

To correctly implement a caseless match and case conversions, see UAX #21: Case Mappings [Case]. For ease of implementation, a complete case folding file is supplied at CaseFolding.txt [CaseData].

If the implementation containing the regular expression engine also offers case conversions, then these should also be done in accordance with UAX #21, with the full mappings. The relevant data files are SpecialCasing.txt [SpecialCasing] and UnicodeData.txt [UData].

2.5 Name Properties

RL2.5 Name Properties
To meet this requirement, an implementation shall support individually named characters.

When using names in regular expressions, the main data is supplied in the Name property in the UCD, as described in [UDataDoc], or computed as in the case of CJK Ideographs or Hangul Syllables. Certain code points are not assigned names in the standard. These should be given names based on the General_Category:

Control: The Unicode 1.0 name field (ISO control names).
Private Use: <no name>
Unassigned: <no name>

The ISO names for the control characters may be unfamiliar, especially since many people are not familiar with changes in the formal ISO names to make them more language neutral, so it is recommended that they be supplemented with other aliases. For example, for U+0009 the implementation could accept the official name CHARACTER TABULATION, and also the aliases HORIZONTAL TABULATION, HT, and TAB.

Individually Named Characters

This facility provides syntax for specifying a code point by supplying the precise name, such as the following. This syntax specifies a single code point, which can thus be used in ranges.

<codepoint> := "\N{" <character_name> "}"

This is equivalent to using the property name, as in \p{name=WHITE SMILING FACE}. The only distinction between them is that \N should cause a syntax error if the name does not correspond to any character.
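
For illustration, some engines accept this syntax directly; Java 9's java.util.regex is one:

  import java.util.regex.Pattern;

  public class NameDemo {
      public static void main(String[] args) {
          // \N{...} resolves a code point by its Unicode name.
          Pattern smiling = Pattern.compile("\\N{WHITE SMILING FACE}");
          System.out.println(smiling.matcher("\u263A").matches()); // true: ☺
      }
  }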

As with other property values, names should use a loose match, disregarding case, spaces and hyphen ("-") (the underbar character "_" cannot occur in Unicode character names). An implementation may also choose to allow namespaces, where some prefix like "LATIN LETTER" is set globally and used if there is no match otherwise.

There are, however, three instances that require special-casing with loose matching, where an extra test shall be made for the presence or absence of a hyphen:

  • U+116C HANGUL JUNGSEONG OE as opposed to U+1180 HANGUL JUNGSEONG O-E
  • U+0F60 TIBETAN LETTER -A as opposed to U+0F68 TIBETAN LETTER A
  • U+0FB0 TIBETAN SUBJOINED LETTER -A as opposed to U+0FB8 TIBETAN SUBJOINED LETTER A

2.6 Wildcard Properties

RL2.6 Wildcard Properties
To meet this requirement, an implementation shall support Unicode properties with wildcards.

Instead of a single property value, this feature allows the use of a regular expression to pick out a set of characters based on whether the property values match the regular expression. The regular expression must support at least wild-cards; other regular-expression features are recommended but optional.

Note: Where regular expressions are used in matching property values, case, spaces, hyphens, and underbars are significant; it is presumed that people will use regular-expression features to ignore these distinctions if desired.

Examples, where "." matches any character, "*" means zero or more repetitions, | is alternation, and (...) is grouping:

\p{name=/LATIN LETTER.*P/} is equivalent to:
01AA          # LATIN LETTER REVERSED ESH LOOP
0294          # LATIN LETTER GLOTTAL STOP
0296          # LATIN LETTER INVERTED GLOTTAL STOP
1D18          # LATIN LETTER SMALL CAPITAL P
\p{name=/.*VARIA(TION|NT).*/} is equivalent to:
180B..180D    # MONGOLIAN FREE VARIATION SELECTOR ONE..MONGOLIAN FREE VARIATION SELECTOR THREE
299C          # RIGHT ANGLE VARIANT WITH SQUARE
303E          # IDEOGRAPHIC VARIATION INDICATOR
FE00..FE0F    # VARIATION SELECTOR-1..VARIATION SELECTOR-16
E0100..E01EF  # VARIATION SELECTOR-17..VARIATION SELECTOR-256
\p{nfd=/.*b.*/} is equivalent to:
0062 # Ll [1] U+0062 LATIN SMALL LETTER B
1E03 # Ll [1] U+1E03 LATIN SMALL LETTER B WITH DOT ABOVE
1E05 # Ll [1] U+1E05 LATIN SMALL LETTER B WITH DOT BELOW
1E07 # Ll [1] U+1E07 LATIN SMALL LETTER B WITH LINE BELOW

The above are all on the basis of Unicode 4.0; different versions of Unicode may produce different results.


3 Tailored Support: Level 3

All of the above deals with a default specification for a regular expression. However, a regular expression engine also may want to support tailored specifications, typically tailored for a particular language or locale. This may be important when the regular expression engine is being used by end-users instead of programmers, such as in a word-processor allowing some level of regular expressions in searching.

For example, the order of Unicode characters may differ substantially from the order expected by users of a particular language. The regular expression engine has to decide, for example, whether the list [a-ä] means:

  • the set of Unicode code points from U+0061 to U+00E4, in binary order, or
  • the set of characters that sort between a and ä in the relevant locale's collation order.

If both tailored and default regular expressions are supported, then a number of different mechanisms are affected. There are two main alternatives for control of tailored support:

  • coarse-grained control, where the tailoring applies to the regular expression as a whole, such as by a compilation parameter, or
  • fine-grained control, where the tailoring applies to a marked portion of the regular expression.

For example, fine-grained support could use some syntax like the following to indicate tailoring to a locale within a certain range. (Marking locales is generally specified by means of the common ISO 639 and 3166 tags, such as "en_US". For more information on these tags, see the online data in [Online].)

\T{<locale>}..\E

Level 3 support may be considerably slower than Level 2, and some scripts may require either Level 1 or Level 2 matches instead. Thus there must be some sort of syntax that will allow Level 3 support to be turned on and off. Because tailored regular expression patterns are usually quite specific to the locale, and will generally not work across different locales, the syntax should also specify the particular locale or other tailoring customization that the pattern was designed for.

Sections 3.6 and following describe some additional capabilities of regular expression engines that are very useful in a Unicode environment, especially in dealing with the complexities of the large number of writing systems and languages expressible in Unicode.

3.1 Tailored Punctuation

The Unicode character properties for punctuation may vary from language to language or from country to country. In most cases, the effects of such changes will be apparent in other operations, such as the determination of word breaks. But there are other circumstances where the effects should be apparent in the general APIs: for example, whether a curly quotation mark is treated as opening or closing punctuation may vary by language.

RL3.1 Tailored Punctuation
To meet this requirement, an implementation shall allow for punctuation properties to be tailored according to locale.

As described in the introduction to Level 3, there must be the capability of turning this support on or off.

3.2 Tailored Grapheme Clusters

RL3.2 Tailored Grapheme Clusters
To meet this requirement, an implementation shall provide for collation grapheme cluster matches based on a locale's collation order.

Tailored grapheme clusters may be somewhat different from the default grapheme clusters discussed in Level 2. They are coordinated with the collation ordering for a given language in the following way. A collation ordering determines a collation grapheme cluster, which is a sequence of characters that is treated as a unit by the ordering. For example, ch is a collation character for a traditional Spanish ordering. More specifically, a collation character is the longest sequence of characters that maps to a sequence of one or more collation elements where the first collation element has a primary weight and subsequent elements do not, and no completely ignorable characters are included.

The tailored grapheme clusters for a particular locale are the collation characters for the collation ordering for that locale. The determination of tailored grapheme clusters requires the regular expression engine to either draw upon the platform's collation data, or incorporate its own tailored data for each supported locale.

See UTS #10: Unicode Collation Algorithm [Collation] for more information about collation, and Annex B. Sample Collation Character Code for sample code.

3.3 Tailored Word Boundaries

RL3.3 Tailored Word Boundaries
To meet this requirement, an implementation shall allow word boundaries to be tailored according to locale.

Semantic analysis may be required for correct word boundary detection in languages that do not use spaces between words, such as Thai, Japanese, Chinese, or Korean. This can require fairly sophisticated support if Level 3 word boundary detection is required, and usually requires drawing on platform OS services.

3.4 Tailored Loose Matches

RL3.4 Tailored Loose Matches
To meet this requirement, an implementation shall provide for loose matches based on a locale's collation order, with at least 3 levels.

In Levels 1 and 2, caseless matches are described, but there are other interesting linguistic features that users may want to match. For example, V and W are considered equivalent in Swedish collations, and so [V] should match W in Swedish. In line with UTS #10: Unicode Collation Algorithm [Collation], four levels of matching are recommended: primary (base letters only), secondary (accents also distinguished), tertiary (case and variants also distinguished), and exact (all differences distinguished).

If users are to have control over these equivalence classes, here is an example of how the sample syntax could be modified to account for this. The syntax for switching the strength or type of matching varies widely. Note that these tags switch behavior on and off in the middle of a regular expression; they do not match a character.

ITEM := \v{PRIMARY}   // match primary only
ITEM := \v{SECONDARY} // match primary & secondary only
ITEM := \v{TERTIARY}  // match primary, secondary, tertiary
ITEM := \v{EXACT}     // match all levels, normal state

Examples:

[\v{SECONDARY}a-m] Match a-m, plus case variants A-M, plus compatibility variants

Basic information for these equivalence classes can be derived from the data tables referenced by UTS #10: Unicode Collation Algorithm [Collation].
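
For illustration, matching at a given strength is directly visible in collation APIs such as Java's java.text.Collator; an engine implementing \v{...} could delegate to the same data:

  import java.text.Collator;
  import java.util.Locale;

  public class StrengthDemo {
      public static void main(String[] args) {
          Collator c = Collator.getInstance(Locale.US);
          c.setStrength(Collator.PRIMARY);   // ignore accents (secondary) and case (tertiary)
          System.out.println(c.compare("resume", "R\u00E9sum\u00E9"));      // 0: equal at primary
          c.setStrength(Collator.TERTIARY);  // accents and case significant again
          System.out.println(c.compare("resume", "R\u00E9sum\u00E9") == 0); // false
      }
  }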

3.5 Tailored Ranges

RL3.5 Tailored Ranges
To meet this requirement, an implementation shall provide for ranges based on a locale's collation order.

Tailored character ranges will include tailored grapheme clusters, as discussed above. This broadens the set of grapheme clusters — in traditional Spanish, for example, [b-d] would match against "ch".

Note: this is another reason why a property for all characters \p{Any} is needed—it is possible for a locale's collation to not have [\u0000-\U0010FFFF] encompass all characters.

Languages may also vary whether they consider lowercase below uppercase or the reverse. This can have some surprising results: [a-Z] may not match anything if Z < a in that locale!

3.6 Context Matching

RL3.6 Context Matching
To meet this requirement, an implementation shall provide for a restrictive match against input text, allowing for context before and after the match.

For parallel, filtered transformations, such as involved in script transliteration, it is important to restrict the matching of a regular expression to a substring of a given string, and yet allow for context before and after the affected area. Here is a sample API that implements such functionality, where m is an extension of a Regex Matcher.

  if (m.matches(text, contextStart, targetStart, targetLimit, contextLimit)) {
    int end = m.getMatchEnd();
  }

The range of characters between contextStart and targetStart defines a precontext; the characters between targetStart and targetLimit define a target; and the offsets between targetLimit and contextLimit define a postcontext. Thus contextStart ≤ targetStart ≤ targetLimit ≤ contextLimit. The meaning of this function is that a match is attempted against the target: the match itself must lie within the target, while zero-width look-arounds (look-ahead and look-behind) may also consult the precontext and postcontext.

Examples:

In these examples, the target portion of the input is enclosed in ⟨...⟩; the text before and after it is the pre- and postcontext. In the output column, the matched portion is enclosed in «...». The pattern syntax "(←x)" means a backwards match for x (without moving the cursor). This would be (?<=x) in Perl. The pattern "(→x)" means a forwards match for x (without moving the cursor). This would be (?=x) in Perl.

Pattern Input Output Comment
/(←a) (bc)* (→d)/ 1a⟨bcbc⟩d2 1a«bcbc»d2 matching with context
/(←a) (bc)* (→bcd)/ 1a⟨bcbc⟩d2 1a«bc»bcd2 stops early, since otherwise 'd' wouldn't match.
/(bc)*d/ 1a⟨bcbc⟩d2 no match 'd' can't be matched in the target, only in the postcontext
/(←a) (bc)* (→d)/ 1⟨abcbcd2⟩ no match 'a' can't be matched, since it is before the precontext (which is zero-length, here)

While it would be possible to simulate this API call with other regular expression calls, it would require subdividing the string and making multiple regular expression engine calls, significantly affecting performance.

There should also be pattern syntax for matches (like ^ and $) for the contextStart and contextLimit positions.

Internally, this can be implemented by modifying the regular expression engine so that all matches are limited to characters between contextStart and contextLimit, and that all matches that are not zero-width look-arounds are limited to the characters between targetStart and targetLimit.

3.7 Incremental Matches

RL3.7 Incremental Matches
To meet this requirement, an implementation shall provide for incremental matching.

For buffered matching, one needs to be able to return whether there is a partial match; that is, whether there would be a match if additional characters were added after the targetLimit. This can be done with a separate method having an enumerated return value: match, no_match, or partial_match.

  if (m.incrementalMatches(text, cs, ts, tl, cl) == Matcher.MATCH) {
    ...
  }

Thus performing an incremental match of /bcbce(→d)/ against "1abcbcd2" would return a partial_match because the addition of an e to the end of the target would allow it to match. Note that /(bc)*(→d)/ would also return a partial match, because if bc were added at the end of the target, it would match.

Here is the above table, when an incremental match method is called:

Pattern Input Output Comment
/(←a) (bc)* (→d)/ 1a⟨bcbc⟩d2 partial match 'bc' could be inserted.
/(←a) (bc)* (→bcd)/ 1a⟨bcbc⟩d2 partial match 'bc' could be inserted.
/(bc)*d/ 1a⟨bcbc⟩d2 partial match 'd' could be inserted.
/(←a) (bc)* (→d)/ 1⟨abcbcd2⟩ no match as with the matches function; the backwards search for 'a' fails.

The typical usage of incremental match is to make a series of incremental match calls, marching through a buffer with each successful match. At the end, if there is a partial match, one loads another buffer (or waits for other input). When the process terminates (no more buffers/input available), then a regular match call is made.

Internally, incremental matching can be implemented in the regular expression engine by detecting whether the matching process ever fails when the current position is at or after targetLimit, and setting a flag if so. If the overall match fails, and this flag is set, then the return value is set to partial_match. Otherwise, either match or no_match is returned, as appropriate.
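
For illustration, Java's Matcher exposes a rough form of such a flag as hitEnd(): if a failed match attempt ran off the end of the input, additional input might still produce a match:

  import java.util.regex.Matcher;
  import java.util.regex.Pattern;

  public class IncrementalDemo {
      public static void main(String[] args) {
          Matcher m = Pattern.compile("(bc)*d").matcher("bcbc"); // a truncated buffer
          if (m.matches()) {
              System.out.println("match");
          } else if (m.hitEnd()) {
              System.out.println("partial_match"); // more input could complete the match
          } else {
              System.out.println("no_match");
          }
      }
  }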

The return value partial_match indicates that there was a partial match: if further characters were added there could be a match to the resulting string. It may be useful to divide this return value into two instead: one value for the case where only a partial match is possible, and another for the case where there is a full match that could also be extended to a longer match by further input.

3.8 Unicode Set Sharing

RL3.8 Unicode Set Sharing
To meet this requirement, an implementation shall provide for shared storage of Unicode sets.

For script transliteration and similar applications, there may be hundreds of regular expressions, sharing a number of Unicode sets in common. These Unicode sets, such as [\p{Alphabetic} - \p{Latin}], could take a fair amount of memory, since they would typically be expanded into an internal memory representation that allows for fast lookup. If these sets are stored separately, the result is an excessive memory burden.

To reduce the storage requirements, an API may allow regular expressions to share storage of these and other constructs, by having a 'pool' of data associated with a set of compiled regular expressions.

rules.registerSet("$lglow", "[\p{lowercase}&[\p{latin}\p{greek}]] ");
rules.registerSet("$mark", "[\p{Mark}]");
...
rules.add("θ", "th");
rules.add("Θ(→$mark*$lglow)", "Th");
rules.add("Θ", "TH");
...
rules.add("φ", "th");
rules.add("Ψ(→$mark*$lglow)", "Ps");
rules.add("Ψ", "PS");
...

3.9 Possible Match Sets

RL3.9 Possible Match Sets
To meet this requirement, an implementation shall provide for the generation of possible match sets from any regular expression pattern.

There are a number of circumstances where additional functions on regular expression patterns can be useful for performance or analysis of those patterns. These are functions that return information about the sets of characters that a regular expression can match.

When applying a list of regular expressions (with replacements) against a given piece of text, one can do that either serially or in parallel. With a serial application, each regular expression is applied to the text, repeatedly from start to end. With parallel application, each position in the text is checked against the entire list, with the first match winning. After the replacement, the next position in the text is checked, and so on.

For such a parallel process to be efficient, one needs to be able to winnow out the regular expressions that simply could not match text starting with a given code point. For that, it is very useful to have a function on a regular expression pattern that returns a set of all the code points that the pattern would partially or fully match.

   myFirstMatchingSet = pattern.getFirstMatchSet(Regex.POSSIBLE_FIRST_CODEPOINT);

For example, the pattern /[[\u0000-\u00FF] & [:Latin:]] * [0-9]/ would return the set {0..9, A..Z, a..z}, plus the other Latin letters in the U+0080..U+00FF range. Logically, this is the set of all code points that would be at least partial matches (if considered in isolation).

Note: an additional useful function would be one that returned the set of all code points that could be matched at any point. Thus a code point outside of this set cannot be in any part of a matching range.

The second useful case is the set of all code points that could be matched in any particular group, that is, that could be set in the standard $0, $1, $2, etc. variables.

   myAllMatchingSet = pattern.getAllMatchSet(Regex.POSSIBLE_IN$0);

Internally, this can be implemented by analyzing the regular expression (or parts of it) recursively to determine which characters match. For example, the first match set of an alternation (a | b) is the union of the first match sets of the terms a and b.
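
A minimal sketch of that recursion, using hypothetical AST classes (Literal and Alternation, invented here for illustration):

  import java.util.HashSet;
  import java.util.Set;

  abstract class Node {
      abstract Set<Integer> firstMatchSet(); // code points that can start a match
  }

  class Literal extends Node {
      final int codePoint;
      Literal(int codePoint) { this.codePoint = codePoint; }
      Set<Integer> firstMatchSet() {
          Set<Integer> s = new HashSet<>();
          s.add(codePoint);
          return s;
      }
  }

  class Alternation extends Node {
      final Node left, right;
      Alternation(Node left, Node right) { this.left = left; this.right = right; }
      // The first match set of (a | b) is the union of the first match sets of a and b.
      Set<Integer> firstMatchSet() {
          Set<Integer> s = new HashSet<>(left.firstMatchSet());
          s.addAll(right.firstMatchSet());
          return s;
      }
  }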

The set that is returned is only guaranteed to include all possible first code points; if an expression gets too complicated, the result could be a proper superset of the code points that can actually begin a match.

3.10 Folded Matching

RL3.10 Folded Matching
To meet this requirement, an implementation shall provide for registration of folding functions for providing insensitive matching for linguistic features other than case.

Regular expressions typically provide for case-sensitive or case-insensitive matching. This accounts for the fact that in English and many other languages, users quite often want to disregard the differences between characters that are solely due to case. It would be quite awkward to do this manually: for example, to do a caseless match against the last name in /Mark\sDavis/, one would have to use the pattern /Mark\s[Dd][Aa][Vv][Ii][Ss]/, instead of some syntax that can indicate that the target text is to be matched after folding case, such as /Mark\s\CDavis\E/.

For many languages and writing systems, there are other differences besides case where users want to allow a loose match. One such way to do this is given above in the discussion of matching according to collation strength. There are others: for example, for Ethiopic one may need to match characters independent of their inherent vowel, or match certain types of vowels. It is difficult to tell exactly which ways people might want to match text for different languages, so the most flexible way to provide such support is to provide a general mechanism for overriding the way that regular expressions match literals.

One way to do this is to use folding functions. These are functions that map strings to strings and are idempotent (applying the function more than once produces the same result: f(f(x)) = f(x)). There are two parts to this: (a) allow folding functions to be registered, and (b) extend patterns so that registered folding functions can be activated. During the span of text in which a folding function is activated, both the pattern literals and the input text will be processed according to the folding before comparing. For example:

// Folds katakana and hiragana together
class KanaFolder implements RegExFolder {
  // from RegExFolder, must be overridden in subclasses
  String fold(String source) {...}

  // from RegExFolder, may be overridden for efficiency
  RegExFolder clone(String parameter, Locale locale) {...}
  int fold(int source) {...}
  UnicodeSet fold(UnicodeSet source) {...}
}
  ...

  RegExFolder.registerFolding("k_h", new KanaFolder());

  ...

  p = Pattern.compile("(\F{k_h=argument}マルク (\s)* ダ (ヸ | ビ) ス \E : \s+)*");

In the above example, the Kana folding is in force until terminated with \E. Within the scope of the folding, all text in the target would be folded before matching (the literal text in the pattern would also be folded). This only affects literals; regular expression syntax such as '(' or '*' is unaffected.

While it is sufficient to provide a folding function for strings, for efficiency one can also provide functions for folding single code points and Unicode sets (e.g. [a-z...]). For more information, see [Folding].

3.11 Submatchers

RL3.11 Submatchers
To meet this requirement, an implementation shall provide for general registration of matching functions for providing matching for general linguistic features.

There are over 70 properties in the Unicode character database, yet there are many other sequences of characters that people may want to match, many of them specific to given languages. For example, characters that are used as vowels may vary by language. This goes beyond single-character properties, since certain sequences of characters may need to be matched; such sequences may not be easy themselves to express using regular expressions. Extending the regular expression syntax to provide for registration of arbitrary properties of characters allows these requirements to be handled.

The following provides an example of this. The actual function is just for illustration.

class MultipleMatcher implements RegExSubmatcher {
  // from RegExSubmatcher, must be overridden in subclasses
  /**
   * Returns -1 if there is no match; otherwise returns the endpoint;
   * an offset indicating how far the match got.
   * The endpoint is always between targetStart and targetLimit, inclusive.
   * Note that there may be zero-width matches.
   */
  int match(String text, int contextStart, int targetStart, int targetLimit, int contextLimit) {
    // code for matching numbers according to numeric value.
  }

  // from RegExSubmatcher, may be overridden for efficiency
  /**
   * The parameter is a number. The match will match any numeric value that is a multiple.
   * Example: for "2.3", it will match "0002.3000", "4.6", "11.5", and any non-Western
   * script variants, like Indic numbers.
   */
  RegExSubmatcher clone(String parameter, Locale locale) {...}
}
  ...

  RegExSubmatcher.registerMatcher("multiple", new MultipleMatcher());

  ...

  p = Pattern.compile("xxx\M{multiple=2.3}xxx");

In this example, the match function can be written to parse numbers according to the conventions of different locales, based on OS functions available for doing such parsing. If there are mechanisms for setting a locale for a portion of a regular expression, then that locale would be used; otherwise the default locale would be used.
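
As an illustrative sketch only, the match method above might be written with NumberFormat and ParsePosition from java.text; the fields locale and multiplier are hypothetical, assumed to be set by clone(parameter, locale):

int match(String text, int contextStart, int targetStart,
          int targetLimit, int contextLimit) {
  // This simple sketch ignores the context bounds.
  NumberFormat nf = NumberFormat.getInstance(locale);
  ParsePosition pos = new ParsePosition(targetStart);
  Number value = nf.parse(text, pos);
  if (value == null || pos.getIndex() > targetLimit) {
    return -1; // no number here, or it overruns the target
  }
  double ratio = value.doubleValue() / multiplier;
  // Accept integral multiples, tolerating floating-point error.
  if (Math.abs(ratio - Math.round(ratio)) < 1.0e-9) {
    return pos.getIndex(); // endpoint of the match
  }
  return -1;
}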

Note: It might be advantageous to make the Submatcher API identical to the Matcher API; that is, only have one base class "Matcher", and have user extensions derive from the base class. The base class itself can allow for nested matchers.


Annex A. Character Blocks

The Block property from the Unicode Character Database can be a useful property for quickly describing a set of Unicode characters. It assigns a name to segments of the Unicode codepoint space; for example, [\u0370-\u03FF] is the Greek block.
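
For example, a regular expression engine may expose the Block property directly in its syntax. In Java's java.util.regex (used here purely as an illustration), blocks are accessed with an In prefix:

  // Matches one or more characters in the Greek block (Java syntax;
  // other engines spell access to the Block property differently).
  Pattern greek = Pattern.compile("\\p{InGreek}+");
  greek.matcher("\u03B1\u03B2\u03B3").matches(); // true for "αβγ"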

However, block names need to be used with discretion; they are very easy to misuse, because they supply only a very coarse view of the Unicode character allocation.

For example, the following table illustrates the mismatch between writing systems and blocks. These are only examples; the table is not a complete analysis, and it does not include the common punctuation used with all of these writing systems.

Writing Systems vs. Blocks

Writing System   Blocks
Latin            Basic Latin, Latin-1 Supplement, Latin Extended-A, Latin Extended-B, Latin Extended Additional, Diacritics
Greek            Greek, Greek Extended, Diacritics
Arabic           Arabic, Arabic Presentation Forms-A, Arabic Presentation Forms-B
Korean           Hangul Jamo, Hangul Compatibility Jamo, Hangul Syllables, CJK Unified Ideographs, CJK Unified Ideographs Extension A, CJK Compatibility Ideographs, CJK Compatibility Forms, Enclosed CJK Letters and Months, Small Form Variants
Yi               Yi Syllables, Yi Radicals
Chinese          CJK Unified Ideographs, CJK Unified Ideographs Extension A, CJK Compatibility Ideographs, CJK Compatibility Forms, Enclosed CJK Letters and Months, Small Form Variants, Bopomofo, Bopomofo Extended

For the above reasons, Script values are generally preferred to Block values. Even then, they should be used in accordance with the guidelines in UTR #24: Script Names [ScriptDoc].
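
The difference is easy to demonstrate with ICU4J's UnicodeSet (the class and its property syntax are real; the specific results assume ICU's Unicode data):

  UnicodeSet greekScript = new UnicodeSet("[:script=Greek:]");
  UnicodeSet greekBlock = new UnicodeSet("[:block=Greek and Coptic:]");

  // U+1F00 GREEK SMALL LETTER ALPHA WITH PSILI is Greek by script,
  // but is allocated in the Greek Extended block.
  greekScript.contains(0x1F00); // true
  greekBlock.contains(0x1F00);  // false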

Annex B. Sample Collation Character Code

The following provides sample code for doing Level 3 collation character detection. This code is meant to be illustrative, and has not been optimized. Although written in Java, it could be easily expressed in any programming language that allows access to the Unicode Collation Algorithm mappings.

/**
 * Return the end of a collation character.
 * @param s         the source string
 * @param start     the position in the string to search
 *                  forward from
 * @param collator  the collator used to produce collation elements.
 * This can either be a custom-built one, or produced from
 * the factory method Collator.getInstance(someLocale).
 * @return          the end position of the collation character
 */

static int getLocaleCharacterEnd(String s,
  int start, RuleBasedCollator collator) {
    int lastPosition = start;
    CollationElementIterator it
      = collator.getCollationElementIterator(s.substring(start));
    it.next(); // discard first collation element
    int primary;
        
    // accumulate characters until we get to a non-zero primary
        
    do {
        // getOffset is relative to the substring, so add start back in
        lastPosition = start + it.getOffset();
        int ce = it.next();
        if (ce == CollationElementIterator.NULLORDER) break;
        primary = CollationElementIterator.primaryOrder(ce);
    } while (primary == 0);
    return lastPosition;
}
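
As a hypothetical usage example, the following loop would print each collation character of a string in turn. It assumes a locale whose collator treats "ch" as a contraction, as Slovak collation traditionally does; the classes are from java.text and java.util.

  RuleBasedCollator collator
    = (RuleBasedCollator) Collator.getInstance(new Locale("sk"));
  String s = "chart";
  int i = 0;
  while (i < s.length()) {
    int end = getLocaleCharacterEnd(s, i, collator);
    // prints "ch", "a", "r", "t" if the collator contracts "ch"
    System.out.println(s.substring(i, end));
    i = end;
  }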

Annex C. Compatibility Properties

The following are recommended assignments for compatibility property names, for use in regular expressions. There are two alternatives: the Standard Recommendation and the POSIX Compatible versions. The latter is modified to meet the formal requirements of [POSIX], and also to maintain, as far as possible, compatibility with usage in practice. So, for example, \p{Lowercase} and \p{Uppercase} remove anything not in alpha, since one of the POSIX requirements is that alpha contain both lower and upper, and there are a few Unicode symbols that are Lowercase but not included in Alphabetic.

Note to Reviewers: Not only do we want to match the 'letter of the law' for POSIX, we also want to match prevailing practice, insofar as that can be expressed as consistent combinations of Unicode properties. Ideally we would not have two columns with different alternatives for assignment. There are, however, a few problems with that, for which we would especially appreciate feedback:

  1. First, certain properties have the same names in POSIX and in Unicode: alpha, upper, lower (these are short-name aliases in Unicode). Unfortunately, POSIX imposes a constraint that is not met by the Unicode properties, so there is a mismatch between what would be natural and what must be done for POSIX requirements. The problem arises because of the following characters. The options are to subtract these from upper, add them to alpha, or change the Unicode properties so that Alphabetic contains Uppercase.

    • 24B6..24CF # So [26] CIRCLED LATIN CAPITAL LETTER A..CIRCLED LATIN CAPITAL LETTER Z
  2. POSIX also imposes some unreasonable requirements on digit: there can be at most 20 characters. Our feedback is that for many UNIX implementations, anything but the ASCII digits will cause at least some programs to choke.

  3. The table below adds symbols (where possible) to punct, since that matches the "POSIX locale" (the default). POSIX does allow other languages to change that; the question is what gives the best compatibility. Another alternative is to add just the symbols in the ASCII range.

  4. We have gotten feedback that -- while not a formal requirement in POSIX -- graph should include all assigned characters except cntrl and blank. So we should add Format characters either to cntrl or to graph, or split them on a case-by-case basis between those two categories. These include:

    • 00AD           # Cf       SOFT HYPHEN
      0600..0603     # Cf   [4] ARABIC NUMBER SIGN..ARABIC SIGN SAFHA
      06DD           # Cf       ARABIC END OF AYAH
      070F           # Cf       SYRIAC ABBREVIATION MARK
      17B4..17B5     # Cf   [2] KHMER VOWEL INHERENT AQ..KHMER VOWEL INHERENT AA
      200B           # Cf       ZERO WIDTH SPACE
      200E..200F     # Cf   [2] LEFT-TO-RIGHT MARK..RIGHT-TO-LEFT MARK
      202A..202E     # Cf   [5] LEFT-TO-RIGHT EMBEDDING..RIGHT-TO-LEFT OVERRIDE
      2060..2063     # Cf   [4] WORD JOINER..INVISIBLE SEPARATOR
      206A..206F     # Cf   [6] INHIBIT SYMMETRIC SWAPPING..NOMINAL DIGIT SHAPES
      FEFF           # Cf       ZERO WIDTH NO-BREAK SPACE
      FFF9..FFFB     # Cf   [3] INTERLINEAR ANNOTATION ANCHOR..INTERLINEAR ANNOTATION TERMINATOR
      1D173..1D17A   # Cf   [8] MUSICAL SYMBOL BEGIN BEAM..MUSICAL SYMBOL END PHRASE
      E0001          # Cf       LANGUAGE TAG
      E0020..E007F   # Cf  [96] TAG SPACE..CANCEL TAG
      

Are there any other changes in the POSIX column that would promote compatibility?

Property Standard Recommendation POSIX Compatible Comments
alpha Alphabetic includes more than gc = Letter. Note that marks (Me, Mn, Mc) are required for words of many languages. While they could be applied to non-alphabetics, their principal use is on alphabetics. See DerivedCoreProperties [UCD] for Alphabetic; see also DerivedGeneralCategory [UCD].

Alphabetic should not be used as an approximation for word boundaries: see word below.

lower Lowercase includes more than gc = Lowercase_Letter (Ll). See DerivedCoreProperties [UCD].
upper Uppercase includes more than gc = Uppercase_Letter (Lu).
punct POSIX adds symbols. Not recommended generally, due to the confusion of having punct include non-punctuation marks.
digit (\d) Non-decimal numbers (like Roman numerals) are normally excluded. In U4.0+, the recommended column is the same as gc = Decimal_Number (Nd). See DerivedNumericType [UCD].
xdigit Hex_Digit contains 0-9 A-F, fullwidth and halfwidth, upper and lowercase.
alnum Simple combination of other properties
space (\s) See PropList [UCD] for the definition of Whitespace.
blank "horizontal" whitespace.
cntrl The characters in \p{gc=Format} share some, but not all aspects of control characters. Many format characters are required in the representation of plain text.
graph Warning: the set to the left is defined by excluding space, controls, etc. with ^.

Perl 5.8.0 is similar except that it excludes: Z, Cc, Cf, Cs, Cn. The intent is for Perl 5.8.1 to align with the specification here.

print Includes graph and space-like characters.
word (\w) This is only an approximation to Word Boundaries (see \b below). The Connector Punctuation is added in for programming language identifiers, thus adding "_" and similar characters.
\X See [Boundaries], also GraphemeClusterBreakTest. Other functions are used for programming language identifier boundaries.
\b If there is a requirement that \b align with \w, then it would use the approximation above instead. See [Boundaries], also WordBreakTest.

Note that different functions are used for programming language identifier boundaries. See also [Syntax].
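
The practical difference between the alternatives in the table above shows up in existing engines. In Java's java.util.regex (again, purely as an illustration), the POSIX classes are ASCII-only by default, while the Unicode general categories are not:

  Pattern.matches("\\p{Alpha}+", "café"); // false: é is outside ASCII
  Pattern.matches("\\p{L}+", "café");     // true: gc = Letter is Unicode-wide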

References

[Boundaries] UAX #29: Text Boundaries
http://www.unicode.org/reports/tr29/
[Case] Section 3.13 [Unicode]
http://www.unicode.org/versions/Unicode4.0.0/ch03.pdf#G33992
[CaseData] http://www.unicode.org/Public/UNIDATA/CaseFolding.txt
[Collation] UTS #10: Unicode Collation Algorithm
http://www.unicode.org/reports/tr10/
[FAQ] Unicode Frequently Asked Questions
http://www.unicode.org/faq/
For answers to common questions on technical issues.
[Feedback] Reporting Errors and Requesting Information Online
http://www.unicode.org/reporting.html
[Folding] UTR #30 Character Foldings (Current Status: Draft)
http://www.unicode.org/reports/tr30/
[Friedl] Jeffrey Friedl, "Mastering Regular Expressions", 2nd Edition 2002, O'Reilly and Associates, ISBN 0-596-00289-0
[Glossary] Unicode Glossary
http://www.unicode.org/glossary/
For explanations of terminology used in this and other documents.
[LineBreak] UAX #14: Line Breaking Properties
http://www.unicode.org/reports/tr14/
[NewLine] UAX #13: Unicode Newline Guidelines
http://www.unicode.org/reports/tr13/
[Norm] UAX #15: Unicode Normalization
http://www.unicode.org/reports/tr15/
[Online] http://www.unicode.org/onlinedat/online.html
[Perl] http://www.perl.com/pub/q/documentation
See especially:
http://www.perldoc.com/perl5.8.0/lib/charnames.html
http://www.perldoc.com/perl5.8.0/pod/perlre.html
http://www.perldoc.com/perl5.8.0/pod/perluniintro.html
http://www.perldoc.com/perl5.8.0/pod/perlunicode.html
[POSIX] The Open Group Base Specifications Issue 6, IEEE Std 1003.1, 2003 Edition, "Locale" chapter
http://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap07.html
[Prop] http://www.unicode.org/Public/UNIDATA/PropertyAliases.txt
[PropValue] http://www.unicode.org/Public/UNIDATA/PropertyValueAliases.txt
[Reports] Unicode Technical Reports
http://www.unicode.org/reports/
For information on the status and development process for technical reports, and for a list of technical reports.
[ScriptData] http://www.unicode.org/Public/UNIDATA/Scripts.txt
[ScriptDoc] UTR #24: Script Names
http://www.unicode.org/reports/tr24/
[SpecialCasing] http://www.unicode.org/Public/UNIDATA/SpecialCasing.txt
[Syntax] UTR #31: Identifier and Pattern Syntax (Current Status: Draft)
http://www.unicode.org/reports/tr31/
[UCD] http://www.unicode.org/ucd/
[UData] http://www.unicode.org/Public/UNIDATA/UnicodeData.txt
[UDataDoc] http://www.unicode.org/Public/UNIDATA/UnicodeData.html
[Unicode] The Unicode Consortium. The Unicode Standard, Version 4.0. Reading, MA, Addison-Wesley, 2003. ISBN 0-321-18578-1.
[Versions] Versions of the Unicode Standard
http://www.unicode.org/versions/
For details on the precise contents of each version of the Unicode Standard, and how to cite them.


Acknowledgments

Thanks to Jeffrey Friedl, Andy Heninger, Alan Liu, Kent Karlsson, Jarkko Hietaniemi, Gurusamy Sarathy, Henry Spencer, Tom Watson, and Kento Tamura for their feedback on the document.

Modifications

The following summarizes modifications from the previous version of this document.

10
  • R1.4, item 2 changed for ZW(N)J
  • Added conformance clause to allow a claim of conformance to the Compatibility properties.
  • Split the Compatibility properties into two, to allow for regular vs. strict POSIX properties.
  • Added other notation for use here and in other Unicode Standards
  • Added vertical tab to newline sequences. Reorganized text slightly to only list codepoints once.
  • Minor editing
9
  • Split 2.5 into two sections, expanding latter.
  • Misc. editing and clarifications.
8
  • Renumbered sections to match levels
  • Introduced "RL" numbering on clauses
  • Misc. editing and clarifications.
7
  • Now proposed as a UTS, adding Conformance and specific wording in each relevant section.
  • Move hex notation for surrogates from 1.7 Surrogates into 1.1 Hex notation.
  • Added 3.6 Context Matching and following.
  • Updated to Unicode 4.0
  • Minor editing
  • Note: paragraphs with major changes are highlighted in this document; less substantive wording changes may not be.
6
  • Fixed 16-bit reference, moved Supplementary characters support (surrogates) to level 1.
  • Generally changed "locale-dependent" to "default", "locale-independent" to "tailored" and "grapheme" to "grapheme cluster"
  • Changed syntax slightly to be more like Perl
  • Added explicit table of General Category values
  • Added clarifications about scripts and blocks
  • Added descriptions of other properties, and a pointer to the default names
  • Referred to TR 29 for grapheme cluster and word boundaries
  • Removed old annex B (word boundary code)
  • Removed spaces from anchors
  • Added references, modification sections
  • Rearranged property section
  • Minor editing