Unicode Technical Report #15

Unicode Normalization Forms

Revision 18.0
Authors Mark Davis (mark.davis@us.ibm.com), Martin Dürst (duerst@w3.org)
Date 1999-11-11
This Version http://www.unicode.org/unicode/reports/tr15/tr15-18.html
Previous Version http://www.unicode.org/unicode/reports/tr15/tr15-17.html
Latest Version http://www.unicode.org/unicode/reports/tr15

Summary

This document describes specifications for four normalized forms of Unicode text. With these forms, equivalent text (canonical or compatibility) will have identical binary representations.

Status

This document contains informative material and normative specifications which have been considered and approved by the Unicode Technical Committee for publication as a Technical Report and as part of the Unicode Standard, Version 3.0. Any reference to version 3.0 of the Unicode Standard automatically includes this technical report. Please mail corrigenda and other comments to the authors.

The content of all technical reports must be understood in the context of the appropriate version of the Unicode Standard. References in this technical report to sections of the Unicode Standard refer to the Unicode Standard, Version 3.0. See http://www.unicode.org/unicode/standard/versions for more information.

Contents

1 Introduction

The Unicode Standard defines two equivalences between characters: canonical equivalence and compatibility equivalence. Canonical equivalence is a basic equivalence between characters or sequences of characters. The following figure illustrates this equivalence:

Figure for canonical equivalence

For round-trip compatibility with existing standards, Unicode has encoded many entities that are really variants of existing nominal characters. The visual representations of these characters are typically a subset of the possible visual representations of the nominal character. These are given compatibility decompositions in the standard. Because the characters are visually distinguished, replacing a character by a compatibility equivalent may lose formatting information unless supplemented by markup or styling. See the figure below for examples of compatibility equivalents:

Figure for compatibility equivalence

Both of these equivalences are explained in more detail in The Unicode Standard, Chapters 2 and 3. In addition, the Standard describes several forms of normalization in Section 5.7 (Section 5.9 in Version 2.0). These normalization forms are designed to produce a unique normalized form for any given string. Two of these forms are precisely specified in Section 3.6. In particular, the standard defines a canonical decomposition format, which can be used as a normalization for interchanging text. This format allows for binary comparison while maintaining canonical equivalence with the original unnormalized text.

The standard also defines a compatibility decomposition format, which allows for binary comparison while maintaining compatibility equivalence with the original unnormalized text. The latter can also be useful in many circumstances, since it levels the differences between compatibility characters which are inappropriate in those circumstances. For example, the half-width and full-width katakana characters will have the same compatibility decomposition and are thus compatibility equivalents; however, they are not canonical equivalents.

Both of these formats are normalizations to decomposed characters. While Section 3.6 also discusses normalization to composite characters (also known as decomposable or precomposed characters), it does not precisely specify a format. Because of the nature of the precomposed forms in the Unicode Standard, there is more than one possible specification for a normalized form with composite characters. This document provides a unique specification for normalization, and a label for each normalized form.

The four normalization forms are labeled as follows.

Title                  Description                                                     Specification
Normalization Form D   Canonical Decomposition                                         Sections 3.6, 3.10, and 3.11 of The Unicode Standard, also summarized under Annex 4: Decomposition
Normalization Form C   Canonical Decomposition, followed by Canonical Composition      see §5 Specification
Normalization Form KD  Compatibility Decomposition                                     Sections 3.6, 3.10, and 3.11 of The Unicode Standard, also summarized under Annex 4: Decomposition
Normalization Form KC  Compatibility Decomposition, followed by Canonical Composition  see §5 Specification

As with decomposition, there are two forms of normalization to composite characters, Form C and Form KC. The difference between these depends on whether the resulting text is to be a canonical equivalent to the original unnormalized text, or is to be a compatibility equivalent to the original unnormalized text. (In KC and KD, a K is used to stand for compatibility to avoid confusion with the C standing for canonical.) Both types of normalization can be useful in different circumstances.

The following diagram illustrates the effect of applying different normalization forms to denormalized text. In the diagram, glyphs are colored according to the characters they represent.

Figure for different normalizations

With all normalization forms, singleton characters (those with singleton canonical mappings) are replaced. With forms D and C, compatibility characters (those with compatibility mappings) are retained; with KD and KC they are replaced. Notice that this sometimes loses significant information, unless supplemented by markup or styling.

With forms D and KD, composite characters are mapped to their canonical decompositions. With forms C and KC, combining character sequences are mapped to composites, if possible. Notice that since there is no composite for e-ring, it is left decomposed in forms C and KC.

All of the definitions in this document depend on the rules for equivalence and decomposition found in Chapter 3 of The Unicode Standard and the decomposition mappings in the Unicode Character Database.

Note: Text containing only ASCII characters (U+0000 to U+007F) is left unaffected by all of the normalization forms. This is particularly important for programming languages (see Annex 7: Programming Language Identifiers).
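This invariance is easy to check directly. The following sketch uses Python's standard unicodedata module purely for illustration (the report's own sample code is in Java; Python is an incidental choice here):

```python
import unicodedata

# ASCII-only text is already in every normalization form.
for form in ("NFC", "NFD", "NFKC", "NFKD"):
    assert unicodedata.normalize(form, "Hello_world-123") == "Hello_world-123"
```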

Normalization Form C uses canonical composite characters where possible, and maintains the distinction between characters that are compatibility equivalents. Typical strings of composite accented Unicode characters are already in Normalization Form C. Implementations of Unicode which restrict themselves to a repertoire containing no combining marks (such as those that declare themselves to be implementations at Level 1 as defined in ISO/IEC 10646-1) are already typically using Normalization Form C. (Implementations of later versions of 10646 need to be aware of the versioning issues--see §3 Versioning.) The W3C Character Model for the World Wide Web (http://www.w3.org/TR/WD-charmod) requires the use of Normalization Form C for XML and related standards (this document is not yet final, but this requirement is not expected to change). See also the W3C Requirements for String Identity Matching and String Indexing (http://www.w3.org/TR/WD-charreq) for more background.

Normalization Form KC additionally levels the differences between compatibility characters which are inappropriately distinguished in many circumstances. For example, the half-width and full-width katakana characters will normalize to the same strings, as will Roman Numerals and their letter equivalents. More complete examples are provided in Annex 1: Examples.
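The leveling described above can be observed with Python's standard unicodedata module (used here only to illustrate the forms; it is not part of this specification):

```python
import unicodedata

# The K forms level compatibility differences; the plain forms keep them.
assert unicodedata.normalize("NFKC", "\u2163") == "IV"      # ROMAN NUMERAL FOUR -> letters
assert unicodedata.normalize("NFC",  "\u2163") == "\u2163"  # distinction retained in Form C
assert unicodedata.normalize("NFKC", "\uFF76") == "\u30AB"  # halfwidth KA -> fullwidth KA
```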

Normalization forms KC and KD must not be blindly applied to arbitrary text. Since they erase many formatting distinctions, they will prevent round-trip conversion to and from many legacy character sets, and unless supplemented by formatting markup, may remove distinctions that are important to the semantics of the text. The best way to think of these normalization forms is like uppercase or lowercase mappings: useful in certain contexts for identifying core meanings, but also performing modifications to the text that may not always be appropriate.

To summarize the treatment of compatibility characters that were in the source text: forms D and C retain them, while forms KD and KC replace them with their compatibility equivalents.

Note: Normalization Form KC does not attempt to map characters to compatibility composites. For example, a compatibility composition of "office" does not produce "o\uFB03ce", even though "\uFB03" is a character that is the compatibility equivalent of the sequence of three characters 'ffi'.
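For instance, with Python's unicodedata module (illustrative only), Form KC removes the ligature but never reintroduces it:

```python
import unicodedata

assert unicodedata.normalize("NFKC", "o\uFB03ce") == "office"  # ligature decomposed
assert unicodedata.normalize("NFKC", "office") == "office"     # never recomposed to \uFB03
```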

None of the normalization forms are closed under string concatenation. Consider the following examples:

Form   String1      String2          Concatenation        Correct Normalization
C      "a"          "^"              "a" + "^"            "â"
D      "a" + "^"    "." (dot under)  "a" + "^" + "."      "a" + "." + "^"

Without limiting the repertoire, there is no way to produce a normalized form that is closed under simple string concatenation. If desired, however, a specialized function could be constructed that produced a normalized concatenation. However, all of the normalization forms are closed under substringing.
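The non-closure is easy to demonstrate with Python's unicodedata module; the naive norm_concat helper below is an illustration of the "specialized function" idea, not part of this report (an optimized version would renormalize only around the boundary):

```python
import unicodedata

x = unicodedata.normalize("NFC", "a")       # "a", already in Form C
y = unicodedata.normalize("NFC", "\u0302")  # combining circumflex, in Form C
# The concatenation of two Form C strings need not be in Form C:
assert unicodedata.normalize("NFC", x + y) == "\u00E2"  # "â"
assert x + y != unicodedata.normalize("NFC", x + y)

def norm_concat(a, b, form="NFC"):
    # Naive normalized concatenation: renormalize the joined string.
    return unicodedata.normalize(form, a + b)

assert norm_concat(x, y) == "\u00E2"
```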

2 Notation

All of the definitions in this document depend on the rules for equivalence and decomposition found in Chapter 3 of The Unicode Standard and the decomposition and combining class mappings in the Unicode Character Database. Decomposition must be done in accordance with these rules. In particular, the decomposition mappings found in the Unicode Character Database must be applied recursively, and then the string put into canonical order based on the characters' combining classes.

We will use the following notation for brevity:

  D(x)   the canonical decomposition of x
  KD(x)  the compatibility decomposition of x
  C(x)   the Normalization Form C of x
  KC(x)  the Normalization Form KC of x

3 Versioning

Because additional composite characters may be added to future versions of the Unicode Standard, composition is less stable than decomposition. So that implementations can get the same result for normalization even if they upgrade to a new version of Unicode, it is necessary to specify a fixed version for the composition process, called the composition version.

Note: Decomposition is only unstable if an existing character's decomposition mapping changes. The Unicode Technical Committee has the policy of carefully reviewing proposed corrections in character decompositions, and only making changes where the benefits very clearly outweigh the drawbacks.

The composition version is defined to be Version 3.0.0 of the Unicode Character Database.

To see what difference the composition version makes, suppose that Unicode 4.0 adds the composite Q-caron. For an implementation that uses Unicode 4.0, strings in Normalization Forms C or KC will continue to contain the sequence Q + caron, and not the new character Q-caron, since a canonical composition for Q-caron was not defined in the composition version. See §6 Composition Exclusion Table for more information.

4 Conformance

C1. A process that produces Unicode text that purports to be in a Normalization Form shall do so in accordance with the specifications in this document.

C2. A process that tests Unicode text to determine whether it is in a Normalization Form shall do so in accordance with the specifications in this document.

Note: The specifications for Normalization Forms are written in terms of a process for producing a decomposition or composition from an arbitrary Unicode string. This is a logical description — particular implementations can have more efficient mechanisms as long as they produce the same result. Similarly, testing for a particular Normalization Form does not require applying the process of normalization, so long as the result of the test is equivalent to applying normalization and then testing for binary identity.

5 Specification

This section specifies the format for Normalization Forms C and KC. It uses the following four definitions D1, D2, D3, D4, and two rules R1 and R2.

All combining character sequences start with a character of canonical class zero. For simplicity, we define a term for such characters:

D1. A character S is a starter if it has a canonical class of zero in the Unicode Character Database.

Because of the definition of canonical equivalence, the order of combining characters with the same canonical class makes a difference. For example, a-macron-breve is not the same as a-breve-macron. Characters cannot be composed if that would change the canonical order of the combining characters.

D2. In any character sequence beginning with a starter S, a character C is blocked from S if and only if there is some character B between S and C, and either B is a starter or it has the same canonical class as C.

Note: When B blocks C, changing the order of B and C would result in a character sequence that is not canonically equivalent to the original. See Section 3.9 Canonical Ordering Behavior in the Unicode Standard.

Note: If a combining character sequence is in canonical order, then testing whether a character is blocked only requires looking at the immediately preceding character.
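Under that note, the blocking test can be sketched in a few lines of Python, using unicodedata.combining for canonical classes; this helper is an illustration, not part of the specification:

```python
from unicodedata import combining

def is_blocked(prev, ch):
    # For canonically ordered input, ch is blocked from the last starter
    # iff the immediately preceding character prev is a non-starter whose
    # combining class is >= that of ch. If prev is the starter itself
    # (class 0), nothing intervenes and ch is unblocked.
    return combining(prev) != 0 and combining(prev) >= combining(ch)

assert not is_blocked("a", "\u0301")       # acute right after the starter
assert is_blocked("\u0301", "\u0301")      # a second acute: same class, blocked
assert not is_blocked("\u0323", "\u0307")  # dot below (220) < dot above (230)
```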

The process of forming a composition in Normalization Form C or KC involves:

  1. Decomposing the string (canonically for Form C, compatibly for Form KC) and putting it in canonical order.
  2. Composing the result by combining each unblocked character with the last starter, wherever a primary composite exists.

Figure 1 shows a sample of how this works. The dark green cubes represent starters, and the light gray cubes represent non-starters. In the first step, the string is fully decomposed, and reordered. In the second step, each character is checked against the last starter, and combined if all the conditions are met. Examples are provided in Annex 1: Examples, and a code sample is provided in Annex 5: Code Sample.

Basic composition process

Figure 1. Composition Process

A precise notion is required for when an unblocked character can be composed with a starter. This uses the following two definitions.

D3. A primary composite is a character that has a canonical decomposition mapping in the Unicode Character Database (or has a canonical Hangul decomposition) but is not in the §6 Composition Exclusion Table.

Note: Hangul syllable decomposition is considered a canonical decomposition. See Technical Report #8: The Unicode Standard, Version 2.1 (http://www.unicode.org/unicode/reports/tr8.html) or The Unicode Standard, Version 3.0. Also see Annex 10: Hangul.

D4. A character X can be primary combined with a character Y if and only if there is a primary composite Z which is canonically equivalent to the sequence <X, Y>.

Based upon these definitions, the following rules specify the Normalization Forms C and KC.

R1. Normalization Form C

The Normalization Form C for a string S is obtained by applying the following process, or any other process that leads to the same result:

  1. Generate the canonical decomposition for the source string S according to the decomposition mappings in the latest supported version of the Unicode Character Database.
  2. Iterate through each character C in that decomposition, from first to last. If C is not blocked from the last starter L, and it can be primary combined with L, then replace L by the composite L-C, and remove C.

The result of this process is a new string S' which is in Normalization Form C.

R2. Normalization Form KC

The Normalization Form KC for a string S is obtained by applying the following process, or any other process that leads to the same result:

  1. Generate the compatibility decomposition for the source string S according to the decomposition mappings in the latest supported version of the Unicode Character Database.
  2. Iterate through each character C in that decomposition, from first to last. If C is not blocked from the last starter L, and it can be primary combined with L, then replace L by the composite L-C, and remove C.

The result of this process is a new string S' which is in Normalization Form KC.
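R1 and R2 differ only in the decomposition applied in step 1. The following Python sketch implements both; it approximates the primary-composite test of D3/D4 with the standard library's NFC data (in which the composition exclusions are already folded), so treat it as an illustration rather than a reference implementation:

```python
import unicodedata

def primary_composite(x, y):
    # Approximation of D3/D4: <x, y> primary-combines iff the standard
    # library's NFC maps the pair to a single character.
    z = unicodedata.normalize("NFC", x + y)
    return z if len(z) == 1 else None

def to_nfc(s, decomposition="NFD"):
    # R1; pass decomposition="NFKD" for R2 (Form KC).
    decomp = unicodedata.normalize(decomposition, s)     # step 1
    result, last_starter = [], -1
    for ch in decomp:                                    # step 2
        if last_starter >= 0:
            # Canonically ordered input: blocking is decided by the
            # immediately preceding character (note under D2).
            unblocked = (len(result) - 1 == last_starter or
                         unicodedata.combining(result[-1]) <
                         unicodedata.combining(ch))
            if unblocked:
                z = primary_composite(result[last_starter], ch)
                if z is not None:
                    result[last_starter] = z             # replace L by L-C
                    continue                             # ... and remove C
        result.append(ch)
        if unicodedata.combining(ch) == 0:
            last_starter = len(result) - 1
    return "".join(result)

assert to_nfc("a\u0301") == "\u00E1"   # a + acute -> a-acute
assert to_nfc("e\u030A") == "e\u030A"  # no composite for e-ring; left decomposed
```

On typical inputs this agrees with Python's built-in normalizer, which can serve as a cross-check.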

6 Composition Exclusion Table

There are four classes of characters that are excluded from composition.

  1. Script-specifics: precomposed characters that are generally not the preferred form for particular scripts.
  2. Post Composition Version: precomposed characters that are added to Unicode after the composition version is fixed. This set is currently empty, but will be updated with each subsequent version of Unicode. See §3 Versioning.
  3. Singletons: precomposed characters having decompositions that consist of single characters (as described below).
  4. Non-starter decompositions: precomposed characters whose decompositions start with a non-starter.

Two characters may have the same canonical decomposition in the Unicode Character Database. Here is an example of this:

Source                                           Same Decomposition
212B 'Å' ANGSTROM SIGN                           0041 'A' LATIN CAPITAL LETTER A + 030A COMBINING RING ABOVE
00C5 'Å' LATIN CAPITAL LETTER A WITH RING ABOVE

The Unicode Character Database will first decompose one of the characters to the other, and then decompose from there. That is, one of the characters (in this case ANGSTROM SIGN) will have a singleton decomposition. Characters with singleton decompositions are included in Unicode essentially for compatibility with certain pre-existing standards. These singleton decompositions are excluded from primary composition.
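The ANGSTROM SIGN case can be confirmed with Python's unicodedata module (illustration only):

```python
import unicodedata

# The singleton decomposes through to A + combining ring above, and
# composition then yields the preferred composite A-ring, never ANGSTROM SIGN:
assert unicodedata.normalize("NFD", "\u212B") == "A\u030A"
assert unicodedata.normalize("NFC", "\u212B") == "\u00C5"
assert unicodedata.normalize("NFC", "A\u030A") == "\u00C5"
```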

A machine-readable form of the Composition Exclusion Table for Unicode 3.0.0 is found in ftp://ftp.unicode.org/Public/3.0-Update/. All four classes of characters are included in this file, although the singletons and non-starter decompositions are commented out. If your implementation does not compute these latter classes directly from the Unicode Character Database, then it can uncomment the appropriate lines.


Annex 1: Examples

This annex provides some detailed examples of the results of applying each of the normalization forms.

Common Examples

The following examples are cases where the Forms D and KD are identical, and Forms C and KC are identical.

     Original                          Form D, KD                        Form C, KC                      Notes
a    D-dot_above                       D + dot_above                     D-dot_above                     Both decomposed and precomposed canonical sequences produce the same result.
b    D + dot_above                     D + dot_above                     D-dot_above
c    D-dot_below + dot_above           D + dot_below + dot_above         D-dot_below + dot_above         By the time we have gotten to dot_above, it cannot be combined with the base character. There may be intervening combining marks (see f), so long as the result of the combination is canonically equivalent.
d    D-dot_above + dot_below           D + dot_below + dot_above         D-dot_below + dot_above
e    D + dot_above + dot_below         D + dot_below + dot_above         D-dot_below + dot_above
f    D + dot_above + horn + dot_below  D + horn + dot_below + dot_above  D-dot_below + horn + dot_above
g    E-macron-grave                    E + macron + grave                E-macron-grave                  Multiple combining characters are combined with the base character.
h    E-macron + grave                  E + macron + grave                E-macron-grave
i    E-grave + macron                  E + grave + macron                E-grave + macron                Characters will not be combined if they would not be canonical equivalents because of their ordering.
j    angstrom_sign                     A + ring                          A-ring                          Since Å (A-ring) is the preferred composite, it is the form produced for both characters.
k    A-ring                            A + ring                          A-ring

Normalization Forms D and C Examples

The following are examples of Forms D and C that illustrate how they differ from Forms KD and KC, respectively.

     Original         Form D            Form C           Notes
l    "Äffin"          "A\u0308ffin"     "Äffin"          The ffi_ligature (U+FB03) is not decomposed, since it has a compatibility mapping, not a canonical mapping. (See Normalization Forms KD and KC Examples.)
m    "Ä\uFB03n"       "A\u0308\uFB03n"  "Ä\uFB03n"
n    "Henry IV"       "Henry IV"        "Henry IV"       Similarly, the ROMAN NUMERAL IV (U+2163) is not decomposed.
o    "Henry \u2163"   "Henry \u2163"    "Henry \u2163"
p    ga               ka + ten          ga               Different compatibility equivalents of a single Japanese character will not result in the same string in Normalization Form C.
q    ka + ten         ka + ten          ga
r    hw_ka + hw_ten   hw_ka + hw_ten    hw_ka + hw_ten
s    ka + hw_ten      ka + hw_ten       ka + hw_ten
t    hw_ka + ten      hw_ka + ten       hw_ka + ten
u    kaks             ki + am + ksf     kaks             Hangul syllables are maintained under normalization.

Normalization Forms KD and KC Examples

The following are examples of Forms KD and KC that illustrate how they differ from Forms D and C, respectively.

     Original         Form KD        Form KC   Notes
l'   "Äffin"          "A\u0308ffin"  "Äffin"   The ffi_ligature (U+FB03) is decomposed in Normalization Form KC (where it is not in Normalization Form C).
m'   "Ä\uFB03n"       "A\u0308ffin"  "Äffin"
n'   "Henry IV"       "Henry IV"     "Henry IV" Similarly, the resulting strings here are identical in Normalization Form KC.
o'   "Henry \u2163"   "Henry IV"     "Henry IV"
p'   ga               ka + ten       ga        Different compatibility equivalents of a single Japanese character will result in the same string in Normalization Form KC.
q'   ka + ten         ka + ten       ga
r'   hw_ka + hw_ten   ka + ten       ga
s'   ka + hw_ten      ka + ten       ga
t'   hw_ka + ten      ka + ten       ga
u'   kaks             ki + am + ksf  kaks      Hangul syllables are maintained under normalization.*

*In earlier versions of Unicode, jamo characters like ksf had compatibility mappings to kf + sf. These mappings were removed in Unicode 2.1.9 to ensure that Hangul syllables are maintained.

Annex 2: Design Goals

The following are the design goals for the specification of the normalization forms, and are presented here for reference.

Goal 1: Uniqueness

The first, and by far the most important, design goal for the normalization forms is uniqueness: two equivalent strings will have precisely the same normalized form. More explicitly,

  1. If two strings x and y are canonical equivalents, then C(x) = C(y) and D(x) = D(y).
  2. If two strings x and y are compatibility equivalents, then KC(x) = KC(y) and KD(x) = KD(y).

Goal 2: Stability

The second major design goal for the normalization forms is stability of characters that are not involved in the composition or decomposition process.

  1. If X contains a character with a compatibility decomposition, then D(X) and C(X) still contain that character.
     
  2. As much as possible, if there are no combining characters in X, then C(X) = X.
     
  3. Irrelevant combining marks should not affect the results of composition. See example f in Annex 1: Examples, where the horn character does not affect the results of composition.

Note: The only characters for which Goal 2.2 is not true are those in the §6 Composition Exclusion Table.

Goal 3: Efficiency

The third major design goal for the normalization forms is that they allow for efficient implementations.

  1. It is possible to implement efficient code for producing the Normalization Forms. In particular, it should be possible to produce Normalization Form C very quickly from strings that are already in Normalization Form C or are in Normalization Form D.
     
  2. Composition Forms do not have to produce the shortest possible results, because that can be computationally expensive.

Annex 3: Implementation Notes

There are a number of optimizations that can be made in programs that produce Normalization Form C. Rather than first decomposing the text fully, a quick check can be made on each character. If it is already in the proper precomposed form, then no work has to be done. Only if the current character is combining or in the §6 Composition Exclusion Table does a slower code path need to be invoked. (This code path will need to look at previous characters, back to the last starter. See Annex 8: Trailing Characters for more information.)

The majority of the cycles spent in doing composition is spent looking up the appropriate data. The data lookup for Normalization Form C can be very efficiently implemented, since it only has to look up pairs of characters, not arbitrary strings. First a multi-stage table (as discussed in Chapter 5 of the Unicode Standard) is used to map a character c to a small integer i in a contiguous range from 0 to n. The code for doing this looks like:

i = data[index[c >> BLOCKSHIFT] + (c & BLOCKMASK)];

Then a pair of these small integers are simply mapped through a two-dimensional array to get a resulting value. This yields much better performance than a general-purpose string lookup in a hash table.
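The two-stage lookup can be sketched as follows in Python; the block size and the sample mapping (two combining classes taken from the data excerpt in Annex 4) are arbitrary choices for illustration:

```python
BLOCKSHIFT = 7
BLOCKMASK = (1 << BLOCKSHIFT) - 1

def build_tables(mapping, max_cp):
    # Build the index/data arrays; identical blocks (here, mostly all-zero
    # ones) are stored once and shared, which is what saves the space.
    index, data, seen = [], [], {}
    for base in range(0, max_cp + 1, BLOCKMASK + 1):
        block = tuple(mapping.get(base + i, 0) for i in range(BLOCKMASK + 1))
        if block not in seen:
            seen[block] = len(data)
            data.extend(block)
        index.append(seen[block])
    return index, data

def lookup(index, data, c):
    # The lookup from the text: one index fetch, then one data fetch.
    return data[index[c >> BLOCKSHIFT] + (c & BLOCKMASK)]

# Sample data: combining classes of acute (230) and cedilla (202).
index, data = build_tables({0x0301: 230, 0x0327: 202}, 0x0FFF)
assert lookup(index, data, 0x0301) == 230
assert lookup(index, data, 0x0041) == 0
```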

Since the Hangul compositions and decompositions are algorithmic, memory storage can be significantly reduced if the corresponding operations are done in code. See Annex 10: Hangul for more information.

Annex 4: Decomposition

For those reading this document without access to the Unicode Standard, the following summarizes the canonical decomposition process. For a complete discussion, see Sections 3.6 and 3.10 of the Unicode Standard.

Canonical decomposition is the process of taking a string, recursively replacing composite characters using the Unicode canonical decomposition mappings (including the algorithmic Hangul canonical decomposition mappings, see Annex 10: Hangul), and putting the result in canonical order.

Compatibility decomposition is the process of taking a string, replacing composite characters using both the Unicode canonical decomposition mappings and the Unicode compatibility decomposition mappings, and putting the result in canonical order.

A string is put into canonical order by repeatedly replacing any exchangeable pair by the pair in reversed order. When there are no remaining exchangeable pairs, then the string is in canonical order. Note that the replacements can be done in any order.

A sequence of two adjacent characters in a string is an exchangeable pair if the combining class (from the Unicode Character Database) for the first character is greater than the combining class for the second, and the second is not a starter; that is, if combiningClass(first) > combiningClass(second) > 0.

Examples of exchangeable pairs:

Sequence             Combining classes   Status
<acute, cedilla>     230, 202            exchangeable, since 230 > 202
<a, acute>           0, 230              not exchangeable, since 0 <= 230
<diaeresis, acute>   230, 230            not exchangeable, since 230 <= 230
<acute, a>           230, 0              not exchangeable, since the second class is zero
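Canonical ordering as defined above is just the repeated exchange of exchangeable pairs, i.e. a bubble sort on combining classes. A Python illustration (not part of the specification):

```python
from unicodedata import combining

def canonical_order(s):
    # Repeatedly exchange any exchangeable pair until none remain.
    chars, changed = list(s), True
    while changed:
        changed = False
        for i in range(len(chars) - 1):
            a, b = combining(chars[i]), combining(chars[i + 1])
            if b != 0 and a > b:  # exchangeable pair
                chars[i], chars[i + 1] = chars[i + 1], chars[i]
                changed = True
    return "".join(chars)

assert canonical_order("a\u0301\u0327") == "a\u0327\u0301"  # cedilla before acute
assert canonical_order("a\u0327\u0301") == "a\u0327\u0301"  # already ordered
```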

Example of decomposition:

  1. Take the string with the characters "ác´¸" (a-acute, c, acute, cedilla)
  2. The data file contains the following relevant information:
    code; name; ... canonical class; ... decomposition.
    0061;LATIN SMALL LETTER A;...0;...
    0063;LATIN SMALL LETTER C;...0;...
    00E1;LATIN SMALL LETTER A WITH ACUTE;...0;...0061 0301;...
    0107;LATIN SMALL LETTER C WITH ACUTE;...0;...0063 0301;...
    0301;COMBINING ACUTE ACCENT;...230;...
    0327;COMBINING CEDILLA;...202;...
  3. Applying the canonical decomposition mappings, we get "a´c´¸" (a, acute, c, acute, cedilla).
    • This is because 00E1 (a-acute) has a canonical decomposition mapping to 0061 0301 (a, acute)
  4. Applying the canonical ordering, we get "a´c¸´" (a, acute, c, cedilla, acute).
    • This is because cedilla has a lower canonical ordering value (202) than acute (230) does. The positions of 'a' and 'c' are not affected, since they are starters.
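The worked example above can be replayed against Python's unicodedata module, whose data matches the file excerpt shown (illustration only):

```python
import unicodedata

assert unicodedata.combining("\u0301") == 230  # COMBINING ACUTE ACCENT
assert unicodedata.combining("\u0327") == 202  # COMBINING CEDILLA

s = "\u00E1c\u0301\u0327"  # a-acute, c, acute, cedilla
# Steps 3 and 4: decompose a-acute, then reorder cedilla before acute.
assert unicodedata.normalize("NFD", s) == "a\u0301c\u0327\u0301"
```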

Annex 5: Code Sample

A code sample is available for the four different normalization forms. For clarity, this sample is not optimized. The implementation transforms a string in two passes: first decomposing, then recomposing that result by successively composing each unblocked character with the last starter.

In some implementations, people may be working with streaming interfaces that read and write small amounts at a time. In those implementations, the text back to the last starter needs to be buffered. Whenever a second starter would be added to that buffer, the buffer can be flushed.

The sample is written in Java, though for accessibility it avoids the use of object-oriented techniques. For access to the code, and for a live demonstration, see Normalizer.html. Equivalent Perl code is available on the W3C site, at http://www.w3.org/International/charlint/.

Annex 6: Legacy Encodings

While the Normalization Forms are specified for Unicode text, they can also be extended to non-Unicode (legacy) character encodings. This is based on mapping the legacy character set strings to and from Unicode using definitions D5 and D6.

D5. An invertible transcoding T for a legacy character set L is a one-to-one mapping from characters encoded in L to characters in Unicode, with an associated reverse mapping T⁻¹ such that for any string S in L, T⁻¹(T(S)) = S.

Note: Typically there is a single accepted invertible transcoding for a given legacy character set. In a few cases there may be multiple invertible transcodings: for example, Shift-JIS may have two different mappings used in different circumstances: one to preserve the '/' semantics of 2F16, and one to preserve the '¥' semantics.

Note: The character indexes in the legacy character set string may be very different than character indexes in the Unicode equivalent. For example, if a legacy string uses visual encoding for Hebrew, then its first character might be the last character in the Unicode string.

If you implement transcoders for legacy character sets, it is recommended that you ensure that the result is in Normalization Form C where possible. See UTR #22: Character Mapping Tables for more information.

D6. Given a string S encoded in L and an invertible transcoding T for L, the Normalization Form X of S under T is defined to be the result of mapping to Unicode, normalizing to Unicode Normalization Form X, and mapping back to the legacy character encoding; that is, T⁻¹(X(T(S))). Where there is a single accepted invertible transcoding for that character set, we can simply speak of the Normalization Form X of S.

Legacy character sets fall into three categories based on their normalization behavior with accepted transcoders.

Annex 7: Programming Language Identifiers

This section discusses issues that must be taken into account when considering normalization of identifiers in programming languages or scripting languages. The Unicode Standard provides a recommended syntax for identifiers for programming languages that allow the use of non-ASCII characters in code. It is a natural extension of the identifier syntax used in C and other programming languages:

<identifier> ::= <identifier_start> ( <identifier_start> | <identifier_extend> )*

<identifier_start> ::= [{Lu}{Ll}{Lt}{Lm}{Lo}{Nl}]

<identifier_extend> ::= [{Mn}{Mc}{Nd}{Pc}{Cf}]

That is, the first character of an identifier can be an uppercase letter, lowercase letter, titlecase letter, modifier letter, other letter, or letter number. The subsequent characters of an identifier can be any of those, plus non-spacing marks, spacing combining marks, decimal numbers, connector punctuations, and formatting codes (such as right-left-mark). Normally the formatting codes should be filtered out before storing or comparing identifiers.
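The syntax can be checked mechanically. A minimal Python sketch using general-category lookups (it omits the Annex 7 amendments and the recommended filtering of formatting codes, so it is an approximation):

```python
import unicodedata

START  = {"Lu", "Ll", "Lt", "Lm", "Lo", "Nl"}
EXTEND = {"Mn", "Mc", "Nd", "Pc", "Cf"}

def is_identifier(s):
    # <identifier_start> ( <identifier_start> | <identifier_extend> )*
    if not s or unicodedata.category(s[0]) not in START:
        return False
    return all(unicodedata.category(c) in START or
               unicodedata.category(c) in EXTEND for c in s[1:])

assert is_identifier("mark\u0301")  # trailing combining acute (Mn) is allowed
assert not is_identifier("1abc")    # identifiers may not start with a digit
```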

Normalization as described in this report can be used to avoid problems where apparently identical identifiers are not treated equivalently. Such problems can appear both during compilation and during linking, in particular also across different programming languages. To avoid such problems, programming languages should normalize identifiers before storing or comparing them. Generally if the programming language has case-sensitive identifiers then Normalization Form C should be used, while if the programming language has case-insensitive identifiers then Normalization Form KC should be used.

If programming languages are using form KC to level differences between characters, then they need to use a slight modification of the identifier syntax from the Unicode Standard to deal with the idiosyncrasies of a small number of characters. These characters fall into three classes:

  1. Middle Dot. Because most Catalan legacy data will be encoded in Latin-1, U+00B7 MIDDLE DOT needs to be allowed in <identifier_extend>. (If the programming language is using a dot as an operator, then U+2219 BULLET OPERATOR or U+22C5 DOT OPERATOR should be used instead. However, care should be taken when dealing with U+00B7 MIDDLE DOT, as many processes will assume its use as punctuation, rather than as a letter extender.)
  2. Combining-like characters. Certain characters are not formally combining characters, although they behave in most respects as if they were. Ideally, they should not be in <identifier_start>, but rather in <identifier_extend>, along with combining characters. In most cases, the mismatch does not cause a problem, but when these characters have compatibility decompositions, they can cause identifiers not to be closed under Normalization Form KC. In particular, the following four characters should be in <identifier_extend> and not <identifier_start>:
  3. Irregularly decomposing characters. U+037A GREEK YPOGEGRAMMENI and certain Arabic presentation forms have irregular compatibility decompositions, and need to be excluded from both <identifier_start> and <identifier_extend>. It is recommended that all Arabic presentation forms be excluded from identifiers in any event, although only a few of them are required to be excluded for normalization to guarantee identifier closure.

With these amendments to the identifier syntax, all identifiers are closed under all four Normalization forms. This means that for any string S,

isIdentifier(S) == isIdentifier(D(S))

isIdentifier(S) == isIdentifier(C(S))

isIdentifier(S) == isIdentifier(KD(S))

isIdentifier(S) == isIdentifier(KC(S))
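
The closure property can be spot-checked in code. The sketch below is illustrative (it is not the report's own sample code, which is in Normalizer.html); it assumes the JDK's java.text.Normalizer, and uses Character.isUnicodeIdentifierStart/isUnicodeIdentifierPart as an approximation of the <identifier_start> and <identifier_extend> classes:

```java
import java.text.Normalizer;

public class IdentifierClosure {
    // BMP-only sketch of isIdentifier() over the syntax above: the first code
    // unit must be in <identifier_start>, the rest in <identifier_extend>.
    public static boolean isIdentifier(String s) {
        if (s.isEmpty() || !Character.isUnicodeIdentifierStart(s.charAt(0))) {
            return false;
        }
        for (int i = 1; i < s.length(); ++i) {
            if (!Character.isUnicodeIdentifierPart(s.charAt(i))) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // "é1" spelled as e + U+0301 COMBINING ACUTE ACCENT; NFC composes the
        // pair to U+00E9, NFD leaves it decomposed -- isIdentifier agrees on all forms.
        String s = "e\u03011";
        for (Normalizer.Form f : Normalizer.Form.values()) {
            System.out.println(isIdentifier(s) == isIdentifier(Normalizer.normalize(s, f))); // true
        }
    }
}
```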

Identifiers are also closed under lowercasing, so that for any string S,

 isIdentifier(S) == isIdentifier(toLower(S))

In addition, identifiers are almost closed under uppercasing. For any string S that does not start with the character U+0345 COMBINING GREEK YPOGEGRAMMENI,

isIdentifier(S) == isIdentifier(toUpper(S))

In the very unusual case that U+0345 is at the start of S, identifiers are not closed, because U+0345 is not in <identifier_start> but its uppercase, U+0399 GREEK CAPITAL LETTER IOTA, is. In practice this is not a problem, because of the way normalization is used with identifiers.
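
This gap can be observed with the JDK's Character methods (an illustrative check; the JDK's identifier predicate is assumed here to approximate <identifier_start>):

```java
public class UppercaseClosureGap {
    public static void main(String[] args) {
        // U+0345 COMBINING GREEK YPOGEGRAMMENI is a non-spacing mark,
        // so it is not in <identifier_start>...
        System.out.println(Character.isUnicodeIdentifierStart('\u0345')); // false
        // ...but its uppercase mapping, U+0399 GREEK CAPITAL LETTER IOTA, is.
        System.out.println(Character.toUpperCase('\u0345') == '\u0399'); // true
        System.out.println(Character.isUnicodeIdentifierStart('\u0399')); // true
    }
}
```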

Those programming languages with case-insensitive identifiers should also use the case mappings described in UTR #21, Case Mappings, to produce a case-insensitive normalized form. This requires using the data in both UnicodeData.txt and SpecialCasing.txt.
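
As a sketch of such leveling, assuming the JDK's java.text.Normalizer and String.toLowerCase (which applies the full case mappings in current JDKs) are acceptable substitutes for table-driven code:

```java
import java.text.Normalizer;
import java.util.Locale;

public class CaseInsensitiveLevel {
    // Level a case-insensitive identifier: lowercase with the root locale,
    // then normalize to Form KC.
    public static String level(String id) {
        return Normalizer.normalize(id.toLowerCase(Locale.ROOT), Normalizer.Form.NFKC);
    }

    public static void main(String[] args) {
        // U+FB01 LATIN SMALL LIGATURE FI has a compatibility decomposition
        // to "fi", so "ﬁle" and "FILE" level to the same form.
        System.out.println(level("\uFB01le").equals(level("FILE"))); // true
    }
}
```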

Identifiers must be parsed before leveling transformations such as case mapping or normalization are applied. Literals, of course, should not be leveled.

Sample code in Java that shows parsing for identifiers, including leveling distinctions using Normalization and case conversion, is available via Normalizer.html.

Annex 8: Trailing Characters

The Trailing Characters Table lists the characters in Unicode 3.0 that may occur in a canonical decomposition of a character, but not as the first character of that decomposition. If a string does not contain characters in the Trailing Characters table or in the Composition Exclusion Table, then none of its characters participate in compositions, so the only processing required for Normalization Form C is to make sure that the characters are in canonical order.

The Other Non-Starters Table contains all of the Unicode 3.0 non-starters that are neither in the Trailing Characters table nor in the Composition Exclusion table. If a string contains no characters from any of these three tables, then it is in Normalization Form C already.

The inclusion of these tables is informative: both can be generated from the Unicode Character Database and the Composition Exclusion table.

Annex 9: Conformance Testing

Implementations must be thoroughly tested for conformance to the normalization specification, especially for Normalization Form C. The following conditions should be tested in any implementation. These tests make use of a particularly good test string, formed by prepending a non-interacting base character '#' to the given character and then appending a non-spacing mark with a low combining class, '\u0334' (COMBINING TILDE OVERLAY).

For every character X in Unicode, let the string Y be D(X), and the string Z be C(Y). (For the notation used here, see §2 Notation.) Check that the following conditions for these strings are true:

  1. If X does not have a canonical decomposition mapping in the Unicode Character Database, then X = Y = Z; otherwise:
  2. Y and Z must be in canonical order
  3. X ≠ Y
  4. No character in Y can have a canonical decomposition mapping in the Unicode Character Database
  5. If X is in the CompositionExclusions table, then:
  6. If X is not in the CompositionExclusions table, then:

To test for canonical order in a string S, check the following condition for each character index i in the string (except the first). If this condition fails, the string is not in canonical order.

if combiningClass(S[i-1]) > combiningClass(S[i])
then combiningClass(S[i]) == 0.
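
This check can be coded directly. Combining classes come from UnicodeData.txt; since the sketch below does not presume any particular lookup API, it takes the classes as a plain array (an illustrative assumption):

```java
public class CanonicalOrderCheck {
    // Returns true if a sequence of combining classes is in canonical order:
    // a class may be lower than its predecessor's only if it is zero (a starter).
    public static boolean isCanonicallyOrdered(int[] cc) {
        for (int i = 1; i < cc.length; ++i) {
            if (cc[i - 1] > cc[i] && cc[i] != 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // e.g. <base (0), COMBINING DOT BELOW (220), COMBINING ACUTE ACCENT (230)>
        System.out.println(isCanonicallyOrdered(new int[]{0, 220, 230})); // true
        // the reverse order of the two non-starters is not canonical
        System.out.println(isCanonicallyOrdered(new int[]{0, 230, 220})); // false
        // a drop to zero is always allowed: it marks a new starter
        System.out.println(isCanonicallyOrdered(new int[]{0, 230, 0})); // true
    }
}
```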

Annex 10: Hangul

Since the Hangul compositions and decompositions are algorithmic, memory storage can be significantly reduced if the corresponding operations are done in code rather than by simply storing the data in the general-purpose tables. Here is sample code illustrating algorithmic Hangul canonical decomposition and composition, done according to the specification in Section 3.11 Combining Jamo Behavior. Although coded in Java, the same structure can be used in other programming languages.

Common Constants

    static final int
        SBase = 0xAC00, LBase = 0x1100, VBase = 0x1161, TBase = 0x11A7,
        LCount = 19, VCount = 21, TCount = 28,
        NCount = VCount * TCount,   // 588
        SCount = LCount * NCount;   // 11172

Hangul Decomposition

    public static String decomposeHangul(char s) {
        int SIndex = s - SBase;
        if (SIndex < 0 || SIndex >= SCount) {
            return String.valueOf(s);
        }
        StringBuffer result = new StringBuffer();
        int L = LBase + SIndex / NCount;
        int V = VBase + (SIndex % NCount) / TCount;
        int T = TBase + SIndex % TCount;
        result.append((char)L);
        result.append((char)V);
        if (T != TBase) result.append((char)T);
        return result.toString();
    }
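
The arithmetic can be cross-checked against a platform implementation of canonical decomposition. The check below assumes the JDK's java.text.Normalizer, a later API that is not part of this report:

```java
import java.text.Normalizer;

public class HangulDecomposeCheck {
    public static void main(String[] args) {
        // U+AC01 (GAG): SIndex = 1, so L = U+1100, V = U+1161, T = U+11A8.
        System.out.println(Normalizer.normalize("\uAC01", Normalizer.Form.NFD)
            .equals("\u1100\u1161\u11A8")); // true
        // U+AC00 (GA) has no trailing consonant: T == TBase, so only <L, V>.
        System.out.println(Normalizer.normalize("\uAC00", Normalizer.Form.NFD)
            .equals("\u1100\u1161")); // true
    }
}
```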

Hangul Composition

Notice an important feature of Hangul composition: when the source string is not in Normalization Form D, it is not sufficient to detect character sequences of the form <L, V> and <L, V, T>. Sequences of the form <LV, T> must also be caught and composed, to guarantee uniqueness. This is illustrated in Step 2 below.

    public static String composeHangul(String source) {
        int len = source.length();
        if (len == 0) return "";
        StringBuffer result = new StringBuffer();
        char last = source.charAt(0);            // copy first char
        result.append(last);

        for (int i = 1; i < len; ++i) {
            char ch = source.charAt(i);

            // 1. check to see if two current characters are L and V

            int LIndex = last - LBase;
            if (0 <= LIndex && LIndex < LCount) {
                int VIndex = ch - VBase;
                if (0 <= VIndex && VIndex < VCount) {

                    // make syllable of form LV

                    last = (char)(SBase + (LIndex * VCount + VIndex) * TCount);
                    result.setCharAt(result.length()-1, last); // reset last
                    continue; // discard ch
                }
            }

            // 2. check to see if two current characters are LV and T

            int SIndex = last - SBase;
            if (0 <= SIndex && SIndex < SCount && (SIndex % TCount) == 0) {
                int TIndex = ch - TBase;
                if (0 < TIndex && TIndex < TCount) {

                    // make syllable of form LVT

                    last += TIndex;
                    result.setCharAt(result.length()-1, last); // reset last
                    continue; // discard ch
                }
            }

            // if neither case was true, just add the character

            last = ch;
            result.append(ch);
        }
        return result.toString();
    }
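
The <LV, T> subtlety can be cross-checked the same way, again assuming the JDK's java.text.Normalizer:

```java
import java.text.Normalizer;

public class HangulComposeCheck {
    public static void main(String[] args) {
        // <L, V, T>: U+1100 U+1161 U+11A8 composes to U+AC01.
        System.out.println(Normalizer.normalize("\u1100\u1161\u11A8",
            Normalizer.Form.NFC).equals("\uAC01")); // true
        // <LV, T>: the precomposed LV syllable U+AC00 followed by the trailing
        // consonant U+11A8 must compose to the same LVT syllable.
        System.out.println(Normalizer.normalize("\uAC00\u11A8",
            Normalizer.Form.NFC).equals("\uAC01")); // true
    }
}
```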

Additional transformations can be performed on sequences of Hangul jamo for various purposes. For example, to regularize sequences of Hangul jamo into standard syllables, the choseong and jungseong fillers can be inserted, as described in Chapter 3. (In the text of the 2.0 version of the Unicode Standard, these standard syllables were called canonical syllables, but this has nothing to do with canonical composition or decomposition.) For keyboard input, additional compositions may be performed. For example, the trailing consonants kf + sf may be combined into ksf. In addition, some Hangul input methods do not require a distinction on input between initial and final consonants, and change between them on the basis of context. For example, in the keyboard sequence mi + em + ni + si + am, the consonant ni would be reinterpreted as nf, since there is no possible syllable nsa. This results in the two syllables men and sa.

However, none of these additional transformations are considered part of the Unicode Normalization Forms.

Hangul Character Names

Hangul decomposition is also used to form the character names for the Hangul syllables. While the sample code that illustrates this process is not directly related to normalization, it is worth including because it is so similar to the decomposition code.

    public static String getHangulName(char s) {
        int SIndex = s - SBase;
        if (SIndex < 0 || SIndex >= SCount) {
            throw new IllegalArgumentException("Not a Hangul Syllable: " + s);
        }
        int LIndex = SIndex / NCount;
        int VIndex = (SIndex % NCount) / TCount;
        int TIndex = SIndex % TCount;
        return "HANGUL SYLLABLE " + JAMO_L_TABLE[LIndex]
          + JAMO_V_TABLE[VIndex] + JAMO_T_TABLE[TIndex];
    }

    static private String[] JAMO_L_TABLE = {
        "G", "GG", "N", "D", "DD", "R", "M", "B", "BB",
        "S", "SS", "", "J", "JJ", "C", "K", "T", "P", "H"
    };

    static private String[] JAMO_V_TABLE = {
        "A", "AE", "YA", "YAE", "EO", "E", "YEO", "YE", "O",
        "WA", "WAE", "OE", "YO", "U", "WEO", "WE", "WI",
        "YU", "EU", "YI", "I"
    };

    static private String[] JAMO_T_TABLE = {
        "", "G", "GG", "GS", "N", "NJ", "NH", "D", "L", "LG", "LM",
        "LB", "LS", "LT", "LP", "LH", "M", "B", "BS",
        "S", "SS", "NG", "J", "C", "K", "T", "P", "H"
    };

Annex 11: Intellectual Property

Transcript of letter regarding disclosure of IBM Technology
(Hard copy is on file with the Chair of UTC and the Chair of NCITS/L2)
Transcribed on 1999-03-10

February 26, 1999

 

The Chair, Unicode Technical Committee

Subject: Disclosure of IBM Technology - Unicode Normalization Forms

The attached document entitled "Unicode Normalization Forms" does not require IBM technology, but may be implemented using IBM technology that has been filed for US Patent. However, IBM believes that the technology could be beneficial to the software community at large, especially with respect to usage on the Internet, allowing the community to derive the enormous benefits provided by Unicode.

This letter is to inform you that IBM is pleased to make the Unicode normalization technology that has been filed for patent freely available to anyone using them in implementing to the Unicode standard.

Sincerely,

 

W. J. Sullivan,
Acting Director of National Language Support
and Information Development

 


Copyright © 1998-1999 Unicode, Inc. All Rights Reserved.

The Unicode Consortium makes no expressed or implied warranty of any kind, and assumes no liability for errors or omissions. No liability is assumed for incidental and consequential damages in connection with or arising out of the use of the information or programs contained or accompanying this technical report.

Unicode and the Unicode logo are trademarks of Unicode, Inc., and are registered in some jurisdictions.