Authors: Mark Davis (firstname.lastname@example.org)
This document specifies an XML format for the interchange of data for character encodings. It provides a complete description for such encodings in terms of a defined mapping to and from Unicode.
This document contains material which has been considered and approved by the Unicode Technical Committee for publication as a Proposed Draft Technical Report. At the current time, the specifications in this technical report are provided as information and guidance to implementers of the Unicode Standard, but do not form part of the standard itself. The Unicode Technical Committee may decide to incorporate all or part of the material of this technical report into a future version of the Unicode Standard, either as informative material or as normative specification. Please mail corrigenda and other comments to the author.
The content of all technical reports must be understood in the context of the appropriate version of the Unicode Standard. References in this technical report to sections of the Unicode Standard refer to the Unicode Standard, Version 3.0. See http://www.unicode.org/unicode/standard/versions for more information.
The ability to seamlessly handle multiple character encodings is crucial in today's world, where a server may need to handle many different client character encodings covering many different markets. No matter how characters are represented, servers need to be able to process them appropriately. Unicode provides a common model and representation of characters for all the languages of the world. Because of this, Unicode is being adopted by more and more systems as the internal storage and processing code. Rather than trying to maintain data in literally hundreds of different encodings, a program can translate the source data into Unicode on entry, process it as required, and translate it into a target character set on request.
Even where Unicode is not used as a process code, it is often used as a pivot encoding. Rather than requiring ten thousand tables to map each of a hundred character encodings to one another, data can be transcoded first to Unicode and then into the eventual target encoding. This requires only a hundred tables, rather than ten thousand.
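The pivot approach can be sketched with Python's built-in codecs, which already use Unicode (the str type) as the internal pivot; the codec names below are ordinary Python codec aliases, used here purely for illustration:

```python
def transcode(data: bytes, source: str, target: str) -> bytes:
    """Convert bytes from one character encoding to another via Unicode."""
    pivot = data.decode(source)   # source encoding -> Unicode (the pivot)
    return pivot.encode(target)   # Unicode -> target encoding

# Latin-1 bytes for "café" become the equivalent UTF-8 bytes.
latin1_bytes = "café".encode("latin-1")
utf8_bytes = transcode(latin1_bytes, "latin-1", "utf-8")
```

With N encodings, only N such decode tables and N encode tables are needed, rather than N×N direct conversion tables.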
Whether or not Unicode is used, it is ever more vital to maintain the consistency of data across conversions between different character encodings. Because of the fluidity of data in a networked world, it is easy for it to be converted from, say, CP930 on a Windows platform, sent to a UNIX server as UTF-8, processed, and converted back to CP930 for representation on another client machine. This requires implementations to have identical mappings for different character encodings, no matter what platform they are working on. It also requires them to use the same name for the same encoding, and different names for different encodings. This is difficult to do unless there is a standard specification for the mappings so that it can be precisely determined what the encoding maps to.
This technical report provides such a standard specification for the interchange of mapping data that defines a character encoding. By using this specification, implementations can be assured of providing precisely the same mappings as other implementations on different platforms.
This report is in the initial stages of development; feedback is welcome.
Client software needs to distinguish the different types of mismatches that can occur when transcoding data between different character encodings. These fall into the following categories:
In the case of illegal source sequences, a conversion routine will typically provide the following options:
Note: There is an important difference between a sequence that represents a real REPLACEMENT CHARACTER in a legacy encoding and one that is merely unassigned and is therefore mapped to REPLACEMENT CHARACTER (via an API substitution option).
Note: An API may choose to signal an illegal sequence in a legacy character set by mapping it to one of the explicit NOT A CHARACTER code points in Unicode (any of the form xxFFFE or xxFFFF). However, this mechanism runs the risk of these values being transmitted in Unicode text (which is non-conformant), and should be used with caution.
Unassigned sequences can be handled with any of the above options, plus some additional ones. They should always be treated as a single code point: for example, 0xA3BF is treated as a single code point when mapping into Unicode from CP950. Because an unassigned sequence may actually come from a more recent version of the character encoding, it is often important to preserve round-trip mappings where possible. This can be done with additional options:
For unmappable sequences, all of the above options and one additional option may be available:
It is very important that systems be able to distinguish between the fallback mappings and regular mappings. Systems like XML require the use of hex escape sequences to preserve round-trip integrity; use of fallback characters in that case corrupts the data.
Because illegal values represent some corruption of the data stream, conversion routines may be directed to handle them in a different way than by replacement characters. For example, a routine might map unassigned characters to a substitution character, but throw an exception on illegal values.
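These options correspond to the error-handling modes found in typical conversion APIs; for example, Python's codec machinery offers substitution, deletion, and exception modes on decode (a sketch, using UTF-8 as the source encoding):

```python
# 0xFF can never appear in well-formed UTF-8, so this stream
# contains an illegal (not merely unassigned) sequence.
bad = b"abc\xffdef"

# Option: substitute U+FFFD REPLACEMENT CHARACTER.
replaced = bad.decode("utf-8", errors="replace")
assert "\ufffd" in replaced

# Option: silently skip the offending bytes (risks hiding corruption).
ignored = bad.decode("utf-8", errors="ignore")
assert ignored == "abcdef"

# Option: signal the corruption to the caller.
try:
    bad.decode("utf-8", errors="strict")
except UnicodeDecodeError:
    pass  # the routine "throws an exception on illegal values"
```

A converter may apply these modes selectively, as the text notes: substitution for unassigned sequences but an exception for illegal ones.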
It is important that a mapping file be a complete description. From the data in the file, it should be possible to tell for any sequence of bytes whether that sequence is assigned, unassigned, or illegal. It should also be possible to tell if characters need to be rearranged to be in Unicode standard order (visual order, combining marks after base forms, etc).
The Unicode Standard has two equivalent ways of representing composite characters such as â. UTR #15: Unicode Normalization Forms defines normalized formats that provide unique representations of such data. The standard format for a character encoding specification itself is to map to sequences of Unicode characters in Normalization Form C. However, this does not guarantee that the result of transcoding into Unicode will be normalized, since individual characters in the source encoding may separately map to an unnormalized sequence.
For example, suppose the source encoding maps 0x83 to 0x030A in Unicode (combining ring above), and 0x61 to 0x0061 (a). Then the sequence <0x61,0x83> will map to <0x0061,0x030A> in Unicode, which is not in Normalization Form C.
This problem will only arise when the source encoding has separate characters that, in the proper context, would not be present in normalized text. If a process wishes to guarantee that the result is in a particular Unicode normalization form, then it should normalize after transcoding. Information is provided below that can determine whether this step is required.
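The example above can be checked with Python's unicodedata module: mapping the bytes character by character yields <U+0061, U+030A>, which only becomes Normalization Form C (the single character U+00E5) after an explicit normalization pass (the is_normalized check assumes Python 3.8 or later):

```python
import unicodedata

# Per-character transcoding result: 'a' followed by COMBINING RING ABOVE.
transcoded = "\u0061\u030A"

# The sequence is not in NFC... (is_normalized requires Python 3.8+)
assert not unicodedata.is_normalized("NFC", transcoded)

# ...but normalizing after transcoding composes it into U+00E5.
nfc = unicodedata.normalize("NFC", transcoded)
assert nfc == "\u00E5"
```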
A character mapping specification file starts with the following lines. There is a difference between the encoding of the XML file and the encoding of the mapping data. The encoding of the file can be any valid XML encoding. Only the ASCII repertoire of characters is required in the specification of the mapping data, but comments may be in other character encodings. The example below happens to use UTF-8.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE characterMapping SYSTEM "http://www.unicode.org/unicode/reports/tr22/CharacterMapping.dtd">
Note: In the rest of this specification, short attribute and element names are used just to conserve space where there may be a large number of items, or for consistency with other elements that may have a large number of items.
A mapping file begins with a comment header. Here is an (artificial) example:
<characterMapping
    name="CP938"
    description="Sun variant of CP942 for Japanese"
    unicodeVersion="3.0"
    tableVersion="2"
    contact="email@example.com"
    registrationAuthority="IBM"
    registrationName="CP935"
    copyright="SomeCompany"
    bidiOrder="logical"
    combiningOrder="after"
    normalization="C">
The characterMapping element is the root. It contains a number of required attributes:
name is a string which uniquely identifies this mapping table from all others. Where possible this should be the preferred name from the IANA character set registry. This string must be limited to the minimal character repertoire of ASCII letters, digits, plus '-', '.', ':', and '_'. The name value is not case-sensitive. It must be unique; if two mapping tables differ in the mapping of any characters, in the specification of illegal characters, in their bidi ordering, in their combining character ordering, etc., then they must have a different name (or a different version: see below).
description is a string which describes the mapping enough to distinguish it from other similar mappings. This string must be limited to the Unicode range 0x0020 - 0x007E and should be in English. The string normally contains the set of mappings, the script, language, or locale for which it is intended, and optionally the variation. For instance, "Windows Japanese JIS-1990", "EBCDIC Latin 1 with Euro", "PC Greek".
unicodeVersion is the earliest version of the Unicode standard (2.0 or later) that contains all of the characters mapped to. For example, most of the ISO 8859 series will use Unicode 2.0; the newer ones with the Euro will use Unicode 2.1.
tableVersion is the version of the data, a small integer normally starting at one. Any time the data is modified, the value must be increased. If only additions are made, then the same name can be retained; if not, then a new name must be used. Additions change mappings from "unassigned" to "assigned". Any change in the validity of character sequences requires a new name.
contact is the person to contact in case errors are found in the data. This must be an e-mail address or URL.
registrationAuthority is the organization responsible for the encoding.
registrationName is a string that provides the name and version of the mapping, as known to that authority.
copyright provides the copyright information. While this can be provided in comments, use of a field allows copyright propagation when converting to a binary form of the table. (Typically the right to use the information is granted, but not the right to erase the copyright or pretend that you created the information.)
bidiOrder specifies whether the character encoding is to be interpreted in one of two orders: "visual" or "logical". Unicode is strictly logical order. Application of the Unicode Bidirectional Algorithm is required to map to a visual-order character encoding; application of a reverse bidirectional algorithm is required to map back to Unicode. The default value for this attribute is "logical". It is only relevant for character encodings for the Middle East (Arabic and Hebrew). For more information, see UTR #9: The Bidirectional Algorithm.
combiningOrder specifies the order of combining marks: either "before" or "after". Some character encodings, typically those for bibliographic use, store combining marks before base characters. Unicode stores them uniformly after base characters. The default value for this attribute is "after". This is only relevant for character encodings with combining marks.
normalization specifies whether the result of transcoding into Unicode using this mapping will be automatically in Normalization Form C or D. The possible values are "neither", "C", "D", "CD". While this information can be derived from an analysis of the assignment statements (see UTR #15: Unicode Normalization Forms), providing the information in the header is a useful validity check. Most mappings specifications will have the value "C". Character encodings that contain neither composite characters nor combining marks (such as 7-bit ASCII) will have the value "CD".
<history supercedes="CP501" derivedFrom="CP500">
    <modified version="2" date="1999-09-25">
        Added Euro.
    </modified>
    <modified version="1" date="1997-01-01">
        Made out of whole cloth for illustration.
    </modified>
</history>
history provides information about the changes to the file and relations to other encodings. This is a required element.
modified provides information about the changes to the file, coordinated with the version. The latest version should be first.
supercedes is an optional attribute that indicates a relation to another encoding. This encoding supercedes another when all of the assigned values in the other mapping are contained in this one, and there are additional assigned values.
derivedFrom is an optional attribute that indicates a relation to another encoding. This encoding derives from another encoding when it was formed by replacing some of the assigned values by different assignments. For instance, Cp1148 is derived from Cp500 with Euro substituted for a currency sign.
<aliases> <!--List of aliases, such as IANA names-->
    <n n="MS983"/>
    <n n="SJIS"/>
</aliases>
<displayNames> <!--List of display names for this encoding in different languages-->
    <d xml:lang="en" n="Sun Chinese"/>
    <d xml:lang="fr-BE" n="Sun Chinoise"/>
</displayNames>
The aliases element provides a list of possible aliases for this code page. It is optional. The names may not be unique, because of the history behind the development of character encoding names. The n attribute is used to supply the name. Aliases should only be provided where the character encoding mappings are known to match this table precisely. Related mappings can be included in the history element.
The displayNames element is optional, but strongly recommended. It provides user-level names that can be presented in menus, such as in the Netscape Navigator View>Character Set or the Microsoft Internet Explorer View>Encoding menu. The individual names are supplied with d elements. The xml:lang attribute supplies the locale. The n attribute supplies the name. Both attributes are required.
It is possible to supply just the differences between one table and a base table. This is done with the import element, which is optional. If this is used, then any further data simply overrides the data in the base table. The value of the source attribute is a valid URL pointing to a valid character encoding table.
As discussed above, it is important to be able to distinguish when characters are unassigned vs. when they are invalid. Valid and invalid sequences are specified by the validity element. Here is an example of what this might look like:
<validity> <!--Validity specification for SJIS-->
    <illegal s="FD" e="FF"/>
    <legal s="81" e="9F" next="second"/>
    <legal s="E0" e="FC" next="second"/>
    <legal type="second" s="40" e="7E"/>
    <legal type="second" s="80" e="FC"/>
</validity>
The subelements are legal or illegal. Their attributes are:
All values referring to code units are in hexadecimal. If we look at the above table, the first line tells us that the single bytes FD through FF are illegal. The next two lines say that the bytes in the ranges 81 through 9F and E0 through FC are legal if they are followed by a byte of type="second". More detailed samples of complex validity specifications are given in Samples.
If any bytes are not explicitly set for type="start", then they are assumed to be legal with next="end". Thus most single-byte encodings do not need validity elements. Any string can be used for the value of type or next, as long as it is not subject to an error condition.
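One way to read a validity element is as a byte-at-a-time state machine: type names the state in which a byte is matched, and next names the state for the following byte. The sketch below hand-copies the SJIS example above into rule tuples (the function and data layout are illustrative, not part of the specification):

```python
# Each rule: (state, low, high, next_state). "end" means the sequence
# is complete; start bytes not covered by any rule are legal single
# bytes, per the default described in the text.
SJIS_RULES = [
    ("start", 0x81, 0x9F, "second"),
    ("start", 0xE0, 0xFC, "second"),
    ("second", 0x40, 0x7E, "end"),
    ("second", 0x80, 0xFC, "end"),
]
SJIS_ILLEGAL = [(0xFD, 0xFF)]

def is_valid(data: bytes) -> bool:
    """Check a byte string against the SJIS validity rules above."""
    state = "start"
    for byte in data:
        if state == "end":
            state = "start"          # previous sequence complete; restart
        if state == "start" and any(lo <= byte <= hi for lo, hi in SJIS_ILLEGAL):
            return False             # explicitly illegal single byte
        for st, lo, hi, nxt in SJIS_RULES:
            if st == state and lo <= byte <= hi:
                state = nxt
                break
        else:
            if state != "start":
                return False         # trail byte out of range
            # uncovered start byte: legal single byte (implicit next="end")
    return state in ("start", "end") # must not end mid-sequence
```

For example, is_valid accepts the two-byte sequence 81 40 and the ASCII string "abc", but rejects a lone lead byte 81 and the illegal byte FD.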
The main part of the table provides the assignments of mappings between byte sequences and Unicode characters. Here is an example:
<assignments sub="A3">
    <!--Unassignments-->
    <a b="AA"/>
    <a b="AB"/>
    <!--Fallbacks-->
    <a b="22" u="201C" f="u" n="LEFT DOUBLE QUOTATION MARK"/>
    <a b="22" u="201D" f="u" n="RIGHT DOUBLE QUOTATION MARK"/>
    <!--Main mappings-->
    <!--Map ASCII to the same range-->
    <a b="00" u="0000" c="7F"/> <!--maps 00..7F to 0000..007F-->
    <!--Map user-defined area to private use-->
    <a b="F040" u="E000" e="3E"/> <!--maps F040..F07E to E000..E03E-->
    <!--Map other characters specifically-->
    <a b="A1" u="FF61" n="HALFWIDTH IDEOGRAPHIC FULL STOP"/>
    <a b="A2" u="FF62" n="HALFWIDTH LEFT CORNER BRACKET"/>
    <a b="8156" u="3003" n="DITTO MARK"/>
    <a b="8157" u="4EDD"/>
    <a b="8158" u="3005" n="IDEOGRAPHIC ITERATION MARK"/>
    <a b="8159" u="3006" n="IDEOGRAPHIC CLOSING MARK"/>
    <a b="815A" u="3007" n="IDEOGRAPHIC NUMBER ZERO"/>
    <a b="815B" u="30FC" n="KATAKANA-HIRAGANA PROLONGED SOUND MARK"/>
    <a b="815C" u="2015" n="HORIZONTAL BAR"/>
</assignments>
sub is an attribute that specifies the replacement character used in the legacy character encoding. (U+FFFD REPLACEMENT CHARACTER is used in Unicode.) The value is a sequence of bytes, as described under b below. The default is the ASCII control value SUB = "1A". The element a specifies a mapping from byte sequences to Unicode and back. It has the following attributes:
For example, using "Cp932/path/LF" specifies that Cp932 is to be used, but with a backslash instead of a yen sign, and that CRLF and CR are to be converted to LF.
Certain values are never legal Unicode scalar values: code points above 10FFFF, unpaired surrogate values (such as DF00), and non-character values (of the form xxFFFE or xxFFFF).
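The effect of the assignments above can be sketched as a simple longest-match lookup. The dictionary below hand-copies a few of the example mappings; it is illustrative only, not a parser for the XML format:

```python
# A few mappings from the example <assignments> table:
# byte sequence -> Unicode string.
MAPPINGS = {
    b"\xA1": "\uFF61",      # HALFWIDTH IDEOGRAPHIC FULL STOP
    b"\xA2": "\uFF62",      # HALFWIDTH LEFT CORNER BRACKET
    b"\x81\x56": "\u3003",  # DITTO MARK
}
SUB = "\uFFFD"  # REPLACEMENT CHARACTER for unassigned sequences

def to_unicode(data: bytes) -> str:
    """Map byte sequences to Unicode, longest sequence first."""
    out, i = [], 0
    while i < len(data):
        for width in (2, 1):         # try 2-byte sequences before 1-byte
            chunk = data[i:i + width]
            if chunk in MAPPINGS:
                out.append(MAPPINGS[chunk])
                i += width
                break
        else:
            out.append(SUB)          # unassigned: substitution option
            i += 1
    return "".join(out)
```

For example, the bytes A1 81 56 map to U+FF61 followed by U+3003, while an unassigned byte such as AA maps to U+FFFD under the substitution option.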
Open Issue: if we required that all valid byte sequences be either explicitly assigned or unassigned, then that would provide more error checking for the file, at the expense of having to specify extra unassignments.
The following provide samples that illustrate features of the format.
A full example is in CharacterMapping.xml. It is not a real example, since it tries to show all of the features in one file, whereas in real life only a subset would be used. You can view it directly with Internet Explorer, which will interpret the XML. The DTD is in CharacterMapping.dtd (if you are looking at this in a browser, choose the View Source menu item).
While in practice a mapping file is never required for UTF-8, since it is algorithmically derived, it is instructive to see the validity element applied to a complicated case. As a reminder, here are the valid ranges for UTF-8:
|Unicode Code Points||UTF-8 Code Units|
|0000||00|
|007F||7F|
|0080||C2 80|
|07FF||DF BF|
|0800||E0 A0 80|
|FFFF||EF BF BF|
|010000||F0 90 80 80|
|10FFFF||F4 8F BF BF|
Here is a simple version of the UTF-8 validity specification, with the shortest-form bounds checking and exact limit bounds checking omitted. This specification only checks the bounds for the first byte, and that there are the appropriate number (0, 1, 2, or 3) of following bytes in the right ranges. The single byte form does not need to be explicitly set; it is simply any single byte that neither is illegal nor requires additional bytes.
<validity> <!--Validity specification for UTF-8, partial boundary checks-->
    <illegal s="80" e="BF"/>
    <illegal s="F5" e="FF"/>
    <!-- 2 byte form -->
    <legal s="C0" e="DF" next="final"/>
    <legal type="final" s="80" e="BF"/>
    <!-- 3 byte form -->
    <legal s="E0" e="EF" next="prefinal"/>
    <legal type="prefinal" s="80" e="BF" next="final"/>
    <!-- 4 byte form -->
    <legal s="F0" e="F4" next="preprefinal"/>
    <legal type="preprefinal" s="80" e="BF" next="prefinal"/>
</validity>
The following provides the full validity specification for UTF-8, as shown in Figure 2: UTF-8 Boundaries.
<validity> <!--Validity specification for UTF-8, full boundary checks-->
    <illegal s="80" e="C1"/>
    <illegal s="F5" e="FF"/>
    <!-- 2 byte form -->
    <legal s="C2" e="DF" next="final"/>
    <legal type="final" s="80" e="BF"/>
    <!-- 3 byte form; low range is special -->
    <legal s="E0" next="prefinalLow"/>
    <legal type="prefinalLow" s="A0" e="BF" next="final"/>
    <!-- 3 byte form, normal -->
    <legal s="E1" e="EF" next="prefinal"/>
    <legal type="prefinal" s="80" e="BF" next="final"/>
    <!-- 4 byte form, low range is special -->
    <legal s="F0" next="preprefinalLow"/>
    <legal type="preprefinalLow" s="90" e="BF" next="prefinal"/>
    <!-- 4 byte form, normal -->
    <legal s="F1" e="F3" next="preprefinal"/>
    <legal type="preprefinal" s="80" e="BF" next="prefinal"/>
    <!-- 4 byte form, high range is special -->
    <legal s="F4" next="preprefinalHigh"/>
    <legal type="preprefinalHigh" s="80" e="8F" next="prefinal"/>
</validity>
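The boundary cases in this specification can be cross-checked against any conformant UTF-8 decoder; here Python's, which rejects the same overlong and out-of-range forms (C0..C1 lead bytes, E0 followed by 80..9F, F4 followed by 90..BF, and so on):

```python
def utf8_legal(seq: bytes) -> bool:
    """Return True if seq is well-formed UTF-8 per a strict decoder."""
    try:
        seq.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

# Legal boundary sequences from the table of valid ranges.
assert utf8_legal(b"\xC2\x80")              # lowest 2-byte form, U+0080
assert utf8_legal(b"\xE0\xA0\x80")          # lowest 3-byte form, U+0800
assert utf8_legal(b"\xF4\x8F\xBF\xBF")      # highest 4-byte form, U+10FFFF

# Illegal: overlong encodings and values above 10FFFF.
assert not utf8_legal(b"\xC0\x80")          # overlong U+0000
assert not utf8_legal(b"\xE0\x9F\xBF")      # overlong 3-byte form
assert not utf8_legal(b"\xF4\x90\x80\x80")  # above U+10FFFF
```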
Thanks to Kent Karlsson, Ken Borgendale, Bertrand Damiba, Mark Leisher, Tony Graham, and Ken Whistler for their feedback on the document.
Copyright © 1999 Unicode, Inc. All Rights Reserved. The Unicode Consortium makes no expressed or implied warranty of any kind, and assumes no liability for errors or omissions. No liability is assumed for incidental and consequential damages in connection with or arising out of the use of the information or programs contained or accompanying this technical report.
Unicode and the Unicode logo are trademarks of Unicode, Inc., and are registered in some jurisdictions.