> Excluding the hex-byte characters (which almost nobody seems to like),
> we're only talking about 256 characters, aren't we? I guess I don't
> understand why the opposition is so vigorous.

As *glyphs*, nobody cares. They're fine. Anybody who wants to use
glyphs like these to represent hex byte values may feel free to do
so, and nobody will object.

As *characters*, they are useless dreck. There is no reason to
introduce into a text stream a *character*--say U+2841--to serve
as a visible symbolic placeholder for the byte value 0x41. What
purpose does this serve? Debuggers translate *byte values* into
visibly displayed glyphs (either unitary, as proposed here, or
simply as sequences of glyphs for the hex digits, i.e. "41").
Adding an arbitrary layer of textual *characters* in between
just gets in the way of what the debugger should be doing.
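
To make the point concrete, here is a minimal sketch (in C, with
hypothetical names, not any particular debugger's code) of the
translation a dump routine actually performs: straight from raw byte
values to displayed hex-digit glyphs, with no textual character in
between.

    /* Sketch: a dump routine maps byte *values* directly to the
       glyphs for their hex digits.  No "hex-byte character" such
       as U+2841 ever enters a text stream along the way. */
    #include <stdio.h>

    static void dump_bytes(const unsigned char *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            printf("%02X ", buf[i]); /* 0x41 shows as the digits "41" */
        putchar('\n');
    }

    int main(void)
    {
        const unsigned char sample[] = { 0x41, 0x00, 0xFF };
        dump_bytes(sample, sizeof sample); /* prints: 41 00 FF */
        return 0;
    }

A font could just as well render each byte as a single unitary glyph;
either way the mapping runs from byte value to glyph, not through an
encoded character.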

Unicode is a *character* encoding standard. It is not a glyph
registry. People who want a registry of well-defined glyphs that
font vendors can use to produce common collections of displayable
glyphs (for terminal emulations or whatever) should be talking to
AFII (the Association for Font Information Interchange) instead.