RE: What is the principle?

From: Mike Ayers
Date: Fri Mar 26 2004 - 14:24:23 EST

  • Next message: Peter Constable: "RE: Printing and Displaying Dependent Vowels"

    > From: [] On Behalf Of Jim Allan
    > Sent: Friday, March 26, 2004 1:34 PM

    > Arcane Jill posted:
    > > (A) A proposed character will be rejected if its glyph is
    > > identical in appearance to that of an extant glyph, regardless of
    > > its semantic meaning,

    > Examples from the recent past... and the introduction of numerous
    > Latin alphabet letter forms in various styles as mathematical
    > characters.

            These were encoded specifically because of the difference in
    glyph. This is the inverse of (A), not its contrapositive, and has no
    bearing on the truth of (A).
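    A small illustration of this point (mine, not from the thread), using Python's standard `unicodedata` module: the mathematical letterforms are separately encoded characters, and their compatibility decomposition records the relationship back to the plain Latin letters without unifying them.

```python
import unicodedata

# U+1D400 MATHEMATICAL BOLD CAPITAL A is a distinct character from
# U+0041 LATIN CAPITAL LETTER A; it was encoded because its glyph
# (and its role in mathematical notation) differs.
bold_a = "\U0001D400"
print(unicodedata.name(bold_a))               # MATHEMATICAL BOLD CAPITAL A

# Compatibility (NFKC) normalization folds it back to plain "A",
# acknowledging the relationship without merging the code points.
print(unicodedata.normalize("NFKC", bold_a))  # A
```

    So the two characters remain distinct in plain text, while software that wants to treat them alike can do so via compatibility normalization.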

    > > (B) A proposed character will be rejected if its semantic meaning
    > > is identical to that of an extant character, regardless of the
    > > appearance of its glyph,

    > For example, that a proposed character has the approximate semantic
    > value of IPA _b_ doesn't mean that it should be taken as just a
    > variant glyph of IPA _b_ and coded as U+0062. By that rule a large
    > number of uncoded scripts could be easily coded by assigning the
    > glyphs to encoded glyphs of approximately the same meaning and
    > using a font change to render the script.

            I'm lost on your argument. It seems to me that you are confusing
    "semantic meaning" with "pronounced sound". For a well-chosen definition
    of "semantic meaning", (B) would be mostly true.
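    To sketch the distinction being drawn here (again my illustration, not part of the original exchange): characters that can stand for similar sounds still carry distinct identities when their script or function differs, which the Unicode character names make visible.

```python
import unicodedata

# Latin "b" (which also serves as the IPA symbol [b]) and Greek beta
# can both represent a /b/-like sound, yet they are distinct characters
# because they belong to different scripts with different identities.
print(unicodedata.name("b"))       # LATIN SMALL LETTER B
print(unicodedata.name("\u03b2"))  # GREEK SMALL LETTER BETA

# Conversely, the IPA implosive symbol has its own code point because
# its identity and glyph differ from plain "b".
print(unicodedata.name("\u0253"))  # LATIN SMALL LETTER B WITH HOOK
```

    On this reading, "same sound" neither forces unification nor blocks separate encoding; identity does.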

    > When the question of unifying or distinguishing between characters is
    > considered, it seems to me that the most important question is how
    > confusing or useful it would be to unify or distinguish between those
    > particular characters from the point of view of current users or
    > expected users.
    > Unicode should do what is most useful.

            I agree, but "should" != "does". At this point, it is important
    to note that political issues play a very big role in encoding. Unifying
    the world's characters cannot be done without unifying the people who use
    them, at least with respect to character encoding, and decisions are
    often made for that reason. This is why an algorithmic approach to
    character encoding decisions is doomed to fail. This is not, as it may
    appear, a bad thing: it is proof that Unicode exists to serve us. It is
    also why it is frequently pointed out that citing an existing encoded
    character with similar characteristics is not necessarily an argument
    for encoding (although there are technical reasons for that point as
    well).

    > Honest debate does arise, because what is useful in one sphere or
    > from one point of view may cause problems in another sphere or from
    > another point of view. Sometimes there is no definite correct
    > answer.

            This is also a factor.


    This archive was generated by hypermail 2.1.5 : Fri Mar 26 2004 - 14:59:57 EST