Re: Script variants and compatibility equivalence

From: Asmus Freytag (asmusf@ix.netcom.com)
Date: Fri Jun 04 2004 - 19:43:12 CDT


    At 02:21 PM 6/4/2004, Peter Kirk wrote:

    >>>>>There is no consensus that this Phoenician proposal is necessary......
    >
    >I am revisiting this one because I realise now that Ken has been somewhat
    >economical with the truth here. There ARE cases in which entire alphabets
    >have been given compatibility decompositions to other alphabets.

    There are alphabets and alphabets.

    >For example there are the Mathematical Alphanumeric Symbols, the Enclosed
    >Alphanumerics, and the Fullwidth and Halfwidth Forms, as well as
    >superscripts, subscripts, modifier letters etc. These symbols

    Most of them are symbol sets that are graphically derived from alphabets.
    They don't function like alphabets: you don't see entire text runs in
    one of these sets, only a character at a time.

    >have these compatibility decompositions because they are not considered to
    >form a separate script, but rather to be glyph variants of characters in
    >Latin, Greek, Katakana etc script. Do these compatibility decompositions
    >cause technical difficulties?

    Yes, they do, if people apply them. Compatibility decompositions modify the
    semantics of a text (a known feature of them), so you must limit their use
    to situations where that is the intended effect, or you will disappoint your
    users.
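
    A quick Python sketch of that semantic flattening (my illustration; the
    specific characters are just convenient examples):

        import unicodedata

        # SUPERSCRIPT FIVE, FULLWIDTH A and MATHEMATICAL BOLD CAPITAL A
        # all carry compatibility decompositions; NFKC flattens them to
        # plain ASCII, discarding the superscript / fullwidth /
        # mathematical semantics along the way.
        assert unicodedata.normalize('NFKC', '\u2075') == '5'
        assert unicodedata.normalize('NFKC', '\uFF21') == 'A'
        assert unicodedata.normalize('NFKC', '\U0001D400') == 'A'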

    >>Compatibility decompositions directly impact normalization.
    >
    >Of course. And the point of suggesting compatibility decomposition here is
    >precisely so that compatibility normalisation, as well as default
    >collation, folds together Phoenician and Hebrew variant glyphs of the same
    >script.

    No. Compatibility decompositions, as they stand, are a blunt instrument.
    Read UTR #30, Character Foldings, to see what I mean.
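
    The point of the foldings there is that each one is a separate, targeted
    operation, unlike the all-or-nothing compatibility decomposition. A rough
    Python sketch of a width-only fold (my own approximation, not data taken
    from the report):

        import unicodedata

        def fold_width(s):
            """Undo only the <wide>/<narrow> compatibility variants,
            leaving superscripts, fractions, ligatures etc. untouched."""
            out = []
            for ch in s:
                decomp = unicodedata.decomposition(ch)
                if decomp.startswith('<wide>') or decomp.startswith('<narrow>'):
                    out.append(chr(int(decomp.split()[1], 16)))
                else:
                    out.append(ch)
            return ''.join(out)

        assert fold_width('\uFF21\u2075') == 'A\u2075'    # width folded, superscript kept
        assert unicodedata.normalize('NFKC', '\uFF21\u2075') == 'A5'  # NFKC folds both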

    >>Cross-script equivalencing is done by transliteration algorithms,
    >>not by normalization algorithms.
    >
    >This begs the question.

    No, I think Ken's been very clear here.
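
    To make the distinction concrete: a cross-script equivalence is an explicit
    mapping that you apply deliberately, something like the following sketch.
    (The code points are those of the Phoenician proposal under discussion, so
    this is purely illustrative.)

        # Transliteration, not normalization: an explicit table mapping
        # the proposed Phoenician letters onto the corresponding Hebrew
        # base letters.
        PHOENICIAN_TO_HEBREW = {
            0x10900: '\u05D0',  # ALF  -> alef
            0x10901: '\u05D1',  # BET  -> bet
            0x10902: '\u05D2',  # GIML -> gimel
            # ... and so on for the rest of the 22-letter repertoire
        }

        def transliterate(s):
            return s.translate(PHOENICIAN_TO_HEBREW)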

    You continue "... just as one would not describe as transliteration
    representation in Times New Roman of a Latin script text in mediaeval
    handwriting or in Fraktur", which in fact is incorrect: Fraktur uses the
    long s, which needs to be transliterated to a normal s.
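
    (Incidentally, that particular substitution is also one of the
    compatibility decompositions, since U+017F LATIN SMALL LETTER LONG S
    decomposes to 's':)

        import unicodedata
        assert unicodedata.normalize('NFKC', '\u017F') == 's'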

    >If what I have suggested is ridiculous, so is what the UTC has already
    >defined for Mathematical Alphanumeric Symbols.

    It's not, because it's not the same issue. Symbol sets and alphabets are
    different. And trying to re-open and re-argue this repeatedly on this list
    doesn't improve the analogy.

    A./


