Re: New contribution

From: Kenneth Whistler (kenw@sybase.com)
Date: Wed Apr 28 2004 - 21:15:00 EDT

    Dean,

    > Then why were Chinese, Japanese, and Korean unified?

    Please refer to TUS 4.0, pp. 296-303, and, in particular, Table 11-2.

    > I'm really not
    > trying to open a can of worms here,

    Yes you are.

    > but what explicitly are the triggers
    > for script unification in Unicode? If these were clearly spelled out the
    > decision should be simple for Phoenician/Hebrew. But if different
    > criteria are applied in different scenarios then every situation will
    > generate the kind of ongoing discussion we have had about Phoenician/Hebrew.

    The latter will pertain, because each situation *is* different, and
    the easy cases have almost all been dealt with already.

    The only slam dunks left regarding script identity tend to be the
    con-scripts, and those are problematic to encode for reasons other
    than their identity as scripts.

    You are well aware of the ongoing controversies regarding the
    exact historical boundaries of the Sumero-Akkadian cuneiform
    script, for example. There *are* no axiomatic principles of
    script identity that can be applied across the board to settle
    that case, or any other question of where the historical
    boundaries lie for a candidate script whose repertoire of
    characters is to be separately encoded in Unicode.

    > There is no comparison between looking at characters in a chart and in
    > working with, and reading, texts in those characters. You cannot just
    > brush off the Phoenician scholarly community, the most important users of
    > these characters, nor can you brush off their input on this, so flippantly.

    This I concur with.
     
    >
    >
    > >To unify Hebrew and Phoenician scripts would be ahistorical at best.
    > >A silly unification.
    >
    > Not anywhere near as "silly" as CJK unification.

    The unification of Han characters was not "silly".

    > The Canaanite script is
    > a script continuum spread across Phoenician, Punic, Aramaic, Hebrew,
    > Moabite, and Ammonite communities, all sharing a common script origin
    > with each developing independently, some more and some less, over the
    > centuries. Where one dips one's finger in this stream of continuity and
    > pronounces "script dis-unification!" is not an easy thing to do.

    Of course, but not in principle different from attempting the same
    for Greek, Latin, Old Italic, Alpine, and Gothic, for example.

    > >I am
    > >actually astonished to see it suggested that it should be unified
    > >with Hebrew.
    >
    > I suggest that this is only because you are not an actual reader of
    > ancient Canaanite/Aramaic/Hebrew texts.

    As long as we are arguing ad hominem, I might add that you suggest
    that CJK unification was silly only because you are not an actual
    reader of ancient Chinese, Japanese, and Korean texts.

    (Try your argument on for size.)

    > I'm not so sure. But at any rate, you are comparing the endpoints of this
    > script continuum (Phoenician and modern Hebrew). Before you proceed here,
    > you'd better decide what criteria you will use to separate out scripts in
    > this script continuum or we will be right back here having the same
    > discussions over and over again with people who want to distinguish
    > between Moabite, Ammonite, Old Aramaic, Imperial Aramaic, Punic, ... in
    > plain text.

    Correct. There clearly needs to be consensus among the likely users
    of a script encoded in Unicode that the repertoire and its encoding
    actually meet some demonstrable need for text representation. If
    they do not, then we can skip encoding it and get on with the
    encoding of something like Tifinagh, for which official
    standards bodies of governments are clamoring to have particular
    repertoires encoded. I suspect I know where the UTC priority
    will lie if push comes to shove between those two scenarios.

    For the Aramaic script continuum there are two potential easy
    answers:

    1. Hebrew is already encoded, so just use Hebrew letters for
    everything and change fonts for every historical variety (a
    minimal sketch of this option follows the list).

    2. Encode a separate repertoire for each stylistically distinct
    abjad ever recorded in the history of Aramaic studies, from
    Proto-Canaanite to modern Hebrew (and toss in cursive Hebrew, for
    that matter), starting with Tables 5.1, 5.3, 5.4, and 5.5 of
    Daniels and Bright and adding whatever you wish to that.
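
    To make option 1 concrete, here is a minimal sketch in Python. It
    assumes, purely for illustration, that a higher-level protocol
    (HTML-style markup with made-up font names) carries the variety
    distinction that the plain text itself does not:

        # Option 1: one underlying plain-text encoding (the existing
        # Hebrew block, U+0590..U+05FF), with the historical variety
        # signaled only by font choice in markup. The font names
        # below are hypothetical.

        WORD_MLK = "\u05DE\u05DC\u05DA"  # mem, lamed, final kaf: "mlk" ('king')

        def tag_variety(text: str, font: str) -> str:
            """Wrap plain text in markup that selects a glyph style."""
            return f'<span style="font-family: {font}">{text}</span>'

        phoenician_view = tag_variety(WORD_MLK, "SomePhoenicianFont")
        hebrew_view = tag_variety(WORD_MLK, "SomeSquareHebrewFont")

        # The plain text is identical in both cases: searching, collation,
        # and interchange cannot tell the two varieties apart.
        print(WORD_MLK)

    The trade-off is visible at a glance: under option 1 the variety
    distinction lives entirely outside the plain text.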

    But the *correct* answer is likely to be the hard one that carves
    up that continuum into some useful small set of repertoires to
    be encoded as separate "scripts" and identifies each of the
    abjad varieties to be associated with each respective "script",
    so that extant texts can be correctly encoded in an
    interoperable way.
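
    To put the hard answer in equally concrete terms, here is a hedged
    sketch of such a carving-up expressed as data; every grouping below
    is an invented placeholder, not a proposal:

        # Hypothetical carving-up of the continuum: a small set of
        # encoded "scripts", each associated with the abjad varieties
        # it is meant to cover. All assignments are illustrative only.

        SCRIPT_OF_VARIETY = {
            "Phoenician":       "Phoenician",
            "Punic":            "Phoenician",
            "Moabite":          "Phoenician",
            "Ammonite":         "Phoenician",
            "Old Aramaic":      "Phoenician",
            "Imperial Aramaic": "Imperial Aramaic",
            "Square Hebrew":    "Hebrew",
            "Cursive Hebrew":   "Hebrew",
        }

        def script_for(variety: str) -> str:
            """Return the encoded script a text in the given variety would use."""
            return SCRIPT_OF_VARIETY[variety]

        # Interoperability then means two encoders of the same Moabite
        # inscription arrive at the same underlying code points:
        assert script_for("Moabite") == script_for("Punic")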

    > I'm not saying we shouldn't encode the "landmarks" in the Canaanite
    > script continuum;

    You aren't? Good. Then instead of objecting on generic grounds
    to the Phoenician proposal, answer the following question:

    A. Does Phoenician constitute a "landmark" in the Canaanite
       script continuum? Yes/No
       
    And once you answer that question, perhaps you can contribute to
    a specification of what the rest of the list of appropriate "landmarks"
    consists of.

    > I'm only saying that expert opinion is needed in
    > determining just what those landmarks are,

    Absolutely. Please provide your expert opinion.

    > based on some set of agreed
    > upon criteria.

    But do not expect a set of axioms to be provided to you that will
    make answering the questions easy.

    The very nature of the problem requires reaching a consensus
    among users of the proposed text encoding regarding what
    text representation purpose it is intended for, and within
    that context, what the useful boundaries of encoding would
    be. That is an *operational* definitional problem, not an
    *axiomatic* one.

    The question to ask is: Does this proposed identification of
    script *make sense* for the text representational use it is
    intended for?

    The question not to ask is: Where is the set of criteria
    whereby I can determine whether "Unicode" considers Ammonite
    and Moabite to be the same script or not?

    --Ken

    P.S. In case anyone should wish to misconstrue my position, I
    am *not* an expert on Aramaic, and I do *not* have a preconceived
    opinion about whether Phoenician should or should not be
    encoded as a distinct script, and if encoded as a distinct script,
    what other varieties of Aramaic script it would be considered
    distinct from. Neither I nor the company I represent has a
    burning need for this encoding, so I am depending on expert
    testimony from those who do have such needs to inform me regarding
    what the best way forward would be when this actually comes up
    to the UTC for decisions.


