Re: Response to Everson Phoenician and why June 7?

From: Dean Snyder (dean.snyder@jhu.edu)
Date: Thu May 20 2004 - 21:43:04 CDT

    Kenneth Whistler wrote at 4:51 PM on Thursday, May 20, 2004:

    >John Hudson asked, again:
    >
    >> My question, again, is whether there is a need for the plain
    >> text distinction in the first place?
    >
    >And I claim that there is no final answer for this question. We
    >simply have irresolvable differences of opinion, with some
    >asserting that it is self-evident that there is such a need,
    >and others asserting that it is ridiculous to even consider
    >encoding Phoenician as a distinct script, and that there is
    >no such need.
    >
    >My own take on this seemingly irreconcilable clash of opinion is
    >that if *some* people assert a need (and if they seem to be
    >reasonable people instead of crackpots with no demonstrable
    >knowledge of the standard and of plain text) then there *is*
    >a need. And that people who assert that there is *no* need
    >are really asserting that *they* have no need and are making
    >the reasonable (but fallacious) assumption that since they
    >are rational and knowledgeable, the fact that *they* have no
    >need demonstrates that there *is* no need.
    >
    >If such is the case, then there *is* a need -- the question
    >then just devolves to whether the need is significant enough
    >for the UTC and WG2 to bother with it, and whether even if
    >the need is met by encoding of characters, anyone will actually
    >implement any relevant behavior in software or design fonts
    >for it.
    >
    >In my opinion, Phoenician as a script has passed a
    >reasonable need test, and has also passed a significant-enough-
    >to-bother test.
    >
    >Note that these considerations need to be matters of
    >reasonableness and appropriateness. There is no absolutely
    >correct answer to be sought here. A character encoding standard
    >is an engineering construct, not a revelation of truth, and
    >we are seeking solutions that will enable software handling
    >text content and display to do reasonable things with it at
    >reasonable costs.
    >
    >If you start looking for absolutes here, it is relatively easy
    >to apply reductio ad absurdum. In an absolute sense, there is
    >no "need" to encode *any* other script, because they can *all*
    >be represented by one or another transliteration scheme or
    >masquerading scheme and be rendered with some variety or
    >other of symbol font encoding. After all, that's exactly what
    >people have been doing to date already for them -- or they
    >are making use of encodings outside the context of Unicode,
    >which they could go on using, or they are making use of graphics
    >and facsimiles, and so on. The world wouldn't end if all such
    >methods and "hacks" continued in use.
    >
    >The question is rather, given the fundamental nature of the
    >Unicode Standard as enabling text processing for modern
    >software, whether it is cost-effective and *reasonable* to provide
    >a Unicode encoding for one particular script or another,
    >unencoded to date, so as to maximize the chances that it
    >will be handled more easily by modern software in the global
    >infrastructure and to minimize the costs associated with
    >doing so.
    >
    >*That* is the test which should be applied when trying to
    >make decisions about which of the remaining varieties of
    >unencoded writing systems rise to the level of distinctness,
    >utility, and cost-effectiveness to be encoded as another
    >script in the standard.
    >
    >--Ken

    Your analysis of this engineering issue, with its seven repetitions of
    "reasonable", does not once mention, much less address, the PROBLEMS that
    will be caused by encoding this diascript.

    The precedent you are bent on setting here will open a can of worms for
    the dozens of diascripts of these 22 West Semitic letters.

    Your analysis here is a rationalization for a decision you made years ago
    before you ever consulted Semitic scholars on this issue.

    And you are ignoring the advice you are being given by Semitic scholars now.

    The problem is, you will not have to live with the resultant mess, but WE
    WILL. That's why we care more about this issue.

    I know Phoenician has been sexy, provocative, glamorous, and enthralling
    to historians of the alphabet for centuries; it was part of the Greek
    cultural psyche that they got their letters from the Phoenicians, and
    many modern books have simply repeated such ancient dicta uncritically. But
    among serious scholars of West Semitic scripts there are standing
    controversies about just what were, in fact, the exact sources for the
    Archaic Greek alphabets. No one doubts, to my knowledge, that the sources
    were Levantine, but there are conflicting signs, for example, in the
    shapes of individual letters, the letter names, and the multiple
    directions of writing, that point to sources other than what we call
    "Phoenician" today. The issue is an open area of discussion among
    knowledgeable historians of the alphabet.

    On the other hand, Hebrew, as a script system, is such a loaded and
    complicated series of cultural artifacts that it seems incongruous
    (mostly, I fear, to non-Semitists) to associate Hebrew and Phoenician
    in the same encoding.

    It's really a shame that the Unicode Consortium did not first encode the
    22 West Semitic characters as Canaanite and then additionally encode the
    Hebrew pointing systems, legacy encodings, etc.
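
    To make that layering concrete, here is a minimal sketch in Python, using
    only the existing Unicode Hebrew block; the sample word and point choices
    are illustrative, not drawn from this thread:

        import unicodedata

        def consonantal_skeleton(text: str) -> str:
            # Drop every combining mark (general category Mn): vowel points,
            # cantillation, shin/sin dots - leaving only the base consonants.
            return "".join(ch for ch in text
                           if unicodedata.category(ch) != "Mn")

        # "shalom", fully pointed: shin + shin dot + qamats, lamed,
        # vav + holam, final mem
        pointed = "\u05E9\u05C1\u05B8\u05DC\u05D5\u05B9\u05DD"
        bare = consonantal_skeleton(pointed)
        assert [f"{ord(c):04X}" for c in bare] == ["05E9", "05DC", "05D5", "05DD"]

    Everything the points add sits in a separable combining layer; what
    remains is the bare consonantal skeleton.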

    Remember, anything beyond the original 22 letters appeared only from the
    later Roman Empire onward. As scholars of ancient West Semitic
    languages we work on texts in several dialects and languages, written
    over a period of more than 1500 years, in various hands, reflecting
    several orthographies, but all using the very same 22 West Semitic
    characters. And, make no mistake, these are indeed the same sort of
    abstract characters Unicode seeks to encode.
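
    The one-to-one correspondence can be stated mechanically. The sketch below
    (Python again) folds the five Hebrew final forms to their base letters and
    maps the 22 base letters onto the Plane 1 code points proposed for
    Phoenician (U+10900 onward); that block assignment is the proposal under
    discussion, not an established part of the standard:

        # The 22 Hebrew base letters in abjad order, alef..tav, skipping the
        # five final-form code points interleaved in the Hebrew block.
        HEBREW_BASE = [
            0x05D0, 0x05D1, 0x05D2, 0x05D3, 0x05D4, 0x05D5, 0x05D6, 0x05D7,
            0x05D8, 0x05D9, 0x05DB, 0x05DC, 0x05DE, 0x05E0, 0x05E1, 0x05E2,
            0x05E4, 0x05E6, 0x05E7, 0x05E8, 0x05E9, 0x05EA,
        ]
        assert len(HEBREW_BASE) == 22

        # Final kaf, mem, nun, pe, and tsadi fold to their base letters.
        FINAL_TO_BASE = {0x05DA: 0x05DB, 0x05DD: 0x05DE, 0x05DF: 0x05E0,
                         0x05E3: 0x05E4, 0x05E5: 0x05E6}

        # The proposed Phoenician letters follow the same abjad order, so the
        # map is a plain enumeration from the block's first code point.
        TO_PHOENICIAN = {cp: 0x10900 + i for i, cp in enumerate(HEBREW_BASE)}

        def hebrew_to_phoenician(text: str) -> str:
            out = []
            for ch in text:
                cp = FINAL_TO_BASE.get(ord(ch), ord(ch))
                out.append(chr(TO_PHOENICIAN.get(cp, cp)))  # pass others through
            return "".join(out)

    The inverse map is just as trivial, which is the point: only the code
    points change; the 22 abstract characters do not.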

    I think the unspoken issues here are based more on culture, more on
    intellectual chauvinism, and maybe even on religion, than on encoding
    issues per se. If that is, in fact, the case, I have nothing further to
    offer here other than to say that, since cultures, intellectual
    paradigms, and religions come into and go out of favor, I would support
    international standard encodings that are more objective and more
    permanent in nature than currently fashionable trends.

    Respectfully,

    Dean A. Snyder

    Assistant Research Scholar
    Manager, Digital Hammurabi Project
    Computer Science Department
    Whiting School of Engineering
    218C New Engineering Building
    3400 North Charles Street
    Johns Hopkins University
    Baltimore, Maryland, USA 21218

    office: 410 516-6850
    cell: 717 817-4897
    www.jhu.edu/digitalhammurabi


