Re: double hyphen

From: Patrick Andries (patrick.andries@xcential.com)
Date: Mon Mar 07 2005 - 14:12:05 CST


    Michael Everson wrote:

    > At 07:34 -0800 2005-03-07, Patrick Andries wrote:
    >
    >>> Surely the problem is on the side of the people who made the decree
    >>> without thinking of the processing consequences.
    >>
    >>
    >> Ah, but this is what the user community decided...
    >
    >
    > Is it, indeed.

    Well, it all depends on what the meaning of "the user community" is. It is
    here *a* user community, and its decision has no impact on Unicode, only on
    "clever" typesetting macros that substitute an em dash when two hyphens
    follow each other (the kind of macro sketched below).
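
    For what it is worth, here is a minimal sketch of the kind of "clever"
    macro I have in mind (a regex-based text filter of my own invention,
    written in Python; the function name is mine):

        import re

        EM_DASH = "\u2014"  # U+2014 EM DASH

        def smart_dashes(text: str) -> str:
            """Replace a run of exactly two hyphen-minus characters with an em dash.

            This is the "clever" substitution in question: it silently rewrites
            a literal double hyphen, which is precisely the problem for any
            decree that wants the two hyphens kept as such in the text.
            """
            return re.sub(r"(?<!-)--(?!-)", EM_DASH, text)

        print(smart_dashes("a double--hyphenated word"))  # the '--' becomes an em dash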

    >
    > I assume that your comment is intended to be ironic, because you
    > disagree with the Copticists

    No, that was not the reason, since this does not involve Unicode. But since
    you regrettably mention it and attempt to embarrass me (Patrick against the
    Copticists), I will address the issue in an attempt to show that Canada's
    concerns (not only mine) were and are reasonable, not that we are meddling,
    stubborn know-it-alls who are right.

    But I will address this only once here, since it is of historical interest
    for Coptic, and I don't particularly like the tone of some of the exchanges
    here recently ("you've pissed off all the right people", addressed publicly
    to an inquisitive but not well-read newcomer; the more patient people now
    know they are not among the "right people").

    The Copticists: you mean the Copticist co-author who, in fact, said
    precisely what we said in his public correspondence: caseless letters. I
    never saw a document in which the Copticists (how many? in what forum?
    etc.) said « we want new characters that never existed and for which we
    never felt a need in the centuries (two at least ;-)) we have been writing
    Coptic intermixed with Latin. » ;-) Not a single attested use of case
    exists for these Old Coptic and Nubian letters.

    > about their requirement that all of the letters of their script be casing

    even though these Old Coptic and Nubian letters were never cased... One of
    the side effects of your model? What did you say again? « Unicode is
    supposed to help people represent that mess. Unicode is not supposed to
    tidy it up and fix it. » Nor to invent new characters to fix and tidy it
    up, I suppose? We do not usually accept letters found in a single author
    (no interchange need); how about letters found in no author? The
    typographical grounds mentioned in the proposal could, we (Canada) believe,
    be addressed with proper font technology (e.g. OpenType), and as for the
    programmatic "need", it is a simple table lookup that imposes no uniform
    bicamerality (ok, casing); a sketch of such a lookup follows below. But
    this is settled and, unfortunately, encoded in Unicode 4.1; there is no
    need for you, Michael, to bring this subject up publicly.
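
    To make concrete what I mean by a simple table lookup, here is a sketch
    in Python (the table is deliberately partial and only illustrative):

        # Partial uppercase table: only letters that actually have a case
        # pair appear in it; caseless letters are simply absent.
        UPPERCASE_MAP = {
            "a": "A",            # Latin, for comparison
            "\u03E3": "\u03E2",  # COPTIC SMALL LETTER SHEI -> its capital
        }

        def to_upper(text: str) -> str:
            """Uppercase only the letters that have a case pair.

            No uniform bicamerality is imposed on a whole script: a letter
            with no entry in the table is caseless and maps to itself.
            """
            return "".join(UPPERCASE_MAP.get(ch, ch) for ch in text)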

    Only bad blood results from it. Our (Canada's) position is well known and
    documented in a publicly available ISO document; we have no reason to be
    ashamed because we disagreed with you.

    However, in a constructive spirit, we voted yes to Amd. 1 of ISO/IEC
    10646, which will be part of Unicode 4.1 and includes Coptic as you
    proposed it. Let this rest.

    > , and with the N'Ko about their requirement to retain a distinction
    > between some "old" and "new" letters, and their rejection of a
    > unification of their script-specific diacritics with generic ones.

    Again, some people: you have yourself admitted (on Unicode-Afrique) that
    the old letterforms have been replaced by new letterforms in new editions
    and, as stated before, that the plain-text use of these old letterforms is
    to explain the evolution of this young script.

    Also, you say the N'ko diacritics are script-specific, but it remains to be
    proven that they need to be coded in the same block as the other N'ko
    letters (Syriac uses some generic diacritics, as we advocate for N'ko): it
    may be more prudent to consider them generic diacritics until a compelling
    technical reason to the contrary is given (a small illustration follows
    below). For us, there is no need to duplicate signs of similar shape and
    behaviour (there may be unexpected complications when unnecessary signs are
    added: phishing springs to mind). I personally do not understand this
    fascination with having newly encoded scripts fit neatly into one block, as
    if mimicking some history of writing systems book.
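
    To illustrate the point about generic diacritics, a small sketch of my own
    (in Python, using Latin and Syriac base letters merely as examples): one
    generic combining mark from the Combining Diacritical Marks block attaches
    in plain text to base letters of different scripts, so nothing forces a
    script-specific duplicate of the same sign.

        import unicodedata

        COMBINING_DOT_ABOVE = "\u0307"  # a generic combining mark, U+0307

        samples = [
            "b" + COMBINING_DOT_ABOVE,       # Latin base letter
            "\u0710" + COMBINING_DOT_ABOVE,  # SYRIAC LETTER ALAPH as base
        ]

        for s in samples:
            # NFC may or may not compose the pair; either way the text is
            # representable and interchangeable in plain text.
            print([unicodedata.name(ch) for ch in unicodedata.normalize("NFC", s)])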

    I understand this opposition may be a question of philosophy: "why code
    these signs when you can live without them and represent the same texts?"
    versus "why not, since that also seems to work?" To me, the prudent way is
    the one that requires justification for additions (why is a smaller set of
    signs not sufficient?), since adding needed characters to Unicode is
    relatively easy, whereas removal is impossible and deprecation voluntary,
    it seems. A formal document to this effect (why is less not good enough in
    the case of N'ko?) would be very much appreciated; there is no need for you
    to get upset again, here or elsewhere. We would also like to know why it
    seems that not all the old signs have been included in your proposal (see
    "anciennes lettres non utilisables" [old letters that cannot be used],
    http://www.hapax.qc.ca/pdf/anciennes-et-nouvelles-formes.pdf): how could
    one discuss these old signs in plain text? An answer to this question in
    your formal document would be greatly appreciated.

    I will not debate this topic (here or elsewhere) in the absence of
    something more convincing than « I spoke to a few people and they told me
    they wanted this coded as a separate character » (by the way, there is no
    trace of such a request from the Copticist*s*). A formal and public
    document would be an excellent medium in which to make progress and try to
    resolve our different points of view: it would limit unpleasant exchanges
    and allow more thoughtful consideration by all. I think I have already said
    so in other circles.

    Again, we have mentioned in our ad hoc report ways in which we could be
    convinced to accept the N'ko proposal as it stands; other arguments are
    obviously also welcome. We have not heard of any possible way in which you
    could be convinced to adopt another position.

    In any case, coming back to the user community's wishes: Unicode and ISO
    do not systematically encode as single new code points whatever some users
    may want to see encoded as such. A writing system can often be respected as
    it exists by more than one encoding approach, and it is precisely the job
    of the encoding committees to analyse this calmly and not be bullied into a
    technical encoding decision.

    Best regards,

    P. A.
    (who will not discuss this thread any further on this list)


