Re: ASCII and Unicode lifespan

From: Doug Ewell (dewell@adelphia.net)
Date: Wed May 18 2005 - 00:15:24 CDT


    Peter Kirk <peterkirk at qaya dot org> wrote:

    >> It's usually considered better engineering practice to assume that a
    >> building, a bridge, or a standard will be in existence for a long
    >> time, and to build it so as to allow incremental upgrades such as
    >> earthquake retrofitting, than to assume its imminent obsolescence and
    >> underengineer it.
    >
    > True enough, although one needs to be realistic about such things.
    > There is no point in designing a car to last 50 years when its design
    > is likely to be obsolete in 10.

    "Its design" is probably where you and I differ. There's no doubt that
    the internal workings of a car have changed and improved over the years.
    We have fuel injection and catalytic converters today, not carburetors.
    But as far as the user-visible parts are concerned, I tend to agree with
    Curtis: once you close the hood (bonnet), there isn't much difference
    other than styling between today's cars and those of 50 years ago.

    The Cubans, by the way, are probably pretty glad that at least some cars
    were designed to last 50 years. :-)

    > And one needs to allow for necessary incremental upgrades instead of
    > sticking to over-restrictive stability policies.

    I thought Unicode did allow for incremental upgrades. Isn't that what
    4.1 was?

    To continue the car analogy, radical design changes in the automotive
    user interface don't seem to meet with much success or popular approval.
    You don't see a joystick or a Game Boy-style plus-shaped button in place
    of the steering wheel, even though our future drivers (and many of our
    current ones) are most familiar with that UI.

    > After all, when that earthquake comes, flexible structures are
    > likely to survive, but the inflexible ones which rejected retrofitting
    > are likely to collapse catastrophically.

    I'd be interested to know what part of Unicode you think is in danger of
    this type of obsolescence. (You too, Hans.)

    The ITU alphabets and BCDIC were ill-suited to data processing because
    of their limited repertoire and non-contiguous letters and digits.
    FIELDATA still did not provide lowercase. EBCDIC had non-contiguous
    letters and way too many different flavors. ASCII provided little or no
    support for languages other than English. The ISO 8859 family supported
    many more languages, but only a few at a time, and required out-of-band
    information or ISO 2022 switching to be correctly identified. Vendor
    standards were proprietary and too easily changed, and sources differ as
    to their exact content.
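    To make the "non-contiguous letters" complaint concrete, here is a small
    sketch of my own (not from the original message). In ASCII the uppercase
    letters occupy one unbroken range, so a "is this a letter?" test is a
    single comparison; in EBCDIC they sit in three separate runs, so the
    same test needs three checks and naive range code silently breaks:

```python
# Illustrative sketch: ASCII vs. EBCDIC uppercase-letter layout.
ASCII_UPPER = range(0x41, 0x5B)           # 'A'..'Z', one contiguous run

EBCDIC_UPPER = [range(0xC1, 0xCA),        # 'A'..'I'
                range(0xD1, 0xDA),        # 'J'..'R'
                range(0xE2, 0xEA)]        # 'S'..'Z' -- three separate runs

def is_ascii_upper(b: int) -> bool:
    return b in ASCII_UPPER               # a single range test suffices

def is_ebcdic_upper(b: int) -> bool:
    # The naive ASCII-style test (0xC1 <= b <= 0xE9) would wrongly accept
    # the punctuation bytes in the gaps; three tests are required.
    return any(b in run for run in EBCDIC_UPPER)

print(is_ascii_upper(0x4A))    # 'J' in ASCII
print(is_ebcdic_upper(0xD0))   # a gap byte between the 'I' and 'J' runs
```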

    Each of these character encodings had problems or limitations that led
    to its progressive replacement.
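    The ISO 8859 limitation in particular is easy to demonstrate. This is my
    own illustration, not the author's: the same byte stream decodes to
    different text under different 8859 parts, which is exactly why
    out-of-band charset information (or ISO 2022 switching) was needed to
    interpret it correctly:

```python
# One unlabeled byte: without out-of-band information there is no way to
# know which ISO 8859 part it was written in.
data = b'\xfd'

print(data.decode('latin-1'))     # ISO 8859-1 (Western European): 'ý'
print(data.decode('iso8859-9'))   # ISO 8859-9 (Latin-5, Turkish): 'ı'
```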

    Now, in keeping with this, what problems does Unicode present that will
    lead to its replacement by something better? How will the "something
    better" solve these problems without introducing new ones? How will it
    meet the challenge of transcoding untold amounts of "legacy" Unicode
    data? And how will it respond, as Unicode has had to, to the inevitable
    objections from supporters of other encoding systems?

    This is not a troll, by the way. I'm really interested in how you think
    this will work. Because heaven knows, I don't get it.

    --
    Doug Ewell
    Fullerton, California
    http://users.adelphia.net/~dewell/
    


    This archive was generated by hypermail 2.1.5 : Wed May 18 2005 - 00:41:00 CDT