Re: On the possibility of encoding webdings in Unicode (from Re: square bullets added to unicode.)

From: Asmus Freytag (
Date: Thu Jan 27 2011 - 12:02:00 CST


    On 1/27/2011 6:44 AM, Doug Ewell wrote:
    > William_J_G Overington<wjgo underscore 10009 at btinternet dot com>
    > wrote:
    >> Even if the door has been opened, I do not understand why the opening
    >> of the door for encoding anything graphic as characters would be
    >> considered to be wrong. There are many unused planes of character
    >> codepoints. I feel that it is better to use some of them if some
    >> people want to use them. Indeed I have various ideas for encoding
    >> things as characters that are not just graphic characters, such as my
    >> idea for encoding localizable sentences.
    > The UTC and WG2 do have a set of principles and policies for what sort
    > of things are appropriate to encode in a character encoding and what
    > sort of things are not. Regardless of whether any of those principles
    > were compromised by encoding the emoji, the door is NOT wide open for
    > adding all kinds of encodable "stuff" to Unicode.

    > The fact that there are 865,000 unassigned code points does NOT change
    > this. It would not matter if there were 865 million.
    > This needs to be an FAQ, if it is not already. I've seen several people
    > besides William express this view; William is merely one of the most
    > persistent.

    And a FAQ will not stop them. (And see, you're not even sure whether a
    FAQ exists already.)

    Also, the case is not 100% black and white.

    The standard will, over time, reflect advances in the understanding of
    writing systems as well as adapt to changes in the way written text is
    used on devices -- in 1988 nobody could have foreseen the explosion of
    text messages on handheld devices, nor their adoption of symbols.

    There are other writing systems that were listed explicitly as
    ineligible in Unicode 1.0. But today, a new understanding has emerged
    that some of them have an encodable character "backbone" even if the
    full rendition of the system requires markup and intricate rendering rules.

    However, while these adaptations are necessary, the way to get there is a
    slow and deliberate process. And that is a good thing.

    So far, the committees have successfully filtered all the external
    impulses, letting through those that fit the existing design and
    principles, and yielding on those principles only rarely -- only when
    overwhelming evidence suggested a proposal had strong merits and
    therefore required the standard to adapt.

    After seeing this process in operation for over 20 years, largely
    successfully, I see no reason to fear its sudden collapse. There have
    always been those advancing completely incompatible agendas, but it has
    never resulted in "anything goes", and there's no reason to think it
    would in the future.

    So far, my view is that the standard is retaining, for the moment, the
    right level of flexibility and that principles should best be seen as
    "molasses" and not doorstops - they prevent the doors from being blown
    open by every kind of whim, but will allow them to yield slowly to the
    combined forces behind a well-justified and well-constructed proposal
    that has won acceptance and agreement, even if some aspect of it is

    > --
    > Doug Ewell | Thornton, Colorado, USA |
    > RFC 5645, 4645, UTN #14 | ietf-languages @ is dot gd slash 2kf0s

    This archive was generated by hypermail 2.1.5 : Thu Jan 27 2011 - 12:03:50 CST