At 16:51 97-01-09 -0500, email@example.com wrote:
>"Alain LaBonté" writes:
>> I understand this, but it would be nice if, within plane 2, or within any
>> chunk eventually allocated in the BMP (plane 0), allocation were made in
>> chunks corresponding to radical/strokecount segments. After all, we know
>> the number of radicals, we know the order of magnitude of the number of
>> characters in the most comprehensive Hanzi dictionaries, and we could
>> even make provision for more characters to be created.
>> It is desirable that allocation not be made randomly for such a huge
>> character set. For small numbers of characters it would not matter for
>> ordering purposes at all, but for this one it does; otherwise we risk future
>> nightmares when making default ordering tables (by radical and/or
>> strokecount) in ISO/IEC 14651 for extra Chinese characters that will have to
>> be inserted in the nicely-ordered set of Han characters in the current
>I think we cannot achieve the goal of easy ordering of the Han characters.
>We already have one section in the BMP, and another is coming
>along, either in the BMP or in plane 1. We need to sort these two sets
>together, so we already need to merge the two sets, with
>individual weights for each character.
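The merge described above can be sketched in a few lines. This is a hypothetical illustration, not actual table data: the code points are real Han characters, but the (radical, strokecount) keys and the two toy blocks are invented to show why independently allocated blocks force a per-character weight table.

```python
# Two Han blocks allocated independently: each code point maps to an
# invented (radical, strokecount) sort key for illustration only.
bmp_block = {0x4E00: (1, 0), 0x4E8C: (7, 0)}     # hypothetical BMP entries
ext_block = {0x20000: (1, 2), 0x20001: (7, 1)}   # hypothetical plane-2 entries

# Because the blocks interleave in radical/strokecount order, the merged
# ordering table must record an individual weight for every character.
merged = sorted({**bmp_block, **ext_block}.items(), key=lambda kv: kv[1])
weights = {cp: w for w, (cp, _key) in enumerate(merged)}
# weights interleaves the two blocks: 0x4E00 < 0x20000 < 0x4E8C < 0x20001
```

Note how characters from the two blocks alternate in the result; no contiguous range of code points keeps a contiguous range of weights, which is exactly the per-character bookkeeping being objected to.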
In the BMP the allocation was ordered from the start by radical/strokecount,
as agreed among the CJKV countries (so in the current BMP, Han is the easiest
script to order with this shape-oriented criterion agreed by the countries
involved). We will of course have to make insertions for additions. That, we
know.
But it would be easier to make insertions in contiguous chunks by
radical/strokecount than one by one for tens of thousands of additional
characters. If code allocation is made with this in mind (wherever possible,
of course), it will be less of a nightmare to build a good international
default. That's all I'm saying. It seems obvious; maybe it is not to everybody.
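The chunk-based alternative can be sketched the same way. Again, the chunk boundaries below are invented for illustration; the point is that when additions land in contiguous ranges that already follow radical/strokecount order, an ordering table only needs one entry per range, not one per character.

```python
# Hypothetical contiguous chunks, listed in radical/strokecount order.
# Each tuple is (first code point, last code point) of a range.
chunks = [
    (0x4E00, 0x4E0F),    # invented chunk: radical 1 characters
    (0x20000, 0x2000F),  # invented chunk: radical 1 additions in plane 2
    (0x4E10, 0x4E1F),    # invented chunk: radical 2 characters
]

def weight(cp):
    """Sort key = (chunk index, offset in chunk); tuples compare in order,
    so no per-character weight table is needed."""
    for i, (lo, hi) in enumerate(chunks):
        if lo <= cp <= hi:
            return (i, cp - lo)
    raise ValueError(f"code point {hex(cp)} not in any chunk")
```

Inserting a new block of additions then means inserting one line into `chunks` at the right radical/strokecount position, rather than merging tens of thousands of individual weights.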
This archive was generated by hypermail 2.1.2 : Tue Jul 10 2001 - 17:20:33 EDT