Re: Your Message Forwarded

From: Edward Cherlin (
Date: Sat Aug 17 1996 - 16:56:15 EDT

Timothy Huang wrote:
>Hello, Gary Liu
>Your message to the list is posted as follows.
>Gary Liu
>>Subject: current unicode standard -- a bad solution, thus a born loser
>>I agree with Mr. Timothy Huang; Unicode is not a good solution to
>>computing, but a compromise of some US computer giants -- those who do not
>>care about how their products will shape the culture of other nations
>>before they sell so-called localized software to the world.

Considering what ASCII and EBCDIC have done to U.S. culture, you have
nothing to complain about. :-) Unicode 1.0 very liberally included every
character in every Chinese standard. If your standards people can't do the
job right, you have no business blaming us.

>>Suppose someday my computer says that this Chinese character cannot be
>>displayed just because it is not a Unicode character. I do not want
>>this kind of computer at all.

That is impossible. Unicode in no way forbids the use of characters not in
the standard. Anyway, what do you say today when your computer tells you
that you can't use a character that isn't in Big 5? According to your
statement, you cannot use any computer, typewriter, or typesetting machine,
and must go back to calligraphy.

>>I am wondering, by cutting many characters
>>of other countries, whether Unicode could be called "uni" anymore.
>>I think a really usable Unicode standard will need at least 32 bits;
>>it should accept other nations' national standards first.

You are revealing almost complete ignorance of the standards process. We
have a 32-bit code, ISO 10646. We have modified 16-bit Unicode with UTF-16
to encode more than a million characters.
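The arithmetic behind that claim can be sketched in a few lines of Python (a minimal illustration of the UTF-16 surrogate mechanism; the function name is mine, not from the standard):

```python
def utf16_encode(cp):
    """Encode one code point as a list of 16-bit UTF-16 code units."""
    if cp < 0x10000:
        return [cp]                      # BMP characters: a single code unit
    v = cp - 0x10000                     # 20 bits, split 10/10 across the pair
    return [0xD800 + (v >> 10),          # high surrogate (0xD800-0xDBFF)
            0xDC00 + (v & 0x3FF)]        # low surrogate  (0xDC00-0xDFFF)

# 0x10000 BMP code points plus 0x100000 supplementary ones:
# 65,536 + 1,048,576 = 1,114,112 -- "more than a million characters".
```

A BMP character such as U+4E2D encodes as one unit, while anything above U+FFFF maps to a surrogate pair of two units.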

>> Because there
>>are already national standards in other countries such as China and Japan,
>>widely accepted and used in the computer industry. Why doesn't Unicode accept
>>other nations' standards besides ASCII?

This is simply ignorant again. We have already accepted every available
national standard.

>>Here is my proposal: a 4-byte Unicode takes the format:
>> Byte3 Byte2 Byte1 Byte0
>> country code |<-- national code standard -->|
>>Gary Liu
Mr. Huang:

Please pass on my reply to Mr. Liu.

By now I would have hoped that you could explain to Mr. Liu that this is
all wrong. Indeed, it is more than wrong. It is arrant, ignorant,
incompetent nonsense. Mr. Liu has taken no trouble to find out what Unicode
is and does if he can complain that we have ignored everything outside ASCII.

He is arguing against Unicode v.1.0, which has all along been understood as
a temporary stopping place, with full support of all writing systems as the
longer term goal. We do not need another 4-byte code, since we have ISO
10646, and in any case Unicode 2.0 with UTF-16 now has space for more than
one million characters, and will move as quickly as is consistent with due
care and attention to add all the presently missing CJK characters, among

Mr. Liu's suggestion of a country code byte and a 24-bit space per country
is impossible. There are presently fewer than 256 countries, but there are
at least 6,000 languages, and there have been many more than 256 countries
with their own national languages. The Unicode and ISO 10646 committees
have worked out much more reasonable allocation schemes, giving most
alphabetic scripts one or just a few pages of 256 characters each, and
providing suitable spaces for syllabic and logographic writing systems.
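The "pages of 256 characters" allocation amounts to simple arithmetic on the code point (an illustrative sketch, not committee text; the Greek block range is taken from the Unicode charts):

```python
def page_of(cp):
    """Return the 256-character row ('page') that contains a code point."""
    return cp >> 8

# Most alphabetic scripts occupy one or two such pages; Greek, for
# instance, sits on page 3 (code points U+0300-U+03FF):
assert page_of(0x0391) == page_of(0x03C9) == 0x03   # Alpha and omega share a page
```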

Chinese opponents of Unicode seem to think that they have been singled out
for mistreatment. Actually the unification of extended Latin alphabet
characters has been much more severe than CJK unification, and is still
undergoing correction. As you can see on this list, there are problems
remaining with many scripts and with special letters for particular
languages within each script. We are dealing with these errors, rather than
feeling sorry for ourselves or attributing malice to others.

Edward Cherlin
Vice President, NewbieNet, Inc.
Helping Newbies to become "knowbies"
Point Top 5% of Web sites
Everything should be made as simple as possible, __but no simpler__. -- Albert Einstein

This archive was generated by hypermail 2.1.2 : Tue Jul 10 2001 - 17:20:31 EDT