Re: Just if and where is the then?

From: Philippe Verdy (verdy_p@wanadoo.fr)
Date: Wed May 05 2004 - 12:27:43 CDT


From: "Doug Ewell" <dewell@adelphia.net>
> I guarantee you that creating a new 8-bit encoding specific to the
> language(s) you are dealing with, and getting fonts developed for that
> encoding, and trying to exchange data in this new encoding with others,
> will cause more problems for the university than working with Unicode.

For your university, yes, most probably; but for local native users of the script
I would disagree: there is a radically different pattern of usage and needs between
data interchanged in a heterogeneous environment and purely local use.

I'm not advocating any private definition of a new 8-bit charset. But I find
nothing wrong if a national standardization body wants to promote its own charset
to help stabilize orthographies and define a stable subset appropriate for a
language. If such a charset becomes a national standard, it will give font makers
an incentive to make the few additions needed in their Unicode fonts.

That is something that seems impossible to ask of font makers when they are faced
with tens of thousands of possible combinations of letters and diacritics: unless
there is a well-known standard that lists the needed combinations, many of them
will remain untested, and font makers will not feel that the additions are
necessary to support user communities in some countries, because they will see no
market incentive to make these corrections.
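
Just as a rough sketch of what such a check could look like (the repertoire list,
the font name and the use of the fontTools library are my own assumptions here,
not anything defined by an actual national standard): with a finite, published
subset, a font maker only has to verify a short list of code points instead of
guessing among all possible combinations.

    # Sketch: report which code points of a (hypothetical) national subset
    # are missing from a font's character map, using the fontTools library.
    from fontTools.ttLib import TTFont

    # Placeholder repertoire: a few base letters and precomposed
    # letter+diacritic combinations a national standard might list.
    REQUIRED = [0x0041, 0x00E9, 0x1E0D, 0x1EB9]

    def missing_from_font(font_path, required=REQUIRED):
        cmap = TTFont(font_path)["cmap"].getBestCmap()  # Unicode -> glyph name
        return [hex(cp) for cp in required if cp not in cmap]

    # e.g. print(missing_from_font("SomeUnicodeFont.ttf"))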

See how GB18030, whose support was made mandatory for commercial products, helped
improve support for a much larger repertoire than the many incomplete charsets
that had initially been created for limited uses and were poorly targeted at
China. Since then, support for Chinese with Unicode has been considerably enhanced
on most platforms.
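
Just to illustrate what "larger charset" means in practice (a small sketch using
Python's built-in 'gb18030' and 'gb2312' codecs; the sample strings are my own
arbitrary choice): GB18030 round-trips any Unicode text, whereas the older limited
charsets cannot represent much of it.

    # Sketch: GB18030 covers the full Unicode repertoire, so any text
    # round-trips losslessly; an older charset like GB2312 cannot hold it all.
    samples = ["\u4E2D\u6587",    # Chinese "Zhongwen"
               "Vi\u1EC7t Nam",   # Latin letter with diacritic
               "\U00020000"]      # a supplementary-plane CJK ideograph

    for text in samples:
        assert text.encode("gb18030").decode("gb18030") == text
        try:
            text.encode("gb2312")
        except UnicodeEncodeError:
            print(repr(text), "is outside GB2312 but fine in GB18030")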

National or regional official standards are a great help in improving the correct
support of languages. This does not limit the development of Unicode for
interchange, even if locally the data can be processed more easily with smaller
subsets.

I would say the same for other subsets already registered with ISO/IEC 10646,
such as the European ones: subsets with multiple levels help clarify which
characters should receive priority support in a given market. Microsoft took a
similar initiative by pushing foundries to support at least the WGL4 subset in
their fonts.

Unicode will remain the worldwide interoperability solution, but I see nothing
wrong in regional development initiatives.


