From: Jon Hanna (firstname.lastname@example.org)
Date: Wed May 05 2004 - 08:31:00 CDT
> It's up to African communities or governments or local institutions and
> educational organizations to decide if they wish such encoding, if this
> development is justified by a reasonable reduction of costs, with
> compatibility with low-cost software and systems, and simplified processes
> to get appropriate fonts and input methods supporting a well-defined
> subset, needed for the languages they wish to normalize and stabilize with
> accepted orthography that can be taught.
This is nonsense. Replacing systems that are increasingly available at
decreasing cost (or free) with a newly-designed system is not going to reduce
costs. Even if it meant that very old hardware, otherwise difficult to use,
could be pressed into service, and that hardware was already available, there
would be little short-term saving and a great long-term cost.
> even if this is
> by 1-to-N mappings between the new charset and Unicode. This won't break
> compatibility with the Unicode stability policy.
It would make the encoding a ghetto encoding. Hopefully that effect would be
strong enough that it wouldn't take off even within the targeted community.
Unfortunately there is a risk that some people might actually start using it,
but not enough of them that it would be well supported internationally.
Besides which, there really is very little point in such a new precomposed
character. Does the Francophone community really benefit from having two
different canonically-equivalent ways of encoding "ç"?
> Knowing that Unicode-ISO/IEC 10646 is now a de facto standard (after being a
> de jure one in ISO) will clearly guide those charset developments complying
> with Unicode rules and policies, so that such adoption will not create a
> nightmare to handle, with unreasonable additional costs for transcoding
> to/from/through
If you can't round-trip directly then the cost is unreasonable.
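By round-trip I mean lossless conversion in both directions. A minimal Python sketch of the property, using the standard codecs (the example strings are mine, purely illustrative):

```python
# Round-trip: legacy-charset bytes -> Unicode -> the same bytes again.
# A legacy charset is only cheap to live with if this always holds.
text = "garçon"  # representable in ISO-8859-1

data = text.encode("iso-8859-1")
assert data.decode("iso-8859-1") == text  # lossless in both directions

# Characters outside the repertoire cannot round-trip at all:
try:
    "Ω".encode("iso-8859-1")
except UnicodeEncodeError:
    print("no ISO-8859-1 round-trip for Ω")

# A 1-to-N mapping (one legacy code decoding to several Unicode code
# points) would additionally require normalization before re-encoding,
# an extra transcoding cost of exactly the kind objected to here.
```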
> I see absolutely no problem if new ISO-8859-* variants are added
> for better support of African or Asian languages (or even for European ones,
> e.g. Georgian and Armenian), and no opposition of principle if some newer
> ISO 2022 charset is created for Canadian Syllabics or Ethiopic, if this helps
> processing the corresponding languages.
If they can be round-tripped trivially (as trivially as the current ISO-8859
family) then I see no problem either, but I also see little point, and the
motivation diminishes every year. Frankly, we have a global encoding now. It has
problems (many of which come from the fact that it was not practical to act as
if we were at encoding year zero - if we had been, we probably wouldn't have
precomposed characters for European languages, never mind any others), but those
problems are considerably smaller than what existed previously, and ISO-8859-17+
is always going to be inferior to UTF-8 or UTF-16.
-- Jon Hanna <http://www.hackcraft.net/> "…it has been truly said that hackers have even more words for equipment failures than Yiddish has for obnoxious people." - jargon.txt
This archive was generated by hypermail 2.1.5 : Fri May 07 2004 - 18:45:25 CDT