From: Philippe Verdy (email@example.com)
Date: Fri Oct 17 2003 - 15:19:40 CST
From: "John Cowan" <firstname.lastname@example.org>
> Marco Cimarosti scripsit:
> > Why? 200 million should be more than enough: that's more than 30,000
> > for each living language.
> The Oxford English Dictionary has almost 10 times that many main entries.
> And if we want to record every obvious derivative, 4 million words (times
> 6000 languages) seems a reasonable upper bound. Granted, English has
> a fat vocabulary, but let's think big here.
> > In practice, it will always be rendered as
> > "-tu wa-" because no one will invest in implementing Swahili rendering.
Isn't most of that work already implemented for Indic scripts, which require
glyph reordering? Or do you mean the complexity of the work needed to
create the glyph-reordering tables (for logical-to-visual order)?
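To illustrate what logical-to-visual reordering means, here is a minimal sketch in Python, not taken from any real shaping engine: in Devanagari, the vowel sign I (U+093F) is stored after its consonant in logical (encoded) order but displayed before it in visual order. The function name and the single-rule approach are my own illustration; real Indic shaping handles many more rules.

```python
# Minimal sketch of logical-to-visual reordering (illustrative only,
# not a real shaping engine): in Devanagari, vowel sign I (U+093F)
# follows its consonant in logical order but precedes it visually.
VOWEL_SIGN_I = "\u093F"

def reorder_logical_to_visual(text: str) -> str:
    chars = list(text)
    for i in range(1, len(chars)):
        if chars[i] == VOWEL_SIGN_I:
            # Move the vowel sign before the preceding consonant.
            chars[i - 1], chars[i] = chars[i], chars[i - 1]
    return "".join(chars)

# "ki" is encoded as KA (U+0915) + vowel sign I (U+093F);
# after reordering, the sign comes first for display.
print(reorder_logical_to_visual("\u0915\u093F"))
```

Of course, a full implementation also handles conjuncts, reph, and other script-specific rules; the point is only that the table-driven reordering machinery already exists in Indic renderers.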
Isn't the Indic model now flexible enough to encode Bantu languages such
as Swahili? If not, that's a shame, because Swahili is one of the most
widely spoken languages in the world (with millions of speakers), ahead
of many regional European languages that are fully encoded and supported
in Unicode, and it urgently needs to be easier to publish in order to
preserve its associated culture.
This archive was generated by hypermail 2.1.5 : Thu Jan 18 2007 - 15:54:24 CST