From: Sinnathurai Srivas (firstname.lastname@example.org)
Date: Mon Apr 04 2005 - 11:43:06 CST
I am not discussing the errors themselves, or whether they were made by ISCII
or the UC. I am only discussing the refusal to correct the errors in the
correct way, and how to deal with these matters in the future.
I read the UC's rule about annotating. It clearly states that such cases
should be dealt with in a manner that corrects the error while supporting
continuity. At present, the solution supports continuity in the absence of a
proper correction. There is no justification for refusing to correct properly.
The definition is wrong, and the correction still leaves the erroneous
definition as primary. The arguments about strings are not worth a cent; the
UC needs to take a deep breath and look at this again.
The current definition misleads developers. That factor does far worse damage
to Unicode than a break in continuity would. Of course continuity is
essential, but it should be kept in perspective.
Consider a software or hardware manufacturer who made a mistake and refuses
to correct it. Can you imagine the reaction?
If a problem can be solved, it should be solved. The only situation I can
think of where this would become problematic is when a code point is shared
by many languages that are all bickering about what to call it. Please
resolve this.
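The point about definitions misleading developers can be made concrete with a small sketch (my own illustration, not part of the thread). The character under discussion in this thread's subject appears to be U+0B83, the Tamil aytham, whose permanent Unicode name is TAMIL SIGN VISARGA; since character names are immutable identifiers, every standard lookup API reports the original name, which is how a misleading name keeps reaching developers:

```python
import unicodedata

# U+0B83 is the Tamil aytham, but its permanent Unicode name
# (which the stability policy forbids changing) calls it a visarga.
ch = "\u0b83"
print(unicodedata.name(ch))  # -> TAMIL SIGN VISARGA

# The name also works as an alternative encoding of the code point:
# it round-trips back to the same character.
assert unicodedata.lookup("TAMIL SIGN VISARGA") == ch
print(f"U+{ord(ch):04X}")  # -> U+0B83
```

A developer who treats the reported name as a phonological description, rather than as a fixed identifier, is misled in exactly the way described above.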
----- Original Message -----
From: "William J Poser" <email@example.com>
Sent: Monday, April 04, 2005 8:00 AM
Subject: Tamil Aytham and the role of Unicode names
> If Unicode character names are taken as descriptions, contrary to the intent
> of the Consortium, there are much more egregious "errors" than those for
> Tamil, which will perhaps make Sinnathurai Srivas feel less insulted. The
> Canadian Aboriginal Syllabics range represents the union of a handful of
> different writing systems.
> They are all historically derived from the Cree system. The versions used
> for Ojibwe,
> Inuktitut, Slave, Dogrib, and Chipewyan stick relatively close to the
> original system,
> and where they differ, they do so primarily by the addition of characters
> (since the
> phonemic inventory of Cree is small). The Carrier version, however, has not
> only added quite a few characters so as to represent its much larger
> inventory but also discarded most of the characters in the original system;
> it was also thoroughly reorganized and rationalized. As a result, the
> phonetic values of the few characters that Carrier shares with the other
> languages differ from their Unicode names more often than not, since the
> latter are based primarily on Cree.
> If the Unicode names are taken to be descriptive this would be misleading
> and irritating, but it doesn't matter much because the names are really
> an alternative encoding, one that is perhaps for some people less opaque
> and more mnemonic than hex numbers.
> I suspect that there are two factors that make people take the names more
> seriously than is intended. One is sensitivity due to a history of neglect
> or second-class treatment of the language. This is probably the primary
> factor in the case of Tamil, whose speakers perceive it as playing second
> fiddle to Hindi (and in Sri Lanka, Sinhala). The other is that really good
> reference materials on writing systems do not exist, as a result of
> which the Unicode standard is called upon by some to serve as such.
> When it proves imperfect in this role, it is judged inadequate by a standard
> it was never intended to meet.
> Bill Poser, Linguistics, University of Pennsylvania
> http://www.ling.upenn.edu/~wjposer/ firstname.lastname@example.org
This archive was generated by hypermail 2.1.5 : Mon Apr 04 2005 - 11:44:05 CST