From: Erkki Kolehmainen (email@example.com)
Date: Tue Aug 02 2005 - 02:06:06 CDT
Users do not only see the characters on the screen. They also want to
process them, both in their local applications (other than typesetting)
and in several remote applications implemented by others (possibly in a
semantic web environment, where you don't necessarily even know who
will process the data).
Erkki I. Kolehmainen (RILF)
Gregg Reynolds wrote:
> Jony Rosenne wrote:
>> I object. The proposal, were it to be accepted, would create havoc.
>> 1) How would one tell, for example, if a 1 is an LTR 1 or an RTL 1?
> Ok, I must be missing something here. If we have <digit-1> (defaulting
> to LTR) and <digit-1-RTL>, what is the problem? Where is the havoc?
> Maybe I'm just thick. If I were to write a Unicode editor, for example,
> I would think the way to go would be to check the directionality of the
> various characters before deciding how to position them graphically. So
> if I encounter <digit-1-rtl>, I know what it "means" (not really, in
> Unicode), but more importantly, I know how to typeset it.
> The user doesn't care if it is an LTR or RTL "1"; s/he only cares that
> it means "one" and that it be typeset correctly. After all, the user
> only ever "sees" it on screen. I truly don't see the problem. Having
> RTL digits etc. - i.e., dispensing with the totally bogus bidi
> requirement - would IMO create an absolute explosion of software in the
> RTL world. How is that problematic?
> I'm being perfectly serious, and possibly dense: I don't understand why
> such codepoints would "create havoc". Please enlighten.
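[Editor's note: the directionality check Gregg describes is something today's Unicode already supports, even without the proposed <digit-1-RTL> codepoints (which do not exist). A minimal Python sketch, using the standard library's unicodedata module to look up each character's bidirectional class before deciding how to position it:]

```python
import unicodedata

# Before positioning a character graphically, an editor can query its
# Unicode bidirectional class. Note that existing digits like U+0031 and
# U+0661 carry number classes (EN, AN), not a plain RTL class; the
# <digit-1-RTL> character discussed in the thread is hypothetical.
for ch in ["1", "\u0661", "\u05D0", "a"]:
    name = unicodedata.name(ch)
    bidi = unicodedata.bidirectional(ch)
    print(f"U+{ord(ch):04X} {name}: bidi class {bidi}")
# "1" is EN (European Number), U+0661 ARABIC-INDIC DIGIT ONE is AN
# (Arabic Number), U+05D0 HEBREW LETTER ALEF is R, "a" is L.
```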
This archive was generated by hypermail 2.1.5 : Tue Aug 02 2005 - 02:08:53 CDT