As I see it, what is considered a glyph or font variation in one language
or context (e.g. modern prose) could be considered a significant
difference in another language or context (e.g. ancient poetry).
When these differences do become significant, there should be a way to
encode them in plain text. If no general solution such as a ligator
mechanism is found, people will keep asking for new ligatures or variants
forever -- because they genuinely need them, not out of caprice.
By the way, the ability to display these differences is only one of the
reasons for encoding them, and not the main one. So one strong advantage of
Michael's ZWL is, paradoxically, the possibility for the software to simply
ignore it.
I don't see why you say that it sounds scary. To me it sounds like a relief.
If I write an application today and, tomorrow, Unicode N.0 adds a new "qw"
ligature, my software won't be able to display it. Even worse, Unicode N.0
would not even define a compatibility mapping "q"+"w" for me, so I won't even
be able to quickly add a poor man's version of the new ligature (remember
the Normix/Ny-Normix discussion?). That is scary to me.
But if the new "qw" ligature is added (quasi-privately) as "q"+ZWL+"w", then
my software will keep a decent display with no need for any change (of
course, a new application that has a specific glyph for "q"+ZWL+"w" will
score a small point over my old software anyway...)
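To illustrate the fallback behaviour I mean, here is a minimal sketch. ZWL was only a proposal and was never assigned a code point, so the U+2FFF value below is a made-up placeholder, not a real Unicode assignment; the point is only that software with no ligature glyph can drop the ligator and still show the base letters.

```python
# Hypothetical ZWL code point -- a placeholder for illustration only,
# NOT a real Unicode character assignment.
ZWL = "\u2FFF"

def fallback_display(text: str) -> str:
    """Old software with no "qw" glyph simply strips the ligator,
    leaving the plain letter sequence intact."""
    return text.replace(ZWL, "")

# A ligature requested quasi-privately as base letters plus ZWL:
encoded = "q" + ZWL + "w"
print(fallback_display(encoded))  # -> qw
```

New software that does recognize "q"+ZWL+"w" can map the whole sequence to a ligature glyph instead of stripping it; either way the text remains intelligible.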
> -----Original Message-----
> From: firstname.lastname@example.org [SMTP:email@example.com]
> Sent: 1999 December 28, Tuesday 17.37
> To: Unicode List
> Subject: Re: Latin ligatures and Unicode
> MC>And indeed, when we are dealing with extinct languages, or
> with texts that may possibly contain hidden messages, we cannot
> be totally sure that what seems to be an arbitrary graphical
> choice isn't really a meaningful feature. So it makes sense to
> have a device to encode the graphic difference, just to be as
> literal as possible. And it makes sense to have it in plain
> text, because a character set is a character set, not a word
> processor, and it should not rely too much on font
> technologies... Who said that the primary thing I want to do
> with my text is to display or print it, rather than, say, store
> it in a database for doing statistical research?
> If we start talking about encoding in Unicode all presentation
> distinctions in ancient documents that might prove to be
> significant (but also might not), won't we end up turning this
> character encoding standard into a glyph encoding standard?
> Maybe this rhetorical question is reactionary and it could
> be feasible to add such distinctions to Unicode in a controlled
> manner. It just sounds scary.
This archive was generated by hypermail 2.1.2 : Tue Jul 10 2001 - 17:20:57 EDT