From: Erkki Kolehmainen (firstname.lastname@example.org)
Date: Wed Jun 29 2005 - 04:00:53 CDT
Although I wasn't supposed to see the exchange, I'd like to comment on
the following statement:
"After all, ask yourself why legacy compatibility was required in the
first place. Maybe many reasons, but one of them was surely so that
software that uses a legacy encoding internally can continue to function
without modification, with only an import/export filter."
To my mind, legacy compatibility was required to ensure that data
originally encoded under whatever scheme remains processable, often but
not necessarily together with data originally encoded under some other
scheme. What was required was a reliable way to convert the data into
Unicode in order to preserve it permanently, not the preservation of
particular pieces of software, except perhaps for a transition period.
In fact, many of the platforms on which the legacy data was originally
processed and stored were already becoming extinct in the early days of
Unicode, and many more are extinct by now.
The need to limit the character repertoire that is acceptable for
processing in a given application is a different issue. There are
certainly better and much easier ways to achieve this than using some
restrictive encoding scheme: for example, validating the input to a
national population register or to Internationalized Domain Names, or
producing a "True Tamil" text-processing software package.
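The validation approach suggested above can be sketched briefly. This is
an assumption-laden illustration, not anything from the thread: it takes
the acceptable repertoire to be the Unicode Tamil block (U+0B80 to
U+0BFF) plus the space character, standing in for whatever repertoire a
given application would actually define.

```python
# Illustrative sketch: restrict the acceptable repertoire by validating
# input, rather than by inventing a restrictive encoding scheme.
# The allowed set (Tamil block plus space) is an assumption for the example.
TAMIL_BLOCK = range(0x0B80, 0x0C00)

def is_acceptable(text: str) -> bool:
    """Accept only characters from the Tamil block, or a space."""
    return all(ord(ch) in TAMIL_BLOCK or ch == " " for ch in text)
```

An application applies the check at its boundary, accepting "தமிழ்" and
rejecting input containing characters outside the chosen repertoire,
while the data itself stays in an ordinary Unicode encoding throughout.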
Erkki I. Kolehmainen
Gregg Reynolds wrote:
> I'm obviously not so good with email clients with multiple accounts...;)
> Re: [Fwd: Re: A Tamil-Roman transliterator (Unicode)]
> Gregg Reynolds <email@example.com>
> Tue, 28 Jun 2005 17:43:18 -0500
> Michael Everson <firstname.lastname@example.org>
> Michael Everson wrote:
>> Are you CRAZY?
> Ah, Mr. Everson. Your style amuses me, even though I do occasionally
> want to strangle you.
> To answer your question: technically, yes. However, my suggestion is not.
> Unicode is a character encoding. Explicitly not in the service of
> languages. Furthermore, it is not, and I sincerely hope it does not
> pretend to be, a guide to the linguistics of written languages. If it
> does have this pretension, I for one can assure you it fails miserably
> when it comes to Arabic.
> And even furtherfurthermore, I'm not aware that "eliminating all
> competing methods of encoding text" was one of Unicode's goals. Why
> would it be? It's an exchange encoding, no more, no less. If I or
> anybody else wants to design an encoding, and go to the trouble of
> adapting a piece of open source software (say, Vi, or Mozilla, etc.) to
> accommodate that encoding - well, what of it? Why is that any of
> Unicode's business? After all, ask yourself why legacy compatibility
> was required in the first place. Maybe many reasons, but one of them
> was surely so that software that uses a legacy encoding internally can
> continue to function without modification, with only an import/export
> filter. So if that is good enough for software supporting legacy
> encodings, why not for new encodings? Surely you don't believe the
> Unicode Church is the One True Way.
>> At 17:03 -0500 2005-06-28, Gregg Reynolds wrote:
>>> So instead of ranting about plots and conspiracies, why don't you try
>>> something constructive? Design a Tamil encoding from scratch, with
>>> no regard at all for legacy encodings, for the sole purpose of
>>> serving the Tamil-speaking community. Then hack up some open-source
>>> software to work with your design. Then you can find out if anybody
>>> really wants to use it. If they do, then you can come back to
>>> Unicode and make an argument based on facts on the ground, instead of
>>> political begging.
This archive was generated by hypermail 2.1.5 : Wed Jun 29 2005 - 05:13:30 CDT