From: Gregg Reynolds (email@example.com)
Date: Mon Jul 11 2005 - 06:54:30 CDT
Asmus Freytag wrote:
> I think asking proofreaders to proof the underlying encoding is
> backwards. If the task is to ensure a preferred encoding, the best
> approach is to use software, whether a Perl script for plain text files
> with markup or a macro inside an editor that produces a proprietary
> binary format.
> After all, differences in *encoding* are something that software is
> easily made aware of, whereas differences in *spelling* still require
> human proofing.
True enough, but then you still have to trust the software and answer
the question "how do I know that I know?" Most people in the world are
not capable of writing a Perl script. And suppose you have a document
being passed around and proofread by people using a variety of software.
How can you be sure the encoding doesn't get munged somewhere along
the line? The simplest and most reliable way (IMO) is to have a
transparent proofreader's font of some kind.
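For what it's worth, the kind of script Asmus describes is short. A minimal sketch in Python (not Perl), assuming the preferred encoding is UTF-8 in NFC — the choice of normalization form is my assumption, not something either of us specified:

```python
import unicodedata

def non_nfc_lines(path):
    """Yield (line_number, line) for lines that are not already in NFC.

    A proofreading aid, not a fix: it flags where the underlying
    encoding diverges from the preferred form, so a human (or a
    later pass) can decide what to do.
    """
    with open(path, encoding="utf-8") as f:
        for n, line in enumerate(f, 1):
            if unicodedata.normalize("NFC", line) != line:
                yield n, line
```

Of course, this just restates my point: you have to trust that this script is correct, and that everyone in the proofreading chain actually runs it.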
Also, aside from composition/decomposition, there's the question of
whether all the code elements are chosen from the proper script block.
Not the most pressing issue in the world, I admit, and maybe not such a
problem for Latinate scripts. This came up in the context of
proofreading an encoding of the Quran. Seems like it might be an issue
for any script with complex rendering logic.
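That check, too, can be partly automated. A hypothetical sketch, again in Python, using character names as a rough proxy for script membership (the whitelist of name prefixes is my assumption; a real tool would consult the Unicode Script property):

```python
import unicodedata

# Assumed whitelist for an Arabic-script text such as the Quran
# encoding mentioned above; "SPACE" admits the plain space character.
EXPECTED_PREFIXES = ("ARABIC", "SPACE")

def stray_characters(text):
    """Return (index, char, name) for characters whose Unicode name
    does not begin with one of the expected script prefixes — i.e.
    code points drawn from the wrong block."""
    strays = []
    for i, ch in enumerate(text):
        name = unicodedata.name(ch, "UNKNOWN")
        if not name.startswith(EXPECTED_PREFIXES):
            strays.append((i, ch, name))
    return strays
```

Even so, a visually ambiguous character that slips past a whitelist like this is exactly why a transparent proofreader's font still seems worthwhile to me.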
This archive was generated by hypermail 2.1.5 : Mon Jul 11 2005 - 06:55:34 CDT