Re: Comments on <draft-ietf-acap-mlsf-00.txt>?

From: David Goldsmith (goldsmith@apple.com)
Date: Wed Jun 04 1997 - 21:40:03 EDT


Pete Resnick (presnick@qualcomm.com) wrote:

>And if the two languages being used are Chinese and Japanese?

I suppose I could answer "so what?", but I guess we all know what the
positions are on Han unification and on rendering Unicode plain text.

>Rare is not an acceptable retort; if it can happen, it will happen, and the
>protocol must deal with it.

I'm not sure I agree with this. Complexity in a protocol to handle rare
circumstances is not necessarily a good thing, though it may be necessary
sometimes. Saying "we need language-tagged plain text for multilingual
users who are blind" seems a little extreme to me.

Since you can't synthesize speech from all plain text, why not demand
that phonetic hints be included as well? Yomi for Japanese, say, since
kanji often admit several readings that plain text alone can't
disambiguate? After all, without such hints you can't pronounce the
text properly under all circumstances.

>>Wouldn't sending a phonetic alternate form (suitable for driving a speech
>>synthesizer) work even better?
>
>Well, yes it would work better for the speech synthesizer, but would be
>exceedingly poor for the text display engine. :-)

Does this protocol have anything like "accept-language" in HTTP? (I would
think so, given some of the other statements about it.) I had in mind
that the user's client would request the phonetic version as the
preferred variant, rather than that you'd send it all the time. That
seems to me far preferable to trying to synthesize speech from plain
text, which is still more an art than a science, especially for some
languages. On the other hand, synthesis is better than nothing when a
phonetic form isn't available.
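To make the negotiation concrete, here's a rough sketch in HTTP terms
of what I have in mind (ACAP's actual mechanism would of course
differ, and the text/x-phonetic media type is invented here purely
for illustration):

    C: GET /messages/42 HTTP/1.1
    C: Accept: text/x-phonetic;q=1.0, text/plain;q=0.5
    C: Accept-Language: ja

    S: HTTP/1.1 200 OK
    S: Content-Type: text/x-phonetic
    S: Content-Language: ja
    S:
    S: ...phonetic (yomi) rendition of the text...

A client without a speech synthesizer would simply invert the q-values
and get the ordinary plain text, and the server would fall back to
text/plain when no phonetic form exists for a given message.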

David Goldsmith
Architect
International, Text, and Graphics Department
Apple Computer, Inc.
goldsmith@apple.com


