From: Mark E. Shoulson (email@example.com)
Date: Fri Jan 28 2011 - 09:29:39 CST
On 01/28/2011 07:13 AM, "Martin J. Dürst" wrote:
> On 2011/01/28 16:50, William_J_G Overington wrote:
>> My idea does not seek to do that. I am suggesting that there be a
>> finite number of localizable sentences. It could be hundreds, maybe
>> thousands eventually. I am thinking that Unicode could encode a few
>> to start in plane 7 in Unicode 7.0 so as to get started and then more
>> could be added gradually with each update to Unicode as people
>> discover particular needs.
> Even if you start just with your "gluten-free restaurant" sentence,
> there are hundreds of things some people are allergic to (or
> otherwise have health problems with, or just can't stand), so that
> single example already gives you hundreds of sentences. And there's
> obviously much more to talk about.
Keep in mind also the difficulty on the user end. It's one thing to
expect me to know all the letters in my writing system, so that I know
which one I want to use, even if it's Chinese. It's quite another to
expect me to remember all the "encoded" sentences so that I know *if*
the sentence I want to say is even encoded. Yes, I can search for it,
but am I going to think to search if I don't know or suspect it is
encoded? That's a rather high burden to place on the user.
I don't see why Neil Harris' suggestion of a big standardized database
of sentences with ordinary numbers as keys would be any less successful
than using characters, and it would meet with much less objection. If
these sentences were encoded as characters, such a database would have
to be distributed *anyway*. The tech is already there: use what we
have, and maybe build a user base that way.
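A minimal sketch of what such a number-keyed sentence database might look like; the IDs, sentence texts, and locale codes below are invented for illustration, not part of any actual proposal:

```python
# Hypothetical sentence database: plain integers as keys, with one
# translation per locale. All entries here are made-up examples.
SENTENCE_DB = {
    1: {
        "en": "Is the food in this restaurant gluten-free?",
        "de": "Ist das Essen in diesem Restaurant glutenfrei?",
    },
    2: {
        "en": "Where is the nearest pharmacy?",
        "de": "Wo ist die naechste Apotheke?",
    },
}

def localize(sentence_id: int, locale: str, fallback: str = "en") -> str:
    """Look up a sentence by its numeric key, falling back to a
    default locale when no translation exists for the requested one."""
    translations = SENTENCE_DB[sentence_id]
    return translations.get(locale, translations[fallback])

print(localize(1, "de"))  # German translation of sentence 1
print(localize(2, "fr"))  # no French entry, so the English fallback
```

The point of the sketch is that sender and receiver only need to exchange the number; each side renders it in its own locale, with no new characters required.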
This archive was generated by hypermail 2.1.5 : Fri Jan 28 2011 - 09:30:56 CST