Re: Encoding italic

From: Martin J. Dürst via Unicode <unicode_at_unicode.org>
Date: Thu, 17 Jan 2019 11:50:08 +0000

On 2019/01/17 17:51, James Kass via Unicode wrote:
>
> On 2019-01-17 6:27 AM, Martin J. Dürst replied:

> > ...
> > Based on these data points, and knowing many of the people involved, my
> > description would be that decisions about what to encode as characters
> > (plain text) and what to deal with on a higher layer (rich text) were
> > taken with a wide and deep background, in a gradually forming industry
> > consensus.
>
> (IMO) All of which had to deal with the existing font size limitations
> of 256 characters and the need to reserve many of those for other
> textual symbols as well as box drawing characters.  Cause and effect.
> The computer fonts weren't designed that way *because* there was a
> technical notion to create "layers".  It's the other way around.  (If
> I'm not mistaken.)

Most probably not. I think Asmus has already alluded to it, but in good
typography, roman and italic fonts are considered separate. They are
often used in sets, but it is not impossible, e.g., to cut a new italic to
match an existing roman, or the other way round. This predates any
8-bit/256-character limitations. Also, Unicode knew from the start that
it had to deal with more than 256 characters, not only for East Asia, so
I don't think such size limits were a major issue when designing Unicode.

On the other hand, the idea that all Unicode characters (or a
significant and as yet undetermined subset of them) would need
italic etc. variants would definitely have led to such ideas being shot
down, in particular because Unicode started as strictly 16-bit.
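As an aside, the one place Unicode does encode italic forms as plain-text
characters is the Mathematical Alphanumeric Symbols block (intended for
mathematical notation, not general styling). A minimal sketch of mapping
ASCII letters onto those codepoints, which also shows why such per-style
duplication is awkward: the italic small "h" slot (U+1D455) is reserved
because "h" was already encoded as U+210E PLANCK CONSTANT. The function
name here is my own, for illustration only:

```python
def to_math_italic(text: str) -> str:
    """Map ASCII letters to Mathematical Italic codepoints (U+1D434 ff.)."""
    out = []
    for ch in text:
        if 'A' <= ch <= 'Z':
            out.append(chr(0x1D434 + ord(ch) - ord('A')))
        elif ch == 'h':
            # U+1D455 is reserved; italic h is U+210E PLANCK CONSTANT.
            out.append('\u210E')
        elif 'a' <= ch <= 'z':
            out.append(chr(0x1D44E + ord(ch) - ord('a')))
        else:
            out.append(ch)  # leave digits, punctuation, etc. untouched
    return ''.join(out)
```

Note that these characters carry no case or language properties matching
their ASCII originals, which is one reason styling is normally left to
the rich-text layer instead.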

Regards, Martin.
Received on Thu Jan 17 2019 - 05:50:29 CST