Productive Glyph Design vs. Productive Character Representation (was: Re: Quick survey of Apple symbol fonts ... )

From: Ken Whistler <kenw_at_sybase.com>
Date: Mon, 18 Jul 2011 15:02:54 -0700

[changing the thread title to disentangle this issue from the Apple
symbol font discussion]

On 7/16/2011 1:08 AM, Julian Bradfield wrote:
>> The other two could be proposed as unitary symbols, if anybody really
>> needs to represent them. They are commensurate with a large number of
>> similar symbols consisting of various numbers of horizontal lines
>> crossed by various numbers of vertical lines. See, e.g., 29FA, 29FB,
>> 2A68, 2A69, 2AF2, 2AF5.
>
> They could, but wouldn't the same principle that bans new precomposed
> accented characters apply? If not, why not?

For symbols of this sort one needs to distinguish the level of
productivity one is dealing with. Is it a matter of generalizing some
concept of glyph design into graphic productivity? Or is it a matter of
representing a complex symbol by means of a combination of already
encoded symbols that exist as characters?

We see clear instances of both among the mathematical symbols, for
example.

Stacking of graphical elements to create new operators or relations has
been around for centuries. In Unicode we now have, for example:

U+2212 MINUS SIGN
U+003D EQUALS SIGN
U+2261 IDENTICAL TO
U+2263 STRICTLY EQUIVALENT TO

or "stacking" in the horizontal direction, for example:

U+22A6 ASSERTION
U+22A9 FORCES
U+22AA TRIPLE VERTICAL BAR RIGHT TURNSTILE

While it would certainly be possible to write some kind of productive glyph
formation rules that turned such symbols into sequences of primitives of
horizontal and vertical lines, doing so wouldn't benefit anyone using those
operators/relations as symbolic characters.

At the other extreme, the use of such diacritics as U+0338 COMBINING LONG
SOLIDUS OVERLAY as a negation mark, or U+20D7 COMBINING RIGHT
ARROW ABOVE for a vector notation, is related to the separate
meaning of those diacritics. The productivity in such cases is semantic,
and not merely graphical.
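
For instance, such sequences can be spelled out directly (a minimal,
purely illustrative Python sketch; the variable names are my own):

    # Negation: base symbol + U+0338 COMBINING LONG SOLIDUS OVERLAY
    not_equal     = "\u003D\u0338"   # '=' followed by the overlay
    not_identical = "\u2261\u0338"   # U+2261 IDENTICAL TO, negated

    # Vector notation: base letter + U+20D7 COMBINING RIGHT ARROW ABOVE
    vector_v = "v\u20D7"

    for s in (not_equal, not_identical, vector_v):
        print(s, [f"U+{ord(c):04X}" for c in s])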

It is in the latter cases that the argument is strong that we are looking
at the productive representation of characters by sequences, rather than
at merely generic patterns for glyph construction. In particular, when
the base character and combining mark(s) already exist in the standard,
the prohibition against encoding "precomposed [in this case] symbols"
kicks in. Why? Because of the likelihood that people are already
representing such symbols by means of the existing sequences, and because
of the impact that encoding a precomposed form would have on the
normalization stability guarantee.
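
To make the normalization point concrete, here is a minimal Python sketch
using the standard unicodedata module. U+2260 NOT EQUAL TO was encoded
with a canonical decomposition to the existing sequence, so the two forms
are interchangeable under normalization:

    import unicodedata

    precomposed = "\u2260"           # NOT EQUAL TO
    sequence    = "\u003D\u0338"     # '=' + COMBINING LONG SOLIDUS OVERLAY

    # Canonically equivalent: NFD decomposes the precomposed form into the
    # sequence, and NFC recomposes the sequence back to U+2260.
    assert unicodedata.normalize("NFD", precomposed) == sequence
    assert unicodedata.normalize("NFC", sequence) == precomposed

That equivalence is exactly what the stability guarantee protects; a
symbol encoded today as a new "precomposed" form of an already-used
sequence could not be slotted into normalization the same way.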

There is no stability guarantee when we are just talking about what look
like productive patterns for glyph design.

The IBM glyphs for the record mark, segment mark, and group mark strike
me as clearly in the camp of "Let's make up new symbols by crossing
different numbers of horizontal and vertical lines." That's why I cited
the already-encoded examples of such symbols -- all of which are encoded
as unitary symbols, and none of which has any formal decomposition. I
think we'd be heading down the rabbit hole if we tried to represent them
by sequences of some existing base symbol with one or more lines in one
orientation and some existing combining mark with one or more lines in
the other orientation.

--Ken