Re: Missing geometric shapes

From: Philippe Verdy <verdy_p_at_wanadoo.fr>
Date: Sun, 11 Nov 2012 01:14:50 +0100

2012/11/10 Asmus Freytag <asmusf_at_ix.netcom.com>:
>> Even today, using the existing Unicode for the WHITE STAR character
>> allows performing styling on it to render an empty, full, or partially
>> filled star.
>
> There's clear precedent that Unicode views white/black/partially filled as a
> distinction on the character level (this is definitely the case for several
> types of geometrical symbols - witness circles and squares). Using styles to
> achieve that effect is possible (lots of things are possible), but it would
> be a violation of the character / glyph model to achieve such distinction by
> style, when it is present on the character level.

There would be in this case absolutely NO violation of the
character/glyph model: if you can half-fill any character, including
the WHITE STAR, at arbitrary fill levels, then it is the SAME
character, to which an *orthogonal* styling property is applied.

Then why would the 0% and the 100% fill styles be treated specially?
We would certainly prefer using only the same base character (most
probably the filled symbol), from which a graphic renderer can easily
extract the contours to derive any empty version: render the contours
as strokes with a styled width, and fill the symbol by applying an
intersection with a proportionally filled block.
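
Here is a minimal sketch of that rendering approach, assuming an SVG
output target (the star geometry, fill fractions, and stroke width
below are illustrative values of mine, not anything taken from a
standard): the filled glyph is clipped against a proportionally wide
block, and its contour is re-stroked on top so the outline stays
whole.

    import math

    def star_points(cx, cy, r_outer, r_inner, n=5):
        """Compute the vertices of an n-pointed star polygon."""
        pts = []
        for i in range(2 * n):
            r = r_outer if i % 2 == 0 else r_inner
            a = math.pi * i / n - math.pi / 2  # start at the top point
            pts.append((cx + r * math.cos(a), cy + r * math.sin(a)))
        return " ".join(f"{x:.1f},{y:.1f}" for x, y in pts)

    def partially_filled_star(fraction, size=100):
        """Render one star glyph at an arbitrary fill level (0.0-1.0)."""
        pts = star_points(size / 2, size / 2, size * 0.48, size * 0.19)
        clip_w = size * max(0.0, min(1.0, fraction))
        return (
            f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{size}" height="{size}">\n'
            f'  <defs><clipPath id="lvl">'
            f'<rect x="0" y="0" width="{clip_w:.1f}" height="{size}"/>'
            f'</clipPath></defs>\n'
            # the fill, intersected with the proportionally filled block
            f'  <polygon points="{pts}" fill="black" clip-path="url(#lvl)"/>\n'
            # the extracted contour, rendered as a stroke of styled width
            f'  <polygon points="{pts}" fill="none" stroke="black" '
            f'stroke-width="2"/>\n'
            f'</svg>'
        )

    print(partially_filled_star(0.0))   # "white" star: contour only
    print(partially_filled_star(0.65))  # 65%-filled star
    print(partially_filled_star(1.0))   # "black" star: fully filled

The same single star polygon produces the white, the black, and any
partially filled form; only the styling parameter changes.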

For such uses of stars (for denoting a numeric rating value or
otherwise), it is not stupid to consider these symbols as represented
by the SAME abstract character.

"white" symbols (including basic shapes) are just derivations fo the
normal black symbol, on which a style has already been applied to
create the "hollow" effect. Byt themselves I son't see them as really
distinct abstract characters. If white symbols are encoded, it's only
to allow their inclusion in plain-text (a very technical concept
specific to computer), but this is absolutely not the most
representative way of how characters are understood, used, and drawn
on other medias (including graphics or printing, or handdrawing :
change your drawing tool, use a pencil or a pen or a brush, or a piece
of wood, or stone or metal to engrave them, you'll still want to
reproduce the same abstract texts, even if their final rendering looks
different.

In other words, the "white" symbols are just encoded as exceptions to
the rules; they are actually not needed at the abstract level. But if
you want to attach a semantic to the fact that they are unfilled
("white"), filled ("black", the default), or half-filled, then the
best thing to do is not to add new ("white") symbols, but to encode
some standard combining symbol modifiers (or format controls?) for a
list of numeric values (from 0% to 100%, with the 100% fill being the
unmodified default: it would require just 100 new abstract
characters), which you would encode in addition to the base character
(or combining sequence, or around grapheme clusters, possibly even
around whole strings by using a starter format control holding the
value and a final format control). Finally you would get exactly what
you are already doing when using rich-text documents with out-of-band
style (or semantic) markup (like element attributes for style="" or
class="", or element types)...

For this reason, those derived "white" characters should remain only a
few exceptions, for cases with a demonstrated use where their
semantics actually don't carry a variable numeric value by themselves.

But for usages like rating levels, this is really not a good
demonstration that they are needed in plain text. What would be more
convincing is when those symbols are used as distinct math operators,
or as distinct punctuation marks, or because the empty and full
characters need to behave differently in some contexts (such as when
they are used with overlay combining characters whose placement must
differ to keep them visible, such as a combining central dot
modifier).