From: Debbie Garside (email@example.com)
Date: Fri Oct 03 2008 - 09:15:43 CDT
Thank you very much for this; it is most helpful.
So, as I see it, Unicode is the labelling system that identifies each
character for interchange purposes, fonts are the graphical representations
of those characters, and applications use the coordinates encoded in the
font to render the glyphs on the chosen medium.
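As a rough sketch of that division of labour, the short C program below
(using the FreeType library; the font file path is only a placeholder) asks
a font which of its glyphs corresponds to the Unicode code point U+0069,
the lower-case Latin letter 'i':

    #include <stdio.h>
    #include <ft2build.h>
    #include FT_FREETYPE_H

    int main(void)
    {
        FT_Library library;
        FT_Face    face;

        /* Initialise FreeType and open a font file
           ("/path/to/font.ttf" is a placeholder). */
        if (FT_Init_FreeType(&library) ||
            FT_New_Face(library, "/path/to/font.ttf", 0, &face))
            return 1;

        /* Unicode supplies the label U+0069; the font supplies the
           drawing.  FT_Get_Char_Index maps the one to the other. */
        FT_UInt glyph_index = FT_Get_Char_Index(face, 0x0069);
        printf("U+0069 maps to glyph number %u in this font\n",
               glyph_index);

        FT_Done_Face(face);
        FT_Done_FreeType(library);
        return 0;
    }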
One final question: are Cartesian coordinates used within the programming
instructions that define the glyphs?
From: Ed Trager [mailto:firstname.lastname@example.org]
Sent: 03 October 2008 14:42
Subject: Re: Pixel Rendering in Unicode characters
Your question is not really about Unicode per se, but rather about how
character glyphs (the images of letters, numerals, symbols, etc.) are
rendered on various media (paper, computer screen, etc.) by computers. So
it is really a question in the realm of Digital Typography.
The FreeType project documentation at the following URL provides a good
introduction to glyph metrics and may answer your question:
http://www.freetype.org/freetype2/docs/glyphs/
I will also provide you here with a simplified explanation: Each character
glyph (such as for the lower-case Latin letter "i") in a modern vector-based
digital font file is stored as a set of drawing instructions.
The "i" has a "bounding box" which determines the "space around the
character". This bounding box is analagous to the rectangular slab of metal
around the letter "i" in old-fashioned metal type. The instructions for
drawing an "i" might go something like this:
1. Move the "pen" to such-and-such a position.
2. Draw a circle with a diameter of such-and-such centered on that position.
3. Fill in the circle.
4. Move to such-and-such new position (below the circle just drawn).
5. Draw a long vertical rectangle of a certain height and width.
6. Fill in that rectangle too.
7. Move the "pen" toward the right the correct horizontal advance
distance in preparation for the next glyph.
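Expressed as data, such an instruction list might look like the small C
sketch below. The opcodes and the coordinates (in abstract "font units")
are invented purely for illustration; they do not correspond to any real
font format's instruction set:

    #include <stdio.h>

    typedef enum { MOVE_TO, DRAW_CIRCLE, DRAW_RECT, FILL, ADVANCE } Op;

    typedef struct {
        Op     op;
        double a, b;   /* a position, or a shape's dimensions */
    } Instruction;

    /* The seven steps above, for a hypothetical 'i' */
    static const Instruction letter_i[] = {
        { MOVE_TO,     100, 700 },   /* 1. go to the dot's centre    */
        { DRAW_CIRCLE, 120,   0 },   /* 2. circle, diameter 120      */
        { FILL,          0,   0 },   /* 3. fill it in                */
        { MOVE_TO,      70,   0 },   /* 4. down below the circle     */
        { DRAW_RECT,    60, 520 },   /* 5. tall, thin rectangle      */
        { FILL,          0,   0 },   /* 6. fill that too             */
        { ADVANCE,     240,   0 },   /* 7. advance to the next glyph */
    };

    int main(void)
    {
        printf("%zu instructions define this 'i'\n",
               sizeof letter_i / sizeof letter_i[0]);
        return 0;
    }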
These instructions are then interpreted by a "rasterizer" which knows how to
convert these geometric instructions into a grid of filled and empty "dots"
that are either printed on paper or displayed on a computer screen. As they
say, "the devil is in the details" -- and much effort has been expended
trying to find good ways to decide which "dots" to fill or leave empty at
the edges of the glyph forms. At the high resolutions (i.e., "dots per
inch") of modern laser printers, this is not much of a problem as the
individual dots are too small to be discernible by the human eye. But
computer screens have much lower resolutions: in this domain the
problems of "grid fitting" and "hinting" are quite important to achieve good
on-screen legibility. The URL mentioned above provides a good introduction
to these topics.
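A toy rasterizer makes the edge problem concrete. The C sketch below uses
the simplest possible rule, with no anti-aliasing and no hinting: fill a
dot whenever its centre falls inside the shape. The rectangle (standing in
for the stem of the 'i') and the 10x10 grid are arbitrary choices. Note
that a stem whose true width is 1.6 dots comes out 2 dots wide on this
grid, which is exactly the kind of discrepancy grid fitting has to manage:

    #include <stdio.h>

    int main(void)
    {
        /* A rectangle in abstract glyph space:
           x from 3.2 to 4.8, y from 1.0 to 8.5. */
        const double x0 = 3.2, x1 = 4.8, y0 = 1.0, y1 = 8.5;

        for (int row = 9; row >= 0; row--) {           /* top to bottom */
            for (int col = 0; col < 10; col++) {
                double cx = col + 0.5, cy = row + 0.5; /* dot centre */
                int inside = cx >= x0 && cx <= x1 &&
                             cy >= y0 && cy <= y1;
                putchar(inside ? '#' : '.');
            }
            putchar('\n');
        }
        return 0;
    }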
So to answer your question, yes, an application can have access to
information about the space around and within a character glyph. A software
library such as the FreeType library provides developers with a documented
interface for accessing such information from within an application.
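For instance, a small C program can use FreeType to load the glyph for 'i'
at a given pixel size and print its metrics, including the side bearing
(the blank space to its left) and the advance width. The font path is
again only a placeholder:

    #include <stdio.h>
    #include <ft2build.h>
    #include FT_FREETYPE_H

    int main(void)
    {
        FT_Library library;
        FT_Face    face;

        if (FT_Init_FreeType(&library) ||
            FT_New_Face(library, "/path/to/font.ttf", 0, &face) ||
            FT_Set_Pixel_Sizes(face, 0, 48) ||
            FT_Load_Char(face, 'i', FT_LOAD_DEFAULT))
            return 1;

        /* FreeType reports metrics in 1/64th-of-a-pixel units. */
        FT_Glyph_Metrics *m = &face->glyph->metrics;
        printf("width         %ld px\n", m->width / 64);
        printf("height        %ld px\n", m->height / 64);
        printf("left bearing  %ld px (space left of the glyph)\n",
               m->horiBearingX / 64);
        printf("advance       %ld px (pen move to the next glyph)\n",
               m->horiAdvance / 64);

        FT_Done_Face(face);
        FT_Done_FreeType(library);
        return 0;
    }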
Best Wishes -- Ed Trager
On Fri, Oct 3, 2008 at 7:55 AM, Debbie Garside <email@example.com> wrote:
I have a pretty obscure question about Unicode and how it is used to render
characters when printed.
Can you tell me how a character such as an 'i' has space within it and
around it but also joins the dot within the 'i'? Is this part of the
encoding, and how is it created within each character? Is there a piece of
code within Unicode that tells an application where not to print?
What I am after is to find out whether an application can be told to
behave in a certain manner when it hits the space within or around a
character, before it hits the next character.
From this you can tell I am neither a software developer nor a Unicode
expert, so responses in words of one or two syllables please :-)
Hope this makes sense.
Pembrokeshire SA61 1BW
Tel: 0044 1437 766441
Fax: 0044 1437 766173