From: verdy_p (firstname.lastname@example.org)
Date: Mon Sep 07 2009 - 17:33:45 CDT
"Neil Harris" wrote:
> Ed Trager wrote:
> > I second Chris Fynn's "grass roots" suggestion.
> > On Sun, Sep 6, 2009 at 2:20 PM, Chris Fynn wrote:
> >> Instead of insisting on run-time checking of fonts - perhaps have a service
> >> which certifies fonts as being compliant with your national standard and
> >> maintains a public list of these fonts.
> Making the font-checker code available under the BSD or other suitable
> license through the same site would also be a good idea. This would
> allow people to perform their own checks in-house without needing to
> send copies of their fonts to third parties.
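The in-house check described above could be sketched minimally as a code-point coverage test. This is an illustrative assumption, not a standard's actual requirement: the required range below is hypothetical, and in practice the character map would be read from the real font file (e.g. with fontTools, `TTFont("myfont.ttf")["cmap"].getBestCmap()`); a plain dict is used here so the example is self-contained.

```python
# Sketch of an in-house font coverage check, assuming the national
# standard can be expressed as a set of required Unicode code points.

def missing_codepoints(cmap, required):
    """Return the required code points absent from the font's cmap."""
    return sorted(cp for cp in required if cp not in cmap)

# Hypothetical requirement: cover U+0F40..U+0F47 (a small slice of
# the Tibetan block, chosen purely for illustration).
required = set(range(0x0F40, 0x0F48))

# Stand-in for fontTools' getBestCmap(): code point -> glyph name.
cmap = {0x0F40: "uni0F40", 0x0F41: "uni0F41", 0x0F42: "uni0F42"}

gaps = missing_codepoints(cmap, required)
print([hex(cp) for cp in gaps])
```

A real certification service would of course also check shaping tables (GSUB/GPOS) and rendering behaviour, which simple coverage tests cannot capture.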
Another solution is to augment the "Unicode help" pages found on Wikipedia or the Wiktionaries, where there are test pages showing examples of correct rendering (with bitmap or SVG rendered images, side by side with the encoding, and comments explaining what is acceptable and what is wrong). You can then update the same page to list known working configurations (operating systems, browsers, fonts, and their respective tested versions). Such a page can also contain links to online resources on how to set up these systems or install the necessary support, without limiting the solutions. When things are corrected and no longer require a specific software installation, the list of working solutions can be simplified: such a list is expected to evolve over time, and new problems will also occur with more complex cases.
In fact this strategy has long been used on highly visible web projects, and it has helped users update their systems to create more content with correct encoding. (In fact I would suggest that your test pages be made and tested in Wiktionary rather than just in Wikipedia, because much more importance is given in Wiktionary to the respect of orthographies, and article contents are much less subject to instability and lack of coherence than in Wikipedia or even in Wikinews, where articles have a much shorter life span.)
The impact of this widely visible help has been much wider than just the Wikimedia sites: it has also helped many site designers review their own practices, helped proprietary software developers develop solutions that exhibit the expected behavior, and helped web search engines start indexing the available content better.
You can also adopt a similar strategy for any national online public library that wants to distribute text documents in forms other than just scanned copies or PDFs. Such a help system, even a minimal one, can also be included within the help pages of your national library's indexes. But this does not require implementing any automatic font validation (possibly very complex, and adding its own problems) within the system or in the implementation of renderers and browsers: what they do is try to adopt and correctly implement the most widely used conventions.
The problems that remain should be documented, notably if a non-ideal palliative encoding method is used while waiting for a future technical solution; this is done for example in Wiktionary by annotating the articles with maintenance categories to be fixed later, and with some general warning notice. This helps later maintenance operations by allowing semi-automated or manual corrections without having to perform very complex searches in a huge database or corpus of text.
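One concrete form of such a semi-automated maintenance pass can be sketched as follows. This is a hedged illustration, not Wiktionary's actual tooling: it assumes the "non-ideal palliative encoding" is text stored in a decomposed form that should eventually be NFC, and the article names and texts are invented for the example.

```python
# Sketch: flag articles whose text is not in Unicode NFC, so they
# can be tagged with a maintenance category and fixed later.

import unicodedata

def needs_fixing(text):
    """True if the text changes under NFC normalization."""
    return unicodedata.normalize("NFC", text) != text

# Hypothetical corpus: article name -> wikitext.
articles = {
    "cafe-article": "cafe\u0301",   # 'e' + combining acute (decomposed)
    "ok-article": "caf\u00e9",      # precomposed U+00E9, already NFC
}

flagged = [name for name, text in articles.items() if needs_fixing(text)]
print(flagged)  # -> ['cafe-article']
```

Running such a check over a dump lets the maintenance category be populated mechanically, exactly so that later corrections do not require complex searches over the whole corpus.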
This archive was generated by hypermail 2.1.5 : Mon Sep 07 2009 - 17:36:55 CDT