I am interested in alternative font representations that use parameters to let a designer continuously vary properties such as weight, relative width, slant angle and optical scaling. Ideally, such a representation would use a single master outline which can be adapted by all these parameters to generate any number of synthesized typefaces. In contrast, TrueType and PostScript (and OpenType, which is in effect a wrapper for either format) describe glyphs simply as outlines, and a font family of several dozen typefaces is commonly needed to cover the combinations of these parameters that a designer may wish to choose.
The use of a parameterizable, model-based representation brings other benefits too, as it can aid in the hinting process for on-screen display (more on this below) and in further (lossy) compression for transmitting font outlines on the Web. Additionally, it could be used to aid font classification (e.g. finding similar typefaces), font recognition (e.g. recognizing a font from scanned images) and even OCR. An example of a commercial use case, the digital regeneration of scanned works, is described below. And finally, such a representation could arguably reduce the amount of time required to design new typefaces, making it easy to re-use parts of existing typefaces where appropriate.
The best known (and perhaps most successful) parameterized font format may be Metafont, which was conceived by Donald Knuth as part of the TeX publishing system. However, due to its mathematical complexity, it is generally ignored by the majority of type designers (a notable exception being Hermann Zapf), and only a handful of typefaces make full use of its features. Adobe's now discontinued Multiple Master fonts were in fact collections of several "master" outlines with equivalent control points so that new typefaces could be synthesized by interpolating between the masters. There also appears to be a significant amount of work undertaken in the '90s, notably by the group led by Prof. Hersch, in which a representation based on shape components is used. Hinting and anti-aliasing are addressed as well. Since the turn of the century there appears to have been little continuation of such work; Mitsubishi's Saffron Type system being a notable exception.
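The Multiple Master idea of interpolating between masters with equivalent control points can be sketched very simply. The following is an illustrative toy, not actual Multiple Master code; the outlines and the single weight axis are invented for the example.

```python
# Two master outlines with the same number of control points, in the same
# order, are blended point-wise along one axis (here: weight).

def interpolate_outline(light, bold, t):
    """Blend two equivalent outlines: t=0.0 gives the light master,
    t=1.0 the bold master, intermediate values a synthesized weight."""
    return [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
            for (x0, y0), (x1, y1) in zip(light, bold)]

# Toy data: the rectangle of a vertical stem; the bold master's stem is wider.
light_stem = [(100, 0), (140, 0), (140, 700), (100, 700)]
bold_stem  = [(80, 0), (180, 0), (180, 700), (80, 700)]

# Halfway between the masters the stem runs from x=90 to x=160.
semibold_stem = interpolate_outline(light_stem, bold_stem, 0.5)
```

The crucial constraint, and the main labour in preparing such fonts, is that every master must have structurally identical outlines: the same contours with the same number of points in the same order.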
It would interest me to find out why parameterized fonts have not yet made it into mainstream, professional publishing applications, and whether any further work could be undertaken to "bridge this gap" between science and art. Metafont may be too complicated for type designers without a mathematical bent, who are rarely interested in the most elegant representation of a typeface. Other designers reject the idea of "computer-generated fonts" altogether. This divide could be seen as analogous to the choice of InDesign vs. LaTeX for desktop publishing, or Word vs. LaTeX for word processing.
It seems to me as if the representation developed by Hersch and Hu is more straightforward than that of Metafont. Whilst the online results show a faithful synthesis of the typefaces Times, Helvetica and Bodoni, they still appear to lack the necessary refinement that would make them suitable for professional desktop publishing applications.
Nevertheless, this fascinating work poses a number of questions which I believe would make interesting research topics:
One application in which parameterized fonts could prove very useful is in the scanning of old books. Here, the typefaces that were used at the time do not necessarily have exact digital equivalents. Even if the typeface does exist in digital form, the digital version does not always precisely correspond to the version used by traditional print methods. Font metrics (the widths of individual characters) can differ, and the perceived "colour" of the text (i.e. how "black" it appears) can vary due to the printing process used. Finally, due to optical scaling (i.e. the characters being designed to be read at that particular size), the shape of the characters can differ. Often, a digital reconstruction is not as easy on the eyes as an earlier print edition of a book.
The flexibility and power of a parameterized font representation would overcome the above drawbacks of modern type. Where a digital equivalent of the typeface is not available, a near-exact version could be synthesized simply by changing a few parameters obtained from the scan. Font metrics and other parameters, such as colour, could be adjusted individually until the perfect result is obtained. And the resulting typeface and text would take up much less space than the corresponding scanned bitmaps at a moderate resolution, making such a system ideal for e-book and Internet applications.
The concept of optical scaling is rather interesting, as it appears to be the one transformation that is completely neglected in modern typography. When a typeface is emboldened, not all strokes are widened uniformly and, in more extreme cases, the shape of the letters can be significantly altered to give a more pleasing appearance. Similarly, when a typeface is expanded or condensed, the stroke widths remain roughly the same, and the roundness of round segments is usually preserved. When a designer does not have the correct font available, they can try to "fake" the effect by applying a geometric transformation, but the result is invariably inferior.
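The geometric "faking" described above amounts to simple linear transformations of the outline. The sketch below (function names and figures are purely illustrative) shows the two most common fakes, with comments noting why each falls short of a hand-designed face.

```python
import math

def fake_oblique(points, angle_deg):
    """Shear control points to simulate a slanted face. A true italic is
    a distinct design, not merely slanted roman letterforms."""
    shear = math.tan(math.radians(angle_deg))
    return [(x + shear * y, y) for x, y in points]

def fake_condensed(points, factor):
    """Scale horizontally by `factor` (< 1 to condense). This also thins
    every vertical stem by the same factor, whereas a hand-designed
    condensed face keeps stroke widths roughly constant."""
    return [(factor * x, y) for x, y in points]
```

In both cases the transformation distorts exactly the features (stroke contrast, roundness of bowls) that a designer would preserve, which is why the result always looks wrong to a trained eye.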
The point of optical scaling is to improve readability and, to some extent, to account for potential printing problems when characters are printed at smaller sizes. This paper contains a good example of how the original metal Garamond typeface differed with point size. Smaller characters have a larger x-height, thicker strokes (to approximately match those of the body text) and a slightly wider shape to aid legibility. In addition, fonts scaled in this way are more pleasing to the eye, and more fitting to the body text that they complement.
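In a parameterized representation, this size dependence could itself be expressed as a function of point size. The toy sketch below interpolates between a hypothetical 6 pt and 12 pt design; all names and numbers are invented for illustration and are not taken from the Garamond example.

```python
def optical_parameters(point_size):
    """Interpolate design parameters between a hypothetical 6 pt and
    12 pt master design. Values are invented, purely illustrative."""
    t = max(0.0, min(1.0, (point_size - 6) / (12 - 6)))
    lerp = lambda small, large: small + t * (large - small)
    return {
        "x_height_ratio": lerp(0.52, 0.45),  # smaller sizes: larger x-height
        "stem_weight":    lerp(0.11, 0.08),  # smaller sizes: thicker strokes
        "width_factor":   lerp(1.08, 1.00),  # smaller sizes: slightly wider
    }
```

A renderer could evaluate such a function at the requested size and feed the results into the master outline, giving continuous optical scaling rather than a few fixed cuts.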
It is possible that advances in printing technology have rendered such treatment of small text redundant. Nowadays, designers seem to simply use a bolder face for smaller items of text. To my knowledge, only TeX does this properly if a corresponding Metafont typeface is being used.
The process of displaying fonts at very low resolutions (e.g. on screen) is very complicated. The TrueType format, in particular, allows for very complex instructions to adjust the way glyphs are rendered at such resolutions. For fonts that are specially hinted in this way, such as the typefaces that come with Windows, the results are excellent. However, as this hinting process is very specialized and labour-intensive, there are only a handful of fonts on the market which are hinted to this degree. Other fonts, not specifically intended for screen use, produce relatively poor results on screen.
The use of a parameterized representation, as described above, could make hinting far less laborious, if not fully automatic. Consider any character rendered with an x-height of 5 pixels: there are only a handful of ways it can be drawn and still appear legible. With a parameterized representation, hints could be designed for each shape, or "building block", of the typeface. At very small sizes, these hints could even represent single pixel patterns. These hints would then automatically be applicable to every typeface that is represented in this format.