TL;DR – Pathfinder is my master's thesis project: I explored the possible evolution of future type design tools through artificial intelligence and developed various tools and experiments for this purpose.
The latest developments in the field of artificial intelligence raise a wide range of questions about the automation of processes. Repetitive, non-creative work, in particular, can be carried out by machines. But how will the work of designers, which is largely creative, change? In which direction could design tools develop over the next few years? And how will this affect the productivity of designers? In this thesis, these questions are applied to type design.
The core question is which areas of type design have more or less potential to be supported, or even replaced, by machines. The typographic design process is divided into four areas and analyzed: preparation, proportion, shaping, and hinting/kerning. I developed ideas and experiments to explore each of these areas. For shaping, I attempted to design Latin characters using machine learning and algorithms.

Cover of my thesis




Preparation
For preparation, my first big topic, I did not run any experiments myself. During my research I came across wonderful tools like the Typedesign Framework by Manuel von Gebhardi and the Charset Builder by Alphabet-Type, and linked to them.
Proportion

There is a separate post that deals only with my proportions experiment and the tool I developed. Below I have embedded the explanatory video.
The Proportion Prediction – Glyphs Plugin
Shape Prediction
Shaping is the step of type design that has been automated the least. This is because shaping is a very creative task, and computers are mainly good at computational tasks. However, neural networks could provide assistance here in the coming years. So far, in experiments combining machine learning and typography, the influence of the tool itself is strongly visible in the results.
I have developed several experiments on this topic. Here is a selection.
There are very few experiments in generating vector paths with neural networks. In recent years there have been many experiments that generate typography based on pixel images, like the beautiful project »50k fonts« by Eric Bern. I asked myself: why do most people use pixel-based networks for typography?
It is mainly because there are hardly any networks that can generate vectors at this time, while there are many that can work with pixel data. Most of the time, however, the images used were very small and could not represent the details of a font.
I really wanted to work with vectors because they require much less data and are much more accurate than pixels. For this I found only one network: Sketch RNN.
One drawback of Sketch RNN was that it could only understand lines. I had to develop a tool that could automatically convert fonts to this so-called stroke-3 format. I built this tool in Python and used it to convert 15,000 fonts into the stroke-3 format.

On the left side a curve, on the right side the stroke-3 equivalent
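In stroke-3, each drawing is a list of (Δx, Δy, pen state) rows: an offset from the previous point plus a flag marking that the pen is lifted afterwards. A minimal sketch of the conversion, assuming the curves have already been flattened into point sequences (the function name is my own, not the actual tool's):

```python
def to_stroke3(contours):
    """Convert contours (lists of (x, y) points) into stroke-3 rows:
    (dx, dy, pen_lifted), with pen_lifted = 1 at the end of each contour."""
    rows = []
    px, py = 0, 0
    for contour in contours:
        for x, y in contour:
            rows.append((x - px, y - py, 0))
            px, py = x, y
        # lift the pen after the last point of the contour
        dx, dy, _ = rows[-1]
        rows[-1] = (dx, dy, 1)
    return rows

# An L-shaped stroke followed by a separate dot
print(to_stroke3([[(0, 0), (10, 0), (10, 10)], [(20, 20)]]))
# [(0, 0, 0), (10, 0, 0), (0, 10, 1), (10, 10, 1)]
```

Sketch RNN's training data (for example the QuickDraw dataset) uses exactly this kind of offset representation.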
I had to find the right balance: reduce the data while still keeping all the details provided by any font.

The accuracy in stroke-3. I settled on 0.33.
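This kind of point reduction is typically done with a line-simplification algorithm such as Ramer-Douglas-Peucker: points closer than a tolerance to the chord between their neighbours are dropped. A sketch (not my original tool) with the 0.33 tolerance plugged in:

```python
def perpendicular_distance(p, a, b):
    """Distance of point p from the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

def rdp(points, epsilon=0.33):
    """Ramer-Douglas-Peucker: drop points closer than epsilon to the chord."""
    if len(points) < 3:
        return points
    dists = [perpendicular_distance(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > epsilon:
        # keep the farthest point and simplify both halves recursively
        return rdp(points[:i + 1], epsilon)[:-1] + rdp(points[i:], epsilon)
    return [points[0], points[-1]]
```

A smaller epsilon keeps more detail but produces longer sequences; 0.33 was the balance that worked for my glyphs.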
This enabled me to train Sketch RNN with letters. I had to train the letters one by one; otherwise there was too much data for the network. As a result, I could not create a "font family", but I was very happy with the results. They turned out to be very interesting.

Small a's generated by Sketch RNN





Latent Space of a small a
pathfinder latent space animation
I also tried to generate letters via the skeletons.
Since the skeleton only determines the anatomy of letters, and the actual shaping is completed by the designer, e.g. with brushes, generating skeletons can promote interaction between computer and designer. The letters still offer possibilities for change after generation and do not have to be used exactly as preformed by the neural network.
For this purpose I wrote a tool that converts the 15,000 fonts and all their letters into skeletons. To do this, I partly made use of Floris Steenkamp's wonderful algorithm.

Principle of the skeletonisation

The principle of designing with skeletons
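As a toy illustration of this principle (not Floris Steenkamp's algorithm, which works on arbitrary outlines): for a simple stroke bounded by two edges sampled with the same number of points, the skeleton is just the midline between them.

```python
def midline(left_edge, right_edge):
    """Skeleton of a simple stroke: midpoints between two sampled edges.
    Assumes both edges are sampled with the same number of points."""
    return [((lx + rx) / 2, (ly + ry) / 2)
            for (lx, ly), (rx, ry) in zip(left_edge, right_edge)]

# A vertical stem 20 units wide: its skeleton is the vertical center line
left = [(0, 0), (0, 50), (0, 100)]
right = [(20, 0), (20, 50), (20, 100)]
print(midline(left, right))
# [(10.0, 0.0), (10.0, 50.0), (10.0, 100.0)]
```

A brush applied along this midline then reconstructs a stroke, which is the "designing with skeletons" idea pictured above.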
I then took all these skeletons and gave them to Sketch RNN to train with. Strangely, the results were not much different from those with outlines. I also tried adding a brush to these skeletons.

Skeletons generated by Sketch RNN

Skeletons with added brush
After the successful experiments with Sketch RNN, I was curious to see what results could be achieved with other neural networks. For this purpose I chose a network that is designed to write texts in the style of other authors: GPT-2 can be trained with text, and the special thing is that it can imitate the author's style.
To achieve a result, I took the pure path code of SVGs. An SVG is a vector-based file format. SVGs are mainly used in HTML documents and are a standard for representing vectors on the web. They contain several graphical elements; the "path" element is probably the most extensive and powerful element in SVGs. A path contains instructions on how to draw a vector object, such as "M 10 10 L 90 90 Z". These "texts" consist of letters and numbers, so I could give these individual instructions as text to the GPT-2 network to train on.
To make the results as good as possible, I had to make sure that the path instructions all followed the same principle and lived in the same number space (the same coordinate range). Paths in SVGs can be written in different ways; I decided to use paths with absolute commands and without arcs. For this translation I used svgpath from fontello.
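Normalizing means, among other things, rewriting relative commands as absolute ones. A minimal sketch of that idea for the moveto/lineto subset; the real conversion via svgpath also handles curves, shorthands, and arc removal:

```python
def to_absolute(d):
    """Rewrite relative M/L commands in an SVG path string as absolute.
    Handles only 'M', 'm', 'L', 'l' and 'Z'/'z', for illustration."""
    tokens = d.replace(",", " ").split()
    out, x, y, i = [], 0.0, 0.0, 0
    while i < len(tokens):
        cmd = tokens[i]
        if cmd in "Zz":
            out.append("Z")
            i += 1
            continue
        nx, ny = float(tokens[i + 1]), float(tokens[i + 2])
        if cmd.islower():  # relative command: offset from the current point
            nx, ny = x + nx, y + ny
        x, y = nx, ny
        out.append(f"{cmd.upper()} {nx:g} {ny:g}")
        i += 3
    return " ".join(out)

print(to_absolute("m 10 10 l 80 0 l 0 80 z"))
# M 10 10 L 90 10 L 90 90 Z
```

With every path expressed the same way, the network only ever sees one consistent "dialect" of path code.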
After this I wrote a tool in Node.js which runs through each font in my dataset and saves the paths of each letter in a text file. To separate the individual letters, each letter was placed on its own line. For the lowercase a, for example, this gave me a text file about 12,000 lines long containing many small a's.
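The dataset build itself is then just concatenation: one normalized path per font, one line each, one file per letter. A sketch of that step in Python (my actual tool was written in Node.js, and the path strings here are made up):

```python
def build_training_file(paths, out_path):
    """Write one SVG path string per line, so the network sees
    one letter per line during training."""
    with open(out_path, "w") as f:
        f.write("\n".join(paths) + "\n")

# two hypothetical lowercase-a paths from two different fonts
build_training_file(["M 10 10 L 90 10 Z", "M 12 8 L 88 12 Z"], "a.txt")
```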

File with letters as text
The results I achieved with this were very good. The letters are clearly recognizable as lowercase a's. However, it has to be said that only about one in every hundred attempts produced a working SVG path at all. The letters the network did successfully generate, though, were pretty perfect.

Generated lowercase a's by GPT-2

Generated small s's
Another nice thing was that these vectors had real curves.

Generated small a
It was also nice that the GPT-2 network could complete sequences, just as it completes sentences. So I could give the network the counter of a small »a« and it would try to draw the letter around it.
GPT-2 Completing a letter
Since the letters were pretty perfect, I wondered what would happen if I took letters of the same shape group and put them together in one file. As an example, I threw the letters "hmnru" into a file and trained on them. This time the network generated very interesting shapes and new letter creations.
I think the very good letters from before were simply too perfect already.

Interesting errors
What makes these networks so interesting to me are their errors. These errors lead to new ideas and shapes, and that is exactly what I tried to realize with these tools. I don't want an AI that generates perfect text fonts, because there are already enough good text fonts. The tool I came up with would be one that helps the designer directly within the design process. Ideally, I would like an AI that takes the letters a designer has already sketched and generates the rest of the font. That could be just rough, but it would let design decisions become visible in the typeface very early in the creation process. And any letter the designer has set, the AI should never change again: a design decision by the designer remains the supreme value.
Kerning and Hinting
For kerning and hinting, just as for preparation, I referred to existing tools and experiments that already achieve strong results in this area: »Steal Kerning from InDesign«, »iKern«, the »LS Cadencer Tool«, and this blog post.