Show HN: I built a tool to turn handwriting into a font with PyTorch/OpenCV

Hey HN,

For the last few months, I've been working on a personal project called HandFonted, and I'm excited to share the result with you all. The goal was to build a fully automated pipeline that could take a single image of a person's handwritten alphabet and generate a usable .ttf font file.

You can try it here: https://handfonted.xyz
The code is open source here: https://github.com/reshamgaire/HandFonted

The Technical Stack and Workflow: The process is broken down into three main stages:

1. Segmentation (OpenCV): The user uploads an image. I use a series of OpenCV functions to process it: resizing for consistency, adaptive Gaussian thresholding to handle lighting variations, and morphological operations (opening/dilation) to clean up noise. Contours are then detected to isolate each character. I also added a small heuristic to merge dots with their parent 'i' and 'j' bodies by checking for components that are close and vertically aligned. (There's a rough code sketch of this stage right after the list.)

2. Classification (PyTorch): This is the machine learning core. The segmented character images are fed into a custom-built CNN. I experimented with a few architectures and landed on a hybrid model that combines concepts from ResNet (residual blocks, so deeper networks still train well) and Inception (parallel convolutions with different kernel sizes). The model was trained on a dataset of character images to classify each of the 52 uppercase and lowercase letters. (A sketch of this kind of hybrid also follows the list.)

3. Font Generation (fontTools & scikit-image): Once a character is classified, the real fun begins. First, the bitmap image is skeletonized using scikit-image. A distance transform is then applied to the skeleton to create a stroke of uniform thickness. skimage.measure.find_contours is used to trace the outline of this new, clean character, converting it from a raster image to a set of vector coordinates. Finally, I use the fontTools library to programmatically build the font. It takes the vector outlines, converts them into TTF-compatible glyphs, and inserts them into a base font file, replacing the original glyph data and adjusting metrics like side-bearing and advance width.
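To make stage 1 concrete, here's roughly what that kind of OpenCV pipeline looks like. It's a simplified sketch, not the code from the repo: the helper name extract_character_boxes, the threshold parameters, and the area filter are illustrative, and the i/j dot-merging heuristic is left out.

    # Simplified sketch of the segmentation stage (illustrative parameters, not the repo's exact code).
    import cv2

    def extract_character_boxes(image_path, target_width=1200, min_area=50):
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        h, w = gray.shape
        gray = cv2.resize(gray, (target_width, int(target_width * h / w)))

        # Adaptive Gaussian thresholding copes with uneven lighting across the page.
        binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                       cv2.THRESH_BINARY_INV, 25, 10)

        # Opening removes small specks; a light dilation reconnects thin, broken strokes.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
        clean = cv2.dilate(clean, kernel, iterations=1)

        # One external contour (and bounding box) per connected character blob.
        contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
        # Sort into rough reading order: by row band, then left to right.
        return clean, sorted(boxes, key=lambda b: (b[1] // 100, b[0]))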
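For stage 2, this isn't the repo's exact architecture, just the general idea of mixing residual blocks with Inception-style parallel convolutions. The channel widths, depth, and the assumed 64x64 grayscale input are placeholders for the sketch.

    # Rough sketch of a ResNet/Inception-style hybrid classifier in PyTorch (not the exact model).
    import torch
    import torch.nn as nn

    class InceptionBlock(nn.Module):
        # Parallel convolutions with different kernel sizes, concatenated along channels.
        def __init__(self, in_ch, out_ch):
            super().__init__()
            branch = out_ch // 2
            self.b3 = nn.Sequential(nn.Conv2d(in_ch, branch, 3, padding=1),
                                    nn.BatchNorm2d(branch), nn.ReLU())
            self.b5 = nn.Sequential(nn.Conv2d(in_ch, branch, 5, padding=2),
                                    nn.BatchNorm2d(branch), nn.ReLU())

        def forward(self, x):
            return torch.cat([self.b3(x), self.b5(x)], dim=1)

    class ResidualBlock(nn.Module):
        # Two 3x3 convolutions with a skip connection, so deeper stacks still train well.
        def __init__(self, ch):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
                nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))

        def forward(self, x):
            return torch.relu(x + self.body(x))

    class HybridCNN(nn.Module):
        def __init__(self, num_classes=52):  # 26 uppercase + 26 lowercase
            super().__init__()
            self.features = nn.Sequential(
                InceptionBlock(1, 32), nn.MaxPool2d(2),
                ResidualBlock(32), nn.MaxPool2d(2),
                InceptionBlock(32, 64), nn.MaxPool2d(2),
                ResidualBlock(64),
                nn.AdaptiveAvgPool2d(1))
            self.head = nn.Linear(64, num_classes)

        def forward(self, x):  # x: (N, 1, 64, 64) grayscale character crops
            return self.head(self.features(x).flatten(1))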
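And stage 3 compressed into one function: skeletonize, re-thicken uniformly via the distance transform, trace the outline with find_contours, then draw it into a base font with a fontTools pen. Treat this as the shape of the idea rather than the implementation; the helper name bitmap_to_glyph, the stroke radius, and the padding numbers are made up, and it draws straight line segments only (no curve fitting).

    # Simplified raster-to-glyph sketch (illustrative names and numbers).
    from fontTools.pens.ttGlyphPen import TTGlyphPen
    from fontTools.ttLib import TTFont
    from scipy.ndimage import distance_transform_edt
    from skimage.measure import find_contours
    from skimage.morphology import skeletonize

    def bitmap_to_glyph(binary_img, font, glyph_name, upm=1000, stroke_radius=3):
        # 1. Collapse the stroke to a one-pixel skeleton, then re-thicken it uniformly by
        #    keeping every pixel within stroke_radius of the skeleton (distance transform).
        skeleton = skeletonize(binary_img > 0)
        uniform_stroke = distance_transform_edt(~skeleton) <= stroke_radius

        # 2. Vectorize: trace the outline of the clean stroke and draw it with a pen.
        scale = upm / binary_img.shape[0]
        pen = TTGlyphPen(None)
        xs = []
        for contour in find_contours(uniform_stroke.astype(float), 0.5):
            # find_contours yields (row, col); flip rows so y points up in font units.
            pts = [(int(c * scale), int((binary_img.shape[0] - r) * scale)) for r, c in contour]
            xs.extend(p[0] for p in pts)
            pen.moveTo(pts[0])
            for p in pts[1:]:
                pen.lineTo(p)
            pen.closePath()

        # 3. Swap the outline into the base font and give it rough horizontal metrics.
        #    hmtx entries are (advance width, left side bearing).
        font["glyf"][glyph_name] = pen.glyph()
        lsb = min(xs) if xs else 0
        advance = (max(xs) if xs else upm // 2) + 100  # ink right edge plus a little right-side bearing
        font["hmtx"][glyph_name] = (advance, lsb)

    # Usage would look something like (hypothetical file names):
    #   font = TTFont("base_template.ttf")
    #   bitmap_to_glyph(char_bitmap, font, "A")
    #   font.save("handwriting.ttf")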

Challenges & Learnings: The biggest challenge was glyph metrics. Simply plopping a new character shape into a font file doesn't work. I had to write logic to estimate a reasonable LSB (Left Side Bearing) and advance width to make the font somewhat usable for typing, though it's still an area for improvement. Training a robust classification model that works on varied handwriting styles was tough. Data augmentation was key here. The project was a fantastic deep dive into some classic computer vision problems and the surprisingly complex world of font file structures.

I built this because I thought it was a cool problem to solve. It's completely free and open source. I would love to hear any feedback you have, especially on the technical implementation. Happy to answer any questions!
