I've been working on Zignal, a zero-dependency image processing library that we use in production for virtual makeup try-on at Ameli (https://ameli.co.kr/).
What makes this library especially useful to me is the terminal rendering support: you can output images directly to your terminal using Sixel, the Kitty graphics protocol, ANSI colors, or even braille patterns. Debugging image processing code is much nicer when you can see intermediate results right in your terminal.
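To give a rough idea of that workflow, here's a minimal Zig sketch. The names here (Image, Rgba, load, display) are illustrative assumptions, not the verified API, so check the repo for the real calls:

    const std = @import("std");
    const zignal = @import("zignal");

    pub fn main() !void {
        var gpa = std.heap.GeneralPurposeAllocator(.{}){};
        defer _ = gpa.deinit();
        const allocator = gpa.allocator();

        // Assumed API: load a PNG into an RGBA image.
        var image = try zignal.Image(zignal.Rgba).load(allocator, "debug_frame.png");
        defer image.deinit(allocator);

        // Assumed API: render to the terminal, picking the best protocol
        // the terminal supports (Kitty graphics, Sixel, ANSI blocks, or braille).
        try image.display(.auto);
    }

The point is that a one-line call in the middle of a pipeline replaces the usual "write to disk, open in an image viewer" loop.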
It has the basics you'd expect: color space conversions (RGB, HSL, Lab, etc.), image I/O, transforms, and filtering. There's also a canvas API with many drawing primitives. Python bindings are available too, though they're very incomplete at the moment.
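As a taste of the color API, a conversion from RGB to Lab might look roughly like this; again, the struct and method names (Rgb, toLab, field names) are guesses for illustration rather than the documented interface:

    const std = @import("std");
    const zignal = @import("zignal");

    pub fn main() void {
        // Assumed API: color structs with per-channel fields and conversion helpers.
        const salmon = zignal.Rgb{ .r = 250, .g = 128, .b = 114 };
        const lab = salmon.toLab(); // method name is a guess; check the docs
        std.debug.print("L={d:.2} a={d:.2} b={d:.2}\n", .{ lab.l, lab.a, lab.b });
    }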
Fair warning: it's not feature-complete. I add features as I need them or when I'm curious about how something works (for example, when I wanted to understand SVD, I ported dlib's implementation). But what's there is solid, and the API is designed to be consistent as it grows.
The library is heavily inspired by dlib but written from scratch in Zig. MIT licensed.