Show HN: Red Candle – Run LLMs Natively in Ruby with Rust

Posted by cpetersen on 7/28/2025, 12:45:04 PM (github.com) | 4 points | 0 comments
I've been working on Red Candle, a Ruby gem that runs LLMs (Llama, Mistral, Gemma, Phi) directly in your Ruby process through Rust bindings based on Hugging Face's candle crate. No Python, no separate server process: just a native extension with Metal/CUDA acceleration.

It's been useful for adding AI features to Rails apps without the complexity of managing separate services. Would love feedback from anyone working with LLMs in Ruby.
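To make the in-process model concrete, here is a minimal usage sketch. The class and method names (`Candle::LLM.from_pretrained`, `generate`) and the model identifiers are assumptions modeled on typical candle-style wrappers, not confirmed against Red Candle's actual API; the gem's README is the authoritative reference.

```ruby
# Hypothetical sketch: class/method names and model identifiers below are
# assumptions, not verified against the red-candle gem's real API.
require "candle"

# Load a quantized model; weights would be fetched from Hugging Face
# on first use and cached locally.
llm = Candle::LLM.from_pretrained(
  "TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
  gguf_file: "mistral-7b-instruct-v0.2.Q4_K_M.gguf"
)

# Inference runs inside the Ruby process via the Rust extension;
# no Python runtime or model server is involved.
puts llm.generate("Summarize this support ticket in one sentence: ...")
```

Because everything stays in-process, a Rails app could call this from a background job with no extra service to deploy or monitor.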
