Show HN: SwiftAI – open-source library to easily build LLM features on iOS/macOS
SwiftAI gives you:
- A single, model-agnostic API
- An agent/tool loop
- Strongly-typed structured outputs
- Optional chat state
Backstory: We started experimenting with Apple’s local models because they’re free (no API calls), private, and work offline. The problem: not all devices support them (older iPhones, Apple Intelligence disabled, low battery, etc.). That meant writing two codepaths — one for local, one for cloud — and scattering branching logic across the app. SwiftAI centralizes that decision: your feature code stays the same whether you run on-device or in the cloud.
Example
import SwiftAI

// Prefer the free, private on-device system model; fall back to a cloud
// model when it's unavailable (older iPhone, Apple Intelligence off, etc.).
let llm: any LLM = SystemLLM.ifAvailable ?? OpenaiLLM(model: "gpt-5-mini", apiKey: "<key>")

// The call site is identical regardless of which backend was picked.
let response = try await llm.reply(to: "Write a haiku about Hacker News")
print(response.content)
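To illustrate the strongly-typed structured outputs mentioned above, here is a minimal sketch of what such a call could look like. The `@Generable` macro and the `returning:` parameter are assumptions for illustration (modeled on Apple's FoundationModels guided-generation style), not SwiftAI's confirmed API — check the docs linked below for the real shape.

```swift
import SwiftAI

// Hypothetical structured-output sketch; macro and parameter names
// are assumptions, not the library's verified API.
@Generable
struct Haiku {
    let title: String
    let lines: [String]
}

let llm: any LLM = SystemLLM.ifAvailable ?? OpenaiLLM(model: "gpt-5-mini", apiKey: "<key>")

// Ask the model for a typed value instead of free-form text.
let haiku = try await llm.reply(
    to: "Write a haiku about Hacker News",
    returning: Haiku.self
).content

print(haiku.lines.joined(separator: "\n"))
```

The appeal of this pattern is that parsing and validation live in the library: your feature code receives a `Haiku`, not a string to regex apart.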
It's open source — we'd love for you to try it, break it, and help shape the roadmap. Join our Discord or Slack, or email us at root@mit12.dev.
Links
- GitHub (source, docs): https://github.com/mi12labs/SwiftAI
- System Design: https://github.com/mi12labs/SwiftAI/blob/main/Docs/Proposals...
- Swift Package Index (compat/builds): https://swiftpackageindex.com/mi12labs/SwiftAI
- Discord: https://discord.com/invite/ckfVGE5r
- Slack: https://mi12swiftai.slack.com/join/shared_invite/zt-3c3lr6da...
Question/feature request: Is it possible to bring my own CoreML models over and use them? Right now I honestly end up bundling llama.cpp and running GGUF models because I can't figure out the setup for CoreML models. I'd love for all of that to be abstracted away for me :)
Nice to see someone digging in on the system models. That's on my list to play with, but I haven't seen much new info on them or how they perform yet.