Show HN: SwiftAI – open-source library to easily build LLM features on iOS/macOS

43 points by mi12-root · 8/28/2025, 1:51:51 PM · github.com
We built SwiftAI, an open-source Swift library that lets you use Apple’s on-device LLMs when available (Apple opened access in June), and fall back to a cloud model when they aren’t available — all without duplicating code.

SwiftAI gives you:

- A single, model-agnostic API
- An agent/tool loop
- Strongly-typed structured outputs (sketched below)
- Optional chat state
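
To give a feel for the structured-output piece, here is a minimal sketch. The `MovieReview` type and the `reply(to:returning:)` call are illustrative assumptions (loosely modeled on Apple's FoundationModels style), not confirmed SwiftAI API; check the repo docs for the real shape.

  import SwiftAI

  // Hypothetical structured-output call: ask the model for a typed value
  // instead of free-form text. Names below are illustrative assumptions.
  struct MovieReview: Codable {
    let title: String
    let rating: Int      // e.g. 1-5
    let summary: String
  }

  let llm: any LLM = SystemLLM.ifAvailable ?? OpenaiLLM(model: "gpt-5-mini", apiKey: "<key>")

  let review: MovieReview = try await llm.reply(
    to: "Review the movie Arrival",
    returning: MovieReview.self
  ).content
  print(review.rating)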

Backstory: We started experimenting with Apple’s local models because they’re free (no API calls), private, and work offline. The problem: not all devices support them (older iPhones, Apple Intelligence disabled, low battery, etc.). That meant writing two codepaths — one for local, one for cloud — and scattering branching logic across the app. SwiftAI centralizes that decision. Your feature code stays the same whether you’re on-device or cloud.

Example

  import SwiftAI

  // Use Apple's on-device model when the device supports it;
  // otherwise fall back to a cloud-hosted model.
  let llm: any LLM = SystemLLM.ifAvailable ?? OpenaiLLM(model: "gpt-5-mini", apiKey: "<key>")

  let response = try await llm.reply(to: "Write a haiku about Hacker News")
  print(response.content)
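
The feature list also mentions an agent/tool loop. Below is a rough sketch of how that might look on top of the example above; the `Tool` protocol, its requirements, and the `tools:` parameter are assumptions for illustration, not confirmed SwiftAI API, so check the repo docs for the real shape.

  // Hypothetical tool definition; the protocol name and requirements are assumptions.
  struct WeatherTool: Tool {
    let name = "get_weather"
    let description = "Returns the current temperature for a city."

    func call(city: String) async throws -> String {
      // In a real app, look this up from your own data source or API.
      return "18°C"
    }
  }

  // Pass the tool to the model; the library runs the call-and-respond loop.
  let answer = try await llm.reply(
    to: "Should I bring a jacket in Paris today?",
    tools: [WeatherTool()]
  )
  print(answer.content)
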
It's open source; we'd love for you to try it, break it, and help shape the roadmap. Join our Discord or Slack, or email us at root@mit12.dev.

Links

- GitHub (source, docs): https://github.com/mi12labs/SwiftAI

- System Design: https://github.com/mi12labs/SwiftAI/blob/main/Docs/Proposals...

- Swift Package Index (compat/builds): https://swiftpackageindex.com/mi12labs/SwiftAI

- Discord: https://discord.com/invite/ckfVGE5r and Slack: https://mi12swiftai.slack.com/join/shared_invite/zt-3c3lr6da...

Comments (6)

jc4p · 1h ago
I do a lot of AI work, and right now the story for running LLMs on iOS is very painful (though doing Whisper etc. is pretty nice), so this is exciting. The API looks Swift-native and great; I can't wait to use it!

Question/feature request: Is it possible to bring my own CoreML models over and use them? I honestly end up bundling llama.cpp and running GGUF models right now because I can't figure out the setup for using CoreML models; I'd love for all of that to be abstracted away for me :)

mi12-root · 56m ago
That’s a good suggestion, and it indeed sounds like something we’d want to support. Could you help us better understand your use case? For example, where do you usually get the models (e.g., Hugging Face)? Do you fine-tune them? Do you mostly care about LLMs (since you only mentioned llama.cpp)?

deanputney · 2h ago
Awesome, this is a good idea! Having a nice wrapper to make LLM calls easier is very helpful too :)

Nice to see someone digging in on the system models. That's on my list to play with, but I haven't seen much new info on them or how they perform yet.

mi12-root · 1h ago
We’ve begun evaluating the model internally and will share our findings in more detail later. So far, we’ve found that it performs well on tasks such as summarization, writing, and data extraction, and shows particular strength in areas like history and marketing. However, it struggles with STEM topics (e.g., math and physics), often fails to follow long or complex instructions, and sometimes avoids answering certain queries. If you’d like us to evaluate a particular use case or vertical, please share it with us!

keyle · 1h ago
Needs more examples on custom.

mi12-root · 1h ago
Thanks for the feedback! When you say “custom,” do you mean additional integrations with LLM providers, or more documentation on how to build your own custom integration? If you mean the former, we’re currently focused on stabilizing the API and reaching feature parity with FoundationModels (e.g., adding streaming). After that, we plan to add more integrations, such as Claude, Gemini, and on-device LLMs from Hugging Face.