Launch HN: Cactus (YC S25) – AI inference on smartphones
We're seeing a major push towards on-device AI, and for good reason: it cuts latency from >1 sec to <100 ms, guarantees privacy by default, works offline, and doesn't rack up a massive API bill at scale.
Beyond benchmarks, tools and agentic designs make small models genuinely capable. This has been corroborated by papers like https://arxiv.org/abs/2506.02153, and we see model companies like DeepMind pushing aggressively into smaller models with Gemma3 270m and 308m. We found Qwen3 600m to be great at tool calls, for instance.
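To make the tool-call point concrete, here is a minimal, framework-agnostic sketch in plain Python. The get_weather tool and the hardcoded model output are illustrative assumptions, not Cactus APIs; it only shows the shape of the loop a small on-device model drives:

    import json

    # Hypothetical tool exposed to the model through its tool-call schema.
    def get_weather(city: str) -> str:
        return f"18C and cloudy in {city}"  # stub; a real app would query a local source

    TOOLS = {"get_weather": get_weather}

    # A small model like Qwen3 600m emits a structured call instead of prose.
    # Hardcoded here for illustration; in practice this is the model's output.
    model_output = '{"tool": "get_weather", "arguments": {"city": "London"}}'

    call = json.loads(model_output)
    result = TOOLS[call["tool"]](**call["arguments"])
    print(result)  # fed back to the model so it can compose the final answer

The point is that a 600m-parameter model only has to produce that small JSON object correctly; the tool does the heavy lifting.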
Several frameworks already try to solve this, but at my previous job I saw them struggle in production compared to research and playgrounds:
- They optimise for modern devices, but 70% of phones today are low- to mid-budget.
- Bloated app bundle sizes and battery drain are serious concerns for users.
- GPU inference drains phone batteries unacceptably; NPUs are preferable, but few phones have them for now.
- Some are platform-specific, requiring different models and workflows for different operating systems.
At Cactus, we've written kernels and an inference engine for running AI locally on any phone, from the ground up.
Cactus is designed for mobile devices and their constraints. Every design choice (energy efficiency, accelerator support, quantization levels, supported models, weight format, context management) was driven by them. We also provide minimalist SDKs that let app developers build agentic workflows in 2-5 lines of code, sketched below.
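As a pseudocode sketch of the shape (the names here are illustrative, not the exact SDK surface; see the repo for the real APIs):

    # Hypothetical sketch only: "cactus", "LM.download", and "complete"
    # are illustrative names, not the real Cactus SDK surface.
    from cactus import LM

    model = LM.download("qwen3-600m-int8")   # fetch quantized weights once
    reply = model.complete("Summarize my unread notifications.")
    print(reply)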
We made a Show HN post when we started the project to get the community's thoughts (https://news.ycombinator.com/item?id=44524544). Based on your feedback, we built Cactus bottom-up to solve those problems, and are launching the Cactus Kernels, Cactus Graph and Cactus Engine, all designed for phones and tiny devices.
CPU benchmarks for Qwen3-600m-INT8:
- 16-20 toks/sec on Pixel 6a / Galaxy S21 / iPhone 11 Pro
- 50-70 toks/sec on Pixel 9 / Galaxy S25 / iPhone 16
- Time-to-first-token is as low as 50ms depending on prompt size.
On NPUs, we see Qwen3-4B-INT4 run at 21 toks/sec.
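To put those rates in perspective, assume a typical ~150-token reply: at 50 toks/sec it streams in about 3 seconds, and at 16 toks/sec in roughly 9-10 seconds, while the low time-to-first-token means text starts appearing almost immediately.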
We are open-source (https://github.com/cactus-compute/cactus). Cactus is free for hobbyists and personal projects, with a paid license required for commercial use.
We have a demo app on the App Store at https://apps.apple.com/gb/app/cactus-chat/id6744444212 and on Google Play at https://play.google.com/store/apps/details?id=com.rshemetsub....
In addition, numerous apps use Cactus in production, including AnythingLLM (https://anythingllm.com/mobile) and KinAI (https://mykin.ai/). Collectively they run over 500k inference tasks per week.
While Cactus runs on all Apple devices, including MacBooks, thanks to their shared design, for desktops and AMD/Intel/Nvidia hardware generally, please use HuggingFace, Llama.cpp, Ollama, vLLM, or MLX. They're built for those platforms, support x86, and are all great!
Thanks again, and please share your thoughts; we're keen to understand your views.
Makes it really easy to plug and play different models on my phone.
If anybody is curious what a Pixel 9 Pro is capable of:
Tokens: 277 - TTFT: 1609ms - 9 tok/sec
qwen2.5 1.5b instruct q6_k
Sure, here's a simple implementation of the Bubble Sort algorithm in Python:
    def bubble_sort(arr):
        n = len(arr)
        for i in range(n):
            # Flag to detect any swap in current pass
            swapped = False
            for j in range(0, n-i-1):
                # Swap if the element found is greater than the next element
                if arr[j] > arr[j+1]:
                    arr[j], arr[j+1] = arr[j+1], arr[j]
                    swapped = True
            # If no swap occurs in the inner loop, the array is already sorted
            if not swapped:
                break

    # Example usage:
    arr = [64, 34, 25, 12, 22, 11, 90]
    bubble_sort(arr)
    print("Sorted array is:", arr)
This function sorts the array in ascending order using the Bubble Sort algorithm. The outer loop runs n times, where n is the length of the array. The inner loop walks through the array, comparing adjacent elements and swapping them if they are in the wrong order. The swapped flag detects whether any elements were swapped in the current pass; if none were, the array is already sorted and the loop exits early.
Same model should run 3x faster on the same phone.
These improvements are still being pushed to the SDKs though.
I've had great experiences with gpt-oss-20b on my laptop; it's a genuinely useful local model.
3x probably doesn't get my Pixel 9 Pro to the point of running 20b models, but it's getting close!
I already knew to avoid "please share your thoughts," although I guess I am kind of violating that one by even commenting.
I downloaded Cactus a couple months back because I saw a comment, but a bait and switch like this makes me want to look for an actual open source solution.
Your license change goes against that. You say it’s free for personal use but how many times do people create something for personal use and monetize it later? What if I use Cactus chat to control a commercial app? Does that make Cactus chat use “commercial”?
https://github.com/cactus-compute/cactus/commit/b1b5650d1132...
Use open source and stick with it, or don't touch it at all, and tell any VC shitheels saying otherwise to pound sand.
If your business is so fragile or unoriginal that it can't survive being open source, then it will fail anyway. If you make it open source, embrace the ethos and build community, then your product or service will be stronger for it. If the big players clone your work, you get instant underdog credibility and notoriety.
It’s still free for the community; it's just that corporations need a license. Should we make this clearer in the license?
Just say that in the license.
e.g. if I built a basic LLM chat app with Qwen3 600m + Cactus, what's the total app size?