Seven Hours, Zero Internet, and Local AI Coding at 40k Feet

scastiel · 9/3/2025, 11:11:23 AM · betweentheprompts.com

Comments (1)

NitpickLawyer · 12h ago
I don't know what Ollama uses behind the scenes, but there's also MLX for Macs, which should generally be faster. There's also the top_k setting on gpt-oss, which might need tweaking. I saw reports that setting it to 100 instead of the default 0 brings an extra ~20 t/s in generation speed.
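A minimal sketch of how one could try that top_k tweak in Ollama, assuming a gpt-oss model is already pulled locally (the model name and the ~20 t/s figure are from the comment above, not verified):

```shell
# Option 1: override top_k per request via Ollama's HTTP API
curl http://localhost:11434/api/generate -d '{
  "model": "gpt-oss",
  "prompt": "Hello",
  "options": { "top_k": 100 }
}'

# Option 2: bake the setting into a model variant with a Modelfile
# containing:
#   FROM gpt-oss
#   PARAMETER top_k 100
ollama create gpt-oss-topk100 -f Modelfile
ollama run gpt-oss-topk100
```

Both routes leave the base model untouched; the Modelfile variant is handier if you want the setting applied every time without passing options per request.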