Small language models are the future of agentic AI

64 points by favoboa | 25 comments | 7/1/2025, 3:33:49 AM | arxiv.org ↗

Comments (25)

bryant · 4h ago
A few weeks ago, I processed a product refund with Amazon via agent. It was simple, straightforward, and surprisingly obvious that it was backed by a language model based on how it responded to my frustration about it asking tons of questions. But in the end, it processed my refund without ever connecting me with a human being.

I don't know whether Amazon relies on LLMs or SLMs for this and for similar interactions, but it makes tons of financial sense to use SLMs for narrowly scoped agents. In use cases like customer service, the intelligence behind LLMs is all wasted on the task the agents are trained for.

Wouldn't surprise me if down the road we start suggesting role-specific SLMs rather than general LLMs as a mitigation for both ethics and security risks, too.

automatic6131 · 2h ago
You can (used to?) get a refund on Amazon with a normal CRUD app flow. Putting an SLM and a conversational interface over it is a backwards step.
oblio · 34m ago
From our perspective as users. From the company's perspective? Net positive, they don't need to hire people.

We're going to be so messed up in a decade or so when only 10-20-30% of the population is employable in decent jobs.

People keep harping on about people moving on with their lives, but people don't. Many industrial heartlands in the developed world are wastelands compared to what they were: Wallonia in Belgium, Scotland in the UK, the Rust Belt in the US.

People don't really move on, they suffer, sometimes for generations.

thatjoeoverthr · 16m ago
A CRUD flow is the actual automation, which was already digested into the economy by 2005 or so. PHP is not a guy in the back who types HTML really fast when you click a button :)
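
The boring version is just a deterministic endpoint. A minimal sketch (Flask-style Python; the route, data store, and ticket format are all hypothetical):

    # Deterministic refund flow: one state transition, no inference.
    from flask import Flask, jsonify

    app = Flask(__name__)
    ORDERS = {"123": {"status": "delivered", "refundable": True}}  # toy data

    @app.post("/orders/<order_id>/refund")
    def refund(order_id):
        order = ORDERS.get(order_id)
        if order is None:
            return jsonify(error="unknown order"), 404
        if not order["refundable"]:
            return jsonify(error="not refundable"), 409
        order["status"] = "refund_pending"
        return jsonify(status="refund_pending", ticket=f"RF-{order_id}")

Cheap, auditable, and it never hallucinates a confirmation.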

The LLM, here, is the opposite: additional human labor to build the integrations, additional capital for chips, the heavy cost of inference, an additional skeuomorphic UI (it self-identifies as a chat/texting situation), and your wasted time. I would almost call it "make work".

torginus · 2h ago
I just had my first experience with a customer service LLM. I needed to get my account details changed, and for that I needed to use the customer support chat.

The LLM told me what information was needed and what the process was, and I followed through the whole thing.

After I finished, it reassured me that everything was in order and that my request was being processed.

For two weeks, nothing happened. I emailed the (human) support staff, and they responded that they could see no such request in their system. Turns out the LLM hallucinated the entire customer flow and was just spewing BS at me.

dotancohen · 1h ago
This is reason number two why I always request the service ticket number.

Reason number one being that when the rep feels you are going to hold them accountable, to the point of requesting such a number, they figure you might not be the type of client to pull shenanigans with. Maybe they suspect me of being a corporate QC agent? Either way, requesting such a number demonstrably reduces friction.

koakuma-chan · 17m ago
You basically have to always use tool_choice="required" or the LLM will derail
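For example, with an OpenAI-style chat completions call (a minimal sketch; the create_ticket tool is a hypothetical stand-in for whatever the real backend exposes):

    # Force the model to call a tool instead of free-texting a confirmation.
    from openai import OpenAI

    client = OpenAI()

    tools = [{
        "type": "function",
        "function": {
            "name": "create_ticket",  # hypothetical backend tool
            "description": "Open a service ticket and return its ID.",
            "parameters": {
                "type": "object",
                "properties": {"summary": {"type": "string"}},
                "required": ["summary"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Please change my account details."}],
        tools=tools,
        tool_choice="required",  # must emit a tool call, not just prose
    )

    # The reply now contains a tool call your code actually executes,
    # rather than a plausible-sounding claim that something was done.
    print(resp.choices[0].message.tool_calls)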
thatjoeoverthr · 14m ago
The LLM is a smoke bomb they shot in your face :)
ttctciyf · 1h ago
There really should be some comeback for this type of enshAItification.

We're supposed to think "oh it's an LLM, well, that's ok then"? A question we'll be asking more frequently as time goes on, I suspect.

exe34 · 2h ago
That's why I take screenshots of anything that I don't get an email confirmation for.
quietbritishjim · 1h ago
Air Canada famously lost a court case recently (though the actual interaction happened in 2022) where their chatbot promised a discount that they didn't actually offer. They tried to argue that the chatbot was a "separate legal entity that is responsible for its own actions"!! It still took that person a court case and countless hours to get the discount, so it's hardly a victory really.

https://www.bbc.co.uk/travel/article/20240222-air-canada-cha...

nurettin · 59m ago
This is why law in its current form is wrong in every country and jurisdiction.

We need "cumulative cases" that work like this: you submit your complaints to existing cumulative cases or open a new one, these are vetted by prosecutors.

They accumulate evidence over time, and once it reaches a respectable sum, a court case is opened (paid for by the corporation) and everyone receives what they are owed if/when the case is won. If the case is lost, appealed, and lost again, that cumulative case is banned.

Cumulative cases would have greater repercussions for large corporate entities than "single person takes to court for several months to fight for a $40 discount".

And the people who complain rightfully eventually get a nice surprise in their bank accounts.

flowerthoughts · 1h ago
No mention of mixture-of-experts. Seems related. They do list a DeepSeek R1 distillate as an SLM. The introduction starts with a sales pitch. And there's a call-to-action at the end. This seems like marketing with source references sprinkled in.

That said, I also think the "Unix" approach to ML is right. We should see more splits; however, currently all these tools rely on great language comprehension. Sure, we might be able to train a model on only English and delegate translation to another model, but that will certainly lose (much needed) color. So if all of these agents will need comprehensive language understanding anyway, to be able to communicate with each other, is an SLM really better than MoE?

What I'd love to "distill" out of these models is domain knowledge that is stale anyway. It's great that I can ask Claude to implement a React component, but why does the model that can do taxes so-so also try to write a React component so-so? Perhaps what's needed is a search engine to find agents. Now we're into expensive marketplace subscription territory, but that's probably viable for companies. It'll create a larger us-them chasm, though, and the winner takes it all.

iagooar · 1h ago
I think that part of the beauty of LLMs is their versatility in so many different scenarios. When I build my agentic pipeline, I can plug in any of the major LLMs, add a prompt to it, and have it go off to do its job.

Specialized, fine-tuned models sit somewhere in between LLMs and traditional procedural code. The fine-tuning process takes time and is a risk if it goes wrong. In the meantime, the LLMs by major providers get smarter every day.

Sure enough, latency and cost are a thing. But unless you have a very specific task performed at a huge scale, you might be better off using an off-the-shelf LLM.

mg · 1h ago
I wonder how the math turns out when we compare the energy use of local vs remote models from first principles.

A server needs energy to build, house, power, and maintain. It is optimized for throughput and can be used 100% of the time. To use the server, additional energy is needed to send packets through the internet.

A local machine needs energy to build and power it. If it lives inside a person's phone or laptop, one could say housing and maintenance are free. It is optimized to have a nice form factor for personal use. It is used maybe 10% of the time or so. No energy for internet packets is needed when using the local machine.

My initial gut feeling is that the server will have way better energy efficiency when it comes to the amount of calculations it can do over its lifetime and how much energy it needs over its lifetime. But I would love to see the actual math.
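
Something like this back-of-envelope, where every number below is an assumption picked for illustration, not a measurement:

    # Lifetime energy per token: (embodied + operating energy) / tokens served.
    LIFETIME_H = 5 * 365 * 24  # assume a 5-year service life for both devices

    def wh_per_token(power_w, util, tok_per_s, embodied_kwh, overhead=1.0):
        busy_s = LIFETIME_H * 3600 * util
        tokens = busy_s * tok_per_s
        op_kwh = power_w * overhead * LIFETIME_H * util / 1000
        return (embodied_kwh + op_kwh) * 1000 / tokens  # Wh per token

    # Server: high utilization, batched throughput, datacenter PUE ~1.1.
    server = wh_per_token(power_w=700, util=0.9, tok_per_s=5000,
                          embodied_kwh=1500, overhead=1.1)

    # Local device: ~10% duty cycle, single-stream small-model decode.
    local = wh_per_token(power_w=50, util=0.1, tok_per_s=20,
                         embodied_kwh=300)

    print(f"server: {server:.2e} Wh/token")
    print(f"local:  {local:.2e} Wh/token ({local/server:.0f}x more per token)")

Under these made-up numbers the server comes out well ahead per token, mostly because batching keeps it busy and amortizes the embodied energy; the gap narrows if the local model is much smaller than what the server runs.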

danhor · 1h ago
As the local machine is there anyway, only the increase in energy usage should be considered, while the server only exists for this use case (distributed across all users).

The local machine is usually also highly constrained in computing power, energy (when battery driven), and thermals, so I would expect the compute needed to be very different. The remote user will happily choose a large(r) model, while for the local use case a highly optimized (small) model will be chosen.

rayxi271828 · 1h ago
Wonder what I'm missing here. A smaller number of repetitive tasks - that's basically just simple coding + some RPA sprinkled on top, no?

Once you've settled down on a few well-known paths of action, wouldn't you want to freeze those paths and make it 100% predictable, for the most part?

janpmz · 4h ago
One could start with a large model for exploration during development, and then distill it down to a small model that covers the variety of the task and fits on a USB drive. E.g. when I use a model for gardening purposes, I could prune knowledge about other topics.
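A minimal sketch of that distillation step (PyTorch; the temperature and the training-loop outline are illustrative assumptions):

    import torch
    import torch.nn.functional as F

    def distill_loss(student_logits, teacher_logits, temperature=2.0):
        # KL divergence between softened teacher and student distributions,
        # scaled by T^2 to keep gradients comparable across temperatures.
        t = temperature
        soft_teacher = F.softmax(teacher_logits / t, dim=-1)
        log_student = F.log_softmax(student_logits / t, dim=-1)
        return F.kl_div(log_student, soft_teacher, reduction="batchmean") * t * t

    # Outline: run the frozen large model on task-specific prompts
    # (e.g. gardening questions) and fit the small model to its outputs.
    # for batch in gardening_prompts:           # hypothetical data loader
    #     with torch.no_grad():
    #         teacher_logits = teacher(batch)
    #     loss = distill_loss(student(batch), teacher_logits)
    #     loss.backward(); optimizer.step(); optimizer.zero_grad()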
loktarogar · 4h ago
Pruning is exactly what you're looking for in a gardening SLM
dotancohen · 53m ago
What would you need an LLM for while gardening? I'm imagining for problem solving, like asking "what worm looks like a small horse hair?". But that would require the LLM to know what a horse hair is. In other words, not a distilled model, but rather a model that contains pretty much anything our gardener's imagination will make analogies out of.
moqizhengz · 2h ago
How can SLMs be the future of AI when we are not even sure whether LMs are the future of AI?
boxed · 2h ago
"Future" maybe means "next two months"? :P
eric-burel · 4h ago
Slightly related, on the cooperation between large models and small models (traditional ML): https://arxiv.org/abs/2409.06857
sReinwald · 20m ago
IMO, the paper commits an omission that undermines the thesis quite a bit: context window limitations are mentioned only once in passing (unless I missed something) and then completely ignored throughout the analysis of SLM suitability for agentic systems.

This is not a minor oversight - it's arguably, in my experience, the most prohibitive technical barrier to this vision. Consider the actual context requirements of modern agentic systems:

    - Claude 4 Sonnet's system prompt alone is reportedly roughly 25k tokens for the behavioral instructions and instructions for tool use
    - A typical coding agent needs: system instructions, tool definitions, current file context, and the broader context of the project it's working in. You might also want to pull in documentation for any frameworks or API specs.
    - You're already at 5-10k tokens of "meta" content before any actual work begins
Most SLMs that can run on consumer hardware are architecturally capped at 32k or 128k contexts, but depending on what you consider a "common consumer electronic device," you'll never be able to make use of that window if you want inference at reasonable speeds. A 7B or 8B model like DeepSeek-R1-Distill or Salesforce xLAM-2-8b would take 8 GB of VRAM at a Q4_K_M quant with a Q8_0 K/V cache at 128k context. IMO, that's not simple consumer hardware in the sense of the broad computing market; it's enthusiast gaming hardware. Not to mention that performance degrades significantly before hitting those limits.
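For reference, the K/V-cache arithmetic behind that kind of estimate (architecture numbers are assumptions modeled on a Qwen2.5-7B-class model with grouped-query attention):

    # Rough KV-cache sizing; all architecture numbers are assumptions.
    n_layers, n_kv_heads, head_dim = 28, 4, 128
    kv_bytes = 1                      # Q8_0 cache: ~1 byte per element
    ctx_tokens = 128 * 1024

    # K and V each store n_kv_heads * head_dim values per layer per token.
    bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * kv_bytes
    cache_gib = bytes_per_token * ctx_tokens / 2**30

    weights_gib = 4.5                 # assumed size of a ~7B model at Q4_K_M
    print(f"cache: {cache_gib:.1f} GiB, total: {cache_gib + weights_gib:.1f} GiB")
    # -> ~3.5 GiB of cache on top of the weights, before activations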

The "context rot" phenomenon is real: as the ratio of instructional/tool content to actual tasks content increases, models become increasingly confused, hallucinate non-existent tools or forget earlier context. If you have worked with these smaller models, you'll have experienced this firsthand - and big models like o3 or Claude 3.7/4 are not above that either.

Beyond context limitations, the paper's economic efficiency claims simply fall apart under system-level analysis. The authors present simplistic FLOP comparisons while ignoring critical inefficiencies:

    - Retry tax: a complex task that an LLM completes with a 90% success rate might very well take an SLM 3 or 4 attempts, each with full orchestration overhead
    - Task decomposition overhead: splitting a task that an LLM might complete in one call into five SLM sub-tasks means 5x context setup, inter-task communication costs, and multiplicative error rates
    - Infrastructure efficiency: modern datacenters achieve PUE ratios near 1.1 with liquid cooling and >90% GPU utilization through batching. Consumer hardware? Gaming GPUs at 5-10% utilization, residential HVAC never designed for sustained compute, and 80-85% power conversion efficiency per device.
When you account for failed attempts, orchestration overhead and infrastructure efficiency, many "economical" SLM deployments likely consume more total energy than centralized LLM inference. It's telling that NVIDIA Research, with deep access to both datacenter and consumer GPU performance data, provides no actual system-level efficiency analysis.
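
A toy expected-cost model makes the retry tax concrete (success rates, per-call costs, and the overhead term are illustrative assumptions, not measurements):

    # Expected cost of retrying a task until it succeeds (or we give up).
    def expected_cost(success_rate, cost_per_attempt, max_attempts=5):
        cost, p_reach = 0.0, 1.0
        for _ in range(max_attempts):
            cost += p_reach * cost_per_attempt
            p_reach *= 1 - success_rate   # chance we still need another try
        return cost

    # One big LLM call vs. five sub-tasks on a 10x-cheaper SLM, where each
    # SLM attempt also pays an assumed +0.5 of orchestration overhead.
    llm = expected_cost(success_rate=0.90, cost_per_attempt=10.0)
    slm = 5 * expected_cost(success_rate=0.70, cost_per_attempt=1.0 + 0.5)
    print(f"LLM: {llm:.1f}  SLM pipeline: {slm:.1f}")
    # -> the headline 10x per-call advantage all but disappears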

For a paper positioning itself as a comprehensive analysis of SLM viability in agentic systems, sidestepping both context limitations and true system economics while making sweeping efficiency claims feels intellectually dishonest. Though, perhaps I shouldn't be surprised that NVIDIA Research concludes that running language models on both server and consumer hardware represents the optimal path forward.

ewuhic · 1h ago
Why is this a paper and not a blog post? Anyone who thinks it deserves to be a paper is either dumb or a snake-oil salesman.