LoRA Fine-Tuning Tiny LLMs as Expert Agents

4 points | jamesbriggs | 3 comments | 5/28/2025, 2:31:04 PM | youtube.com

Comments (3)

jamesbriggs · 17h ago
Sharing my walkthrough on fine-tuning LLMs with LoRA using NVIDIA's NeMo microservices. The result is a llama-3.2-1b-instruct model fine-tuned to be really good at function-calling, making it ideal for agent use.
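For anyone who wants to see the shape of a LoRA setup in code, here's a minimal sketch using Hugging Face PEFT rather than the NeMo microservices API from the video; the rank, alpha, and target modules are illustrative values, not the exact ones from the video:

```python
# Minimal LoRA sketch with Hugging Face PEFT (not the NeMo microservices
# workflow from the video); hyperparameter values are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.2-1B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

lora = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only a small fraction of weights train
```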

It was a ton of fun to figure out, and it brought back some nostalgia from the days of training ML models: tweaking learning rates and dropout, and watching loss charts in W&B.

Final performance was way better than any 1-3B-parameter LLM I'd previously tried in agentic workflows.
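A quick way to sanity-check the function-calling behavior is to pass a tool schema through the chat template and see whether the model emits a clean tool call. Here's a rough sketch with plain transformers; the get_weather tool is made up for illustration:

```python
# Rough function-calling smoke test via the transformers chat template;
# get_weather is a made-up example tool, not from the video.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B-Instruct"  # swap in the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city.
    """
    ...

messages = [{"role": "user", "content": "What's the weather in Oslo?"}]
inputs = tokenizer.apply_chat_template(
    messages, tools=[get_weather], add_generation_prompt=True, return_tensors="pt"
)
out = model.generate(inputs, max_new_tokens=128)
# A well-tuned model should emit a JSON tool call naming get_weather.
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```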

kordlessagain · 16h ago
Thank you for making this. I clicked through to the container page from cookbook/gen-ai/training/lora/nvidia-nemo/nemo-lora-function-calling.ipynb and it was a 404. I did find this: https://catalog.ngc.nvidia.com/orgs/nim/teams/meta/container...

Can you point to a public version of the model you trained? I'd like to test it with an agentic framework I'm working on.

jamesbriggs · 13h ago
My bad, the link was wrong - you found the right one. I've updated it in the repo too, thanks. Let me know how it goes!
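For anyone following along: once an adapter like this is published, loading it for testing would look roughly like the sketch below (the adapter ID is a placeholder, not a real repo):

```python
# Hypothetical loading of a published LoRA adapter with PEFT;
# "jamesbriggs/llama-3.2-1b-fc-lora" is a placeholder ID, not a real repo.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
model = PeftModel.from_pretrained(base, "jamesbriggs/llama-3.2-1b-fc-lora")
model = model.merge_and_unload()  # optional: fold the adapter into base weights
```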