New method for targeted LoRA finetuning

1 soufiane001 1 7/7/2025, 2:49:09 AM github.com ↗

Comments (1)

soufiane001 · 2m ago
LoRA is amazing for finetuning large models cheaply, but WHERE you place the adapters makes a huge difference. Most people are just guessing where to put them (attention projections, MLPs, etc.), as in the sketch below.
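For context, placement is usually a hand-picked list of module names. A minimal sketch with Hugging Face PEFT, where the model name and the Llama-style module names (`q_proj`, `v_proj`) are just illustrative examples of that guesswork, not a recommendation:

```python
# Sketch of the usual manual placement choice with Hugging Face PEFT.
# Model name and target module names are illustrative (Llama-style), not prescribed.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # the placement most people hand-pick
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters on the chosen modules train
```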

Our solution: PLoP (Precise LoRA Placement)

Instead of guessing, it automatically identifies the optimal modules for LoRA placement based on a notion of module-data alignment.
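The comment doesn't spell out how the alignment score is computed, so purely as an illustration (not PLoP's actual criterion), here is a hedged sketch of the general idea: score each linear module with a data-dependent statistic over a few batches of the finetuning data, rank the modules, and feed the top-ranked names into a LoRA config like the one above. The scoring rule here is a hypothetical placeholder.

```python
# Hypothetical illustration of "pick LoRA modules by a data-dependent score".
# This is NOT the actual PLoP criterion; see the linked repo/paper for that.
import torch
import torch.nn as nn

def rank_modules_by_alignment(model, dataloader, num_batches=8, device="cuda"):
    scores, hooks = {}, []

    def make_hook(name):
        def hook(module, inputs, output):
            x = inputs[0].detach().float().reshape(-1, inputs[0].shape[-1])
            w = module.weight.detach().float()
            # Toy alignment proxy: how strongly the weight matrix responds to
            # the incoming activations, normalized by input and weight norms.
            proj = x @ w.T
            score = proj.norm() / (x.norm() * w.norm() + 1e-8)
            scores[name] = scores.get(name, 0.0) + score.item()
        return hook

    # Attach hooks to every linear layer (attention and MLP projections alike).
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            hooks.append(module.register_forward_hook(make_hook(name)))

    model.eval()
    with torch.no_grad():
        for i, batch in enumerate(dataloader):
            if i >= num_batches:
                break
            model(**{k: v.to(device) for k, v in batch.items()})

    for h in hooks:
        h.remove()

    # Highest-scoring modules first.
    return sorted(scores, key=scores.get, reverse=True)

# Usage sketch: take the top-ranked modules as LoRA targets.
# target_modules = rank_modules_by_alignment(model, dataloader)[:8]
```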