LoRA is amazing for finetuning large models cheaply, but WHERE you place the adapters makes a huge difference. Most people just guess where to put them (attention, MLP, etc.).
Our solution: PLoP (Precise LoRA Placement)
Instead of guessing, it automatically identifies the optimal modules for LoRA placement based on a notion of module-data alignment.
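Here's a minimal sketch of what that looks like in standard LoRA tooling (illustrative only: the per-module scores are placeholders standing in for a module-data alignment metric, not PLoP's actual criterion, and the top-k selection is an assumption for the example). It uses PEFT's real `LoraConfig` to target only the selected modules.

```python
# Illustrative sketch, not the PLoP implementation: rank module types by a
# hypothetical alignment score and target only the best ones with LoRA.
from peft import LoraConfig

# Hypothetical per-module alignment scores, e.g. measured on the finetuning
# data; the numbers below are placeholders.
alignment_scores = {
    "q_proj": 0.12,
    "k_proj": 0.08,
    "v_proj": 0.31,
    "o_proj": 0.10,
    "gate_proj": 0.27,
    "up_proj": 0.22,
    "down_proj": 0.15,
}

# Keep the k module types with the highest scores instead of guessing.
k = 3
target_modules = sorted(alignment_scores, key=alignment_scores.get, reverse=True)[:k]

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=target_modules,  # e.g. ["v_proj", "gate_proj", "up_proj"]
)
# Attach the adapters with peft.get_peft_model(model, lora_config) as usual.
print(target_modules)
```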