[Untitled]
Subject
Mathematics, Computer Science
Creator
Thomas Yu
Date
2025
Contributor
Clarice Poon, Paris Giampouras
Abstract
This poster investigates the effect of preconditioning in low-rank adaptation methods for large language models. We compare baseline LoRTA (Low-Rank Tensor Adaptation) with a modified Preconditioned LoRTA implementation, both applied to fine-tuning RoBERTa on the MRPC paraphrase task. Preconditioning yields smoother optimisation dynamics and better long-term convergence, reaching a lower evaluation loss than the baseline. The results highlight the potential of tensor-based preconditioning to improve the efficiency and stability of low-rank model adaptation.
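For readers unfamiliar with the idea, the sketch below illustrates one common form of preconditioning for low-rank adapters: rescaling each factor's gradient by the damped inverse Gram matrix of the other factor, in the style of scaled gradient descent. This is not the poster's Preconditioned LoRTA implementation (LoRTA factorises the adapter as a tensor, and the poster's exact preconditioner, loss, and hyperparameters are not given here); the plain matrix factorisation, dimensions, step size, and damping used below are purely illustrative.

import torch

torch.manual_seed(0)
d, k, r = 64, 48, 4        # output dim, input dim, adapter rank (illustrative)
eta, eps = 0.2, 1e-6       # step size and damping (illustrative)

# Stand-in target for the weight update the adapter should learn (rank r).
W_target = 0.1 * (torch.randn(d, r) @ torch.randn(r, k))

# LoRA-style factorisation of the update: dW = B @ A.
B = (0.1 * torch.randn(d, r)).requires_grad_()
A = (0.1 * torch.randn(r, k)).requires_grad_()

for step in range(200):
    loss = 0.5 * ((B @ A - W_target) ** 2).sum()   # toy surrogate objective
    gB, gA = torch.autograd.grad(loss, (B, A))

    with torch.no_grad():
        # Precondition each factor's gradient by the damped inverse Gram
        # matrix of the other factor (scaled gradient descent).
        P_A = torch.linalg.inv(A @ A.T + eps * torch.eye(r))   # r x r
        P_B = torch.linalg.inv(B.T @ B + eps * torch.eye(r))   # r x r
        B -= eta * gB @ P_A
        A -= eta * P_B @ gA

    if step % 50 == 0:
        print(f"step {step:3d}  loss {loss.item():.4f}")

The intuition is that the preconditioner compensates for ill-conditioned or imbalanced factors, so the effective step size is similar in all directions of the low-rank parameterisation; this is one plausible reading of the smoother optimisation dynamics reported on the poster, not a claim about its actual algorithm.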
Citation
Thomas Yu, “[Untitled],” URSS SHOWCASE, accessed November 4, 2025, https://urss.warwick.ac.uk/items/show/885.