
Published: February 19, 2024

Parameter-efficient fine-tuning

Level: Basic (22 points)

In this lab, you will implement LoRA, one of the best-known methods for parameter-efficient fine-tuning. LoRA stands for “Low-Rank Adaptation of Large Language Models” and was originally described by Hu et al. (2021). Instead of updating all of a pretrained model’s weights, LoRA freezes them and trains small low-rank matrices whose product is added to selected weight matrices, which drastically reduces the number of trainable parameters; the sketches below illustrate the idea. Along the way, you will learn a best-practice workflow for downloading a Transformer model and fine-tuning it on the downstream task of binary sentiment classification using Hugging Face Transformers.
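To make the idea concrete, here is a minimal sketch in PyTorch (which Hugging Face Transformers builds on) of a linear layer wrapped with a LoRA adapter. The class name `LoRALinear` and the default rank and scaling values are illustrative choices, not part of the lab:

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update (illustrative sketch)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        # Freeze the pretrained weights; only the adapter matrices are trained.
        for p in self.base.parameters():
            p.requires_grad_(False)
        # A is initialized with small Gaussian noise and B with zeros, so the
        # adapted layer starts out identical to the pretrained one.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # W x + (alpha / r) * B (A x)
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```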

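For the fine-tuning workflow itself, the sketch below uses the Hugging Face `peft` library, which provides built-in LoRA support. The checkpoint `distilbert-base-uncased`, the `imdb` dataset, and all hyperparameters are assumptions for illustration, not the lab’s prescribed setup:

```python
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "distilbert-base-uncased"  # assumed; the lab may prescribe another model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Wrap the model with LoRA adapters; the base weights stay frozen and only
# the adapters plus the classification head are trained.
peft_config = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16, lora_dropout=0.1)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()

# IMDb is one common binary sentiment dataset (0 = negative, 1 = positive).
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-sentiment",
                           per_device_train_batch_size=16,
                           num_train_epochs=1),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```

Because only the adapter matrices and the classification head remain trainable, `print_trainable_parameters()` typically reports well under one percent of the model’s total weights.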
Link to the basic lab