Parameter-Efficient Fine-Tuning (PEFT)
Parameter-efficient fine-tuning (PEFT) adapts large pre-trained models to specific tasks while minimizing the number of parameters that must be updated. Unlike traditional approaches such as full fine-tuning, or "half fine-tuning" in which some layers are frozen and the rest of the model is updated, PEFT freezes most of the model's parameters and modifies only a small subset of them. This can mean adding task-specific adapters or updating only certain layers, which significantly reduces the number of parameters that need to be trained.
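As a rough sketch of this idea, the PyTorch snippet below freezes every parameter of a small stand-in model and re-enables gradients only for its final layer. The `PretrainedClassifier` class and its `head` attribute are illustrative stand-ins invented for this example, not a specific library's API.

```python
import torch
import torch.nn as nn

# Hypothetical "pre-trained" model used purely for illustration.
class PretrainedClassifier(nn.Module):
    def __init__(self, hidden=768, num_labels=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, num_labels)

    def forward(self, x):
        return self.head(self.backbone(x))

model = PretrainedClassifier()

# Freeze everything, then unfreeze only the small subset we want to adapt.
for param in model.parameters():
    param.requires_grad = False
for param in model.head.parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / {total:,} ({100 * trainable / total:.2f}%)")

# The optimizer only ever sees the trainable subset.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

Because gradients and optimizer state are only kept for the unfrozen subset, memory use and training compute drop sharply compared with updating every weight.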
PEFT has significantly lowered the barrier to applying large language models (LLMs) in development, and it has sparked a wide range of research into methods for achieving it. These methods can be classified into three main categories:
Selective Fine-Tuning: This approach focuses on updating a carefully chosen subset of a pre-trained model's parameters, rather than fine-tuning the entire model. This method enables more efficient adaptation to specific tasks.
Additive Fine-Tuning: New modules are added to the pre-trained model for fine-tuning. These modules are then trained to incorporate domain-specific knowledge, allowing the model to adapt to new tasks while preserving the original model's capabilities.
Reparameterization: The updates to specific model components are expressed through a low-dimensional representation, so fine-tuning works with a much smaller set of parameters. The additive and reparameterization approaches are sketched in the code after this list.
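Selective fine-tuning is essentially what the earlier snippet did: choosing which existing parameters to unfreeze. The sketch below, again in plain PyTorch, contrasts the other two categories on a single linear layer: an additive bottleneck adapter trained on top of a frozen layer, and a LoRA-style low-rank reparameterization in which only the small matrices A and B are trained. The class names and hyperparameters (`bottleneck=16`, `rank=8`) are illustrative choices for this sketch, not values from any particular paper or library.

```python
import torch
import torch.nn as nn

class AdapterWrapped(nn.Module):
    """Additive PEFT: a small bottleneck adapter added after a frozen layer."""
    def __init__(self, frozen_layer, bottleneck=16):
        super().__init__()
        self.frozen_layer = frozen_layer
        for p in self.frozen_layer.parameters():
            p.requires_grad = False
        dim = frozen_layer.out_features
        self.adapter = nn.Sequential(
            nn.Linear(dim, bottleneck), nn.ReLU(), nn.Linear(bottleneck, dim)
        )

    def forward(self, x):
        h = self.frozen_layer(x)
        return h + self.adapter(h)  # residual adapter output

class LoRALinear(nn.Module):
    """Reparameterization PEFT: W stays frozen; only low-rank A and B are trained."""
    def __init__(self, frozen_layer, rank=8):
        super().__init__()
        self.frozen_layer = frozen_layer
        for p in self.frozen_layer.parameters():
            p.requires_grad = False
        in_f, out_f = frozen_layer.in_features, frozen_layer.out_features
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))  # zero init: no change at start

    def forward(self, x):
        # Frozen output plus the trainable low-rank update x A^T B^T.
        return self.frozen_layer(x) + x @ self.A.T @ self.B.T

additive = AdapterWrapped(nn.Linear(768, 768))   # stand-in for one pre-trained layer
reparam = LoRALinear(nn.Linear(768, 768))

x = torch.randn(4, 768)
print(additive(x).shape, reparam(x).shape)  # both: torch.Size([4, 768])
```

In both cases the original weights remain untouched; what differs is whether new modules are attached to the model (additive) or its weight updates are re-expressed in a low-rank form (reparameterization).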
In this section, we will explore a few of the most effective PEFT techniques.