# DoRA (Decomposed Low-Rank Adaptation)

### What is DoRA?

**DoRA** (Decomposed Low-Rank Adaptation) is an extension of LoRA. It takes Low-Rank Adaptation a step further by decomposing each pre-trained weight matrix into two distinct components: **magnitude** and **direction**. Fine-tuning then applies LoRA-style updates only to the directional component, which keeps the number of trainable parameters small while improving the model's learning capacity and training stability.

<figure><img src="https://3596415781-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FE3mf9U8IthHvHNa2tlfe%2Fuploads%2FiSW0YWiKoep4efgfLzcD%2Fimage.png?alt=media&#x26;token=1a36ab6c-df60-46a0-b63f-2414e1531c84" alt=""><figcaption></figcaption></figure>

### The DoRA Process Explained

{% stepper %}
{% step %}
**Decompose Pretrained Weights**

The pretrained weight <img src="https://3596415781-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FE3mf9U8IthHvHNa2tlfe%2Fuploads%2FOqxIXO64EMgUbx3sKpiG%2Fimage.png?alt=media&#x26;token=e306da7a-eb3e-4cb1-8cb3-c4c8d92da908" alt="" data-size="line">​ is decomposed into two components:

* **Magnitude (𝑚)**: Represents the scale of the weights, initialized as the norm of the pretrained weights.
* **Direction (𝑉)**: The normalized pretrained weight matrix.
  {% endstep %}

{% step %}
**Adapt Direction**

During fine-tuning, updates (Δ𝑉) are applied only to the directional component (𝑉), and are represented as the product of two low-rank matrices (𝐴 and 𝐵). This enables efficient adaptation while keeping the number of trainable parameters minimal. The updated directional component is recalculated as: <img src="https://3596415781-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FE3mf9U8IthHvHNa2tlfe%2Fuploads%2FOaEwaed48ezbDAQ3pz2u%2Fimage.png?alt=media&#x26;token=d9a41557-d86e-4bcd-9286-1f1dc5a29979" alt="" data-size="line">
{% endstep %}

{% step %}
**Recombine Magnitude and Direction**

After training, the updated weights are merged back by recomposing the magnitude (𝑚) and the adapted direction.
{% endstep %}

{% step %}
**Generate Merged Weights**

The final merged weight (𝑊′) incorporates the pretrained knowledge from 𝑊 along with the fine-tuned updates, ready for downstream tasks.
{% endstep %}
{% endstepper %}
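The four steps above can be sketched numerically. This is an illustrative toy example, not the PEFT implementation: the shapes and the random toy weight `W0` are arbitrary, and the magnitude is taken column-wise, with the low-rank matrices `A` and `B` playing the roles of the trainable factors:

```python
import numpy as np

# Toy "pretrained" weight W0 (out_dim x in_dim); shapes are illustrative.
rng = np.random.default_rng(0)
d_out, d_in, r = 8, 16, 2
W0 = rng.standard_normal((d_out, d_in))

# Step 1: decompose into magnitude m (column-wise norm) and direction V.
m = np.linalg.norm(W0, axis=0, keepdims=True)  # shape (1, d_in)
V = W0                                         # directional component (frozen)

# Step 2: adapt only the direction via a low-rank update delta_V = B @ A.
B = np.zeros((d_out, r))            # B starts at zero, as in LoRA
A = rng.standard_normal((r, d_in))  # only A and B (and m) would be trained
V_adapted = V + B @ A

# Steps 3-4: recombine - normalize the adapted direction, rescale by m.
W_merged = m * V_adapted / np.linalg.norm(V_adapted, axis=0, keepdims=True)

# Before any training (B = 0), the merged weight W' equals W0 exactly,
# so fine-tuning starts from the pretrained model's behavior.
```

Once training has made `B @ A` nonzero, the same recombination produces the final merged weight 𝑊′ used for downstream tasks.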

**The implementation of DoRA is quite similar to LoRA when using Hugging Face's PEFT library:**

Both methods follow the same steps covered in the previous sections of this guide. The main distinction is that DoRA is enabled through the configuration: instead of the standard LoRA setup, you initialize the configuration with the use\_dora parameter set to True:

```python
from peft import LoraConfig, get_peft_model

# Initialize a DoRA configuration: identical to a LoRA setup,
# except that use_dora=True enables the weight decomposition.
config = LoraConfig(
    # Some other parameters here (the usual LoRA arguments)
    use_dora=True,
)

# The model is then wrapped exactly as with LoRA:
# model = get_peft_model(base_model, config)
```

This change activates the decomposition mechanism specific to DoRA.
