Low-Rank Adaptation (LoRA)

LoRA is a technique for efficiently specializing an AI model for a specific task. Instead of updating all of the original model's parameters, it trains only a small adaptation layer that adjusts the original model's behavior.

LoRA's operation can be pictured as attaching a small "learning module" to the main model. This module acts as a translator that adapts the original model's responses to the new task, without any need to modify its base knowledge.

Unlike traditional fine-tuning, which modifies all of the original model's learned weights, LoRA keeps the base model intact and adds new task-specific capabilities on top, making the process much faster and cheaper.
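The idea above can be sketched numerically: the frozen weight matrix W stays untouched, and the adaptation is a product of two small trainable matrices, B and A, added to the model's output. This is a minimal illustrative sketch with numpy (the dimensions, names, and scaling are hypothetical choices, not a real implementation):

```python
import numpy as np

# Minimal LoRA sketch: the effective weight is W + (alpha / r) * (B @ A),
# where W is frozen and only the small matrices A and B would be trained.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 8, 2, 4   # hypothetical sizes; rank r is much smaller than d

W = rng.normal(size=(d_out, d_in))   # frozen pretrained weight (never updated)
A = rng.normal(size=(r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))             # trainable, zero init: no effect before training

def lora_forward(x):
    # Base model output plus the scaled low-rank adaptation.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d_in))
# With B initialized to zero, the adapted model matches the original exactly.
assert np.allclose(lora_forward(x), x @ W.T)

# Why it is cheap: trainable parameter counts, full fine-tune vs. LoRA.
full_params = d_out * d_in           # all of W
lora_params = r * d_in + d_out * r   # only A and B; savings grow with model size
```

The zero initialization of B is the standard trick that makes the adapter start as a no-op, so training begins from the original model's behavior and only gradually learns the task-specific adjustment.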

For example, in image generation models, you can use LoRA to teach the model a specific artistic style from just a few example images. It's like adding a special filter to a camera: the camera keeps all its original functions, but the filter lets you get a specific effect when you need it.
