LoRA (Low-Rank Adaptation) is a technique for efficiently specializing an AI model for a specific task. Instead of modifying the entire original model, it trains only a small adaptation layer that adjusts the behavior of the original model.
LoRA's operation can be understood as adding a small "learning module" to the main model. This module acts as a translator that adjusts the original model's responses for the new task, without needing to modify its base knowledge.
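The "learning module" idea can be sketched numerically. The following is a minimal illustration (dimensions, initialization, and the `forward` function are assumptions for the example, not any particular library's API): the original weight matrix `W` stays frozen, and two small matrices `A` and `B` of low rank `r` are the only trainable parts. Their product adds a correction to the original layer's output.

```python
import numpy as np

# Hypothetical dimensions for one layer; r is the low rank, much smaller
# than the layer size.
d_out, d_in, r = 64, 64, 4

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))      # frozen original weights

# LoRA's small "learning module": two trainable low-rank matrices.
A = rng.normal(size=(r, d_in)) * 0.01   # small random initialization
B = np.zeros((d_out, r))                # B starts at zero, so at first the
                                        # adapted layer equals the original

def forward(x):
    # Original output plus the low-rank adjustment B @ (A @ x).
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)
# With B still zero, the adapted layer behaves exactly like the original.
assert np.allclose(forward(x), W @ x)
```

During training, only `A` and `B` receive gradient updates; `W` never changes, which is why the base model's knowledge stays intact.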
Unlike traditional fine-tuning, which updates all of the original model's learned weights, LoRA keeps the original model intact while adding new task-specific capabilities, making the process much faster and more economical.
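The savings come from how few parameters the adaptation needs. As a rough illustration (the layer size and rank below are assumptions chosen for the example), a square layer of size `d` has `d * d` weights to update under full fine-tuning, while LoRA trains only the `2 * d * r` entries of its two small matrices:

```python
# Illustrative parameter counts for one d x d layer (values are assumptions).
d, r = 4096, 8

full_finetune_params = d * d     # every original weight is updated
lora_params = 2 * d * r          # only the small low-rank matrices train

print(full_finetune_params)      # 16777216
print(lora_params)               # 65536
print(full_finetune_params // lora_params)  # 256x fewer trainable params
```

In this sketch, LoRA trains 256 times fewer parameters for that layer, which is what makes the process faster and cheaper in practice.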
For example, in image generation models, you can use LoRA to teach the model to create images in a specific artistic style from only a few example images. It's like adding a special filter to a camera: the camera keeps all its original functions, but the filter lets you get a specific effect when you need it.