Retrieval Augmented Generation (RAG) and Fine-Tuning
Several recent papers compare RAG with fine-tuning in various settings.
The studies reach different conclusions, but the aggregate answer is that hybrid systems are needed.
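To make the hybrid claim concrete, here is a minimal sketch of the pattern: retrieve supporting passages first, then let a (possibly fine-tuned) model answer with them in the prompt. The toy corpus, the generate stub, and the prompt template are my illustrative assumptions, not code from any of the papers below.

```python
# Minimal hybrid sketch: TF-IDF retrieval feeding a (fine-tuned) LLM.
# The corpus, the `generate` stub, and the prompt template are
# illustrative assumptions, not code from any of the papers below.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "RAG injects external documents into the prompt at inference time.",
    "Fine-tuning updates model weights on domain-specific examples.",
    "Hybrid systems retrieve context and feed it to a fine-tuned model.",
]

vectorizer = TfidfVectorizer().fit(corpus)
doc_vectors = vectorizer.transform(corpus)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return [corpus[i] for i in scores.argsort()[::-1][:k]]

def generate(prompt: str) -> str:
    """Stand-in for a call to a fine-tuned model (e.g. an API client)."""
    return f"<answer conditioned on {len(prompt)} prompt chars>"

query = "How do hybrid RAG systems work?"
context = "\n".join(retrieve(query))
print(generate(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"))
```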
Fine-Tuning or Retrieval? Comparing Knowledge Injection in LLMs (from Microsoft)
Compares unsupervised fine-tuning with RAG: https://lnkd.in/dZd_uc3x
The authors find that, in their setting, RAG consistently outperforms fine-tuning on knowledge-intensive tasks, both for new knowledge and for knowledge available during training. The paper also has an interesting analysis of fine-tuning's weaknesses.
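For context, "unsupervised fine-tuning" in this setting is essentially continued causal-LM pretraining on the raw knowledge documents. A minimal sketch, assuming Hugging Face transformers and datasets; the gpt2 model, the documents, and the hyperparameters are placeholders, not the paper's configuration.

```python
# "Unsupervised fine-tuning" here means continued causal-LM pretraining
# on raw knowledge documents. gpt2, the documents, and the
# hyperparameters are placeholders, not the paper's configuration.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

docs = [
    "Placeholder domain document the model should absorb into its weights.",
    "Another placeholder document used for continued pretraining.",
]

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

ds = Dataset.from_dict({"text": docs}).map(
    lambda batch: tok(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=ds,
    # mlm=False -> next-token (causal) objective, i.e. plain LM loss
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```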
RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture
https://lnkd.in/drAJ-B2E
The authors study fine-tuning vs RAG on very specific tasks from their business domain. In their cases, the improvements from fine-tuning are significantly greater than those from RAG (both are significant), but the improvements are complementary: RAG and fine-tuning improve the model's answers on different question sets and in different ways. Fine-tuning, for example, seems to leverage information obtained through LLM reasoning (geographic reasoning and relations between regions), as I understand it.
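One way to see the complementarity concretely is to check which questions each variant fixes relative to the base model. A toy diagnostic; all per-question scores below are invented placeholders, not the paper's data.

```python
# Toy illustration of "complementary gains": which questions does each
# variant fix relative to the base model? All scores are invented
# placeholders (1 = correct, 0 = wrong), not the paper's data.
def fixed_by(variant: dict[str, int], base: dict[str, int]) -> set[str]:
    """Questions the variant gets right that the base model missed."""
    return {q for q in base if variant[q] == 1 and base[q] == 0}

base = {"q1": 0, "q2": 0, "q3": 0, "q4": 1}
ft   = {"q1": 1, "q2": 0, "q3": 1, "q4": 1}  # fine-tuned model
rag  = {"q1": 0, "q2": 1, "q3": 1, "q4": 1}  # RAG pipeline

print("FT only: ", fixed_by(ft, base) - fixed_by(rag, base))  # {'q1'}
print("RAG only:", fixed_by(rag, base) - fixed_by(ft, base))  # {'q2'}
print("both:    ", fixed_by(ft, base) & fixed_by(rag, base))  # {'q3'}
```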
Fine Tuning vs. Retrieval Augmented Generation for Less Popular Knowledge
The authors focus on queries about less popular concepts and entities. Fine-tuning improves performance, but RAG consistently outperforms it, especially for smaller models.
https://lnkd.in/dTTTHuj8
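A practical pattern this finding suggests (my extrapolation, not the paper's code) is to route queries about low-popularity entities through RAG and answer popular ones from the model's parametric memory. The popularity scores and threshold below are hypothetical.

```python
# My extrapolation of the finding, not the paper's code: send queries
# about rare ("tail") entities through RAG, answer popular ones from the
# model's parametric memory. Scores and threshold are hypothetical.
def answer_with_rag(query: str) -> str:
    return f"<RAG answer to {query!r}>"        # stand-in for a retrieval pipeline

def answer_parametric(query: str) -> str:
    return f"<direct LLM answer to {query!r}>" # stand-in for a plain model call

POPULARITY = {"Paris": 0.95, "Obscure Village": 0.01}  # hypothetical scores

def answer(query: str, entity: str, threshold: float = 0.1) -> str:
    if POPULARITY.get(entity, 0.0) < threshold:
        return answer_with_rag(query)   # tail entity: ground in retrieval
    return answer_parametric(query)     # head entity: memory usually suffices

print(answer("Where is Obscure Village?", "Obscure Village"))
```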