Mitigating Catastrophic Forgetting in Target Language Adaptation of LLMs via Source-Shielded Updates
Abstract
Source-Shielded Updates (SSU) enables the adaptation of instruct LLMs to new languages using only unlabeled data, preserving source knowledge and achieving competitive target-language performance.
Expanding the linguistic diversity of instruct large language models (LLMs) is crucial for global accessibility, but it is often hindered by reliance on costly, specialized labeled data in the target language and by catastrophic forgetting during adaptation. We tackle this challenge under a realistic low-resource constraint: adapting instruct LLMs using only unlabeled target-language data. We introduce Source-Shielded Updates (SSU), a selective parameter-update strategy that proactively preserves source knowledge. Using a small set of source data and a parameter importance scoring method, SSU identifies parameters critical to maintaining source abilities, then applies a column-wise freezing strategy to protect them before adaptation. Experiments across five typologically diverse languages with 7B and 13B models demonstrate that SSU successfully mitigates catastrophic forgetting: it reduces average performance degradation on monolingual source tasks to just 3.4% (7B) and 2.8% (13B), in stark contrast to the 20.3% and 22.3% incurred by full fine-tuning. SSU also achieves target-language performance highly competitive with full fine-tuning, outperforming it on all benchmarks for 7B models and on the majority for 13B models.
Community
Our code and a step-by-step guide for preprocessing, training, evaluation, and analysis for both our proposed method (SSU) and all baselines are available on GitHub: https://github.com/gucci-j/ssu.
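For readers who want a quick feel for the mechanism before diving into the repository, here is a minimal, self-contained PyTorch sketch of the column-wise shielding idea on a toy model. It is not the authors' implementation: the importance score (|w · grad| accumulated over a few source batches), the 50% freezing ratio, and the helper names (`column_importance`, `build_column_masks`, `register_shields`) are illustrative assumptions; see the GitHub repository above for the actual scoring method and hyperparameters.

```python
# Hypothetical sketch of source-shielded, column-wise frozen updates.
# Assumptions (not from the paper): |w * grad| importance, 50% freeze ratio.
import torch
import torch.nn as nn

def column_importance(model, source_batches, loss_fn):
    """Accumulate |w * dL/dw| per weight column over a small source set."""
    scores = {n: p.new_zeros(p.shape[1]) for n, p in model.named_parameters()
              if p.dim() == 2}
    for x, y in source_batches:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if n in scores and p.grad is not None:
                # Column-wise aggregation: sum importance over each column.
                scores[n] += (p * p.grad).abs().sum(dim=0).detach()
    return scores

def build_column_masks(scores, freeze_ratio=0.5):
    """Mark the top `freeze_ratio` most important columns as frozen (0)."""
    masks = {}
    for n, s in scores.items():
        k = int(freeze_ratio * s.numel())
        mask = torch.ones_like(s)
        mask[s.topk(k).indices] = 0.0    # 0 = shielded, 1 = trainable
        masks[n] = mask                  # shape: (in_features,)
    return masks

def register_shields(model, masks):
    """Zero the gradients of shielded columns before every optimizer step."""
    for n, p in model.named_parameters():
        if n in masks:
            m = masks[n].to(p.device)
            p.register_hook(lambda g, m=m: g * m)  # mask broadcasts over rows

# Toy stand-ins for an instruct LLM, source data, and target-language data.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
loss_fn = nn.CrossEntropyLoss()
source = [(torch.randn(8, 16), torch.randint(0, 4, (8,))) for _ in range(4)]
target = [(torch.randn(8, 16), torch.randint(0, 4, (8,))) for _ in range(4)]

# 1) Score column importance on source data, 2) shield the critical columns.
masks = build_column_masks(column_importance(model, source, loss_fn))
register_shields(model, masks)

# Weight decay is disabled so shielded columns stay exactly fixed.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.0)
for x, y in target:                      # adaptation on (pseudo-)target data
    opt.zero_grad()
    loss_fn(model(x), y).backward()      # hooks zero grads of shielded columns
    opt.step()
```

Gradient hooks are just one convenient way to express the freezing; weight decay is turned off in the sketch so that shielded columns do not drift during the target-language updates.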
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Conditions for Catastrophic Forgetting in Multilingual Translation (2025)
- Sparse Subnetwork Enhancement for Underrepresented Languages in Large Language Models (2025)
- RECALL: REpresentation-aligned Catastrophic-forgetting ALLeviation via Hierarchical Model Merging (2025)
- Learning from the Undesirable: Robust Adaptation of Language Models without Forgetting (2025)
- OPLoRA: Orthogonal Projection LoRA Prevents Catastrophic Forgetting during Parameter-Efficient Fine-Tuning (2025)
- Logits Replay + MoClip: Stabilized, Low-Cost Post-Training with Minimal Forgetting (2025)
- Parameter Importance-Driven Continual Learning for Foundation Models (2025)
Models citing this paper: 99
Datasets citing this paper: 0
Spaces citing this paper: 0