Researchers propose a self-distillation fix for "catastrophic forgetting" in LLMs
A new fine-tuning technique aims to solve “catastrophic forgetting,” a limitation that often complicates repeated model updates in enterprise deployments. Researchers at MIT, the Improbable AI Lab, and ETH Zurich have introduced a fine-tuning method designed to let models learn new tasks while preserving previously acquired capabilities. To prevent degrading existing capabilities, many organizations isolate…