“When a new task is introduced, new adaptations overwrite the knowledge that the neural network had previously acquired,” the DeepMind team explains.
“This phenomenon is known in cognitive science as ‘catastrophic forgetting’, and is considered one of the fundamental limitations of neural networks.”
The researchers tested their new algorithm by having the AI play a series of Atari video games; the networks retained old skills because learning was selectively slowed on the weights important for previously learned tasks.
They were able to show that it is possible to overcome ‘catastrophic forgetting’ and train networks that can “maintain expertise on tasks that they have not experienced for a long time”.
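In rough terms, the idea is to anchor each weight to its old value in proportion to how important that weight was for earlier tasks, so training on a new task barely moves the important weights while leaving the rest free to change. The sketch below is a minimal illustration in plain Python, not DeepMind's implementation: the weight vectors, the constant new-task gradient, and the per-weight importance scores are all made-up values (the published method estimates importance from the Fisher information of the old task).

```python
def ewc_gradient(grad_new, theta, theta_old, fisher, lam=1.0):
    """Gradient of the combined objective: the new task's gradient plus
    a quadratic pull back toward the old weights, scaled per weight by
    an importance score (`fisher`) and a strength hyperparameter `lam`."""
    return [g + lam * f * (t - t0)
            for g, t, t0, f in zip(grad_new, theta, theta_old, fisher)]

# Two weights learned on an old task, both at 1.0.
theta_old = [1.0, 1.0]
theta = list(theta_old)
fisher = [10.0, 0.0]      # weight 0 is important for the old task, weight 1 is not
grad_new = [1.0, 1.0]     # the new task "wants" to decrease both weights

lr = 0.1
for _ in range(50):       # plain gradient descent with the penalty folded in
    step = ewc_gradient(grad_new, theta, theta_old, fisher)
    theta = [t - lr * s for t, s in zip(theta, step)]
```

After training on the "new task", the important weight has barely moved from 1.0 (it settles near 0.9), while the unimportant weight has drifted far away, which is the selective-slowing behaviour the article describes.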
“Computer programs that learn to perform tasks also typically forget them very quickly,” DeepMind explained.
“We show that the learning rule can be modified so that a program can remember old tasks when learning a new one. This is an important step towards more intelligent programs that are able to learn progressively and adaptively.”
They added: “Today, computer programs cannot learn from data adaptively and in real time. However, we have shown that catastrophic forgetting is not an insurmountable challenge for neural networks. We hope that this research represents a step towards programs that can learn in a more flexible and efficient way.”