TITLE:
Incremental Learning Based on Data Translation and Knowledge Distillation
AUTHORS:
Tan Cheng, Jielong Wang
KEYWORDS:
Incremental Domain Learning, Data Translation, Knowledge Distillation, Catastrophic Forgetting
JOURNAL NAME:
International Journal of Intelligence Science, Vol.13 No.2, April 21, 2023
ABSTRACT: Recently, deep convolutional neural networks (DCNNs)
have achieved remarkable results in image classification tasks. Despite
convolutional networks’ great successes, their training process relies on a
large amount of data prepared in advance, an assumption that is often violated in
real-world settings involving streaming data and concept drift. For this reason,
incremental learning (continual learning) has attracted increasing attention from
scholars. However, incremental learning is associated with the challenge of
catastrophic forgetting: the performance on previous tasks drastically degrades
after learning a new task. In this paper, we propose a new strategy to
alleviate catastrophic forgetting when neural networks are trained on a sequence of
domains. Specifically, two components are applied: data translation based on
transfer learning and knowledge distillation. The former translates a portion
of new data to reconstruct the partial data distribution of the old domain. The
latter uses the old model as a teacher to guide the new model. Experimental
results on three datasets show that the combination of these two methods
effectively alleviates catastrophic forgetting.
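To make the knowledge-distillation component concrete, below is a minimal sketch (not the authors' code) of the standard distillation objective the abstract alludes to: a frozen old model serves as the teacher and the model being trained on the new domain serves as the student. The temperature T and weight alpha are illustrative hyperparameters, not values taken from the paper.

```python
# Minimal knowledge-distillation loss sketch (PyTorch). Assumes an "old_model"
# (teacher, frozen) and a "new_model" (student) -- hypothetical names for illustration.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Cross-entropy on new-domain labels plus a KL term that keeps the
    student's softened predictions close to the frozen teacher's."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # scale by T^2 to keep gradient magnitudes comparable
    return alpha * ce + (1.0 - alpha) * kd

# Usage sketch: only the student is updated; the teacher is evaluated without gradients.
# with torch.no_grad():
#     teacher_logits = old_model(images)
# loss = distillation_loss(new_model(images), teacher_logits, labels)
```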