Incrementally learning new information from a non-stationary stream of data, referred to as ‘continual learning’, is a key feature of natural intelligence, but a challenging problem for deep neural networks. In recent years, numerous deep learning methods for continual learning have been proposed, but comparing their performances is difficult due to the lack of a common framework. To help address this, we describe three fundamental types, or ‘scenarios’, of continual learning: task-incremental, domain-incremental and class-incremental learning. Each of these scenarios has its own set of challenges. To illustrate this, we provide a comprehensive empirical comparison of currently used continual learning strategies, by performing the Split MNIST and Split CIFAR-100 protocols according to each scenario. We demonstrate substantial differences between the three scenarios in terms of difficulty and in terms of the effectiveness of different strategies. The proposed categorization aims to structure the continual learning field, by forming a key foundation for clearly defining benchmark problems.

An important open problem in deep learning is enabling neural networks to incrementally learn from non-stationary streams of data [1,2]. For example, when deep neural networks are trained on samples from a new task or data distribution, they tend to rapidly lose previously acquired capabilities, a phenomenon referred to as catastrophic forgetting [3,4]. In stark contrast, humans and other animals are able to incrementally learn new skills without compromising those that were already learned [5]. The field of continual learning, also referred to as lifelong learning, is devoted to closing the gap in incremental learning ability between natural and artificial intelligence. In recent years, this area of machine learning research has been rapidly expanding, fuelled by the potential utility of deploying continual learning algorithms for applications such as medical diagnosis [6], autonomous driving [7] or predicting financial markets [8].

Despite its scope, continual learning research is relatively unstructured and the field lacks a shared framework. Because of an abundance of subtle, but often important, differences between evaluation protocols, systematic comparison between continual learning algorithms is challenging, even when papers use the same datasets [9]. It is therefore not surprising that numerous continual learning methods claim to be state-of-the-art. To help address this, here we describe a structured and intuitive framework for continual learning. We put forward the view that, at the computational level [10], there are three fundamental types, or ‘scenarios’, of supervised continual learning. Informally, (a) in task-incremental learning, an algorithm must incrementally learn a set of clearly distinguishable tasks; (b) in domain-incremental learning, an algorithm must learn the same kind of problem but in different contexts; and (c) in class-incremental learning, an algorithm must incrementally learn to distinguish between a growing number of objects or classes.
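As an illustration (this sketch is not from the paper), the difference between the three scenarios can be made concrete with Split MNIST, where the ten digits are split into five two-digit contexts. The function below is a hypothetical helper showing, for each scenario, whether context identity is available at test time and which labels the model must choose between.

```python
# A minimal sketch of the three continual learning scenarios on Split MNIST.
# Each "context" contains two digit classes, learned one context at a time.
CONTEXTS = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]

def target_space(scenario, context_id):
    """Return (context_id_given_at_test, labels_to_choose_from)."""
    if scenario == "task":
        # Task-incremental: context identity is provided at test time,
        # so the model only picks between the two digits of that context.
        return True, CONTEXTS[context_id]
    if scenario == "domain":
        # Domain-incremental: context identity is NOT provided; the model
        # solves the same within-context problem (first vs second digit)
        # regardless of which context an input comes from.
        return False, (0, 1)
    if scenario == "class":
        # Class-incremental: context identity is NOT provided and the
        # model must discriminate among all classes seen so far.
        seen = [d for ctx in CONTEXTS[: context_id + 1] for d in ctx]
        return False, tuple(seen)
    raise ValueError(f"unknown scenario: {scenario}")
```

For example, after learning the second context, a task-incremental model asked about context 1 chooses between digits 2 and 3, a domain-incremental model outputs only "first or second digit", and a class-incremental model must pick among digits 0 to 3, which is why class-incremental learning is typically the hardest of the three.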