Presented By: Department of Statistics Dissertation Defenses
On the Benefits of Multitask Learning: A Perspective Based on Task Diversity
Ziping Xu
Abstract:
Multitask learning (MTL) has achieved remarkable success in numerous domains, such as healthcare, computer vision, and natural language processing, by leveraging the relatedness across tasks. However, current theories of multitask learning fall short of explaining phenomena commonly observed in practice. For instance, many empirical studies have shown that training on a diverse set of tasks improves both training and testing performance. This thesis aims to provide new theoretical insights into the significance of task diversity in two major learning settings: supervised learning and reinforcement learning. For supervised MTL, we study a popular learning paradigm known as multitask representation learning and provide a theoretical foundation that establishes diversity as a crucial condition for good generalization performance. In the setting where tasks can be adaptively chosen, we propose an online learning algorithm that effectively achieves diversity with low regret. We then expand the discussion to reinforcement learning (RL), which involves making sequential decisions to optimize long-term rewards. Previous exploration designs in RL were either computationally intractable or lacked formal guarantees. We show that, in addition to the generalization benefits demonstrated in supervised learning, multitask reinforcement learning with a diverse set of tasks enables sample-efficient myopic exploration. This is surprising because myopic exploration is provably sample-inefficient in the worst case even for a single task.
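To make the representation-learning setting concrete, below is a minimal, self-contained sketch, not the thesis's actual construction: several linear regression tasks share a low-dimensional representation B, each task has its own head w_t, and both are fit jointly by gradient descent. All dimensions, variable names, and hyperparameters are hypothetical stand-ins.

```python
# Illustrative sketch of multitask representation learning: T linear tasks
# y = x @ B @ w_t + noise share a representation B in R^{d x k}; we fit B
# jointly across tasks. All names and dimensions here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
d, k, T, n = 20, 3, 10, 100          # ambient dim, rep dim, tasks, samples/task

B_true = rng.standard_normal((d, k))               # shared representation
W_true = rng.standard_normal((k, T))               # task-specific heads
X = rng.standard_normal((T, n, d))                 # per-task inputs
Y = np.einsum('tnd,dk,kt->tn', X, B_true, W_true)  # targets
Y += 0.1 * rng.standard_normal((T, n))             # observation noise

B = 0.1 * rng.standard_normal((d, k))              # learned representation
W = 0.1 * rng.standard_normal((k, T))              # learned heads
lr = 1e-2
for _ in range(2000):
    # residuals for every task under the current shared representation
    R = np.einsum('tnd,dk,kt->tn', X, B, W) - Y    # shape (T, n)
    # joint gradient step: B pools signal from all tasks, which is where
    # task diversity enters -- diverse heads make B identifiable
    gB = np.einsum('tn,tnd,kt->dk', R, X, W) / (T * n)
    gW = np.einsum('tn,tnd,dk->kt', R, X, B) / (T * n)
    B -= lr * gB
    W -= lr * gW

R = np.einsum('tnd,dk,kt->tn', X, B, W) - Y
print('mean-squared training residual:', float((R ** 2).mean()))
```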
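As a hedged companion sketch of the myopic-exploration claim, the snippet below runs ε-greedy Q-learning over a handful of tabular tasks that share dynamics but differ in rewards. The environment, task set, and hyperparameters are invented for illustration; the thesis's formal setting and guarantees are more general.

```python
# Sketch of myopic (epsilon-greedy) exploration in multitask tabular
# Q-learning: each episode samples one task from a set of finite-horizon MDPs
# with a shared transition kernel and task-specific rewards. Everything here
# is a hypothetical stand-in for the thesis's setting.
import numpy as np

rng = np.random.default_rng(1)
S, A, H, T = 6, 2, 5, 4                     # states, actions, horizon, tasks
P = rng.dirichlet(np.ones(S), size=(S, A))  # shared transition kernel P[s, a]
R = rng.uniform(size=(T, S, A))             # task-specific rewards
Q = np.zeros((T, H, S, A))                  # per-task Q estimates
eps, alpha = 0.2, 0.1

for episode in range(20000):
    t = rng.integers(T)                     # draw a task uniformly
    s = 0
    for h in range(H):
        # myopic exploration: greedy w.r.t. the current estimate, else random
        a = rng.integers(A) if rng.random() < eps else int(Q[t, h, s].argmax())
        s_next = rng.choice(S, p=P[s, a])
        target = R[t, s, a] + (Q[t, h + 1, s_next].max() if h + 1 < H else 0.0)
        Q[t, h, s, a] += alpha * (target - Q[t, h, s, a])
        s = s_next

print('estimated V_0(s=0) per task:', Q[:, 0, 0].max(axis=-1))
```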