Understanding Few-Shot Multi-Task Representation Learning Theory

Published in ICLR Blog Track, 2022

Recommended citation: Quentin Bouniot & Ievgen Redko, "Understanding Few-Shot Multi-Task Representation Learning Theory", ICLR Blog Track, 2022.

Abstract: Multi-Task Representation Learning (MTR) is a popular paradigm for learning shared representations from multiple related tasks. It has proven effective on problems ranging from machine translation in natural language processing to object detection in computer vision. Few-Shot Learning, on the other hand, is a more recent problem that seeks to mimic the human ability to quickly learn a target task with little supervision. For this setting, researchers have turned to meta-learning, which learns to learn a new task by training a model on many small tasks. Since the success of meta-learning on few-shot tasks still lacks a theoretical explanation, an intuitively appealing approach is to bridge the gap between it and multi-task learning, using the results established for the latter to better understand the former. In this post, we dive into a recent ICLR 2021 paper by S. Du, W. Hu, S. Kakade, J. Lee and Q. Lei that establishes novel learning bounds for multi-task learning in the few-shot setting, and we go beyond it by drawing the connections that shed light on the inner workings of meta-learning algorithms as well.
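
To make the setting concrete, below is a minimal, self-contained sketch (our own illustration, not code from the paper) of the linear MTR model analyzed in the Du et al. paper: several source tasks share a low-dimensional linear representation, learned here by alternating least squares on synthetic data, after which a new few-shot task only has to fit a small task-specific head. All names, dimensions, and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, T = 50, 5, 40        # ambient dim, representation dim, number of source tasks
n_src, n_tgt = 30, 10      # samples per source task; n_tgt << d (few-shot regime)

# Synthetic ground truth: all tasks share a low-dimensional representation B*.
B_true = np.linalg.qr(rng.standard_normal((d, k)))[0]
tasks = []
for _ in range(T):
    w = rng.standard_normal(k)
    X = rng.standard_normal((n_src, d))
    y = X @ B_true @ w + 0.01 * rng.standard_normal(n_src)
    tasks.append((X, y))

# Learn B with alternating least squares on
#   min_{B, {w_t}}  sum_t ||X_t B w_t - y_t||^2.
B = np.linalg.qr(rng.standard_normal((d, k)))[0]
for _ in range(50):
    # Step 1: fix B, solve each task-specific head w_t in closed form.
    W = [np.linalg.lstsq(X @ B, y, rcond=None)[0] for X, y in tasks]
    # Step 2: fix the heads, solve for B via X_t B w_t = (X_t kron w_t^T) vec(B)
    # (row-major vectorization, matching NumPy's default reshape order).
    A = np.vstack([np.kron(X, w[None, :]) for (X, _), w in zip(tasks, W)])
    rhs = np.concatenate([y for _, y in tasks])
    B = np.linalg.lstsq(A, rhs, rcond=None)[0].reshape(d, k)
    B = np.linalg.qr(B)[0]  # re-orthonormalize; only the span of B matters

# Few-shot target task: with B fixed, only k parameters are estimated from
# n_tgt samples, instead of d parameters without the shared representation.
w_new = rng.standard_normal(k)
X_new = rng.standard_normal((n_tgt, d))
y_new = X_new @ B_true @ w_new + 0.01 * rng.standard_normal(n_tgt)
w_hat = np.linalg.lstsq(X_new @ B, y_new, rcond=None)[0]

X_test = rng.standard_normal((1000, d))
mse = np.mean((X_test @ B @ w_hat - X_test @ B_true @ w_new) ** 2)
print(f"Few-shot test MSE with the learned representation: {mse:.4f}")
```

This toy experiment illustrates the paper's central message: once a good shared representation is learned from many source tasks, solving a new task requires estimating only k parameters rather than d, which is exactly what makes the few-shot regime tractable.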

For attribution in academic contexts, please cite this work as

Bouniot & Redko, “Understanding Few-Shot Multi-Task Representation Learning Theory”, ICLR Blog Track, 2022.

@inproceedings{quentin2022understandingfewshotmultitask,
  author    = {Bouniot, Quentin and Redko, Ievgen},
  title     = {Understanding Few-Shot Multi-Task Representation Learning Theory},
  booktitle = {ICLR Blog Track},
  year      = {2022},
  note      = {https://iclr-blog-track.github.io/2022/03/25/understanding_mtr_meta/},
  url       = {https://iclr-blog-track.github.io/2022/03/25/understanding_mtr_meta/}
}