Posts by Tags

Adversarial Attacks

Optimal Transport as a Defense Against Adversarial Attacks

less than 1 minute read

We present Sinkhorn Adversarial Training (SAT), a robust adversarial training method based on recent advances in optimal transport theory. We also propose a new metric, the Area Under Accuracy Curve (AUAC), to quantify more precisely the robustness of a model to adversarial attacks over a wide range of perturbation sizes.
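An area-under-accuracy-curve metric like AUAC can be approximated from accuracies measured at several perturbation sizes. Below is a minimal NumPy sketch, assuming a trapezoidal integration normalized by the perturbation range; the exact definition in the post may differ.

```python
import numpy as np

def auac(eps, acc):
    """Area under the accuracy-vs-perturbation curve (trapezoidal rule),
    normalized by the range of perturbation sizes."""
    eps, acc = np.asarray(eps, float), np.asarray(acc, float)
    # Sum of trapezoid areas between consecutive perturbation sizes.
    area = np.sum((acc[1:] + acc[:-1]) / 2.0 * np.diff(eps))
    return area / (eps[-1] - eps[0])

# Accuracies of a model under attacks of increasing strength.
sizes = [0.0, 0.01, 0.02, 0.04, 0.08]
accs = [0.95, 0.90, 0.80, 0.60, 0.30]
score = auac(sizes, accs)  # higher means more robust overall
```

A perfectly robust model (accuracy 1.0 everywhere) scores 1.0, so the metric summarizes robustness across the whole perturbation range rather than at a single attack strength.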

Vulnerability of Person Re-Identification Models to Metric Adversarial Attacks

13 minute read

We investigate different possible attacks on metric learning models, depending on the number and type of guides available. Two particularly effective attacks stand out. To defend against them, we adapt the adversarial training protocol to metric learning. Let us guide you!
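For intuition, a single-guide attack can be sketched as a one-step gradient move that pulls the query's embedding toward a chosen guide. This is a hypothetical sketch for a linear embedding `x -> W @ x`, not the exact attacks studied in the post.

```python
import numpy as np

def pull_toward_guide(W, x, guide, eps=0.03):
    """One-step attack: perturb x so that its embedding W @ x moves
    toward a guide embedding (hypothetical sketch of a guided attack)."""
    diff = W @ x - guide
    # Gradient of the embedding distance ||W x - guide|| w.r.t. x.
    grad = W.T @ diff / (np.linalg.norm(diff) + 1e-12)
    # FGSM-style signed step that decreases the distance, kept in [0, 1].
    return np.clip(x - eps * np.sign(grad), 0.0, 1.0)
```

With a deep re-identification model, `W` would be replaced by the network and the gradient obtained by backpropagation, but the principle is the same: the attacker moves the query toward (or away from) the guides in embedding space.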

Adversarial Defense

Optimal Transport as a Defense Against Adversarial Attacks

less than 1 minute read

We present Sinkhorn Adversarial Training (SAT), a robust adversarial training method based on recent advances in optimal transport theory. We also propose a new metric, the Area Under Accuracy Curve (AUAC), to quantify more precisely the robustness of a model to adversarial attacks over a wide range of perturbation sizes.

Vulnerability of Person Re-Identification Models to Metric Adversarial Attacks

13 minute read

We investigate different possible attacks on metric learning models, depending on the number and type of guides available. Two particularly effective attacks stand out. To defend against them, we adapt the adversarial training protocol to metric learning. Let us guide you!

Domain Adaptation

Optimal Transport as a Defense Against Adversarial Attacks

less than 1 minute read

We present Sinkhorn Adversarial Training (SAT), a robust adversarial training method based on recent advances in optimal transport theory. We also propose a new metric, the Area Under Accuracy Curve (AUAC), to quantify more precisely the robustness of a model to adversarial attacks over a wide range of perturbation sizes.

Metric Learning

Vulnerability of Person Re-Identification Models to Metric Adversarial Attacks

13 minute read

We investigate different possible attacks on metric learning models, depending on the number and type of guides available. Two particularly effective attacks stand out. To defend against them, we adapt the adversarial training protocol to metric learning. Let us guide you!

Optimal Transport

Optimal Transport as a Defense Against Adversarial Attacks

less than 1 minute read

We present Sinkhorn Adversarial Training (SAT), a robust adversarial training method based on recent advances in optimal transport theory. We also propose a new metric, the Area Under Accuracy Curve (AUAC), to quantify more precisely the robustness of a model to adversarial attacks over a wide range of perturbation sizes.
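The Sinkhorn algorithm at the heart of entropy-regularized optimal transport alternates two simple scaling updates. A minimal NumPy sketch is shown below for intuition; it is illustrative only and not the SAT training procedure itself.

```python
import numpy as np

def sinkhorn_cost(a, b, C, reg=0.1, n_iter=200):
    """Entropy-regularized OT cost between histograms a and b
    under cost matrix C, via Sinkhorn's matrix-scaling iterations."""
    K = np.exp(-C / reg)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # match the column marginals
        u = a / (K @ v)                  # match the row marginals
    P = u[:, None] * K * v[None, :]      # approximate transport plan
    return float(np.sum(P * C))

a = b = np.full(2, 0.5)
C = np.array([[0.0, 1.0], [1.0, 0.0]])
cost = sinkhorn_cost(a, b, C)  # near 0: mass stays on the diagonal
```

Because each iteration only involves matrix-vector products, the Sinkhorn cost is cheap to compute and differentiable, which is what makes it attractive as a training-time loss.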

Person Re-Identification

Vulnerability of Person Re-Identification Models to Metric Adversarial Attacks

13 minute read

We investigate different possible attacks on metric learning models, depending on the number and type of guides available. Two particularly effective attacks stand out. To defend against them, we adapt the adversarial training protocol to metric learning. Let us guide you!

few-shot learning

Understanding Few-Shot Multi-Task Representation Learning Theory

23 minute read

Multi-Task Representation Learning (MTR) is a popular paradigm for learning shared representations from multiple related tasks. It has proven effective on problems ranging from machine translation in natural language processing to object detection in computer vision. Few-Shot Learning, on the other hand, is a recent problem that seeks to mimic the human ability to quickly learn to solve a target task from little supervision. Here, researchers have turned to meta-learning, which learns to learn a new task by training a model on many small tasks. Since the success of meta-learning on few-shot tasks still lacks a solid theoretical understanding, an intuitively appealing approach is to bridge the gap between it and multi-task learning, using the results established for the latter to better understand the former. In this post, we dive into a recent ICLR 2021 paper by S. Du, W. Hu, S. Kakade, J. Lee and Q. Lei that establishes novel learning bounds for multi-task learning in the few-shot setting, and we go beyond it by drawing connections that also shed light on the inner workings of meta-learning algorithms.
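The shared-representation idea behind MTR can be made concrete with a tiny model: one encoder common to all tasks plus a lightweight task-specific head per task. The NumPy sketch below is purely illustrative; the names and shapes are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, n_tasks = 8, 3, 4                 # input dim, representation dim, tasks
B = rng.standard_normal((k, d))         # shared representation, trained on all tasks
W = rng.standard_normal((n_tasks, k))   # one cheap linear head per task

def predict(x, task):
    """Task-specific prediction through the shared representation B."""
    z = B @ x           # shared low-dimensional representation
    return W[task] @ z  # task-specific head on top of z

x = rng.standard_normal(d)
preds = [predict(x, t) for t in range(n_tasks)]  # every task reuses B
```

The point of the few-shot analysis is that once a good shared `B` has been learned from many source tasks, a new task only needs enough samples to fit its small head, not the full model.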

learning theory

Understanding Few-Shot Multi-Task Representation Learning Theory

23 minute read

Multi-Task Representation Learning (MTR) is a popular paradigm for learning shared representations from multiple related tasks. It has proven effective on problems ranging from machine translation in natural language processing to object detection in computer vision. Few-Shot Learning, on the other hand, is a recent problem that seeks to mimic the human ability to quickly learn to solve a target task from little supervision. Here, researchers have turned to meta-learning, which learns to learn a new task by training a model on many small tasks. Since the success of meta-learning on few-shot tasks still lacks a solid theoretical understanding, an intuitively appealing approach is to bridge the gap between it and multi-task learning, using the results established for the latter to better understand the former. In this post, we dive into a recent ICLR 2021 paper by S. Du, W. Hu, S. Kakade, J. Lee and Q. Lei that establishes novel learning bounds for multi-task learning in the few-shot setting, and we go beyond it by drawing connections that also shed light on the inner workings of meta-learning algorithms.

multi-task learning

Understanding Few-Shot Multi-Task Representation Learning Theory

23 minute read

Multi-Task Representation Learning (MTR) is a popular paradigm for learning shared representations from multiple related tasks. It has proven effective on problems ranging from machine translation in natural language processing to object detection in computer vision. Few-Shot Learning, on the other hand, is a recent problem that seeks to mimic the human ability to quickly learn to solve a target task from little supervision. Here, researchers have turned to meta-learning, which learns to learn a new task by training a model on many small tasks. Since the success of meta-learning on few-shot tasks still lacks a solid theoretical understanding, an intuitively appealing approach is to bridge the gap between it and multi-task learning, using the results established for the latter to better understand the former. In this post, we dive into a recent ICLR 2021 paper by S. Du, W. Hu, S. Kakade, J. Lee and Q. Lei that establishes novel learning bounds for multi-task learning in the few-shot setting, and we go beyond it by drawing connections that also shed light on the inner workings of meta-learning algorithms.