Talks and presentations

Understanding Deep Neural Networks Through the Lens of their Non-Linearity

February 06, 2024

Séminaire Palaisien, ENSAE, Paris, France

The remarkable success of deep neural networks (DNNs) is often attributed to their high expressive power and their ability to approximate functions of arbitrary complexity. Indeed, DNNs are highly non-linear models, and the activation functions introduced into them are largely responsible for this. While many works have studied the expressive power of DNNs through the lens of their approximation capabilities, quantifying the non-linearity of DNNs, or of individual activation functions, remains an open problem. In this work, we propose the first theoretically sound solution to track non-linearity propagation in deep neural networks, with a specific focus on computer vision applications. Our proposed affinity score allows us to gain insights into the inner workings of a wide range of architectures and learning paradigms. We provide extensive experimental results that highlight the practical utility of the affinity score and its potential for far-reaching applications.
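As a toy illustration of what "quantifying non-linearity" can mean (this is only a simplified proxy, not the affinity score presented in the talk, which is defined differently), one can measure how well the best affine fit explains an activation function on sampled inputs; the `affine_fit_score` helper below is hypothetical:

```python
import numpy as np

def affine_fit_score(activation, x):
    """R^2 of the best affine fit a * x + b to activation(x) on samples x.

    A score close to 1 means the activation behaves almost affinely on
    these inputs; lower scores indicate stronger non-linearity.
    (Illustrative proxy only, not the affinity score from the talk.)
    """
    y = activation(x)
    A = np.stack([x, np.ones_like(x)], axis=1)  # design matrix [x, 1]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    ss_res = np.sum((y - A @ coef) ** 2)        # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)        # total sum of squares
    return 1.0 - ss_res / ss_tot

x = np.linspace(-3.0, 3.0, 1000)
print(affine_fit_score(lambda t: 2 * t + 1, x))         # affine map: ~1.0
print(affine_fit_score(lambda t: np.maximum(t, 0), x))  # ReLU: around 0.8
```

On a symmetric input range, ReLU scores well below 1 because no single affine map captures both of its linear pieces, which is the intuition behind tracking such scores layer by layer.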


On Few-Annotation Learning and Non-Linearity in Deep Neural Networks

February 06, 2024

Séminaire, CMAP, Paris, France

Learning something new in real life does not necessarily mean going through many examples to capture its essence. Humans are able to build upon prior experience and to adapt, combining previous observations with only little new evidence for fast learning. This is particularly true for recognition tasks, where we can often differentiate between two distinct objects after having seen only a few examples of each. In this talk, I present three contributions to machine learning with limited labels, and more specifically to computer vision tasks, addressing theoretical, algorithmic and experimental aspects. In the first contribution, we are interested in bridging the gap between theory and practice for popular meta-learning algorithms used in few-shot classification. We draw connections to multi-task representation learning, which benefits from solid theoretical foundations, to identify the conditions for more efficient meta-learning. Then, to leverage unlabeled data when training object detectors based on the Transformer architecture, we propose an unsupervised pretraining approach that improves contrastive learning for object detectors by introducing localization information. Finally, we present the first theoretically sound tool to track non-linearity propagation in deep neural networks, with a specific focus on computer vision applications. Our proposed affinity score allows us to gain insights into the inner workings of a wide range of architectures and learning paradigms. We present extensive experimental results that highlight the practical utility of the affinity score and its potential for far-reaching applications.


Improving Few-Shot Learning through Multi-task Representation Learning Theory

November 26, 2021

National Research Group Talk, Vers un apprentissage pragmatique dans un contexte de données visuelles labellisées limitées; GdR ISIS, Paris, France

We consider the framework of multi-task representation (MTR) learning, where the goal is to use source tasks to learn a representation that reduces the sample complexity of solving a target task. We start by reviewing recent advances in MTR theory and show that they can provide novel insights for popular meta-learning algorithms when analyzed within this framework. In particular, we highlight a fundamental difference between gradient-based and metric-based algorithms and put forward a theoretical analysis to explain it. Finally, we use the derived insights to improve the generalization capacity of meta-learning methods via a new spectral-based regularization term and confirm its effectiveness through experimental studies on classic few-shot classification benchmarks. To the best of our knowledge, this is the first contribution that puts the most recent learning bounds of MTR theory into practice for the task of few-shot classification.


Towards a Better Understanding of Meta-Learning Methods Through Multi-Task Representation Learning Theory

June 14, 2021

National Conference Talk, Conférence sur l'Apprentissage Automatique (CAp) 2021, Saint-Étienne, France

In this work, we sought to draw a link between meta-learning and multi-task representation learning, which has an extensive theoretical literature and solid learning bounds. By analyzing the most recent multi-task representation learning bounds and their assumptions, we identified criteria that enable more efficient meta-learning.


Optimal Transport as a Defense Against Adversarial Attacks

May 26, 2021

Conference Talk, International Conference on Pattern Recognition (ICPR) 2020, Virtual

We present Sinkhorn Adversarial Training (SAT), a robust adversarial training method based on recent optimal transport theory. We also propose a new metric, the Area Under the Accuracy Curve (AUAC), to quantify more precisely the robustness of a model to adversarial attacks over a wide range of perturbation sizes.
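A minimal sketch of how an area-under-accuracy-curve metric can be computed with the trapezoidal rule, assuming accuracies measured at increasing perturbation sizes; the exact normalization used for AUAC in the talk may differ, and the `auac` helper name is ours:

```python
import numpy as np

def auac(perturbation_sizes, accuracies):
    """Area under the accuracy-vs-perturbation curve (trapezoidal rule),
    normalized by the perturbation range so the score lies in [0, 1]."""
    eps = np.asarray(perturbation_sizes, dtype=float)
    acc = np.asarray(accuracies, dtype=float)
    area = np.sum((acc[1:] + acc[:-1]) / 2.0 * np.diff(eps))
    return float(area / (eps[-1] - eps[0]))

# A model whose accuracy degrades gracefully scores higher:
robust = auac([0.0, 0.1, 0.2, 0.3], [0.9, 0.8, 0.7, 0.6])   # 0.75
brittle = auac([0.0, 0.1, 0.2, 0.3], [0.9, 0.3, 0.1, 0.0])  # ~0.28
print(robust, brittle)
```

Integrating over the whole perturbation range, rather than reporting accuracy at a single perturbation budget, is what makes such a metric sensitive to how quickly a model degrades.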


Vulnerability of Person Re-Identification Models to Metric Adversarial Attacks

September 23, 2020

Talk, DATAIA Workshop « Safety & AI » 2020, CentraleSupélec, Université Paris-Saclay, France

We investigate different possible attacks on metric learning models depending on the number and type of guides available. Two particularly effective attacks stand out. To defend against these attacks, we adapt the adversarial training protocol for metric learning. Let us guide you!


Vulnerability of Person Re-Identification Models to Metric Adversarial Attacks

June 19, 2020

Conference Workshop, CVPR 2020 Workshop on Adversarial Machine Learning in Computer Vision, Virtual

We investigate different possible attacks on metric learning models depending on the number and type of guides available. Two particularly effective attacks stand out. To defend against these attacks, we adapt the adversarial training protocol for metric learning. Let us guide you!