Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in CVPRW, 2020
Recommended citation: "Vulnerability of Person Re-Identification Models to Metric Adversarial Attacks." Quentin Bouniot, Romaric Audigier, Angélique Loesch (2020). IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.
Published in ICPR, 2021
Recommended citation: "Optimal Transport as a Defense Against Adversarial Attacks." Quentin Bouniot, Romaric Audigier, Angélique Loesch (2021). IEEE International Conference on Pattern Recognition (ICPR 2020).
Published in ICLR Blog Track, 2022
Recommended citation: "Understanding Few-Shot Multi-Task Representation Learning Theory." Quentin Bouniot & Ievgen Redko (2022). ICLR Blog Track.
Published in ECCV, 2022
Recommended citation: "Improving Few-Shot Learning through Multi-task Representation Learning Theory" Quentin Bouniot, Ievgen Redko, Romaric Audigier, Angélique Loesch, Amaury Habrard. Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XX. Cham: Springer Nature Switzerland, 2022.
Published in WACV, 2023
Recommended citation: "Towards Few-Annotation Learning for Object Detection: Are Transformer-Based Models More Efficient?" Quentin Bouniot, Angélique Loesch, Romaric Audigier, Amaury Habrard; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 75-84
Published in ICLR, 2023
Recommended citation: "Proposal-Contrastive Pretraining for Object Detection from Fewer Data" Quentin Bouniot, Romaric Audigier, Angélique Loesch, Amaury Habrard; The Eleventh International Conference on Learning Representations, 2023
Published in PhD Thesis, 2023
Recommended citation: "Towards Few-Annotation Learning in Computer Vision: Application to Image Classification and Object Detection tasks" Quentin Bouniot (2023). PhD Thesis.
Published:
We investigate the possible attacks on metric learning models depending on the number and type of guides available. Two particularly effective attacks stand out. To defend against these attacks, we adapt the adversarial training protocol to metric learning. Let us guide you!
Published:
In this work, we put the latest insights from meta-learning theory into practice to ensure more effective meta-learning.
Published:
We present Sinkhorn Adversarial Training (SAT), a robust adversarial training method based on the latest theory of optimal transport. We also propose a new metric, the Area Under Accuracy Curve (AUAC), to quantify more precisely the robustness of a model to adversarial attacks over a wide range of perturbation sizes.
Published:
In our work, we sought to connect meta-learning with multi-task representation learning, which enjoys an extensive theoretical literature and solid learning bounds. By analyzing the most recent multi-task representation learning bounds and their assumptions, we identified criteria that enable more efficient meta-learning.
Published:
We consider the framework of multi-task representation (MTR) learning where the goal is to use source tasks to learn a representation that reduces the sample complexity of solving a target task. We start by reviewing recent advances in MTR theory and show that they can provide novel insights for popular meta-learning algorithms when analyzed within this framework. In particular, we highlight a fundamental difference between gradient-based and metric-based algorithms and put forward a theoretical analysis to explain it. Finally, we use the derived insights to improve the generalization capacity of meta-learning methods via a new spectral-based regularization term and confirm its efficiency through experimental studies on classic few-shot classification benchmarks. To the best of our knowledge, this is the first contribution that puts the most recent learning bounds of MTR theory into practice for the task of few-shot classification.
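For intuition only, a spectral-style regularizer can penalize how ill-conditioned a batch of learned representations is, for example through the ratio of its extreme singular values. This is a hypothetical NumPy sketch, not the exact regularization term used in the paper:

```python
import numpy as np

def spectral_penalty(features: np.ndarray) -> float:
    """Penalty that grows as the feature matrix becomes ill-conditioned.

    features: (n_samples, dim) matrix of representations for one batch.
    Returns the ratio of largest to smallest singular value minus 1,
    so a perfectly well-conditioned matrix gets penalty 0.
    """
    s = np.linalg.svd(features, compute_uv=False)
    return float(s[0] / s[-1] - 1.0)

rng = np.random.default_rng(0)
well_conditioned = rng.standard_normal((64, 16))
ill_conditioned = well_conditioned.copy()
ill_conditioned[:, 0] *= 100.0  # one direction dominates the representation

print(spectral_penalty(well_conditioned))  # moderate
print(spectral_penalty(ill_conditioned))   # much larger
```

In a training loop, a term of this kind would be added to the task loss with a small weight, encouraging the learned representation to spread information evenly across dimensions.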
Published:
Learning something new in real life does not necessarily mean going through a lot of examples in order to capture the essence of it. Humans are able to build upon prior experience and have the ability to adapt, allowing them to combine previous observations with only little evidence for fast learning. This is particularly the case for recognition tasks, for which we are often capable of differentiating between two distinct objects after having seen only a few examples of them. In this talk, I will develop three different contributions for Machine Learning with limited labels, and more specifically for Computer Vision tasks, addressing theoretical, algorithmic and experimental aspects. As a first contribution, we are interested in bridging the gap between theory and practice for popular Meta-Learning algorithms used in Few-Shot Classification. We make connections to Multi-Task Representation Learning, which benefits from solid theoretical foundations, to identify the conditions for more efficient meta-learning. Then, to leverage unlabeled data when training object detectors based on the Transformer architecture, we propose an unsupervised pretraining approach that improves contrastive learning for object detectors by introducing localization information. Finally, we present the first theoretically sound tool to track non-linearity propagation in deep neural networks, with a specific focus on computer vision applications. Our proposed affinity score allows us to gain insights into the inner workings of a wide range of different architectures and learning paradigms. We present extensive experimental results that highlight the practical utility of the proposed affinity score and its potential for long-reaching applications.
Published:
The remarkable success of deep neural networks (DNNs) is often attributed to their high expressive power and their ability to approximate functions of arbitrary complexity. Indeed, DNNs are highly non-linear models, and the activation functions introduced into them are largely responsible for this. While many works have studied the expressive power of DNNs through the lens of their approximation capabilities, quantifying the non-linearity of DNNs, or of individual activation functions, remains an open problem. In this work, we propose the first theoretically sound solution to track non-linearity propagation in deep neural networks, with a specific focus on computer vision applications. Our proposed affinity score allows us to gain insights into the inner workings of a wide range of different architectures and learning paradigms. We provide extensive experimental results that highlight the practical utility of the proposed affinity score and its potential for long-reaching applications.
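To build intuition for what "quantifying non-linearity" can mean, one generic proxy (not the affinity score proposed in the work, whose definition is not reproduced here) measures how well an activation function is approximated by its best affine fit over a range of inputs:

```python
import numpy as np

def linearity_r2(fn, lo=-3.0, hi=3.0, n=1000):
    """R^2 of the best affine fit to fn on [lo, hi]: 1.0 means fn is affine."""
    x = np.linspace(lo, hi, n)
    y = fn(x)
    # Least-squares affine fit y ~ a*x + b
    a, b = np.polyfit(x, y, deg=1)
    residual = y - (a * x + b)
    return 1.0 - residual.var() / y.var()

relu = lambda x: np.maximum(x, 0.0)
affine = lambda x: 2.0 * x + 1.0

print(linearity_r2(affine))   # 1.0: perfectly linear
print(linearity_r2(relu))     # < 1.0: non-linear
print(linearity_r2(np.tanh))  # < 1.0: non-linear
```

A score like this drops below 1 exactly when the activation bends away from a straight line, which is the kind of behavior a principled non-linearity measure has to capture.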
Teaching Assistant, CentraleSupélec, Université Paris-Saclay
First year Computer Science course for the main engineering track at CentraleSupélec
Course Lecturer, Télécom Paris, Institut Polytechnique de Paris
This course is part of the Master 2 Data Science program from IP Paris.
Published:
Useful commands for bash.
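A few of the kinds of commands such a cheat sheet typically covers (illustrative examples only, run in a throwaway directory; the file and its contents are made up):

```shell
# Work in a throwaway directory
tmp=$(mktemp -d) && cd "$tmp"

# Create a small file to play with
printf 'foo one\nfoo two\nbaz\n' > file.txt

# Search for a pattern, showing line numbers
grep -n "foo" file.txt

# Replace a string in place across the file (GNU sed)
sed -i 's/foo/bar/g' file.txt

# Count lines in the file
wc -l < file.txt
```

Each of these composes with pipes, which is where most day-to-day bash productivity comes from.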
Published:
Manage Python environments with conda.
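Typical commands such a conda cheat sheet covers (illustrative; the environment and package names below are placeholders):

```shell
# Create a new environment with a given Python version
conda create -n myenv python=3.10

# Activate / deactivate it
conda activate myenv
conda deactivate

# Install packages into the active environment
conda install numpy

# List environments and export the active one for reproducibility
conda env list
conda env export > environment.yml

# Remove an environment entirely
conda env remove -n myenv
```

Exporting to `environment.yml` is what makes a conda setup reproducible on another machine.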
Published:
Basic notions of Pytorch and useful functions to manipulate tensors.
Published:
A trick to improve computation time when working with lists.
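One classic trick of this kind (whether or not it is the one the post covers) is replacing repeated membership tests on a list with a set, turning an O(n) scan into an average O(1) lookup:

```python
import timeit

items = list(range(100_000))
as_list = items
as_set = set(items)  # one-time conversion cost, then fast lookups

# Membership test is O(n) on a list but O(1) on average on a set.
t_list = timeit.timeit(lambda: 99_999 in as_list, number=100)
t_set = timeit.timeit(lambda: 99_999 in as_set, number=100)
print(f"list: {t_list:.4f}s, set: {t_set:.4f}s")
```

The speedup is most dramatic when the element being searched for sits near the end of the list, as above.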
Published:
Clean and efficient string formatting in Python 3.6+.
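This refers to f-strings, introduced in Python 3.6, which embed expressions and format specs directly in the string literal. A minimal illustration:

```python
name = "Ada"
score = 0.875

# Expressions and format specifications go right inside the braces
msg = f"{name} scored {score:.2%}"  # percentage with 2 decimals
print(msg)

# Alignment and padding in the format spec
print(f"|{name:>10}|")  # right-aligned in a field of width 10
```

Compared to `"%s scored %s" % (...)` or `str.format`, f-strings keep the value next to where it appears in the text.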
Published:
Easy path handling in Python 3.4+.
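This refers to the `pathlib` module, added in Python 3.4, which replaces string-based `os.path` juggling with path objects. A small sketch using a temporary directory:

```python
from pathlib import Path
import tempfile

# Paths compose with "/" instead of os.path.join
base = Path(tempfile.mkdtemp())
cfg = base / "configs" / "model.yaml"

# Create parent directories and write in one call each
cfg.parent.mkdir(parents=True, exist_ok=True)
cfg.write_text("lr: 0.001\n")

print(cfg.suffix)     # file extension
print(cfg.stem)       # filename without extension
print(cfg.read_text())
```

Because `Path` objects carry their own methods (`exists()`, `glob()`, `resolve()`, ...), most path code needs no extra imports beyond `pathlib`.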
Published:
Beautiful progress bars for loops in Python.
Published:
Einstein summation in NumPy or PyTorch.
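`einsum` expresses tensor contractions by naming the indices: repeated indices are summed over, and indices after `->` are kept. A few NumPy examples (the PyTorch `torch.einsum` syntax is the same):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)

# Matrix multiplication: contract over the shared index j
C = np.einsum("ij,jk->ik", A, B)

# Trace: sum over the repeated diagonal index
tr = np.einsum("ii->", np.eye(3))

# Batched outer product: keep batch index b, no contraction
x = np.ones((5, 2))
y = np.ones((5, 3))
outer = np.einsum("bi,bj->bij", x, y)
print(C.shape, tr, outer.shape)
```

The index string makes the intended operation explicit, which is why `einsum` often reads better than chains of `transpose`, `reshape` and `matmul`.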
Published:
Basic notions of Kubernetes.
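The smallest deployable unit in Kubernetes is the Pod. An illustrative minimal manifest (names and image are placeholders), which would be applied with `kubectl apply -f pod.yaml`:

```yaml
# Minimal Pod manifest: one container exposing port 80
apiVersion: v1
kind: Pod
metadata:
  name: hello-nginx
  labels:
    app: hello
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
```

In practice Pods are rarely created directly; a Deployment wraps this same container spec and adds replication and rolling updates on top.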
Published:
Basic notions of Git. Branching, Merging and Stashing.
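The three notions named above can be walked through end to end in a throwaway repository (illustrative file names; the identity config is only there so commits succeed):

```shell
# Throwaway repository to demo branching, merging and stashing
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name You
echo "v1" > notes.txt && git add notes.txt && git commit -qm "initial"

# Branching: create and switch to a feature branch
git switch -c feature
echo "feature work" >> notes.txt && git commit -aqm "feature"

# Merging: go back to the previous branch and bring the feature in
git switch -
git merge -q feature

# Stashing: shelve uncommitted changes, then restore them
echo "wip" >> notes.txt
git stash
git stash pop
```

`git stash` is the piece people forget: it lets you switch branches with a dirty working tree and pick the changes back up later.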
Published:
Basic notions of regular expressions (regex).
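Two notions worth illustrating are named groups and substitution with backreferences, shown here with Python's `re` module on a made-up log line:

```python
import re

log = "2024-05-01 ERROR disk full; 2024-05-02 INFO ok"

# Named groups make matches self-documenting
pattern = re.compile(r"(?P<date>\d{4}-\d{2}-\d{2}) (?P<level>[A-Z]+)")
for m in pattern.finditer(log):
    print(m.group("date"), m.group("level"))

# Substitution keeping parts of the match via group backreferences
masked = re.sub(r"\d{4}-(\d{2})-(\d{2})", r"XXXX-\1-\2", log)
print(masked)
```

Compiling the pattern once with `re.compile` also pays off when the same regex is applied in a loop.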
Published:
Basic notions of Vim and useful shortcuts.
Published:
Manipulate PDF documents and images.