Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Optimal Transport as a Defense Against Adversarial Attacks

less than 1 minute read

Published:

We present Sinkhorn Adversarial Training (SAT), a robust adversarial training method based on the latest theory of optimal transport. We also propose a new metric, the Area Under Accuracy Curve (AUAC), to quantify more precisely the robustness of a model to adversarial attacks over a wide range of perturbation sizes.

Vulnerability of Person Re-Identification Models to Metric Adversarial Attacks

13 minute read

Published:

We investigate different possible attacks on metric learning models depending on the number and type of guides available. Two particularly effective attacks stand out. To defend against these attacks, we adapt the adversarial training protocol for metric learning. Let us guide you!

portfolio

publications

talks

Optimal Transport as a Defense Against Adversarial Attacks

Published:

We present Sinkhorn Adversarial Training (SAT), a robust adversarial training method based on the latest theory of optimal transport. We also propose a new metric, the Area Under Accuracy Curve (AUAC), to quantify more precisely the robustness of a model to adversarial attacks over a wide range of perturbation sizes.

Towards a Better Understanding of Meta-Learning Methods through the Theory of Multi-Task Representation Learning

Published:

In our work, we sought to draw the link between meta-learning and multi-task representation learning, which has an extensive theoretical literature and solid learning bounds. By analyzing the most recent learning bounds for multi-task representation learning and their assumptions, we identified criteria that enable more effective meta-learning.

Improving Few-Shot Learning through Multi-task Representation Learning Theory

Published:

We consider the framework of multi-task representation (MTR) learning where the goal is to use source tasks to learn a representation that reduces the sample complexity of solving a target task. We start by reviewing recent advances in MTR theory and show that they can provide novel insights for popular meta-learning algorithms when analyzed within this framework. In particular, we highlight a fundamental difference between gradient-based and metric-based algorithms and put forward a theoretical analysis to explain it. Finally, we use the derived insights to improve the generalization capacity of meta-learning methods via a new spectral-based regularization term and confirm its efficiency through experimental studies on classic few-shot classification benchmarks. To the best of our knowledge, this is the first contribution that puts the most recent learning bounds of MTR theory into practice for the task of few-shot classification.

teaching

Algorithms and complexity

Course Lecturer, CentraleSupélec, Université Paris-Saclay, 1900

First-year Computer Science course for the main engineering track at CentraleSupélec.

tips

Bash

Published:

Useful commands for bash.

Conda

Published:

Manage Python environments with conda.

Pytorch

Published:

Basic notions of PyTorch and useful functions for manipulating tensors.

Computation time

Published:

A trick to improve computation time when working with lists.
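The post's exact trick isn't reproduced in this summary; one common example of this kind of speed-up is replacing repeated membership tests on a list with a set:

```python
import timeit

# Membership tests on a list scan every element (O(n) per lookup);
# a set uses hashing, making each lookup O(1) on average.
items_list = list(range(10_000))
items_set = set(items_list)

t_list = timeit.timeit(lambda: 9_999 in items_list, number=1_000)
t_set = timeit.timeit(lambda: 9_999 in items_set, number=1_000)

# Both containers give the same answer; the set is typically far faster.
print(9_999 in items_set)  # True
```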

f-string

Published:

Clean and efficient string formatting in Python ≥ 3.6.
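A minimal sketch of the kind of formatting the post covers (the post's own examples may differ):

```python
# f-strings (Python >= 3.6) evaluate expressions inside braces at runtime.
name, score = "Alice", 0.91234

# Format specifiers work directly inside the braces.
print(f"{name}: {score:.2f}")  # Alice: 0.91

# The "=" specifier (Python >= 3.8) echoes the expression itself.
print(f"{2 * 3 = }")  # 2 * 3 = 6
```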

Pathlib

Published:

Easy path handling in Python ≥ 3.4.
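A small taste of the object-oriented path handling the post describes, using only the standard library (paths here are illustrative):

```python
from pathlib import Path

# "/" joins path components, replacing error-prone string concatenation.
p = Path("data") / "images" / "cat.png"

print(p.suffix)                # .png
print(p.stem)                  # cat
print(p.with_suffix(".jpg"))   # data/images/cat.jpg (platform-dependent separator)
```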

Tqdm

Published:

Beautiful progress bars for loops in Python
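The basic usage pattern is a one-line change, assuming the `tqdm` package is installed (the post may cover more options):

```python
from tqdm import tqdm

# Wrapping any iterable in tqdm() displays a live progress bar.
total = 0
for i in tqdm(range(1000), desc="summing"):
    total += i
print(total)  # 499500
```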

Einsum

Published:

Einstein summation in NumPy or PyTorch.
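A minimal sketch of the notation in NumPy (PyTorch's `torch.einsum` uses the same subscript strings):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)

# "ij,jk->ik": sum over the shared index j, i.e. a matrix product.
c = np.einsum("ij,jk->ik", a, b)
print(np.allclose(c, a @ b))  # True

# "ii->": sum over the diagonal, i.e. the trace.
m = np.arange(9).reshape(3, 3)
print(np.einsum("ii->", m) == np.trace(m))  # True
```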

Kubernetes

Published:

Basic notions of Kubernetes.

Git

Published:

Basic notions of Git: branching, merging, and stashing.

Regex

Published:

Basic notions of regex.
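A minimal sketch of the kind of pattern matching the post introduces, using Python's standard `re` module (the example string is illustrative):

```python
import re

log = "error=404 path=/index.html"

# Named groups make matches self-documenting.
m = re.search(r"error=(?P<code>\d+)", log)
print(m.group("code"))  # 404

# A lookahead matches text only when it is followed by ".html".
print(re.findall(r"\w+(?=\.html)", log))  # ['index']
```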