Vulnerability of Person Re-Identification Models to Metric Adversarial Attacks

Published in CVPRW, 2020

Recommended citation: "Vulnerability of Person Re-Identification Models to Metric Adversarial Attacks." Quentin Bouniot, Romaric Audigier, Angélique Loesch (2020). IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.

DeepMind Travel Award

Abstract

Person re-identification (re-ID) is a key problem in the smart surveillance of camera networks. Over the past years, deep learning models have become the state of the art. However, deep neural networks have been shown to be vulnerable to adversarial examples, i.e., human-imperceptible perturbations of their inputs. Extensively studied for closed-set image classification, this problem also arises in open-set retrieval tasks. Indeed, recent work has shown that adversarial examples can also be generated for metric learning systems such as re-ID ones. These models remain vulnerable: when faced with adversarial examples, they fail to correctly recognize a person, which represents a security breach. Such attacks are all the more dangerous as they are impossible for a human operator to detect. Attacking a metric consists in altering the distances between the features of an attacked image and those of reference images, i.e., guides. In this article, we investigate the different attacks possible depending on the number and type of guides available. From this family of metric attacks, two particularly effective ones stand out. The first, called the Self Metric Attack, is a strong attack that requires no image other than the attacked one. The second, called the Furthest-Negative Attack, makes full use of a set of images. The attacks are evaluated on commonly used datasets: Market-1501 and DukeMTMC-reID. Finally, we propose an efficient extension of the adversarial training protocol, adapted to metric learning, as a defense that increases the robustness of re-ID models.
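To make the idea of a guide-free metric attack concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' released code; see the Code link under Resources for that) of a Self-Metric-style attack: an L-infinity PGD loop that pushes the embedding of the perturbed image away from the embedding of the clean image, so no guide images are needed. All names and hyperparameters (model, eps, alpha, steps) are illustrative assumptions.

import torch

def self_metric_attack(model, image, eps=8/255, alpha=2/255, steps=10):
    """Hypothetical sketch: image is a (1, 3, H, W) tensor in [0, 1];
    model maps image batches to feature embeddings."""
    model.eval()
    with torch.no_grad():
        guide = model(image)  # the clean image serves as its own guide
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        # Distance between the perturbed and clean embeddings
        dist = torch.norm(model(adv) - guide, p=2)
        grad = torch.autograd.grad(dist, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()               # gradient ascent on the distance
            adv = image + (adv - image).clamp(-eps, eps)  # project back into the L_inf ball
            adv = adv.clamp(0, 1)                         # keep a valid image
    return adv.detach()

A guide-based variant in the same family, such as the Furthest-Negative Attack, would instead minimize the distance to the embeddings of chosen negative guide images; the projection and step logic stay the same.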

Resources

Paper link
Video presentation
Code

For attribution in academic contexts, please cite this work as:

“Vulnerability of Person Re-Identification Models to Metric Adversarial Attacks.” Quentin Bouniot, Romaric Audigier, Angélique Loesch. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 794-795.

@InProceedings{Bouniot_2020_CVPR_Workshops,
  author    = {Bouniot, Quentin and Audigier, Romaric and Loesch, Ang{\'e}lique},
  title     = {Vulnerability of Person Re-Identification Models to Metric Adversarial Attacks},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2020},
  pages     = {794-795}
}