Beyond Recognition: Privacy Protections in a Surveilled World
Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0003-2391-5951
2024 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This thesis addresses the tension between the use of facial recognition systems and the protection of personal privacy in machine learning and biometric identification. As advances in deep learning accelerate the evolution of these systems, facial recognition enhances security capabilities but also risks invading personal privacy. Our research identifies and addresses critical vulnerabilities inherent in facial recognition systems, and proposes innovative privacy-enhancing technologies that anonymize facial data while maintaining its utility for legitimate applications.

Our investigation centers on the development of methodologies and frameworks that achieve k-anonymity in facial datasets; leverage identity disentanglement to facilitate anonymization; exploit the vulnerabilities of facial recognition systems to underscore their limitations; and implement practical defenses against unauthorized recognition systems. We introduce novel contributions such as AnonFACES, StyleID, IdDecoder, StyleAdv, and DiffPrivate, each designed to protect facial privacy through advanced adversarial machine learning techniques and generative models. These solutions not only demonstrate the feasibility of protecting facial privacy in an increasingly surveilled world, but also highlight the ongoing need for robust countermeasures against the ever-evolving capabilities of facial recognition technology.

Continuous innovation in privacy-enhancing technologies is required to safeguard individuals from the pervasive reach of digital surveillance and protect their fundamental right to privacy. By providing open-source, publicly available tools and frameworks, this thesis contributes to the collective effort to ensure that advancements in facial recognition serve the public good without compromising individual rights. Our multi-disciplinary approach bridges the gap between biometric systems, adversarial machine learning, and generative modeling to pave the way for future research in the domain and support AI innovation in which technological advancement and privacy are balanced.
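The k-anonymity goal mentioned in the abstract can be sketched in a few lines. This is not the AnonFACES algorithm, only a minimal illustration of the guarantee it targets: records are grouped into clusters of at least k and each member is replaced by its cluster centroid, so no anonymized record can be told apart from at least k-1 others. The 2-D "face feature" vectors below are made up for the example.

```python
def k_anonymize(vectors, k):
    """Greedily cluster vectors into groups of at least k, then replace
    every member with its group centroid."""
    if len(vectors) < k:
        raise ValueError("need at least k records")
    remaining = list(range(len(vectors)))
    clusters = []
    while len(remaining) >= 2 * k:
        seed = remaining.pop(0)
        # nearest remaining records first (squared Euclidean distance)
        remaining.sort(key=lambda j: sum((a - b) ** 2
                                         for a, b in zip(vectors[seed], vectors[j])))
        clusters.append([seed] + [remaining.pop(0) for _ in range(k - 1)])
    clusters.append(remaining)  # leftover group has between k and 2k-1 members
    anonymized = [None] * len(vectors)
    groups = [None] * len(vectors)
    for cid, members in enumerate(clusters):
        dim = len(vectors[members[0]])
        centroid = [sum(vectors[m][d] for m in members) / len(members)
                    for d in range(dim)]
        for m in members:
            anonymized[m] = centroid
            groups[m] = cid
    return anonymized, groups

# six toy 2-D "face feature" records forming two obvious identity groups
faces = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
         [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]]
anon, groups = k_anonymize(faces, k=3)
```

After anonymization, every record within a group carries the same centroid vector, which is the sense in which a re-identification attack cannot narrow a face down to fewer than k candidates.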

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2024, p. 81
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 2392
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-203225
DOI: 10.3384/9789180756761
ISBN: 9789180756754 (print)
ISBN: 9789180756761 (electronic)
OAI: oai:DiVA.org:liu-203225
DiVA, id: diva2:1856142
Public defence
2024-06-12, Ada Lovelace, B-building, Campus Valla, Linköping, 09:15 (English)
Note

Funding: This work was supported by the Swedish Research Council (VR) and the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Available from: 2024-05-06 Created: 2024-05-06 Last updated: 2024-05-08. Bibliographically approved
List of papers
1. AnonFACES: Anonymizing Faces Adjusted to Constraints on Efficacy and Security
2020 (English). In: WPES'20: Proceedings of the 19th Workshop on Privacy in the Electronic Society / [ed] Wouter Lueks, Paul Syverson, New York, NY, United States: Association for Computing Machinery (ACM), 2020, p. 87-100. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
New York, NY, United States: Association for Computing Machinery (ACM), 2020
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-179791 (URN)
10.1145/3411497.3420220 (DOI)
2-s2.0-85097241828 (Scopus ID)
9781450380867 (ISBN)
Conference
19th ACM Workshop on Privacy in the Electronic Society, WPES 2020, held in conjunction with the 27th ACM Conference on Computer and Communication Security, CCS 2020, Virtual, Online, 9 November 2020
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2021-10-01 Created: 2021-10-01 Last updated: 2024-09-15. Bibliographically approved
2. StyleID: Identity Disentanglement for Anonymizing Faces
2023 (English). In: Proceedings on Privacy Enhancing Technologies (PoPETs), ISSN 2299-0984, Vol. 1, p. 1-4. Article in journal, Editorial material (Other academic), Published
Abstract [en]

Privacy of machine learning models is one of the remaining challenges that hinder the broad adoption of Artificial Intelligence (AI). This paper considers this problem in the context of image datasets containing faces. Anonymization of such datasets is becoming increasingly important due to their central role in the training of autonomous cars, for example, and the vast amount of data generated by surveillance systems. While most prior work de-identifies facial images by modifying identity features in pixel space, we instead project the image onto the latent space of a Generative Adversarial Network (GAN) model, find the features that provide the biggest identity disentanglement, and then manipulate these features in latent space, pixel space, or both. The main contribution of the paper is the design of a feature-preserving anonymization framework, StyleID, which protects the individuals’ identity, while preserving as many characteristics of the original faces in the image dataset as possible. As part of the contribution, we present a novel disentanglement metric, three complementary disentanglement methods, and new insights into identity disentanglement. StyleID provides tunable privacy, has low computational complexity, and is shown to outperform current state-of-the-art solutions.
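The latent-space manipulation described above can be illustrated with a deliberately simplified sketch. This is not StyleID: the "identity extractor" here is a made-up linear map `w_id`, and the disentanglement score is just the column norm of that map. It only shows the core move of ranking latent dimensions by how strongly they influence identity and copying exactly those dimensions from a donor latent, leaving the attribute-carrying dimensions untouched.

```python
def identity_scores(w_id):
    """Per-dimension influence on identity: column norm of the (toy,
    linear) identity extractor."""
    dims = len(w_id[0])
    return [sum(row[d] ** 2 for row in w_id) ** 0.5 for d in range(dims)]

def swap_identity_dims(source, donor, w_id, m):
    """Copy the m most identity-relevant latent dimensions from donor
    into source; all other dimensions (the 'attributes') are preserved."""
    scores = identity_scores(w_id)
    top = sorted(range(len(source)), key=lambda d: -scores[d])[:m]
    out = list(source)
    for d in top:
        out[d] = donor[d]
    return out, sorted(top)

# 4-D toy latent; by construction, identity depends almost only on dims 1 and 3
w_id = [[0.0, 2.0, 0.1, 1.5],
        [0.1, 1.0, 0.0, 2.0]]
source = [10.0, 20.0, 30.0, 40.0]
donor  = [-1.0, -2.0, -3.0, -4.0]
anon, swapped = swap_identity_dims(source, donor, w_id, m=2)
```

In a real GAN latent space the identity-relevant directions are not given by a known matrix, which is why StyleID needs a disentanglement metric to find them; the swap step itself, however, has this shape.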

Place, publisher, year, edition, pages
De Gruyter Open, 2023
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-188914 (URN)
10.56553/popets-2023-0001 (DOI)
Conference
Also presented at the Privacy Enhancing Technologies Symposium (PETS), July 2023.
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2022-09-30 Created: 2022-09-30 Last updated: 2024-08-22
3. IdDecoder: A Face Embedding Inversion Tool and its Privacy and Security Implications on Facial Recognition Systems
2023 (English). In: Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy, ACM Digital Library, 2023, p. 15-26. Conference paper, Published paper (Refereed)
Abstract [en]

Most state-of-the-art facial recognition systems (FRSs) use face embeddings. In this paper, we present the IdDecoder framework, capable of effectively synthesizing realistic-neutralized face images from face embeddings, and two effective attacks on state-of-the-art facial recognition models using embeddings. The first attack is a black-box version of a model inversion attack that allows the attacker to reconstruct a realistic face image that is both visually and numerically (as determined by the FRS) recognized as the same identity as the original face used to create a given face embedding. This attack raises significant privacy concerns regarding the membership of the gallery dataset of these systems, and highlights the importance of those designing and deploying FRSs paying greater attention to the protection of face embeddings than is currently done. The second attack is a novel attack that performs the model inversion so as to instead create the face of an alternative identity that is visually different from the original but has a close identity distance (ensuring that it is recognized as being of the same identity). This attack increases the attacked system's false acceptance rate and raises significant security concerns. Finally, we use IdDecoder to visualize, evaluate, and provide insights into differences between three state-of-the-art facial embedding models.
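The black-box inversion idea can be sketched without any GAN machinery. This is not IdDecoder: `embed` below is a stand-in linear projection and the recovered object is a feature vector rather than a face image. But the attack loop is the same in spirit: given nothing but query access to the embedding model and a leaked target embedding, simple random hill climbing finds an input whose embedding the model matches to the target identity.

```python
import random

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def embed(x):
    # stand-in "facial recognition" model: a fixed linear projection
    return [sum(w * v for w, v in zip(row, x)) for row in PROJ]

def invert(target_emb, dim, steps=2000, step_size=0.2, seed=0):
    """Black-box inversion: uses only embed() queries, no gradients."""
    rng = random.Random(seed)
    cand = [rng.uniform(-1, 1) for _ in range(dim)]
    best = cosine(embed(cand), target_emb)
    for _ in range(steps):
        trial = [v + rng.gauss(0, step_size) for v in cand]
        score = cosine(embed(trial), target_emb)
        if score > best:  # (1+1) hill climbing: keep improvements only
            cand, best = trial, score
    return cand, best

setup_rng = random.Random(42)
PROJ = [[setup_rng.gauss(0, 1) for _ in range(8)] for _ in range(4)]  # 8-D -> 4-D
secret = [setup_rng.uniform(-1, 1) for _ in range(8)]
target = embed(secret)  # the only thing the attacker ever sees
recovered, sim = invert(target, dim=8)
```

The recovered vector's embedding ends up highly similar to the target, i.e. a matcher thresholding on cosine similarity would accept it as the original identity, which is precisely why leaked embeddings are a privacy liability.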

Place, publisher, year, edition, pages
ACM Digital Library, 2023
Keywords
Face embedding inversion; Black-box attack; Facial recognition
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-199091 (URN)
10.1145/3577923.3583645 (DOI)
001352235200003 ()
2-s2.0-85158107585 (Scopus ID)
9798400700675 (ISBN)
Conference
CODASPY '23: Thirteenth ACM Conference on Data and Application Security and Privacy, Charlotte, NC, USA, April 24 - 26, 2023
Note

Funding agencies: Swedish Research Council (VR); Wallenberg AI, Autonomous Systems and Software Program (WASP), funded by the Knut and Alice Wallenberg Foundation

Available from: 2023-11-11 Created: 2023-11-11 Last updated: 2024-12-11. Bibliographically approved
4. StyleAdv: A Usable Privacy Framework Against Facial Recognition with Adversarial Image Editing
2024 (English). In: Proceedings on Privacy Enhancing Technologies, De Gruyter Open, 2024, Vol. 2, p. 106-123. Conference paper, Published paper (Refereed)
Abstract [en]

In this era of ubiquitous surveillance and online presence, protecting facial privacy has become a critical concern for individuals and society as a whole. Adversarial attacks have emerged as a promising solution to this problem, but current methods are limited in quality or are impractical for sensitive domains such as facial editing. This paper presents a novel adversarial image editing framework called StyleAdv, which leverages StyleGAN's latent spaces to generate powerful adversarial images, providing an effective tool against facial recognition systems. StyleAdv achieves high success rates by employing meaningful facial editing with StyleGAN while maintaining image quality, addressing a challenge faced by existing methods. To do so, the comprehensive framework integrates semantic editing, adversarial attacks, and face recognition systems, providing a cohesive and robust tool for privacy protection. We also introduce the "residual attack" strategy, using residual information to enhance attack success rates. Our evaluation offers insights into effective editing, discussing tradeoffs in latent spaces, optimal edits for our optimizer, and the impact of utilizing residual information. Our approach is transferable to state-of-the-art facial recognition systems, making it a versatile tool for privacy protection. In addition, we provide a user-friendly interface with multiple editing options to help users create effective adversarial images. Extensive experiments are used to provide insights and demonstrate that StyleAdv outperforms state-of-the-art methods in terms of both attack success rate and image quality. By providing a versatile tool for generating high-quality adversarial samples, StyleAdv can be used both to enhance individual users' privacy and to stimulate advances in adversarial attack and defense research.
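The adversarial-editing idea above can be reduced to a toy sketch. This is not StyleAdv: the real framework optimizes semantic edits in StyleGAN's latent space, whereas this example searches for a perturbation, bounded per coordinate, that pushes a feature vector's embedding away from its own identity under a stand-in linear recognition model. The bound plays the role of the image-quality constraint; the similarity drop plays the role of the attack success.

```python
import random

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((sum(x * x for x in a) ** 0.5) * (sum(x * x for x in b) ** 0.5))

def embed(x):
    # stand-in facial-recognition embedding (fixed linear projection)
    return [sum(w * v for w, v in zip(row, x)) for row in PROJ]

def adversarial_edit(face, eps=0.5, steps=3000, seed=1):
    """Search for a perturbation, bounded by eps per coordinate, that
    lowers the similarity between the edited and original embeddings."""
    rng = random.Random(seed)
    ref = embed(face)
    delta = [0.0] * len(face)
    best = 1.0  # similarity of the unedited face with itself
    for _ in range(steps):
        # propose a new perturbation, clamped into the [-eps, eps] box
        trial = [max(-eps, min(eps, d + rng.gauss(0, 0.1))) for d in delta]
        sim = cosine(embed([v + t for v, t in zip(face, trial)]), ref)
        if sim < best:  # keep edits that further confuse the recognizer
            delta, best = trial, sim
    return [v + d for v, d in zip(face, delta)], best

setup_rng = random.Random(7)
PROJ = [[setup_rng.gauss(0, 1) for _ in range(6)] for _ in range(4)]
face = [1.0, -0.5, 0.3, 0.8, -0.2, 0.6]
protected, sim = adversarial_edit(face)
```

The design tension is visible even in this toy: a tighter `eps` keeps the edit small (high "image quality") but limits how far the identity similarity can be driven down, which is the tradeoff StyleAdv navigates with semantic rather than raw perturbations.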

Keywords
Adversarial samples, Privacy filter, Facial anonymization
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-203224 (URN)10.56553/popets-2024-0043 (DOI)
Conference
The 24th Privacy Enhancing Technologies Symposium July 15–20, 2024, Bristol, UK
Note

Funding: This work was supported by the Swedish Research Council (VR) and the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Available from: 2024-05-06 Created: 2024-05-06 Last updated: 2024-05-06. Bibliographically approved

Open Access in DiVA

fulltext (4778 kB), 803 downloads
File information
File name: FULLTEXT02.pdf
File size: 4778 kB
Checksum (SHA-512): 2f2de329f1ed126cf4e9cb16c190006310d8116c0cf87bcfe0329adba5d561b3fd9b3436e938d96e52f922568eeb32cd69dec0b2908e3a04d0644eb516a75bcb
Type: fulltext
Mimetype: application/pdf

Authority records

Minh-Ha, Le
Total: 805 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 2670 hits