reteLLMe: Design Rules for using Large Language Models to Protect the Privacy of Individuals in their Textual Contributions

Conference paper, 2024

Abstract

The advanced inference capabilities of Large Language Models (LLMs) pose a significant threat to the privacy of individuals by enabling third parties to accurately infer personal attributes (such as gender, age, location, religion, and political opinions) from their writings. Paradoxically, LLMs can also be used to protect individuals by helping them modify their texts to prevent certain unwanted inferences, opening the way to new tools. Examples include sanitising online reviews (e.g., of hotels or movies) or sanitising CVs and cover letters. However, how can we avoid misestimating the inference risks of LLM-based text sanitisers? Can the protection offered be overestimated? Is the original purpose of the produced text preserved? To the best of the authors' knowledge, no previous work has tackled these questions. Thus, in this paper four design rules (collectively referred to as reteLLMe) are proposed to minimise these potential issues. We validate these rules and quantify the benefits obtained in a given use case: sanitising hotel reviews. We show that up to 76% of at-risk texts are not flagged as such without fine-tuning. Moreover, classic techniques such as BLEU and ROUGE are shown to be incapable of assessing the amount of purposeful information in a text. Finally, a sanitisation tool based on reteLLMe demonstrates superior performance to a state-of-the-art sanitiser, with better results on up to 90% of texts.
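The limitation noted for BLEU and ROUGE can be made concrete with a minimal sketch, assuming the Python nltk and rouge-score packages; the hotel-review texts below are invented for illustration and are not taken from the paper's dataset. A candidate that copies the privacy-revealing sentence verbatim while dropping the review's actual purpose scores higher on both metrics than a paraphrase that preserves the purpose, because the metrics reward surface n-gram overlap rather than purposeful content.

```python
# Minimal sketch (not from the paper): why n-gram overlap metrics such as
# BLEU and ROUGE cannot measure how much *purposeful* information a
# sanitised text retains. Requires the `nltk` and `rouge-score` packages.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

# Invented hotel review; the second sentence reveals location and family status.
original = ("The breakfast buffet was excellent and the staff were friendly. "
            "I stayed here with my two kids during the Barcelona marathon.")

# Sanitised paraphrase: keeps the review's purpose (judging breakfast and
# staff) and drops the revealing details -- but shares few n-grams.
paraphrase = ("Great morning buffet and very welcoming personnel. "
              "A pleasant family stay overall.")

# Degenerate "sanitisation": copies the revealing sentence verbatim and
# drops the purposeful evaluation -- yet shares many n-grams.
truncation = "I stayed here with my two kids during the Barcelona marathon."

smooth = SmoothingFunction().method1  # avoid zero BLEU on short texts
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

for name, candidate in [("paraphrase", paraphrase), ("truncation", truncation)]:
    bleu = sentence_bleu([original.split()], candidate.split(),
                         smoothing_function=smooth)
    rouge_l = scorer.score(original, candidate)["rougeL"].fmeasure
    print(f"{name:10s}  BLEU={bleu:.3f}  ROUGE-L={rouge_l:.3f}")
# The truncation scores higher on both metrics even though it loses the
# review's purpose and leaks the sensitive detail; the paraphrase scores
# lower despite being the better sanitisation.
```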

Dates and versions

hal-04684512, version 1 (02-09-2024)

Identifiers

  • HAL Id: hal-04684512, version 1

Cite

Mariem Brahem, Jasmine Watissee, Cédric Eichler, Adrien Boiret, Nicolas Anciaux, et al. reteLLMe: Design Rules for using Large Language Models to Protect the Privacy of Individuals in their Textual Contributions. DPM 2024 - International Workshop on Data Privacy Management @ ESORICS, Sep 2024, Barcelona, Spain. ⟨hal-04684512⟩