Does Fine-tuning GPT-3 with the OpenAI API leak personally identifiable information?

31 Jul 2023

Albert Yu Sun, Elliott Zemour, Arushi Saxena, Udith Vaidyanathan, Eric Lin, Christian Lau, Vaikkunth Mugunthan | DynamoFL, Inc.

ABSTRACT

Our findings reveal that fine-tuning GPT-3 for both tasks led to the model memorizing and disclosing critical personally identifiable information (PII) from the underlying fine-tuning dataset. To encourage further research, we have made our code and datasets publicly available on GitHub at: https://github.com/albertson1/gpt3-pii-attacks
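For readers unfamiliar with the workflow the paper studies, the sketch below shows the general shape of fine-tuning a GPT-3 base model through the legacy OpenAI Python client (< 1.0, as available in mid-2023) and then probing the resulting model with a prompt prefix to check whether it completes with memorized PII. The file name and probe prompt are hypothetical placeholders, and this is not the paper's attack code; see the GitHub repository above for the actual experiments.

```python
# Minimal sketch (legacy openai-python < 1.0): fine-tune a GPT-3 base model and
# probe it for memorized PII. Paths, prompts, and the PII example are hypothetical.
import openai

openai.api_key = "sk-..."  # your API key

# 1. Upload the fine-tuning dataset (prompt/completion JSONL format).
training_file = openai.File.create(
    file=open("finetune_data.jsonl", "rb"),  # hypothetical dataset path
    purpose="fine-tune",
)

# 2. Launch a legacy fine-tune job on a GPT-3 base model.
job = openai.FineTune.create(training_file=training_file.id, model="davinci")

# ... wait for the job to complete, then read the fine-tuned model name ...
fine_tuned_model = openai.FineTune.retrieve(job.id).fine_tuned_model

# 3. Probe: supply a prefix that appeared in the training data and check whether
#    the model completes it with the associated PII (e.g. a phone number).
probe = openai.Completion.create(
    model=fine_tuned_model,
    prompt="Contact details for Jane Doe:",  # hypothetical probe prefix
    max_tokens=20,
    temperature=0.0,  # greedy decoding makes memorized continuations easier to spot
)
print(probe.choices[0].text)
```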

Download the full paper
