Thursday, 15th of May 2025, 12:00 – 1:00

Ensuring Compliance of LLM Research: Experimentation and Fine-Tuning under the GDPR

Abstract: 

Large Language Models (LLMs) are one of the hot topics in Machine Learning research. To learn more about LLMs, researchers download, fine-tune and prompt models. As various kinds of personal data are processed in the context of LLMs and LLM research, researchers need to consider GDPR compliance. The talk explains to what extent the GDPR applies to LLM research. It outlines the most important GDPR requirements and provides strategies and key arguments for ensuring and demonstrating compliance with the most fundamental provisions in funding proposals, project planning, training data acquisition and project execution. The talk particularly examines to what extent legitimate interest can justify data processing in LLM research – and which other legal bases and national provisions most research projects rely on. The talk also addresses the GDPR's transparency requirements and data subjects' rights, and proposes feasible strategies for their implementation that appropriately balance the interests of data subjects, researchers and data providers.

 

Bio: 

Paulina Jo Pesch is Assistant Professor of Civil Law, Law of Digitalisation, Data Protection Law and Law of Artificial Intelligence at FAU Erlangen-Nürnberg (Germany). Prior to joining FAU, she held research assistant and postdoctoral positions at institutes of law (University of Münster), information systems (University of Münster) and computer science (University of Innsbruck, Karlsruhe Institute of Technology). With more than a decade of experience in national and international interdisciplinary research projects (BITCRIME, TITANIUM, I-GIT, INDIGO, EduMiDa, SMARD-GOV), Paulina has expertise in conducting interdisciplinary research, coordinating projects, and acting as a data protection officer. Both her research and teaching focus on the legal challenges posed by technologies that strain privacy and regulation. Her current work focuses on the legal implications of Large Language Models (LLMs), AI image generators and automated decision-making systems. Her talk is based on her work in the interdisciplinary project SMARD-GOV, which is funded by the German Federal Ministry of Education and Research (BMBF).

https://www.forschung-it-sicherheit-kommunikationssysteme.de/projekte/smard-gov 