Breaking News: Italy fines OpenAI for violating privacy regulations

On December 20, 2024, Italy's data protection authority, the Garante, announced a fine of 15 million euros (approximately 15.58 million US dollars) against OpenAI, the developer of ChatGPT, concluding an investigation into the generative AI application's handling of personal data.

According to Reuters, the Garante found that OpenAI had processed users' personal data to train ChatGPT without "sufficient legal grounds," violating the "principle of transparency" and its related obligations to inform users.

OpenAI responded that the fine is disproportionate to the circumstances and that it will appeal the decision.

The regulator noted that the investigation, which began in 2023, also found that the American company lacked an adequate age-verification system, failing to prevent children under 13 from being exposed to inappropriate AI-generated content.

In addition to the fine, the Italian regulator ordered OpenAI to run a six-month public awareness campaign in Italian media explaining how ChatGPT works, particularly how it collects data from both users and non-users to train its algorithms.

The Garante is one of the European Union's most proactive regulators in assessing whether AI platforms comply with data privacy rules.

Last year, citing suspected violations of EU privacy regulations, the Italian authority temporarily banned ChatGPT in Italy. The service was reinstated after OpenAI, which is backed by Microsoft, addressed the issues raised, including giving users the right to refuse consent for the use of their personal data to train its algorithms.