Italian Data Protection Authority investigates new OpenAI model Sora

Garante is investigating how the Sora algorithm is trained, and how personal data is collected and processed.

The Italian Data Protection Authority has opened an investigation into the US company OpenAI and its new AI model, ‘Sora’, over personal data concerns.

According to OpenAI, Sora can produce “dynamic, realistic and imaginative scenes” from text instructions, which Garante per la protezione dei dati personali (Garante) says could have implications for the processing of users’ personal data in the European Union and in Italy.

Garante has asked the company to clarify within 20 days whether users can already access Sora, and whether it will be offered to users in the EU and Italy. Garante has also asked OpenAI to provide clarification on several other points, including:

  • how the algorithm is trained;
  • what data is collected and processed to train the algorithm, especially whether it is personal data;
  • whether particular categories of data are collected (such as religious or philosophical beliefs, political opinions, genetic data, health, and sexual life); and
  • which sources are used.

If Sora is or will be made public to users in the EU, the authority has asked OpenAI to state whether “the methods envisaged for informing users and non-users about data processing procedures and the legal bases for said processing comply with the European Regulation”.

ChatGPT breaching EU GDPR

At the end of January, Garante also announced that it has ‘available evidence’ that OpenAI’s ChatGPT was contravening provisions of the EU GDPR. The details of the findings have not been disclosed, but in the provision issued to OpenAI in March last year, the authority said that it suspected ChatGPT of breaching GDPR Articles 5, 6, 8, 13 and 25.

Italy also became the first western country to temporarily ban ChatGPT, over concerns that the app was collecting personal data unlawfully and lacked age verification for children.

An order with nine conditions was imposed on OpenAI in April 2023, which included requirements such as ensuring human oversight of the AI system, implementing robust security protocols, disclosing relevant information about the AI system to users, and establishing clear channels for user feedback and redress.

OpenAI addressed some of the issues quickly and was able to resume the ChatGPT service in Italy at the end of April.