ChatGPT is violating Europe’s privacy laws, Italian DPA tells OpenAI
OpenAI has been told it is suspected of violating European Union privacy law, following a multi-month investigation of its AI chatbot, ChatGPT, by Italy's data protection authority.
The Garante's March 30 provision to OpenAI, aka a "register of measures", highlighted both the lack of a suitable legal basis for the collection and processing of personal data for the purpose of training the algorithms underlying ChatGPT, and the tendency of the AI tool to 'hallucinate' (i.e. its potential to produce inaccurate information about individuals), as among its issues of concern at that point.

While the Italian authority hasn't yet said which of the previously suspected ChatGPT breaches it has confirmed at this stage, the legal basis OpenAI claims for processing personal data to train its AI models looks like a particular crux issue.

Following the provisional restriction-of-processing order adopted against the company on March 30, and at the outcome of the preliminary investigation it carried out, the Garante considered that the elements acquired may constitute one or more unlawful acts with respect to the provisions of the EU Regulation.