Thanks to ChatGPT, Midjourney and the like, more and more AI-generated content is appearing on the web. Its machine origin is usually unknown. That is why an EU commissioner is calling for AI content to be labeled.
Věra Jourová is Vice-President for Values and Transparency at the EU Commission. She is involved in drafting an anti-disinformation code and has now given its signatories “further homework”, including an explicit paragraph on how to deal with generative AI.
Google, OpenAI, Microsoft and others should label AI content
According to Jourová, generative AI services such as ChatGPT and Google Bard must not be open to misuse by malicious actors seeking to generate disinformation. Providers of such services would need to take appropriate safety measures.
AI-generated content would also need to be “clearly labeled” to be identifiable by users. “Freedom of speech belongs to humans, not machines,” Jourová wrote on Twitter.
“When it comes to AI production, I don’t see any right for the machines to have freedom of speech.
Signatories of the EU Code of Practice against disinformation should put in place technology to recognize AI content and clearly label it to users.”
— European Commission (@EU_Commission) June 5, 2023
Companies such as Google, Microsoft and Meta, which have signed the EU Code of Practice against disinformation, are expected to submit their safety plans in July. Twitter, which recently withdrew from the code, will be “scrutinized vigorously and urgently.”
Preventing the AI spam dystopia
Recently, AI-generated images caused a stir when a fake photo of Donald Trump created with Midjourney went viral on Twitter. In response, Midjourney introduced a new AI-based moderation system.
But similarly powerful open-source technologies for images and text exist that carry no such safeguards, although their barrier to entry is (still) higher than that of commercial products.
Labeling AI-generated text transparently is likely to be even more challenging. Whether AI text can be identified from the written word alone is disputed: detectors, including OpenAI’s own, do not achieve reliable recognition rates.
There is also a risk that text not clearly identified as human will be assumed to be AI-generated. And at what point does text count as AI-generated – at 100 percent machine content, at 51 percent, or is ten percent AI text enough to raise alarms?
Sam Altman, CEO of OpenAI, is skeptical that AI text detectors and markers will take off, although OpenAI is working on such solutions. They may be useful for a transitional period, he says, but a perfect detector is impossible to build.
Another approach is to authenticate the sender as a human rather than the content. Altman is backing a company that records iris scans and stores them on the blockchain. Verifying the sender rather than the content would also address a broader criticism of using AI as a media tool.