OpenAI is launching a tool that can detect images created by its text-to-image generator DALL-E 3, aiming to address concerns about AI-generated content.
(Reuters) – OpenAI is launching a tool that can detect images created by its text-to-image generator DALL-E 3, the Microsoft-backed startup said on Tuesday amid rising worries about the influence of AI-generated content in this year’s global elections.
The company said the tool correctly identified images created by DALL-E 3 about 98% of the time in internal testing and can handle common modifications such as compression, cropping and saturation changes with minimal impact.
The ChatGPT creator also plans to add tamper-resistant watermarking to mark digital content such as photos or audio with a signal that should be hard to remove.
As part of these efforts, OpenAI has also joined an industry group that includes Google, Microsoft and Adobe, which plans to provide a standard that would help trace the origin of different types of media.
In April, during the ongoing general election in India, fake videos of two Bollywood actors criticizing Prime Minister Narendra Modi went viral online.
AI-generated content and deepfakes are increasingly being used in elections in India and elsewhere in the world, including the U.S., Pakistan and Indonesia.
OpenAI said it is joining Microsoft in launching a $2 million “societal resilience” fund to support AI education.