WASHINGTON: The Federal Trade Commission has opened an investigation into OpenAI, the artificial intelligence startup that makes ChatGPT, over whether the chatbot has harmed consumers through its collection of data and its publication of false information about individuals. In a 20-page letter sent to the San Francisco company this week, the agency said it was also looking into OpenAI’s security practices.
The FTC asked the company dozens of questions in its letter, including how the startup trains its AI models and treats personal data.
The FTC’s investigation poses the first major regulatory threat to OpenAI. Sam Altman, the startup’s co-founder, testified before Congress in May and said he welcomed AI legislation to oversee the fast-growing industry, which is under scrutiny over the technology’s potential to eliminate jobs and spread disinformation. OpenAI did not respond to a request for comment.
When OpenAI first released ChatGPT in November, it instantly captured the public imagination with its ability to answer questions, write poetry and riff on almost any topic tossed its way. But the technology can also blend fact with fiction and even make up information, a phenomenon that scientists call “hallucination”. One of the FTC’s questions concerns the steps OpenAI has taken to address the potential for its products to “generate statements about real individuals that are false, misleading, or disparaging”.