On October 3, the Federal Trade Commission (“FTC”) released a blog post titled Consumers Are Voicing Concerns About AI, which discusses concerns about artificial intelligence (“AI”) that consumers have submitted through the FTC’s Consumer Sentinel Network, as well as priority areas the agency is watching. Although the FTC acknowledged in the post that it did not investigate whether the cited concerns corresponded to actual AI applications and practices, it found that the concerns fall into three general categories:
- Issues concerning how AI is built. The FTC indicated that the large amounts of data required to train AI models raise consumer protection and competition concerns. The FTC is also focused on the use of voice recordings and web-scraped data to train AI models.
- Issues concerning how AI works and interacts with users. The post flags the potential for bias, inaccuracies, and “hallucinations” (i.e., false information) as concerns, as well as AI-powered customer service bots and the difficulty of reaching a human to resolve customer service complaints.
- Issues concerning how AI is applied in the real world. The post states that it is becoming more difficult to distinguish human-generated content from AI-generated content, and that the FTC is focused on the use of AI systems to scam or defraud consumers (e.g., phishing emails drafted with generative AI, which are harder to spot).
Separately, the FTC hosted a virtual roundtable on October 4 that focused on generative AI and the creative economy, summarized here.