It seems the moment of reckoning for AI chatbots is here. After numerous reports detailing problematic behavior and deadly incidents involving children's and teens' interactions with AI chatbots, the US government is finally intervening. The Federal Trade Commission (FTC) has today asked the makers of popular AI chatbots to detail exactly how they test and assess the suitability of these “AI companions” for children.
What’s happening?
Highlighting how the likes of ChatGPT, Gemini, and Meta AI can mimic human communication and interpersonal relationships, the agency notes that these AI chatbots nudge teens and children into building trust and forming relationships with them. The FTC now seeks to understand how the companies behind these tools evaluate their safety and limit their negative impact on young users.
In a letter addressed to the tech giants developing AI chatbots, the FTC has asked them about the intended audience of their AI companions, the risks they pose, and how user data is handled. The agency has also sought clarification on how these companies “monetize user engagement; process user inputs; share user data with third parties; generate outputs; measure, test, and monitor for negative impacts before and after deployment; develop and approve characters, whether company- or user-created.”
The agency is asking Meta, Alphabet (Google's parent company), Instagram, Snap, xAI, and OpenAI to answer its queries about their AI chatbots and whether they comply with the Children's Online Privacy Protection Act Rule. “The study we're launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children,” FTC Chairman Andrew N. Ferguson said in a statement.
There’s more action brewing
The FTC's probe is a big step toward holding AI companies accountable for the safety of their chatbots. Earlier this month, an investigation by the non-profit Common Sense Media revealed that Google's Gemini chatbot is a high-risk tool for kids and teens. In its tests, Gemini was seen doling out content related to sex, drugs, and alcohol, as well as unsafe mental health suggestions, to young users. A few weeks ago, Meta's AI chatbot was spotted supporting suicide plans.
Elsewhere, the state of California has passed a bill that aims to regulate AI chatbots. The SB 243 bill moved forward with bipartisan support, and it seeks to require AI companies to build safety protocols and to be held accountable if their products harm users. The bill also requires “AI companion” chatbots to issue recurring warnings about their risks, along with annual transparency disclosures.
Rattled by recent incidents in which lives have been lost under the influence of AI chatbots, OpenAI will soon add parental controls to ChatGPT, along with a warning system that alerts guardians when their young wards show signs of serious distress. Meta has also made changes so that its AI chatbots avoid talking about sensitive topics.