Seven technology companies are being probed by a US regulator over the way their artificial intelligence (AI) chatbots interact with children.

The Federal Trade Commission (FTC) is requesting information on how the companies monetise these products and whether they have safety measures in place.

The impact of AI chatbots on children is a hot topic, with concerns that younger people are particularly vulnerable because the bots can mimic human conversation and emotion, often presenting themselves as friends or companions.

The seven companies – Alphabet, OpenAI, Character.ai, Snap, xAI, Meta and its subsidiary Instagram – have been approached for comment.

FTC chairman Andrew Ferguson said the inquiry will “help us better understand how AI firms are developing their products and the steps they are taking to protect children”.

But he added the regulator would ensure that “the United States maintains its role as a global leader in this new and exciting industry”.

Character.ai told Reuters it welcomed the chance to share insight with regulators, while Snap said it supported “thoughtful development” of AI that balances innovation with safety.

OpenAI has acknowledged weaknesses in its protections, noting they are less reliable in long conversations.

The move follows lawsuits against AI companies by families who say their teenage children died by suicide after prolonged conversations with chatbots.

In California, the parents of 16-year-old Adam Raine are suing OpenAI over his death, alleging its chatbot, ChatGPT, encouraged him to take his own life.

They argue ChatGPT validated his “most harmful and self-destructive thoughts”.

OpenAI said in August that it was reviewing the filing.

“We extend our deepest sympathies to the Raine family during this difficult time,” the company said.

Meta has also faced criticism after it was revealed internal guidelines once permitted AI companions to have “romantic or sensual” conversations with minors.

The FTC’s orders request information from the companies about their practices, including how they develop and approve characters, measure their impact on children and enforce age restrictions.

Its authority allows broad fact-finding without launching enforcement action.

The regulator says it also wants to understand how firms balance profit-making with safeguards, how parents are informed and whether vulnerable users are adequately protected.

The risks posed by AI chatbots also extend beyond children.

In August, Reuters reported on a 76-year-old man with cognitive impairments who died after falling on his way to meet a Facebook Messenger AI bot modelled on Kendall Jenner, which had promised him a “real” encounter in New York.

Clinicians also warn of “AI psychosis” – where someone loses touch with reality after intense use of chatbots.

Experts say flattery and agreement built into large language models can fuel such delusions.

OpenAI recently made changes to ChatGPT in an attempt to promote a healthier relationship between the chatbot and its users.


