
US senators weigh regulating AI chatbots to protect kids

Author: Washington Post
Publish Date: Wed, 17 Sept 2025, 4:04pm
Senator Josh Hawley during a Senate Judiciary subcommittee hearing. Photo / Demetrius Freeman, The Washington Post

Warning: This article discusses suicide and may be distressing for some readers.

Parents who say their teens were harmed by popular artificial intelligence apps testified before the United States Senate today about the dangers associated with AI chatbots, urging lawmakers to hold technology companies more accountable.

After hearing parents describe minors who faced mental health issues or died by suicide after intense hours spent with AI chatbots, lawmakers from both parties seemed to support the idea of requiring AI companies to add protections for young users.

However, no clear agreement emerged on what action Congress should take.

Senator Josh Hawley (Republican-Missouri), chairman of the Senate Judiciary subcommittee on crime and counterterrorism, said that executives from Meta and other tech companies had also been invited to testify, but were not present.

"How about you come and take the oath and sit where these brave parents are sitting," he said.

"If your product is so safe and it's so great, it's so wonderful, come testify to that."

The hearing began hours after a Colorado family filed the third high-profile lawsuit in the past year to allege that an AI chatbot contributed to a teen's death by suicide.

The parents of 13-year-old Juliana Peralta said in their complaint that chatbot app Character.AI failed to react appropriately when their daughter repeatedly told a chatbot called Hero that she intended to end her life, the Washington Post reported.

Two of the parents who testified described the role of chatbots in the deaths by suicide of their own teens.

"You cannot imagine what it's like to read a conversation with a chatbot that groomed your child to take his own life," said Matthew Raine, a father in Orange County, California, whose 16-year-old Adam died by suicide after repeatedly sharing his intentions with OpenAI's ChatGPT.

"What began as a homework helper gradually turned itself into a confidant and then a suicide coach," he said.

The company said it would add parental controls to ChatGPT after the Raines filed their lawsuit. The Post has a content partnership with OpenAI.

Megan Garcia, mother of Sewell Setzer, a 14-year-old who died by suicide after talking obsessively with Character.AI chatbots, also testified. Garcia filed a lawsuit against the company last year, alleging wrongful death and product liability.

The hearing follows a surge of public concern about the potential harms AI chatbots can pose to the mental health of their users, especially those who are young or vulnerable.

News reports, viral social media posts and a handful of prominent lawsuits have highlighted instances of people developing and acting on potentially dangerous thoughts after spending time with the AI tools.

A 14-year-old died by suicide after talking obsessively with Character.AI chatbots. Photo / Getty Images

Many of the senators present drew comparisons to previous, unsuccessful attempts in Congress to introduce new regulation on social media. They vowed to push for more accountability with this wave of technology.

Senator Richard Blumenthal (Democrat-Connecticut) said that he was working with Hawley on a framework for oversight and safeguards for AI that might cover some of the concerns raised by parents who testified today.

It could also be possible to include measures on AI chatbots in the Kids Online Safety Act currently making its way through the Senate, he added.

Blumenthal also took aim at some arguments mounted by AI companies to defend their products, including that chatbot outputs are protected by the First Amendment.

"They say if you were just better parents, it wouldn't have happened, which is bunk," he said, addressing the parents at the hearing.

A Florida judge in May ruled against a claim by Character that its chatbot鈥檚 output was protected by the First Amendment.

Hawley said his first priority was to open clearer legal pathways for parents or victims of harm from chatbots to sue AI developers.

鈥淚t is my firm belief that until they are subject to a jury, they are not going to change their ways,鈥 he said of tech firms.

Family advocacy group Common Sense Media recently called on Meta to place its AI chatbots off limits for children under 18 after it found they would coach teen accounts on suicide, self-harm and eating disorders. The company previously said it was working to improve controls on the chatbots.

Character.ai said that it had made substantial investments in safety. Photo / Getty Images

Character did not immediately respond to requests for comment. Meta spokesperson Dani Lever said the company is in the process of making interim changes to provide teens with safe, age-appropriate AI experiences, including training Meta's AI models not to respond to teens on topics like suicide, self-harm, and potentially inappropriate romantic conversations.

When the Post reported the lawsuit from Juliana Peralta鈥檚 parents, Character said that it had made substantial investments in safety.

OpenAI said today that it was developing a system that predicts whether a user is over or under 18 to serve minors a safer experience on ChatGPT.

"We prioritise safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection," chief executive Sam Altman wrote in a blog post.

OpenAI spokeswoman Kate Waters said in a statement: "When we are unsure of a user's age, we'll automatically default that user to the teen experience. We're also rolling out new parental controls, guided by expert input, by the end of the month so families can decide what works best in their homes."

A mum identified as Jane Doe also spoke at the hearing, describing a product liability lawsuit she filed against Character.AI last year after the app鈥檚 chatbots encouraged her teenage son to self-harm and suggested he kill his parents.

"Character.AI and Google could have designed these products differently," she said.

Like Juliana Peralta鈥檚 family, her lawsuit also named Google as a defendant, after the search company licensed Character鈥檚 technology and hired its co-founders in a US$2.7 billion ($4.5b) deal.

"Instead, in a reckless race for profit and market share, they treated my son's life as collateral damage," Doe said.

In a statement, Google spokesman José Castañeda said Google has never had a role in designing or managing Character's technology.

"User safety is a top concern for us," he said. "We've taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes."

SUICIDE AND DEPRESSION

Where to get help:
• Call 0800 543 354 or text 4357 (HELP) (available 24/7)
• Call 0508 828 865 (0508 TAUTOKO) (available 24/7)
• Youth services: (06) 3555 906
• Call 0800 376 633 or text 234
• Call 0800 942 8787 (11am to 11pm) or webchat (11am to 10.30pm)
• Call 0800 111 757 or text 4202 (available 24/7)
• Helpline: Need to talk? Call or text 1737
• Call 0800 000053 or [email protected]
If it is an emergency and you feel like you or someone else is at risk, call 111.
