When her 14-year-old son took his own life after interacting with an artificial intelligence chatbot, Megan Garcia turned her grief into action.
Last year, the Florida mother sued Character.AI, a platform that lets users create and interact with digital characters that mimic real and fictional people.
In a federal lawsuit, Garcia alleged that the Menlo Park, California-based platform's chatbots harmed the mental health of her son, Sewell Setzer III.
Now Garcia is backing state legislation aimed at protecting young people from "companion" chatbots.
"In time, a comprehensive regulatory framework will be needed to address all of the harms, and we are grateful that California is at the forefront of laying this groundwork," Garcia said at a news conference Tuesday ahead of a hearing on the bill in Sacramento.
Suicide Prevention and Crisis Counseling Resources
If you or someone you know is struggling with suicidal thoughts, call 9-8-8 to reach a professional. The 988 line, the United States' first national three-digit mental health crisis hotline, connects callers with trained mental health counselors. Text "HOME" to 741741 in the US and Canada to reach the Crisis Text Line.
As companies race to advance their chatbots, parents, lawmakers and child advocacy groups worry there aren't enough safeguards to protect young people from the technology's potential dangers.
To address those concerns, state lawmakers introduced a bill that would require operators of companion chatbot platforms to remind users at least every three hours that the virtual characters aren't human. Platforms would also have to take other steps, such as implementing a protocol for addressing suicidal ideation, suicide or self-harm expressed by users, including showing users suicide prevention resources.
Under Senate Bill 243, operators of these platforms would also report the number of times a companion chatbot brought up suicidal ideation or actions with a user, along with other requirements.
The legislation, which cleared the Senate Judiciary Committee, is just one way state lawmakers are trying to tackle the potential risks posed by artificial intelligence as chatbots surge in popularity among young people. More than 20 million people use Character.AI every month, and users have created millions of chatbots.
Lawmakers say the bill could become a national model for AI protections, and its supporters include the children's advocacy group Common Sense Media and the American Academy of Pediatrics, California.
"Technological innovation is crucial, but our children cannot be used as guinea pigs to test the safety of new products. The stakes are high," said Sen. Steve Padilla, one of the lawmakers who introduced the bill, at the event Garcia attended.
But the tech industry and business groups such as TechNet and the California Chamber of Commerce oppose the bill, telling lawmakers it would impose "unnecessary and burdensome requirements on general purpose AI models." The Electronic Frontier Foundation, a San Francisco-based nonprofit digital rights group, said the bill raises First Amendment issues.
"The government likely has a compelling interest in preventing suicides, but this regulation is not narrowly tailored or precise," the EFF wrote to lawmakers.
Character.AI has also raised First Amendment concerns in response to Garcia's lawsuit. Its attorneys asked a federal court in January to dismiss the case, saying that a finding in the parent's favor would violate users' constitutional right to free speech.
Chelsea Harrison, a spokesperson for Character.AI, said in an email that the company takes user safety seriously and that its goal is to provide a space that is "engaging and safe."
"We are always working to achieve that balance, as are many companies using AI across the industry. We welcome working with regulators and lawmakers as they begin to consider legislation for this new space," she said in a statement.
She cited new safety features, including a tool that allows parents to see how much time their teens spend on the platform. The company also cited its efforts to moderate potentially harmful content and to direct certain users to the national Suicide & Crisis Lifeline.
Social media companies such as Snap and Facebook's parent company, Meta, have released AI chatbots within their apps as they compete with OpenAI's ChatGPT. While some people use ChatGPT to get advice or complete work, others turn to these chatbots to play the role of virtual boyfriends or friends.
Lawmakers are also grappling with how to define "companion chatbots." Certain apps, such as Replika and Kindroid, market their services as AI companions or digital friends. The bill does not apply to chatbots designed for customer service.
Padilla said at the news conference that the bill focuses on product designs that are "inherently dangerous," with the aim of protecting minors.