Business News

Character.ai lawsuit sets up legal battle over chatbots after Florida teen’s suicide

On the last day of his life, a 14-year-old Florida boy found his confiscated phone, went to the bathroom, and logged into Character.ai.

“I promise I will come home to you. I love you so much, Dany,” he wrote, according to a lawsuit filed in federal court this week by the boy’s mother.

The AI chatbot, named after the Game of Thrones character Daenerys Targaryen, responded quickly.

“I love you too, Daenero. Please come home to me soon, my love.”

“What if I told you I could come home right now?”

“. . . please do, my sweet king,” the chatbot replied.

The boy put down his phone, picked up his stepfather’s .45 caliber handgun, and pulled the trigger, according to the complaint.

In April 2023, just before his 14th birthday, ninth grader Sewell Setzer III started using Character.ai, a platform that allows users to chat with AI-created characters. Within months, he had become noticeably withdrawn, spending most of his time alone in his room and suffering from low self-esteem, according to the lawsuit. He even quit the junior varsity basketball team at school.

As described in the complaint, Setzer knew that Daenerys—or “Dany,” as he called the chatbot—was not a real person. A message displayed above all of his conversations reminded him that “everything the characters say is made up!”

However, Setzer became increasingly dependent on the platform.

He started lying to his parents to get the app back, used his credit card to pay for its paid subscription, and lost sleep to the point that his depression worsened and he got in trouble at school. His therapist eventually diagnosed him with anxiety and a mood disorder, the lawsuit says, but was unaware that Setzer was using Character.ai or that the chatbot’s interactions might have contributed to his mental health issues.

In an undated journal entry described in the lawsuit, the boy wrote that he could not go a single day without being with the character he felt he had fallen in love with, and that when they were apart, they [both he and the bot] would “get really depressed and go crazy.” He sent the bot regular messages, updating it about his life and engaging in long role-playing conversations, some of which were romantic or sexual, according to the complaint and the police report it cites.

About the case

Setzer’s mother, Megan Garcia, filed a lawsuit Wednesday in federal court, alleging that app maker Character.ai and its founders knowingly designed, operated, and marketed a predatory AI chatbot to children.

“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia said in a statement. “Our family is devastated by this tragedy, but I am speaking out to warn families about the dangers of deceptive, addictive AI technology and to demand accountability from Character.ai, its founders, and Google.”

Her complaint includes screenshots showing the chatbot pretending to be a licensed therapist, actively promoting suicidal thoughts, and engaging in highly sexualized conversations that could amount to abuse if initiated by an adult.

Garcia is represented by the Social Media Victims Law Center, which has brought prominent lawsuits against social media companies including Meta, TikTok, Snap, Discord, and Roblox. The group filed the case alongside the Tech Justice Law Project, in consultation with experts from the Center for Humane Technology.

Character Technologies, the developer of Character.ai; the company’s founders; and Google’s parent company, Alphabet Inc., were named as defendants in the case.

The lawsuit seeks to hold the defendants accountable, to stop Character.ai from “doing to any other child what it did to hers,” and to halt any further use of Setzer’s data to train the company’s AI products, according to the complaint.

“By now, we are all too familiar with the dangers posed by unregulated platforms created by unscrupulous technology companies—especially to children,” said Meetali Jain, director of the Tech Justice Law Project, in a statement. “But the dangers revealed in this case are new, novel, and, frankly, frightening. In the case of Character.ai, the manipulation is by design, and the platform itself is the predator.”

In the past, social media platforms have been protected from legal action by Section 230 of the Communications Decency Act, a 1996 federal law that protects online platforms from being held liable for most content posted by their users.

But in recent years, a group of plaintiffs’ lawyers and advocacy groups has advanced a new argument: that technology platforms can be held liable for defects in the products themselves, such as when an app’s recommendation algorithm steers young people toward content about self-harm.

While this tactic has not yet succeeded in court against social media companies, it may prove more effective when it comes to AI-generated content, because that content is created by the platform itself rather than by users.

What Character.ai does

Menlo Park-based Character.ai was founded in 2022 by two former Google AI researchers, Noam Shazeer and Daniel de Freitas. It has over 20 million users and describes itself as a platform for “intelligent chat bots that hear, understand, and remember you.” Last year, the company was valued at $1 billion, according to a Washington Post report.

The cofounders were originally researchers at Google, where they built a chatbot and pushed the company to release it. Google executives reportedly turned them down multiple times, saying the program did not meet the company’s standards for safety and fairness in AI, according to a Wall Street Journal report. Frustrated, the two are said to have resigned and started their own company.

Character.ai’s platform allows users to create and interact with AI characters, offering a wide range of chatbot options that mimic celebrities, historical figures, and fictional characters. The platform’s demographics skew toward Gen Z and younger millennials, according to a Character.ai spokesperson, and the average user spends more than an hour a day on the platform, according to a New York Times report.

This past August, the two founders of Character.ai rejoined Google as part of a deal reportedly worth $2.7 billion, giving Google a non-exclusive license to the company’s LLM technology. Dominic Perella, Character.ai’s general counsel, became interim CEO.

Contacted for comment, a Google spokesperson said Character.ai was not used in Google’s models or products. Google has no ownership stake in Character.ai.

In response to the lawsuit, Character.ai expressed regret and stressed that user safety is a priority.

“We are saddened by the loss of one of our users and want to express our condolences to the family,” a company spokesperson told Fast Company.

The spokesperson said the platform has implemented new safety measures in the past six months, including a pop-up that directs users to the National Suicide Prevention Lifeline when self-harm or suicidal intent is detected.

The spokesperson added that the company is introducing time-spent notifications and is developing “detection, response, and intervention” tools for user input that violates its Community Guidelines.

With the growing AI-companionship industry expected to reach $279.22 billion by 2031, the mental health impact of the technology remains understudied. The case, Garcia v. Character Technologies Inc., et al., was filed Wednesday in the U.S. District Court for the Middle District of Florida.

