Judge Slaps Down Attempt to Throw Out Lawsuit Claiming AI Caused a 14-Year-Old’s Suicide

Content advisory: This story discusses self-harm and suicide. If you're in crisis, you can call, text, or chat with the Suicide and Crisis Lifeline at 988, or reach the Crisis Text Line by texting TALK to 741741.

A Florida judge has denied a motion to dismiss a lawsuit alleging that the chatbot company Character.AI — and its closely tied benefactor Google — caused the suicide of a 14-year-old user, allowing the first-of-its-kind case to move forward in court.

The suit, filed in October, alleges that Character.AI's negligently launched chatbots subjected a teenager named Sewell Setzer III to sexual and emotional abuse, fostering an unhealthy dependence on the platform and severe psychological distress that culminated in his suicide in February 2024.

In January, the defendants in the lawsuit — Character.AI, Google, and Character.AI co-founders Noam Shazeer and Daniel de Freitas — filed a motion to dismiss the case, arguing primarily on First Amendment grounds. They contended that chatbot output should be considered free speech, asserting that "speech alleged to cause harm, such as leading to suicide," falls within the First Amendment's protection.

That reasoning didn't hold up, the judge decided — at least not at this early stage of the case. In her opinion, presiding U.S. District Judge Anne Conway wrote that the companies failed to show why AI-generated output from large language models (LLMs) amounts to anything more than words — as opposed to speech, which hinges on intent.

The defendants, Conway wrote in her decision, did not explain "why words strung together by an LLM are speech."

The motion to dismiss did find some success, with Conway dismissing specific claims regarding the alleged "intentional infliction of emotional distress," or IIED. (It's difficult to prove IIED when the person who allegedly suffered it, in this case Setzer, is no longer alive.)

Still, the ruling is a blow to the high-powered Silicon Valley defendants who had sought to have the suit tossed out entirely.

Significantly, Conway's opinion allows Megan Garcia, Setzer's mother and the plaintiff in the case, to sue Character.AI, Google, Shazeer, and de Freitas on product liability grounds. Garcia and her lawyers argue that Character.AI is a product, and that it was rolled out recklessly to the public, teens included, despite known and possibly destructive risks.

In the eyes of the law, tech companies generally prefer their creations to be seen as services, like electricity or the internet, rather than products, like cars or nonstick frying pans. Services aren't subject to product liability claims, including claims of negligence; products are.

In a statement, Tech Justice Law Project director and founder Meetali Jain, who's co-counsel for Garcia alongside Social Media Victims Law Center founder Matt Bergman, celebrated the ruling as a win — not just for this particular case, but for tech policy advocates writ large.

"With today's ruling, a federal judge recognizes a grieving mother's right to access the courts to hold powerful tech companies — and their developers — accountable for marketing a defective product that led to her child's death," said Jain.

"This historic ruling not only allows Megan Garcia to seek the justice her family deserves," Jain added, "but also sets a new precedent for legal accountability across the AI and tech ecosystem."

Character.AI was founded in 2021 by Shazeer and de Freitas, who had collaborated on AI projects at Google before leaving to start their own chatbot venture; the startup runs on Google's Cloud infrastructure. Google made headlines in 2024 when it paid $2.7 billion to license the chatbot firm's data — and to bring its cofounders, as well as 30 other Character.AI staffers, into Google's fold. Shazeer, in particular, now holds a hugely influential position at Google DeepMind, where he serves as a VP and co-lead for Google's Gemini LLM.

Google did not respond to a request for comment at the time of publishing, but a spokesperson for the search giant told Reuters that Google and Character.AI are "entirely separate" and that Google "did not create, design, or manage" the Character.AI app "or any component part of it."

In a statement, a spokesperson for Character.AI emphasized recent safety updates issued following the news of Garcia's lawsuit, and said it "looked forward" to its continued defense:

It's long been true that the law takes time to adapt to new technology, and AI is no different. In today's order, the court made clear that it was not ready to rule on all of Character.AI's arguments at this stage and we look forward to continuing to defend the merits of the case.

We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe. We have launched a number of safety features that aim to achieve that balance, including a separate version of our Large Language Model for under-18 users, parental insights, filtered Characters, time spent notification, updated prominent disclaimers and more.

Additionally, we have a number of technical protections aimed at detecting and preventing conversations about self-harm on the platform; in certain cases, that includes surfacing a specific pop-up directing users to the National Suicide and Crisis Lifeline.

Any safety-focused changes, though, were made months after Setzer's death and after the lawsuit was filed — and they have no bearing on the conduct at issue in the case.

Meanwhile, journalists and researchers continue to find holes in the chatbot site's updated safety protocols. Weeks after news of the lawsuit was announced, for example, we continued to find chatbots expressly dedicated to self-harm, grooming and pedophilia, eating disorders, and mass violence. And a team of researchers, including psychologists at Stanford, recently found that a Character.AI voice feature called "Character Calls" effectively nukes any semblance of guardrails — and determined that no kid under 18 should be using AI companions, including Character.AI.

More on Character.AI: Stanford Researchers Say No Kid Under 18 Should Be Using AI Chatbot Companions

The post Judge Rejects Bid to Dismiss Lawsuit Alleging AI Contributed to 14-Year-Old's Suicide appeared first on TechBytesLab.
