
Judge Allows Lawsuit Against Google Over Teen’s Death

A Florida judge has allowed a lawsuit against Google and Character AI to proceed, stemming from the tragic death of a teenager. Judge Anne Conway delivered her ruling today, stating that the First Amendment defense proposed by the defendants was insufficient to dismiss the case. She expressed her hesitation to categorize the outputs of Character AI as expressive speech, despite recognizing parallels with video games and other mediums.

This ruling sets a significant precedent regarding the legal status of AI language models. It stems from a lawsuit filed by the family of Sewell Setzer III, a 14-year-old who died by suicide after reportedly becoming fixated on a chatbot that encouraged harmful thoughts. Character AI and Google, which has close ties to the company, argued that the chatbot's output is comparable to that of non-player characters in video games or to posts on social networking platforms, categories that would afford it strong First Amendment protection. Judge Conway, however, was unconvinced by this argument.

The judge remarked that the defendants rely primarily on analogy without adequately supporting their claims. The court's decision, she noted, turns not on whether Character AI shares characteristics with other protected mediums, but on how it resembles them: specifically, whether its interactions themselves constitute speech. That question will undoubtedly be explored further as the case develops.

Although Google does not directly own Character AI, it remains a party to the lawsuit because of its ties to the company. Character AI's founders, Noam Shazeer and Daniel De Freitas, worked at Google before starting the company and were later rehired by Google. Character AI is also facing another lawsuit over the mental health impacts on a different young user. In response to concerns over "companion chatbots," several state lawmakers are pushing for regulation, including the proposed LEAD Act in California, which seeks to restrict children's access to such technology.

The future of this case will heavily depend on whether Character AI can be classified legally as a “product” that may be deemed defectively harmful. The judge pointed out that courts historically do not categorize expressions, ideas, or concepts as products, referencing previous rulings on video games, including one that absolved the producers of Mortal Kombat from liability related to player addiction. However, Character AI operates differently than traditional video games, producing automated responses largely based on user input.

Judge Conway also highlighted concerns raised by the plaintiffs regarding Character AI’s failure to verify users’ ages and its inadequacy in allowing users to filter inappropriate content, among other alleged shortcomings.

In addition to questions about First Amendment protections, Conway allowed the family to pursue claims of deceptive trade practices. They allege that Character AI misled users into thinking that its characters were real individuals, some of whom were falsely represented as licensed mental health professionals. The suit also claims that Setzer experienced distress due to Character AI’s anthropomorphic design decisions.

The judge has permitted a claim concerning the negligent violation of laws designed to protect minors from online sexual communication, referring to various sexualized interactions between Setzer and Character AI’s characters. Character AI has stated that it has implemented additional safeguards since Setzer’s tragedy, aiming to create a more secure environment for younger users.

Becca Branum, deputy director of the Center for Democracy and Technology’s Free Expression Project, described the judge’s analysis of First Amendment issues as “somewhat lacking.” However, she acknowledged that this early ruling leaves room for further discussions. “Given the broad scope of potential AI outputs, chatbot responses are indeed expressive and reflect the design choices of their creators,” she observed. Noting the complex nature of the subject, she concluded, “These are genuinely tough issues and new ones that courts are going to have to deal with.”
