For years, cigarette packages and alcohol bottles have prominently carried written and graphic warnings about the harmful effects of those products. Social media addiction, by contrast, is a newer health concern that is garnering significant attention.
In 2024, US Surgeon General Vivek Murthy called for nationwide social media warning labels in response to the growing view that social media platforms cause harms resulting from excessive platform use, disproportionately affecting children and young adults. Some states, including California, Minnesota, and New York, have responded by requiring social media companies to issue warnings on their products.
While these state warning-label laws don’t afford consumers private rights of action, opportunities for consumers to seek protection and recovery through class action litigation are rapidly evolving given the addiction-type risks that social media poses.
Key State Statutes
Several states have enacted laws requiring social media platforms to display conspicuous warnings about mental health risks and addictive design features similar to the warning labels contemplated by the surgeon general. These laws vary in their coverage, with some applying only to minors while others extend to all users. They also vary in their timing requirements, which range from a single notice at login to recurring alerts based on cumulative screen time.
California enacted a landmark statute in October 2025 that requires social media companies to use escalating, time-based warnings for users. The law mandates these companies provide daily notices upon login and longer alerts after extended use, designed to intervene as usage increases.
Minnesota adopted similar requirements focused on mental-health warnings and usage timers that emphasize screen-time awareness; they take effect July 1.
New York followed with legislation requiring labels on platforms that deploy features such as infinite scroll, a design pattern in which a platform dynamically loads new content as the user scrolls down, and autoplay, which automatically starts the next piece of content without user intervention. New York’s law focuses on design-based disclosures rather than time-based triggers.
Beyond these enacted statutes, additional efforts in states such as Texas signal a shift toward holding social media companies accountable by strengthening the regulatory framework governing platform design and disclosure practices. Texas is considering a bill that would require platforms to display warnings about addictive features and mental-health risks.
This expanding legislative momentum suggests that warning labels may become a standard component of state-level social media regulation.
A Conceptual Shift
Warning-label laws shift accountability onto social media companies rather than individual users, reflecting growing recognition that platforms deploy inherently addictive design features, including autoplay, infinite scroll, push notifications, “likes,” and algorithmic personalization.
Such features are engineered to maximize engagement by “creat[ing] reward loops that are especially difficult for adolescents to ignore,” according to the Petrie-Flom Center. Internal documents and whistleblower disclosures have galvanized public and legislative attention by revealing that social media companies understood the psychological risks their features posed, yet prioritized engagement metrics over user well-being.
Warning-label statutes targeting product characteristics are emerging as a powerful consumer-protection tool. The laws represent a pivotal step in preventing the dangers of unregulated social media design.
Changing Litigation Landscape
Although these state statutes don’t provide for private rights of action, consumers may still rely on warning-label laws to seek remedies under state Unfair and Deceptive Acts and Practices statutes. Consumers can pursue these claims by alleging the platforms fail to disclose addictive design features and associated mental health risks—despite the platforms’ documented knowledge of these harms. Consumers also may pursue relief under products liability theories.
Private actions like these are already being litigated across the country. One notable example is the ongoing In re: Social Media Adolescent Addiction/Personal Injury Products Liability Litigation.
This multidistrict litigation consolidates thousands of lawsuits against Meta Platforms Inc., TikTok Inc., Snap Inc., and Google LLC, alleging that the platforms are defectively (and even intentionally) designed to addict young users. In November 2023, the court allowed the case to proceed, rejecting defendants’ argument that social media companies are shielded from liability under the First Amendment and Section 230 due to their status as “distributors” of speech, rather than publishers.
Despite potential obstacles to private litigation, such as arguments arising from Section 230, courts increasingly recognize that consumers may pursue protection through litigation. Recent decisions highlight a growing judicial consensus that platform design features are distinct from user-generated content.
For example, a New Jersey Superior Court judge rejected TikTok’s argument that it merely provides a neutral forum, stating “[h]ere, the harm arises from TikTok’s design, not by what users view.” Similarly, a California Superior Court judge rejected arguments that expert testimony about the impact of social media on young users should be excluded under Section 230, as the plaintiffs refrained from seeking “to hold the provider liable for allowing that content to exist.”
This litigation landscape provides important context for warning-label laws, which codify legislative findings that engagement-maximizing features can be deceptive and harmful, broadening consumer rights and protections.
Outlook
The growing number of state statutes requiring social media companies to formally identify potentially harmful features on their platforms is shaping public policy, guiding regulatory action, and informing judicial interpretation—all of which serve to increase consumer protection online.
As warning-label requirements spread to other states, consumers will have more opportunities to protect themselves and their children, and to pursue relief for harms caused by social media features.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Carol C. Villegas is a partner in the New York office of Labaton Keller Sucharow, where she leads one of the firm’s securities litigation teams.
Garrity A. Kuester is an associate at Labaton Keller Sucharow.