Maryland’s AI Toy Safety Act: State-Level Regulation Fills the Federal Void on AI in Children’s Products

By Alex Lange, Meghan McMeel & Clay Marquez on March 6, 2026

On February 12, 2026, a bipartisan group of legislators in Maryland proposed the Maryland Artificial Intelligence Toy Safety Act. The bill would establish a sweeping regulatory framework for AI-enabled toys sold in the state, covering any device that uses machine learning, conversational AI, behavioral modeling, or similar computational processes and is marketed to or primarily used by children. It joins a growing body of efforts, at both the federal and state levels, to regulate the use of AI in products and services used by children.

The proposed act’s scope is intentionally broad, and it introduces pre-market compliance obligations requiring manufacturers to conduct child safety assessments before bringing new AI toys to Maryland consumers. Manufacturers of toys already on the market as of July 1, 2026, would be given until January 1, 2027, to complete their initial assessments. Violations of the act are classified as unfair, abusive, or deceptive trade practices under the Maryland Consumer Protection Act, exposing non-compliant manufacturers to civil penalties of up to $50,000 per violation and mandatory product recalls.

The act would impose substantial data privacy requirements as well. Manufacturers would be limited to collecting only the minimum child user data necessary for core toy functionality, and all such data must be encrypted. The act flatly prohibits selling or transferring child user data to third parties, using it to train unrelated AI models, targeting children with advertising based on that data, or retaining it for more than 12 months without renewed parental consent. In the event of a data breach, manufacturers must notify affected parents or guardians within 48 hours of discovery, a tight notification window that underscores the legislature’s prioritization of child data security.

Toys containing AI would be required to include accessible mechanisms for obtaining and revoking parental consent, and parents must be able to disable data collection without losing the toy’s core functionality. The toys themselves are prohibited from generating content that is sexual, violent, emotionally manipulative, or instructional about harmful behaviors, and must incorporate content moderation tools, age-appropriate conversational filters, and an automatic safe mode triggered by harmful or unknown inputs. Reflecting deeper concerns about children’s psychological wellbeing, the act would also prohibit manufacturers from marketing AI toys as emotional companions, parental substitutes, or psychological counselors.

Finally, the act would also create a dedicated enforcement and oversight infrastructure through the establishment of the Artificial Intelligence Toy Safety Review Panel within the Consumer Protection Division of the Office of the Attorney General. The panel is charged with reviewing manufacturer compliance, conducting independent audits of AI toys sold or distributed to children in Maryland, and evaluating industry safety standards to recommend statutory updates. Beginning December 1, 2027, the panel would be required to report annually to the General Assembly on its findings. This ongoing oversight mechanism signals that the legislature views the act not as a static set of rules, but as a living regulatory framework designed to evolve alongside advances in AI technology.

While the future of this legislation is unclear, it is indicative of a concerted effort by lawmakers to address what is viewed as a lack of regulatory oversight of AI in toys and other products used by children.

One regulator that is not currently moving to apply its powers to the integration of AI technologies into toys is the Consumer Product Safety Commission (CPSC). On February 13, 2026, the CPSC issued a letter from Acting Chairman Peter A. Feldman responding to a January 15, 2026, letter from Senators Klobuchar, Cantwell, and Markey regarding the integration of AI technologies into children's toys. The CPSC clarified that its statutory mission has traditionally focused on reasonably foreseeable risks of physical injury and that it is neither equipped nor authorized to evaluate non-physical hazards, such as mental, emotional, or psychological harm. The agency also noted that extending its jurisdiction to such harms would constitute a novel expansion beyond the authority granted by Congress, and it emphasized that this approach is consistent with the broader administration's policy of encouraging innovation while ensuring agencies remain within their statutory lanes.

These divergent approaches underscore the need for consistent monitoring and flexibility in a rapidly evolving regulatory landscape. Even though federal regulators are not focusing on these issues, sellers and manufacturers of these products should be aware of the possibility of state-level regulatory action and continue to monitor developments in this space.

  • Posted in: Corporate & Commercial
  • Blog: Retail & Consumer Products Law Observer
  • Organization: Crowell & Moring LLP
