
From California to Texas to Ohio, lawmakers are increasingly turning to age-verification requirements to protect children online. These laws target a range of concerns, with a growing body of research suggesting that certain online activities may pose risks to children’s mental health and well-being.
At the same time, privacy laws continue to proliferate across the United States, many of which emphasize data minimization and limitations on the use of sensitive personal information. This creates a growing tension: protecting children online may require companies to collect more personal data than they otherwise would, potentially subjecting them to additional privacy requirements and consumer concerns.
Recent regulatory developments, including a policy statement from the Federal Trade Commission (FTC) addressing the use of age-verification technologies under the Children’s Online Privacy Protection Act (COPPA), reflect efforts to reconcile this tension.
Expanding and Contested Age Verification Laws
Age-verification laws are not new, but they have become more prevalent as technological advances have expanded children’s access to, and reliance on, online materials. These laws typically require certain websites to implement age-verification tools designed to confirm whether a user is above a specified age before accessing certain content or services that are considered harmful to children. Depending on the system used, verification may involve government-issued identification, biometric age estimation, credit card verification, or other identity verification methods.
One of the more notable laws in this area is Texas H.B. 1181, which requires entities that publish adult content harmful to minors online to use reasonable age-verification methods to confirm that visitors are at least 18 years old. The statute identifies the following reasonable age-verification methods: digital identification, government-issued identification, or a commercially reasonable method that relies on public or private transaction data to verify an individual’s age. Texas H.B. 1181 drew a First Amendment challenge, with the petitioners arguing that the law unconstitutionally burdened adults’ right to access legal content. That case, Free Speech Coalition, Inc. v. Paxton, eventually reached the U.S. Supreme Court, which upheld the law as an appropriate exercise of Texas’ authority to shield children from accessing obscene materials. While Paxton is a win for states enacting age-verification laws, its reach may be limited. Courts may reach a different outcome where the restricted content is not limited to obscene material but, for example, more broadly targets social media use.
Beyond free speech concerns, age-verification requirements present privacy risks, particularly where the tools being deployed require collection of sensitive personal information.
Privacy Laws and Consumer Concerns
While lawmakers are introducing age-verification requirements, privacy laws continue to emerge across the United States. Comprehensive privacy laws, like those enacted in California, Colorado, Delaware, Kentucky, New Jersey, Virginia, and other states, generally require companies to limit data collection to what is reasonably necessary for a disclosed purpose and to avoid retaining personal information longer than needed, and they impose heightened restrictions on certain sensitive data (e.g., biometric identifiers, children’s personal information, and financial information).
These laws can complicate the implementation of age-verification technologies. In practice, age-verification systems often rely on the collection of government-issued identification, facial scans, other biometric identifiers, or other personal data to confirm that a user meets minimum age thresholds. Collecting and processing this information may conflict with data minimization principles and introduce additional compliance obligations for companies that deploy these systems.
Beyond compliance considerations, collecting information for age-verification purposes can raise broader consumer privacy concerns. For instance, Discord, the popular online communication platform, announced that it would roll out global age verification, which would involve the collection of user ID scans, facial biometrics, and/or credit card verification where needed. Users were quick to express displeasure with the proposed policy, citing concerns about their privacy. As a result of the backlash, Discord postponed the global rollout.
Recognizing these tensions, companies and regulators have begun exploring ways to reconcile online safety objectives with privacy requirements and expectations.
FTC COPPA Policy Statement to Incentivize Use of Age Verification
On February 25, 2026, the FTC issued a policy statement announcing that it will not bring enforcement actions under COPPA against certain website and online service operators that collect, use, and disclose personal information for the sole purpose of age verification without first obtaining verifiable parental consent, provided they comply with the following conditions:
- Do not use or disclose information collected for age verification purposes for any other purpose;
- Do not retain such information longer than necessary for age verification;
- Disclose such information for age verification purposes only to those third parties that are capable of protecting such information and provide written assurances to that effect;
- Provide clear notice to parents and children of the information collected for age verification;
- Employ reasonable security safeguards for such information; and
- Take reasonable steps to determine that any age verification method used is likely to provide reasonably accurate results.
The FTC acknowledges that age-verification technologies play a critical role in protecting children online but can also create COPPA compliance risks. The policy statement is intended to allow limited flexibility in light of that tension.
Conclusion
The increasing use of age-verification requirements reflects a growing effort by lawmakers to address risks children face online. At the same time, the deployment of age-verification technologies raises important questions about privacy. Organizations that operate online platforms, or provide technologies that facilitate access to online content, should closely monitor developments in this area and evaluate whether their practices appropriately account for both age-verification obligations and evolving privacy requirements.
If you have questions about age verification requirements or privacy compliance considerations, Taft’s Privacy, Security & AI attorneys are available to assist. As always, please sign up to receive emails of our latest posts here on Privacy and Data Security Insights, and follow us on LinkedIn for the latest in privacy, security and artificial intelligence legal news.