Earlier this week, Roblox rolled out mandatory age verification for users who want to continue chatting, making the gaming hub the latest tech platform to strengthen protections for young audiences.
Meta’s Instagram is now limiting teens to PG-13-level content, while OpenAI has adjusted how ChatGPT interacts with minors to reduce risks.
Meanwhile, Grok restricted image generation to paid subscribers after backlash over inappropriate outputs involving real people, including minors, in minimal attire.
Across gaming, social, and digital platforms, companies are responding to criticism that they lack adequate oversight of teen activity. At the same time, privacy advocates such as the Electronic Frontier Foundation warn that age-verification tools like ID checks, biometric scans, and behavioral assessments pose risks to user privacy.
“These restrictive mandates strike at the foundation of the free and open internet,” the group said.
How this issue develops carries significant financial consequences for tech companies, which must contract with vendors for age-verification services and weigh the legal risks tied to whichever approach they choose.
Roblox (RBLX) announced Wednesday that it is tightening chat restrictions so users can only message peers within their age group. To do so, players must pass a “facial age estimation” check or provide a photo ID. The company said more than half of active users have opted in, though some complained about misclassification that blocked them from chatting with friends. Roblox and its vendor Persona, which provides the verification tool, did not respond to requests for comment.
Instagram’s teen accounts, introduced about a year ago, remain popular with parents. Recently, Meta began preventing these accounts from accessing content more mature than a PG-13 movie rating. “We take a comprehensive approach to ensuring teens have age-appropriate experiences on Meta platforms,” spokesman Edward Patterson said. Meta added that privacy guardrails are built into the system, with Yoti, the vendor performing the age checks, deleting selfies and photo IDs within 30 days.
Persona, which provides age-estimation services for OpenAI as well as Roblox, states that images are deleted once verification is complete, underscoring efforts to balance safety with privacy.
X has stopped allowing users to generate images on Grok unless they hold a subscription. The platform adopted this rule in recent days after its safety team emphasized that it removes illegal content and cooperates with law enforcement when necessary.
“Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” owner Elon Musk said on X.
More changes may follow, as governments worldwide weigh regulations on how platforms treat young users. New Zealand’s prime minister recently proposed banning social media use for those under 16, following Australia’s earlier restrictions on younger teens.
In the U.S., lawmakers are also considering new rules for youth protections online. The Electronic Frontier Foundation noted that more than half of states have already enacted laws requiring platforms to implement age-verification measures.
Roblox, Meta, OpenAI, and X are all tightening safeguards for young users, reflecting mounting pressure from regulators and parents. While these measures aim to protect teens from harmful content and interactions, they also raise concerns about privacy and the future of an open internet. The balance between safety, compliance, and user rights will define how these platforms evolve in the coming year.