Meta has threatened to withdraw its apps from New Mexico over proposed state regulatory requirements designed to enhance protections for minors on its platforms, according to the AP.
Meta floated the withdrawal threat as part of its legal defense in an ongoing trial over allegations that it failed in its duty to protect minors from exposure to harm across its platforms. Last month, Meta was hit with $375 million in civil penalties in New Mexico after a jury found the company liable for failing to protect young users from child predators in its apps.
In the second phase of that same trial, Meta will defend against further charges related to public nuisance, with New Mexico regulators seeking to impose additional obligations on the company to strengthen its safety measures, in order to meet what the state deems to be adequate standards of protection.
As reported by the AP, those measures include a requirement that Meta maintain 99% accuracy in verifying that all users are at least 13 years old.
In response, ahead of the trial, Meta said that the state’s requests “are so broad and so burdensome” that, if implemented, it might need to consider withdrawing its apps from the state entirely, as it would not be able to guarantee such measures, according to reporting from The New York Post.
Whether that’s a genuine threat or a legal tactic is difficult to say. Enforcing an app ban in any single state would also prove virtually impossible, given VPN use and other workarounds.
But for now at least, Meta is threatening to pull its apps entirely, unless state regulators scale back their demands and allow more flexibility in their requirements.
The issue once again highlights the challenges of age verification, and of keeping young users out of social media apps. Various regions are considering new laws to stop children from accessing social media platforms, amid concerns that they could be exposed to predatory and other harmful actors. Some research has also indicated that social media exposure can be harmful to teens, and may be contributing to mental health issues.
Yet the academic literature on the subject is mixed, with other studies suggesting that the social benefits of such platforms outweigh the negatives.
And either way, enforcement of age barriers is notoriously difficult, especially amongst a generation of digitally savvy kids who know their way around the various measures designed to block their path.
Indeed, in Australia, which enacted its under-16 social media ban in December, initial reports indicate that the majority of kids are still accessing social media apps, and that the ban has had no impact on usage, despite the increased potential penalties.
In its initial findings, the Australian government tested a range of age-checking measures and found that there are systems that can adequately lock young teens out of social apps. But it didn’t mandate any single solution, opting instead to let the platforms determine what they believe will work best for meeting these new requirements.
Evidently, that hasn’t resulted in broad compliance. There may be a definitive solution for age checking, but right now, there’s seemingly no system that’s foolproof, and that will ensure detection to the level regulators are seeking.
Which is why Meta is pushing back, and it’ll be interesting to see whether the company actually follows through on the threat and attempts to restrict access in a single U.S. state.
But also, if Meta can’t guarantee 99% accuracy in keeping underage kids out of its apps, in New Mexico or presumably anywhere else, what level of enforcement can it commit to?
And if that number falls below, say, 50%, what’s the point of implementing new laws to restrict kids’ access at all, given that Meta is essentially saying such requirements won’t work either way?

