Meta’s NEW ‘Teen Accounts’ – Wow!

After failing to protect youth for years, Big Tech giant Meta is now deploying digital babysitters to “protect” teens on Facebook, all while continuing to harvest their data and hook them on addictive algorithms.

At a Glance

  • Meta is introducing “teen accounts” for Facebook users under 16, restricting livestreaming and requiring parental permission for certain features
  • The new accounts, already used by over 54 million teenagers on Instagram, will automatically limit overnight notifications and restrict messaging to approved contacts
  • This move comes as social media faces increased scrutiny, with Australia banning under-16s from social networks entirely
  • Critics argue these measures don’t address the fundamental problem of harmful content proliferation on Meta’s platforms
  • The rollout begins in the US, UK, Australia, and Canada with plans for global expansion

Too Little, Too Late from Big Tech

Meta has suddenly discovered that maybe, just possibly, letting children have unfettered access to social media platforms might not be the best idea. The tech giant is now extending its supposedly protective “teen accounts” from Instagram to Facebook and Messenger, targeting users aged 13-15. These accounts will automatically restrict livestreaming capabilities and require parental permission to disable filters for inappropriate images. It’s almost as if allowing children to broadcast themselves globally to potential predators wasn’t a brilliant idea from the start.

The new accounts, which Meta claims are already used by over 54 million teenagers on Instagram, will limit overnight notifications and restrict messaging to people the user follows or is already connected to. Teens will be automatically enrolled in these accounts, with those under 16 requiring parental permission to change settings. The whole system essentially admits what parents have known all along – that social media platforms without strict controls are inappropriate for children.
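
To make the mechanics concrete, here is a minimal sketch in Python of the rules the article describes: automatic enrollment in restricted defaults, overnight notification limits, messaging confined to existing connections, and parental sign-off before an under-16 can loosen anything. All names and structures here are hypothetical; Meta has not published its implementation, and this is an illustration of the described policy, not Meta's code.

```python
from dataclasses import dataclass

@dataclass
class TeenAccountSettings:
    """Hypothetical model of the restricted defaults described above."""
    mute_overnight_notifications: bool = True      # no alerts overnight by default
    messaging_limited_to_connections: bool = True  # only followed/connected users
    livestreaming_allowed: bool = False            # livestreaming restricted

def enroll(age: int) -> TeenAccountSettings:
    """Teens are placed into the restricted defaults automatically."""
    if age < 13:
        raise ValueError("Below the platform's minimum age; no account.")
    return TeenAccountSettings()

def change_setting(settings: TeenAccountSettings, age: int, field: str,
                   value: bool, parent_approved: bool = False) -> None:
    """Under-16s need parental permission to change a protection."""
    if age < 16 and not parent_approved:
        raise PermissionError("Parental permission required for under-16s.")
    setattr(settings, field, value)

# Example: a 14-year-old cannot disable the overnight mute on their own.
s = enroll(14)
try:
    change_setting(s, 14, "mute_overnight_notifications", False)
except PermissionError as e:
    print(e)  # -> Parental permission required for under-16s.

# With a parent's approval, the same change goes through.
change_setting(s, 14, "mute_overnight_notifications", False, parent_approved=True)
print(s.mute_overnight_notifications)  # -> False
```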

Government Scrutiny Finally Forcing Action

It’s no coincidence that Meta’s sudden concern for teen safety comes as governments worldwide are cracking down on social media companies. The UK’s Online Safety Act now requires platforms to prevent or remove illegal content and protect children from harmful material. Australia’s parliament actually voted to ban under-16s from using social networks entirely last November. TikTok has introduced parental time limits in the EU. The writing is on the wall for these companies: regulate yourselves or we’ll do it for you.

“Teen Accounts on Facebook and Messenger will offer similar, automatic protections to limit inappropriate content and unwanted contact, as well as ways to ensure teens’ time is well spent,” said Meta.

Nick Clegg, Meta’s former president of global affairs, claims these changes will “shift the balance in favour of parents.” But let’s be clear: if Meta genuinely cared about teen safety more than profits, these protections would have been in place from day one. Instead, it took regulatory threats and public backlash to extract even this bare minimum. This is the same company whose internal documents showed it was fully aware its platforms were harming teenage mental health, particularly for girls, yet it continued business as usual.

Band-Aids on Bullet Wounds

Child safety advocates point out that these measures, while welcome, fail to address the fundamental problems with Meta’s platforms. The National Society for the Prevention of Cruelty to Children (NSPCC) supports the new restrictions but emphasizes they don’t go nearly far enough. The algorithmic nature of these platforms continues to push harmful content at users, regardless of age restrictions or parental controls.

“For these changes to be truly effective, they must be combined with proactive measures so dangerous content doesn’t proliferate on Instagram, Facebook and Messenger in the first place,” said Matthew Sowemimo, the NSPCC’s associate head of policy for child safety online.

The rollout begins in the US, UK, Australia, and Canada, with plans to expand globally. Meta claims that over 90% of 13- to 15-year-olds on Instagram keep the default restrictions in place, but that statistic conveniently ignores how many underage users simply lie about their age to create accounts in the first place. Until these platforms implement genuine age verification – which they resist because it would hurt user numbers and profits – these protections are largely performative theater, designed to stave off regulation rather than to truly protect children.