Meta Will Test Features Blurring Lewd Images To Protect Teenagers

Instagram will soon test features to blur lewd messages as part of an effort to protect teenagers and bar potential scammers from reaching them, according to the platform’s parent company, Meta.

The social media company is facing increasing pressure in the U.S. and Europe over allegations that its applications are addictive and harm young people’s mental health.

Instagram’s parent company said the features for the social media platform’s direct messages would use on-device machine learning to determine whether an image sent contains nudity.

The New York Post reported that the feature will be turned on by default for users under 18 and that Meta will encourage adult users to enable it.

“Because the images are analyzed on the device itself, nudity protection will also work in end-to-end encrypted chats, where Meta won’t have access to these images — unless someone chooses to report them to us,” the company said in a statement.

Instagram’s direct messaging feature, unlike Meta’s Messenger and WhatsApp applications, is not yet end-to-end encrypted.

Meta indicated that it was developing technology to help identify accounts that may be engaging in lewd scams, adding that it was testing new pop-up messages for individuals who may have interacted with such accounts.

“The feature is designed not only to protect people from seeing unwanted nudity in their DMs, but also to protect them from scammers who may send nude images to trick people into sending their own images in return,” the company said.

Images containing nudity will be blurred behind a warning, giving users the option to view them. Users will also be able to block the sender and report the message.

In January 2024, the tech giant said it would conceal more content from teenagers on Facebook and Instagram, arguing that this would make it harder for them to stumble across sensitive content like suicide, self-harm and eating disorders, according to the New York Post.

Attorneys general across the U.S., including those of California and New York, took legal action against the company in October 2023, arguing that it had consistently misled the public about the dangers of its social media platforms.