Instagram’s Latest Teen Safety Measure: Nudity in DMs Will Be Automatically Blurred by Meta

Meta has announced it’s testing new features on Instagram intended to help safeguard young people from unwanted nudity or sextortion scams. This includes a feature called Nudity Protection in DMs, which automatically blurs images detected as containing nudity.

The tech giant will also nudge teens to protect themselves by serving a warning encouraging them to think twice about sharing intimate imagery. Meta says it hopes this will boost protection against scammers who may send nude images to trick people into sending their own images in return.

It’s also making changes it suggests will make it more difficult for potential scammers and criminals to find and interact with teens. Meta says it’s developing new technology to identify accounts that are “potentially” involved in sextortion scams and applying some limits to how these suspect accounts can interact with other users.

In another step announced Thursday, Meta said it’s increased the data it’s sharing with the cross-platform online child safety program Lantern – to include more “sextortion-specific signals”.

The social networking giant has long-standing policies banning the sending of unwanted nudes or seeking to coerce other users into sending intimate images. However, these problems continue to be rife online and can cause misery for teenagers and young adults, sometimes with tragic results.

We’ve gathered the latest round of changes in greater detail below.

Nudity Protection Screens

The Nudity Protection in DMs feature aims to protect teen Instagram users from cyberflashing by placing nude images behind a safety screen. Users can then choose whether or not to view the image.

“We’ll also show them a message encouraging them not to feel pressure to respond, with an option to block the sender and report the chat,” said Meta.

The nudity safety screen will be turned on by default for all users under 18 globally. Older users will receive a notification encouraging them to turn on the feature.

“When nudity protection is turned on, people sending images containing nudity will see a message reminding them to be cautious when sending sensitive photos, and that they can unsend these photos if they’ve changed their mind,” Meta added.

Anyone attempting to forward a nude image will also see a warning encouraging them to reconsider.

The feature uses on-device machine learning, so it will work within end-to-end encrypted chats as the image analysis is done on the user’s device.
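To make the mechanics concrete, here is a minimal sketch of how such on-device gating could work. It is purely illustrative: Meta has not published its implementation, and the nudity_score classifier, threshold, and field names below are all invented for this example.

    BLUR_THRESHOLD = 0.8  # hypothetical confidence cutoff


    def nudity_score(image_bytes: bytes) -> float:
        """Stand-in for an on-device ML classifier. A real system would run
        a local model here; this stub just returns a fixed probability."""
        return 0.0


    def prepare_incoming_image(image_bytes: bytes) -> dict:
        """Decide locally whether to put an image behind a safety screen.

        Because the check runs on the recipient's device, it works even in
        end-to-end encrypted chats: the decrypted image never leaves the phone.
        """
        score = nudity_score(image_bytes)
        return {
            "image": image_bytes,
            "blurred": score >= BLUR_THRESHOLD,  # shown behind a safety screen
            "actions": ["view", "block_sender", "report_chat"],
        }

The key design point is that classification happens after decryption on the recipient's device, so the safety screen requires no server-side scanning and no weakening of the encryption itself.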

Safety Tips

As another safeguarding measure, Instagram users sending or receiving nudes will be directed to safety tips — including information about potential risks — which Meta has developed with input from experts.

“These tips include reminders that people may screenshot or forward images without your knowledge, that your relationship to the person may change in the future, and that you should review profiles carefully in case they’re not who they say they are,” the tech giant explained. “They also link to a range of resources, including Meta’s Safety Center, support helplines, StopNCII.org for those over 18, and Take It Down for those under 18.”

Meta is also testing pop-up messages for users who have interacted with an account that has been removed for sextortion. These messages will direct them to relevant expert resources.

“We’re also adding new child safety helplines from around the world into our in-app reporting flows. This means when teens report relevant issues – such as nudity, threats to share private images or sexual exploitation or solicitation – we’ll direct them to local child safety helplines where available,” the company added.

Technology to Spot Sextortionists

While Meta says it removes sextortionists’ accounts once it becomes aware of them, those bad actors first have to be detected before they can be shut down. To that end, Meta is going further and “developing technology to help identify where accounts may potentially be engaging in sextortion scams, based on a range of signals that could indicate sextortion behavior”.

“While these signals aren’t necessarily evidence that an account has broken our rules, we’re taking precautionary steps to help prevent these accounts from finding and interacting with teen accounts,” the company states, adding: “This builds on the work we already do to prevent other potentially suspicious accounts from finding and interacting with teens.”

It’s not clear what specific technology Meta is using here or which signals could flag a potential sextortionist (we’ve asked for more information). One plausible approach is analyzing communication patterns, such as whom an account contacts and how, to surface bad actors.
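As a purely speculative illustration of what signal-based detection can look like in general, the sketch below scores an account on a few invented behavioral signals. Every field name, weight, and threshold is hypothetical, and none of it reflects Meta’s actual, undisclosed system.

    from dataclasses import dataclass


    @dataclass
    class AccountSignals:
        account_age_days: int
        teen_follow_requests_24h: int     # bursts of requests sent to teen accounts
        reports_for_threats: int          # user reports of threats to share images
        messages_with_payment_links: int  # a common pattern in sextortion scripts


    def sextortion_risk(s: AccountSignals) -> float:
        """Combine weak signals into a 0-to-1 risk score. Since a high score
        is not proof of a violation, it would trigger friction (hidden message
        requests, no Message button on teen profiles) rather than removal."""
        score = 0.0
        if s.account_age_days < 30:
            score += 0.2
        score += min(s.teen_follow_requests_24h / 50, 1.0) * 0.3
        score += min(s.reports_for_threats / 3, 1.0) * 0.3
        score += min(s.messages_with_payment_links / 5, 1.0) * 0.2
        return min(score, 1.0)

This kind of scoring matches the precautionary framing in Meta’s statement: weak signals justify limiting an account’s reach, not banning it outright.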

Accounts flagged by Meta as potential sextortionists will face restrictions on how they communicate or interact with other users.

“[A]ny message requests potential sextortion accounts try to send will go straight to the recipient’s hidden requests folder, meaning they won’t be notified of the message and never have to see it,” the company said.

Users who are currently conversing with potential scam or sextortion accounts will not have their chats terminated but will receive Safety Notices “encouraging them to report any threats to share their private images, and reminding them that they can say no to anything that makes them feel uncomfortable”.

Teen users are already protected from receiving unsolicited DMs from adults they are not connected to on Instagram (and from other teens in some cases). However, Meta will take the extra step of not showing the “Message” button on a teen’s profile to potential sextortion accounts, even if they are connected.

“We’re also testing hiding teens from these accounts in people’s follower, following and like lists, and making it harder for them to find teen accounts in Search results,” the company added.

It’s worth noting that the EU is closely monitoring child safety risks on Instagram, with regulators asking questions about Meta’s approach since the bloc’s Digital Services Act (DSA) came into effect last summer.

A Long, Slow Creep Towards Safety

Meta has announced measures in the past to combat sextortion – most recently in February when it expanded Take It Down’s reach.

This third-party tool enables users to generate an image’s hash code on their own device and share it with the National Center for Missing and Exploited Children, creating a repository of hash codes for non-consensual images that companies can use to search for and delete revenge porn.
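The core idea, hashing on the device so that only a fingerprint is ever shared, can be shown in a few lines. The sketch below uses a plain SHA-256 digest for simplicity; matching services of this kind typically rely on perceptual hashes that survive resizing and re-encoding, which an exact cryptographic hash does not, so treat this as a simplified stand-in.

    import hashlib


    def image_fingerprint(path: str) -> str:
        """Hash an image's raw bytes locally and return a hex digest that
        can be submitted to a matching service; the image itself never
        leaves the device."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()


    # Platforms can then compare uploaded content against the repository
    # of submitted digests, e.g.:
    # fingerprint = image_fingerprint("photo.jpg")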

Past efforts by Meta have been criticized because they require young people to upload their nude images. In the absence of firm laws regulating how social media platforms should protect kids, Meta was left to self-regulate for many years — with spotty results.

However, some requirements placed on platforms in recent years, such as the UK’s Children’s Code (also known as the Age Appropriate Design Code), which came into force in 2021, and, more recently, the EU’s DSA, have pushed major tech companies like Meta to prioritize child safety.

For example, in July 2021, as the UK compliance deadline approached, Meta began defaulting young people’s Instagram accounts to private. In November 2022, it introduced even tighter privacy settings for teen users on Facebook and Instagram.

This January, Meta announced that it would default teens on both Facebook and Instagram to even stricter message settings, with limits on messaging other teens they are not already connected to – just before the DSA compliance deadline of February.

Meta’s slow, incremental rollout of protections for young users raises the question of why stronger safeguards took so long to arrive, and suggests the company opted for a minimal commitment to safety in order to preserve usage and prioritize engagement. (This is precisely what Meta whistleblower Frances Haugen repeatedly accused her former employer of.)

When asked why the latest protection measures for Instagram users are not also being introduced on Facebook, a Meta spokesperson told TechCrunch:

“We want to respond to where we see the greatest need and relevance, which, in terms of unwanted nudity and educating teenagers about the dangers of sharing sensitive images, we believe is on Instagram DMs; as a result, that is where we are focusing first.”
