Meta Cracks Down on Predators with New Teen Safety Tools After 600,000+ Suspicious Accounts Flagged


Meta unveiled a new wave of safety features on Wednesday aimed at better protecting teenagers across its platforms. These updates include stronger direct messaging safeguards to combat exploitative content and new tools to help teens identify and report suspicious users.

Younger users will now see added context about the people they're chatting with, such as when the other person's Instagram account was created, and will be able to block and report someone in a single step. In June alone, Meta said, teens used these tools to block over 1 million accounts and file another 1 million safety reports.

The tech giant has been under increasing pressure from regulators over how it handles child safety. As part of its recent efforts, Meta removed approximately 135,000 Instagram accounts earlier this year for sexualizing minors. These accounts often posted inappropriate comments or solicited explicit content from accounts featuring children, which are typically managed by adults.

An additional 500,000 accounts on Instagram and Facebook were taken down due to links with those initial profiles.

To tighten protections further, Meta has begun placing all teen accounts, along with adult-managed accounts that primarily feature children, into the platform's strictest safety settings by default. These settings include filters that block offensive messages and limit contact from unknown users.

Although users must be at least 13 to open an Instagram account, adults are allowed to manage profiles on behalf of younger children, provided this is clearly stated in the account bio.

Meta is also working to tackle spam and impersonation issues. In the first half of 2025, it removed around 10 million fake profiles that were pretending to be major content creators.

These changes come as lawmakers renew their push for stronger online protections for children. The Kids Online Safety Act, reintroduced in Congress this May, would require tech platforms to prioritize children's wellbeing by preventing harmful content and experiences.

Meanwhile, rival platform Snapchat has faced legal challenges, including a lawsuit from New Mexico accusing it of enabling an environment ripe for child exploitation. The company has denied the claims and is seeking to have the case dismissed.

Meta says its latest efforts are part of a broader mission to reduce harmful interactions and make its platforms safer for younger audiences.
