Facebook sees rise in violent content and harassment after policy changes

Meta has published the first of its quarterly integrity reports since Mark Zuckerberg walked back the company’s hate speech policies and changed its approach to content moderation earlier this year. According to the reports, Facebook saw an uptick in violent content, bullying and harassment despite an overall decrease in the amount of content taken down by Meta.

The reports are the first time Meta has shared data about how Zuckerberg’s decision to upend Meta’s policies has played out on the platform used by billions of people. Notably, the company is spinning the changes as a victory, saying that it reduced its mistakes by half while the overall prevalence of content breaking its rules “largely remained unchanged for most problem areas.”

There are two notable exceptions, however. Violent and graphic content increased from 0.06-0.07% at the end of 2024 to 0.09% in the first quarter of 2025. Meta attributed the uptick to “an increase in sharing of violating content” as well as its own attempts to “reduce enforcement mistakes.” Meta also saw a notable rise in the prevalence of bullying and harassment on Facebook, from 0.06-0.07% at the end of 2024 to 0.07-0.08% at the start of 2025. Meta says this was due to an unspecified “spike” in violations in March. (Notably, this is a separate category from the company’s hate speech policies, which were rewritten to allow posts targeting immigrants and LGBTQ people.)

Those may sound like relatively tiny percentages, but even small increases can be noticeable for a platform like Facebook that sees billions of posts every day. (Meta describes its prevalence metric as an estimate of how often rule-breaking content appears on its platform.)

The report also underscores just how much less content Meta is taking down overall since it moved away from proactive enforcement of all but its most serious policies, like child exploitation and terrorist content. Meta’s report shows a significant decrease in the amount of Facebook posts removed for hateful content, for example, with just 3.4 million pieces of content “actioned” under the policy, the company’s lowest figure since 2018. Spam removals also dropped precipitously, from 730 million at the end of 2024 to just 366 million at the start of 2025. The number of fake accounts removed on Facebook also declined notably, from 1.4 billion to 1 billion. (Meta doesn’t provide stats on fake account removals on Instagram.)

At the same time, Meta claims it’s making far fewer content moderation mistakes, which was one of Zuckerberg’s main justifications for his decision to end proactive moderation. “We saw a roughly 50% reduction in enforcement mistakes on our platforms in the United States from Q4 2024 to Q1 2025,” the company wrote in an update to its January post announcing its policy changes. Meta didn’t explain how it calculated that figure, but said future reports would “include metrics on our mistakes so that people can track our progress.”

Meta is acknowledging, however, that there is at least one group where some proactive moderation is still necessary: teens. “At the same time, we remain committed to ensuring teens on our platforms are having the safest experience possible,” the company wrote. “That’s why, for teens, we’ll also continue to proactively hide other types of harmful content, like bullying.” Meta has been rolling out “teen accounts” for the last several months, which should make it easier to filter content specifically for younger users.

The company also offered an update on how it’s using large language models to aid in its content moderation efforts. “Upon further testing, we are beginning to see LLMs operating beyond that of human performance for select policy areas,” Meta writes. “We’re also using LLMs to remove content from review queues in certain circumstances when we’re highly confident it does not violate our policies.”

The other major component of Zuckerberg’s policy changes was the end of Meta’s fact-checking partnerships in the United States. The company began rolling out its own version of Community Notes to Facebook, Instagram and Threads earlier this year, and has since expanded the effort to Reels and Threads replies. Meta didn’t offer any insight into how effective its new crowd-sourced approach to fact-checking might be or how often notes are appearing on its platform, though it promised updates in the coming months.


