Facebook sees rise in violent content and harassment after policy changes

Meta has published the first of its quarterly integrity reports since Mark Zuckerberg walked back the company’s hate speech policies and changed its approach to content moderation earlier this year. According to the reports, Facebook saw an uptick in violent content, bullying and harassment despite an overall decrease in the amount of content taken down by Meta.

The reports are the first time Meta has shared data about how Zuckerberg’s decision to upend Meta’s policies has played out on the platform used by billions of people. Notably, the company is spinning the changes as a victory, saying that it reduced its mistakes by half while the overall prevalence of content breaking its rules “largely remained unchanged for most problem areas.”

There are two notable exceptions, however. Violent and graphic content increased from 0.06%-0.07% at the end of 2024 to 0.09% in the first quarter of 2025. Meta attributed the uptick to “an increase in sharing of violating content” as well as its own attempts to “reduce enforcement mistakes.” Meta also saw a notable increase in the prevalence of bullying and harassment on Facebook, which rose from 0.06%-0.07% at the end of 2024 to 0.07%-0.08% at the start of 2025. Meta says this was due to an unspecified “spike” in violations in March. (Notably, this is a separate category from the company’s hate speech policies, which were rewritten to allow posts targeting immigrants and LGBTQ people.)

Those may sound like relatively tiny percentages, but even small increases can be noticeable for a platform like Facebook that sees billions of posts every day. (Meta describes its prevalence metric as an estimate of how often rule-breaking content appears on its platform.)
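For a rough sense of that scale, here is a back-of-envelope sketch in Python. The daily view volume is an illustrative assumption, not a figure from Meta’s report; the prevalence values are the ones reported above.

```python
# Back-of-envelope scale check: a small prevalence shift, multiplied by a
# huge daily content volume, still yields a large absolute number of
# violating views.
# ASSUMPTION (illustrative only): ~100 billion content views per day.
# This figure does not come from Meta's report.
DAILY_VIEWS = 100_000_000_000

prevalence_q4_2024 = 0.0007  # upper end of the reported 0.06%-0.07% range
prevalence_q1_2025 = 0.0009  # the reported 0.09% for violent/graphic content

extra_views = (prevalence_q1_2025 - prevalence_q4_2024) * DAILY_VIEWS
print(f"Roughly {extra_views:,.0f} additional violating views per day")
# Under these assumptions: roughly 20,000,000 additional views per day.
```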

The report also underscores just how much less content Meta is taking down overall since it moved away from proactive enforcement of all but its most serious policies, like child exploitation and terrorist content. Meta’s report shows a significant decrease in the number of Facebook posts removed for hateful content, for example, with just 3.4 million pieces of content “actioned” under the policy, the company’s lowest figure since 2018. Spam removals also dropped precipitously, from 730 million at the end of 2024 to just 366 million at the start of 2025. The number of fake accounts removed on Facebook also declined notably, from 1.4 billion to 1 billion. (Meta doesn’t provide stats on fake account removals on Instagram.)

At the same time, Meta claims it’s making far fewer content moderation mistakes, which was one of Zuckerberg’s main justifications for his decision to end proactive moderation. “We saw a roughly 50% reduction in enforcement mistakes on our platforms in the United States from Q4 2024 to Q1 2025,” the company wrote in an update to its January post announcing its policy changes. Meta didn’t explain how it calculated that figure, but said future reports would “include metrics on our mistakes so that people can track our progress.”

Meta is acknowledging, however, that there is at least one group where some proactive moderation is still necessary: teens. “At the same time, we remain committed to ensuring teens on our platforms are having the safest experience possible,” the company wrote. “That’s why, for teens, we’ll also continue to proactively hide other types of harmful content, like bullying.” Meta has been rolling out “teen accounts” for the last several months, which should make it easier to filter content specifically for younger users.

The company also offered an update on how it’s using large language models to aid in its content moderation efforts. “Upon further testing, we are beginning to see LLMs operating beyond that of human performance for select policy areas,” Meta writes. “We’re also using LLMs to remove content from review queues in certain circumstances when we’re highly confident it does not violate our policies.”

The other major component of Zuckerberg’s policy changes was an end to Meta’s fact-checking partnerships in the United States. The company began rolling out its own version of Community Notes to Facebook, Instagram and Threads earlier this year, and has since expanded the effort to Reels and Threads replies. Meta didn’t offer any insight into how effective its new crowd-sourced approach to fact-checking might be or how often notes are appearing on its platform, though it promised updates in the coming months.



