Facebook executives have repeatedly promised to fix their platform.
Time and time again, they've failed to deliver.
We catalogued hundreds of Facebook's broken promises
into an interactive database, spanning 16 categories:
Abuse and Nudity
Alcohol and Drugs
Animal Crimes
Bullying and Harassment
Criminal Activity
Firearms
Fraud and Deception
Harm Against Property
Hate Speech
Historical Artifacts
Manipulated Media
Privacy and Image Rights
Regulated Goods
Suicide and Self-Injury
Violence and Incitement
Violent and Graphic Content
Scroll down

Section I

Broken Promises_
Here’s how it works:
Each news report is represented by a tile, with the color indicating the type of Facebook policy violation, like abuse and nudity.
Feb 5, 2018
FBI investigating child pornography video being shared on Facebook
Other tiles stand for violations like hate speech and
violence and incitement.
Together, they paint a devastating picture of Facebook’s Broken Promises.
Scroll to load a database
of major offenses.
Let’s drill down into the findings.
Facebook says it removes language that can be used to incite violence.
“While we understand that people commonly express disdain or disagreement by threatening or calling for violence in non-serious ways, we remove language that incites or facilitates serious violence.”
—Facebook Community Standards
In fact, the majority of news reports we compiled—290—relate to violence and criminal behavior on Facebook.
Mar 22, 2018
Wisconsin high school students caught planning school shooting on Facebook Messenger
Facebook insists that hate speech isn’t allowed on the platform.
“We believe that people use their voice and connect more freely when they don’t feel attacked on the basis of who they are. That is why we don’t allow hate speech on Facebook.”
—Facebook Community Standards
In fact, we found 134 news reports in our dataset featuring hate speech and other forms of objectionable content.
Mar 29, 2020
Racist message posted on city council candidate’s Facebook page
Facebook says it prohibits content that compromises people’s safety.
“We’re committed to making Facebook a safe place. Content that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook.”
—Facebook Community Standards
However, 84 reports in our dataset involved content that put someone’s safety at risk.
Nov 1, 2019
Vaccine advocates face bullying, harassment on Facebook
36 of these reports involved harm to children.
Nov 20, 2018
Teenage Sudanese girl auctioned on Facebook
92 reports included two or more violations of Facebook’s Community Standards.
How to read it
Each tile represents a violation, and its color indicates the event type.
Abuse and Nudity
Alcohol and Drugs
Animal Crimes
Bullying and Harassment
Criminal Activity
Firearms
Fraud and Deception
Harm Against Property
Hate Speech
Historical Artifacts
Manipulated Media
Privacy and Image Rights
Regulated Goods
Suicide and Self-Injury
Violence and Incitement
Violent and Graphic Content
None
Go to the Methodology to see how we categorized the incidents into event types.
By 2018, we recorded 162 violations.

Voice message from Mark, 2018:

“So this was a major breach of trust, and I’m really sorry that this happened.”

By 2019, we recorded 268 violations.

Voice message from Sheryl, 2019:

“We know we need to do better.”

By 2020, we recorded 394 violations.

Voice message from Mark, 2020:

“It was largely an operational mistake.”

By 2021, we recorded 415 violations.

Voice message from Mark, 2021:

“There will always be some mistakes.”

Section II

Facebook's Response_

Facebook has promised to act when its policies are broken.

“Our policies define what is and isn’t allowed on Facebook technologies.
If content goes against our policies, we take action on it.”

—Facebook policies page, 2021

However, Facebook responded, or “took action,” in fewer than a quarter of the news reports compiled by TTP.
Even in those cases, its response was often woefully inadequate.
In one case, Facebook promised to investigate.
In five cases, Facebook denied any violation.
In 19 cases, Facebook simply restated company policy.
Overall, Facebook took substantive action, by removing the content or changing its policies, in just 18% of cases.
When responding to reports, Facebook often drew on cookie-cutter PR statements:
1/31/2018
VIOLATION #30
Robert Goodwin's murder recorded and posted on Facebook
Topic
Violence and Criminal Behavior, Objectionable Content
Event type
Violence and Incitement
Violent and Graphic Content
“We want everyone using Facebook to feel safe...”
11/1/2019
VIOLATION #256
Vaccine advocates face bullying, harassment on Facebook
Topic
Safety
Event type
Bullying and Harassment
Hate Speech
“We want members of our community to feel safe and respected on Facebook...”

We analyzed Facebook’s responses to identify the patterns.

Facebook used versions of these four phrases repeatedly:

“It’s not easy, we want to get better” (17 occurrences, 2018–2021)

“We rely on a combination of technology, human review and reports from our community” (24 occurrences, 2018–2021)

“We decided to remove this content based on the suggestion of our experts” (27 occurrences, 2018–2021)

“We do not allow this content because it violates our Community Standards” (31 occurrences, 2018–2021)

Despite repeated promises by its executives, Facebook has systematically failed to police its platform.

It routinely permits violations of its policies on violence, hate speech and other topics.

When called out by the media, it rarely acts to fix the problem, resorting instead to recycled PR statements.

The time for talking is over. With Facebook unable or unwilling to clean up its platform, it’s time for regulators to step in.

To build this tool, the Tech Transparency Project collected more than three years of reports showing violations of Facebook's own policies.
Keep scrolling to explore the database.
Broken Promises
Click on a tile to explore.