The Challenge
We wanted to address the explosion of hate speech on social media without feeding the algorithm or giving hate more oxygen. We needed a solution that could disrupt the spread while supporting the mission. Life After Hate, a nonprofit that helps people leave violent extremist groups and rebuild their lives with compassion, was the perfect partner for the idea.
The Solution
What if you could make hate speech pay… literally? WeCounterHate flipped social media into a social contract. Using AI and human moderation, we scanned Twitter for hate speech and replied with a bold warning:
“If you retweet this, a donation will be made to an anti-hate organization in your name.”
No lectures. No soapbox. Just consequence. Suddenly, trolls had skin in the game and became accidental donors to the very causes they mocked.
The AI-powered tool was built in partnership with former extremists, including ex-neo-Nazis, skinheads, and members of Aryan Nations, who helped us train the system to detect coded and concealed hate speech. The model was trained on thousands of examples and closely monitored by human moderators. In real time, we intercepted hate at the point of amplification, using social pressure and transparency as catalysts for behavioral change.
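The moderation loop described above, where an automated classifier flags suspect tweets into a queue that a human moderator must approve before any reply is posted, can be sketched roughly as follows. This is a hypothetical illustration, not the actual WeCounterHate system: the trivial keyword scorer stands in for the trained model, and all names here (`score_tweet`, `ReviewQueue`, `CODED_TERMS`, the `@author` placeholder) are invented for the example.

```python
from dataclasses import dataclass, field
from typing import List

WARNING = ("If you retweet this, a donation will be made to an "
           "anti-hate organization in your name.")

# Hypothetical stand-in for the trained model: in the real system, a
# classifier trained on thousands of labeled examples (including coded
# slang identified with help from former extremists) produced a score.
CODED_TERMS = {"example-slur", "coded-phrase"}

def score_tweet(text: str) -> float:
    """Return a naive hate-likelihood score in [0, 1]."""
    words = set(text.lower().split())
    hits = len(words & CODED_TERMS)
    return min(1.0, hits / 2)

@dataclass
class ReviewQueue:
    """Tweets the model flags wait here for a human moderator."""
    pending: List[str] = field(default_factory=list)

    def flag(self, tweet: str, threshold: float = 0.4) -> bool:
        """Queue the tweet for human review if the score clears the bar."""
        if score_tweet(tweet) >= threshold:
            self.pending.append(tweet)
            return True
        return False

    def approve_all(self) -> List[str]:
        """Moderator confirms each flag; approved tweets get the reply."""
        replies = [f"@author {WARNING}" for _ in self.pending]
        self.pending.clear()
        return replies

queue = ReviewQueue()
queue.flag("nice weather today")                    # below threshold, ignored
queue.flag("some example-slur coded-phrase post")   # flagged for review
replies = queue.approve_all()                       # one counter-reply generated
```

The human-in-the-loop step is the design point: the model only nominates candidates, and nothing is posted until a moderator signs off, which keeps false positives from turning the counter-message itself into noise.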