It’s a game of cat and mouse. As companies, international organizations and government agencies develop new policies and tools to combat online hate speech and extremism, extremists adapt their tactics in response.
In this episode of Big Tech, Taylor Owen speaks with Sasha Havlicek, founding CEO of the Institute for Strategic Dialogue, about her organization and how it works to counter online extremism and hate speech.
Several issues make Havlicek’s work difficult. Among them are the regional, cultural and religious factors that complicate any definition of extremist content. Jurisdiction presents another hurdle: who is responsible for deciding on norms and setting rules? And keeping up with evolving technology and tactics is a never-ending battle: as digital tools become more effective at identifying and removing online extremism and hate speech, extremist groups find ways to circumvent them.
These problems are amplified by engagement-driven algorithms. While the internet allows individuals to choose how and where they consume content, platforms exploit those preferences to keep users engaged. “The algorithms are designed to find ways to hold your attention,” Havlicek said. “By feeding you slightly more titillating variants of whatever it is that you’re looking for, you are going to be there longer.” These algorithms help create echo chambers, which are highly effective at pushing users toward extremism.
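A minimal sketch of the dynamic Havlicek describes: a toy feed that greedily picks whatever it predicts will hold attention longest. The intensity scale, the predicted_engagement() model and the candidate generation below are illustrative assumptions, not the episode’s (or any real platform’s) actual algorithm.

```python
import random

# Conceptual sketch (not from the episode): a toy feed that greedily maximizes
# predicted engagement. Content is reduced to a single "intensity" score on a
# 0..1 scale, and engagement is assumed to rise slightly with intensity, so the
# greedy loop drifts toward ever more provocative items. All names and numbers
# are illustrative assumptions, not any real platform's algorithm.

def predicted_engagement(intensity: float) -> float:
    """Assumed user model: more intense content holds attention a bit longer."""
    return intensity + random.gauss(0, 0.05)

def next_item(last_intensity: float) -> float:
    """Greedy step: generate candidates near the last item, skewed slightly
    upward, and pick the one predicted to be most engaging."""
    candidates = [min(1.0, last_intensity + random.uniform(-0.02, 0.08))
                  for _ in range(5)]
    return max(candidates, key=predicted_engagement)

history = [0.1]          # start with mild content
for _ in range(10):
    history.append(next_item(history[-1]))

print([round(x, 2) for x in history])   # intensity tends to ratchet upward
```

The point of the toy is the feedback loop: because each step optimizes only for predicted attention, small upward biases compound into an increasingly narrow, increasingly extreme feed.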
Subscribe to the Big Tech podcast on Spotify, Apple Podcasts or Google Play.