issue 60
social media, youth radicalization, and how to stop it
this is a paper i wrote two years ago. i’ve never shared my academic writing before but i think this is more important than ever, so i’m publishing it here.
In 2022, the New York Times’ Melinda Wenner Moyer reported that young people were spending more time than ever on their phones -- “eight hours and 39 minutes” a day as of 2021, to be exact (para. 2). Here, “young people” refers to those between the ages of thirteen and twenty. These numbers rose sharply as the COVID-19 pandemic swept the world: lockdown mandates instituted to prevent the spread of the disease left young people stuck at home with no choice but to connect with their peers virtually. But not all of the communication happening on young people’s cell phones during this time was positive. Social media accounts for a substantial portion of the time young people spend digitally connected, and while it gave them the chance to chat with friends they could not see in person, it also gave them the chance to be radicalized into far-right and other extremist groups. Simply put, the increase in time young people spend on social media has increased hate among them because of the influx of extremist content on these platforms. Ahmad (2017) notes that “fear has intensified” because of the acts of violence that have already occurred and those that might occur in the future (p. 119). Social media algorithms are intentionally pushing extremist content to young people and driving high rates of youth radicalization, rates that can be lowered through education about bigotry and the building of community.
Social media websites such as Instagram and X feature algorithms that intentionally promote extremist content to young people. Here, extremist content means anything that is hateful or that encourages joining extremist groups such as ISIS. According to Pandith and Ware (2021), another social media website with a concerning algorithm is TikTok, which “has been manipulated for extremist recruitment” in the past (para. 12). The Anti-Defamation League (ADL) states that, to date, the only social media website that has intentionally altered its algorithm to prevent the proliferation of extremist content is YouTube (Anti-Defamation League, 2023, para. 3). That YouTube altered its algorithm shows that every social media website can alter these harmful algorithms to stop spreading extremist content, and that the others are simply not doing so. A study by the ADL and the Tech Transparency Project (TTP) concluded that, because of how these algorithms are constructed, the websites themselves are responsible for the “proliferation of antisemitism, hate, and extremism online” (2023, para. 2). Carless and Guynn (2023) reported that in the study, test accounts meant to resemble those of young people were shifted from harmless content to extremist content, including “Nazi propaganda, Holocaust denial and white supremacist symbols,” in a very short amount of time, and that the same kinds of hateful accounts were not promoted to accounts meant to resemble those of adults (para. 7).
The proliferation of extremist content by social media algorithms is especially concerning given the amount of time young people spend on these websites. As Tikhonova (2018) observes, “it has always been typical for teenagers to be in so-called online public environments,” where, in the past, they would have gathered in parks (para. 1). The way these algorithms push extremist content is also disturbing because the evidence shows the content is being pushed specifically to young people. Young people’s brains are not yet fully developed, and there is serious concern that they may be radicalized simply because they cannot recognize extremist content for what it is. For example, Carless and Guynn (2023) explained that neo-Nazis on social media use “codewords, emoji combinations, and deliberate typos” (para. 26). Because of these evasion tactics, the content moderation that these websites rely on in lieu of updating their algorithms has difficulty confirming whether content is in fact hateful. The coded language used by neo-Nazis also means that people may be sharing hateful content unintentionally, believing it to be nothing more than funny, harmless memes. Extremist groups effectively target young people because of their still-developing brains, and social media websites not only allow this targeting but aid it by using their algorithms to promote hateful content. Between the algorithms disseminating extremist content and the amount of time young people spend online, the threat of youth radicalization has increased markedly.
Because of the extremist content disseminated by social media algorithms, and the considerable amount of time young people spend on social media websites, young people are being radicalized at accelerated rates. Asscher et al. (2020) call radicalized young people “a great threat for society” (p. 2). There has recently been a significant increase in violent acts committed by young people associated with online radicalization. One such act was carried out in 2020 by Kyle Rittenhouse, a teenager who shot and killed two protesters at a Black Lives Matter demonstration in Kenosha, Wisconsin, in what right-wing activists deemed an act of vigilante justice. According to Homans (2021), it was later discovered that Rittenhouse may have been an active participant in “Facebook groups and Reddit threads where...groups organized” in opposition to racial justice protests, claiming they were defending their cities. These Facebook groups, which began appearing after the 2020 racial justice movement sparked by the deaths of Breonna Taylor and George Floyd, directly violated Facebook’s then-recently instituted rule against militia organizations. Yet it was not until after the Rittenhouse shootings that Facebook told NPR it was removing his Facebook and Instagram pages, along with the page of the militia he was alleged to have joined, called “Kenosha Guard” (Mihalopoulos, 2020, para. 25). This is another case in which a social media website could have addressed its radicalization problem before it produced an act of violence, especially since the group broke Facebook’s own rules, and did not. YouTube’s altered algorithm and the steps Facebook took after the shootings prove that these companies are able to act.
Another example of radicalization via social media leading to violent crime perpetrated by youth was the Boston Marathon bombing in 2013, which killed three people and injured hundreds more. Dzhokhar Tsarnaev was only nineteen years old when he and his older brother, Tamerlan, carried out the attack. O’Neill (2015) reported that the brothers were inspired to commit the act of violence after being radicalized by easily accessible online propaganda, describing the global jihad movement as “social media savvy” in circulating hateful content and even instructions for building items like bombs (para. 25). Although it was known that this content was circulating online, and it continues to circulate to this day, little was done to remove it from the Internet, removal that could have prevented further violent acts. Very little is being done online to prevent the radicalization of youth, even though social media algorithms could be altered and extremist content could be taken down. Just as little is being done to help young people, who are exposed to this content so frequently, understand the true danger of it and of continuing to share it.
Education about bigotry can counter the rapid radicalization of youth by social media algorithms advancing extremist content, as can taking steps to help young people forge community offline. It is not too late to alter the algorithms that are leading to youth radicalization and to the acts of violence it produces. As the example of YouTube shows, and as the steps social media websites have taken after acts of violence instigated by their algorithms confirm, these algorithms can be altered, and it is a failing of these websites that they have not yet done so. Their inaction signals that they have no interest in changing, that they will continue to disseminate hateful content to young people who are vulnerable to being radicalized by it, radicalization that could lead to extreme acts of violence. But changing the algorithms is not the only way to prevent the radicalization of youth and the violence that can follow. Research points to two main reasons young people are being radicalized by content on social media.
The first reason, as mentioned above, is that their brains are not yet fully developed, so they are unable to recognize the content their social media algorithms expose them to as extremist. Education on what extremist content looks like, how harmful its messaging can be, and how easy it is to get caught up in extremism is a necessary deradicalization approach, especially today, when young people are so connected and so constantly on social media. Ahmad (2017) identifies the second reason: young people search for community and want to feel that they belong to a group (p. 119). Asscher et al. (2020) found that “experiences of abandonment” were a major risk factor for radicalized youth (p. 2). Ahmad (2017) uses Social Learning Theory to explain how young people whose “concept of self and identity are threatened” during “the formative years of youth development” turn to extremism to feel included (p. 121). The answer to extremism found in the search for community is to help young people forge community elsewhere. This does not necessarily mean cutting off their access to social media as a deradicalization approach; in fact, doing so could cut off their access to community altogether. Overall, education on the harms of extremist content and the chance to find community are integral pieces of youth deradicalization.
Because social media algorithms intentionally put extremist content on young people’s timelines, young people are being radicalized, in some cases going on to commit violent crimes, and they need education about bigotry and opportunities for community to prevent more of these cases. If social media websites refuse to alter their algorithms so that they stop promoting extremist content to young people, more acts of violence will be committed by young people who have been radicalized. It is essential to create deradicalization methods for the youth who will inevitably be exposed to this content. Education on what extremist content looks like and how harmful exposure to it can be is one such method; giving young people opportunities to forge community is the other. Judging by the record of websites like Facebook, which refused to alter their algorithms and removed extremist content, even content that broke their own rules, only after violence had already been committed, social media companies have zero interest in changing. Because of this, we have to do the changing ourselves.
References
Anti-Defamation League. (2023, August 16). From Bad to Worse: Amplification and Auto-Generation of Hate. https://www.adl.org/resources/report/bad-worse-amplification-and-auto-generation-hate
Ahmad, H. (2017). Youth De-Radicalization: A Canadian Framework. Journal for Deradicalization, Fall(12), 119-168. https://journals.sfu.ca/jd/index.php/jd/article/view/113/94
Asscher, J. J., Emmelkamp, J., Stams, G. J. J. M., & Wissink, I. B. (2020). Risk factors for (violent) radicalization in juveniles: A multilevel meta-analysis. Aggression and Violent Behavior, 55. https://www.sciencedirect.com/journal/aggression-and-violent-behavior
Carless, W., & Guynn, J. (2023, August 21). Social media algorithms push racism, studies say. USA Today. https://www.usatoday.com/story/money/2023/08/17/antisemitism-on-social-media-rising/70605213007/
Homans, C. (2021, October 26). Kyle Rittenhouse and the New Era of Political Violence. New York Times. https://www.nytimes.com/2021/10/26/magazine/kyle-rittenhouse-kenosha-wisconsin.html
Mihalopoulos, D. (2020, August 27). Kenosha Shooting Suspect Fervently Supported ‘Blue Lives,’ Joined Local Militia. NPR. https://www.npr.org/sections/live-updates-protests-for-racial-justice/2020/08/27/906566596/alleged-kenosha-shooter-fervently-supported-blue-lives-joined-local-militia
O’Neill, A. (2015, March 30). The 13th Juror: The radicalization of Dzhokhar Tsarnaev. CNN. https://www.cnn.com/2015/03/27/us/tsarnaev-13th-juror-jahar-radicalization/index.html
Pandith, F., & Ware, J. (2021, March 22). Teen terrorism inspired by social media is on the rise. Here’s what we need to do. NBC News. https://www.nbcnews.com/think/opinion/teen-terrorism-inspired-social-media-rise-here-s-what-we-ncna1261307
Tikhonova, A. D. (2018). Social media and youth: The risk of radicalization. Psychology and Law, 8(4), 55-64. https://doaj.org/article/cd7d34cf25f4477a8069fc243f8f017b
Wenner Moyer, M. (2022, March 24). Kids as Young as 8 Are Using Social Media More Than Ever, Study Finds. New York Times. https://www.nytimes.com/2022/03/24/well/family/child-social-media-use.html

