by Farah Hasan, March 22, 2022
My phone lies face down on the table beside me, buzzing sporadically but insistently. I ignore it, fanning myself against the mid-July heat as I attempt to concentrate on an assignment for my summer class. I drum my fingers against the desktop and whisper the words aloud to myself, trying to make sense of the essay's convoluted sentences as the buzzing continues. What do they want? I think exasperatedly, assuming my friends are simply spamming me with memes from Instagram and funny TikToks. As I finish the reading passage and move on to the multiple-choice questions that accompany it, I decide to spare a glance at my phone. Expecting to see Instagram direct messages (DMs) and text messages headed by my friends' familiar usernames and contact names, I am shocked to instead see hundreds of Instagram comment notifications from unfamiliar usernames, all beginning with the common header "[Instagram user] mentioned you in a comment." My heart racing in anticipation, I open the Instagram app and quickly scroll through my notifications. On a post advertising travel to the Eiffel Tower, I had left a comment criticizing France's April 2021 ban on hijabs (headscarves worn by women for religious reasons) for Muslim women under the age of 18, and now I see that all these comments are in response to mine. Some of them back me up, but others range from applauding France's actions, to blatantly calling Islam backwards and incompatible with Western civilization, to attacking me as a young Muslim woman myself. I exit the app without bothering to respond to anyone and close my eyes for a second, my heart still pounding as the hateful words flash through my mind repeatedly.

Like me, young Muslims everywhere are exposed to Islamophobic rhetoric on the social media sites they use most, and chronic exposure to such hate inevitably takes a toll on their mental health. Online hate does not receive the same coverage or attention as street-level hate crimes, but its effects may be far more profound because of the vast reach of online platforms. Action should be taken to limit such hate speech on public platforms like social media to preserve the mental well-being of the users targeted by these remarks, even if it means limitations on the First Amendment right to free speech.
In a case close to home, a Muslim student who graduated from my high school in the summer of 2021 was chosen to deliver a speech at commencement. In her speech, she advocated for understanding and peaceful coexistence during difficult times and briefly mentioned the ongoing conflict between Israel and Palestine. This part of the speech incited infuriated outcries from the audience, jeers telling her to "go back to Pakistan" as she walked off the stage, and the creation of a Facebook group as a space for angry parents to vent and express mildly Islamophobic sentiments. Because it is convenient and easy to access, social media frequently becomes the default platform for these polarizing conversations. Certain social media sites, such as Twitter, are "better designed," in a sense, to perpetuate hate speech and facilitate radicalized expression. Nigel Harriman and a group of researchers at the Harvard T.H. Chan School of Public Health found that 57% of students who actively used YouTube, Instagram, and Snapchat had come across hate speech, and 12% had encountered a stranger who tried to convince them of racist beliefs (this was especially common on YouTube). Additionally, exposure to hate messages was significantly correlated with Twitter and Houseparty use (Harriman et al. 8531). Twitter is a particularly convenient hotbed for such rhetoric, as victims who come forward to report their experiences are often simply told to block the offending account or delete their own. In 2014, Twitter issued a statement claiming that it "cannot stop people from saying offensive, hurtful things on the Internet or on Twitter. But we can take action when content is reported to us that breaks our rules or is illegal" ("Updating Our Rules Against Hateful Conduct"). Twitter more recently updated its rules against hateful content in December 2020:
In July 2019, we expanded our rules against hateful conduct to include language that dehumanizes others on the basis of religion or caste. In March 2020, we expanded the rule to include language that dehumanizes on the basis of age, disability, or disease. Today, we are further expanding our hateful conduct policy to prohibit language that dehumanizes people on the basis of race, ethnicity, or national origin.
(“Updating Our Rules Against Hateful Conduct”)
Although Twitter has taken some necessary steps to limit hate speech, this form of harassment still persists on its platform and countless others, and more action must be taken to counter it.
As someone who frequents social media sites like Instagram and Facebook, I understand how detrimental the algorithms themselves can be to one's self-esteem; coupled with exposure to hate speech, the mental health of those targeted is all the more likely to plummet. Although I ultimately ignored the hate comments on the Instagram post about France, the incident bothered me for several days afterward, leaving me anxious, unsettled, and dealing with mild sleep difficulties, to the point where I deleted Instagram for a few months. Research by Dr. Helena Hansen at NYU Langone found that victims of hate speech have elevated levels of the stress hormone cortisol, exhibiting a blunted stress response as well as higher rates of anxiety, sleep difficulties, and substance use (Hansen et al. 929). Dr. Brianna Hunt at Wilfrid Laurier University found that exposure to Islamophobic rhetoric is also a predictor of social isolation and loneliness, particularly among Muslim women in Waterloo, Canada. Furthermore, the dehumanizing aspect of hate speech incites conflicts of identity in Muslim women of color, who feel that neither their religious nor their racial ingroups accept them fully, highlighting the need to address mental health in more complex cases of intersectionality as well (Hunt et al.).
In an effort to mitigate the destructive effects of hate speech on mental health, individuals have advocated for limiting such speech, but opponents of these limitations have expressed their concerns and dissatisfaction with this movement. In the 2017 case Matal v. Tam, the Supreme Court of the United States ruled that hate speech, like other speech, is protected under the First Amendment so long as it does not directly incite violence, under the justification that "giving offense is a viewpoint" (Beausoleil). Thus, individuals opposing the limitation of hate speech on social media argue that doing so would infringe on their First Amendment rights. There is also the danger that limitations of this sort would be a step toward mass surveillance and abuse of power, ultimately enabling large digital companies, and potentially the government, to stifle any and all dissent (Beausoleil 2124). Opponents further argue that some exposure to opposing, even offensive, speech is needed for the development of stable mental health, and that various studies have shown that limiting hate speech does not correlate with improved social equality (Beausoleil 2125). In fact, Dr. Stephen Newman of York University points out that this sort of expression may be integral to human personality development, and that exposure to robust forms of speech may actually improve societal dynamics by influencing democratic policy (Newman). Lastly, there is little existing literature proving that limiting hate speech is beneficial, as regulations of this magnitude have not yet been widely implemented; the case for limitation therefore rests largely on studies demonstrating the harmful effects of hate speech.
In a growing digital age, where social media use is part of daily life for adolescents, young adults, and even middle-aged individuals, chronic exposure to hate speech such as Islamophobic rhetoric cannot be tolerated. The longer online sites and social media platforms delay addressing such sentiments, the more widespread and normalized they will become, and the more detrimental the effects will be on affected individuals' mental health. With regard to opponents' concerns about compromising the First Amendment, the First Amendment cannot be applied perfectly to the digital age, which allows for unprecedented and unanticipated reach of communication across borders, continents, and time, as posts can be viewed and interpreted for as long as they remain undeleted (Beausoleil 2127). Restrictions on the right to free speech are warranted in this case, where the mental health of countless targeted individuals on a global scale is at stake. To limit the likelihood that these companies abuse their extended powers of speech limitation, restrictions should be placed on the companies' own extent of power as well (i.e., restrictions on the restrictions). Rather than immediately deleting all posts and comments containing hateful rhetoric (which may be impractical), social media platforms should aim to disband or deactivate groups, chat rooms, and accounts specifically devoted to, or frequently posting, Islamophobic and other hateful rhetoric. On particular posts where the comment section becomes overwhelmingly belligerent and hate-fueled, platforms should either delete the post, delete the inflammatory comments, or disable the comment section entirely. Lastly, these platforms should issue public statements against hate speech as Twitter did, include such prohibitions explicitly in their terms and conditions of use, and send automated warnings to users who repeatedly violate conduct rules, suspending their accounts if hateful activity continues.
Ideally, the extent to which media companies can regulate inflammatory speech should be overseen by the federal government. However, complications may arise over matters of jurisdiction: for example, the US government may have limited say in regulating content posted on the social media platform TikTok, as its parent company was founded in China. Thus, for the time being, regulations should remain on a company-by-company basis. In the short run, consumer use and feedback can be expected to tell companies how effective and acceptable their policies are.
Though many praise the advent of cyberspace and the beginning of the digital era as a way of bringing the world closer together, with connections never known before, it is difficult to fathom how connected we really are amidst the divisive and discriminatory rhetoric so often perpetuated on those very same platforms. Hate speech takes several different forms, including anti-Semitism, racism, homophobia, gender discrimination, and prejudice against disabled individuals. As a Muslim woman, I have come to realize, through the recent increase in Islamophobic sentiment on social media, how pervasive its effects on young Muslims' mental health are. Therefore, I strongly encourage social media platforms to limit hateful speech and promote civil and constructive dialogue instead, using the methods outlined above, even if it means a slight compromise on First Amendment rights. By merely limiting and not completely eradicating hate speech, the extent of social media companies' power is kept in check, and the potential societal benefits of exposure to antagonistic speech mentioned previously may still be realized. Taking actions such as deleting the Instagram post about France with its barrage of inflammatory comments would be a step toward the greater coexistence that the Muslim high school graduate's speech earnestly called for, and toward the benefits of global connection that the digital era originally promised.
Works Cited
Beausoleil, Lauren. “Free, Hateful, and Posted: Rethinking First Amendment Protection of Hate Speech in a Social Media World.” Boston College Law Review, vol. 60, no. 7, 2019, pp. 2101–2144.
Hansen, Helena, et al. “Alleviating the Mental Health Burden of Structural Discrimination and Hate Crimes: The Role of Psychiatrists.” The American Journal of Psychiatry, vol. 175, no. 10, 2018, pp. 929–933, doi:10.1176/appi.ajp.2018.17080891.
Harriman, Nigel, et al. “Youth Exposure to Hate in the Online Space: An Exploratory Analysis.” International Journal of Environmental Research and Public Health, vol. 17, no. 22, 2020, article 8531, doi:10.3390/ijerph17228531.
Hunt, Brianna, et al. “The Muslimah Project: A Collaborative Inquiry into Discrimination and Muslim Women’s Mental Health in a Canadian Context.” American Journal of Community Psychology, vol. 66, no. 3-4, 2020, pp. 358–369, doi:10.1002/ajcp.12450.
Newman, Stephen L. “Finding the Harm in Hate Speech: An Argument Against Censorship.” Canadian Journal of Political Science, vol. 50, no. 3, 2017, pp. 679–697, doi:10.1017/S0008423916001219.
“Updating Our Rules Against Hateful Conduct.” Twitter.com, accessed 26 Sept. 2021.