Facebook’s failure to tackle hate speech online has real world consequences
An August 14th article in the Wall Street Journal revealed that top Facebook policy executives in India failed to remove hate speech promoted by Hindu nationalists despite the content being “flagged internally for promoting violence.” Ankhi Das, Facebook’s top policy executive in India, ignored the call to remove the posts, citing concerns that such action would “damage the company’s business prospects in the country.”
The hate speech posts in question include those made by Bharatiya Janata Party (BJP) official T. Raja Singh, who has a history of anti-Muslim rhetoric and of advocating violence. Singh has previously called for Rohingya Muslims to be shot, called Muslims traitors, and threatened to demolish mosques. He has also promoted conspiracy theories online, alleging that Indian Muslims were waging an “economic jihad,” and joined a Twitter campaign calling for a boycott of halal products.
The WSJ report is just the latest to highlight the role of Facebook and other social media platforms in promoting dangerous and dehumanizing speech, especially rhetoric that comes from state authorities. While Facebook presents itself as an apolitical organization, its executives have supported right-wing leaders (Sheryl Sandberg, Facebook’s COO, has stated she admires Indian Prime Minister Narendra Modi). In the United States, Facebook’s founder Mark Zuckerberg has grown close to current President Donald Trump, even dining with him at the White House. A June 2020 article in the Washington Post noted that the social media giant had altered its rules to allow Trump to continue promoting misinformation and threatening messages. The article notes that in recent years the company had “constrained its efforts against false and misleading news [and] adopted a policy explicitly allowing politicians to lie.”
As a private corporation, Facebook’s ultimate aim is to make a profit. The company is aware of the business opportunities that lie in India with Sandberg noting, “India is a very promising market with very active Facebook users. It should become Facebook’s largest market…It’s an endless opportunity to grow with very active Facebook users.” The response from Das to the internal flagging of hate speech by Hindu nationalist politicians exemplifies how Facebook prioritizes profit over ethics, despite knowing that discriminatory rhetoric online has contributed to actual violence on the streets.
India is increasingly experiencing instability and violence due to the mainstreaming of Hindu nationalism under the BJP. Much of this anti-Muslim vitriol is spread on Facebook and WhatsApp (owned by Facebook), where conspiracy theories accuse Muslims of spreading COVID-19 and engaging in “Love Jihad.” It is not simply ordinary citizens identifying as Hindu nationalists who spread this hate on these platforms; BJP politicians themselves elevate such views, giving mass exposure to menacing statements. The consequences of these dangerous and outright false accusations range from mob violence to murder.
Earlier this year, in February, New Delhi experienced its worst communal violence since the 1984 anti-Sikh massacre. The three-day pogrom came after BJP politician Kapil Mishra delivered a fiery speech giving license to Hindu vigilantes to target demonstrators who had been peacefully protesting against the discriminatory Citizenship Amendment Act. Mishra’s speech went viral online, and within hours mobs of Hindu men targeted Muslim neighborhoods in the city. The violence resulted in over 50 deaths, the majority of the victims being Muslims, and the destruction of Muslim homes and businesses in the area.
Misinformation online continued during the onset of the COVID-19 pandemic as BJP politicians and right-wing Hindu nationalist voices communalized the virus. The rhetoric on social media platforms included conspiracy theories claiming that Muslims were intentionally spreading the virus, and the hashtags “Coronajihad” and “Coronabombs” went viral on Twitter and Facebook. The US-based rights organization Equality Labs examined the use of these hashtags, monitored the associated hate speech, and found that “since March 28, tweets with the hashtag #CoronaJihad had appeared nearly 300,000 times, and had been potentially seen by 165 million people on Twitter.” This episode further criminalized and stigmatized Indian Muslims, adding to the already unsafe and dangerous environment of the last few years.
Hate speech on digital platforms is not restricted to any one country, as these posts can, and often do, go global. Those in power have used such dangerous anti-Muslim rhetoric to target Muslims in their own countries. In neighboring Myanmar, it contributed to genocide.
Between 2016 and 2017, the Myanmar military carried out violent operations in Rakhine State targeting Rohingya Muslim communities. Preceding the violence, there was a campaign of dehumanizing hate speech online against the Rohingya, including horrific rhetoric from Buddhist monks and Myanmar military officials. Soldiers went on a rampage, massacring Rohingya men, women, and children, raping women and girls, and burning down entire villages. This resulted in a mass exodus of over 700,000 Rohingya, who fled into neighboring Bangladesh.
In 2018, Facebook commissioned an independent investigation into its role in the Rohingya genocide, acknowledging that its platform was used to “foment division and incite offline violence.” An independent investigation by the United Nations came to the same conclusion, with the chairman of the U.N. Independent International Fact-Finding Mission on Myanmar stating that social media had played a “determining role” in Myanmar. Yet while Facebook admitted it played a part in fomenting hatred against Rohingya Muslims, it is now obstructing the genocide investigation by failing to hand over information that would help hold Myanmar accountable.
Facebook claims it is committed to doing better, but its record demonstrates otherwise. Giving preferential treatment to government officials who stoke hate against marginalized communities, and allowing racist, xenophobic, and Islamophobic content to remain on the platform, appears to be a recurring pattern for the social media giant. While the platform operates in a digital space, its failure to clamp down on hate speech is stoking and promoting real violence against the most vulnerable communities in our world.