Criminal Liability

International Criminal Liability in the Age of Social Media: Facebook's Role in Myanmar

By: Isabella Banks, Research Associate, PILPG-NL

Criminal liability in international law differs from that of most national legal systems in that it extends to those physically distant from the crime. International law’s expanded notions of criminal liability and commission are what have made it possible for justice institutions – first the Nuremberg International Military Tribunal (IMT) in 1945 and now the International Criminal Court (ICC) – to hold accountable high-level perpetrators who order, plan, coordinate, or facilitate mass atrocities from afar.

A question that international legal authorities have largely left unanswered is how this expanded notion of criminal liability might be applied in the age of global online networks and, in particular, social media. There is mounting evidence that, in addition to helping us stay connected and “bring the world closer together,” social media platforms are being used to proliferate ideas that result in real-world violence.

A recent study conducted in Germany found that anti-refugee hate speech on Facebook predicted violent crimes against refugees in municipalities with above average usage of the platform. The researchers took advantage of sporadic internet outages to test the causality of the relationship, and confirmed that when “internet access went down in an area with high Facebook use, attacks on refugees dropped significantly.” Other research suggests that this may be because posts that trigger strong, negative emotions are favored by the platform’s newsfeed algorithm, which aims to maximize user engagement.

Facebook’s newsfeed algorithm is particularly vulnerable to misuse and manipulation in transitional countries like Myanmar, where democratic institutions are weak, social trust is low, and Facebook is the only website that many people access. Over the past year, Facebook has been criticized repeatedly for exacerbating the ongoing genocide against the country’s Rohingya ethnic minority by failing to contain the spread of hate speech and disinformation on its platform.

The New York Times reported disturbing details of a systematic campaign led by Myanmar’s military elite to use Facebook’s broad reach to incite violence against the Rohingya. Through fake accounts and celebrity pages, the military disseminated anti-Rohingya propaganda, fake news, and inflammatory photos to millions of followers. As many as 700 people were involved in spreading this content to users across the country.

The military’s Facebook campaign was launched five years ago – approximately the same time that human rights violations against the Rohingya are reported to have worsened. A recent report published by the Public International Law & Policy Group (PILPG) documenting atrocity crimes committed in Myanmar’s Rakhine State found that while many patterns of abuse stretch back for decades, “the period of the most consistent persecution and escalating violence against the Rohingya began in 2012 and steadily intensified through the major attacks that began in August 2017.”

By the time Facebook announced its decision to remove 20 accounts linked to military and spiritual leaders that had “enabled serious human rights abuses” in August 2018, a Reuters investigation had documented over 100,000 anti-Rohingya Facebook posts and 700,000 Rohingya had fled across the border to Bangladesh. 

Inside Myanmar, Facebook’s announcement was met with widespread outrage and largely overshadowed the news of a United Nations report highlighting the massacre of at least 10,000 Rohingya, which had been released the day before. One government spokesman went so far as to say, “We have called Facebook to ask why they have done this…we worry that this action will have an impact on national reconciliation.” 

For others, Facebook’s response came too late. As early as 2013, Myanmar experts began meeting with Facebook to warn them that activity on the platform was fueling attacks on the Rohingya. In a presentation at the company’s headquarters, one Myanmar-based entrepreneur likened Facebook’s role in Myanmar to that of extremist anti-Tutsi radio broadcasts that propelled the Rwandan genocide. 

In March 2018, the chairman of the UN Independent International Fact-Finding Mission on Myanmar stated that social media had “substantively contributed to the level of acrimony and dissension and conflict” in Myanmar. The following month, civil society groups published an open letter criticizing Facebook CEO Mark Zuckerberg’s characterization of Facebook’s hate speech “detection systems” as effective in the aftermath of an incident that resulted in three violent attacks. The letter highlighted Facebook’s overreliance on third parties to flag dangerous content and its failure to implement a mechanism for emergency escalation. 

The ICC’s recent decision to open a preliminary examination concerning the mass deportation of the Rohingya to Bangladesh represents a potential opportunity to prosecute perpetrators of international crimes in Myanmar. This raises important questions: How and to what extent should top Facebook officials be held responsible for their avoidance and mismanagement of an emerging crisis? Do the company’s stated concerns about infringing upon the free speech of its users justify its inaction? On what legal basis might the military leaders who exploited Facebook to incite violence against the Rohingya be prosecuted?

In a 2003 case before the International Criminal Tribunal for Rwanda (ICTR), Prosecutor v. Nahimana, Barayagwiza and Ngeze, three founders of extremist media outlets were convicted of direct and public incitement to commit genocide for orchestrating a propaganda campaign intended to desensitize the Hutu population and encourage them to kill Tutsis. In 2007, the Appeals Chamber reversed several aspects of the Trial Chamber’s judgement on the grounds that: 1) it was inappropriate to apply international human rights law on hate speech to genocide crimes; and 2) direct and public incitement to commit genocide was not a continuous crime. In so doing, the Appeals Chamber drew a clear distinction between hate speech and international crimes, and made it difficult to hold individuals who publicly foment hatred over a long period of time accountable for violence that may result from their actions.

The outcome of the ICTR’s so-called “Media case” suggests that prosecuting the military leaders responsible for the anti-Rohingya Facebook campaign will be a significant legal challenge in itself. Still, the question of Facebook’s responsibility toward the countries in which it operates – particularly those in transition – remains open.

Zuckerberg himself has acknowledged: “In a lot of ways, Facebook is more like a government than a traditional company. We have this large community of people, and more than other technology companies we’re really setting policies.” Given that international law exists in large part to ensure that governments fulfill their “responsibility to protect” their own people, it seems likely that international legal authorities will need to further adapt criminal liability to account for the outsized influence of social media companies.