From Myanmar to Michigan: Facebook and other platforms are being weaponised, but who is to blame? In the latest of our series “Alex goes deep”, Nebesky looks at just how responsible Mark Zuckerberg and the social media billionaires are for the rise of hate speech and disinformation on their platforms.
In my journey through Interland I crossed Reality River with flying colours and scaled Mindfulness Mountain with a very respectable score of 200-and-something. I knew which emails were phishing scams and learned not to share details of my school ball with strangers on the internet. In so many words, I entered the realm of Interland as a child and came out an educated digital citizen. What surprised me, however, was that although one can ford Reality River, secure one’s secret information in the Tower of Treasure, or learn about cyberbullying and its many impacts in the Kind Kingdom, there is no Extremism Estuary or Fake News Forest to discover and explore. While these are heavy topics for children, it is hardly as if they are unwarranted points of education, if only because a recent Netsafe survey of 1,000 respondents revealed that:
“There is a generational divide too with young people thinking older people are more likely to fall for fake news and vice versa – 72 percent of young people believe older people are likely to believe in fake news and 66 percent of people aged over 50 think young people will believe in fake news.”
Interland is a Google game on Netsafe.org.nz designed “to help parents and young people put their critical thinking skills to the test. The Interland River flows with fact and fiction and you need to use your best judgement to cross the rapids.”
That same survey also found that 80 percent of respondents had encountered fake news on social media sites, and Netsafe chief executive Martin Cocker is reported in the NZ Herald as saying that some respondents simply classified reports they disagreed with as fake news. Netsafe does link to a fake news educational site, yournewsbulletin.co.nz, targeted at adults and aimed at teaching tools to identify fake news wherever one encounters it.
At the end of July, the United States House Judiciary Committee convened a six-hour antitrust hearing during which the CEOs of Amazon, Facebook, Apple, and Google were grilled on their involvement in anticompetitive actions and alleged abuses of the power granted by these tech monopolies’ positions. Amid inquiries into underhanded acquisitions, it was made apparent that the reach and appeal of these tech giants is entirely inescapable, and in particular the capacity for social media sites like Facebook and Instagram to act as a curated and attention-based source of news.
The question, however, is this: if eight out of ten New Zealanders have encountered fake news on social media platforms, and 55 percent of adults in the US get their news from social media “often” or “sometimes” according to a 2019 Pew Research poll, to what degree are the platforms themselves responsible for the proliferation of the fake, extremist, violent, exploitative, or otherwise distasteful content that pervades their online communities?
At its most extreme, Facebook has found itself an integral tool in campaigns of genocide. While neither Mark Zuckerberg nor Facebook were active participants in, or in league with, those who committed acts of ethnic cleansing, the social media platform’s reticence to act with any haste in deleting Facebook accounts that were hijacked or established by the military in Myanmar saw ethnic tensions stoked and mass violence incited. In all, 24,000 of the nation’s Rohingya ethnic group were murdered by the Myanmar military and Buddhist nationalists, thousands of women and girls were raped, and mass arbitrary arrests and savage beatings were reported.
Facebook’s role in the genocide is a stark case study of the site’s permissive attitude towards information sharing and hate speech in a country that at the time had 18 million users of phones that come with Facebook pre-installed. Fake accounts made by the Myanmar military shared false stories of rapes committed by Rohingya men, stoked ethnic tensions, proliferated apocryphal plans for Rohingya extremist attacks, and attributed imaginary acts of violence to Rohingya groups. All this while Facebook had only four Burmese-speaking content moderators for the whole country. Posts slipped through a sloppy bureaucratic system that failed to remove doctored images or misattributed quotes, and failed to counter the spread of genocidal messaging. When Facebook finally did act, even they acknowledged:
“… we weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence. We agree that we can and should do more.”
Even now, Facebook has blocked a request by The Gambia to disclose posts and messages used by the Myanmar military, sought as evidence in the West African nation’s case in the International Court of Justice charging Myanmar with genocide against the Rohingya community. Citing restrictions on the sharing of private communications under United States law, Facebook has declined to participate in the investigation of an atrocity in which its services were instrumental.
The use of social media by malign interests doesn’t always take the form of a centralised conspiracy in a far-flung country by a violent and oppressive regime. Social media has also been employed to spread extremist ideology, notably in the form of Facebook’s “Boogaloo Boys” groups, collections of heavily armed political accelerationists who use memes and the reach of social media to lure new members to their cause. The Boogaloo Boys are a loose collection of anti-government, pro-Second Amendment, Hawaiian shirt-wearing militiamen in the United States agitating for an armed revolution, or a “Civil War 2: Electric Boogaloo”. They have appeared at anti-lockdown and Black Lives Matter protests, ostensibly to defend the rights of protesters. However, their decidedly violent extremist views have seen them co-opt many of those protests, treating heavy-handed and brutal police crowd control methods as steps toward the Boogaloo they so desperately hope to incite. Although the Boogaloo Boys attend civil rights protests and target their ire at law enforcement, it must be made clear that the movement is not uniformly anti-racist, nor even uniformly non-racist. Drawing members from a broad spectrum of gun rights and anti-authoritarian ideologies, there are multiple strains of Boogaloo Boy thought, some explicitly in support of Black Lives Matter and some outwardly racist, anti-Muslim, and far right. In an article for Bellingcat, conflict journalist Robert Evans and investigative journalist Jason Wilson write:
“If there is a single common thread that unites the galaxy of Boogaloo Facebook groups, it is a desire to fight it out with the government. More specifically, members envision violent confrontations with local police and the “alphabet bois” in federal law enforcement agencies.”
Though the movement was birthed on 4chan’s weapons message board /k/, mainstream social media, and Facebook in particular, have become key tools for the Boogaloo Boys, as for many other extremist groups associated with the alt-right, in spreading their ideology through memes. The terminology of the Boogaloo Boys is inherently comedic: “boogaloo”, the second Civil War, becomes “big igloo”, and from there “ice house”. Slogans like “vote from the rooftops” reference the 1992 LA riots, when armed Korean Americans took to the rooftops to defend their stores. It is not all rhetoric, either: in May and June of this year, law enforcement officers were murdered in two separate incidents at the hands of alleged Boogaloo Boys. The movement mirrors the rise of far-right organisations in Europe and America using humour to advance their political ideology. An Institute for Strategic Dialogue paper, “Mainstreaming Mussolini: How the Extreme Right Attempted to ‘Make Italy Great Again’ in the 2018 Italian Election”, notes Italian right-wing extremist groups employing their own meme content at around the same time as the Boogaloo Boys were migrating to mainstream social media platforms in 2018:
““Remember to spread normie propaganda”, read a post that linked to a Facebook page called ‘Battaglione Memetico’ (‘Memetic Warfare’) and had over 500 likes and followers on the day before the election. An Instagram account runs under the same name…Italian activists prioritised activity on Facebook and Instagram, which are more difficult to analyse.”
Facebook’s response to the organisation of radical Boogaloo Boys has had limited success, and the growth of the movement on social media is a direct result of Facebook’s own content promotion algorithms. For each Boogaloo-related Facebook or Instagram page one follows (and this is the case for all content, not just Boogaloo), the site recommends more related pages. In an effort to drive engagement and keep eyes on screens, content recommendation algorithms push users further and further towards content of the same type. Although Facebook updated its violence and extremism policies in May, allowing it to ban a number of Boogaloo pages, this proliferation of content through links to similar themes has left the movement as a whole little affected. In the words of Evans and Wilson:
“Our research suggests that this policy has done virtually nothing to curb either the growth of this movement or reduce the violence of its rhetoric. Every new Boogaloo page and group we found led us to new related pages and “liked” pages, each either organizing people for direct armed action or agitating them to anticipate violence.”
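The compounding dynamic Evans and Wilson describe can be sketched schematically. The toy script below is not Facebook’s actual system; the page names and “related pages” graph are entirely hypothetical, and it assumes only the simplest possible recommender: suggest every page related to something the user already follows.

```python
# Schematic illustration (not Facebook's actual code) of how a naive
# engagement-driven recommender compounds: following one page surfaces
# its related pages, and following those surfaces still more.

# Hypothetical toy graph: each page links to thematically similar pages.
RELATED = {
    "page_a": ["page_b", "page_c"],
    "page_b": ["page_c", "page_d"],
    "page_c": ["page_e"],
    "page_d": [],
    "page_e": ["page_f"],
    "page_f": [],
}

def recommend(followed):
    """Recommend every related page the user doesn't already follow."""
    recs = set()
    for page in followed:
        recs.update(p for p in RELATED.get(page, []) if p not in followed)
    return recs

def simulate(start, rounds):
    """Model a user who accepts every recommendation each round."""
    followed = {start}
    for _ in range(rounds):
        new = recommend(followed)
        if not new:
            break
        followed |= new
    return followed

# One initial follow snowballs into the entire related cluster.
print(sorted(simulate("page_a", rounds=10)))
```

Even this crude model shows the mechanism: a single follow pulls the user, page by page, into the whole connected cluster of similar content, which is why banning individual pages does little while the recommendation links remain.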
Just as in Myanmar, Facebook’s quest to connect people, when measured against its inability to effectively manage the spread of extremist ideas and speech, has seen the platform incubate extremist movements.
While Facebook serves as a mainstream platform for underground extremist groups, other social media platforms have fallen victim to extremist speech from more official sources. The President of the United States, Donald Trump, has found himself at the centre of multiple controversies over his sharing of hate speech, conspiracy theories, and needlessly inflammatory comments. In June, the President shared a video on Twitter to thank his supporters in Florida for a golf cart parade in his honour. In the video, one supporter can clearly be heard shouting “white power”. While the President claimed not to have heard the phrase, and later deleted the post, the sharing of such flagrantly hateful speech itself raises the question of platforms’ responsibility for the speech they are used to spread, especially when it is shared by someone in a position like the President’s. A federal appeals court ruled in 2019 that if the President has a Twitter account, which he does, and if he uses it to conduct official business, which he does, then to block a follower is a direct violation of the First Amendment of the United States Constitution, for the reason stated by one of the three judges who reviewed the decision, Judge Barrington Parker:
“… if the First Amendment means anything, it means that the best response to disfavored speech on matters of public concern is more speech, not less.”
That is to say, Americans have the right to debate matters of public concern and government business at the source, and if the President blocks them he is infringing on that right. However, Trump’s post, had it been shared by anyone not in such a position, would have resulted in a summary ban from Twitter under its content moderation policy. Instead, on the basis of Twitter’s newsworthiness policy, the post was left to circulate for hours. President Trump has a long and storied history of dog-whistling to the hard right and to racists, and in neither flagging nor removing the post, Twitter dressed its desire to drive engagement through outrage and debate in a thin cloak of public interest and newsworthiness. The implication is that the President sharing blatant white supremacist content is worthy of Americans’ attention precisely because it is the President sharing something that rightfully incites an aggressive response and rejection. In an effort to combat the spread of misinformation, Twitter has since added fact-check and content warning tags to certain tweets.
Of course, where all of these cases intersect is in the question of censorship. Social media platforms play it both ways: their adherence to free speech takes its cue from the First Amendment’s binding on the state, while their position as private entities shields them from any real First Amendment obligations. If Twitter did ban the President for his frequent off-colour posting it would engender a censorship debate that would almost certainly empower him and his supporters, even though it would be well within the rights of a private business to moderate whatever it likes on its own platform. On the other side of the coin, though any right-thinking individual would find many of the President’s tweets reprehensible, the content shared by the Boogaloo Boys appalling, and the genocide in Myanmar condemnable to the utmost, are we prepared to give the platforms on which they proliferate the authority to act as gatekeepers of truth over what is an inescapable source of news, debate, and information?