Emotional Labor Offsetting: Unpacking the Racial Capitalist Fauxtomation Behind Algorithmic Content Moderation

July 15, 2024

This article is written by Kat Zhou (she/her), who recently completed the CFI MSt program. It is an adapted excerpt from the dissertation she submitted for that program.

"Xyza Bacani / Redux." Alt text: Workers in a business process outsourcing center in the Philippines, where they work on a variety of tasks including labeling data for algorithmic content moderation models.

Workers in a business process outsourcing center in the Philippines, where they work on a variety of tasks, including labelling data for algorithmic content moderation models. Credit: Xyza Bacani / Redux

In October of 2022, I experienced what it was like to get swept up in a misinformation maelstrom on social media. I had tweeted about my own encounters with racism as an Asian-American woman, not expecting my thread to go viral. While my thread was initially met with overwhelming support, the solidarity was soon overshadowed. As my words gained traction, an influx of hate speech proliferated across multiple social media platforms in response. Reflecting on the entire ordeal, what truly stood out to me was the emotional trauma I felt due to the inadequacy of these platforms at curtailing blatant disinformation. Many of the posts that were removed came down only because concerned friends, family members, and netizens manually reported them. It was through my own terrifying encounter with the insufficiencies of online content moderation that I began to wonder about its operationalization. What safeguards were set in place to moderate problematic content?

Content moderation, defined as the practice of screening user-generated content posted to Internet sites and other online outlets for appropriateness, is a laborious task that involves continuously watching and flagging images and videos depicting self-harm, death, sexual violence, racism, and other problematic visuals. It can be an incredibly isolating, underpaid, and traumatic job, with content moderators reporting increased depression from the work. Surveying the available reportage on the topic, I found that while automated processes did exist, humans were still heavily involved, and their jobs as moderators were often traumatic. In 2018, Selena Scola, a white American content moderator based in California, sued Facebook for exposure to psychological harm during her employment as a content moderator. Facebook settled the case for $52 million. Four years later, Facebook (now called Meta) is being sued once more by another content moderator. This time, the lawsuit comes from Daniel Motaung, a Black, South African content moderator employed in Kenya. Motaung is also suing Sama, the contractor for Meta at which he was directly employed.

The years between these two lawsuits were ripe with new developments in content moderation. Social media platforms saw an international proliferation of user-generated content (UGC), catalyzing a shift towards algorithmic content moderation. In an effort to cut costs, scale for growing online communities, and reduce their reliance on human content moderation, many companies pivoted away from purely manual content moderation and began incorporating algorithmic content moderation. Algorithmic content moderation is defined as systems that first classify user-generated content and then decide, based on set standards, whether the content ought to remain. This foray into automated alternatives for content moderation would primarily manifest in two ways: companies would either develop and train their own in-house algorithmic tools, or contract third-party companies to handle algorithmic content moderation. Third-party, intermediary, artificial intelligence (AI) companies that offer algorithmic content moderation have proliferated during the last few years.
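To make the "classify, then decide" pattern above concrete, here is a minimal sketch in Python. Everything in it, including the function names, categories, and thresholds, is an illustrative assumption rather than the actual pipeline of any platform or vendor; in production, the first stage would be a trained classifier rather than a keyword check, and borderline scores are typically routed to the human moderators this article is concerned with.

```python
# Minimal sketch of the two-stage "classify, then decide" pattern.
# All names and thresholds are illustrative assumptions, not any real system.

from dataclasses import dataclass

@dataclass
class ModerationDecision:
    remove: bool
    needs_human_review: bool
    scores: dict

# Thresholds standing in for a platform's "set standards" (assumed values).
REMOVE_THRESHOLD = 0.9
REVIEW_THRESHOLD = 0.5

def classify(post_text: str) -> dict:
    """Stage 1: assign per-category harm scores.
    A trivial keyword heuristic stands in for a trained model."""
    flagged_terms = {"hate": "hate_speech", "gore": "graphic_violence"}
    scores = {"hate_speech": 0.0, "graphic_violence": 0.0}
    for term, category in flagged_terms.items():
        if term in post_text.lower():
            scores[category] = 0.95
    return scores

def decide(scores: dict) -> ModerationDecision:
    """Stage 2: apply the platform's standards to the scores."""
    top = max(scores.values())
    return ModerationDecision(
        remove=top >= REMOVE_THRESHOLD,
        needs_human_review=REVIEW_THRESHOLD <= top < REMOVE_THRESHOLD,
        scores=scores,
    )

if __name__ == "__main__":
    print(decide(classify("an example post containing hate")))
```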

While these algorithmic content moderation companies tout their ability to replace human content moderators with AI, their claims often obscure the amount of human labour that is still needed to train their models. By constructing such fantastical representations of the data labeller roles they outsource to workers, these companies hide the trauma of performing commercialized care work for social media platforms. Further, they reinforce, via misdirection, the supposed inevitability of an automated future for moderation. Astra Taylor has described this type of obfuscation as fauxtomation: the gap between the corporate marketing of automated tools and the realities of what those tools can accomplish.

This erasure is dangerous, especially when one considers the psychologically traumatic nature of the work itself. Increasingly, moderating content and labelling problematic UGC have been classified as forms of emotional labour, a term coined by Arlie Hochschild that originally encapsulated the regulation of one’s own emotions in order to maintain a particular emotional state for others. Initially used to describe face-to-face roles (such as a cashier at a grocery store), the term has gradually expanded and shifted, gaining traction in industries beyond the ones mentioned in Hochschild’s writing. The psychological side effects of this type of digital emotional labour cannot be ignored. When this work is paired with stringent and unrelenting quotas to fulfil on the job, it is no surprise that burnout and vicarious trauma (a form of trauma whose symptoms resemble post-traumatic stress disorder) flourish amongst content moderators and data labellers.

Furthermore, there is a spatial distinction between where these algorithmic content moderation companies are headquartered and where they recruit data labellers and content moderators. These companies are primarily headquartered in the Global North, while many of the data labellers they contract are sourced from countries in the Global South, such as the Philippines and Kenya (see note below). This racialized and spatial power imbalance echoes historical projects of imperial exploitation of material resources and human labour. Racial capitalism, the intersection between our systems of exploitation and our societal constructions of race, provides a helpful lens through which we can trace the flow of care work that is provided and received. It is via the logic of racial capitalism that these Global North companies glorify and legitimize their offsetting of emotional labour onto underpaid workers in the Global South.

Problematizing the fauxtomation coming from Global North technology companies is crucial for illuminating the tensions and contradictions behind the phenomenon of these companies offsetting the emotional labour of content moderation onto workers in the Global South. I did not realize how much I took content moderation for granted until my own traumatic encounter with abuse on the Internet. While my experience as a victim of digital harassment was horrible, I certainly do not have to consume hours of the most violent content imaginable on a daily basis. One wonders: what can we do to improve this process, both for end consumers like myself and for the content moderators who protect us?

Content moderation is not the only way to mitigate the overwhelming deluge of UGC that exists today on the Internet. Bemoaning the inclinations of our capitalist marketplace, Sarah T. Roberts has observed that “one obvious solution might seem to be to limit the amount of user-generated content being solicited by social media platforms…[but] the content is just too valuable a commodity to the platforms.” Thus, content moderation work remains a necessary evil. Try as these companies might to market the capabilities of their AI programs to take on the care work of content moderation, as the technology currently stands, there will always be a dependence on human emotional labour. It is this dependence on human emotional labour that provided the starting point for this dissertation.

If the premise remains true that content moderation is absolutely necessary, what can we collectively do to confront the discourse employed by Global North corporations and mitigate the exploitation of workers in the Global South? Although I do not have a panacea, I recognize there are many places from which to draw inspiration. One particularly galvanizing event is currently unfolding in Nairobi. On May 1, 2023, TIME reported that over 150 workers in the Kenyan capital established the African Content Moderators Union, setting a historic precedent for tech workers in the Global South. Four days later, a group of content moderators in Nairobi led a passionate protest outside the Sama office, chanting for Wendy Gonzales, the CEO of Sama, to meet them for discussions. In a video of the protest posted to Twitter by Siasa Place, a local NGO, workers can be heard demanding their money. The caption accompanying the video is moving: “We are not machines, we are human beings.” In an industry that enforces the instrumentalization of workers in the Global South, this eight-word declaration underscores the needed resistance to the dehumanizing discourse issued by algorithmic content moderation companies in the Global North.

Unpaid workers protest outside the Sama offices in Nairobi. Credit: @CONTMODERATORAF.

Note on language use:

I use the terms “Global North” and “Global South” while acknowledging that they are imperfect conceptual apparatuses for describing not only the complex geopolitical webs of social media companies but also the locales where content moderators work to benefit said companies (Levander & Mignolo, 2011). For example, while China might have qualified as a “Global South” country throughout most of the 20th century, does its current accumulation of capital and technological advancement disqualify it from such an identifier? Chinese companies such as ByteDance, the owner of TikTok, depend on content moderation efforts sourced from regions in Latin America and Africa. However, because, to my knowledge, no terminology exists that can sufficiently encompass both the nations that churn out massive technology companies and the nations to which content moderation is outsourced, I will continue to use the terms “Global North” and “Global South,” albeit with the caveat that they are imperfect constructs.

About the Author:

Kat (she/her) is the creator of the <Design Ethically> project, which started out as a framework for applying ethics to the design process and has since grown into a toolkit of speculative activities that help teams forecast the consequences of their products. Through her work with <Design Ethically>, she has spoken at events hosted by the European Parliament (2022) and the US Federal Trade Commission (2021), as well as an assortment of tech conferences. Kat has been quoted in the BBC, WIRED, Fast Company, Protocol, and Tech Policy Press. Outside of <Design Ethically>, Kat has worked as a designer in industry for years. Following her MSt at CFI, she will be commencing a Doctor of Philosophy at the Oxford Internet Institute, University of Oxford, this fall.