
COVID-19 Has Shifted Social Media Content Moderation From Humans To AI

AI is good at flagging things like spam. But it can't detect the same nuances in posts that humans can, which can lead to unnecessary takedowns.


It’s no secret social media companies have struggled with moderating bad content for years, and experts say COVID-19 adds a new layer to the problem as more of the content moderation burden has shifted from humans to artificial intelligence.

“Content moderation was stepped up across the board during the coronavirus period. A lot of material was taken down that was pushing false preventative steps, false cures. It’s such a vital part — without which their companies just wouldn’t work,” said Paul Barrett, deputy director of the New York University Stern Center for Business and Human Rights and author of the new report "Who Moderates the Social Media Giants? A Call to End Outsourcing."

Barrett’s report details problems with the widespread practice of outsourcing content moderation jobs to third-party contractors. It also looks at how COVID-19 has impacted those operations, as companies don’t always allow moderators to work from home for security or privacy reasons.

“The three big platforms — Facebook, YouTube and Twitter — announced publicly that because their outsourced moderators were going to be basically sent home, the companies were going to shift more of the moderation burden over to the automated systems. They said upfront that we’re likely to have more false positives, and the reason is because the technology is not as refined as they’d like it to be.”

Barrett noted that prior to COVID-19, artificial intelligence was responsible for flagging a "significant proportion" of bad material on platforms and was particularly good at identifying things like spam or nudity. But as AI began reviewing a greater share of posts, the systems showed they judge content with less nuance than humans do.

“There was an example with Facebook involving taking down posts by people who were voluntarily stitching masks together for first responders and donating them because the AI had been so calibrated and fine-tuned to pick out profiteering and commercialization of products related to the coronavirus that it snapped up this completely innocent set of posts.”

And this is all occurring as social media giants see record-breaking traffic on their platforms. In his study, Barrett found that every day, 3 million Facebook posts are flagged for review by 15,000 Facebook content moderators. The ratio of moderators to users is 1 to 160,000. 

“If you have a volume of that nature, those humans, those people are going to have an enormous burden of making decisions on hundreds of discrete items each work day.”
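
Those figures line up with Barrett’s point about workload. As a rough back-of-the-envelope check (a minimal sketch; the roughly 2.4 billion figure for Facebook’s user base at the time is an assumption for illustration, not a number from the report):

```python
# Back-of-the-envelope check on the moderation workload figures cited above.
# The ~2.4 billion user count is an assumption (Facebook's approximate user
# base at the time), not a figure taken from Barrett's report.

flagged_posts_per_day = 3_000_000   # posts flagged for review daily, per the report
moderators = 15_000                 # Facebook content moderators, per the report
assumed_users = 2_400_000_000       # assumed user base, for illustration only

posts_per_moderator_per_day = flagged_posts_per_day / moderators
users_per_moderator = assumed_users / moderators

print(f"Posts per moderator per day: {posts_per_moderator_per_day:.0f}")  # 200
print(f"Users per moderator: {users_per_moderator:,.0f}")                 # 160,000
```

Two hundred flagged posts per moderator per work day squares with Barrett’s description of "hundreds of discrete items," and the assumed user base reproduces the 1-to-160,000 ratio cited in the report.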

It’s unclear how long AI will keep shouldering this added content burden, as Facebook has only begun to slowly reopen some of its 20-plus moderation centers. Barrett says that moving forward, returning employees and the work of content moderation itself should be treated as more central to the company.

“The stature and status of the people who do it should be higher,” said Barrett. “To use Facebook as an example, they should be getting Facebook-scale compensation and crucial benefits.”