Social media companies are taking steps to tamp down coronavirus misinformation – but they can do more

As we practice social distancing, our embrace of social media only gets tighter. The major social media platforms have emerged as critical purveyors of information, shaping the choices people make during the expanding pandemic. There’s also reason for worry: the World Health Organization is concerned about an “infodemic,” a glut of accurate and inaccurate information about COVID-19.

The social media companies have been pilloried in recent years for practicing “surveillance capitalism” and being a societal menace. The pandemic could be their moment of redemption. How are they rising to this challenge?

Surprisingly, Facebook, which had earned the reputation of being the least trusted tech company in recent years, has led with the strongest, most consistent actions during the unfolding COVID-19 crisis. Twitter and Google-owned YouTube have taken steps as well to stem the tide of misinformation. Yet, all three could do better.

As an economist who tracks digital technology’s use worldwide at The Fletcher School at Tufts University, I’ve identified three important ways to evaluate the companies’ responses to the pandemic. Are they informing while simultaneously curtailing misinformation? Are they enforcing responsible advertising policies? And are they providing helpful data to public health authorities without compromising privacy?

Tackling the infodemic

Social media companies can block, demote or elevate posts. According to Facebook, the average user sees only about 10% of the stories available in their News Feed; the platforms decide what users see by reordering how those stories appear. This means demoting and elevating posts can be as consequential as blocking them outright.
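To make the idea concrete, here is a minimal, purely illustrative sketch of how demoting flagged posts and elevating posts from authoritative sources might work in a ranked feed. The data structure, multipliers and scoring are assumptions made for illustration, not any platform’s actual ranking system.

```python
from dataclasses import dataclass

# Hypothetical post record; the fields are illustrative, not a real platform schema.
@dataclass
class Post:
    post_id: str
    engagement_score: float        # baseline ranking score (likes, shares, recency, etc.)
    flagged_by_fact_checkers: bool
    from_authoritative_source: bool

# Illustrative multipliers: flagged content sinks, authoritative content rises.
DEMOTION_FACTOR = 0.2
ELEVATION_FACTOR = 1.5

def rank_feed(posts: list[Post]) -> list[Post]:
    """Reorder posts so flagged items are demoted and authoritative items elevated."""
    def adjusted_score(post: Post) -> float:
        score = post.engagement_score
        if post.flagged_by_fact_checkers:
            score *= DEMOTION_FACTOR
        if post.from_authoritative_source:
            score *= ELEVATION_FACTOR
        return score
    return sorted(posts, key=adjusted_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("miracle-cure", 0.9, flagged_by_fact_checkers=True, from_authoritative_source=False),
        Post("who-guidance", 0.5, flagged_by_fact_checkers=False, from_authoritative_source=True),
        Post("vacation-photos", 0.6, flagged_by_fact_checkers=False, from_authoritative_source=False),
    ]
    for post in rank_feed(feed):
        print(post.post_id)   # who-guidance, vacation-photos, miracle-cure
```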

Blocking is the most difficult decision because it bumps up against free speech concerns. Facebook, in particular, has recently been criticized for its unwillingness to block false political ads. But Facebook has had the most clear-cut policy on COVID-19 misinformation. It relies on third-party fact-checkers and health authorities to flag problematic content, and it removes posts that fail their checks. It also blocks or restricts hashtags that spread misinformation on its sister platform, Instagram.

Twitter and YouTube have taken less decisive positions. Twitter says it has acted to protect against malicious behaviors. Del Harvey, Twitter’s vice president of trust and safety, told Axios that the company will “remove any pockets of smaller coordinated attempts to distort or inorganically influence the conversation.” YouTube removes videos that claim to prevent infection. However, neither company has a transparent blocking policy grounded in solid fact-checking.

The wave of misinformation on social media includes dubious preventatives and cures for COVID-19. Robert Patton/Flickr, CC BY-NC

While all three platforms are demoting problematic content and elevating content from authoritative sources, the absence of consistent fact-checking standards has created a gray area where misinformation can slip through, particularly on Twitter. Panic-inducing tweets prematurely claimed that New York was under lockdown, and bots and fake accounts have slipped rumors into the conversation.

Even the principle of deferring to authoritative sources can cause problems. For example, the widely read @realDonaldTrump account has tweeted misinformation. Influential figures who are not officially designated authoritative sources have also circulated misinformation. Elon Musk, chief executive of Tesla and SpaceX, tweeted a false assertion about the coronavirus to 32 million followers, and Twitter declined to remove his tweet. John McAfee, founder of the eponymous security software company, also tweeted a false assertion about the coronavirus. That tweet was removed, but not before it had been widely shared.

Harnessing influence for good

Besides blocking and re-ordering posts, the social media companies must also ask how people are experiencing their platforms and interpreting the information they encounter there. Social media platforms are meticulously designed to anticipate the user’s experience, hold their attention and influence actions. It’s essential that the companies apply similar techniques to influence positive behavior in response to COVID-19.

Consider, for each of the three platforms, an example of how ignoring the user experience can undercut efforts to encourage positive behavior.

For Facebook users, private messaging is, increasingly, a key source of social influence and information about the coronavirus. Because these private channels often bring together more trusted networks – family, friends, classmates – there is a greater risk that people will turn to them during anxious times and be susceptible to misinformation. Facebook-owned Messenger and WhatsApp – both closed platforms, in contrast to Twitter – are of particular concern, since the company’s ability to monitor content on them is still limited.

For Twitter, it’s essential to track “influencers,” or people with many followers. Content shared by these users has greater impact and ought to pass through additional filters.

YouTube has taken the approach of pairing misleading coronavirus content with a link to an alternative authoritative source, such as the Centers for Disease Control and Prevention or the World Health Organization. This juxtaposition can have the opposite of the intended effect. A video from a non-authoritative individual appears with the CDC or WHO logo beneath it, which could unintentionally give viewers the impression that those public health authorities have approved the video.

Responsible advertising

There is money to be made from ads offering products related to the outbreak. However, some of those ads are not in the public interest. Facebook set a standard by banning ads for medical face masks and Google followed suit, as did Twitter.

Social media companies are giving the CDC and WHO free advertising to promote coronavirus-related messages like this WHO Facebook post.
World Health Organization, CC BY-NC

All three companies have offered free ads to appropriate public health and nonprofit organizations. Facebook has offered unlimited ads to the WHO, Google has made a similar but less open-ended offer, and Twitter offers Ads for Good credits to fact-checking nonprofits and organizations that disseminate health information.

There have been some policy reversals. YouTube initially blocked ads meant to profit from content related to COVID-19, but then allowed some ads that follow the company’s guidelines.

Overall, the companies have responded to the crisis, but their ad policies vary, have shifted over time and have left loopholes: users could still see Google-served ads for face masks even after the company had officially banned them. Clearer industry-wide principles and firm policies can help keep businesses and people from exploiting the outbreak for commercial gain.

Data to track the outbreak

Social media can be a source of essential data for mapping the spread of the disease and managing it. The key is that the companies protect user privacy, recognize the limits of data analysis and not oversell it. Geographic information systems that build on data from social media and other sources have already become key to mapping the worldwide spread of COVID-19. Facebook is collaborating with researchers at Harvard and National Tsing Hua University in Taiwan by sharing data about people’s movements – stripped of identifying information – and high-resolution population density maps.

Search and location data on YouTube and its parent, Google, are invaluable trend-trackers. Google hasn’t yet offered its trends analyses for COVID-19 in any systematic way, perhaps out of reluctance stemming from the failure of an earlier effort, Google Flu Trends, which attempted to predict influenza activity from search data and badly misjudged the 2013 flu season.

Think with Google, the company’s current data analytics service for marketers, offers a powerful example of the insights that can be gleaned from Google’s data. Similar analyses could help with contact tracing and with monitoring social distancing compliance, provided they are done in ways that respect user privacy. For example, because users’ locations are often tagged along with their posts, aggregated data on where people have been and whom they have encountered could help determine whether people, overall or in a particular area, are complying with public health orders and guidelines.
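As a minimal sketch of the kind of aggregate compliance analysis described above – with entirely hypothetical data, regions and dates, not an actual platform export or API – the snippet below compares average travel distances against a pre-lockdown baseline for each region.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical, de-identified records: (region, day, kilometers traveled).
# Real analyses would rely on aggregated data shared by a platform, not raw user locations.
records = [
    ("region_a", "2020-03-01", 12.4),
    ("region_a", "2020-03-22", 3.1),
    ("region_b", "2020-03-01", 9.8),
    ("region_b", "2020-03-22", 8.7),
]

BASELINE_DAY = "2020-03-01"   # before stay-at-home guidance (illustrative date)
CURRENT_DAY = "2020-03-22"    # after guidance took effect (illustrative date)

def average_travel(day: str) -> dict:
    """Average distance traveled per region on a given day."""
    by_region = defaultdict(list)
    for region, record_day, km in records:
        if record_day == day:
            by_region[region].append(km)
    return {region: mean(distances) for region, distances in by_region.items()}

baseline = average_travel(BASELINE_DAY)
current = average_travel(CURRENT_DAY)

for region, baseline_km in baseline.items():
    change = (current.get(region, 0.0) - baseline_km) / baseline_km
    print(f"{region}: travel changed {change:+.0%} versus baseline")
```

A large drop in average travel suggests broad compliance; little or no drop flags areas where public health messaging may not be getting through.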

Moreover, data shared by companies – stripped of identifying information – could be used by independent researchers. For example, researchers could use Facebook-owned Instagram and the analytics tool CrowdTangle to correlate travelers’ movements to COVID-19 hotspots with user conversations and locate sources of transmission. Research teams I direct have been analyzing coronavirus-related Twitter hashtags to identify the primary sources of misinformation and to detect patterns in how it spreads.
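As a rough illustration of this kind of hashtag analysis – not the actual methodology used by my research teams – the sketch below tallies which linked domains are amplified most heavily in tweets carrying coronavirus-related hashtags. The sample tweets, fields and retweet counts are hypothetical; a real study would pull data from the Twitter API.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical sample of tweets carrying coronavirus-related hashtags.
tweets = [
    {"hashtags": ["#covid19"], "urls": ["https://www.who.int/news"], "retweets": 120},
    {"hashtags": ["#coronavirus"], "urls": ["https://example-rumor-site.com/cure"], "retweets": 950},
    {"hashtags": ["#covid19"], "urls": ["https://example-rumor-site.com/cure"], "retweets": 430},
]

def top_linked_domains(sample: list, n: int = 5) -> list:
    """Rank the domains most amplified, weighting each link by its tweet's retweet count."""
    amplification = Counter()
    for tweet in sample:
        for url in tweet["urls"]:
            domain = urlparse(url).netloc
            amplification[domain] += tweet["retweets"]
    return amplification.most_common(n)

print(top_linked_domains(tweets))
# [('example-rumor-site.com', 1380), ('www.who.int', 120)]
```

Domains that repeatedly top such a list, and that are not recognized health authorities, are candidates for closer fact-checking and review.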

The expanding footprint of the pandemic and its consequences are evolving quickly. To their credit, the social media companies have attempted to respond quickly as well. Yet, they can do more. This could be their time to rebuild trust with the public and with regulators, but the window to make the right choices is narrow. Their own futures and the futures of millions may depend on it.

Bhaskar Chakravorti, Dean of Global Business, The Fletcher School, Tufts University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
