Building a community to counter influence operations: Four questions for Alicia Wanless


Alicia Wanless is the director of the Partnership for Countering Influence Operations at the Carnegie Endowment for International Peace, a grantee of our Cyber Initiative.

Influence operations—coordinated efforts, often carried out on social media platforms, to sway public opinion and interfere in elections—have played out in democracies around the world in recent years and represent a complex challenge to society. Countering them requires responses from sectors including government, nonprofits, academia, and the social platforms themselves. The goal of the Partnership, in its own words, is “to grow this community and equip it to fight influence operations worldwide.”

We asked Alicia about the work of the Partnership on this critically important issue.

Could you tell us a little bit about your background? I recall working together several years ago, just after the Russian disinformation scandal broke, when you were still working on your Ph.D. What led you to where you are now, directing the Partnership for Countering Influence Operations?

A few years on from our conversations about what became the Partnership for Countering Influence Operations, I have nearly finished that thesis, and the Partnership works with a growing community—now including 26 official advisors and partners. As far as my path, I’ve been fascinated by propaganda since I was a kid. Growing up in the long shadow of the World Wars, I wanted to understand how some people could be encouraged to lay down their lives, while others were so filled with hate that they would do the unspeakable.

This interest led me to research topics like nationalism, propaganda, and language engineering as an undergraduate, and later, in the 2010s, to start analyzing how influence operations were changing in the digital age. That research, which covered political campaigns and alternative media outlets, led me to work with militaries, major tech platforms, and civil society organizations—and ultimately to pursue a Ph.D. in War Studies at King’s College London.

In working across academia, industry, government, and civil society, and in providing training for journalists, it became increasingly apparent that trust, cooperation, and understanding can be low among these communities. The Partnership emerged to try to bridge disparate stakeholders who are key to countering influence operations.

The disinformation landscape surrounding the 2020 elections was so different from what we saw in 2016, with domestic, verified users supplanting bots and fake accounts, and new narratives focusing on both Covid and electoral fraud. The actors, behaviors, and content keep changing. How do you see the disinformation landscape evolving in the future?

I’m not sure the information environment is changing all that quickly; it is more that our focus on what happens within it keeps shifting. When I researched the 2016 U.S. primary election, it was apparent that domestic actors were likely far more active and prevalent than any foreign actors, and this was later confirmed in Benkler, Faris, and Roberts’ Network Propaganda. The disclosure that Russian actors were actively running influence operations made for more shocking news, whereas disentangling the complex web of domestic actors can be difficult and highly politicized. The role of domestic actors engaging in influence operations has certainly come to the fore in the wake of the January 6 attack on the U.S. Capitol.

One major challenge in understanding the information environment is that research tends to focus on threat actors, their tactics, and the content they produce, often in the form of case studies—interesting and informative, but fairly narrow in scope. Unless that work is brought together in larger meta-studies, it leaves us focused on the last bad thing that happened, which is not great for being strategic about addressing the problem. To that end, I think we need to move toward a more systemic understanding of disinformation and influence operations, looking at the wider information environment in which these things occur to find patterns that might offer warning of where the next problem is emerging. That would allow us to intervene sooner.

I appreciated your recent report on Platform Interventions, which found that in 2020 alone the platforms rolled out more than 60 interventions—primarily redirecting users to verified sites like the World Health Organization or voter information centers, or labeling content. What, if anything, do we know about the efficacy of these interventions?

There is very little publicly available information on the efficacy of those countermeasures. It’s possible that the platforms are conducting measurement and evaluation on them, but that analysis is seldom released alongside the announcement of the intervention itself. This is a major gap in addressing influence operations. In a forthcoming meta-study that Princeton University’s Empirical Studies of Conflict Project is conducting for us, we find that the bulk of academic research on countermeasures focuses on fact-checking efforts. There also tends to be an over-reliance on survey data to understand whether interventions work, which depends on self-reporting. This gap in knowledge presents a serious challenge to fostering evidence-based policy solutions to counter influence operations.

When it comes to addressing disinformation, I continue to believe that data access and transparency are essential prerequisites before we can get much else done—we must know what’s happening on social media platforms in order to better hold them accountable. You take this one step further with your proposal for a multi-stakeholder research and development center (MRDC). Could you explain how MRDCs operate, and why you think this would improve on earlier efforts like, for example, Social Science One (where, full disclosure, I played an early role)?

I agree, and part of the problem is that outside the social media platforms it can be a challenge for researchers (and others) to even know what data might be available. And for researchers who take the plunge and go inside to help the platforms, it can take months just to understand how they operate and what is possible in a research project, to say nothing of what it might take to convince leadership to support it.

An additional issue is that when there is collaboration between industry and external researchers, it tends to be on a per-project basis, which makes it challenging to build up institutional knowledge and experience of how the platforms work as organizations. To better understand influence operations and related countermeasures, researchers need more than one-time access to data; they need to regularly collect and update quantitative data to facilitate hypothesis testing and design intervention strategies.

An MRDC takes inspiration from U.S. government–sponsored entities like the RAND Corporation, which has produced trusted, high-quality research on public policy issues for decades—often on projects that require highly classified information to be handled responsibly. With an MRDC, online platforms would take the place of the U.S. government, providing money, data, and some input on research priorities—but not direct control. Essentially, an MRDC provides an independent venue where industry and external researchers can come together for a sustained period to collaborate within a common structure. It is more than a vehicle for data sharing: it is a bridge organization that can vet researchers, address contracting issues, and sustain longer-term research projects.

Coming back to Social Science One, it aimed to address issues of data sharing, but it suffered without institutional resources to help reduce operational frictions and offer opportunities for collaboration. Privacy concerns hampered the release of a dataset of URLs that were “shared publicly more than 100 times between January 2017 and August 2019.” In that case, Facebook applied differential privacy to add noise to the dataset, arguably making it less useful to vetted researchers. Indeed, some critics argued that third-party companies had better access to Facebook data than these researchers did. But the protections an MRDC could put in place for managing data securely, together with its vetting of potential research staff, could help overcome some of these limitations. There is clearly a need for different models—an MRDC is not the only solution, but it has a lot of attractive features.
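As an aside for readers unfamiliar with the technique mentioned above, the sketch below illustrates the Laplace mechanism, a common way differential privacy is applied to counts. The URLs, share counts, and parameter values are hypothetical, and this is not Facebook’s actual pipeline; it simply shows why calibrated noise barely disturbs large aggregate counts but can swamp the small subgroup counts researchers often need.

```python
import numpy as np

# Hypothetical share counts: one large aggregate and several small
# demographic-subgroup breakdowns for the same URL. Not real data.
counts = {
    "all users":            48_210,
    "ages 18-24, region A":     37,
    "ages 65+,  region B":      12,
    "ages 25-34, region C":      5,
}

epsilon = 0.5        # privacy budget: smaller epsilon means stronger privacy, more noise
sensitivity = 1.0    # one user can change any single count by at most 1

rng = np.random.default_rng(seed=7)

for label, true_count in counts.items():
    # Laplace mechanism: add noise drawn from Laplace(0, sensitivity / epsilon).
    noisy = true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    rel_error = abs(noisy - true_count) / true_count
    print(f"{label:<22} true={true_count:>6}  released={noisy:>9.1f}  rel. error={rel_error:6.1%}")
```

Under these assumed settings, the released aggregate is off by a fraction of a percent, while the smallest subgroup counts can shift by a large share of their true value, which is the utility loss vetted researchers pointed to.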

What sets an MRDC apart from some of the initiatives that have been tried is that it could be truly multi-stakeholder. Increasingly, I see a bridge organization like this as having the ultimate aim of informing evidence-based policy development for countering influence operations. Achieving this requires not just data from companies and collaboration from academic researchers, but also input from policymakers to help ensure that research projects are designed with policy implications in mind. For it to work, an MRDC must be independent and have sustained operational funding; with those things and buy-in from the right stakeholders, it could be a critical tool in effectively countering influence operations.
