SIPA Magazine

Restoring Trust: Tackling the Challenges of Misinformation

By Aastha Uprety MPA ’21
Posted November 11, 2022
Illustration by Partners in Crime

From vaccine-related fearmongering on Facebook groups to advice from politicians that dubious medications could treat COVID-19, false and misleading information tore through society over the course of the pandemic. The news media was left to highlight the truth while keeping up with rapidly changing and occasionally conflicting scientific guidelines.

In spring 2020 Anya Schiffrin was teaching Global Media: Innovation and Economic Development, a course examining the challenges to sustaining a robust media infrastructure, including misinformation. She and her students had been following news about the virus, which would soon disrupt journalism and every other institution in society. “It was going to be a big mess, but we were ready for it,” says Schiffrin, director of SIPA’s Technology, Media, and Communications specialization.

On March 13, 2020, the United States declared a national emergency in response to the spread of the coronavirus. “I rewrote my syllabus overnight,” Schiffrin says.

“Misinformation” generally refers to false or out-of-context information that is presented as fact. “Disinformation,” a subcategory of misinformation, is deliberately spread with the intent to deceive or mislead. While misinformation isn’t a new challenge — it has historically followed wherever the mass media leads — the digital revolution has expanded the problem into one that touches numerous new domains.

Misinformation is often discussed in relation to a diverse array of challenges — all centered around the spread of information online — that emerge in the sectors of media and journalism; technology and social media; and geopolitics, cybersecurity, and national security. Solutions to these challenges require collaboration between the government, technology companies, and the media.

“We try to look at the problem from every side,” says Schiffrin, who has written extensively about the topic of misinformation. “The field moves so quickly.”

At SIPA, students, faculty, and alumni are untangling the challenges posed by online misinformation with the forward-looking, multidisciplinary approach the problem demands.

‘We Were There from the Start’ 

Schiffrin came to SIPA nearly two decades ago, in 2003, and soon helmed what was then called the International Media concentration. She got to work developing new classes to modernize the curriculum. “I looked all around the city and found new experts to teach for us,” she says. At the turn of the decade, technology journalist Julia Angwin taught a course on social media, and scholar Phil Howard taught students about digital networks and democracy. Current SIPA instructors Alexis Wichowski and Peter Micek later brought expertise in digital government and internet governance.

“We were there from the start,” Schiffrin says. The story of misinformation in the digital era can’t be told without the stories of internet governance and social media, and for years SIPA offered courses in digital diplomacy and online activism. They were filled to capacity, especially after the social media-fueled democratic uprisings of the 2011 Arab Spring.

Online misinformation was launched into the mainstream consciousness with the 2016 US presidential election, when Russian-backed information campaigns on social media frequently used bots or fake accounts to spread misleading, out-of-context information with the aim of influencing the outcome of the election. That same year, in the Brexit referendum, the United Kingdom voted to leave the European Union following a heated national discourse that included rampant misinformation.

“The year 2016 was the wake-up call,” says Schiffrin. “The world’s awareness of how dangerous disinformation is shifted, and as a result, classes that we’d been offering for years became even more in demand.” Courses specific to misinformation were added to the growing list of SIPA’s offerings.

SIPA faculty research also addressed the role of misinformation in the 2016 US election. 

A 2022 study conducted by Douglas Almond, professor of economics and international and public affairs, with Alana Vogel MIA ’20 and fifth-year PhD student Xinming Du examined the impact of Russian internet trolls on online betting markets, suggesting that the trolls’ activity shifted expectations for the 2016 US presidential election toward Donald Trump. The authors found that days with less social media activity from Russian accounts corresponded with lower market odds for Republicans, indicating a positive correlation between troll activity and market expectations for Republican candidates.
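To make the finding concrete, here is a minimal sketch, in Python, of how a correlation between daily troll activity and market odds might be computed. The numbers are hypothetical placeholders, not the study’s actual data or code.

    # Minimal sketch: correlating daily Russian-troll post counts with
    # prediction-market odds for the Republican candidate.
    # All figures below are hypothetical placeholders.
    from statistics import correlation  # Pearson's r; Python 3.10+

    troll_posts_per_day = [120, 340, 90, 410, 260, 50, 380]          # hypothetical
    gop_market_odds = [0.21, 0.27, 0.20, 0.29, 0.25, 0.18, 0.28]     # hypothetical

    r = correlation(troll_posts_per_day, gop_market_odds)
    print(f"Pearson r = {r:.2f}")  # positive: more troll activity, higher GOP odds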

Situated at the nexus of national security and social media disinformation, the 2016 election exemplified the far-reaching implications of online information challenges. “Propaganda and disinformation have always been a facet of geopolitics,” Camille François MIA ’13, a cyber conflict expert, told SIPA News in September 2021. But before 2016, she said, “influence operations on social media were a strategic blind spot for many in cybersecurity and national security.”

François is global director of trust and safety at Niantic Labs, an augmented reality software development company, where she helps protect user privacy and ensure a safe user experience. She also teaches Information Operations on Social Media, a new SIPA course exploring how foreign and domestic actors use organized disinformation campaigns to manipulate public discourse.

‘A Contest of Narratives’

Virpratap Vikram Singh MIA ’20 is the coordinator for SIPA’s cybersecurity-related projects, including those led by Jason Healey, a senior research scholar and adjunct professor at SIPA focused on the strategic dynamics of cyber conflict. While a SIPA student, Vikram Singh signed up for nearly every cybersecurity-related class that was offered, including Technology, National Security, and the Citizen, a course taught by Wichowski, a top official at the New York City Office of Technology and Innovation. He noticed that misinformation came up frequently in student discussions. “The 2016 election was still on everyone’s minds,” he says.

SIPA has been expanding its cybersecurity focus to encompass the overlap of cyber warfare and information operations. Cybersecurity involves defending against threats like hacking or hijacking a technology system, while information operations focuses on the spread of false or misleading information meant to influence decision-making.

“We live in a world where data and information are so important,” Vikram Singh says. “If you can steal that data, you can manipulate it for whatever kind of campaign you want.”

The realms of cybersecurity and information operations are bridged by trust and safety — the efforts led by teams within tech platforms to combat platform abuses and “inauthentic behavior,” such as spam and fake accounts. Vikram Singh says that in an effort to create a more diverse trust and safety workforce, SIPA’s cyber experts have worked with external partners to help SIPA students get technical certifications and training.

Before attending SIPA, Vikram Singh worked in publications for a think tank in Mumbai. His background in media influences how he looks at the cyber and information operations space. “It’s really a contest of narratives and perspectives,” he says.

Narrative warfare can have huge impacts, especially when wielded by nations in the form of propaganda.

On February 24, 2022, Russia invaded Ukraine, an act of aggression that has caused widespread violence and destruction and displaced millions of people. Alexander Bornyakov MPA ’19 is Ukraine’s deputy minister of digital transformation. Before the invasion the ministry was primarily focused on using technology for efficient governance. Now Bornyakov is leading the charge against Russian state propaganda, launching Ukraine’s own state counternarratives with the aim of explaining its experience of the war to Russian citizens and encouraging them to protest their government’s actions. Along with spreading informative videos in Russian, the Ukrainian government asked Western technology companies, including Apple and PayPal, to cooperate by blocking their services in Russia.

This past spring adjunct assistant professor Robert McKenzie — who coedited Exploring Hate: An Anthology (Brookings Institution Press, 2022), a book about the growth of online hate and extremism — advised students in a Capstone workshop analyzing different aspects of China’s efforts to supplant US global influence with its own, with a focus on online influence operations.

Given current events, student interest in misinformation and trust and safety is growing. Vikram Singh hopes that SIPA can respond by continuing to tap into its unique advantages. “New York City is where business is being done, and it’s where the United Nations is based,” he says. “Those two pillars are hugely influential and informative in regard to what needs to happen around cybersecurity and misinformation.”

‘It Has Real-World Implications’

By 2020, many of the most pressing misinformation challenges were rooted in domestic issues. The year spotlighted not only pandemic-related misinformation but also conspiracy theories based on unfounded claims of voter fraud that pushed the false idea that the US presidential election had been stolen. These theories eventually galvanized white nationalist groups to lead an attack on the US Capitol on January 6, 2021.

Hate and extremism online are key dimensions of the misinformation challenge. Extremist groups often use misinformation, including in the form of conspiracy theories, to spread their beliefs. This can spiral into more violent extremist speech with severe real-world consequences: racist rhetoric has inspired mass shootings in churches and grocery stores in the United States, and military-backed Islamophobic campaigns on Facebook have targeted the Rohingya people in Myanmar.

“It’s clear that what’s happening online is not limited to the online world,” says Tamar Mitts, an assistant professor of international and public affairs at SIPA. “It has real-world implications.”

Mitts’s political science research, which explores how extremists use online platforms, began with a focus on terrorist groups like the Islamic State, which used social media to advance its cause in the early 2010s. “I was fascinated with the phenomenon,” she says, “and how they used mainstream platforms to attract people from all around the world.” Social media platforms act as intermediaries between the producers of content (the extremist groups) and the consumers of content (their potential recruits), Mitts explains. Through her research, she has found similarities in how different groups — for example, ISIS and the Proud Boys, a far-right US hate group known for storming the Capitol on January 6 — use online platforms.

As a student, Nusrat Farooq MPA ’21 worked as Mitts’s lead research assistant. Farooq specialized in Data Analytics and Quantitative Analysis, through which she took courses like Artificial Intelligence in Public Policy, taught by computer scientist Sameer Maskey, and Text as Data, with Mitts. Her term paper for Text as Data, a course focusing on the quantitative analysis of text, analyzed the print media’s spread of anti-Muslim hate in India during the COVID-19 outbreak. Farooq’s analysis later became the foundation of a chapter, coauthored with SIPA lecturer Rumela Sen, in the book Covid-19 in India, Disease, Health and Culture: Can Wellness Be Far Behind? (Routledge, 2022).
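As an illustration of the “text as data” approach, the sketch below counts how often tracked terms appear in a small corpus of news articles over time. The article snippets and term list are hypothetical placeholders, not Farooq’s actual data or method.

    # Minimal "text as data" sketch: term frequency over time in a
    # hypothetical corpus of news-article snippets.
    from collections import Counter

    articles = {  # month -> list of article snippets (hypothetical)
        "2020-03": ["virus spreads in city markets",
                    "community blamed for outbreak"],
        "2020-04": ["officials urge calm",
                    "community blamed again as cases rise"],
    }
    tracked_terms = {"blamed", "outbreak"}  # hypothetical watch list

    for month, texts in articles.items():
        counts = Counter(word for text in texts for word in text.lower().split())
        print(month, {term: counts[term] for term in tracked_terms})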

In March Farooq began working as a technology and programs associate with the Global Internet Forum to Counter Terrorism (GIFCT), an organization founded in 2017 by Facebook (now Meta), Microsoft, Twitter, and YouTube that works with governments, the tech industry, and civil society to combat terrorism and violent extremism online. While GIFCT’s focus is violent extremism, not misinformation, Farooq says that it is difficult to draw a line exactly where misinformation turns into violent speech.

‘No Golden Solution’

After the 2016 election, the public and private sectors dedicated considerable resources toward attempts to analyze and address online misinformation. Social media platforms implemented a patchwork of content moderation and platform transparency policies to mitigate the impact of influence operations, which constituted an unprecedented but still imperfect response.

Through ongoing experimentation, it has become apparent that no solution to online information challenges is sufficient on its own. For example, improving media literacy can help reduce profound educational inequities, but without other efforts it remains an individual-focused fix that ignores the role of technology companies and government in misinformation’s spread. Some solutions are also more easily conceptualized than implemented — it can be difficult to establish a credible, unbiased system of fact-checking, for example, and issuing corrections can increase distrust of the media among some audiences.

Further, while algorithms often do stem the flow of large quantities of misinformation, human moderators (who suffer from the traumatic content they see on the job) are still needed to detect cultural nuances. Content moderation itself can also clash with the ideals of free speech. Solutions that center around moderating content can be misapplied, Mitts explains. For example, she says, a policy banning hate speech could be wrongly used to stifle activist dissent against government atrocities.

“There’s no golden solution,” says Mitts, who is writing a book about content moderation policies. Through her research Mitts has examined how hate groups evade moderation and has discovered how banning or deplatforming them can backfire. In a recent study, she looked at how extremist groups, when deplatformed from mainstream websites like Twitter, will migrate to alternative outlets and intensify their hate speech on those platforms.
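To make that research design concrete, here is a minimal sketch of the kind of before-and-after comparison such a study involves. The scores are hypothetical placeholders (say, the output of a toxicity classifier), not Mitts’s actual data or code.

    # Minimal sketch: comparing a group's average hate-speech scores on a
    # mainstream platform before a ban with its scores on an alternative
    # platform afterward. Scores are hypothetical classifier outputs.
    from statistics import mean

    before_ban_mainstream = [0.32, 0.41, 0.28, 0.37]   # hypothetical
    after_ban_alternative = [0.58, 0.66, 0.61, 0.70]   # hypothetical

    delta = mean(after_ban_alternative) - mean(before_ban_mainstream)
    print(f"Mean hate-speech score rose by {delta:.2f} after migration")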

Even determining what is true and false can be relative and subject to ideological biases. “Misinformation is hard to solve,” Mitts says, “because often we can’t agree on what it is.”

‘Whenever I Teach, I Teach the Solution’

Still, there’s no shortage of energy behind solving the problem of misinformation. SIPA’s new dean, Keren Yarhi-Milo, plans to position the School to lead research on how to empower students and citizens to recognize misinformation when they encounter it online. By promoting common-sense solutions like online literacy training, she hopes to help foster a more civil and open online environment for all.

“Mis- and disinformation have emerged as major threats to democracy, civil society, and national security,” Yarhi-Milo says. “Much of the current focus on disinformation and misinformation has been on how to regulate social media and other online platforms, but of course there is the danger that significant restrictions could limit the freedom of speech that is crucial to a vibrant democratic society.”

Yarhi-Milo is exploring the creation of a new institute that would address the crisis in democracy, democratic norms, and institutions in the United States and globally. Part of the institute’s immediate research agenda would be to engage scholars from across the University to focus on both “supply-side and demand-side solutions,” a concept coined by Schiffrin.

Supply-side solutions tackle the production of information and include fixes like regulating big tech and strengthening the quality of journalism. Demand-side solutions focus on information consumers and include strategies like improved media literacy. Schiffrin recently coauthored a chapter in the book Disinformation in the Global South (Wiley-Blackwell, 2022) examining the common fixes of fact-checking, media literacy, and regulation as they have been applied in sub-Saharan Africa and elsewhere. The chapter argues for more robust efforts in all areas, but particularly tech-related regulations like algorithmic transparency and privacy protections.

“My rule is that whenever I teach, I teach the solution,” Schiffrin says. Along with Emily Bell, founding director of Columbia’s Tow Center for Digital Journalism, Schiffrin co-taught Policy Solutions for Online Mis/Disinformation, a class centered around the various ways society is trying to solve misinformation.

“I found most interesting the role of journalists in this puzzle,” says Anna Spitz MPA ’22, who took the course in fall 2021. “It’s journalists that are down in the trenches fighting day in and out on this one issue of making sure that the public gets the truth.”

Spitz took part in a term project about startups that use artificial intelligence to combat misinformation, which was also the subject of a Capstone project advised by Schiffrin this past spring. A team of seven SIPA students surveyed 20 companies and in July published their findings about the market for tech-based solutions with the German Marshall Fund. Schiffrin also led a fall 2020 Capstone workshop comparing international governments’ public health messaging in response to COVID-19.

Mitts teaches Data Science for Public Policy — “my favorite class to teach!” she says. The course helps policy professionals understand data science methods, drawing examples from issues like the pandemic, elections, crime and policing, and machine bias. The vision for the course, which includes both SIPA students and Columbia students from programs like Data Science and Quantitative Methods in the Social Sciences, is to bring together students from interdisciplinary backgrounds to mimic a real-world team environment.

“If you’re just a policy person, you can’t fully understand these issues without the technical expertise,” Mitts says. “And if you’re a data person, you need to understand the social aspect and policy behind it.”

Schiffrin echoes this idea and emphasizes the role government and informed policy experts play in tackling misinformation. “We can’t leave the conversation to Silicon Valley,” she says.

And so the conversation is happening at SIPA. At two events in March, experts discussed the European Union’s Digital Services Act, a law addressing illegal content on tech platforms that is expected to take effect in 2024. Panelists emphasized that the act protects free speech by demanding algorithmic accountability and transparency from tech companies rather than content censorship, which could make it a model for the United States.

In April, Schiffrin organized the Women in the Digital World conference, a two-day event with a sweeping agenda examining women’s speech online, digital violence, racial and gender bias in algorithms, big tech regulations, and disinformation in Russia’s war on Ukraine. SIPA’s multifaceted approach to misinformation is a reminder that the staggering and varying challenges posed by distrust in institutions, a weakened press, violent extremism, the rise of big tech corporations, and geopolitical tension cannot be solved through the lens of technology alone — and that misinformation itself is only one dimension of those crises.

For Mitts, combating online information challenges requires holistic thinking. “Our lives are beyond social media,” she says. “Our lives are so much more than these platforms.”