November 19, 2025

Assistant Professor of Computer Science Sukrit Venkatagiri helps society resist technologically caused harms.
Written by Tomas Weber
Photography Laurence Kesterson
Imagine this: A realistic AI-generated video appears online of the president announcing a military strike on a nuclear-armed adversary. It circulates on fringe accounts, but then it’s reposted by influencers with millions of followers. Cable news speculates. Stock markets crash. Russia and China raise their military readiness. By the time the White House denies the video’s authenticity, the damage has been done. Conspiracy theories abound. Many suspect a cover-up.
Not long ago, this scenario might have been dismissed as science fiction. But it’s now plausible — and one of many deepfake risks that concern computer scientist Sukrit Venkatagiri.
A panicked voice on the phone, sounding just like your brother's, asks for thousands of dollars. A fake, sexually explicit video of a public official is posted on X. A Fortune 500 CEO appears to announce she is resigning in disgrace, tanking the stock price. In a society already weakened by distrust, deepfakes threaten to tip the balance.
But could technology also offer a solution?
Venkatagiri, an assistant professor of computer science, studies how humans interact with computers. As director of Swarthmore’s Collective Resilience Lab, Venkatagiri develops ways to help people resist technologically caused harms. Together with his students, he designs tools, such as interactive games and crowdsourcing platforms, to help people protect themselves against the digital world’s darker consequences — from misinformation and privacy violations to hate speech and deepfakes.
“A lot of people who develop technology ask: ‘Can we do it?’ and not ‘Should we?’” says Venkatagiri, who is also a faculty affiliate at Swarthmore’s Healthy, Equitable, and Responsive Democracy (HEARD) research initiative.
“They don’t think about it from a social perspective. They don’t ask: ‘What are the social implications of giving every person on the internet access to powerful tools?’”
Venkatagiri’s journey to this work began in Bangalore, India. In 2017, after earning an undergraduate degree in computer science, he moved to Blacksburg, Va., for a Ph.D. at Virginia Tech. His focus was human-computer interaction, and he designed online tools for crowdsourcing — the technique of mobilizing large numbers of people to solve problems. One such platform helped journalists geolocate images. To build it, he consulted with journalists who investigate human-rights violations, such as those who helped search for Austin Tice, an American journalist kidnapped while reporting in Syria in 2012.
Following his Ph.D., Venkatagiri discovered that his tools could also be used to tackle digital harms.
“I realized the crowdsourcing tools we were developing were also helpful for investigating online disinformation,” he says.
In 2022, Venkatagiri joined the Election Integrity Partnership, a project run by the University of Washington’s Center for an Informed Public. There, he built a crowdsourcing system to flag rumors and unverified claims spreading online during the 2022 midterm elections, which included allegations of massive voter fraud as the votes were being tallied.
The goal was not to fact-check — misinformation was spreading faster than the claims could be verified. Instead, the aim was to communicate uncertainty in real time.
“We wanted, first, to understand how information was spreading, where it was originating from,” says Venkatagiri, “and then to communicate the fact that we don’t know how true it is, to reveal the uncertainty in the information environment.”
Journalists at The New York Times and at Reuters reported rumors that Venkatagiri’s system had first flagged, helping the public navigate a chaotic and often overwhelming information ecosystem.
His task has only become harder. When Elon Musk purchased Twitter (now X), he restricted access to data about how posts were moving through the platform.
“It definitely became more difficult to understand how much, and how quickly, rumors were spreading,” says Venkatagiri.
Still, Venkatagiri believes in the long game. There will come a point, he explains, when companies and governments can no longer ignore the harms caused by technology.
“Tech companies might not see the value of this work right now, because they want to outcompete other tech companies. But at some point, if people don’t feel safe using their technologies, they will stop using them — and then companies and governments will have to act,” he says.
In the meantime, academia can play a crucial role.
“The purpose of academia is to bring truths to light,” he says, “and to do the work others won’t because it’s not profitable.”