Bridging the Gap between Problems and Policy: Dr. Sadia Khan on Guiding AI Governance
Dr. Sadia Khan, managing director of the Academic Alliance for AI Policy, discusses scholars’ roles in regulating a rapidly changing AI landscape
By Ryan Garipoli
On September 30, 2025, OpenAI released Sora 2, an artificial-intelligence text-to-video generator that showcases the newest advancements in AI technology. Dubbed by many the “AI TikTok,” Sora 2 features more advanced visuals, sound effects, and likeness replication than any AI video generator that came before it. Sora 2 has already seen significant success in user downloads and global recognition, but its rise has renewed discussions about oversight and regulation in an industry still navigating questions of copyright and ethics.
Developments like the release of Sora 2 highlight the need for dialogue between AI experts who understand the societal impact of unchecked AI innovation and those with the ability to regulate it. Among the experts in conversation with policymakers is Dr. Sadia Khan, an AI ethics and policy researcher and the managing director of the Academic Alliance for AI Policy (AAAIP). The AAAIP is a coalition of more than 100 scholars across the United States dedicated to researching and informing AI policy in order to, as its website states, “correct the communication pipeline between academia, the newsroom, and policymaking.”
Founded at a 2023 symposium, the AAAIP’s work has centered on informational webinars and collaborations with the advocacy group Public Citizen. In its most recent effort, the alliance hosted a webinar on AI chatbots and offered policy recommendations to help inform Public Citizen’s legislative proposals.
Forming collaborations between experts and policymakers is what motivated Khan to take on her role as managing director of the AAAIP — and continues to drive her research today.
“Filling that gap between academia and policy is what kind of motivates me,” Khan said. “Whatever originates in academia can take years to take root, so my interest in advocacy — and the AAAIP — is about bridging that gap.”
Khan is not the only AI researcher who recognizes the impact a collective of scholars can have on understanding AI and pushing effectively for its regulation.
Lee Rainie is another current member of the AAAIP. He is the former director of internet and technology research at the Pew Research Center and the current director of Elon University’s Imagining the Digital Future Center. When launching his new program at Elon, Rainie felt joining the AAAIP would benefit his own research by connecting him to a knowledge community of people who had conducted “consequential research” in the sphere of AI.
“It’s always better when more heads are together than fewer heads, and it’s always better when you can bat ideas off people and get feedback. Knowledge networks are powerful things,” Rainie said.
Despite AI scholars’ efforts to collaborate and advance research, the rapid pace of AI developments and the slow-moving political system often leave them struggling to influence legislation in a timely manner.
Rainie pointed to recent political trends that have made it harder for scholars to build traction for their policy recommendations.
“There’s been a long running decline in trust in institutions in this country, including scientific institutions,” Rainie said. “It’s not just about competing for people’s time and attention, that’s hard enough to do, but doing it in a climate where often there’s a suspicion of experts offering their help or offering their insights.”
Looking to the future, Khan is confident in the work of the AAAIP but believes things need to change if policy is ever to catch up with real-time developments in the AI world.
“The harms are foreseeable. There’s this saying that technology leads policy, that policy never catches up, and I just don’t buy it. It doesn’t have to be that way,” Khan said.
Khan believes that the necessary institutions are already in place, but a lack of action has led to continued chaos in the AI world.
“I think agencies like the National Institute of Standards and Technology could have a big role, and then other agencies that have laws they could apply need to stand up,” Khan said. “The harms that we see are not inevitable, but as far as the way things are, it just seems like, ‘Yeah, it’s going to be that way.’”