Welcome to
Women in AI Research

New episodes are released every three weeks, on Wednesdays.

Women in AI Research (WiAIR) is a podcast dedicated to celebrating the remarkable contributions of female AI researchers from around the globe. Our mission is to challenge the prevailing perception that AI research is predominantly male-driven.

On WiAIR, we interview successful female AI researchers from diverse cultural backgrounds, showcasing their cutting-edge research and their insights into the future of AI. Through these conversations, we explore their personal journeys: how they overcome unique challenges, balance career and family life, and make difficult decisions when necessary. We aim to understand how women in AI research perceive success and what it takes to achieve their goals.

Discover

Why Listen?

  • Gain Insights: Learn from leading women in AI and stay updated on the latest research and developments.
  • Be Inspired: Hear powerful stories of overcoming obstacles and breaking stereotypes.
  • Connect: Join a community of like-minded early-career researchers and build your network.

Community

Stay Connected

Bluesky

Get Involved

Call to Action

  • Subscribe Now: Don't miss an episode! Subscribe to Women in AI Research (WiAIR) today.
  • Share: Spread the word and share our podcast with your network.

Our Podcast

Latest Episodes

Conversations with leading women in AI research from around the globe.

Ep.19: Does Liking Yellow Make You a School Bus Driver? Hidden Failures in LLMs, with Dr. Hila Gonen

March 4, 2026

In this conversation, Dr. Hila Gonen (Assistant Professor at the University of British Columbia) joins us to explore how large language models (LLMs) leak semantic information, how they behave across languages, and how researchers can uncover the root causes of these behaviours. Dr. Gonen shares her journey in interpreting AI systems, addressing biases, and controlling model outputs for safer, fairer applications.

In this episode:

  • The influence of prompt elements, like colour, on model predictions
  • How semantic leakage unintentionally affects model outputs
  • The role of multilinguality and modality in model safety and behaviour
  • Interventional vs. observational approaches to understanding models
  • Challenges in controlling and aligning AI behaviour across languages and domains
  • Future directions in model interpretability, safety, and causal analysis

Key Topics:

  • Colour and semantic influence on language model completions
  • The concept of semantic leakage, with examples from real prompts
  • Differences between bias, hallucination, and leakage failures
  • Unintended behaviours discovered through experimentation
  • The importance of model interpretability and transparency
  • Roots of behaviour: training data and internal representations
  • Interventional analysis as a causal tool in NLP research
  • Cross-lingual and cross-modal alignment in safety detection
  • Challenges in evaluating safety across languages and modalities
  • Strategies for building robust controls against unseen attack types
  • The future of AI research: combining performance with reliability and safety
  • Ethical considerations: avoiding directions that hinder societal benefits

Resources & Links:

  • Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them
  • Does Liking Yellow Imply Driving a School Bus? Semantic Leakage in Language Models
  • Rewriting History: A Recipe for Interventional Analyses to Study Data Effects on Model Behavior
  • OMNIGUARD: An Efficient Approach for AI Safety Moderation Across Modalities and Languages

Connect with Dr. Hila Gonen:

  • LinkedIn
  • https://x.com/hila_gonen

Note: This episode emphasizes practical and theoretical challenges in model interpretability, safety, bias detection, and causality, offering a comprehensive view for researchers, practitioners, and AI enthusiasts interested in responsible AI development.

🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.

WiAIR website

Follow us at: LinkedIn | Bluesky | X (Twitter)

Ep.18: Faithfulness and Hallucinations in Reasoning Models, with Dr. Letitia Parcalabescu

February 11, 2026

Are reasoning models actually reasoning, or just producing convincing stories?

Our guest in this episode of #WiAIRpodcast is Letitia Parcalabescu, creator of the @AICoffeeBreak YouTube channel. Letitia joins Jekaterina Novikova for a deep dive into faithfulness, self-consistency, hallucinations, and the reliability illusion in LLMs and multimodal reasoning models.

We discuss why chain-of-thought explanations may not reflect what the model actually did, why RAG does not automatically fix hallucinations, and how vision–language models often rely far more on text than on images. We also explore new approaches to grounding and rejection, and why models struggle to say "I don't know."

Instead of focusing only on benchmark scores, this conversation asks: what kind of evidence do we need to truly trust reasoning models?

REFERENCES:

  • On Measuring Faithfulness or Self-consistency of Natural Language Explanations
  • Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Explanations?
  • Bounding Hallucinations: Information-Theoretic Guarantees for RAG Systems via Merlin-Arthur Protocols
  • AI Coffee Break with Letitia: https://www.youtube.com/c/AICoffeeBreak
  • https://x.com/AICoffeeBreak

🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.

WiAIR website

Follow us at: LinkedIn | Bluesky | X (Twitter)

Ep.17: AI Safety Beyond Benchmarks -- Dr. Swabha Swayamdipta on Evaluation, Personalization, and Control

January 21, 2026

As language models become more capable, the hardest questions are no longer just about performance, but about safety, interpretation, and control.

In this episode of Women in AI Research, we speak with Swabha Swayamdipta, Assistant Professor of Computer Science at the University of Southern California and co-Associate Director of the USC Center for AI and Society. Swabha's research examines how the design and deployment of language models intersect with real-world risks, from how models behave in unexpected ways to how seemingly technical choices can have broader societal consequences.

We talk about AI safety from multiple angles: what it means when hidden inputs to models can sometimes be inferred from their outputs, why personalization introduces new trade-offs around privacy and user agency, and how assumptions about model behavior can quietly shape downstream harms. Rather than focusing only on accuracy or benchmarks, the conversation asks what kinds of evidence we actually need to trust these systems in practice.

REFERENCES:

  • Better Language Model Inversion by Compactly Representing Next-Token Distributions
  • Improving Language Model Personas via Rationalization with Psychological Scaffolds
  • OATH-Frames: Characterizing Online Attitudes Towards Homelessness with LLM Assistants
  • Uncovering Intervention Opportunities for Suicide Prevention with Language Model Assistants

🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.

WiAIR website

Follow us at: LinkedIn | Bluesky | X (Twitter)

The Team

About Us

Meet the people behind Women in AI Research.

Jekaterina Novikova

Founder & Host

Dr. Jekaterina Novikova is an AI researcher with over 10 years of experience in natural language processing and human-AI interaction. She holds a Ph.D. in Computer Science from the University of Bath and has extensive international experience across academia, industry, and non-profits. She was recognized as one of the Top 50 Most Extraordinary Women Advancing AI in 2024 and one of the Top 25 Women in AI in Canada in 2022, received the "Industry Icon Award" from the University of Toronto in 2021, and was included in the list of 30 Influential Women Advancing AI in Canada in 2018.

Malikeh Ehghaghi

Co-Host

Malikeh is a machine learning researcher at the Vector Institute and an incoming PhD student at the University of Toronto, where she works under the supervision of Prof. Colin Raffel. Born and raised in Iran, she is a bilingual researcher fluent in Farsi and English who immigrated to Canada in 2019. She earned an MScAC degree in Computer Science from the University of Toronto and has over five years of industry research experience at companies such as Winterlight Labs, Cambridge Cognition, and Arcee AI.

Anais Hristea

Lead Illustrator & Designer

Anais is a talented graphic designer and illustrator who creates all the visual assets for the Women in AI Research podcast. With a background in digital art and design, she brings a unique aesthetic to the podcast's brand identity, from logo design to branding, ensuring a strong and professional look.

Asal Mohammadjafari Mamaqani

Technical Content Creator

Asal is a final-year Computer Science undergraduate at Amirkabir University of Technology in Tehran. She works as a Research Assistant, specializing in deep learning and computer vision, and has experience as a Teaching Assistant for courses such as Artificial Intelligence (AI) and Machine Learning (ML). She is currently looking for opportunities to pursue postgraduate studies to further her research in AI.

Parnian Fazel

Technical Content Creator

Parnian is pursuing her MSc in Computing (Artificial Intelligence & Machine Learning) at Imperial College London. She holds a bachelor's degree in Computer Engineering from the University of Tehran. She contributes to the Women in AI Research podcast as a technical content creator, where she helps turn complex ideas into clear and engaging content.

Ali Akram

Technical Producer

Ali is an experienced AI engineer and technical producer who ensures the podcast's technical quality. He handles audio editing, production, and technical aspects of the podcast, bringing years of experience in audio engineering and AI development. Ali also develops and maintains the podcast's website.

Mary MacCarthy

Producer & Marketing

Mary is the Head of Product Marketing at Arcee AI, a fast-growing startup that pioneered small language models (SLMs) and intelligent model routing. She pivoted into tech after a long career as an international news correspondent. A proud solo mom, Mary is a fierce advocate for women in tech and is known for bringing a critical eye to the ethics (or lack thereof) in the industry.