Welcome to
Women in AI Research

New episodes released every three weeks on Wednesday.

Women in AI Research (WiAIR) is a podcast dedicated to celebrating the remarkable contributions of female AI researchers from around the globe. Our mission is to challenge the prevailing perception that AI research is predominantly male-driven.

In WiAIR, we interview successful female AI researchers from diverse cultural backgrounds, showcasing their cutting-edge research and their insights into the future of AI. Through these conversations, we explore their personal journeys: how they overcome unique challenges, balance career and family life, and make difficult decisions when necessary. We aim to understand how women in AI research perceive success and what it takes to achieve their goals.

Discover

Why Listen?

  • Gain Insights: Learn from leading women in AI and stay updated on the latest research and developments.
  • Be Inspired: Hear powerful stories of overcoming obstacles and breaking stereotypes.
  • Connect: Join a community of like-minded early-career researchers and build your network.

Community

Stay Connected

Bluesky

Get Involved

Call to Action

  • Subscribe Now: Don't miss an episode! Subscribe to Women in AI Research (WiAIR) today.
  • Share: Spread the word and share our podcast with your network.

Our Podcast

Latest Episodes

Conversations with leading women in AI research from around the globe.

Ep.25: EACL 2026: Why LLMs Hallucinate, and How to Make Them Say "I Don't Know"

April 3, 2026

LLMs are notoriously overconfident, but can we teach them to admit uncertainty? In this episode, Maor Juliet Lavi (Tel Aviv University) presents her EACL 2026 paper on Detecting Unanswerability in Large Language Models with Linear Directions.

Paper: https://arxiv.org/abs/2509.22449

We cover:

  • Why prompt-based fixes for hallucinations aren't enough
  • How "unanswerability" emerges inside model representations
  • A simple but powerful idea: linear directions in hidden states
  • Why this method generalizes better across datasets
  • What this reveals about where abstract concepts live inside LLMs

👉 Watch to see how far we can push models toward knowing when they don't know.

Maor: https://www.linkedin.com/in/maor-juliet-lavi-07494a155

👍 Like & subscribe for more deep dives into cutting-edge AI research
🔔 New episodes from EACL 2026 coming soon
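The core intuition behind a linear direction in hidden states can be sketched as a difference-of-means probe: average the hidden states of unanswerable and answerable prompts, take the difference, and score new prompts by projecting onto that direction. This is a simplified illustration of the general technique, not the paper's exact method; the dimensions and clusters below are synthetic.

```python
import numpy as np

def fit_direction(h_unanswerable, h_answerable):
    """Fit a linear 'unanswerability' direction as the difference of the
    mean hidden states of the two prompt groups (difference-of-means probe)."""
    d = h_unanswerable.mean(axis=0) - h_answerable.mean(axis=0)
    return d / np.linalg.norm(d)  # unit vector in hidden-state space

def unanswerability_score(h, direction):
    """Project a hidden state onto the direction; a higher score suggests
    the question behind this hidden state is unanswerable."""
    return float(h @ direction)

# Toy example: 64-dim "hidden states" drawn from two shifted clusters.
rng = np.random.default_rng(0)
base = rng.normal(size=64)
h_unans = rng.normal(size=(50, 64)) + base  # unanswerable cluster
h_ans = rng.normal(size=(50, 64)) - base    # answerable cluster

direction = fit_direction(h_unans, h_ans)
scores_unans = [unanswerability_score(h, direction) for h in h_unans]
scores_ans = [unanswerability_score(h, direction) for h in h_ans]
# Unanswerable prompts should score higher on average.
```

In practice the probe would be fit on hidden states extracted from a real model at a chosen layer; the appeal of the linear approach is that one inexpensive direction can transfer across datasets.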

Ep.24: EACL 2026: From Paraphrases to Diagnostics: A Fine-Grained Framework for LLM Auditing

April 1, 2026

LLMs often give different answers to the same question, just phrased differently. But how do we measure and understand this behaviour rigorously? In this episode of the #WiAIRpodcast, Cléa Chataigner (Mila, McGill) presents AUGMENT, a user-grounded, controlled paraphrasing framework for auditing prompt sensitivity in large language models, accepted as an oral at EACL 2026.

Paper: https://arxiv.org/abs/2505.03563

Instead of relying on noisy, unconstrained paraphrasing, AUGMENT introduces:

  • Structured, linguistically grounded paraphrase types (e.g., voice, style, dialect)
  • A guided generation and automated quality-control pipeline
  • Fine-grained analysis of how specific linguistic variations impact model behaviour

Key insights:

  • Different paraphrase types can shift model performance in opposite directions
  • Standard baselines can hide critical failure modes
  • Even strong LLMs struggle with certain transformations (e.g., voice changes)

Evaluated across:

  • Bias benchmarks (BBQ)
  • Knowledge tasks (MMLU)
  • Multiple open-source model families

Cléa: https://scholar.google.com/citations?user=NdToDmMAAAA
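The auditing idea, comparing a model's answers across controlled paraphrase types, can be sketched as a small harness that measures how often each paraphrase type flips the answer given to the original phrasing. The variants and the stubbed model below are illustrative stand-ins; AUGMENT's actual generation and quality-control pipeline is far more involved.

```python
from collections import defaultdict

# Controlled paraphrase variants of one question, grouped by linguistic type.
# (Hand-written toy examples standing in for generated paraphrases.)
variants = {
    "original": ["Did the committee approve the proposal?"],
    "voice":    ["Was the proposal approved by the committee?"],
    "style":    ["So, did the committee end up approving the proposal?"],
}

def stub_model(prompt):
    """Stand-in for an LLM call: pretend a passive-voice rewrite flips the answer."""
    return "no" if prompt.startswith("Was") else "yes"

def flip_rate(variants, model, reference_key="original"):
    """Per paraphrase type, the fraction of variants whose answer differs
    from the answer given to the original phrasing."""
    reference = model(variants[reference_key][0])
    rates = defaultdict(float)
    for ptype, prompts in variants.items():
        if ptype == reference_key:
            continue
        flips = sum(model(p) != reference for p in prompts)
        rates[ptype] = flips / len(prompts)
    return dict(rates)

rates = flip_rate(variants, stub_model)
# → {'voice': 1.0, 'style': 0.0}: only the voice change flips the answer.
```

Breaking the flip rate down by paraphrase type, rather than pooling all paraphrases, is what lets an audit surface the episode's point that different linguistic transformations can shift behaviour in opposite directions.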

Ep.23: EACL 2026: You're Using Persona Prompting Wrong

March 30, 2026

How much control do persona prompts actually give us over LLM behaviour? In this episode of the #WiAIRpodcast, Jing Yang (TU Berlin) speaks about her study on persona prompting in socially sensitive tasks, including hate speech detection, sentiment analysis, and commonsense reasoning.

Paper: https://arxiv.org/abs/2601.20757

The paper takes a closer look at a common assumption: that adding demographic or identity-based personas can help align model outputs with different user groups.

In this conversation, we discuss:

  • Whether persona prompting meaningfully changes model predictions
  • Why simulated personas don't necessarily align with real-world demographics
  • The gap between improving labels vs. improving model rationales
  • Evidence that LLMs may systematically over-predict harmful content
  • What this means for synthetic data generation and evaluation practices

One of the key takeaways is that persona prompting has limited effect as a steering mechanism in these settings and should be applied with care, especially in high-stakes or socially sensitive applications.

Jing Yang: https://www.linkedin.com/in/jing-yang-7b07aa135
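Persona prompting itself is simple to express in code; the open question the episode examines is whether it steers model behaviour in the intended way. A minimal sketch of the setup, prefixing the same task with different identity descriptions and comparing predictions (the personas, task wording, and stubbed model are illustrative, not taken from the paper):

```python
# Persona prompting: prefix the same task with different identity
# descriptions and compare the model's predictions across personas.
personas = [
    "You are a 28-year-old teacher from Spain.",
    "You are a 65-year-old retired engineer from Japan.",
    "",  # no-persona baseline
]

TASK = "Label the following comment as 'hate' or 'not hate': \"{comment}\""

def build_prompt(persona, comment):
    """Compose a persona-prefixed prompt for a hate-speech labelling task."""
    return f"{persona}\n\n{TASK.format(comment=comment)}".strip()

def audit(comment, model):
    """Collect one prediction per persona; identical predictions across
    personas suggest the persona has no steering effect on this input."""
    return {p or "baseline": model(build_prompt(p, comment)) for p in personas}

# Stubbed model that ignores the persona entirely, mirroring the kind of
# limited-effect outcome the study probes for.
predictions = audit("some comment", lambda prompt: "not hate")
# All personas yield the same label here.
```

An audit along these lines, run over a real model and labelled benchmark data, is what lets one quantify whether persona prefixes change predictions at all, and in which direction.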

The Team

About Us

Meet the people behind Women in AI Research.

Jekaterina Novikova

Founder & Host

Dr. Jekaterina Novikova is an AI researcher with over 10 years of experience in natural language processing and human-AI interaction. She holds a Ph.D. in Computer Science from the University of Bath and has extensive international experience across academia, industry, and non-profits. She was recognized as one of the Top 50 Most Extraordinary Women Advancing AI in 2024 and the Top 25 Women in AI in Canada in 2022, received the "Industry Icon Award" from the University of Toronto in 2021, and was included in the list of 30 Influential Women Advancing AI in Canada in 2018.

Malikeh Ehghaghi

Co-Host

Malikeh is a machine learning researcher at the Vector Institute, and an incoming PhD student at the University of Toronto, where she works under the supervision of Prof. Colin Raffel. Born and raised in Iran, she is a bilingual researcher fluent in Farsi and English who immigrated to Canada in 2019. She earned an MScAC degree in Computer Science from the University of Toronto and has over five years of industry research experience at companies such as Winterlight Labs, Cambridge Cognition, and Arcee AI.

Anais Hristea

Lead Illustrator & Designer

Anais is a talented graphic designer and illustrator who creates all the visual assets for the Women in AI Research podcast. With a background in digital art and design, she brings a unique aesthetic to the podcast's visual identity, from logo design to overall branding, ensuring a strong and professional look.

Asal Mohammadjafari Mamaqani

Technical Content Creator

Asal is a final-year Computer Science undergraduate at Amirkabir University of Technology in Tehran. She works as a Research Assistant, specializing in deep learning and computer vision, and has experience as a Teaching Assistant for courses such as Artificial Intelligence (AI) and Machine Learning (ML). She is currently looking for opportunities to pursue postgraduate studies to further her research in AI.

Parnian Fazel

Technical Content Creator

Parnian is pursuing her MSc in Computing (Artificial Intelligence & Machine Learning) at Imperial College London. She holds a bachelor's degree in Computer Engineering from the University of Tehran. She contributes to the Women in AI Research podcast as a technical content creator, where she helps turn complex ideas into clear and engaging content.

Ali Akram

Technical Producer

Ali is an experienced AI engineer and technical producer who ensures the podcast's technical quality. He handles audio editing, production, and technical aspects of the podcast, bringing years of experience in audio engineering and AI development. Ali also develops and maintains the podcast's website.

Mary MacCarthy

Producer & Marketing

Mary is the Head of Product Marketing at Arcee AI, a fast-growing startup that pioneered small language models (SLMs) and intelligent model routing. She pivoted into tech after a long career as an international news correspondent. A proud solo mom, Mary is a fierce advocate for women in tech and is known for bringing a critical eye to the ethics (or lack thereof) in the industry.