Welcome to
Women in AI Research

New episodes released every three weeks on Wednesday.

Women in AI Research (WiAIR) is a podcast dedicated to celebrating the remarkable contributions of female AI researchers from around the globe. Our mission is to challenge the prevailing perception that AI research is predominantly male-driven.

In WiAIR, we interview successful female AI researchers coming from diverse cultural backgrounds, showcasing their inspirational cutting-edge research and insights into the future of AI. Through these conversations, we explore their personal journeys - how they overcome unique challenges, balance careers and family life, and make difficult decisions when necessary. We aim to understand how women in AI research perceive success and what it takes to achieve their goals.

Discover

Why Listen?

  • Gain Insights: Learn from leading women in AI and stay updated on the latest research and developments.
  • Be Inspired: Hear powerful stories of overcoming obstacles and breaking stereotypes.
  • Connect: Join a community of like-minded early-career researchers and build your network.

Community

Stay Connected

Bluesky

Get Involved

Call to Action

  • Subscribe Now: Don't miss an episode! Subscribe to Women in AI Research (WiAIR) today.
  • Share: Spread the word and share our podcast with your network.

Our Podcast

Latest Episodes

Conversations with leading women in AI research from around the globe.

Ep.30: 100% Jailbreak Success? The Hard Truth About AI Safety, with Dr. Saadia Gabriel (Part 2)

April 17, 2026

What actually happens when AI systems fail in the real world?

In this final part of our conversation with Saadia Gabriel (UCLA), we unpack one of the most urgent challenges in modern AI: why even the most advanced models remain vulnerable to manipulation, and what that means for safety, fairness, and society.

From multi-turn jailbreaking attacks with near 100% success rates to misinformation shaping human beliefs, this conversation goes beyond surface-level concerns and dives into how harms actually emerge in deployed systems.

We explore:

  • Why current guardrails are not enough
  • How realistic attack scenarios differ from academic benchmarks
  • The connection between model vulnerabilities and societal harm
  • What AI can (and cannot) do about misinformation and persuasion
  • The open research problems that still don't have solutions

Resources & Links:

  • Generative AI in the Era of 'Alternative Facts'
  • ModelCitizens: Representing Community Voices in Online Safety
  • Translation as a Scalable Proxy for Multilingual Evaluation

Connect with Dr. Saadia Gabriel:

  • https://x.com/GabrielSaadia
  • https://bsky.app/profile/skgabrie.bsky.social

Ep.29: From Hate Speech to Best Paper: Building Safer AI Systems, with Dr. Saadia Gabriel (Part 1)

April 15, 2026

What does it mean to build AI systems we can actually trust?

In this first part of our conversation with Saadia Gabriel (UCLA), we explore the deeply personal and technical journey behind her work on AI safety, misuse, and responsible NLP.

From experiencing targeted hate speech firsthand to receiving a best paper nomination, Saadia shares how her lived experience shaped her research, and why language models must be designed with both capability and risk in mind.

🧠 In this episode, we cover:

  • How personal experiences influence AI research directions
  • The intersection of NLP, security, and privacy
  • Why LLMs can be both powerful and dangerous
  • What it means to build trustworthy AI systems
  • Lessons from working across multiple research paradigms
  • How to pursue high-impact research as a PhD or early-career scientist

Resources & Links:

  • X-Teaming: Multi-Turn Jailbreaks and Defenses with Adaptive Multi-Agents

Connect with Dr. Saadia Gabriel:

  • https://x.com/GabrielSaadia
  • https://bsky.app/profile/skgabrie.bsky.social

Ep.28: EACL 2026: LLMs Can Hear… But Can They Reason? A New Benchmark for Audio Intelligence

April 13, 2026

What does it actually mean for a model to understand audio?

Paper: https://arxiv.org/abs/2601.19673

In this episode, I talk with Iwona Christop, a PhD student at Adam Mickiewicz University, about her recent EACL paper introducing ART (Audio Reasoning Tasks), a new benchmark designed to evaluate whether multimodal LLMs can truly reason over audio, not just transcribe or classify it.

Most existing benchmarks test audio skills in isolation (like ASR or classification). But real-world intelligence requires something deeper: combining signals, comparing sounds, tracking context, and making decisions.

This work takes a different approach:

  • No text-only shortcuts: tasks can't be solved via transcription alone
  • Reasoning-first design: models must combine multiple audio cues
  • No expert knowledge required: anyone can verify correctness

We also dive into the diverse task design, including:

  • Audio arithmetic (counting and comparing sounds)
  • Cross-recording speaker & language identification
  • Sound-based reasoning (e.g., inferring properties from audio)
  • Speech feature comparison (accents, variations)
  • Multimodal reasoning across text and sound

The dataset includes 9 tasks, 9,000 samples, and 30+ hours of audio, all generated in a scalable way using templates and TTS.

👉 If you care about multimodal reasoning, evaluation, or the limits of current LLM capabilities, this conversation is for you.

Iwona Christop: https://www.linkedin.com/in/iwona-christop/

👍 Like & subscribe for more deep dives into cutting-edge AI research
🔔 New episodes from EACL 2026 coming soon

#WiAIR #EACL2026

The Team

About Us

Meet the people behind Women in AI Research.

Jekaterina Novikova

Founder & Host

Dr. Jekaterina Novikova is an AI researcher with over 10 years of experience in natural language processing and human-AI interaction. She holds a Ph.D. in Computer Science from the University of Bath and has extensive international experience working across academia, industry, and non-profits. She was recognized as one of the Top 50 Most Extraordinary Women Advancing AI in 2024 and the Top 25 Women in AI in Canada in 2022, received the "Industry Icon Award" from the University of Toronto in 2021, and was included in the list of 30 Influential Women Advancing AI in Canada in 2018.

Smriti Singh

Founding Lead

Smriti Singh is an ML Research Scientist at Zacks Investment Research and holds an MS in Computer Science from UT Austin. Her research focuses on AI Safety and Generative AI applications in FinTech. As the Founding Lead of the Women in AI Research Mentorship Research Lab, she is dedicated to training new researchers and promoting equality and safety in AI while uplifting women leaders in the field.

Anais Hristea

Lead Illustrator & Designer

Anais is a talented graphic designer and illustrator who creates all the visual assets for the Women in AI Research podcast. With a background in digital art and design, she brings a unique aesthetic to the podcast's brand identity, from logo design to overall branding, ensuring a strong and professional look.

Asal Mohammadjafari Mamaqani

Technical Content Creator

Asal is a final-year Computer Science undergraduate at Amirkabir University of Technology in Tehran. She works as a Research Assistant, specializing in deep learning and computer vision, and has experience as a Teaching Assistant for courses such as Artificial Intelligence (AI) and Machine Learning (ML). She is currently looking for opportunities to pursue postgraduate studies to further her research in AI.

Parnian Fazel

Technical Content Creator

Parnian is pursuing her MSc in Computing (Artificial Intelligence & Machine Learning) at Imperial College London. She holds a bachelor's degree in Computer Engineering from the University of Tehran. She contributes to the Women in AI Research podcast as a technical content creator, where she helps turn complex ideas into clear and engaging content.

Ali Akram

Technical Producer

Ali is an experienced AI engineer and technical producer who ensures the podcast's technical quality. He handles audio editing, production, and technical aspects of the podcast, bringing years of experience in audio engineering and AI development. Ali also develops and maintains the podcast's website.