
It Takes a Village to Secure AI

In this episode of Breaking Badness, we sit down with Raji Vanninathan, a cybersecurity leader at Microsoft driving the charge on AI security and safety. Raji shares her experience leading the team responsible for managing the end-to-end lifecycle of AI vulnerability disclosures, building proactive safety frameworks, and cultivating a global community of AI security researchers. From developing Microsoft’s AI Bug Bar to launching the “Guardians of AI Safety” Discord community, she brings both vision and practical strategies to a rapidly evolving field.

We discuss the shifting threat landscape as threat actors begin to leverage generative AI, the critical need for shared language and cross-functional collaboration, and how Microsoft is thinking about trust, transparency, and incident response in the AI era. If you’re navigating the challenges of AI risk, vulnerability coordination, or ethical deployment, this is an essential listen.

Rethinking AI Security: A New Era of Trust, Safety, and Collaboration

In a wide-ranging conversation with Microsoft’s Raji Vanninathan, we explored what it really takes to make AI systems safe and secure in the real world. With a background spanning both proactive and reactive cybersecurity, Raji is leading Microsoft’s efforts to build a scalable, inclusive approach to AI vulnerability management, disclosure, and trust.

“Cybersecurity is an infinite game. The attacker only has to be right once. The defender has to be right all the time.”

Her team at the Microsoft Security Response Center (MSRC) is focused on AI vulnerability disclosure, incident response, and growing the broader safety research ecosystem. And she’s clear that it’s not just about fixing bugs; it’s about evolving how the security community works together.

The AI Bug Bar and Coordinated Disclosure

One of the most practical outcomes of this work is Microsoft’s AI Bug Bar—a structured framework for what kinds of AI vulnerabilities are considered impactful and reportable. Raji emphasizes the need for clear incentives and pathways for security researchers to report issues before threat actors exploit them.

“We want to incentivize high-value research. That’s why we published the AI Bug Bar.”
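
To make the idea concrete, here’s a minimal sketch of what a bug bar boils down to structurally: a lookup table mapping vulnerability classes to severity and in-scope status. The categories, severities, and `triage` helper below are hypothetical illustrations of the concept, not Microsoft’s published AI Bug Bar.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class Severity(Enum):
    CRITICAL = "critical"
    IMPORTANT = "important"
    LOW = "low"

@dataclass(frozen=True)
class BugBarEntry:
    category: str      # class of AI vulnerability, e.g. prompt injection
    impact: str        # what an attacker gains if exploited
    severity: Severity
    in_scope: bool     # eligible for coordinated disclosure / bounty

# Hypothetical entries for illustration only -- not Microsoft's actual bar.
AI_BUG_BAR: List[BugBarEntry] = [
    BugBarEntry("prompt-injection", "cross-user data exfiltration",
                Severity.CRITICAL, True),
    BugBarEntry("training-data-extraction", "recovery of private training data",
                Severity.IMPORTANT, True),
    BugBarEntry("benign-hallucination", "incorrect but non-exploitable output",
                Severity.LOW, False),
]

def triage(category: str) -> Optional[BugBarEntry]:
    """Match an incoming report against the bug bar to decide routing."""
    return next((e for e in AI_BUG_BAR if e.category == category), None)
```

The real value of publishing a bar like this is less the mechanics than the shared vocabulary: researchers know in advance which classes of findings are reportable and how they will be prioritized.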

In addition, her team is pushing for coordinated vulnerability disclosure (CVD) in the AI space, a model that requires strong partnerships between external researchers and internal security teams.

Learn more about the Microsoft MSRC Researcher Portal for AI vulnerability disclosure.

Guardians of AI Safety: A Cross-Functional Movement

Raji’s vision goes beyond Microsoft. After leading a standing-room-only “Birds of a Feather” session at BSides San Francisco, she launched a community Discord called “Guardians of AI Safety.”

“The goal is to bring people with different perspectives together—AI researchers, developers, enthusiasts, red teamers, and policy folks.”

This space is designed for ongoing conversations, shared learnings, and problem-solving across technical and non-technical domains. Her approach blends cybersecurity fundamentals with a liberal arts mindset.

Want to get involved? Reach out to Raji on LinkedIn to join the Guardians of AI Safety community.

Threat Actors, Language Fatigue, and Iterative Defense

The episode also touches on the emergence of AI-powered phishing attacks, the fatigue of constant AI discussions, and the challenge of maintaining momentum.

“We’re no longer dealing with misspelled phishing emails. Threat actors are using AI to craft better lures.”

To combat this, Raji advocates for clear shared language, realistic expectations, and the creation of small, focused workstreams rather than boiling the ocean.

“This can’t be a one-and-done panel. This is an ongoing conversation.”

The Path Forward: AI as a Sociotechnical Challenge

Raji concludes with a reflection on the biggest recent win: widespread acknowledgment that AI isn’t just a technical challenge; it’s also deeply social.

“Just the acknowledgment that AI brings psychosocial harms and needs diverse perspectives is a huge step.”

Through public-private collaboration, coordinated research, and community accountability, she believes we can build systems that are not just technically sound—but also ethically grounded.

Watch on YouTube


That’s about all we have for this week. You can find us on Mastodon and Twitter/X @domaintools, and all of the articles mentioned in our podcast will always be included in our podcast recap. Catch us Wednesdays at 9 AM Pacific time when we publish our next podcast and blog.

*A special thanks to John Roderick for our incredible podcast music!