Shai Mendel, CPO and co-founder of Nagomi Security, brings a unique perspective to the world of cybersecurity. With roots in national security, his career has evolved into building technologies that help organizations reduce their cybersecurity exposure and optimize the security infrastructure they already have in place.
In this interview, he reflects on the early experiences that shaped his approach, the challenges organizations face in balancing control and complexity, and how AI is both a transformative tool and a potential liability when misused. From the illusion of control to the future of autonomous security systems, Shai shares clear, actionable insights into what it really takes to stay secure in an increasingly unpredictable landscape.
You began your career in national security for the Israeli Prime Minister’s Office. Was that a deliberate choice, or did it happen by default?
To answer your question, it started with compulsory military service, so there wasn’t much of a choice initially. But in truth, I had always been fascinated by covert operations. I read countless books about that world as a kid, so I was actually excited for the opportunity.
What really stuck with me from that time was the mindset: nothing is impossible. That environment teaches you that with persistence and the right team, you can find a solution to any problem. That belief has stayed with me ever since.
That mindset clearly influenced your approach to cybersecurity. You’re now the CPO and co-founder of a company focused on exposure management and data security. What does that actually mean in practice?
The company is called Nagomi, which means “balance.” That word reflects our philosophy. We help organizations reduce security noise and risk, not by adding more tools, but by maximizing the tools and talent they already have. It’s about achieving clarity in complexity.
I saw a recent example where Microsoft Defender was compromised. From your perspective, how could your approach help in a situation like that?
Almost all of today’s high-profile breaches, including that one, could have been prevented with existing tools. It’s rarely a zero-day vulnerability or some advanced attack. It’s basic hygiene.
The real issue is that many organizations don’t fully use the capabilities they already have. If they had better visibility and prioritization, if they could get more out of what’s already in place, many breaches would be avoidable. That’s exactly where we come in.
You’ve spoken about the “illusion of control” in cybersecurity. Do organizations really have any meaningful control anymore?
That’s a very relevant question. What we often see is a false sense of security. Organizations assume their tools are functioning correctly and that they’re fully protected, but that assumption can be dangerous.
Yes, some control exists. But fully understanding that control, and its limits, is a different conversation. The illusion comes from trusting tools without verifying their effectiveness.
What’s your perspective on digital privacy? With the technology available today, is privacy even real anymore?
It’s a big question. Philosophically, especially with social platforms, privacy has been eroded. But in the broader threat landscape, the reality is this: the entry barrier to launching sophisticated attacks has dropped significantly, thanks to AI.
At the same time, the tools available to defenders have also improved—but everything is evolving so rapidly: threats, defenses, attack surfaces. It’s easy to feel overwhelmed.
That’s why we emphasize a pragmatic approach: start by getting the most out of what you already have. AI and human teams working together can make a meaningful difference.
You’ve also warned that AI agents can backfire in security contexts. Why does that happen?
It usually comes down to using AI for the sake of using AI. Like any new technology, it can be misapplied. Organizations sometimes try to automate the wrong things, and that misalignment becomes obvious quickly.
When that happens, they waste time and money without generating meaningful impact, and worse, they miss the opportunity to invest in something that could have truly helped. That disconnect often leads to frustration and disappointment, especially among teams who were excited about what AI could deliver.
Can you share an example of how AI and humans are working well together in your world?
Absolutely. Here’s a real scenario. One of our clients asked our platform, “How protected am I against Scattered Spider?” Scattered Spider is a well-known attack group.
Our AI-driven system automatically pulled in relevant threat intelligence, scanned the client’s environment for vulnerabilities related to that attacker, identified misconfigurations, prioritized them based on actual risk, and proposed remediations, all while maintaining a clear, natural conversation with the user.
That’s the kind of collaboration between humans and AI that the industry should move toward.
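To make the shape of that workflow concrete, here is a minimal, hypothetical sketch in Python of the assessment loop Shai describes. This is not Nagomi's implementation: the `THREAT_INTEL` mapping, the `Control` structure, and the `assess` function are illustrative assumptions, with the group's techniques hard-coded where a real system would query live threat intelligence and each tool's API.

```python
from dataclasses import dataclass

# Hypothetical threat intel: techniques attributed to an attack group.
# A real system would pull this from a live threat-intelligence feed.
THREAT_INTEL = {
    "Scattered Spider": ["phishing", "mfa_fatigue", "remote_access_tools"],
}

@dataclass
class Control:
    name: str            # a deployed security tool or policy
    covers: set          # techniques this control can mitigate
    configured_ok: bool  # is it actually configured to do so?

def assess(group: str, controls: list[Control]) -> list[str]:
    """Return remediation suggestions for a group, highest risk first."""
    findings = []
    for technique in THREAT_INTEL.get(group, []):
        covering = [c for c in controls if technique in c.covers]
        if not covering:
            findings.append((2, f"No deployed control covers '{technique}'."))
        elif not any(c.configured_ok for c in covering):
            names = ", ".join(c.name for c in covering)
            findings.append((1, f"'{technique}' is covered by {names}, "
                                f"but the control is misconfigured."))
    # Prioritize: uncovered techniques (2) before misconfigured ones (1).
    return [msg for _, msg in sorted(findings, reverse=True)]

# Example environment: the tools exist, but one is not doing its job.
controls = [
    Control("email_gateway", {"phishing"}, configured_ok=True),
    Control("identity_provider", {"mfa_fatigue"}, configured_ok=False),
]

for suggestion in assess("Scattered Spider", controls):
    print(suggestion)
```

The point of the sketch matches the point of the anecdote: the value is not in new tooling but in joining threat intelligence with an inventory of what is already deployed, then ranking the gaps by risk.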
Do you follow any innovators in AI and cybersecurity who particularly inspire you?
I follow quite a few, though more communities than individuals, both in and outside of Israel. One example is the community around n8n, an automation tool I’d recommend.
But really, the most important thing is to keep learning. Whether it’s through communities, podcasts, or just following people on X or LinkedIn, what matters is staying curious and staying updated.
There’s so much noise in the security and AI space today. How do you filter what to trust?
It’s hard, but the best way I’ve found is word of mouth. Peer recommendations are still the most trusted source. If someone you know and respect tells you a product or a data source is reliable, that still carries more weight than any review or website.
Many CISOs I know operate the same way. Trust spreads through networks, and that’s how I evaluate most new tools and platforms as well.
You’ve emphasized the idea of getting more out of what we already have. What does that look like in practice for a security team?
In our domain, there are a few key things security leaders want to validate:
- Are existing tools covering all the relevant assets?
- Are those tools configured correctly?
- Are basic security hygiene practices being enforced?
- Are misconfigurations and vulnerabilities being prioritized effectively?
- And can as much of this as possible be automated?
When those pieces come together, organizations get the most out of both their tools and their teams. In cybersecurity, success is measured by exposure reduction. That’s the North Star.
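To ground the first two items on that checklist, here is a small hypothetical audit loop in Python. The inventory dictionaries and the `audit` function are illustrative assumptions; in practice each check would query the tools' own APIs, and the prioritization and automation questions would layer on top of findings like these.

```python
# Assets the organization expects its tools to protect (hypothetical data).
ASSETS = {"web-server-01", "db-server-01", "laptop-042"}

# Which assets each deployed tool actually reports on (hypothetical data).
TOOL_COVERAGE = {
    "edr_agent":    {"web-server-01", "laptop-042"},
    "vuln_scanner": {"web-server-01", "db-server-01"},
}

# Hygiene settings each tool should enforce, versus what is actually set.
EXPECTED_CONFIG = {"edr_agent": {"tamper_protection": True}}
ACTUAL_CONFIG   = {"edr_agent": {"tamper_protection": False}}

def audit() -> list[str]:
    issues = []
    # 1) Are existing tools covering all the relevant assets?
    for tool, covered in TOOL_COVERAGE.items():
        for asset in ASSETS - covered:
            issues.append(f"{tool}: asset '{asset}' is not covered")
    # 2) Are those tools configured correctly?
    for tool, expected in EXPECTED_CONFIG.items():
        for key, value in expected.items():
            if ACTUAL_CONFIG.get(tool, {}).get(key) != value:
                issues.append(f"{tool}: '{key}' should be set to {value}")
    return issues

for issue in audit():
    print(issue)
```

Even this toy version surfaces the two failure modes the interview keeps returning to: assets no tool sees, and tools that are deployed but not configured to deliver what they promise.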
As we move toward more automation, what role will humans play in cybersecurity?
They’re not going anywhere. In fact, their role is evolving in meaningful ways.
I believe human analysts will take on three key responsibilities:
- Building and fine-tuning AI agents—ensuring the logic is up-to-date and effective.
- Managing AI operations—acting as team leads to AI “juniors” that handle mid-level tasks.
- Tackling complex cases—handling nuanced threats that AI can’t yet manage.
As AI matures, humans will go one level above—overseeing, steering, and elevating the system.
What do you envision for the future of AI in cybersecurity—in the next five years?
In the next two years, we’ll see a lot of augmentation—AI working alongside humans as copilots, assisting and learning.
In five years, I expect we’ll see far more autonomy. Tasks that are assisted today will be fully automated. That shift will elevate the role of humans, who’ll be tasked with building, managing, and refining those autonomous systems.
What advice would you give to leaders, whether at startups or large enterprises, who are exploring AI in their security strategy?
My best advice: just start.
There’s no perfect playbook. It’s okay to try, fail, learn, and iterate. Waiting for certainty will only hold you back.
Start experimenting. The learning will come from doing.
To end on a big-picture question: how secure are we, really?
It’s a tough one.
People often feel more pessimistic than the reality justifies. And I get why: it’s a scary space, and we don’t always know what’s coming next.
But I’m optimistic. I truly believe that defenders will learn to use AI better than attackers in the long term. And when that happens, we’ll be more secure than we are today.