
If it sounds like science fiction, that’s only because the speed of change is outpacing the rules we thought we could trust. For Patti Titus, former Chief Information Security Officer and Chief Privacy Officer at Markel, the era of AI-enabled threats is already here, and it’s evolving fast.
“ChatGPT has given us a whole new landscape to think about how we provide the right training and guidelines to our employees,” she said. “Our adversaries are becoming more sophisticated at figuring out how to socially engineer people. It's becoming more pervasive.”
As CISO of a global Fortune 500 insurance and investment company, Titus had to forecast and monitor risk. Her dual role in cybersecurity and privacy made her attuned to both the technological and human vulnerabilities that AI has begun to surface. Whether through phishing emails crafted by generative models or synthetic logic attacks, she sees a future where security is no longer linear.
According to Titus, ChatGPT has changed both the way we evaluate content and the entire scope of social engineering. Traditional phishing relied on broken grammar, obvious links, or crude impersonation. Not anymore.
“Our adversaries are becoming more sophisticated,” she warned. “You used to be able to look at a phishing email and you’d see all kinds of red flags. That’s no longer the case.”
Now, AI-generated messages can mirror internal communications with frightening precision, spoof tone and phrasing, and even simulate ongoing conversations. Titus believes this means cybersecurity leaders need to reinvent how they train their workforce.
That means using AI not just as a defense mechanism, but as a way to upgrade awareness programs. Security training must evolve to reflect the increasing sophistication of attackers, mirroring the tactics employed by the threat itself.
AI acts as both an external threat vector and an internal dependency. Titus emphasized that companies deploying AI tools must take governance seriously, treating model oversight not as an IT process but as a critical security function.
“Model governance is going to be a necessity,” she said. “And inside that model, governance is a function of incident response.”
This represents a shift in mindset: from thinking of AI systems as utilities to recognizing them as agents capable of action and requiring controls. Titus sees reporting as the cornerstone of this strategy.
“We are going to have to teach our people to report faster [instead of] the ‘Oh, it's just phishing, I'm going to delete it’ mentality,” she explained. “Don’t do that. I want you to report everything.”
Why? Because signals that seem harmless in isolation, like a single phishing attempt, can become invaluable when aggregated. “All the data that you provide to us enriches our threat perspective so that we can see what's really happening.”
In this model, governance isn’t reactive. It’s proactive threat intelligence, built from patterns that only emerge through participation. Reporting becomes a form of collaboration between human judgment and AI analysis.
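The mechanics of that aggregation can be sketched simply. The example below is illustrative, not from the article: it groups individual employee phishing reports by sender domain and normalized subject line, so that three reports which each look like a one-off phish surface together as a coordinated campaign. All names and the threshold are hypothetical.

```python
# Hypothetical sketch: turning individual phishing reports into
# campaign-level signals. Names and thresholds are illustrative only.
from collections import Counter
from dataclasses import dataclass

@dataclass
class PhishReport:
    reporter: str       # employee who filed the report
    sender_domain: str  # domain of the suspicious sender
    subject: str        # email subject line as received

def detect_campaigns(reports, threshold=3):
    """Group reports by (sender_domain, normalized subject); any group
    at or above `threshold` is treated as a likely coordinated campaign
    rather than an isolated phish."""
    counts = Counter(
        (r.sender_domain, r.subject.lower().strip()) for r in reports
    )
    return [key for key, n in counts.items() if n >= threshold]

reports = [
    PhishReport("alice", "payr0ll-update.com", "Action required: payroll"),
    PhishReport("bob",   "payr0ll-update.com", "Action Required: Payroll"),
    PhishReport("carol", "payr0ll-update.com", "action required: payroll"),
    PhishReport("dave",  "random-spam.net",    "You won a prize"),
]
print(detect_campaigns(reports))
# Three employees reported the same lure, so it surfaces as one campaign.
```

No single report here would trip an alarm; only the aggregate does, which is exactly the "enriched threat perspective" Titus describes.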
Looking ahead, Titus envisions a more radical evolution in cybersecurity: a world where AI is both attacker and defender. “Can AI become pervasive enough to recognize itself as a threat?” she asked. “And then can it, in turn, think about how that AI is being developed and predict the next step before its counterpart predicts the next step?”
It’s a scenario where logic loops replace malware, where the battlefield isn’t just digital, but synthetic. “You are getting into a game of cat and mouse,” she said, “only much more sophisticated, like the cat and the mouse are playing chess.”
Titus believes organizations need to prepare for threat environments where AI agents learn from one another in real time, escalating not only speed but also complexity. Attacks won’t come as brute force, but as adversarial strategies calculated by models designed to adapt faster than humans can. And in that environment, static defenses will fail.
Despite the rise of automation, Titus is clear that humans are still at the center of both the problem and the solution. Training is where companies win or lose, and that means organizations must invest not just in new technology, but in the behavioral science behind secure decision-making.
She also believes that the relationship between security and business teams needs to change. Titus sees an opportunity for CISOs to lead beyond the bounds of firewalls and SIEM dashboards. “This is our time to educate, to influence, to reframe how the organization thinks about risk.”
Silence can mask emerging campaigns, new tactics, or sophisticated trial runs by attackers. What an employee might think is nothing might be the breadcrumb that helps a company find the real threat.
In this way, every employee becomes a sensor in the enterprise defense system. Combined with AI, that sensor network creates feedback loops capable of catching anomalies that neither machine nor human could detect alone.
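One way to picture that sensor network, as a rough sketch of my own rather than anything described in the article: treat the daily volume of employee reports as a time series and flag days that spike far above the trailing baseline. The window size and threshold below are assumptions.

```python
# Illustrative sketch: employee reports as a sensor stream, with a
# simple statistical check that flags days whose report volume spikes
# well above the recent baseline.
import statistics

def spike_days(daily_counts, window=7, z_threshold=3.0):
    """Return indices of days whose report count exceeds the trailing
    window's mean by more than `z_threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1.0  # avoid divide-by-zero
        if (daily_counts[i] - mean) / stdev > z_threshold:
            anomalies.append(i)
    return anomalies

counts = [4, 5, 3, 6, 4, 5, 4, 31, 5, 4]  # day 7 is an obvious spike
print(spike_days(counts))  # → [7]
```

The point is not the statistics but the division of labor: humans supply the signal by reporting, and automation supplies the pattern recognition no individual reporter could perform.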
Titus doesn’t shy away from the complexity of the AI era. She’s clear-eyed about what it demands from security leaders. Modern organizations are not just defending against software, but also preparing for intelligent systems that evolve.
That requires a new kind of operational playbook, one where trust is built through transparency, governance becomes a force multiplier, and resilience is shared across both human and machine layers.
From ChatGPT to chess-playing adversaries, Titus is preparing for an intelligence battlefield. And she’s doing it by putting governance, awareness, and human intuition back at the heart of security strategy.