
“You can't make it a people problem and a tech problem. It's got to be holistic,” says Ryan Fritts, CISO of ADT. For a security leader charged with protecting a brand synonymous with trust, Fritts isn’t just battling breaches or bugs; he’s confronting a world where scale itself is the threat.
In the past, cybersecurity was architected around the concept of perimeter defense. But with cloud-first architectures, those walls have crumbled. “The whole IT ecosystem has been kind of deconstructed and reconstructed in services that are hosted by third parties,” Fritts noted. Today’s CISO isn’t guarding a castle. They’re monitoring an open city of endpoints, all scattered across someone else's infrastructure.
ADT, with over 20,000 employees and countless connected systems, generates a flood of data that challenges traditional security operations. Fritts describes it simply: “When you have this many systems and this volume of data sets, the data explosion really hurts.”
Five to ten years ago, the security problem was basic: could you even get the logs? Log aggregation was the bottleneck, and any AI application was merely speculative. But those days are long gone. “You're no longer sifting through a mountain of hay to find a needle. You're sifting through a pile of needles,” Fritts explained. While that may sound like progress, the reality is more nuanced.
With so much interconnectedness, the failure points have multiplied. Legacy systems and static defenses falter under the complexity. And as defenders reach for AI to automate detection, attackers do the same, but faster and often more creatively.
“When something gives you efficiency to see an operation, it doesn't just give it to you. It gives it to everybody,” Fritts warned. This parity of power has altered the equation. For the first time, attackers using generative AI can convincingly imitate legitimate business communications.
“A lot of these [phishing attempts] originate from non-native English speakers targeting native English speakers,” he noted. But with AI, language barriers vanish. An attacker who speaks no English at all can generate persuasive, human-like dialogue to socially engineer an unwitting employee.
In response to this evolving threat matrix, ADT has shifted its focus toward leveraging AI not as a catchphrase but as a strategic imperative. “The advancements that have really been happening on the AI and analytics front have enabled more efficient deployment of resources,” Fritts said.
The security team can now apply AI models to establish behavioral baselines, identify anomalous activities, and expose threats embedded in legitimate workflows. For example, if a trusted vendor sends a change in payment instructions, AI helps assess whether that behavior deviates from normal patterns, flagging potential fraud that legacy rule-based systems would miss.
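The vendor-payment example above boils down to baselining: compare a new request against that vendor's history and flag deviations a static rule would miss. A minimal sketch of the idea follows; the field names and thresholds are illustrative assumptions, not ADT's actual schema or models.

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    vendor: str
    account: str   # destination account on the invoice
    amount: float

def is_anomalous(request: PaymentRequest,
                 history: list[PaymentRequest]) -> bool:
    """Flag a payment request whose destination account or amount
    deviates from the vendor's established baseline."""
    past = [p for p in history if p.vendor == request.vendor]
    if not past:
        return True  # no baseline at all -> route for human review

    # A change in payment instructions is a classic invoice-fraud signal
    if request.account not in {p.account for p in past}:
        return True

    # Amounts far outside the vendor's historical range are suspicious
    amounts = [p.amount for p in past]
    mean = sum(amounts) / len(amounts)
    var = sum((a - mean) ** 2 for a in amounts) / len(amounts)
    std = var ** 0.5
    return std > 0 and abs(request.amount - mean) > 3 * std
```

A real deployment would learn richer baselines (timing, approvers, communication patterns) rather than two hand-picked features, but the shape is the same: model normal, then surface the deviation.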
“How do you find and identify that, and how do you prevent it before it becomes a loss? That is where AI and analytics really shine,” Fritts emphasized. The defensive goal isn't just precision but contextual awareness. Whether it's evaluating irregular network usage or detecting odd transactional patterns, AI helps security analysts focus on the exceptions that truly matter.
Still, Fritts tempers expectations. AI doesn’t eliminate false positives. It doesn’t solve every problem. But it dramatically reduces noise, making the complex task of enterprise defense more manageable.
Despite these defensive gains, Fritts is deeply concerned about the speed with which adversarial AI is evolving. Deepfake audio, for example, is a looming nightmare. “With a lot of AI, deepfake technology, how can you possibly train when [these technologies] can give you something so believable that it's indiscernible from reality?”
That threat strips away many of the social engineering defenses built on employee vigilance. Looking at phone numbers? Spoofable. Listening to voice tone? Fabricated. Even training begins to lose efficacy in the face of highly personalized, AI-generated threats.
“It’s just to the point where there's no technology that you can deploy... Are we gonna give people secure phones and two tin cans and a rope when they want to call somebody?” Fritts asked, half joking but fully serious.
He calls this moment the “cat ahead of the mouse,” where threat actors are outpacing defenders in sophistication. And that gap widens with every passing month as generative AI grows more accessible.
AI’s promise isn’t just in detection but in anticipating threats. “The ability of AI and analytics to look over a problem, know what normal looks like, and call out things helps turn the pile and the mountain of hay that you're trying to find the needle in into the pile of needles,” Fritts said. Even with some “hay” still in the mix, the contextual signals uncovered by AI reduce guesswork.
ADT’s use of AI spans from network traffic anomaly detection to communications analysis, surfacing the outliers that pose real danger. The shift isn’t just technological but cultural. Security teams must now think probabilistically, not deterministically. The question is no longer “Did X happen?” but rather “Is X normal?”
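The contrast between the two questions can be made concrete. A minimal sketch, using network egress volume as an assumed example signal: the legacy rule asks whether a fixed threshold was crossed, while the probabilistic check asks whether the value is abnormal for this host's own history.

```python
import statistics

def deterministic_check(bytes_out: int, limit: int = 10_000_000) -> bool:
    """Legacy rule -- 'Did X happen?': a fixed, global threshold."""
    return bytes_out > limit

def probabilistic_check(bytes_out: int,
                        baseline: list[int],
                        z_cutoff: float = 3.0) -> bool:
    """Behavioral check -- 'Is X normal?': compare against the
    host's own historical traffic via a simple z-score."""
    mean = statistics.fmean(baseline)
    std = statistics.pstdev(baseline)
    if std == 0:
        return bytes_out != mean  # flat history: any change is notable
    return abs(bytes_out - mean) / std > z_cutoff
```

A host that normally sends ~1 KB and suddenly sends 5 KB never trips the fixed limit, but it is wildly abnormal against its own baseline; that is the exception an analyst should see.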
Yet, amidst all the advancement, Fritts remains skeptical of the overpromised, overhyped AI narratives. “Everybody assumes you drop ChatGPT into a product, and magically, it's a hundred times better... That's not true,” he cautioned. AI can be transformative, but only when grounded in clear problem-solving logic.
He likens AI’s marketing sheen to modern snake oil: a magic solution in search of a problem. “There’s only so much bandwidth. You're gonna have to do POCs and evaluations,” he said, calling for pragmatism amid the noise.
Asked about AI’s long-term impact, Fritts doesn’t flinch: the biggest advantage may lie with attackers. “The ability to generate an adversarial AI that can know what various controls might exist and how to best evade them… is going to be the thing that is most impactful.”
In this landscape, no one is immune to manipulation. “I could be tricked. You could be tricked. Anybody listening could be tricked,” he said. Human fallibility is the weak link AI-savvy adversaries will exploit.
The solution? Strategic clarity, not panic. Understanding where AI helps, where it falls short, and where attackers might go next. “Be passionate about understanding how things work,” Fritts advised, “and break them down into the smallest possible constructs.”
For ADT, that construct begins with trust. And in a world where deepfakes, prompt-engineered scams, and mountains of network noise reign, that trust is protected not just with sensors and scripts but with AI that sees what humans can’t and calls it out before it’s too late.