On the 22nd episode of Enterprise AI Defenders, hosts Evan Reiser and Mike Britton, both executives at Abnormal Security, talk with Joe Silva, former Chief Information Security Officer at JLL. JLL is a commercial real estate company operating in 84 countries worldwide. The company has over 100,000 employees, $20 billion in annual revenue, and ranks #193 on the Fortune 500. Managing billions of square feet of property worldwide, JLL delivers a full suite of services, including property management, leasing, capital markets, and real estate technology solutions. In this conversation, Joe dives into the realities versus the hype of AI in cybersecurity, AI’s role in shifting the balance between human judgment and automated systems, and AI’s potential to solve long-standing defender blind spots.
Quick hits from Joe:
On the new attack surface presented by AI: “If I look at how corporate functions at large enterprises, HR, finance, were using RPAs (robotic process automations) to automate so much of this work, and now you look at AI agents as essentially hyper-aware RPAs, it's a natural evolution. RPAs themselves created a massive attack surface, and now we just start moving all of that to AI because we're completely taking the human out of the loop.”
On the increasing negative impact of AI cyberthreats: “Criminals can leverage AI to create highly bespoke and tailored fraud to individuals whose identities they can stitch together across multiple data sets. Organizations will start feeling the impact of AI abetting criminal activity, and that will raise the consequences.”
On areas that AI is moving the needle: “Gen AI is making it a lot easier for providers to make more information accessible and provide more context in tools…Where we see Gen AI being helpful is the ability to train machine learning models, and actually get more utility out of machine learning. We've been hearing ML and AI for the last 10 years as buzzwords associated with products and the utility of ML has improved due to AI.”
Book Recommendation: Five Years to Freedom by James Rowe
Evan: Hi there and welcome to Enterprise AI Defenders, a show that highlights how enterprise security leaders are using innovative technologies to stop the most sophisticated cyber attacks. In each episode, Fortune 500 CISOs share how AI has changed the threat landscape, real-world examples of modern attacks, and the role AI can play in the future of cybersecurity. I'm Evan Reiser, the CEO and founder of Abnormal Security.
Mike: And I’m Mike Britton, the CISO of Abnormal Security. Today on the show, we're bringing you a conversation with Joe Silva, Former Chief Information Security Officer at Jones Lang LaSalle.
JLL is a commercial real estate company operating in 84 countries around the globe. The company has over 100,000 employees, $20 billion in annual revenue, and ranks #193 on the Fortune 500. Managing billions of square feet of property worldwide, JLL delivers a full suite of services, including property management, leasing, capital markets, and real estate technology solutions.
In this conversation, Joe dives into the realities versus the hype of AI in cybersecurity, AI’s role in shifting the balance between human judgment and automated systems, and AI’s potential to solve long-standing defender blind spots.
Evan: Well, Joe, first of all, thank you so much for joining us today. Mike and I have really been looking forward to the episode. Do you mind giving our audience a bit of an overview of your career and maybe how you got to where you are today?
Joe: Sure. Well, for the last few months, I've been CEO of a stealth startup that I cofounded. But prior to that, I had been the CISO at Jones Lang LaSalle, the big global commercial real estate company, for about three years until July. And previous to that, I led cybersecurity at TransUnion. Getting there was a transition from the vendor side, where I had been at Symantec and previously iSIGHT Partners before they were acquired by FireEye. That was actually my first job in the cybersecurity industry.
I came there from government, but, you know, I think I was pretty fortunate to start off in cybersecurity at a time when iSIGHT in particular really kind of changed the industry with an adversary-intelligence-focused approach to cybersecurity. I think I really benefited from that when I went on the defender side, in leadership roles at TU and JLL.
Evan: And maybe for our audience who's not familiar, do you mind explaining what JLL does? Like, what's the business, or what kind of customers do you guys serve?
Joe: So JLL does literally everything associated with commercial real estate globally. 84 countries we were in, and still are. It's every service associated with commercial real estate: managing properties, developing, and providing applications to customers to manage all of their real estate assets, both property, associated contracts, and technologies within buildings, and they have real estate holdings themselves. So it's a very large, complex global business, servicing the biggest companies in the world, as well as smaller real estate owners; JLL services their buildings with leasing, facilities management, all manner of services, including capital markets services associated with properties.
Evan: So, Joe, we're sitting here in 2024. It's very hard to do any sort of podcast without mentioning, you know, the specter of AI, right? So we've got to talk about AI, obviously. That's, like, our jobs.
Um, what do you think is kind of like real versus hype, right? There's a lot of kind of media about, you know, criminals using AI. There's, you know, every vendor in cybersecurity is claiming to use AI. Some of it's true, some of it's not. Where are we in, kind of, that hype cycle and, you know, what, what do you think, you know, some of your peers should be paying attention to, what do you think they should be ignoring?
Joe: Yeah, well, I guess what I'll say is, I think it's axiomatic, if you look at the history of cybersecurity, that attackers are always outpacing defenders, right? We're always playing catch-up with attackers. They have less bureaucracy, and their SDLC is faster than large enterprises' in particular. And they don't need to be successful all the time; the cost of failure is a lot lower for attackers.
So that's true, and if it's been true historically in cybersecurity, there's no reason they won't also be ahead with regards to the use of AI. Criminal organizations and nation-states are not worried about data privacy concerns with the use of AI. They're going to leverage it much faster and more effectively than defenders will.
On the tool side, what's also real is two things. One, obviously, Gen AI is making it a lot easier for providers to just make more information accessible and provide more context in tools. Also, you know, I'm doing a stealth startup now, and we're focused on third-party software security, and where we see Gen AI being helpful is the ability to train machine learning models and actually get more utility out of machine learning. I mean, we've been hearing ML and AI for the last 10 years, right, as buzzwords associated with products. I think more than anything, the utility of machine learning, at least, has improved due to AI.
Now, where I think there's a lot of hype is the ability to detect novel attacks. I have not seen any evidence of that. And the ability to write secure code. I mean, I could talk about this at length; there's not enough secure code in the world to train Gen AI on to write secure code.
Evan: Well, there's a lot of stuff that people think is secure, but then we find out, you know, usually at some point, actually it wasn't.
Joe: That's, that's right.
Mike: Have you seen any actual examples of where attackers are leveraging AI or technology to be more effective in their attacks? Maybe some specific examples where you looked at the attack and thought, you know, that's a pretty unique way they're attacking this situation.
Joe: Sure. The most firsthand experience I've had is with social engineering attacks, in a couple of different ways. One is obviously the sophistication of deepfakes; it's increasing tremendously. But also the targeted nature of these attacks, in terms of their ability to target users with very realistic lures, understand how they communicate, how they do business, what terminology they're using. The days of social engineering attacks using broken English are over.
So that's, I think, almost an unsolvable problem, because it's essentially AI taking advantage of the ultimate vulnerability in any organization, which is users and their trust in their colleagues. And it's been very sophisticated that way.
What I've seen from others, and heard about, is the fact that AI is just creating almost a TTP (tactics, techniques, and procedures) proliferation problem, where the barrier to entry for attackers to use attacks that had previously been too sophisticated for them is falling.
The spray-and-pray type attacks that would happen against applications over the internet, or with lures, now have a much higher hit rate, iterating through failure much faster and bypassing defenses much faster than even some technology providers can manage. So that's really what it's creating.
You're seeing almost nation-state or very sophisticated organized-crime TTPs becoming accessible to what had been kind of ankle-biter attackers that were really only taking advantage of organizations that had pretty gaping controls failures.
Evan: So you can imagine a world where a lot of configurations, the infrastructure, could at some point be fully optimized, right? It is theoretically possible to patch every piece of software and make the code perfectly secure.
I know it's at least theoretically possible; practically, probably not. But humans, on the other side, you can't patch the humans, right? Like you said, there's some inherent vulnerability. So what does that mean as criminals get more access to these tools and the content they generate becomes more indistinguishable from authentic human-generated content?
Like how do we do business in like three years or five years, right? How do we communicate? How does that work?
Joe: It can very well get so bad that humans just have to be taken out of the loop. Ultimately, when you look at so many of these attacks, they're largely aimed at financial fraud, and it's always some breakdown in financial controls, cyber-enabled, but a breakdown in financial controls involving humans. So humans start to come out of the loop, both in their ability to detect fraud and to vet it. And when that happens, then everything is just going to happen largely via AI. I mean, this is a path; I'm not sure that it's definitely going to happen, but I could very well see it. If you say that humans are an unpatchable vulnerability, all you can do is remove the human from the equation, like you said.
So then, I mean, the dark thought is if we get to a point where we're having to leverage AI for all of these core processes, then what's the worst that can happen?
What scares me is model poisoning at scale. Think about what's going on right now, where the nature of social discourse is that we have a hard time aligning on objective truth as a society.
Well, when that starts to occur in technologies that we rely on all the time... you and I are pretty confident right now that each of us could pull up our maps app and navigate someplace correctly, but if we start to lose confidence in those basic technologies, I think that could take us to a very dark place.
But again, what does that do? It probably just accelerates the use of technology in lieu of human judgment, when human judgment ends up being a vulnerability.
Evan: And so you're saying the one potential path that resolves this issue is kind of removing the human from the loop, of sorts.
And so, you know, today, obviously one of the most common types of social engineering is CEO impersonation, right? People impersonate me, they email people on Teams, say, hey, please pay this invoice. Today, like you're saying, the reason that is an attack vector is because there's some human that has the ability to influence the payment process. So you're saying if that's gone, now it's just either a mechanical system or some AI accounts payable, right? The AI accounts payable is like: we don't pay invoices for people without purchase orders, we don't pay purchase orders to scam companies in foreign countries, right? Is that kind of how you envision this?
Joe: That's exactly what I'm saying. All of the vetting, the anomaly detection, is occurring with some AI-enabled technology, and humans are completely out of the loop.
Evan: You know, if you said that three years ago, everyone would've called you crazy. They'd be like, Joe, that's science fiction. Now you're looking at some of the technology, like these autonomous agents, and it seems like that's the path we're on, right?
What used to feel like 10 years in the future now feels like a couple of years in the future. So is that where we're going?
Joe: That's where I can see us going. If I look at how corporate functions at large enterprises, HR, finance, were using RPAs to automate so much of this work, now you look at AI agents as essentially hyper-aware RPAs.
Evan: Yeah.
Joe: It's a natural evolution.
Evan: It's like RPAs plus flexible judgment, like some thoughtfulness.
Joe: And RPAs themselves created a pretty massive attack surface, in my mind. Now we just start moving all of that to AI because we're completely taking the human out of the loop.
It's going to be organization by organization, right? So many times you'll see an organization have a traumatic fraud event. I think there were a number of British companies in the spring where deepfakes leveraging the CEO impacted them pretty significantly.
And I think you see it a lot. Attackers are also very smart at knowing who to target culturally, because certain cultures are much more hierarchical, and people are not going to question something that comes from the CEO, whereas in others they may question a lot more because it's more socially acceptable. But I could see an organization having a traumatic event, and there are going to be vendors out there offering humanless payment processing; it would be enticing to adopt those technologies after you've had a traumatic human-based fraud event.
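To make the humanless payment processing idea concrete, here is a minimal sketch of the deterministic floor such a pipeline might enforce before any AI anomaly model weighs in: the invoice-to-purchase-order checks Evan describes above. All class names, fields, and thresholds here are hypothetical, for illustration only, and not any particular vendor's product.

```python
from dataclasses import dataclass

# Hypothetical records, not a real vendor schema.
@dataclass
class PurchaseOrder:
    po_number: str
    vendor_id: str
    amount: float
    bank_account: str  # account on file for this vendor

@dataclass
class Invoice:
    po_number: str
    vendor_id: str
    amount: float
    bank_account: str  # account the invoice asks to be paid into

def vet_invoice(inv: Invoice, open_pos: dict[str, PurchaseOrder],
                amount_tolerance: float = 0.05) -> list[str]:
    """Return reasons to block payment; an empty list means pay."""
    flags = []
    po = open_pos.get(inv.po_number)
    if po is None:
        return ["no matching open purchase order"]
    if po.vendor_id != inv.vendor_id:
        flags.append("invoice vendor does not match PO vendor")
    if abs(inv.amount - po.amount) > amount_tolerance * po.amount:
        flags.append("amount deviates from PO beyond tolerance")
    if inv.bank_account != po.bank_account:
        # The classic BEC play: same vendor, new bank account.
        flags.append("payment account differs from account on file")
    return flags
```

An AI layer would sit on top of rules like these for anomalies a rule can't express, but note that the bank-account mismatch check alone blocks the classic impersonation wire fraud discussed above, with no human left in the approval path to socially engineer.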
Mike: You know, the one good thing about AI, especially on the attacker side, is I think it will potentially force us to change some very antiquated things. Like, it still shocks me today, you know, in 2024, that I have to go pay a vendor because they send me a PDF in an email, and then I wire transfer money that I hope we transferred to the right place. There's so many archaic things that were created decades and decades ago that really have not been forced to innovate. At some point, the way to pay people has got to evolve beyond what it is today, because it's so fraught with opportunities for fraud, opportunities for social engineering. It's mind blowing that that has not innovated.
Joe: Well, Mike, when the security incentives for using AI are aligned with the financial incentives, in terms of shrinking comp and benefits as a percentage of opex in the budget, when those incentives align, it's a pretty sure bet it's going to happen. Probably the only thing that might hold it back is regulatory requirements for publicly traded companies. The SEC is going to have something to say about financial audits.
Mike: Yeah. Well, I think the other inconvenient truth is, go look at any company that suffered a major breach: a year after that, their stock is usually higher than pre-breach. They still have as many customers. Their brand has really not taken a hit. And so until the market starts punishing bad hygiene and bad security practices, I feel like that's another impediment to technology innovation on the defense side as well.
Evan: But Mike, don't you think that will almost naturally happen, right? Because obviously the market will care about business performance. And as there's more enterprise software and more data gets digitized, the blast radius and the magnitude of impact of every breach goes up. It's just a matter of time, right?
It seems like today people maybe don't fully appreciate the consequences of some of these breaches. But it seems like in the future, where we're going, these breaches get bigger and scarier.
Mike: Potentially. I think the other problem is, I can't tell you how many letters I've gotten in the mail this year from some unknown third-party vendor that was compromised and my data was provided. I think I'm on like 15 different credit monitoring solutions right now, all for free. If you were in the government, you were part of the OPM breach. At this point, your data is everywhere. And so I think it's probably numbed a lot of us to the fact that I just assume my data is out there. I just assume whatever company I share my data with is going to lose it.
Joe: But there's an AI tie-in there too. I think one of the reasons that we're so inured to losing our data and getting these notifications is most of us don't really feel an impact.
Mike: Yeah.
Joe: The percentage of folks whose identity has been stolen, but who have actually experienced some consequences from it, is pretty low. But I think AI is going to change that.
Criminals understand they have data sets, whether they've stolen them or bought them from other criminals, and they can leverage AI to no longer run these kinds of spray-and-pray attacks, but to deliver highly bespoke and tailored fraud to individuals whose identities they can stitch together across multiple data sets.
I think individuals and organizations will start feeling the impact more through AI-abetted criminal activity, and that will raise the consequences for organizations and potentially the liability associated with losing data.
Evan: You're basically saying, like, if my data gets breached, I'm like, okay, I'm just going to get some more spam emails, I guess, because my email's out there, but there's really no consequence for me as a consumer.
You're saying that part of the problem is there's so much data that it's almost hard to leverage as a criminal. But AI makes it very easy to suck up large data sets and do stuff with them. And so now the value of that data to a criminal goes up; AI is a catalyst, and the consequence, the impact, goes up a lot. Is that kind of it?
Joe: That's what I'm saying. So all of these disparate data sets: maybe your mobile records are lost; your name, social, and address are lost. Well, the ability to combine those data sets, then do something like also combine it with your LinkedIn profile data, and come up with very bespoke lures and fraud, having stitched together various elements of PII that may end up being, you know, your security questions [GAP] and to do that at scale, where it's not just a ton of effort for criminals to do that for one person or a small group of folks, but they can just do it for everybody and have some AI-based technology do it for them. That's going to make us all feel it a lot more and take it a lot more seriously when we see one of our providers lose our data.
Evan: I see. So it's like, in the old world, maybe some retailer lost my transaction data, right? And the way the criminal used that in the old world was, they know I buy stuff, and they may have my email address, maybe my phone number. In the future world, they're having their GPT model scan through that and go, oh, this guy's a gamer, let's craft a very targeted phishing lure. It looks like, I don't know, open beta access for the video game that you bought 10 years ago, delivered in a very targeted email. So that's kind of the opportunity, right?
Where the content that's leaked is now used to personalize and increase the efficacy of these AI-generated attacks.
Joe: I've not seen AI be able to generate very novel things. What I've seen it able to do is take large data sets that were too difficult or too resource intensive to correlate and make sense of and do that pretty capably. So that's what I would expect criminals are doing as well.
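Joe's correlation point is easy to demonstrate with a toy sketch. These three tiny "leaked" data sets and their fields are entirely made up; individually each reveals little, but joined on a shared email address they yield a profile rich enough for a tailored lure.

```python
import pandas as pd

# Entirely fabricated records standing in for three separate breaches.
telecom = pd.DataFrame({"email": ["alice@example.com"],
                        "phone": ["+1-555-0100"]})
retailer = pd.DataFrame({"email": ["alice@example.com"],
                         "last_purchase": ["gaming console"]})
professional = pd.DataFrame({"email": ["alice@example.com"],
                             "employer": ["Acme Corp"],
                             "title": ["AP Manager"]})

# Two joins and the attacker has phone, purchase history, employer,
# and job role per victim, and the same code runs at corpus scale.
profile = telecom.merge(retailer, on="email").merge(professional, on="email")
print(profile.to_dict(orient="records"))
```

The point is not that the join is clever; it's that what used to be manual analyst work per victim is now a batch job.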
Evan: I agree. The only thing I would challenge is, I think the difference between what you just described and a novel attack goes away as that data set gets bigger. So it makes sense that at some point the AI criminal, or the criminal with the AI brain, gets more effective than the criminal without one.
Joe: Right. And there again, historically attackers have always been ahead of defenders and I expect it to be true here as well. They'll be executing these attacks before we really understand what's practical for them to execute and can implement defenses against it.
Evan: So I think in the short term we're in trouble, right? Because, like you said, historically criminals adopt new technologies faster. It's always been the case, and it'll be the case with AI. But long term, give us the bull case: why do the defenders win? Probably not next year, right, but five years from now? Is there a structural advantage that you think defenders have, or do you think the speed of technology acceleration is so high that the ability for criminals to mobilize new technologies becomes a persistent advantage? Like, what's the bull case here?
How do we win?
Joe: Here's the positive that I see. The natural advantage defenders should have is that we should know ourselves better than an attacker on the outside knows us. But if you look at the most persistent risks, the ones you can't seem to get away from no matter what tools you buy and implement, I would say one is probably tied to inventory gaps, because you can't secure what you can't inventory.
And I have yet to see an organization with an accurate and appropriately contextualized asset inventory. The other is, I'd say, third-party risk, where it's a black box what these services or applications we're buying from third parties are doing. So regardless of what controls we have at our disposal, we can't figure out how to array them against those risks, because they are a black box.
The advantage of AI, and the promise of it, is again the ability to take these disparate data sets and put them together, give us a contextualized inventory, tell us about the behavior and accesses of all of these third-party applications and services that we're using, so that we can proactively array controls and speed our time to detect, respond, and remediate. So I do think AI has a lot of promise there.
I mean, right now there are so many products that are incredibly effective, but only if you're applying them in the context of a highly accurate asset inventory. So the ability to actually get more security yield and risk reduction out of our security controls, because we'll know ourselves better with AI: that's how I think we leverage our natural advantage.
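As a rough illustration of the inventory reconciliation Joe describes, here is a minimal sketch that unions several inventory views and diffs each source against the union. The source names and hostnames are assumptions for illustration; a real pipeline would fuzzy-match identifiers rather than compare exact hostnames.

```python
# What IT thinks exists, what the cloud provider reports, and what
# actually has an endpoint agent installed (all hypothetical).
cmdb = {"web-01", "web-02", "db-01"}
cloud_api = {"web-01", "web-02", "web-03", "db-01"}
edr = {"web-01", "db-01"}

all_assets = cmdb | cloud_api | edr

shadow_assets = all_assets - cmdb   # exist, but never inventoried
no_edr_coverage = all_assets - edr  # no endpoint agent at all

print("not in CMDB:", sorted(shadow_assets))     # ['web-03']
print("no EDR agent:", sorted(no_edr_coverage))  # ['web-02', 'web-03']
```

Every control applied against that inventory silently excludes web-03, which is exactly the "you can't secure what you can't inventory" gap.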
Mike: And I think the problem, too, is AI is the marketing hype for everything. Anytime you go to a conference, every single vendor, bar none, is an AI vendor. So how do defenders cut through all the hype and really understand: okay, this solution, this technology is really leveraging AI to solve a problem, versus this one is window dressing, and they just have something sitting on top to make it look pretty?
Joe: For me, it's: given what I know and what I have at my disposal, what could I do with it if I had unlimited time and people? Those types of hypotheticals are, I think, where you can apply AI. AI telling me something I don't think I ever would have come up with and figured out, I'm not so confident there. So it's really about improving time to value, and doing it in a way that's much more contextualized with all the other information I have at my disposal.
Everybody is always trying to infuse intelligence into all of their operations, whether it's vulnerability management or their SOC. But I would say it's never as effectively done as it could be. Again, there's a huge data set of intelligence on attackers, on vulnerabilities, that AI can infuse into all of our operations and ensure we're using it as effectively as possible.
So that's where I see AI coming in. But AI coming up with a magical solution to attacks we've never seen before, I'm highly skeptical of that. What are we training it on that it's figuring this out? That's the question I would ask.
Mike: Well, a good sidebar, too: I think as cybersecurity folks, we've become so beholden to intelligence. And I think you kind of alluded to this earlier with AI and attackers. Do we get to a point where the speed and scale and specificity is so great with TTPs and IOCs (indicators of compromise) that maybe our old way of looking at threat intelligence is out the window, and we need to find a new, quote unquote, threat intelligence to find and understand and defend against attacks?
Joe: The half-life of the value of information is going to diminish tremendously. Again, if I look at application attacks where WAFs (web application firewalls) are bypassed, you would see attackers manually tweak their payload until they finally bypassed the WAF. That may have taken hours; in my experience, I'd seen it play out over the course of a couple of days, where a lot of successful attacks on applications over the internet are preceded by unsuccessful attempts.
Well, that whole attack cycle can now happen in minutes, and their ability to iterate through infrastructure and TTPs is now compressed entirely. So any intelligence on those IOCs, by the time you have it, is already of little to no value. So I do agree. Intelligence is going to have to be more focused on what AI is capable of, not what attackers have done in the past.
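One way to operationalize that shrinking half-life is to decay an indicator's weight exponentially with age, as in this sketch. The half-life value here is an assumption to be tuned per indicator type, not an established constant.

```python
from datetime import datetime, timedelta, timezone

def ioc_weight(first_seen: datetime, half_life_hours: float = 24.0) -> float:
    """Exponentially decay an IOC's value since it was first observed.

    If attackers rotate infrastructure in minutes to hours, a day-old
    IP or domain match should carry very little detection weight.
    """
    age_hours = (datetime.now(timezone.utc) - first_seen).total_seconds() / 3600
    return 0.5 ** (age_hours / half_life_hours)

# With a 24-hour half-life, a three-day-old indicator retains
# 0.5 ** 3 = 12.5% of its weight; stale IOCs fade instead of alerting.
three_days_old = datetime.now(timezone.utc) - timedelta(days=3)
print(round(ioc_weight(three_days_old), 3))  # ~0.125
```

This matches Joe's observation: by the time an IOC is shared, its weight may already have decayed to noise, while intelligence about attacker behavior and capability decays far more slowly.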
Mike: Obviously AI is going to disrupt the defender side as well. What's the mental model you take on how that impacts your workforce? How does that impact your future cybersecurity analysts? How does that impact the current staff? Obviously, I don't think AI is necessarily going to replace jobs, but it is going to have an impact on the workforce one way or the other. What do you see from a workforce perspective?
Joe: Yeah. I'd say one is just the expectation that we have to be bringing on people who, again, can answer that question: what would I do if I had 50 hours in a day? I need people to be able to answer that question, and then that will tell me what they can do with AI.
But I'll tell you, the thing that concerns me the most is, you're right, I don't think highly technical or expert roles are going to be replaced anytime soon by AI. But what we're seeing is a lot of lower-level junior roles where people are saying AI should be able to do this, and then I worry about what that's doing to the bottom layer of the talent pyramid, and where we're building the experts of tomorrow, if we're overly reliant on AI for what are maybe some of the rote tasks that interns or entry-level employees do, but that they need to do as the bedrock to build expertise upon. That does concern me.
Evan: Joe, this may be a surprise to you, but at the end of the episode, we like to do a quick lightning round just to get your quick hit takes.
So we're looking for the one-tweet answer. These questions are very hard to compress down to one tweet, but I can already tell you're someone who is very intentional and thoughtful about the words you use to communicate. So I think you're going to pass this lightning round test with no problems.
Mike Britton, you want to kick it off for us?
Mike: What advice would you have for a security person or security leader stepping into their very first CISO job? What's something they may overestimate or underestimate about the job?
Joe: What they're probably underestimating is what percentage of their job is managing up and out versus managing down. [GAP] And what they're probably overestimating is the extent to which they have control over the security of the organization.
Evan: What's the best way for CISOs to stay up to date with kind of the latest in AI?
Joe: To stay close to their people and not have too many layers of management between them and the people who are using it on a regular basis.
Evan: What do you think would be true about AI's future impact on cybersecurity that most other people think would be science fiction?
Joe: Well, one, because I've heard so many people say it: code won't be more secure. It'll just be insecure in different ways, because AI is going to find ways to exploit flaws that aren't exploitable now.
But two, I do think we are on a potential path where we will no longer be able to trust technology the way we do now because of the potential for like mass model poisoning.
Mike: So on the more personal side, what's a book you've read that's had a big impact on you and why?
Joe: Probably the one that's had the most impact on me, although I haven't read it in a few years, is Five Years to Freedom, James "Nick" Rowe's account of his time as a Vietnam POW and how he escaped. I've been in some pretty uncomfortable, shitty situations; nothing compared to anything that he endured. I think about it probably at least once or twice a month, whenever I'm feeling sorry for myself.
Mike: Alright, last question. Any advice you'd like to share to inspire the next generation of security leaders?
Joe: Well, honestly, if I can do it, anybody can; that's probably the one thing I would say. There's not a traditional path into this world, right? I don't come from a technical background. I come from a soft-skills, human-intelligence background.
But honestly, you're only as good as the network of people who are willing to work with you. I've been incredibly fortunate to have some really amazing people and minds work with me at multiple places. Any success I've had or will have will largely be because of them.
And I think folks need to start looking at who their network is, and who would be willing to work with them again, in terms of how they evaluate themselves and their capabilities as a leader.
Evan: Appreciate you sharing, Joe. Thank you so much.
Joe: No, it's my pleasure. Thanks for having me.
Mike: That was Joe Silva, former Chief Information Security Officer at JLL. I'm Mike Britton, the CISO of Abnormal Security.
Evan: And I'm Evan Reiser, the founder and CEO of Abnormal Security. Thanks for listening to Enterprise AI Defenders. Please be sure to subscribe so you never miss an episode. Learn more about how AI is transforming the enterprise from top executives at enterprisesoftware.blog, and hear their exclusive stories about technology innovations at scale.
This show is produced by Josh Meer. See you next time.