CISO Success Story: A Real-Life Marvel ‘Superhero’ on AI Fighting Cybercrime

With cyber-villains weaponizing AI at an alarming rate, we asked former Marvel CISO Mike Wilkes how it can also be used to counter these threats.

Q&A

Despite the growing threats of hackers using generative artificial intelligence (AI) tools to infiltrate corporate networks, more security professionals now think the technology can also deter such threats.

[Photo: Mike Wilkes]

In fact, more than a third (34%) of organizations are either already using or are implementing AI security tools to mitigate the accompanying risks of generative AI, according to a Gartner survey late last year. And this month a major player in the AI tools arms race, Microsoft, unleashed Microsoft Copilot for Security, an AI assistant it says will help fight cybercrime much faster and more accurately.

With both white-hat and black-hat hackers turning to AI at an almost incomprehensible clip, Focal Point thought it would be useful to ask one of the sharpest minds in cybersecurity where things are headed.

With AI-assisted cyberattacks on the rise, it’s essential to know how exposed your organization really is. Here’s how – get a comprehensive risk score in just 5 days.

Meet Mike Wilkes (pictured above). As CISO for Marvel Entertainment from 2016 to 2018, Wilkes oversaw the security of networks safeguarding the filmmaking giant’s superhero assets, including The Avengers, Spider-Man, She-Hulk, and X-Men.

Nowadays, Wilkes – who also served as CISO for SecurityScorecard, Lark Health, and ASCAP – teaches courses on cybersecurity and technology innovation at NYU, runs a firm called The Security Agency that helps find jobs for displaced and fractional CISOs, and is a member of a World Economic Forum working group on quantum computing security. He also holds a master’s degree in philosophy from Stanford University, which he says makes him a theorist at heart, despite his 30 years of technical experience building critical infrastructure.

In a wide-ranging recent interview, we picked Wilkes’ brain on a variety of AI and cybersecurity topics. Here’s what he had to say:

(The following interview has been edited for clarity and length.)

What concerns you the most about hackers using AI to launch bigger and bigger attacks?

The fact that anyone can be a cybercriminal these days. It’s no longer just nation-states and organized crime with these capabilities. Everyone now has access to powerful, nation-state–level tools. It is the democratization of crime, and AI is enabling that democratization.

My grandmother could potentially become an elite hacker just by having the right tools and access to the AARP and local retirement community networks. And we as defenders are now seeing the return of bored teenagers launching attacks, some on their own and others because they are being bullied into doing so. Incident response teams are even now bringing high school guidance counselors into ransomware negotiations to help work out solutions.

Are you concerned about companies uploading sensitive data to AI assistants like ChatGPT?

Not so much. A little education and guardrails go a long way toward mitigating that risk.

People didn’t get up in arms about the security and information disclosure implications of allowing Grammarly or Google to review documents in real time. When you’re writing emails, systems know exactly what words you wrote and even use token calculations to figure out what your next word might be.

[Read also: Fight fire with fire – 3 major ways AI fuels your cybersecurity arsenal]

Data has been exiting companies for years. I often remark that data is like water flowing downhill: It will always find a way to get where it’s needed. But now it seems like the ubiquity and hype surrounding ChatGPT have led to this outsized concern, even though this stuff has been going on for a long time with other tools.

Are AI tools like Microsoft’s Copilot just the tip of the iceberg for countering cyberthreats?

I think so. Microsoft Copilot will help analysts waste less time on the arcane details of Windows event logs so they can spend more time adding value for their security operations centers (SOCs).

These kinds of assistants will keep coming because machines are really good at quickly parsing lots of data and helping to determine when there’s an anomaly and what might be a significant detail while taking a hell of a lot of tedium out of the lives of security analysts.
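To make that concrete, here’s a minimal sketch of frequency-based anomaly spotting over a stream of event IDs. The sample Windows event IDs and the threshold are invented for illustration; real SOC tooling is far more sophisticated:

```python
from collections import Counter

# Invented sample stream of Windows event IDs, for illustration only.
events = [
    4624, 4624, 4624, 4672, 4624, 4624,  # routine logons
    4624, 4624, 4672, 4624, 4625, 4624,  # 4625 = failed logon
    4624, 1102, 4624, 4624, 4624, 4624,  # 1102 = audit log cleared
]

counts = Counter(events)
total = len(events)

# Flag event IDs that are rare relative to the stream they appear in.
RARE_THRESHOLD = 0.10  # appears in under 10% of events
anomalies = [
    (event_id, count)
    for event_id, count in counts.items()
    if count / total < RARE_THRESHOLD
]

for event_id, count in anomalies:
    print(f"Windows event {event_id} seen {count}x of {total}: worth a look")
```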

How does that reduce risk? By allowing companies to crowdsource, create, and implement more meaningful policies and procedures. Identifying best practices and writing more-coherent runbooks and SOPs (standard operating procedures) has suddenly become much easier. Companies can conduct more-effective audits and access-rights reviews. They can find and show discrepancies between their policies and actual practices, and then fix them. And they can use AI to conduct more-effective penetration testing by introducing the capability for lightweight, continuous testing with automation, leaving the more interesting work for humans to perform.
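At its simplest, that lightweight, continuous testing might look like a scheduled script comparing what is actually reachable against what policy allows. A minimal sketch, assuming an invented host, port list, and policy – and only ever scanning systems you are authorized to test:

```python
import socket

ALLOWED_PORTS = {22, 443}           # what policy says should be open (invented)
PORTS_TO_CHECK = [22, 80, 443, 3389]
HOST = "127.0.0.1"                  # only scan hosts you are authorized to test

def is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in PORTS_TO_CHECK:
    if is_open(HOST, port) and port not in ALLOWED_PORTS:
        print(f"Policy gap: port {port} is open on {HOST} but not allowed")
```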

Beyond AI tools, what can be done to counter threats?

The bad guys have less friction since they don’t have to contend with governmental oversight or regulatory requirements like the rest of us. As such, they can innovate and pivot faster than we can.

Our collective cyber resilience, therefore, lies not in a single new tool, technology, or algorithm; it lies in information-sharing communities like ISACs (Information Sharing and Analysis Centers) and InfraGard. I think we need to double down on membership and participation in these threat-intelligence sharing groups.

[Read also: Ultimate guide to AI cybersecurity – benefits, risks, and rewards]

For us to be resilient as an economy and a digitally dependent society, we have to think about the private sector as the engine of our economy and recognize that all of that is vulnerable and prone to systemic risk. Systemic risk is an emergent property of complex systems that are highly connected and interdependent. The way we reduce that systemic risk is to act like a bunch of meerkats standing outside of our dens, each looking in different directions toward the horizon, watching for threats and signs of predators, and immediately barking an alarm as soon as we see something worth sharing with the rest.

That lessens the impact of cyberattacks overall and reduces exposure for the entire community.

Do you see any parallels between the Marvel universe and the cybersecurity world?

Oh, definitely. Maybe it’s the philosopher in me, but there are dystopias and utopias in our future. Technologies like AI are neither good nor evil unto themselves; they are just amplifiers of people’s intentions and visions for the future of humanity.

But they are, of course, like the proverbial double-edged sword. With them, we can author a more utopian future for humanity, with freedom, prosperity, and happiness for everyone – or we can go the other way, be selfish a-holes, and manifest all sorts of dystopias. The choice is ours to make, which is a theme of many of the stories found in comic books and movies.

In the Marvel Cinematic Universe, you have maybe 5,000 very principled characters, like Captain America, and some who are more complex, whose technology choices (Infinity Stones) gave birth to catastrophes, as in 2018’s Avengers: Infinity War and 2019’s Avengers: Endgame. A lot of characters are not so much villains as antiheroes; they are just people. Even Thanos (a villain played by Josh Brolin who sought to bring stability to the universe by using the power of six Infinity Stones to wipe out half its population) has a cadre of supporters for his utilitarian sense of ethics.

[Read also: How to prepare for the EU’s AI Act – start with your risk level]

Similarly, not everyone using AI in questionable ways is necessarily evil. Some are just like Thanos, reacting to something they feel is an imbalance or a danger to society writ large. They could be doing something with technology that they think will make the universe a better and more sustainable place, even if the rest of us do not agree with their goals or methods.

What we need is to ensure that all voices are heard and that we navigate our path forward with empathy, respect for one another, and a sense of shared fate. There is only one Earth for us to care for and, unlike video games, there are no do-overs.


TO LEARN MORE

Check out other exclusive interviews with security leaders in our “Success Stories” series.

Wendy Lowder

Wendy Lowder is a freelance writer based in Southern California. When she’s not reporting on hot topics in business and technology, she writes songs about life, love, and growing up country.
