
Why the Best AI Policies Start With Employee Education

Everybody’s talking about AI these days, but let’s face it: Very (very) few understand it. That’s why CISOs need to spearhead AI education at all levels — within IT teams, the C-suite, and across the organization. These four objectives are a great start.


Between the rapid rise of shadow AI and the growing number of AI-powered scams and cyberattacks, now (like, yesterday) is a critical time for businesses to prioritize AI education – and CISOs can take the lead.

In an ISC2 survey of more than 15,000 chief information security officers and other cybersecurity practitioners and decision-makers, 90% said they face skills shortages at their organization, and more than half said those shortages put their organization at significant risk. The number-one skills gap reported to the international cyber training and certification organization? AI.

CISOs are critical to fostering awareness around AI, building competencies, developing training programs, and creating safe-use guidelines. Without these efforts, risks like shadow AI – employees using AI tools or apps without IT’s approval or, worse, without IT’s awareness altogether – go from theoretical to all but guaranteed, says Jordan Rae Kelly, senior managing director and head of cybersecurity for the Americas at business management consulting firm FTI Consulting.


“Businesses and employees are going to find a way to use AI, and if they’re not informed or well-versed on those technologies, the company is going to lose control of how their information is being used and what they’re able to protect,” she says. For example, an employee might use AI-driven analytics tools to analyze customer data, sales numbers, or customer feedback, all of which put sensitive data at risk.

Visibility across all of an organization’s devices is essential to keeping these kinds of risks in check, but visibility alone puts the entire weight of securing the enterprise on understaffed, overstressed security departments. Comprehensive security requires a team effort.

To keep pace with the constantly evolving AI landscape, CISOs need to prioritize education at all levels – within their own department, in the C-suite, and across the organization. And this needs to happen before, during, and after the development of AI governance policies. These four directives are a good place to start.

1. Align with business priorities before tackling AI policies

To effectively shape how AI is managed, CISOs need to first educate themselves on the organization’s business model for AI, says Ali Aqel, cybersecurity consultant and AI security subject-matter expert at staffing firm ALKU.


“A CISO needs to sit with the business and different parts of the technology organization to understand what workloads they’re going to use for AI, what types of services they’ll use, the functionality of those, and the business and functional requirements necessary for AI,” he says.

In the context of AI usage, application workloads encompass the operations and activities that AI-driven applications need to execute – for instance, the workloads for data processing and analysis applications can include tasks like cleaning, organizing, and transforming large datasets to make them suitable for training AI models. “Once they have a clear understanding of this,” says Aqel, “the security team can start creating and aligning all of the requirements, standards, policies, and processes that are wrapped around that AI requirement.”
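To make the idea concrete, here is a minimal sketch, in Python with pandas, of what one such data-preparation workload might look like. The file, the column names (“label,” “customer_email”), and the specific cleaning steps are hypothetical illustrations, not drawn from any organization or tool mentioned in this article:

```python
import pandas as pd

def prepare_training_data(path: str) -> pd.DataFrame:
    """Load a raw CSV, clean and organize it, and return a model-ready frame."""
    df = pd.read_csv(path)

    # Organizing: normalize column names so downstream steps are consistent
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]

    # Cleaning: remove exact duplicates and rows missing the training label
    df = df.drop_duplicates()
    df = df.dropna(subset=["label"])

    # Privacy: drop direct identifiers before the data reaches a model
    # (hypothetical column; real pipelines would follow the org's data policy)
    df = df.drop(columns=["customer_email"], errors="ignore")

    # Transforming: scale numeric feature columns to the [0, 1] range
    features = df.select_dtypes(include="number").columns.drop("label", errors="ignore")
    df[features] = (df[features] - df[features].min()) / (
        df[features].max() - df[features].min() + 1e-9
    )
    return df
```

Even a small workload like this touches sensitive data at several points, which is exactly why security requirements need to be wrapped around it from the start.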

By understanding the organization’s business models, CISOs can begin to design education and risk-awareness programs that focus on the most relevant AI threats and opportunities, he says.

[Read also: 4 critical leadership priorities for CISOs in the AI era]

As a bonus, CISOs can then turn around to their chief financial officers and other executives and point to the ways those AI education programs support revenue growth and/or prevent loss. “One of the skills that is lacking within the profession is that ability to contextualize cybersecurity…as a business driver,” notes ISC2 executive vice president of corporate affairs Andy Woolnough on a recent episode of the Solution Spotlight podcast. “You’ve got to be able to convince the CFO; it’s as simple as that.”

2. Educate executives through AI policy development conversations

CISOs play a key role in educating the C-suite on the risks and benefits of AI, and these critical conversations must happen as (or better yet, before) organizations begin developing AI policy guidelines, Kelly says. CISOs should steer these discussions toward the security considerations that matter most.


“They should hit on topics that are important to the information security apparatus of the company, like data privacy, intellectual property, and how they’re looking at outputs of AI models and doing validation,” she says. “They should be very specific about what experimentation looks like and should have a role in the assessments on things like bias and fairness.”

Driving these conversations is critical because they help executives understand the risks and benefits of AI – and ultimately secure their buy-in for AI risk management, Aqel says.

[Read also: 5 key goals to guide cybersecurity budgets in 2025]

As AI continues to evolve, CISOs should keep the C-suite informed with updates about new threats, opportunities, and emerging trends. Regular briefings or reports can help maintain executive awareness.

3. Develop foundational AI knowledge within the security team

CISOs need to ensure that their security team has a basic understanding of AI functionality, Aqel says.

“They need to understand what they’re translating, developing, and securing. They need to understand the different models of AI, like machine learning, deep learning, natural language processing, and specialized models,” he says. “These things are very important for security staff to understand because they need to know how to mimic security controls for these models.”

Aqel advises that security teams dedicate at least four hours a month, in one-hour blocks, to AI education. This education can take a variety of formats: closed-door sessions where teams delve into one particular topic, formal training through resources like Pluralsight, or training and certifications through the organization’s AI vendor. It’s important, Aqel adds, that security teams have a dedicated budget to support employee education in emerging technologies like AI.

4. Champion companywide AI training

Training all employees in how to safely use AI builds awareness and understanding of the unique risks associated with AI technologies – from adversarial attacks and data poisoning to ethical concerns and regulatory compliance. This helps the organization better safeguard its AI systems, protect data, and ensure compliance with industry regulations.

It also helps instill in employees at all levels a sense of shared commitment: Everyone is part of the team when it comes to safeguarding the organization.

[Read also: Teamwork is an underrated tool in your cybersecurity arsenal – so is friendship]

The areas of focus in an organization’s training program will vary depending on the company, Kelly says. For example, many AI systems used for analytics rely on data that often includes sensitive or personally identifiable information. Employees should be trained on the importance of protecting this data and complying with relevant regulations to avoid breaches and legal consequences.

For these programs to be successful, Kelly says, they should be layered with examples applicable to the employees taking the training. This might include an email that appears to be from the company’s CEO requesting an urgent wire transfer for a new business initiative, paired with the red flags (telltale clues like unusual phrasing, sender inconsistencies, or requests for sensitive information) that signal a phishing attempt, business email compromise, or another form of social engineering.

“This helps them to really understand in close detail how the decisions around AI risks are really going to be applicable to the organization,” she says.

Kristin Burnham

Kristin Burnham is a freelance journalist covering IT, business technology, and leadership.
