Why Giving Employees the AI Tools and Training They Want Is a Win-Win

This is the first of two posts from Glen Cathey on why companies need to provide direction and tools so their workforces can use AI safely.

There appears to be a massive disconnect in the corporate world when it comes to AI literacy. While an overwhelming majority of workers are eager to develop their generative AI skills and put them to work, only a fraction of companies are stepping up to provide the necessary training. 

This isn’t just a minor oversight — it’s a critical strategic blind spot that could expose companies to serious data privacy and security risks while also significantly impacting their organizational effectiveness and competitiveness in today’s rapidly evolving AI-augmented world of work.

Recent research from Microsoft — covering over 31,000 people across 31 countries — found that 75% of knowledge workers are already using generative AI at work and 78% of them are bringing their own AI to work. On top of that, more than half of the people using generative AI at work are reluctant to admit it.

This creates a concerning situation where employees are secretly leveraging these powerful tools for work without proper guidance or support.

The benefits of using generative AI are too compelling to ignore

Some companies have taken action to limit access to generative AI tools like Copilot, ChatGPT, Claude, or Gemini at work, but let’s be realistic — they can only effectively do that on work-provided devices. Even with policies prohibiting public AI tools, there’s no effective way to monitor and block use on personal devices. And if a company hasn’t provided employees with private and secure alternatives, it’s easy to understand why people bring their own AI to work.

The benefits of using generative AI for work are simply too compelling for people to ignore. Microsoft’s research found that when people can use generative AI at work:

  • 90% say it helps them save time
  • 85% say it helps them focus on their most important work
  • 84% report it helps them be more creative
  • 83% say they enjoy their work more 

You’ve probably seen the controlled studies, like the one from researchers at Wharton and Harvard and the one from MIT, showing that knowledge workers using generative AI can do more work in less time and at higher quality, and enjoy their work more. Perhaps one of the most important findings from these studies is that generative AI can act as a great leveler: less experienced workers using AI can often perform at the same level as their more senior colleagues.

The reality we need to face is this: If you’ve provided your employees with private and secure generative AI tools and comprehensive training in their safe and effective use, you’re likely in good shape. 

For everyone else, there’s cause for concern.

Provide the tools and training that remove the risk of shadow AI use and potential data issues

On the one hand, you have employees who are eager to use generative AI for work because they can do more and better work. On the other hand, you have four types of companies:

  1. The Prepared: Those that provide both secure AI tools and comprehensive training, enabling their workforce to safely maximize the value of these powerful tools
  2. The Partially Prepared: Those that have provided secure tools but no training, limiting the potential value and creating risk through improper use
  3. The Restrictive: Those that haven’t provided any secure AI tools or training, forcing their workforce to choose between productivity and compliance
  4. The Avoidant: Those that are simply ignoring the situation entirely

Any approach except the first creates the perfect storm for shadow AI use and potential data privacy and security risks.

Whether or not you provide secure generative AI solutions for your employees, you have to recognize that some of them are likely to use generative AI tools on their mobile devices for work, formally allowed or not. That’s why I believe providing training on safe and responsible use is absolutely critical. And this applies across the board, whether companies are still working on their AI strategy or have already provided their workforce with private, secure AI tools.

The key is ensuring employees have the knowledge they need to use these tools effectively and safely to realize their full potential.

Legitimate security concerns don’t have to paralyze your organization

For most companies, the primary concern with generative AI is the misuse of data. But a great number of generative AI use cases don’t involve entering personally identifiable information (PII) or sensitive, confidential, or proprietary information at all, and if you’re not entering those kinds of data, there simply isn’t a data privacy or information security concern.

While data security concerns are legitimate, they shouldn’t paralyze organizations from embracing generative AI. Consider these examples of valuable use cases that don’t involve any sensitive information:

  • Writing and improving job descriptions to attract diverse talent
  • Generating creative sourcing strategies and strings for hard-to-fill roles
  • Creating candidate outreach messages and follow-up templates
  • Developing interview questions and screening frameworks
  • Analyzing public labor market trends and compensation data
  • Writing engaging job posting headlines and social media content
  • Creating onboarding checklists and new hire documentation
  • Drafting employee engagement survey questions
  • Developing training materials for hiring managers
  • Writing inclusive workplace policies and employee handbooks

These are just a few examples of how generative AI can enhance HR and recruiting work without touching sensitive data. Of course, there are other use cases that do involve sensitive or confidential information.

So the principal challenge is helping your teams understand the difference between tasks that pose no risk and those that do involve sensitive information. That’s why it’s ideal to give employees safe and responsible use guidelines by role and specific use case, clearly stating which use cases are acceptable, which are not allowed, and why.

The key lies in providing clear, role-specific guidelines that delineate:

  • Generative AI do’s and don’ts
  • Approved use cases with examples
  • Prohibited scenarios and the rationale behind restrictions
  • Best practices for data handling
  • Procedures for handling edge cases
  • Reporting mechanisms for potential issues
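
To make this concrete, here is a minimal, hypothetical sketch of how a team might encode such role-specific guidelines in machine-readable form, for example to power an internal lookup or self-service checking tool. The roles, use cases, and rationales are illustrative assumptions, not recommendations drawn from the research cited above.

```python
# Hypothetical sketch: role-specific generative AI guidelines encoded as data.
# All roles, use cases, and rationales below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UseCasePolicy:
    description: str  # what the employee wants to do
    allowed: bool     # approved or prohibited
    rationale: str    # the "why" behind the decision

# Guidelines keyed by role; each entry pairs a decision with its rationale.
GUIDELINES: dict[str, list[UseCasePolicy]] = {
    "recruiter": [
        UseCasePolicy(
            "Draft a job description from public role requirements",
            allowed=True,
            rationale="No PII or confidential data is entered.",
        ),
        UseCasePolicy(
            "Summarize a candidate's resume in a public AI tool",
            allowed=False,
            rationale="Resumes contain PII; use the approved internal tool instead.",
        ),
    ],
}

def check_use_case(role: str, description: str) -> UseCasePolicy | None:
    """Return the policy for a proposed use case, or None if it is undefined."""
    for policy in GUIDELINES.get(role, []):
        if policy.description.lower() == description.lower():
            return policy
    return None  # Undefined cases should be escalated for review, not assumed safe.

policy = check_use_case("recruiter", "Summarize a candidate's resume in a public AI tool")
if policy is not None:
    print(f"Allowed: {policy.allowed}. Why: {policy.rationale}")
```

The point of a structure like this isn’t the code itself; it’s that every entry pairs a decision with its rationale, which is exactly what employees need in order to build good judgment for the edge cases no list can cover.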

Final thoughts: You can create an environment where your workforce feels confident using AI appropriately

It all comes down to this: Your employees want to use these powerful tools and they’re going to find ways to do so. 

The real question is whether they’ll do it with proper guidance or in the shadows. By providing clear, role-specific guidelines about what’s acceptable and what isn’t, along with the reasoning behind these decisions, you can create an environment where people feel confident using AI tools appropriately. 

This isn’t just about minimizing risk — it’s about maximizing the potential of both your tools and your talent while keeping your organization’s data secure and staying fully compliant with privacy regulations.

Glen Cathey is a strategic thinker and global keynote speaker with extensive experience in talent acquisition and leadership. He is passionate about making a difference, developing others, and solving problems. Glen has served as a thought leader for sourcing and recruiting strategies, technologies, and processes for firms with more than 2 million hires annually. He has played a key role in implementing and customizing ATS and CRM systems, and has hired, trained, and developed large local, national, global, and centralized sourcing and recruiting teams. Glen has spoken at numerous conferences, including LinkedIn Talent Connect, SourceCon, Talent42, and Sourcing Summit Europe.
