Your Employees Are Already Using Generative AI: Here Are Guidelines to Help Them Use It Responsibly
Do you know how your employees are using generative AI at work? There’s a good chance that more people are using generative AI (GAI) than you think, and most of them are doing it without your knowledge or permission.
A recent survey found that more than 40% of professionals have used ChatGPT or other generative AI tools in some capacity at work; however, 68% admit they’re using them without their boss’s knowledge. This suggests real uncertainty about how companies might perceive the use of such technology in the workplace.
In the face of this uncertainty, what are you doing to provide clarity and ensure your employees are using generative AI productively, safely, and responsibly?
Experiment with generative AI so you can tap its benefits
You can’t blame people for being drawn to the recent buzz around generative AI, given how simple the new user interfaces make it to create high-quality text, images, and video in a matter of seconds.
Generative AI tools such as OpenAI’s ChatGPT, Google’s Bard, and Midjourney have potential applications across a wide range of functions and industries, including HR, sourcing and recruiting, software development, marketing, sales, and fashion. They can help improve productivity and creativity by automating tasks such as writing marketing copy, writing and debugging code, taking notes during virtual meetings, drafting and personalizing emails, improving job descriptions, creating slide presentations, and much more.
You’d be smart to encourage your employees to experiment and work with generative AI, because it can measurably improve worker productivity. An MIT study of 444 white-collar workers found productivity gains when ChatGPT was used for writing and editing tasks in areas such as marketing, grant writing, data analysis, and human resources.
ChatGPT users completed tasks 37% faster (17 minutes versus 27 minutes) with roughly similar quality grades, and their work quality improved significantly faster as they repeated the tasks.
A new study from researchers at Stanford and MIT found that using a GAI-based assistant increased productivity by 14% on average, with the greatest impact on novice and low-skilled workers, who were able to complete their work 35% faster with the tool’s assistance. The researchers also found that AI assistance improved customer sentiment, reduced requests for managerial intervention, and improved employee retention.
The use of GAI raises concerns about privacy, security, accuracy, and ethics
The use of generative AI also poses some challenges and risks that need to be addressed by companies and their employees. Without proper guidance, employees could be using GAI solutions like ChatGPT in ways that are unwise or unethical. Some of the areas of caution include:
Privacy: Your employees may be feeding personal data into GAI tools to generate content, such as candidate information, customer data, employee records, or your company’s intellectual property. You may have recruiters entering resumes or LinkedIn profiles into ChatGPT to generate personalized messaging or job matches, which raises concerns about data protection and compliance with regulations such as GDPR. Employees need to be aware of the data sources and permissions required by GAI systems and ensure that they do not violate any privacy policies or laws when using them. For example, Samsung employees leaked confidential company information to ChatGPT while trying to fix source code and defective equipment and when asking the chatbot to create meeting minutes. (A simple sketch of this kind of safeguard appears after this list.)
Security: Employees may inadvertently share sensitive information with the wrong people, and GAI systems may be vulnerable to hacking or manipulation by malicious actors such as cybercriminals or state-sponsored agents. Your employees need to be careful about the security and reliability of the GAI systems they use, including the risk of exposing your company to data breaches through malicious ChatGPT Chrome extensions.
Inaccuracy: Generative AI systems may generate content that is factually incorrect or inconsistent with the input data or the intended purpose. This is because GAI systems may “hallucinate,” or make up information that is not supported by the training data or the real world. Employees need to verify and validate the content they generate or consume using GAI systems and correct any errors or inconsistencies. This is in line with the common ethical AI principle of human agency and oversight. If your company hasn’t already developed its own ethical AI principles, this is something you should be looking into.
Ethics: As their developers clearly state, generative AI systems may generate content that is biased, offensive, or harmful to certain groups or individuals. For example, GAI systems may reproduce stereotypes or prejudices present in the training data or generate content that is inappropriate or misleading for the intended audience or purpose. Employees need to be mindful of the ethical implications and social impact of the content they generate or consume using GAI systems and avoid using them for malicious or fraudulent purposes.
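To make the privacy point concrete, here is a minimal, hypothetical sketch of the kind of safeguard an internal tool could apply before any text reaches a public GAI service. The redact_pii helper and its regex patterns are illustrative assumptions, not a vetted solution; real PII detection should use tooling approved by your data protection team.

```python
import re

# Hypothetical, illustrative patterns only -- real PII detection needs a
# vetted library and sign-off from your data protection team.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def redact_pii(text: str) -> str:
    """Replace obvious personal identifiers with placeholders before the
    text is sent to an external generative AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text


prompt = "Draft outreach to jane.doe@example.com, reachable at +1 555-010-7788."
print(redact_pii(prompt))
# -> Draft outreach to [EMAIL REDACTED], reachable at [PHONE REDACTED].
```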
To get the most from GAI, educate and empower your teams and evaluate its impact
Given the seriousness of potential harm, it is imperative that companies provide guidance on the responsible and acceptable use of generative AI while reaping its benefits. Here are some steps you can take to create guidance for your employees on the responsible use of GAI:
Educate: Provide your employees with basic knowledge and awareness of what generative AI is, how it works, what it can do, how to get the best results (for example, prompt engineering), and what the potential benefits and risks are. This should include clear guidelines on what data can and cannot be used, who can access it, and how it should be protected. You can use online resources, webinars, training workshops, your L&D platform, team meetings, and newsletters to inform your employees about GAI, its applications, and its responsible use. As fast-moving as the GAI space is, be prepared to keep updating your communications and training. For example, OpenAI just announced that ChatGPT users can now turn off their chat history, allowing users to choose which conversations can be used to train OpenAI’s models.
Empower: Encourage your employees to explore and experiment with generative AI systems that are relevant and useful for their work. You can provide them with access to trusted and secure (nonpublic) GAI platforms or tools, such as OpenAI’s ChatGPT API or Microsoft’s Azure OpenAI Service, and support them with feedback and guidance on how to use them effectively (see the sketch after this list).
Evaluate: Monitor and evaluate the performance and impact of generative AI systems on your employees, business outcomes, and customer satisfaction. You can use metrics such as quality, accuracy, relevance, diversity, novelty, efficiency, and engagement to measure the value and effectiveness of GAI systems for your work.
Enforce: Establish and enforce clear policies and standards for the responsible use of generative AI systems by your employees. You can use codes of conduct, ethical principles, best practices, checklists, or audits to ensure that your employees comply with legal and ethical requirements and expectations when using GAI systems.
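As one illustration of the “Empower” step, the sketch below shows what routing employees through a company-managed (nonpublic) endpoint might look like, assuming the OpenAI Python SDK (v1+). The model name, environment variable, and draft_job_description helper are placeholders your IT and data protection teams would define, not a prescribed implementation.

```python
import os

from openai import OpenAI  # assumes the OpenAI Python SDK (v1+) is installed

# Hypothetical setup: the API key is issued and rotated by IT rather than
# pasted into a public chatbot by individual employees.
client = OpenAI(api_key=os.environ["COMPANY_OPENAI_API_KEY"])


def draft_job_description(role_summary: str) -> str:
    """Send an approved, non-confidential prompt to a company-managed GAI
    endpoint and return the draft for human review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder -- use whichever model your company has approved
        messages=[
            {"role": "system",
             "content": "You help draft clear, inclusive job descriptions."},
            {"role": "user", "content": role_summary},
        ],
        temperature=0.4,
    )
    return response.choices[0].message.content


draft = draft_job_description(
    "Senior data analyst, hybrid role, reporting to the head of marketing."
)
print(draft)  # A human still reviews and edits the output before it is used.
```

The design point is that credentials, model choice, and usage logging live with the company rather than with individual employees’ personal chatbot accounts.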
At my current company, I worked with a multidisciplinary team that included representatives from global data protection, IT, legal, marketing, sales, and other business units to create a living document of guidance on the safe and responsible use of GAI. We used this as a basis for delivering global training on the topic. I highly recommend taking a similar approach so that you can benefit from the perspectives and expertise of a diverse set of stakeholders.
Don’t overreact to the challenges of GAI and miss the opportunities
It’s important to emphasize that companies should not overreact to concerns about generative AI by blocking employees from using it productively and assuming that eliminates the risk. Even if a company does block ChatGPT and other GAI tools at work, that may not actually prevent employees from accessing them, since they can continue to do so through mobile devices and personal laptops or computers, depending on their work setup.
Knowing this, even a company that blocks public-facing generative AI solutions at work still needs to provide guidance that enables its employees to use GAI safely and responsibly while reaping its benefits. Combined with giving employees access to private, secure GAI solutions, this allows companies to leverage the full potential of GAI while mitigating its risks, a win-win for both the company and its employees.
Final thoughts: Embrace the possibilities GAI offers even as you manage its risks
The arrival of widely available generative AI is a watershed moment with regard to AI’s impact on people and the world of work.
On one hand, it offers tremendous opportunities for innovation and improvement in various domains and tasks. On the other hand, it requires careful consideration and management of the potential challenges and risks. It is crucial for companies to create guidance for their employees on the responsible and acceptable use of GAI.
By following the steps outlined above, you can create guidance for your employees on the responsible use of generative AI that will help them achieve their goals while avoiding potential risks and negative consequences.
[ChatGPT wrote this bio of Glen] Glen Cathey is a strategic thinker and global keynote speaker with extensive experience in talent acquisition and leadership. He is passionate about making a difference, developing others, and solving problems. Glen has served as a thought leader for sourcing and recruiting strategies, technologies, and processes for firms with more than 2 million hires annually. He has played a key role in implementing and customizing ATS and CRM systems, and has hired, trained, and developed large local, national, global, and centralized sourcing and recruiting teams. Glen has spoken at numerous conferences, including LinkedIn Talent Connect, SourceCon, Talent42, and Sourcing Summit Europe.