How to Mitigate Bias and Risk in AI, According to a Three-Armed Expert

Mention the words generative AI and most people have a quick reaction: They love it — or they hate it. 

The truth is, AI is a polarizing technology. By now, most people know about the huge benefits AI offers, including time savings and increased productivity. But it also poses risks, such as built-in bias and information that’s sometimes . . . just not correct. 

That’s why LinkedIn’s Talent Connect brought together two executives from Textio to discuss how talent professionals can harness AI’s potential while putting guardrails in place for safe adoption. 

Jackye Clayton is Textio’s vice president of talent acquisition and diversity, equity, inclusion, and belonging. And Tacita Morway is the company’s chief technology officer. Their spirited breakout session, Navigating the Dualities of AI: Balancing Innovation and Risk, offered plenty of useful advice about how any company can more safely use AI. 

First, be aware of the issues and risks 

“So, we’re going to be talking about the state of AI in HR,” Jackye told the audience at the outset, “some of the things we’ve noticed, products that people are trying to implement, but also the impact and issues that you need to be aware of, and some of the best practices for adopting AI.” 

Most talent professionals, Jackye said, have been using AI for years to source candidates, assess applications, and match people to the right jobs. But even though AI has been around for a while, she said, its recent iterations have “come very, very fast and furious.” They’ve come so quickly that many companies haven’t had time to put guardrails in place. “And,” Jackye said, “we’re starting to see some serious face-plant moments.”

She pointed to a company that was recently sued for age discrimination because its hiring AI automatically rejected men over the age of 60 and women over 55. Even Textio’s own research has uncovered bias in AI. The company, which makes software to detect and eliminate bias, ran an experiment asking ChatGPT to “write constructive performance feedback for a marketer . . . who has had a rough first year.” They asked the AI to do this for two marketers: one who went to Harvard and one who went to Howard, a prestigious, historically Black university.

ChatGPT responded by suggesting that the Howard marketer was “missing technical skills” and suffered from a “lack of attention to detail.” The Harvard marketer, meanwhile, was “condescending” and “micromanaging.”

“But ChatGPT also suggested that the Harvard grad should ‘step up to lead more,’” Jackye joked, “because who doesn’t love a condescending, micromanaging leader?”

Tacita then shared a picture she had asked DALL-E to create of herself, to show her son how stressed out she was feeling. To create the image, she had given DALL-E a picture of herself and told the tool she was a CTO.

The illustration that came back showed her with a beard and a third arm. She loved the third arm — “now I have new inspiration for tattoos,” she joked — but she couldn’t get DALL-E to remove the beard. “DALL-E knows that only 8.3% of CTOs in America are women,” she said, “so I probably am a dude.”

Get clear on what you want to achieve with AI and create guidelines

“Unfortunately, the bias is there and we can’t make it go away,” Tacita said. “But we can monitor and measure it and put some controls and safeguards in place.” 

To do that, she offered these five concrete steps: 

1. Be clear on your scope and purpose. “First and foremost, have a goal,” Tacita said. “Do not just adopt AI for the sake of adopting AI or to be on the bleeding edge of things.” Instead, have a specific purpose in mind when adopting generative AI, so that the tool aligns with your goals.

2. Set guiding principles. “You will not be there to help employees make judgment calls as they’re interacting with AI,” Tacita said, “so you need to give them guiding principles.” She suggested that your principles include the following:  

  • Be thoughtfully skeptical. “This thing can and will get it wrong,” she said, “so doubt it.”
  • Own your own work. AI draws information from other sources, so make sure you don’t pass other people’s work — or mistakes — off as your own. 
  • Protect privacy. “If it’s something you wouldn’t post to your company’s blog or social media,” she said, “don’t give it to AI.”
  • Don’t overshare. “When you’re interacting with AI, keep some of the details fuzzy,” Tacita said. In the times she’s asked AI to help brainstorm ideas for a new product, she has identified herself as “someone who works for a software company that builds AI to combat bias in the workplace” — not as the CTO of Textio.
  • Show where you use it. Tacita said that it’s not only ethically right to highlight where you’ve used AI in your work, but it also helps tap into your coworkers’ thoughtful skepticism — which could save you embarrassment later. 

3. Determine what is acceptable use. Employees need to know where they can use AI safely and where they cannot. You might, for example, identify the points in the workflow where AI is allowed; require that employees never enter personally identifiable information; and prohibit them from sharing intellectual property.

4. Make a list of approved tools. Tacita suggests having your information security, legal, HR, and DEI teams evaluate all the tools you’re using. Then make a list of which ones are acceptable for employees to use. If there are tools you decide are off-limits, say that and explain why. “Because people are going to be like, ‘That’s so cute, you think you don’t want me to use that, I’m definitely still going to use it,’” she explained. “So you’ve got to say why.” 

5. Outline the risks. Spell out the risks so that employees are clear on what they are, and keep in mind that those risks will differ from team to team. HR, for example, may worry about different issues than your IT or social media team does. “You have to customize the scenarios and use cases,” Tacita said, “to meet the needs of each team.”

Ask the right questions when you’re purchasing AI-powered software

“Now comes the shop therapy,” Tacita joked with the audience. “This is the fun part, when we actually have to go out and select our software.” In her role as CTO, she said, she spends a lot of time earning the trust of customers and prospective customers. Here are the five questions she advises them to ask Textio — or any other software vendor they’re considering: 

1. What was this software designed for? “Was it designed to meet your specific needs?” she asked. “If not, it becomes a lot harder to measure outcomes.” 

2. Where is their data from? The vendor may not own or even control the data behind its AI, and that doesn’t have to be a deal-breaker. “But if they can’t tell you, transparently, where their data is from,” she said, “that’s a major red flag.”

3. Is the company qualified to build AI-powered software? You might want to ask: Have they done this before, or is this their first go-around? Is there a diverse team responsible for building it?

4. How do they manage bias? There is a lot that engineers can do to mitigate bias in AI, but it’s difficult, expensive, and time-consuming. “For those of us who are actually doing that, we’re making that choice,” Tacita said. “We’re screaming and shouting about it.” If the company is not actively doing this, she said, “move along because it’s easy to say, ‘Yes, we’re responsibly built.’ But if they’re not giving the details behind that work, it would make me nervous. This is not trade secret stuff.”  

5. How do they protect your privacy? You need to ask: How is the AI vendor protecting your company? How are they protecting your users, end users, and your customers? “And make sure,” Tacita said, “that you’re hearing everything you want to hear here.” 

Final thoughts: You need to bring everyone along on the journey, including HR

As you bring AI into your business, one of the most important things you can do is keep HR and talent professionals actively involved. “It’s funny, I’ve been hearing that HR and IT don’t usually like to work together,” Jackye said, “but in these cases, you really have to.” Why? Because you need to bring everyone along on the journey.

Tacita said that she had recently noticed something that disturbed her. “We’re seeing this sort of ‘cool kids club,’ and the ‘not cool kids club’ thing happen,” she said. “There are the early adopters and then there are the folks who are like, ‘I guess I missed the train, I’m going to leave that to somebody else.’” But the folks who are not coming along, she said, are those who might be at the greatest risk from bias. 

“You have to make sure everybody feels comfortable taking those first steps,” she said, “because there’s so much cool, good stuff that can come from this.”
