Artificial intelligence is transforming HR. We’ve already seen a marked rise in the number of HR tools applying machine learning and artificial intelligence to “people problems” in the workplace. As organizations embrace continuous change and greater decentralization, those who put that technology to good use are the ones who will win.
High risk, high reward
At their core, AI and machine learning are just tools. And like any tool, they can do both good and harm. If you use or configure them unwisely, they have the potential to damage your processes or culture. This is a significant risk for HR professionals in particular, who tend to be less familiar with the underlying mechanics of these tools than tech professionals are.
So what’s a well-intentioned HR pro to do?
First, make sure you understand the problem you’re solving. I mean, really understand it. Once you’ve identified and properly framed the problem, ask yourself whether you truly need this technology. Are you hamstrung without it? Would the tech make things simpler and free you up to solve other problems? Or could you solve the problem with existing tech and/or a different approach to the work you’re already doing?
If you decide you do indeed need some slick new AI-flavored tech, it’s critical you educate yourself on the strengths and weaknesses of AI. It’s worth going beyond a quick Google search to get deeper insights. Tap into your professional networks and the expertise of others in your company. Invite them to participate in reviewing the tools you’re considering.
When employed correctly, AI can free up a lot of your time and energy, transforming HR from an operational center into a strategic function.
Choosing the right tools for the right problems
Not all problems are created equal, and algorithms aren't equally good at solving them. Make sure the problem you've got is actually suitable for AI.
Algorithms are poorly suited for:
- Problems where little data is generated, or the data is a poor proxy for real-world outcomes or behaviors.
- Problems with extreme edge cases, or where the underlying data set is strongly biased (there are ways to overcome this, however – see below).
- Problems that require value judgments (in this case, a combination of human and algorithmic solutions is optimal).
On the bright side, algorithms can be great for:
- Problems where significant data is available and the data is directly relevant to the behaviors and outcomes you’re interested in.
- Problems where patterns you’re looking for are predictable (or at least consistent over time).
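To make that contrast concrete, here's a toy Python sketch (all data synthetic, all scenario names hypothetical). It fits the same simple threshold model to two datasets: one where the input is directly tied to the outcome, and one where the input is only a weak proxy for it.

```python
import random

random.seed(0)

def accuracy(predict, data):
    """Fraction of (input, outcome) pairs the model gets right."""
    return sum(predict(x) == y for x, y in data) / len(data)

def threshold_model(x):
    # A deliberately simple "model": predict success above a cutoff.
    return x > 10

# Hypothetical scenario: predicting course completion from hours of
# practice logged -- a signal directly tied to the outcome. Here the
# pattern is consistent: more than 10 hours means completion.
relevant = [(h, h > 10) for h in (random.uniform(0, 20) for _ in range(1000))]

# Same model, but the input is a weak proxy (say, badge swipes) with
# no real relationship to the outcome.
proxy = [(s, random.random() < 0.5) for s in (random.uniform(0, 20) for _ in range(1000))]

print(accuracy(threshold_model, relevant))  # 1.0 by construction
print(accuracy(threshold_model, proxy))     # roughly chance level
```

The model hasn't changed between the two runs; only the data has. That's the point of the lists above: the same algorithm that looks brilliant on plentiful, directly relevant data is no better than a coin flip on a poor proxy.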
Keep in mind that AI doesn’t always upskill your workforce, so tools that simply automate processes might not get the results you want. If you’re looking for behavioral changes, use tools that help your employees learn. Research suggests tool-based feedback that is timely, specific, and actionable is most effective for solving behavior-based issues.
For example, the Textio platform provides feedback on the job posting you’re writing as you’re writing it, suggesting ways to make it more attractive to the candidates you’re after. Joonko analyzes activity in your workplace productivity and collaboration tools, looking for evidence of unconscious bias, then notifies employees with suggestions for corrective action. [Disclosure: I serve as an advisor to Joonko.]
Algorithms are people, too
Designing AI is more art than science. Creators can inadvertently bake their own biases into the technology, as Google famously learned when the image-recognition feature in Google Photos labeled photos of Black people as gorillas. (Yikes.)
Do your homework when shopping for AI-based tools. Find out how the models were developed and what the implications of those development choices might be. Ask questions like:
- What data set was used to train the algorithm?
- What potential bias might exist in that data, and how has the model corrected for it? (For example, if an algorithm is reading data that shows women are more likely to be assigned low-priority tasks, it may “understand” that women are less capable of higher-priority work, as Joonko points out.)
- How does the model evolve over time, and how are issues of bias addressed by its creators?
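Here's a toy Python sketch of why those questions matter (every group, rate, and number is invented for illustration, and this is not any vendor's actual method). A model trained naively on a skewed task log inherits the skew; resampling the data so each group is represented evenly is one simple way to correct for it.

```python
import random

random.seed(1)

# Hypothetical skewed task log: women were historically assigned
# low-priority work 80% of the time, men only 40% of the time.
log = [("woman", "low" if random.random() < 0.8 else "high") for _ in range(500)]
log += [("man", "low" if random.random() < 0.4 else "high") for _ in range(500)]

def high_priority_rate(records, group):
    """Share of a group's tasks that are high-priority in the data."""
    rows = [p for g, p in records if g == group]
    return sum(p == "high" for p in rows) / len(rows)

# A naive frequency-based model inherits the skew: it "learns" that
# women get less high-priority work, mistaking historical assignment
# patterns for capability.
print(high_priority_rate(log, "woman"))  # roughly 0.2
print(high_priority_rate(log, "man"))    # roughly 0.6

# One simple correction: resample so each group contributes equal
# numbers of low- and high-priority examples before training.
balanced = []
for group in ("woman", "man"):
    by_priority = {"low": [], "high": []}
    for g, p in log:
        if g == group:
            by_priority[p].append((g, p))
    n = min(len(rows) for rows in by_priority.values())
    for rows in by_priority.values():
        balanced += rows[:n]

print(high_priority_rate(balanced, "woman"))  # 0.5 by construction
print(high_priority_rate(balanced, "man"))    # 0.5 by construction
```

Resampling is only one of several mitigation techniques, but the sketch shows why "what data trained the model?" is the first question to ask: the bias was in the log before any algorithm ever saw it.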
The bottom line
AI and machine learning have the potential to fundamentally change the role of HR functions and increase the positive impact HR professionals can make. But applying AI or machine learning won’t create sustainable change at your organization all on its own. If you use AI to accelerate the positive changes already happening, the technology-based changes can be reinforced by other strategic programs you’re running.
The robots won’t replace us – they’ll just make us look good.
If you’d like to know more about how much impact AI can have on HR, I’ll be speaking at the VBSummit next week.
Aubrey Blanche is the Global Head of Diversity & Inclusion at Atlassian. She relies heavily on empirical social science in her work, and has developed a new team-level paradigm for external diversity reporting. She is an advisor to SheStarts, a Sydney-based accelerator focused exclusively on supporting female founders, BeVisible, and Joonko. She is a co-founder of Sycamore, a community aiming to fix the VC funding gap for underrepresented founders.