The classic example of “crowd wisdom” dates back to 1906, when Sir Francis Galton observed a contest at a country fair in England in which attendees were asked to guess the weight of an ox. In what many consider to be the first experiment on crowd wisdom, the average of the 800 guesses was within one pound of the correct weight.
Consider that these kinds of experiments can now be run digitally, across cultures and time zones, and nearly instantaneously. The classic experiment was recently reenacted with a digital crowd when a photo of a cow was posted online and viewers were invited to guess her weight. More than 17,000 guesses were cast, and the average was within 5 percent of the cow’s actual weight.
So, crowds can be collectively “wise,” and with online access to large groups of individuals and the real-time sharing of opinions, the frequency and scale at which we might harness this wisdom are ever increasing.
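The statistical intuition behind both the ox and cow contests can be sketched in a few lines: if each guess is individually noisy but unbiased, averaging many guesses cancels out most of the noise. The weight and noise level below are hypothetical, chosen only to mirror the 800-guess setup described above.

```python
import random

random.seed(42)

TRUE_WEIGHT = 1200   # pounds; a hypothetical ox weight, not Galton's actual figure
NOISE_STDDEV = 150   # assumed spread of individual guesses, for illustration

# Each guesser is noisy but individually unbiased: errors are centered on zero.
guesses = [TRUE_WEIGHT + random.gauss(0, NOISE_STDDEV) for _ in range(800)]

# The crowd's estimate is simply the mean of all guesses.
crowd_estimate = sum(guesses) / len(guesses)
error_pct = abs(crowd_estimate - TRUE_WEIGHT) / TRUE_WEIGHT * 100
print(f"crowd estimate: {crowd_estimate:.0f} lb, error: {error_pct:.2f}%")
```

With 800 guessers, the standard error of the mean shrinks by a factor of about 28 relative to a single guess, which is why the crowd average lands so close to the truth even when individuals are far off.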
New research, however, is adding a kink to the idea of benefiting from crowd wisdom. The incredible power and ubiquity of modern crowds necessitate further scrutiny regarding bias.
Market and crowd gender bias
One might argue that the stock market is the ultimate “crowd,” with company valuations based on the collective perception of all investors. Advances in computational research show that even if individual actors are not biased, if they believe that others are, they will price that bias into the market.
Ned Smith, professor at the Kellogg School of Management, studies decision-making in financial markets and teaches about the influence of crowds. In a recent study, he demonstrates that greater media coverage of a CEO appointment results in negative market reactions for female CEOs but positive market reactions for male CEOs, all else held constant. So much for an unbiased crowd.
In this new take on bias and markets, Professor Smith’s research suggests that it is not individuals attributing their own biases to the female CEO appointment. To be sure, there is plenty of research demonstrating the benefits of female executives, particularly as related to increased innovation, collaboration, and even returns.
Rather, his research suggests that the negative market reaction is likely the result of individual assumptions that others in the market will make biased investment decisions.
Professor Smith and a PhD student, Kevin Gaughan, studied female CEO appointments at publicly traded companies from 2000 to 2015 and found that female appointments accounted for 1 percent of all appointments but received, on average, three times the media attention of male appointments. When a female appointment received little media attention, the market responded with a favorable return on the day of the announcement (holding other factors constant).
However, when media attention was high enough for investors to presume that all investors had heard the news, the announcing company’s stock traded at a discount on the day of the announcement. Professor Smith proposes that investors discount the company’s value on the assumption that others are negatively biased, a phenomenon referred to as “second-order bias.”
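The logic of second-order bias can be made concrete with a toy pricing model. This is not the model from the study; every number below is a hypothetical parameter, and the setup only illustrates the mechanism: investors who are themselves unbiased, but who believe a media-reached fraction of other investors will apply a discount, end up shading their own bids down.

```python
# Toy illustration of second-order bias (not the study's actual model).
FUNDAMENTAL = 100.0      # hypothetical per-share fundamental value
PERCEIVED_BIAS = 0.10    # discount each investor attributes to *other* investors
WEIGHT_ON_OTHERS = 0.5   # how much resale concerns shape one's own bid

def price(media_reach):
    """Price an unbiased investor is willing to pay.

    media_reach: fraction of other investors presumed to have heard
    the appointment news (0.0 = no coverage, 1.0 = saturating coverage).
    """
    # What the investor expects others to pay, given their perceived bias.
    expected_others = FUNDAMENTAL * (1 - PERCEIVED_BIAS * media_reach)
    # Own bid blends private valuation with the expected market price.
    return (1 - WEIGHT_ON_OTHERS) * FUNDAMENTAL + WEIGHT_ON_OTHERS * expected_others

print(price(0.0))   # low coverage: full fundamental value
print(price(1.0))   # high coverage: stock trades at a discount
```

No individual in this sketch is biased; the discount appears purely because each investor expects bias in others, and it grows with media reach, matching the pattern the study reports.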
Professor Smith wrote about this in Fortune when GlaxoSmithKline announced the appointment of Emma Walmsley as CEO.
He obtained these results by applying sociological insights to large data sets, coupled with machine-learning techniques. In fact, his study had four times as many observations as any prior study on market responses to female executive appointments.
Machines to help address gender bias
As machines allow us to analyze bias, we can engage computational partners to help identify, or even overcome, gender bias, even when it’s not overt. A few innovative companies are using machines to address bias, and they are starting at the first step in anyone’s career: the recruitment process.
Textio uses machine learning to detect patterns, including evidence of bias, in job postings. Its natural language processing engine consumes job postings (more than 54 million to date) and analyzes them to help companies as varied as CVS, Capital One, and Apple understand the outcomes of these postings in terms of candidates who apply, receive an offer, and accept that offer.
Textio offers language suggestions that can help attract a more diverse group of applicants. For example, the service points out that the word “manage” tends to draw more male job seekers and suggests choices such as “handle,” “lead,” or “run” instead. On average, companies whose job listings score highly in Textio recruit people who are 24 percent more qualified and 12 percent more diverse, and they do it 17 percent faster.
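At its simplest, this kind of suggestion engine can be thought of as mapping flagged terms to alternatives. The sketch below is a trivial keyword lookup, nothing like Textio's actual NLP engine, and apart from the “manage” example cited above, the word list and replacements are hypothetical.

```python
# Illustrative sketch only; not Textio's method or data.
SUGGESTIONS = {
    "manage": ["handle", "lead", "run"],   # example cited in the article
    "rockstar": ["expert"],                # hypothetical entry
    "dominant": ["leading"],               # hypothetical entry
}

def flag_posting(text):
    """Return (flagged word, suggested alternatives) pairs found in a posting."""
    words = text.lower().split()
    return [(w, SUGGESTIONS[w]) for w in words if w in SUGGESTIONS]

posting = "Seeking a rockstar engineer to manage our platform team"
for word, alts in flag_posting(posting):
    print(f"consider replacing '{word}' with one of {alts}")
```

A production system would score whole phrases against observed applicant outcomes rather than matching single keywords, but the input-to-suggestion shape of the workflow is the same.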
According to Textio CEO Kieran Snyder, “Addressing this kind of subtle gender bias is important in fields that are traditionally male dominated — such as technology or finance — but bias is everywhere and Textio helps customers from the arts, health care, and manufacturing too.” By crafting job postings that resonate more with women, companies seeking to attract more women now have machine partners that can help them overcome the implicit bias that was creating unintentional obstacles to female recruitment.
Addressing hiring practices is an obvious place to start tackling gender bias, especially when that bias may not be immediately evident to the human eye. And while including machines does not free us from the inherent bias of human collaborators, future human-machine partnerships may target other forms of implicit and unintentional bias as well.
Alanna Lazarowich is a senior director at the Kellogg School of Management at Northwestern University.
This article appeared originally at ChicagoInno. Used with permission.