There has been a lot written about diversity (or the lack thereof) in the tech industry. Even with Fisher v. University of Texas at Austin, the recent U.S. Supreme Court ruling upholding affirmative action, tech companies are looking for alternative approaches to increase diversity. The first step is combating hiring bias. A recent survey found that black and Hispanic computer science graduates were less likely to be hired than their white and Asian counterparts. The statistics on women in tech aren’t much better, with women also being systematically underrepresented and making $10,000 less than their male counterparts.

But there is one simple thing that we could all do that would greatly reduce this hiring bias. The effects are well-documented and the costs are negligible. As administrators of The Data Incubator, a selective fellowship that helps students and industry practitioners find jobs as data scientists, we practice this in our admissions process, and as a result, half of our current tech staffers are minorities. It is cheap and simple to implement, it is fairer to minority applicants, and we believe it allows us to find more talented applicants. What is it? Getting rid of the traditional resume screening.

When people hire for tech positions, they almost always first glance through a pile of resumes. There’s a large body of research demonstrating that they make discriminatory judgments based on what they find. A famous paper from the University of Chicago shows that candidates with white-sounding names but otherwise identical resumes were 50% more likely to be called in for an interview. Employers preferred to interview Emily and Greg over Lakisha and Jamal. Likewise, researchers from Yale showed that resumes for scientists with male names were “significantly” more likely to be rated as competent and hirable than identical ones with female names — even by female reviewers.

What’s driving this? Psychologists have found that most of us hold unconscious stereotypes that bias against female or minority applicants in aggregate. Our brains rely on a whole host of implicit associations as cognitive shortcuts for everyday life — instincts that might have served our ancestors well while roaming the Serengeti but that we must actively fight in the modern workplace.

When these biases are removed, our hiring practices become much fairer. Researchers from Harvard University examined over 14,000 audition records across eight major symphony orchestras from the 1950s to 1995. They found that adopting “blind” auditions that concealed the gender of the performer increased the rate of women passing by as much as 30%.

Even if we compensate for our inherent biases around race and sex, resume screening often selects for candidates based on other extraneous factors like an alma mater or a former employer. While these factors may not correlate well with performance, they do lead to narrow educational and employment pedigrees and an asphyxiating corporate monoculture. If all your coders are Stanford computer science majors, they have all had the same professors, solved the same exercises, and are all trained to think the same way. It’s not a surprising byproduct of this groupthink that Silicon Valley has been called the “latest old boys club” amid accusations of sex discrimination, or that its startups compete furiously to solve the same first-world problems like food delivery and laundry pickup.

Technologists come in all stripes and defy simple racial, sexual, academic, and professional categorization. Just as you would diversify a stock portfolio, a company should diversify its talent stock to broaden its capabilities. And groupthink can have very serious effects on the global economy: A recent Bank of England report cited groupthink as one of the contributing causes to the 2008 financial crisis.

But if we stop relying on resumes as the single source of truth, what other factors should we use for evaluation?

Some companies (including ours) have started relying more on objectively evaluating performance on standardized coding challenges. A software engineering candidate might be asked to build an API, while a data science candidate could be asked to analyze anonymized or synthetic log data — tasks that are more closely linked to the actual demands of the job than resume crafting. Skills as objective as data science or software engineering should not be judged solely by the phrenology of resume-screening.
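To make this concrete, here is a minimal sketch of what a data science screening challenge might look like — a hypothetical task and log format of our own invention, not The Data Incubator's actual screen: given synthetic request logs, compute the server-error rate per endpoint.

```python
from collections import Counter

def error_rate_by_endpoint(log_lines):
    """Compute the fraction of requests returning a 5xx status, per endpoint.

    Each line is assumed (for this hypothetical exercise) to look like:
        "<method> <endpoint> <status>", e.g. "GET /api/users 200"
    """
    totals, errors = Counter(), Counter()
    for line in log_lines:
        _, endpoint, status = line.split()
        totals[endpoint] += 1
        if status.startswith("5"):
            errors[endpoint] += 1
    return {ep: errors[ep] / totals[ep] for ep in totals}

logs = [
    "GET /api/users 200",
    "GET /api/users 500",
    "POST /api/orders 201",
    "GET /api/users 200",
]
print(error_rate_by_endpoint(logs))
```

A task like this can be graded on correctness alone — every candidate sees the same data and the same spec, and the reviewer never needs to see a name.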

In an age of freely available open-source code, companies can also infer a lot about an applicant’s professional qualifications from answers on technical discussion boards like Stack Overflow or code contributions on public repositories like GitHub.

Finally, if resume screening is necessary, asking an admin to black out the name on a resume so it can be reviewed “blind” by a hiring manager can go a long way to combating our biases.
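The blacking-out step can even be partially automated. The sketch below is a minimal, hypothetical approach (the `redact` helper is ours, not an existing tool): it assumes the candidate's name is already known from the application form, and simply replaces it with a placeholder before the resume reaches the hiring manager.

```python
import re

def redact(resume_text, names):
    """Replace each known candidate name with a placeholder for blind review.

    `names` is assumed to come from a structured application form; this does
    not attempt to detect names on its own. Longer names should be listed
    first so that "Jamal Jones" is redacted before the bare "Jamal".
    """
    for name in names:
        resume_text = re.sub(re.escape(name), "[REDACTED]",
                             resume_text, flags=re.IGNORECASE)
    return resume_text

resume = "Jamal Jones\nEngineer at Acme. Contact Jamal for references."
print(redact(resume, ["Jamal Jones", "Jamal"]))
```

In practice a human pass is still worth doing — nicknames, email addresses, and photos also leak identity — but even this crude automation removes the most salient cue before the first human judgment is made.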

Obviously, reforming resume screening will not magically eliminate minority and female underrepresentation overnight — but it can go a long way toward combating hiring bias.

Tianhui Michael Li is the founder of The Data Incubator.

Ariel M’ndange-Pfupfu is a Data Scientist in Residence at The Data Incubator.