One of the world’s most prestigious science publications, Nature, is out with a new op-ed arguing that top-notch PhD programs should lower their math requirements to admit more women.
Only 26 percent of women score above 700 on the GRE’s Quantitative section, a cutoff that effectively excludes the rest from the nation’s elite STEM grad programs.
“In simple terms, the GRE is a better indicator of sex and skin color than of ability and ultimate success,” argue professors Casey Miller and Keivan Stassun [PDF].
As an alternative, the University of South Florida gave more weight to soft skills in the application process, including measures of motivation and “service to community,” resulting in higher graduation rates (81 percent) and significantly more students from underrepresented groups.
The op-ed presents a rather conflicted argument. It plays into all the stereotypes of women’s inferior math ability and superior soft skills, even as it promotes more inclusive policies.
There is a disturbing gender gap on all sorts of standardized test scores, including a 33-point gap on the SAT (though it’s narrowing). There are various theories about what causes the gap, from confidence issues [PDF] to sociological explanations. I asked Stassun his theory.
“I don’t know. I’m an astrophysicist, not a psychologist, and I don’t particularly care,” he wrote to me.
“We have identified better ways of identifying capable and promising students in ways that are not biased against women and minorities.”
Likewise, Miller admitted, “I don’t really know the underlying origin.” Either way, both writers wanted to make the gap “irrelevant to success.”
Regardless of the reason women perform worse on standardized math tests, the authors believe the scores don’t accurately reflect women’s and minorities’ true scientific aptitude.
“Yes, scientists do better than non-scientists on the quantitative part, on average. But when you start comparing scientists to scientists, the utility of the GRE becomes extremely tenuous,” wrote Miller.
The two authors tell me they want to avoid playing into female stereotypes, but that will be difficult without a complete solution to the underlying problem.
Unfortunately, they don’t yet have a polished solution for big-name universities. Thirty-minute interviews may work for the University of South Florida and Fisk-Vanderbilt in Nashville, but many household-name schools are inundated with applicants.
In 2011, Stanford’s Computer Science program had 692 applicants; at 30 minutes apiece, initial interviews alone would translate into roughly 350 hours of work. Unlike in undergrad admissions, faculty members are the ones making the decisions, and it’s hard enough to get faculty to focus on teaching, let alone admissions.
The op-ed is optimistic about new tools from the Educational Testing Service (ETS) that can automatically score personal attributes. Potentially, there is a technological solution to the admissions problem.
You can read the full article in the newest issue of Nature, or read it here [PDF].