This week, in what might have been construed as a gesture of goodwill, Amazon announced that it would partner with the National Science Foundation (NSF) to commit up to $10 million in research grants over the next three years to develop systems focused on “fairness in AI.” But the intervening days brought debate rather than praise as researchers questioned the Seattle company’s true motives — and its methods.

They quickly pointed out that Amazon would only contribute part of the $7.6 million in total awards, and that its portion might be provided via an “agreement” or “contract.” They also noted that, before Amazon signs on the dotted line, it will be afforded a chance to review budgets, to negotiate terms, and to provide access to Amazon researchers who would act as project advisors.

That on its face isn’t necessarily a bad thing. But Amazon doesn’t have a sterling reputation when it comes to AI fairness.

A recent MIT study found that Rekognition — Amazon Web Services’ (AWS) object detection API — was incapable of reliably determining the sex of people with darker-skinned faces in certain scenarios. (Amazon disputed — and continues to dispute — those findings, and says that in internal tests of an updated version of Rekognition, it observed “no difference” in gender classification accuracy across all ethnicities.) And this past summer, the ACLU reported that in a test involving a public data set of 25,000 mugshots, Rekognition misidentified 28 members of Congress, including 11 people of color, as criminals.

To researchers like University of Washington assistant professor Nicholas Weber, the NSF solicitation thus feels disingenuous.

“[Computer] science has only really started to do this in the last two years,” he told VentureBeat in a phone interview, referring to the cosponsorship. “It puts us in an odd arrangement — it’s unclear what the responsibilities to researchers are when we submit a budget to the NSF. [And] it allows [Amazon] to piggyback on the NSF’s peer review process — what many consider to be the gold standard of evaluation and review.”

“Amazon is doing the right thing, trying to work with researchers to understand [a problem] that they’ve mostly exacerbated,” Weber added. “But there’s a better way to approach this: Just give money to the National Science Foundation.”

Indeed, in a recently released draft of its 20-year roadmap for AI research in the U.S., the Computing Community Consortium — the organization whose professed goal is to catalyze the computing industry to pursue high-impact research — says that achieving the full potential of AI technologies will require “significant sustained investment” and a “radical transformation” of the AI research enterprise.

“Universities now lack the massive resources (unique datasets, special-purpose computing, extensive knowledge graphs, well-trained AI engineers, etc.) that have been acquired or developed by major IT companies,” the consortium wrote. “These are fundamental capabilities to build forward-looking AI research programs.”

Corporate co-sponsorship of research, done correctly, can yield tremendous technological advances. Weber points out that Intel and VMware — the latter of which partnered with the NSF to investigate edge computing data infrastructure — “allow both … fields [to move] forward” through contributions in the form of software and hardware.

“[They’ve tended] to be about giving [grantees] using their products [support],” he said. “Amazon is not in a position to give cutting-edge artificial intelligence. [It’s a] distributionally unfair outcome.”

History is filled with examples of research tainted by corporate influence. Soft drink brands like Coca-Cola have invested millions in studies arguing that the link between fizzy drinks and obesity is tenuous. The tobacco industry became a major sponsor of medical science in the 1950s, a strategy famously advanced by John W. Hill of public relations firm Hill & Knowlton. And oil giants such as Shell, Chevron, and BP regularly (and generously) support Harvard and MIT research.

AI’s nascency has shielded it from much of the interference that’s plagued its academic forebears. To avoid falling into the same traps, the field will have to learn to recognize the pitfalls its forebears didn’t.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

P.S. Please enjoy this video of Boston Dynamics’ Handle robot stacking boxes in a warehouse.