In January, AI research lab OpenAI released Dall-E, a machine learning system capable of creating images to fit any text caption. Given a prompt, Dall-E generates images depicting a range of concepts, from cats to logos to glasses.
The results are impressive, but training Dall-E required building a large-scale dataset that OpenAI has so far opted not to make public. Work is ongoing on an open source implementation, but according to Connor Leahy, one of the data scientists behind the effort, development has stalled because of the challenges in compiling a corpus that respects both moral and legal norms.
“There’s plenty of not-legal-to-scrape data floating around that isn’t [fair use] on platforms like social media, Instagram first and foremost,” Leahy, who’s a member of the volunteer AI research effort EleutherAI, told VentureBeat. “You could scrape that easily at large scale, but that would be against the terms of service, violate people’s consent, and probably scoop up illegal data both due to copyright and other reasons.”
Indeed, creating AI training datasets in a privacy-preserving, ethical way remains a major blocker for researchers in the AI community, particularly those who specialize in computer vision. In January 2019, IBM released a corpus designed to mitigate bias in facial recognition algorithms that contained nearly a million photos of people from Flickr. But IBM notified neither the photographers nor the people pictured that their photos would be included. Separately, an earlier version of ImageNet, a dataset used to train AI systems around the world, was found to contain photos of naked children, porn actresses, college parties, and more — all scraped from the web without those individuals’ consent.
“There are real harms that have emerged from casual repurposing, open-sourcing, collecting, and scraping of biometric data,” said Liz O’Sullivan, cofounder and technology director at the Surveillance Technology Oversight Project, a nonprofit organization litigating and advocating for privacy. “[They] put people of color and those with disabilities at risk of mistaken identity and police violence.”
Techniques that rely on synthetic data to train models might lessen the need to create potentially problematic datasets in the first place. According to Leahy, while there’s usually a minimum dataset size needed to achieve good performance on a task, it’s possible, to a degree, to “trade compute for data” in machine learning. In other words, simulation and synthetic data, like AI-generated photos of people, could take the place of real-world photos scraped from the web.
“You can’t trade infinite compute for infinite data, but compute is more fungible than data,” Leahy said. “I do expect for niche tasks where data collection is really hard, or where compute is super plentiful, simulation to play an important role.”
O’Sullivan is more skeptical that synthetic data will generalize well from lab conditions to the real world, pointing to existing research on the topic. In a study last January, researchers at Arizona State University showed that when an AI system trained on a dataset of images of engineering professors was tasked with creating faces, 93% were male and 99% white. The system appeared to have amplified the dataset’s existing biases — 80% of the professors were male and 76% were white.
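The amplification the Arizona State researchers observed can be seen by comparing attribute frequencies in the training set against the model's outputs. The sketch below (illustrative only, not the study's actual code; the attribute labels and function name are hypothetical) applies that arithmetic to the figures reported above:

```python
# Illustrative sketch: quantifying bias amplification by comparing an
# attribute's share of the training data with its share of generated
# outputs. Not code from the ASU study; labels are hypothetical.

def amplification(train_share: float, generated_share: float) -> float:
    """Percentage-point gap between generated and training prevalence."""
    return generated_share - train_share

# Figures reported in the study described above:
train = {"male": 0.80, "white": 0.76}      # professor photo dataset
generated = {"male": 0.93, "white": 0.99}  # faces the model produced

for attr in train:
    gap = amplification(train[attr], generated[attr])
    print(f"{attr}: +{gap:.0%}")  # positive gap = bias amplified
```

A positive gap on both attributes is the signature of amplification: the model did not merely reproduce the dataset's skew, it exaggerated it.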
On the other hand, startups like Hazy and Mostly AI say that they’ve developed methods for controlling the biases of data in ways that actually reduce harm. A recent study published by a group of Ph.D. candidates at Stanford claims the same — the coauthors say their technique allows them to weight certain features as more important in order to generate a diverse set of images for computer vision training.
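One simple version of the general idea behind such reweighting is to oversample underrepresented groups when assembling training examples, so the resulting set is closer to balanced. The sketch below is a hypothetical illustration of that principle, not the Stanford authors' method; the function and its parameters are assumptions:

```python
# Hypothetical sketch of frequency-based reweighting: draw samples with
# weights inversely proportional to each group's frequency, so rare
# groups are oversampled toward balance. Not the Stanford authors' code.
import random

def balanced_sample(items, group_of, k, seed=0):
    """Draw k items, weighting each inversely to its group's frequency."""
    rng = random.Random(seed)
    counts = {}
    for it in items:
        counts[group_of(it)] = counts.get(group_of(it), 0) + 1
    weights = [1.0 / counts[group_of(it)] for it in items]
    return rng.choices(items, weights=weights, k=k)

# A 90/10 skewed pool: reweighted draws approach a 50/50 group mix.
pool = ["a"] * 9 + ["b"]
sample = balanced_sample(pool, group_of=lambda x: x, k=100)
```

The same machinery extends to the image setting the coauthors describe: weight examples by the features you want represented, then sample or generate accordingly.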
Ultimately, even where synthetic data might come into play, O’Sullivan cautions that any open source dataset could put people in that set at greater risk. Piecing together and publishing a training dataset is a process that must be undertaken thoughtfully, she says — or not at all, where doing so might result in harm.
“There are significant worries about how this technology impacts democracy and our society at large,” O’Sullivan said.
Thanks for reading,
AI Staff Writer