Microsoft’s AI for Accessibility, unveiled in May 2018, is the Redmond, Washington company’s second so-called AI for Good program. It follows, and was largely modeled after, the company’s AI for Earth, which provides training and resources to organizations tackling problems relating to climate, water, agriculture, and biodiversity. Through AI for Accessibility, Microsoft pledged $25 million over five years to universities, philanthropic organizations, and others developing AI tools that serve people with disabilities.
Nine organizations and projects — including Zyrobotics, iTherapy’s InnerVoice, Present Pal, Equadex’s Helpicto, Abilisense, Timlogo, the University of Iowa, the Indian Institute of Science, and the Frist Center for Autism and Innovation — were awarded AI for Accessibility grants in 2018 to work on a range of projects. Today, in conjunction with Global Accessibility Awareness Day, Microsoft announced the newest cohort of recipients:
- The University of California, Berkeley
- Massachusetts Eye and Ear, a teaching hospital of Harvard Medical School
- Voiceitt in Israel
- The University of Sydney in Australia
- Birmingham City University in the United Kingdom
- Pison Technology of Boston
- Our Ability, of Glenmont, New York
AI for Accessibility is overseen by Microsoft chief accessibility officer Jenny Lay-Flurrie, senior accessibility architect Mary Bellard, and others. Each fiscal quarter, it rewards the most promising candidates in three categories — work, life, and human connections — with seed grants and follow-on financing. Proposals are accepted on a rolling basis and are evaluated “on their scientific merit,” as well as on their innovativeness and scalability.
“What stands out the most about this round of grantees is how so many of them are taking standard AI capabilities, like a chatbot or data collection, and truly revolutionizing the value of technology in typical scenarios for a person with a disability like finding a job, being able to use a computer mouse or anticipating a seizure,” said Bellard. “[The research being done] … is an important step in scaling accessible technology across the globe. People are looking for products or services to make things easier and AI might be able to help.”
Toward that end, Our Ability, an organization founded in 2011 to match disabled job seekers with “meaningful” career opportunities, will team up with students from Syracuse University to create an AI-powered chatbot that matches businesses with would-be workers. Specifically, the bot will help job seekers fill out paperwork, identify the skills required for top jobs, and surface work profiles.
Our Ability founder John Robinson, who was born without lower arms or legs, noted in a statement that the unemployment rate among people with disabilities, at about 7.9%, is roughly twice that of people without them. “[The chatbot] will provide a much more rapid way of getting more people to connect with one another. By creating a place where we assess real-life skills, train real-life skills and match them with employment — that’s every disability job coach’s goal in the last 50 years,” he said. “We’re going to be able to do it with technology a lot faster and a lot better.”
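The matching step Robinson describes can be illustrated with a toy sketch. Nothing below reflects Our Ability’s actual system; the function names, the set-overlap (Jaccard) scoring, and the sample data are all hypothetical, chosen only to show what ranking jobs by skill overlap might look like:

```python
def skill_match(candidate_skills, job_skills):
    """Jaccard overlap between a candidate's skills and a job's
    required skills (0.0 = no overlap, 1.0 = identical sets)."""
    cand, job = set(candidate_skills), set(job_skills)
    if not cand or not job:
        return 0.0
    return len(cand & job) / len(cand | job)

def rank_jobs(candidate_skills, jobs):
    """Return job titles sorted by descending skill overlap.

    `jobs` maps a job title to its list of required skills."""
    scored = [(skill_match(candidate_skills, required), title)
              for title, required in jobs.items()]
    return [title for score, title in sorted(scored, reverse=True)]

# Hypothetical data: a candidate who types and knows spreadsheets.
jobs = {"data entry": ["typing", "excel"],
        "support agent": ["typing", "phone", "crm"]}
print(rank_jobs(["typing", "excel"], jobs))  # "data entry" ranks first
```

A production chatbot would layer natural-language understanding and richer profile data on top of a scoring step like this; plain set overlap is just the simplest possible stand-in.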
Pison Technology cofounder Dexter Ang, an MIT graduate whose mother suffered from the neurodegenerative disorder amyotrophic lateral sclerosis (ALS), hopes to commercialize a low-cost wearable that will enable people with neuromuscular disorders to control digital devices. Much like startup Ctrl-labs’ forthcoming Ctrl-kit, it will use AI algorithms to translate muscle neuron EMG (electromyography) signals into actions, such as simulating a mouse click.
“Our proprietary technology can sense nerve signals on the surface of the skin,” said Ang. “To be able to maintain and increase access to that digital world is exceptionally important for people with disabilities.”
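To make the EMG-to-action idea concrete, here is a deliberately simplified sketch. This is not Pison’s or Ctrl-labs’ technology, which relies on learned models over nerve-signal data; this toy version merely thresholds the root-mean-square energy of windowed samples, and the window size, threshold, and “click”/“rest” labels are invented for illustration:

```python
import math

def rms(window):
    """Root-mean-square amplitude of one window of EMG samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def detect_actions(samples, window_size=50, threshold=0.5):
    """Slide a fixed window over the signal and emit 'click' whenever
    muscle-activation energy (RMS) exceeds the threshold, else 'rest'.

    Window size and threshold are illustrative, not tuned values."""
    actions = []
    for start in range(0, len(samples) - window_size + 1, window_size):
        window = samples[start:start + window_size]
        actions.append("click" if rms(window) > threshold else "rest")
    return actions

# Synthetic signal: 50 quiet samples, then 50 strongly activated ones.
signal = [0.01] * 50 + [0.9] * 50
print(detect_actions(signal))  # prints ['rest', 'click']
```

Real systems replace the fixed threshold with a classifier trained per user, since EMG amplitude varies widely across people and electrode placements.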
Meanwhile, Omid Kavehei, a senior lecturer in the University of Sydney’s faculty of engineering and information technologies, is developing with colleagues an AI tool that reads a person’s electroencephalogram (EEG) data via a wearable cap and communicates that data to and from the cloud to provide seizure monitoring and alerts. It targets the more than 50 million people worldwide who, by the World Health Organization’s estimate, live with epilepsy.
Kavehei and team intend to test a cap on epilepsy patients using driving simulations, and to leverage Microsoft’s Azure Machine Learning service to attempt to predict seizures from human signals.
“To have a non-surgical device available for those living with epilepsy will make a significant difference to many, including family members, friends, and of course those impacted by epilepsy,” said Carol Ireland, CEO of Epilepsy Action Australia, a group that’s working with the researchers on the project. “Such a device would take away the fear element of when and if a seizure may occur, ensuring that the person living with epilepsy can get into a safe place quickly.”
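The monitoring loop the Sydney team describes, streaming EEG windows and raising alerts, can be caricatured in a few lines. This is not their model: real seizure prediction uses trained classifiers (in their case, models built with Azure Machine Learning), whereas this sketch thresholds a single classic EEG feature, line length, with made-up numbers:

```python
def line_length(window):
    """Sum of absolute sample-to-sample differences: a cheap EEG
    feature that grows during high-frequency seizure-like activity."""
    return sum(abs(b - a) for a, b in zip(window, window[1:]))

def monitor(stream, window_size=4, alert_threshold=5.0):
    """Yield (window_index, alert) for each window of the EEG stream.

    Window size and threshold are illustrative placeholders; a real
    system would score windows with a trained model instead."""
    for i in range(0, len(stream) - window_size + 1, window_size):
        window = stream[i:i + window_size]
        yield i // window_size, line_length(window) > alert_threshold

# Synthetic trace: a flat window followed by a rapidly oscillating one.
for index, alert in monitor([0, 0, 0, 0, 0, 3, -3, 3]):
    print(index, alert)  # window 0: False, window 1: True
```

In the cloud-connected design the article describes, each window’s features (or the raw samples) would be uploaded for scoring, and a positive result would trigger an alert back to the wearer or a caregiver.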
Selected AI for Accessibility applicants receive compute credits for Microsoft’s Azure AI Platform in increments of $10,000, $15,000, or $20,000, depending on their project’s scope and needs, and additional funds to cover costs related to collecting or labeling data, refining models, or other engineering-related work. They also gain access to Microsoft engineers, who work with them to accelerate development and incorporate their innovations into “platform-level” partner services.