
More than 56 million people in the United States are living with a disability, according to the U.S. Census Bureau, and there’s a growing digital divide between those who have a disability and those who don’t. Disabled Americans are roughly three times as likely to avoid going online and 20 percent less likely to own a computer, smartphone, or tablet. Moreover, just 40 percent of them say they’re confident in their ability to use the internet.

In an effort to promote a more accessible web, Google and New York University's Ability Project today launched Creatability, a set of experiments exploring how artificial intelligence (AI) can help accommodate people who are blind or deaf, or who have physical disabilities.

The experiments are available on the Creatability website, and Google has open-sourced the code. The company is soliciting new experiments from developers, who can submit their creations for a chance to be featured.

The experiments include a music-composing tool that lets you create tunes by moving your face, a digital canvas that translates sights and sounds into sketches, and a music visualizer that mimics the effects of synesthesia.

Most leverage PoseNet — a machine learning model built on Google's TensorFlow framework — to detect body joints in images and videos. Using any off-the-shelf webcam, you can draw with your face or tap out a tune with your nose. The experiments run in JavaScript, and images are processed on-device, in the browser.
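To give a sense of how an experiment like this works: in the browser, the `@tensorflow-models/posenet` package returns a pose as a list of named keypoints (`'nose'`, `'leftWrist'`, and so on), each with a confidence score and an (x, y) position. The sketch below is a hypothetical helper, not Creatability's actual code — it simply quantizes the nose's horizontal position into one note of a scale, which is the general shape of a "play music with your face" mapping.

```javascript
// Assumed PoseNet output shape (per its documented API):
// { keypoints: [{ part: 'nose', score: 0.97, position: { x, y } }, ...] }
// In the browser you would obtain a pose roughly like this:
//   const net = await posenet.load();
//   const pose = await net.estimateSinglePose(videoEl, { flipHorizontal: true });

const PENTATONIC = ['C4', 'D4', 'E4', 'G4', 'A4'];

// Hypothetical helper: map the nose's x-position to a note in the scale.
function noseToNote(pose, videoWidth, scale = PENTATONIC, minScore = 0.5) {
  const nose = pose.keypoints.find((k) => k.part === 'nose');
  if (!nose || nose.score < minScore) return null; // low confidence: play nothing
  // Clamp x into the frame, then quantize into equal-width note buckets.
  const x = Math.min(Math.max(nose.position.x, 0), videoWidth - 1);
  const bucket = Math.floor((x / videoWidth) * scale.length);
  return scale[bucket];
}

// Example: a nose near the left edge of a 640px-wide frame.
const pose = { keypoints: [{ part: 'nose', score: 0.9, position: { x: 10, y: 80 } }] };
console.log(noseToNote(pose, 640)); // 'C4'
```

Because pose estimation happens entirely in the browser, the webcam frames never leave the user's device — only the derived keypoints drive the sound.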

Google said it worked with creators in the accessibility community to build Creatability, including composer Jay Alan Zimmerman, who’s deaf; Josh Miele, a blind scientist and designer; Chancey Fleet, a blind technology educator; and Open Up Music founders Barry Farrimond and Doug Bott, who work with young disabled musicians to build inclusive orchestras.

“We hope these experiments inspire others to unleash their inner artist regardless of ability,” Claire Kearney-Volpe, a designer and researcher at the NYU Ability Project, wrote in a blog post. “Art gives us the ability to point beyond spoken or written language, to unite us, delight, and satisfy. Done right, this process can be enhanced by technology — extending our ability and potential for play.”

It’s not the first time AI has been used to build accessible products.

Google's DeepMind division is using it to generate closed captions for deaf users. In a 2016 joint study with researchers at the University of Oxford, scientists created a model that significantly outperformed a professional lip-reader, correctly transcribing 46.8 percent of words in 200 randomly selected clips, compared with the professional's 12.4 percent.

Facebook, meanwhile, has developed captioning tools that describe photos to visually impaired users. Google's Cloud Vision API can understand the context of objects in photos. And Microsoft's Seeing AI app can read handwritten text, describe colors and scenes, and more.
