Two years ago, Google launched Teachable Machine, a web experiment intended to elucidate machine learning concepts. It let any user with a webcam train an AI model to output specific media — an image, sound, speech, or GIF — corresponding to a hand gesture, object, or activity. Now Teachable Machine is expanding to incorporate inputs beyond those it initially supported, including audio. Additionally, it will allow users to export their trained models to websites, apps, devices, and more.
Google says it worked with people across industries with different needs — like architect Steve Saling, who has amyotrophic lateral sclerosis (ALS) — to test and shape the new Teachable Machine. “People are using AI to explore all kinds of ideas — identifying the roots of bad traffic in Los Angeles, improving recycling rates in Singapore, and even experimenting with dance,” wrote the company in a blog post. “We collaborated with educators, artists, students, and makers of all kinds to figure out how to make [Teachable Machine] useful for them.”
Teachable Machine 2.0 can recognize images, sounds, and poses from uploaded files or live mics and webcams. A button click kicks off model training in the browser on the examples provided, and the subsequent results panel shows both the model and performance metrics. (Google notes that training samples stay on-device and don’t leave a user’s PC unless they choose to save the samples to Google Drive).
“Our hope is that the new version of Teachable Machine will be a super easy way for anyone to train their own machine learning models and use them in their own projects, wherever TensorFlow.js models can be run,” wrote Google.
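As a rough illustration of that export workflow, here is a minimal sketch of using an exported image model in the browser. Assumptions not stated in the article: the hosting URL (`MODEL_URL`) is hypothetical, the page loads Google's `@teachablemachine/image` helper library as the global `tmImage`, and `topPrediction` is an illustrative helper, not part of the library.

```javascript
// Hypothetical URL where the exported Teachable Machine model is hosted.
const MODEL_URL = "https://example.com/my-model/";

// The library's predict() call returns an array of objects shaped like
// [{ className, probability }, ...]; this helper picks the most likely class.
function topPrediction(predictions) {
  return predictions.reduce((best, p) =>
    p.probability > best.probability ? p : best
  );
}

// Browser-only sketch: load the exported model and classify a frame from an
// <img>, <canvas>, or <video> element. Requires the tmImage global from the
// @teachablemachine/image script, so it is not invoked here.
async function classify(imageElement) {
  const model = await tmImage.load(
    MODEL_URL + "model.json",
    MODEL_URL + "metadata.json"
  );
  const predictions = await model.predict(imageElement);
  return topPrediction(predictions).className;
}
```

Because the exported artifact is an ordinary TensorFlow.js model, the same files can also be loaded with TensorFlow.js directly in any environment where it runs, which is the portability Google is pointing at.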
Google is not the only one offering free tutorials designed to get intrepid practitioners up to speed on AI and machine learning basics. One recent example is a partnership between Amazon and Udacity to launch the DeepRacer Scholarship Challenge, a program that helps students create, train, and optimize AI models while receiving support from the community. Udacity previously launched a self-driving car nanodegree in partnership with big-name brands such as Mercedes-Benz.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.