Speaking at the AI Frontiers Conference in Santa Clara, California today, Google senior fellow Jeff Dean provided an update on the accuracy of the company’s speech recognition software.

Google’s word error rate — how often its software transcribes a word incorrectly — has fallen by more than 30 percent, Dean said, according to a tweet from Mashable’s Karissa Bell. A Google spokesperson confirmed the statistic in an email to VentureBeat.
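For readers unfamiliar with the metric: word error rate is conventionally computed as the word-level edit distance (substitutions, deletions, and insertions) between a system’s transcript and a reference transcript, divided by the number of words in the reference. A minimal sketch, not Google’s implementation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words (standard Levenshtein DP).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

So a system that turns the reference “the cat sat” into “the cat sit on” (one substitution, one insertion) scores a WER of 2/3.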

Per Bell’s tweet, Dean attributed the improvement to the “addition of neural nets,” which are systems that Google and other companies use as part of deep learning. People train neural nets on lots of data, such as snippets of speech, and then get them to make inferences about new data. Google first put neural nets into production for speech recognition in 2012, with the launch of Android Jelly Bean.
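The train-then-infer loop Dean alluded to can be illustrated with a toy example. The sketch below trains a single artificial neuron (a perceptron, the simplest building block of a neural net) on labeled examples of the OR function, then makes inferences on new inputs; real speech models are vastly larger, but the workflow is the same:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Fit a single neuron (weights + bias) to labeled (input, target) pairs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred            # perceptron update rule
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Inference: apply the learned weights to a new input."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Training data: the logical OR function.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(samples)
```

After training, `predict(w, b, (1, 0))` returns 1 and `predict(w, b, (0, 0))` returns 0: the neuron has generalized the pattern from its examples.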

Google doesn’t often talk about its momentum in this important area, which affects an increasing number of Google products, from the Google Home smart speaker to the Gboard virtual keyboard for Android and iOS. But in 2015, Google chief executive Sundar Pichai said that the company had an 8 percent word error rate.

In August, Alex Acero, senior director of Siri at Apple, told Backchannel’s Steven Levy that Siri’s “error rate has been cut by a factor of two in all the languages, more than a factor of two in many cases.”

And in September, Microsoft said that researchers had achieved a word error rate of 6.3 percent on a benchmark.

Google recently incorporated neural machine translation into Google Translate.

Dean also said the Smart Reply feature that first appeared in the Inbox by Gmail app — and has subsequently appeared in other Google products — got its start as an April Fools’ joke in 2009, as Bell noted in a tweet today.