Three pioneers in AI — University of Toronto faculty member and Google Brain researcher Geoffrey Hinton, Facebook chief AI scientist and NYU professor Yann LeCun, and Element AI cofounder and University of Montreal professor Yoshua Bengio — were honored this morning with the Turing Award, an annual prize the Association for Computing Machinery (ACM) has given since 1966 to individuals who’ve made contributions “of lasting and major technical importance to the computer field.” The three will share this year’s award and its accompanying $1 million prize, which is funded in part by Google.
“Artificial intelligence is now one of the fastest-growing areas in all of science and one of the most talked-about topics in society,” said ACM president Cherri M. Pancake in a statement. “The growth of and interest in AI is due, in no small part, to the recent advances in deep learning for which Bengio, Hinton, and LeCun laid the foundation. These technologies are used by billions of people. Anyone who has a smartphone in their pocket can tangibly experience advances in natural language processing and computer vision that were not possible just 10 years ago. In addition to the products we use every day, new advances in deep learning have given scientists powerful new tools — in areas ranging from medicine to astronomy to materials science.”
Hinton, who has spent the past 30 years tackling a few of AI’s biggest challenges, has been referred to by some as the “Godfather of AI.” In addition to his seminal work in neural networks — layers of mathematical functions modeled after biological neurons — he has authored or coauthored over 200 peer-reviewed publications in machine learning, perception, memory, and symbol processing, including a 1986 paper (“Learning Representations by Back-propagating Errors”) on a machine learning technique called backpropagation. Backpropagation in particular, aided by increasingly cheap and powerful computer hardware, has enabled monumental leaps in computer vision, natural language processing (NLP), machine translation, drug design, and material inspection, with some neural nets producing results superior to those of human experts.
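The core idea of backpropagation is the chain rule: the error at the output is propagated backward through the layers, so every weight receives its own gradient signal. Below is a minimal illustrative sketch — a toy one-hidden-unit network, not code from the 1986 paper — with the analytic gradients checked against numerical finite differences:

```python
import math

def forward(x, w1, w2):
    """Tiny network: one sigmoid hidden unit feeding one linear output unit."""
    h = 1.0 / (1.0 + math.exp(-w1 * x))  # hidden activation
    y = w2 * h                           # network output
    return h, y

def loss(x, target, w1, w2):
    _, y = forward(x, w1, w2)
    return 0.5 * (y - target) ** 2       # squared-error loss

def backprop(x, target, w1, w2):
    """Return dL/dw1 and dL/dw2 via the chain rule (backpropagation)."""
    h, y = forward(x, w1, w2)
    dL_dy = y - target                   # loss gradient at the output
    dL_dw2 = dL_dy * h                   # chain rule: dy/dw2 = h
    dL_dh = dL_dy * w2                   # propagate backward: dy/dh = w2
    dL_dw1 = dL_dh * h * (1 - h) * x     # sigmoid' = h(1-h); dz/dw1 = x
    return dL_dw1, dL_dw2

# Sanity check: analytic gradients agree with finite differences.
x, t, w1, w2 = 0.5, 1.0, 0.3, -0.7
g1, g2 = backprop(x, t, w1, w2)
eps = 1e-6
n1 = (loss(x, t, w1 + eps, w2) - loss(x, t, w1 - eps, w2)) / (2 * eps)
n2 = (loss(x, t, w1, w2 + eps) - loss(x, t, w1, w2 - eps)) / (2 * eps)
assert abs(g1 - n1) < 1e-6 and abs(g2 - n2) < 1e-6
```

In a full network the same backward pass repeats layer by layer, which is what makes training deep models tractable.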
In 2012, Hinton and two of his graduate students tackled ImageNet, a well-known AI benchmark, with a system that classified 100,000 photos into 1,000 categories with roughly 85 percent accuracy when the correct label was allowed to fall within its top five guesses — more than 10 percentage points better than the runner-up.
Bengio, for his part, was one of the first to combine neural networks with probabilistic models of sequences, a concept that has been extended to contemporary speech recognition systems. In a paper published nearly two decades ago, he introduced the concept of word embeddings, a language modeling and feature learning paradigm in which words or phrases from a vocabulary are mapped to vectors of real numbers. Embeddings — and Bengio’s more recent work with computer scientist and Google Brain researcher Ian Goodfellow on generative adversarial networks (GANs) — have revolutionized machine translation, image generation, audio synthesis, and text-to-speech systems.
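The word-embedding idea can be illustrated with a toy lookup table: each word maps to a vector, and geometric closeness stands in for semantic relatedness. The vectors below are hand-picked for illustration — real embeddings are learned from text, not chosen by hand:

```python
import math

# Toy embedding table: hand-picked vectors for illustration, not trained.
embeddings = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.75, 0.20],
    "apple": [0.10, 0.20, 0.90],
}

def cosine(u, v):
    """Cosine similarity: values near 1.0 mean nearly parallel vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Related words sit close together in the vector space.
assert cosine(embeddings["king"], embeddings["queen"]) > \
       cosine(embeddings["king"], embeddings["apple"])
```

Because similarity becomes a simple geometric computation, downstream models can generalize from one word to its neighbors in the space.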
Not to be outdone, LeCun is credited with developing convolutional neural networks, a class of efficient, multilayered neural nets most commonly applied to analyzing visual imagery but also employed in a host of other applications, including autonomous driving, medical image analysis, voice-activated assistants, and information filtering. He was the first to train an AI system on images of handwritten digits in the 1980s while working at the University of Toronto and Bell Labs, and he contributed to an early version of the backpropagation algorithm. Moreover, he popularized the notion of hierarchical feature representation — which captures both local relationships and interrelationships for data as a whole — and AI model architectures that can manipulate structured data, such as graphs.
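The building block of a convolutional network is a small filter slid across the image, so the same few weights detect a local pattern wherever it appears. A minimal sketch (a hypothetical 1×2 edge-detecting kernel, written in pure Python for clarity rather than any production framework):

```python
def conv2d(image, kernel):
    """Slide a small kernel over the image (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            # The SAME weights are reused at every position (weight sharing).
            out[i][j] = sum(
                image[i + a][j + b] * kernel[a][b]
                for a in range(kh)
                for b in range(kw)
            )
    return out

# A 1x2 kernel that fires where a dark-to-bright vertical edge appears.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1]]
print(conv2d(image, kernel))  # nonzero only at the edge between the halves
```

Weight sharing is the design choice that makes these nets efficient: one small filter covers the whole image instead of every pixel needing its own parameters.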
Hinton, LeCun, and Bengio advanced their ideas more or less independently, but they have crossed paths frequently over the last three decades. LeCun, for instance, performed his postdoctoral work under Hinton’s supervision, while Bengio worked with LeCun at Bell Labs beginning in the early 1990s. And roughly 10 years ago, Hinton, with $400,000 in backing from the Canadian government, organized a research program dedicated to “neural computation and adaptive perception” within the Canadian Institute for Advanced Research (CIFAR), together with LeCun, Bengio, and other academics in the field.
“Deep neural networks are responsible for some of the greatest advances in modern computer science, helping make substantial progress on long-standing problems in computer vision, speech recognition, and natural language understanding,” said Google senior fellow and senior vice president of AI Jeff Dean. “At the heart of this progress are fundamental techniques developed starting more than 30 years ago by this year’s Turing Award winners: Yoshua Bengio, Geoff Hinton, and Yann LeCun. By dramatically improving the ability of computers to make sense of the world, deep neural networks are changing not just the field of computing, but nearly every field of science and human endeavor.”