In just over a week, NeurIPS — the biggest AI summit of the year — will kick off in Montreal. In 2016, the event had 5,000 registered participants. Last year, the number was 8,000. This year, the first batch of 2,000 tickets sold out in 12 minutes.
The annual Conference on Neural Information Processing Systems (until earlier this month it bore the acronym NIPS, which some attendees protested for its potentially offensive connotations) was first proposed in 1986 at the Snowbird Meeting on Neural Networks for Computing, an event organized by the California Institute of Technology and Bell Laboratories. Originally conceived as a meeting for researchers exploring biological and artificial neural networks, it has come to be dominated by papers on artificial intelligence (AI), statistics, and machine learning.
In recent years, NeurIPS has played host to tech giants like Intel, Amazon, IBM, Nvidia, Google, Apple, Facebook, Tesla, and Uber, which throw elaborate parties and networking events intended to woo researchers and students. (At the Tesla party last year, Elon Musk and Tesla's director of AI, Andrej Karpathy, spoke on a panel moderated by microchip luminary Jim Keller.) These events are not just for show; AI talent is scarce. A typical specialist with a PhD can expect to make between $300,000 and $500,000 a year in salary and company stock, according to the New York Times.
Despite concerns that the showboating will overshadow the workshops, demonstrations, and poster presentations proper, NeurIPS remains very much a conference for researchers. Last year, scientists from Nvidia submitted a paper ("Unsupervised Image-to-Image Translation Networks") describing a system that could translate an image of a winter scene into a summer scene, for example, or a leopard into a house cat. In a talk, Google principal scientist John Platt discussed how machine learning can accelerate and optimize nuclear fusion research. And Google subsidiary DeepMind's David Silver revealed in a session that AlphaZero, an AI model that defeated top-ranked human players in the board game Go, handily bested Stockfish, the highest-rated chess engine, in a 100-game match.
So what does this year have in store?
Microsoft is holding a session about Ruuh, a chatbot with the persona of a 21-year-old female that makes typos and corrects them; reacts differently to quick replies and delayed ones; and makes timely pop culture references. (To date, it has held over 40 million conversations and racked up 100,000 followers on Facebook.) Xiaomi researchers will present a poster describing a deep adversarial algorithm that can learn local camera exposure. And scientists at the Max Planck Institute for Informatics will detail their work on what they call "adversarial scene editing," a model that learns to find and remove objects from images.
One of the talks to watch will be Google's panel on AI fairness, where engineers and researchers will discuss how algorithmic bias affects product development, and the opportunities and challenges that creates for the industry.
Another worth bookmarking is Princeton professor Edward Felten’s talk about machine learning and public policy. He intends to give an overview of how policymakers deal with new technologies, how the process might develop in the case of AI and machine learning, and why “constructive engagement” with the policy process will lead to “better outcomes for the field, for governments, and for society.”
Personally, I'm looking forward to hearing from Laura Gomez, who will speak during NeurIPS' opening ceremony. A former engineer at Google, YouTube, Jawbone, and Twitter, she is the founder and CEO of venture-backed startup Atipica, which focuses on machine learning and analytics for talent acquisition and diversity teams. Gomez also serves on the board of the Institute for Technology and Public Policy. She'll speak about the need for diversity and inclusivity in the tech industry.
NeurIPS promises to be action-packed this year — Christmas comes early for AI enthusiasts and scientists alike. If December’s conference conforms to the mold of the previous three decades or so, expect to see research that charts the course for years to come.
AI Staff Writer
P.S. Please enjoy this video summarizing a paper from DeepMind that'll be presented at NeurIPS 2018:
Amazon today announced general availability for the Alexa Mobile Accessory Kit to bring Alexa to more wireless headphones, wearables, and Bluetooth devices.
Researchers at Purdue trained an AI model on Wi-Fi access data from college freshmen to predict their locations and relations.
Chromecast devices that plug into televisions can now be part of audio groups that include Home speakers to play music, podcasts, or audiobooks.
The International Conference on Learning Representations will take place in Africa in 2020 to give Africans denied visas more access to the AI community.
It’s official: Alexa users can now place calls with Skype, Microsoft and Amazon confirmed. The integration even extends to SkypeOut numbers.
Scientists at Amazon have developed a novel method of modeling different speaking styles using AI. The results are pretty impressive.
The vast majority of the AI advancements and applications you hear about refer to a category of algorithms known as machine learning. (via MIT Technology Review)
Alongside Geoff Hinton and Yann LeCun, Bengio is famous for championing a technique known as deep learning that in recent years has gone from an academic curiosity to one of the most powerful technologies on the planet. (via MIT Technology Review)
Completing someone else’s thought is not an easy trick for A.I. But new systems are starting to crack the code of natural language. (via NY Times)
Some notable individuals such as legendary physicist Stephen Hawking and Tesla and SpaceX leader and innovator Elon Musk suggest AI could potentially be very dangerous. (via Forbes)