Yesterday, Microsoft Germany CTO Andreas Braun was quoted as saying that GPT-4 will be introduced next week and will include multimodal models. The report, which ran in the German news outlet Heise, instantly led to renewed online chatter about the possibility of GPT-4’s debut, less than four months after the release of the GPT-3.5 series, from which ChatGPT is fine-tuned.
Coincidentally, deep learning pioneer Yoshua Bengio, who won the 2018 Turing Award together with Geoffrey Hinton and Yann LeCun, also made comments yesterday about ChatGPT and the potential of multimodal models.
In a virtual Q&A titled “What’s Lacking In ChatGPT? Bridging the gap to human-level intelligence,” Bengio said that current work on multimodal large neural networks, which take in images or video as well as text, would “help a lot” with the “world model” issue — that is, the need for models to understand the physics of our world.
He also warned that market pressures will likely push tech companies towards secrecy rather than openness with their AI models, and that the “media circus” around ChatGPT is a “wake-up call” about the potential of powerful AI systems to both do good for society as well as create significant ethical concerns.
ChatGPT has raised awareness of the potential for powerful AI
Bengio emphasized that while it is impressive, ChatGPT is a “very small step” scientifically and called it “mostly an engineering advance.” ChatGPT is more significant from a social standpoint, he explained — that is, making people aware of what can be done with AI.
But, he warned, it is up to humans to decide how they are going to design these machines — which can, to some extent, already pass the Turing test — from an ethical and responsible standpoint.
“Are we going to build systems that are going to help us have a better life in a philosophical sense, or is it just going to be an instrument of power and profit?” he said.
The need for regulation
In our economic and political system, “the right answer to this is regulation,” he said, pointing out that startups are willing to take risks that lead larger Big Tech companies like Google and Microsoft to “feel compelled to jump into the race.”
Protecting the public, he added, “in the long run is good for everyone and it’s leveling the playing field — so that the companies that are more willing to take risks with the public’s well-being are not rewarded for doing it.”
He emphasized that there are discussions about making sure AI regulation does not hurt the innovation economy. “But it is going to slow some things down, but that’s probably a good thing,” he said.
Taking a long-term view of ChatGPT and LLMs
Bengio acknowledged that at the moment, companies are feeling an urgency to bring ChatGPT and other LLMs into their products and services. But he pointed out that academics and some companies are also looking at a longer horizon and asking what comes next.
“How do we become the next big company in the field? How do we lead? For that, you have to think about what’s missing, what are the failure modes,” he said. “That kind of research is hard and might take years to answer. Hopefully some people will have the vision to look beyond the immediate panic that I think is happening right now.”