Presented by SambaNova Systems
To stay on top of cutting-edge AI innovation, it’s time to upgrade your technology stack. Learn how advances in computer architecture are unlocking new capabilities for NLP, visual AI, recommendation models, scientific computing, and more at this upcoming VB Live event.
For the last decade or so, computing has been focused on transactional processing, from core banking and ERP systems in the enterprise to taxation systems in government, and more. Recently, however, there’s been a shift in the software and applications world toward AI and machine learning, says Marshall Choy, VP of product at SambaNova Systems, and that’s something companies need to sit up and take notice of. Those previous hardware architectures, which were good at transactional processing, aren’t well-equipped for running the AI and ML software stack.
“We’re seeing huge growth in both AI and ML software and hardware purchases going forward, in terms of compounded annual growth rates, which has spawned a need for a different way to run these new software applications,” Choy says.
Single cores are becoming less efficient in their own right. Packing many of them together on a chip multiplies that inefficiency, and filling a system with those inefficient multicore chips compounds it further still. Hence the need for a different way to do computation for next-generation AI and machine learning software.
“The added complexity to all this is that we’re really in the early days of AI and machine learning,” he says. “As is typical of any application space, there’s a lot of churn and change happening at the software and application level. And so this is where the countervailing forces of software development and hardware development come into play, where developers are changing, improving, and inventing new ways of doing machine learning at a breakneck speed.”
If you look at arXiv.org, there's a steady stream of new machine learning research papers being published, which translates to a steady stream of new ideas on how to do machine learning, and how to write algorithms, models, and applications differently, Choy points out. Hardware and processors, by contrast, typically take 18 to 24 months to develop, which means infrastructure can very quickly fall out of sync with software development and delivery cycles.
What’s needed is an infrastructure that’s much more flexible to the needs and requirements of the ever-changing software stack.
The new architecture paradigm, which Choy calls reconfigurable dataflow architecture, enables a hardware stack that is designed to be flexible to the requirements coming down from the software stack for the models, applications, and algorithms that exist today — as well as those that have not yet been invented. Effectively, we need a future-proofed architecture that can be reconfigured and adapted to wherever software development takes us over the next several years.
“I do firmly believe that this transition to AI-driven computing will be just as big, if not bigger, than the internet itself and the impact it had on compute,” Choy says. “The transition from pre-internet to post-internet literally changed everything. The whole nature of software and the distribution of applications and capabilities changed, and linked every developer and every end user across the world through internet-connected devices.”
The internet effectively refactored major portions of the Fortune 500 and below, and created and eliminated companies, depending on how prepared they were for the transformation.
“Now, companies that invest in AI and machine learning will come out of this adoption period in a much stronger and more competitive position, able to develop and deliver new and differentiated services and products to their customers, and therefore generate new lines of business and new revenue streams,” he says.
Technology leaders should look to integrate these new and disruptive technologies into their existing technology stack with as little disruption as possible, even as the technology continues to evolve and advance. It's essential to choose partners who can make that transition easy in terms of speed of deployment, ease of integration with your existing developer environment, the software ecosystem, and workflows.
“You want to get the technology in there and working quickly so you can focus your time and resources on the actual business outcomes you’re looking for, versus just setting up your infrastructure,” Choy says. “It’s not just about software and it’s not just about hardware, but a complete solution that’s going to provide you end-to-end results in terms of better performance, better efficiency, and maybe most importantly, a higher level of ease of use and ease of programmability for your developers.”
Don’t miss out!
Attendees will learn:
- Why multicore architecture is on its last legs, and how new, advanced computer architectures are changing the game
- How to implement state-of-the-art converged training and inference solutions
- New ways to accelerate data analytics and scientific computing applications on the same accelerator
Speakers:
- Alan Lee, Corporate Vice President and Head of Advanced Research, AMD
- Kunle Olukotun, Co-founder and Chief Technologist, SambaNova Systems
- Naveen Rao, Investor, Adviser & AI Expert (moderator)
More speakers to be announced soon.