Intel today announced a strategic business and technology collaboration with Deci to optimize machine learning on the former’s processors. Deci says that in the coming weeks, it will work with Intel to deploy “innovative AI technologies” to the companies’ mutual customers.
Machine learning deployments have historically been constrained by the size and speed of algorithms and the need for costly hardware. In fact, a report from MIT found that machine learning might be approaching computational limits. A separate Synced study estimated that the University of Washington’s Grover fake news detection model cost $25,000 to train in about two weeks. OpenAI reportedly racked up a whopping $12 million to train its GPT-3 language model, and Google spent an estimated $6,912 training BERT, a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks.
Intel and Deci say the partnership will enable machine learning "at scale" on Intel chips, potentially opening up new inference applications through reductions in cost and latency. Deci has already worked to accelerate inference for the well-known ResNet-50 neural network on Intel processors, cutting the model's latency by a factor of 11.8 and increasing throughput by up to 11 times.
“By optimizing the AI models that run on Intel’s hardware, Deci enables customers to get even more speed and will allow for cost-effective and more general deep learning use cases on Intel CPUs,” Deci CEO and cofounder Yonatan Geifman said. “We are delighted to collaborate with Intel to deliver even greater value to our mutual customers and look forward to a successful partnership.”
Deci achieves runtime acceleration through a combination of data preprocessing and loading, selection of model architectures and hyperparameters (i.e., the configuration variables that govern how a model is trained), and datasets optimized for inference. It also handles steps like deployment, serving, monitoring, and explainability. Deci's accelerator redesigns models into new models with several computation routes, each optimized for a given inference device.
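To make the multi-route idea concrete, here is a minimal, hypothetical sketch in plain Python. The route functions, the gating heuristic, and all names are illustrative assumptions, not Deci's actual design; in practice each route would be a sub-network tuned for the target hardware.

```python
# Hypothetical sketch of a model with multiple computation routes,
# each standing in for a sub-network optimized for a class of input.
# Names and routing heuristic are illustrative, not Deci's design.

def light_route(x):
    # Cheap path: a simple mean, standing in for a small sub-network.
    return sum(x) / len(x)

def heavy_route(x):
    # Costlier path: a median, standing in for a larger sub-network.
    s = sorted(x)
    return s[len(s) // 2]

def predict(x):
    # Toy gate: short inputs are treated as "easy" and take the
    # light route; everything else takes the heavy route.
    route = light_route if len(x) <= 3 else heavy_route
    return route(x)
```

The payoff of such a design is that easy inputs never pay for the full model's compute, which is one way latency and cost can drop without retraining a single monolithic network.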
Deci's router component ensures that each data input is directed down the proper route. (Each route is specialized for a particular prediction task.) The company's accelerator, meanwhile, works in synergy with other compression techniques like pruning and quantization. It can even act as a multiplier for complementary acceleration solutions such as AI compilers and specialized hardware, according to the company.
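Quantization, one of the compression techniques mentioned above, can be sketched in a few lines: weights are stored as 8-bit integers plus a scale factor, trading a little precision for smaller, faster models. This is a minimal stdlib illustration under assumed symmetric int8 quantization, not Deci's or Intel's implementation.

```python
# Minimal symmetric int8 quantization sketch (illustrative only).

def quantize_int8(weights):
    # Map floats into the signed 8-bit range [-127, 127] using a
    # single scale derived from the largest-magnitude weight.
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the int8 values.
    return [v * scale for v in q]
```

Because the integers are a quarter the size of 32-bit floats, memory traffic drops and CPUs can process more values per instruction, which is where much of the inference speedup comes from.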
Deci was cofounded by Geifman, entrepreneur Jonathan Elial, and Ran El-Yaniv, a computer science professor at Technion in Haifa, Israel. Geifman and El-Yaniv met at Technion, where Geifman is a PhD candidate at the university’s computer science department. To date, the Tel Aviv-based company, a participant in Intel’s Ignite startup accelerator, has raised $9.1 million from investors including Square Peg.