Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success. Learn More
Recently, I wrote a piece for VentureBeat distinguishing between companies that are AI-based at their very core and ones that simply use AI as a function or small part of their overall offering. To describe the former set of companies, I coined the term “AI-Native.”
As a technologist and investor, I have been thinking about which technologies are poised to survive an AI winter, one brought on by a combination of reduced investment, temporarily discouraged stock markets, a possible recession aggravated by inflation, and even customer hesitation about dipping their toes into promising but unproven new technologies.
You can see where I am going with this. My view is that AI-Native companies are in a strong position to emerge healthy and even grow from a downturn. After all, many great companies have been born during downtimes — Instagram, Netflix, Uber, Slack and Square are a few that come to mind.
But while some unheralded AI-native company could become the Google of the 2030s, it wouldn’t be accurate — or wise — to proclaim that all AI-Native companies are destined for success.
In fact, AI-Native companies need to be especially careful and strategic in the way they operate. Why? Because running an AI company is expensive: talent, infrastructure and development all carry high costs, so efficiencies are key to survival.
Need to tighten your belt? There’s an app for that
Efficiencies don’t always come easily, but luckily there’s an AI ecosystem that has been brewing long enough to offer good, practical solutions for your particular tech stack.
Let’s start with model training. It’s expensive because models are getting bigger. Recently, Microsoft and Nvidia trained their Megatron-Turing Natural Language Generation model (MT-NLG) across 560 Nvidia DGX A100 servers, each containing eight Nvidia A100 80GB GPUs, hardware that costs millions of dollars.
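To get a feel for why training at this scale runs into the millions, a rough back-of-envelope calculation helps. The sketch below is illustrative only: the GPU count matches the MT-NLG setup described above, but the training duration and hourly rate are invented assumptions, not published figures.

```python
# Rough, hypothetical estimate of large-scale training cost.
# Only the GPU counts come from the article; duration and hourly
# rate are illustrative assumptions.

def training_cost(num_servers: int, gpus_per_server: int,
                  hours: float, usd_per_gpu_hour: float) -> float:
    """Total cost = total GPUs * hours * hourly rate per GPU."""
    return num_servers * gpus_per_server * hours * usd_per_gpu_hour

# 560 servers x 8 GPUs, running for an assumed 30 days at an
# assumed $2 per GPU-hour cloud-style rate.
cost = training_cost(560, 8, hours=30 * 24, usd_per_gpu_hour=2.0)
print(f"${cost:,.0f}")  # $6,451,200
```

Even with conservative assumptions, a month of training on 4,480 GPUs lands in the millions, which is why the efficiency advances discussed next matter so much.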
Luckily, costs are dropping due to advances in hardware and software. And algorithmic and systems approaches like MosaicML and Microsoft’s DeepSpeed are creating efficiencies in model training.
Next up is data labeling and development, which [spoiler alert] is also expensive. According to Hasty.ai — a company that aims to tackle this problem — “data labeling takes anywhere from 35 to 80% of project budgets.”
Now let’s talk about model creation. It’s a tough job. It requires specialized talent, a ton of research and endless trial and error. A big challenge with creating models is that the data is context-specific. AutoML tooling has filled this niche for a while: Microsoft has Azure AutoML, AWS has SageMaker and Google Cloud has AutoML. There are also libraries and collaboration platforms like Hugging Face that make model creation much easier than in previous years.
Not just releasing models to the wild
Now that you’ve created your model, you have to deploy it. Today, this process is painstakingly slow, with two-thirds of models taking over a month to deploy into production.
Automating the deployment process, and optimizing models for the wide array of hardware targets and cloud services, supports faster innovation and lets companies remain hyper-competitive and adaptable. End-to-end platforms like Amazon SageMaker or Azure Machine Learning also offer deployment options. The big challenge here is that cloud services, endpoints and hardware are constantly moving targets: new iterations are released every year, and it is hard to optimize a model for an ever-changing ecosystem.
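One way to cope with moving hardware targets is to benchmark the same model on every candidate backend and let automation pick the winner, rather than hand-tuning for each one. The sketch below is a minimal, hypothetical illustration of that idea; the backend names and latency numbers are invented for the example.

```python
# Minimal sketch: choose the fastest deployment target from measured
# benchmark latencies. Backend names and numbers are hypothetical.

def pick_backend(latencies_ms: dict) -> str:
    """Return the backend with the lowest measured latency."""
    return min(latencies_ms, key=latencies_ms.get)

benchmarks = {
    "cpu-x86-onnxruntime": 42.0,
    "gpu-a100-tensorrt": 3.1,
    "arm-graviton-tvm": 18.5,
}
print(pick_backend(benchmarks))  # gpu-a100-tensorrt
```

The design point is that when a new hardware generation or runtime ships, you re-run the benchmarks and the selection updates itself, instead of a person re-optimizing by hand.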
So your model is now in the wild. Now what? Sit back and kick your feet up? Think again. Models break. Ongoing monitoring and observability are key. WhyLabs, Arize AI and Fiddler AI are among a few players in the industry tackling this head-on.
Technology aside, talent costs can also be a hindrance to growth. Machine learning (ML) talent is rare and in high demand. Companies will need to lean on automation to reduce reliance on manual ML engineering and invest in technologies that fit into existing app dev workflows, so that more abundant DevOps practitioners can join in the ML game.
The AI-native company: Solving for all these components
Surviving a winter also demands agility. To weather that kind of cold, a company has to be hyper-competitive and adaptable, yet ML deployment today is anything but agile. Automation delivers not just adaptability but the ability to innovate faster, something currently gated by incredibly slow deployment times.
Fear not: AI will reach adulthood
Investors who have served their time and paid some dues in the venture capital world gain a different perspective: they have seen cycles play out around never-before-seen technologies. As the hype catches on, investment dollars flow in, companies form, and the development of new products heats up. Often it’s the quiet tortoise that eventually beats the investment hares as it humbly amasses users.
Inevitably there are bubbles and busts, and after each bust (in which some companies fail), the optimistic forecasts for the new technology are usually surpassed. Adoption and popularity become so widespread that the technology simply becomes the new normal.
I have great confidence as an investor that regardless of which individual companies are dominant in the new AI landscape, AI will achieve much more than a foothold and unleash a wave of powerful smart applications.
Luis Ceze is a venture partner at Madrona Ventures and CEO of OctoML.