Most AI vendors develop solutions that target broad use cases with large markets, because investors have shown interest only in target markets worth several billion dollars. Smaller markets are therefore excluded, and AI solutions designed for niche markets often die out; the companies behind them stall before their products ever see the light of day.
Another side effect of the limited capital for niche models is that AI vendors tend to build one model and market it to a large set of disparate users. For example, a company selling a vehicle detection system would normally build a single model to detect all types of vehicles across multiple use cases and geographies. Similarly, an animal detection model would typically detect many different animals, with lower accuracy than a model designed to detect a single animal. These broad-reach products result in lower model accuracy and erode public trust in AI's capabilities. They also require that humans remain in the loop for verification, consuming more human resources and increasing the overall cost of the solution for customers.
The reason investors focus on broad-reach solutions is that niche solutions are very costly to produce. To build a model for a niche use case, you need data that is highly specific to it, and collecting that data while addressing all of the relevant regulations and security concerns is a major challenge.
And even if a vendor is able to develop a model for a niche use case, the challenge isn’t over, because an AI model is rarely a standalone solution — it often relies on a number of external components. And the more niche your model, the more niche the components of the solution will be. For example, in the case of computer vision or vision AI, some critical components include:
- Camera setup and management
- AI model management and updates
- Video storage and data retention policies
- Alert and notification rules
- Role-based access control for users
Running the AI model at one of the steps in the software stack is a very small piece of the puzzle. The bulk of an AI vendor’s time goes to building the rest of the software stack to handle the devices and other services that make a complete solution.
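To make the scope of that surrounding stack concrete, the components listed above can be sketched as a single configuration object. This is a minimal illustration; the type and field names below are my own assumptions, not any vendor's actual API:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Camera:
    camera_id: str
    rtsp_url: str  # camera setup and management

@dataclass
class AlertRule:
    event: str          # e.g. "vehicle_detected"
    notify: List[str]   # email addresses or webhook URLs

@dataclass
class VisionSolutionConfig:
    cameras: List[Camera]           # camera setup and management
    model_version: str              # AI model management and updates
    retention_days: int             # video storage and data retention
    alert_rules: List[AlertRule]    # alert and notification rules
    roles: Dict[str, List[str]]     # role-based access control

# Example: one camera, one alert rule, two roles.
config = VisionSolutionConfig(
    cameras=[Camera("cam-01", "rtsp://10.0.0.5/stream")],
    model_version="vehicle-detector-2.1",
    retention_days=30,
    alert_rules=[AlertRule("vehicle_detected", ["ops@example.com"])],
    roles={"admin": ["manage_cameras", "update_model"],
           "viewer": ["view_alerts"]},
)
```

Even in this toy form, the model version is one field among many; every other field implies a subsystem the vendor must build and operate.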
And then there are compliance and security issues to consider. Any AI solution a vendor sells must comply with different regulations in different geographies and must be secure. Handling those requirements is a big task for any company, and it gets even more difficult if the company is in uncharted waters with no prior solution in its space. In such cases, developers may have to work with local, state, or national governments to navigate laws about deploying AI solutions.
Given that most lawmakers are not technology experts, getting regulations passed and adopted is a slow process, and it can sink such projects if the company does not have the deep pockets to wait it out.
Making AI available for niche use cases
So how can the AI vendor community overcome these challenges to bring solutions to the many niches where broad-reach products don’t apply?
1. Build a customer council with friendly customers. In order to handle the data-collection challenge I outlined above, AI vendors should aim to find friendly customers who can help. Such customers can not only provide some of the necessary data; they can also help vendors put the right structure in place for data collection and management. In return, vendors should offer the solution to them at a very low cost. As the initial customer council, they can take pride in building a useful solution for others.
2. Avoid building from scratch. Some vendors decide to build everything themselves using existing software libraries and core infrastructure services from a cloud vendor. This approach provides complete control over the design but can take nearly a year. The initial goal should be to build a solution quickly and take it to market for testing; it can always be improved or optimized later, after initial customers and early adopters have been established. Several platforms have emerged to shorten time to market. For example, AWS Panorama and Microsoft's Azure Percept help vendors build AI solutions for edge deployments using existing or smart cameras, running models at the edge, close to the devices. In general, look for platforms that enable a quick transition from AI model to full solution.
3. Build an AI/ML pipeline. AI vendors can build pipelines that allow them to quickly train models on specific data sets. The pipeline should be designed so that the data used to build each model can be easily tracked, making it easier to add new customer data as it becomes available. Several solutions already exist in this space, such as Kubeflow, AWS SageMaker, and GCP AI pipelines, so you can avoid building a pipeline from scratch.
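The data-tracking idea behind such a pipeline can be sketched in a few lines: fingerprint each dataset version and record which versions went into each trained model, so that retraining with a new customer's data leaves an auditable trail. This is a minimal sketch with hypothetical names, not the API of Kubeflow, SageMaker, or any other tool:

```python
import hashlib
import json
from dataclasses import dataclass, field
from typing import Dict, List

def dataset_fingerprint(records: List[dict]) -> str:
    """Deterministic hash of a dataset version, so lineage can be audited."""
    blob = json.dumps(records, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

@dataclass
class ModelLineage:
    """Records which dataset versions each trained model was built from."""
    runs: List[Dict] = field(default_factory=list)

    def record_run(self, model_id: str, datasets: List[List[dict]]) -> None:
        # Store fingerprints rather than raw data; the raw data stays
        # wherever the customer's governance rules require.
        self.runs.append({
            "model_id": model_id,
            "datasets": [dataset_fingerprint(d) for d in datasets],
        })

    def datasets_for(self, model_id: str) -> List[str]:
        for run in self.runs:
            if run["model_id"] == model_id:
                return run["datasets"]
        return []

# Example: a niche single-animal detector trained on two customers' data.
lineage = ModelLineage()
ds_a = [{"img": "a.jpg", "label": "deer"}]
ds_b = [{"img": "b.jpg", "label": "deer"}]
lineage.record_run("deer-detector-v1", [ds_a, ds_b])
```

When a new customer contributes data, the vendor retrains and records a new run with the extra fingerprint, so every model version can be traced back to exactly the data that produced it.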
The bottom line
There’s a lot of talk about democratizing artificial intelligence to make it available to more organizations. Currently we have many broad-reach AI models on the market for things like human detection and voice recognition. But these models are so generic that they run the risk of being inaccurate. To increase the precision and accuracy of AI, and to make it usable to a wider range of organizations, we must enable a long tail of AI models designed for niche use cases. The cost of developing such niche models and taking them to market is currently too high, and we must find ways to break that barrier.
Ajay Gulati is CTO of vision AI company Kami Vision.