Artificial intelligence came alive in the ’80s with many startups, governments, and large enterprises deploying new systems that executed tasks typically performed by human experts. These were largely rule-based systems that encoded behaviors in rules instead of using the strict procedural logic of traditional programming languages. Then, as memory became more affordable, systems were able to handle much more computationally intensive tasks, such as machine learning, planning and scheduling, and natural language understanding. Now in the age of big data, many believe AI has completely changed the tech landscape, but in some ways, as the Talking Heads song goes, it’s the “same as it ever was.”
What remains the same are the core elements of an intelligent application. The technology behind the applications I deployed at NASA in the late ’80s and ’90s in the Space Shuttle program — the unmanned probes, the space telescopes, the Space Station, and ultimately, the planetary rover programs — was subsequently commercialized in the supply chain. This was the basis for the applications we later deployed in the ERP industry and the marketing applications deployed in the ecommerce, CRM, and programmatic advertising spaces. Now, I’ve recently been working with AI applications that need to handle massive amounts of data, and while they are in very different domains, they are all built on common themes.
These applications include:
- Life science applications that can learn from clinical trial data to advise doctors of the most promising drug trials for a patient with a disease
- Cyber-threat security systems that can predict the most vulnerable elements of a business to determine where to buy insurance
- Internet of Things (IoT) systems that can react to the changing location of assets, based on RFID tags, to plan more effectively, predict future scenarios, and deter crime
Plus, there are systems we all interact with or hear about every day: Siri and Alexa listen to our instructions, Amazon and Netflix recommend products, cars park themselves, some cars drive autonomously, unmanned trains transport us in many cities and airports, computers play chess, Go, and Jeopardy! — and the list goes on.
Across all of these examples, five core elements have endured, connecting the dots across almost 40 years of AI advancement. These AI applications must ingest huge amounts of data, adapt over time by learning to perform better, react to their surroundings, project into the future, and serve many people and systems simultaneously.
1. Data-intensive ingestion
Data-intensive AI systems deal with enormous volumes of data, often in excess of billions of records arriving at high velocity. Ingesting this data in real time is one of the most demanding things an AI application has to do. Plus, it has to be adept at ingesting both continuous streaming data (lots of small items, like IoT sensor measurements) and batch data (a few large items, like historical data tables).
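The streaming-versus-batch distinction can be sketched with a toy ingestion layer. Everything here — the class name, the buffer, the flush size — is an invented illustration of the two paths, not a description of any particular product:

```python
from collections import deque

class IngestionBuffer:
    """Toy ingestion layer that accepts both a continuous stream of
    small records and occasional large batches. The design is a
    hypothetical sketch, not a real ingestion framework."""

    def __init__(self, flush_size=4):
        self.buffer = deque()   # holds streaming records until flushed
        self.stored = []        # stands in for the backing data store
        self.flush_size = flush_size

    def ingest_record(self, record):
        """Streaming path: one small item at a time (e.g. a sensor reading)."""
        self.buffer.append(record)
        if len(self.buffer) >= self.flush_size:
            self.flush()

    def ingest_batch(self, records):
        """Batch path: a few large items (e.g. a historical table),
        written in bulk, bypassing the streaming buffer."""
        self.stored.extend(records)

    def flush(self):
        """Move buffered streaming records into the store."""
        self.stored.extend(self.buffer)
        self.buffer.clear()

buf = IngestionBuffer()
for reading in [{"sensor": i, "value": i * 0.1} for i in range(5)]:
    buf.ingest_record(reading)                     # streaming: item by item
buf.ingest_batch([{"sensor": 99, "value": 9.9}] * 3)  # batch: bulk write
buf.flush()
print(len(buf.stored))  # 8 records total
```

Real systems make the same split — micro-batching small streaming items for throughput while loading large batches directly — just at billions-of-records scale.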
2. Adaptive
Adaptive applications use machine learning to improve themselves. Over time, they observe their results and learn to do better. The machine learning workflow requires data scientists to perform model selection, an iterative process of feature engineering, algorithm selection, and parameter tuning in an experimentation environment. Application developers then deploy models, and, as new data comes in, the model can classify it and behave based on that classification. Then the application reviews the outcomes of the classification and uses these outcomes to retrain.
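The classify–observe–retrain loop can be sketched in a few lines. The single-feature threshold "model" below is an illustrative stand-in for a real learned model, and all names are invented for this sketch:

```python
class AdaptiveClassifier:
    """Minimal sketch of the observe-classify-retrain loop: classify
    incoming data, record the true outcomes, then re-fit. The
    threshold model is a toy stand-in for a real ML model."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.history = []  # (feature, true_label) pairs observed in production

    def classify(self, x):
        """Deployed model: behave based on the classification."""
        return 1 if x >= self.threshold else 0

    def record_outcome(self, x, true_label):
        """The application reviews the outcome of each classification."""
        self.history.append((x, true_label))

    def retrain(self):
        """Re-fit the model from observed outcomes: here, set the
        threshold midway between the two observed classes."""
        pos = [x for x, y in self.history if y == 1]
        neg = [x for x, y in self.history if y == 0]
        if pos and neg:
            self.threshold = (min(pos) + max(neg)) / 2

model = AdaptiveClassifier()
for x, label in [(0.2, 0), (0.3, 0), (0.8, 1), (0.9, 1)]:
    model.record_outcome(x, label)   # outcomes observed after deployment
model.retrain()                      # model adapts to what it has seen
```

In production, the retrain step is usually a full pipeline (feature engineering, algorithm selection, parameter tuning), but the feedback loop has the same shape.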
3. Reactive
Modern AI systems react to the changing data around them in real time. Unlike traditional applications that are more batch-oriented (you schedule them and they run, store their results, and are then shut down), AI applications continuously monitor their inputs, often from streaming data platforms. And when certain conditions apply, they invoke procedures, rules, and behaviors or compute scores, and make decisions. Put simply, they are always on, reacting to their inputs.
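This "always on, condition triggers action" pattern is essentially an event-condition-action loop. A minimal sketch, with invented rule names and thresholds:

```python
def make_monitor(rules):
    """Return an always-on handler that checks each incoming event
    against every (condition, action) rule and fires the actions
    whose conditions apply. A toy event-condition-action loop."""
    def handle(event):
        fired = []
        for condition, action in rules:
            if condition(event):
                fired.append(action(event))
        return fired
    return handle

# Hypothetical rules for an IoT temperature feed.
rules = [
    (lambda e: e["temp"] > 90, lambda e: f"alert: overheating at {e['temp']}"),
    (lambda e: e["temp"] < 0,  lambda e: "alert: sensor below freezing"),
]
handle = make_monitor(rules)

print(handle({"temp": 95}))  # ['alert: overheating at 95']
print(handle({"temp": 20}))  # [] -- no condition applies, nothing fires
```

In a deployed system, `handle` would be subscribed to a streaming platform and run continuously rather than being called by hand.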
4. Forward looking
Many AI systems don’t just react. They also have to project possible futures to determine the best course of action now. Planning systems, games, and even language-parsing systems need to project in a forward-looking way to get the best answers. The systems have to be able to quickly switch between scenarios as new inputs come in (e.g., a typhoon has delayed shipping of components from China, requiring the manufacturing plan to be re-optimized under a variety of assumptions).
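The typhoon example can be sketched as a tiny scenario re-optimization: project each candidate plan forward under the current assumptions, then re-run when an assumption changes. The plans, lead times, and costs below are invented for illustration:

```python
# Hypothetical supply-chain options (names and numbers are made up).
plans = {
    "ship_from_china":  {"lead_time": 10, "cost": 100},
    "ship_from_mexico": {"lead_time": 6,  "cost": 140},
    "air_freight":      {"lead_time": 2,  "cost": 400},
}

def best_plan(plans, deadline, delays):
    """Project each plan forward under the current delay assumptions
    and return the cheapest plan that still meets the deadline."""
    feasible = {
        name: p for name, p in plans.items()
        if p["lead_time"] + delays.get(name, 0) <= deadline
    }
    return min(feasible, key=lambda name: feasible[name]["cost"])

# Baseline scenario: no delays, the cheapest plan wins.
print(best_plan(plans, deadline=12, delays={}))
# New input arrives: a typhoon adds a 14-day delay to shipping from
# China, so the plan is re-optimized under the changed assumption.
print(best_plan(plans, deadline=12, delays={"ship_from_china": 14}))
```

Real planners search over far larger scenario spaces, but the shape is the same: re-evaluate the projected futures whenever the inputs change.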
5. Concurrent
AI systems, just like traditional applications, must handle many people or systems interacting with them simultaneously. They borrow techniques from distributed systems, operating systems, and databases to maintain ACID properties (atomicity, consistency, isolation, durability), a long-standing guarantee of traditional transactional databases.
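A small sketch of the atomicity and isolation part of that guarantee, using a lock so that concurrent updates never interleave mid-operation. Real systems use full transaction managers; this toy `Accounts` class (an invented example) only illustrates the idea:

```python
import threading

class Accounts:
    """Toy illustration of atomic, isolated updates under concurrency.
    A hypothetical example, not a real transaction manager."""

    def __init__(self, balances):
        self.balances = dict(balances)
        self._lock = threading.Lock()

    def transfer(self, src, dst, amount):
        """Both balance updates happen together or not at all, and
        concurrent transfers never observe a half-finished state."""
        with self._lock:
            if self.balances[src] < amount:
                return False            # abort: leave both balances untouched
            self.balances[src] -= amount
            self.balances[dst] += amount
            return True

accounts = Accounts({"a": 100, "b": 0})
# Eight concurrent clients all transfer 10 from "a" to "b".
threads = [threading.Thread(target=accounts.transfer, args=("a", "b", 10))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(accounts.balances)  # {'a': 20, 'b': 80}: the total is preserved at 100
```

Without the lock, two threads could read the same balance and both deduct from it, losing an update — exactly the kind of interleaving ACID isolation rules out.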
For modern AI systems, these five attributes enable them to perform with the speed and scale needed to meet the demands of both human and electronic users. Plus, as data volumes grow and response times shorten, properly constructed systems can simply scale out their technology infrastructure, rather than having to rework their approach. Considering the pivotal role these applications perform for both individuals and businesses, staying online and operational might be the best attribute of all for an AI system.