The training process for artificial intelligence (AI) algorithms is designed to be largely automated. There are often thousands, millions or even billions of data points, and the algorithms must process all of them to search for patterns. In some cases, though, AI scientists are finding that the algorithms can be made more accurate and efficient if humans are consulted, at least occasionally, during the training.
The result is a hybrid intelligence that marries the relentless, indefatigable power of machine learning (ML) with the insightful, context-sensitive abilities of human intelligence. The computer algorithm can plow through endless files of training data, while humans correct its course or guide its processing.
The ML supervision can take place at different times:
- Before: In a sense, the human helps create the training dataset, sometimes by adding extra suggestions to the problem embedding and sometimes by flagging unusual cases.
- During: The algorithm may pause, either regularly or only in the case of anomalies, and ask whether some cases are being correctly understood and learned by the algorithm.
- After: The human may guide how the model is applied to tasks after the fact. Sometimes there are several versions of the model and the human can choose which model will behave better.
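The three intervention points above can be sketched as small hooks in a training pipeline. This is a hypothetical illustration, not a real API: the function names, callbacks and labels are all invented.

```python
# Hypothetical sketch of the three points where a human can supervise
# training. All names and callbacks here are illustrative inventions.

def supervise_before(raw_data, flag_unusual):
    """Before training: a human-supplied rule flags unusual cases."""
    return [(x, "flagged" if flag_unusual(x) else "normal") for x in raw_data]

def supervise_during(batches, is_anomaly, ask_human):
    """During training: pause on anomalies and ask a human to confirm
    or correct the batch before it is learned."""
    confirmed = []
    for batch in batches:
        if is_anomaly(batch):
            batch = ask_human(batch)  # human corrects the course
        confirmed.append(batch)
    return confirmed

def supervise_after(models, score_by_human):
    """After training: a human picks whichever model behaves better."""
    return max(models, key=score_by_human)
```

In practice these hooks might be a labeling pass over the dataset, an active-learning query loop, and an A/B comparison of candidate models.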
To a large extent, supervised ML is used in domains where fully automated machine learning does not perform well enough. Scientists add supervision to bring the performance up to an acceptable level.
It is also an essential part of solving problems where there is no readily available training data that contains all the details that must be learned. Many supervised ML problems begin with gathering a team of people who will label or score the data elements with the desired answer. For example, some scientists built a collection of images of human faces and then asked other humans to classify each face with a word like “happy” or “sad”. These training labels made it possible for an ML algorithm to start to understand the emotions conveyed by human facial expressions.
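The output of such a labeling effort is simply the raw data paired with the human answers. A minimal sketch, with invented image identifiers and labels standing in for a real facial-expression dataset:

```python
# Minimal sketch of turning human labels into a supervised training set.
# The image names and emotion labels are invented for illustration.

faces = ["face_001.jpg", "face_002.jpg", "face_003.jpg"]

# Answers collected from human labelers, keyed by image.
human_labels = {
    "face_001.jpg": "happy",
    "face_002.jpg": "sad",
    "face_003.jpg": "happy",
}

# Pair each raw data element with its human-supplied label; this
# (input, target) format is what a supervised learner trains on.
training_set = [(face, human_labels[face]) for face in faces]
```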
What is the difference between supervised and unsupervised ML?
In most cases, the same machine learning algorithms can work with both supervised and unsupervised datasets. The main difference is that unsupervised learning algorithms start with raw data, while supervised learning algorithms work with additional columns or fields that are created by humans. These are often called labels, although they can hold numerical values too.
Supervision is often used to add fields that are not apparent in the dataset. For example, some experiments ask humans to look at landscape images and classify whether a scene is urban, suburban or rural. The ML algorithm is then used to try to match the classification from the humans.
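The difference shows up in the data itself: the supervised version of the same records simply carries an extra human-created label field. A sketch with invented feature values for the landscape example above:

```python
# Sketch of the same records with and without supervision.
# Feature names, values and labels are invented for illustration.

unsupervised_rows = [
    {"green_pct": 0.8, "building_density": 0.1},
    {"green_pct": 0.1, "building_density": 0.9},
]

# The same rows after humans classify each scene; only the
# "label" field is new.
supervised_rows = [
    dict(row, label=label)
    for row, label in zip(unsupervised_rows, ["rural", "urban"])
]
```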
In some cases, the supervision is added while the ML algorithm runs or after it finishes. This feedback may come from end users or scientists.
How is supervised ML conducted?
Human opinions and knowledge can be folded into the dataset before, during or after the algorithms begin. This can be done for all data elements or only a subset. In some cases, the supervision can come from a large team of humans; in others, it may come only from subject-matter experts.
A common process involves hiring a large number of humans to label a large dataset. Organizing this group is often more work than running the algorithms. Some companies specialize in the process and maintain networks of freelancers or employees who can code datasets. Many of the large models for image classification and recognition rely upon these labels.
Some companies have found indirect mechanisms for capturing the labels. Some websites, for instance, want to know if their users are humans or automated bots. One way to test this is to put up a collection of images and ask the user to search for particular items, like a pedestrian or a stop sign. The algorithms may show the same image to several users and then look for consistency. When a user agrees with previous users, that user is presumed to be a human. The same data is then saved and used to train ML algorithms to search for pedestrians or stop signs, a common job for autonomous vehicles.
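The consistency check described above can be sketched as a simple majority vote: a user's answer is compared against the consensus of previous users, and agreeing answers are both treated as evidence of a human and saved as training labels. The data and helper names here are illustrative.

```python
from collections import Counter

# Sketch of the consistency check: a user's answer is compared
# against the majority of previous answers for the same image.
# The data and function names are invented for illustration.

def majority_label(answers):
    """Most common answer among previous users for one image."""
    return Counter(answers).most_common(1)[0][0]

def looks_human(user_answer, previous_answers):
    """Presume the user is human if they agree with the consensus."""
    return user_answer == majority_label(previous_answers)

previous = ["stop sign", "stop sign", "tree", "stop sign"]
# An agreeing answer is presumed human, and the consensus label can
# then be saved as training data for, e.g., a stop-sign detector.
```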
Some approaches use subject-matter experts and ask them to review only outlying data. Instead of having experts classify every element, the algorithm works with the most extreme values and extrapolates rules from them. This can be more time efficient, but may be less accurate. It is more popular when human expert time is expensive.
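Routing only extreme values to an expert can be as simple as ranking points by their distance from the mean and labeling the top few. A toy sketch with invented readings and an invented cutoff k:

```python
# Sketch of selecting only the most extreme data points for expert
# review: pick the k values farthest from the mean and label just
# those. The readings and the value of k are invented.

def most_extreme(values, k):
    mean = sum(values) / len(values)
    return sorted(values, key=lambda v: abs(v - mean), reverse=True)[:k]

readings = [10, 11, 9, 10, 50, 12, -30]
outliers_for_expert_review = most_extreme(readings, 2)
```

A real system might use a model's own uncertainty instead of distance from the mean, which is the core idea behind active learning.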
Types of supervised ML
The world of supervised ML is broken down into several approaches. Many have much in common with unsupervised ML because they use the same algorithms. Some distinctions, though, focus on the way that human intelligence is folded into the dataset and absorbed by the algorithms.
The most commonly cited different types of algorithms are:
- Classification: These algorithms take a dataset and assign each element to a fixed set of classes. For example, Microsoft has trained a machine vision model to examine a photograph and make an educated guess about the emotions of the faces. The algorithm chooses one of several terms, like “happy” or “sad”. Often, models like this begin with a set of human-generated classifications for the training data. A team will review the photos and assign a label like “happy” or “sad” to each face. The ML algorithm will then be trained to approximate these answers.
- Regression analysis: The algorithm fits a line or another mathematical function to the dataset so that numerical predictions can be made. The inputs to the function may be a mixture of raw data and human labels or estimates. For instance, Microsoft’s face classification algorithm can also generate an estimate of the numerical age of the human. The training data may rely upon the actual birthdates instead of some human estimate.
- Support vector machine: This is a classification algorithm that uses an optimization related to regression to find the best lines or planes to separate two or more classes. The algorithm relies upon the labels to distinguish the classes and then applies a regression-like calculation to draw the dividing line or plane.
- Subset analysis: Some datasets are too large for humans to label. One solution is to choose a random or structured subset and seek the human input on just these values.
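Three of the approaches above can be sketched in a few lines of pure Python: a closed-form least-squares fit for regression, a nearest-centroid rule as a toy stand-in for a full classifier, and random sampling for subset analysis. The data points are invented, and these are simplified illustrations rather than production algorithms.

```python
import random

# Toy sketches of three of the approaches above. The data and
# parameters are invented, and nearest-centroid is a simplified
# stand-in for a real classifier such as an SVM.

def fit_line(xs, ys):
    """Regression analysis: least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def nearest_centroid(train, x):
    """Classification: assign x to the class whose labeled
    examples have the closest mean value."""
    groups = {}
    for value, label in train:
        groups.setdefault(label, []).append(value)
    return min(groups,
               key=lambda c: abs(sum(groups[c]) / len(groups[c]) - x))

def label_subset(data, k, seed=0):
    """Subset analysis: sample k elements to send for human labeling."""
    return random.Random(seed).sample(data, k)
```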
How are major companies handling supervised ML?
All the major companies offer basic ML algorithms that can work with either labeled or unlabeled data. They are also beginning to offer particular tools that simplify and even automate the supervision.
Amazon’s SageMaker offers a full integrated development environment (IDE) for working with its ML algorithms. Some may want to experiment with prebuilt models and adjust them according to the performance. AWS also offers Mechanical Turk, which is integrated with the environment so that humans can examine the data and add annotations that will guide the ML. Workers are paid by the task at a price set by the requester, which affects how many sign up to work. This can be a cost-effective way to create good annotations for a training dataset.
IBM’s Watson Studio is designed for both unsupervised and supervised ML. Their Cloud Pak for Data can help organize and label datasets gathered from a wide variety of data warehouses, lakes and other sources. It can help teams create structured embeddings guided by human resources and then feed these values into the collection of ML algorithms supported by the Studio.
Google’s collection of AI tools includes Vertex AI, which is a more general product, and some automated systems tuned for particular types of datasets, like AutoML Video and AutoML Tabular. Pre-analytic data labeling is easy to do with the various data collection tools. After the model is created, Google also offers a tool called Vertex AI Model Monitoring that watches the performance of the model over time and generates automated alerts if the model seems to be drifting.
Microsoft has an extensive collection of AI tools, including Azure Machine Learning Studio, a browser-based user interface that organizes the data collection and analysis. Data can be augmented with labels and other classifications using various Azure tools for organizing data lakes and warehouses. The studio offers a drag-and-drop interface for choosing the right algorithms through experimentation with data classification and analysis.
Oracle’s data infrastructure is built around big databases that act as the foundation for data warehousing. The databases are also well-integrated with ML algorithms to optimize creating and testing models with these datasets. Oracle also offers a number of focused versions of their products designed for particular industries, such as retail or financial services. Their tools for data management can organize the creation of labels for each data point and then apply the right algorithms for supervised or semi-supervised ML.
How are startups developing supervised ML?
The startups are tackling a wide range of problems that are important to creating well-trained models. Some are working on the more general problem of working with generic datasets, while others want to focus on particular niches or industries.
CrowdFlower, which started as Dolores Labs, sells pre-trained models with pre-labeled data and also organizes teams to add labels to data to help supervise ML. Its data annotation tools can support in-house teams or be shared with the large pool of temporary workers that CrowdFlower routinely hires. The company also runs programs for evaluating the success of models before, during and after deployment.
Swivl has created a basic data labeling interface so that teams can quickly start guiding data science and ML algorithms. The company has focused on this interaction to make it as simple and efficient as possible.
The AI and data handling routines in DataRobot’s cloud are designed to make it easier for teams to create pipelines that gather and evaluate data with low-code and no-code routines for processing. They call some of their tools “augmented intelligence” because they can rely upon both ML algorithms and human coding in both training and deployment. They say they want to “move beyond simply making more intelligent decisions or faster decisions, to making the right decision.”
Zest AI is focusing on the credit approval process, so lending institutions can speed up and simplify their workflow for granting loans. Their tools help banks build their own custom models that merge their human experience with the ability to gather credit risk information. They also deploy “de-biasing tools” that can reduce or eliminate some unintended consequences of the model construction.
Luminance helps legal teams with tasks like discovery and contract drafting. Its ML tools create custom models by watching the lawyers work and learning from their decisions. This casual supervision helps the models adapt faster, so the team can make better decisions.
Is there anything that supervised ML can’t do?
In many senses, supervised ML produces the best combination of human and machine intelligence when it creates a model that learns how a human might categorize or analyze data.
Humans, though, are not always accurate, and they often don’t understand the data well enough to label it reliably. They may grow bored after working through many data items. In many cases, they make mistakes or categorize data inconsistently because they don’t know the answer themselves.
Indeed, in cases where the problem is not well understood by humans, using supervised algorithms can fold in too much information from inconsistent and uncertain human labelers. If the human opinion is given too much precedence, the algorithm can be led astray.
A common problem with supervised algorithms is the sheer size of the datasets. Much of ML depends upon big data collections that are gathered automatically, and paying humans to classify or label each data element is often much too expensive. Some scientists choose random or structured subsets of the data and seek human opinions on just those. This can work, but only when the signal in the subset is strong enough, and it sacrifices some of the ML algorithm’s ability to find nuance and distinction in very large datasets.