Today, more and more marketing tools are AI-powered. As that shift has occurred, marketers are grappling with the fact that there will always be some form of unintentional algorithmic bias affecting those platforms. Bias can be built into an algorithm without the data science team realizing it, making it difficult to detect and resolve.
As marketers, we inherit the biases in the algorithms we use for advertising, whether they’re algorithms we build or buy. Thus, it’s important to develop concrete steps to minimize bias in the algorithms we use, whether it’s your own AI or an AI solution from a vendor. AI, particularly machine learning, already enhances a wide range of marketing solutions including hypersegmentation, dynamic creative, inventory quality filtering, dynamic sites, and landing pages. But there are lots of things that can get in the way of an algorithm’s success.
When bias sneaks into AI, it can wreak havoc on efforts and campaigns in a variety of ways. This often happens because marketers have better or more data about some situations or customers than others, and that leads an algorithm toward being more accurate for the ones with greater data volume. Here are some common examples:
- We all want to “conquest” competitors’ customers, but marketers usually have better information about existing customers than future prospects. As a result, there is a real risk that those algorithms end up better at finding people just like current customers than at finding genuinely new ones.
- Many marketers segment and target high-value customers. Since there are likely to be fewer of those, algorithms are typically trained mostly on data from the more common, lower-value customers. Consequently, those algorithms prove to be biased toward finding lower-value customers, hurting efforts overall.
- Marketers may have trouble optimizing marketing for late-adopting customers when early adopters make up most of the customer base for a newer product. This can easily occur, because it’s primarily the early adopters’ data that will be used to train the algorithm.
- Marketers might inadvertently prioritize inventory on shorter tail apps because the algorithms we use for bid optimization had more training data from those apps than from others.
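The imbalance driving each of these examples can be surfaced with a simple audit of training-data composition before any model is trained. A minimal sketch (the segment labels and the share threshold are illustrative assumptions, not from the article):

```python
from collections import Counter

def representation_report(segments, min_share=0.2):
    """Return each segment's share of the training data and flag
    segments that fall below min_share of the total."""
    counts = Counter(segments)
    total = sum(counts.values())
    shares = {seg: n / total for seg, n in counts.items()}
    flagged = [seg for seg, share in shares.items() if share < min_share]
    return shares, flagged

# Hypothetical training rows labeled by customer segment: existing
# customers dominate, so a model will learn mostly from them.
segments = ["existing"] * 80 + ["prospect"] * 20
shares, flagged = representation_report(segments, min_share=0.3)
print(shares)   # {'existing': 0.8, 'prospect': 0.2}
print(flagged)  # ['prospect']
```

Any segment the audit flags is one where the resulting algorithm is likely to be less accurate, for exactly the reasons in the examples above.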
A key lesson here is that we can’t take AI algorithms at face value — and they’re certainly not infallible. Along with the new technology and new capabilities comes a new set of concerns to be aware of. Marketers need to ask a lot of questions — about everything from the motivations of the company selling the AI, to where the training data is coming from. We need to look at ourselves too, knowing that we bring biases to our interpretations based on our personal experience.
Here are five concrete steps to take to ensure your AI isn’t overly biased:
1. Get involved and stay involved. Constant human involvement with AI is crucial. Question all assumptions and compare human decisions to model decisions, digging into any differences or patterns you can find. As a marketer, make sure not to commit too early to a “set-and-forget” automation use case for AI, and instead periodically ensure the algorithm is working the way you want.
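One concrete way to stay involved is to periodically score the same cases with a human reviewer and the model, then track how often they diverge. A minimal sketch (the "bid/skip" decisions are hypothetical, not from the article):

```python
def disagreement_rate(human, model):
    """Share of cases where the model's decision differs from the human's."""
    if len(human) != len(model):
        raise ValueError("decision lists must be the same length")
    diffs = sum(1 for h, m in zip(human, model) if h != m)
    return diffs / len(human)

# Hypothetical audit: the same six bid requests judged by a human
# planner and by the model.
human = ["bid", "bid", "skip", "bid", "skip", "skip"]
model = ["bid", "skip", "skip", "bid", "bid", "skip"]
print(round(disagreement_rate(human, model), 2))  # 0.33
```

A rising disagreement rate, or disagreement concentrated in particular kinds of cases, is the signal to dig into the differences rather than leave the automation on "set and forget."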
2. Use representative training data. For any and all groups you want in your marketing, make sure that group is well represented in the training data. Predict rare outcomes, such as conversions, more accurately by ensuring those outcomes are over-indexed in training data, which will make sure the algorithm has lots of examples of success for each. As a marketer evaluating a vendor, make sure you are comfortable that your vendor has taken steps to ensure data representativeness.
3. Look beneath the surface. When you’re measuring accuracy, don’t just focus on the performance of the algorithm overall, but also look at each individual subgroup, like platforms, genders, and high vs. low LTV customers. Otherwise, you might only end up with accurate projections for digital as opposed to TV advertising, or for publishers with which you already invest a lot of money, as opposed to those new to your brand, for example.
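Looking beneath the surface means reporting accuracy per subgroup alongside the overall number, so a weak segment can't hide inside a healthy average. A minimal sketch (the channel labels and values are hypothetical):

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Overall accuracy plus accuracy within each subgroup."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += (t == p)
    per_group = {g: hits[g] / totals[g] for g in totals}
    overall = sum(hits.values()) / sum(totals.values())
    return overall, per_group

# Hypothetical predictions: the model looks acceptable overall
# but is much weaker on TV than on digital.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
groups = ["digital"] * 4 + ["tv"] * 4
overall, per_group = accuracy_by_group(y_true, y_pred, groups)
print(overall)    # 0.625
print(per_group)  # {'digital': 1.0, 'tv': 0.25}
```

The same breakdown works for any subgroup mentioned in the article: genders, high- vs. low-LTV customers, or familiar publishers vs. ones new to your brand.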
4. Continually pursue better data. Don’t ever settle. Keep looking for better training data and ensure that your vendors are following the same approach. Get more, go wider and try new things to collect and/or leverage data you can use to optimize. Whoever has the best, most thorough and accurate training data has a massive advantage. As a marketer evaluating a vendor, ask about the training data — its accuracy, where it comes from, how often it’s updated. It’s important to remember that the “best” training data isn’t necessarily the biggest data set. The strength of the training data depends more on quality than quantity.
5. Evaluate AI with a dose of skepticism. It’s a powerful tool that is playing an ever-larger role in targeting, data accuracy, creative versioning, testing, and measurement. AI-driven solutions can help marketers work smarter and achieve exciting new things at greater scale. But as with any other investment, you need to understand how to manage the risk.
When you invest in an AI-based solution, you need to ask about algorithmic bias. Once you adopt a solution, ask again … and again.
Jake Moskowitz is Vice President of Data Strategy and Head of the Emodo Institute.