The advertising industry is a fluid one, with many providers catering to many advertisers. As such, it only makes sense that agencies and advertisers constantly benchmark the performance of their providers.
There are three main methods for testing retargeting providers, though the lessons apply to testing other technologies as well. These methods are: month-on-month, shared cookie pool, and cookie split.
The first two are easier to run, but they can seriously blinker advertisers to the true value of their vendors. There are major pitfalls inherent in these traditional methods that need to be avoided.
Month on month provides no actual insight
The first methodology runs as follows: each provider is run separately, one month after the other, to measure its performance; that is, the incumbent is turned off so that a different provider can be turned on.
The accuracy of any conclusions we draw from this will be extremely limited due to inconsistencies in onsite metrics and the distribution of inventory from month to month. For example, sale periods or the number of weekends in a given month can seriously affect site performance.
Compounding these accuracy issues are the opportunity costs associated with your ad tech provider’s data. From a data perspective this methodology is disastrous: the minute your ad tech vendor stops serving ads, it is no longer able to make real-time decisions. You also lose the ability to analyse product/user correlations, which in a world where big data matters can lead to lost revenue.
Testing at the same time needs to be done right
To counteract month-on-month discrepancies, advertisers will often elect to run vendors head to head. While this helps to counteract the flaws of the above methodology, it raises further issues if advertisers choose to run providers on the same cookie pool. This seems particularly prevalent in immature global markets.
This means that when a user lands on the site, every retargeter being tested will drop a cookie on that user. When the user reaches a publisher’s site, those retargeters will all bid against each other on the same inventory, artificially driving up CPMs, a cost that will be passed on to the advertiser one way or another.
The key to a strong test is a clear cookie split
To achieve a fair head-to-head test, then, an advertiser must split their cookies between vendors. As with every robust testing methodology, you will need the necessary tech to track these cookies, or a reliable third-party provider.
In this methodology the advertiser assigns each user to a vendor as they land on site, so that no user is ever tracked by more than one vendor. This way artificially raised CPMs are avoided while ensuring a fair user split is in place.
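The assignment described above can be sketched as a deterministic hash split, here in Python. This is an illustrative assumption rather than any vendor's actual implementation; the `assign_vendor` helper and the vendor names are hypothetical:

```python
import hashlib

# Hypothetical vendors under test; in practice these would be your real providers.
VENDORS = ["vendor_a", "vendor_b"]

def assign_vendor(cookie_id: str, vendors=VENDORS) -> str:
    """Map a cookie ID to exactly one vendor.

    Hashing the ID makes the assignment deterministic: the same user
    always lands in the same vendor's pool, so no user is ever tracked
    by more than one vendor during the test.
    """
    digest = hashlib.sha256(cookie_id.encode("utf-8")).hexdigest()
    return vendors[int(digest, 16) % len(vendors)]
```

Because the split is derived from the cookie ID itself, no lookup table of user-to-vendor mappings needs to be stored, and a uniform hash gives each vendor a roughly equal share of traffic.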
Crucially, though, if a vendor is not sharing the cost of media (i.e., is running on a cost-per-click basis), it is very easy for that vendor to manipulate test results by cutting its margin. Test-period performance will then be artificially high, and the advertiser could find itself locked into a contract when the vendor raises its margin after winning the test.
Do your testing right, now and in the future
Ultimately, testing methodology will shift focus away from the cookie. Some 65% of multi-device activity starts on mobile, and 90% of people use two or more devices a day to browse the net. With so much activity on cookieless devices, testing needs to account for all the influence on conversion that occurs away from cookied environments. A number of testing vendors are looking to solve this particular problem, and we will watch with keen interest for any advancement in this area.
At this point in time, though, there are good and bad ways to test your vendors. While the first two methodologies are easy, using them will cost you more in the end. Without splitting your cookies, you are not testing vendors accurately, at their cost and yours.
Sam Barnett is the CEO of Struq, a company that focuses on cross-device retargeting for ads.