Google has vowed to give brands and advertisers more tools to help prevent their ads from appearing against controversial content on YouTube.
The internet giant has been facing mounting criticism for making money off ads that run against videos associated with terrorist groups. For last month’s Super Bowl, for example, Hyundai created an advertisement hailing U.S. troops that was then reportedly used as pre-roll to a video supporting Hezbollah. In the wake of the Super Bowl, companies such as Thomson Reuters suspended part of their programmatic online advertising program, while in the last 24 hours both the Guardian newspaper and the U.K. government announced suspensions to their YouTube advertising.
“With millions of sites in our network and 400 hours of video uploaded to YouTube every minute, we recognize that we don’t always get it right,” said Ronan Harris, managing director for Google U.K., in a blog post. “In a very small percentage of cases, ads appear against content that violates our monetization policies. We promptly remove the ads in those instances, but we know we can and must do more.”
Whether Google has been disincentivized to tackle the issue due to the revenues generated, or whether it’s just been too complex to solve, the matter isn’t going away anytime soon, which is why the company says that it’s now working on additional ways to help companies control where their ads appear. Over the next few weeks, Google says it will be “making changes” to enable brands to have “more control over where their ads appear across YouTube and the Google Display Network,” though it didn’t elaborate on what these changes will entail.
“We’ve heard from our advertisers and agencies loud and clear that we can provide simpler, more robust ways to stop their ads from showing against controversial content,” added Harris.
Technology companies across the board have been facing increasing scrutiny over their roles in promoting or facilitating terrorist groups online. Last June, family members of some of the victims of the Pulse nightclub killings in Florida filed a lawsuit against Google, Twitter, and Facebook for providing “material support” to the terrorist organization known as ISIS. The same month, the father of a U.S. citizen who was killed in the Paris terrorist attacks in November 2015 announced plans to sue Twitter, Facebook, and Google for the same reason. Earlier in the year, the families of two U.S.-based contractors killed in a suspected terrorist attack in Jordan announced they were suing Twitter for “knowingly” allowing ISIS to attract new recruits.
While YouTube already allows advertisers to stipulate certain topics and site categories they don't want to be associated with, it's not entirely clear what else it could do to solve the underlying problem. Anyone, including terrorist groups, can upload videos to YouTube and opt into ad placements. Short of manually reviewing each video, it's difficult to automate the process of identifying what a video actually contains, so there will always be a chance that brands end up paired with undesirable content.