While Google indexes web pages to make it easier for people to find content from across the more than one billion websites that exist online, a fledgling startup is making moves to do something similar in the audio realm.
Founded in 2015, Audioburst touts itself as a “curation and search site for radio,” delivering the technology to process talk radio in real time, index it, and make it easily accessible through search engines. It does this by transcribing audio content and using natural language processing (NLP) to “understand” the meaning behind it. It can then automatically attach metadata so that search terms entered by users will surface relevant audio clips, which it calls “bursts.”
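The pipeline described above — transcribe, attach keyword metadata, index short clips, and look them up by search term — can be sketched in a few lines. This is a simplified, hypothetical illustration; the class names, stopword list, and keyword extraction below are invented for clarity and are not Audioburst's actual implementation.

```python
from dataclasses import dataclass, field

# Tiny stand-in stopword list; a real NLP pipeline would do far more.
STOPWORDS = {"the", "a", "an", "on", "of", "and", "to", "in", "is", "with"}

@dataclass
class Burst:
    clip_id: str
    transcript: str  # in practice this would come from speech-to-text
    keywords: set = field(default_factory=set)

def extract_keywords(transcript: str) -> set:
    """Stand-in for NLP metadata extraction: keep non-stopword tokens."""
    tokens = {w.strip(".,!?").lower() for w in transcript.split()}
    return tokens - STOPWORDS

def index_burst(index: dict, burst: Burst) -> None:
    """Attach metadata to a clip and add it to an inverted index."""
    burst.keywords = extract_keywords(burst.transcript)
    for kw in burst.keywords:
        index.setdefault(kw, []).append(burst.clip_id)

def search(index: dict, query: str) -> list:
    """Surface clip IDs whose metadata matches the search term."""
    return index.get(query.lower(), [])

index = {}
index_burst(index, Burst("clip-1", "Interview with a comedian on late night radio"))
index_burst(index, Burst("clip-2", "Morning news update on the election"))

print(search(index, "comedian"))  # ['clip-1']
```

The key idea is the inverted index: once metadata is attached at ingest time, a text query becomes a cheap dictionary lookup rather than a scan over hours of audio.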
Today, Audioburst announced a $6.7 million round of funding led by Japanese speech recognition tech company Advanced Media, with participation from additional investors, including Flint Capital and 2B-Angels. In addition, the company lifted the lid on a new API that will let broadcasters, brands, and publishers “deliver a personalized audio listening experience,” according to a statement issued by the company.
In real terms, the API allows third-party developers to access Audioburst’s library of content to feature audio-based feeds in their own applications, in-car entertainment systems, and other connected devices. Indeed, Audioburst already has an Alexa skill available for Echo devices that allows users to verbally request the latest news on, say, Donald Trump, with Audioburst serving up appropriate clips from across the broadcasting sphere through Amazon’s speaker.
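For a sense of what consuming such an API might look like, here is a minimal sketch of a developer composing a request for the latest clips on a topic. The host, endpoint, and parameter names are entirely hypothetical; Audioburst's real API is not documented in this article and may look nothing like this.

```python
from urllib.parse import urlencode

# Placeholder host -- not Audioburst's actual API endpoint.
BASE_URL = "https://api.example-audio-search.com/v1/bursts"

def build_feed_request(topic: str, api_key: str, limit: int = 10) -> str:
    """Compose a search URL for the latest audio clips on a topic."""
    params = {"q": topic, "limit": limit, "key": api_key}
    return f"{BASE_URL}?{urlencode(params)}"

url = build_feed_request("Donald Trump", api_key="DEMO_KEY", limit=5)
print(url)
```

A voice assistant skill like the Echo integration described above would sit in front of a call like this: the spoken topic becomes the query, and the returned clips are played back in order.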
“The growing popularity of Amazon Echo, Google Home, and voice-activated apps has created an exponential demand for audio content,” explained Audioburst cofounder and CEO Amir Hirsh. “In fact, as of 2016, 20 percent of search queries on Google’s mobile app and on Android devices are voice searches. Users have learned to get their factual answers spoken to them by the different devices but are now looking for a winning content experience.”
If you read an article somewhere online and then wish to revisit that article later, it’s usually fairly easy to find it again using keywords in a search engine. But if you listen to an interview with your favorite comedian on one of the thousands of radio stations across the U.S., it’s not quite so easy to find the exact segment from that broadcast — this is the problem that Audioburst is looking to fix. And for broadcasters, it may help them market and promote their best content by making it easier to find later.
“We want to change the way people experience audio content,” added Hirsh. “Audioburst is essentially the Spotify for spoken word content. Rather than tuning into a pre-programmed station or playlist, our AI builds a unique audio stream for every listener. Each stream is built from short audio clips that are cut from diverse professional audio sources.”
With hubs in New York, Palo Alto, and Tel Aviv, Audioburst is largely focused on working with developers for now. It does offer a basic search interface through its own website, though it isn’t particularly easy to find unless you have the direct link. And a spokesperson confirmed to VentureBeat that the company plans to offer its own consumer-focused apps in the future.