Google just completed today’s Searchology event, a “State of the Union” of sorts for its web search products. The first half of the event was a rehash of some of the features that are already live in Google search, with explanations of Google’s philosophy and development process, but things got more interesting later on, with the announcement of several new features. These aren’t just tweaks to the existing experience, either, but potentially powerful new ways to search for and see the data that you’re looking for.
I liveblogged the event (you can read that after the jump below), but here’s a roundup of the major announcements:
Desktop/mobile sync — An upcoming feature that saves the searches you make while signed in to your Google account, so a query you ran on your desktop is available later on your mobile phone, and vice versa. That’s especially useful when you’re out and about and trying to re-locate information you found earlier on your computer.
Search options — A feature going live later today that lets you “slice and dice” your search results based on what you’re interested in. You could see search results ranked by how recent they are, or filtered to a specific type of media, or to a specific genre such as reviews. This makes it easier to jump between different types of searches than, say, moving between Google’s standard web search and Google Image Search, and it also lets you combine options, so you could see, say, only the images from the most recent results. In the reviews area, Google also uses sentiment analysis to make sure the snippet it shows you reflects the general tone of the review. There’s also a “wonder wheel,” which allows users to explore related subjects.
Google Squared — From a technical perspective, this may be the most interesting feature unveiled today. Basically, it takes information from around the web and uses it to populate spreadsheets with relevant data. For example, if you’re interested in buying a small dog, you could enter “small dogs” as a query, and Google Squared would create a spreadsheet showing the different types of dogs, an image, their average weight, and other information. Users can add and subtract rows and columns based on their interests. Also, if they think a piece of data is wrong, they can see alternate answers pulled from around the web, and select the answers that seem the most correct. Vice President of Search Products and User Experience Marissa Mayer says that behind the scenes, Google is looking at the structure of web pages to determine the meaning of data, and then corroborating its answers by trying to find similar structures on other web pages. This feature will be live in Google Labs later this month.
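Google hasn’t published how Google Squared works under the hood, but the corroboration idea Mayer describes — extract candidate values from structured regions of many pages, then prefer the value that repeats — can be sketched in a few lines. Everything below (function names, the example sites and weights) is hypothetical, purely to illustrate the concept:

```python
from collections import Counter

def corroborate(candidates):
    """Pick the value extracted most often across pages.

    candidates: list of (value, source_url) pairs pulled from
    structured regions (tables, infoboxes) on different pages.
    Returns the consensus value plus the alternates, so a user
    could still override the answer, as Google Squared allows.
    """
    counts = Counter(value for value, _ in candidates)
    best, _ = counts.most_common(1)[0]
    alternates = [v for v in counts if v != best]
    return best, alternates

# e.g. "average weight" of a Chihuahua as extracted from three
# hypothetical pages; two agree, so that value wins.
best, alternates = corroborate([
    ("4-6 lbs", "site-a.example"),
    ("4-6 lbs", "site-b.example"),
    ("10 lbs", "site-c.example"),
])
```

The hard part, of course, is the extraction step this sketch skips: deciding which table cells on arbitrary pages actually mean “average weight” in the first place.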
Rich snippets — Google is now incorporating metadata from web sites so it can add more information to its search results beyond just a snippet of text. In a link to a restaurant’s Yelp profile, for example, it can show the average user rating and the number of reviews it has received. In the search results for a person, it can show their geographic location. This is rolling out to a limited number of websites initially, but Google will let site owners sign up if they’re interested. (For open web nerds, I’ll be more specific: Google says it’s now supporting the open web standards RDFa and microformats.) Yahoo already supports similarly rich search results through its SearchMonkey initiative for search applications.
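To make the microformats bit concrete: a site opts in simply by annotating its existing HTML with agreed-upon class names that describe what the visible text means. Here’s what review markup along those lines looks like using the hReview-aggregate microformat (the restaurant and the numbers are made up for illustration):

```html
<!-- The class attributes tell a crawler which bits of text are
     the item name, the average rating, and the review count. -->
<div class="hreview-aggregate">
  <span class="item"><span class="fn">Joe's Diner</span></span>
  Rated
  <span class="rating">
    <span class="average">4.5</span> out of
    <span class="best">5</span>
  </span>
  based on <span class="count">127</span> reviews.
</div>
```

The appeal of this approach is that the metadata lives inside the page itself: human visitors see ordinary text, while a crawler can pull the rating and count straight out of the markup.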
You can read my liveblog of the event behind the jump.
10:05am: Gabriel Stricker, director of search communications, takes the stage. Housekeeping: You can send your questions in via the webcast, linked to above. This will be a “State of the Union” on search at Google. The web and its users are becoming more sophisticated. Users’ demands and expectations are increasing. Fundamental task: To deliver the complexity of the web to users in a way that’s “elegantly simple and straightforward.”
10:07am: Udi Manber, VP of Core Search, takes the stage. Humanity now in a position to shift attention from controlling nature to understanding people. “It’s more important.” So he calls search “a new rocket science … but it’s a quiet kind of rocket science.”
10:09am: Effective web search is very hard, but you take it for granted. “And that’s the way we like it.” The real goal of search is to solve the users’ problems. “If users can’t spell, it’s our problem. If they don’t know how to form the query, it’s our problem.”
10:11am: Some examples of how you take search for granted, but it is rocket science. Visiting a university website: There’s a link in the search results just to skip the site intro. Also, a search for IRS shows a table of contents for the site. A search for an address or for plumbers will bring locally-relevant information.
10:15am: Manber says search was traditionally limited by technology (storage, etc.); now it’s more focused on understanding people. And, um, now he’s juggling eggs “to highlight beginnings.” It only lasted about a second, though: “I had to do it quickly, otherwise the PR people stop breathing.”
10:18am: Patrick Riley, engineer in search quality, says he’s going to give us a “ground level view” of the “Did you mean?” feature. As an example, he describes a search result for “labor” that could mean different things: Department of Labor vs. giving birth. The second, alternate meaning is highlighted in a different set of search results further down the page.
10:20am: Now looking at how to use that idea with alternate spellings. Google developed the “spellmeleon” project to return alternate spelling results, but it was a huge tax on Google’s computing infrastructure. An engineer in the Tokyo office decided it made sense to put those alternate results at the top of the page. It was a controversial decision, because it involved using “prime real estate” in the search results.
10:25am: Search results for “Macy Ray.” Did users want results for someone named Macy Ray, or for the singer Macy Gray? Lots of time went into developing the user interface. On the main results page: “We really care about every pixel on that page.” The team settled on an interface that reminded users at both the beginning and the end of the “Did you mean?” section that these aren’t normal results.
10:29am: Scott Huffman, Engineering Director of Search, takes the stage to talk about mobile search. “What’s so interesting about mobile, besides that it’s got a mobile screen?” Well, for one thing, mobile search is growing faster than search from PCs. Also, Google needs to support hundreds of mobile devices with radically different capabilities. Also, it’s harder to type searches on a mobile keyboard. Lastly, mobile search is local.
10:31am: Google’s goal is to make mobile search a daily activity. “We’re not quite there today.” Three things need to happen: Mobile search needs to be complete, it needs to be easy, and it needs to be local.
10:32am: Complete: Highlighting a few examples of how Google changes the search experience for mobile. Creating a button allowing users to call businesses with just one touch. Allowing users to swipe through image search results on a touchscreen. Summarizing product descriptions, then allowing users to drill in deeper.
10:35am: In other countries, there are more sites that are either primarily accessed via mobile devices, or are mobile-only.
10:37am: Easy: An upcoming feature involves sharing queries between the desktop and mobile environments, so that, for example, if you search for a flight on your computer, you can check it again on your mobile phone when you’re heading to the airport.
10:40am: Marissa Mayer on-stage, going to make some announcements, but first looking at what’s been accomplished. At a dinner, man told her about searching for “how to tie a bowtie” and getting helpful videos and diagrams in the results, rather than incomprehensible text descriptions. Illustrates the benefits of universal search, which was launched in 2007.
10:43am: In the past two years, the amount of rich media included in the results has proliferated. Universal search runs in 175 countries and triggers on one in four searches.
10:44am: Using “the bento box” as an illustration of delivering lots of media and information in a compact form.
10:45am: Discussing SearchWiki, allowing users to edit and annotate the search results. “It’s been a really big success for us.” Search improvements today are built around more interactivity and more rich media.
10:47am: Problems: Finding the most recent information. Finding just a particular type of search result. Knowing which results are best. Knowing what users are looking for. Moving beyond keyword-driven search.
10:49am: Launching a product called Google Search Options. Going live today. Example: Search for the Hubble Space Telescope. You can bring up a Search Options panel that allows you to “slice and dice” results. You can view results by genre, most recent, images, timeline, and wonder wheel. (More on that last one in a second.)
10:51am: You can also combine different options — limiting search to results from the past week, then drilling down further by seeing images in those results.
10:53am: Another example: Searching for solar ovens. If you just wanted to see videos, you can just click on videos without switching sites or contexts. Then you could switch to just seeing forum posts.
10:54am: In the reviews section, Google uses “sentiment analysis” to determine whether a review is positive or negative, and then tries to show a snippet that captures that sentiment. It’s a different snippet than what you’d see in a normal search result.
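Google didn’t say how its sentiment analysis works, but the general idea — score the review’s overall tone, then surface a sentence that matches it — can be illustrated with a toy lexicon-based scorer. The word lists, function names, and sample review below are all hypothetical; real systems are far more sophisticated:

```python
# Toy sentiment lexicons -- purely illustrative.
POSITIVE = {"great", "excellent", "delicious", "friendly"}
NEGATIVE = {"bad", "slow", "bland", "rude"}

def score(sentence):
    """Count positive words minus negative words."""
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def pick_snippet(sentences):
    """Return the sentence whose polarity best matches the review overall."""
    overall = sum(score(s) for s in sentences)
    chooser = max if overall >= 0 else min
    return chooser(sentences, key=score)

review = [
    "The service was slow.",
    "But the food was delicious and the staff were friendly.",
]
snippet = pick_snippet(review)  # picks the positive sentence,
                                # since the review skews positive
```

The point of the technique is exactly what Mayer described: for a mostly positive review, the snippet shouldn’t be the one grumpy sentence, even if that sentence happens to match the query keywords best.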
10:56am: So what is the wonder wheel? It’s a visualization for exploring your search results. In the center, you see your query, then there are related topics clustered around it. The search results, meanwhile, are shown in a column on the right side of the page. As you click on related topics, you can jump between different wonder wheels.
10:58am: Search Options is also a convenient way to introduce new features to search, because they can be added into the existing interface, rather than creating a whole new section.
10:59am: Next project called Google Squared, which is coming later this month in Google Labs. Example query: Small dogs. Automatically builds a “square” of information — basically a spreadsheet with information for Google search results. Finds meaningful facts around names, pictures, descriptions, etc.
11:01am: You can add new rows to get information about a specific type, say adding a specific breed of small dog, or new columns to get new categories of information. You can also choose alternate values from the search results, if the given information doesn’t seem correct.
11:04am: What are some of the challenges in Google Squared? For example, in a search for “vegetables,” it returns information about squash the sport, rather than squash the vegetable. (Incidentally, I noticed that Mayer is keeping her famous laugh under control during this speech, but a few giggles escaped here.) You can also save a square for future use.
11:05am: Next product: Rich snippets. (Aaand one of my college dorm-mates is now on stage with Mayer. Hi Kavi!) The rich snippet shows extra metadata beyond just a text excerpt, such as the number of user reviews in restaurant search results.
11:06am: Another example: Searching for an electronic device, the snippet also shows when the review was conducted. A third example: Searching for a person, you can get metadata like their location, which helps you figure out which person is the one you were looking for.
11:10am: How does this work? Google will now support two different open standards for annotating a page to show meaning: RDFa and microformats. “This is a step towards making the whole Internet smarter.” For example, you could use these tags to render information differently on a mobile phone.
11:11am: Last demo, “which is about the stars.” Huh?
11:14am: Showing off SkyMap, an Android app for viewing the stars. Pan, zoom in, zoom out. But why is this better than a paper star map? It uses GPS to produce a star map that’s local to your position on Earth, showing the stars you would actually see.
11:17am: Android also knows which direction you’re looking, and as you turn, the map changes with the phone. “Can you do that with a piece of paper?” You can also search for a specific star, which delivers an arrow pointing to where your star is in the sky.
11:20am: Mayer: Google has long joked about locating physical objects. Now with Android, it’s starting to do that (although searching for stars isn’t the most practically useful thing to find).
11:21am: Question and answer session. Q: As Google becomes more semantic, will Google start selling semantic keywords? Mayer: No plans to change how Google sells keywords, and she also resists the idea that Google is becoming more semantic.
11:23am: Q: What can you say about Google’s international support? And are search engines only useful on the web? Manber: Google is committed to international support, though that doesn’t mean it can release all products in all countries. Mayer: Google has focused on online search, and is now trying to bring offline information online.
11:25am: Q: Can you talk about meaning extraction for some of the new features? Manber: No. Mayer: “I think we can open the kimono a little bit.” It’s “totally amazing.” Google Squared “really, really blew me away.” Basically, it looks for structures on the web that seem to imply facts, and corroborates those facts by seeing if those structures repeat across pages.
11:28am: Q: Stephen Wolfram expressed a lot of dissatisfaction with the quality of the database information he drew on to build Wolfram Alpha. Does Google feel there is enough satisfactory information in databases? Manber: You have to corroborate the information you find. Sergey Brin and Manber did see an early demo of Wolfram Alpha, by the way. Mayer: Google is optimistic about information on the web, and how quickly information gets corrected.
11:31am: Q: Are there risks of copyright infringement with Squared, since it doesn’t point to the web pages where it’s pulling information? Mayer: Well, Squared may provide information in a more useful way than the original website, but Google will still cite its sources.
11:32am: Q: What is exposed via application programming interfaces? Especially rich snippets. Mayer: Yes, it’s an open API.
11:33am: Q: How important is it to search within videos? Mayer: Voice is much further along than video. “But that’s just my personal opinion.” Video search is very important, though. Manber: We don’t have to make a distinction between specific areas in terms of importance. “They’re all important.”
11:36am: Q: When will Google Squared be available? Mayer: Later this month.
11:37am: Q: Include all sites with rich snippets? A: Limited amount of sites initially, and sites can sign up to be indexed as well. Q: Is there any way to opt out? A: Since it’s information that’s added to the HTML, any site can choose not to provide that information. But no guarantees, since Google will be using its own algorithms too.
11:39am: Q: How much closer do today’s announcements bring Google to becoming a perfect search engine? Mayer: Search is still in its infancy, and today’s announcements just reinforced that for her. Search is a “90-10” problem (a variation on the more common “80-20” description), so the last 10 percent that’s unsolved will require 90 percent of the work. Huffman: Mobile search is even further away from completion. Manber: To me, every five years is science fiction. The head of the US Patent Office in 1860 said: “Anything that can be invented has been invented.” Which was, um, wrong.
11:43am: Q: How have law enforcement agencies changed their attitudes towards Google’s data? Mayer: We’re very sensitive to users’ needs in this area. We want to provide the best possible service, but want users to offer information only in areas where they think it’s worth it. Can’t specifically comment on legal cases.
11:44am: Q: How will desktop queries be linked to mobile phones? A: The linkage is only there if you’re signed in to your Google account. When you delete things from your desktop browser, they’ll be deleted in your mobile browser, and vice versa.